Sample records for movement facial expression

  1. Effects of damping head movement and facial expression in dyadic conversation using real-time facial expression tracking and synthesized avatars

    PubMed Central

    Boker, Steven M.; Cohn, Jeffrey F.; Theobald, Barry-John; Matthews, Iain; Brick, Timothy R.; Spies, Jeffrey R.

    2009-01-01

    When people speak with one another, they tend to adapt their head movements and facial expressions in response to each other's head movements and facial expressions. We present an experiment in which confederates' head movements and facial expressions were motion tracked during videoconference conversations, an avatar face was reconstructed in real time, and naive participants spoke with the avatar face. No naive participant guessed that the computer-generated face was not video. Confederates' facial expressions, vocal inflections, and head movements were attenuated at 1-min intervals in a fully crossed experimental design. Attenuated head movements led to increased head nods and lateral head turns, and attenuated facial expressions led to increased head nodding in both naive participants and confederates. Together, these results are consistent with a hypothesis that the dynamics of head movements in dyadic conversation include a shared equilibrium. Although both conversational partners were blind to the manipulation, when apparent head movement of one conversant was attenuated, both partners responded by increasing the velocity of their head movements. PMID:19884143

  2. Posed versus spontaneous facial expressions are modulated by opposite cerebral hemispheres.

    PubMed

    Ross, Elliott D; Pulusu, Vinay K

    2013-05-01

    Clinical research has indicated that the left face is more expressive than the right face, suggesting that modulation of facial expressions is lateralized to the right hemisphere. The findings, however, are controversial because the results explain, on average, approximately 4% of the data variance. Using high-speed videography, we sought to determine if movement-onset asymmetry was a more powerful research paradigm than terminal movement asymmetry. The results were very robust, explaining up to 70% of the data variance. Posed expressions began overwhelmingly on the right face whereas spontaneous expressions began overwhelmingly on the left face. This dichotomy was most robust for upper facial expressions. In addition, movement-onset asymmetries did not predict terminal movement asymmetries, which were not significantly lateralized. The results support recent neuroanatomic observations that upper versus lower facial movements have different forebrain motor representations and recent behavioral constructs that posed versus spontaneous facial expressions are modulated preferentially by opposite cerebral hemispheres and that spontaneous facial expressions are graded rather than non-graded movements.

  3. Do facial movements express emotions or communicate motives?

    PubMed

    Parkinson, Brian

    2005-01-01

    This article addresses the debate between emotion-expression and motive-communication approaches to facial movements, focusing on Ekman's (1972) and Fridlund's (1994) contrasting models and their historical antecedents. Available evidence suggests that the presence of others either reduces or increases facial responses, depending on the quality and strength of the emotional manipulation and on the nature of the relationship between interactants. Although both display rules and social motives provide viable explanations of audience "inhibition" effects, some audience facilitation effects are less easily accommodated within an emotion-expression perspective. In particular, emotion is not a sufficient condition for a corresponding "expression," even discounting explicit regulation, and, apparently, "spontaneous" facial movements may be facilitated by the presence of others. Further, there is no direct evidence that any particular facial movement provides an unambiguous expression of a specific emotion. However, information communicated by facial movements is not necessarily extrinsic to emotion. Facial movements not only transmit emotion-relevant information but also contribute to ongoing processes of emotional action in accordance with pragmatic theories.

  4. Hereditary family signature of facial expression

    PubMed Central

    Peleg, Gili; Katzir, Gadi; Peleg, Ofer; Kamara, Michal; Brodsky, Leonid; Hel-Or, Hagit; Keren, Daniel; Nevo, Eviatar

    2006-01-01

    Although facial expressions of emotion are universal, individual differences create a facial expression “signature” for each person; but, is there a unique family facial expression signature? Only a few family studies on the heredity of facial expressions have been performed, none of which compared the gestalt of movements in various emotional states; they compared only a few movements in one or two emotional states. No studies, to our knowledge, have compared the movements of congenitally blind subjects with those of their relatives. Using two types of analyses, we show a correlation between the movements of congenitally blind subjects and those of their relatives in think-concentrate, sadness, anger, disgust, joy, and surprise, and provide evidence for a unique family facial expression signature. In the “in-out family test” analysis, a particular movement was compared each time across subjects. Results show that the frequency of occurrence of a movement of a congenitally blind subject in his family is significantly higher than that outside of his family in think-concentrate, sadness, and anger. In the “classification test” analysis, in which congenitally blind subjects were classified to their families according to the gestalt of movements, results show 80% correct classification over the entire interview and 75% in anger. Analysis of the movements' frequencies in anger revealed a correlation between the movements' frequencies of congenitally blind individuals and those of their relatives. This study anticipates discovering genes that influence facial expressions, understanding their evolutionary significance, and elucidating repair mechanisms for syndromes lacking facial expression, such as autism. PMID:17043232

  5. Automatic decoding of facial movements reveals deceptive pain expressions

    PubMed Central

    Bartlett, Marian Stewart; Littlewort, Gwen C.; Frank, Mark G.; Lee, Kang

    2014-01-01

    In highly social species such as humans, faces have evolved to convey rich information for social interaction, including expressions of emotions and pain [1–3]. Two motor pathways control facial movement [4–7]. A subcortical extrapyramidal motor system drives spontaneous facial expressions of felt emotions. A cortical pyramidal motor system controls voluntary facial expressions. The pyramidal system enables humans to simulate facial expressions of emotions not actually experienced. Their simulation is so successful that they can deceive most observers [8–11]. Machine vision may, however, be able to distinguish deceptive from genuine facial signals by identifying the subtle differences between pyramidally and extrapyramidally driven movements. Here we show that human observers could not discriminate real from faked expressions of pain better than chance; even after training, accuracy improved only to a modest 55%. However, a computer vision system that automatically measures facial movements and performs pattern recognition on those movements attained 85% accuracy. The machine system’s superiority is attributable to its ability to differentiate the dynamics of genuine from faked expressions. Thus, by revealing the dynamics of facial action through machine vision systems, our approach has the potential to elucidate behavioral fingerprints of neural control systems involved in emotional signaling. PMID:24656830

  6. A unified probabilistic framework for spontaneous facial action modeling and understanding.

    PubMed

    Tong, Yan; Chen, Jixu; Ji, Qiang

    2010-02-01

    Facial expression is a natural and powerful means of human communication. Recognizing spontaneous facial actions, however, is very challenging due to subtle facial deformation, frequent head movements, and ambiguous and uncertain facial motion measurements. Because of these challenges, current research in facial expression recognition is limited to posed expressions, often in the frontal view. A spontaneous facial expression is characterized by rigid head movements and nonrigid facial muscular movements. More importantly, it is the coherent and consistent spatiotemporal interaction among rigid and nonrigid facial motions that produces a meaningful facial expression. Recognizing this fact, we introduce a unified probabilistic facial action model based on a dynamic Bayesian network (DBN) to simultaneously and coherently represent rigid and nonrigid facial motions, their spatiotemporal dependencies, and their image measurements. Advanced machine learning methods are introduced to learn the model based on both training data and subjective prior knowledge. Given the model and the measurements of facial motions, facial action recognition is accomplished through probabilistic inference by systematically integrating visual measurements with the facial action model. Experiments show that, compared to state-of-the-art techniques, the proposed system yields significant improvements in recognizing both rigid and nonrigid facial motions, especially for spontaneous facial expressions.
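
    The DBN described above fuses rigid head motion and nonrigid facial motion in one probabilistic model. As a rough, hedged illustration of that idea (not the authors' model), the sketch below runs discrete Bayesian forward filtering over a toy facial-action state space with two conditionally independent measurement channels; every state name and probability in it is invented.

```python
import numpy as np

# Toy state space combining a rigid and a nonrigid component.
# All names and probabilities are illustrative, not from the paper.
states = ["neutral", "nod+smile", "turn+frown"]

T = np.array([            # transition model P(s_t | s_{t-1})
    [0.8, 0.1, 0.1],
    [0.2, 0.7, 0.1],
    [0.2, 0.1, 0.7],
])
E_rigid = np.array([      # P(rigid obs | state), obs in {0: still, 1: moving}
    [0.9, 0.1],
    [0.3, 0.7],
    [0.2, 0.8],
])
E_nonrigid = np.array([   # P(nonrigid obs | state), obs in {0: relaxed, 1: active}
    [0.85, 0.15],
    [0.25, 0.75],
    [0.30, 0.70],
])

def forward_filter(rigid_obs, nonrigid_obs):
    """Recursive state estimate fusing two observation streams."""
    belief = np.ones(len(states)) / len(states)
    for zr, zn in zip(rigid_obs, nonrigid_obs):
        belief = T.T @ belief                         # predict
        belief *= E_rigid[:, zr] * E_nonrigid[:, zn]  # fuse both measurements
        belief /= belief.sum()                        # normalize
    return belief

print(dict(zip(states, forward_filter([1, 1, 0], [1, 1, 1]).round(3))))
```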

  7. Analysis of facial expressions in parkinson's disease through video-based automatic methods.

    PubMed

    Bandini, Andrea; Orlandi, Silvia; Escalante, Hugo Jair; Giovannelli, Fabio; Cincotta, Massimo; Reyes-Garcia, Carlos A; Vanni, Paola; Zaccara, Gaetano; Manfredi, Claudia

    2017-04-01

    The automatic analysis of facial expressions is an evolving field with several clinical applications. One of these applications is the study of facial bradykinesia in Parkinson's disease (PD), a major motor sign of this neurodegenerative illness. Facial bradykinesia consists of the reduction or loss of facial movements and emotional facial expressions, called hypomimia. In this work we propose an automatic method for studying facial expressions in PD patients relying on video-based methods. Seventeen Parkinsonian patients and 17 healthy control subjects were asked to show basic facial expressions, upon request of the clinician and after the imitation of a visual cue on a screen. Through an existing face tracker, the Euclidean distance of the facial model from a neutral baseline was computed in order to quantify the changes in facial expressivity during the tasks. Moreover, an automatic facial expression recognition algorithm was trained in order to study how PD expressions differed from the standard expressions. Results show that control subjects reported on average higher distances than PD patients across the tasks. This confirms that control subjects show larger movements during both posed and imitated facial expressions. Moreover, our results demonstrate that anger and disgust are the two most impaired expressions in PD patients. Contactless video-based systems can be important techniques for analyzing facial expressions also in rehabilitation, in particular speech therapy, where patients could get a definite advantage from real-time feedback about the proper facial expressions/movements to perform.
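
    A minimal sketch of the distance-from-neutral measure described above: the per-frame Euclidean distance of tracked landmarks from a neutral baseline serves as an expressivity trace. The array shapes and the 68-landmark layout are assumptions for illustration, not details from the paper.

```python
import numpy as np

def expressivity_trace(frames, neutral):
    """Per-frame Euclidean distance of tracked landmarks from a neutral baseline.

    frames : (T, K, 2) array of K 2-D facial landmarks over T video frames
    neutral: (K, 2) array of the same landmarks in a neutral expression
    Returns a length-T trace; larger values = larger departure from neutral.
    """
    diffs = frames - neutral[None, :, :]
    return np.linalg.norm(diffs, axis=2).mean(axis=1)  # mean over landmarks

# Toy usage with 68 synthetic landmarks over 120 frames.
rng = np.random.default_rng(0)
neutral = rng.normal(size=(68, 2))
frames = neutral + 0.1 * rng.normal(size=(120, 68, 2))
print(expressivity_trace(frames, neutral).mean())
```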

  8. The face is not an empty canvas: how facial expressions interact with facial appearance.

    PubMed

    Hess, Ursula; Adams, Reginald B; Kleck, Robert E

    2009-12-12

    Faces are not simply blank canvases upon which facial expressions write their emotional messages. In fact, facial appearance and facial movement are both important social signalling systems in their own right. We here provide multiple lines of evidence for the notion that the social signals derived from facial appearance on the one hand and facial movement on the other interact in a complex manner, sometimes reinforcing and sometimes contradicting one another. Faces provide information on who a person is. Sex, age, ethnicity, personality and other characteristics that can define a person and the social group the person belongs to can all be derived from the face alone. The present article argues that faces interact with the perception of emotion expressions because this information informs a decoder's expectations regarding an expresser's probable emotional reactions. Facial appearance also interacts more directly with the interpretation of facial movement because some of the features that are used to derive personality or sex information are also features that closely resemble certain emotional expressions, thereby enhancing or diluting the perceived strength of particular expressions.

  9. A Quantitative Assessment of Lip Movements in Different Facial Expressions Through 3-Dimensional on 3-Dimensional Superimposition: A Cross-Sectional Study.

    PubMed

    Gibelli, Daniele; Codari, Marina; Pucciarelli, Valentina; Dolci, Claudia; Sforza, Chiarella

    2017-11-23

    The quantitative assessment of facial modifications from mimicry is of relevant interest for the rehabilitation of patients who can no longer produce facial expressions. This study investigated a novel application of 3-dimensional on 3-dimensional superimposition for facial mimicry. This cross-sectional study was based on 10 men 30 to 40 years old who underwent stereophotogrammetry for neutral, happy, sad, and angry expressions. Registration of facial expressions on the neutral expression was performed. Root mean square (RMS) point-to-point distance in the labial area was calculated between each facial expression and the neutral one and was considered the main parameter for assessing facial modifications. In addition, effect size (Cohen d) was calculated to assess the effects of labial movements in relation to facial modifications. All participants were free from possible facial deformities, pathologies, or trauma that could affect facial mimicry. RMS values of facial areas differed significantly among facial expressions (P = .0004 by Friedman test). The widest modifications of the lips were observed in happy expressions (RMS, 4.06 mm; standard deviation [SD], 1.14 mm), with a statistically relevant difference compared with the sad (RMS, 1.42 mm; SD, 1.15 mm) and angry (RMS, 0.76 mm; SD, 0.45 mm) expressions. The effect size of labial versus total face movements was limited for happy and sad expressions and large for the angry expression. This study found that a happy expression provides wider modifications of the lips than the other facial expressions and suggests a novel procedure for assessing regional changes from mimicry.
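
    The two summary quantities reported above, RMS point-to-point distance and the Cohen d effect size, are standard and can be sketched directly; point correspondence and registration onto the neutral scan are assumed to have been done upstream.

```python
import numpy as np

def rms_point_to_point(expr_pts, neutral_pts):
    """RMS of point-to-point distances between registered 3-D point sets.

    Both inputs are (N, 3) arrays of corresponding points (e.g., labial area),
    already registered onto the neutral scan.
    """
    d = np.linalg.norm(expr_pts - neutral_pts, axis=1)
    return np.sqrt(np.mean(d ** 2))

def cohens_d(a, b):
    """Effect size between two samples using the pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled = np.sqrt(((na - 1) * np.var(a, ddof=1) + (nb - 1) * np.var(b, ddof=1))
                     / (na + nb - 2))
    return (np.mean(a) - np.mean(b)) / pooled
```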

  10. Singing emotionally: a study of pre-production, production, and post-production facial expressions

    PubMed Central

    Quinto, Lena R.; Thompson, William F.; Kroos, Christian; Palmer, Caroline

    2014-01-01

    Singing involves vocal production accompanied by a dynamic and meaningful use of facial expressions, which may serve as ancillary gestures that complement, disambiguate, or reinforce the acoustic signal. In this investigation, we examined the use of facial movements to communicate emotion, focusing on movements arising in three epochs: before vocalization (pre-production), during vocalization (production), and immediately after vocalization (post-production). The stimuli were recordings of seven vocalists' facial movements as they sang short (14-syllable) melodic phrases with the intention of communicating happiness, sadness, irritation, or no emotion. Facial movements were presented as point-light displays to 16 observers who judged the emotion conveyed. Experiment 1 revealed that the accuracy of emotional judgment varied with singer, emotion, and epoch. Accuracy was highest in the production epoch; however, happiness was well communicated in the pre-production epoch. In Experiment 2, observers judged point-light displays of exaggerated movements. The ratings suggested that the extent of facial and head movements was largely perceived as a gauge of emotional arousal. In Experiment 3, observers rated point-light displays of scrambled movements. Configural information was removed in these stimuli but velocity and acceleration were retained. Exaggerated scrambled movements were likely to be associated with happiness or irritation whereas unexaggerated scrambled movements were more likely to be identified as “neutral.” An analysis of singers' facial movements revealed systematic changes as a function of the emotional intentions of singers. The findings confirm the central role of facial expressions in vocal emotional communication, and highlight individual differences between singers in the amount and intelligibility of facial movements made before, during, and after vocalization. PMID:24808868

  11. Common cues to emotion in the dynamic facial expressions of speech and song

    PubMed Central

    Livingstone, Steven R.; Thompson, William F.; Wanderley, Marcelo M.; Palmer, Caroline

    2015-01-01

    Speech and song are universal forms of vocalization that may share aspects of emotional expression. Research has focused on parallels in acoustic features, overlooking facial cues to emotion. In three experiments, we compared moving facial expressions in speech and song. In Experiment 1, vocalists spoke and sang statements each with five emotions. Vocalists exhibited emotion-dependent movements of the eyebrows and lip corners that transcended speech–song differences. Vocalists’ jaw movements were coupled to their acoustic intensity, exhibiting differences across emotion and speech–song. Vocalists’ emotional movements extended beyond vocal sound to include large sustained expressions, suggesting a communicative function. In Experiment 2, viewers judged silent videos of vocalists’ facial expressions prior to, during, and following vocalization. Emotional intentions were identified accurately for movements during and after vocalization, suggesting that these movements support the acoustic message. Experiment 3 compared emotional identification in voice-only, face-only, and face-and-voice recordings. Emotions in voice-only singing were poorly identified, yet were identified accurately in all other conditions, confirming that facial expressions conveyed emotion more accurately than the voice in song, yet were equivalent in speech. Collectively, these findings highlight broad commonalities in the facial cues to emotion in speech and song, yet highlight differences in perception and acoustic-motor production. PMID:25424388

  12. Common cues to emotion in the dynamic facial expressions of speech and song.

    PubMed

    Livingstone, Steven R; Thompson, William F; Wanderley, Marcelo M; Palmer, Caroline

    2015-01-01

    Speech and song are universal forms of vocalization that may share aspects of emotional expression. Research has focused on parallels in acoustic features, overlooking facial cues to emotion. In three experiments, we compared moving facial expressions in speech and song. In Experiment 1, vocalists spoke and sang statements each with five emotions. Vocalists exhibited emotion-dependent movements of the eyebrows and lip corners that transcended speech-song differences. Vocalists' jaw movements were coupled to their acoustic intensity, exhibiting differences across emotion and speech-song. Vocalists' emotional movements extended beyond vocal sound to include large sustained expressions, suggesting a communicative function. In Experiment 2, viewers judged silent videos of vocalists' facial expressions prior to, during, and following vocalization. Emotional intentions were identified accurately for movements during and after vocalization, suggesting that these movements support the acoustic message. Experiment 3 compared emotional identification in voice-only, face-only, and face-and-voice recordings. Emotions in voice-only singing were poorly identified, yet were identified accurately in all other conditions, confirming that facial expressions conveyed emotion more accurately than the voice in song, yet were equivalent in speech. Collectively, these findings highlight broad commonalities in the facial cues to emotion in speech and song, yet highlight differences in perception and acoustic-motor production.

  13. Facial expressions as a model to test the role of the sensorimotor system in the visual perception of the actions.

    PubMed

    Mele, Sonia; Ghirardi, Valentina; Craighero, Laila

    2017-12-01

    A long-standing debate concerns whether the sensorimotor coding carried out during the observation of transitive actions reflects low-level movement implementation details or movement goals. Phonemes and emotional facial expressions, in contrast, are intransitive actions that do not fall into this debate. The investigation of phoneme discrimination has proven to be a good model to demonstrate that the sensorimotor system plays a role in understanding actions presented acoustically. In the present study, we adapted the experimental paradigms already used in phoneme discrimination during face posture manipulation to the discrimination of emotional facial expressions. We submitted participants to a lower or an upper face posture manipulation during the execution of a four-alternative labelling task on pictures randomly taken from four morphed continua between two emotional facial expressions. The results showed that the implementation of low-level movement details influences the discrimination of ambiguous facial expressions that differ in the specific involvement of those movement details. These findings indicate that facial expression discrimination is a good model to test the role of the sensorimotor system in the perception of actions presented visually.

  14. Imitating expressions: emotion-specific neural substrates in facial mimicry.

    PubMed

    Lee, Tien-Wen; Josephs, Oliver; Dolan, Raymond J; Critchley, Hugo D

    2006-09-01

    Intentionally adopting a discrete emotional facial expression can modulate the subjective feelings corresponding to that emotion; however, the underlying neural mechanism is poorly understood. We therefore used functional brain imaging (functional magnetic resonance imaging) to examine brain activity during intentional mimicry of emotional and non-emotional facial expressions and relate regional responses to the magnitude of expression-induced facial movement. Eighteen healthy subjects were scanned while imitating video clips depicting three emotional (sad, angry, happy) and two 'ingestive' (chewing and licking) facial expressions. Simultaneously, facial movement was monitored from displacement of fiducial markers (highly reflective dots) on each subject's face. Imitating emotional expressions enhanced activity within right inferior prefrontal cortex. This pattern was absent during passive viewing conditions. Moreover, the magnitude of facial movement during emotion-imitation predicted responses within right insula and motor/premotor cortices. Enhanced activity in ventromedial prefrontal cortex and frontal pole was observed during imitation of anger, in ventromedial prefrontal and rostral anterior cingulate cortices during imitation of sadness, and in striatal, amygdala, and occipitotemporal regions during imitation of happiness. Our findings suggest a central role for right inferior frontal gyrus in the intentional imitation of emotional expressions. Further, by entering metrics for facial muscular change into analysis of brain imaging data, we highlight shared and discrete neural substrates supporting affective, action and social consequences of somatomotor emotional expression.

  15. Slowing down Presentation of Facial Movements and Vocal Sounds Enhances Facial Expression Recognition and Induces Facial-Vocal Imitation in Children with Autism

    ERIC Educational Resources Information Center

    Tardif, Carole; Laine, France; Rodriguez, Melissa; Gepner, Bruno

    2007-01-01

    This study examined the effects of slowing down presentation of facial expressions and their corresponding vocal sounds on facial expression recognition and facial and/or vocal imitation in children with autism. Twelve autistic children and twenty-four normal control children were presented with emotional and non-emotional facial expressions on…

  16. Recovery of facial expressions using functional electrical stimulation after full-face transplantation.

    PubMed

    Topçu, Çağdaş; Uysal, Hilmi; Özkan, Ömer; Özkan, Özlenen; Polat, Övünç; Bedeloğlu, Merve; Akgül, Arzu; Döğer, Ela Naz; Sever, Refik; Çolak, Ömer Halil

    2018-03-06

    We assessed the recovery of 2 face transplantation patients with measures of complexity during neuromuscular rehabilitation. Cognitive rehabilitation methods and functional electrical stimulation were used to improve facial emotional expressions of full-face transplantation patients for 5 months. Rehabilitation and analyses were conducted at approximately 3 years after full facial transplantation in the patient group. We report complexity analysis of surface electromyography signals of these two patients in comparison to the results of 10 healthy individuals. Facial surface electromyography data were collected during 6 basic emotional expressions and 4 primary facial movements from 2 full-face transplantation patients and 10 healthy individuals to determine a strategy of functional electrical stimulation and understand the mechanisms of rehabilitation. A new personalized rehabilitation technique was developed using the wavelet packet method. Rehabilitation sessions were applied twice a month for 5 months. Subsequently, motor and functional progress was assessed by comparing the fuzzy entropy of surface electromyography data against the results obtained from patients before rehabilitation and the mean results obtained from 10 healthy subjects. At the end of personalized rehabilitation, the patient group showed improvements in their facial symmetry and their ability to perform basic facial expressions and primary facial movements. Similarity in the pattern of fuzzy entropy for facial expressions between the patient group and healthy individuals increased. Synkinesis was detected during primary facial movements in the patient group, and one patient showed synkinesis during the happiness expression. Synkinesis in the lower face region of one of the patients was eliminated for the lid tightening movement. The recovery of emotional expressions after personalized rehabilitation was satisfactory to the patients. The assessment with complexity analysis of sEMG data can be used for developing new neurorehabilitation techniques and detecting synkinesis after full-face transplantation.
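
    Fuzzy entropy of an sEMG channel can be sketched as below, following the common FuzzyEn formulation (Chen et al.); the parameter defaults are conventional choices, not values taken from the study.

```python
import numpy as np

def fuzzy_entropy(x, m=2, r=0.2, n=2):
    """Fuzzy entropy (FuzzyEn) of a 1-D signal.

    m: embedding dimension, r: tolerance (scaled by the signal's SD),
    n: fuzzy membership gradient. Higher values = more irregular signal.
    """
    x = np.asarray(x, dtype=float)
    tol = r * np.std(x)
    N = len(x)

    def phi(dim):
        count = N - m  # same number of template vectors for dim = m and m + 1
        X = np.array([x[i:i + dim] - x[i:i + dim].mean() for i in range(count)])
        d = np.max(np.abs(X[:, None, :] - X[None, :, :]), axis=2)  # Chebyshev
        mu = np.exp(-(d ** n) / tol)   # fuzzy similarity of vector pairs
        np.fill_diagonal(mu, 0.0)      # exclude self-matches
        return mu.sum() / (count * (count - 1))

    return np.log(phi(m)) - np.log(phi(m + 1))
```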

  17. Realistic facial animation generation based on facial expression mapping

    NASA Astrophysics Data System (ADS)

    Yu, Hui; Garrod, Oliver; Jack, Rachael; Schyns, Philippe

    2014-01-01

    Facial expressions reflect a character's internal emotional states or responses to social communication. Though much effort has been devoted to generating realistic facial expressions, this remains a challenging topic due to human beings' sensitivity to subtle facial movements. In this paper, we present a method for facial animation generation that reflects true facial muscle movements with high fidelity. An intermediate model space is introduced to transfer captured static AU peak frames based on FACS to the conformed target face. Then dynamic parameters derived using a psychophysics method are integrated to generate facial animation, which is assumed to represent the natural correlation of multiple AUs. Finally, the animation sequence in the intermediate model space is mapped to the target face to produce the final animation.

  18. Guide to Understanding Facial Palsy

    MedlinePlus

    ... to many different facial muscles. These muscles control facial expression. The coordinated activity of this nerve and these ... involves a weakness of the muscles responsible for facial expression and side-to-side eye movement. Moebius syndrome ...

  19. Facial movements strategically camouflage involuntary social signals of face morphology.

    PubMed

    Gill, Daniel; Garrod, Oliver G B; Jack, Rachael E; Schyns, Philippe G

    2014-05-01

    Animals use social camouflage as a tool of deceit to increase the likelihood of survival and reproduction. We tested whether humans can also strategically deploy transient facial movements to camouflage the default social traits conveyed by the phenotypic morphology of their faces. We used the responses of 12 observers to create models of the dynamic facial signals of dominance, trustworthiness, and attractiveness. We applied these dynamic models to facial morphologies differing on perceived dominance, trustworthiness, and attractiveness to create a set of dynamic faces; new observers rated each dynamic face according to the three social traits. We found that specific facial movements camouflage the social appearance of a face by modulating the features of phenotypic morphology. A comparison of these facial expressions with those similarly derived for facial emotions showed that social-trait expressions, rather than being simple one-to-one overgeneralizations of emotional expressions, are a distinct set of signals composed of movements from different emotions. Our generative face models represent novel psychophysical laws for social sciences; these laws predict the perception of social traits on the basis of dynamic face identities.

  20. Perceptual integration of kinematic components in the recognition of emotional facial expressions.

    PubMed

    Chiovetto, Enrico; Curio, Cristóbal; Endres, Dominik; Giese, Martin

    2018-04-01

    According to a long-standing hypothesis in motor control, complex body motion is organized in terms of movement primitives, reducing massively the dimensionality of the underlying control problems. For body movements, this low-dimensional organization has been convincingly demonstrated by the learning of low-dimensional representations from kinematic and EMG data. In contrast, the effective dimensionality of dynamic facial expressions is unknown, and dominant analysis approaches have been based on heuristically defined facial "action units," which reflect contributions of individual face muscles. We determined the effective dimensionality of dynamic facial expressions by learning of a low-dimensional model from 11 facial expressions. We found an amazingly low dimensionality with only two movement primitives being sufficient to simulate these dynamic expressions with high accuracy. This low dimensionality is confirmed statistically, by Bayesian model comparison of models with different numbers of primitives, and by a psychophysical experiment that demonstrates that expressions, simulated with only two primitives, are indistinguishable from natural ones. In addition, we find statistically optimal integration of the emotion information specified by these primitives in visual perception. Taken together, our results indicate that facial expressions might be controlled by a very small number of independent control units, permitting very low-dimensional parametrization of the associated facial expression.
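
    The authors learn primitives with a dedicated generative model; as a hedged stand-in, plain PCA on keypoint trajectories shows how two components can capture nearly all variance when the motion really is low-dimensional. The data below are synthetic.

```python
import numpy as np
from sklearn.decomposition import PCA

# Synthetic stand-in for tracked facial keypoint trajectories:
# two latent time courses mixed into 30 observed coordinates.
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 200)
primitives = np.stack([np.sin(2 * np.pi * t), t ** 2])   # 2 latent "primitives"
mixing = rng.normal(size=(2, 30))                        # 30 keypoint coordinates
X = primitives.T @ mixing + 0.05 * rng.normal(size=(200, 30))

pca = PCA(n_components=2).fit(X)
print("variance explained by 2 components:",
      pca.explained_variance_ratio_.sum())               # close to 1.0 here
X_hat = pca.inverse_transform(pca.transform(X))          # 2-primitive reconstruction
```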

  1. The Emotional Modulation of Facial Mimicry: A Kinematic Study.

    PubMed

    Tramacere, Antonella; Ferrari, Pier F; Gentilucci, Maurizio; Giuffrida, Valeria; De Marco, Doriana

    2017-01-01

    It is well-established that the observation of emotional facial expressions induces facial mimicry responses in the observers. However, how the interaction between emotional and motor components of facial expressions can modulate the motor behavior of the perceiver is still unknown. We developed a kinematic experiment to evaluate the effect of different oro-facial expressions on the perceiver's face movements. Participants were asked to perform two movements, i.e., lip stretching and lip protrusion, in response to the observation of four meaningful (i.e., smile, angry-mouth, kiss, and spit) and two meaningless mouth gestures. All the stimuli were characterized by different motor patterns (mouth aperture or mouth closure). Response times and kinematic parameters of the movements (amplitude, duration, and mean velocity) were recorded and analyzed. Results evidenced dissociated effects on reaction times and movement kinematics. We found shorter reaction times when a mouth movement was preceded by the observation of a meaningful and motorically congruent oro-facial gesture, in line with the facial mimicry effect. On the contrary, during execution, the perception of a smile was associated with facilitation, in terms of shorter duration and higher velocity, of the incongruent movement, i.e., lip protrusion. The same effect resulted in response to kiss and spit, which significantly facilitated the execution of lip stretching. We called this phenomenon the facial mimicry reversal effect, intended as the overturning of the effect normally observed during facial mimicry. In general, the findings show that both the motor features and the type of emotional oro-facial gesture (conveying positive or negative valence) affect the kinematics of subsequent mouth movements at different levels: while congruent motor features facilitate a general motor response, motor execution can be sped up by gestures that are motorically incongruent with the observed one. Moreover, the valence effect depends on the specific movement required. Results are discussed in relation to the Basic Emotion Theory and the embodied cognition framework.
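
    The kinematic parameters analyzed above (amplitude, duration, mean velocity) can be extracted from a lip-marker trace roughly as follows; the velocity-threshold definition of movement onset and offset is a common convention assumed here, not necessarily the authors' exact criterion.

```python
import numpy as np

def movement_kinematics(pos, fs, v_frac=0.05):
    """Amplitude, duration, and mean velocity of a single movement.

    pos   : 1-D lip-marker position trace (e.g., inter-lip distance, mm)
    fs    : sampling rate in Hz
    v_frac: fraction of peak speed that marks movement onset/offset
    """
    speed = np.abs(np.gradient(pos) * fs)
    moving = speed > v_frac * speed.max()
    onset = np.argmax(moving)                           # first sample above threshold
    offset = len(moving) - np.argmax(moving[::-1]) - 1  # last sample above threshold
    segment = pos[onset:offset + 1]
    return (segment.max() - segment.min(),              # amplitude
            (offset - onset) / fs,                      # duration (s)
            speed[onset:offset + 1].mean())             # mean velocity
```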

  2. Analysis of Facial Expression by Taste Stimulation

    NASA Astrophysics Data System (ADS)

    Tobitani, Kensuke; Kato, Kunihito; Yamamoto, Kazuhiko

    In this study, we focused on basic taste stimulation for the analysis of real facial expressions. We considered that the expressions caused by taste stimulation were unaffected by individuality or emotion, that is, such expressions were involuntary. We analyzed the movement of facial muscles under taste stimulation and compared real expressions with artificial expressions. From the results, we identified an obvious difference between real and artificial expressions. Thus, our method could be a new approach for facial expression recognition.

  3. Dynamic facial expression recognition based on geometric and texture features

    NASA Astrophysics Data System (ADS)

    Li, Ming; Wang, Zengfu

    2018-04-01

    Recently, dynamic facial expression recognition in videos has attracted growing attention. In this paper, we propose a novel dynamic facial expression recognition method using geometric and texture features. In our system, the facial landmark movements and texture variations upon pairwise images are used to perform the dynamic facial expression recognition tasks. For one facial expression sequence, pairwise images are created between the first frame and each of its subsequent frames. Integration of both geometric and texture features further enhances the representation of the facial expressions. Finally, a Support Vector Machine is used for facial expression recognition. Experiments conducted on the extended Cohn-Kanade database show that our proposed method achieves a competitive performance with other methods.
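
    A hedged sketch of the pairwise-feature idea: landmark displacements (geometric) concatenated with gray-level differences (texture) between the first frame and a later frame, fed to an SVM. The feature definitions are simplified stand-ins for the descriptors actually used in the paper.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def pairwise_features(lms_first, lms_later, patch_first, patch_later):
    """Feature vector for one (first frame, later frame) image pair.

    lms_*  : (K, 2) facial landmark arrays
    patch_*: flattened gray-level face patches (equal length)
    """
    geo = (lms_later - lms_first).ravel()                        # landmark movements
    tex = patch_later.astype(float) - patch_first.astype(float)  # texture variation
    return np.concatenate([geo, tex])

# With one feature row per expression sequence (X) and labels (y):
# clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
# clf.fit(X_train, y_train); print(clf.score(X_test, y_test))
```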

  4. Real-time speech-driven animation of expressive talking faces

    NASA Astrophysics Data System (ADS)

    Liu, Jia; You, Mingyu; Chen, Chun; Song, Mingli

    2011-05-01

    In this paper, we present a real-time facial animation system in which speech drives mouth movements and facial expressions synchronously. Considering five basic emotions, a hierarchical structure with an upper layer of emotion classification is established. Based on the recognized emotion label, the lower-layer classification at the sub-phonemic level is modelled on the relationship between acoustic features of frames and audio labels in phonemes. Using certain constraints, the predicted emotion labels of the speech are adjusted to obtain the facial expression labels, which are combined with the sub-phonemic labels. The combinations are mapped into facial action units (FAUs), and audio-visual synchronized animation with mouth movements and facial expressions is generated by morphing between FAUs. The experimental results demonstrate that the two-layer structure succeeds in both emotion and sub-phonemic classification, and the synthesized facial sequences reach a comparatively convincing quality.
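
    The final morphing step can be illustrated with simple linear in-betweening between two facial action unit (FAU) parameter vectors; the 10-parameter vector below is an invented placeholder, and a production system would ease the trajectory rather than interpolate linearly.

```python
import numpy as np

def morph(fau_from, fau_to, n_frames):
    """Linear in-between frames between two FAU parameter vectors."""
    ts = np.linspace(0.0, 1.0, n_frames)[:, None]
    return (1.0 - ts) * fau_from[None, :] + ts * fau_to[None, :]

neutral = np.zeros(10)                                   # toy 10-parameter FAU vector
smile = np.array([0, 0, 0.8, 0.6, 0, 0, 0.3, 0, 0, 0.0])
frames = morph(neutral, smile, n_frames=12)              # (12, 10) animation path
```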

  5. Quantitative anatomical analysis of facial expression using a 3D motion capture system: Application to cosmetic surgery and facial recognition technology.

    PubMed

    Lee, Jae-Gi; Jung, Su-Jin; Lee, Hyung-Jin; Seo, Jung-Hyuk; Choi, You-Jin; Bae, Hyun-Sook; Park, Jong-Tae; Kim, Hee-Jin

    2015-09-01

    The topography of the facial muscles differs between males and females and among individuals of the same gender. To explain the unique expressions that people can make, it is important to define the shapes of the muscles, their associations with the skin, and their relative functions. Three-dimensional (3D) motion-capture analysis, often used to study facial expression, was used in this study to identify characteristic skin movements in males and females when they made six representative basic expressions. The movements of 44 reflective markers (RMs) positioned on anatomical landmarks were measured. Their mean displacement was large in males [ranging from 14.31 mm (fear) to 41.15 mm (anger)], and 3.35-4.76 mm smaller in females [ranging from 9.55 mm (fear) to 37.80 mm (anger)]. The percentages of RMs involved in the ten highest mean maximum displacement values in making at least one expression were 47.6% in males and 61.9% in females. The movements of the RMs were larger in males than in females but were more limited. Expanding our understanding of facial expression requires morphological studies of facial muscles and studies of related complex functionality. Conducting these together with quantitative analyses, as in the present study, will yield data valuable for medicine, dentistry, and engineering, for example, for surgical operations on facial regions, software for predicting changes in facial features and expressions after corrective surgery, and the development of face-mimicking robots.
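
    The displacement statistics reported above can be computed from motion-capture data along these lines; the (T, 44, 3) layout mirrors the 44 reflective markers mentioned, while frame counts and the neutral-frame convention are assumptions.

```python
import numpy as np

def marker_max_displacements(seq, neutral):
    """Maximum displacement of each reflective marker during one expression.

    seq     : (T, 44, 3) marker positions over T motion-capture frames (mm)
    neutral : (44, 3) marker positions at rest
    Returns one maximum displacement per marker.
    """
    d = np.linalg.norm(seq - neutral[None], axis=2)  # (T, 44) distances from rest
    return d.max(axis=0)

# Averaging these per-marker maxima within an expression, then comparing
# across expressions and sexes, yields summaries like those above
# (e.g., anger > fear in both groups).
```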

  6. Adult Perceptions of Positive and Negative Infant Emotional Expressions

    ERIC Educational Resources Information Center

    Bolzani Dinehart, Laura H.; Messinger, Daniel S.; Acosta, Susan I.; Cassel, Tricia; Ambadar, Zara; Cohn, Jeffrey

    2005-01-01

    Adults' perceptions provide information about the emotional meaning of infant facial expressions. This study asks whether similar facial movements influence adult perceptions of emotional intensity in both infant positive (smile) and negative (cry face) facial expressions. Ninety-five college students rated a series of naturally occurring and…

  7. Real Time 3D Facial Movement Tracking Using a Monocular Camera

    PubMed Central

    Dong, Yanchao; Wang, Yanming; Yue, Jiguang; Hu, Zhencheng

    2016-01-01

    The paper proposes a robust framework for 3D facial movement tracking in real time using a monocular camera. It is designed to estimate the 3D face pose and local facial animation such as eyelid movement and mouth movement. The framework first utilizes the Discriminative Shape Regression method to locate the facial feature points on the 2D image and fuses the 2D data with a 3D face model using an Extended Kalman Filter to yield 3D facial movement information. An alternating optimizing strategy is adopted to fit to different persons automatically. Experiments show that the proposed framework could track the 3D facial movement across various poses and illumination conditions. Given the real face scale, the framework could track the eyelid with an error of 1 mm and the mouth with an error of 2 mm. The tracking result is reliable for expression analysis or mental state inference. PMID:27463714

  8. Real Time 3D Facial Movement Tracking Using a Monocular Camera.

    PubMed

    Dong, Yanchao; Wang, Yanming; Yue, Jiguang; Hu, Zhencheng

    2016-07-25

    The paper proposes a robust framework for 3D facial movement tracking in real time using a monocular camera. It is designed to estimate the 3D face pose and local facial animation such as eyelid movement and mouth movement. The framework first utilizes the Discriminative Shape Regression method to locate the facial feature points on the 2D image and fuses the 2D data with a 3D face model using an Extended Kalman Filter to yield 3D facial movement information. An alternating optimizing strategy is adopted to fit to different persons automatically. Experiments show that the proposed framework could track the 3D facial movement across various poses and illumination conditions. Given the real face scale, the framework could track the eyelid with an error of 1 mm and the mouth with an error of 2 mm. The tracking result is reliable for expression analysis or mental state inference.
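
    The Extended Kalman Filter fusion at the core of both records can be illustrated, in heavily simplified form, by a one-dimensional linear Kalman filter; the actual system additionally linearizes the camera projection that maps 3D model points to 2D landmarks, which is omitted here.

```python
import numpy as np

def kalman_step(x, P, z, q=1e-3, r=1e-2):
    """One predict/update cycle for a single pose parameter (e.g., head yaw).

    x: state estimate, P: its variance, z: new measurement,
    q: process noise, r: measurement noise (all invented magnitudes).
    """
    x_pred, P_pred = x, P + q           # predict with a constant-state model
    K = P_pred / (P_pred + r)           # Kalman gain
    x_new = x_pred + K * (z - x_pred)   # correct with the innovation
    P_new = (1.0 - K) * P_pred
    return x_new, P_new

x, P = 0.0, 1.0
for z in [0.10, 0.12, 0.11, 0.13]:      # noisy yaw measurements (radians)
    x, P = kalman_step(x, P, z)
print(x, P)
```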

  9. Recognizing Facial Expressions Automatically from Video

    NASA Astrophysics Data System (ADS)

    Shan, Caifeng; Braspenning, Ralph

    Facial expressions, resulting from movements of the facial muscles, are the changes the face undergoes in response to a person's internal emotional states, intentions, or social communications. There is a considerable history associated with the study of facial expressions. Darwin [22], who argued that all mammals show emotions reliably in their faces, was the first to describe in detail the specific facial expressions associated with emotions in animals and humans. Since then, facial expression analysis has been an area of great research interest for behavioral scientists [27]. Psychological studies [48, 3] suggest that facial expressions, as the main mode of nonverbal communication, play a vital role in human face-to-face communication. For illustration, we show some examples of facial expressions in Fig. 1.

  10. Visual search for facial expressions of emotions: a comparison of dynamic and static faces.

    PubMed

    Horstmann, Gernot; Ansorge, Ulrich

    2009-02-01

    A number of past studies have used the visual search paradigm to examine whether certain aspects of emotional faces are processed preattentively and can thus be used to guide attention. All these studies presented static depictions of facial prototypes. Emotional expressions conveyed by the movement patterns of the face have never been examined for their preattentive effect. The present study presented for the first time dynamic facial expressions in a visual search paradigm. Experiment 1 revealed efficient search for a dynamic angry face among dynamic friendly faces, but inefficient search in a control condition with static faces. Experiments 2 to 4 suggested that this pattern of results is due to a stronger movement signal in the angry than in the friendly face: No (strong) advantage of dynamic over static faces is revealed when the degree of movement is controlled. These results show that dynamic information can be efficiently utilized in visual search for facial expressions. However, these results do not generally support the hypothesis that emotion-specific movement patterns are always preattentively discriminated.

  11. Objectively measuring pain using facial expression: is the technology finally ready?

    PubMed

    Dawes, Thomas Richard; Eden-Green, Ben; Rosten, Claire; Giles, Julian; Governo, Ricardo; Marcelline, Francesca; Nduka, Charles

    2018-03-01

    Currently, clinicians observe pain-related behaviors and use patient self-report measures in order to determine pain severity. This paper reviews the evidence when facial expression is used as a measure of pain. We review the literature reporting the relevance of facial expression as a diagnostic measure, which facial movements are indicative of pain, and whether such movements can be reliably used to measure pain. We conclude that although the technology for objective pain measurement is not yet ready for use in clinical settings, the potential benefits to patients in improved pain management, combined with the advances being made in sensor technology and artificial intelligence, provide opportunities for research and innovation.

  12. Slowing down presentation of facial movements and vocal sounds enhances facial expression recognition and induces facial-vocal imitation in children with autism.

    PubMed

    Tardif, Carole; Lainé, France; Rodriguez, Mélissa; Gepner, Bruno

    2007-09-01

    This study examined the effects of slowing down presentation of facial expressions and their corresponding vocal sounds on facial expression recognition and facial and/or vocal imitation in children with autism. Twelve autistic children and twenty-four normal control children were presented with emotional and non-emotional facial expressions on CD-ROM, under audio or silent conditions, and under dynamic visual conditions (slowly, very slowly, at normal speed) plus a static control. Overall, children with autism showed lower performance in expression recognition and more induced facial-vocal imitation than controls. In the autistic group, facial expression recognition and induced facial-vocal imitation were significantly enhanced in slow conditions. Findings may give new perspectives for understanding and intervention for verbal and emotional perceptive and communicative impairments in autistic populations.

  13. Bodily Movement and Facial Actions in Expressive Musical Performance by Solo and Duo Instrumentalists: Two Distinctive Case Studies

    ERIC Educational Resources Information Center

    Davidson, Jane W.

    2012-01-01

    The research literature concerning gesture in musical performance increasingly reports that musically communicative and meaningful performances contain highly expressive bodily movements. These movements are involved in the generation of the musically expressive performance, but enquiry into the development of expressive bodily movement has been…

  14. Towards Emotion Detection in Educational Scenarios from Facial Expressions and Body Movements through Multimodal Approaches

    PubMed Central

    Saneiro, Mar; Salmeron-Majadas, Sergio

    2014-01-01

    We report current findings when considering video recordings of facial expressions and body movements to provide affective personalized support in an educational context from an enriched multimodal emotion detection approach. In particular, we describe an annotation methodology to tag facial expressions and body movements that correspond to changes in the affective states of learners while dealing with cognitive tasks in a learning process. The ultimate goal is to combine these annotations with additional affective information collected during experimental learning sessions from different sources such as qualitative, self-reported, physiological, and behavioral information. Together, these data are used to train data mining algorithms that automatically identify changes in the learners' affective states when dealing with cognitive tasks, which helps to provide personalized emotional support. PMID:24892055

  15. Towards emotion detection in educational scenarios from facial expressions and body movements through multimodal approaches.

    PubMed

    Saneiro, Mar; Santos, Olga C; Salmeron-Majadas, Sergio; Boticario, Jesus G

    2014-01-01

    We report current findings when considering video recordings of facial expressions and body movements to provide affective personalized support in an educational context from an enriched multimodal emotion detection approach. In particular, we describe an annotation methodology to tag facial expressions and body movements that correspond to changes in the affective states of learners while dealing with cognitive tasks in a learning process. The ultimate goal is to combine these annotations with additional affective information collected during experimental learning sessions from different sources such as qualitative, self-reported, physiological, and behavioral information. Together, these data are used to train data mining algorithms that automatically identify changes in the learners' affective states when dealing with cognitive tasks, which helps to provide personalized emotional support.

  16. Dissociation between facial and bodily expressions in emotion recognition: A case study.

    PubMed

    Leiva, Samanta; Margulis, Laura; Micciulli, Andrea; Ferreres, Aldo

    2017-12-21

    Existing single-case studies have reported deficit in recognizing basic emotions through facial expression and unaffected performance with body expressions, but not the opposite pattern. The aim of this paper is to present a case study with impaired emotion recognition through body expressions and intact performance with facial expressions. In this single-case study we assessed a 30-year-old patient with autism spectrum disorder, without intellectual disability, and a healthy control group (n = 30) with four tasks of basic and complex emotion recognition through face and body movements, and two non-emotional control tasks. To analyze the dissociation between facial and body expressions, we used Crawford and Garthwaite's operational criteria, and we compared the patient and the control group performance with a modified one-tailed t-test designed specifically for single-case studies. There were no statistically significant differences between the patient's and the control group's performances on the non-emotional body movement task or the facial perception task. For both kinds of emotions (basic and complex) when the patient's performance was compared to the control group's, statistically significant differences were only observed for the recognition of body expressions. There were no significant differences between the patient's and the control group's correct answers for emotional facial stimuli. Our results showed a profile of impaired emotion recognition through body expressions and intact performance with facial expressions. This is the first case study that describes the existence of this kind of dissociation pattern between facial and body expressions of basic and complex emotions.
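
    The single-case comparison referenced above tests one patient's score against a small control sample. A minimal sketch of the classic Crawford and Howell (1998) t-test from that family follows; whether the study used exactly this variant or a later Crawford-Garthwaite refinement is not stated in the abstract.

```python
import numpy as np
from scipy import stats

def crawford_howell(patient_score, control_scores):
    """One-tailed single-case t-test (Crawford & Howell, 1998)."""
    c = np.asarray(control_scores, dtype=float)
    n = len(c)
    t = (patient_score - c.mean()) / (c.std(ddof=1) * np.sqrt((n + 1) / n))
    p = stats.t.sf(abs(t), df=n - 1)   # one-tailed p-value
    return t, p

# e.g., with a control group of n = 30 accuracy scores:
# t, p = crawford_howell(0.55, control_accuracies)
```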

  17. Infrared-based blink detecting glasses for facial pacing: towards a bionic blink

    PubMed Central

    Frigerio, Alice; Hadlock, Tessa A; Murray, Elizabeth H; Heaton, James T

    2015-01-01

    Importance: Facial paralysis remains one of the most challenging conditions to effectively manage, often causing life-altering deficits in both function and appearance. Facial rehabilitation via pacing and robotic technology has great yet unmet potential. A critical first step towards reanimating symmetrical facial movement in cases of unilateral paralysis is the detection of healthy movement to use as a trigger for stimulated movement. Objective: To test a blink detection system that can be attached to standard eyeglasses and used as part of a closed-loop facial pacing system. Design: Standard safety glasses were equipped with an infrared (IR) emitter/detector pair oriented horizontally across the palpebral fissure, creating a monitored IR beam that became interrupted when the eyelids closed. Setting: Tertiary care Facial Nerve Center. Participants: 24 healthy volunteers. Main Outcome Measure: Video-quantified blinking was compared with both IR sensor signal magnitude and rate of change in healthy participants with their gaze in repose, while they shifted gaze from central to far peripheral positions, and during the production of particular facial expressions. Results: Blink detection based on signal magnitude achieved 100% sensitivity in forward gaze, but generated false detections on downward gaze. Calculations of peak rate of signal change (first derivative) typically distinguished blinks from gaze-related lid movements. During forward gaze, 87% of detected blink events were true positives, 11% were false positives, and 2% false negatives. Of the 11% false positives, 6% were associated with partial eyelid closures. During gaze changes, false blink detection occurred 6.3% of the time during lateral eye movements, 10.4% during upward movements, 46.5% during downward movements, and 5.6% for movements from an upward or downward gaze back to the primary gaze. Facial expressions disrupted sensor output if they caused substantial squinting or shifted the glasses. Conclusions and Relevance: Our blink detection system provides a reliable, non-invasive indication of eyelid closure using an invisible light beam passing in front of the eye. Future versions will aim to mitigate detection errors by using multiple IR emitter/detector pairs mounted on the glasses, and alternative frame designs may reduce shifting of the sensors relative to the eye during facial movements. PMID:24699708
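
    The derivative-based rule that the study found most robust can be sketched as a threshold on the rate of signal change rather than on raw magnitude; the sampling-rate handling and threshold units below are assumptions.

```python
import numpy as np

def detect_blink_onsets(ir_signal, fs, rate_thresh):
    """Flag samples where the IR signal falls faster than a threshold.

    ir_signal  : 1-D IR detector trace (beam interrupted when eyelids close)
    fs         : sampling rate in Hz
    rate_thresh: drop rate, in signal units per second, that counts as a blink
    """
    d = np.gradient(ir_signal) * fs              # first derivative of the signal
    falling = d < -rate_thresh                   # fast drops = candidate blinks
    onsets = np.flatnonzero(falling[1:] & ~falling[:-1]) + 1
    return onsets
```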

  18. Facial recognition in education system

    NASA Astrophysics Data System (ADS)

    Krithika, L. B.; Venkatesh, K.; Rathore, S.; Kumar, M. Harish

    2017-11-01

    Human beings exploit emotions comprehensively for conveying messages and resolving them. Emotion detection and face recognition can provide an interface between individuals and technologies. The most successful applications of recognition analysis are in the recognition of faces. Many different techniques have been used to recognize facial expressions and to detect emotion across varying poses. In this paper, we propose an efficient method to recognize facial expressions by tracking face points and distances. The method can automatically identify an observer's face movements and facial expression in an image, and can capture different aspects of emotion and facial expressions.

  19. Living with Moebius syndrome: adjustment, social competence, and satisfaction with life.

    PubMed

    Bogart, Kathleen Rives; Matsumoto, David

    2010-03-01

    Moebius syndrome is a rare congenital condition that results in bilateral facial paralysis. Several studies have reported social interaction and adjustment problems in people with Moebius syndrome and other facial movement disorders, presumably resulting from lack of facial expression. Objective: To determine whether adults with Moebius syndrome experience increased anxiety and depression and/or decreased social competence and satisfaction with life compared with people without facial movement disorders. Design: Internet-based quasi-experimental study with a comparison group. Participants: Thirty-seven adults with Moebius syndrome recruited through the United States-based Moebius Syndrome Foundation newsletter and Web site and 37 age- and gender-matched control participants recruited through a university participant database. Main Outcome Measures: Anxiety and depression, social competence, satisfaction with life, ability to express emotion facially, and questions about Moebius syndrome symptoms. Results: People with Moebius syndrome reported significantly lower social competence than the matched control group and normative data but did not differ significantly from the control group or norms in anxiety, depression, or satisfaction with life. In people with Moebius syndrome, degree of facial expression impairment was not significantly related to the adjustment variables. Conclusions: Many people with Moebius syndrome are better adjusted than previous research suggests, despite their difficulties with social interaction. To enhance interaction, people with Moebius syndrome could compensate for the lack of facial expression with alternative expressive channels.

  20. Can We Distinguish Emotions from Faces? Investigation of Implicit and Explicit Processes of Peak Facial Expressions.

    PubMed

    Xiao, Ruiqi; Li, Xianchun; Li, Lin; Wang, Yanmei

    2016-01-01

    Most previous studies on facial expression recognition have focused on moderate emotions; to date, few studies have been conducted to investigate the explicit and implicit processes of peak emotions. In the current study, we used transiently peak intense expression images of athletes at the winning or losing point in competition as materials, and investigated the diagnosability of peak facial expressions at both implicit and explicit levels. In Experiment 1, participants were instructed to evaluate isolated faces, isolated bodies, and face-body compounds, and eye movements were recorded. The results revealed that the isolated body and face-body congruent images were better recognized than isolated face and face-body incongruent images, indicating that the emotional information conveyed by facial cues was ambiguous, and the body cues influenced facial emotion recognition. Furthermore, eye movement records showed that the participants displayed distinct gaze patterns for the congruent and incongruent compounds. In Experiment 2A, the subliminal affective priming task was used, with faces as primes and bodies as targets, to investigate the unconscious emotion perception of peak facial expressions. The results showed that a winning face prime facilitated reaction to a winning body target, whereas a losing face prime inhibited reaction to a winning body target, suggesting that peak facial expressions could be perceived at the implicit level. In general, the results indicate that peak facial expressions cannot be consciously recognized but can be perceived at the unconscious level. In Experiment 2B, a revised subliminal affective priming task and a strict awareness test were used to examine the validity of the unconscious perception of peak facial expressions found in Experiment 2A. Results of Experiment 2B showed that reaction time to both winning body targets and losing body targets was influenced by the invisible peak facial expression primes, which indicated unconscious perception of peak facial expressions.

  1. Can We Distinguish Emotions from Faces? Investigation of Implicit and Explicit Processes of Peak Facial Expressions

    PubMed Central

    Xiao, Ruiqi; Li, Xianchun; Li, Lin; Wang, Yanmei

    2016-01-01

    Most previous studies on facial expression recognition have focused on moderate-intensity emotions; to date, few studies have investigated the explicit and implicit processing of peak emotions. In the current study, we used images of the transient, peak-intensity expressions of athletes at the winning or losing point of a competition as materials, and investigated the diagnosability of peak facial expressions at both the implicit and explicit levels. In Experiment 1, participants were instructed to evaluate isolated faces, isolated bodies, and face-body compounds while their eye movements were recorded. The results revealed that the isolated-body and face-body congruent images were better recognized than isolated-face and face-body incongruent images, indicating that the emotional information conveyed by facial cues was ambiguous and that body cues influenced facial emotion recognition. Furthermore, the eye movement records showed that participants displayed distinct gaze patterns for the congruent and incongruent compounds. In Experiment 2A, a subliminal affective priming task was used, with faces as primes and bodies as targets, to investigate the unconscious perception of peak facial expressions. The results showed that a winning face prime facilitated reactions to a winning body target, whereas a losing face prime inhibited reactions to a winning body target, suggesting that peak facial expressions could be perceived at the implicit level. In general, the results indicate that peak facial expressions cannot be consciously recognized but can be perceived at the unconscious level. In Experiment 2B, a revised subliminal affective priming task and a strict awareness test were used to examine the validity of the unconscious perception of peak facial expressions found in Experiment 2A. Reaction times to both winning and losing body targets were influenced by the invisible peak facial expression primes, confirming the unconscious perception of peak facial expressions. PMID:27630604
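
    A congruency contrast on reaction times is the standard way to quantify a priming effect of this kind. Below is a minimal sketch under assumed, hypothetical trial-level data (columns subject, prime, target, rt); it illustrates the paradigm's logic rather than the authors' actual analysis.

        # Sketch: quantify a subliminal affective priming effect from trial RTs.
        # Hypothetical columns: subject, prime ("win"/"lose"), target
        # ("win"/"lose"), rt (ms). Congruent trials: prime matches target.
        import pandas as pd
        from scipy import stats

        trials = pd.read_csv("priming_trials.csv")      # hypothetical file
        trials["congruent"] = trials["prime"] == trials["target"]

        # Mean RT per subject and congruency condition.
        means = trials.groupby(["subject", "congruent"])["rt"].mean().unstack()

        # Paired t-test across subjects: congruent vs. incongruent RTs.
        t, p = stats.ttest_rel(means[True], means[False])
        effect = (means[False] - means[True]).mean()    # positive = facilitation
        print(f"priming effect = {effect:.1f} ms, t = {t:.2f}, p = {p:.3f}")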

  2. Novel dynamic Bayesian networks for facial action element recognition and understanding

    NASA Astrophysics Data System (ADS)

    Zhao, Wei; Park, Jeong-Seon; Choi, Dong-You; Lee, Sang-Woong

    2011-12-01

    In daily life, language is an important tool of communication between people. Besides language, facial actions can also provide a great amount of information. Therefore, facial action recognition has become a popular research topic in the field of human-computer interaction (HCI). However, facial action recognition is quite a challenging task due to its complexity. In a literal sense, there are thousands of facial muscular movements, many of which have very subtle differences. Moreover, muscular movements often occur simultaneously and are further confounded by changes in pose. To address this problem, we first build a fully automatic facial point detection system based on a local Gabor filter bank and principal component analysis. Then, novel dynamic Bayesian networks are proposed to perform facial action recognition using the junction tree algorithm over a limited number of feature points. To evaluate the proposed method, we used the Korean face database for model training. For testing, we used the CUbiC FacePix facial expressions and emotion database, the Japanese female facial expression database, and our own database. Our experimental results clearly demonstrate the feasibility of the proposed approach.
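
    The front end of such a pipeline, a local Gabor filter bank followed by principal component analysis, can be sketched compactly. The snippet below assumes OpenCV and scikit-learn, with illustrative filter parameters and hypothetical inputs gray (a grayscale face image) and points (candidate landmark coordinates); it is a sketch, not the authors' implementation.

        # Sketch: features for candidate facial points from a local Gabor
        # filter bank, compressed with PCA. Parameters are illustrative;
        # points are assumed to lie away from the image border.
        import cv2
        import numpy as np
        from sklearn.decomposition import PCA

        def gabor_bank(ksize=21):
            kernels = []
            for theta in np.arange(0, np.pi, np.pi / 4):   # 4 orientations
                for lambd in (4.0, 8.0, 16.0):             # 3 wavelengths
                    kernels.append(cv2.getGaborKernel(
                        (ksize, ksize), sigma=4.0, theta=theta,
                        lambd=lambd, gamma=0.5, psi=0))
            return kernels

        def point_features(gray, points, bank, patch=10):
            feats = []
            for x, y in points:
                roi = gray[y - patch:y + patch,
                           x - patch:x + patch].astype(np.float32)
                feats.append([cv2.filter2D(roi, cv2.CV_32F, k).mean()
                              for k in bank])
            return np.array(feats)

        # features = point_features(gray, points, gabor_bank())
        # compact = PCA(n_components=6).fit_transform(features)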

  3. Moving to continuous facial expression space using the MPEG-4 facial definition parameter (FDP) set

    NASA Astrophysics Data System (ADS)

    Karpouzis, Kostas; Tsapatsoulis, Nicolas; Kollias, Stefanos D.

    2000-06-01

    Research in facial expression has concluded that at least six emotions, conveyed by human faces, are universally associated with distinct expressions. Sadness, anger, joy, fear, disgust, and surprise are categories of expressions that are recognizable across cultures. In this work we form a relation between the description of the universal expressions and the MPEG-4 Facial Definition Parameter set (FDP). We also investigate the relation between the movement of basic FDPs and the parameters that describe emotion-related words according to classical psychological studies. In particular, Whissell suggested that emotions are points in a space that seems to occupy two dimensions: activation and evaluation. We show that some of the MPEG-4 Facial Animation Parameters (FAPs), approximated by the motion of the corresponding FDPs, can be combined by means of a fuzzy rule system to estimate the activation parameter. In this way, variations of the six archetypal emotions can be achieved. Moreover, Plutchik concluded that emotion terms are unevenly distributed through the space defined by dimensions like Whissell's; instead they tend to form an approximately circular pattern, called the 'emotion wheel,' modeled using an angular measure. The 'emotion wheel' can be used as a reference for creating intermediate expressions from the universal ones, by interpolating the movement of dominant FDP points between neighboring basic expressions. By exploiting the relation between the movement of the basic FDP points and the activation and angular parameters, we can model more emotions than the primary ones and achieve efficient recognition in video sequences.
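
    The interpolation idea lends itself to a toy sketch: an intermediate expression is obtained by blending the FDP displacement fields of the two neighboring archetypal expressions according to the angle on the emotion wheel. The wheel angles and displacement data below are hypothetical placeholders (MPEG-4 defines 84 facial feature points).

        # Sketch: an intermediate expression as a blend of the FDP
        # displacement fields of two neighbouring archetypal expressions,
        # weighted by position (angle) on the emotion wheel.
        import numpy as np

        WHEEL = {"joy": 30.0, "surprise": 90.0}   # illustrative angles (deg)

        def intermediate(angle, disp_a, disp_b, angle_a, angle_b):
            """Linearly interpolate FDP displacements between two basics."""
            w = (angle - angle_a) / (angle_b - angle_a)
            return (1 - w) * disp_a + w * disp_b

        joy = np.random.rand(84, 3)         # placeholder displacement fields
        surprise = np.random.rand(84, 3)
        blended = intermediate(60.0, joy, surprise,
                               WHEEL["joy"], WHEEL["surprise"])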

  4. Mapping and Manipulating Facial Expression

    ERIC Educational Resources Information Center

    Theobald, Barry-John; Matthews, Iain; Mangini, Michael; Spies, Jeffrey R.; Brick, Timothy R.; Cohn, Jeffrey F.; Boker, Steven M.

    2009-01-01

    Nonverbal visual cues accompany speech to supplement the meaning of spoken words, signify emotional state, indicate position in discourse, and provide back-channel feedback. This visual information includes head movements, facial expressions and body gestures. In this article we describe techniques for manipulating both verbal and nonverbal facial…

  5. Discovering cultural differences (and similarities) in facial expressions of emotion.

    PubMed

    Chen, Chaona; Jack, Rachael E

    2017-10-01

    Understanding the cultural commonalities and specificities of facial expressions of emotion remains a central goal of Psychology. However, recent progress has been stalled by dichotomous debates (e.g. nature versus nurture) that have created silos of empirical and theoretical knowledge. Now, an emerging interdisciplinary scientific culture is broadening the focus of research to provide a more unified and refined account of facial expressions within and across cultures. Specifically, data-driven approaches allow a wider, more objective exploration of face movement patterns that provide detailed information ontologies of their cultural commonalities and specificities. Similarly, a wider exploration of the social messages perceived from face movements diversifies knowledge of their functional roles (e.g. the 'fear' face used as a threat display). Together, these new approaches promise to diversify, deepen, and refine knowledge of facial expressions, and deliver the next major milestones for a functional theory of human social communication that is transferable to social robotics. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  6. Reproducibility of the dynamics of facial expressions in unilateral facial palsy.

    PubMed

    Alagha, M A; Ju, X; Morley, S; Ayoub, A

    2018-02-01

    The aim of this study was to assess the reproducibility of non-verbal facial expressions in unilateral facial paralysis using dynamic four-dimensional (4D) imaging. The Di4D system was used to record five facial expressions of 20 adult patients. The system captured 60 three-dimensional (3D) images per second; each facial expression took 3-4 seconds and was recorded in real time, so a set of 180 3D facial images was generated for each expression. The procedure was repeated after 30 min to assess the reproducibility of the expressions. A mathematical facial mesh consisting of thousands of quasi-point 'vertices' was conformed to the face in order to determine the morphological characteristics in a comprehensive manner. The vertices were tracked throughout the sequence of 180 images. Five key 3D facial frames from each sequence of images were analyzed. Comparisons were made between the first and second capture of each facial expression to assess the reproducibility of facial movements. Corresponding images were aligned using partial Procrustes analysis, and the root mean square distance between them was calculated and analyzed statistically (paired Student t-test, P < 0.05). The facial expressions of lip purse, cheek puff, and raising of the eyebrows were reproducible; maximum smile and forceful eye closure were not. The limited coordination of the various groups of facial muscles contributed to the lack of reproducibility of these expressions. 4D imaging is a useful clinical tool for the assessment of facial expressions. Copyright © 2017 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.
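
    The alignment-and-distance step can be sketched directly: a partial Procrustes superimposition (rotation and translation only, here via the Kabsch algorithm) followed by the root mean square distance between corresponding vertices. A minimal sketch, assuming two (n, 3) vertex arrays in correspondence:

        # Sketch: partial Procrustes alignment (rotation + translation,
        # no scaling) of two corresponding vertex sets, then RMS distance.
        import numpy as np

        def partial_procrustes_rms(A, B):
            """A, B: (n_vertices, 3) corresponding 3D captures."""
            A0, B0 = A - A.mean(axis=0), B - B.mean(axis=0)  # remove translation
            U, _, Vt = np.linalg.svd(B0.T @ A0)              # Kabsch algorithm
            if np.linalg.det(U @ Vt) < 0:                    # avoid a reflection
                Vt[-1, :] *= -1
            R = (U @ Vt).T                                   # optimal rotation
            diffs = A0 - B0 @ R.T
            return np.sqrt(np.mean(np.sum(diffs ** 2, axis=1)))

        # rms = partial_procrustes_rms(capture1_vertices, capture2_vertices)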

  7. Mimicking emotions: how 3-12-month-old infants use the facial expressions and eyes of a model.

    PubMed

    Soussignan, Robert; Dollion, Nicolas; Schaal, Benoist; Durand, Karine; Reissland, Nadja; Baudouin, Jean-Yves

    2018-06-01

    While there is an extensive literature on the tendency to mimic emotional expressions in adults, it is unclear how this skill emerges and develops over time. Specifically, it is unclear whether infants mimic discrete emotion-related facial actions, whether their facial displays are moderated by contextual cues and whether infants' emotional mimicry is constrained by developmental changes in the ability to discriminate emotions. We therefore investigate these questions using Baby-FACS to code infants' facial displays and eye-movement tracking to examine infants' looking times at facial expressions. Three-, 7-, and 12-month-old participants were exposed to dynamic facial expressions (joy, anger, fear, disgust, sadness) of a virtual model which either looked at the infant or had an averted gaze. Infants did not match emotion-specific facial actions shown by the model, but they produced valence-congruent facial responses to the distinct expressions. Furthermore, only the 7- and 12-month-olds displayed negative responses to the model's negative expressions and they looked more at areas of the face recruiting facial actions involved in specific expressions. Our results suggest that valence-congruent expressions emerge in infancy during a period where the decoding of facial expressions becomes increasingly sensitive to the social signal value of emotions.

  8. Tactile Stimulation of the Face and the Production of Facial Expressions Activate Neurons in the Primate Amygdala

    PubMed Central

    Mosher, Clayton P.; Zimmerman, Prisca E.; Fuglevand, Andrew J.; Gothard, Katalin M.

    2016-01-01

    The majority of neurophysiological studies that have explored the role of the primate amygdala in the evaluation of social signals have relied on visual stimuli such as images of facial expressions. Vision, however, is not the only sensory modality that carries social signals. Both humans and nonhuman primates exchange emotionally meaningful social signals through touch. Indeed, social grooming in nonhuman primates and caressing touch in humans is critical for building lasting and reassuring social bonds. To determine the role of the amygdala in processing touch, we recorded the responses of single neurons in the macaque amygdala while we applied tactile stimuli to the face. We found that one-third of the recorded neurons responded to tactile stimulation. Although we recorded exclusively from the right amygdala, the receptive fields of 98% of the neurons were bilateral. A fraction of these tactile neurons were monitored during the production of facial expressions and during facial movements elicited occasionally by touch stimuli. Firing rates arising during the production of facial expressions were similar to those elicited by tactile stimulation. In a subset of cells, combining tactile stimulation with facial movement further augmented the firing rates. This suggests that tactile neurons in the amygdala receive input from skin mechanoceptors that are activated by touch and by compressions and stretches of the facial skin during the contraction of the underlying muscles. Tactile neurons in the amygdala may play a role in extracting the valence of touch stimuli and/or monitoring the facial expressions of self during social interactions. PMID:27752543
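
    The core firing-rate comparison reported here reduces to contrasting spike counts in a baseline window against a stimulation window. A schematic sketch, assuming per-trial spike timestamps and illustrative window boundaries:

        # Sketch: compare single-neuron firing rates between a pre-stimulus
        # baseline and a tactile-stimulation window. Spike times (s) are
        # relative to stimulus onset; windows are illustrative.
        import numpy as np
        from scipy import stats

        def rate(spikes, t0, t1):
            return np.sum((spikes >= t0) & (spikes < t1)) / (t1 - t0)

        def tactile_response(trials, baseline=(-1.0, 0.0), stim=(0.0, 1.0)):
            """trials: list of 1-D arrays of spike times per trial."""
            base = np.array([rate(s, *baseline) for s in trials])
            resp = np.array([rate(s, *stim) for s in trials])
            t, p = stats.ttest_rel(resp, base)   # paired across trials
            return resp.mean() - base.mean(), p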

  10. Spontaneous and posed facial expression in Parkinson's disease.

    PubMed

    Smith, M C; Smith, M K; Ellgring, H

    1996-09-01

    Spontaneous and posed emotional facial expressions in individuals with Parkinson's disease (PD, n = 12) were compared with those of healthy age-matched controls (n = 12). The intensity and amount of facial expression in PD patients were expected to be reduced for spontaneous but not posed expressions. Emotional stimuli were video clips selected from films, 2-5 min in duration, designed to elicit feelings of happiness, sadness, fear, disgust, or anger. Facial movements were coded using Ekman and Friesen's (1978) Facial Action Coding System (FACS). In addition, participants rated their emotional experience on 9-point Likert scales. The PD group showed significantly less overall facial reactivity than did controls when viewing the films. The predicted Group × Condition (spontaneous vs. posed) interaction effect on smile intensity was found when PD participants with more severe disease were compared with those with milder disease and with controls. In contrast, ratings of emotional experience were similar for both groups. Depression was positively associated with emotion rating but not with measures of facial activity. Spontaneous facial expression appears to be selectively affected in PD, whereas posed expression and emotional experience remain relatively intact.

  11. Communicating with Virtual Humans.

    ERIC Educational Resources Information Center

    Thalmann, Nadia Magnenat

    The face is a small part of a human, but it plays an essential role in communication. An open hybrid system for facial animation is presented. It encapsulates a considerable amount of information regarding facial models, movements, expressions, emotions, and speech. The complex description of facial animation can be handled better by assigning…

  12. Generating and Describing Affective Eye Behaviors

    NASA Astrophysics Data System (ADS)

    Mao, Xia; Li, Zheng

    The manner of a person's eye movement conveys much nonverbal information and emotional intent beyond speech. This paper describes work on expressing emotion through eye behaviors in virtual agents, based on parameters selected from the AU-coded facial expression database and real-time eye movement data (pupil size, blink rate, and saccade). A rule-based approach is introduced that generates primary emotions (joyful, sad, angry, afraid, disgusted, and surprised) and intermediate emotions (emotions that can be represented as the mixture of two primary emotions) using the MPEG-4 FAPs (facial animation parameters). In addition, a scripting tool named EEMML (Emotional Eye Movement Markup Language) is proposed that enables authors to describe and generate the emotional eye movement of virtual agents.
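
    The rule-based mapping can be illustrated with a toy table: each primary emotion carries an eye-behavior parameter set (pupil size, blink rate, saccade amplitude), and an intermediate emotion is a weighted mixture of two primaries. All numbers below are invented for illustration and are not the paper's actual rules.

        # Toy rule base: primary emotions map to eye-behavior parameters;
        # intermediate emotions are weighted blends of two primaries.
        PRIMARY = {                  # (pupil scale, blinks/min, saccade deg)
            "joyful": (1.10, 18.0, 4.0),
            "sad":    (0.95,  8.0, 2.0),
            "afraid": (1.25, 25.0, 6.0),
        }

        def mix(emotion_a, emotion_b, w=0.5):
            """Intermediate emotion as a blend of two primary parameter sets."""
            pa, pb = PRIMARY[emotion_a], PRIMARY[emotion_b]
            return tuple((1 - w) * a + w * b for a, b in zip(pa, pb))

        print(mix("joyful", "afraid", 0.3))   # an intermediate emotion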

  13. A comparison of facial expression properties in five hylobatid species.

    PubMed

    Scheider, Linda; Liebal, Katja; Oña, Leonardo; Burrows, Anne; Waller, Bridget

    2014-07-01

    Little is known about the facial communication of lesser apes (family Hylobatidae) and how their facial expressions (and their use) relate to social organization. We investigated facial expressions (defined as combinations of facial movements) in social interactions of mated pairs in five different hylobatid species belonging to three different genera, using a recently developed objective coding system, the Facial Action Coding System for hylobatid species (GibbonFACS). We described three important properties of their facial expressions and compared them between genera. First, we compared the rate of facial expressions, defined as the number of facial expressions per unit of time. Second, we compared their repertoire size, defined as the number of different types of facial expressions used, independent of their frequency. Third, we compared the diversity of expression, defined as the repertoire weighted by the rate of use of each type of facial expression. We observed a higher rate and diversity of facial expression, but no larger repertoire, in Symphalangus (siamangs) compared to Hylobates and Nomascus species. In line with previous research, these results suggest that siamangs differ from other hylobatids in certain aspects of their social behavior. To investigate whether differences in facial expressions are linked to hylobatid socio-ecology, we used a Phylogenetic Generalized Least Squares (PGLS) regression analysis to correlate these properties with two social factors: group size and level of monogamy. No relationship between the properties of facial expressions and these socio-ecological factors was found. One explanation could be that facial expressions in hylobatid species are subject to phylogenetic inertia and do not differ sufficiently between species to reveal correlations with factors such as group size and monogamy level. © 2014 Wiley Periodicals, Inc.
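
    The three expression properties defined above are straightforward to compute from coded event logs. A sketch, assuming a list of expression-type labels per observation session and reading "repertoire weighted by rate of use" as a Shannon index over usage proportions (one plausible operationalization, not necessarily the authors'):

        # Sketch: rate, repertoire size, and diversity of facial expressions
        # from a coded event log.
        from collections import Counter
        import math

        def expression_properties(events, observation_minutes):
            """events: list of expression-type labels coded in one session."""
            counts = Counter(events)
            total = sum(counts.values())
            rate = total / observation_minutes      # expressions per minute
            repertoire = len(counts)                # number of distinct types
            props = [c / total for c in counts.values()]
            diversity = -sum(p * math.log(p) for p in props)  # Shannon index
            return rate, repertoire, diversity

        print(expression_properties(["AU12+AU25", "AU1+AU2", "AU12+AU25"], 30.0))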

  14. "You Should Have Seen the Look on Your Face…": Self-awareness of Facial Expressions.

    PubMed

    Qu, Fangbing; Yan, Wen-Jing; Chen, Yu-Hsin; Li, Kaiyun; Zhang, Hui; Fu, Xiaolan

    2017-01-01

    The awareness of facial expressions allows one to better understand, predict, and regulate his/her states to adapt to different social situations. The present research investigated individuals' awareness of their own facial expressions and the influence of the duration and intensity of expressions in two self-reference modalities, a real-time condition and a video-review condition. The participants were instructed to respond as soon as they became aware of any facial movements. The results revealed that awareness rates were 57.79% in the real-time condition and 75.92% in the video-review condition. The awareness rate was influenced by the intensity and (or) the duration. The intensity thresholds for individuals to become aware of their own facial expressions were calculated using logistic regression models. The results of Generalized Estimating Equations (GEE) revealed that video-review awareness was a significant predictor of real-time awareness. These findings extend understandings of human facial expression self-awareness in two modalities.

  15. “You Should Have Seen the Look on Your Face…”: Self-awareness of Facial Expressions

    PubMed Central

    Qu, Fangbing; Yan, Wen-Jing; Chen, Yu-Hsin; Li, Kaiyun; Zhang, Hui; Fu, Xiaolan

    2017-01-01

    The awareness of facial expressions allows one to better understand, predict, and regulate his/her states to adapt to different social situations. The present research investigated individuals’ awareness of their own facial expressions and the influence of the duration and intensity of expressions in two self-reference modalities: a real-time condition and a video-review condition. The participants were instructed to respond as soon as they became aware of any facial movements. The results revealed that awareness rates were 57.79% in the real-time condition and 75.92% in the video-review condition. The awareness rate was influenced by the intensity and/or the duration. The intensity thresholds for individuals to become aware of their own facial expressions were calculated using logistic regression models. The results of Generalized Estimating Equations (GEE) revealed that video-review awareness was a significant predictor of real-time awareness. These findings extend our understanding of human facial expression self-awareness in two modalities. PMID:28611703
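
    The threshold-estimation step can be sketched as fitting a logistic model of awareness against expression intensity and reading off the 50% point from the fitted coefficients. The data below are invented; statsmodels is one convenient tool.

        # Sketch: logistic model of awareness vs. expression intensity;
        # the awareness threshold is the intensity where P(aware) = 0.5.
        import numpy as np
        import statsmodels.api as sm

        intensity = np.array([0.5, 0.8, 1.0, 1.4, 1.7, 2.1, 2.5, 3.0])  # invented
        aware     = np.array([0,   0,   0,   1,   0,   1,   1,   1  ])

        X = sm.add_constant(intensity)
        fit = sm.Logit(aware, X).fit(disp=False)
        b0, b1 = fit.params
        threshold = -b0 / b1       # intensity at which P(aware) = 0.5
        print(f"awareness threshold: {threshold:.2f}")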

  16. Mapping spontaneous facial expression in people with Parkinson's disease: A multiple case study design.

    PubMed

    Gunnery, Sarah D; Naumova, Elena N; Saint-Hilaire, Marie; Tickle-Degnen, Linda

    2017-01-01

    People with Parkinson's disease (PD) often experience a decrease in their facial expressivity, but little is known about how the coordinated movements across regions of the face are impaired in PD. The face has neurologically independent regions that coordinate to articulate distinct social meanings that others perceive as gestalt expressions, and so understanding how different regions of the face are affected is important. Using the Facial Action Coding System, this study comprehensively measured spontaneous facial expression across 600 frames for a multiple case study of people with PD who were rated as having varying degrees of facial expression deficits, and created correlation matrices for frequency and intensity of produced muscle activations across different areas of the face. Data visualization techniques were used to create temporal and correlational mappings of muscle action in the face at different degrees of facial expressivity. Results showed that as severity of facial expression deficit increased, there was a decrease in number, duration, intensity, and coactivation of facial muscle action. This understanding of how regions of the parkinsonian face move independently and in conjunction with other regions will provide a new focus for future research aiming to model how facial expression in PD relates to disease progression, stigma, and quality of life.
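
    The correlation-matrix mapping is easy to reproduce in outline: given a frames-by-action-units intensity matrix, the AU-by-AU correlation matrix can be rendered as a heatmap. A sketch with placeholder data:

        # Sketch: coactivation structure of facial action units across frames.
        # `au` is a placeholder (n_frames, n_AUs) matrix of intensity codes.
        import numpy as np
        import matplotlib.pyplot as plt

        rng = np.random.default_rng(0)
        au = rng.random((600, 12))            # 600 frames, 12 AUs (placeholder)

        corr = np.corrcoef(au, rowvar=False)  # AU-by-AU correlation matrix

        plt.imshow(corr, vmin=-1, vmax=1, cmap="coolwarm")
        plt.colorbar(label="correlation")
        plt.xlabel("action unit"); plt.ylabel("action unit")
        plt.title("AU coactivation")
        plt.show()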

  17. Spontaneous facial expressions of emotion of congenitally and noncongenitally blind individuals.

    PubMed

    Matsumoto, David; Willingham, Bob

    2009-01-01

    The study of the spontaneous expressions of blind individuals offers a unique opportunity to understand basic processes concerning the emergence and source of facial expressions of emotion. In this study, the authors compared the expressions of congenitally and noncongenitally blind athletes in the 2004 Paralympic Games with each other and with those produced by sighted athletes in the 2004 Olympic Games. The authors also examined how expressions change from one context to another. There were no differences between congenitally blind, noncongenitally blind, and sighted athletes, either on the level of individual facial actions or in facial emotion configurations. Blind athletes did produce more overall facial activity, but this was confined to head and eye movements. The blind athletes' expressions differentiated whether they had won or lost a medal match at three different points in time, and there were no cultural differences in expression. These findings provide compelling evidence that the production of spontaneous facial expressions of emotion is not dependent on observational learning, but they simultaneously demonstrate a learned component to the social management of expressions, even among blind individuals.

  18. Dynamic facial expressions of emotion transmit an evolving hierarchy of signals over time.

    PubMed

    Jack, Rachael E; Garrod, Oliver G B; Schyns, Philippe G

    2014-01-20

    Designed by biological and social evolutionary pressures, facial expressions of emotion comprise specific facial movements to support a near-optimal system of signaling and decoding. Although facial expressions are highly dynamic, little is known about the form and function of their temporal dynamics. Do facial expressions transmit diagnostic signals simultaneously to optimize categorization of the six classic emotions, or sequentially to support a more complex communication system of successive categorizations over time? Our data support the latter. Using a combination of perceptual expectation modeling, information theory, and Bayesian classifiers, we show that dynamic facial expressions of emotion transmit an evolving hierarchy of "biologically basic to socially specific" information over time. Early in the signaling dynamics, facial expressions systematically transmit few, biologically rooted face signals supporting the categorization of fewer elementary categories (e.g., approach/avoidance). Later transmissions comprise more complex signals that support categorization of a larger number of socially specific categories (i.e., the six classic emotions). Here, we show that dynamic facial expressions of emotion provide a sophisticated signaling system, questioning the widely accepted notion that emotion communication comprises six basic (i.e., psychologically irreducible) categories, and instead suggesting four. Copyright © 2014 Elsevier Ltd. All rights reserved.

  19. Automatic detection of confusion in elderly users of a web-based health instruction video.

    PubMed

    Postma-Nilsenová, Marie; Postma, Eric; Tates, Kiek

    2015-06-01

    Because of cognitive limitations and lower health literacy, many elderly patients have difficulty understanding verbal medical instructions. Automatic detection of facial movements provides a nonintrusive basis for building technological tools supporting confusion detection in healthcare delivery applications on the Internet. Twenty-four elderly participants (70-90 years old) were recorded while watching Web-based health instruction videos involving easy and complex medical terminology. Relevant fragments of the participants' facial expressions were rated by 40 medical students for perceived level of confusion and analyzed with automatic software for facial movement recognition. A computer classification of the automatically detected facial features performed more accurately and with a higher sensitivity than the human observers (automatic detection and classification, 64% accuracy, 0.64 sensitivity; human observers, 41% accuracy, 0.43 sensitivity). A drill-down analysis of cues to confusion indicated the importance of the eye and eyebrow region. Confusion caused by misunderstanding of medical terminology is signaled by facial cues that can be automatically detected with currently available facial expression detection technology. The findings are relevant for the development of Web-based services for healthcare consumers.
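
    The reported accuracy and sensitivity figures follow directly from a confusion of predicted against annotated labels. A sketch with invented labels, where sensitivity is recall for the "confused" class:

        # Sketch: accuracy and sensitivity (true-positive rate for the
        # "confused" class) from predicted vs. annotated labels (invented).
        from sklearn.metrics import accuracy_score, recall_score

        y_true = [1, 0, 1, 1, 0, 0, 1, 0]   # 1 = confused, 0 = not confused
        y_pred = [1, 0, 1, 0, 0, 1, 1, 0]

        print("accuracy   :", accuracy_score(y_true, y_pred))
        print("sensitivity:", recall_score(y_true, y_pred))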

  20. Decoding facial expressions based on face-selective and motion-sensitive areas.

    PubMed

    Liang, Yin; Liu, Baolin; Xu, Junhai; Zhang, Gaoyan; Li, Xianglin; Wang, Peiyuan; Wang, Bin

    2017-06-01

    Humans can easily recognize others' facial expressions. Among the brain substrates that enable this ability, considerable attention has been paid to face-selective areas; in contrast, whether motion-sensitive areas, which clearly exhibit sensitivity to facial movements, are involved in facial expression recognition remained unclear. The present functional magnetic resonance imaging (fMRI) study used multi-voxel pattern analysis (MVPA) to explore facial expression decoding in both face-selective and motion-sensitive areas. In a block design experiment, participants viewed facial expressions of six basic emotions (anger, disgust, fear, joy, sadness, and surprise) in images, videos, and eyes-obscured videos. Due to the use of multiple stimulus types, the impacts of facial motion and eye-related information on facial expression decoding were also examined. It was found that motion-sensitive areas showed significant responses to emotional expressions and that dynamic expressions could be successfully decoded in both face-selective and motion-sensitive areas. Compared with static stimuli, dynamic expressions elicited consistently higher neural responses and decoding performance in all regions. A significant decrease in both activation and decoding accuracy due to the absence of eye-related information was also observed. Overall, the findings showed that emotional expressions are represented in motion-sensitive areas in addition to conventional face-selective areas, suggesting that motion-sensitive regions may also effectively contribute to facial expression recognition. The results also suggested that facial motion and eye-related information played important roles by carrying considerable expression information that could facilitate facial expression recognition. Hum Brain Mapp 38:3113-3125, 2017. © 2017 Wiley Periodicals, Inc.
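
    ROI-based MVPA of this kind is commonly run as a cross-validated linear classifier over a trials-by-voxels pattern matrix. The sketch below uses a linear SVM purely as an illustrative choice, with placeholder data; it is not the authors' pipeline.

        # Sketch: decode expression category from ROI voxel patterns with a
        # linear SVM and cross-validation. X: (n_trials, n_voxels), y: labels.
        import numpy as np
        from sklearn.svm import LinearSVC
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(1)
        X = rng.standard_normal((120, 500))   # placeholder voxel patterns
        y = np.repeat(np.arange(6), 20)       # 6 emotions x 20 trials

        clf = make_pipeline(StandardScaler(), LinearSVC())
        acc = cross_val_score(clf, X, y, cv=5).mean()
        print(f"decoding accuracy: {acc:.2f} (chance = {1/6:.2f})")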

  1. Is empathy necessary to comprehend the emotional faces? The empathic effect on attentional mechanisms (eye movements), cortical correlates (N200 event-related potentials) and facial behaviour (electromyography) in face processing.

    PubMed

    Balconi, Michela; Canavesio, Ylenia

    2016-01-01

    The present research explored the effect of social empathy on the processing of emotional facial expressions. Previous evidence suggested a close relationship between emotional empathy and both the ability to detect facial emotions and the attentional mechanisms involved. A multi-measure approach was adopted: we investigated the association between trait empathy (Balanced Emotional Empathy Scale) and individuals' performance (response times; RTs), attentional mechanisms (eye movements; number and duration of fixations), correlates of cortical activation (event-related potential (ERP) N200 component), and facial responsiveness (zygomatic and corrugator activity). Trait empathy was found to affect face detection performance (reduced RTs), attentional processes (more scanning eye movements in specific areas of interest), the ERP salience effect (increased N200 amplitude), and electromyographic activity (more facial responses). A second important result was the demonstration of strong, direct correlations among these measures. We suggest that empathy may function as a social facilitator of the processes underlying the detection of facial emotion, and a general "facial response effect" is proposed to explain these results. We propose that empathy influences both cognitive processing and facial responsiveness, such that empathic individuals are more skilful in processing facial emotion.

  2. Four not six: Revealing culturally common facial expressions of emotion.

    PubMed

    Jack, Rachael E; Sun, Wei; Delis, Ioannis; Garrod, Oliver G B; Schyns, Philippe G

    2016-06-01

    As a highly social species, humans generate complex facial expressions to communicate a diverse range of emotions. Since Darwin's work, identifying among these complex patterns which are common across cultures and which are culture-specific has remained a central question in psychology, anthropology, philosophy, and more recently machine vision and social robotics. Classic approaches to addressing this question typically tested the cross-cultural recognition of theoretically motivated facial expressions representing 6 emotions, and reported universality. Yet, variable recognition accuracy across cultures suggests a narrower cross-cultural communication supported by sets of simpler expressive patterns embedded in more complex facial expressions. We explore this hypothesis by modeling the facial expressions of over 60 emotions across 2 cultures, and segregating out the latent expressive patterns. Using a multidisciplinary approach, we first map the conceptual organization of a broad spectrum of emotion words by building semantic networks in 2 cultures. For each emotion word in each culture, we then model and validate its corresponding dynamic facial expression, producing over 60 culturally valid facial expression models. We then apply to the pooled models a multivariate data reduction technique, revealing 4 latent and culturally common facial expression patterns that each communicates specific combinations of valence, arousal, and dominance. We then reveal the face movements that accentuate each latent expressive pattern to create complex facial expressions. Our data question the widely held view that 6 facial expression patterns are universal, instead suggesting 4 latent expressive patterns with direct implications for emotion communication, social psychology, cognitive neuroscience, and social robotics. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
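
    The data-reduction step can be sketched in spirit: pool the expression models into an emotions-by-face-movement matrix and extract a small number of latent non-negative patterns. Non-negative matrix factorization is used below purely as an illustrative technique with placeholder data, not necessarily the authors' method.

        # Sketch: reduce pooled facial-expression models to a few latent
        # patterns. NMF is an illustrative choice of data reduction.
        import numpy as np
        from sklearn.decomposition import NMF

        rng = np.random.default_rng(2)
        models = rng.random((120, 42))   # e.g., 60 emotions x 2 cultures, 42 AUs

        nmf = NMF(n_components=4, init="nndsvda", random_state=0, max_iter=500)
        weights = nmf.fit_transform(models)   # model loadings on latent patterns
        patterns = nmf.components_            # 4 latent expressive patterns
        print(patterns.shape)                 # (4, 42)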

  3. A study of patient facial expressivity in relation to orthodontic/surgical treatment.

    PubMed

    Nafziger, Y J

    1994-09-01

    A dynamic analysis of the faces of patients seeking aesthetic restoration of facial aberrations through orthognathic treatment requires, besides the routine static study (records, study models, photographs, and cephalometric tracings), the study of their facial expressions. To classify the units of expressive facial behavior, the mobility of the face was studied with the aid of the Facial Action Coding System (FACS) created by Ekman and Friesen. Using video recordings of faces and photographic images taken from those recordings, a technique of facial analysis structured on the visual observation of the anatomic basis of movement was adapted. The technique is based on defining individual facial expressions and then codifying them through minimal, anatomic action units, which combine to form facial expressions. With the help of FACS, the facial expressions of 18 patients before and after orthognathic surgery, and of six control subjects without dentofacial deformation, were studied: 6278 facial expressions were registered, comprising 18,844 action units. A classification of the facial expressions made by subject groups and repeated in quantified time frames allowed the establishment of 'rules' or 'norms' relating to expression, enabling comparisons of facial expressiveness between patients and control subjects. This study indicates that the facial expressions of the patients were more similar to those of the controls after orthognathic surgery. It was possible to distinguish changes in facial expressivity in patients after dentofacial surgery; the type and degree of change depended on the facial structure before surgery. The changes noted tended toward functioning identical to that of subjects without dysmorphosis and toward greater lip competence, particularly in the function of the orbicular muscle of the lips, with reduced compensatory activity of the lower lip and chin. These results are supported by clinical observations and suggest that the FACS technique can provide a coding scheme for the study of facial expression.

  4. Spontaneous Facial Actions Map onto Emotional Experiences in a Non-social Context: Toward a Component-Based Approach

    PubMed Central

    Namba, Shushi; Kabir, Russell S.; Miyatani, Makoto; Nakao, Takashi

    2017-01-01

    While numerous studies have examined the relationships between facial actions and emotions, they have yet to account for the ways that specific spontaneous facial expressions map onto emotional experiences induced without expressive intent. Moreover, previous studies emphasized that a fine-grained investigation of facial components could establish the coherence of facial actions with actual internal states. Therefore, this study aimed to accumulate evidence for the correspondence between spontaneous facial components and emotional experiences. We reinvestigated data from previous research which covertly recorded the spontaneous facial expressions of Japanese participants as they watched film clips designed to evoke four different target emotions: surprise, amusement, disgust, and sadness. The participants rated their emotional experiences via a self-report questionnaire of 16 emotions. These spontaneous facial expressions were coded using the Facial Action Coding System, the gold standard for classifying visible facial movements. We corroborated each facial action present in the emotional experiences by applying stepwise regression models. The results showed that spontaneous facial components occurred in ways that cohere with their evolutionary functions, based on the rating values of the emotional experiences (e.g., the inner brow raiser might be involved in the evaluation of novelty). This study provides new empirical evidence for the correspondence between each spontaneous facial component and first-person internal states of emotion as reported by the expresser. PMID:28522979

  5. Anatomy of emotion: a 3D study of facial mimicry.

    PubMed

    Ferrario, V F; Sforza, C

    2007-01-01

    Alterations in facial motion severely impair the quality of life and social interaction of patients, and an objective grading of facial function is necessary. A method for the non-invasive detection of 3D facial movements was developed. Sequences of six standardized facial movements (maximum smile; free smile; surprise with closed mouth; surprise with open mouth; right-side eye closure; left-side eye closure) were recorded in 20 healthy young adults (10 men, 10 women) using an optoelectronic motion analyzer. For each subject, 21 cutaneous landmarks were identified by 2-mm reflective markers, and their 3D movements during each facial animation were computed. Three repetitions of each expression were recorded (within-session error), and four separate sessions were used (between-session error). To assess the within-session error, the technical error of the measurement (random error, TEM) was computed separately for each sex, movement, and landmark. To assess the between-session repeatability, the standard deviation among the mean displacements of each landmark (four independent sessions) was computed for each movement. TEM for the single landmarks ranged between 0.3 and 9.42 mm (within-session error). The sex- and movement-related differences were statistically significant (two-way analysis of variance, p=0.003 for the sex comparison, p=0.009 for the six movements, p<0.001 for the sex × movement interaction). Among the four independent sessions, left eye closure had the worst repeatability and right eye closure the best; the differences among the various movements were statistically significant (one-way analysis of variance, p=0.041). In conclusion, the current protocol demonstrated sufficient repeatability for future clinical application. Great care should be taken to ensure consistent marker positioning in all subjects.
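
    The technical error of measurement used here is a standard quantity: for n paired repeats, TEM = sqrt(sum(d_i^2) / (2n)), where d_i is the difference between the two measurements of item i. A minimal sketch:

        # Sketch: technical error of measurement (TEM) for repeated landmark
        # displacement measurements: TEM = sqrt(sum(d_i^2) / (2n)).
        import numpy as np

        def tem(rep1, rep2):
            """rep1, rep2: repeated measurements of the same displacements (mm)."""
            d = np.asarray(rep1) - np.asarray(rep2)
            return np.sqrt(np.sum(d ** 2) / (2 * len(d)))

        print(tem([10.2, 8.1, 12.4], [9.8, 8.5, 12.0]))   # mm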

  6. The face of pain--a pilot study to validate the measurement of facial pain expression with an improved electromyogram method.

    PubMed

    Wolf, Karsten; Raedler, Thomas; Henke, Kai; Kiefer, Falk; Mass, Reinhard; Quante, Markus; Wiedemann, Klaus

    2005-01-01

    The purpose of this pilot study was to establish the validity of an improved facial electromyogram (EMG) method for the measurement of facial pain expression. Darwin defined pain in connection with fear as a simultaneous occurrence of eye staring, brow contraction, and teeth chattering. Prkachin was the first to use the video-based Facial Action Coding System to measure facial expressions while using four different types of pain triggers, identifying a group of facial muscles around the eyes. The activity of nine facial muscles in 10 healthy male subjects was analyzed. Pain was induced through a laser system with a randomized sequence of different intensities. Muscle activity was measured with a new, highly sensitive and selective facial EMG. The results identify two groups of muscles as key to pain expression, in concordance with Darwin's definition. As in Prkachin's findings, one muscle group is assembled around the orbicularis oculi muscle, initiating eye staring. The second group consists of the mentalis and depressor anguli oris muscles, which trigger mouth movements. The results demonstrate the validity of the facial EMG method for measuring facial pain expression. Further studies with psychometric measurements, a larger sample size, and a female test group should be conducted.

  7. Blend Shape Interpolation and FACS for Realistic Avatar

    NASA Astrophysics Data System (ADS)

    Alkawaz, Mohammed Hazim; Mohamad, Dzulkifli; Basori, Ahmad Hoirul; Saba, Tanzila

    2015-03-01

    The quest to develop realistic facial animation is ever-growing. The emergence of sophisticated algorithms, new graphical user interfaces, laser scans, and advanced 3D tools has imparted further impetus to the rapid advancement of complex virtual human facial models. With face-to-face communication being the most natural way of human interaction, facial animation systems have become more attractive in the information technology era for sundry applications. The production of computer-animated movies using synthetic actors remains a challenging issue. A facial expression carries the signature of happiness, sadness, anger, or cheerfulness; the mood of a particular person in the midst of a large group can immediately be identified via very subtle changes in facial expressions. Facial expressions, being a very complex and important nonverbal communication channel, are tricky to synthesize realistically using computer graphics. Computer synthesis of practical facial expressions must deal with the geometric representation of the human face and the control of the facial animation. We developed a new approach by integrating blend shape interpolation (BSI) and the Facial Action Coding System (FACS) to create a realistic and expressive computer facial animation design. The BSI is used to generate the natural face, while the FACS is employed to reflect the exact facial muscle movements for four basic emotional expressions, namely anger, happiness, sadness, and fear, with high fidelity. The resulting realistic facial expressions for virtual human emotions, based on facial skin color and texture, may contribute to the development of virtual reality and game environments in computer-aided graphics animation systems.
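
    The BSI component reduces to a weighted sum of target-shape offsets from a neutral mesh, with weights that could in turn be driven by FACS action-unit intensities. A minimal sketch with placeholder meshes:

        # Sketch: blend shape interpolation. A posed face is the neutral mesh
        # plus a weighted sum of (target - neutral) offsets; the weights could
        # be driven by FACS action-unit intensities.
        import numpy as np

        def blend(neutral, targets, weights):
            """neutral: (n_verts, 3); targets: dict name -> (n_verts, 3);
            weights: dict name -> float in [0, 1]."""
            face = neutral.copy()
            for name, w in weights.items():
                face += w * (targets[name] - neutral)
            return face

        neutral = np.zeros((1000, 3))                      # placeholder mesh
        targets = {"happy": np.full((1000, 3), 0.01),
                   "sad":   np.full((1000, 3), -0.01)}
        posed = blend(neutral, targets, {"happy": 0.7, "sad": 0.1})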

  8. Self-reported empathy and neural activity during action imitation and observation in schizophrenia

    PubMed Central

    Horan, William P.; Iacoboni, Marco; Cross, Katy A.; Korb, Alex; Lee, Junghee; Nori, Poorang; Quintana, Javier; Wynn, Jonathan K.; Green, Michael F.

    2014-01-01

    Introduction: Although social cognitive impairments are key determinants of functional outcome in schizophrenia, their neural bases are poorly understood. This study investigated neural activity during imitation and observation of finger movements and facial expressions in schizophrenia, and their correlates with self-reported empathy. Methods: Twenty-three schizophrenia outpatients and 23 healthy controls were studied with functional magnetic resonance imaging (fMRI) while they imitated, executed, or simply observed finger movements and facial emotional expressions. Between-group activation differences, as well as relationships between activation and self-reported empathy, were evaluated. Results: Both patients and controls similarly activated neural systems previously associated with these tasks. We found no significant between-group differences in task-related activations. There were, however, between-group differences in the correlation between self-reported empathy and right inferior frontal (pars opercularis) activity during observation of facial emotional expressions. As in previous studies, controls demonstrated a positive association between brain activity and empathy scores. In contrast, the pattern in the patient group reflected a negative association between brain activity and empathy. Conclusions: Although patients with schizophrenia demonstrated largely normal patterns of neural activation across the finger movement and facial expression tasks, they reported decreased self-perceived empathy and failed to show the typical relationship between neural activity and self-reported empathy seen in controls. These findings suggest a disjunction between automatic neural responses to low-level social cues and the higher-level, integrative social cognitive processes involved in self-perceived empathy. PMID:25009771

  10. Emotional Faces in Context: Age Differences in Recognition Accuracy and Scanning Patterns

    PubMed Central

    Noh, Soo Rim; Isaacowitz, Derek M.

    2014-01-01

    While age-related declines in facial expression recognition are well documented, previous research relied mostly on isolated faces devoid of context. We investigated the effects of context on age differences in the recognition of facial emotions and in visual scanning patterns of emotional faces. While their eye movements were monitored, younger and older participants viewed facial expressions (i.e., anger, disgust) in contexts that were emotionally congruent, incongruent, or neutral with respect to the facial expression to be identified. Both age groups had the highest recognition rates for facial expressions in the congruent context, followed by the neutral context, with the worst recognition rates in the incongruent context. These context effects were more pronounced for older adults. Compared to younger adults, older adults exhibited a greater benefit from congruent contextual information, regardless of facial expression. Context also influenced the pattern of visual scanning of emotional faces in a similar manner across age groups. In addition, older adults initially attended more to context overall. Our data highlight the importance of considering the role of context in understanding emotion recognition in adulthood. PMID:23163713

  12. Facial expressions of emotion are not culturally universal

    PubMed Central

    Jack, Rachael E.; Garrod, Oliver G. B.; Yu, Hui; Caldara, Roberto; Schyns, Philippe G.

    2012-01-01

    Since Darwin’s seminal works, the universality of facial expressions of emotion has remained one of the longest standing debates in the biological and social sciences. Briefly stated, the universality hypothesis claims that all humans communicate six basic internal emotional states (happy, surprise, fear, disgust, anger, and sad) using the same facial movements by virtue of their biological and evolutionary origins [Susskind JM, et al. (2008) Nat Neurosci 11:843–850]. Here, we refute this assumed universality. Using a unique computer graphics platform that combines generative grammars [Chomsky N (1965) MIT Press, Cambridge, MA] with visual perception, we accessed the mind’s eye of 30 Western and Eastern culture individuals and reconstructed their mental representations of the six basic facial expressions of emotion. Cross-cultural comparisons of the mental representations challenge universality on two separate counts. First, whereas Westerners represent each of the six basic emotions with a distinct set of facial movements common to the group, Easterners do not. Second, Easterners represent emotional intensity with distinctive dynamic eye activity. By refuting the long-standing universality hypothesis, our data highlight the powerful influence of culture on shaping basic behaviors once considered biologically hardwired. Consequently, our data open a unique nature–nurture debate across broad fields from evolutionary psychology and social neuroscience to social networking via digital avatars. PMID:22509011

  13. Contemporary solutions for the treatment of facial nerve paralysis.

    PubMed

    Garcia, Ryan M; Hadlock, Tessa A; Klebuc, Michael J; Simpson, Roger L; Zenn, Michael R; Marcus, Jeffrey R

    2015-06-01

    After reviewing this article, the participant should be able to: 1. Understand the most modern indications and technique for neurotization, including masseter-to-facial nerve transfer (fifth-to-seventh cranial nerve transfer). 2. Contrast the advantages and limitations associated with contiguous muscle transfers and free-muscle transfers for facial reanimation. 3. Understand the indications for a two-stage and one-stage free gracilis muscle transfer for facial reanimation. 4. Apply nonsurgical adjuvant treatments for acute facial nerve paralysis. Facial expression is a complex neuromotor and psychomotor process that is disrupted in patients with facial paralysis, breaking the link between emotion and physical expression. Contemporary reconstructive options are being implemented in patients with facial paralysis. While static procedures provide facial symmetry at rest, true 'facial reanimation' requires restoration of facial movement. Contemporary treatment options include neurotization procedures (a new motor nerve is used to restore innervation to a viable muscle), contiguous regional muscle transfer (most commonly temporalis muscle transfer), microsurgical free muscle transfer, and nonsurgical adjuvants used to balance facial symmetry. Each approach has advantages and disadvantages along with ongoing controversies and should be individualized for each patient. Treatments for patients with facial paralysis continue to evolve in order to restore the complex psychomotor process of facial expression.

  14. Assessment of Emotional Expressions after Full-Face Transplantation.

    PubMed

    Topçu, Çağdaş; Uysal, Hilmi; Özkan, Ömer; Özkan, Özlenen; Polat, Övünç; Bedeloğlu, Merve; Akgül, Arzu; Döğer, Ela Naz; Sever, Refik; Barçın, Nur Ebru; Tombak, Kadriye; Çolak, Ömer Halil

    2017-01-01

    We assessed clinical features as well as sensory and motor recoveries in 3 full-face transplantation patients. A frequency analysis was performed on facial surface electromyography data collected during 6 basic emotional expressions and 4 primary facial movements. Motor progress was assessed using the wavelet packet method by comparison against the mean results obtained from 10 healthy subjects. Analyses were conducted on 1 patient at approximately 1 year after face transplantation and at 2 years after transplantation in the remaining 2 patients. Motor recovery was observed following sensory recovery in all 3 patients; however, the 3 cases had different backgrounds and exhibited different degrees and rates of sensory and motor improvements after transplant. Wavelet packet energy was detected in all patients during emotional expressions and primary movements; however, there were fewer active channels during expressions in transplant patients compared to healthy individuals, and patterns of wavelet packet energy were different for each patient. Finally, high-frequency components were typically detected in patients during emotional expressions, but fewer channels demonstrated these high-frequency components in patients compared to healthy individuals. Our data suggest that the posttransplantation recovery of emotional facial expression requires neural plasticity.
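
    Per-band wavelet packet energy of a surface-EMG channel can be sketched with PyWavelets; the wavelet and decomposition level below are illustrative choices, not necessarily those used in the study.

        # Sketch: per-band wavelet packet energy of one facial surface-EMG
        # channel. Wavelet ("db4") and level (4) are illustrative.
        import numpy as np
        import pywt

        rng = np.random.default_rng(3)
        emg = rng.standard_normal(4096)             # placeholder EMG signal

        wp = pywt.WaveletPacket(data=emg, wavelet="db4", maxlevel=4)
        bands = wp.get_level(4, order="freq")       # 16 frequency bands
        energy = np.array([np.sum(np.square(node.data)) for node in bands])
        rel_energy = energy / energy.sum()          # relative band energy
        print(rel_energy.round(3))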

  15. Facial Expression Training Optimises Viewing Strategy in Children and Adults

    PubMed Central

    Pollux, Petra M. J.; Hall, Sophie; Guo, Kun

    2014-01-01

This study investigated whether training-related improvements in facial expression categorization are facilitated by spontaneous changes in gaze behaviour in adults and nine-year-old children. Four sessions of a self-paced, free-viewing training task required participants to categorize happy, sad and fear expressions with varying intensities. No instructions about eye movements were given. Eye movements were recorded in the first and fourth training sessions. New faces were introduced in session four to establish transfer effects of learning. Adults focused most on the eyes in all sessions, and the training-related increase in expression categorization accuracy coincided with a strengthening of this eye bias in gaze allocation. In children, training-related behavioural improvements coincided with an overall shift in gaze focus towards the eyes (resulting in more adult-like gaze distributions) and, for happy faces, towards the mouth in the second fixation. Gaze distributions were not influenced by expression intensity or by the introduction of new faces. It was proposed that training enhanced the use of a uniform, predominantly eye-biased, gaze strategy in children in order to optimise extraction of relevant cues for discrimination between subtle facial expressions. PMID:25144680

  16. Fixation Patterns of Chinese Participants while Identifying Facial Expressions on Chinese Faces

    PubMed Central

    Xia, Mu; Li, Xueliu; Zhong, Haiqing; Li, Hong

    2017-01-01

Two experiments in this study were designed to explore the fixation patterns of Chinese participants viewing four types of native facial expressions—happy, peaceful, sad, and angry. In both experiments, participants performed an emotion recognition task while their behaviour and eye movements were recorded. Experiment 1 (24 participants, 12 men) demonstrated that both fixation counts and fixation durations were lower for the upper part of the face than for the lower part for all four types of facial expression. Experiment 2 (20 participants, 6 men) replicated this finding and ruled out any influence of the initial fixation point. These results indicate that Chinese participants show a superiority effect for the lower part of the face while interpreting facial expressions, possibly due to the influence of eastern etiquette culture. PMID:28446896
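
    The upper/lower-face comparison reduces to assigning each fixation to an area of interest and accumulating counts and dwell times. A toy numpy sketch, with a hypothetical horizontal midline and made-up fixation records:

    ```python
    import numpy as np

    # Hypothetical fixations: (x, y, duration_ms) in normalized face-box
    # coordinates, y increasing downwards, eye-mouth midline assumed at y = 0.5.
    fixations = np.array([[0.48, 0.30, 210.0],
                          [0.52, 0.72, 340.0],
                          [0.45, 0.35, 180.0],
                          [0.55, 0.68, 260.0]])
    upper = fixations[:, 1] < 0.5              # above the midline = upper face

    print("fixation counts:", upper.sum(), (~upper).sum())
    print("dwell times (ms):", fixations[upper, 2].sum(), fixations[~upper, 2].sum())
    ```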

  17. [Changes in facial nerve function, morphology and neurotrophic factor III expression following three types of facial nerve injury].

    PubMed

    Zhang, Lili; Wang, Haibo; Fan, Zhaomin; Han, Yuechen; Xu, Lei; Zhang, Haiyan

    2011-01-01

To study the changes in facial nerve function, morphology and neurotrophic factor III (NT-3) expression following three types of facial nerve injury. Changes in facial nerve function (in terms of blink reflex (BF), vibrissae movement (VM) and position of the nasal tip) were assessed in 45 rats in response to three types of facial nerve injury: partial section of the extratemporal segment (group 1), partial section of the facial canal segment (group 2) and complete transection of the facial canal segment (group 3). All facial nerve specimens were then cut into two parts at the site of the lesion after being harvested on the 1st, 7th and 21st post-surgery days (PSD). Changes in morphology and NT-3 expression were evaluated using the improved trichrome stain and immunohistochemistry techniques, respectively. Changes in facial nerve function: In group 1, all animals had no blink reflex (BF) and weak vibrissae movement (VM) at the 1st PSD; the blink reflex recovered partly in 80% of the rats and the vibrissae movement returned to normal in 40% of the rats at the 7th PSD; facial nerve function was almost normal in 60% of the rats at the 21st PSD. In group 2, all left facial nerves were paralyzed at the 1st PSD; the blink reflex had partly recovered in 40% of the rats and the vibrissae movement was weak in 80% of the rats at the 7th PSD; the BF of 80% of the rats was almost normal and the VM of 40% of the rats had completely recovered at the 21st PSD. In group 3, no recovery occurred at any time point. Changes in morphology: In group 1, nerve fiber size varied within the facial canal segment and some myelin sheaths and axons had degenerated at the 7th PSD; fiber degeneration had turned into regeneration by the 21st PSD. In group 2, the morphologic changes were similar to those in group 1, although the degenerated fibers were more numerous and more dispersed at the transection site at the 7th PSD; regeneration of nerve fibers occurred by the 21st PSD. In group 3, most of the fibers had crumbled at the 7th PSD and no regeneration was seen at the 21st PSD. Changes in NT-3: Positive staining of NT-3 was largely observed in axons at the 7th PSD, although little NT-3 was seen in normal fibers. Facial palsy in group 2 was more extensive than in group 1, and function partly recovered by the 21st PSD. Fiber degeneration was not only dispersed throughout the injury site but also occurred along the length of the nerve. NT-3 immunoreactivity increased in activated fibers after partial transection.

  18. Hemispheric mechanisms controlling voluntary and spontaneous facial expressions.

    PubMed

    Gazzaniga, M S; Smylie, C S

    1990-01-01

The capacity of each disconnected cerebral hemisphere to control a variety of facial postures was examined in three split-brain patients. The dynamics of facial posturing were analyzed in 30-msec optical disc frames that were generated off videotape recordings of each patient's response to lateralized stimuli. The results revealed that commands presented to the left hemisphere effecting postures of the lower facial muscles showed a marked asymmetry, with the right side of the face sometimes responding up to 180 msec before the left side of the face. Commands presented to the right hemisphere elicited a response only if the posture involved moving the upper facial muscles. Spontaneous postures filmed during free conversation were symmetrical. The results suggest that while either hemisphere can generate spontaneous facial expressions, only the left hemisphere is efficient at generating voluntary expressions. This contrasts sharply with the fact that both hemispheres can carry out a wide variety of other voluntary movements with the hand and foot.

  19. Gently does it: Humans outperform a software classifier in recognizing subtle, nonstereotypical facial expressions.

    PubMed

    Yitzhak, Neta; Giladi, Nir; Gurevich, Tanya; Messinger, Daniel S; Prince, Emily B; Martin, Katherine; Aviezer, Hillel

    2017-12-01

    According to dominant theories of affect, humans innately and universally express a set of emotions using specific configurations of prototypical facial activity. Accordingly, thousands of studies have tested emotion recognition using sets of highly intense and stereotypical facial expressions, yet their incidence in real life is virtually unknown. In fact, a commonplace experience is that emotions are expressed in subtle and nonprototypical forms. Such facial expressions are at the focus of the current study. In Experiment 1, we present the development and validation of a novel stimulus set consisting of dynamic and subtle emotional facial displays conveyed without constraining expressers to using prototypical configurations. Although these subtle expressions were more challenging to recognize than prototypical dynamic expressions, they were still well recognized by human raters, and perhaps most importantly, they were rated as more ecological and naturalistic than the prototypical expressions. In Experiment 2, we examined the characteristics of subtle versus prototypical expressions by subjecting them to a software classifier, which used prototypical basic emotion criteria. Although the software was highly successful at classifying prototypical expressions, it performed very poorly at classifying the subtle expressions. Further validation was obtained from human expert face coders: Subtle stimuli did not contain many of the key facial movements present in prototypical expressions. Together, these findings suggest that emotions may be successfully conveyed to human viewers using subtle nonprototypical expressions. Although classic prototypical facial expressions are well recognized, they appear less naturalistic and may not capture the richness of everyday emotional communication. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  20. Importance of the brow in facial expressiveness during human communication.

    PubMed

    Neely, John Gail; Lisker, Paul; Drapekin, Jesse

    2014-03-01

The objective of this study was to evaluate laterality and upper/lower face dominance of expressiveness during prescribed speech using a unique validated image subtraction system capable of sensitive and reliable measurement of facial surface deformation. Observations and experiments on central control of facial expressions during speech and social utterances in humans and animals suggest that the right mouth moves more than the left during nonemotional speech. However, proficient lip readers seem to attend to the whole face to interpret meaning from expressed facial cues, also implicating a horizontal (upper face-lower face) axis. Prospective experimental design. Experimental maneuver: recited speech. Outcome measure: image-subtraction strength-duration curve amplitude. Thirty normal human adults were evaluated during memorized nonemotional recitation of 2 short sentences. Facial movements were assessed using a video-image subtraction system capable of simultaneously measuring upper and lower specific areas of each hemiface. The results demonstrate that both axes influence facial expressiveness in human communication; however, the horizontal axis (upper versus lower face) appears dominant, especially during what would appear to be spontaneous breakthrough unplanned expressiveness. These data are congruent with the concept that the left cerebral hemisphere has control over nonemotionally stimulated speech; however, the multisynaptic brainstem extrapyramidal pathways may override hemiface laterality and preferentially take control of the upper face. Additionally, these data demonstrate the importance of the often-ignored brow in facial expressiveness. Experimental study. EBM levels not applicable.
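
    Image subtraction of this kind can be approximated by differencing each video frame against a rest baseline inside per-hemiface regions of interest; the validated system then tracks these amplitudes over time as strength-duration curves. A simplified numpy sketch with synthetic frames and assumed ROI boxes:

    ```python
    import numpy as np

    def deformation(frame, baseline, roi):
        """Sum of absolute grey-level change inside one region of interest."""
        r0, r1, c0, c1 = roi
        return np.abs(frame[r0:r1, c0:c1] - baseline[r0:r1, c0:c1]).sum()

    rng = np.random.default_rng(2)
    rest = rng.integers(0, 200, (480, 640)).astype(float)   # baseline (rest) frame
    speaking = rest.copy()
    speaking[300:350, 200:300] += 20.0                      # simulated lower-face movement

    rois = {"upper_left": (0, 240, 0, 320), "lower_left": (240, 480, 0, 320)}
    print({name: deformation(speaking, rest, box) for name, box in rois.items()})
    ```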

  1. The improvement of movement and speech during rapid eye movement sleep behaviour disorder in multiple system atrophy.

    PubMed

    De Cock, Valérie Cochen; Debs, Rachel; Oudiette, Delphine; Leu, Smaranda; Radji, Fatai; Tiberge, Michel; Yu, Huan; Bayard, Sophie; Roze, Emmanuel; Vidailhet, Marie; Dauvilliers, Yves; Rascol, Olivier; Arnulf, Isabelle

    2011-03-01

    Multiple system atrophy is an atypical parkinsonism characterized by severe motor disabilities that are poorly levodopa responsive. Most patients develop rapid eye movement sleep behaviour disorder. Because parkinsonism is absent during rapid eye movement sleep behaviour disorder in patients with Parkinson's disease, we studied the movements of patients with multiple system atrophy during rapid eye movement sleep. Forty-nine non-demented patients with multiple system atrophy and 49 patients with idiopathic Parkinson's disease were interviewed along with their 98 bed partners using a structured questionnaire. They rated the quality of movements, vocal and facial expressions during rapid eye movement sleep behaviour disorder as better than, equal to or worse than the same activities in an awake state. Sleep and movements were monitored using video-polysomnography in 22/49 patients with multiple system atrophy and in 19/49 patients with Parkinson's disease. These recordings were analysed for the presence of parkinsonism and cerebellar syndrome during rapid eye movement sleep movements. Clinical rapid eye movement sleep behaviour disorder was observed in 43/49 (88%) patients with multiple system atrophy. Reports from the 31/43 bed partners who were able to evaluate movements during sleep indicate that 81% of the patients showed some form of improvement during rapid eye movement sleep behaviour disorder. These included improved movement (73% of patients: faster, 67%; stronger, 52%; and smoother, 26%), improved speech (59% of patients: louder, 55%; more intelligible, 17%; and better articulated, 36%) and normalized facial expression (50% of patients). The rate of improvement was higher in Parkinson's disease than in multiple system atrophy, but no further difference was observed between the two forms of multiple system atrophy (predominant parkinsonism versus cerebellar syndrome). Video-monitored movements during rapid eye movement sleep in patients with multiple system atrophy revealed more expressive faces, and movements that were faster and more ample in comparison with facial expression and movements during wakefulness. These movements were still somewhat jerky but lacked any visible parkinsonism. Cerebellar signs were not assessable. We conclude that parkinsonism also disappears during rapid eye movement sleep behaviour disorder in patients with multiple system atrophy, but this improvement is not due to enhanced dopamine transmission because these patients are not levodopa-sensitive. These data suggest that these movements are not influenced by extrapyramidal regions; however, the influence of abnormal cerebellar control remains unclear. The transient disappearance of parkinsonism here is all the more surprising since no treatment (even dopaminergic) provides a real benefit in this disabling disease.

  2. Regional Brain Responses Are Biased Toward Infant Facial Expressions Compared to Adult Facial Expressions in Nulliparous Women.

    PubMed

    Li, Bingbing; Cheng, Gang; Zhang, Dajun; Wei, Dongtao; Qiao, Lei; Wang, Xiangpeng; Che, Xianwei

    2016-01-01

    Recent neuroimaging studies suggest that neutral infant faces compared to neutral adult faces elicit greater activity in brain areas associated with face processing, attention, empathic response, reward, and movement. However, whether infant facial expressions evoke larger brain responses than adult facial expressions remains unclear. Here, we performed event-related functional magnetic resonance imaging in nulliparous women while they were presented with images of matched unfamiliar infant and adult facial expressions (happy, neutral, and uncomfortable/sad) in a pseudo-randomized order. We found that the bilateral fusiform and right lingual gyrus were overall more activated during the presentation of infant facial expressions compared to adult facial expressions. Uncomfortable infant faces compared to sad adult faces evoked greater activation in the bilateral fusiform gyrus, precentral gyrus, postcentral gyrus, posterior cingulate cortex-thalamus, and precuneus. Neutral infant faces activated larger brain responses in the left fusiform gyrus compared to neutral adult faces. Happy infant faces compared to happy adult faces elicited larger responses in areas of the brain associated with emotion and reward processing using a more liberal threshold of p < 0.005 uncorrected. Furthermore, the level of the test subjects' Interest-In-Infants was positively associated with the intensity of right fusiform gyrus response to infant faces and uncomfortable infant faces compared to sad adult faces. In addition, the Perspective Taking subscale score on the Interpersonal Reactivity Index-Chinese was significantly correlated with precuneus activity during uncomfortable infant faces compared to sad adult faces. Our findings suggest that regional brain areas may bias cognitive and emotional responses to infant facial expressions compared to adult facial expressions among nulliparous women, and this bias may be modulated by individual differences in Interest-In-Infants and perspective taking ability.

  3. Regional Brain Responses Are Biased Toward Infant Facial Expressions Compared to Adult Facial Expressions in Nulliparous Women

    PubMed Central

    Zhang, Dajun; Wei, Dongtao; Qiao, Lei; Wang, Xiangpeng; Che, Xianwei

    2016-01-01

    Recent neuroimaging studies suggest that neutral infant faces compared to neutral adult faces elicit greater activity in brain areas associated with face processing, attention, empathic response, reward, and movement. However, whether infant facial expressions evoke larger brain responses than adult facial expressions remains unclear. Here, we performed event-related functional magnetic resonance imaging in nulliparous women while they were presented with images of matched unfamiliar infant and adult facial expressions (happy, neutral, and uncomfortable/sad) in a pseudo-randomized order. We found that the bilateral fusiform and right lingual gyrus were overall more activated during the presentation of infant facial expressions compared to adult facial expressions. Uncomfortable infant faces compared to sad adult faces evoked greater activation in the bilateral fusiform gyrus, precentral gyrus, postcentral gyrus, posterior cingulate cortex-thalamus, and precuneus. Neutral infant faces activated larger brain responses in the left fusiform gyrus compared to neutral adult faces. Happy infant faces compared to happy adult faces elicited larger responses in areas of the brain associated with emotion and reward processing using a more liberal threshold of p < 0.005 uncorrected. Furthermore, the level of the test subjects’ Interest-In-Infants was positively associated with the intensity of right fusiform gyrus response to infant faces and uncomfortable infant faces compared to sad adult faces. In addition, the Perspective Taking subscale score on the Interpersonal Reactivity Index-Chinese was significantly correlated with precuneus activity during uncomfortable infant faces compared to sad adult faces. Our findings suggest that regional brain areas may bias cognitive and emotional responses to infant facial expressions compared to adult facial expressions among nulliparous women, and this bias may be modulated by individual differences in Interest-In-Infants and perspective taking ability. PMID:27977692

  4. The role of visual experience in the production of emotional facial expressions by blind people: a review.

    PubMed

    Valente, Dannyelle; Theurel, Anne; Gentaz, Edouard

    2018-04-01

    Facial expressions of emotion are nonverbal behaviors that allow us to interact efficiently in social life and respond to events affecting our welfare. This article reviews 21 studies, published between 1932 and 2015, examining the production of facial expressions of emotion by blind people. It particularly discusses the impact of visual experience on the development of this behavior from birth to adulthood. After a discussion of three methodological considerations, the review of studies reveals that blind subjects demonstrate differing capacities for producing spontaneous expressions and voluntarily posed expressions. Seventeen studies provided evidence that blind and sighted spontaneously produce the same pattern of facial expressions, even if some variations can be found, reflecting facial and body movements specific to blindness or differences in intensity and control of emotions in some specific contexts. This suggests that lack of visual experience seems to not have a major impact when this behavior is generated spontaneously in real emotional contexts. In contrast, eight studies examining voluntary expressions indicate that blind individuals have difficulty posing emotional expressions. The opportunity for prior visual observation seems to affect performance in this case. Finally, we discuss three new directions for research to provide additional and strong evidence for the debate regarding the innate or the culture-constant learning character of the production of emotional facial expressions by blind individuals: the link between perception and production of facial expressions, the impact of display rules in the absence of vision, and the role of other channels in expression of emotions in the context of blindness.

  5. Facial expression: An under-utilised tool for the assessment of welfare in mammals.

    PubMed

    Descovich, Kris A; Wathan, Jennifer; Leach, Matthew C; Buchanan-Smith, Hannah M; Flecknell, Paul; Farningham, David; Vick, Sarah-Jane

    2017-01-01

Animal welfare is a key issue for industries that use or impact upon animals. The accurate identification of welfare states is particularly relevant to the field of bioscience, where the 3Rs framework encourages refinement of experimental procedures involving animal models. The assessment and improvement of welfare states in animals depends on reliable and valid measurement tools. Behavioral measures (activity, attention, posture and vocalization) are frequently used because they are immediate and non-invasive; however, no single indicator can yield a complete picture of the internal state of an animal. Facial expressions are extensively studied in humans as a measure of psychological and emotional experiences but are infrequently used in animal studies, with the exception of emerging research on pain behavior. In this review, we discuss current evidence for facial representations of underlying affective states, and how communicative or functional expressions can be useful within welfare assessments. Validated tools for measuring facial movement are outlined, and the potential of expressions as honest signals is discussed, alongside other challenges and limitations to facial expression measurement within the context of animal welfare. We conclude that facial expression determination in animals is a useful but underutilized measure that complements existing tools in the assessment of welfare.

  6. Facial color is an efficient mechanism to visually transmit emotion

    PubMed Central

    Benitez-Quiroz, Carlos F.; Srinivasan, Ramprakash

    2018-01-01

    Facial expressions of emotion in humans are believed to be produced by contracting one’s facial muscles, generally called action units. However, the surface of the face is also innervated with a large network of blood vessels. Blood flow variations in these vessels yield visible color changes on the face. Here, we study the hypothesis that these visible facial colors allow observers to successfully transmit and visually interpret emotion even in the absence of facial muscle activation. To study this hypothesis, we address the following two questions. Are observable facial colors consistent within and differential between emotion categories and positive vs. negative valence? And does the human visual system use these facial colors to decode emotion from faces? These questions suggest the existence of an important, unexplored mechanism of the production of facial expressions of emotion by a sender and their visual interpretation by an observer. The results of our studies provide evidence in favor of our hypothesis. We show that people successfully decode emotion using these color features, even in the absence of any facial muscle activation. We also demonstrate that this color signal is independent from that provided by facial muscle movements. These results support a revised model of the production and perception of facial expressions of emotion where facial color is an effective mechanism to visually transmit and decode emotion. PMID:29555780

  7. Facial color is an efficient mechanism to visually transmit emotion.

    PubMed

    Benitez-Quiroz, Carlos F; Srinivasan, Ramprakash; Martinez, Aleix M

    2018-04-03

    Facial expressions of emotion in humans are believed to be produced by contracting one's facial muscles, generally called action units. However, the surface of the face is also innervated with a large network of blood vessels. Blood flow variations in these vessels yield visible color changes on the face. Here, we study the hypothesis that these visible facial colors allow observers to successfully transmit and visually interpret emotion even in the absence of facial muscle activation. To study this hypothesis, we address the following two questions. Are observable facial colors consistent within and differential between emotion categories and positive vs. negative valence? And does the human visual system use these facial colors to decode emotion from faces? These questions suggest the existence of an important, unexplored mechanism of the production of facial expressions of emotion by a sender and their visual interpretation by an observer. The results of our studies provide evidence in favor of our hypothesis. We show that people successfully decode emotion using these color features, even in the absence of any facial muscle activation. We also demonstrate that this color signal is independent from that provided by facial muscle movements. These results support a revised model of the production and perception of facial expressions of emotion where facial color is an effective mechanism to visually transmit and decode emotion. Copyright © 2018 the Author(s). Published by PNAS.
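
    The core decoding claim, that colour alone carries emotion information, is the kind of result one can sanity-check with a linear classifier over per-region colour features. A hedged scikit-learn sketch on random stand-in data; the feature layout and classifier here are assumptions, not the authors' method.

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    # Hypothetical features: mean (L*, a*, b*) colour of 20 face regions per image.
    rng = np.random.default_rng(3)
    X = rng.standard_normal((600, 20 * 3))
    y = rng.integers(0, 6, 600)                # six emotion categories

    # Cross-validated accuracy from colour alone; chance is ~1/6 here, so real
    # data would need to score reliably above that to support the hypothesis.
    print(cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5).mean())
    ```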

  8. Deep facial analysis: A new phase I epilepsy evaluation using computer vision.

    PubMed

    Ahmedt-Aristizabal, David; Fookes, Clinton; Nguyen, Kien; Denman, Simon; Sridharan, Sridha; Dionisio, Sasha

    2018-05-01

    Semiology observation and characterization play a major role in the presurgical evaluation of epilepsy. However, the interpretation of patient movements has subjective and intrinsic challenges. In this paper, we develop approaches to attempt to automatically extract and classify semiological patterns from facial expressions. We address limitations of existing computer-based analytical approaches of epilepsy monitoring, where facial movements have largely been ignored. This is an area that has seen limited advances in the literature. Inspired by recent advances in deep learning, we propose two deep learning models, landmark-based and region-based, to quantitatively identify changes in facial semiology in patients with mesial temporal lobe epilepsy (MTLE) from spontaneous expressions during phase I monitoring. A dataset has been collected from the Mater Advanced Epilepsy Unit (Brisbane, Australia) and is used to evaluate our proposed approach. Our experiments show that a landmark-based approach achieves promising results in analyzing facial semiology, where movements can be effectively marked and tracked when there is a frontal face on visualization. However, the region-based counterpart with spatiotemporal features achieves more accurate results when confronted with extreme head positions. A multifold cross-validation of the region-based approach exhibited an average test accuracy of 95.19% and an average AUC of 0.98 of the ROC curve. Conversely, a leave-one-subject-out cross-validation scheme for the same approach reveals a reduction in accuracy for the model as it is affected by data limitations and achieves an average test accuracy of 50.85%. Overall, the proposed deep learning models have shown promise in quantifying ictal facial movements in patients with MTLE. In turn, this may serve to enhance the automated presurgical epilepsy evaluation by allowing for standardization, mitigating bias, and assessing key features. The computer-aided diagnosis may help to support clinical decision-making and prevent erroneous localization and surgery. Copyright © 2018 Elsevier Inc. All rights reserved.
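
    The reported gap between multifold and leave-one-subject-out accuracy reflects whether segments from one patient can appear in both training and test sets. Grouped cross-validation makes the distinction explicit; a sketch with synthetic data, where the classifier and feature dimensions are placeholders:

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

    rng = np.random.default_rng(4)
    X = rng.standard_normal((200, 64))    # stand-in features from facial landmarks
    y = rng.integers(0, 2, 200)           # segment label (e.g., ictal vs non-ictal)
    subjects = rng.integers(0, 10, 200)   # which patient each segment came from

    # Leave-one-subject-out holds out every segment of one patient per fold,
    # giving a more realistic estimate of generalization to unseen patients.
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    scores = cross_val_score(clf, X, y, groups=subjects, cv=LeaveOneGroupOut())
    print(scores.mean())
    ```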

  9. Let's Start the Music.

    ERIC Educational Resources Information Center

    Ivankovic, Patricia; Gilpatrick, Ingrid

    1994-01-01

    A preschool/kindergarten program that serves 18 students with deafness/hearing impairments and 11 hearing children uses music activities to develop speech, audition, language, and movement and to further explore current classroom themes. A deaf teacher models body movements, signs, rhythm, and facial expressions while a hearing teacher models…

  10. Dielectric elastomer actuators for facial expression

    NASA Astrophysics Data System (ADS)

    Wang, Yuzhe; Zhu, Jian

    2016-04-01

    Dielectric elastomer actuators have the advantage of mimicking the salient feature of life: movements in response to stimuli. In this paper we explore application of dielectric elastomer actuators to artificial muscles. These artificial muscles can mimic natural masseter to control jaw movements, which are key components in facial expressions especially during talking and singing activities. This paper investigates optimal design of the dielectric elastomer actuator. It is found that the actuator with embedded plastic fibers can avert electromechanical instability and can greatly improve its actuation. Two actuators are then installed in a robotic skull to drive jaw movements, mimicking the masseters in a human jaw. Experiments show that the maximum vertical displacement of the robotic jaw, driven by artificial muscles, is comparable to that of the natural human jaw during speech activities. Theoretical simulations are conducted to analyze the performance of the actuator, which is quantitatively consistent with the experimental observations.
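
    The actuation principle rests on the equivalent electrostatic (Maxwell) pressure that squeezes the elastomer film when a voltage is applied across it. A back-of-the-envelope estimate in Python with illustrative values, not the paper's parameters:

    ```python
    # Maxwell pressure on a dielectric elastomer: p = eps0 * eps_r * (V / d)^2
    eps0 = 8.854e-12     # vacuum permittivity, F/m
    eps_r = 4.7          # typical relative permittivity of an acrylic elastomer
    V = 3000.0           # applied voltage, V (illustrative)
    d = 100e-6           # film thickness, m (illustrative)

    p = eps0 * eps_r * (V / d) ** 2
    print(f"actuation pressure ~ {p / 1e3:.1f} kPa")   # ~37 kPa for these values
    ```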

  11. What's behind the mask? A look at blood flow changes with prolonged facial pressure and expression using laser Doppler imaging.

    PubMed

    Van-Buendia, Lan B; Allely, Rebekah R; Lassiter, Ronald; Weinand, Christian; Jordan, Marion H; Jeng, James C

    2010-01-01

Clinically, the initial blanching in burn scar seen on transparent plastic face mask application seems to diminish with time and movement, requiring mask alteration. To date, studies quantifying perfusion with prolonged mask use do not exist. This study used laser Doppler imaging (LDI) to assess perfusion through the transparent face mask and movement in subjects with and without burn over time. Five subjects fitted with transparent face masks were scanned with the LDI on four occasions. The four subjects without burn were scanned in the following manner: 1) no mask, 2) mask on while at rest, 3) mask on with alternating intervals of sustained facial expression and rest, and 4) after mask removal. Images were acquired every 3 minutes throughout the 85-minute study period. The subject with burn underwent a shortened scanning protocol to increase comfort. Each face was divided into five regions of interest for analysis. Compared with baseline, mask application decreased perfusion significantly in all subjects (P < .0001). Perfusion did not change during the rest period. There were no significant differences with changing facial expression in any of the regions of interest. On mask removal, all regions of the face demonstrated a hyperemic effect, with the chin (P = .05) and each cheek (P < .0001) reaching statistical significance. Perfusion levels did not return to baseline in the chin and cheeks after 30 minutes of mask removal. Perfusion remained consistently low while wearing the face mask, despite changing facial expressions. Changing facial expressions with the mask on did not alter perfusion. A hyperemic response occurred on removal of the mask. This study exposed methodology and statistical issues worth considering when conducting future research with the face, pressure therapy, and LDI technology.

  12. Enhancing facial aesthetics with muscle retraining exercises-a review.

    PubMed

    D'souza, Raina; Kini, Ashwini; D'souza, Henston; Shetty, Nitin; Shetty, Omkar

    2014-08-01

Facial attractiveness plays a key role in social interaction. 'Smile' is not only a single category of facial behaviour, but also the emotion of frank joy which is expressed on the face by the combined contraction of the muscles involved. When a patient visits the dental clinic for aesthetic reasons, the dentist considers not only the chief complaint but also the overall harmony of the face. This article describes muscle retraining exercises to achieve control over facial movements and improve facial appearance which may be considered following any type of dental rehabilitation. Muscle conditioning, training and strengthening through daily exercises will help to counterbalance the aging effects.

  13. Measurement of facial movements with Photoshop software during treatment of facial nerve palsy*

    PubMed Central

    Pourmomeny, Abbas Ali; Zadmehr, Hassan; Hossaini, Mohsen

    2011-01-01

BACKGROUND: Evaluating the function of the facial nerve is essential in order to determine the influence of various treatment methods. The aim of this study was to evaluate and assess the agreement of a Photoshop scaling system versus the facial grading system (FGS). METHODS: In this semi-experimental study, thirty subjects with facial nerve paralysis were recruited. The evaluation of all patients before and after the treatment was performed by FGS and Photoshop measurements. RESULTS: The mean values of FGS before and after the treatment were 35 ± 25 and 67 ± 24, respectively (p < 0.001). In the Photoshop assessment, mean changes of facial expressions on the impaired side relative to the normal side, in the rest position and three main movements of the face, were 3.4 ± 0.55 and 4.04 ± 0.49 millimeters before and after the treatment, respectively (p < 0.001). Spearman's correlation coefficient between the values in the two methods was 0.66 (p < 0.001). CONCLUSIONS: Evaluating facial nerve palsy using Photoshop was more objective than using the FGS. Therefore, it may be recommended to use this method instead. PMID:22973325

  14. Measurement of facial movements with Photoshop software during treatment of facial nerve palsy.

    PubMed

    Pourmomeny, Abbas Ali; Zadmehr, Hassan; Hossaini, Mohsen

    2011-10-01

Evaluating the function of the facial nerve is essential in order to determine the influence of various treatment methods. The aim of this study was to evaluate and assess the agreement of a Photoshop scaling system versus the facial grading system (FGS). In this semi-experimental study, thirty subjects with facial nerve paralysis were recruited. The evaluation of all patients before and after the treatment was performed by FGS and Photoshop measurements. The mean values of FGS before and after the treatment were 35 ± 25 and 67 ± 24, respectively (p < 0.001). In the Photoshop assessment, mean changes of facial expressions on the impaired side relative to the normal side, in the rest position and three main movements of the face, were 3.4 ± 0.55 and 4.04 ± 0.49 millimeters before and after the treatment, respectively (p < 0.001). Spearman's correlation coefficient between the values in the two methods was 0.66 (p < 0.001). Evaluating facial nerve palsy using Photoshop was more objective than using the FGS. Therefore, it may be recommended to use this method instead.
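
    The agreement analysis between the two grading methods comes down to a rank correlation over paired patient scores. A scipy sketch with invented numbers (the study itself reported rho = 0.66):

    ```python
    import numpy as np
    from scipy import stats

    # Hypothetical post-treatment scores for the same eight patients.
    fgs = np.array([70, 55, 88, 62, 75, 45, 90, 68])                    # FGS score
    photoshop_mm = np.array([4.1, 3.5, 4.6, 3.8, 4.2, 3.2, 4.8, 3.9])   # displacement, mm

    rho, p = stats.spearmanr(fgs, photoshop_mm)
    print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
    ```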

  15. Gelotophobia and the Challenges of Implementing Laughter into Virtual Agents Interactions

    PubMed Central

    Ruch, Willibald F.; Platt, Tracey; Hofmann, Jennifer; Niewiadomski, Radosław; Urbain, Jérôme; Mancini, Maurizio; Dupont, Stéphane

    2014-01-01

This study investigated which features of AVATAR laughter are perceived as threatening by individuals with a fear of being laughed at (gelotophobia) and by individuals without gelotophobia. Laughter samples were systematically varied (e.g., pitch, intensity and energy for the voice; intensity of facial actions for the face) in three modalities: animated facial expressions, synthesized auditory laughter vocalizations, and motion-capture-generated puppets displaying laughter body movements. In the online study, 123 adults completed the GELOPH<15> (Ruch and Proyer, 2008a,b) and rated randomly presented videos in the three modalities for how malicious, how friendly and how real the laughter was (0 not at all to 8 extremely). Additionally, an open question asked which markers led to the perception of friendliness/maliciousness. The current study identified features in all modalities of laughter stimuli that were perceived as malicious in general, and some that were gelotophobia-specific. For facial expressions of AVATARS, medium-intensity laughs triggered the highest maliciousness ratings in gelotophobes. In the auditory stimuli, fundamental frequency modulations and variation in intensity were indicative of maliciousness. In the body, backwards versus forward movements and rocking versus jerking movements distinguished the most malicious from the least malicious laugh. From the open answers, the shape and curling of the lips made the expression appear malicious to non-gelotophobes, whereas movement around the eyes made the face appear friendly; for gelotophobes this pattern was reversed. Gelotophobia-savvy AVATAR laughter should be of high intensity, contain lip and eye movements, and be a fast, non-repetitive, voiced, variable vocalization of short duration. It should not contain any features that indicate down-regulation in the voice or body, or that indicate voluntary/cognitive modulation. PMID:25477803

  16. [Lengthening temporalis myoplasty: A new approach to facial rehabilitation with the "mirror-effect" method].

    PubMed

    Blanchin, T; Martin, F; Labbe, D

    2013-12-01

Peripheral facial paralysis often reveals two conditions that are hard to control: labial occlusion and palpebral closure. Today, there are efforts to go beyond the sole use of muscle stimulation techniques, and attention is being given to cerebral plasticity stimulation. This implies using the facial nerve's efferent pathway as the afferent pathway in rehabilitation. This technique could further help limit the two recalcitrant problems above. We matched two groups of patients who underwent surgery for peripheral facial paralysis by lengthening temporalis myoplasty (LTM). LTM is one of the best ways to examine cerebral plasticity. The trigeminal nerve is a mixed nerve, both motor and sensory. After LTM, patients have to use the trigeminal nerve differently, as it now has a direct role in generating the smile. The LTM approach, using the efferent pathway, therefore creates a challenge for the brain. The two groups followed separate therapies called "classical" and "mirror-effect". The "mirror-effect" method gave a more precise orientation of the patient's cerebral plasticity than did classical rehabilitation. The method develops two axes: voluntary movements, which patients need to control their temporal smile; and spontaneous movements, needed for facial expressions. Work on voluntary movements is done before a "digital mirror", using an identical doubled hemiface, providing the patient with a fake copy of his face and, thus, a "mirror-effect". The spontaneous movements work is based on what we call the "Therapy of Motor Emotions". The method presented here is used to treat facial paralysis (Bell's palsy type), whether requiring surgery or not. Importantly, the facial nerve, like the trigeminal nerve above, is also a mixed nerve and is stimulated through the efferent pathway in the same manner. Copyright © 2013 Elsevier Masson SAS. All rights reserved.

  17. Appraisals Generate Specific Configurations of Facial Muscle Movements in a Gambling Task: Evidence for the Component Process Model of Emotion.

    PubMed

    Gentsch, Kornelia; Grandjean, Didier; Scherer, Klaus R

    2015-01-01

    Scherer's Component Process Model provides a theoretical framework for research on the production mechanism of emotion and facial emotional expression. The model predicts that appraisal results drive facial expressions, which unfold sequentially and cumulatively over time. In two experiments, we examined facial muscle activity changes (via facial electromyography recordings over the corrugator, cheek, and frontalis regions) in response to events in a gambling task. These events were experimentally manipulated feedback stimuli which presented simultaneous information directly affecting goal conduciveness (gambling outcome: win, loss, or break-even) and power appraisals (Experiment 1 and 2), as well as control appraisal (Experiment 2). We repeatedly found main effects of goal conduciveness (starting ~600 ms), and power appraisals (starting ~800 ms after feedback onset). Control appraisal main effects were inconclusive. Interaction effects of goal conduciveness and power appraisals were obtained in both experiments (Experiment 1: over the corrugator and cheek regions; Experiment 2: over the frontalis region) suggesting amplified goal conduciveness effects when power was high in contrast to invariant goal conduciveness effects when power was low. Also an interaction of goal conduciveness and control appraisals was found over the cheek region, showing differential goal conduciveness effects when control was high and invariant effects when control was low. These interaction effects suggest that the appraisal of having sufficient control or power affects facial responses towards gambling outcomes. The result pattern suggests that corrugator and frontalis regions are primarily related to cognitive operations that process motivational pertinence, whereas the cheek region would be more influenced by coping implications. Our results provide first evidence demonstrating that cognitive-evaluative mechanisms related to goal conduciveness, control, and power appraisals affect facial expressions dynamically over time, immediately after an event is perceived. In addition, our results provide further indications for the chronography of appraisal-driven facial movements and the underlying cognitive processes.
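
    The time course of appraisal effects can be probed by epoching the EMG to feedback onset and testing conditions sample by sample. A simplified numpy/scipy sketch with simulated corrugator data and an effect injected at 600 ms; the strict threshold stands in for the proper multiple-comparison control a real analysis would use:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)
    fs = 1000                                  # sampling rate, Hz
    n_trials, n_samples = 80, 1500             # 1.5 s epochs from feedback onset

    # Hypothetical rectified corrugator EMG for two goal-conduciveness conditions.
    loss = rng.standard_normal((n_trials, n_samples))
    loss[:, 600:] += 1.0                       # simulated effect emerging at ~600 ms
    win = rng.standard_normal((n_trials, n_samples))

    t, p = stats.ttest_ind(loss, win, axis=0)  # point-by-point comparison
    sig = np.flatnonzero(p < 1e-4)
    if sig.size:
        print(f"first reliable sample at ~{sig[0] / fs * 1000:.0f} ms")
    ```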

  18. Appraisals Generate Specific Configurations of Facial Muscle Movements in a Gambling Task: Evidence for the Component Process Model of Emotion

    PubMed Central

    Gentsch, Kornelia; Grandjean, Didier; Scherer, Klaus R.

    2015-01-01

    Scherer’s Component Process Model provides a theoretical framework for research on the production mechanism of emotion and facial emotional expression. The model predicts that appraisal results drive facial expressions, which unfold sequentially and cumulatively over time. In two experiments, we examined facial muscle activity changes (via facial electromyography recordings over the corrugator, cheek, and frontalis regions) in response to events in a gambling task. These events were experimentally manipulated feedback stimuli which presented simultaneous information directly affecting goal conduciveness (gambling outcome: win, loss, or break-even) and power appraisals (Experiment 1 and 2), as well as control appraisal (Experiment 2). We repeatedly found main effects of goal conduciveness (starting ~600 ms), and power appraisals (starting ~800 ms after feedback onset). Control appraisal main effects were inconclusive. Interaction effects of goal conduciveness and power appraisals were obtained in both experiments (Experiment 1: over the corrugator and cheek regions; Experiment 2: over the frontalis region) suggesting amplified goal conduciveness effects when power was high in contrast to invariant goal conduciveness effects when power was low. Also an interaction of goal conduciveness and control appraisals was found over the cheek region, showing differential goal conduciveness effects when control was high and invariant effects when control was low. These interaction effects suggest that the appraisal of having sufficient control or power affects facial responses towards gambling outcomes. The result pattern suggests that corrugator and frontalis regions are primarily related to cognitive operations that process motivational pertinence, whereas the cheek region would be more influenced by coping implications. Our results provide first evidence demonstrating that cognitive-evaluative mechanisms related to goal conduciveness, control, and power appraisals affect facial expressions dynamically over time, immediately after an event is perceived. In addition, our results provide further indications for the chronography of appraisal-driven facial movements and the underlying cognitive processes. PMID:26295338

  19. Are event-related potentials to dynamic facial expressions of emotion related to individual differences in the accuracy of processing facial expressions and identity?

    PubMed

    Recio, Guillermo; Wilhelm, Oliver; Sommer, Werner; Hildebrandt, Andrea

    2017-04-01

    Despite a wealth of knowledge about the neural mechanisms behind emotional facial expression processing, little is known about how they relate to individual differences in social cognition abilities. We studied individual differences in the event-related potentials (ERPs) elicited by dynamic facial expressions. First, we assessed the latent structure of the ERPs, reflecting structural face processing in the N170, and the allocation of processing resources and reflexive attention to emotionally salient stimuli, in the early posterior negativity (EPN) and the late positive complex (LPC). Then we estimated brain-behavior relationships between the ERP factors and behavioral indicators of facial identity and emotion-processing abilities. Structural models revealed that the participants who formed faster structural representations of neutral faces (i.e., shorter N170 latencies) performed better at face perception (r = -.51) and memory (r = -.42). The N170 amplitude was not related to individual differences in face cognition or emotion processing. The latent EPN factor correlated with emotion perception (r = .47) and memory (r = .32), and also with face perception abilities (r = .41). Interestingly, the latent factor representing the difference in EPN amplitudes between the two neutral control conditions (chewing and blinking movements) also correlated with emotion perception (r = .51), highlighting the importance of tracking facial changes in the perception of emotional facial expressions. The LPC factor for negative expressions correlated with the memory for emotional facial expressions. The links revealed between the latency and strength of activations of brain systems and individual differences in processing socio-emotional information provide new insights into the brain mechanisms involved in social communication.

  20. Cortical representation of facial and tongue movements: a task functional magnetic resonance imaging study.

    PubMed

    Xiao, Fu-Long; Gao, Pei-Yi; Qian, Tian-Yi; Sui, Bin-Bin; Xue, Jing; Zhou, Jian; Lin, Yan

    2017-05-01

Functional magnetic resonance imaging (fMRI) mapping can present the activated cortical area during movement, yet little is known about the precise localization of facial and tongue movements. To investigate the representation of facial and tongue movements by task fMRI, twenty right-handed healthy subjects underwent a block-design task fMRI examination. Task movements included lip pursing, cheek bulging, grinning and vertical tongue excursion. Statistical parametric mapping (SPM8) was applied to analyse the data. A one-sample t-test was used to calculate the common activation area between facial and tongue movements, and a paired t-test was used to test for areas of over- or underactivation in tongue movement compared with each group of facial movements. The common areas within facial and tongue movements suggested similar motor circuits of activation in both movements. Activation in tongue movement was situated laterally and inferiorly in the sensorimotor area relative to facial movements. Greater activation for tongue movement was detected in the left superior parietal lobe relative to lip pursing, and greater activation in the bilateral cuneus was detected for grinning compared with tongue movement. © 2015 Scandinavian Society of Clinical Physiology and Nuclear Medicine. Published by John Wiley & Sons Ltd.
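
    The group analyses described (one-sample and paired t-tests over subject-level contrast maps) map directly onto voxelwise tests. A stripped-down sketch with random stand-in beta maps; SPM additionally handles smoothing, masking and family-wise error correction:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(6)
    n_subjects, n_voxels = 20, 5000            # flattened in-mask voxels

    # Hypothetical first-level contrast estimates per subject and condition.
    beta_tongue = rng.standard_normal((n_subjects, n_voxels))
    beta_lip = rng.standard_normal((n_subjects, n_voxels))

    # Paired t-test per voxel: tongue movement vs lip pursing.
    t, p = stats.ttest_rel(beta_tongue, beta_lip, axis=0)
    print((p < 0.001).sum(), "voxels exceed p < .001 uncorrected")
    ```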

  1. Role of the first and second person perspective for control of behaviour: Understanding other people's facial expressions.

    PubMed

    Potthoff, Denise; Seitz, Rüdiger J

    2015-12-01

    Humans typically make probabilistic inferences about another person's affective state based on her/his bodily movements such as emotional facial expressions, emblematic gestures and whole body movements. Furthermore, humans deduce tentative predictions about the other person's intentions. Thus, the first person perspective of a subject is supplemented by the second person perspective involving theory of mind and empathy. Neuroimaging investigations have shown that the medial and lateral frontal cortex are critical nodes in the circuits underlying theory of mind, empathy, as well as intention of action. It is suggested that personal perspective taking in social interactions is paradigmatic for the capability of humans to generate probabilistic accounts of the outside world that underlie a person's control of behaviour. Copyright © 2015 Elsevier Ltd. All rights reserved.

  2. Perceiving emotions in neutral faces: expression processing is biased by affective person knowledge.

    PubMed

    Suess, Franziska; Rabovsky, Milena; Abdel Rahman, Rasha

    2015-04-01

    According to a widely held view, basic emotions such as happiness or anger are reflected in facial expressions that are invariant and uniquely defined by specific facial muscle movements. Accordingly, expression perception should not be vulnerable to influences outside the face. Here, we test this assumption by manipulating the emotional valence of biographical knowledge associated with individual persons. Faces of well-known and initially unfamiliar persons displaying neutral expressions were associated with socially relevant negative, positive or comparatively neutral biographical information. The expressions of faces associated with negative information were classified as more negative than faces associated with neutral information. Event-related brain potential modulations in the early posterior negativity, a component taken to reflect early sensory processing of affective stimuli such as emotional facial expressions, suggest that negative affective knowledge can bias the perception of faces with neutral expressions toward subjectively displaying negative emotions. © The Author (2014). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  3. Another Scale for the Assessment of Facial Paralysis? ADS Scale: Our Proposition, How to Use It.

    PubMed

    Di Stadio, Arianna

    2015-12-01

Several methods have been proposed over the years to evaluate the areas and specific movement deficits in patients affected by facial palsy. Despite these efforts, the House-Brackmann scale remains the most widely used assessment in the medical community. The aim of our study is to propose and assess a new rating scale, the Arianna Disease Scale (ADS), for the clinical evaluation of facial paralysis. Sixty patients affected by unilateral facial Bell palsy were enrolled in a prospective study from 2012 to 2014. Their facial nerve function was evaluated with our assessment, analysing the face divided into upper, middle and lower thirds. We analysed different facial expressions; each movement corresponded to the action of different muscles. The action of each muscle was scored from 0 to 1, with 0 corresponding to complete flaccid paralysis and 1 to normal muscle function. Synkinesis was also considered and evaluated in the scale, with a fixed score of 0.5. Our results considered the ease and speed of evaluation, the accuracy of muscle deficit assessment and the ability to score synkinesis. All three observers agreed 100% on the highest degree of deficit. We found some discrepancies at intermediate scores, with 92% agreement in the upper face, 87% in the middle and 80% in the lower face, where more muscles are involved in movements. Our scale has some limitations linked to the small group of patients evaluated, and the intermediate scores of 0.3 and 0.7 proved somewhat difficult to apply. However, it was an accurate tool for quickly evaluating facial nerve function and has potential as an alternative scale for evaluating and diagnosing facial nerve disorders.

  4. Enhancing Facial Aesthetics with Muscle Retraining Exercises-A Review

    PubMed Central

    D’souza, Raina; Kini, Ashwini; D’souza, Henston; Shetty, Omkar

    2014-01-01

Facial attractiveness plays a key role in social interaction. ‘Smile’ is not only a single category of facial behaviour, but also the emotion of frank joy which is expressed on the face by the combined contraction of the muscles involved. When a patient visits the dental clinic for aesthetic reasons, the dentist considers not only the chief complaint but also the overall harmony of the face. This article describes muscle retraining exercises to achieve control over facial movements and improve facial appearance which may be considered following any type of dental rehabilitation. Muscle conditioning, training and strengthening through daily exercises will help to counterbalance the aging effects. PMID:25302289

  5. Space-by-time manifold representation of dynamic facial expressions for emotion categorization

    PubMed Central

    Delis, Ioannis; Chen, Chaona; Jack, Rachael E.; Garrod, Oliver G. B.; Panzeri, Stefano; Schyns, Philippe G.

    2016-01-01

    Visual categorization is the brain computation that reduces high-dimensional information in the visual environment into a smaller set of meaningful categories. An important problem in visual neuroscience is to identify the visual information that the brain must represent and then use to categorize visual inputs. Here we introduce a new mathematical formalism—termed space-by-time manifold decomposition—that describes this information as a low-dimensional manifold separable in space and time. We use this decomposition to characterize the representations used by observers to categorize the six classic facial expressions of emotion (happy, surprise, fear, disgust, anger, and sad). By means of a Generative Face Grammar, we presented random dynamic facial movements on each experimental trial and used subjective human perception to identify the facial movements that correlate with each emotion category. When the random movements projected onto the categorization manifold region corresponding to one of the emotion categories, observers categorized the stimulus accordingly; otherwise they selected “other.” Using this information, we determined both the Action Unit and temporal components whose linear combinations lead to reliable categorization of each emotion. In a validation experiment, we confirmed the psychological validity of the resulting space-by-time manifold representation. Finally, we demonstrated the importance of temporal sequencing for accurate emotion categorization and identified the temporal dynamics of Action Unit components that cause typical confusions between specific emotions (e.g., fear and surprise) as well as those resolving these confusions. PMID:27305521
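
    The space-by-time decomposition factorizes each trial's time-by-AU movement matrix into shared temporal and spatial modules plus trial-specific coefficients. The sketch below is a crude two-step stand-in using scikit-learn's NMF on simulated data, not the sample-based alternating algorithm of the paper; module counts and shapes are assumptions.

    ```python
    import numpy as np
    from sklearn.decomposition import NMF

    rng = np.random.default_rng(7)
    n_trials, n_time, n_aus = 300, 20, 42
    data = rng.random((n_trials, n_time, n_aus))   # AU activation movies per trial

    # Step 1: spatial modules from all (trial, time) AU vectors.
    S = NMF(n_components=5, init="nndsvda", max_iter=500,
            random_state=0).fit(data.reshape(-1, n_aus)).components_   # (5, n_aus)

    # Step 2: temporal modules from the time courses of spatial-module activations.
    act = (data.reshape(-1, n_aus) @ np.linalg.pinv(S)).reshape(n_trials, n_time, 5)
    act = np.clip(act, 0.0, None).transpose(0, 2, 1).reshape(-1, n_time)
    T = NMF(n_components=3, init="nndsvda", max_iter=500,
            random_state=0).fit(act).components_                       # (3, n_time)
    print(S.shape, T.shape)
    ```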

  6. Languages and interfaces for facial animation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Magnenat-Thalmann, N.

    1995-05-01

This paper describes high-level tools for specifying, controlling, and synchronizing temporal and spatial characteristics for 3D animation of facial expressions. The proposed approach consists of hierarchical levels of control. Specification of expressions, phonemes, emotions, sentences, and head movements by means of a high-level language is shown. The various aspects of synchronization are also emphasized. Then, the association of the controls with different interactive devices and media, which allows the animator greater flexibility and freedom, is discussed. Experiments with input accessories such as the keyboard of a music synthesizer and gestures from the DataGlove are illustrated.

  7. Intact mirror mechanisms for automatic facial emotions in children and adolescents with autism spectrum disorder.

    PubMed

    Schulte-Rüther, Martin; Otte, Ellen; Adigüzel, Kübra; Firk, Christine; Herpertz-Dahlmann, Beate; Koch, Iring; Konrad, Kerstin

    2017-02-01

It has been suggested that an early deficit in the human mirror neuron system (MNS) is an important feature of autism. Recent findings related to simple hand and finger movements do not support a general dysfunction of the MNS in autism. Studies investigating facial actions (e.g., emotional expressions) have been more consistent but mostly relied on passive observation tasks. We used a new variant of a compatibility task for the assessment of automatic facial mimicry responses that allowed for simultaneous control of attention to facial stimuli. We used facial electromyography in 18 children and adolescents with autism spectrum disorder (ASD) and 18 typically developing controls (TDCs). We observed a robust compatibility effect in ASD, that is, the execution of a facial expression was facilitated if a congruent facial expression was observed. Time course analysis of RT distributions and comparison to a classic compatibility task (symbolic Simon task) revealed that the facial compatibility effect appeared early and increased with time, suggesting fast and sustained activation of motor codes during observation of facial expressions. We observed a negative correlation of the compatibility effect with age across participants and in ASD, and a positive correlation between self-rated empathy and congruency for smiling faces in TDCs but not in ASD. This pattern of results suggests that basic motor mimicry is intact in ASD, but is not associated with complex social cognitive abilities such as emotion understanding and empathy. Autism Res 2017, 10: 298-310. © 2016 International Society for Autism Research, Wiley Periodicals, Inc.
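
    The compatibility analysis reduces to a mean latency difference between congruent and incongruent trials, and the time-course claim can be checked with quantile-bin (delta-plot) comparisons. A toy numpy sketch with simulated onset latencies:

    ```python
    import numpy as np

    rng = np.random.default_rng(8)

    # Hypothetical EMG-onset latencies (ms) for congruent vs incongruent trials.
    congruent = rng.normal(480, 80, 200)
    incongruent = rng.normal(520, 80, 200)
    print(f"compatibility effect: {incongruent.mean() - congruent.mean():.0f} ms")

    # Delta plot: effect size per RT quintile; an effect that grows with time
    # shows increasing deltas across quintiles.
    qs = [0.2, 0.4, 0.6, 0.8, 1.0]
    print(np.round(np.quantile(incongruent, qs) - np.quantile(congruent, qs), 1))
    ```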

  8. Facial recognition in primary focal dystonia.

    PubMed

    Rinnerthaler, Martina; Benecke, Cord; Bartha, Lisa; Entner, Tanja; Poewe, Werner; Mueller, Joerg

    2006-01-01

    The basal ganglia seem to be involved in emotional processing. Primary dystonia is a movement disorder considered to result from basal ganglia dysfunction, and the aim of the present study was to investigate emotion recognition in patients with primary focal dystonia. Thirty-two patients with primary cranial (n=12) and cervical (n=20) dystonia were compared to 32 healthy controls matched for age, sex, and educational level on the facially expressed emotion labeling (FEEL) test, a computer-based tool measuring a person's ability to recognize facially expressed emotions. Patients with cognitive impairment or depression were excluded. None of the patients received medication with a possible cognitive side effect profile and only those with mild to moderate dystonia were included. Patients with primary dystonia showed isolated deficits in the recognition of disgust (P=0.007), while no differences between patients and controls were found with regard to the other emotions (fear, happiness, surprise, sadness, and anger). The findings of the present study add further evidence to the conception that dystonia is not only a motor but a complex basal ganglia disorder including selective emotion recognition disturbances. Copyright (c) 2005 Movement Disorder Society.

  9. Body language in health care: a contribution to nursing communication.

    PubMed

    de Rezende, Rachel de Carvalho; de Oliveira, Rosane Mara Pontes; de Araújo, Sílvia Teresa Carvalho; Guimarães, Tereza Cristina Felippe; do Espírito Santo, Fátima Helena; Porto, Isaura Setenta

    2015-01-01

Objective: to classify body language used in nursing care, and to propose "Body language in nursing care" as an analytical category for nursing communication. Method: quantitative research with the systematic observation of 21:43 care situations, with 21 members representing the nursing teams of two hospitals. Empirical categories: sound, facial, eye and body expressions. Results: sound expressions emphasized laughter. Facial expressions communicated satisfaction and happiness. Eye contact with members stood out in visual expressions. The most frequent body expressions were head movements and indistinct touches. Conclusion: nursing care team members use body language to establish rapport with patients, clarify their needs and plan care. The study classified body language characteristics of humanized care, which involves non-technical as well as technical issues arising from nursing communication.

  10. Chimpanzees (Pan troglodytes) Produce the Same Types of ‘Laugh Faces’ when They Emit Laughter and when They Are Silent

    PubMed Central

    Davila-Ross, Marina; Jesus, Goncalo; Osborne, Jade; Bard, Kim A.

    2015-01-01

    The ability to flexibly produce facial expressions and vocalizations has a strong impact on the way humans communicate, as it promotes more explicit and versatile forms of communication. Whereas facial expressions and vocalizations are unarguably closely linked in primates, the extent to which these expressions can be produced independently in nonhuman primates is unknown. The present work, thus, examined if chimpanzees produce the same types of facial expressions with and without accompanying vocalizations, as do humans. Forty-six chimpanzees (Pan troglodytes) were video-recorded during spontaneous play with conspecifics at the Chimfunshi Wildlife Orphanage. ChimpFACS was applied, a standardized coding system to measure chimpanzee facial movements, based on FACS developed for humans. Data showed that the chimpanzees produced the same 14 configurations of open-mouth faces when laugh sounds were present and when they were absent. Chimpanzees, thus, produce these facial expressions flexibly without being morphologically constrained by the accompanying vocalizations. Furthermore, the data indicated that the facial expression plus vocalization and the facial expression alone were used differently in social play, i.e., when in physical contact with the playmates and when matching the playmates’ open-mouth faces. These findings provide empirical evidence that chimpanzees produce distinctive facial expressions independently from a vocalization, and that their multimodal use affects communicative meaning, important traits for a more explicit and versatile way of communication. As it is still uncertain how human laugh faces evolved, the ChimpFACS data were also used to empirically examine the evolutionary relation between open-mouth faces with laugh sounds of chimpanzees and laugh faces of humans. The ChimpFACS results revealed that laugh faces of humans must have gradually emerged from laughing open-mouth faces of ancestral apes. This work examines the main evolutionary changes of laugh faces since the last common ancestor of chimpanzees and humans. PMID:26061420

  11. Chimpanzees (Pan troglodytes) Produce the Same Types of 'Laugh Faces' when They Emit Laughter and when They Are Silent.

    PubMed

    Davila-Ross, Marina; Jesus, Goncalo; Osborne, Jade; Bard, Kim A

    2015-01-01

    The ability to flexibly produce facial expressions and vocalizations has a strong impact on the way humans communicate, as it promotes more explicit and versatile forms of communication. Whereas facial expressions and vocalizations are unarguably closely linked in primates, the extent to which these expressions can be produced independently in nonhuman primates is unknown. The present work, thus, examined if chimpanzees produce the same types of facial expressions with and without accompanying vocalizations, as do humans. Forty-six chimpanzees (Pan troglodytes) were video-recorded during spontaneous play with conspecifics at the Chimfunshi Wildlife Orphanage. ChimpFACS was applied, a standardized coding system to measure chimpanzee facial movements, based on FACS developed for humans. Data showed that the chimpanzees produced the same 14 configurations of open-mouth faces when laugh sounds were present and when they were absent. Chimpanzees, thus, produce these facial expressions flexibly without being morphologically constrained by the accompanying vocalizations. Furthermore, the data indicated that the facial expression plus vocalization and the facial expression alone were used differently in social play, i.e., when in physical contact with the playmates and when matching the playmates' open-mouth faces. These findings provide empirical evidence that chimpanzees produce distinctive facial expressions independently from a vocalization, and that their multimodal use affects communicative meaning, important traits for a more explicit and versatile way of communication. As it is still uncertain how human laugh faces evolved, the ChimpFACS data were also used to empirically examine the evolutionary relation between open-mouth faces with laugh sounds of chimpanzees and laugh faces of humans. The ChimpFACS results revealed that laugh faces of humans must have gradually emerged from laughing open-mouth faces of ancestral apes. This work examines the main evolutionary changes of laugh faces since the last common ancestor of chimpanzees and humans.

  12. Facial expressions and the evolution of the speech rhythm.

    PubMed

    Ghazanfar, Asif A; Takahashi, Daniel Y

    2014-06-01

In primates, different vocalizations are produced, at least in part, by making different facial expressions. Not surprisingly, humans, apes, and monkeys all recognize the correspondence between vocalizations and the facial postures associated with them. However, one major dissimilarity between monkey vocalizations and human speech is that, in the latter, the acoustic output and associated movements of the mouth are both rhythmic (in the 3- to 8-Hz range) and tightly correlated, whereas monkey vocalizations have a similar acoustic rhythmicity but lack the concomitant rhythmic facial motion. This raises the question of how we evolved from a presumptive ancestral acoustic-only vocal rhythm to the one that is audiovisual with improved perceptual sensitivity. According to one hypothesis, this bisensory speech rhythm evolved through the rhythmic facial expressions of ancestral primates. If this hypothesis has any validity, we would expect extant nonhuman primates to produce at least some facial expressions with a speech-like rhythm in the 3- to 8-Hz frequency range. Lip smacking, an affiliative signal observed in many genera of primates, satisfies this criterion. We review a series of studies using developmental, x-ray cineradiographic, EMG, and perceptual approaches with macaque monkeys producing lip smacks to further investigate this hypothesis. We then explore its putative neural basis and remark on important differences between lip smacking and speech production. Overall, the data support the hypothesis that lip smacking may have been an ancestral expression that was linked to vocal output to produce the original rhythmic audiovisual speech-like utterances in the human lineage.

  13. Emotion categories and dimensions in the facial communication of affect: An integrated approach.

    PubMed

    Mehu, Marc; Scherer, Klaus R

    2015-12-01

We investigated the role of facial behavior in emotional communication, using both categorical and dimensional approaches. We used a corpus of enacted emotional expressions (GEMEP) in which professional actors are instructed, with the help of scenarios, to communicate a variety of emotional experiences. The results of Study 1 replicated earlier findings showing that only a minority of facial action units are associated with specific emotional categories. Likewise, facial behavior did not show a specific association with particular emotional dimensions. Study 2 showed that facial behavior plays a significant role both in the detection of emotions and in the judgment of their dimensional aspects, such as valence, arousal, dominance, and unpredictability. In addition, a mediation model revealed that the association between facial behavior and recognition of the signaler's emotional intentions is mediated by perceived emotional dimensions. We conclude that, from a production perspective, facial action units convey neither specific emotions nor specific emotional dimensions, but are associated with several emotions and several dimensions. From the perceiver's perspective, facial behavior facilitated both dimensional and categorical judgments, and the former mediated the effect of facial behavior on recognition accuracy. The classification of emotional expressions into discrete categories may, therefore, rely on the perception of more general dimensions such as valence and arousal and, presumably, the underlying appraisals that are inferred from facial movements. (c) 2015 APA, all rights reserved.

  14. Operant conditioning of facial displays of pain.

    PubMed

    Kunz, Miriam; Rainville, Pierre; Lautenbacher, Stefan

    2011-06-01

The operant model of chronic pain posits that nonverbal pain behavior, such as facial expressions, is sensitive to reinforcement, but experimental evidence supporting this assumption is sparse. The aim of the present study was to investigate in a healthy population a) whether facial pain behavior can indeed be operantly conditioned using a discriminative reinforcement schedule to increase and decrease facial pain behavior and b) to what extent these changes affect pain experience indexed by self-ratings. In the experimental group (n = 29), the participants were reinforced every time that they showed pain-indicative facial behavior (up-conditioning) or a neutral expression (down-conditioning) in response to painful heat stimulation. Once facial pain behavior was successfully up- or down-conditioned, respectively (which occurred in 72% of participants), facial pain displays and self-report ratings were assessed. In addition, a control group (n = 11) was used that was yoked to the reinforcement plans of the experimental group. During the conditioning phases, reinforcement led to significant changes in facial pain behavior in the majority of the experimental group (p < .001) but not in the yoked control group (p > .136). Fine-grained analyses of facial muscle movements revealed a similar picture. Furthermore, the decline in facial pain displays (as observed during down-conditioning) strongly predicted changes in pain ratings (R² = 0.329). These results suggest that a) facial pain displays are sensitive to reinforcement and b) changes in facial pain displays can affect self-report ratings.

  15. Is evaluation of humorous stimuli associated with frontal cortex morphology? A pilot study using facial micro-movement analysis and MRI.

    PubMed

    Juckel, Georg; Mergl, Roland; Brüne, Martin; Villeneuve, Isabelle; Frodl, Thomas; Schmitt, Gisela; Zetzsche, Thomas; Born, Christine; Hahn, Klaus; Reiser, Maximilian; Möller, Hans-Jürgen; Bär, Karl-Jürgen; Hegerl, Ulrich; Meisenzahl, Eva Maria

    2011-05-01

Humour involves the ability to detect incongruous ideas violating social rules and norms. Accordingly, humour requires a complex array of cognitive skills for which intact frontal lobe functioning is critical. Here, we sought to examine the association of facial expression during an emotion-inducing experiment with frontal cortex morphology in healthy subjects. Thirty-one healthy male subjects (mean age: 30.8±8.9 years; all right-handers) watching a humorous movie ("Mr. Bean") were investigated. Markers fixed at certain points of the face, emitting high-frequency ultrasonic signals, allowed direct measurement of facial movements with high spatio-temporal resolution. Magnetic resonance images of the frontal cortex were obtained with a 1.5-T Magnetom using a coronal T2- and proton-density-weighted dual-echo sequence and a 3D magnetization-prepared rapid gradient echo (MPRAGE) sequence. Volumetric analysis was performed using BRAINS. Frontal cortex volume was partly associated with slower speed of "laughing" movements of the eyes ("genuine" or Duchenne smile). Specifically, grey matter volume was associated with longer emotional reaction time ipsilaterally, even when controlled for age and daily alcohol intake. These results lend support to the hypothesis that superior cognitive evaluation of humorous stimuli - mediated by larger prefrontal grey and white matter volume - leads to a measurable reduction in the speed of emotional expressivity in normal adults. Copyright © 2010 Elsevier Srl. All rights reserved.

  16. Infant brain activity while viewing facial movement of point-light displays as measured by near-infrared spectroscopy (NIRS).

    PubMed

    Ichikawa, Hiroko; Kanazawa, So; Yamaguchi, Masami K; Kakigi, Ryusuke

    2010-09-27

Adult observers can quickly identify specific actions performed by an invisible actor from points of light attached to the actor's head and major joints. Infants are also sensitive to biological motion and prefer to see it depicted by a dynamic point-light display. In detecting biological motion such as whole-body and facial movements, neuroimaging studies have demonstrated the involvement of the occipitotemporal cortex, including the superior temporal sulcus (STS). In the present study, we used the point-light display technique and near-infrared spectroscopy (NIRS) to examine infant brain activity while viewing facial biological motion depicted in a point-light display. Dynamic facial point-light displays (PLDs) were made from video recordings of three actors making a facial expression of surprise in a dark room. As in Bassili's study, about 80 luminous markers were scattered over the surface of the actors' faces. In the experiment, we measured infants' hemodynamic responses to these displays using NIRS. We hypothesized that infants would show different neural activity for upright and inverted PLDs. The responses were compared to the baseline activation during the presentation of individual still images, which were frames extracted from the dynamic PLDs. We found that the concentration of oxy-Hb increased in the right temporal area during the presentation of the upright PLDs compared to the baseline period. This is the first study to demonstrate that infants' brain activity in face processing is induced only by the motion cue of facial movement depicted by dynamic PLDs. (c) 2010 Elsevier Ireland Ltd. All rights reserved.

  17. Gaze Dynamics in the Recognition of Facial Expressions of Emotion.

    PubMed

    Barabanschikov, Vladimir A

    2015-01-01

We studied the preferentially fixated parts and features of the human face in the process of recognition of facial expressions of emotion. Photographs of facial expressions were used. Participants were to categorize these as basic emotions; during this process, eye movements were registered. It was found that variation in the intensity of an expression is mirrored in the accuracy of emotion recognition; it was also reflected in several indices of oculomotor function: the duration of inspection of certain areas of the face (its upper and lower halves, right and left sides), as well as the location, number and duration of fixations, and the viewing trajectory. In particular, for low-intensity expressions, the right side of the face was found to be attended to predominantly (right-side dominance); this right-side dominance effect was, however, absent for expressions of high intensity. For both low- and high-intensity expressions, the upper part of the face was predominantly fixated, though with greater fixation for high-intensity expressions. The majority of trials (70%), in line with findings in previous studies, revealed a V-shaped pattern of inspection trajectory. However, no relationship was found between the accuracy of recognition of emotional expressions and either the location and duration of fixations or the pattern of gaze directedness in the face. © The Author(s) 2015.

  18. The Dynamic Features of Lip Corners in Genuine and Posed Smiles

    PubMed Central

    Guo, Hui; Zhang, Xiao-Hui; Liang, Jun; Yan, Wen-Jing

    2018-01-01

The smile is a frequently expressed facial expression that typically conveys a positive emotional state and friendly intent. However, human beings have also learned how to fake smiles, typically by controlling the mouth to provide a genuine-looking expression. This is often accompanied by inaccuracies that can allow others to determine that the smile is false. Mouth movement is one of the most striking features of the smile, yet our understanding of its dynamic elements is still limited. The present study analyzes the dynamic features of lip corners, and considers how they differ between genuine and posed smiles. Employing computer vision techniques, we investigated elements such as the duration, intensity, speed, and symmetry of the lip corners, and certain irregularities in genuine and posed smiles obtained from the UvA-NEMO Smile Database. After utilizing the facial analysis tool OpenFace, we further propose a new approach to segmenting the onset, apex, and offset phases of smiles, as well as a means of measuring irregularities and symmetry in facial expressions. We extracted these features according to 2D and 3D coordinates, and conducted an analysis. The results reveal that genuine smiles have higher values than posed smiles for onset, offset, apex, and total durations, as well as for offset displacement and a variable we termed Irregularity-b (the SD of the apex phase). Conversely, values tended to be lower for onset and offset speeds, Irregularity-a (the rate of peaks), Symmetry-a (the correlation between left and right facial movements), and Symmetry-d (differences in onset frame numbers between the left and right faces). The findings from the present study are compared to those of previous research, and certain speculations are made. PMID:29515508
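
    A minimal sketch of the phase-segmentation idea on a one-dimensional lip-corner displacement series; the synthetic curve and the near-maximum threshold used to delimit the apex are illustrative assumptions, not the paper's exact criterion.

      import numpy as np

      # Segment a lip-corner displacement time series into onset/apex/offset.
      t = np.linspace(0, 3, 90)
      disp = np.clip(np.sin(np.pi * t / 3), 0, None)   # synthetic smile curve

      apex_level = 0.9 * disp.max()                    # assumed apex threshold
      above = disp >= apex_level
      apex_start = np.argmax(above)                    # first near-maximum frame
      apex_end = len(above) - np.argmax(above[::-1]) - 1  # last near-maximum frame

      onset = slice(0, apex_start)                 # rise toward the apex
      apex = slice(apex_start, apex_end + 1)       # sustained near-maximum phase
      offset = slice(apex_end + 1, len(disp))      # decay back to baseline

      mean_onset_speed = np.mean(np.diff(disp[onset]))  # per-frame displacement
      print(apex_start, apex_end, round(float(mean_onset_speed), 4))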

  19. Measuring Facial Movement

    ERIC Educational Resources Information Center

    Ekman, Paul; Friesen, Wallace V.

    1976-01-01

    The Facial Action Code (FAC) was derived from an analysis of the anatomical basis of facial movement. The development of the method is explained, contrasting it to other methods of measuring facial behavior. An example of how facial behavior is measured is provided, and ideas about research applications are discussed. (Author)

  20. Culture, gender and health care stigma: Practitioners' response to facial masking experienced by people with Parkinson's disease.

    PubMed

    Tickle-Degnen, Linda; Zebrowitz, Leslie A; Ma, Hui-ing

    2011-07-01

    Facial masking in Parkinson's disease is the reduction of automatic and controlled expressive movement of facial musculature, creating an appearance of apathy, social disengagement or compromised cognitive status. Research in western cultures demonstrates that practitioners form negatively biased impressions associated with patient masking. Socio-cultural norms about facial expressivity vary according to culture and gender, yet little research has studied the effect of these factors on practitioners' responses toward patients who vary in facial expressivity. This study evaluated the effect of masking, culture and gender on practitioners' impressions of patient psychological attributes. Practitioners (N = 284) in the United States and Taiwan judged 12 Caucasian American and 12 Asian Taiwanese women and men patients in video clips from interviews. Half of each patient group had a moderate degree of facial masking and the other half had near-normal expressivity. Practitioners in both countries judged patients with higher masking to be more depressed and less sociable, less socially supportive, and less cognitively competent than patients with lower masking. Practitioners were more biased by masking when judging the sociability of the American patients, and American practitioners' judgments of patient sociability were more negatively biased in response to masking than were those of Taiwanese practitioners. Practitioners were more biased by masking when judging the cognitive competence and social supportiveness of the Taiwanese patients, and Taiwanese practitioners' judgments of patient cognitive competence were more negatively biased in response to masking than were those of American practitioners. The negative response to higher masking was stronger in practitioner judgments of women than men patients, particularly American patients. The findings suggest local cultural values as well as ethnic and gender stereotypes operate on practitioners' use of facial expressivity in clinical impression formation. Copyright © 2011 Elsevier Ltd. All rights reserved.

  1. Culture, Gender and Health Care Stigma: Practitioners’ Response to Facial Masking Experienced by People with Parkinson’s Disease

    PubMed Central

    Tickle-Degnen, Linda; Zebrowitz, Leslie A.; Ma, Hui-ing

    2011-01-01

    Facial masking in Parkinson’s disease is the reduction of automatic and controlled expressive movement of facial musculature, creating an appearance of apathy, social disengagement or compromised cognitive status. Research in western cultures demonstrates that practitioners form negatively biased impressions associated with patient masking. Socio-cultural norms about facial expressivity vary according to culture and gender, yet little research has studied the effect of these factors on practitioners’ responses toward patients who vary in facial expressivity. This study evaluated the effect of masking, culture and gender on practitioners’ impressions of patient psychological attributes. Practitioners (N=284) in the United States and Taiwan judged 12 Caucasian American and 12 Asian Taiwanese women and men patients in video clips from interviews. Half of each patient group had a moderate degree of facial masking and the other half had near-normal expressivity. Practitioners in both countries judged patients with higher masking to be more depressed and less sociable, less socially supportive, and less cognitively competent than patients with lower masking. Practitioners were more biased by masking when judging the sociability of the American patients, and American practitioners’ judgments of patient sociability were more negatively biased in response to masking than were those of Taiwanese practitioners. Practitioners were more biased by masking when judging the cognitive competence and social supportiveness of the Taiwanese patients, and Taiwanese practitioners’ judgments of patient cognitive competence were more negatively biased in response to masking than were those of American practitioners. The negative response to higher masking was stronger in practitioner judgments of women than men patients, particularly American patients. The findings suggest local cultural values as well as ethnic and gender stereotypes operate on practitioners’ use of facial expressivity in clinical impression formation. PMID:21664737

  2. A new physical model with multilayer architecture for facial expression animation using dynamic adaptive mesh.

    PubMed

    Zhang, Yu; Prakash, Edmond C; Sung, Eric

    2004-01-01

This paper presents a new physically based 3D facial model, grounded in anatomical knowledge, which provides high fidelity for facial expression animation while optimizing the computation. Our facial model has a multilayer biomechanical structure, incorporating a physically based approximation to facial skin tissue, a set of anatomically motivated facial muscle actuators, and an underlying skull structure. In contrast to existing mass-spring-damper (MSD) facial models, our dynamic skin model uses nonlinear springs to directly simulate the nonlinear visco-elastic behavior of soft tissue, and a new kind of edge-repulsion spring is developed to prevent collapse of the skin model. Different types of muscle models have been developed to simulate the distribution of the muscle force applied to the skin due to muscle contraction. The presence of the skull advantageously constrains the skin movements, resulting in more accurate facial deformation, and also guides the interactive placement of facial muscles. The governing dynamics are computed using a local semi-implicit ODE solver. In the dynamic simulation, an adaptive refinement scheme automatically adjusts the local mesh resolution where potential inaccuracies are detected, depending on local deformation. The method, in effect, ensures the required speedup by concentrating computational time only where needed while ensuring realistic behavior within a predefined error threshold. This mechanism allows more pleasing animation results to be produced at a reduced computational cost.
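
    The nonlinear-spring dynamics can be sketched in a few lines of Python. The stiffening law and all constants below are illustrative assumptions, paired with the semi-implicit (symplectic) Euler update that this class of solver uses; the paper's actual tissue model and solver details differ.

      # One semi-implicit Euler step for a point mass on a nonlinear spring,
      # in the spirit of a visco-elastic skin element. The stiffening law
      # k * (1 + alpha * strain^2) is an assumption for illustration.
      def step(x, v, rest_len, dt, m=1.0, k=50.0, alpha=5.0, damping=0.8):
          strain = (x - rest_len) / rest_len
          k_eff = k * (1.0 + alpha * strain**2)     # spring stiffens with strain
          force = -k_eff * (x - rest_len) - damping * v
          v_new = v + dt * force / m                # update velocity first...
          x_new = x + dt * v_new                    # ...then position (semi-implicit)
          return x_new, v_new

      x, v = 1.3, 0.0                               # stretched spring, at rest
      for _ in range(200):
          x, v = step(x, v, rest_len=1.0, dt=0.01)
      print(round(x, 3))                            # oscillates, decaying toward rest length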

  3. An overview of the cosmetic treatment of facial muscles with a new botulinum toxin.

    PubMed

    Wiest, Luitgard G

    2009-01-01

Botulinum toxin (BTX) is nowadays used in a much more differentiated way, with a more individualized approach to the cosmetic treatment of patients. New indications in the mid and lower face have been added to the well-known areas of the upper face. Microinjection techniques are increasingly used alongside the classic intramuscular injection technique. BTX injections of the mid and lower face require very small dosages. The perioral muscles act in concert to achieve the extraordinarily complex movements that control facial expressions, eating, and speech. As the mouth makes horizontal as well as vertical movements, paralysis of these perioral muscles has a greater effect on facial function and appearance than does paralysis of muscles of the upper face, which move primarily in a vertical direction. It is essential that BTX injections achieve the desired cosmetic result with the minimum dose and without any functional discomfort. In this paper, three years of clinical experience with average dosages for an optimal outcome in the treatment of facial muscles with a newly developed botulinum toxin type A (Xeomin), free from complexing proteins, is presented.

  4. Face processing regions are sensitive to distinct aspects of temporal sequence in facial dynamics.

    PubMed

    Reinl, Maren; Bartels, Andreas

    2014-11-15

Facial movement conveys important information for social interactions, yet its neural processing is poorly understood. Computational models propose that shape-sensitive and temporal-sequence-sensitive mechanisms interact in processing dynamic faces. While face processing regions are known to respond to facial movement, their sensitivity to particular temporal sequences has barely been studied. Here we used fMRI to examine the sensitivity of human face-processing regions to two aspects of directionality in facial movement trajectories. We presented genuine movie recordings of increasing and decreasing fear expressions, each of which was played in natural or reversed frame order. This two-by-two factorial design matched low-level visual properties, static content, and motion energy within each factor: emotion direction (increasing or decreasing emotion) and timeline (natural versus artificial). The results showed sensitivity to emotion direction in the FFA, which was timeline-dependent as it only occurred within the natural frame order, and sensitivity to timeline in the STS, which was emotion-direction-dependent as it only occurred for decreasing fear. The occipital face area (OFA) was sensitive to the factor timeline. These findings reveal interacting temporal-sequence-sensitive mechanisms that are responsive both to ecological meaning and to the prototypical unfolding of facial dynamics. These mechanisms are temporally directional, provide socially relevant information regarding emotional state or naturalness of behavior, and agree with predictions from modeling and predictive coding theory. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.

  5. Seeing Emotions: A Review of Micro and Subtle Emotion Expression Training

    ERIC Educational Resources Information Center

    Poole, Ernest Andre

    2016-01-01

    In this review I explore and discuss the use of micro and subtle expression training in the social sciences. These trainings, offered commercially, are designed and endorsed by noted psychologist Paul Ekman, co-author of the Facial Action Coding System, a comprehensive system of measuring muscular movement in the face and its relationship to the…

  6. Dual Temporal Scale Convolutional Neural Network for Micro-Expression Recognition.

    PubMed

    Peng, Min; Wang, Chongyang; Chen, Tong; Liu, Guangyuan; Fu, Xiaolan

    2017-01-01

Facial micro-expression is a brief, involuntary facial movement that can reveal the genuine emotion that people try to conceal. Traditional methods of spontaneous micro-expression recognition rely excessively on sophisticated hand-crafted feature design, and the recognition rate is not high enough for practical application. In this paper, we propose a Dual Temporal Scale Convolutional Neural Network (DTSCNN) for spontaneous micro-expression recognition. The DTSCNN is a two-stream network: the two streams are used to adapt to the different frame rates of micro-expression video clips. Each stream of the DTSCNN consists of an independent shallow network to avoid overfitting. Meanwhile, we fed the networks with optical-flow sequences to ensure that the shallow networks can acquire higher-level features. Experimental results on spontaneous micro-expression databases (CASME I/II) showed that our method achieves a recognition rate almost 10% higher than that of state-of-the-art methods.
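
    A schematic two-stream network in the spirit of the DTSCNN, assuming PyTorch; the layer sizes, frame counts, and fusion-by-concatenation head are illustrative assumptions, not the published architecture.

      import torch
      import torch.nn as nn

      # Each stream is a shallow CNN fed with stacked optical-flow fields
      # (2 flow channels per frame) sampled at a different temporal scale.
      def shallow_stream(in_frames):
          return nn.Sequential(
              nn.Conv2d(2 * in_frames, 16, 5, stride=2), nn.ReLU(),
              nn.Conv2d(16, 32, 3, stride=2), nn.ReLU(),
              nn.AdaptiveAvgPool2d(1), nn.Flatten(),
          )

      class TwoStreamNet(nn.Module):
          def __init__(self, n_classes=4):
              super().__init__()
              self.dense = shallow_stream(in_frames=64)   # high-frame-rate clip
              self.sparse = shallow_stream(in_frames=16)  # temporally subsampled clip
              self.head = nn.Linear(64, n_classes)        # fuse 32 + 32 features

          def forward(self, flow_hi, flow_lo):
              fused = torch.cat([self.dense(flow_hi), self.sparse(flow_lo)], dim=1)
              return self.head(fused)

      net = TwoStreamNet()
      logits = net(torch.randn(1, 128, 64, 64), torch.randn(1, 32, 64, 64))
      print(logits.shape)   # torch.Size([1, 4])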

  7. Automatic three-dimensional quantitative analysis for evaluation of facial movement.

    PubMed

    Hontanilla, B; Aubá, C

    2008-01-01

The aim of this study is to present a new 3D capture system for facial movements called FACIAL CLIMA. It is an automatic optical motion system that involves placing special reflecting dots on the subject's face and video-recording the subject with three infrared-light cameras while he or she performs several facial movements such as smiling, mouth puckering, eye closure, and forehead elevation. Images from the cameras are automatically processed with a software program that generates customised information such as 3D data on velocities and areas. The study has been performed in 20 healthy volunteers. The accuracy of the measurement process and the intrarater and interrater reliabilities have been evaluated. Comparison of a known distance and angle with those obtained by FACIAL CLIMA shows that this system is accurate to within 0.13 mm and 0.41 degrees. In conclusion, the accuracy of the FACIAL CLIMA system for evaluation of facial movements is demonstrated, as well as its high intrarater and interrater reliability. It has advantages with respect to other systems that have been developed for evaluation of facial movements, such as short calibration time, short measuring time, and ease of use, and it provides not only distances but also velocities and areas. The FACIAL CLIMA system can therefore be considered an adequate tool to assess the outcome of facial paralysis reanimation surgery, allowing patients with facial paralysis to be compared between surgical centres so that the effectiveness of facial reanimation operations can be evaluated.
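
    The kind of derived kinematics such a system reports can be illustrated as follows; the marker count, 60 Hz frame rate, and random-walk trajectories are assumptions for the sketch, not properties of FACIAL CLIMA.

      import numpy as np

      # Per-marker speed from 3D marker trajectories (frames x markers x xyz).
      fps = 60.0
      rng = np.random.default_rng(1)
      traj = np.cumsum(rng.normal(0, 0.1, (120, 12, 3)), axis=0)  # synthetic, in mm

      vel = np.diff(traj, axis=0) * fps           # mm/s between consecutive frames
      speed = np.linalg.norm(vel, axis=2)         # scalar speed of each marker
      print("peak speed per marker (mm/s):", speed.max(axis=0).round(1))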

  8. Neural correlates of mirth and laughter: a direct electrical cortical stimulation study.

    PubMed

    Yamao, Yukihiro; Matsumoto, Riki; Kunieda, Takeharu; Shibata, Sumiya; Shimotake, Akihiro; Kikuchi, Takayuki; Satow, Takeshi; Mikuni, Nobuhiro; Fukuyama, Hidenao; Ikeda, Akio; Miyamoto, Susumu

    2015-05-01

    Laughter consists of both motor and emotional aspects. The emotional component, known as mirth, is usually associated with the motor component, namely, bilateral facial movements. Previous electrical cortical stimulation (ES) studies revealed that mirth was associated with the basal temporal cortex, inferior frontal cortex, and medial frontal cortex. Functional neuroimaging implicated a role for the left inferior frontal and bilateral temporal cortices in humor processing. However, the neural origins and pathways linking mirth with facial movements are still unclear. We hereby report two cases with temporal lobe epilepsy undergoing subdural electrode implantation in whom ES of the left basal temporal cortex elicited both mirth and laughter-related facial muscle movements. In one case with normal hippocampus, high-frequency ES consistently caused contralateral facial movement, followed by bilateral facial movements with mirth. In contrast, in another case with hippocampal sclerosis (HS), ES elicited only mirth at low intensity and short duration, and eventually laughter at higher intensity and longer duration. In both cases, the basal temporal language area (BTLA) was located within or adjacent to the cortex where ES produced mirth. In conclusion, the present direct ES study demonstrated that 1) mirth had a close relationship with language function, 2) intact mesial temporal structures were actively engaged in the beginning of facial movements associated with mirth, and 3) these emotion-related facial movements had contralateral dominance. Copyright © 2014 Elsevier Ltd. All rights reserved.

  9. A Neural Basis of Facial Action Recognition in Humans

    PubMed Central

    Srinivasan, Ramprakash; Golomb, Julie D.

    2016-01-01

    By combining different facial muscle actions, called action units, humans can produce an extraordinarily large number of facial expressions. Computational models and studies in cognitive science and social psychology have long hypothesized that the brain needs to visually interpret these action units to understand other people's actions and intentions. Surprisingly, no studies have identified the neural basis of the visual recognition of these action units. Here, using functional magnetic resonance imaging and an innovative machine learning analysis approach, we identify a consistent and differential coding of action units in the brain. Crucially, in a brain region thought to be responsible for the processing of changeable aspects of the face, multivoxel pattern analysis could decode the presence of specific action units in an image. This coding was found to be consistent across people, facilitating the estimation of the perceived action units on participants not used to train the multivoxel decoder. Furthermore, this coding of action units was identified when participants attended to the emotion category of the facial expression, suggesting an interaction between the visual analysis of action units and emotion categorization as predicted by the computational models mentioned above. These results provide the first evidence for a representation of action units in the brain and suggest a mechanism for the analysis of large numbers of facial actions and a loss of this capacity in psychopathologies. SIGNIFICANCE STATEMENT Computational models and studies in cognitive and social psychology propound that visual recognition of facial expressions requires an intermediate step to identify visible facial changes caused by the movement of specific facial muscles. Because facial expressions are indeed created by moving one's facial muscles, it is logical to assume that our visual system solves this inverse problem. Here, using an innovative machine learning method and neuroimaging data, we identify for the first time a brain region responsible for the recognition of actions associated with specific facial muscles. Furthermore, this representation is preserved across subjects. Our machine learning analysis does not require mapping the data to a standard brain and may serve as an alternative to hyperalignment. PMID:27098688
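
    The decoding logic can be illustrated with a toy multivoxel pattern analysis on synthetic data, assuming scikit-learn; the voxel counts, effect size, and linear SVM are illustrative choices, not the study's pipeline.

      import numpy as np
      from sklearn.svm import LinearSVC
      from sklearn.model_selection import cross_val_score

      # Decode the presence of one action unit from simulated voxel patterns.
      rng = np.random.default_rng(3)
      n_trials, n_voxels = 200, 500
      au_present = rng.integers(0, 2, n_trials)          # label: AU on/off
      patterns = rng.normal(0, 1, (n_trials, n_voxels))
      patterns[au_present == 1, :20] += 0.5              # weakly informative voxels

      scores = cross_val_score(LinearSVC(dual=False), patterns, au_present, cv=5)
      print("decoding accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))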

  10. Intuitive Face Judgments Rely on Holistic Eye Movement Pattern

    PubMed Central

    Mega, Laura F.; Volz, Kirsten G.

    2017-01-01

    Non-verbal signals such as facial expressions are of paramount importance for social encounters. Their perception predominantly occurs without conscious awareness and is effortlessly integrated into social interactions. In other words, face perception is intuitive. Contrary to classical intuition tasks, this work investigates intuitive processes in the realm of every-day type social judgments. Two differently instructed groups of participants judged the authenticity of emotional facial expressions, while their eye movements were recorded: an ‘intuitive group,’ instructed to rely on their “gut feeling” for the authenticity judgments, and a ‘deliberative group,’ instructed to make their judgments after careful analysis of the face. Pixel-wise statistical maps of the resulting eye movements revealed a differential viewing pattern, wherein the intuitive judgments relied on fewer, longer and more centrally located fixations. These markers have been associated with a global/holistic viewing strategy. The holistic pattern of intuitive face judgments is in line with evidence showing that intuition is related to processing the “gestalt” of an object, rather than focusing on details. Our work thereby provides further evidence that intuitive processes are characterized by holistic perception, in an understudied and real world domain of intuition research. PMID:28676773

  11. Intuitive Face Judgments Rely on Holistic Eye Movement Pattern.

    PubMed

    Mega, Laura F; Volz, Kirsten G

    2017-01-01

    Non-verbal signals such as facial expressions are of paramount importance for social encounters. Their perception predominantly occurs without conscious awareness and is effortlessly integrated into social interactions. In other words, face perception is intuitive. Contrary to classical intuition tasks, this work investigates intuitive processes in the realm of every-day type social judgments. Two differently instructed groups of participants judged the authenticity of emotional facial expressions, while their eye movements were recorded: an 'intuitive group,' instructed to rely on their "gut feeling" for the authenticity judgments, and a 'deliberative group,' instructed to make their judgments after careful analysis of the face. Pixel-wise statistical maps of the resulting eye movements revealed a differential viewing pattern, wherein the intuitive judgments relied on fewer, longer and more centrally located fixations. These markers have been associated with a global/holistic viewing strategy. The holistic pattern of intuitive face judgments is in line with evidence showing that intuition is related to processing the "gestalt" of an object, rather than focusing on details. Our work thereby provides further evidence that intuitive processes are characterized by holistic perception, in an understudied and real world domain of intuition research.

  12. Hypoglossal-facial nerve reconstruction using a Y-tube-conduit reduces aberrant synkinetic movements of the orbicularis oculi and vibrissal muscles in rats.

    PubMed

    Kaya, Yasemin; Ozsoy, Umut; Turhan, Murat; Angelov, Doychin N; Sarikcioglu, Levent

    2014-01-01

The facial nerve is the most frequently damaged nerve in head and neck trauma. Patients undergoing facial nerve reconstruction often complain about disturbing abnormal synkinetic movements of the facial muscles (mass movements, synkinesis), which are thought to result from misguided collateral branching of regenerating motor axons and reinnervation of inappropriate muscles. Here, we examined whether use of an aorta Y-tube conduit during reconstructive surgery after facial nerve injury reduces synkinesis of the orbicularis oculi (blink reflex) and vibrissal (whisking) musculature. The abdominal aorta plus its bifurcation was harvested (N = 12) for Y-tube conduits. Animal groups comprised intact animals (Group 1), those receiving hypoglossal-facial nerve end-to-end coaptation alone (HFA; Group 2), and those receiving hypoglossal-facial nerve reconstruction using a Y-tube (HFA-Y-tube, Group 3). Videotape motion analysis at 4 months revealed reduced synkinesis of eyelid and whisker movements in the HFA-Y-tube group compared to HFA alone.

  13. Hypoglossal-Facial Nerve Reconstruction Using a Y-Tube-Conduit Reduces Aberrant Synkinetic Movements of the Orbicularis Oculi and Vibrissal Muscles in Rats

    PubMed Central

    Kaya, Yasemin; Ozsoy, Umut; Turhan, Murat; Angelov, Doychin N.; Sarikcioglu, Levent

    2014-01-01

The facial nerve is the most frequently damaged nerve in head and neck trauma. Patients undergoing facial nerve reconstruction often complain about disturbing abnormal synkinetic movements of the facial muscles (mass movements, synkinesis), which are thought to result from misguided collateral branching of regenerating motor axons and reinnervation of inappropriate muscles. Here, we examined whether use of an aorta Y-tube conduit during reconstructive surgery after facial nerve injury reduces synkinesis of the orbicularis oculi (blink reflex) and vibrissal (whisking) musculature. The abdominal aorta plus its bifurcation was harvested (N = 12) for Y-tube conduits. Animal groups comprised intact animals (Group 1), those receiving hypoglossal-facial nerve end-to-end coaptation alone (HFA; Group 2), and those receiving hypoglossal-facial nerve reconstruction using a Y-tube (HFA-Y-tube, Group 3). Videotape motion analysis at 4 months revealed reduced synkinesis of eyelid and whisker movements in the HFA-Y-tube group compared to HFA alone. PMID:25574468

  14. Facial feature tracking: a psychophysiological measure to assess exercise intensity?

    PubMed

    Miles, Kathleen H; Clark, Bradley; Périard, Julien D; Goecke, Roland; Thompson, Kevin G

    2018-04-01

The primary aim of this study was to determine whether facial feature tracking reliably measures changes in facial movement across varying exercise intensities. Fifteen cyclists completed three incremental-intensity cycling trials to exhaustion while their faces were recorded with video cameras. Facial feature tracking was found to be a moderately reliable measure of facial movement during incremental-intensity cycling (intra-class correlation coefficient = 0.65-0.68). Facial movement (whole face (WF), upper face (UF), lower face (LF) and head movement (HM)) increased with exercise intensity, from the first lactate threshold (LT1) until attainment of maximal aerobic power (MAP) (WF 3464 ± 3364 mm, P < 0.005; UF 1961 ± 1779 mm, P = 0.002; LF 1608 ± 1404 mm, P = 0.002; HM 849 ± 642 mm, P < 0.001). UF movement was greater than LF movement at all exercise intensities (UF minus LF at: LT1, 1048 ± 383 mm; LT2, 1208 ± 611 mm; MAP, 1401 ± 712 mm; P < 0.001). Significant medium to large non-linear relationships were found between facial movement and power output (r² = 0.24-0.31), HR (r² = 0.26-0.33), [La⁻] (r² = 0.33-0.44) and RPE (r² = 0.38-0.45). The findings demonstrate the potential utility of facial feature tracking as a non-invasive psychophysiological measure for assessing exercise intensity.
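
    A sketch of the kind of summary metric such tracking yields: total facial movement as the summed frame-to-frame displacement of tracked landmarks. The 68-landmark layout, the random-walk data, and the crude upper/lower split are illustrative assumptions, not the study's method.

      import numpy as np

      # Landmark tracks: frames x landmarks x (x, y), in arbitrary units.
      rng = np.random.default_rng(2)
      landmarks = np.cumsum(rng.normal(0, 0.2, (3000, 68, 2)), axis=0)

      disp = np.linalg.norm(np.diff(landmarks, axis=0), axis=2)  # per landmark, per frame
      total_movement = disp.sum()                                # whole-face movement
      upper, lower = disp[:, :34].sum(), disp[:, 34:].sum()      # crude upper/lower split
      print(round(total_movement), round(upper), round(lower))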

  15. Communicative interactions between visually impaired mothers and their sighted children: analysis of gaze, facial expressions, voice and physical contacts.

    PubMed

    Chiesa, S; Galati, D; Schmidt, S

    2015-11-01

Social and emotional development of infants and young children is largely based on communicative interaction with the mother, or principal caretaker (Trevarthen). The main modalities implied in this early communication are voice, facial expressions and gaze (Stern). This study aims at analysing early mother-child interactions in the case of visually impaired mothers who do not have access to their children's gaze and facial expressions. Spontaneous play interactions between seven visually impaired mothers and their sighted children aged between 6 months and 3 years were filmed. These dyads were compared with a control group of sighted mothers and children by analysing four modalities of communication and interaction regulation: gaze, physical contacts, verbal productions and facial expressions. The visually impaired mothers' facial expressions differed from those of sighted mothers mainly with respect to forehead movements, leading to an impoverishment of conveyed meaning. Regarding the other communicative modalities, the results suggest that visually impaired mothers and their children use compensatory strategies to guarantee harmonious interaction despite the mother's impairment: whereas gaze is the main factor of interaction regulation in sighted dyads, physical contacts and verbal productions assume a prevalent role in dyads with visually impaired mothers. Moreover, visually impaired mothers' children seem able to differentiate between their mother and sighted interaction partners, adapting differential modes of communication. The results of this study show that, in spite of the obvious differences in the modes of communication, visual impairment does not prevent a harmonious interaction with the child. © 2015 John Wiley & Sons Ltd.

  16. Selective attention modulates early human evoked potentials during emotional face-voice processing.

    PubMed

    Ho, Hao Tam; Schröger, Erich; Kotz, Sonja A

    2015-04-01

Recent findings on multisensory integration suggest that selective attention influences cross-sensory interactions from an early processing stage. Yet, in the field of emotional face-voice integration, the hypothesis prevails that facial and vocal emotional information interacts preattentively. Using ERPs, we investigated the influence of selective attention on the perception of congruent versus incongruent combinations of neutral and angry facial and vocal expressions. Attention was manipulated via four tasks that directed participants to (i) the facial expression, (ii) the vocal expression, (iii) the emotional congruence between the face and the voice, and (iv) the synchrony between lip movement and speech onset. Our results revealed early interactions between facial and vocal emotional expressions, manifested as modulations of the auditory N1 and P2 amplitude by incongruent emotional face-voice combinations. Although audiovisual emotional interactions within the N1 time window were affected by the attentional manipulations, interactions within the P2 modulation showed no such attentional influence. Thus, we propose that the N1 and P2 are functionally dissociated in terms of emotional face-voice processing and discuss evidence in support of the notion that the N1 is associated with cross-sensory prediction, whereas the P2 relates to the derivation of an emotional percept. Essentially, our findings put the integration of facial and vocal emotional expressions into a new perspective: one that regards the integration process as a composite of multiple, possibly independent subprocesses, some of which are susceptible to attentional modulation, whereas others may be influenced by additional factors.

  17. Social communication with virtual agents: The effects of body and gaze direction on attention and emotional responding in human observers.

    PubMed

    Marschner, Linda; Pannasch, Sebastian; Schulz, Johannes; Graupner, Sven-Thomas

    2015-08-01

In social communication, the gaze direction of other persons provides important information for perceiving and interpreting their emotional response. Previous research investigated the influence of gaze by manipulating mutual eye contact; in those studies, gaze and body direction were changed as a whole, resulting in only congruent gaze and body directions (averted or directed) of another person. Here, we aimed to disentangle these effects by using short animated sequences of virtual agents posing with either direct or averted body or gaze. Attention allocation by means of eye movements, facial muscle response, and emotional experience towards agents of different gender and facial expressions were investigated. Eye movement data revealed longer fixation durations, i.e., a stronger allocation of attention, when gaze and body direction were not congruent with each other or when both were directed towards the observer. This suggests that direct interaction as well as incongruous signals increase the demands on attentional resources in the observer. For the facial muscle response, only the zygomaticus major revealed an effect of body direction, expressed as stronger activity in response to happy expressions for direct compared to averted gaze when the virtual character's body was directed towards the observer. Finally, body direction also influenced the emotional experience ratings of happy expressions. While earlier findings suggested that mutual eye contact is the main source of increased emotional responding and attentional allocation, the present results indicate that the direction of the virtual agent's body and head also plays a minor but significant role. Copyright © 2015 Elsevier B.V. All rights reserved.

  18. Developmental changes in facial expressions of emotions in the strange situation during the second year of life.

    PubMed

    Izard, Carroll E; Abe, Jo Ann A

    2004-09-01

    Infants' expressions of discrete emotions were coded during the more stressful episodes (4 through 8) of the Strange Situation at 13 and 18 months. The data showed a significant decrease in full-face expressions (more complex configurations of movements) and a significant increase in component expressions (simpler and more constrained patterns of movements). The authors interpreted this trend as a developmental change toward more regulated and less intense emotions. Consistent with this view, the aggregate index of infants' full-face negative emotion expressions, interpreted as reflecting relatively unregulated intense emotions, correlated significantly with maternal ratings of difficult temperament. The authors discuss alternative interpretations of the findings in terms of changes in reactivity/arousability and the emerging capacity for self-regulation. (c) 2004 APA, all rights reserved

  19. Outcome-dependent coactivation of lip and tongue primary somatosensory representation following hypoglossal-facial transfer after peripheral facial palsy.

    PubMed

    Rottler, Philipp; Schroeder, Henry W S; Lotze, Martin

    2014-02-01

A hypoglossal-facial transfer is a common surgical strategy for reanimating the face after persistent total hemifacial palsy. We were interested in how motor recovery is associated with cortical reorganization of lip and tongue representation in the primary sensorimotor cortex after the transfer. Therefore, we used functional magnetic resonance imaging (fMRI) in 13 patients who underwent a hypoglossal-facial transfer after unilateral peripheral facial palsy. To identify primary motor and somatosensory tongue and lip representation sites, we measured repetitive tongue and lip movements during fMRI. Electromyography (EMG) of the perioral muscles during tongue and lip movements and standardized evaluation of lip elevation served as outcome parameters. We found an association of cortical representation sites in the pre- and postcentral gyrus (decreased distance between lip and tongue representation) with the symmetry of recovered lip movements (lip elevation) and coactivation of the lip during voluntary tongue movements (EMG activity of the lip during tongue movements). Overall, our study shows that hypoglossal-facial transfer resulted in an outcome-dependent cortical reorganization, with activation of the cortical tongue area for restored movement of the lip. Copyright © 2012 Wiley Periodicals, Inc.

  20. Gaze control during face exploration in schizophrenia.

    PubMed

    Delerue, Céline; Laprévote, Vincent; Verfaillie, Karl; Boucart, Muriel

    2010-10-04

Patients with schizophrenia perform worse than controls on various face perception tasks. Studies monitoring eye movements have shown reduced scan paths and a lower number of fixations to relevant facial features (eyes, nose, mouth) than to other parts. We examined whether attentional control, through instructions, modulates visual scanning in schizophrenia. Visual scan paths were monitored in 20 patients with schizophrenia and 20 controls. Participants started with a "free viewing" task followed by tasks in which they were asked to determine the gender, identify the facial expression, estimate the age, or decide whether the face was known or unknown. Temporal and spatial characteristics of scan paths were compared for each group and task. Consistent with the literature, patients with schizophrenia showed reduced attention to salient facial features during free viewing. However, their scan paths did not differ from those of controls when asked to determine the facial expression, the gender, the age or the familiarity of the face. The results are interpreted in terms of attentional control and cognitive flexibility. (c) 2010 Elsevier Ireland Ltd. All rights reserved.

  1. Facial emotion recognition in Parkinson's disease: A review and new hypotheses

    PubMed Central

    Vérin, Marc; Sauleau, Paul; Grandjean, Didier

    2018-01-01

Parkinson's disease (PD) is a neurodegenerative disorder classically characterized by motor symptoms. Among them, hypomimia affects facial expressiveness and social communication and has a highly negative impact on patients' and relatives' quality of life. Patients also frequently experience nonmotor symptoms, including emotional-processing impairments, leading to difficulty in recognizing emotions from faces. Aside from its theoretical importance, understanding the disruption of facial emotion recognition in PD is crucial for improving quality of life for both patients and caregivers, as this impairment is associated with heightened interpersonal difficulties. However, studies assessing abilities in recognizing facial emotions in PD still report contradictory outcomes. The origins of this inconsistency are unclear, and several questions (regarding the role of dopamine replacement therapy or the possible consequences of hypomimia) remain unanswered. We therefore undertook a fresh review of relevant articles focusing on facial emotion recognition in PD to deepen current understanding of this nonmotor feature, exploring multiple significant potential confounding factors, both clinical and methodological, and discussing probable pathophysiological mechanisms. This led us to examine recent proposals about the role of basal ganglia-based circuits in emotion and to consider the involvement of facial mimicry in this deficit from the perspective of embodied simulation theory. We believe our findings will inform clinical practice and increase fundamental knowledge, particularly in relation to potential embodied emotion impairment in PD. © 2018 The Authors. Movement Disorders published by Wiley Periodicals, Inc. on behalf of International Parkinson and Movement Disorder Society. PMID:29473661

  2. Communication Assessment for Individuals with Rett Syndrome: A Systematic Review

    ERIC Educational Resources Information Center

    Sigafoos, Jeff; Kagohara, Debora; van der Meer, Larah; Green, Vanessa A.; O'Reilly, Mark F.; Lancioni, Giulio E.; Lang, Russell; Rispoli, Mandy; Zisimopoulos, Dimitrios

    2011-01-01

    We reviewed studies that aimed to determine whether behaviors, such as body movements, vocalizations, eye gaze, and facial expressions, served a communicative function for individuals with Rett syndrome. A systematic search identified eight studies, which were summarized in terms of (a) participants, (b) assessment targets, (c) assessment…

  3. A Response to Kirsten Fink-Jensen, "Attunement and Bodily Dialogues in Music Education"

    ERIC Educational Resources Information Center

    Leist, Christine Pollard

    2007-01-01

    Kirsten Fink-Jensen offers music educators new insights on lesson planning and engagement with students through careful observation and reflective interpretation of active student involvement in music. She suggests that the phenomenon of musical attunement, including facial expressions, gestures, language, and movements that are articulated…

  4. On the facilitative effects of face motion on face recognition and its development

    PubMed Central

    Xiao, Naiqi G.; Perrotta, Steve; Quinn, Paul C.; Wang, Zhe; Sun, Yu-Hao P.; Lee, Kang

    2014-01-01

    For the past century, researchers have extensively studied human face processing and its development. These studies have advanced our understanding of not only face processing, but also visual processing in general. However, most of what we know about face processing was investigated using static face images as stimuli. Therefore, an important question arises: to what extent does our understanding of static face processing generalize to face processing in real-life contexts in which faces are mostly moving? The present article addresses this question by examining recent studies on moving face processing to uncover the influence of facial movements on face processing and its development. First, we describe evidence on the facilitative effects of facial movements on face recognition and two related theoretical hypotheses: the supplementary information hypothesis and the representation enhancement hypothesis. We then highlight several recent studies suggesting that facial movements optimize face processing by activating specific face processing strategies that accommodate to task requirements. Lastly, we review the influence of facial movements on the development of face processing in the first year of life. We focus on infants' sensitivity to facial movements and explore the facilitative effects of facial movements on infants' face recognition performance. We conclude by outlining several future directions to investigate moving face processing and emphasize the importance of including dynamic aspects of facial information to further understand face processing in real-life contexts. PMID:25009517

  5. Face value: eye movements and the evaluation of facial crowds in social anxiety.

    PubMed

    Lange, Wolf-Gero; Heuer, Kathrin; Langner, Oliver; Keijsers, Ger P J; Becker, Eni S; Rinck, Mike

    2011-09-01

    Scientific evidence is equivocal on whether Social Anxiety Disorder (SAD) is characterized by a biased negative evaluation of (grouped) facial expressions, even though it is assumed that such a bias plays a crucial role in the maintenance of the disorder. To shed light on the underlying mechanisms of face evaluation in social anxiety, the eye movements of 22 highly socially anxious (SAs) and 21 non-anxious controls (NACs) were recorded while they rated the degree of friendliness of neutral-angry and smiling-angry face combinations. While the Crowd Rating Task data showed no significant differences between SAs and NACs, the resultant eye-movement patterns revealed that SAs, compared to NACs, looked away faster when the face first fixated was angry. Additionally, in SAs the proportion of fixated angry faces was significantly higher than for other expressions. Independent of social anxiety, these fixated angry faces were the best predictor of subsequent affect ratings for either group. Angry faces influence attentional processes such as eye movements in SAs and by doing so reflect biased evaluations. As these processes do not correlate with explicit ratings of faces, however, it remains unclear at what point implicit attentional behaviors lead to anxiety-prone behaviors and the maintenance of SAD. The relevance of these findings is discussed in the light of the current theories. Copyright © 2011 Elsevier Ltd. All rights reserved.

  6. Development of a Low-Cost, Noninvasive, Portable Visual Speech Recognition Program.

    PubMed

    Kohlberg, Gavriel D; Gal, Ya'akov Kobi; Lalwani, Anil K

    2016-09-01

    Loss of speech following tracheostomy and laryngectomy severely limits communication to simple gestures and facial expressions that are largely ineffective. To facilitate communication in these patients, we seek to develop a low-cost, noninvasive, portable, and simple visual speech recognition program (VSRP) to convert articulatory facial movements into speech. A Microsoft Kinect-based VSRP was developed to capture spatial coordinates of lip movements and translate them into speech. The articulatory speech movements associated with 12 sentences were used to train an artificial neural network classifier. The accuracy of the classifier was then evaluated on a separate, previously unseen set of articulatory speech movements. The VSRP was successfully implemented and tested in 5 subjects. It achieved an accuracy rate of 77.2% (65.0%-87.6% for the 5 speakers) on a 12-sentence data set. The mean time to classify an individual sentence was 2.03 milliseconds (1.91-2.16). We have demonstrated the feasibility of a low-cost, noninvasive, portable VSRP based on Kinect to accurately predict speech from articulation movements in clinically trivial time. This VSRP could be used as a novel communication device for aphonic patients. © The Author(s) 2016.
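    As a concrete illustration of the pipeline described above, the following is a minimal sketch of training a neural-network classifier on lip-movement features, assuming per-utterance lip coordinates have already been extracted from the sensor. The feature layout, network size, and stand-in data are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch: classify trained sentences from lip-landmark
# trajectories. Feature layout, network size, and stand-in data are
# illustrative assumptions, not the authors' implementation.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

N_FRAMES, N_LANDMARKS = 30, 8          # frames per utterance, lip points
rng = np.random.default_rng(0)

# Stand-in data: 10 recordings of each of 12 sentences; real features
# would be the tracked lip coordinates flattened per utterance.
X = rng.normal(size=(120, N_FRAMES * N_LANDMARKS * 2))
y = np.repeat(np.arange(12), 10)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```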

  7. Social anxiety and detection of facial untrustworthiness: Spatio-temporal oculomotor profiles.

    PubMed

    Gutiérrez-García, Aida; Calvo, Manuel G; Eysenck, Michael W

    2018-04-01

    Cognitive models posit that social anxiety is associated with biased attention to and interpretation of ambiguous social cues as threatening. We investigated attentional bias (selective early fixation on the eye region) to account for the tendency to distrust ambiguous smiling faces with non-happy eyes (interpretative bias). Eye movements and fixations were recorded while observers viewed video-clips displaying dynamic facial expressions. Low (LSA) and high (HSA) socially anxious undergraduates with clinical levels of anxiety judged expressers' trustworthiness. Social anxiety was unrelated to trustworthiness ratings for faces with congruent happy eyes and a smile, and for neutral expressions. However, social anxiety was associated with reduced trustworthiness rating for faces with an ambiguous smile, when the eyes slightly changed to neutrality, surprise, fear, or anger. Importantly, HSA observers looked earlier and longer at the eye region, whereas LSA observers preferentially looked at the smiling mouth region. This attentional bias in social anxiety generalizes to all the facial expressions, while the interpretative bias is specific for ambiguous faces. Such biases are adaptive, as they facilitate an early detection of expressive incongruences and the recognition of untrustworthy expressers (e.g., with fake smiles), with no false alarms when judging truly happy or neutral faces. Copyright © 2018 Elsevier B.V. All rights reserved.

  8. Human image tracking technique applied to remote collaborative environments

    NASA Astrophysics Data System (ADS)

    Nagashima, Yoshio; Suzuki, Gen

    1993-10-01

    To support various kinds of collaboration over long distances by using visual telecommunication, it is necessary to transmit visual information related to the participants and topical materials. When people collaborate in the same workspace, they use visual cues such as facial expressions and eye movement. The realization of coexistence in a collaborative workspace requires the support of these visual cues. Therefore, it is important that the facial images be large enough to be useful. During collaborations, especially dynamic collaborative activities such as equipment operation or lectures, the participants often move within the workspace. When people move frequently or over a wide area, the necessity for automatic human tracking increases. Depending on the person's movement area and the required resolution of the extracted area, we developed a memory tracking method and a camera tracking method for automatic human tracking. Experimental results using a real-time tracking system show that the extracted area tracks the movement of the human head fairly closely.

  9. Human Age Estimation Method Robust to Camera Sensor and/or Face Movement

    PubMed Central

    Nguyen, Dat Tien; Cho, So Ra; Pham, Tuyen Danh; Park, Kang Ryoung

    2015-01-01

    Human age can be employed in many useful real-life applications, such as customer service systems, automatic vending machines, entertainment, etc. In order to obtain age information, image-based age estimation systems have been developed using information from the human face. However, current age estimation systems face limitations because of various factors, such as camera motion and optical blurring, facial expressions, and gender. Motion blurring is usually introduced into face images by movement of the camera sensor and/or movement of the face during image acquisition. The facial features in captured images can therefore be distorted according to the amount of motion, which degrades the performance of age estimation systems. In this paper, the problem caused by motion blurring is addressed and a solution is proposed to make age estimation systems robust to the effects of motion blurring. Experimental results show that our method is more efficient at enhancing age estimation performance compared with systems that do not employ it. PMID:26334282
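    The abstract does not give the exact algorithm, but a common first step in handling blur is to quantify it before choosing a processing path. Below is a minimal sketch using the variance-of-Laplacian focus measure; the threshold value and the two downstream model callables (`sharp_model`, `blur_model`) are hypothetical assumptions for illustration.

```python
# Minimal sketch: route a face image by estimated motion blur before
# age estimation. The variance-of-Laplacian focus measure is a standard
# heuristic; the threshold and downstream models are assumptions.
import cv2

def blur_score(gray_image):
    """Higher values indicate sharper images."""
    return cv2.Laplacian(gray_image, cv2.CV_64F).var()

def estimate_age(image_bgr, sharp_model, blur_model, threshold=100.0):
    """Pick a blur-appropriate model, then estimate age (hypothetical)."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    model = sharp_model if blur_score(gray) >= threshold else blur_model
    return model(image_bgr)
```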

  10. Reconstruction of facial nerve after radical parotidectomy.

    PubMed

    Renkonen, Suvi; Sayed, Farid; Keski-Säntti, Harri; Ylä-Kotola, Tuija; Bäck, Leif; Suominen, Sinikka; Kanerva, Mervi; Mäkitie, Antti A

    2015-01-01

    Most patients benefitted from immediate facial nerve grafting after radical parotidectomy. Even weak movement is valuable and can be augmented with secondary static operations. Post-operative radiotherapy does not seem to affect the final outcome of facial function. During radical parotidectomy, the sacrifice of the facial nerve results in severe disfigurement of the face. Data on the principles and outcome of facial nerve reconstruction and reanimation after radical parotidectomy are limited and no consensus exists on the best practice. This study retrospectively reviewed all patients having undergone radical parotidectomy and immediate facial nerve reconstruction with a free, non-vascularized nerve graft at the Helsinki University Hospital, Helsinki, Finland during the years 1990-2010. There were 31 patients (18 male; mean age = 54.7 years; range = 30-82) and 23 of them had a sufficient follow-up time. Facial nerve function recovery was seen in 18 (78%) of the 23 patients with a minimum of 2-year follow-up and adequate reporting available. Only slight facial movement was observed in five (22%), moderate or good movement in nine (39%), and excellent movement in four (17%) patients. Twenty-two (74%) patients received post-operative radiotherapy and 16 (70%) of them had some recovery of facial nerve function. Nineteen (61%) patients needed secondary static reanimation of the face.

  11. Dual Temporal Scale Convolutional Neural Network for Micro-Expression Recognition

    PubMed Central

    Peng, Min; Wang, Chongyang; Chen, Tong; Liu, Guangyuan; Fu, Xiaolan

    2017-01-01

    Facial micro-expression is a brief involuntary facial movement that can reveal the genuine emotion people try to conceal. Traditional methods of spontaneous micro-expression recognition rely excessively on sophisticated hand-crafted feature design, and the recognition rate is not high enough for practical application. In this paper, we propose a Dual Temporal Scale Convolutional Neural Network (DTSCNN) for spontaneous micro-expression recognition. The DTSCNN is a two-stream network whose streams are adapted to different frame rates of micro-expression video clips. Each stream of the DTSCNN consists of an independent shallow network to avoid the overfitting problem. Meanwhile, we fed the networks with optical-flow sequences to ensure that the shallow networks can acquire higher-level features. Experimental results on spontaneous micro-expression databases (CASME I/II) showed that our method can achieve a recognition rate almost 10% higher than that of state-of-the-art methods. PMID:29081753
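    As a sketch of the two-stream idea the abstract describes, the following PyTorch snippet builds two independent shallow convolutional streams, each fed optical-flow clips sampled at a different frame rate, with their features concatenated for classification. All layer sizes, the fusion scheme, and class count are illustrative assumptions, not the published DTSCNN architecture.

```python
# Minimal sketch of a dual-temporal-scale, two-stream CNN (PyTorch).
# Inputs are optical-flow clips (2 channels) sampled at two frame
# rates; each stream is deliberately shallow. Sizes are illustrative.
import torch
import torch.nn as nn

class ShallowStream(nn.Module):
    """One shallow convolutional stream over an optical-flow clip."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(2, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),  # global pooling keeps the stream shallow
        )

    def forward(self, x):                     # x: (batch, 2, frames, H, W)
        return self.features(x).flatten(1)    # -> (batch, 16)

class DualTemporalScaleCNN(nn.Module):
    def __init__(self, n_classes=5):
        super().__init__()
        self.stream_hi = ShallowStream()   # clips sampled at a high frame rate
        self.stream_lo = ShallowStream()   # clips sampled at a low frame rate
        self.classifier = nn.Linear(16 * 2, n_classes)

    def forward(self, flow_hi, flow_lo):
        fused = torch.cat([self.stream_hi(flow_hi),
                           self.stream_lo(flow_lo)], dim=1)
        return self.classifier(fused)

model = DualTemporalScaleCNN()
logits = model(torch.randn(4, 2, 64, 32, 32),   # 64-frame flow clips
               torch.randn(4, 2, 16, 32, 32))   # 16-frame flow clips
print(logits.shape)                             # torch.Size([4, 5])
```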

  12. Sad people are more accurate at expression identification with a smaller own-ethnicity bias than happy people.

    PubMed

    Hills, Peter J; Hill, Dominic M

    2017-07-12

    Sad individuals perform more accurately at face identity recognition (Hills, Werno, & Lewis, 2011), possibly because they scan more of the face during encoding. During expression identification tasks, sad individuals do not fixate on the eyes as much as happier individuals do (Wu, Pu, Allen, & Pauli, 2012). Fixating on features other than the eyes leads to a reduced own-ethnicity bias (Hills & Lewis, 2006). This background suggests that sad individuals would view the eyes less than happy individuals, resulting in improved expression recognition and a reduced own-ethnicity bias. This prediction was tested using an expression identification task with eye tracking. We demonstrate that sad-induced participants show enhanced expression recognition and a reduced own-ethnicity bias compared with happy-induced participants, owing to their scanning more facial features. We conclude that mood affects eye movements and face encoding by inducing a wider sampling strategy and deeper encoding of the facial features diagnostic for expression identification.

  13. Visual attention mechanisms in happiness versus trustworthiness processing of facial expressions.

    PubMed

    Calvo, Manuel G; Krumhuber, Eva G; Fernández-Martín, Andrés

    2018-03-01

    A happy facial expression makes a person look (more) trustworthy. Do perceptions of happiness and trustworthiness rely on the same face regions and visual attention processes? In an eye-tracking study, eye movements and fixations were recorded while participants judged the un/happiness or the un/trustworthiness of dynamic facial expressions in which the eyes and/or the mouth unfolded from neutral to happy or vice versa. A smiling mouth and happy eyes enhanced perceived happiness and trustworthiness similarly, with a greater contribution of the smile relative to the eyes. This comparable judgement output for happiness and trustworthiness was reached through shared as well as distinct attentional mechanisms: (a) entry times and (b) initial fixation thresholds for each face region were equivalent for both judgements, thereby revealing the same attentional orienting in happiness and trustworthiness processing. However, (c) greater and (d) longer fixation density for the mouth region in the happiness task, and for the eye region in the trustworthiness task, demonstrated different selective attentional engagement. Relatedly, (e) mean fixation duration across face regions was longer in the trustworthiness task, thus showing increased attentional intensity or processing effort.

  14. Emotion Recognition in Face and Body Motion in Bulimia Nervosa.

    PubMed

    Dapelo, Marcela Marin; Surguladze, Simon; Morris, Robin; Tchanturia, Kate

    2017-11-01

    Social cognition has been studied extensively in anorexia nervosa (AN), but there are few studies in bulimia nervosa (BN). This study investigated the ability of people with BN to recognise emotions in ambiguous facial expressions and in body movement. Participants were 26 women with BN, who were compared with 35 with AN, and 42 healthy controls. Participants completed an emotion recognition task by using faces portraying blended emotions, along with a body emotion recognition task by using videos of point-light walkers. The results indicated that BN participants exhibited difficulties recognising disgust in less-ambiguous facial expressions, and a tendency to interpret non-angry faces as anger, compared with healthy controls. These difficulties were similar to those found in AN. There were no significant differences amongst the groups in body motion emotion recognition. The findings suggest that difficulties with disgust and anger recognition in facial expressions may be shared transdiagnostically in people with eating disorders. Copyright © 2017 John Wiley & Sons, Ltd and Eating Disorders Association.

  15. Derivation of simple rules for complex flow vector fields on the lower part of the human face for robot face design.

    PubMed

    Ishihara, Hisashi; Ota, Nobuyuki; Asada, Minoru

    2017-11-27

    It is quite difficult for android robots to replicate the numerous and various types of human facial expressions owing to limitations in terms of space, mechanisms, and materials. This situation could be improved with greater knowledge regarding these expressions and their deformation rules, i.e. by using the biomimetic approach. In a previous study, we investigated 16 facial deformation patterns and found that each facial point moves almost only in its own principal direction and different deformation patterns are created with different combinations of moving lengths. However, the replication errors caused by moving each control point of a face in only their principal direction were not evaluated for each deformation pattern at that time. Therefore, we calculated the replication errors in this study using the second principal component scores of the 16 sets of flow vectors at each point on the face. More than 60% of the errors were within 1 mm, and approximately 90% of them were within 3 mm. The average error was 1.1 mm. These results indicate that robots can replicate the 16 investigated facial expressions with errors within 3 mm and 1 mm for about 90% and 60% of the vectors, respectively, even if each point on the robot face moves in only its own principal direction. This finding seems promising for the development of robots capable of showing various facial expressions because significantly fewer types of movements than previously predicted are necessary.
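    The underlying computation is straightforward to sketch: for one facial control point, the principal movement direction can be taken from the first singular vector of its flow vectors across deformation patterns, and the replication error is the residual after projecting each vector onto that direction (equivalently, the magnitude of the second principal-component score). The snippet below is a minimal sketch with stand-in data; whether the study mean-centered the vectors is an assumption.

```python
# Minimal sketch: principal movement direction of one facial point and
# the per-pattern replication error from moving only along it.
# `flows` is a stand-in for measured 2-D flow vectors over 16 patterns.
import numpy as np

rng = np.random.default_rng(0)
flows = rng.normal(size=(16, 2))           # illustrative stand-in data

centered = flows - flows.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
principal = vt[0]                          # unit vector, main direction

projected = (centered @ principal[:, None]) * principal[None, :]
errors = np.linalg.norm(centered - projected, axis=1)  # mm in the study
print(f"max replication error: {errors.max():.2f}")
```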

  16. Kinesics and Cross-Cultural Understanding. Language in Education: Theory and Practice, No. 7.

    ERIC Educational Resources Information Center

    Morain, Genelle G.

    Body language, also called kinesics, is the discipline concerned with the study of all bodily motions that are communicative. An understanding of kinesics across cultures necessitates a close look at posture, movement, facial expression, eye management, gestures, and proxemics (distancing). The popularity of one posture over another and the…

  17. Effects of Mirror Therapy Using a Tablet PC on Central Facial Paresis in Stroke Patients.

    PubMed

    Kang, Jung-A; Chun, Min Ho; Choi, Su Jin; Chang, Min Cheol; Yi, You Gyoung

    2017-06-01

    To investigate the effects of mirror therapy using a tablet PC for post-stroke central facial paresis. A prospective, randomized controlled study was performed. Twenty-one post-stroke patients were enrolled. All patients performed 15 minutes of orofacial exercise twice daily for 14 days. The mirror group (n=10) underwent mirror therapy using a tablet PC while exercising, whereas the control group (n=11) did not. All patients were evaluated using the Regional House-Brackmann Grading Scale (R-HBGS), and the length between the corner of the mouth and the ipsilateral earlobe during rest and smiling before and after therapy were measured bilaterally. We calculated facial movement by subtracting the smile length from resting length. Differences and ratios between bilateral sides of facial movement were evaluated as the final outcome measure. Baseline characteristics were similar for the two groups. There were no differences in the scores for the basal Modified Barthel Index, the Korean version of Mini-Mental State Examination, National Institutes of Health Stroke Scale, R-HBGS, and bilateral differences and ratios of facial movements. The R-HBGS as well as the bilateral differences and ratios of facial movement showed significant improvement after therapy in both groups. The degree of improvement of facial movement was significantly larger in the mirror group than in the control group. Mirror therapy using a tablet PC might be an effective tool for treating central facial paresis after stroke.
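    The outcome computation described above is simple arithmetic; the following minimal sketch reproduces it with invented illustrative values (the direction of the bilateral ratio is an assumption).

```python
# Minimal sketch of the outcome measures described above: facial
# movement per side = resting length - smiling length (mouth corner to
# earlobe), then the bilateral difference and ratio. Values are
# illustrative, not patient data.
def facial_movement(rest_mm: float, smile_mm: float) -> float:
    return rest_mm - smile_mm

paretic = facial_movement(rest_mm=118.0, smile_mm=112.5)   # 5.5 mm
healthy = facial_movement(rest_mm=117.0, smile_mm=105.0)   # 12.0 mm

difference = abs(healthy - paretic)   # 6.5 mm between sides
ratio = paretic / healthy             # ~0.46; 1.0 would be full symmetry
print(difference, round(ratio, 2))
```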

  19. Optogenetic probing of nerve and muscle function after facial nerve lesion in the mouse whisker system

    NASA Astrophysics Data System (ADS)

    Bandi, Akhil; Vajtay, Thomas J.; Upadhyay, Aman; Yiantsos, S. Olga; Lee, Christian R.; Margolis, David J.

    2018-02-01

    Optogenetic modulation of neural circuits has opened new avenues into neuroscience research, allowing the control of cellular activity of genetically specified cell types. Optogenetics is still underdeveloped in the peripheral nervous system, yet there are many applications related to sensorimotor function, pain, and nerve injury that would be of great benefit. We recently established a method for non-invasive, transdermal optogenetic stimulation of the facial muscles that control whisker movements in mice (Park et al., 2016, eLife, e14140). Here we present results comparing the effects of optogenetic stimulation of whisker movements in mice that express channelrhodopsin-2 (ChR2) selectively in either the facial motor nerve (ChAT-ChR2 mice) or muscle (Emx1-ChR2 or ACTA1-ChR2 mice). We tracked changes in nerve and muscle function before and up to 14 days after nerve transection. Optogenetic 460 nm transdermal stimulation of the distal cut nerve showed that nerve degeneration progresses rapidly over 24 hours. In contrast, the whisker movements evoked by optogenetic muscle stimulation were up-regulated after denervation, including increased maximum protraction amplitude, increased sensitivity to low-intensity stimuli, and more sustained muscle contractions (reduced adaptation). Our results indicate that peripheral optogenetic stimulation is a promising technique for probing the time course of functional changes of both nerve and muscle, and holds potential for restoring movement after paralysis induced by nerve damage or motoneuron degeneration.

  20. Do surrounding figures' emotions affect judgment of the target figure's emotion? Comparing the eye-movement patterns of European Canadians, Asian Canadians, Asian international students, and Japanese.

    PubMed

    Masuda, Takahiko; Wang, Huaitang; Ishii, Keiko; Ito, Kenichi

    2012-01-01

    Although the effect of context on cognition is observable across cultures, preliminary findings suggest that when asked to judge the emotion of a target model's facial expression, East Asians are more likely than their North American counterparts to be influenced by the facial expressions of surrounding others (Masuda et al., 2008b). Cultural psychologists discuss this cultural variation in affective emotional context under the rubric of holistic vs. analytic thought, independent vs. interdependent self-construals, and socially disengaged vs. socially engaged emotion (e.g., Mesquita and Markus, 2004). We demonstrate that this effect is generalizable even when (1) photos of real facial emotions are used, (2) the saliency of the target model's emotion is attenuated, and (3) a specific amount of observation time is allocated. We further demonstrate that experience plays an important role in producing cultural variations in the affective context effect on cognition.

  1. Using the rear projection of the Socibot Desktop robot for creation of applications with facial expressions

    NASA Astrophysics Data System (ADS)

    Gîlcă, G.; Bîzdoacă, N. G.; Diaconu, I.

    2016-08-01

    This article aims to implement some practical applications using the Socibot Desktop social robot. We realize three applications: creating a speech sequence using the Kiosk menu of the browser interface, creating a program in the Virtual Robot browser interface, and making a new guise to be loaded into the robot's memory so that it can be projected onto the robot's face. The first application is created in the Compose submenu, which contains 5 file categories (audio, eyes, face, head, mood) that are helpful in creating the projected sequence. The second application is more complex, the completed program containing audio files, speeches (which can be created in over 20 languages), head movements, the robot's facial parameters as a function of the action units (AUs) of the facial muscles, its expressions, and its line of sight. The last application changes the robot's appearance using the guise we created. The guise was created in Adobe Photoshop and then loaded into the robot's memory.

  2. Useful Method for Intraoperative Monitoring of Facial Nerve in a Scarred Bed.

    PubMed

    Aysal, Bilge Kagan; Yapici, Abdulkerim; Bayram, Yalcin; Zor, Fatih

    2016-10-01

    The facial nerve is the main cranial nerve for the innervation of the facial expression muscles. The main trunk of the facial nerve passes approximately 1 to 2 cm deep to the tragal pointer. In patients who have undergone multiple operations, fibrosis from previous surgery may change the natural anatomy and direction of the branches of the facial nerve. A 22-year-old male patient had 2 operations for mandibular reconstruction after a gunshot wound. During the second operation, there was a possible injury to the marginal mandibular nerve, and a nerve stimulator was used intraoperatively to monitor the nerve at the tragal pointer, because the excitability of the distal segments remains intact for 24 to 48 hours after nerve injury. Thus, using a nerve stimulator at the operative site may lead to false-positive muscle movements in the case of injury, whereas using the stimulator on the main trunk at the tragal pointer may help to distinguish the presence of possible injuries. A reliable method for intraoperative facial nerve monitoring in a scarred operative site is introduced in this letter.

  3. A Virtual Environment to Improve the Detection of Oral-Facial Malfunction in Children with Cerebral Palsy.

    PubMed

    Martín-Ruiz, María-Luisa; Máximo-Bocanegra, Nuria; Luna-Oliva, Laura

    2016-03-26

    The importance of an early rehabilitation process in children with cerebral palsy (CP) is widely recognized. On the one hand, new and useful treatment tools such as rehabilitation systems based on interactive technologies have appeared for rehabilitation of gross motor movements. On the other hand, from the therapeutic point of view, performing rehabilitation exercises with the facial muscles can improve the swallowing process, the facial expression through the management of muscles in the face, and even the speech of children with cerebral palsy. However, it is difficult to find interactive games to improve the detection and evaluation of oral-facial musculature dysfunctions in children with CP. This paper describes a framework based on strategies developed for interactive serious games that is created both for typically developed children and children with disabilities. Four interactive games are the core of a Virtual Environment called SONRIE. This paper demonstrates the benefits of SONRIE to monitor children's oral-facial difficulties. The next steps will focus on the validation of SONRIE to carry out the rehabilitation process of oral-facial musculature in children with cerebral palsy.

  4. Human and animal sounds influence recognition of body language.

    PubMed

    Van den Stock, Jan; Grèzes, Julie; de Gelder, Beatrice

    2008-11-25

    In naturalistic settings emotional events have multiple correlates and are simultaneously perceived by several sensory systems. Recent studies have shown that recognition of facial expressions is biased towards the emotion expressed by a simultaneously presented emotional expression in the voice, even if attention is directed to the face only. So far, no study has examined whether this phenomenon also applies to whole body expressions, although there is no obvious reason why this crossmodal influence would be specific to faces. Here we investigated whether perception of emotions expressed in whole body movements is influenced by affective information provided by human and by animal vocalizations. Participants were instructed to attend to the action displayed by the body and to categorize the expressed emotion. The results indicate that recognition of body language is biased towards the emotion expressed by the simultaneously presented auditory information, whether it consists of human or animal sounds. Our results show that a crossmodal influence from auditory to visual emotional information occurs for whole-body video images with the facial expression blanked, and that it includes human as well as animal sounds.

  5. Preserved search asymmetry in the detection of fearful faces among neutral faces in individuals with Williams syndrome revealed by measurement of both manual responses and eye tracking.

    PubMed

    Hirai, Masahiro; Muramatsu, Yukako; Mizuno, Seiji; Kurahashi, Naoko; Kurahashi, Hirokazu; Nakamura, Miho

    2017-01-01

    Individuals with Williams syndrome (WS) exhibit an atypical social phenotype termed hypersociability. One theory accounting for hypersociability presumes an atypical function of the amygdala, which processes fear-related information. However, evidence is lacking regarding the detection mechanisms of fearful faces for individuals with WS. Here, we introduce a visual search paradigm to elucidate the mechanisms for detecting fearful faces by evaluating search asymmetry, that is, whether reaction times differ when the target and distractor expressions are swapped. Eye movements can reflect subtle atypical attentional properties, whereas manual responses are unable to capture atypical attentional profiles toward faces in individuals with WS. Therefore, we measured both eye movements and manual responses of individuals with WS and of typically developed children and adults during visual search for a fearful face among neutral faces or a neutral face among fearful faces. Two task measures (reaction time and performance accuracy) were analyzed for each stimulus, together with gaze behavior and initial fixation onset latency. Overall, reaction times in the WS group and the mentally age-matched control group were significantly longer than those in the chronologically age-matched group. We observed a search asymmetry effect in all groups: when a neutral target facial expression was presented among fearful faces, the reaction times were significantly prolonged in comparison with when a fearful target facial expression was displayed among neutral distractor faces. Furthermore, the first fixation onset latency of eye movement toward a target facial expression showed a tendency similar to that of the manual responses. Although overall responses in detecting fearful faces for individuals with WS are slower than those for control groups, search asymmetry was observed. Therefore, cognitive mechanisms underlying the detection of fearful faces seem to be typical in individuals with WS. This finding is discussed with reference to the amygdala account explaining hypersociability in individuals with WS.

  6. The effect of age and sex on facial mimicry: a three-dimensional study in healthy adults.

    PubMed

    Sforza, C; Mapelli, A; Galante, D; Moriconi, S; Ibba, T M; Ferraro, L; Ferrario, V F

    2010-10-01

    To assess sex- and age-related characteristics in standardized facial movements, 40 healthy adults (20 men, 20 women; aged 20-50 years) performed seven standardized facial movements (maximum smile; free smile; "surprise" with closed mouth; "surprise" with open mouth; eye closure; right- and left-side eye closures). The three-dimensional coordinates of 21 soft tissue facial landmarks were recorded by a motion analyser, their movements computed, and asymmetry indices calculated. Within each movement, total facial mobility was independent from sex and age (analysis of variance, p>0.05). Asymmetry indices of the eyes and mouth were similar in both sexes (p>0.05). Age significantly influenced eye and mouth asymmetries of the right-side eye closure, and eye asymmetry of the surprise movement. On average, the asymmetry indices of the symmetric movements were always lower than 8%, and most did not deviate from the expected value of 0 (Student's t). Larger asymmetries were found for the asymmetric eye closures (eyes, up to 50%, p<0.05; mouth, up to 30%, p<0.05 only in the 20-30-year-old subjects). In conclusion, sex and age had a limited influence on total facial motion and asymmetry in normal adult men and women. Copyright © 2010 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.
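    The asymmetry indices reported above are ratio measures over paired landmark movements. A minimal sketch of one common formulation follows, assuming the total movement of each mirrored landmark has already been computed; the exact formula used in the study is not given in the abstract, so the normalized-absolute-difference form here is an assumption.

```python
# Minimal sketch: percentage asymmetry index between paired landmarks.
# `left_mm` and `right_mm` are total 3-D movements of mirrored facial
# landmarks; the normalized-difference formulation is an assumption.
def asymmetry_index(left_mm: float, right_mm: float) -> float:
    return 100.0 * abs(left_mm - right_mm) / (left_mm + right_mm)

print(asymmetry_index(12.0, 11.2))  # ~3.4% -> nearly symmetric movement
print(asymmetry_index(12.0, 4.0))   # 50%  -> asymmetric eye closure
```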

  7. Cultural similarities and differences in the perception of emotional valence and intensity: a comparison of Americans and Hong Kong Chinese.

    PubMed

    Zhu, Zhuoying; Ho, Samuel M Y; Bonanno, George A

    2013-01-01

    Despite being challenged for their ecological validity, studies of emotion perception have often relied on static, posed expressions. One of the key reasons is that dynamic, spontaneous expressions are difficult to control because of the existence of display rules and frequent co-occurrence of non-emotion related facial movements. The present study investigated cross-cultural patterns in the perception of emotion using an expressive regulation paradigm for generating facial expressions. The paradigm largely balances out the competing concerns for ecological and internal validity. Americans and Hong Kong Chinese (expressors) were presented with positively and negatively valenced pictures and were asked to enhance, suppress, or naturally display their facial expressions according to their subjective emotions. Videos of naturalistic and dynamic expressions of emotions were rated by Americans and Hong Kong Chinese (judges) for valence and intensity. The 2 cultures agreed on the valence and relative intensity of emotion expressions, but cultural differences were observed in absolute intensity ratings. The differences varied between positive and negative expressions. With positive expressions, ratings were higher when there was a cultural match between the expressor and the judge and when the expression was enhanced by the expressor. With negative expressions, Chinese judges gave higher ratings than their American counterparts for Chinese expressions under all 3 expressive conditions, and the discrepancy increased with expression intensity; no cultural differences were observed when American expressions were judged. The results were discussed with respect to the "decoding rules" and "same-culture advantage" approaches of emotion perception and a negativity bias in the Chinese collective culture.

  8. Physiotherapy in patients with facial nerve paresis: description of outcomes.

    PubMed

    Beurskens, Carien H G; Heymans, Peter G

    2004-01-01

    The purpose of this study was to describe changes and stabilities of long-term sequelae of facial paresis in outpatients receiving mime therapy, a form of physiotherapy. Archived data of 155 patients with peripheral facial nerve paresis were analyzed. Main outcome measures were (1) impairments: facial symmetry in rest and during movements and synkineses; (2) disabilities: eating, drinking, and speaking; and (3) quality of life. Symmetry at rest improved significantly; the average severity of the asymmetry in all movements decreased. The number of synkineses increased for 3 out of 8 movements; however, the group average severities decreased for 6 movements; substantially fewer patients reported disabilities in eating, drinking, and speaking; and quality of life improved significantly. During a period of approximately 3 months, significant changes in many aspects of facial functioning were observed, the relative position of patients remaining stable over time. Observed changes occurred while the patients participated in a program for facial rehabilitation (mime therapy), replicating the randomized controlled trial-proven benefits of mime therapy in a more varied sample of outpatients.

  9. Preliminary Analysis of the 3-Dimensional Morphology of the Upper Lip Configuration at the Completion of Facial Expressions in Healthy Japanese Young Adults and Patients With Cleft Lip.

    PubMed

    Matsumoto, Kouzou; Nozoe, Etsuro; Okawachi, Takako; Ishihata, Kiyohide; Nishinara, Kazuhide; Nakamura, Norifumi

    2016-09-01

    To develop criteria for the analysis of upper lip configuration of patients with cleft lip while they produce various facial expressions by comparing the 3-dimensional (3D) facial morphology of healthy Japanese adults and patients with cleft lip. Twenty healthy adult Japanese volunteers (10 men, 10 women, controls) without any observed facial abnormalities and 8 patients (4 men, 4 women) with unilateral cleft lip and palate who had undergone secondary lip and nose repair were recruited for this study. Facial expressions (resting, smiling, and blowing out a candle) were recorded with 2 Artec MHT 3D scanners, and images were superimposed by aligning the T-zone of the faces. The positions of 14 specific points were set on each face, and the positional changes of specific points and symmetry of the upper lip cross-section were analyzed. Furthermore, the configuration observed in healthy controls was compared with that in patients with cleft lip before and after surgery. The mean absolute values for T-zone overlap ranged from 0.04 to 0.15 mm. Positional changes of specific points in the controls showed that the nose and lip moved backward and laterally upward when smiling and the lips moved forward and downward medially when blowing out a candle; these movements were bilaterally symmetrical in men and women. In patients with cleft lip, the positional changes of the specific points were minor compared with those of the controls while smiling and blowing out a candle. The left-versus-right symmetry of the upper lip cross-section exceeded 1.0 mm in patients with cleft lip, which was markedly higher than that in the controls (0.17 to 0.91 mm). These left-versus-right differences during facial expressions were decreased after surgery. By comparing healthy individuals with patients with cleft lip, this study has laid the basis for determining control values for facial expressions. Copyright © 2016 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.
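    Superimposing scans "by aligning the T-zone" implies a rigid, landmark-based registration before measuring positional changes. The sketch below uses the Kabsch algorithm for that step; the choice of algorithm, the function name, and the stand-in data are assumptions, since the abstract does not name a specific method.

```python
# Minimal sketch: rigid superimposition of two landmark sets (Kabsch
# algorithm). T-zone landmarks of an expression scan are aligned onto
# the resting scan before measuring changes of the other points.
import numpy as np

def kabsch_align(moving, fixed):
    """Return `moving` rigidly aligned onto `fixed` (both n x 3)."""
    mc, fc = moving.mean(axis=0), fixed.mean(axis=0)
    H = (moving - mc).T @ (fixed - fc)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # avoid reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return (moving - mc) @ R.T + fc

rng = np.random.default_rng(0)
rest = rng.normal(size=(14, 3))            # 14 specific points (stand-in)
smile = kabsch_align(rest + 0.1, rest)     # toy case: pure translation
print(np.abs(smile - rest).max())          # ~0 -> overlap error removed
```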

  10. A Multidisciplinary Approach for Rehabilitation of Enucleated Sockets: Ocular Implants with Custom Ocular Prosthesis.

    PubMed

    Choudhury, Minati; Banu, Fathima; Natarajan, Shanmuganathan; Kumar, Anand; Tv, Padmanabhan

    2018-02-16

    Interdisciplinary prosthodontics goes beyond our imagination into fields that have a direct effect on our total body health and quality of life. Removal of an eye has a detrimental effect on the psychology of the patient. Enucleation involves removal of the eyeball proper and leads to an enophthalmic socket with a shrunken eye, which has a crippling effect on patient's emotional and social life. Custom-made eye prosthesis simulates the characteristics of the companion eye and helps in restoring the normal facial appearance. Restoration of saccadic eye movements occurring during speech is desirable because this greatly contributes to a normal facial expression. This can be achieved by an orbital implant, which helps in orbital volume replacement and restoration of prosthesis movement and comfort. This article describes prosthodontic rehabilitation of enucleated eye sockets with orbital implants for two patients.

  11. [Neurological disease and facial recognition].

    PubMed

    Kawamura, Mitsuru; Sugimoto, Azusa; Kobayakawa, Mutsutaka; Tsuruya, Natsuko

    2012-07-01

    To discuss the neurological basis of facial recognition, we present our case reports of impaired recognition and a review of previous literature. First, we present a case of infarction and discuss prosopagnosia, which has had a large impact on face recognition research. From a study of patient symptoms, we assume that prosopagnosia may be caused by unilateral right occipitotemporal lesion and right cerebral dominance of facial recognition. Further, circumscribed lesion and degenerative disease may also cause progressive prosopagnosia. Apperceptive prosopagnosia is observed in patients with posterior cortical atrophy (PCA), pathologically considered as Alzheimer's disease, and associative prosopagnosia in frontotemporal lobar degeneration (FTLD). Second, we discuss face recognition as part of communication. Patients with Parkinson disease show social cognitive impairments, such as difficulty in facial expression recognition and deficits in theory of mind as detected by the reading the mind in the eyes test. Pathological and functional imaging studies indicate that social cognitive impairment in Parkinson disease is possibly related to damages in the amygdalae and surrounding limbic system. The social cognitive deficits can be observed in the early stages of Parkinson disease, and even in the prodromal stage, for example, patients with rapid eye movement (REM) sleep behavior disorder (RBD) show impairment in facial expression recognition. Further, patients with myotonic dystrophy type 1 (DM 1), which is a multisystem disease that mainly affects the muscles, show social cognitive impairment similar to that of Parkinson disease. Our previous study showed that facial expression recognition impairment of DM 1 patients is associated with lesion in the amygdalae and insulae. Our study results indicate that behaviors and personality traits in DM 1 patients, which are revealed by social cognitive impairment, are attributable to dysfunction of the limbic system.

  12. A Courseware to Script Animated Pedagogical Agents in Instructional Material for Elementary Students in English Education

    ERIC Educational Resources Information Center

    Hong, Zeng-Wei; Chen, Yen-Lin; Lan, Chien-Ho

    2014-01-01

    Animated agents are virtual characters who demonstrate facial expressions, gestures, movements, and speech to facilitate students' engagement in the learning environment. Our research developed a courseware that supports a XML-based markup language and an authoring tool for teachers to script animated pedagogical agents in teaching materials. The…

  13. Cardiac and Behavioral Evidence for Emotional Influences on Attention in 7-Month-Old Infants

    ERIC Educational Resources Information Center

    Leppanen, Jukka; Peltola, Mikko J.; Mantymaa, Mirjami; Koivuluoma, Mikko; Salminen, Anni; Puura, Kaija

    2010-01-01

    To examine the ontogeny of emotion-attention interactions, we investigated whether infants exhibit adult-like biases in automatic and voluntary attentional processes towards fearful facial expressions. Heart rate and saccadic eye movements were measured from 7-month-old infants (n = 42) while viewing non-face control stimuli, and neutral, happy,…

  14. Myomodulation with Injectable Fillers: An Innovative Approach to Addressing Facial Muscle Movement.

    PubMed

    de Maio, Maurício

    2018-06-01

    Consideration of facial muscle dynamics is underappreciated among clinicians who provide injectable filler treatment. Injectable fillers are customarily used to fill static wrinkles, folds, and localized areas of volume loss, whereas neuromodulators are used to address excessive muscle movement. However, a more comprehensive understanding of the role of muscle function in facial appearance, taking into account biomechanical concepts such as the balance of activity among synergistic and antagonistic muscle groups, is critical to restoring facial appearance to that of a typical youthful individual with facial esthetic treatments. Failure to fully understand the effects of loss of support (due to aging or congenital structural deficiency) on muscle stability and interaction can result in inadequate or inappropriate treatment, producing an unnatural appearance. This article outlines these concepts to provide an innovative framework for an understanding of the role of muscle movement on facial appearance and presents cases that illustrate how modulation of muscle movement with injectable fillers can address structural deficiencies, rebalance abnormal muscle activity, and restore facial appearance. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266 .

  15. Efficient quantitative assessment of facial paralysis using iris segmentation and active contour-based key points detection with hybrid classifier.

    PubMed

    Barbosa, Jocelyn; Lee, Kyubum; Lee, Sunwon; Lodhi, Bilal; Cho, Jae-Gu; Seo, Woo-Keun; Kang, Jaewoo

    2016-03-12

    Facial palsy or paralysis (FP) is a symptom involving the loss of voluntary muscle movement on one side of the human face, which can be devastating for patients. Traditional assessment methods depend solely on the clinician's judgment and are therefore time-consuming and subjective. Hence, a quantitative assessment system is invaluable for physicians beginning the rehabilitation process, and producing a reliable and robust method is challenging and still underway. We introduce a novel approach to the quantitative assessment of facial paralysis that tackles the classification problem for FP type and degree of severity. Specifically, a novel method of quantitative assessment is presented: an algorithm that extracts the human iris and detects facial landmarks, and a hybrid approach combining rule-based and machine-learning algorithms to analyze and prognosticate facial paralysis using the captured images. A method combining the optimized Daugman's algorithm and a Localized Active Contour (LAC) model is proposed to efficiently extract the iris and the facial landmarks or key points. To improve the performance of LAC, appropriate parameters of the initial evolving curve for facial feature segmentation are automatically selected. The symmetry score is measured by the ratio between features extracted from the two sides of the face. Hybrid classifiers (i.e., rule-based with regularized logistic regression) were employed for discriminating healthy and unhealthy subjects, for FP type classification, and for facial paralysis grading based on the House-Brackmann (H-B) scale. Quantitative analysis was performed to evaluate the performance of the proposed approach, and the experiments demonstrate its efficiency. Facial movement feature extraction based on iris segmentation and LAC-based key-point detection, along with a hybrid classifier, provides a more efficient way of addressing the classification problem of facial palsy type and degree of severity. Combining iris segmentation with the key-point-based method has several merits that are essential for this real application. Aside from the facial key points, iris segmentation makes a significant contribution, as it describes the changes in iris exposure while performing facial expressions. It reveals the significant difference between the healthy side and the severe palsy side when raising the eyebrows with both eyes directed upward, and it can model the typical changes in the iris region.
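    Two of the quantitative ingredients named above are easy to sketch: a symmetry score as the ratio of a feature measured on each half of the face, and a regularized logistic regression over such scores. The feature names, stand-in data, and decision boundary below are simplified assumptions; the paper's rule-based stage and full feature set are not reproduced here.

```python
# Minimal sketch: facial-palsy grading ingredients. Symmetry score =
# ratio between a feature measured on both sides of the face; a
# regularized logistic regression separates healthy/unhealthy subjects.
# Feature names and stand-in data are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

def symmetry_score(left: float, right: float) -> float:
    """Ratio in (0, 1]; 1.0 means perfectly symmetric."""
    lo, hi = sorted((left, right))
    return lo / hi

rng = np.random.default_rng(0)
# Columns: iris-exposure symmetry, mouth-corner-lift symmetry (stand-ins)
healthy = rng.uniform(0.85, 1.00, size=(40, 2))
palsy = rng.uniform(0.40, 0.80, size=(40, 2))
X = np.vstack([healthy, palsy])
y = np.array([0] * 40 + [1] * 40)

clf = LogisticRegression(C=1.0)   # C controls L2 regularization strength
clf.fit(X, y)
print(clf.predict([[0.95, 0.97], [0.55, 0.62]]))  # -> [0 1]
```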

  16. Social Experience Does Not Abolish Cultural Diversity in Eye Movements

    PubMed Central

    Kelly, David J.; Jack, Rachael E.; Miellet, Sébastien; De Luca, Emanuele; Foreman, Kay; Caldara, Roberto

    2011-01-01

    Adults from Eastern (e.g., China) and Western (e.g., USA) cultural groups display pronounced differences in a range of visual processing tasks. For example, the eye movement strategies used for information extraction during a variety of face processing tasks (e.g., identification and facial expressions of emotion categorization) differs across cultural groups. Currently, many of the differences reported in previous studies have asserted that culture itself is responsible for shaping the way we process visual information, yet this has never been directly investigated. In the current study, we assessed the relative contribution of genetic and cultural factors by testing face processing in a population of British Born Chinese adults using face recognition and expression classification tasks. Contrary to predictions made by the cultural differences framework, the majority of British Born Chinese adults deployed “Eastern” eye movement strategies, while approximately 25% of participants displayed “Western” strategies. Furthermore, the cultural eye movement strategies used by individuals were consistent across recognition and expression tasks. These findings suggest that “culture” alone cannot straightforwardly account for diversity in eye movement patterns. Instead a more complex understanding of how the environment and individual experiences can influence the mechanisms that govern visual processing is required. PMID:21886626

  17. Diagnostic Features of Emotional Expressions Are Processed Preferentially

    PubMed Central

    Scheller, Elisa; Büchel, Christian; Gamer, Matthias

    2012-01-01

    Diagnostic features of emotional expressions are differentially distributed across the face. The current study examined whether these diagnostic features are preferentially attended to even when they are irrelevant for the task at hand or when faces appear at different locations in the visual field. To this aim, fearful, happy and neutral faces were presented to healthy individuals in two experiments while measuring eye movements. In Experiment 1, participants had to accomplish an emotion classification, a gender discrimination or a passive viewing task. To differentiate fast, potentially reflexive, eye movements from a more elaborate scanning of faces, stimuli were either presented for 150 or 2000 ms. In Experiment 2, similar faces were presented at different spatial positions to rule out the possibility that eye movements only reflect a general bias for certain visual field locations. In both experiments, participants fixated the eye region much longer than any other region in the face. Furthermore, the eye region was attended to more pronouncedly when fearful or neutral faces were shown whereas more attention was directed toward the mouth of happy facial expressions. Since these results were similar across the other experimental manipulations, they indicate that diagnostic features of emotional expressions are preferentially processed irrespective of task demands and spatial locations. Saliency analyses revealed that a computational model of bottom-up visual attention could not explain these results. Furthermore, as these gaze preferences were evident very early after stimulus onset and occurred even when saccades did not allow for extracting further information from these stimuli, they may reflect a preattentive mechanism that automatically detects relevant facial features in the visual field and facilitates the orientation of attention towards them. This mechanism might crucially depend on amygdala functioning and it is potentially impaired in a number of clinical conditions such as autism or social anxiety disorders. PMID:22848607

  19. Quantification of functional results after facial burns by the faciometer.

    PubMed

    Koller, R; Kargül, G; Giovanoli, P; Meissl, G; Frey, M

    2000-12-01

    In the present study the faciometer® is introduced in order to quantify the ranges of mimic movements observed after surgical treatment of facial burns. This instrument, which consists of calipers and an electronic display, was introduced in 1994 to measure the extent of facial palsy during reconstructive procedures. The study group consisted of 23 patients who had been operated on for facial burns. The distances between standardised stable and moving points on the face were determined after mimic movements such as lifting of the eyebrows, maximum showing of the teeth, and pursing of the lips. These distances were expressed as a percentage of the distance at rest. For comparison, the scars were classified according to the Vancouver Scar Scale. In all patients the functional results after burn trauma to the face and, in some cases, asymmetries at rest could be objectified. Depending upon the severity of scarring, the distance between the tragus and the mouth was shortened by between 0 and 19% after maximal showing of the teeth. In general, the mouth region showed more functional deficits than the forehead. Comparing different treatments, it could be objectively demonstrated that the results after deep burns requiring skin grafts were worse than those observed after more superficial lesions and other methods of coverage. The application of keratinocytes to close the burn showed highly variable results.
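    The faciometer outcome is a percentage of the resting distance, so the reported 0-19% shortening range follows from a one-line computation; the distances below are invented for illustration.

```python
# Minimal sketch: faciometer-style excursion expressed as a percentage
# of the resting distance (tragus to mouth corner). Values illustrative.
def excursion_percent(rest_mm: float, movement_mm: float) -> float:
    return 100.0 * movement_mm / rest_mm

# A resting tragus-mouth distance of 120 mm shortening to 100 mm on
# maximal showing of the teeth is 83.3% of the resting distance, i.e.
# a 16.7% shortening -- within the 0-19% range reported above.
print(round(excursion_percent(120.0, 100.0), 1))  # 83.3
```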

  20. Human Facial Expressions as Adaptations: Evolutionary Questions in Facial Expression Research

    PubMed Central

    SCHMIDT, KAREN L.; COHN, JEFFREY F.

    2007-01-01

    The importance of the face in social interaction and social intelligence is widely recognized in anthropology. Yet the adaptive functions of human facial expression remain largely unknown. An evolutionary model of human facial expression as behavioral adaptation can be constructed, given the current knowledge of the phenotypic variation, ecological contexts, and fitness consequences of facial behavior. Studies of facial expression are available, but results are not typically framed in an evolutionary perspective. This review identifies the relevant physical phenomena of facial expression and integrates the study of this behavior with the anthropological study of communication and sociality in general. Anthropological issues with relevance to the evolutionary study of facial expression include: facial expressions as coordinated, stereotyped behavioral phenotypes, the unique contexts and functions of different facial expressions, the relationship of facial expression to speech, the value of facial expressions as signals, and the relationship of facial expression to social intelligence in humans and in nonhuman primates. Human smiling is used as an example of adaptation, and testable hypotheses concerning the human smile, as well as other expressions, are proposed. PMID:11786989

  1. A Virtual Environment to Improve the Detection of Oral-Facial Malfunction in Children with Cerebral Palsy

    PubMed Central

    Martín-Ruiz, María-Luisa; Máximo-Bocanegra, Nuria; Luna-Oliva, Laura

    2016-01-01

    The importance of an early rehabilitation process in children with cerebral palsy (CP) is widely recognized. On the one hand, new and useful treatment tools such as rehabilitation systems based on interactive technologies have appeared for rehabilitation of gross motor movements. On the other hand, from the therapeutic point of view, performing rehabilitation exercises with the facial muscles can improve the swallowing process, facial expression through the management of the muscles of the face, and even the speech of children with cerebral palsy. However, it is difficult to find interactive games to improve the detection and evaluation of oral-facial musculature dysfunctions in children with CP. This paper describes a framework based on strategies developed for interactive serious games that is created both for typically developing children and children with disabilities. Four interactive games are the core of a Virtual Environment called SONRIE. This paper demonstrates the benefits of SONRIE for monitoring children’s oral-facial difficulties. The next steps will focus on the validation of SONRIE to carry out the rehabilitation process of oral-facial musculature in children with cerebral palsy. PMID:27023561

  2. Gaze Behavior Consistency among Older and Younger Adults When Looking at Emotional Faces

    PubMed Central

    Chaby, Laurence; Hupont, Isabelle; Avril, Marie; Luherne-du Boullay, Viviane; Chetouani, Mohamed

    2017-01-01

    The identification of non-verbal emotional signals, and especially of facial expressions, is essential for successful social communication among humans. Previous research has reported an age-related decline in facial emotion identification and argued for socio-emotional or aging-brain model explanations. However, perceptual differences in the gaze strategies that accompany facial emotion processing with advancing age remain under-explored. In this study, 22 young (22.2 years) and 22 older (70.4 years) adults were instructed to look at basic facial expressions while their gaze movements were recorded by an eye-tracker. Participants were then asked to identify each emotion, and the unbiased hit rate was applied as the performance measure. Gaze data were first analyzed using traditional measures of fixations over two preferential regions of the face (upper and lower areas) for each emotion. Then, to better capture core gaze changes with advancing age, spatio-temporal gaze behaviors were examined in more depth using data-driven analyses (dimension reduction, clustering). Results first confirmed that older adults performed worse than younger adults at identifying facial expressions, except for “joy” and “disgust,” and this was accompanied by a gaze preference toward the lower face. Interestingly, this phenomenon was maintained during the whole time course of stimulus presentation. More importantly, trials corresponding to older adults were more tightly clustered, suggesting that the gaze behavior patterns of older adults are more consistent than those of younger adults. This study demonstrates that, when confronted with emotional faces, younger and older adults do not prioritize or ignore the same facial areas. Older adults mainly adopted a focused-gaze strategy, focusing only on the lower part of the face throughout the whole stimulus display time. This consistency may constitute a robust and distinctive “social signature” of emotional identification in aging. Younger adults, however, were more dispersed in terms of gaze behavior and used a more exploratory-gaze strategy, repeatedly visiting both facial areas. PMID:28450841

  3. Characterization and recognition of mixed emotional expressions in thermal face image

    NASA Astrophysics Data System (ADS)

    Saha, Priya; Bhattacharjee, Debotosh; De, Barin K.; Nasipuri, Mita

    2016-05-01

    Facial expression analysis in infrared imaging has been introduced to solve the problem of illumination, which is an integral constituent of visible imagery. The paper investigates facial skin temperature distribution over mixed thermal facial expressions in our purpose-built face database, in which six expressions are basic and the remaining 12 are mixtures of those basic expressions. Temperature analysis has been performed on three facial regions of interest (ROIs): periorbital, supraorbital, and mouth. Temperature variability of the ROIs across different expressions has been measured using statistical parameters. The temperature variation measurements in the ROIs of a particular expression form a vector, which is later used in the recognition of mixed facial expressions. Investigations show that facial features in mixed facial expressions can be characterized by positive-emotion-induced facial features and negative-emotion-induced facial features. The supraorbital region is a useful facial region for differentiating basic expressions from mixed expressions. Analysis and interpretation of mixed expressions have been conducted with the help of box-and-whisker plots. A facial region containing a mixture of two expressions generally induces less temperature change than the corresponding facial region in a basic expression.
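
    The per-ROI temperature vectors described above might be assembled as in the sketch below. The abstract does not specify which statistical parameters were used, so mean, standard deviation, and range are assumed here for illustration; the pixel temperatures are hypothetical.

```python
# Sketch: turn per-ROI temperature distributions into a feature vector for
# expression recognition. Mean, standard deviation and range are assumed
# statistics (the abstract does not name them); data are hypothetical.

import numpy as np

def roi_features(periorbital, supraorbital, mouth):
    """Concatenate simple temperature statistics from three facial ROIs."""
    feats = []
    for roi in (periorbital, supraorbital, mouth):
        roi = np.asarray(roi, dtype=float)
        feats.extend([roi.mean(), roi.std(), roi.max() - roi.min()])
    return np.array(feats)

# Hypothetical pixel temperatures (deg C) for one thermal face image
vec = roi_features(
    periorbital=[34.1, 34.5, 34.3],
    supraorbital=[33.2, 33.6, 33.4],
    mouth=[32.8, 33.9, 33.1],
)
print(vec.shape)  # (9,) -> one 9-dimensional vector per expression image
```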

  4. Looking, Smiling, Laughing, and Moving in Restaurants: Sex and Age Differences.

    ERIC Educational Resources Information Center

    Adams, Robert M.; Kirkevold, Barbara

    Body movements and facial expressions of males and females in a restaurant setting were examined, with the goal of documenting differences in frequency as a function of age and sex. The subjects (N = 197 males and N = 131 females) were seated in three Seattle fast food restaurants and were selected on a semi-random basis and then observed for three…

  5. Reading Faces: Differential Lateral Gaze Bias in Processing Canine and Human Facial Expressions in Dogs and 4-Year-Old Children

    PubMed Central

    Racca, Anaïs; Guo, Kun; Meints, Kerstin; Mills, Daniel S.

    2012-01-01

    Sensitivity to the emotions of others provides clear biological advantages. However, in the case of heterospecific relationships, such as that existing between dogs and humans, there are additional challenges since some elements of the expression of emotions are species-specific. Given that faces provide important visual cues for communicating emotional state in both humans and dogs, and that processing of emotions is subject to brain lateralisation, we investigated lateral gaze bias in adult dogs when presented with pictures of expressive human and dog faces. Our analysis revealed clear differences in laterality of eye movements in dogs towards conspecific faces according to the emotional valence of the expressions. Differences were also found towards human faces, but to a lesser extent. For comparative purposes, a similar experiment was also run with 4-year-old children, and it was observed that they showed differential processing of facial expressions compared to dogs, suggesting a species-dependent engagement of the right or left hemisphere in processing emotions. PMID:22558335

  6. The detrimental effects of atypical nonverbal behavior on older adults' first impressions of individuals with Parkinson's disease.

    PubMed

    Hemmesch, Amanda R

    2014-09-01

    After viewing short video clips of individuals with Parkinson's disease (PD) who varied in the symptoms of facial masking (reduced expressivity) and abnormal bodily movement (ABM: including tremor and related movement disorders), older adult observers provided their first impressions of targets' social positivity. Impressions of targets with higher masking or ABM were more negative than impressions of targets with lower masking or ABM. Furthermore, masking was more detrimental for impressions of women and when observers considered emotional relationship goals, whereas ABM was more detrimental for instrumental relationship goals. This study demonstrated the stigmatizing effects of both reduced and excessive movement. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  7. Eye-movement assessment of the time course in facial expression recognition: Neurophysiological implications.

    PubMed

    Calvo, Manuel G; Nummenmaa, Lauri

    2009-12-01

    Happy, surprised, disgusted, angry, sad, fearful, and neutral faces were presented extrafoveally, with fixations on faces allowed or not. The faces were preceded by a cue word that designated the face to be saccaded in a two-alternative forced-choice discrimination task (2AFC; Experiments 1 and 2), or were followed by a probe word for recognition (Experiment 3). Eye tracking was used to decompose the recognition process into stages. Relative to the other expressions, happy faces (1) were identified faster (as early as 160 msec from stimulus onset) in extrafoveal vision, as revealed by shorter saccade latencies in the 2AFC task; (2) required less encoding effort, as indexed by shorter first fixations and dwell times; and (3) required less decision-making effort, as indicated by fewer refixations on the face after the recognition probe was presented. This reveals a happy-face identification advantage both prior to and during overt attentional processing. The results are discussed in relation to prior neurophysiological findings on latencies in facial expression recognition.

  8. Facial dynamics and emotional expressions in facial aging treatments.

    PubMed

    Michaud, Thierry; Gassia, Véronique; Belhaouari, Lakhdar

    2015-03-01

    Facial expressions convey emotions that form the foundation of interpersonal relationships, and many of these emotions promote and regulate our social linkages. Hence, the symptomatological analysis of facial aging and the treatment plan must necessarily include knowledge of facial dynamics and the emotional expressions of the face. This approach aims to more closely meet patients' expectations of natural-looking results by correcting age-related negative expressions while observing the emotional language of the face. This article successively describes patients' expectations, the role of facial expressions in relational dynamics, the relationship between facial structures and facial expressions, and the way facial aging mimics negative expressions. Finally, therapeutic implications for facial aging treatment are addressed. © 2015 Wiley Periodicals, Inc.

  9. When your face describes your memories: facial expressions during retrieval of autobiographical memories.

    PubMed

    El Haj, Mohamad; Daoudi, Mohamed; Gallouj, Karim; Moustafa, Ahmed A; Nandrino, Jean-Louis

    2018-05-11

    Thanks to current advances in the software analysis of facial expressions, there is burgeoning interest in understanding the emotional facial expressions observed during the retrieval of autobiographical memories. This review describes the research on facial expressions during autobiographical retrieval, showing distinct emotional facial expressions according to the characteristics of retrieved memories. More specifically, this research demonstrates that the retrieval of emotional memories can trigger corresponding emotional facial expressions (e.g. positive memories may trigger positive facial expressions). It also demonstrates variations of facial expressions according to the specificity, self-relevance, or past versus future direction of memory construction. Besides linking research on facial expressions during autobiographical retrieval to the cognitive and affective characteristics of autobiographical memory in general, this review positions this research within the broader context of research on the physiological characteristics of autobiographical retrieval. We also provide several perspectives for clinical studies to investigate facial expressions in populations with deficits in autobiographical memory (e.g. whether autobiographical overgenerality in neurologic and psychiatric populations may trigger fewer emotional facial expressions). In sum, this review demonstrates how the evaluation of facial expressions during autobiographical retrieval may help in understanding the functioning and dysfunction of autobiographical memory.

  10. Repeated short presentations of morphed facial expressions change recognition and evaluation of facial expressions.

    PubMed

    Moriya, Jun; Tanno, Yoshihiko; Sugiura, Yoshinori

    2013-11-01

    This study investigated whether sensitivity to and evaluation of facial expressions varied with repeated exposure to non-prototypical facial expressions for a short presentation time. A morphed facial expression was presented for 500 ms repeatedly, and participants were required to indicate whether each facial expression was happy or angry. We manipulated the distribution of presentations of the morphed facial expressions for each facial stimulus. Some of the individuals depicted in the facial stimuli expressed anger frequently (i.e., anger-prone individuals), while the others expressed happiness frequently (i.e., happiness-prone individuals). After being exposed to the faces of anger-prone individuals, the participants became less sensitive to those individuals' angry faces. Further, after being exposed to the faces of happiness-prone individuals, the participants became less sensitive to those individuals' happy faces. We also found a relative increase in the social desirability of happiness-prone individuals after exposure to the facial stimuli.

  11. Turning Avatar into Realistic Human Expression Using Linear and Bilinear Interpolations

    NASA Astrophysics Data System (ADS)

    Hazim Alkawaz, Mohammed; Mohamad, Dzulkifli; Rehman, Amjad; Basori, Ahmad Hoirul

    2014-06-01

    Facial animation based on 3D facial data is well supported by laser scanning and advanced 3D tools for producing complex facial models. However, such approaches still lack facial expressions driven by emotional state. Facial skin colour, which is closely related to human emotion, offers a way to improve facial expressiveness. This paper presents innovative techniques for facial animation transformation using facial skin colour based on linear interpolation and bilinear interpolation. The generated expressions closely resemble genuine human expressions and enhance the facial expressiveness of the virtual human.
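
    The paper's exact blending scheme is not reproduced here, but the standard linear and bilinear interpolation formulas it builds on can be sketched as follows; the RGB skin tones are hypothetical examples.

```python
# Sketch: linear and bilinear interpolation as they might be applied to
# facial skin colour for expression transitions. These are the standard
# formulas, not the paper's specific blending scheme.

def lerp(c0, c1, t):
    """Linear interpolation between two RGB colours, t in [0, 1]."""
    return tuple((1 - t) * a + t * b for a, b in zip(c0, c1))

def bilerp(c00, c10, c01, c11, u, v):
    """Bilinear interpolation between four corner colours, u, v in [0, 1]."""
    top = lerp(c00, c10, u)
    bottom = lerp(c01, c11, u)
    return lerp(top, bottom, v)

neutral = (224, 172, 105)   # hypothetical neutral skin tone (RGB)
flushed = (240, 120, 100)   # hypothetical flushed tone for an angry face

print(lerp(neutral, flushed, 0.5))                        # halfway blend
print(bilerp(neutral, flushed, neutral, flushed, 0.5, 0.5))
```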

  12. Gaze Patterns in Auditory-Visual Perception of Emotion by Children with Hearing Aids and Hearing Children

    PubMed Central

    Wang, Yifang; Zhou, Wei; Cheng, Yanhong; Bian, Xiaoying

    2017-01-01

    This study investigated eye-movement patterns during emotion perception in children with hearing aids and hearing children. Seventy-eight participants aged 3 to 7 years were asked to watch videos with a facial expression followed by an oral statement, and these two cues were either congruent or incongruent in emotional valence. Results showed that while hearing children paid more attention to the upper part of the face, children with hearing aids paid more attention to the lower part of the face after the oral statement was presented, especially in the neutral facial expression/neutral oral statement condition. These results suggest that children with hearing aids have an altered eye-contact pattern with others and difficulty in matching visual and vocal cues in emotion perception. The negative causes and effects of these gaze patterns should be addressed early in rehabilitation for hearing-impaired children with assistive devices. PMID:29312104

  13. Two Ways to Facial Expression Recognition? Motor and Visual Information Have Different Effects on Facial Expression Recognition.

    PubMed

    de la Rosa, Stephan; Fademrecht, Laura; Bülthoff, Heinrich H; Giese, Martin A; Curio, Cristóbal

    2018-06-01

    Motor-based theories of facial expression recognition propose that the visual perception of facial expression is aided by sensorimotor processes that are also used for the production of the same expression. Accordingly, sensorimotor and visual processes should provide congruent emotional information about a facial expression. Here, we report evidence that challenges this view. Specifically, the repeated execution of facial expressions has the opposite effect on the recognition of a subsequent facial expression from that of the repeated viewing of facial expressions. Moreover, the findings of the motor condition, but not of the visual condition, were correlated with a nonsensory condition in which participants imagined an emotional situation. These results are well accounted for by the idea that facial expression recognition is not always mediated by motor processes but can also rely on visual information alone.

  14. Computational Simulation on Facial Expressions and Experimental Tensile Strength for Silicone Rubber as Artificial Skin

    NASA Astrophysics Data System (ADS)

    Amijoyo Mochtar, Andi

    2018-02-01

    Applications of robotics have become important to human life in recent years. Many robot designs have been improved and enriched by technological advances; among them are humanoid robots whose facial expressions approach natural human facial expressions. The purpose of this research is to compute facial expressions and to conduct tensile-strength testing of silicone rubber as an artificial skin. Facial expressions were computed by specifying the dimensions, material properties, number of node elements, boundary conditions, force conditions, and analysis type. The robot's facial expression is determined by the direction and magnitude of the external force at the driven point. The robot's expressive face mimics human facial expression, with a facial muscle structure arranged according to human facial anatomy. For developing facial expression robots, the facial action coding system (FACS) is adopted so that the robot follows human expressions. Tensile testing was conducted to check the force that the artificial skin can sustain for application to future robot facial expression. Combining the computational and experimental results can yield reliable and durable robot facial expressions using silicone rubber as artificial skin.

  15. Processing of subliminal facial expressions of emotion: a behavioral and fMRI study.

    PubMed

    Prochnow, D; Kossack, H; Brunheim, S; Müller, K; Wittsack, H-J; Markowitsch, H-J; Seitz, R J

    2013-01-01

    The recognition of emotional facial expressions is an important means to adjust behavior in social interactions. As facial expressions widely differ in their duration and degree of expressiveness, they often manifest as short and transient expressions below the level of awareness. In this combined behavioral and fMRI study, we aimed to examine whether emotional facial expressions that are not consciously accessible (subliminal) influence empathic judgments, and which brain activations are related to this. We hypothesized that subliminal facial expressions of emotions masked with neutral expressions of the same faces induce empathic processing similar to that of consciously accessible (supraliminal) facial expressions. Our behavioral data in 23 healthy subjects showed that subliminal emotional facial expressions of 40 ms duration affect the judgments of the subsequent neutral facial expressions. In the fMRI study in 12 healthy subjects, it was found that both supra- and subliminal emotional facial expressions shared a widespread network of brain areas including the fusiform gyrus, the temporo-parietal junction, and the inferior, dorsolateral, and medial frontal cortex. Compared with subliminal facial expressions, supraliminal facial expressions led to greater activation of left occipital and fusiform face areas. We conclude that masked subliminal emotional information is suited to trigger processing in brain areas which have been implicated in empathy and, thereby, in social encounters.

  16. Effect of Weakening of Ipsilateral Depressor Anguli Oris on Smile Symmetry in Postparalysis Facial Palsy.

    PubMed

    Jowett, Nate; Malka, Ronit; Hadlock, Tessa A

    2017-01-01

    Aberrant depressor anguli oris (DAO) activity may arise after recovery from acute facial paralysis and restrict movement of the oral commissure. The objective was to quantify the degree to which DAO inhibition affects smile dynamics and perceived emotional state. In this prospective, pretest-posttest study performed at an academic tertiary referral hospital, patients with unilateral postparalysis facial palsy were studied from January 16 through April 30, 2016. The intervention was local anesthetic injection into the ipsilateral DAO. Healthy- and paretic-side commissure displacements from the midline lower vermillion border referenced to the horizontal plane were calculated from random-ordered photographs of full-effort smile before and after injection, and random-ordered hemifacial photographs of the paretic side were assessed as expressing positive, negative, or indiscernible emotion. Twenty patients were identified as having unilateral postparalysis facial palsy with marked synkinesis of the ipsilateral DAO. Mean patient age was 46 years (range, 24-67 years), with a male to female ratio of 1:3. Mean paretic-side commissure displacement increased from 27.45 mm at 21.65° above the horizontal plane to 29.35 mm at 23.58° after DAO weakening (mean difference, 1.90 mm; 95% CI, 1.26-2.54 mm; and 1.93°; 95% CI, 0.34°-3.51°; P < .001 and P = .20, respectively). Symmetry of excursion between sides improved by 2.00 mm (95% CI, 1.16-2.83 mm; P < .001) and 2.71° (95% CI, 1.38°-4.03°; P < .001). At baseline, observers assessed 7 of 20 paretic hemifaces (35%) as expressing positive emotion; this proportion increased to 13 of 20 (65%) after DAO weakening (P = .03). Ipsilateral DAO weakening results in significant improvements in smile dynamics and perceived expression of positive emotion on the paretic hemiface in postparalysis facial palsy. A trial of DAO weakening should be offered to patients with this disfiguring complication of Bell palsy and similar facial nerve insults. Level of evidence: 3.
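
    A minimal sketch of how commissure displacement and its angle above the horizontal might be computed from 2D photograph landmarks, in the spirit of the measurements above. The landmark coordinates and the helper function are illustrative assumptions, not the study's actual instrument.

```python
# Sketch: oral-commissure displacement and angle above the horizontal from
# 2D landmarks. Coordinates are hypothetical pixel positions.

import math

def commissure_metrics(midline_pt, commissure_pt):
    """Return (distance, angle above horizontal in degrees) from a midline
    lower-vermillion reference point to the oral commissure."""
    dx = commissure_pt[0] - midline_pt[0]
    dy = midline_pt[1] - commissure_pt[1]   # image y grows downward
    displacement = math.hypot(dx, dy)
    angle = math.degrees(math.atan2(dy, abs(dx)))
    return displacement, angle

# Hypothetical coordinates at full-effort smile
d, a = commissure_metrics(midline_pt=(250, 400), commissure_pt=(310, 365))
print(f"displacement={d:.1f} px, angle={a:.1f} deg above horizontal")
```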

  17. Facial Expression Recognition using Multiclass Ensemble Least-Square Support Vector Machine

    NASA Astrophysics Data System (ADS)

    Lawi, Armin; Sya'Rani Machrizzandi, M.

    2018-03-01

    Facial expression is one of the behavioral characteristics of human beings. The use of biometric technology with facial expression characteristics makes it possible to recognize a person’s mood or emotion. The basic components of a facial expression analysis system are face detection, face image extraction, facial classification, and facial expression recognition. This paper uses the Principal Component Analysis (PCA) algorithm to extract facial features with expression parameters, i.e., happy, sad, neutral, angry, fearful, and disgusted. Then a Multiclass Ensemble Least-Squares Support Vector Machine (MELS-SVM) is used for the classification of facial expressions. The MELS-SVM model, applied to our set of 185 expression images of 10 persons, achieved a high accuracy of 99.998% using the RBF kernel.
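
    Since MELS-SVM is not a standard library component, the sketch below substitutes scikit-learn's multiclass RBF-kernel SVC after PCA feature extraction to illustrate the pipeline family the abstract describes; the random arrays are placeholders for real face images.

```python
# Sketch: PCA feature extraction + RBF-kernel multiclass SVM, standing in
# for the MELS-SVM pipeline described above. Data are random placeholders.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.random((185, 64 * 64))        # 185 flattened face images (placeholder)
y = rng.integers(0, 6, size=185)      # six expression labels

clf = make_pipeline(PCA(n_components=40), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, y, cv=5)
print(scores.mean())  # chance-level on random data; real images should do far better
```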

  18. Facial expressions and pair bonds in hylobatids.

    PubMed

    Florkiewicz, Brittany; Skollar, Gabriella; Reichard, Ulrich H

    2018-06-06

    Facial expressions are an important component of primate communication that functions to transmit social information and modulate intentions and motivations. Chimpanzees and macaques, for example, produce a variety of facial expressions when communicating with conspecifics. Hylobatids also produce various facial expressions; however, the origin and function of these facial expressions are still largely unclear. It has been suggested that larger facial expression repertoires may have evolved in the context of social complexity, but this link has yet to be tested on a broader empirical basis. The social complexity hypothesis offers a possible explanation for the evolution of complex communicative signals such as facial expressions, because as the complexity of an individual's social environment increases, so does the need for communicative signals. We used an intraspecies, pair-focused study design to test the link between facial expressions and sociality within hylobatids, specifically the strength of pair-bonds. The current study compared 206 hr of video and 103 hr of focal animal data for ten hylobatid pairs from three genera (Nomascus, Hoolock, and Hylobates) living at the Gibbon Conservation Center. Using video footage, we explored 5,969 facial expressions along three dimensions: repertoire use, repertoire breadth, and facial expression synchrony [FES]. We then used focal animal data to compare dimensions of facial expressiveness to pair bond strength and behavioral synchrony. Hylobatids in our study overlapped in only half of their facial expressions (50%) with the only other detailed, quantitative study of hylobatid facial expressions, while 27 facial expressions were uniquely observed in our study animals. Taken together, hylobatids have a large facial expression repertoire of at least 80 unique facial expressions. Contrary to our prediction, facial repertoire composition was not significantly correlated with pair bond strength, rates of territorial synchrony, or rates of behavioral synchrony. We found that FES was the strongest measure of hylobatid expressiveness and was significantly positively correlated with higher sociality index scores; however, FES showed no significant correlation with behavioral synchrony. No noticeable differences between pairs were found regarding rates of behavioral or territorial synchrony. Facial repertoire sizes and FES were not significantly correlated with rates of behavioral synchrony or territorial synchrony. Our study confirms an important role of facial expressions in maintaining pair bonds and coordinating activities in hylobatids. Data support the hypothesis that facial expressions and sociality have been linked in hylobatid and primate evolution. It is possible that larger facial repertoires may have contributed to strengthening pair bonds in primates, because richer facial repertoires provide more opportunities for FES, which can effectively increase the "understanding" between partners through smoother coordination of interaction patterns. This study supports the social complexity hypothesis as the driving force for the evolution of complex communication signaling. © 2018 Wiley Periodicals, Inc.

  19. Plain faces are more expressive: comparative study of facial colour, mobility and musculature in primates

    PubMed Central

    Santana, Sharlene E.; Dobson, Seth D.; Diogo, Rui

    2014-01-01

    Facial colour patterns and facial expressions are among the most important phenotypic traits that primates use during social interactions. While colour patterns provide information about the sender's identity, expressions can communicate its behavioural intentions. Extrinsic factors, including social group size, have shaped the evolution of facial coloration and mobility, but intrinsic relationships and trade-offs likely operate in their evolution as well. We hypothesize that complex facial colour patterning could reduce how salient facial expressions appear to a receiver, and thus species with highly expressive faces would have evolved uniformly coloured faces. We test this hypothesis through a phylogenetic comparative study, and explore the underlying morphological factors of facial mobility. Supporting our hypothesis, we find that species with highly expressive faces have plain facial colour patterns. The number of facial muscles does not predict facial mobility; instead, species that are larger and have a larger facial nucleus have more expressive faces. This highlights a potential trade-off between facial mobility and colour patterning in primates and reveals complex relationships between facial features during primate evolution. PMID:24850898

  20. Objectifying facial expressivity assessment of Parkinson's patients: preliminary study.

    PubMed

    Wu, Peng; Gonzalez, Isabel; Patsis, Georgios; Jiang, Dongmei; Sahli, Hichem; Kerckhofs, Eric; Vandekerckhove, Marie

    2014-01-01

    Patients with Parkinson's disease (PD) can exhibit a reduction of spontaneous facial expression, designated as "facial masking," a symptom in which facial muscles become rigid. To improve the clinical assessment of facial expressivity in PD, this work attempts to quantify the dynamic facial expressivity (facial activity) of PD by automatically recognizing facial action units (AUs) and estimating their intensity. Spontaneous facial expressivity was assessed by comparing 7 PD patients with 8 control participants. To elicit spontaneous facial expressions that resemble those typically triggered by emotions, six emotions (amusement, sadness, anger, disgust, surprise, and fear) were induced using movie clips. During the movie clips, physiological signals (facial electromyography (EMG) and electrocardiogram (ECG)) and frontal face video of the participants were recorded. The participants were asked to report on their emotional states throughout the experiment. We first examined the effectiveness of the emotion manipulation by evaluating the participants' self-reports. Disgust-induced emotions were significantly higher than the other emotions, so we focused on the analysis of the data recorded while participants watched the disgust movie clips. The proposed facial expressivity assessment approach captured differences in facial expressivity between PD patients and controls. Differences were also observed between PD patients at different stages of disease progression.

  1. A System for Studying Facial Nerve Function in Rats through Simultaneous Bilateral Monitoring of Eyelid and Whisker Movements

    PubMed Central

    Heaton, James T.; Kowaleski, Jeffrey M.; Bermejo, Roberto; Zeigler, H. Philip; Ahlgren, David J.; Hadlock, Tessa A.

    2008-01-01

    The occurrence of inappropriate co-contraction of facially innervated muscles in humans (synkinesis) is a common sequela of facial nerve injury and recovery. We have developed a system for studying facial nerve function and synkinesis in restrained rats using non-contact opto-electronic techniques that enable simultaneous bilateral monitoring of eyelid and whisker movements. Whisking is monitored in high spatio-temporal resolution using laser micrometers, and eyelid movements are detected using infrared diode and phototransistor pairs that respond to the increased reflection when the eyelids cover the cornea. To validate the system, eight rats were tested with multiple five-minute sessions that included corneal air puffs to elicit blink and scented air flows to elicit robust whisking. Four rats then received unilateral facial nerve section and were tested at weeks 3–6. Whisking and eye blink behavior occurred both spontaneously and under stimulus control, with no detectable difference from published whisking data. Proximal facial nerve section caused an immediate ipsilateral loss of whisking and eye blink response, but some ocular closures emerged due to retractor bulbi muscle function. The independence observed between whisker and eyelid control indicates that this system may provide a powerful tool for identifying abnormal co-activation of facial zones resulting from aberrant axonal regeneration. PMID:18442856

  2. A System for Delivering Mechanical Stimulation and Robot-Assisted Therapy to the Rat Whisker Pad during Facial Nerve Regeneration

    PubMed Central

    Heaton, James T.; Knox, Christopher; Malo, Juan; Kobler, James B.; Hadlock, Tessa A.

    2013-01-01

    Functional recovery is typically poor after facial nerve transection and surgical repair. In rats, whisking amplitude remains greatly diminished after facial nerve regeneration, but can recover more completely if the whiskers are periodically mechanically stimulated during recovery. Here we present a robotic “whisk assist” system for mechanically driving whisker movement after facial nerve injury. Movement patterns were either pre-programmed to reflect natural amplitudes and frequencies, or movements of the contralateral (healthy) side of the face were detected and used to control real-time mirror-like motion on the denervated side. In a pilot study, twenty rats were divided into nine groups and administered one of eight different whisk assist driving patterns (or control) for 5–20 minutes, five days per week, across eight weeks of recovery after unilateral facial nerve cut and suture repair. All rats tolerated the mechanical stimulation well. Seven of the eight treatment groups recovered average whisking amplitudes that exceeded controls, although small group sizes precluded statistical confirmation of group differences. The potential to substantially improve facial nerve recovery through mechanical stimulation has important clinical implications, and we have developed a system to control the pattern and dose of stimulation in the rat facial nerve model. PMID:23475376

  3. Multivariate Pattern Classification of Facial Expressions Based on Large-Scale Functional Connectivity.

    PubMed

    Liang, Yin; Liu, Baolin; Li, Xianglin; Wang, Peiyuan

    2018-01-01

    How human beings achieve efficient recognition of others' facial expressions is an important question in cognitive neuroscience, and previous studies have identified specific cortical regions that show preferential activation to facial expressions. However, the potential contributions of connectivity patterns to the processing of facial expressions have remained unclear. The present functional magnetic resonance imaging (fMRI) study explored whether facial expressions could be decoded from functional connectivity (FC) patterns using multivariate pattern analysis combined with machine learning algorithms (fcMVPA). We employed a block design experiment and collected neural activity while participants viewed facial expressions of six basic emotions (anger, disgust, fear, joy, sadness, and surprise). Both static and dynamic expression stimuli were included in our study. A behavioral experiment after scanning confirmed the validity of the facial stimuli presented during the fMRI experiment with classification accuracies and emotional intensities. We obtained whole-brain FC patterns for each facial expression and found that both static and dynamic facial expressions could be successfully decoded from the FC patterns. Moreover, we identified expression-discriminative networks for the static and dynamic facial expressions, which span beyond the conventional face-selective areas. Overall, these results reveal that large-scale FC patterns may also contain rich expression information from which facial expressions can be accurately decoded, suggesting a novel mechanism, based on interactions between distributed brain regions, that contributes to human facial expression recognition.

  4. Multivariate Pattern Classification of Facial Expressions Based on Large-Scale Functional Connectivity

    PubMed Central

    Liang, Yin; Liu, Baolin; Li, Xianglin; Wang, Peiyuan

    2018-01-01

    How human beings achieve efficient recognition of others’ facial expressions is an important question in cognitive neuroscience, and previous studies have identified specific cortical regions that show preferential activation to facial expressions. However, the potential contributions of connectivity patterns to the processing of facial expressions have remained unclear. The present functional magnetic resonance imaging (fMRI) study explored whether facial expressions could be decoded from functional connectivity (FC) patterns using multivariate pattern analysis combined with machine learning algorithms (fcMVPA). We employed a block design experiment and collected neural activity while participants viewed facial expressions of six basic emotions (anger, disgust, fear, joy, sadness, and surprise). Both static and dynamic expression stimuli were included in our study. A behavioral experiment after scanning confirmed the validity of the facial stimuli presented during the fMRI experiment with classification accuracies and emotional intensities. We obtained whole-brain FC patterns for each facial expression and found that both static and dynamic facial expressions could be successfully decoded from the FC patterns. Moreover, we identified expression-discriminative networks for the static and dynamic facial expressions, which span beyond the conventional face-selective areas. Overall, these results reveal that large-scale FC patterns may also contain rich expression information from which facial expressions can be accurately decoded, suggesting a novel mechanism, based on interactions between distributed brain regions, that contributes to human facial expression recognition. PMID:29615882
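
    A common way to implement this kind of fcMVPA decoding is to vectorize the upper triangle of each trial's ROI-by-ROI correlation matrix and cross-validate a linear classifier, as sketched below on synthetic data. This follows the general logic described above, not the authors' exact pipeline.

```python
# Sketch: decoding expression labels from functional-connectivity patterns.
# Each pattern is the vectorized upper triangle of an ROI correlation
# matrix; data here are synthetic stand-ins for real fMRI time series.

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)
n_trials, n_rois, n_timepoints = 120, 90, 200

def fc_vector(bold):
    """Vectorize the upper triangle of the ROI-by-ROI correlation matrix."""
    corr = np.corrcoef(bold)                  # (n_rois, n_rois)
    iu = np.triu_indices(corr.shape[0], k=1)
    return corr[iu]

X = np.stack([fc_vector(rng.standard_normal((n_rois, n_timepoints)))
              for _ in range(n_trials)])
y = rng.integers(0, 6, size=n_trials)         # six basic expressions

acc = cross_val_score(LinearSVC(max_iter=5000), X, y, cv=5)
print(acc.mean())  # chance on synthetic data; real FC patterns carry structure
```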

  5. Exaggerated perception of facial expressions is increased in individuals with schizotypal traits

    PubMed Central

    Uono, Shota; Sato, Wataru; Toichi, Motomi

    2015-01-01

    Emotional facial expressions are indispensable communicative tools, and social interactions involving facial expressions are impaired in some psychiatric disorders. Recent studies revealed that the perception of dynamic facial expressions was exaggerated in normal participants, and this exaggerated perception is weakened in autism spectrum disorder (ASD). Based on the notion that ASD and schizophrenia spectrum disorder are at two extremes of the continuum with respect to social impairment, we hypothesized that schizophrenic characteristics would strengthen the exaggerated perception of dynamic facial expressions. To test this hypothesis, we investigated the relationship between the perception of facial expressions and schizotypal traits in a normal population. We presented dynamic and static facial expressions, and asked participants to change an emotional face display to match the perceived final image. The presence of schizotypal traits was positively correlated with the degree of exaggeration for dynamic, as well as static, facial expressions. Among its subscales, the paranoia trait was positively correlated with the exaggerated perception of facial expressions. These results suggest that schizotypal traits, specifically the tendency to over-attribute mental states to others, exaggerate the perception of emotional facial expressions. PMID:26135081

  6. Exaggerated perception of facial expressions is increased in individuals with schizotypal traits.

    PubMed

    Uono, Shota; Sato, Wataru; Toichi, Motomi

    2015-07-02

    Emotional facial expressions are indispensable communicative tools, and social interactions involving facial expressions are impaired in some psychiatric disorders. Recent studies revealed that the perception of dynamic facial expressions was exaggerated in normal participants, and this exaggerated perception is weakened in autism spectrum disorder (ASD). Based on the notion that ASD and schizophrenia spectrum disorder are at two extremes of the continuum with respect to social impairment, we hypothesized that schizophrenic characteristics would strengthen the exaggerated perception of dynamic facial expressions. To test this hypothesis, we investigated the relationship between the perception of facial expressions and schizotypal traits in a normal population. We presented dynamic and static facial expressions, and asked participants to change an emotional face display to match the perceived final image. The presence of schizotypal traits was positively correlated with the degree of exaggeration for dynamic, as well as static, facial expressions. Among its subscales, the paranoia trait was positively correlated with the exaggerated perception of facial expressions. These results suggest that schizotypal traits, specifically the tendency to over-attribute mental states to others, exaggerate the perception of emotional facial expressions.

  7. Use of Objective Metrics in Dynamic Facial Reanimation: A Systematic Review.

    PubMed

    Revenaugh, Peter C; Smith, Ryan M; Plitt, Max A; Ishii, Lisa; Boahene, Kofi; Byrne, Patrick J

    2018-06-21

    Facial nerve deficits cause significant functional and social consequences for those affected. Existing techniques for dynamic restoration of facial nerve function are imperfect and result in a wide variety of outcomes. Currently, there is no standard objective instrument for facial movement as it relates to restorative techniques. To determine what objective instruments of midface movement are used in outcome measurements for patients treated with dynamic methods for facial paralysis. Database searches from January 1970 to June 2017 were performed in PubMed, Embase, Cochrane Library, Web of Science, and Scopus. Only English-language articles on studies performed in humans were considered. The search terms used were ("Surgical Flaps"[Mesh] OR "Nerve Transfer"[Mesh] OR "nerve graft" OR "nerve grafts") AND (face [mh] OR facial paralysis [mh]) AND (innervation [sh]) OR ("Face"[Mesh] OR facial paralysis [mh]) AND (reanimation [tiab]). Two independent reviewers evaluated the titles and abstracts of all articles and included those that reported objective outcomes of a surgical technique in at least 2 patients. The presence or absence of an objective instrument for evaluating outcomes of midface reanimation. Additional outcome measures were reproducibility of the test, reporting of symmetry, measurement of multiple variables, and test validity. Of 241 articles describing dynamic facial reanimation techniques, 49 (20.3%) reported objective outcome measures for 1898 patients. Of those articles reporting objective measures, there were 29 different instruments, only 3 of which reported all outcome measures. Although instruments are available to objectively measure facial movement after reanimation techniques, most studies do not report objective outcomes. Of objective facial reanimation instruments, few are reproducible and able to measure symmetry and multiple data points. To accurately compare objective outcomes in facial reanimation, a reproducible, objective, and universally applied instrument is needed.

  8. Comparison of hemihypoglossal nerve versus masseteric nerve transpositions in the rehabilitation of short-term facial paralysis using the Facial Clima evaluating system.

    PubMed

    Hontanilla, Bernardo; Marré, Diego

    2012-11-01

    Masseteric and hypoglossal nerve transfers are reliable alternatives for reanimating short-term facial paralysis. To date, few studies exist in the literature comparing these techniques. This work presents a quantitative comparison of masseter-facial transposition versus hemihypoglossal facial transposition with a nerve graft using the Facial Clima system. Forty-six patients with complete unilateral facial paralysis underwent reanimation with either hemihypoglossal transposition with a nerve graft (group I, n = 25) or direct masseteric-facial coaptation (group II, n = 21). Commissural displacement and commissural contraction velocity were measured using the Facial Clima system. Postoperative intragroup commissural displacement and commissural contraction velocity means of the reanimated versus the normal side were first compared using a paired sample t test. Then, mean percentages of recovery of both parameters were compared between the groups using an independent sample t test. Onset of movement was also compared between the groups. Significant differences of mean commissural displacement and commissural contraction velocity between the reanimated side and the normal side were observed in group I but not in group II. Mean percentage of recovery of both parameters did not differ between the groups. Patients in group II showed a significantly faster onset of movement compared with those in group I (62 ± 4.6 days versus 136 ± 7.4 days, p = 0.013). Reanimation of short-term facial paralysis can be satisfactorily addressed by means of either hemihypoglossal transposition with a nerve graft or direct masseteric-facial coaptation. However, with the latter, better symmetry and a faster onset of movement are observed. In addition, masseteric nerve transfer avoids morbidity from nerve graft harvesting. Therapeutic, III.

  9. An embodiment effect in computer-based learning with animated pedagogical agents.

    PubMed

    Mayer, Richard E; DaPra, C Scott

    2012-09-01

    How do social cues such as gesturing, facial expression, eye gaze, and human-like movement affect multimedia learning with onscreen agents? To help address this question, students were asked to twice view a 4-min narrated presentation on how solar cells work in which the screen showed an animated pedagogical agent standing to the left of 11 successive slides. Across three experiments, learners performed better on a transfer test when a human-voiced agent displayed human-like gestures, facial expression, eye gaze, and body movement than when the agent did not, yielding an embodiment effect. In Experiment 2 the embodiment effect was found when the agent spoke in a human voice but not in a machine voice. In Experiment 3, the embodiment effect was found both when students were told the onscreen agent was consistent with their choice of agent characteristics and when inconsistent. Students who viewed a highly embodied agent also rated the social attributes of the agent more positively than did students who viewed a nongesturing agent. The results are explained by social agency theory, in which social cues in a multimedia message prime a feeling of social partnership in the learner, which leads to deeper cognitive processing during learning, and results in a more meaningful learning outcome as reflected in transfer test performance.

  10. Posttraumatic mid-facial pain and Meige's syndrome relieved by pressure on the nasion and retrocollis.

    PubMed

    Jacome, Daniel E

    2010-07-01

    A 42-year-old farmer developed persistent mid-facial segmental pain and Meige's syndrome several months after suffering facial trauma and a fracture of the nose. He was not afflicted by systemic ailments, had no family history of movement disorder and no history of exposure to neuroleptic drugs. He was capable of suppressing his facial pain by performing a ritual that included forcefully tilting his head backwards, lowering of his eyelids and applying strong pressure to his nasion. Exceptionally dystonic movements and elaborate behavioral rituals may serve as a mechanism of pain suppression. Copyright 2010 Elsevier B.V. All rights reserved.

  11. Automated Video Based Facial Expression Analysis of Neuropsychiatric Disorders

    PubMed Central

    Wang, Peng; Barrett, Frederick; Martin, Elizabeth; Milanova, Marina; Gur, Raquel E.; Gur, Ruben C.; Kohler, Christian; Verma, Ragini

    2008-01-01

    Deficits in emotional expression are prominent in several neuropsychiatric disorders, including schizophrenia. Available clinical facial expression evaluations provide subjective and qualitative measurements, which are based on static 2D images that do not capture the temporal dynamics and subtleties of expression changes. Therefore, there is a need for automated, objective and quantitative measurements of facial expressions captured using videos. This paper presents a computational framework that creates probabilistic expression profiles for video data and can potentially help to automatically quantify emotional expression differences between patients with neuropsychiatric disorders and healthy controls. Our method automatically detects and tracks facial landmarks in videos, and then extracts geometric features to characterize facial expression changes. To analyze temporal facial expression changes, we employ probabilistic classifiers that analyze facial expressions in individual frames, and then propagate the probabilities throughout the video to capture the temporal characteristics of facial expressions. The applications of our method to healthy controls and case studies of patients with schizophrenia and Asperger’s syndrome demonstrate the capability of the video-based expression analysis method in capturing subtleties of facial expression. Such results can pave the way for a video based method for quantitative analysis of facial expressions in clinical research of disorders that cause affective deficits. PMID:18045693
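
    The authors' exact probability-propagation rule is not reproduced here; as one plausible illustration, the sketch below applies simple exponential smoothing to per-frame classifier probabilities to form a temporally smoothed expression profile for a video.

```python
# Sketch: propagate per-frame expression probabilities through a video to
# obtain a temporally smoothed profile. Exponential smoothing is an assumed
# stand-in for the paper's propagation scheme.

import numpy as np

def smooth_profile(frame_probs, alpha=0.8):
    """Exponentially smooth frame-wise class probabilities.

    frame_probs: array of shape (n_frames, n_classes), rows summing to 1.
    Returns a smoothed array of the same shape.
    """
    smoothed = np.empty_like(frame_probs)
    smoothed[0] = frame_probs[0]
    for t in range(1, len(frame_probs)):
        smoothed[t] = alpha * smoothed[t - 1] + (1 - alpha) * frame_probs[t]
        smoothed[t] /= smoothed[t].sum()      # keep a proper distribution
    return smoothed

probs = np.array([[0.7, 0.3], [0.2, 0.8], [0.6, 0.4]])  # toy 2-class example
print(smooth_profile(probs).round(2))
```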

  12. Facial Emotion Recognition: A Survey and Real-World User Experiences in Mixed Reality

    PubMed Central

    Mehta, Dhwani; Siddiqui, Mohammad Faridul Haque

    2018-01-01

    The extensive range of possible applications has made emotion recognition an inescapable and challenging topic in computer science. Non-verbal cues such as gestures, body movement, and facial expressions convey feeling and feedback to the user. This discipline of Human–Computer Interaction relies on algorithmic robustness and the sensitivity of the sensor to improve recognition. Sensors play a significant role in accurate detection by providing very high-quality input, hence increasing the efficiency and reliability of the system. Automatic recognition of human emotions would help in teaching social intelligence to machines. This paper presents a brief study of the various approaches and techniques of emotion recognition. The survey covers a succinct review of the databases that are considered as data sets for algorithms detecting emotions from facial expressions. Later, the mixed-reality device Microsoft HoloLens (MHL) is introduced for observing emotion recognition in Augmented Reality (AR). A brief introduction of its sensors, their application in emotion recognition, and some preliminary results of emotion recognition using MHL are presented. The paper concludes by comparing the results of emotion recognition by the MHL and a regular webcam. PMID:29389845

  13. Facial Emotion Recognition: A Survey and Real-World User Experiences in Mixed Reality.

    PubMed

    Mehta, Dhwani; Siddiqui, Mohammad Faridul Haque; Javaid, Ahmad Y

    2018-02-01

    The extensive range of possible applications has made emotion recognition an inescapable and challenging topic in computer science. Non-verbal cues such as gestures, body movement, and facial expressions convey feeling and feedback to the user. This discipline of Human-Computer Interaction relies on algorithmic robustness and the sensitivity of the sensor to improve recognition. Sensors play a significant role in accurate detection by providing very high-quality input, hence increasing the efficiency and reliability of the system. Automatic recognition of human emotions would help in teaching social intelligence to machines. This paper presents a brief study of the various approaches and techniques of emotion recognition. The survey covers a succinct review of the databases that are considered as data sets for algorithms detecting emotions from facial expressions. Later, the mixed-reality device Microsoft HoloLens (MHL) is introduced for observing emotion recognition in Augmented Reality (AR). A brief introduction of its sensors, their application in emotion recognition, and some preliminary results of emotion recognition using MHL are presented. The paper concludes by comparing the results of emotion recognition by the MHL and a regular webcam.

  14. Objectifying Facial Expressivity Assessment of Parkinson's Patients: Preliminary Study

    PubMed Central

    Patsis, Georgios; Jiang, Dongmei; Sahli, Hichem; Kerckhofs, Eric; Vandekerckhove, Marie

    2014-01-01

    Patients with Parkinson's disease (PD) can exhibit a reduction of spontaneous facial expression, designated as “facial masking,” a symptom in which facial muscles become rigid. To improve the clinical assessment of facial expressivity in PD, this work attempts to quantify the dynamic facial expressivity (facial activity) of PD by automatically recognizing facial action units (AUs) and estimating their intensity. Spontaneous facial expressivity was assessed by comparing 7 PD patients with 8 control participants. To elicit spontaneous facial expressions that resemble those typically triggered by emotions, six emotions (amusement, sadness, anger, disgust, surprise, and fear) were induced using movie clips. During the movie clips, physiological signals (facial electromyography (EMG) and electrocardiogram (ECG)) and frontal face video of the participants were recorded. The participants were asked to report on their emotional states throughout the experiment. We first examined the effectiveness of the emotion manipulation by evaluating the participants' self-reports. Disgust-induced emotions were significantly higher than the other emotions, so we focused on the analysis of the data recorded while participants watched the disgust movie clips. The proposed facial expressivity assessment approach captured differences in facial expressivity between PD patients and controls. Differences were also observed between PD patients at different stages of disease progression. PMID:25478003

  15. Visual Scanning Patterns and Executive Function in Relation to Facial Emotion Recognition in Aging

    PubMed Central

    Circelli, Karishma S.; Clark, Uraina S.; Cronin-Golomb, Alice

    2012-01-01

    Objective: The ability to perceive facial emotion varies with age. Relative to younger adults (YA), older adults (OA) are less accurate at identifying fear, anger, and sadness, and more accurate at identifying disgust. Because different emotions are conveyed by different parts of the face, changes in visual scanning patterns may account for age-related variability. We investigated the relation between scanning patterns and recognition of facial emotions. Additionally, as frontal-lobe changes with age may affect scanning patterns and emotion recognition, we examined correlations between scanning parameters and performance on executive function tests. Methods: We recorded eye movements from 16 OA (mean age 68.9) and 16 YA (mean age 19.2) while they categorized facial expressions and non-face control images (landscapes), and administered standard tests of executive function. Results: OA were less accurate than YA at identifying fear (p<.05, r=.44) and more accurate at identifying disgust (p<.05, r=.39). OA fixated less than YA on the top half of the face for disgust, fearful, happy, neutral, and sad faces (p’s<.05, r’s≥.38), whereas there was no group difference for landscapes. For OA, executive function was correlated with recognition of sad expressions and with scanning patterns for fearful, sad, and surprised expressions. Conclusion: We report significant age-related differences in visual scanning that are specific to faces. The observed relation between scanning patterns and executive function supports the hypothesis that frontal-lobe changes with age may underlie some changes in emotion recognition. PMID:22616800

  16. System for face recognition under expression variations of neutral-sampled individuals using recognized expression warping and a virtual expression-face database

    NASA Astrophysics Data System (ADS)

    Petpairote, Chayanut; Madarasmi, Suthep; Chamnongthai, Kosin

    2018-01-01

    The practical identification of individuals using facial recognition techniques requires matching faces bearing specific expressions to faces in a neutral-face database. We propose a method for facial recognition under varied expressions against neutral face samples of individuals that uses recognized-expression warping and a virtual expression-face database. In this method, facial expressions are recognized and the input expression faces are classified into facial expression groups. To aid facial recognition, the virtual expression-face database is sorted by average facial-expression shape and by coarse- and fine-featured facial texture. Wrinkle information is also employed in classification, using a masking process to adjust input faces to match the expression-face database. We evaluate the performance of the proposed method on the CMU Multi-PIE, Cohn-Kanade, and AR expression-face databases and find that it yields significantly better face recognition accuracy than conventional methods and is acceptable for facial recognition under expression variation.

  17. Extraction and representation of common feature from uncertain facial expressions with cloud model.

    PubMed

    Wang, Shuliang; Chi, Hehua; Yuan, Hanning; Geng, Jing

    2017-12-01

    Human facial expressions are a key ingredient for conveying an individual's inner emotions in communication. However, variation in facial expressions hampers the reliable identification of human emotions. In this paper, we present a cloud model to extract facial features for representing human emotion. First, the uncertainties in facial expression are analyzed in the context of the cloud model. The feature extraction and representation algorithm is then built on cloud generators. With the forward cloud generator, arbitrarily many facial expression images can be regenerated to visually represent the three extracted features, each of which plays a different role. The effectiveness of the computational model is tested on the Japanese Female Facial Expression (JAFFE) database, from which three common features are extracted across seven facial expression images. Concluding remarks close the paper.
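
    For readers unfamiliar with cloud models, the standard forward normal cloud generator referenced in the abstract can be sketched in a few lines. The numeric parameters below are invented for illustration and do not reproduce the paper's learned features.

```python
import numpy as np

def forward_cloud(ex, en, he, n_drops=1000, seed=0):
    """Forward normal cloud generator from cloud-model theory.

    ex : expectation of the concept
    en : entropy (spread of the cloud drops)
    he : hyper-entropy (uncertainty of the entropy itself)
    Returns drop positions x and their membership degrees mu.
    """
    rng = np.random.default_rng(seed)
    en_i = rng.normal(en, he, n_drops)       # per-drop entropy sample
    x = rng.normal(ex, np.abs(en_i))         # drop positions around ex
    mu = np.exp(-(x - ex) ** 2 / (2 * en_i ** 2))  # membership degree
    return x, mu

# Regenerate drops for a hypothetical feature with Ex=0.5, En=0.1, He=0.01.
x, mu = forward_cloud(0.5, 0.1, 0.01)
print(x[:5], mu[:5])
```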

  18. Survey of methods of facial palsy documentation in use by members of the Sir Charles Bell Society.

    PubMed

    Fattah, Adel Y; Gavilan, Javier; Hadlock, Tessa A; Marcus, Jeffrey R; Marres, Henri; Nduka, Charles; Slattery, William H; Snyder-Warwick, Alison K

    2014-10-01

    Facial palsy manifests a broad array of deficits affecting function, form, and psychological well-being. Assessment scales were introduced to standardize and document the features of facial palsy and to facilitate the exchange of information and comparison of outcomes. The aim of this study was to determine which assessment methodologies are currently employed by those involved in the care of patients with facial palsy, as a first step toward developing consensus on the appropriate assessments for this patient population. The study design was an online questionnaire. The Sir Charles Bell Society, a group of professionals dedicated to the care of patients with facial palsy, was surveyed to determine the scales used to document facial nerve function, patient-reported outcome measures (PROMs), and photographic documentation. Fifty-five percent of the membership responded (n = 83). Grading scales were used by 95%, most commonly the House-Brackmann and Sunnybrook scales. PROMs were used by 58%, typically the Facial Clinimetric Evaluation scale or the Facial Disability Index. All respondents used photographic recordings, but the facial expressions photographed varied. Videography was performed by 82% and mostly involved the same views as still photography; it was also used to document spontaneous movement and speech. Three-dimensional imaging was employed by 18% of respondents. Significant heterogeneity exists in assessments among clinicians, which impedes straightforward comparison of outcomes following recovery and intervention. Widespread adoption of structured assessments, including scales, PROMs, photography, and videography, will facilitate communication and comparison among those who study the effects of interventions on this population. © 2014 The American Laryngological, Rhinological and Otological Society, Inc.

  19. The success of free gracilis muscle transfer to restore smile in patients with nonflaccid facial paralysis.

    PubMed

    Lindsay, Robin W; Bhama, Prabhat; Weinberg, Julie; Hadlock, Tessa A

    2014-08-01

    Development of synkinesis, hypertonicity, and poor smile excursion after facial nerve insult and recovery contribute to disfigurement, psychological difficulties, and an inability to convey emotion via facial expression. Despite treatment with physical therapy and chemodenervation, some patients who recover from transient flaccid facial paralysis never spontaneously regain the ability to perform a meaningful smile. Prospective evaluation was performed on 20 patients with nonflaccid facial paralysis who underwent free gracilis muscle transfer. Patients were evaluated using the quality-of-life (QOL) FaCE survey, Facial Nerve Grading Scale, and Facegram to quantify QOL improvement, smile excursion, and symmetry after muscle transfer. A statistically significant increase in the FaCE score was seen after muscle transfer (paired 2-tailed t test, P < 0.039). In addition, there was a statistically significant improvement in the smile score on the Facial Nerve Grading Scale (P < 0.002), in the lower lip length at rest (P = 0.01) and with smile (P = 0.0001), and with smile symmetry (P = 0.0077) after surgery. Free gracilis muscle transfer has become a mainstay in the management armamentarium for patients who develop severe reduction in oral commissure movement after facial nerve insult and recovery. The operation achieves a high overall success rate, and innovations involving transplanting thinner segments of muscle avoid a cosmetic deformity secondary to excess bulk. This study demonstrates a quantitative improvement in QOL and facial function after free gracilis muscle transfer in patients who failed to achieve a meaningful smile after physical therapy.

  20. Spontaneous Facial Mimicry in Response to Dynamic Facial Expressions

    ERIC Educational Resources Information Center

    Sato, Wataru; Yoshikawa, Sakiko

    2007-01-01

    Based on previous neuroscientific evidence indicating activation of the mirror neuron system in response to dynamic facial actions, we hypothesized that facial mimicry would occur while subjects viewed dynamic facial expressions. To test this hypothesis, dynamic/static facial expressions of anger/happiness were presented using computer-morphing…

  1. Behavioural responses to facial and postural expressions of emotion: An interpersonal circumplex approach.

    PubMed

    Aan Het Rot, Marije; Enea, Violeta; Dafinoiu, Ion; Iancu, Sorina; Taftă, Steluţa A; Bărbuşelu, Mariana

    2017-11-01

    While the recognition of emotional expressions has been extensively studied, the behavioural response to these expressions has not. In the interpersonal circumplex, behaviour is defined in terms of communion and agency. In this study, we examined behavioural responses to both facial and postural expressions of emotion. We presented 101 Romanian students with facial and postural stimuli involving individuals ('targets') expressing happiness, sadness, anger, or fear. Using an interpersonal grid, participants simultaneously indicated how communal (i.e., quarrelsome or agreeable) and agentic (i.e., dominant or submissive) they would be towards people displaying these expressions. Participants were agreeable-dominant towards targets showing happy facial expressions and primarily quarrelsome towards targets with angry or fearful facial expressions. Responses to targets showing sad facial expressions were neutral on both dimensions of interpersonal behaviour. Postural versus facial expressions of happiness and anger elicited similar behavioural responses. Participants responded in a quarrelsome-submissive way to fearful postural expressions and in an agreeable way to sad postural expressions. Behavioural responses to the various facial expressions were largely comparable to those previously observed in Dutch students. Observed differences may be explained by participants' cultural backgrounds. Responses to the postural expressions largely matched responses to the facial expressions. © 2017 The British Psychological Society.

  2. Positive facial expressions during retrieval of self-defining memories.

    PubMed

    Gandolphe, Marie Charlotte; Nandrino, Jean Louis; Delelis, Gérald; Ducro, Claire; Lavallee, Audrey; Saloppe, Xavier; Moustafa, Ahmed A; El Haj, Mohamad

    2017-11-14

    In this study, we investigated, for the first time, facial expressions during the retrieval of Self-defining memories (i.e., those vivid and emotionally intense memories of enduring concerns or unresolved conflicts). Participants self-rated the emotional valence of their Self-defining memories, and autobiographical retrieval was analyzed with facial analysis software. This software (FaceReader) synthesizes facial expression information (i.e., movements of the cheek, lip, and eyebrow muscles) to describe and categorize facial expressions (i.e., neutral, happy, sad, surprised, angry, scared, and disgusted facial expressions). We found that participants showed more emotional than neutral facial expressions during the retrieval of Self-defining memories. We also found that participants showed more positive than negative facial expressions during the retrieval of Self-defining memories. Interestingly, participants attributed positive valence to the retrieved memories. These findings are the first to demonstrate the consistency between facial expressions and the subjective emotional experience of Self-defining memories. They provide valuable physiological information about the emotional experience of the past.

  3. Contributions of feature shapes and surface cues to the recognition of facial expressions.

    PubMed

    Sormaz, Mladen; Young, Andrew W; Andrews, Timothy J

    2016-10-01

    Theoretical accounts of face processing often emphasise feature shapes as the primary visual cue to the recognition of facial expressions. However, changes in facial expression also affect the surface properties of the face. In this study, we investigated whether this surface information can also be used in the recognition of facial expression. First, participants identified facial expressions (fear, anger, disgust, sadness, happiness) from images that were manipulated such that they varied mainly in shape or mainly in surface properties. We found that the categorization of facial expression is possible in either type of image, but that different expressions depend to differing degrees on surface or shape properties. Next, we investigated the relative contributions of shape and surface information to the categorization of facial expressions. This employed a complementary method that involved combining the surface properties of one expression with the shape properties from a different expression. Our results showed that the categorization of facial expressions in these hybrid images was equally dependent on the surface and shape properties of the image. Together, these findings provide a direct demonstration that both feature shape and surface information make significant contributions to the recognition of facial expressions. Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. Symmetrical and Asymmetrical Interactions between Facial Expressions and Gender Information in Face Perception.

    PubMed

    Liu, Chengwei; Liu, Ying; Iqbal, Zahida; Li, Wenhui; Lv, Bo; Jiang, Zhongqing

    2017-01-01

    To investigate the interaction between facial expressions and facial gender information during face perception, the present study matched the intensities of the two types of information in face images and then adopted the orthogonal condition of the Garner Paradigm, in which the gender and expression of the presented faces were varied orthogonally; participants were required to judge the gender and expression of the faces. Gender and expression processing displayed a mutual interaction. On the one hand, angry expressions were judged faster when presented on male faces; on the other hand, the female gender was classified faster when presented with a happy facial expression than with an angry one. According to the event-related potential (ERP) results, expression classification was influenced by gender during the face structural processing stage (as indexed by N170), which indicates the promotion or interference of facial gender with the coding of facial expression features. However, gender processing was affected by facial expressions at more stages, including the early (P1) and late (LPC) stages of perceptual processing, reflecting that emotional expression influences gender processing mainly by directing attention.

  5. Forming Facial Expressions Influences Assessment of Others' Dominance but Not Trustworthiness.

    PubMed

    Ueda, Yoshiyuki; Nagoya, Kie; Yoshikawa, Sakiko; Nomura, Michio

    2017-01-01

    Forming specific facial expressions influences one's own emotions and perception. With this in mind, studies in which observers making neutral expressions inferred personal traits from the facial expressions of others should be reconsidered. In the present study, participants were asked to make happy, neutral, and disgusted facial expressions: for "happy," they held a wooden chopstick in their molars to form a smile; for "neutral," they clasped the chopstick between their lips, making no expression; for "disgusted," they put the chopstick between their upper lip and nose and knit their brows in a scowl. However, they were not asked to intentionally change their emotional state. Observers judged happy expression images as more trustworthy, competent, warm, friendly, and distinctive than disgusted expression images, regardless of the observers' own facial expression. Observers judged disgusted expression images as more dominant than happy expression images. However, observers expressing disgust overestimated dominance in observed disgusted expression images and underestimated dominance in happy expression images. In contrast, observers with happy facial forms attenuated dominance for disgusted expression images. These results suggest that dominance inferred from facial expressions is unstable and influenced by not only the observed facial expression, but also the observers' own physiological states.

  6. Deficits in the Mimicry of Facial Expressions in Parkinson's Disease

    PubMed Central

    Livingstone, Steven R.; Vezer, Esztella; McGarry, Lucy M.; Lang, Anthony E.; Russo, Frank A.

    2016-01-01

    Background: Humans spontaneously mimic the facial expressions of others, facilitating social interaction. This mimicking behavior may be impaired in individuals with Parkinson's disease, for whom the loss of facial movements is a clinical feature. Objective: To assess the presence of facial mimicry in patients with Parkinson's disease. Method: Twenty-seven non-depressed patients with idiopathic Parkinson's disease and 28 age-matched controls had their facial muscles recorded with electromyography while they observed presentations of calm, happy, sad, angry, and fearful emotions. Results: Patients exhibited reduced amplitude and delayed onset in the zygomaticus major muscle region (smiling response) following happy presentations (patients M = 0.02, 95% confidence interval [CI] −0.15 to 0.18, controls M = 0.26, CI 0.14 to 0.37, ANOVA, effect size [ES] = 0.18, p < 0.001). Although patients exhibited activation of the corrugator supercilii and medial frontalis (frowning response) following sad and fearful presentations, the frontalis response to sad presentations was attenuated relative to controls (patients M = 0.05, CI −0.08 to 0.18, controls M = 0.21, CI 0.09 to 0.34, ANOVA, ES = 0.07, p = 0.017). The amplitude of patients' zygomaticus activity in response to positive emotions was found to be negatively correlated with response times for ratings of emotional identification, suggesting a motor-behavioral link (r = –0.45, p = 0.02, two-tailed). Conclusions: Patients showed decreased mimicry overall, mimicking other people's frowns to some extent, but presenting with profoundly weakened and delayed smiles. These findings open a new avenue of inquiry into the “masked face” syndrome of PD. PMID:27375505
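
    Onset latencies and amplitudes of the kind reported above are typically extracted from rectified, smoothed EMG by finding the first excursion above a baseline-derived threshold. A rough sketch under those assumptions follows; the sampling rate, smoothing window, and threshold rule are illustrative choices, not the study's published pipeline.

```python
import numpy as np

def emg_onset_and_amplitude(emg, fs=1000.0, win_ms=50, k=3.0, baseline_s=0.5):
    """Estimate response onset (s after stimulus) and mean response amplitude.

    emg        : 1-D raw EMG trace, with stimulus onset at t = baseline_s
    fs         : sampling rate in Hz
    win_ms     : moving-average smoothing window
    k          : onset threshold = baseline mean + k * baseline SD
    """
    rect = np.abs(emg - emg.mean())                  # demean and rectify
    win = max(1, int(fs * win_ms / 1000))
    smooth = np.convolve(rect, np.ones(win) / win, mode="same")
    n_base = int(baseline_s * fs)
    base_mean, base_sd = smooth[:n_base].mean(), smooth[:n_base].std()
    above = np.nonzero(smooth[n_base:] > base_mean + k * base_sd)[0]
    if above.size == 0:
        return None, smooth[n_base:].mean()          # no detectable response
    onset_s = above[0] / fs                          # latency after stimulus
    return onset_s, smooth[n_base + above[0]:].mean()

# Toy trace: 0.5 s of baseline noise, then a burst 300 ms after the stimulus.
rng = np.random.default_rng(1)
sig = rng.normal(0, 1, 1500)
sig[800:1200] += rng.normal(0, 6, 400)
print(emg_onset_and_amplitude(sig))   # onset near 0.3 s
```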

  7. Proposal of Self-Learning and Recognition System of Facial Expression

    NASA Astrophysics Data System (ADS)

    Ogawa, Yukihiro; Kato, Kunihito; Yamamoto, Kazuhiko

    We describe the realization of a more complex capability built from information acquired by several simple, built-in functions. We propose a self-learning and recognition system for human facial expressions that operates within a natural relationship between human and robot. A robot equipped with this system can understand human facial expressions and behave according to them once the learning process is complete. The system is modelled after the process by which a baby learns its parents' facial expressions. A camera mounted on the robot provides face images, and CdS sensors on the robot's head provide information about human actions. Using the information from these sensors, the robot acquires features of each facial expression. After self-learning is complete, when a person changes his or her facial expression in front of the robot, the robot performs the action associated with that expression.

  8. Attentional processing of other's facial display of pain: an eye tracking study.

    PubMed

    Vervoort, Tine; Trost, Zina; Prkachin, Kenneth M; Mueller, Sven C

    2013-06-01

    The present study investigated the role of observer pain catastrophizing and personal pain experience as possible moderators of attention to varying levels of facial pain expression in others. Eye movements were recorded as a direct and continuous index of attention allocation in a sample of 35 undergraduate students while they viewed slides presenting picture pairs consisting of a neutral face combined with a pain face of low, moderate, or high expressiveness. Initial orienting of attention was measured as latency and duration of first fixation to 1 of 2 target images (i.e., neutral face vs pain face). Attentional maintenance was measured by gaze duration. With respect to initial orienting to pain, findings indicated that participants reporting low catastrophizing directed their attention more quickly to pain faces than to neutral faces, with fixation becoming increasingly faster with increasing levels of facial pain expression. In comparison, participants reporting high levels of catastrophizing showed a decreased tendency to initially orient to pain faces, fixating equally quickly on neutral and pain faces. Duration of the first fixation revealed no significant effects. With respect to attentional maintenance, participants reporting high catastrophizing and pain intensity demonstrated significantly longer gaze duration for all face types (neutral and pain expression), relative to their low-catastrophizing counterparts. Finally, independent of catastrophizing, higher reported pain intensity contributed to decreased attentional maintenance to pain faces vs neutral faces. Theoretical implications and further research directions are discussed. Copyright © 2013 International Association for the Study of Pain. Published by Elsevier B.V. All rights reserved.

  9. Cortical orofacial motor representation in Old World monkeys, great apes, and humans. I. Quantitative analysis of cytoarchitecture.

    PubMed

    Sherwood, Chet C; Holloway, Ralph L; Erwin, Joseph M; Schleicher, Axel; Zilles, Karl; Hof, Patrick R

    2004-01-01

    Social life in anthropoid primates is mediated by interindividual communication, involving movements of the orofacial muscles for the production of vocalization and gestural expression. Although phylogenetic diversity has been reported in the auditory and visual communication systems of primates, little is known about the comparative neuroanatomy that subserves orofacial movement. The current study reports results from quantitative image analysis of the region corresponding to orofacial representation of primary motor cortex (Brodmann's area 4) in several catarrhine primate species (Macaca fascicularis, Papio anubis, Pongo pygmaeus, Gorilla gorilla, Pan troglodytes, and Homo sapiens) using the Grey Level Index method. This cortical region has been implicated in the execution of skilled motor activities such as voluntary facial expression and human speech. Density profiles of the laminar distribution of Nissl-stained neuronal somata were acquired from high-resolution images to quantify cytoarchitectural patterns. Despite general similarity in these profiles across catarrhines, multivariate analysis showed that cytoarchitectural patterns of individuals were more similar within-species versus between-species. Compared to Old World monkeys, the orofacial representation of area 4 in great apes and humans was characterized by an increased relative thickness of layer III and overall lower cell volume densities, providing more neuropil space for interconnections. These phylogenetic differences in microstructure might provide an anatomical substrate for the evolution of greater volitional fine motor control of facial expressions in great apes and humans. Copyright 2004 S. Karger AG, Basel

  10. Impaired Overt Facial Mimicry in Response to Dynamic Facial Expressions in High-Functioning Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Yoshimura, Sayaka; Sato, Wataru; Uono, Shota; Toichi, Motomi

    2015-01-01

    Previous electromyographic studies have reported that individuals with autism spectrum disorders (ASD) exhibited atypical patterns of facial muscle activity in response to facial expression stimuli. However, whether such activity is expressed in visible facial mimicry remains unknown. To investigate this issue, we videotaped facial responses in…

  11. Face Generation Using Emotional Regions for Sensibility Robot

    NASA Astrophysics Data System (ADS)

    Gotoh, Minori; Kanoh, Masayoshi; Kato, Shohei; Kunitachi, Tsutomu; Itoh, Hidenori

    We think that psychological interaction is necessary for smooth communication between robots and people. One way to psychologically interact with others is through facial expressions. Facial expressions are very important for communication because they show true emotions and feelings. The “Ifbot” robot communicates with people by considering its own “emotions”. Ifbot has many facial expressions to communicate enjoyment. We developed a method for generating facial expressions based on human subjective judgements that map Ifbot's facial expressions to its emotions. We first created Ifbot's emotional space, onto which its facial expressions are mapped, by applying a five-layer auto-associative neural network. We then subjectively evaluated the emotional space and created emotional regions based on the results. Emotive facial expressions are generated from these emotional regions.
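
    The five-layer auto-associative network described here is, in modern terms, an autoencoder whose low-dimensional middle layer forms the emotional space. A minimal sketch is given below; the layer sizes, the two-dimensional bottleneck, and the random training data are assumptions for illustration, not the paper's configuration.

```python
import torch
import torch.nn as nn

# Five-layer auto-associative (autoencoder) network: facial-expression
# parameter vectors are squeezed through a low-dimensional bottleneck that
# plays the role of the emotional space.
N_FACE_PARAMS = 20   # hypothetical number of facial-expression parameters
N_EMOTION_DIMS = 2   # assumed dimensionality of the emotional space

encoder = nn.Sequential(nn.Linear(N_FACE_PARAMS, 10), nn.Tanh(),
                        nn.Linear(10, N_EMOTION_DIMS))
decoder = nn.Sequential(nn.Linear(N_EMOTION_DIMS, 10), nn.Tanh(),
                        nn.Linear(10, N_FACE_PARAMS))
model = nn.Sequential(encoder, decoder)

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
faces = torch.rand(200, N_FACE_PARAMS)   # stand-in expression data
for _ in range(500):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(faces), faces)  # reconstruct input
    loss.backward()
    opt.step()

with torch.no_grad():
    emotional_space = encoder(faces)      # coordinates of each expression
print(emotional_space.shape)              # torch.Size([200, 2])
```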

  12. Compound facial expressions of emotion: from basic research to clinical applications

    PubMed Central

    Du, Shichuan; Martinez, Aleix M.

    2015-01-01

    Emotions are sometimes revealed through facial expressions. When these natural facial articulations involve the contraction of the same muscle groups in people of distinct cultural upbringings, this is taken as evidence of a biological origin of these emotions. While past research had identified facial expressions associated with a single internally felt category (e.g., the facial expression of happiness when we feel joyful), we have recently studied facial expressions observed when people experience compound emotions (e.g., the facial expression of happy surprise when we feel joyful in a surprised way, as, for example, at a surprise birthday party). Our research has identified 17 compound expressions consistently produced across cultures, suggesting that the number of facial expressions of emotion of biological origin is much larger than previously believed. The present paper provides an overview of these findings and shows evidence supporting the view that spontaneous expressions are produced using the same facial articulations previously identified in laboratory experiments. We also discuss the implications of our results in the study of psychopathologies, and consider several open research questions. PMID:26869845

  13. The MPI Facial Expression Database — A Validated Database of Emotional and Conversational Facial Expressions

    PubMed Central

    Kaulard, Kathrin; Cunningham, Douglas W.; Bülthoff, Heinrich H.; Wallraven, Christian

    2012-01-01

    The ability to communicate is one of the core aspects of human life. For this, we use not only verbal but also nonverbal signals of remarkable complexity. Among the latter, facial expressions belong to the most important information channels. Despite the large variety of facial expressions we use in daily life, research on facial expressions has so far mostly focused on the emotional aspect. Consequently, most databases of facial expressions available to the research community also include only emotional expressions, neglecting the largely unexplored aspect of conversational expressions. To fill this gap, we present the MPI facial expression database, which contains a large variety of natural emotional and conversational expressions. The database contains 55 different facial expressions performed by 19 German participants. Expressions were elicited with the help of a method-acting protocol, which guarantees both well-defined and natural facial expressions. The method-acting protocol was based on every-day scenarios, which are used to define the necessary context information for each expression. All facial expressions are available in three repetitions, in two intensities, as well as from three different camera angles. A detailed frame annotation is provided, from which a dynamic and a static version of the database have been created. In addition to describing the database in detail, we also present the results of an experiment with two conditions that serve to validate the context scenarios as well as the naturalness and recognizability of the video sequences. Our results provide clear evidence that conversational expressions can be recognized surprisingly well from visual information alone. The MPI facial expression database will enable researchers from different research fields (including the perceptual and cognitive sciences, but also affective computing, as well as computer vision) to investigate the processing of a wider range of natural facial expressions. PMID:22438875

  14. Cerebellum and processing of negative facial emotions: cerebellar transcranial DC stimulation specifically enhances the emotional recognition of facial anger and sadness.

    PubMed

    Ferrucci, Roberta; Giannicola, Gaia; Rosa, Manuela; Fumagalli, Manuela; Boggio, Paulo Sergio; Hallett, Mark; Zago, Stefano; Priori, Alberto

    2012-01-01

    Some evidence suggests that the cerebellum participates in the complex network processing emotional facial expression. To evaluate the role of the cerebellum in recognising facial expressions we delivered transcranial direct current stimulation (tDCS) over the cerebellum and prefrontal cortex. A facial emotion recognition task was administered to 21 healthy subjects before and after cerebellar tDCS; we also tested subjects with a visual attention task and a visual analogue scale (VAS) for mood. Anodal and cathodal cerebellar tDCS both significantly enhanced sensory processing in response to negative facial expressions (anodal tDCS, p=.0021; cathodal tDCS, p=.018), but left positive emotion and neutral facial expressions unchanged (p>.05). tDCS over the right prefrontal cortex left facial expressions of both negative and positive emotion unchanged. These findings suggest that the cerebellum is specifically involved in processing facial expressions of negative emotion.

  15. When Age Matters: Differences in Facial Mimicry and Autonomic Responses to Peers' Emotions in Teenagers and Adults

    PubMed Central

    Ardizzi, Martina; Sestito, Mariateresa; Martini, Francesca; Umiltà, Maria Alessandra; Ravera, Roberto; Gallese, Vittorio

    2014-01-01

    Age-group membership effects on the explicit recognition of emotional facial expressions have been widely demonstrated. In this study we investigated whether age-group membership also affects implicit physiological responses, such as facial mimicry and autonomic regulation, to the observation of emotional facial expressions. To this aim, facial electromyography (EMG) and respiratory sinus arrhythmia (RSA) were recorded from teenage and adult participants during the observation of facial expressions performed by teenage and adult models. Results highlighted that teenagers exhibited greater facial EMG responses to peers' facial expressions, whereas adults showed higher RSA responses to adult facial expressions. The different physiological modalities through which teenagers and adults respond to peers' emotional expressions are likely to reflect two different ways of engaging in social interactions with people of one's own age. The findings confirm that age is an important and powerful social feature that modulates interpersonal interactions by influencing low-level physiological responses. PMID:25337916

  16. The look of fear and anger: facial maturity modulates recognition of fearful and angry expressions.

    PubMed

    Sacco, Donald F; Hugenberg, Kurt

    2009-02-01

    The current series of studies provide converging evidence that facial expressions of fear and anger may have co-evolved to mimic mature and babyish faces in order to enhance their communicative signal. In Studies 1 and 2, fearful and angry facial expressions were manipulated to have enhanced babyish features (larger eyes) or enhanced mature features (smaller eyes) and in the context of a speeded categorization task in Study 1 and a visual noise paradigm in Study 2, results indicated that larger eyes facilitated the recognition of fearful facial expressions, while smaller eyes facilitated the recognition of angry facial expressions. Study 3 manipulated facial roundness, a stable structure that does not vary systematically with expressions, and found that congruency between maturity and expression (narrow face-anger; round face-fear) facilitated expression recognition accuracy. Results are discussed as representing a broad co-evolutionary relationship between facial maturity and fearful and angry facial expressions. (c) 2009 APA, all rights reserved

  17. Emotional facial activation induced by unconsciously perceived dynamic facial expressions.

    PubMed

    Kaiser, Jakob; Davey, Graham C L; Parkhouse, Thomas; Meeres, Jennifer; Scott, Ryan B

    2016-12-01

    Do facial expressions of emotion influence us when not consciously perceived? Methods to investigate this question have typically relied on brief presentation of static images. In contrast, real facial expressions are dynamic and unfold over several seconds. Recent studies demonstrate that gaze contingent crowding (GCC) can block awareness of dynamic expressions while still inducing behavioural priming effects. The current experiment tested for the first time whether dynamic facial expressions presented using this method can induce unconscious facial activation. Videos of dynamic happy and angry expressions were presented outside participants' conscious awareness while EMG measurements captured activation of the zygomaticus major (active when smiling) and the corrugator supercilii (active when frowning). Forced-choice classification of expressions confirmed they were not consciously perceived, while EMG revealed significant differential activation of facial muscles consistent with the expressions presented. This successful demonstration opens new avenues for research examining the unconscious emotional influences of facial expressions. Copyright © 2016 Elsevier B.V. All rights reserved.

  18. Social Use of Facial Expressions in Hylobatids

    PubMed Central

    Scheider, Linda; Waller, Bridget M.; Oña, Leonardo; Burrows, Anne M.; Liebal, Katja

    2016-01-01

    Non-human primates use various communicative means in interactions with others. While primate gestures are commonly considered to be intentionally and flexibly used signals, facial expressions are often referred to as inflexible, automatic expressions of affective internal states. To explore whether and how non-human primates use facial expressions in specific communicative interactions, we studied five species of small apes (gibbons) by employing a newly established Facial Action Coding System for hylobatid species (GibbonFACS). We found that, despite individuals often being in close proximity to each other, in social (as opposed to non-social) contexts the duration of facial expressions was significantly longer when gibbons were facing another individual than when they were not. Social contexts included grooming, agonistic interactions and play, whereas non-social contexts included resting and self-grooming. Additionally, gibbons used facial expressions while facing another individual more often in social contexts than in non-social contexts, where facial expressions were produced regardless of the attentional state of the partner. Facial expressions were also more likely to be ‘responded to’ by the partner’s facial expressions when individuals were facing each other than when they were not. Taken together, our results indicate that gibbons use their facial expressions differentially depending on the social context and are able to use them in a directed way in communicative interactions with other conspecifics. PMID:26978660

  19. Neurophysiology of spontaneous facial expressions: I. Motor control of the upper and lower face is behaviorally independent in adults.

    PubMed

    Ross, Elliott D; Gupta, Smita S; Adnan, Asif M; Holden, Thomas L; Havlicek, Joseph; Radhakrishnan, Sridhar

    2016-03-01

    Facial expressions are described traditionally as monolithic entities. However, humans have the capacity to produce facial blends, in which the upper and lower face simultaneously display different emotional expressions. This, in turn, has led to the Component Theory of facial expressions. Recent neuroanatomical studies in monkeys have demonstrated that there are separate cortical motor areas for controlling the upper and lower face that, presumably, also occur in humans. The lower face is represented on the posterior ventrolateral surface of the frontal lobes in the primary motor and premotor cortices and the upper face is represented on the medial surface of the posterior frontal lobes in the supplementary motor and anterior cingulate cortices. Our laboratory has been engaged in a series of studies exploring the perception and production of facial blends. Using high-speed videography, we began measuring the temporal aspects of facial expressions to develop a more complete understanding of the neurophysiology underlying facial expressions and facial blends. The goal of the research presented here was to determine if spontaneous facial expressions in adults are predominantly monolithic or exhibit independent motor control of the upper and lower face. We found that spontaneous facial expressions are very complex and that the motor control of the upper and lower face is overwhelmingly independent, thus robustly supporting the Component Theory of facial expressions. Seemingly monolithic expressions, be they full facial or facial blends, are most likely the result of a timing coincidence rather than a synchronous coordination between the ventrolateral and medial cortical motor areas responsible for controlling the lower and upper face, respectively. In addition, we found evidence that the right and left face may also exhibit independent motor control, thus supporting the concept that spontaneous facial expressions are organized predominantly across the horizontal facial axis and secondarily across the vertical axis. Published by Elsevier Ltd.

  20. The influence of context on distinct facial expressions of disgust.

    PubMed

    Reschke, Peter J; Walle, Eric A; Knothe, Jennifer M; Lopez, Lukas D

    2018-06-11

    Face perception is susceptible to contextual influence and perceived physical similarities between emotion cues. However, studies often use structurally homogeneous facial expressions, making it difficult to explore how within-emotion variability in facial configuration affects emotion perception. This study examined the influence of context on the emotional perception of categorically identical, yet physically distinct, facial expressions of disgust. Participants categorized two perceptually distinct disgust facial expressions, "closed" (i.e., scrunched nose, closed mouth) and "open" (i.e., scrunched nose, open mouth, protruding tongue), that were embedded in contexts comprising emotion postures and scenes. Results demonstrated that the effect of nonfacial elements was significantly stronger for "open" disgust facial expressions than "closed" disgust facial expressions. These findings provide support that physical similarity within discrete categories of facial expressions is mutable and plays an important role in affective face perception. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  1. More emotional facial expressions during episodic than during semantic autobiographical retrieval.

    PubMed

    El Haj, Mohamad; Antoine, Pascal; Nandrino, Jean Louis

    2016-04-01

    There is a substantial body of research on the relationship between emotion and autobiographical memory. Using facial analysis software, our study addressed this relationship by investigating basic emotional facial expressions that may be detected during autobiographical recall. Participants were asked to retrieve 3 autobiographical memories, each of which was triggered by one of the following cue words: happy, sad, and city. The autobiographical recall was analyzed by a software for facial analysis that detects and classifies basic emotional expressions. Analyses showed that emotional cues triggered the corresponding basic facial expressions (i.e., happy facial expression for memories cued by happy). Furthermore, we dissociated episodic and semantic retrieval, observing more emotional facial expressions during episodic than during semantic retrieval, regardless of the emotional valence of cues. Our study provides insight into facial expressions that are associated with emotional autobiographical memory. It also highlights an ecological tool to reveal physiological changes that are associated with emotion and memory.

  2. Scales of degree of facial paralysis: analysis of agreement.

    PubMed

    Fonseca, Kércia Melo de Oliveira; Mourão, Aline Mansueto; Motta, Andréa Rodrigues; Vicente, Laelia Cristina Caseiro

    2015-01-01

    It has become common in speech-language pathology (phonoaudiology) clinics to use scales to measure the degree of involvement in facial paralysis. The aims were to analyze the inter- and intra-rater agreement of scales of degree of facial paralysis and to elicit the appraisers' points of view regarding their use. This was a cross-sectional observational clinical study of the Chevalier and House & Brackmann scales performed by five speech therapists with clinical experience, who analyzed the facial expressions of 30 adult subjects with impaired facial movements twice, with a one-week interval between evaluations. Kappa analysis was employed. There was excellent inter-rater agreement for both scales (kappa>0.80); on the Chevalier scale there was substantial intra-rater agreement in the first assessment (kappa=0.792) and excellent agreement in the second assessment (kappa=0.928). The House & Brackmann scale showed excellent agreement at both assessments (kappa=0.850 and 0.857). As for the appraisers' points of view, one appraiser thought prior training is necessary for the Chevalier scale, and four appraisers felt that training is important for the House & Brackmann scale. Both scales have good inter- and intra-rater agreement, and most of the appraisers agree on the ease and relevance of applying these scales. Copyright © 2014 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.
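
    Cohen's kappa, the agreement statistic used throughout this study, can be computed directly from two raters' gradings. The sketch below implements the plain unweighted statistic (the abstract does not state which variant the authors used) on hypothetical House-Brackmann grades.

```python
from collections import Counter

def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters' categorical gradings."""
    assert len(r1) == len(r2)
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n      # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    pe = sum(c1[c] * c2[c] for c in set(c1) | set(c2)) / n ** 2  # by chance
    return (po - pe) / (1 - pe)

# Hypothetical House-Brackmann grades (I-VI) from two raters on 10 patients.
rater_a = [2, 3, 3, 4, 2, 5, 3, 6, 4, 2]
rater_b = [2, 3, 4, 4, 2, 5, 3, 6, 4, 3]
print(cohens_kappa(rater_a, rater_b))
# ~0.74: substantial, below the 0.80 cut-off the study treats as excellent
```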

  3. Factors contributing to the adaptation aftereffects of facial expression.

    PubMed

    Butler, Andrea; Oruc, Ipek; Fox, Christopher J; Barton, Jason J S

    2008-01-29

    Previous studies have demonstrated the existence of adaptation aftereffects for facial expressions. Here we investigated which aspects of facial stimuli contribute to these aftereffects. In Experiment 1, we examined the role of local adaptation to image elements such as curvature, shape and orientation, independent of expression, by using hybrid faces constructed from either the same or opposing expressions. While hybrid faces made with consistent expressions generated aftereffects as large as those with normal faces, there were no aftereffects from hybrid faces made from different expressions, despite the fact that these contained the same local image elements. In Experiment 2, we examined the role of facial features independent of the normal face configuration by contrasting adaptation with whole faces to adaptation with scrambled faces. We found that scrambled faces also generated significant aftereffects, indicating that expressive features without a normal facial configuration could generate expression aftereffects. In Experiment 3, we examined the role of facial configuration by using schematic faces made from line elements that in isolation do not carry expression-related information (e.g. curved segments and straight lines) but that convey an expression when arranged in a normal facial configuration. We obtained a significant aftereffect for facial configurations but not scrambled configurations of these line elements. We conclude that facial expression aftereffects are not due to local adaptation to image elements but due to high-level adaptation of neural representations that involve both facial features and facial configuration.

  4. Structural and temporal requirements of Wnt/PCP protein Vangl2 function for convergence and extension movements and facial branchiomotor neuron migration in zebrafish.

    PubMed

    Pan, Xiufang; Sittaramane, Vinoth; Gurung, Suman; Chandrasekhar, Anand

    2014-02-01

    Van gogh-like 2 (Vangl2), a core component of the Wnt/planar cell polarity (PCP) signaling pathway, is a four-pass transmembrane protein with N-terminal and C-terminal domains located in the cytosol, and is structurally conserved from flies to mammals. In vertebrates, Vangl2 plays an essential role in convergence and extension (CE) movements during gastrulation and in facial branchiomotor (FBM) neuron migration in the hindbrain. However, the roles of specific Vangl2 domains, of membrane association, and of specific extracellular and intracellular motifs have not been examined, especially in the context of FBM neuron migration. Through heat shock-inducible expression of various Vangl2 transgenes, we found that membrane associated functions of the N-terminal and C-terminal domains of Vangl2 are involved in regulating FBM neuron migration. Importantly, through temperature shift experiments, we found that the critical period for Vangl2 function coincides with the initial stages of FBM neuron migration out of rhombomere 4. Intriguingly, we have also uncovered a putative nuclear localization motif in the C-terminal domain that may play a role in regulating CE movements. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  5. A computational model of the development of separate representations of facial identity and expression in the primate visual system.

    PubMed

    Tromans, James Matthew; Harris, Mitchell; Stringer, Simon Maitland

    2011-01-01

    Experimental studies have provided evidence that the visual processing areas of the primate brain represent facial identity and facial expression within different subpopulations of neurons. For example, in non-human primates there is evidence that cells within the inferior temporal gyrus (TE) respond primarily to facial identity, while cells within the superior temporal sulcus (STS) respond to facial expression. More recently, it has been found that the orbitofrontal cortex (OFC) of non-human primates contains some cells that respond exclusively to changes in facial identity, while other cells respond exclusively to facial expression. How might the primate visual system develop physically separate representations of facial identity and expression given that the visual system is always exposed to simultaneous combinations of facial identity and expression during learning? In this paper, a biologically plausible neural network model, VisNet, of the ventral visual pathway is trained on a set of carefully designed cartoon faces with different identities and expressions. The VisNet model architecture is composed of a hierarchical series of four Self-Organising Maps (SOMs), with associative learning in the feedforward synaptic connections between successive layers. During learning, the network develops separate clusters of cells that respond exclusively to either facial identity or facial expression. We interpret the performance of the network in terms of the learning properties of SOMs, which are able to exploit the statistical independence between facial identity and expression.
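
    VisNet stacks four self-organising maps with associative feedforward learning between layers; the sketch below shows a single SOM layer only, with an assumed grid size and decay schedules, to make the competitive-learning mechanism concrete.

```python
import numpy as np

def train_som(data, grid=(8, 8), epochs=20, lr0=0.5, sigma0=3.0, seed=0):
    """One self-organising map (SOM) layer: competitive learning with a
    Gaussian neighbourhood that shrinks over training."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.random((h * w, data.shape[1]))
    # (row, col) coordinate of every map unit, for neighbourhood distances
    coords = np.array([(i, j) for i in range(h) for j in range(w)], float)
    n_steps, t = epochs * len(data), 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            frac = t / n_steps
            lr = lr0 * (1 - frac)                      # decaying learning rate
            sigma = sigma0 * (1 - frac) + 0.5          # decaying neighbourhood
            bmu = np.argmin(((weights - x) ** 2).sum(axis=1))  # best match
            d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
            nb = np.exp(-d2 / (2 * sigma ** 2))        # neighbourhood weights
            weights += lr * nb[:, None] * (x - weights)
            t += 1
    return weights.reshape(h, w, -1)

# Toy input: 500 random "face feature" vectors of dimension 16.
som = train_som(np.random.default_rng(1).random((500, 16)))
print(som.shape)  # (8, 8, 16)
```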

  6. The not face: A grammaticalization of facial expressions of emotion.

    PubMed

    Benitez-Quiroz, C Fabian; Wilbur, Ronnie B; Martinez, Aleix M

    2016-05-01

    Facial expressions of emotion are thought to have evolved from the development of facial muscles used in sensory regulation and later adapted to express moral judgment. Negative moral judgment includes the expressions of anger, disgust and contempt. Here, we study the hypothesis that these facial expressions of negative moral judgment have further evolved into a facial expression of negation regularly used as a grammatical marker in human language. Specifically, we show that people from different cultures expressing negation use the same facial muscles as those employed to express negative moral judgment. We then show that this nonverbal signal is used as a co-articulator in speech and that, in American Sign Language, it has been grammaticalized as a non-manual marker. Furthermore, this facial expression of negation exhibits the theta oscillation (3-8 Hz) universally seen in syllable and mouthing production in speech and signing. These results provide evidence for the hypothesis that some components of human language have evolved from facial expressions of emotion, and suggest an evolutionary route for the emergence of grammatical markers. Copyright © 2016 Elsevier B.V. All rights reserved.

  7. The Not Face: A grammaticalization of facial expressions of emotion

    PubMed Central

    Benitez-Quiroz, C. Fabian; Wilbur, Ronnie B.; Martinez, Aleix M.

    2016-01-01

    Facial expressions of emotion are thought to have evolved from the development of facial muscles used in sensory regulation and later adapted to express moral judgment. Negative moral judgment includes the expressions of anger, disgust and contempt. Here, we study the hypothesis that these facial expressions of negative moral judgment have further evolved into a facial expression of negation regularly used as a grammatical marker in human language. Specifically, we show that people from different cultures expressing negation use the same facial muscles as those employed to express negative moral judgment. We then show that this nonverbal signal is used as a co-articulator in speech and that, in American Sign Language, it has been grammaticalized as a non-manual marker. Furthermore, this facial expression of negation exhibits the theta oscillation (3–8 Hz) universally seen in syllable and mouthing production in speech and signing. These results provide evidence for the hypothesis that some components of human language have evolved from facial expressions of emotion, and suggest an evolutionary route for the emergence of grammatical markers. PMID:26872248

  8. Facial Expression Generation from Speaker's Emotional States in Daily Conversation

    NASA Astrophysics Data System (ADS)

    Mori, Hiroki; Ohshima, Koh

    A framework for generating facial expressions from emotional states in daily conversation is described. It provides a mapping between emotional states and facial expressions, where the former are represented by vectors on psychologically defined abstract dimensions and the latter are coded with the Facial Action Coding System. In order to obtain the mapping, parallel data with rated emotional states and facial expressions were collected for utterances of a female speaker, and a neural network was trained with the data. The effectiveness of the proposed method is verified by a subjective evaluation test. As a result, the Mean Opinion Score with respect to the suitability of the generated facial expressions was 3.86 for the speaker, close to that of hand-made facial expressions.
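
    The mapping learned here, from emotion vectors on abstract psychological dimensions to action-unit codes, is a small multi-output regression problem. A sketch with stand-in random data follows; the dimension count, AU count, and network size are assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical set-up: emotional states as 3-D vectors on abstract
# psychological dimensions, mapped to intensities of 8 FACS action units.
# The random training pairs stand in for the paper's rated-emotion /
# FACS-coded parallel data, which are not public.
rng = np.random.default_rng(0)
emotion_states = rng.uniform(-1, 1, size=(300, 3))   # rated emotion vectors
au_intensities = rng.uniform(0, 1, size=(300, 8))    # coded AU activations

net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
net.fit(emotion_states, au_intensities)              # learn the mapping

# Generate an AU-intensity vector (a facial expression) for a new state.
print(net.predict([[0.7, 0.3, -0.2]]))
```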

  9. Evidence for Anger Saliency during the Recognition of Chimeric Facial Expressions of Emotions in Underage Ebola Survivors

    PubMed Central

    Ardizzi, Martina; Evangelista, Valentina; Ferroni, Francesca; Umiltà, Maria A.; Ravera, Roberto; Gallese, Vittorio

    2017-01-01

    One of the crucial features defining basic emotions and their prototypical facial expressions is their value for survival. Childhood traumatic experiences affect the effective recognition of facial expressions of negative emotions, which normally allows the recruitment of adequate behavioral responses to environmental threats. Specifically, anger becomes an extraordinarily salient stimulus, unbalancing victims’ recognition of negative emotions. Despite the plethora of studies on this topic, to date it is not clear whether this phenomenon reflects an overall response tendency toward anger recognition or a selective proneness to the salience of specific facial expressive cues of anger after trauma exposure. To address this issue, a group of underage Sierra Leonean Ebola virus disease survivors (mean age 15.40 years, SE 0.35; years of schooling 8.8 years, SE 0.46; 14 males) and a control group (mean age 14.55, SE 0.30; years of schooling 8.07 years, SE 0.30, 15 males) performed a forced-choice chimeric facial expressions recognition task. The chimeric facial expressions were obtained by pairing the upper and lower half faces of two different negative emotions (selected from anger, fear and sadness, for a total of six different combinations). Overall, results showed that upper facial expressive cues were more salient than lower facial expressive cues. This priority was lost among Ebola virus disease survivors for the chimeric facial expressions of anger. In this case, differently from controls, Ebola virus disease survivors recognized anger regardless of the upper or lower position of the facial expressive cues of this emotion. The present results demonstrate that victims’ performance in the recognition of the facial expression of anger does not reflect an overall response tendency toward anger recognition, but rather the specific greater salience of facial expressive cues of anger. Furthermore, the present results show that traumatic experiences deeply modify the perceptual analysis of phylogenetically old behavioral patterns such as the facial expressions of emotions. PMID:28690565

  10. Evidence for Anger Saliency during the Recognition of Chimeric Facial Expressions of Emotions in Underage Ebola Survivors.

    PubMed

    Ardizzi, Martina; Evangelista, Valentina; Ferroni, Francesca; Umiltà, Maria A; Ravera, Roberto; Gallese, Vittorio

    2017-01-01

    One of the crucial features defining basic emotions and their prototypical facial expressions is their value for survival. Childhood traumatic experiences affect the effective recognition of facial expressions of negative emotions, which normally allows the recruitment of adequate behavioral responses to environmental threats. Specifically, anger becomes an extraordinarily salient stimulus, unbalancing victims' recognition of negative emotions. Despite the plethora of studies on this topic, to date it is not clear whether this phenomenon reflects an overall response tendency toward anger recognition or a selective proneness to the salience of specific facial expressive cues of anger after trauma exposure. To address this issue, a group of underage Sierra Leonean Ebola virus disease survivors (mean age 15.40 years, SE 0.35; years of schooling 8.8 years, SE 0.46; 14 males) and a control group (mean age 14.55, SE 0.30; years of schooling 8.07 years, SE 0.30, 15 males) performed a forced-choice chimeric facial expressions recognition task. The chimeric facial expressions were obtained by pairing the upper and lower half faces of two different negative emotions (selected from anger, fear and sadness, for a total of six different combinations). Overall, results showed that upper facial expressive cues were more salient than lower facial expressive cues. This priority was lost among Ebola virus disease survivors for the chimeric facial expressions of anger. In this case, differently from controls, Ebola virus disease survivors recognized anger regardless of the upper or lower position of the facial expressive cues of this emotion. The present results demonstrate that victims' performance in the recognition of the facial expression of anger does not reflect an overall response tendency toward anger recognition, but rather the specific greater salience of facial expressive cues of anger. Furthermore, the present results show that traumatic experiences deeply modify the perceptual analysis of phylogenetically old behavioral patterns such as the facial expressions of emotions.

  11. Psychometric challenges and proposed solutions when scoring facial emotion expression codes.

    PubMed

    Olderbak, Sally; Hildebrandt, Andrea; Pinkpank, Thomas; Sommer, Werner; Wilhelm, Oliver

    2014-12-01

    Coding of facial emotion expressions is increasingly performed by automated emotion expression scoring software; however, there is limited discussion on how best to score the resulting codes. We present a discussion of facial emotion expression theories and a review of contemporary emotion expression coding methodology. We highlight methodological challenges pertinent to scoring software-coded facial emotion expression codes and present important psychometric research questions centered on comparing competing scoring procedures of these codes. Then, on the basis of a time series data set collected to assess individual differences in facial emotion expression ability, we derive, apply, and evaluate several statistical procedures, including four scoring methods and four data treatments, to score software-coded emotion expression data. These scoring procedures are illustrated to inform analysis decisions pertaining to the scoring and data treatment of other emotion expression questions and under different experimental circumstances. Overall, we found that applying loess smoothing and controlling for baseline facial emotion expression and facial plasticity are the recommended data treatments. When scoring facial emotion expression ability, the maximum score is preferred. Finally, we discuss the scoring methods and data treatments in the larger context of emotion expression research.
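
    The recommended pipeline of loess smoothing, baseline control, and maximum scoring can be sketched on a single time series as below; the smoothing span and toy data are illustrative, and the facial-plasticity control mentioned in the abstract is omitted for brevity.

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

def max_expression_score(t, raw, baseline_mask, frac=0.2):
    """Score one software-coded expression time series.

    t             : time stamps
    raw           : frame-by-frame expression intensity codes
    baseline_mask : boolean mask marking the neutral/baseline frames
    frac          : loess smoothing span (an illustrative choice)
    """
    smooth = lowess(raw, t, frac=frac, return_sorted=False)  # loess smoothing
    corrected = smooth - smooth[baseline_mask].mean()        # baseline control
    return corrected.max()                                   # maximum score

# Toy series: 1 s of flat baseline followed by a smile-like rise and fall.
t = np.linspace(0, 5, 250)
raw = (0.1 + 0.6 * np.exp(-(t - 3) ** 2)
       + np.random.default_rng(2).normal(0, 0.05, 250))
print(max_expression_score(t, raw, baseline_mask=t < 1))
```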

  12. Facial morphology and children's categorization of facial expressions of emotions: a comparison between Asian and Caucasian faces.

    PubMed

    Gosselin, P; Larocque, C

    2000-09-01

    The effects of Asian and Caucasian facial morphology were examined by having Canadian children categorize pictures of facial expressions of basic emotions. The pictures were selected from the Japanese and Caucasian Facial Expressions of Emotion set developed by D. Matsumoto and P. Ekman (1989). Sixty children between the ages of 5 and 10 years were presented with short stories and an array of facial expressions, and were asked to point to the expression that best depicted the specific emotion experienced by the characters. The results indicated that expressions of fear and surprise were better categorized from Asian faces, whereas expressions of disgust were better categorized from Caucasian faces. These differences originated in some specific confusions between expressions.

  13. Recognizing Action Units for Facial Expression Analysis

    PubMed Central

    Tian, Ying-li; Kanade, Takeo; Cohn, Jeffrey F.

    2010-01-01

    Most automatic expression analysis systems attempt to recognize a small set of prototypic expressions, such as happiness, anger, surprise, and fear. Such prototypic expressions, however, occur rather infrequently. Human emotions and intentions are more often communicated by changes in one or a few discrete facial features. In this paper, we develop an Automatic Face Analysis (AFA) system to analyze facial expressions based on both permanent facial features (brows, eyes, mouth) and transient facial features (deepening of facial furrows) in a nearly frontal-view face image sequence. The AFA system classifies fine-grained changes in facial expression into action units (AUs) of the Facial Action Coding System (FACS), rather than a few prototypic expressions. Multistate face and facial component models are proposed for tracking and modeling the various facial features, including lips, eyes, brows, cheeks, and furrows. During tracking, detailed parametric descriptions of the facial features are extracted. With these parameters as inputs, a group of action units (neutral expression, six upper-face AUs, and 10 lower-face AUs) are recognized whether they occur alone or in combination. The system has achieved average recognition rates of 96.4 percent (95.4 percent if neutral expressions are excluded) for upper-face AUs and 96.7 percent (95.6 percent with neutral expressions excluded) for lower-face AUs. The generalizability of the system has been tested using independent image databases collected and FACS-coded for ground truth by different research teams. PMID:25210210
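
    As a rough illustration of the final stage of such a system, the sketch below feeds extracted feature parameters to a multi-label classifier that outputs co-occurring AUs (AUs can appear alone or in combination, so the decision is multi-label rather than multi-class). The parameter count, network size, and data are placeholders; this does not reproduce the AFA system's actual classifier design.

```python
# Sketch: multi-label AU recognition from parametric feature descriptions.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(7)
X = rng.normal(size=(300, 15))    # 15 hypothetical facial feature parameters
Y = rng.integers(0, 2, (300, 6))  # 6 upper-face AUs as binary multi-label targets

# MLPClassifier handles multi-label targets given a binary indicator matrix.
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X, Y)
print(clf.predict(X[:2]))         # AUs recognized alone or in combination
```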

  14. A Facial Control Method Using Emotional Parameters in Sensibility Robot

    NASA Astrophysics Data System (ADS)

    Shibata, Hiroshi; Kanoh, Masayoshi; Kato, Shohei; Kunitachi, Tsutomu; Itoh, Hidenori

    The “Ifbot” robot communicates with people by considering its own “emotions”. Ifbot has many facial expressions for communicating enjoyment. These are used to express its internal emotions, purposes, and reactions caused by external stimuli, and for entertainment such as singing songs. All of these facial expressions were developed manually by designers. Under this approach, every facial motion we want Ifbot to express must be designed by hand, which is not realistic. We have therefore developed a system that converts Ifbot's emotions into facial expressions automatically. In this paper, we propose a method for creating Ifbot's facial expressions from emotional parameters, which represent its internal emotions computationally.
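
    One simple way to realize such a mapping, sketched below, is to blend predesigned actuator poses weighted by the current emotional-parameter intensities. The pose values and emotion names here are invented for illustration and are not Ifbot's actual parameters.

```python
# Toy sketch: emotional parameters -> facial actuator pose by weighted blending.
import numpy as np

# Hypothetical actuator poses (brow, eyelid, mouth) for "pure" emotions, in [-1, 1].
POSES = {
    "joy":     np.array([0.3, 0.8, 0.9]),
    "sadness": np.array([-0.6, -0.4, -0.7]),
    "anger":   np.array([-0.8, 0.5, -0.5]),
}

def emotion_to_face(emotion_params):
    """emotion_params: dict of emotion -> intensity in [0, 1]."""
    total = sum(emotion_params.values()) or 1.0
    face = sum(w * POSES[e] for e, w in emotion_params.items()) / total
    return np.clip(face, -1, 1)

print(emotion_to_face({"joy": 0.7, "anger": 0.2}))
```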

  15. Categorical Perception of Affective and Linguistic Facial Expressions

    ERIC Educational Resources Information Center

    McCullough, Stephen; Emmorey, Karen

    2009-01-01

    Two experiments investigated categorical perception (CP) effects for affective facial expressions and linguistic facial expressions from American Sign Language (ASL) for Deaf native signers and hearing non-signers. Facial expressions were presented in isolation (Experiment 1) or in an ASL verb context (Experiment 2). Participants performed ABX…

  16. Altered Kinematics of Facial Emotion Expression and Emotion Recognition Deficits Are Unrelated in Parkinson's Disease.

    PubMed

    Bologna, Matteo; Berardelli, Isabella; Paparella, Giulia; Marsili, Luca; Ricciardi, Lucia; Fabbrini, Giovanni; Berardelli, Alfredo

    2016-01-01

    Altered emotional processing, including reduced facial emotion expression and defective emotion recognition, has been reported in patients with Parkinson's disease (PD). However, few studies have objectively investigated facial expression abnormalities in PD using neurophysiological techniques, and it is not known whether altered facial expression and recognition in PD are related. Our aim was to investigate possible deficits in facial emotion expression and emotion recognition, and their relationship, if any, in patients with PD. Eighteen patients with PD and 16 healthy controls were enrolled in this study. Facial expressions of emotion were recorded using a 3D optoelectronic system and analyzed using the Facial Action Coding System. Possible deficits in emotion recognition were assessed using the Ekman test. Participants were assessed in one experimental session. Possible relationships between the kinematic variables of facial emotion expression, the Ekman test scores, and clinical and demographic data in patients were evaluated using Spearman's test and multiple regression analysis. The facial expression of all six basic emotions had slower velocity and lower amplitude in patients than in healthy controls (all Ps < 0.05). Patients also yielded a worse Ekman global score and worse disgust, sadness, and fear sub-scores than healthy controls (all Ps < 0.001). Altered facial expression kinematics and emotion recognition deficits were unrelated in patients (all Ps > 0.05). Finally, no relationship emerged between the kinematic variables of facial emotion expression, the Ekman test scores, and clinical and demographic data in patients (all Ps > 0.05). The results of this study provide further evidence of altered emotional processing in PD. The lack of any correlation between altered facial emotion expression kinematics and emotion recognition deficits in patients suggests that these abnormalities are mediated by separate pathophysiological mechanisms.

  17. Developing an eBook-Integrated High-Fidelity Mobile App Prototype for Promoting Child Motor Skills and Taxonomically Assessing Children's Emotional Responses Using Face and Sound Topology.

    PubMed

    Brown, William; Liu, Connie; John, Rita Marie; Ford, Phoebe

    2014-01-01

    Developing gross and fine motor skills and expressing complex emotion is critical for child development. We introduce "StorySense", an eBook-integrated mobile app prototype that can sense face and sound topologies and identify movement and expression to promote children's motor skills and emotional development. Currently, most interactive eBooks on mobile devices leverage only "low-motor" interaction (i.e., tapping or swiping). Our app senses a greater breadth of motion (e.g., clapping, snapping, and face tracking), and dynamically alters the storyline according to physical responses in ways that encourage the performance of predetermined motor skills ideal for a child's gross and fine motor development. In addition, our app can capture changes in facial topology, which can later be mapped using the Facial Action Coding System (FACS) for interpretation of emotion. StorySense expands the human-computer interaction vocabulary for mobile devices. Potential clinical applications include child development, physical therapy, and autism.

  18. A 3D character animation engine for multimodal interaction on mobile devices

    NASA Astrophysics Data System (ADS)

    Sandali, Enrico; Lavagetto, Fabio; Pisano, Paolo

    2005-03-01

    Talking virtual characters are graphical simulations of real or imaginary persons that enable natural and pleasant multimodal interaction with the user by means of voice, eye gaze, facial expression, and gestures. This paper presents an implementation of a 3D virtual character animation and rendering engine, compliant with the MPEG-4 standard, running on Symbian-based smartphones. Real-time animation of virtual characters on mobile devices is a challenging task, since many limitations must be taken into account with respect to processing power, graphics capabilities, disk space, and execution memory size. The proposed optimization techniques overcome these issues, guaranteeing smooth and synchronous animation of facial expressions and lip movements on mobile phones such as Sony Ericsson's P800 and Nokia's 6600. The animation engine is specifically targeted at the development of new "over the air" services based on embodied conversational agents, with applications in entertainment (interactive storytellers), navigation aids (virtual guides to web sites and mobile services), newscasting (virtual newscasters), and education (interactive virtual teachers).

  19. Facial Movements Facilitate Part-Based, Not Holistic, Processing in Children, Adolescents, and Adults

    ERIC Educational Resources Information Center

    Xiao, Naiqi G.; Quinn, Paul C.; Ge, Liezhong; Lee, Kang

    2017-01-01

    Although most of the faces we encounter daily are moving ones, much of what we know about face processing and its development is based on studies using static faces that emphasize holistic processing as the hallmark of mature face processing. Here the authors examined the effects of facial movements on face processing developmentally in children…

  20. Hypoglossal-facial nerve anastomosis and rehabilitation in patients with complete facial palsy: cohort study of 30 patients followed up for three years

    PubMed Central

    Toffola, Elena Dalla; Pavese, Chiara; Cecini, Miriam; Petrucci, Lucia; Ricotti, Susanna; Bejor, Maurizio; Salimbeni, Grazia; Biglioli, Federico; Klersy, Catherine

    2014-01-01

    Our study evaluates the grade and timing of recovery in 30 patients with complete facial paralysis (House-Brackmann grade VI) treated with hypoglossal-facial nerve (XII-VII) anastomosis and a long-term rehabilitation program, consisting of exercises in facial muscle activation mediated by tongue movement and synkinesis control with mirror feedback. Reinnervation after XII-VII anastomosis occurred in 29 patients, on average 5.4 months after surgery. Three years after the anastomosis, 23.3% of patients had grade II, 53.3% grade III, 20% grade IV and 3.3% grade VI ratings on the House-Brackmann scale. Time to reinnervation was associated with the final House-Brackmann grade. Our study demonstrates that patients undergoing XII-VII anastomosis and a long-term rehabilitation program display a significant recovery of facial symmetry and movement. The recovery continues for at least three years after the anastomosis, meaning that prolonged follow-up of these patients is advisable. PMID:25473738

  1. Pose-variant facial expression recognition using an embedded image system

    NASA Astrophysics Data System (ADS)

    Song, Kai-Tai; Han, Meng-Ju; Chang, Shuo-Hung

    2008-12-01

    In recent years, one of the most attractive research areas in human-robot interaction has been automated facial expression recognition. By recognizing facial expressions, a pet robot can interact with humans in a more natural manner. In this study, we focus on the pose-variant problem in facial expression recognition. A novel method is proposed in this paper to recognize pose-variant facial expressions. After locating the face position in an image frame, the active appearance model (AAM) is applied to track facial features. Fourteen feature points are extracted to represent the variation of facial expressions. The distances between feature points are defined as the feature values. These feature values are sent to a support vector machine (SVM) for facial expression determination. The pose-variant facial expression is classified as happiness, neutral, sadness, surprise, or anger. Furthermore, in order to evaluate the performance for practical applications, this study also built a low-resolution database (160x120 pixels) using a CMOS image sensor. Experimental results show a recognition rate of 84% on the self-built database.
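
    The distance-feature stage is straightforward to sketch. The snippet below assumes the 14 landmark coordinates have already been tracked (the AAM step is omitted) and uses random placeholder data and labels; it shows only the pairwise-distance feature construction and SVM classification described above.

```python
# Sketch: pairwise distances among tracked feature points -> SVM classifier.
import numpy as np
from itertools import combinations
from sklearn.svm import SVC

def distance_features(landmarks):
    """landmarks: (14, 2) array of (x, y) feature-point coordinates."""
    return np.array([np.linalg.norm(landmarks[i] - landmarks[j])
                     for i, j in combinations(range(len(landmarks)), 2)])

rng = np.random.default_rng(1)
# Placeholder "tracked" landmarks for 100 frames within a 160x120 image.
X = np.array([distance_features(rng.uniform(0, 160, (14, 2))) for _ in range(100)])
y = rng.integers(0, 5, 100)  # happiness, neutral, sadness, surprise, anger
clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict(X[:3]))
```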

  2. Seeing emotions: a review of micro and subtle emotion expression training

    NASA Astrophysics Data System (ADS)

    Poole, Ernest Andre

    2016-09-01

    In this review I explore and discuss the use of micro and subtle expression training in the social sciences. These trainings, offered commercially, are designed and endorsed by noted psychologist Paul Ekman, co-author of the Facial Action Coding System, a comprehensive system for measuring muscular movement in the face and its relationship to the expression of emotions. The trainings build upon that seminal work and present it in a way that either the layperson or the researcher can easily add to their personal toolbox for a variety of purposes. Outlined are my experiences across the training products, how they could be used in social science research, a brief comparison to automated systems, and possible next steps.

  3. Altering sensorimotor feedback disrupts visual discrimination of facial expressions.

    PubMed

    Wood, Adrienne; Lupyan, Gary; Sherrin, Steven; Niedenthal, Paula

    2016-08-01

    Looking at another person's facial expression of emotion can trigger the same neural processes involved in producing the expression, and such responses play a functional role in emotion recognition. Disrupting individuals' facial action, for example, interferes with verbal emotion recognition tasks. We tested the hypothesis that facial responses also play a functional role in the perceptual processing of emotional expressions. We altered the facial action of participants with a gel facemask while they performed a task that involved distinguishing target expressions from highly similar distractors. Relative to control participants, participants in the facemask condition demonstrated inferior perceptual discrimination of facial expressions, but not of nonface stimuli. The findings suggest that somatosensory/motor processes involving the face contribute to the visual perceptual, and not just conceptual, processing of facial expressions. More broadly, our study contributes to growing evidence for the fundamentally interactive nature of the perceptual inputs from different sensory modalities.

  4. A large-scale analysis of sex differences in facial expressions

    PubMed Central

    Kodra, Evan; el Kaliouby, Rana; LaFrance, Marianne

    2017-01-01

    There exists a stereotype that women are more expressive than men; however, research has almost exclusively focused on a single facial behavior, smiling. A large-scale study examines whether women are consistently more expressive than men or whether the effects depend on the emotion expressed. Studies of gender differences in expressivity have largely been restricted to data collected in lab settings or data requiring labor-intensive manual coding. In the present study, we analyze gender differences in facial behaviors as over 2,000 viewers watch a set of video advertisements in their home environments. The facial responses were recorded using participants' own webcams, and facial activity was coded using a new automated facial coding technology. We find that women are not universally more expressive across all facial actions, nor are they more expressive in all positive-valence actions and less expressive in all negative-valence actions. It appears that women generally express actions more frequently than men, and in particular express more positive-valence actions. However, expressiveness is not greater in women for all negative-valence actions and depends on the discrete emotional state. PMID:28422963

  5. Face Processing in Children with Autism Spectrum Disorder: Independent or Interactive Processing of Facial Identity and Facial Expression?

    ERIC Educational Resources Information Center

    Krebs, Julia F.; Biswas, Ajanta; Pascalis, Olivier; Kamp-Becker, Inge; Remschmidt, Helmuth; Schwarzer, Gudrun

    2011-01-01

    The current study investigated if deficits in processing emotional expression affect facial identity processing and vice versa in children with autism spectrum disorder. Children with autism and IQ and age matched typically developing children classified faces either by emotional expression, thereby ignoring facial identity or by facial identity…

  6. Sparse coding for flexible, robust 3D facial-expression synthesis.

    PubMed

    Lin, Yuxu; Song, Mingli; Quynh, Dao Thi Phuong; He, Ying; Chen, Chun

    2012-01-01

    Computer animation researchers have been extensively investigating 3D facial-expression synthesis for decades. However, flexible, robust production of realistic 3D facial expressions is still technically challenging. A proposed modeling framework applies sparse coding to synthesize 3D expressive faces, using specified coefficients or expression examples. It also robustly recovers facial expressions from noisy and incomplete data. This approach can synthesize higher-quality expressions in less time than the state-of-the-art techniques.
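
    The core idea can be sketched briefly: represent an observed (possibly noisy or incomplete) expression as a sparse combination of dictionary expressions and resynthesize from the coefficients. The dictionary below is random and the Lasso solver stands in for whichever sparse solver the authors used; a real system would learn the dictionary from 3D face scans.

```python
# Sketch: sparse coding of an expression over a dictionary of basis expressions.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(2)
D = rng.normal(size=(300, 40))           # 40 basis expressions, 300 vertex coords
codes_true = np.zeros(40)
codes_true[[3, 17]] = [0.8, 0.4]         # ground-truth sparse mixture
target = D @ codes_true + rng.normal(0, 0.01, 300)  # noisy observed expression

lasso = Lasso(alpha=0.01).fit(D, target) # find sparse coefficients
synth = D @ lasso.coef_                  # synthesized (denoised) expression
print(np.flatnonzero(np.abs(lasso.coef_) > 0.05))  # recovered sparse support
```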

  7. Facial expressions recognition with an emotion expressive robotic head

    NASA Astrophysics Data System (ADS)

    Doroftei, I.; Adascalitei, F.; Lefeber, D.; Vanderborght, B.; Doroftei, I. A.

    2016-08-01

    The purpose of this study is to present the preliminary steps in facial expression recognition with a new version of an expressive social robotic head. In a first phase, our main goal was to reach a minimum level of emotional expressiveness, in order to obtain nonverbal communication between the robot and humans, by building six basic facial expressions. To evaluate the facial expressions, the robot was used in preliminary user studies among children and adults.

  8. Isolated facial myokymia as a presenting feature of pontine neurocysticercosis.

    PubMed

    Bhatia, Rohit; Desai, Soaham; Garg, Ajay; Padma, Madakasira V; Prasad, Kameshwar; Tripathi, Manjari

    2008-01-01

    A 45-year-old healthy man presented with a 2-week history of continuous rippling and quivering movements of the right side of his face and neck, suggestive of myokymia. An MRI scan of the head revealed a neurocysticercus in the pons. Treatment with steroids and carbamazepine produced significant benefit. This is the first report of pontine neurocysticercosis presenting as isolated facial myokymia. 2007 Movement Disorder Society

  9. Rat Whisker Movement after Facial Nerve Lesion: Evidence for Autonomic Contraction of Skeletal Muscle

    PubMed Central

    Heaton, James T.; Sheu, Shu-Hsien; Hohman, Marc H.; Knox, Christopher J.; Weinberg, Julie S.; Kleiss, Ingrid J.; Hadlock, Tessa A.

    2014-01-01

    Vibrissal whisking is often employed to track facial nerve regeneration in rats; however, we have observed similar degrees of whisking recovery after facial nerve transection with or without repair. We hypothesized that the source of non-facial nerve-mediated whisker movement after chronic denervation was from autonomic, cholinergic axons traveling within the infraorbital branch of the trigeminal nerve (ION). Rats underwent unilateral facial nerve transection with repair (N=7) or resection without repair (N=11). Post-operative whisking amplitude was measured weekly across 10 weeks, and during intraoperative stimulation of the ION and facial nerves at ≥18 weeks. Whisking was also measured after subsequent ION transection (N=6) or pharmacologic blocking of the autonomic ganglia using hexamethonium (N=3), and after snout cooling intended to elicit a vasodilation reflex (N=3). Whisking recovered more quickly and with greater amplitude in rats that underwent facial nerve repair compared to resection (P<0.05), but individual rats overlapped in whisking amplitude across both groups. In the resected rats, non-facial-nerve mediated whisking was elicited by electrical stimulation of the ION, temporarily diminished following hexamethonium injection, abolished by transection of the ION, and rapidly and significantly (P<0.05) increased by snout cooling. Moreover, fibrillation-related whisker movements decreased in all rats during the initial recovery period (indicative of reinnervation), but re-appeared in the resected rats after undergoing ION transection (indicative of motor denervation). Cholinergic, parasympathetic axons traveling within the ION innervate whisker pad vasculature, and immunohistochemistry for vasoactive intestinal peptide revealed these axons branching extensively over whisker pad muscles and contacting neuromuscular junctions after facial nerve resection. This study provides the first behavioral and anatomical evidence of spontaneous autonomic innervation of skeletal muscle after motor nerve lesion, which not only has implications for interpreting facial nerve reinnervation results, but also calls into question whether autonomic-mediated innervation of striated muscle occurs naturally in other forms of neuropathy. PMID:24480367

  10. Rat whisker movement after facial nerve lesion: evidence for autonomic contraction of skeletal muscle.

    PubMed

    Heaton, James T; Sheu, Shu Hsien; Hohman, Marc H; Knox, Christopher J; Weinberg, Julie S; Kleiss, Ingrid J; Hadlock, Tessa A

    2014-04-18

    Vibrissal whisking is often employed to track facial nerve regeneration in rats; however, we have observed similar degrees of whisking recovery after facial nerve transection with or without repair. We hypothesized that the source of non-facial nerve-mediated whisker movement after chronic denervation was from autonomic, cholinergic axons traveling within the infraorbital branch of the trigeminal nerve (ION). Rats underwent unilateral facial nerve transection with repair (N=7) or resection without repair (N=11). Post-operative whisking amplitude was measured weekly across 10 weeks, and during intraoperative stimulation of the ION and facial nerves at ≥18 weeks. Whisking was also measured after subsequent ION transection (N=6) or pharmacologic blocking of the autonomic ganglia using hexamethonium (N=3), and after snout cooling intended to elicit a vasodilation reflex (N=3). Whisking recovered more quickly and with greater amplitude in rats that underwent facial nerve repair compared to resection (P<0.05), but individual rats overlapped in whisking amplitude across both groups. In the resected rats, non-facial-nerve-mediated whisking was elicited by electrical stimulation of the ION, temporarily diminished following hexamethonium injection, abolished by transection of the ION, and rapidly and significantly (P<0.05) increased by snout cooling. Moreover, fibrillation-related whisker movements decreased in all rats during the initial recovery period (indicative of reinnervation), but re-appeared in the resected rats after undergoing ION transection (indicative of motor denervation). Cholinergic, parasympathetic axons traveling within the ION innervate whisker pad vasculature, and immunohistochemistry for vasoactive intestinal peptide revealed these axons branching extensively over whisker pad muscles and contacting neuromuscular junctions after facial nerve resection. This study provides the first behavioral and anatomical evidence of spontaneous autonomic innervation of skeletal muscle after motor nerve lesion, which not only has implications for interpreting facial nerve reinnervation results, but also calls into question whether autonomic-mediated innervation of striated muscle occurs naturally in other forms of neuropathy. Copyright © 2014 IBRO. Published by Elsevier Ltd. All rights reserved.

  11. Facial identity and facial expression are initially integrated at visual perceptual stages of face processing.

    PubMed

    Fisher, Katie; Towler, John; Eimer, Martin

    2016-01-08

    It is frequently assumed that facial identity and facial expression are analysed in functionally and anatomically distinct streams within the core visual face processing system. To investigate whether expression and identity interact during the visual processing of faces, we employed a sequential matching procedure where participants compared either the identity or the expression of two successively presented faces, and ignored the other irrelevant dimension. Repetitions versus changes of facial identity and expression were varied independently across trials, and event-related potentials (ERPs) were recorded during task performance. Irrelevant facial identity and irrelevant expression both interfered with performance in the expression and identity matching tasks. These symmetrical interference effects show that neither identity nor expression can be selectively ignored during face matching, and suggest that they are not processed independently. N250r components to identity repetitions that reflect identity matching mechanisms in face-selective visual cortex were delayed and attenuated when there was an expression change, demonstrating that facial expression interferes with visual identity matching. These findings provide new evidence for interactions between facial identity and expression within the core visual processing system, and question the hypothesis that these two attributes are processed independently. Copyright © 2015 Elsevier Ltd. All rights reserved.

  12. Viewing distance matter to perceived intensity of facial expressions

    PubMed Central

    Gerhardsson, Andreas; Högman, Lennart; Fischer, Håkan

    2015-01-01

    In our daily perception of facial expressions, we depend on an ability to generalize across the varied distances at which they may appear. This is important to how we interpret the quality and the intensity of the expression. Previous research has not investigated whether this so-called perceptual constancy also applies to the experienced intensity of facial expressions. Using a psychophysical measure (Borg CR100 scale), the present study aimed to further investigate perceptual constancy of happy and angry facial expressions at varied sizes, a proxy for varying viewing distances. Seventy-one participants (42 females) rated the intensity and valence of facial expressions varying in distance and intensity. The results demonstrated that the perceived intensity (PI) of an emotional facial expression depends on the distance of the face from the person perceiving it. An interaction effect was noted, indicating that close-up faces are perceived as more intense than faces at a distance, and that this effect is stronger the more intense the facial expression truly is. The present study raises considerations regarding constancy of the PI of happy and angry facial expressions at varied distances. PMID:26191035

  13. Gaze Behavior of Children with ASD toward Pictures of Facial Expressions.

    PubMed

    Matsuda, Soichiro; Minagawa, Yasuyo; Yamamoto, Junichi

    2015-01-01

    Atypical gaze behavior in response to a face has been well documented in individuals with autism spectrum disorders (ASDs). Children with ASD appear to differ from typically developing (TD) children in gaze behavior for spoken and dynamic face stimuli but not for nonspeaking, static face stimuli. Furthermore, children with ASD and TD children show a difference in their gaze behavior for certain expressions. However, few studies have examined the relationship between autism severity and gaze behavior toward certain facial expressions. The present study replicated and extended previous studies by examining gaze behavior towards pictures of facial expressions. We presented ASD and TD children with pictures of surprised, happy, neutral, angry, and sad facial expressions. Autism severity was assessed using the Childhood Autism Rating Scale (CARS). The results showed that there was no group difference in gaze behavior when looking at pictures of facial expressions. Conversely, the children with ASD who had more severe autistic symptomatology had a tendency to gaze at angry facial expressions for a shorter duration in comparison to other facial expressions. These findings suggest that autism severity should be considered when examining atypical responses to certain facial expressions.

  14. Gaze Behavior of Children with ASD toward Pictures of Facial Expressions

    PubMed Central

    Matsuda, Soichiro; Minagawa, Yasuyo; Yamamoto, Junichi

    2015-01-01

    Atypical gaze behavior in response to a face has been well documented in individuals with autism spectrum disorders (ASDs). Children with ASD appear to differ from typically developing (TD) children in gaze behavior for spoken and dynamic face stimuli but not for nonspeaking, static face stimuli. Furthermore, children with ASD and TD children show a difference in their gaze behavior for certain expressions. However, few studies have examined the relationship between autism severity and gaze behavior toward certain facial expressions. The present study replicated and extended previous studies by examining gaze behavior towards pictures of facial expressions. We presented ASD and TD children with pictures of surprised, happy, neutral, angry, and sad facial expressions. Autism severity was assessed using the Childhood Autism Rating Scale (CARS). The results showed that there was no group difference in gaze behavior when looking at pictures of facial expressions. Conversely, the children with ASD who had more severe autistic symptomatology had a tendency to gaze at angry facial expressions for a shorter duration in comparison to other facial expressions. These findings suggest that autism severity should be considered when examining atypical responses to certain facial expressions. PMID:26090223

  15. Individual differences in the recognition of facial expressions: an event-related potentials study.

    PubMed

    Tamamiya, Yoshiyuki; Hiraki, Kazuo

    2013-01-01

    Previous studies have shown that early posterior components of event-related potentials (ERPs) are modulated by facial expressions. The goal of the current study was to investigate individual differences in the recognition of facial expressions by examining the relationship between ERP components and the discrimination of facial expressions. Pictures of three facial expressions (angry, happy, and neutral) were presented to 36 young adults during ERP recording. Participants were asked to respond with a button press as soon as they recognized the expression depicted. A multiple regression analysis, with ERP components as predictor variables, assessed hits and reaction times in response to the facial expressions as dependent variables. The N170 amplitudes significantly predicted accuracy for angry and happy expressions, and the N170 latencies predicted accuracy for neutral expressions. The P2 amplitudes significantly predicted reaction time, and the P2 latencies predicted reaction times only for neutral faces. These results suggest that individual differences in the recognition of facial expressions emerge from early components in visual processing.
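
    A regression of this kind is easy to reproduce in outline. The sketch below regresses recognition accuracy on simulated ERP component measures; the variable scales are rough conventions and the data are placeholders, not the study's results.

```python
# Sketch: multiple regression with ERP components as predictors of accuracy.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 36
X = np.column_stack([
    rng.normal(-5, 2, n),    # N170 amplitude (uV)
    rng.normal(170, 15, n),  # N170 latency (ms)
    rng.normal(4, 2, n),     # P2 amplitude (uV)
    rng.normal(220, 20, n),  # P2 latency (ms)
])
accuracy = 0.8 + 0.01 * X[:, 0] + rng.normal(0, 0.05, n)  # simulated hits

model = sm.OLS(accuracy, sm.add_constant(X)).fit()
print(model.summary())
```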

  16. 4D ultrasound study of fetal facial expressions in the third trimester of pregnancy.

    PubMed

    AboEllail, Mohamed Ahmed Mostafa; Kanenishi, Kenji; Mori, Nobuhiro; Mohamed, Osman Abdel Kareem; Hata, Toshiyuki

    2018-07-01

    Our aim was to evaluate the frequencies of fetal facial expressions in the third trimester of pregnancy, when fetal brain maturation and development are progressing in normal healthy fetuses. Four-dimensional (4D) ultrasound was used to examine the facial expressions of 111 healthy fetuses between 30 and 40 weeks of gestation. The frequencies of seven facial expressions (mouthing, yawning, smiling, tongue expulsion, scowling, sucking, and blinking) during 15-minute recordings were assessed. The fetuses were further divided into three gestational age groups (25 fetuses at 30-31 weeks, 43 at 32-35 weeks, and 43 at ≥36 weeks), and facial expressions were compared among the three groups to determine their changes with advancing gestation. Mouthing was the most frequent facial expression at 30-40 weeks of gestation, followed by blinking; both facial expressions were significantly more frequent than the others (p < .05). The frequency of yawning decreased with gestational age after 30 weeks (p = .031), and yawning at 30-31 weeks was significantly more frequent than at 36-40 weeks (p < .05). The other facial expressions did not change between 30 and 40 weeks, with no significant differences among the three gestational age groups. Our results suggest that 4D ultrasound assessment of fetal facial expressions may be a useful modality for evaluating fetal brain maturation and development. The decreasing frequency of fetal yawning after 30 weeks of gestation may explain the emergence of distinct states of arousal.

  17. When Early Experiences Build a Wall to Others’ Emotions: An Electrophysiological and Autonomic Study

    PubMed Central

    Ardizzi, Martina; Martini, Francesca; Umiltà, Maria Alessandra; Sestito, Mariateresa; Ravera, Roberto; Gallese, Vittorio

    2013-01-01

    Facial expression of emotions is a powerful vehicle for communicating information about others’ emotional states and it normally induces facial mimicry in the observers. The aim of this study was to investigate if early aversive experiences could interfere with emotion recognition, facial mimicry, and with the autonomic regulation of social behaviors. We conducted a facial emotion recognition task in a group of “street-boys” and in an age-matched control group. We recorded facial electromyography (EMG), a marker of facial mimicry, and respiratory sinus arrhythmia (RSA), an index of the recruitment of autonomic system promoting social behaviors and predisposition, in response to the observation of facial expressions of emotions. Results showed an over-attribution of anger, and reduced EMG responses during the observation of both positive and negative expressions only among street-boys. Street-boys also showed lower RSA after observation of facial expressions and ineffective RSA suppression during presentation of non-threatening expressions. Our findings suggest that early aversive experiences alter not only emotion recognition but also facial mimicry of emotions. These deficits affect the autonomic regulation of social behaviors inducing lower social predisposition after the visualization of facial expressions and an ineffective recruitment of defensive behavior in response to non-threatening expressions. PMID:23593374

  18. Children's Representations of Facial Expression and Identity: Identity-Contingent Expression Aftereffects

    ERIC Educational Resources Information Center

    Vida, Mark D.; Mondloch, Catherine J.

    2009-01-01

    This investigation used adaptation aftereffects to examine developmental changes in the perception of facial expressions. Previous studies have shown that adults' perceptions of ambiguous facial expressions are biased following adaptation to intense expressions. These expression aftereffects are strong when the adapting and probe expressions share…

  19. The Impact of Experience on Affective Responses during Action Observation.

    PubMed

    Kirsch, Louise P; Snagg, Arielle; Heerey, Erin; Cross, Emily S

    2016-01-01

    Perceiving others in action elicits affective and aesthetic responses in observers. The present study investigates the extent to which these responses relate to an observer's general experience with observed movements. Facial electromyographic (EMG) responses were recorded in experienced dancers and non-dancers as they watched short videos of movements performed by professional ballet dancers. Responses were recorded from the corrugator supercilii (CS) and zygomaticus major (ZM) muscles, both of which show engagement during the observation of affect-evoking stimuli. In the first part of the experiment, participants passively watched the videos while EMG data were recorded. In the second part, they explicitly rated how much they liked each movement. Results revealed a relationship between explicit affective judgments of the movements and facial muscle activation only among those participants who were experienced with the movements. Specifically, CS activity was higher for disliked movements and ZM activity was higher for liked movements among dancers but not among non-dancers. The relationship between explicit liking ratings and EMG data in experienced observers suggests that facial muscles subtly echo affective judgments even when viewing actions that are not intentionally emotional in nature, thus underscoring the potential of EMG as a method to examine subtle shifts in implicit affective responses during action observation.

  20. Using Video Modeling to Teach Children with PDD-NOS to Respond to Facial Expressions

    ERIC Educational Resources Information Center

    Axe, Judah B.; Evans, Christine J.

    2012-01-01

    Children with autism spectrum disorders often exhibit delays in responding to facial expressions, and few studies have examined teaching responding to subtle facial expressions to this population. We used video modeling to train 3 participants with PDD-NOS (age 5) to respond to eight facial expressions: approval, bored, calming, disapproval,…

  1. Development of a Support Application and a Textbook for Practicing Facial Expression Detection for Students with Visual Impairment

    ERIC Educational Resources Information Center

    Saito, Hirotaka; Ando, Akinobu; Itagaki, Shota; Kawada, Taku; Davis, Darold; Nagai, Nobuyuki

    2017-01-01

    Until now, when practicing facial expression recognition skills in the nonverbal communication areas of SST, judgment of facial expression was not quantitative because the subjects of SST were judged by teachers. Therefore, we considered whether SST could be performed using facial expression detection devices that can quantitatively measure facial…

  2. Facial Expression Recognition Deficits and Faulty Learning: Implications for Theoretical Models and Clinical Applications

    ERIC Educational Resources Information Center

    Sheaffer, Beverly L.; Golden, Jeannie A.; Averett, Paige

    2009-01-01

    The ability to recognize facial expressions of emotion is integral in social interaction. Although the importance of facial expression recognition is reflected in increased research interest as well as in popular culture, clinicians may know little about this topic. The purpose of this article is to discuss facial expression recognition literature…

  3. Violent Media Consumption and the Recognition of Dynamic Facial Expressions

    ERIC Educational Resources Information Center

    Kirsh, Steven J.; Mounts, Jeffrey R. W.; Olczak, Paul V.

    2006-01-01

    This study assessed the speed of recognition of facial emotional expressions (happy and angry) as a function of violent media consumption. Color photos of calm facial expressions were morphed into either an angry or a happy facial expression. Participants were asked to make a speeded identification of the emotion (happiness or anger) during the morph.…

  4. iFER: facial expression recognition using automatically selected geometric eye and eyebrow features

    NASA Astrophysics Data System (ADS)

    Oztel, Ismail; Yolcu, Gozde; Oz, Cemil; Kazan, Serap; Bunyak, Filiz

    2018-03-01

    Facial expressions play an important role in interpersonal communication and in the estimation of emotional states or intentions. Automatic recognition of facial expressions has led to many practical applications and has become one of the important topics in computer vision. We present a facial expression recognition system that relies on geometry-based features extracted from the eye and eyebrow regions of the face. The proposed system detects keypoints on frontal face images and forms a feature set using geometric relationships among groups of detected keypoints. The obtained feature set is refined and reduced using the sequential forward selection (SFS) algorithm and fed to a support vector machine classifier to recognize five facial expression classes. The proposed system, iFER (eye-eyebrow only facial expression recognition), is robust to lower-face occlusions that may be caused by beards, mustaches, scarves, etc., and to lower-face motion during speech production. Preliminary experiments on benchmark datasets produced promising results, outperforming previous facial expression recognition studies using partial face features and approaching studies using whole-face information, with accuracy only ~2.5% lower than the best whole-face system while using only about one third (~1/3) of the facial region.
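
    The selection-plus-classification stage maps naturally onto standard tooling. The sketch below, on random placeholder features, uses scikit-learn's sequential forward selection with a linear SVM; the feature count, number of selected features, and data are illustrative assumptions, not iFER's configuration.

```python
# Sketch: sequential forward selection of geometric features + SVM classifier.
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.svm import SVC

rng = np.random.default_rng(4)
X = rng.normal(size=(200, 30))  # 30 hypothetical eye/eyebrow geometry measures
y = rng.integers(0, 5, 200)     # five expression classes

svm = SVC(kernel="linear")
sfs = SequentialFeatureSelector(svm, n_features_to_select=10, direction="forward")
X_sel = sfs.fit_transform(X, y)            # keep the 10 most useful features
clf = svm.fit(X_sel, y)
print(sfs.get_support().nonzero()[0])      # indices of retained features
```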

  5. That "poker face" just might lose you the game! The impact of expressive suppression and mimicry on sensitivity to facial expressions of emotion.

    PubMed

    Schneider, Kristin G; Hempel, Roelie J; Lynch, Thomas R

    2013-10-01

    Successful interpersonal functioning often requires both the ability to mask inner feelings and the ability to accurately recognize others' expressions. But what if effortful control of emotional expressions impacts the ability to accurately read others? In this study, we examined the influence of self-controlled expressive suppression and mimicry on facial affect sensitivity: the speed with which one can accurately identify gradually intensifying facial expressions of emotion. Muscle activity of the brow (corrugator, related to anger), upper lip (levator, related to disgust), and cheek (zygomaticus, related to happiness) was recorded using facial electromyography while participants, randomized to one of three conditions (Suppress, Mimic, and No-Instruction), viewed a series of six distinct emotional expressions (happiness, sadness, fear, anger, surprise, and disgust) as they morphed from neutral to full expression. As hypothesized, individuals instructed to suppress their own facial expressions showed impaired facial affect sensitivity. Conversely, mimicry of emotion expressions appeared to facilitate facial affect sensitivity. The results suggest that it is difficult to simultaneously mask inner feelings and accurately "read" the facial expressions of others, at least when these expressions are at low intensity. The combined behavioral and physiological data suggest that the strategies an individual selects to control his or her own expression of emotion have important implications for interpersonal functioning.

  6. Face in profile view reduces perceived facial expression intensity: an eye-tracking study.

    PubMed

    Guo, Kun; Shaw, Heather

    2015-02-01

    Recent studies measuring the facial expressions of emotion have focused primarily on the perception of frontal face images. As we frequently encounter expressive faces from different viewing angles, having a mechanism which allows invariant expression perception would be advantageous to our social interactions. Although a couple of studies have indicated comparable expression categorization accuracy across viewpoints, it is unknown how perceived expression intensity and associated gaze behaviour change across viewing angles. Differences could arise because diagnostic cues from local facial features for decoding expressions could vary with viewpoints. Here we manipulated orientation of faces (frontal, mid-profile, and profile view) displaying six common facial expressions of emotion, and measured participants' expression categorization accuracy, perceived expression intensity and associated gaze patterns. In comparison with frontal faces, profile faces slightly reduced identification rates for disgust and sad expressions, but significantly decreased perceived intensity for all tested expressions. Although quantitatively viewpoint had expression-specific influence on the proportion of fixations directed at local facial features, the qualitative gaze distribution within facial features (e.g., the eyes tended to attract the highest proportion of fixations, followed by the nose and then the mouth region) was independent of viewpoint and expression type. Our results suggest that the viewpoint-invariant facial expression processing is categorical perception, which could be linked to a viewpoint-invariant holistic gaze strategy for extracting expressive facial cues. Copyright © 2014 Elsevier B.V. All rights reserved.

  7. Expressive facial animation synthesis by learning speech coarticulation and expression spaces.

    PubMed

    Deng, Zhigang; Neumann, Ulrich; Lewis, J P; Kim, Tae-Yong; Bulut, Murtaza; Narayanan, Shrikanth

    2006-01-01

    Synthesizing expressive facial animation is a very challenging topic within the graphics community. In this paper, we present an expressive facial animation synthesis system enabled by automated learning from facial motion capture data. Accurate 3D motions of the markers on the face of a human subject are captured while he/she recites a predesigned corpus, with specific spoken and visual expressions. We present a novel motion capture mining technique that "learns" speech coarticulation models for diphones and triphones from the recorded data. A Phoneme-Independent Expression Eigenspace (PIEES) that encloses the dynamic expression signals is constructed by motion signal processing (phoneme-based time-warping and subtraction) and Principal Component Analysis (PCA) reduction. New expressive facial animations are synthesized as follows: First, the learned coarticulation models are concatenated to synthesize neutral visual speech according to novel speech input, then a texture-synthesis-based approach is used to generate a novel dynamic expression signal from the PIEES model, and finally the synthesized expression signal is blended with the synthesized neutral visual speech to create the final expressive facial animation. Our experiments demonstrate that the system can effectively synthesize realistic expressive facial animation.
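
    The expression-eigenspace construction can be sketched compactly: subtract time-aligned neutral motion from expressive motion and reduce the residual "expression signal" with PCA. The phoneme-based time-warping is assumed already done, and all arrays below are placeholders rather than motion-capture data.

```python
# Sketch: build a PCA expression space from (expressive - neutral) motion residuals.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(5)
neutral = rng.normal(size=(500, 90))               # frames x marker coordinates
expressive = neutral + rng.normal(0.2, 0.1, (500, 90))

expression_signal = expressive - neutral           # residual expression component
space = PCA(n_components=10).fit(expression_signal)
codes = space.transform(expression_signal)         # low-dim expression trajectory
resynth = space.inverse_transform(codes) + neutral # blend back onto neutral speech
print(space.explained_variance_ratio_.round(3))
```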

  8. Accurate landmarking of three-dimensional facial data in the presence of facial expressions and occlusions using a three-dimensional statistical facial feature model.

    PubMed

    Zhao, Xi; Dellandréa, Emmanuel; Chen, Liming; Kakadiaris, Ioannis A

    2011-10-01

    Three-dimensional face landmarking aims at automatically localizing facial landmarks and has a wide range of applications (e.g., face recognition, face tracking, and facial expression analysis). Existing methods assume neutral facial expressions and unoccluded faces. In this paper, we propose a general learning-based framework for reliable landmark localization on 3-D facial data under challenging conditions (i.e., facial expressions and occlusions). Our approach relies on a statistical model, called 3-D statistical facial feature model, which learns both the global variations in configurational relationships between landmarks and the local variations of texture and geometry around each landmark. Based on this model, we further propose an occlusion classifier and a fitting algorithm. Results from experiments on three publicly available 3-D face databases (FRGC, BU-3-DFE, and Bosphorus) demonstrate the effectiveness of our approach, in terms of landmarking accuracy and robustness, in the presence of expressions and occlusions.

  9. Brain correlates to facial motor imagery and its somatotopy in the primary motor cortex.

    PubMed

    Soliman, Ramy S; Lee, Sanghoon; Eun, Seulgi; Mohamed, Abdalla Z; Lee, Jeungchan; Lee, Eunyoung; Makary, Meena M; Kathy Lee, Seung Min; Lee, Hwa-Jin; Choi, Woo Suk; Park, Kyungmo

    2017-03-22

    Motor imagery (MI) has attracted increasing interest in motor rehabilitation because many studies have shown that MI shares the same neural networks as motor execution (ME). Nevertheless, MI of facial movement has not been studied extensively; thus, in the present study, we investigated the neural networks shared between facial motor imagery (FMI) and facial motor execution (FME). In addition, within-face FMI somatotopy was investigated between the forehead and the mouth. Functional MRI was used to examine 34 healthy individuals with ME and MI paradigms for the forehead and the mouth. A general linear model and a paired t-test were used to define the facial area in the primary motor cortex (M1), and this area was then used to investigate somatotopy between forehead and mouth FMI. FMI recruited brain motor areas similar to FME, but showed less neural activity in all activated regions. The facial areas in M1 were distinguishable from those of other body movements such as finger movement. Further investigation of this area showed that forehead and mouth imagery tended to lack a somatotopic representation for position on M1, yet had distinct characteristics in terms of neural activity level. FMI showed different characteristics from general MI, as the former exclusively activated facial processing areas. In addition, FME and FMI showed different characteristics in terms of BOLD signal level while sharing the same neural areas. The results imply a potential usefulness of MI training for rehabilitation of facial motor disease, considering that forehead and mouth somatotopy showed no clear position difference yet showed significant BOLD signal intensity variation.
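
    At the subject level, the forehead-versus-mouth contrast reduces to a paired comparison. A minimal sketch with simulated per-subject BOLD estimates follows; the subject count matches the study but the values and effect size are invented for illustration.

```python
# Sketch: paired t-test on per-subject BOLD estimates within the M1 face area.
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
forehead = rng.normal(0.8, 0.3, 34)            # per-subject betas, forehead FMI
mouth = forehead + rng.normal(0.15, 0.2, 34)   # hypothetical stronger mouth FMI

t, p = stats.ttest_rel(mouth, forehead)
print(f"t(33) = {t:.2f}, p = {p:.4f}")
```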

  10. The mysterious noh mask: contribution of multiple facial parts to the recognition of emotional expressions.

    PubMed

    Miyata, Hiromitsu; Nishimura, Ritsuko; Okanoya, Kazuo; Kawai, Nobuyuki

    2012-01-01

    A Noh mask worn by expert actors when performing in a Japanese traditional Noh drama is suggested to convey countless different facial expressions according to different angles of head/body orientation. The present study addressed the question of how different facial parts of a Noh mask, including the eyebrows, the eyes, and the mouth, may contribute to different emotional expressions. Both experimental situations of active creation and passive recognition of emotional facial expressions were introduced. In Experiment 1, participants either created happy or sad facial expressions, or imitated a face that looked up or down, by actively changing each facial part of a Noh mask image presented on a computer screen. For an upward-tilted mask, the eyebrows and the mouth shared common features with sad expressions, whereas the eyes shared features with happy expressions. This contingency tended to be reversed for a downward-tilted mask. Experiment 2 further examined which facial parts of a Noh mask are crucial in determining emotional expressions. Participants were exposed to synthesized Noh mask images with different facial parts expressing different emotions. Results clearly revealed that participants primarily used the shape of the mouth in judging emotions. Facial images having the mouth of an upward/downward-tilted Noh mask strongly tended to be evaluated as sad/happy, respectively. The results suggest that Noh masks express chimeric emotional patterns, with different facial parts conveying different emotions. This appears consistent with the principles of Noh, which highly appreciate subtle and composite emotional expressions, as well as with the mysterious facial expressions observed in Western art. It was further demonstrated that the mouth serves as a diagnostic feature in characterizing the emotional expressions. This indicates the superiority of biologically driven factors over the traditionally formulated performing styles when evaluating the emotions of the Noh masks.

  11. The Mysterious Noh Mask: Contribution of Multiple Facial Parts to the Recognition of Emotional Expressions

    PubMed Central

    Miyata, Hiromitsu; Nishimura, Ritsuko; Okanoya, Kazuo; Kawai, Nobuyuki

    2012-01-01

    Background A Noh mask worn by expert actors when performing in a Japanese traditional Noh drama is suggested to convey countless different facial expressions according to different angles of head/body orientation. The present study addressed the question of how different facial parts of a Noh mask, including the eyebrows, the eyes, and the mouth, may contribute to different emotional expressions. Both experimental situations of active creation and passive recognition of emotional facial expressions were introduced. Methodology/Principal Findings In Experiment 1, participants either created happy or sad facial expressions, or imitated a face that looked up or down, by actively changing each facial part of a Noh mask image presented on a computer screen. For an upward-tilted mask, the eyebrows and the mouth shared common features with sad expressions, whereas the eyes shared features with happy expressions. This contingency tended to be reversed for a downward-tilted mask. Experiment 2 further examined which facial parts of a Noh mask are crucial in determining emotional expressions. Participants were exposed to synthesized Noh mask images with different facial parts expressing different emotions. Results clearly revealed that participants primarily used the shape of the mouth in judging emotions. Facial images having the mouth of an upward/downward-tilted Noh mask strongly tended to be evaluated as sad/happy, respectively. Conclusions/Significance The results suggest that Noh masks express chimeric emotional patterns, with different facial parts conveying different emotions. This appears consistent with the principles of Noh, which highly appreciate subtle and composite emotional expressions, as well as with the mysterious facial expressions observed in Western art. It was further demonstrated that the mouth serves as a diagnostic feature in characterizing the emotional expressions. This indicates the superiority of biologically driven factors over the traditionally formulated performing styles when evaluating the emotions of the Noh masks. PMID:23185595

  12. Face to face: blocking facial mimicry can selectively impair recognition of emotional expressions.

    PubMed

    Oberman, Lindsay M; Winkielman, Piotr; Ramachandran, Vilayanur S

    2007-01-01

    People spontaneously mimic a variety of behaviors, including emotional facial expressions. Embodied cognition theories suggest that mimicry reflects internal simulation of perceived emotion in order to facilitate its understanding. If so, blocking facial mimicry should impair recognition of expressions, especially of emotions that are simulated using facial musculature. The current research tested this hypothesis using four expressions (happy, disgust, fear, and sad) and two mimicry-interfering manipulations: (1) biting on a pen and (2) chewing gum, as well as two control conditions. Experiment 1 used electromyography over cheek, mouth, and nose regions. The bite manipulation consistently activated the assessed muscles, whereas the chew manipulation activated them only intermittently. Further, expressing happiness generated the most facial action. Experiment 2 found that the bite manipulation interfered most with recognition of happiness. These findings suggest that facial mimicry differentially contributes to recognition of specific facial expressions, thus allowing for more refined predictions from embodied cognition theories.

  13. Decoding Facial Expressions: A New Test with Decoding Norms.

    ERIC Educational Resources Information Center

    Leathers, Dale G.; Emigh, Ted H.

    1980-01-01

    Describes the development and testing of a new facial meaning sensitivity test designed to determine how specialized are the meanings that can be decoded from facial expressions. Demonstrates the use of the test to measure a receiver's current level of skill in decoding facial expressions. (JMF)

  14. Eye tracking reveals a crucial role for facial motion in recognition of faces by infants

    PubMed Central

    Xiao, Naiqi G.; Quinn, Paul C.; Liu, Shaoying; Ge, Liezhong; Pascalis, Olivier; Lee, Kang

    2015-01-01

    Current knowledge about face processing in infancy comes largely from studies using static face stimuli, but faces that infants see in the real world are mostly moving ones. To bridge this gap, 3-, 6-, and 9-month-old Asian infants (N = 118) were familiarized with either moving or static Asian female faces and then their face recognition was tested with static face images. Eye tracking methodology was used to record eye movements during familiarization and test phases. The results showed a developmental change in eye movement patterns, but only for the moving faces. In addition, the more infants shifted their fixations across facial regions, the better was their face recognition, but only for the moving faces. The results suggest that facial movement influences the way faces are encoded from early in development. PMID:26010387

  15. Discriminability effect on Garner interference: evidence from recognition of facial identity and expression

    PubMed Central

    Wang, Yamin; Fu, Xiaolan; Johnston, Robert A.; Yan, Zheng

    2013-01-01

    Using Garner’s speeded classification task, existing studies have demonstrated an asymmetric interference in the recognition of facial identity and facial expression: it seems that expression hardly interferes with identity recognition. However, the discriminability of identity and expression, a potential confounding variable, had not been carefully examined in existing studies. In the current work, we manipulated the discriminability of identity and expression by matching facial shape (long or round) in identity and matching mouth (opened or closed) in facial expression. Garner interference was found either from identity to expression (Experiment 1) or from expression to identity (Experiment 2). Interference was also found in both directions (Experiment 3) or in neither direction (Experiment 4). The results support the view that Garner interference tends to occur under conditions of low discriminability of the relevant dimension, regardless of facial property. Our findings indicate that Garner interference is not necessarily related to interdependent processing in the recognition of facial identity and expression. They also suggest that discriminability, as a mediating factor, should be carefully controlled in future research. PMID:24391609

  16. Multi-layer sparse representation for weighted LBP-patches based facial expression recognition.

    PubMed

    Jia, Qi; Gao, Xinkai; Guo, He; Luo, Zhongxuan; Wang, Yi

    2015-03-19

    In this paper, a novel facial expression recognition method based on sparse representation is proposed. Most contemporary facial expression recognition systems suffer from a limited ability to handle image nuisances such as low resolution and noise. Especially for low-intensity expressions, most existing training methods have quite low recognition rates. Motivated by sparse representation, the problem can be solved by finding sparse coefficients of the test image over the whole training set. Deriving an effective facial representation from original face images is a vital step for successful facial expression recognition. We evaluate a facial representation based on weighted local binary patterns, with the Fisher separation criterion used to calculate the weights of patches. A multi-layer sparse representation framework is proposed for multi-intensity facial expression recognition, especially for low-intensity and noisy expressions encountered in practice, a critical problem that is seldom addressed in existing work. To this end, several experiments based on low-resolution and multi-intensity expressions are carried out. Promising results on publicly available databases demonstrate the potential of the proposed approach.
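
    The core of the approach described above is sparse-representation classification (SRC) over patch-wise LBP histograms. Below is a minimal sketch of those two ingredients only, assuming one precomputed feature vector per training image; the lasso-based solver is a stand-in, and the Fisher patch weighting and multi-layer structure of the paper are omitted.

```python
# Minimal sketch: uniform-LBP histogram features plus sparse-representation
# classification (SRC). Illustrative only; not the paper's multi-layer,
# Fisher-weighted pipeline.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.linear_model import Lasso

def lbp_histogram(gray_patch, P=8, R=1):
    """Normalized uniform-LBP histogram of one grayscale patch."""
    codes = local_binary_pattern(gray_patch, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2))
    return hist / max(hist.sum(), 1)

def src_classify(train_feats, train_labels, test_feat, alpha=0.01):
    """Sparse-code the test vector over the whole training set, then
    assign the class with the smallest reconstruction residual."""
    solver = Lasso(alpha=alpha, max_iter=5000)
    solver.fit(train_feats.T, test_feat)  # columns = training samples
    coef = solver.coef_
    best_label, best_res = None, np.inf
    for label in np.unique(train_labels):
        mask = train_labels == label
        residual = np.linalg.norm(test_feat - train_feats[mask].T @ coef[mask])
        if residual < best_res:
            best_label, best_res = label, residual
    return best_label
```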

  17. [Prosopagnosia and facial expression recognition].

    PubMed

    Koyama, Shinichi

    2014-04-01

    This paper reviews clinical neuropsychological studies that have indicated that the recognition of a person's identity and the recognition of facial expressions are processed by different cortical and subcortical areas of the brain. The fusiform gyrus, especially the right fusiform gyrus, plays an important role in the recognition of identity. The superior temporal sulcus, amygdala, and medial frontal cortex play important roles in facial-expression recognition. Both facial recognition and facial-expression recognition are highly intellectual processes that involve several regions of the brain.

  18. The Associations between Visual Attention and Facial Expression Identification in Patients with Schizophrenia.

    PubMed

    Lin, I-Mei; Fan, Sheng-Yu; Huang, Tiao-Lai; Wu, Wan-Ting; Li, Shi-Ming

    2013-12-01

    Visual search is an important attention process that precedes information processing. Visual search also mediates the relationship between cognitive function (attention) and social cognition (such as facial expression identification). However, the association between visual attention and social cognition in patients with schizophrenia remains unknown. The purposes of this study were to examine the differences in visual search performance and facial expression identification between patients with schizophrenia and normal controls, and to explore the relationship between visual search performance and facial expression identification in patients with schizophrenia. Fourteen patients with schizophrenia (mean age = 46.36±6.74) and 15 normal controls (mean age = 40.87±9.33) participated in this study. A visual search task, including feature search and conjunction search, and the Japanese and Caucasian Facial Expression of Emotion were administered. Patients with schizophrenia performed worse than normal controls in both feature search and conjunction search, and also showed worse facial expression identification, especially for surprise and sadness. In addition, there were negative associations between visual search performance and facial expression identification in patients with schizophrenia, especially for surprise and sadness; this pattern was not observed in normal controls. Patients with schizophrenia who had visual search deficits showed impaired facial expression identification. Improving visual search and facial expression identification abilities may improve their social functioning and interpersonal relationships.

  19. Can an anger face also be scared? Malleability of facial expressions.

    PubMed

    Widen, Sherri C; Naab, Pamela

    2012-10-01

    Do people always interpret a facial expression as communicating a single emotion (e.g., the anger face as only angry) or is that interpretation malleable? The current study investigated preschoolers' (N = 60; 3-4 years) and adults' (N = 20) categorization of facial expressions. On each of five trials, participants selected from an array of 10 facial expressions (an open-mouthed, high arousal expression and a closed-mouthed, low arousal expression each for happiness, sadness, anger, fear, and disgust) all those that displayed the target emotion. Children's interpretation of facial expressions was malleable: 48% of children who selected the fear, anger, sadness, and disgust faces for the "correct" category also selected these same faces for another emotion category; 47% of adults did so for the sadness and disgust faces. The emotion children and adults attribute to facial expressions is influenced by the emotion category for which they are looking.

  20. Deception Detection in Multicultural Coalitions: Foundations for a Cognitive Model

    DTIC Science & Technology

    2011-06-01

    and spontaneous vs. deliberate and contrived facial expression of emotions, symmetry, leakage through microexpressions, hand postures, dynamic...sequences of visually detectable cues, such as facial muscle-group coordination and correlations expressed as changes in facial expressions and face...concert, whereas facial expressions of deceivers emphasize a few cues that arise more randomly and chaotically [15]. A smile without the use of

  1. Dissociation of sad facial expressions and autonomic nervous system responding in boys with disruptive behavior disorders

    PubMed Central

    Marsh, Penny; Beauchaine, Theodore P.; Williams, Bailey

    2009-01-01

    Although deficiencies in emotional responding have been linked to externalizing behaviors in children, little is known about how discrete response systems (e.g., expressive, physiological) are coordinated during emotional challenge among these youth. We examined time-linked correspondence of sad facial expressions and autonomic reactivity during an empathy-eliciting task among boys with disruptive behavior disorders (n = 31) and controls (n = 23). For controls, sad facial expressions were associated with reduced sympathetic (lower skin conductance level, lengthened cardiac preejection period [PEP]) and increased parasympathetic (higher respiratory sinus arrhythmia [RSA]) activity. In contrast, no correspondence between facial expressions and autonomic reactivity was observed among boys with conduct problems. Furthermore, low correspondence between facial expressions and PEP predicted externalizing symptom severity, whereas low correspondence between facial expressions and RSA predicted internalizing symptom severity. PMID:17868261

  2. Mutual information-based facial expression recognition

    NASA Astrophysics Data System (ADS)

    Hazar, Mliki; Hammami, Mohamed; Hanêne, Ben-Abdallah

    2013-12-01

    This paper introduces a novel low-computation discriminative-regions representation for the expression analysis task. The proposed approach relies on interesting studies in psychology which show that most of the descriptive regions responsible for facial expression are located around certain face parts. The contribution of this work lies in the proposition of a new approach that supports automatic facial expression recognition based on automatic region selection. The region selection step aims to select the descriptive regions responsible for facial expression and was performed using the Mutual Information (MI) technique. For facial feature extraction, we applied Local Binary Patterns (LBP) to the gradient image to encode salient micro-patterns of facial expressions. Experimental studies have shown that using discriminative regions provides better results than using the whole face region whilst reducing the feature vector dimension.
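
    A minimal sketch of the region-selection idea: score each patch of a face grid by the mutual information between a simple patch descriptor and the expression label, then keep the most informative patches. The grid size, the mean-intensity descriptor, and the use of scikit-learn's estimator are illustrative assumptions; the paper applies LBP to gradient images.

```python
# Sketch of mutual-information-based selection of discriminative face regions.
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def select_regions(images, labels, grid=(8, 8), k=16):
    """images: (n, H, W) grayscale faces; labels: (n,) expression ids.
    Returns (row, col) grid coordinates of the k most informative patches."""
    n, H, W = images.shape
    gh, gw = grid
    ph, pw = H // gh, W // gw
    # One crude descriptor per patch: its mean intensity (illustrative).
    feats = np.stack(
        [images[:, i*ph:(i+1)*ph, j*pw:(j+1)*pw].reshape(n, -1).mean(axis=1)
         for i in range(gh) for j in range(gw)],
        axis=1)                                # shape (n, gh*gw)
    mi = mutual_info_classif(feats, labels)    # MI of each patch with the label
    top = np.argsort(mi)[::-1][:k]             # most informative patches first
    return [(idx // gw, idx % gw) for idx in top]
```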

  3. Intranasal oxytocin increases facial expressivity, but not ratings of trustworthiness, in patients with schizophrenia and healthy controls.

    PubMed

    Woolley, J D; Chuang, B; Fussell, C; Scherer, S; Biagianti, B; Fulford, D; Mathalon, D H; Vinogradov, S

    2017-05-01

    Blunted facial affect is a common negative symptom of schizophrenia. Additionally, assessing the trustworthiness of faces is a social cognitive ability that is impaired in schizophrenia. Currently available pharmacological agents are ineffective at improving either of these symptoms, despite their clinical significance. The hypothalamic neuropeptide oxytocin has multiple prosocial effects when administered intranasally to healthy individuals and shows promise in decreasing negative symptoms and enhancing social cognition in schizophrenia. Although two small studies have investigated oxytocin's effects on ratings of facial trustworthiness in schizophrenia, its effects on facial expressivity have not been investigated in any population. We investigated the effects of oxytocin on facial emotional expressivity while participants performed a facial trustworthiness rating task in 33 individuals with schizophrenia and 35 age-matched healthy controls using a double-blind, placebo-controlled, cross-over design. Participants rated the trustworthiness of presented faces interspersed with emotionally evocative photographs while being video-recorded. Participants' facial expressivity in these videos was quantified by blind raters using a well-validated manualized approach (i.e. the Facial Expression Coding System; FACES). While oxytocin administration did not affect ratings of facial trustworthiness, it significantly increased facial expressivity in individuals with schizophrenia (Z = -2.33, p = 0.02) and at trend level in healthy controls (Z = -1.87, p = 0.06). These results demonstrate that oxytocin administration can increase facial expressivity in response to emotional stimuli and suggest that oxytocin may have the potential to serve as a treatment for blunted facial affect in schizophrenia.

  4. Quantitative analysis of facial paralysis using local binary patterns in biomedical videos.

    PubMed

    He, Shu; Soraghan, John J; O'Reilly, Brian F; Xing, Dongshan

    2009-07-01

    Facial paralysis is the loss of voluntary muscle movement of one side of the face. A quantitative, objective, and reliable assessment system would be an invaluable tool for clinicians treating patients with this condition. This paper presents a novel framework for objective measurement of facial paralysis. The motion information in the horizontal and vertical directions and the appearance features on the apex frames are extracted based on the local binary patterns (LBPs) on the temporal-spatial domain in each facial region. These features are temporally and spatially enhanced by the application of novel block processing schemes. A multiresolution extension of uniform LBP is proposed to efficiently combine the micropatterns and large-scale patterns into a feature vector. The symmetry of facial movements is measured by the resistor-average distance (RAD) between LBP features extracted from the two sides of the face. Support vector machine is applied to provide quantitative evaluation of facial paralysis based on the House-Brackmann (H-B) scale. The proposed method is validated by experiments with 197 subject videos, which demonstrates its accuracy and efficiency.
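
    The symmetry measure named above, the resistor-average distance, combines the two directed Kullback-Leibler divergences the way parallel resistors combine. A minimal sketch, assuming smoothed LBP histograms already extracted from the two sides of the face:

```python
import numpy as np
from scipy.stats import entropy  # entropy(p, q) computes KL(p || q)

def resistor_average_distance(left_hist, right_hist, eps=1e-8):
    """RAD(p, q) = 1 / (1/KL(p||q) + 1/KL(q||p)): the 'parallel
    resistance' of the two directed KL divergences. Inputs are LBP
    histograms from the left and right half-faces."""
    p = np.asarray(left_hist, float) + eps
    q = np.asarray(right_hist, float) + eps
    p, q = p / p.sum(), q / q.sum()
    kl_pq, kl_qp = entropy(p, q), entropy(q, p)
    if kl_pq == 0.0 or kl_qp == 0.0:  # identical histograms: perfect symmetry
        return 0.0
    return 1.0 / (1.0 / kl_pq + 1.0 / kl_qp)
```

    Per the abstract, one such distance per facial region would then feed a support vector machine graded against the House-Brackmann scale.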

  5. Recognition of facial expressions and prosodic cues with graded emotional intensities in adults with Asperger syndrome.

    PubMed

    Doi, Hirokazu; Fujisawa, Takashi X; Kanai, Chieko; Ohta, Haruhisa; Yokoi, Hideki; Iwanami, Akira; Kato, Nobumasa; Shinohara, Kazuyuki

    2013-09-01

    This study investigated the ability of adults with Asperger syndrome to recognize emotional categories of facial expressions and emotional prosodies with graded emotional intensities. The individuals with Asperger syndrome showed poorer recognition performance for angry and sad expressions from both facial and vocal information. The group difference in facial expression recognition was prominent for stimuli with low or intermediate emotional intensities. In contrast, the individuals with Asperger syndrome exhibited lower recognition accuracy than typically developing controls mainly for emotional prosody with high emotional intensity. In facial expression recognition, the Asperger and control groups both showed an inversion effect for all categories. The magnitude of this effect was smaller in the Asperger group for angry and sad expressions, presumably attributable to reduced recruitment of the configural mode of face processing. The individuals with Asperger syndrome outperformed the control participants in recognizing inverted sad expressions, indicating enhanced processing of local facial information representing sad emotion. These results suggest that adults with Asperger syndrome rely on modality-specific strategies in emotion recognition from facial expression and prosodic information.

  6. Effects of facial color on the subliminal processing of fearful faces.

    PubMed

    Nakajima, K; Minami, T; Nakauchi, S

    2015-12-03

    Recent studies have suggested that both configural information, such as face shape, and surface information are important for face perception. In particular, facial color is sufficiently suggestive of emotional states, as in the phrases: "flushed with anger" and "pale with fear." However, few studies have examined the relationship between facial color and emotional expression. On the other hand, event-related potential (ERP) studies have shown that emotional expressions, such as fear, are processed unconsciously. In this study, we examined how facial color modulated the supraliminal and subliminal processing of fearful faces. We recorded electroencephalograms while participants performed a facial emotion identification task involving masked target faces exhibiting facial expressions (fearful or neutral) and colors (natural or bluish). The results indicated a significant interaction between facial expression and color for the latency of the N170 component. Subsequent analyses revealed that the bluish-colored faces increased the latency effect of facial expressions compared to the natural-colored faces, indicating that the bluish color modulated the processing of fearful expressions. We conclude that the unconscious processing of fearful faces is affected by facial color. Copyright © 2015 IBRO. Published by Elsevier Ltd. All rights reserved.

  7. Realistic facial expression of virtual human based on color, sweat, and tears effects.

    PubMed

    Alkawaz, Mohammed Hazim; Basori, Ahmad Hoirul; Mohamad, Dzulkifli; Mohamed, Farhan

    2014-01-01

    Generating extreme appearances, such as sweating when scared, tears when crying, and blushing with anger or happiness, is a key issue in achieving high-quality facial animation. The effects of sweat, tears, and color are integrated into a single animation model to create realistic facial expressions for a 3D avatar. The physical properties of muscles and emotions, along with the fluid properties of the sweating and tears initiators, are incorporated. The action units (AUs) of the facial action coding system are merged with autonomous AUs to create expressions including sadness, anger with blushing, happiness with blushing, and fear. Fluid effects such as sweat and tears are simulated using the particle system and smoothed-particle hydrodynamics (SPH) methods, which are combined with a facial animation technique to produce complex facial expressions. The effects of oxygenation on the facial skin color appearance are measured using a pulse oximeter system and a 3D skin analyzer. The results show that the virtual human's facial expressions are enhanced by mimicking actual sweating and tears for all extreme expressions. The proposed method contributes to the development of the facial animation and game industries, as well as computer graphics.
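
    Of the fluid machinery mentioned above, the simplest ingredient is the particle update driving droplet motion. The sketch below shows only that step, under explicit Euler integration with illustrative constants; the SPH pressure and viscosity terms and the collision handling against the face mesh are omitted.

```python
import numpy as np

GRAVITY = np.array([0.0, -9.81, 0.0])  # m/s^2, illustrative

def step_droplets(pos, vel, dt=1.0 / 60.0, damping=0.98):
    """One explicit Euler step for tear/sweat droplet particles.
    pos, vel: (n, 3) arrays. SPH forces and mesh collisions omitted."""
    vel = (vel + GRAVITY * dt) * damping
    pos = pos + vel * dt
    return pos, vel
```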

  8. Realistic Facial Expression of Virtual Human Based on Color, Sweat, and Tears Effects

    PubMed Central

    Alkawaz, Mohammed Hazim; Basori, Ahmad Hoirul; Mohamad, Dzulkifli; Mohamed, Farhan

    2014-01-01

    Generating extreme appearances, such as sweating when scared, tears when crying, and blushing with anger or happiness, is a key issue in achieving high-quality facial animation. The effects of sweat, tears, and color are integrated into a single animation model to create realistic facial expressions for a 3D avatar. The physical properties of muscles and emotions, along with the fluid properties of the sweating and tears initiators, are incorporated. The action units (AUs) of the facial action coding system are merged with autonomous AUs to create expressions including sadness, anger with blushing, happiness with blushing, and fear. Fluid effects such as sweat and tears are simulated using the particle system and smoothed-particle hydrodynamics (SPH) methods, which are combined with a facial animation technique to produce complex facial expressions. The effects of oxygenation on the facial skin color appearance are measured using a pulse oximeter system and a 3D skin analyzer. The results show that the virtual human's facial expressions are enhanced by mimicking actual sweating and tears for all extreme expressions. The proposed method contributes to the development of the facial animation and game industries, as well as computer graphics. PMID:25136663

  9. Cone beam tomographic study of facial structures characteristics at rest and wide smile, and their correlation with the facial types.

    PubMed

    Martins, Luciana Flaquer; Vigorito, Julio Wilson

    2013-01-01

    To determine the characteristics of facial soft tissues at rest and wide smile, and their possible relation to facial type. We analyzed a sample of forty-eight young adult females, aged between 19.1 and 40 years (mean age 30.9 years), who had balanced profiles and passive lip seal. Cone beam computed tomography was performed at rest and wide smile postures on the entire sample, which was divided into three groups according to individual facial type. Soft tissue analysis of the lips, nose, zygoma and chin was done in sagittal, axial and frontal tomographic views. No differences were observed in any of the facial type variables in the static analysis of facial structures at either the rest or wide smile posture. Dynamic analysis showed that brachyfacial types are more sensitive to movement, presenting greater sagittal lip contraction. However, the lip movement produced by this type of face results in a narrower smile, with a smaller tooth exposure area, when compared with the other facial types. The findings indicated that the position of the upper lip should be ahead of the lower lip, and the latter ahead of the pogonion. It was also found that facial type does not affect the positioning of these structures. Additionally, cone beam computed tomography may be a valuable method for studying craniofacial features.

  10. Differences in holistic processing do not explain cultural differences in the recognition of facial expression.

    PubMed

    Yan, Xiaoqian; Young, Andrew W; Andrews, Timothy J

    2017-12-01

    The aim of this study was to investigate the causes of the own-race advantage in facial expression perception. In Experiment 1, we investigated Western Caucasian and Chinese participants' perception and categorization of facial expressions of six basic emotions that included two pairs of confusable expressions (fear and surprise; anger and disgust). People were slightly better at identifying facial expressions posed by own-race members (mainly in anger and disgust). In Experiment 2, we asked whether the own-race advantage was due to differences in the holistic processing of facial expressions. Participants viewed composite faces in which the upper part of one expression was combined with the lower part of a different expression. The upper and lower parts of the composite faces were either aligned or misaligned. Both Chinese and Caucasian participants were better at identifying the facial expressions from the misaligned images, showing interference on recognizing the parts of the expressions created by holistic perception of the aligned composite images. However, this interference from holistic processing was equivalent across expressions of own-race and other-race faces in both groups of participants. Whilst the own-race advantage in recognizing facial expressions does seem to reflect the confusability of certain emotions, it cannot be explained by differences in holistic processing.

  11. Caricaturing facial expressions.

    PubMed

    Calder, A J; Rowland, D; Young, A W; Nimmo-Smith, I; Keane, J; Perrett, D I

    2000-08-14

    The physical differences between facial expressions (e.g. fear) and a reference norm (e.g. a neutral expression) were altered to produce photographic-quality caricatures. In Experiment 1, participants rated caricatures of fear, happiness and sadness for their intensity of these three emotions; a second group of participants rated how 'face-like' the caricatures appeared. With increasing levels of exaggeration the caricatures were rated as more emotionally intense, but less 'face-like'. Experiment 2 demonstrated a similar relationship between emotional intensity and level of caricature for six different facial expressions. Experiments 3 and 4 compared intensity ratings of facial expression caricatures prepared relative to a selection of reference norms - a neutral expression, an average expression, or a different facial expression (e.g. anger caricatured relative to fear). Each norm produced a linear relationship between caricature and rated intensity of emotion; this finding is inconsistent with two-dimensional models of the perceptual representation of facial expression. An exemplar-based multidimensional model is proposed as an alternative account.
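
    The caricaturing operation itself is a linear exaggeration of the difference between an expression and its reference norm. A minimal landmark-space sketch (the published method additionally warps the photographic texture to the exaggerated shape):

```python
import numpy as np

def caricature(landmarks, norm_landmarks, level=0.5):
    """Exaggerate an expression by scaling its deviation from a norm.
    level=0 reproduces the original, level=0.5 gives a +50% caricature,
    and negative levels give anti-caricatures."""
    x = np.asarray(landmarks, float)
    m = np.asarray(norm_landmarks, float)
    return m + (1.0 + level) * (x - m)
```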

  12. Face expressive lifting (FEL): an original surgical concept combined with bipolar radiofrequency.

    PubMed

    Divaris, Marc; Blugerman, Guillermo; Paul, Malcolm D

    2014-01-01

    Aging can lead to changes in facial expressions, transforming the positive, youthful expression of happiness into negative expressions such as sadness, tiredness, and disgust. Local skin distension is another consequence of aging, which can be difficult to treat with rejuvenation procedures. The "face expressive lifting" (FEL) is an original concept in facial rejuvenation surgery. On the one hand, FEL integrates established convergent surgical techniques aiming to correct age-related negative facial expressions. On the other hand, FEL incorporates novel bipolar RF technology aiming to correct local skin distension. One hundred twenty-six patients underwent the FEL procedure. Facial expression and local skin distension were assessed over 2 years of follow-up. There was a correction of negative facial expression for 96 patients (76%) and a tightening of local skin distension in 100% of cases. FEL is an effective procedure that takes into account and is able to correct both age-related negative changes in facial expression and local skin distension using radiofrequency. Level of Evidence: Level IV, therapeutic study.

  13. FaceWarehouse: a 3D facial expression database for visual computing.

    PubMed

    Cao, Chen; Weng, Yanlin; Zhou, Shun; Tong, Yiying; Zhou, Kun

    2014-03-01

    We present FaceWarehouse, a database of 3D facial expressions for visual computing applications. We used Kinect, an off-the-shelf RGBD camera, to capture 150 individuals aged 7-80 from various ethnic backgrounds. For each person, we captured the RGBD data of his or her different expressions, including the neutral expression and 19 other expressions such as mouth-opening, smile, and kiss. For every raw RGBD data record, a set of facial feature points on the color image, such as eye corners, mouth contour, and the nose tip, are automatically localized, and manually adjusted if better accuracy is required. We then deform a template facial mesh to fit the depth data as closely as possible while matching the feature points on the color image to their corresponding points on the mesh. Starting from these fitted face meshes, we construct a set of individual-specific expression blendshapes for each person. These meshes with consistent topology are assembled as a rank-3 tensor to build a bilinear face model with two attributes: identity and expression. Compared with previous 3D facial databases, for every person in our database there is a much richer matching collection of expressions, enabling depiction of most human facial actions. We demonstrate the potential of FaceWarehouse for visual computing with four applications: facial image manipulation, face component transfer, real-time performance-based facial image animation, and facial animation retargeting from video to image.
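
    The bilinear model named above generates a face by contracting the rank-3 core tensor with an identity weight vector and an expression weight vector. A minimal sketch, with the tensor layout (stacked vertex coordinates x identities x expressions) assumed for illustration:

```python
import numpy as np

def synthesize_face(core, w_id, w_exp):
    """Bilinear face model: core has shape (3 * n_vertices, n_id, n_exp);
    w_id (n_id,) and w_exp (n_exp,) select a person and an expression.
    Returns an (n_vertices, 3) array of vertex positions."""
    verts = np.einsum("vie,i,e->v", core, w_id, w_exp)
    return verts.reshape(-1, 3)
```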

  14. Emotional Representation in Facial Expression and Script: A Comparison between Normal and Autistic Children

    ERIC Educational Resources Information Center

    Balconi, Michela; Carrera, Alba

    2007-01-01

    The paper explored conceptual and lexical skills with regard to emotional correlates of facial stimuli and scripts. In two different experimental phases normal and autistic children observed six facial expressions of emotions (happiness, anger, fear, sadness, surprise, and disgust) and six emotional scripts (contextualized facial expressions). In…

  15. Neural Correlates of Facial Mimicry: Simultaneous Measurements of EMG and BOLD Responses during Perception of Dynamic Compared to Static Facial Expressions

    PubMed Central

    Rymarczyk, Krystyna; Żurawski, Łukasz; Jankowiak-Siuda, Kamila; Szatkowska, Iwona

    2018-01-01

    Facial mimicry (FM) is an automatic response to imitate the facial expressions of others. However, the neural correlates of the phenomenon are not yet well established. We investigated this issue using simultaneously recorded EMG and BOLD signals during perception of dynamic and static emotional facial expressions of happiness and anger. During display presentations, BOLD signals and zygomaticus major (ZM), corrugator supercilii (CS) and orbicularis oculi (OO) EMG responses were recorded simultaneously from 46 healthy individuals. Subjects reacted spontaneously to happy facial expressions with increased EMG activity in ZM and OO muscles and decreased CS activity, which was interpreted as FM. Facial muscle responses correlated with BOLD activity in regions associated with motor simulation of facial expressions [i.e., the inferior frontal gyrus, a classical Mirror Neuron System (MNS) region]. Further, we also found correlations for regions associated with emotional processing (i.e., the insula, part of the extended MNS). It is concluded that FM involves both motor and emotional brain structures, especially during perception of natural emotional expressions. PMID:29467691
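
    Mimicry effects of this kind are typically quantified as baseline-corrected changes in rectified EMG per muscle and trial. The sketch below shows that generic computation, with window lengths as illustrative assumptions rather than the authors' exact pipeline.

```python
import numpy as np

def emg_response(signal, fs, stim_onset_s, base_s=1.0, resp_s=2.0):
    """Percent change of mean rectified EMG in a post-stimulus window
    relative to a pre-stimulus baseline. signal: 1-D raw EMG trace;
    fs: sampling rate in Hz; stim_onset_s: stimulus onset in seconds."""
    onset = int(stim_onset_s * fs)
    baseline = np.abs(signal[onset - int(base_s * fs):onset]).mean()
    response = np.abs(signal[onset:onset + int(resp_s * fs)]).mean()
    return 100.0 * (response - baseline) / baseline
```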

  16. A model for production, perception, and acquisition of actions in face-to-face communication.

    PubMed

    Kröger, Bernd J; Kopp, Stefan; Lowit, Anja

    2010-08-01

    The concept of action as basic motor control unit for goal-directed movement behavior has been used primarily for private or non-communicative actions like walking, reaching, or grasping. In this paper, literature is reviewed indicating that this concept can also be used in all domains of face-to-face communication like speech, co-verbal facial expression, and co-verbal gesturing. Three domain-specific types of actions, i.e. speech actions, facial actions, and hand-arm actions, are defined in this paper and a model is proposed that elucidates the underlying biological mechanisms of action production, action perception, and action acquisition in all domains of face-to-face communication. This model can be used as theoretical framework for empirical analysis or simulation with embodied conversational agents, and thus for advanced human-computer interaction technologies.

  17. [Facial expressions of negative emotions in clinical interviews: The development, reliability and validity of a categorical system for the attribution of functions to facial expressions of negative emotions].

    PubMed

    Bock, Astrid; Huber, Eva; Peham, Doris; Benecke, Cord

    2015-01-01

    The development (Study 1) and validation (Study 2) of a categorical system for the attribution of facial expressions of negative emotions to specific functions. The facial expressions observed in OPD interviews (OPD Task Force 2009) are coded according to the Facial Action Coding System (FACS; Ekman et al. 2002) and attributed to categories of basic emotional displays using EmFACS (Friesen & Ekman 1984). In Study 1 we analyze a partial sample of 20 interviews and postulate 10 categories of functions that can be arranged into three main categories (interactive, self and object). In Study 2 we rate the facial expressions (n=2320) from the OPD interviews (10 minutes per interview) of 80 female subjects (16 healthy, 64 with a DSM-IV diagnosis; age: 18-57 years) according to the categorical system and correlate them with problematic relationship experiences (measured with the IIP; Horowitz et al. 2000). Functions of negative facial expressions can be attributed reliably and validly with the RFE-Coding System. The attribution of interactive, self-related and object-related functions allows for a deeper understanding of the emotional facial expressions of patients with mental disorders.

  18. Image ratio features for facial expression recognition application.

    PubMed

    Song, Mingli; Tao, Dacheng; Liu, Zicheng; Li, Xuelong; Zhou, Mengchu

    2010-06-01

    Video-based facial expression recognition is a challenging problem in computer vision and human-computer interaction. To target this problem, texture features have been extracted and widely used, because they can capture image intensity changes raised by skin deformation. However, existing texture features encounter problems with albedo and lighting variations. To solve both problems, we propose a new texture feature called image ratio features. Compared with previously proposed texture features, e.g., high gradient component features, image ratio features are more robust to albedo and lighting variations. In addition, to further improve facial expression recognition accuracy based on image ratio features, we combine image ratio features with facial animation parameters (FAPs), which describe the geometric motions of facial feature points. The performance evaluation is based on the Carnegie Mellon University Cohn-Kanade database, our own database, and the Japanese Female Facial Expression database. Experimental results show that the proposed image ratio feature is more robust to albedo and lighting variations, and the combination of image ratio features and FAPs outperforms each feature alone. In addition, we study asymmetric facial expressions based on our own facial expression database and demonstrate the superior performance of our combined expression recognition system.
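
    The robustness argument rests on the Lambertian factorization of image intensity into albedo times shading: dividing an expression frame by a neutral reference of the same face cancels the albedo pixel-wise. A minimal sketch (the epsilon guard and log compression are assumptions, not the paper's exact formulation):

```python
import numpy as np

def image_ratio_features(expr_img, neutral_img, eps=1e-6):
    """Pixel-wise ratio of an expression image to a neutral reference.
    With I = albedo * shading, the albedo cancels in the ratio, leaving
    a feature driven by expression-induced shading change."""
    expr = np.asarray(expr_img, float)
    neutral = np.asarray(neutral_img, float)
    return np.log1p(expr / (neutral + eps))  # log-compress for stability
```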

  19. Brief report: Representational momentum for dynamic facial expressions in pervasive developmental disorder.

    PubMed

    Uono, Shota; Sato, Wataru; Toichi, Motomi

    2010-03-01

    Individuals with pervasive developmental disorder (PDD) have difficulty with social communication via emotional facial expressions, but behavioral studies involving static images have reported inconsistent findings about emotion recognition. We investigated whether dynamic presentation of facial expression would enhance subjective perception of expressed emotion in 13 individuals with PDD and 13 typically developing controls. We presented dynamic and static emotional (fearful and happy) expressions. Participants were asked to match a changeable emotional face display with the last presented image. The results showed that both groups perceived the last image of dynamic facial expression to be more emotionally exaggerated than the static facial expression. This finding suggests that individuals with PDD have an intact perceptual mechanism for processing dynamic information in another individual's face.

  20. Enhanced subliminal emotional responses to dynamic facial expressions.

    PubMed

    Sato, Wataru; Kubota, Yasutaka; Toichi, Motomi

    2014-01-01

    Emotional processing without conscious awareness plays an important role in human social interaction. Several behavioral studies reported that subliminal presentation of photographs of emotional facial expressions induces unconscious emotional processing. However, it was difficult to elicit strong and robust effects using this method. We hypothesized that dynamic presentations of facial expressions would enhance subliminal emotional effects and tested this hypothesis with two experiments. Fearful or happy facial expressions were presented dynamically or statically in either the left or the right visual field for 20 (Experiment 1) and 30 (Experiment 2) ms. Nonsense target ideographs were then presented, and participants reported their preference for them. The results consistently showed that dynamic presentations of emotional facial expressions induced more evident emotional biases toward subsequent targets than did static ones. These results indicate that dynamic presentations of emotional facial expressions induce more evident unconscious emotional processing.

  1. The Child Affective Facial Expression (CAFE) set: validity and reliability from untrained adults.

    PubMed

    LoBue, Vanessa; Thrasher, Cat

    2014-01-01

    Emotional development is one of the largest and most productive areas of psychological research. For decades, researchers have been fascinated by how humans respond to, detect, and interpret emotional facial expressions. Much of the research in this area has relied on controlled stimulus sets of adults posing various facial expressions. Here we introduce a new stimulus set of emotional facial expressions into the domain of research on emotional development-The Child Affective Facial Expression set (CAFE). The CAFE set features photographs of a racially and ethnically diverse group of 2- to 8-year-old children posing for six emotional facial expressions-angry, fearful, sad, happy, surprised, and disgusted-and a neutral face. In the current work, we describe the set and report validity and reliability data on the set from 100 untrained adult participants.

  2. Impact of Childhood Maltreatment on the Recognition of Facial Expressions of Emotions.

    PubMed

    Ardizzi, Martina; Martini, Francesca; Umiltà, Maria Alessandra; Evangelista, Valentina; Ravera, Roberto; Gallese, Vittorio

    2015-01-01

    The development of the explicit recognition of facial expressions of emotions can be affected by childhood maltreatment experiences. A previous study demonstrated the existence of an explicit recognition bias for angry facial expressions among a population of adolescent Sierra Leonean street-boys exposed to high levels of maltreatment. In the present study, the recognition bias for angry facial expressions was investigated in a younger population of street-children and age-matched controls. Participants performed a forced-choice facial expression recognition task. Recognition bias was measured as participants' tendency to over-attribute the anger label to other negative facial expressions. Participants' heart rate was assessed and related to their behavioral performance, as an index of their stress-related physiological responses. Results demonstrated the presence of a recognition bias for angry facial expressions among street-children, also pinpointing a similar, although significantly less pronounced, tendency among controls. Participants' performance was controlled for age, cognitive and educational levels, and naming skills. None of these variables influenced the recognition bias for angry facial expressions. In contrast, a significant effect of heart rate on participants' tendency to use the anger label was evidenced. Taken together, these results suggest that childhood exposure to maltreatment experiences amplifies children's "pre-existing bias" for anger labeling in forced-choice emotion recognition tasks. Moreover, they strengthen the thesis that the recognition bias for angry facial expressions is a manifestation of a functional adaptive mechanism that tunes the victim's perceptual and attentional focus toward salient environmental social stimuli.

  3. Impact of Childhood Maltreatment on the Recognition of Facial Expressions of Emotions

    PubMed Central

    Ardizzi, Martina; Martini, Francesca; Umiltà, Maria Alessandra; Evangelista, Valentina; Ravera, Roberto; Gallese, Vittorio

    2015-01-01

    The development of the explicit recognition of facial expressions of emotions can be affected by childhood maltreatment experiences. A previous study demonstrated the existence of an explicit recognition bias for angry facial expressions among a population of adolescent Sierra Leonean street-boys exposed to high levels of maltreatment. In the present study, the recognition bias for angry facial expressions was investigated in a younger population of street-children and age-matched controls. Participants performed a forced-choice facial expression recognition task. Recognition bias was measured as participants’ tendency to over-attribute the anger label to other negative facial expressions. Participants’ heart rate was assessed and related to their behavioral performance, as an index of their stress-related physiological responses. Results demonstrated the presence of a recognition bias for angry facial expressions among street-children, also pinpointing a similar, although significantly less pronounced, tendency among controls. Participants’ performance was controlled for age, cognitive and educational levels, and naming skills. None of these variables influenced the recognition bias for angry facial expressions. In contrast, a significant effect of heart rate on participants’ tendency to use the anger label was evidenced. Taken together, these results suggest that childhood exposure to maltreatment experiences amplifies children’s “pre-existing bias” for anger labeling in forced-choice emotion recognition tasks. Moreover, they strengthen the thesis that the recognition bias for angry facial expressions is a manifestation of a functional adaptive mechanism that tunes the victim’s perceptual and attentional focus toward salient environmental social stimuli. PMID:26509890

  4. The effect of facial expressions on respirators contact pressures.

    PubMed

    Cai, Mang; Shen, Shengnan; Li, Hui

    2017-08-01

    This study investigated the effect of four typical facial expressions (calmness, happiness, sadness and surprise) on the contact characteristics between an N95 filtering facepiece respirator and a headform. The respirator model comprised two layers (an inner layer and an outer layer) and a nose clip. The headform model comprised a skin layer, a fatty tissue layer embedded with eight muscles, and a skull layer. The four typical facial expressions were generated by the coordinated contraction of four facial muscles. The distribution of the contact pressure on the headform, as well as the contact area, was then calculated. Results demonstrated that the nose clip could help the respirator move closer to the nose bridge while causing facial discomfort. Moreover, contact areas varied with different facial expressions, and facial expressions significantly altered contact pressures at key areas, which may result in leakage.

  5. Macaques can predict social outcomes from facial expressions.

    PubMed

    Waller, Bridget M; Whitehouse, Jamie; Micheletta, Jérôme

    2016-09-01

    There is widespread acceptance that facial expressions are useful in social interactions, but empirical demonstration of their adaptive function has remained elusive. Here, we investigated whether macaques can use the facial expressions of others to predict the future outcomes of social interaction. Crested macaques (Macaca nigra) were shown an approach between two unknown individuals on a touchscreen and were required to choose between one of two potential social outcomes. The facial expressions of the actors were manipulated in the last frame of the video. One subject reached the experimental stage and accurately predicted different social outcomes depending on which facial expressions the actors displayed. The bared-teeth display (homologue of the human smile) was most strongly associated with predicted friendly outcomes. Contrary to our predictions, screams and threat faces were not associated more with conflict outcomes. Overall, therefore, the presence of any facial expression (compared to neutral) caused the subject to choose friendly outcomes more than negative outcomes. Facial expression in general, therefore, indicated a reduced likelihood of social conflict. The findings dispute traditional theories that view expressions only as indicators of present emotion and instead suggest that expressions form part of complex social interactions where individuals think beyond the present.

  6. Natural, but not artificial, facial movements elicit the left visual field bias in infant face scanning

    PubMed Central

    Xiao, Naiqi G.; Quinn, Paul C.; Wheeler, Andrea; Pascalis, Olivier; Lee, Kang

    2014-01-01

    A left visual field (LVF) bias has been consistently reported in eye movement patterns when adults look at face stimuli, which reflects hemispheric lateralization of face processing and eye movements. However, the emergence of the LVF attentional bias in infancy is less clear. The present study investigated the emergence and development of the LVF attentional bias in infants from 3 to 9 months of age with moving face stimuli. We specifically examined the naturalness of facial movements in infants’ LVF attentional bias by comparing eye movement patterns in naturally and artificially moving faces. Results showed that 3- to 5-month-olds exhibited the LVF attentional bias only in the lower half of naturally moving faces, but not in artificially moving faces. Six- to 9-month-olds showed the LVF attentional bias in both the lower and upper face halves only in naturally moving, but not in artificially moving faces. These results suggest that the LVF attentional bias for face processing may emerge around 3 months of age and is driven by natural facial movements. The LVF attentional bias reflects the role of natural face experience in real life situations that may drive the development of hemispheric lateralization of face processing in infancy. PMID:25064049

  7. Eye tracking reveals a crucial role for facial motion in recognition of faces by infants.

    PubMed

    Xiao, Naiqi G; Quinn, Paul C; Liu, Shaoying; Ge, Liezhong; Pascalis, Olivier; Lee, Kang

    2015-06-01

    Current knowledge about face processing in infancy comes largely from studies using static face stimuli, but faces that infants see in the real world are mostly moving ones. To bridge this gap, 3-, 6-, and 9-month-old Asian infants (N = 118) were familiarized with either moving or static Asian female faces, and then their face recognition was tested with static face images. Eye-tracking methodology was used to record eye movements during the familiarization and test phases. The results showed a developmental change in eye movement patterns, but only for the moving faces. In addition, the more infants shifted their fixations across facial regions, the better their face recognition was, but only for the moving faces. The results suggest that facial movement influences the way faces are encoded from early in development. (c) 2015 APA, all rights reserved.

  8. Less Empathic and More Reactive: The Different Impact of Childhood Maltreatment on Facial Mimicry and Vagal Regulation

    PubMed Central

    Ardizzi, Martina; Umiltà, Maria Alessandra; Evangelista, Valentina; Di Liscia, Alessandra; Ravera, Roberto; Gallese, Vittorio

    2016-01-01

    Facial mimicry and vagal regulation represent two crucial physiological responses to others’ facial expressions of emotions. Facial mimicry, defined as the automatic, rapid and congruent electromyographic activation to others’ facial expressions, is implicated in empathy, emotional reciprocity and emotion recognition. Vagal regulation, quantified by the computation of Respiratory Sinus Arrhythmia (RSA), exemplifies the autonomic adaptation to contingent social cues. Although it has been demonstrated that childhood maltreatment induces alterations in the processing of the facial expression of emotions, both at an explicit and implicit level, the effects of maltreatment on children’s facial mimicry and vagal regulation in response to facial expressions of emotions remain unknown. The purpose of the present study was to fill this gap, involving 24 street-children (maltreated group) and 20 age-matched controls (control group). We recorded their spontaneous facial electromyographic activations of corrugator and zygomaticus muscles and RSA responses during the visualization of the facial expressions of anger, fear, joy and sadness. Results demonstrated a different impact of childhood maltreatment on facial mimicry and vagal regulation. Maltreated children did not show the typical positive-negative modulation of corrugator mimicry. Furthermore, when only negative facial expressions were considered, maltreated children demonstrated lower corrugator mimicry than controls. With respect to vagal regulation, whereas maltreated children manifested the expected and functional inverse correlation between RSA value at rest and RSA response to angry facial expressions, controls did not. These results describe an early and divergent functional adaptation of the two investigated physiological mechanisms to a hostile environment. On the one hand, maltreatment leads to the suppression of the spontaneous facial mimicry that normally contributes to the empathic understanding of others’ emotions. On the other hand, maltreatment forces the precocious development of the functional synchronization between vagal regulation and threatening social cues, facilitating the recruitment of fight-or-flight defensive behavioral strategies. PMID:27685802

  9. Capturing Physiology of Emotion along Facial Muscles: A Method of Distinguishing Feigned from Involuntary Expressions

    NASA Astrophysics Data System (ADS)

    Khan, Masood Mehmood; Ward, Robert D.; Ingleby, Michael

    The ability to distinguish feigned from involuntary expressions of emotions could help in the investigation and treatment of neuropsychiatric and affective disorders and in the detection of malingering. This work investigates differences in emotion-specific patterns of thermal variations along the major facial muscles. Using experimental data extracted from 156 images, we attempted to classify patterns of emotion-specific thermal variations into neutral, and voluntary and involuntary expressions of positive and negative emotive states. Initial results suggest (i) each facial muscle exhibits a unique thermal response to various emotive states; (ii) the pattern of thermal variances along the facial muscles may assist in classifying voluntary and involuntary facial expressions; and (iii) facial skin temperature measurements along the major facial muscles may be used in automated emotion assessment.

  10. The role of spatial frequency information for ERP components sensitive to faces and emotional facial expression.

    PubMed

    Holmes, Amanda; Winston, Joel S; Eimer, Martin

    2005-10-01

    To investigate the impact of spatial frequency on emotional facial expression analysis, ERPs were recorded in response to low spatial frequency (LSF), high spatial frequency (HSF), and unfiltered broad spatial frequency (BSF) faces with fearful or neutral expressions, houses, and chairs. In line with previous findings, BSF fearful facial expressions elicited a greater frontal positivity than BSF neutral facial expressions, starting at about 150 ms after stimulus onset. In contrast, this emotional expression effect was absent for HSF and LSF faces. Given that some brain regions involved in emotion processing, such as amygdala and connected structures, are selectively tuned to LSF visual inputs, these data suggest that ERP effects of emotional facial expression do not directly reflect activity in these regions. It is argued that higher order neocortical brain systems are involved in the generation of emotion-specific waveform modulations. The face-sensitive N170 component was neither affected by emotional facial expression nor by spatial frequency information.
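
    LSF and HSF stimuli of this kind are commonly produced with a Gaussian low-pass filter and its complementary high-pass residual. A minimal sketch; the sigma value is illustrative, whereas studies typically specify cutoffs in cycles per image:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def spatial_frequency_versions(face, sigma=6.0):
    """Return (LSF, HSF, BSF) versions of a grayscale face image.
    sigma sets the Gaussian cutoff and is an illustrative value."""
    face = np.asarray(face, float)
    lsf = gaussian_filter(face, sigma)   # low-pass: coarse structure only
    hsf = face - lsf + face.mean()       # high-pass residual, re-centered
    return lsf, hsf, face                # BSF = unfiltered broadband image
```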

  11. Tuning to the Positive: Age-Related Differences in Subjective Perception of Facial Emotion

    PubMed Central

    Picardo, Rochelle; Baron, Andrew S.; Anderson, Adam K.; Todd, Rebecca M.

    2016-01-01

    Facial expressions aid social transactions and serve as socialization tools, with smiles signaling approval and reward, and angry faces signaling disapproval and punishment. The present study examined whether the subjective experience of positive vs. negative facial expressions differs between children and adults. Specifically, we examined age-related differences in biases toward happy and angry facial expressions. Young children (5–7 years) and young adults (18–29 years) rated the intensity of happy and angry expressions as well as levels of experienced arousal. Results showed that young children—but not young adults—rated happy facial expressions as both more intense and arousing than angry faces. This finding, which we replicated in two independent samples, was not due to differences in the ability to identify facial expressions, and suggests that children are more tuned to information in positive expressions. Together these studies provide evidence that children see unambiguous adult emotional expressions through rose-colored glasses, and suggest that what is emotionally relevant can shift with development. PMID:26734940

  12. Misinterpretation of facial expression: a cross-cultural study.

    PubMed

    Shioiri, T; Someya, T; Helmeste, D; Tang, S W

    1999-02-01

    Accurately recognizing facial emotional expressions is important in psychiatrist-patient interactions. This might be difficult when the physician and patients are from different cultures. More than two decades of research on facial expressions have documented the universality of the emotions of anger, contempt, disgust, fear, happiness, sadness, and surprise. In contrast, some research data support the concept that there are significant cultural differences in the judgment of emotion. In this pilot study, the recognition of emotional facial expressions by 123 Japanese subjects was evaluated using the Japanese and Caucasian Facial Expression of Emotion (JACFEE) photos. The results indicated that Japanese subjects experienced difficulties in recognizing some emotional facial expressions and misunderstood others as depicted by the posers, when compared to previous studies using American subjects. Interestingly, the sex and cultural background of the poser did not appear to influence the accuracy of recognition. The data suggest that, in this young Japanese sample, the judgment of certain emotional facial expressions differed significantly from that of Americans. Further exploration in this area is warranted due to its importance in cross-cultural clinician-patient interactions.

  13. Multichannel Communication: The Impact of the Paralinguistic Channel on Facial Expression of Emotion.

    ERIC Educational Resources Information Center

    Brideau, Linda B.; Allen, Vernon L.

    A study was undertaken to examine the impact of the paralinguistic channel on the ability to encode facial expressions of emotion. The first set of subjects, 19 encoders, were asked to encode facial expressions for five emotions (fear, sadness, anger, happiness, and disgust). The emotions were produced in three encoding conditions: facial channel…

  14. Altered saccadic targets when processing facial expressions under different attentional and stimulus conditions.

    PubMed

    Boutsen, Frank A; Dvorak, Justin D; Pulusu, Vinay K; Ross, Elliott D

    2017-04-01

    Depending on a subject's attentional bias, robust changes in emotional perception occur when facial blends (different emotions expressed on upper/lower face) are presented tachistoscopically. If no instructions are given, subjects overwhelmingly identify the lower facial expression when blends are presented to either visual field. If asked to attend to the upper face, subjects overwhelmingly identify the upper facial expression in the left visual field but remain slightly biased to the lower facial expression in the right visual field. The current investigation sought to determine whether differences in initial saccadic targets could help explain the perceptual biases described above. Ten subjects were presented with full and blend facial expressions under different attentional conditions. No saccadic differences were found for left versus right visual field presentations or for full facial versus blend stimuli. When asked to identify the presented emotion, saccades were directed to the lower face. When asked to attend to the upper face, saccades were directed to the upper face. When asked to attend to the upper face and try to identify the emotion, saccades were directed to the upper face but to a lesser degree. Thus, saccadic behavior supports the concept that there are cognitive-attentional pre-attunements when subjects visually process facial expressions. However, these pre-attunements do not fully explain the perceptual superiority of the left visual field for identifying the upper facial expression when facial blends are presented tachistoscopically. Hence other perceptual factors must be in play, such as the phenomenon of virtual scanning. Published by Elsevier Ltd.

  15. Reconstructing dynamic mental models of facial expressions in prosopagnosia reveals distinct representations for identity and expression.

    PubMed

    Richoz, Anne-Raphaëlle; Jack, Rachael E; Garrod, Oliver G B; Schyns, Philippe G; Caldara, Roberto

    2015-04-01

    The human face transmits a wealth of signals that readily provide crucial information for social interactions, such as facial identity and emotional expression. Yet, a fundamental question remains unresolved: does the face information for identity and emotional expression categorization tap into common or distinct representational systems? To address this question we tested PS, a pure case of acquired prosopagnosia with bilateral occipitotemporal lesions anatomically sparing the regions that are assumed to contribute to facial expression (de)coding (i.e., the amygdala, the insula and the posterior superior temporal sulcus--pSTS). We previously demonstrated that PS does not use information from the eye region to identify faces, but relies on the suboptimal mouth region. PS's abnormal information use for identity, coupled with her neural dissociation, provides a unique opportunity to probe the existence of a dichotomy in the face representational system. To reconstruct the mental models of the six basic facial expressions of emotion in PS and age-matched healthy observers, we used a novel reverse correlation technique tracking information use on dynamic faces. PS was comparable to controls, using all facial features to (de)code facial expressions with the exception of fear. PS's normal (de)coding of dynamic facial expressions suggests that the face system relies either on distinct representational systems for identity and expression, or dissociable cortical pathways to access them. Interestingly, PS showed a selective impairment for categorizing many static facial expressions, which could be accounted for by her lesion in the right inferior occipital gyrus. PS's advantage for dynamic facial expressions might instead relate to a functionally distinct and sufficient cortical pathway directly connecting the early visual cortex to the spared pSTS. Altogether, our data provide critical insights on the healthy and impaired face systems, question evidence of deficits obtained from patients by using static images of facial expressions, and offer novel routes for patient rehabilitation. Copyright © 2014 Elsevier Ltd. All rights reserved.
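
    In its simplest static, noise-based form, reverse correlation estimates a classification image by contrasting the average stimulus on trials that did and did not elicit a given categorization; the authors' dynamic face-movement variant is considerably richer, so the sketch below conveys only the generic principle.

```python
import numpy as np

def classification_image(stimuli, responses):
    """Generic reverse-correlation estimate. stimuli: (n_trials, H, W)
    noise fields added to a base face; responses: (n_trials,) booleans
    marking trials judged as the target emotion. Assumes both response
    classes occur at least once."""
    stimuli = np.asarray(stimuli, float)
    responses = np.asarray(responses, bool)
    return stimuli[responses].mean(axis=0) - stimuli[~responses].mean(axis=0)
```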

  16. Identity recognition and happy and sad facial expression recall: influence of depressive symptoms.

    PubMed

    Jermann, Françoise; van der Linden, Martial; D'Argembeau, Arnaud

    2008-05-01

    Relatively few studies have examined memory bias for social stimuli in depression or dysphoria. The aim of this study was to investigate the influence of depressive symptoms on memory for facial information. A total of 234 participants completed the Beck Depression Inventory II and a task examining memory for facial identity and expression of happy and sad faces. For both facial identity and expression, the recollective experience was measured with the Remember/Know/Guess procedure (Gardiner & Richardson-Klavehn, 2000). The results show no major association between depressive symptoms and memory for identities. However, dysphoric individuals consciously recalled (Remember responses) more sad facial expressions than non-dysphoric individuals. These findings suggest that sad facial expressions led to more elaborate encoding, and thereby better recollection, in dysphoric individuals.

  17. [Kinematic movement analyses and their application in psychiatry].

    PubMed

    Juckel, Georg; Mergl, Roland; Hegerl, Ulrich

    2005-04-01

    There is a long tradition of developing valid instruments for the exact assessment of psychomotor dysfunctions in psychiatry. However, progress is hampered by the complexity of emotionally driven movements in psychiatric patients. Methods used up to now either remain unspecific, because they rely on merely qualitative measurements, or focus too narrowly on neurophysiological aspects. Thus, the results achieved so far are very general and unspecific with respect to different groups of psychiatric patients. In this paper, two methods of our own are presented that aim to avoid both of these extremes. Kinematic analyses of facial expressions and of handwriting movements provide quantitative and quite specific information about the psychomotor dysfunctions of psychiatric patients and about the effects of psychotropic substances. These methods are therefore well suited to being related to other neurobiological parameters, in order to contribute to the pathophysiological understanding of psychomotor symptoms in psychiatric patients.
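
    As a rough sketch of what such a kinematic analysis computes, the fragment below derives velocity-based descriptors from a tracked landmark trajectory; the sampling rate and the trajectory itself are illustrative assumptions, not the authors' recordings:

    ```python
    import numpy as np

    fs = 100.0                             # assumed sampling rate (Hz)
    t = np.arange(0, 2, 1 / fs)
    x = np.sin(2 * np.pi * 1.5 * t)        # stand-in trajectory (e.g., a mouth corner)

    velocity = np.gradient(x, 1 / fs)      # first derivative of position
    acceleration = np.gradient(velocity, 1 / fs)
    peak_velocity = np.abs(velocity).max()
    # Velocity inversions (zero crossings) index how segmented a movement is.
    inversions = int(np.sum(np.diff(np.sign(velocity)) != 0))
    print(f"peak velocity: {peak_velocity:.2f} units/s, inversions: {inversions}")
    ```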

  18. Deficits in Degraded Facial Affect Labeling in Schizophrenia and Borderline Personality Disorder.

    PubMed

    van Dijke, Annemiek; van 't Wout, Mascha; Ford, Julian D; Aleman, André

    2016-01-01

    Although deficits in facial affect processing have been reported in schizophrenia as well as in borderline personality disorder (BPD), these disorders have not yet been directly compared on facial affect labeling. Using degraded stimuli portraying neutral, angry, fearful and happy facial expressions, we hypothesized more errors in labeling negative facial expressions in patients with schizophrenia compared to healthy controls. Patients with BPD were expected to have difficulty in labeling neutral expressions and to display a bias towards a negative attribution when wrongly labeling neutral faces. Patients with schizophrenia (N = 57) and patients with BPD (N = 30) were compared to patients with somatoform disorder (SoD, a psychiatric control group; N = 25) and healthy control participants (N = 41) on facial affect labeling accuracy and type of misattributions. Patients with schizophrenia showed deficits in labeling angry and fearful expressions compared to the healthy control group and patients with BPD showed deficits in labeling neutral expressions compared to the healthy control group. Schizophrenia and BPD patients did not differ significantly from each other when labeling any of the facial expressions. Compared to SoD patients, schizophrenia patients showed deficits on fearful expressions, but BPD did not significantly differ from SoD patients on any of the facial expressions. With respect to the type of misattributions, BPD patients mistook neutral expressions more often for fearful expressions compared to schizophrenia patients and healthy controls, and less often for happy compared to schizophrenia patients. These findings suggest that although schizophrenia and BPD patients demonstrate different as well as similar facial affect labeling deficits, BPD may be associated with a tendency to detect negative affect in neutral expressions.
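
    Scoring accuracy and misattribution patterns of this kind reduces to bookkeeping over a confusion matrix; the sketch below uses invented counts purely to illustrate the computation, not the study's data:

    ```python
    import numpy as np

    labels = ["neutral", "angry", "fearful", "happy"]
    # confusion[i, j]: number of stimuli of true category i labeled as category j
    confusion = np.array([
        [30,  2, 10,  3],   # neutral stimuli, with fearful misattributions
        [ 4, 35,  5,  1],
        [ 3,  6, 35,  1],
        [ 1,  0,  1, 43],
    ])
    accuracy = np.diag(confusion) / confusion.sum(axis=1)
    # Misattribution profile for neutral faces: where the errors go.
    errors = confusion[0].astype(float)
    errors[0] = 0.0
    misattribution = errors / errors.sum()
    for lab, acc, mis in zip(labels, accuracy, misattribution):
        print(f"{lab}: accuracy {acc:.2f}, share of neutral errors {mis:.2f}")
    ```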

  19. Autism as a developmental disorder in intentional movement and affective engagement

    PubMed Central

    Trevarthen, Colwyn; Delafield-Butt, Jonathan T.

    2013-01-01

    We review evidence that autistic spectrum disorders have their origin in early prenatal failure of development in systems that program timing, serial coordination and prospective control of movements, and that regulate affective evaluations of experiences. There are effects in early infancy, before medical diagnosis, especially in motor sequencing, selective or exploratory attention, affective expression and intersubjective engagement with parents. These are followed by retardation of cognitive development and language learning in the second or third year, which lead to a diagnosis of ASD. The early signs relate to abnormalities that have been found in brain stem systems and cerebellum in the embryo or early fetal stage, before the cerebral neocortex is functional, and they have clear consequences in infancy when neocortical systems are intensively elaborated. Drawing on evidence of disturbances of posture, locomotion and prospective motor control in children with autism, as well as of their facial expressions of interest and affect and their attention to other persons' expressions, we propose that examination of the psychobiology of these motor and affective disorders, rather than of later-developing cognitive or linguistic ones, may facilitate early diagnosis. Research in this area may also explain how intense interaction, imitation or “expressive art” therapies, which respond intimately with motor activities, are effective at later stages. Exceptional talents of some autistic people may be acquired compensations for basic problems with expectant self-regulations of movement, attention and emotion. PMID:23882192

  20. Social exclusion leads to attentional bias to emotional social information: Evidence from eye movement.

    PubMed

    Chen, Zhuohao; Du, Jinchen; Xiang, Min; Zhang, Yan; Zhang, Shuyue

    2017-01-01

    Social exclusion has many effects on individuals, including the increased need to belong and elevated sensitivity to social information. Using a self-reporting method, and an eye-tracking technique, this study explored people's need to belong and attentional bias towards the socio-emotional information (pictures of positive and negative facial expressions compared to those of emotionally-neutral expressions) after experiencing a brief episode of social exclusion. We found that: (1) socially-excluded individuals reported higher negative emotions, lower positive emotions, and stronger need to belong than those who were not socially excluded; (2) compared to a control condition, social exclusion caused a longer response time to probe dots after viewing positive or negative face images; (3) social exclusion resulted in a higher frequency ratio of first attentional fixation on both positive and negative emotional facial pictures (but not on the neutral pictures) than the control condition; (4) in the social exclusion condition, participants showed shorter first fixation latency and longer first fixation duration to positive pictures than neutral ones but this effect was not observed for negative pictures; (5) participants who experienced social exclusion also showed longer gazing duration on the positive pictures than those who did not; although group differences also existed for the negative pictures, the gaze duration bias from both groups showed no difference from chance. This study demonstrated the emotional response to social exclusion as well as characterising multiple eye-movement indicators of attentional bias after experiencing social exclusion.
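
    The eye-movement measures listed above (first-fixation proportion, first-fixation latency and duration, and total gaze duration) can be derived from a fixation log with a few aggregations; the log below and its column layout are invented assumptions:

    ```python
    import pandas as pd

    # Hypothetical fixation log: one row per fixation, ordered within trial.
    fix = pd.DataFrame({
        "trial":       [1, 1, 1, 2, 2, 3, 3, 3],
        "onset_ms":    [180, 420, 900, 210, 650, 170, 500, 950],
        "duration_ms": [240, 300, 260, 280, 310, 220, 330, 270],
        "roi": ["positive", "neutral", "negative", "negative",
                "positive", "positive", "neutral", "positive"],
    })

    first = fix.sort_values(["trial", "onset_ms"]).groupby("trial").head(1)
    # (3) proportion of trials whose first fixation landed on each picture type
    first_fix_ratio = first["roi"].value_counts(normalize=True)
    # (4) first-fixation latency and duration by picture type
    first_metrics = first.groupby("roi")[["onset_ms", "duration_ms"]].mean()
    # (5) total gaze duration per picture type and trial, then averaged
    gaze = (fix.groupby(["trial", "roi"])["duration_ms"].sum()
               .groupby(level="roi").mean())
    print(first_fix_ratio, first_metrics, gaze, sep="\n\n")
    ```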

  2. Developmental changes in emotion recognition from full-light and point-light displays of body movement.

    PubMed

    Ross, Patrick D; Polson, Louise; Grosbras, Marie-Hélène

    2012-01-01

    To date, research on the development of emotion recognition has been dominated by studies on facial expression interpretation; very little is known about children's ability to recognize affective meaning from body movements. In the present study, we acquired simultaneous video and motion capture recordings of two actors portraying four basic emotions (Happiness, Sadness, Fear and Anger). One hundred and seven primary and secondary school children (aged 4-17) and 14 adult volunteers participated in the study. Each participant viewed the full-light and point-light video clips and was asked to make a forced choice as to which emotion was being portrayed. As a group, children performed worse than adults in both point-light and full-light conditions. Linear regression showed that both age and lighting condition were significant predictors of performance in children. Using piecewise regression, we found that the data were best explained by a bilinear model with a steep improvement in performance until 8.5 years of age, followed by a much slower improvement rate through late childhood and adolescence. These findings confirm that, as with facial expression, adolescents' recognition of basic emotions from body language is not fully mature and seems to follow a non-linear development. This is in line with observations of non-linear developmental trajectories for different aspects of the processing of human stimuli (voices and faces), perhaps suggesting a shift from one perceptual or cognitive strategy to another during adolescence. These results have important implications for understanding the maturation of social cognition.
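
    A bilinear (piecewise linear) fit of this kind can be sketched as follows; the simulated data, starting values and parameterization are illustrative assumptions, with only the sample size and the 8.5-year breakpoint taken from the abstract:

    ```python
    import numpy as np
    from scipy import optimize

    def bilinear(age, bp, intercept, slope1, slope2):
        """Two linear segments, continuous at the breakpoint bp."""
        return np.where(age <= bp,
                        intercept + slope1 * age,
                        intercept + slope1 * bp + slope2 * (age - bp))

    rng = np.random.default_rng(0)
    age = rng.uniform(4, 17, 107)     # 107 children, as in the sample above
    acc = bilinear(age, 8.5, 20.0, 6.0, 0.8) + rng.normal(0, 5, age.size)

    # Fit breakpoint and segment slopes jointly by nonlinear least squares.
    params, _ = optimize.curve_fit(bilinear, age, acc, p0=[8.0, 20.0, 5.0, 1.0])
    print(f"estimated breakpoint: {params[0]:.1f} years")
    ```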

  3. Behind the Robot's Smiles and Frowns: In Social Context, People Do Not Mirror Android's Expressions But React to Their Informational Value.

    PubMed

    Hofree, Galit; Ruvolo, Paul; Reinert, Audrey; Bartlett, Marian S; Winkielman, Piotr

    2018-01-01

    Facial actions are key elements of non-verbal behavior. Perceivers' reactions to others' facial expressions often represent a match or mirroring (e.g., they smile in response to a smile). However, the information conveyed by an expression depends on context. Thus, when shown by an opponent, a smile conveys bad news and evokes frowning. The availability of anthropomorphic agents capable of facial actions raises the question of how people respond to such agents in social context. We explored this issue in a study where participants played a strategic game with or against a facially expressive android. Electromyography (EMG) recorded participants' reactions over the zygomaticus muscle (smiling) and corrugator muscle (frowning). We found that participants' facial responses to the android's expressions reflect their informational value, rather than a direct match. Overall, participants smiled more, and frowned less, when winning than losing. Critically, participants' responses to the game outcome were similar regardless of whether it was conveyed via the android's smile or frown. Furthermore, the outcome had greater impact on people's facial reactions when it was conveyed through the android's face than through a computer screen. These findings demonstrate that facial actions of artificial agents impact human facial responding. They also suggest a sophistication in human-robot communication that highlights the signaling value of facial expressions.

  4. Facial Expression Enhances Emotion Perception Compared to Vocal Prosody: Behavioral and fMRI Studies.

    PubMed

    Zhang, Heming; Chen, Xuhai; Chen, Shengdong; Li, Yansong; Chen, Changming; Long, Quanshan; Yuan, Jiajin

    2018-05-09

    Facial and vocal expressions are essential modalities mediating the perception of emotion and social communication. Nonetheless, currently little is known about how emotion perception and its neural substrates differ across facial expression and vocal prosody. To clarify this issue, functional MRI scans were acquired in Study 1, in which participants were asked to discriminate the valence of emotional expression (angry, happy or neutral) from facial, vocal, or bimodal stimuli. In Study 2, we used an affective priming task (unimodal materials as primes and bimodal materials as targets) and participants were asked to rate the intensity, valence, and arousal of the targets. Study 1 showed higher accuracy and shorter response latencies in the facial than in the vocal modality for a happy expression. Whole-brain analysis showed enhanced activation during facial compared to vocal emotions in the inferior temporal-occipital regions. Region of interest analysis showed a higher percentage signal change for facial than for vocal anger in the superior temporal sulcus. Study 2 showed that facial relative to vocal priming of anger had a greater influence on perceived emotion for bimodal targets, irrespective of the target valence. These findings suggest that facial expression is associated with enhanced emotion perception compared to equivalent vocal prosodies.

  5. Developing an eBook-Integrated High-Fidelity Mobile App Prototype for Promoting Child Motor Skills and Taxonomically Assessing Children’s Emotional Responses Using Face and Sound Topology

    PubMed Central

    Brown, William; Liu, Connie; John, Rita Marie; Ford, Phoebe

    2014-01-01

    Developing gross and fine motor skills and expressing complex emotion is critical for child development. We introduce “StorySense”, an eBook-integrated mobile app prototype that can sense face and sound topologies and identify movement and expression to promote children’s motor skills and emotional development. Currently, most interactive eBooks on mobile devices only leverage “low-motor” interaction (i.e. tapping or swiping). Our app senses a greater breadth of motion (e.g. clapping, snapping, and face tracking), and dynamically alters the storyline according to physical responses in ways that encourage the performance of predetermined motor skills ideal for a child’s gross and fine motor development. In addition, our app can capture changes in facial topology, which can later be mapped using the Facial Action Coding System (FACS) for interpretation of emotion. StorySense expands the human-computer interaction vocabulary for mobile devices. Potential clinical applications include child development, physical therapy, and autism. PMID:25954336

  6. The clinical effects of music therapy in palliative medicine.

    PubMed

    Gallagher, Lisa M; Lagman, Ruth; Walsh, Declan; Davis, Mellar P; Legrand, Susan B

    2006-08-01

    This study was designed to objectively assess the effect of music therapy on patients with advanced disease. Two hundred patients with chronic and/or advanced illnesses were prospectively evaluated, and the effects of music therapy on these patients are reported. Visual analog scales, the Happy/Sad Faces Assessment Tool, and a behavior scale recorded pre- and post-music therapy scores on standardized data collection forms. A computerized database was used to collect and analyze the data. Analyses using the Wilcoxon signed rank test and a paired t test showed that music therapy improved anxiety, body movement, facial expression, mood, pain, shortness of breath, and verbalizations. Sessions with family members were also evaluated; music therapy improved families' facial expressions, mood, and verbalizations. All improvements were statistically significant (P<0.001). Most patients and families had a positive subjective and objective response to music therapy. Objective data were obtained for a large number of patients with advanced disease, a significant addition to the quantitative literature on music therapy in this unique patient population. Our results suggest that music therapy is invaluable in palliative medicine.
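
    For readers unfamiliar with the two paired tests mentioned, the sketch below runs both on hypothetical pre/post scores; the numbers are invented and do not reproduce the study's data:

    ```python
    import numpy as np
    from scipy import stats

    # Hypothetical pre/post visual-analog anxiety scores (0-10) per patient.
    pre  = np.array([7.2, 6.5, 8.0, 5.5, 6.8, 7.9, 6.1, 7.4])
    post = np.array([5.1, 5.9, 6.2, 4.8, 5.0, 6.3, 5.5, 5.8])

    # The paired t test assumes roughly normal pre-post differences ...
    t_stat, t_p = stats.ttest_rel(pre, post)
    # ... while the Wilcoxon signed rank test drops that assumption.
    w_stat, w_p = stats.wilcoxon(pre, post)
    print(f"paired t: t={t_stat:.2f}, p={t_p:.4f}; Wilcoxon: W={w_stat}, p={w_p:.4f}")
    ```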

  7. Sex differences in facial emotion recognition across varying expression intensity levels from videos.

    PubMed

    Wingenbach, Tanja S H; Ashwin, Chris; Brosnan, Mark

    2018-01-01

    There has been much research on sex differences in the ability to recognise facial expressions of emotions, with results generally showing a female advantage in reading emotional expressions from the face. However, most of the research to date has used static images and/or 'extreme' examples of facial expressions. Therefore, little is known about how expression intensity and dynamic stimuli might affect the commonly reported female advantage in facial emotion recognition. The current study investigated sex differences in accuracy of response (Hu; unbiased hit rates) and response latencies for emotion recognition using short video stimuli (1 s) of 10 different facial emotion expressions (anger, disgust, fear, sadness, surprise, happiness, contempt, pride, embarrassment, neutral) across three variations in the intensity of the emotional expression (low, intermediate, high) in an adolescent and adult sample (N = 111; 51 male, 60 female) aged between 16 and 45 (M = 22.2, SD = 5.7). Overall, females showed more accurate facial emotion recognition compared to males and were faster in correctly recognising facial emotions. The female advantage in reading expressions from the faces of others was unaffected by expression intensity levels and emotion categories used in the study. The effects were specific to recognition of emotions, as males and females did not differ in the recognition of neutral faces. Together, the results showed a robust sex difference favouring females in facial emotion recognition using video stimuli of a wide range of emotions and expression intensity variations.
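
    For reference, the unbiased hit rate Hu mentioned above is standardly computed following Wagner (1993) as the product of a category's hit rate and the precision of the corresponding response label; the general formula (not quoted from this study) is

    ```latex
    H_u = \frac{A}{B} \cdot \frac{A}{C} = \frac{A^{2}}{B\,C}
    ```

    where A is the number of correct identifications of a given emotion, B the number of stimuli of that emotion presented, and C the total number of times that emotion label was given as a response.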

  9. Emotional facial expressions during REM sleep dreams.

    PubMed

    Rivera-García, Ana P; López Ruiz, Irma E; Ramírez-Salado, Ignacio; González-Olvera, Jorge J; Ayala-Guerrero, Fructuoso; Jiménez-Anguiano, Anabel

    2018-06-04

    Although motor activity is actively inhibited during rapid eye movement (REM) sleep, specific activations of the facial mimetic musculature have been observed during this stage, which may be associated with greater emotional dream mentation. Nevertheless, no specific biomarker of emotional valence or arousal related to dream content has been identified to date. In order to explore the electromyographic (EMG) activity (voltage, number, density and duration) of the corrugator and zygomaticus major muscles during REM sleep and its association with emotional dream mentation, this study performed a series of experimental awakenings after observing EMG facial activations during REM sleep. The study was performed with 12 healthy female participants using an 8-hr nighttime sleep recording. Emotional tone was evaluated by five blinded judges and final valence and intensity scores were obtained. Emotions were mentioned in 80.4% of dream reports. The voltage, number, density and duration of facial muscle contractions were greater for the corrugator muscle than for the zygomaticus muscle, whereas high positive emotions predicted the number (R² = 0.601, p = 0.0001) and voltage (R² = 0.332, p = 0.005) of the zygomaticus. Our findings suggest that zygomaticus events were predictive of the experience of positive affect during REM sleep in healthy women. © 2018 European Sleep Research Society.

  10. Not on the Face Alone: Perception of Contextualized Face Expressions in Huntington's Disease

    ERIC Educational Resources Information Center

    Aviezer, Hillel; Bentin, Shlomo; Hassin, Ran R.; Meschino, Wendy S.; Kennedy, Jeanne; Grewal, Sonya; Esmail, Sherali; Cohen, Sharon; Moscovitch, Morris

    2009-01-01

    Numerous studies have demonstrated that Huntington's disease mutation-carriers have deficient explicit recognition of isolated facial expressions. There are no studies, however, which have investigated the recognition of facial expressions embedded within an emotional body and scene context. Real life facial expressions are typically embedded in…

  11. Botulinum toxin and the facial feedback hypothesis: can looking better make you feel happier?

    PubMed

    Alam, Murad; Barrett, Karen C; Hodapp, Robert M; Arndt, Kenneth A

    2008-06-01

    The facial feedback hypothesis suggests that muscular manipulations which result in more positive facial expressions may lead to more positive emotional states in affected individuals. In this essay, we hypothesize that the injection of botulinum toxin for upper face dynamic creases might induce positive emotional states by reducing the ability to frown and create other negative facial expressions. The use of botulinum toxin to pharmacologically alter upper face muscular expressiveness may curtail the appearance of negative emotions, most notably anger, but also fear and sadness. This occurs via the relaxation of the corrugator supercilii and the procerus, which are responsible for brow furrowing, and to a lesser extent, because of the relaxation of the frontalis. Concurrently, botulinum toxin may dampen some positive expressions like the true smile, which requires activity of the orbicularis oculi, a muscle also relaxed after toxin injections. On balance, the evidence suggests that botulinum toxin injections for upper face dynamic creases may reduce negative facial expressions more than they reduce positive facial expressions. Based on the facial feedback hypothesis, this net change in facial expression may potentially have the secondary effect of reducing the internal experience of negative emotions, thus making patients feel less angry, sad, and fearful.

  12. Recognition, Expression, and Understanding Facial Expressions of Emotion in Adolescents with Nonverbal and General Learning Disabilities

    ERIC Educational Resources Information Center

    Bloom, Elana; Heath, Nancy

    2010-01-01

    Children with nonverbal learning disabilities (NVLD) have been found to be worse at recognizing facial expressions than children with verbal learning disabilities (LD) and without LD. However, little research has been done with adolescents. In addition, expressing and understanding facial expressions is yet to be studied among adolescents with LD…

  13. Emotion understanding in postinstitutionalized Eastern European children

    PubMed Central

    WISMER FRIES, ALISON B.; POLLAK, SETH D.

    2005-01-01

    To examine the effects of early emotional neglect on children’s affective development, we assessed children who had experienced institutionalized care prior to adoption into family environments. One task required children to identify photographs of facial expressions of emotion. A second task required children to match facial expressions to an emotional situation. Internationally adopted, postinstitutionalized children had difficulty identifying facial expressions of emotion. In addition, postinstitutionalized children had significant difficulty matching appropriate facial expressions to happy, sad, and fearful scenarios. However, postinstitutionalized children performed as well as comparison children when asked to identify and match angry facial expressions. These results are discussed in terms of the importance of emotional input early in life on later developmental organization. PMID:15487600

  14. Empathy, but not mimicry restriction, influences the recognition of change in emotional facial expressions.

    PubMed

    Kosonogov, Vladimir; Titova, Alisa; Vorobyeva, Elena

    2015-01-01

    The current study addressed the hypothesis that empathy and the restriction of observers' facial muscles can influence the recognition of emotional facial expressions. A sample of 74 participants recognized the subjective onset of emotional facial expressions (anger, disgust, fear, happiness, sadness, surprise, and neutral) in a series of morphed face photographs showing a gradual change (frame by frame) from one expression to another. The high-empathy participants (as measured by the Empathy Quotient) recognized emotional facial expressions at earlier photographs in the series than did low-empathy ones, but there was no difference in exploration time. Restriction of the observers' facial muscles (with plasters and a stick held in the mouth) did not influence the responses. We discuss these findings in the context of the embodied simulation theory and previous data on empathy.

  15. The Child Affective Facial Expression (CAFE) set: validity and reliability from untrained adults

    PubMed Central

    LoBue, Vanessa; Thrasher, Cat

    2014-01-01

    Emotional development is one of the largest and most productive areas of psychological research. For decades, researchers have been fascinated by how humans respond to, detect, and interpret emotional facial expressions. Much of the research in this area has relied on controlled stimulus sets of adults posing various facial expressions. Here we introduce a new stimulus set of emotional facial expressions into the domain of research on emotional development—The Child Affective Facial Expression set (CAFE). The CAFE set features photographs of a racially and ethnically diverse group of 2- to 8-year-old children posing for six emotional facial expressions—angry, fearful, sad, happy, surprised, and disgusted—and a neutral face. In the current work, we describe the set and report validity and reliability data on the set from 100 untrained adult participants. PMID:25610415

  16. Discrimination of gender using facial image with expression change

    NASA Astrophysics Data System (ADS)

    Kuniyada, Jun; Fukuda, Takahiro; Terada, Kenji

    2005-12-01

    Through marketing research, managers of large department stores and small convenience stores obtain information such as the ratio of male to female visitors and their age groups, and use it to improve their management plans. However, this work is carried out manually and places a heavy burden on small stores. In this paper, the authors propose a method for discriminating gender by extracting differences in facial expression change from color facial images. Many methods for automatic recognition of individuals from moving or still facial images already exist in the field of image processing. However, gender is difficult to discriminate under the influence of hairstyle, clothing, and similar factors. We therefore propose a method that attends to expression change and is thus unaffected by individual characteristics such as the size and position of facial parts. The method requires two facial images: one expressive and one expressionless. First, the facial region and the regions of facial parts such as the eyes, nose, and mouth are extracted from the facial image using hue and saturation information in the HSV color system together with emphasized edge information. Next, features are extracted by calculating the rate of change of each facial part produced by the expression change. Finally, the feature values of the input are compared against a database and the gender is discriminated. Experiments with laughing and smiling expressions yielded good gender discrimination results.
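
    A minimal sketch of the first extraction step described above, thresholding hue and saturation in HSV space and computing an emphasized edge map, might look as follows; the threshold values are illustrative assumptions, not the paper's calibration:

    ```python
    import cv2
    import numpy as np

    def face_and_edges(bgr_image):
        """Skin-colored region via HSV thresholds, plus an edge map that
        could then localize the eyes, nose and mouth as the abstract outlines."""
        hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
        lower = np.array([0, 40, 60], dtype=np.uint8)     # illustrative bounds
        upper = np.array([25, 180, 255], dtype=np.uint8)
        skin_mask = cv2.inRange(hsv, lower, upper)
        gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(gray, 80, 160)
        return skin_mask, edges
    ```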

  17. Anodal tDCS targeting the right orbitofrontal cortex enhances facial expression recognition

    PubMed Central

    Murphy, Jillian M.; Ridley, Nicole J.; Vercammen, Ans

    2015-01-01

    The orbitofrontal cortex (OFC) has been implicated in the capacity to accurately recognise facial expressions. The aim of the current study was to determine if anodal transcranial direct current stimulation (tDCS) targeting the right OFC in healthy adults would enhance facial expression recognition, compared with a sham condition. Across two counterbalanced sessions of tDCS (i.e. anodal and sham), 20 undergraduate participants (18 female) completed a facial expression labelling task comprising angry, disgusted, fearful, happy, sad and neutral expressions, and a control (social judgement) task comprising the same expressions. Responses on the labelling task were scored for accuracy, median reaction time and overall efficiency (i.e. combined accuracy and reaction time). Anodal tDCS targeting the right OFC enhanced facial expression recognition, reflected in greater efficiency and speed of recognition across emotions, relative to the sham condition. In contrast, there was no effect of tDCS to responses on the control task. This is the first study to demonstrate that anodal tDCS targeting the right OFC boosts facial expression recognition. This finding provides a solid foundation for future research to examine the efficacy of this technique as a means to treat facial expression recognition deficits, particularly in individuals with OFC damage or dysfunction. PMID:25971602

  18. Incongruence Between Observers’ and Observed Facial Muscle Activation Reduces Recognition of Emotional Facial Expressions From Video Stimuli

    PubMed Central

    Wingenbach, Tanja S. H.; Brosnan, Mark; Pfaltz, Monique C.; Plichta, Michael M.; Ashwin, Chris

    2018-01-01

    According to embodied cognition accounts, viewing others’ facial emotion can elicit the respective emotion representation in observers which entails simulations of sensory, motor, and contextual experiences. In line with that, published research found viewing others’ facial emotion to elicit automatic matched facial muscle activation, which was further found to facilitate emotion recognition. Perhaps making congruent facial muscle activity explicit produces an even greater recognition advantage. If there is conflicting sensory information, i.e., incongruent facial muscle activity, this might impede recognition. The effects of actively manipulating facial muscle activity on facial emotion recognition from videos were investigated across three experimental conditions: (a) explicit imitation of viewed facial emotional expressions (stimulus-congruent condition), (b) pen-holding with the lips (stimulus-incongruent condition), and (c) passive viewing (control condition). It was hypothesised that (1) experimental condition (a) and (b) result in greater facial muscle activity than (c), (2) experimental condition (a) increases emotion recognition accuracy from others’ faces compared to (c), (3) experimental condition (b) lowers recognition accuracy for expressions with a salient facial feature in the lower, but not the upper face area, compared to (c). Participants (42 males, 42 females) underwent a facial emotion recognition experiment (ADFES-BIV) while electromyography (EMG) was recorded from five facial muscle sites. The experimental conditions’ order was counter-balanced. Pen-holding caused stimulus-incongruent facial muscle activity for expressions with facial feature saliency in the lower face region, which reduced recognition of lower face region emotions. Explicit imitation caused stimulus-congruent facial muscle activity without modulating recognition. Methodological implications are discussed. PMID:29928240

  20. Deliberately generated and imitated facial expressions of emotions in people with eating disorders.

    PubMed

    Dapelo, Marcela Marin; Bodas, Sergio; Morris, Robin; Tchanturia, Kate

    2016-02-01

    People with eating disorders have difficulties in socio-emotional functioning that could contribute to maintaining the functional consequences of the disorder. This study aimed to explore the ability to deliberately generate (i.e., pose) and imitate facial expressions of emotions in women with anorexia (AN) and bulimia nervosa (BN), compared to healthy controls (HC). One hundred and three participants (36 AN, 25 BN, and 42 HC) were asked to pose and imitate facial expressions of anger, disgust, fear, happiness, and sadness. Their facial expressions were recorded and coded. Participants with eating disorders (both AN and BN) were less accurate than HC when posing facial expressions of emotions. Participants with AN were less accurate compared to HC when imitating facial expressions, whilst BN participants had a middle-range performance. All results remained significant after controlling for anxiety, depression and autistic features. A limitation is the relatively small number of BN participants recruited for this study. The study findings suggest that people with eating disorders, particularly those with AN, have difficulties posing and imitating facial expressions of emotions. These difficulties could have an impact on social communication and social functioning. This is the first study to investigate the ability to pose and imitate facial expressions of emotions in people with eating disorders, and the findings suggest this area should be further explored in future studies. Copyright © 2015. Published by Elsevier B.V.

  1. Dissociable roles of internal feelings and face recognition ability in facial expression decoding.

    PubMed

    Zhang, Lin; Song, Yiying; Liu, Ling; Liu, Jia

    2016-05-15

    The problem of emotion recognition has been tackled by researchers in both affective computing and cognitive neuroscience. While affective computing relies on analyzing visual features from facial expressions, it has been proposed that humans recognize emotions by internally simulating the emotional states conveyed by others' expressions, in addition to perceptual analysis of facial features. Here we investigated whether and how our internal feelings contributed to the ability to decode facial expressions. In two independent large samples of participants, we observed that individuals who generally experienced richer internal feelings exhibited a higher ability to decode facial expressions, and the contribution of internal feelings was independent of face recognition ability. Further, using voxel-based morphometry, we found that the gray matter volume (GMV) of bilateral superior temporal sulcus (STS) and the right inferior parietal lobule was associated with facial expression decoding through the mediating effect of internal feelings, while the GMV of bilateral STS, precuneus, and the right central opercular cortex contributed to facial expression decoding through the mediating effect of face recognition ability. In addition, the clusters in bilateral STS involved in the two components were neighboring yet separate. Our results may provide clues about the mechanism by which internal feelings, in addition to face recognition ability, serve as an important instrument for humans in facial expression decoding. Copyright © 2016 Elsevier Inc. All rights reserved.
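
    The mediation logic invoked here (gray matter volume influencing decoding ability through internal feelings) is commonly quantified as an indirect effect with a bootstrap confidence interval; the sketch below is a generic simple-mediation example on simulated data, not the authors' pipeline:

    ```python
    import numpy as np

    def indirect_effect(x, m, y):
        """a*b indirect effect: a from m ~ x; b from y ~ m controlling for x."""
        a = np.polyfit(x, m, 1)[0]
        design = np.column_stack([np.ones_like(x), x, m])
        b = np.linalg.lstsq(design, y, rcond=None)[0][2]
        return a * b

    rng = np.random.default_rng(1)
    n = 200
    x = rng.normal(size=n)             # e.g., gray matter volume in an ROI
    m = 0.5 * x + rng.normal(size=n)   # e.g., internal-feeling scores
    y = 0.6 * m + rng.normal(size=n)   # e.g., expression-decoding ability

    boot = []                          # percentile bootstrap for the a*b effect
    for _ in range(2000):
        idx = rng.integers(0, n, n)
        boot.append(indirect_effect(x[idx], m[idx], y[idx]))
    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"indirect effect 95% CI: [{lo:.3f}, {hi:.3f}]")
    ```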

  2. Facioscapulohumeral muscular dystrophy

    MedlinePlus

    ... due to weakness of the cheek muscles
    Decreased facial expression due to weakness of facial muscles
    Depressed or angry facial expression
    Difficulty pronouncing words
    Difficulty reaching above the shoulder ...

  3. Facial Emotion Recognition and Expression in Parkinson's Disease: An Emotional Mirror Mechanism?

    PubMed

    Ricciardi, Lucia; Visco-Comandini, Federica; Erro, Roberto; Morgante, Francesca; Bologna, Matteo; Fasano, Alfonso; Ricciardi, Diego; Edwards, Mark J; Kilner, James

    2017-01-01

    Parkinson's disease (PD) patients have impairment of facial expressivity (hypomimia) and difficulties in interpreting the emotional facial expressions produced by others, especially for aversive emotions. We aimed to evaluate the ability to produce facial emotional expressions and to recognize facial emotional expressions produced by others in a group of PD patients and a group of healthy participants, in order to explore the relationship between these two abilities and any differences between the two groups of participants. Twenty non-demented, non-depressed PD patients and twenty healthy participants (HC) matched for demographic characteristics were studied. The ability to recognize emotional facial expressions was assessed with the Ekman 60-faces test (Emotion recognition task). Participants were video-recorded while posing facial expressions of 6 primary emotions (happiness, sadness, surprise, disgust, fear and anger). The most expressive pictures for each emotion were derived from the videos. Ten healthy raters were asked to look at the pictures displayed on a computer screen in pseudo-random fashion and to identify the emotional label in a six-alternative forced-choice response format (Emotion expressivity task). Reaction time (RT) and accuracy of responses were recorded. At the end of each trial the participant was asked to rate his/her confidence in the perceived accuracy of the response. For emotion recognition, PD patients scored lower than HC on the Ekman total score (p<0.001) and on the single-emotion sub-scores for happiness, fear, anger, sadness (p<0.01) and surprise (p = 0.02). In the facial emotion expressivity task, PD and HC significantly differed in the total score (p = 0.05) and in the sub-scores for happiness, sadness, and anger (all p<0.001). RT and the level of confidence showed significant differences between PD and HC for the same emotions. There was a significant positive correlation between emotion facial recognition and expressivity in both groups; the correlation was even stronger when ranking emotions from the best recognized to the worst (R = 0.75, p = 0.004). PD patients showed difficulties in recognizing emotional facial expressions produced by others and in posing facial emotional expressions compared to healthy subjects. The linear correlation between recognition and expression in both experimental groups suggests that the two mechanisms share a common system, which may be degraded in patients with PD. These results open new clinical and rehabilitation perspectives.

  4. The Relationships between Processing Facial Identity, Emotional Expression, Facial Speech, and Gaze Direction during Development

    ERIC Educational Resources Information Center

    Spangler, Sibylle M.; Schwarzer, Gudrun; Korell, Monika; Maier-Karius, Johanna

    2010-01-01

    Four experiments were conducted with 5- to 11-year-olds and adults to investigate whether facial identity, facial speech, emotional expression, and gaze direction are processed independently of or in interaction with one another. In a computer-based, speeded sorting task, participants sorted faces according to facial identity while disregarding…

  5. Rapid Facial Reactions to Emotional Facial Expressions in Typically Developing Children and Children with Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Beall, Paula M.; Moody, Eric J.; McIntosh, Daniel N.; Hepburn, Susan L.; Reed, Catherine L.

    2008-01-01

    Typical adults mimic facial expressions within 1000ms, but adults with autism spectrum disorder (ASD) do not. These rapid facial reactions (RFRs) are associated with the development of social-emotional abilities. Such interpersonal matching may be caused by motor mirroring or emotional responses. Using facial electromyography (EMG), this study…

  6. Abnormalities in whisking behaviour are associated with lesions in brain stem nuclei in a mouse model of amyotrophic lateral sclerosis.

    PubMed

    Grant, Robyn A; Sharp, Paul S; Kennerley, Aneurin J; Berwick, Jason; Grierson, Andrew; Ramesh, Tennore; Prescott, Tony J

    2014-02-01

    The transgenic SOD1(G93A) mouse is a model of human amyotrophic lateral sclerosis (ALS) and recapitulates many of the pathological hallmarks observed in humans, including motor neuron degeneration in the brain and the spinal cord. In mice, neurodegeneration particularly impacts on the facial nuclei in the brainstem. Motor neurons innervating the whisker pad muscles originate in the facial nucleus of the brain stem, with contractions of these muscles giving rise to "whisking", one of the fastest movements performed by mammals. A longitudinal study was conducted on SOD1(G93A) mice and wild-type littermate controls, comparing: (i) whisker movements using high-speed video recordings and automated whisker tracking, and (ii) facial nucleus degeneration using MRI. Results indicate that while whisking still occurs in SOD1(G93A) mice and is relatively resistant to neurodegeneration, there are significant disruptions to certain whisking behaviours, which correlate with facial nuclei lesions and may be a result of specific facial muscle degeneration. We propose that measures of mouse whisker movement could potentially be used in tandem with measures of limb dysfunction as biomarkers of disease onset and progression in ALS mice, offering a novel method for testing the efficacy of novel therapeutic compounds. Copyright © 2013 Elsevier B.V. All rights reserved.

  7. Facial expression recognition based on improved deep belief networks

    NASA Astrophysics Data System (ADS)

    Wu, Yao; Qiu, Weigen

    2017-08-01

    In order to improve the robustness of facial expression recognition, a method of facial expression recognition based on Local Binary Patterns (LBP) combined with improved deep belief networks (DBNs) is proposed. The method uses LBP to extract features from face images and then uses the improved deep belief networks as the detector and classifier operating on those LBP features, realizing the combination of LBP and improved deep belief networks for facial expression recognition. On the JAFFE (Japanese Female Facial Expression) database, the recognition rate is significantly improved.
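
    The LBP front end of such a pipeline can be sketched as below (the DBN detector/classifier stage is omitted); the patch grid, radius and neighbor count are common defaults assumed here, not parameters reported by the paper:

    ```python
    import numpy as np
    from skimage.feature import local_binary_pattern

    def lbp_feature(gray_face, P=8, R=1, grid=(7, 7)):
        """Concatenated uniform-LBP histograms over a grid of face patches,
        a common front end for expression classifiers."""
        lbp = local_binary_pattern(gray_face, P, R, method="uniform")
        n_bins = P + 2                  # uniform codes 0..P plus "non-uniform"
        h, w = lbp.shape
        feats = []
        for i in range(grid[0]):
            for j in range(grid[1]):
                patch = lbp[i * h // grid[0]:(i + 1) * h // grid[0],
                            j * w // grid[1]:(j + 1) * w // grid[1]]
                hist, _ = np.histogram(patch, bins=n_bins,
                                       range=(0, n_bins), density=True)
                feats.append(hist)
        return np.concatenate(feats)    # would be fed to the DBN classifier

    face = np.random.rand(112, 112)     # stand-in for a grayscale face crop
    print(lbp_feature(face).shape)      # (7 * 7 * 10,) = (490,)
    ```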

  8. Patterns of Emotion Experiences as Predictors of Facial Expressions of Emotion.

    ERIC Educational Resources Information Center

    Blumberg, Samuel H.; Izard, Carroll E.

    1991-01-01

    Examined the relations between emotion and facial expressions of emotion in 8- to 12-year-old male psychiatric patients. Results indicated that patterns or combinations of emotion experiences had an impact on facial expressions of emotion. (Author/BB)

  9. Neuropsychological Studies of Linguistic and Affective Facial Expressions in Deaf Signers.

    ERIC Educational Resources Information Center

    Corina, David P.; Bellugi, Ursula; Reilly, Judy

    1999-01-01

    Presents two studies that explore facial expression production in deaf signers. An experimental paradigm uses chimeric stimuli of American Sign Language linguistic and facial expressions to explore patterns of productive asymmetries in brain-intact signers. (Author/VWL)

  10. The influence of communicative relations on facial responses to pain: Does it matter who is watching?

    PubMed Central

    Karmann, Anna J; Lautenbacher, Stefan; Bauer, Florian; Kunz, Miriam

    2014-01-01

    BACKGROUND: Facial responses to pain are believed to be an act of communication and, as such, are likely to be affected by the relationship between sender and receiver. OBJECTIVES: To investigate this effect by examining the impact that variations in communicative relations (from being alone to being with an intimate other) have on the elements of the facial language used to communicate pain (types of facial responses), and on the degree of facial expressiveness. METHODS: Facial responses of 126 healthy participants to phasic heat pain were assessed in three different social situations: alone, but aware of video recording; in the presence of an experimenter; and in the presence of an intimate other. Furthermore, pain catastrophizing and sex (of participant and experimenter) were considered as additional influences. RESULTS: Whereas similar types of facial responses were elicited independent of the relationship between sender and observer, the degree of facial expressiveness varied significantly, with increased expressiveness occurring in the presence of the partner. Interestingly, being with an experimenter decreased facial expressiveness only in women. Pain catastrophizing and the sex of the experimenter exhibited no substantial influence on facial responses. CONCLUSION: Variations in communicative relations had no effect on the elements of the facial pain language. The degree of facial expressiveness, however, was adapted to the relationship between sender and observer. Individuals suppressed their facial communication of pain toward unfamiliar persons, whereas they overtly displayed it in the presence of an intimate other. Furthermore, when confronted with an unfamiliar person, different situational demands appeared to apply for both sexes. PMID:24432350

  11. Functional integration of the posterior superior temporal sulcus correlates with facial expression recognition.

    PubMed

    Wang, Xu; Song, Yiying; Zhen, Zonglei; Liu, Jia

    2016-05-01

    Face perception is essential for daily and social activities. Neuroimaging studies have revealed a distributed face network (FN) consisting of multiple regions that exhibit preferential responses to invariant or changeable facial information. However, our understanding about how these regions work collaboratively to facilitate facial information processing is limited. Here, we focused on changeable facial information processing, and investigated how the functional integration of the FN is related to the performance of facial expression recognition. To do so, we first defined the FN as voxels that responded more strongly to faces than objects, and then used a voxel-based global brain connectivity method based on resting-state fMRI to characterize the within-network connectivity (WNC) of each voxel in the FN. By relating the WNC and performance in the "Reading the Mind in the Eyes" Test across participants, we found that individuals with stronger WNC in the right posterior superior temporal sulcus (rpSTS) were better at recognizing facial expressions. Further, the resting-state functional connectivity (FC) between the rpSTS and right occipital face area (rOFA), early visual cortex (EVC), and bilateral STS were positively correlated with the ability of facial expression recognition, and the FCs of EVC-pSTS and OFA-pSTS contributed independently to facial expression recognition. In short, our study highlights the behavioral significance of intrinsic functional integration of the FN in facial expression processing, and provides evidence for the hub-like role of the rpSTS for facial expression recognition. Hum Brain Mapp 37:1930-1940, 2016. © 2016 Wiley Periodicals, Inc.
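
    The within-network connectivity (WNC) measure described above is, in one common formulation, each voxel's mean correlation with every other voxel of the network; the sketch below implements that definition and is not claimed to be the authors' exact pipeline:

    ```python
    import numpy as np

    def within_network_connectivity(ts):
        """ts: (n_timepoints, n_voxels) resting-state series for face-network
        voxels; returns each voxel's mean Pearson correlation with the rest."""
        z = (ts - ts.mean(axis=0)) / ts.std(axis=0)
        r = (z.T @ z) / ts.shape[0]      # voxel-by-voxel correlation matrix
        np.fill_diagonal(r, np.nan)      # exclude self-correlations
        return np.nanmean(r, axis=1)

    # Example with simulated data standing in for preprocessed fMRI series.
    ts = np.random.default_rng(2).normal(size=(180, 500))
    print(within_network_connectivity(ts).shape)   # one WNC value per voxel
    ```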

  12. Impact of visual learning on facial expressions of physical distress: a study on voluntary and evoked expressions of pain in congenitally blind and sighted individuals.

    PubMed

    Kunz, Miriam; Faltermeier, Nicole; Lautenbacher, Stefan

    2012-02-01

    The ability to facially communicate physical distress (e.g. pain) can be essential to ensure help, support and clinical treatment for the individual experiencing physical distress. So far, it is not known to what degree this ability represents innate and biologically prepared programs or whether it requires visual learning. Here, we address this question by studying evoked and voluntary facial expressions of pain in congenitally blind (N=21) and sighted (N=42) individuals. The repertoire of evoked facial expressions was comparable in congenitally blind and sighted individuals; however, blind individuals were less capable of facially encoding different intensities of experimental pain. Moreover, blind individuals were less capable of voluntarily modulating their pain expression. We conclude that the repertoire of facial muscles being activated during pain is biologically prepared. However, visual learning is a prerequisite for encoding different intensities of physical distress as well as for up- and down-regulation of one's facial expression. Copyright © 2011 Elsevier B.V. All rights reserved.

  13. Intact Rapid Facial Mimicry as well as Generally Reduced Mimic Responses in Stable Schizophrenia Patients

    PubMed Central

    Chechko, Natalya; Pagel, Alena; Otte, Ellen; Koch, Iring; Habel, Ute

    2016-01-01

    Spontaneous emotional expressions (rapid facial mimicry) perform both emotional and social functions. In the current study, we sought to test whether automatic mimic responses to emotional facial expressions are deficient in 15 patients with stable schizophrenia compared to 15 controls. In a perception-action interference paradigm (the Simon task; first experiment), and in the context of a dual-task paradigm (second experiment), the task-relevant stimulus feature was the gender of a face, which, however, displayed a smiling or frowning expression (task-irrelevant stimulus feature). We measured the electromyographical activity in the corrugator supercilii and zygomaticus major muscle regions in response to either compatible or incompatible stimuli (i.e., when the required response did or did not correspond to the depicted facial expression). The compatibility effect based on interactions between the implicit processing of a task-irrelevant emotional facial expression and the conscious production of an emotional facial expression did not differ between the groups. In stable patients, in spite of a generally reduced mimic reaction, we observed an intact capacity to respond spontaneously to facial emotional stimuli. PMID:27303335

  14. Seeing Emotion with Your Ears: Emotional Prosody Implicitly Guides Visual Attention to Faces

    PubMed Central

    Rigoulot, Simon; Pell, Marc D.

    2012-01-01

    Interpersonal communication involves the processing of multimodal emotional cues, particularly facial expressions (visual modality) and emotional speech prosody (auditory modality) which can interact during information processing. Here, we investigated whether the implicit processing of emotional prosody systematically influences gaze behavior to facial expressions of emotion. We analyzed the eye movements of 31 participants as they scanned a visual array of four emotional faces portraying fear, anger, happiness, and neutrality, while listening to an emotionally-inflected pseudo-utterance (Someone migged the pazing) uttered in a congruent or incongruent tone. Participants heard the emotional utterance during the first 1250 milliseconds of a five-second visual array and then performed an immediate recall decision about the face they had just seen. The frequency and duration of first saccades and of total looks in three temporal windows ([0–1250 ms], [1250–2500 ms], [2500–5000 ms]) were analyzed according to the emotional content of faces and voices. Results showed that participants looked longer and more frequently at faces that matched the prosody in all three time windows (emotion congruency effect), although this effect was often emotion-specific (with greatest effects for fear). Effects of prosody on visual attention to faces persisted over time and could be detected long after the auditory information was no longer present. These data imply that emotional prosody is processed automatically during communication and that these cues play a critical role in how humans respond to related visual cues in the environment, such as facial expressions. PMID:22303454

  15. Facial expressions of emotion and the course of conjugal bereavement.

    PubMed

    Bonanno, G A; Keltner, D

    1997-02-01

    The common assumption that emotional expression mediates the course of bereavement is tested. Competing hypotheses about the direction of mediation were formulated from the grief work and social-functional accounts of emotional expression. Facial expressions of emotion in conjugally bereaved adults were coded at 6 months post-loss as they described their relationship with the deceased; grief and perceived health were measured at 6, 14, and 25 months. Facial expressions of negative emotion, in particular anger, predicted increased grief at 14 months and poorer perceived health through 25 months. Facial expressions of positive emotion predicted decreased grief through 25 months and showed a positive but nonsignificant relation to perceived health. Predictive relations between negative and positive emotional expression persisted when initial levels of self-reported emotion, grief, and health were statistically controlled, demonstrating the mediating role of facial expressions of emotion in adjustment to conjugal loss. Theoretical and clinical implications are discussed.

  16. How Do Typically Developing Deaf Children and Deaf Children with Autism Spectrum Disorder Use the Face When Comprehending Emotional Facial Expressions in British Sign Language?

    ERIC Educational Resources Information Center

    Denmark, Tanya; Atkinson, Joanna; Campbell, Ruth; Swettenham, John

    2014-01-01

    Facial expressions in sign language carry a variety of communicative features. While emotion can modulate a spoken utterance through changes in intonation, duration and intensity, in sign language specific facial expressions presented concurrently with a manual sign perform this function. When deaf adult signers cannot see facial features, their…

  17. Realistic prediction of individual facial emotion expressions for craniofacial surgery simulations

    NASA Astrophysics Data System (ADS)

    Gladilin, Evgeny; Zachow, Stefan; Deuflhard, Peter; Hege, Hans-Christian

    2003-05-01

    In addition to the static soft tissue prediction, the estimation of individual facial emotion expressions is an important criterion for the evaluation of craniofacial surgery planning. In this paper, we present an approach for the estimation of individual facial emotion expressions on the basis of geometrical models of human anatomy derived from tomographic data and the finite element modeling of facial tissue biomechanics.

  18. A Web-based Game for Teaching Facial Expressions to Schizophrenic Patients.

    PubMed

    Gülkesen, Kemal Hakan; Isleyen, Filiz; Cinemre, Buket; Samur, Mehmet Kemal; Sen Kaya, Semiha; Zayim, Nese

    2017-07-12

    Recognizing facial expressions is an important social skill. In some psychological disorders such as schizophrenia, loss of this skill may complicate the patient's daily life. Prior research has shown that information technology may help to develop facial expression recognition skills through educational software and games. The aim was to examine whether a computer game designed for teaching facial expressions would improve the facial expression recognition skills of patients with schizophrenia. We developed a website composed of eight serious games. Thirty-two patients were given a pre-test composed of 21 facial expression photographs; 18 patients were assigned to the study group and 14 to the control group. Patients in the study group were asked to play the games on the website. After a period of one month, we administered a post-test to all patients. In the pre-test, the median number of correct answers (out of 21) was 17.5 in the control group and 16.5 in the study group. The median post-test score was 18 in the control group (p=0.052) and 20 in the study group (p<0.001). Computer games may be used for the purpose of educating people who have difficulty in recognizing facial expressions.

  19. Cyclin D1 expression and facial function outcome after vestibular schwannoma surgery.

    PubMed

    Lassaletta, Luis; Del Rio, Laura; Torres-Martin, Miguel; Rey, Juan A; Patrón, Mercedes; Madero, Rosario; Roda, Jose Maria; Gavilan, Javier

    2011-01-01

    The proto-oncogene cyclin D1 has been implicated in the development and behavior of vestibular schwannoma. This study evaluates the association of cyclin D1 expression, alongside other known prognostic factors, with facial function outcome 1 year after vestibular schwannoma surgery. Sixty-four patients undergoing surgery for vestibular schwannoma were studied. Immunohistochemical analysis with anti-cyclin D1 was performed in all cases. Cyclin D1 expression, as well as other demographic, clinical, radiologic, and intraoperative data, was correlated with 1-year postoperative facial function. Good 1-year facial function (Grades 1-2) was achieved in 73% of cases. Cyclin D1 expression was found in 67% of the tumors. Positive cyclin D1 staining was more frequent in patients with Grades 1 to 2 (75%) than in those with Grades 3 to 6 (25%). Other significant variables were tumor volume and facial nerve stimulation after tumor resection. The area under the receiver operating characteristic curve increased when cyclin D1 expression was added to the multivariate model. Cyclin D1 expression is associated with facial outcome after vestibular schwannoma surgery. The prognostic value of cyclin D1 expression is independent of tumor size and facial nerve stimulation at the end of surgery.

  20. Developmental Changes in Infants' Categorization of Anger and Disgust Facial Expressions

    ERIC Educational Resources Information Center

    Ruba, Ashley L.; Johnson, Kristin M.; Harris, Lasana T.; Wilbourn, Makeba Parramore

    2017-01-01

    For decades, scholars have examined how children first recognize emotional facial expressions. This research has found that infants younger than 10 months can discriminate negative, within-valence facial expressions in looking time tasks, yet children older than 24 months struggle to categorize these expressions in labeling and free-sort tasks.…

  1. A facial expression of pax: Assessing children's "recognition" of emotion from faces.

    PubMed

    Nelson, Nicole L; Russell, James A

    2016-01-01

    In a classic study, children were shown an array of facial expressions and asked to choose the person who expressed a specific emotion. Children were later asked to name the emotion in the face with any label they wanted. Subsequent research often relied on the same two tasks--choice from array and free labeling--to support the conclusion that children recognize basic emotions from facial expressions. Here five studies (N=120, 2- to 10-year-olds) showed that these two tasks produce illusory recognition; a novel nonsense facial expression was included in the array. Children "recognized" a nonsense emotion (pax or tolen) and two familiar emotions (fear and jealousy) from the same nonsense face. Children likely used a process of elimination; they paired the unknown facial expression with a label given in the choice-from-array task and, after just two trials, freely labeled the new facial expression with the new label. These data indicate that past studies using this method may have overestimated children's expression knowledge. Copyright © 2015 Elsevier Inc. All rights reserved.

  2. Discrimination of emotional facial expressions by tufted capuchin monkeys (Sapajus apella).

    PubMed

    Calcutt, Sarah E; Rubin, Taylor L; Pokorny, Jennifer J; de Waal, Frans B M

    2017-02-01

    Tufted or brown capuchin monkeys (Sapajus apella) have been shown to recognize conspecific faces as well as categorize them according to group membership. Little is known, though, about their capacity to differentiate between emotionally charged facial expressions or whether facial expressions are processed as a collection of features or configurally (i.e., as a whole). In 3 experiments, we examined whether tufted capuchins (a) differentiate photographs of neutral faces from either affiliative or agonistic expressions, (b) use relevant facial features to make such choices or view the expression as a whole, and (c) demonstrate an inversion effect for facial expressions suggestive of configural processing. Using an oddity paradigm presented on a computer touchscreen, we collected data from 9 adult and subadult monkeys. Subjects discriminated between emotional and neutral expressions with an exceptionally high success rate, including differentiating open-mouth threats from neutral expressions even when the latter contained varying degrees of visible teeth and mouth opening. They also showed an inversion effect for facial expressions, results that may indicate that quickly recognizing expressions does not originate solely from feature-based processing but likely a combination of relational processes. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  3. A View of the Therapy for Bell's Palsy Based on Molecular Biological Analyses of Facial Muscles.

    PubMed

    Moriyama, Hiroshi; Mitsukawa, Nobuyuki; Itoh, Masahiro; Otsuka, Naruhito

    2017-12-01

    Details regarding the molecular biological features of Bell's palsy have not been widely reported in textbooks. We genetically analyzed facial muscles to clarify these points. We performed genetic analysis of facial muscle specimens from Japanese patients with severe (House-Brackmann facial nerve grading system V) and moderate (House-Brackmann facial nerve grading system III) dysfunction due to Bell's palsy. Microarray analysis of gene expression was performed using specimens from the healthy and affected sides, and gene expression was compared. Changes in gene expression were defined as an affected side/healthy side ratio of >1.5 or <0.5. We observed that gene expression in Bell's palsy changes with the degree of facial nerve palsy; in particular, genes in the muscle, neuron, and energy categories tended to vary with the degree of palsy. It is expected that this study will aid in the development of new treatments and diagnostic/prognostic markers based on the severity of facial nerve palsy.

  4. Facial expression recognition based on improved local ternary pattern and stacked auto-encoder

    NASA Astrophysics Data System (ADS)

    Wu, Yao; Qiu, Weigen

    2017-08-01

    In order to enhance the robustness of facial expression recognition, we propose a method based on an improved Local Ternary Pattern (LTP) combined with a Stacked Auto-Encoder (SAE). The method first extracts features with the improved LTP operator and then uses an improved deep belief network as the detector and classifier of the extracted LTP features, realizing the combination of LTP and a deep architecture for facial expression recognition. The recognition rate on the CK+ database improved significantly.
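
    For readers who want the operator made concrete: the following is a minimal Python sketch of the basic (unimproved) local ternary pattern on 3x3 neighborhoods, assuming a grayscale NumPy image. The paper's improved LTP variant and its auto-encoder stage are not specified in the abstract and are not reproduced here.

      import numpy as np

      def ltp_codes(img, t=5):
          # Basic LTP: a neighbor is coded +1 if it exceeds the center by
          # more than t, -1 if it falls below center - t, and 0 otherwise.
          # As is conventional, the ternary code is split into "upper" and
          # "lower" binary pattern maps.
          img = img.astype(np.int32)
          h, w = img.shape
          center = img[1:h-1, 1:w-1]
          offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                     (1, 1), (1, 0), (1, -1), (0, -1)]
          upper = np.zeros_like(center)
          lower = np.zeros_like(center)
          for bit, (dy, dx) in enumerate(offsets):
              nb = img[1+dy:h-1+dy, 1+dx:w-1+dx]
              upper |= (nb > center + t).astype(np.int32) << bit
              lower |= (nb < center - t).astype(np.int32) << bit
          return upper, lower  # two 8-bit code maps per image

    Histograms of such code maps, rather than the raw maps, are what a downstream auto-encoder or classifier would typically consume.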

  5. Facial mimicry in its social setting

    PubMed Central

    Seibt, Beate; Mühlberger, Andreas; Likowski, Katja U.; Weyers, Peter

    2015-01-01

    In interpersonal encounters, individuals often exhibit changes in their own facial expressions in response to emotional expressions of another person. Such changes are often called facial mimicry. While this tendency first appeared to be an automatic tendency of the perceiver to show the same emotional expression as the sender, evidence is now accumulating that situation, person, and relationship jointly determine whether and for which emotions such congruent facial behavior is shown. We review the evidence regarding the moderating influence of such factors on facial mimicry with a focus on understanding the meaning of facial responses to emotional expressions in a particular constellation. From this, we derive recommendations for a research agenda with a stronger focus on the most common forms of encounters, actual interactions with known others, and on assessing potential mediators of facial mimicry. We conclude that facial mimicry is modulated by many factors: attention deployment and sensitivity, detection of valence, emotional feelings, and social motivations. We posit that these are the more proximal causes of changes in facial mimicry due to changes in its social setting. PMID:26321970

  6. Person-independent facial expression analysis by fusing multiscale cell features

    NASA Astrophysics Data System (ADS)

    Zhou, Lubing; Wang, Han

    2013-03-01

    Automatic facial expression recognition is an interesting and challenging task. To achieve satisfactory accuracy, deriving a robust facial representation is especially important. A novel appearance-based feature, the multiscale cell local intensity increasing patterns (MC-LIIP), is presented for representing facial images and conducting person-independent facial expression analysis. The LIIP uses a decimal number to encode the texture or intensity distribution around each pixel via pixel-to-pixel intensity comparison. To boost noise resistance, MC-LIIP carries out the comparison computation on the average values of scalable cells instead of individual pixels. The facial descriptor fuses region-based histograms of MC-LIIP features from various scales, so as to encode not only the textural microstructures but also the macrostructures of facial images. Finally, a support vector machine classifier is applied for expression recognition. Experimental results on the CK+ and Karolinska Directed Emotional Faces databases show the superiority of the proposed method.
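
    The cell-averaging idea can be sketched directly. The code below is a rough stand-in rather than the paper's exact LIIP encoding: it averages intensities over scalable cells before the pixel-to-pixel comparison (here an LBP-style code) and fuses normalized histograms from several cell sizes.

      import numpy as np
      from scipy.ndimage import uniform_filter

      def cell_pattern_histogram(img, cell=3, bins=256):
          # Average over cell x cell blocks first; this averaging is the
          # noise-resistance idea behind MC-LIIP.
          avg = uniform_filter(img.astype(np.float64), size=cell)
          h, w = avg.shape
          center = avg[1:h-1, 1:w-1]
          offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                     (1, 1), (1, 0), (1, -1), (0, -1)]
          code = np.zeros(center.shape, dtype=np.int32)
          for bit, (dy, dx) in enumerate(offsets):
              nb = avg[1+dy:h-1+dy, 1+dx:w-1+dx]
              code |= (nb > center).astype(np.int32) << bit
          hist, _ = np.histogram(code, bins=bins, range=(0, bins))
          return hist / hist.sum()

      def multiscale_descriptor(img, cells=(1, 3, 5)):
          # Fuse histograms from several cell sizes into one descriptor,
          # which would then be fed to the SVM classifier.
          return np.concatenate([cell_pattern_histogram(img, c) for c in cells])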

  7. Behind the Robot’s Smiles and Frowns: In Social Context, People Do Not Mirror Android’s Expressions But React to Their Informational Value

    PubMed Central

    Hofree, Galit; Ruvolo, Paul; Reinert, Audrey; Bartlett, Marian S.; Winkielman, Piotr

    2018-01-01

    Facial actions are key elements of non-verbal behavior. Perceivers’ reactions to others’ facial expressions often represent a match or mirroring (e.g., they smile to a smile). However, the information conveyed by an expression depends on context. Thus, when shown by an opponent, a smile conveys bad news and evokes frowning. The availability of anthropomorphic agents capable of facial actions raises the question of how people respond to such agents in social context. We explored this issue in a study where participants played a strategic game with or against a facially expressive android. Electromyography (EMG) recorded participants’ reactions over the zygomaticus muscle (smiling) and corrugator muscle (frowning). We found that participants’ facial responses to the android’s expressions reflect their informational value, rather than a direct match. Overall, participants smiled more, and frowned less, when winning than losing. Critically, participants’ responses to the game outcome were similar regardless of whether it was conveyed via the android’s smile or frown. Furthermore, the outcome had greater impact on people’s facial reactions when it was conveyed through the android’s face than through a computer screen. These findings demonstrate that facial actions of artificial agents impact human facial responding. They also suggest a sophistication in human-robot communication that highlights the signaling value of facial expressions. PMID:29740307

  8. Reconstructing 3D Face Model with Associated Expression Deformation from a Single Face Image via Constructing a Low-Dimensional Expression Deformation Manifold.

    PubMed

    Wang, Shu-Fan; Lai, Shang-Hong

    2011-10-01

    Facial expression modeling is central to facial expression recognition and expression synthesis for facial animation. In this work, we propose a manifold-based 3D face reconstruction approach to estimating the 3D face model and the associated expression deformation from a single face image. With the proposed robust weighted feature map (RWF), we can obtain the dense correspondences between 3D face models and build a nonlinear 3D expression manifold from a large set of 3D facial expression models. Then a Gaussian mixture model in this manifold is learned to represent the distribution of expression deformation. By combining the merits of morphable neutral face model and the low-dimensional expression manifold, a novel algorithm is developed to reconstruct the 3D face geometry as well as the facial deformation from a single face image in an energy minimization framework. Experimental results on simulated and real images are shown to validate the effectiveness and accuracy of the proposed algorithm.
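
    A minimal sketch of the distribution-modeling step described above, assuming flattened per-sample deformation vectors and using PCA as a linear stand-in for the paper's nonlinear manifold construction:

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.mixture import GaussianMixture

      # Placeholder data: each row flattens dense 3D vertex displacements
      # of one expression sample (n_samples x (3 * n_vertices)).
      rng = np.random.default_rng(0)
      deformations = rng.normal(size=(500, 3 * 1000))

      # Embed deformations in a low-dimensional expression space.
      pca = PCA(n_components=10)
      coeffs = pca.fit_transform(deformations)

      # A Gaussian mixture models the distribution of deformations there.
      gmm = GaussianMixture(n_components=6, covariance_type='full').fit(coeffs)

      # In an energy-minimization reconstruction, the negative log-likelihood
      # can serve as an expression prior penalizing implausible deformations.
      prior_energy = -gmm.score_samples(coeffs[:5])
      print(prior_energy)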

  9. Influence of Objective Three-Dimensional Measures and Movement Images on Surgeon Treatment Planning for Lip Revision Surgery

    PubMed Central

    Trotman, Carroll-Ann; Phillips, Ceib; Faraway, Julian J.; Hartman, Terry; van Aalst, John A.

    2013-01-01

    Objective To determine whether a systematic evaluation of the facial soft tissues of patients with cleft lip and palate, using facial video images and objective three-dimensional measurements of movement, changes surgeons’ treatment plans for lip revision surgery. Design Prospective longitudinal study. Setting The University of North Carolina School of Dentistry. Patients, Participants A group of patients with repaired cleft lip and palate (n = 21), a noncleft control group (n = 37), and surgeons experienced in cleft care. Interventions Lip revision. Main Outcome Measures (1) facial photographic images; (2) facial video images during animations; (3) objective three-dimensional measurements of upper lip movement based on z scores; and (4) objective dynamic and visual three-dimensional measurement of facial soft tissue movement. Results With the use of the video images plus objective three-dimensional measures, changes were made to the problem list of the surgical treatment plan for 86% of the patients (95% confidence interval, 0.64 to 0.97) and to the surgical goals for 71% of the patients (95% confidence interval, 0.48 to 0.89). The surgeon group varied in the percentage of patients for whom the problem list was modified, ranging from 24% (95% confidence interval, 8% to 47%) to 48% (95% confidence interval, 26% to 70%) of patients, and in the percentage for whom the surgical goals were modified, ranging from 14% (95% confidence interval, 3% to 36%) to 48% (95% confidence interval, 26% to 70%) of patients. Conclusions For all surgeons, the additional assessment components of the systematic evaluation resulted in a change in clinical decision making for some patients. PMID:23855676

  10. Availability of latissimus dorsi minigraft in smile reconstruction for incomplete facial paralysis: quantitative assessment based on the optical flow method.

    PubMed

    Takushima, Akihiko; Harii, Kiyonori; Okazaki, Mutsumi; Ohura, Norihiko; Asato, Hirotaka

    2009-04-01

    Acute unilateral facial paralysis, such as occurs in Bell palsy and Hunt syndrome, is mostly a benign neurologic morbidity that resolves within a few months. However, incomplete or misdirected return of the affected nerve results in unfavorable cosmetic sequelae in some patients. Although functional problems such as lagophthalmos are rare, facial asymmetry on smiling resulting from a lack of mimetic muscle strength in the cheek is often psychologically annoying to patients. To obtain a more natural smile, the authors transfer latissimus dorsi muscle to assist in cheek movement. A small, thinned muscle (mini-latissimus dorsi) is sufficient for transplant in this situation. In this study, 96 patients with incomplete facial paralysis who underwent mini-latissimus dorsi transfer were examined. In this series, along with evaluation using the grading scale used in previous reports, preoperative and postoperative videos of 30 patients were analyzed for quantitative assessment using newly developed computer software. Temporary deterioration of paralysis was recognized in three cases but did not last more than a few months. Signs of transferred muscle contraction were recorded after 4 to 12 months among 91 patients. No apparent clinical signs of contraction were recognized in one patient, and four patients could not be followed postoperatively. The synchronized ratio of vertical movement and the symmetrical ratio of horizontal movement both in the cheek and in the lower lip between healthy and paralyzed sides among 30 patients were statistically improved. Statistical analysis using newly developed computer software revealed that a more symmetrical smile can be achieved by muscle transfer among patients with incomplete facial paralysis. Mini-latissimus dorsi transfer can avoid postoperative muscle bulkiness of the cheek and can achieve more natural cheek movement.
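
    The newly developed software is not public, but the core idea of quantifying motion symmetry between the healthy and paralyzed sides can be sketched with dense optical flow. The frame pair, ROI boxes, and symmetry ratio below are illustrative assumptions, not the paper's exact measures.

      import cv2
      import numpy as np

      def cheek_motion_symmetry(prev_frame, next_frame, left_roi, right_roi):
          # left_roi / right_roi: (y0, y1, x0, x1) boxes around each cheek.
          g0 = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
          g1 = cv2.cvtColor(next_frame, cv2.COLOR_BGR2GRAY)
          flow = cv2.calcOpticalFlowFarneback(g0, g1, None,
                                              0.5, 3, 15, 3, 5, 1.2, 0)

          def mean_mag(roi):
              y0, y1, x0, x1 = roi
              fx = flow[y0:y1, x0:x1, 0]
              fy = flow[y0:y1, x0:x1, 1]
              return float(np.mean(np.hypot(fx, fy)))

          m_left, m_right = mean_mag(left_roi), mean_mag(right_roi)
          # 1.0 would indicate perfectly symmetrical cheek movement.
          return min(m_left, m_right) / max(m_left, m_right, 1e-9)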

  11. Sex differences in attention to disgust facial expressions.

    PubMed

    Kraines, Morganne A; Kelberer, Lucas J A; Wells, Tony T

    2017-12-01

    Research demonstrates that women experience disgust more readily and with more intensity than men. The experience of disgust is associated with increased attention to disgust-related stimuli, but no prior study has examined sex differences in attention to disgust facial expressions. We hypothesised that women, compared to men, would demonstrate increased attention to disgust facial expressions. Participants (n = 172) completed an eye tracking task to measure visual attention to emotional facial expressions. Results indicated that women spent more time attending to disgust facial expressions compared to men. Unexpectedly, we found that men spent significantly more time attending to neutral faces compared to women. The findings indicate that women's increased experience of emotional disgust also extends to attention to disgust facial stimuli. These findings may help to explain sex differences in the experience of disgust and in diagnoses of anxiety disorders in which disgust plays an important role.

  12. Subject independent facial expression recognition with robust face detection using a convolutional neural network.

    PubMed

    Matsugu, Masakazu; Mori, Katsuhiko; Mitari, Yusuke; Kaneda, Yuji

    2003-01-01

    Reliable detection of ordinary facial expressions (e.g. smile) despite the variability among individuals as well as face appearance is an important step toward the realization of perceptual user interface with autonomous perception of persons. We describe a rule-based algorithm for robust facial expression recognition combined with robust face detection using a convolutional neural network. In this study, we address the problem of subject independence as well as translation, rotation, and scale invariance in the recognition of facial expression. The result shows reliable detection of smiles with recognition rate of 97.6% for 5600 still images of more than 10 subjects. The proposed algorithm demonstrated the ability to discriminate smiling from talking based on the saliency score obtained from voting visual cues. To the best of our knowledge, it is the first facial expression recognition model with the property of subject independence combined with robustness to variability in facial appearance.
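
    The 2003 architecture itself is not given in the abstract; as a modern stand-in, a tiny convolutional smile/non-smile classifier might look as follows (PyTorch, 64x64 grayscale face crops assumed). The rule-based post-processing and saliency voting described above are not reproduced.

      import torch
      import torch.nn as nn

      class SmileNet(nn.Module):
          # Two conv/pool stages followed by a linear head; purely a sketch.
          def __init__(self):
              super().__init__()
              self.features = nn.Sequential(
                  nn.Conv2d(1, 8, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
                  nn.Conv2d(8, 16, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
              )
              self.head = nn.Linear(16 * 16 * 16, 2)  # for 64x64 inputs

          def forward(self, x):  # x: (batch, 1, 64, 64) grayscale faces
              h = self.features(x)
              return self.head(h.flatten(1))

      logits = SmileNet()(torch.randn(4, 1, 64, 64))
      print(logits.shape)  # torch.Size([4, 2])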

  13. Parameterized Facial Expression Synthesis Based on MPEG-4

    NASA Astrophysics Data System (ADS)

    Raouzaiou, Amaryllis; Tsapatsoulis, Nicolas; Karpouzis, Kostas; Kollias, Stefanos

    2002-12-01

    In the framework of MPEG-4, one can include applications where virtual agents, utilizing both textual and multisensory data, including facial expressions and nonverbal speech, help systems become accustomed to the actual feelings of the user. Applications of this technology are expected in educational environments, virtual collaborative workplaces, communities, and interactive entertainment. Facial animation has gained much interest within the MPEG-4 framework, with implementation details remaining an open research area (Tekalp, 1999). In this paper, we describe a method for enriching human computer interaction, focusing on analysis and synthesis of primary and intermediate facial expressions (Ekman and Friesen, 1978). To achieve this goal, we utilize facial animation parameters (FAPs) to model primary expressions and describe a rule-based technique for handling intermediate ones. A relation between FAPs and the activation parameter proposed in classical psychological studies is established, leading to parameterized facial expression analysis and synthesis notions compatible with the MPEG-4 standard.
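
    The parameterization lends itself to a compact sketch: primary expressions as FAP displacement vectors, intensity controlled by an activation scalar, and intermediate expressions formed by blending profiles. The indices and values below are illustrative, not the normative MPEG-4 tables or the paper's exact rules.

      import numpy as np

      N_FAPS = 68
      joy = np.zeros(N_FAPS)
      joy[[3, 4, 5]] = [120, 80, 80]    # hypothetical mouth-related FAPs
      sad = np.zeros(N_FAPS)
      sad[[3, 4, 5]] = [-60, -40, -40]

      def scaled_expression(profile, activation):
          # Scale a primary-expression FAP profile by an activation in
          # [0, 1], yielding lower-intensity versions of the expression.
          return activation * profile

      def intermediate_expression(p1, p2, w=0.5):
          # Blend two primary profiles into an intermediate expression,
          # mirroring the rule-based combination described in the paper.
          return w * p1 + (1.0 - w) * p2

      mild_joy = scaled_expression(joy, 0.3)
      bittersweet = intermediate_expression(joy, sad, 0.6)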

  14. Multimedia Content Development as a Facial Expression Datasets for Recognition of Human Emotions

    NASA Astrophysics Data System (ADS)

    Mamonto, N. E.; Maulana, H.; Liliana, D. Y.; Basaruddin, T.

    2018-02-01

    Datasets that have been developed before contain facial expressions from foreign people. The development of this multimedia content aims to address the problems experienced by the research team and by other researchers who will conduct similar research. The method used in developing the multimedia content as facial expression datasets for human emotion recognition is the Villamil-Molina version of the multimedia development method. The multimedia content was developed with 10 subjects (talents), each performing 3 takes in which the talent had to demonstrate 19 facial expressions. After the editing and rendering process, tests were carried out, leading to the conclusion that the multimedia content can be used as a facial expression dataset for the recognition of human emotions.

  15. Unattractive infant faces elicit negative affect from adults

    PubMed Central

    Schein, Stevie S.; Langlois, Judith H.

    2015-01-01

    We examined the relationship between infant attractiveness and adult affect by investigating whether differing levels of infant facial attractiveness elicit facial muscle movement correlated with positive and negative affect from adults (N = 87) using electromyography. Unattractive infant faces evoked significantly more corrugator supercilii and levator labii superioris movement (physiological correlates of negative affect) than attractive infant faces. These results suggest that unattractive infants may be at risk for negative affective responses from adults, though the relationship between those responses and caregiving behavior remains elusive. PMID:25658199
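
    Facial-EMG measures of this kind are usually reduced to a smoothed amplitude envelope per muscle site before conditions are compared; the sketch below uses conventional filter settings that are assumptions, not the study's exact pipeline.

      import numpy as np
      from scipy.signal import butter, filtfilt

      def emg_envelope(x, fs=1000.0, band=(20.0, 400.0), smooth_hz=10.0):
          # Band-pass to the EMG band, full-wave rectify, then low-pass
          # smooth to obtain an amplitude envelope.
          b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)],
                        btype='band')
          rect = np.abs(filtfilt(b, a, x))
          b2, a2 = butter(4, smooth_hz / (fs / 2), btype='low')
          return filtfilt(b2, a2, rect)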

  16. Automated detection of pain from facial expressions: a rule-based approach using AAM

    NASA Astrophysics Data System (ADS)

    Chen, Zhanli; Ansari, Rashid; Wilkie, Diana J.

    2012-02-01

    In this paper, we examine the problem of using video analysis to assess pain, an important problem especially for critically ill, non-communicative patients and people with dementia. We propose and evaluate an automated method to detect the presence of pain manifested in patient videos, using a unique and large collection of cancer patient videos captured in patient homes. The method is based on detecting pain-related facial action units defined in the Facial Action Coding System (FACS), which is widely used for objective assessment in pain analysis. In our research, a person-specific Active Appearance Model (AAM) based on the Project-Out Inverse Compositional Method is trained for each patient individually for modeling purposes. A flexible representation of the shape model is used in a rule-based method that is better suited than the more commonly used classifier-based methods for application to the cancer patient videos, in which pain-related facial actions occur infrequently and more subtly. The rule-based method relies on feature points that provide facial action cues and are extracted from the shape vertices of the AAM, which have a natural correspondence to facial muscle movement. In this paper, we investigate the detection of a commonly used set of pain-related action units in both the upper and lower face. Our detection results show good agreement with the results obtained by three trained FACS coders who independently reviewed and scored the action units in the cancer patient videos.
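
    A single rule of the kind described can be sketched from AAM shape vertices alone. The landmark indices and threshold below are hypothetical, standing in for the paper's calibrated per-AU rules.

      import numpy as np

      def au4_brow_lowerer_present(landmarks, neutral, thresh=0.05):
          # landmarks / neutral: (n_points, 2) arrays of AAM shape vertices
          # for the current and neutral frames (image coordinates, y down).
          BROW, EYE = [17, 18, 19], [37, 38, 40]   # hypothetical indices

          def brow_eye_gap(pts):
              return float(np.mean(pts[EYE, 1]) - np.mean(pts[BROW, 1]))

          # Normalize by inter-ocular distance for scale invariance.
          iod = float(np.linalg.norm(neutral[36] - neutral[45]))
          drop = (brow_eye_gap(neutral) - brow_eye_gap(landmarks)) / iod
          return drop > thresh   # brows moved toward the eyes: AU4 cue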

  17. Brief Report: Representational Momentum for Dynamic Facial Expressions in Pervasive Developmental Disorder

    ERIC Educational Resources Information Center

    Uono, Shota; Sato, Wataru; Toichi, Motomi

    2010-01-01

    Individuals with pervasive developmental disorder (PDD) have difficulty with social communication via emotional facial expressions, but behavioral studies involving static images have reported inconsistent findings about emotion recognition. We investigated whether dynamic presentation of facial expression would enhance subjective perception of…

  18. Mothers' pupillary responses to infant facial expressions.

    PubMed

    Yrttiaho, Santeri; Niehaus, Dana; Thomas, Eileen; Leppänen, Jukka M

    2017-02-06

    Human parental care relies heavily on the ability to monitor and respond to a child's affective states. The current study examined pupil diameter as a potential physiological index of mothers' affective response to infant facial expressions. Pupillary time-series were measured from 86 mothers of young infants in response to an array of photographic infant faces falling into four emotive categories based on valence (positive vs. negative) and arousal (mild vs. strong). Pupil dilation was highly sensitive to the valence of facial expressions, being larger for negative vs. positive facial expressions. A separate control experiment with luminance-matched non-face stimuli indicated that the valence effect was specific to facial expressions and cannot be explained by luminance confounds. Pupil response was not sensitive to the arousal level of facial expressions. The results show the feasibility of using pupil diameter as a marker of mothers' affective responses to ecologically valid infant stimuli and point to a particularly prompt maternal response to infant distress cues.
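
    Pupillary time-series of this kind are commonly summarized as baseline-corrected dilation; a minimal sketch under conventional, assumed analysis choices (0.5 s pre-stimulus baseline, mean post-stimulus dilation):

      import numpy as np

      def baseline_corrected_dilation(pupil, t, stim_onset, base_win=0.5):
          # pupil: diameter samples; t: timestamps in seconds (NumPy arrays).
          base = pupil[(t >= stim_onset - base_win) & (t < stim_onset)].mean()
          post = pupil[t >= stim_onset] - base
          return post.mean()   # mean dilation relative to baseline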

  19. Gaze behavior predicts memory bias for angry facial expressions in stable dysphoria.

    PubMed

    Wells, Tony T; Beevers, Christopher G; Robison, Adrienne E; Ellis, Alissa J

    2010-12-01

    Interpersonal theories suggest that depressed individuals are sensitive to signs of interpersonal rejection, such as angry facial expressions. The present study examined memory bias for happy, sad, angry, and neutral facial expressions in stably dysphoric and stably nondysphoric young adults. Participants' gaze behavior (i.e., fixation duration, number of fixations, and distance between fixations) while viewing these facial expressions was also assessed. Using signal detection analyses, the dysphoric group had better accuracy on a surprise recognition task for angry faces than the nondysphoric group. Further, mediation analyses indicated that greater breadth of attentional focus (i.e., distance between fixations) accounted for enhanced recall of angry faces among the dysphoric group. There were no differences between dysphoria groups in gaze behavior or memory for sad, happy, or neutral facial expressions. Findings from this study identify a specific cognitive mechanism (i.e., breadth of attentional focus) that accounts for biased recall of angry facial expressions in dysphoria. This work also highlights the potential for integrating cognitive and interpersonal theories of depression.
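
    In its simplest form, the signal detection analysis mentioned above reduces to computing d' from hit and false-alarm rates on the surprise recognition task. The log-linear correction below is a standard convention, not necessarily the authors' exact procedure.

      from scipy.stats import norm

      def d_prime(hits, misses, false_alarms, correct_rejections):
          # Nudge rates away from 0 and 1 before the z-transform.
          hit_rate = (hits + 0.5) / (hits + misses + 1.0)
          fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
          return norm.ppf(hit_rate) - norm.ppf(fa_rate)

      # e.g. angry faces: 18 hits / 2 misses, 4 false alarms / 16 correct rejections
      print(round(d_prime(18, 2, 4, 16), 2))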

  20. Sad Facial Expressions Increase Choice Blindness

    PubMed Central

    Wang, Yajie; Zhao, Song; Zhang, Zhijie; Feng, Wenfeng

    2018-01-01

    Previous studies have discovered a fascinating phenomenon known as choice blindness—individuals fail to detect mismatches between the face they choose and the face replaced by the experimenter. Although previous studies have reported a couple of factors that can modulate the magnitude of choice blindness, the potential effect of facial expression on choice blindness has not yet been explored. Using faces with sad and neutral expressions (Experiment 1) and faces with happy and neutral expressions (Experiment 2) in the classic choice blindness paradigm, the present study investigated the effects of facial expressions on choice blindness. The results showed that the detection rate was significantly lower on sad faces than neutral faces, whereas no significant difference was observed between happy faces and neutral faces. The exploratory analysis of verbal reports found that participants who reported less facial features for sad (as compared to neutral) expressions also tended to show a lower detection rate of sad (as compared to neutral) faces. These findings indicated that sad facial expressions increased choice blindness, which might have resulted from inhibition of further processing of the detailed facial features by the less attractive sad expressions (as compared to neutral expressions). PMID:29358926

  1. Sad Facial Expressions Increase Choice Blindness.

    PubMed

    Wang, Yajie; Zhao, Song; Zhang, Zhijie; Feng, Wenfeng

    2017-01-01

    Previous studies have discovered a fascinating phenomenon known as choice blindness-individuals fail to detect mismatches between the face they choose and the face replaced by the experimenter. Although previous studies have reported a couple of factors that can modulate the magnitude of choice blindness, the potential effect of facial expression on choice blindness has not yet been explored. Using faces with sad and neutral expressions (Experiment 1) and faces with happy and neutral expressions (Experiment 2) in the classic choice blindness paradigm, the present study investigated the effects of facial expressions on choice blindness. The results showed that the detection rate was significantly lower on sad faces than neutral faces, whereas no significant difference was observed between happy faces and neutral faces. The exploratory analysis of verbal reports found that participants who reported less facial features for sad (as compared to neutral) expressions also tended to show a lower detection rate of sad (as compared to neutral) faces. These findings indicated that sad facial expressions increased choice blindness, which might have resulted from inhibition of further processing of the detailed facial features by the less attractive sad expressions (as compared to neutral expressions).

  2. Perception of face and body expressions using electromyography, pupillometry and gaze measures.

    PubMed

    Kret, Mariska E; Stekelenburg, Jeroen J; Roelofs, Karin; de Gelder, Beatrice

    2013-01-01

    Traditional emotion theories stress the importance of the face in the expression of emotions but bodily expressions are becoming increasingly important as well. In these experiments we tested the hypothesis that similar physiological responses can be evoked by observing emotional face and body signals and that the reaction to angry signals is amplified in anxious individuals. We designed three experiments in which participants categorized emotional expressions from isolated facial and bodily expressions and emotionally congruent and incongruent face-body compounds. Participants' fixations were measured and their pupil size recorded with eye-tracking equipment and their facial reactions measured with electromyography. The results support our prediction that the recognition of a facial expression is improved in the context of a matching posture and importantly, vice versa as well. From their facial expressions, it appeared that observers acted with signs of negative emotionality (increased corrugator activity) to angry and fearful facial expressions and with positive emotionality (increased zygomaticus) to happy facial expressions. What we predicted and found, was that angry and fearful cues from the face or the body, attracted more attention than happy cues. We further observed that responses evoked by angry cues were amplified in individuals with high anxiety scores. In sum, we show that people process bodily expressions of emotion in a similar fashion as facial expressions and that the congruency between the emotional signals from the face and body facilitates the recognition of the emotion.

  3. Perception of Face and Body Expressions Using Electromyography, Pupillometry and Gaze Measures

    PubMed Central

    Kret, Mariska E.; Stekelenburg, Jeroen J.; Roelofs, Karin; de Gelder, Beatrice

    2013-01-01

    Traditional emotion theories stress the importance of the face in the expression of emotions but bodily expressions are becoming increasingly important as well. In these experiments we tested the hypothesis that similar physiological responses can be evoked by observing emotional face and body signals and that the reaction to angry signals is amplified in anxious individuals. We designed three experiments in which participants categorized emotional expressions from isolated facial and bodily expressions and emotionally congruent and incongruent face-body compounds. Participants’ fixations were measured and their pupil size recorded with eye-tracking equipment and their facial reactions measured with electromyography. The results support our prediction that the recognition of a facial expression is improved in the context of a matching posture and importantly, vice versa as well. From their facial expressions, it appeared that observers acted with signs of negative emotionality (increased corrugator activity) to angry and fearful facial expressions and with positive emotionality (increased zygomaticus) to happy facial expressions. What we predicted and found, was that angry and fearful cues from the face or the body, attracted more attention than happy cues. We further observed that responses evoked by angry cues were amplified in individuals with high anxiety scores. In sum, we show that people process bodily expressions of emotion in a similar fashion as facial expressions and that the congruency between the emotional signals from the face and body facilitates the recognition of the emotion. PMID:23403886

  4. The Development of Dynamic Facial Expression Recognition at Different Intensities in 4- to 18-Year-Olds

    ERIC Educational Resources Information Center

    Montirosso, Rosario; Peverelli, Milena; Frigerio, Elisa; Crespi, Monica; Borgatti, Renato

    2010-01-01

    The primary purpose of this study was to examine the effect of the intensity of emotion expression on children's developing ability to label emotion during a dynamic presentation of five facial expressions (anger, disgust, fear, happiness, and sadness). A computerized task (AFFECT--animated full facial expression comprehension test) was used to…

  5. Early visual experience and the recognition of basic facial expressions: involvement of the middle temporal and inferior frontal gyri during haptic identification by the early blind

    PubMed Central

    Kitada, Ryo; Okamoto, Yuko; Sasaki, Akihiro T.; Kochiyama, Takanori; Miyahara, Motohide; Lederman, Susan J.; Sadato, Norihiro

    2012-01-01

    Face perception is critical for social communication. Given its fundamental importance in the course of evolution, innate neural mechanisms may anticipate the computations necessary for representing faces. However, the effect of visual deprivation on the formation of neural mechanisms that underlie face perception is largely unknown. We previously showed that sighted individuals can recognize basic facial expressions by haptics surprisingly well. Moreover, the inferior frontal gyrus (IFG) and posterior superior temporal sulcus (pSTS) in the sighted subjects are involved in haptic and visual recognition of facial expressions. Here, we conducted both psychophysical and functional magnetic-resonance imaging (fMRI) experiments to determine the nature of the neural representation that subserves the recognition of basic facial expressions in early blind individuals. In a psychophysical experiment, both early blind and sighted subjects haptically identified basic facial expressions at levels well above chance. In the subsequent fMRI experiment, both groups haptically identified facial expressions and shoe types (control). The sighted subjects then completed the same task visually. Within brain regions activated by the visual and haptic identification of facial expressions (relative to that of shoes) in the sighted group, corresponding haptic identification in the early blind activated regions in the inferior frontal and middle temporal gyri. These results suggest that the neural system that underlies the recognition of basic facial expressions develops supramodally even in the absence of early visual experience. PMID:23372547

  6. Early visual experience and the recognition of basic facial expressions: involvement of the middle temporal and inferior frontal gyri during haptic identification by the early blind.

    PubMed

    Kitada, Ryo; Okamoto, Yuko; Sasaki, Akihiro T; Kochiyama, Takanori; Miyahara, Motohide; Lederman, Susan J; Sadato, Norihiro

    2013-01-01

    Face perception is critical for social communication. Given its fundamental importance in the course of evolution, innate neural mechanisms may anticipate the computations necessary for representing faces. However, the effect of visual deprivation on the formation of neural mechanisms that underlie face perception is largely unknown. We previously showed that sighted individuals can recognize basic facial expressions by haptics surprisingly well. Moreover, the inferior frontal gyrus (IFG) and posterior superior temporal sulcus (pSTS) in the sighted subjects are involved in haptic and visual recognition of facial expressions. Here, we conducted both psychophysical and functional magnetic-resonance imaging (fMRI) experiments to determine the nature of the neural representation that subserves the recognition of basic facial expressions in early blind individuals. In a psychophysical experiment, both early blind and sighted subjects haptically identified basic facial expressions at levels well above chance. In the subsequent fMRI experiment, both groups haptically identified facial expressions and shoe types (control). The sighted subjects then completed the same task visually. Within brain regions activated by the visual and haptic identification of facial expressions (relative to that of shoes) in the sighted group, corresponding haptic identification in the early blind activated regions in the inferior frontal and middle temporal gyri. These results suggest that the neural system that underlies the recognition of basic facial expressions develops supramodally even in the absence of early visual experience.

  7. Cultural similarities and differences in perceiving and recognizing facial expressions of basic emotions.

    PubMed

    Yan, Xiaoqian; Andrews, Timothy J; Young, Andrew W

    2016-03-01

    The ability to recognize facial expressions of basic emotions is often considered a universal human ability. However, recent studies have suggested that this commonality has been overestimated and that people from different cultures use different facial signals to represent expressions (Jack, Blais, Scheepers, Schyns, & Caldara, 2009; Jack, Caldara, & Schyns, 2012). We investigated this possibility by examining similarities and differences in the perception and categorization of facial expressions between Chinese and white British participants using whole-face and partial-face images. Our results showed no cultural difference in the patterns of perceptual similarity of expressions from whole-face images. When categorizing the same expressions, however, both British and Chinese participants were slightly more accurate with whole-face images of their own ethnic group. To further investigate potential strategy differences, we repeated the perceptual similarity and categorization tasks with presentation of only the upper or lower half of each face. Again, the perceptual similarity of facial expressions was similar between Chinese and British participants for both the upper and lower face regions. However, participants were slightly better at categorizing facial expressions of their own ethnic group for the lower face regions, indicating that the way in which culture shapes the categorization of facial expressions is largely driven by differences in information decoding from this part of the face. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  8. Automatic 2.5-D Facial Landmarking and Emotion Annotation for Social Interaction Assistance.

    PubMed

    Zhao, Xi; Zou, Jianhua; Li, Huibin; Dellandrea, Emmanuel; Kakadiaris, Ioannis A; Chen, Liming

    2016-09-01

    People with low vision, Alzheimer's disease, and autism spectrum disorder experience difficulties in perceiving or interpreting facial expressions of emotion in their social lives. Though automatic facial expression recognition (FER) methods on 2-D videos have been extensively investigated, their performance is constrained by challenges in head pose and lighting conditions. The shape information in 3-D facial data can reduce or even overcome these challenges. However, the high expense of 3-D cameras prevents their widespread use. Fortunately, 2.5-D facial data from emerging portable RGB-D cameras provide a good balance for this dilemma. In this paper, we propose an automatic emotion annotation solution on 2.5-D facial data collected from RGB-D cameras. The solution consists of a facial landmarking method and a FER method. Specifically, we propose building a deformable partial face model and fitting the model to a 2.5-D face to localize facial landmarks automatically. For FER, a novel action unit (AU) space-based method is proposed: facial features are extracted using landmarks and further represented as coordinates in the AU space, which are classified into facial expressions. Evaluated on three publicly accessible facial databases, namely the EURECOM, FRGC, and Bosphorus databases, the proposed facial landmarking and expression recognition methods have achieved satisfactory results. Possible real-world applications using our algorithms are also discussed.
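
    The two-stage idea of representing landmark-derived features as coordinates in an AU space before classifying expressions can be sketched as follows; the regressor-per-AU mapping and all array shapes are assumptions for illustration, not the paper's exact pipeline.

      import numpy as np
      from sklearn.linear_model import Ridge
      from sklearn.svm import SVC

      rng = np.random.default_rng(1)
      X = rng.normal(size=(300, 98))      # e.g. 49 landmarks x 2 coordinates
      A = rng.random(size=(300, 12))      # per-sample intensities of 12 AUs
      y = rng.integers(0, 6, size=300)    # 6 expression classes

      # Step 1: map geometric features into "AU space" (one linear
      # regressor per action unit, fitted jointly here).
      au_mapper = Ridge(alpha=1.0).fit(X, A)
      au_coords = au_mapper.predict(X)

      # Step 2: classify expressions from AU-space coordinates.
      clf = SVC(kernel='rbf').fit(au_coords, y)
      print(clf.predict(au_mapper.predict(X[:3])))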

  9. A differential neural response to threatening and non-threatening negative facial expressions in paranoid and non-paranoid schizophrenics.

    PubMed

    Phillips, M L; Williams, L; Senior, C; Bullmore, E T; Brammer, M J; Andrew, C; Williams, S C; David, A S

    1999-11-08

    Several studies have demonstrated impaired facial expression recognition in schizophrenia. Few have examined the neural basis for this; none have compared the neural correlates of facial expression perception in different schizophrenic patient subgroups. We compared neural responses to facial expressions in 10 right-handed schizophrenic patients (five paranoid and five non-paranoid) and five normal volunteers using functional Magnetic Resonance Imaging (fMRI). In three 5-min experiments, subjects viewed alternating 30-s blocks of black-and-white facial expressions of either fear, anger or disgust contrasted with expressions of mild happiness. After scanning, subjects categorised each expression. All patients were less accurate in identifying expressions, and showed less activation to these stimuli than normals. Non-paranoids performed poorly in the identification task and failed to activate neural regions that are normally linked with perception of these stimuli. They categorised disgust as either anger or fear more frequently than paranoids, and demonstrated in response to disgust expressions activation in the amygdala, a region associated with perception of fearful faces. Paranoids were more accurate in recognising expressions, and demonstrated greater activation than non-paranoids to most stimuli. We provide the first evidence for a distinction between two schizophrenic patient subgroups on the basis of recognition of and neural response to different negative facial expressions.

  10. Survey on RGB, 3D, Thermal, and Multimodal Approaches for Facial Expression Recognition: History, Trends, and Affect-Related Applications.

    PubMed

    Corneanu, Ciprian Adrian; Simon, Marc Oliu; Cohn, Jeffrey F; Guerrero, Sergio Escalera

    2016-08-01

    Facial expressions are an important way through which humans interact socially. Building a system capable of automatically recognizing facial expressions from images and video has been an intense field of study in recent years. Interpreting such expressions remains challenging, and much research is needed about the way they relate to human affect. This paper presents a general overview of automatic RGB, 3D, thermal and multimodal facial expression analysis. We define a new taxonomy for the field, encompassing all steps from face detection to facial expression recognition, and describe and classify the state-of-the-art methods accordingly. We also present the important datasets and the benchmarking of the most influential methods. We conclude with a general discussion about trends, important questions and future lines of research.

  11. Deficits in motor abilities and developmental fractionation of imitation performance in high-functioning autism spectrum disorders.

    PubMed

    Biscaldi, Monica; Rauh, Reinhold; Irion, Lisa; Jung, Nikolai H; Mall, Volker; Fleischhaker, Christian; Klein, Christoph

    2014-07-01

    The co-occurrence of motor and imitation disabilities often characterises the spectrum of deficits seen in patients with autism spectrum disorders (ASD). Whether these seemingly separate deficits are inter-related and whether, in particular, motor deficits contribute to the expression of imitation deficits is the topic of the present study and was investigated by comparing these deficits' cross-sectional developmental trajectories. To that end, different components of motor performance assessed in the Zurich Neuromotor Assessment and imitation abilities for facial movements and non-meaningful gestures were tested in 70 subjects (aged 6-29 years), including 36 patients with high-functioning ASD and 34 age-matched typically developed (TD) participants. The results show robust deficits in probands with ASD in timed motor performance and in the quality of movement, which are all independent of age, with one exception. Only diadochokinesis improves moderately with increasing age in ASD probands. Imitation of facial movements and of non-meaningful hand, finger, hand finger gestures not related to social context or tool use is also impaired in ASD subjects, but in contrast to motor performance this deficit overall improves with age. A general imitation factor, extracted from the highly inter-correlated imitation tests, is differentially correlated with components of neuromotor performance in ASD and TD participants. By developmentally fractionating developmentally stable motor deficits from developmentally dynamic imitation deficits, we infer that imitation deficits are primarily cognitive in nature.

  12. Electromyographic Responses to Emotional Facial Expressions in 6-7 Year Olds with Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Deschamps, P. K. H.; Coppes, L.; Kenemans, J. L.; Schutter, D. J. L. G.; Matthys, W.

    2015-01-01

    This study aimed to examine facial mimicry in 6-7 year old children with autism spectrum disorder (ASD) and to explore whether facial mimicry was related to the severity of impairment in social responsiveness. Facial electromyographic activity in response to angry, fearful, sad and happy facial expressions was recorded in twenty 6-7 year old…

  13. (abstract) Synthesis of Speaker Facial Movements to Match Selected Speech Sequences

    NASA Technical Reports Server (NTRS)

    Scott, Kenneth C.

    1994-01-01

    We are developing a system for synthesizing image sequences that simulate the facial motion of a speaker. To perform this synthesis, we are pursuing two major areas of effort. We are developing the necessary computer graphics technology to synthesize a realistic image sequence of a person speaking selected speech sequences. Next, we are developing a model that expresses the relation between spoken phonemes and face/mouth shape. A subject is videotaped speaking an arbitrary text that contains the full list of desired database phonemes. The subject is videotaped from the front speaking normally, recording both audio and video detail simultaneously. Using the audio track, we identify the specific video frames on the tape relating to each spoken phoneme. From this range we digitize the video frame which represents the extreme of mouth motion/shape. Thus, we construct a database of images of face/mouth shape related to spoken phonemes. A selected audio speech sequence is recorded which is the basis for synthesizing a matching video sequence; the speaker need not be the same as used for constructing the database. The audio sequence is analyzed to determine the spoken phoneme sequence and the relative timing of the enunciation of those phonemes. Synthesizing an image sequence corresponding to the spoken phoneme sequence is accomplished using a graphics technique known as morphing. Image sequence keyframes necessary for this processing are based on the spoken phoneme sequence and timing. We have been successful in synthesizing the facial motion of a native English speaker for a small set of arbitrary speech segments. Our future work will focus on advancement of the face shape/phoneme model and independent control of facial features.
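
    The keyframe-plus-morphing pipeline can be caricatured in a few lines. A cross-dissolve stands in for true image morphing, and the phoneme symbols, timings, and keyframe store are hypothetical.

      import numpy as np

      # One mouth-shape keyframe image per phoneme (equal-size grayscale
      # arrays); in the real system these come from the digitized video.
      keyframes = {p: np.zeros((64, 64)) for p in ["sil", "M", "AA", "IY"]}

      def synthesize_frames(phonemes, onsets, fps=30):
          # Blend from each phoneme's keyframe toward the next, with the
          # number of frames set by the phonemes' relative timing.
          frames = []
          for (p0, t0), (p1, t1) in zip(zip(phonemes, onsets),
                                        zip(phonemes[1:], onsets[1:])):
              n = max(1, int((t1 - t0) * fps))
              for i in range(n):
                  w = i / n
                  frames.append((1 - w) * keyframes[p0] + w * keyframes[p1])
          return frames

      video = synthesize_frames(["sil", "M", "AA", "IY"], [0.0, 0.2, 0.35, 0.6])
      print(len(video))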

  14. Human Empathy, Personality and Experience Affect the Emotion Ratings of Dog and Human Facial Expressions.

    PubMed

    Kujala, Miiamaaria V; Somppi, Sanni; Jokela, Markus; Vainio, Outi; Parkkonen, Lauri

    2017-01-01

    Facial expressions are important for humans in communicating emotions to conspecifics and enhancing interpersonal understanding. Many muscles producing facial expressions in humans are also found in domestic dogs, but little is known about how humans perceive dog facial expressions, and which psychological factors influence people's perceptions. Here, we asked 34 observers to rate the valence, arousal, and the six basic emotions (happiness, sadness, surprise, disgust, fear, and anger/aggressiveness) from images of human and dog faces with Pleasant, Neutral and Threatening expressions. We investigated how the subjects' personality (the Big Five Inventory), empathy (Interpersonal Reactivity Index) and experience of dog behavior affect the ratings of dog and human faces. Ratings of both species followed similar general patterns: human subjects classified dog facial expressions from pleasant to threatening very similarly to human facial expressions. Subjects with higher emotional empathy evaluated Threatening faces of both species as more negative in valence and higher in anger/aggressiveness. More empathetic subjects also rated the happiness of Pleasant humans but not dogs higher, and they were quicker in their valence judgments of Pleasant human, Threatening human and Threatening dog faces. Experience with dogs correlated positively with ratings of Pleasant and Neutral dog faces. Personality also had a minor effect on the ratings of Pleasant and Neutral faces in both species. The results imply that humans perceive human and dog facial expressions in a similar manner, and that the perception of both species is influenced by psychological factors of the evaluators. Empathy in particular affects both the speed and intensity of rating dogs' emotional facial expressions.

  15. Support vector machine-based facial-expression recognition method combining shape and appearance

    NASA Astrophysics Data System (ADS)

    Han, Eun Jung; Kang, Byung Jun; Park, Kang Ryoung; Lee, Sangyoun

    2010-11-01

    Facial expression recognition can be widely used for various applications, such as emotion-based human-machine interaction, intelligent robot interfaces, face recognition robust to expression variation, etc. Previous studies have been classified as either shape- or appearance-based recognition. The shape-based method has the disadvantage that individual variance in facial feature points exists even across similar expressions, which can reduce recognition accuracy. The appearance-based method has a limitation in that the textural information of the face is very sensitive to variations in illumination. To overcome these problems, a new facial-expression recognition method is proposed, which combines both shape and appearance information based on the support vector machine (SVM). This research is novel in the following three ways as compared to previous works. First, the facial feature points are automatically detected by using an active appearance model. From these, shape-based recognition is performed by using the ratios between the facial feature points, based on the facial action coding system. Second, an SVM, which is trained to recognize same and different expression classes, is proposed to combine the two matching scores obtained from the shape- and appearance-based recognitions. Finally, a single SVM is trained to discriminate four different expressions: neutral, smile, anger, and scream. By determining the expression of the input facial image whose SVM output is at a minimum, the accuracy of the expression recognition is much enhanced. The experimental results showed that the recognition accuracy of the proposed method was better than that of previous research and other fusion methods.
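
    A minimal sketch of the final classification stage, with placeholder score vectors standing in for the shape-based ratios and appearance-based matching scores described above:

      import numpy as np
      from sklearn.svm import SVC

      rng = np.random.default_rng(2)
      # For each sample: a shape-based and an appearance-based matching
      # score, plus its expression label (0=neutral, 1=smile, 2=anger,
      # 3=scream, following the paper's four classes).
      scores = rng.random(size=(400, 2))
      labels = rng.integers(0, 4, size=400)

      # A single SVM operating on the fused (shape, appearance) score
      # vector separates the four expressions.
      clf = SVC(kernel='rbf').fit(scores, labels)
      print(clf.predict(scores[:5]))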

  16. Human Empathy, Personality and Experience Affect the Emotion Ratings of Dog and Human Facial Expressions

    PubMed Central

    Kujala, Miiamaaria V.; Somppi, Sanni; Jokela, Markus; Vainio, Outi; Parkkonen, Lauri

    2017-01-01

    Facial expressions are important for humans in communicating emotions to conspecifics and enhancing interpersonal understanding. Many of the muscles producing facial expressions in humans are also found in domestic dogs, but little is known about how humans perceive dog facial expressions and which psychological factors influence those perceptions. Here, we asked 34 observers to rate the valence, arousal, and the six basic emotions (happiness, sadness, surprise, disgust, fear, and anger/aggressiveness) from images of human and dog faces with Pleasant, Neutral and Threatening expressions. We investigated how the subjects' personality (the Big Five Inventory), empathy (Interpersonal Reactivity Index) and experience of dog behavior affect the ratings of dog and human faces. Ratings of both species followed similar general patterns: human subjects classified dog facial expressions from pleasant to threatening very similarly to human facial expressions. Subjects with higher emotional empathy evaluated Threatening faces of both species as more negative in valence and higher in anger/aggressiveness. More empathetic subjects also rated Pleasant human faces, but not dog faces, as happier, and they were quicker in their valence judgments of Pleasant human, Threatening human and Threatening dog faces. Experience with dogs correlated positively with ratings of Pleasant and Neutral dog faces. Personality also had a minor effect on the ratings of Pleasant and Neutral faces in both species. The results imply that humans perceive human and dog facial expressions in a similar manner, and that the perception of both species is influenced by psychological factors of the evaluators. Empathy, in particular, affects both the speed and intensity of ratings of dogs' emotional facial expressions. PMID:28114335

  17. Social Risk and Depression: Evidence from Manual and Automatic Facial Expression Analysis

    PubMed Central

    Girard, Jeffrey M.; Cohn, Jeffrey F.; Mahoor, Mohammad H.; Mavadati, Seyedmohammad; Rosenwald, Dean P.

    2014-01-01

    This study investigated the relationship between change over time in severity of depression symptoms and facial expression. Depressed participants were followed over the course of treatment and video recorded during a series of clinical interviews. Facial expressions were analyzed from the video using both manual and automatic systems. Automatic and manual coding were highly consistent for FACS action units and showed similar effects for change over time in depression severity. For both systems, when symptom severity was high, participants made more facial expressions associated with contempt, smiled less, and those smiles that occurred were more likely to be accompanied by facial actions associated with contempt. These results are consistent with the “social risk hypothesis” of depression. According to this hypothesis, when symptoms are severe, depressed participants withdraw from other people in order to protect themselves from anticipated rejection, scorn, and social exclusion. As their symptoms fade, participants send more signals indicating a willingness to affiliate. The finding that automatic facial expression analysis was both consistent with manual coding and produced the same pattern of depression effects suggests that automatic facial expression analysis may be ready for use in behavioral and clinical science. PMID:24598859
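
    The reported consistency between manual and automatic FACS coding could be quantified, for example, with Cohen's kappa on frame-level action-unit judgments; the abstract does not name its agreement metric, so the sketch below (with invented codings) is purely illustrative.

      from sklearn.metrics import cohen_kappa_score

      # Hypothetical per-frame presence codes for one action unit (e.g. AU12).
      manual_coding    = [1, 0, 0, 1, 1, 0, 1, 0, 0, 1]
      automatic_coding = [1, 0, 1, 1, 1, 0, 1, 0, 0, 1]

      kappa = cohen_kappa_score(manual_coding, automatic_coding)
      print(f"frame-level agreement (Cohen's kappa): {kappa:.2f}")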

  18. Facial Emotion Recognition and Expression in Parkinson’s Disease: An Emotional Mirror Mechanism?

    PubMed Central

    Ricciardi, Lucia; Visco-Comandini, Federica; Erro, Roberto; Morgante, Francesca; Bologna, Matteo; Fasano, Alfonso; Ricciardi, Diego; Edwards, Mark J.; Kilner, James

    2017-01-01

    Background and aim: Parkinson's disease (PD) patients have impairment of facial expressivity (hypomimia) and difficulties in interpreting the emotional facial expressions produced by others, especially for aversive emotions. We aimed to evaluate the ability to produce facial emotional expressions and to recognize facial emotional expressions produced by others in a group of PD patients and a group of healthy participants, in order to explore the relationship between these two abilities and any differences between the two groups. Methods: Twenty non-demented, non-depressed PD patients and twenty healthy participants (HC) matched for demographic characteristics were studied. The ability to recognize emotional facial expressions was assessed with the Ekman 60-faces test (emotion recognition task). Participants were video-recorded while posing facial expressions of six primary emotions (happiness, sadness, surprise, disgust, fear and anger). The most expressive pictures for each emotion were derived from the videos. Ten healthy raters were asked to look at the pictures displayed on a computer screen in pseudo-random fashion and to identify the emotional label in a six-forced-choice response format (emotion expressivity task). Reaction time (RT) and accuracy of responses were recorded. At the end of each trial, participants rated their confidence in the perceived accuracy of their response. Results: For emotion recognition, PD patients reported lower scores than HC for the Ekman total score (p<0.001) and for the single-emotion sub-scores happiness, fear, anger, sadness (p<0.01) and surprise (p = 0.02). In the facial emotion expressivity task, PD and HC significantly differed in the total score (p = 0.05) and in the sub-scores for happiness, sadness and anger (all p<0.001). RT and level of confidence showed significant differences between PD and HC for the same emotions. There was a significant positive correlation between facial emotion recognition and expressivity in both groups; the correlation was even stronger when ranking emotions from best recognized to worst (R = 0.75, p = 0.004). Conclusions: PD patients showed difficulties both in recognizing emotional facial expressions produced by others and in posing facial emotional expressions, compared with healthy subjects. The linear correlation between recognition and expression in both experimental groups suggests that the two mechanisms share a common system, which could be deteriorated in patients with PD. These results open new clinical and rehabilitation perspectives. PMID:28068393

  19. Static facial expression recognition with convolution neural networks

    NASA Astrophysics Data System (ADS)

    Zhang, Feng; Chen, Zhong; Ouyang, Chao; Zhang, Yifei

    2018-03-01

    Facial expression recognition is currently an active research topic in the fields of computer vision, pattern recognition and artificial intelligence. In this paper, we develop a convolutional neural network (CNN) for classifying human emotions from static facial expressions into one of seven facial emotion categories. We pre-train our CNN model on the combined FER2013 dataset, formed by merging its training, validation and test sets, and fine-tune on the extended Cohn-Kanade database. In order to reduce overfitting, we utilize several techniques including dropout and batch normalization in addition to data augmentation. According to the experimental results, our CNN model has excellent classification performance and robustness for facial expression recognition.
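
    A minimal PyTorch sketch of a CNN of the kind described, with batch normalization and dropout for regularization; the layer sizes are illustrative guesses, not the authors' architecture, though the 48x48 grayscale input matches FER2013.

      import torch
      import torch.nn as nn

      class ExpressionCNN(nn.Module):
          def __init__(self, n_classes: int = 7):
              super().__init__()
              self.features = nn.Sequential(
                  nn.Conv2d(1, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
                  nn.MaxPool2d(2),   # 48x48 -> 24x24
                  nn.Conv2d(32, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
                  nn.MaxPool2d(2),   # 24x24 -> 12x12
              )
              self.classifier = nn.Sequential(
                  nn.Flatten(),
                  nn.Linear(64 * 12 * 12, 128), nn.ReLU(),
                  nn.Dropout(0.5),   # one of the regularizers named above
                  nn.Linear(128, n_classes),
              )

          def forward(self, x):
              return self.classifier(self.features(x))

      model = ExpressionCNN()
      logits = model(torch.randn(8, 1, 48, 48))  # batch of 48x48 grayscale faces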

  20. Association between facial expression and PTSD symptoms among young children exposed to the Great East Japan Earthquake: a pilot study.

    PubMed

    Fujiwara, Takeo; Mizuki, Rie; Miki, Takahiro; Chemtob, Claude

    2015-01-01

    "Emotional numbing" is a symptom of post-traumatic stress disorder (PTSD) characterized by a loss of interest in usually enjoyable activities, feeling detached from others, and an inability to express a full range of emotions. Emotional numbing is usually assessed through self-report, and is particularly difficult to ascertain among young children. We conducted a pilot study to explore the use of facial expression ratings in response to a comedy video clip to assess emotional reactivity among preschool children directly exposed to the Great East Japan Earthquake. This study included 23 child participants. Child PTSD symptoms were measured using a modified version of the Parent's Report of the Child's Reaction to Stress scale. Children were filmed while watching a 2-min video compilation of natural scenes ('baseline video') followed by a 2-min video clip from a television comedy ('comedy video'). Children's facial expressions were processed the using Noldus FaceReader software, which implements the Facial Action Coding System (FACS). We investigated the association between PTSD symptom scores and facial emotion reactivity using linear regression analysis. Children with higher PTSD symptom scores showed a significantly greater proportion of neutral facial expressions, controlling for sex, age, and baseline facial expression (p < 0.05). This pilot study suggests that facial emotion reactivity, measured using facial expression recognition software, has the potential to index emotional numbing in young children. This pilot study adds to the emerging literature on using experimental psychopathology methods to characterize children's reactions to disasters.

  1. Association between facial expression and PTSD symptoms among young children exposed to the Great East Japan Earthquake: a pilot study

    PubMed Central

    Fujiwara, Takeo; Mizuki, Rie; Miki, Takahiro; Chemtob, Claude

    2015-01-01

    “Emotional numbing” is a symptom of post-traumatic stress disorder (PTSD) characterized by a loss of interest in usually enjoyable activities, feeling detached from others, and an inability to express a full range of emotions. Emotional numbing is usually assessed through self-report, and is particularly difficult to ascertain among young children. We conducted a pilot study to explore the use of facial expression ratings in response to a comedy video clip to assess emotional reactivity among preschool children directly exposed to the Great East Japan Earthquake. This study included 23 child participants. Child PTSD symptoms were measured using a modified version of the Parent’s Report of the Child’s Reaction to Stress scale. Children were filmed while watching a 2-min video compilation of natural scenes (‘baseline video’) followed by a 2-min video clip from a television comedy (‘comedy video’). Children’s facial expressions were processed using the Noldus FaceReader software, which implements the Facial Action Coding System (FACS). We investigated the association between PTSD symptom scores and facial emotion reactivity using linear regression analysis. Children with higher PTSD symptom scores showed a significantly greater proportion of neutral facial expressions, controlling for sex, age, and baseline facial expression (p < 0.05). This pilot study suggests that facial emotion reactivity, measured using facial expression recognition software, has the potential to index emotional numbing in young children. This pilot study adds to the emerging literature on using experimental psychopathology methods to characterize children’s reactions to disasters. PMID:26528206

  2. Computer-analyzed facial expression as a surrogate marker for autism spectrum social core symptoms.

    PubMed

    Owada, Keiho; Kojima, Masaki; Yassin, Walid; Kuroda, Miho; Kawakubo, Yuki; Kuwabara, Hitoshi; Kano, Yukiko; Yamasue, Hidenori

    2018-01-01

    To develop novel interventions for autism spectrum disorder (ASD) core symptoms, valid, reliable, and sensitive longitudinal outcome measures are required for detecting symptom change over time. Here, we tested whether a computerized analysis of quantitative facial expression measures could act as a marker for core ASD social symptoms. Facial expression intensity values during a semi-structured socially interactive situation extracted from the Autism Diagnostic Observation Schedule (ADOS) were quantified by dedicated software in 18 high-functioning adult males with ASD. Controls were 17 age-, gender-, parental socioeconomic background-, and intellectual level-matched typically developing (TD) individuals. Statistical analyses determined whether values representing the strength and variability of each facial expression element differed significantly between the ASD and TD groups and whether they correlated with ADOS reciprocal social interaction scores. Compared with the TD controls, facial expressions in the ASD group appeared more "Neutral" (d = 1.02, P = 0.005, P_FDR < 0.05) with less variation in Neutral expression (d = 1.08, P = 0.003, P_FDR < 0.05). Their expressions were also less "Happy" (d = -0.78, P = 0.038, P_FDR > 0.05) with lower variability in Happy expression (d = 1.10, P = 0.003, P_FDR < 0.05). Moreover, the stronger Neutral facial expressions in the ASD participants were positively correlated with poorer ADOS reciprocal social interaction scores (ρ = 0.48, P = 0.042). These findings indicate that our method for quantitatively measuring reduced facial expressivity during social interactions can be a promising marker for core ASD social symptoms.
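
    The statistics reported here (Cohen's d for group contrasts, with false-discovery-rate control across the family of expression measures) can be reproduced in outline as follows; the data are synthetic and the exact pipeline is an assumption.

      import numpy as np
      from scipy import stats
      from statsmodels.stats.multitest import multipletests

      def cohens_d(a, b):
          pooled_sd = np.sqrt((np.var(a, ddof=1) + np.var(b, ddof=1)) / 2)
          return (np.mean(a) - np.mean(b)) / pooled_sd

      rng = np.random.default_rng(1)
      asd = rng.normal(0.60, 0.10, 18)  # e.g. "Neutral" intensity, ASD group
      td  = rng.normal(0.50, 0.10, 17)  # matched TD controls

      d = cohens_d(asd, td)
      t, p = stats.ttest_ind(asd, td)

      # Benjamini-Hochberg FDR across the four reported comparisons.
      pvals = [0.005, 0.003, 0.038, 0.003]
      reject, p_fdr, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")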

  3. Impaired social brain network for processing dynamic facial expressions in autism spectrum disorders.

    PubMed

    Sato, Wataru; Toichi, Motomi; Uono, Shota; Kochiyama, Takanori

    2012-08-13

    Impairment of social interaction via facial expressions represents a core clinical feature of autism spectrum disorders (ASD). However, the neural correlates of this dysfunction remain unidentified. Because this dysfunction is manifested in real-life situations, we hypothesized that the observation of dynamic, compared with static, facial expressions would reveal abnormal brain functioning in individuals with ASD. We presented dynamic and static facial expressions of fear and happiness to individuals with high-functioning ASD and to age- and sex-matched typically developing controls and recorded their brain activities using functional magnetic resonance imaging (fMRI). Regional analysis revealed reduced activation of several brain regions in the ASD group compared with controls in response to dynamic versus static facial expressions, including the middle temporal gyrus (MTG), fusiform gyrus, amygdala, medial prefrontal cortex, and inferior frontal gyrus (IFG). Dynamic causal modeling analyses revealed that bi-directional effective connectivity involving the primary visual cortex-MTG-IFG circuit was enhanced in response to dynamic as compared with static facial expressions in the control group. Group comparisons revealed that all these modulatory effects were weaker in the ASD group than in the control group. These results suggest that weak activity and connectivity of the social brain network underlie the impairment in social interaction involving dynamic facial expressions in individuals with ASD.

  4. An optimized ERP brain-computer interface based on facial expression changes.

    PubMed

    Jin, Jing; Daly, Ian; Zhang, Yu; Wang, Xingyu; Cichocki, Andrzej

    2014-06-01

    Interferences from spatially adjacent non-target stimuli are known to evoke event-related potentials (ERPs) during non-target flashes and, therefore, lead to false positives. This phenomenon is commonly seen in visual attention-based brain-computer interfaces (BCIs) using conspicuous stimuli and is known to adversely affect the performance of BCI systems. Although users try to focus on the target stimulus, they cannot help but be affected by conspicuous changes of the stimuli (such as flashes or the presentation of images) adjacent to the target stimulus. Furthermore, subjects have reported that conspicuous stimuli made them tired and annoyed. In view of this, the aim of this study was to reduce adjacent interference, annoyance and fatigue using a new stimulus presentation pattern based upon facial expression changes. Our goal was not to design a new pattern which could evoke larger ERPs than the face pattern, but to design a new pattern which could reduce adjacent interference, annoyance and fatigue, and evoke ERPs as good as those observed during the face pattern. Positive facial expressions could be changed to negative facial expressions by minor changes to the original facial image. Although the changes are minor, the contrast is big enough to evoke strong ERPs. In this paper, a facial expression change pattern between positive and negative facial expressions was used to attempt to minimize interference effects. This was compared against two different conditions: a shuffled pattern containing the same shapes and colours as the facial expression change pattern, but without the semantic content associated with a change in expression, and a face versus no face pattern. Comparisons were made in terms of classification accuracy and information transfer rate as well as user-supplied subjective measures. The results showed that interferences from adjacent stimuli, annoyance and the fatigue experienced by the subjects could be reduced significantly (p < 0.05) by using the facial expression change patterns in comparison with the face pattern. The offline results show that the classification accuracy of the facial expression change pattern was significantly better than that of the shuffled pattern (p < 0.05) and the face pattern (p < 0.05). The facial expression change pattern presented in this paper reduced interference from adjacent stimuli and decreased the fatigue and annoyance experienced by BCI users significantly (p < 0.05) compared to the face pattern.

  5. An optimized ERP brain-computer interface based on facial expression changes

    NASA Astrophysics Data System (ADS)

    Jin, Jing; Daly, Ian; Zhang, Yu; Wang, Xingyu; Cichocki, Andrzej

    2014-06-01

    Objective. Interferences from spatially adjacent non-target stimuli are known to evoke event-related potentials (ERPs) during non-target flashes and, therefore, lead to false positives. This phenomenon is commonly seen in visual attention-based brain-computer interfaces (BCIs) using conspicuous stimuli and is known to adversely affect the performance of BCI systems. Although users try to focus on the target stimulus, they cannot help but be affected by conspicuous changes of the stimuli (such as flashes or the presentation of images) adjacent to the target stimulus. Furthermore, subjects have reported that conspicuous stimuli made them tired and annoyed. In view of this, the aim of this study was to reduce adjacent interference, annoyance and fatigue using a new stimulus presentation pattern based upon facial expression changes. Our goal was not to design a new pattern which could evoke larger ERPs than the face pattern, but to design a new pattern which could reduce adjacent interference, annoyance and fatigue, and evoke ERPs as good as those observed during the face pattern. Approach. Positive facial expressions could be changed to negative facial expressions by minor changes to the original facial image. Although the changes are minor, the contrast is big enough to evoke strong ERPs. In this paper, a facial expression change pattern between positive and negative facial expressions was used to attempt to minimize interference effects. This was compared against two different conditions: a shuffled pattern containing the same shapes and colours as the facial expression change pattern, but without the semantic content associated with a change in expression, and a face versus no face pattern. Comparisons were made in terms of classification accuracy and information transfer rate as well as user-supplied subjective measures. Main results. The results showed that interferences from adjacent stimuli, annoyance and the fatigue experienced by the subjects could be reduced significantly (p < 0.05) by using the facial expression change patterns in comparison with the face pattern. The offline results show that the classification accuracy of the facial expression change pattern was significantly better than that of the shuffled pattern (p < 0.05) and the face pattern (p < 0.05). Significance. The facial expression change pattern presented in this paper reduced interference from adjacent stimuli and decreased the fatigue and annoyance experienced by BCI users significantly (p < 0.05) compared to the face pattern.
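
    Both records above evaluate the stimulus patterns by classification accuracy and information transfer rate. The abstracts do not give their ITR formula, but the standard Wolpaw definition used throughout the BCI literature can be computed as below.

      import math

      def itr_bits_per_min(n_classes: int, accuracy: float, trial_sec: float) -> float:
          """Wolpaw information transfer rate, scaled to bits per minute."""
          n, p = n_classes, accuracy
          if p >= 1.0:
              bits = math.log2(n)
          elif p <= 0.0:
              bits = 0.0
          else:
              bits = (math.log2(n) + p * math.log2(p)
                      + (1 - p) * math.log2((1 - p) / (n - 1)))
          return bits * 60.0 / trial_sec

      # Illustrative values only: a 12-class speller at 90% accuracy,
      # one selection every 10 seconds.
      print(itr_bits_per_min(n_classes=12, accuracy=0.90, trial_sec=10.0))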

  6. [Association between intelligence development and facial expression recognition ability in children with autism spectrum disorder].

    PubMed

    Pan, Ning; Wu, Gui-Hua; Zhang, Ling; Zhao, Ya-Fen; Guan, Han; Xu, Cai-Juan; Jing, Jin; Jin, Yu

    2017-03-01

    To investigate the features of intelligence development and facial expression recognition ability, and the association between them, in children with autism spectrum disorder (ASD). A total of 27 ASD children aged 6-16 years (ASD group, full intelligence quotient >70) and age- and gender-matched normally developing children (control group) were enrolled. The Wechsler Intelligence Scale for Children, Fourth Edition and Chinese Static Facial Expression Photos were used for the intelligence evaluation and the facial expression recognition test. Compared with the control group, the ASD group had significantly lower scores for full intelligence quotient, verbal comprehension index, perceptual reasoning index (PRI), processing speed index (PSI), and working memory index (WMI) (P<0.05). The ASD group also had a significantly lower overall accuracy rate of facial expression recognition and significantly lower accuracy rates for the recognition of happy, angry, sad, and frightened expressions than the control group (P<0.05). In the ASD group, the overall accuracy rate of facial expression recognition and the accuracy rates for the recognition of happy and frightened expressions were positively correlated with PRI (r=0.415, 0.455, and 0.393, respectively; P<0.05). The accuracy rate for the recognition of angry expressions was positively correlated with WMI (r=0.397; P<0.05). ASD children have delayed intelligence development and impaired expression recognition ability compared with normally developing children. Perceptual reasoning and working memory abilities are positively correlated with expression recognition ability, which suggests that insufficient perceptual reasoning and working memory abilities may be important factors affecting facial expression recognition in ASD children.

  7. Exposure to the self-face facilitates identification of dynamic facial expressions: influences on individual differences.

    PubMed

    Li, Yuan Hang; Tottenham, Nim

    2013-04-01

    A growing literature suggests that the self-face is involved in processing the facial expressions of others. The authors experimentally activated self-face representations to assess the effects of this activation on the recognition of dynamically emerging facial expressions of others. They exposed participants to videos of either their own faces (self-face prime) or faces of others (nonself-face prime) prior to a facial expression judgment task. Their results show that experimentally activating self-face representations results in earlier recognition of dynamically emerging facial expressions. As a group, participants in the self-face prime condition recognized expressions earlier (when less affective perceptual information was available) compared to participants in the nonself-face prime condition. There were individual differences in performance, such that poorer expression identification was associated with higher autism traits (in this neurocognitively healthy sample). However, when randomized into the self-face prime condition, participants with high autism traits performed as well as those with low autism traits. Taken together, these data suggest that the ability to recognize facial expressions in others is linked with the internal representations of our own faces. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  8. Outcomes of Direct Facial-to-Hypoglossal Neurorrhaphy with Parotid Release

    PubMed Central

    Jacobson, Joel; Rihani, Jordan; Lin, Karen; Miller, Phillip J.; Roland, J. Thomas

    2010-01-01

    Lesions of the temporal bone and cerebellopontine angle and their management can result in facial nerve paralysis. When the nerve deficit is not amenable to primary end-to-end repair or interpositional grafting, nerve transposition can be used to accomplish the goals of restoring facial tone, symmetry, and voluntary movement. The most widely used nerve transposition is the hypoglossal-facial nerve anastomosis, of which there are several technical variations. Previously we described a technique of single end-to-side anastomosis using intratemporal facial nerve mobilization and parotid release. This study further characterizes the results of this technique with a larger patient cohort and longer-term follow-up. The design of this study is a retrospective chart review and the setting is an academic tertiary care referral center. Twenty-one patients with facial nerve paralysis from proximal nerve injury at the cerebellopontine angle underwent facial-hypoglossal neurorrhaphy with parotid release. Outcomes were assessed using the Repaired Facial Nerve Recovery Scale, questionnaires, and patient photographs. Of the 21 patients, 18 were successfully reinnervated to a score of a B or C on the recovery scale, which equates to good oral and ocular sphincter closure with minimal mass movement. The mean duration of paralysis between injury and repair was 12.1 months (range 0 to 36 months) with a mean follow-up of 55 months. There were no cases of hemiglossal atrophy, paralysis, or subjective dysfunction. Direct facial-hypoglossal neurorrhaphy with parotid release achieved a functional reinnervation and good clinical outcome in the majority of patients, with minimal lingual morbidity. This technique is a viable option for facial reanimation and should be strongly considered as a surgical option for the paralyzed face. PMID:22451794

  9. Outcomes of Direct Facial-to-Hypoglossal Neurorrhaphy with Parotid Release.

    PubMed

    Jacobson, Joel; Rihani, Jordan; Lin, Karen; Miller, Phillip J; Roland, J Thomas

    2011-01-01

    Lesions of the temporal bone and cerebellopontine angle and their management can result in facial nerve paralysis. When the nerve deficit is not amenable to primary end-to-end repair or interpositional grafting, nerve transposition can be used to accomplish the goals of restoring facial tone, symmetry, and voluntary movement. The most widely used nerve transposition is the hypoglossal-facial nerve anastomosis, of which there are several technical variations. Previously we described a technique of single end-to-side anastomosis using intratemporal facial nerve mobilization and parotid release. This study further characterizes the results of this technique with a larger patient cohort and longer-term follow-up. The design of this study is a retrospective chart review and the setting is an academic tertiary care referral center. Twenty-one patients with facial nerve paralysis from proximal nerve injury at the cerebellopontine angle underwent facial-hypoglossal neurorrhaphy with parotid release. Outcomes were assessed using the Repaired Facial Nerve Recovery Scale, questionnaires, and patient photographs. Of the 21 patients, 18 were successfully reinnervated to a score of a B or C on the recovery scale, which equates to good oral and ocular sphincter closure with minimal mass movement. The mean duration of paralysis between injury and repair was 12.1 months (range 0 to 36 months) with a mean follow-up of 55 months. There were no cases of hemiglossal atrophy, paralysis, or subjective dysfunction. Direct facial-hypoglossal neurorrhaphy with parotid release achieved a functional reinnervation and good clinical outcome in the majority of patients, with minimal lingual morbidity. This technique is a viable option for facial reanimation and should be strongly considered as a surgical option for the paralyzed face.

  10. Misinterpretation of Facial Expressions of Emotion in Verbal Adults with Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Eack, Shaun M.; Mazefsky, Carla A.; Minshew, Nancy J.

    2015-01-01

    Facial emotion perception is significantly affected in autism spectrum disorder, yet little is known about how individuals with autism spectrum disorder misinterpret facial expressions that result in their difficulty in accurately recognizing emotion in faces. This study examined facial emotion perception in 45 verbal adults with autism spectrum…

  11. Representational momentum in dynamic facial expressions is modulated by the level of expressed pain: Amplitude and direction effects.

    PubMed

    Prigent, Elise; Amorim, Michel-Ange; de Oliveira, Armando Mónica

    2018-01-01

    Humans have developed a specific capacity to rapidly perceive and anticipate other people's facial expressions so as to get an immediate impression of their emotional state of mind. We carried out two experiments to examine the perceptual and memory dynamics of facial expressions of pain. In the first experiment, we investigated how people estimate other people's levels of pain based on the perception of various dynamic facial expressions; these differ both in the number and in the intensity of activated action units. A second experiment used a representational momentum (RM) paradigm to study the emotional anticipation (memory bias) elicited by the same facial expressions of pain studied in Experiment 1. Our results highlighted the relationship between the level of perceived pain (in Experiment 1) and the direction and magnitude of memory bias (in Experiment 2): when perceived pain increases, the memory bias tends to be reduced (if positive) and ultimately becomes negative. Dynamic facial expressions of pain may reenact an "immediate perceptual history" in the perceiver before leading to an emotional anticipation of the agent's upcoming state. Thus, a subtle facial expression of pain (i.e., a low contraction around the eyes) that leads to a significant positive anticipation can be considered an adaptive process, one through which we can swiftly and involuntarily detect other people's pain.

  12. Aberrant patterns of visual facial information usage in schizophrenia.

    PubMed

    Clark, Cameron M; Gosselin, Frédéric; Goghari, Vina M

    2013-05-01

    Deficits in facial emotion perception have been linked to poorer functional outcome in schizophrenia. However, the relationship between abnormal emotion perception and functional outcome remains poorly understood. To better understand the nature of facial emotion perception deficits in schizophrenia, we used the Bubbles Facial Emotion Perception Task to identify differences in usage of visual facial information in schizophrenia patients (n = 20) and controls (n = 20), when differentiating between angry and neutral facial expressions. As hypothesized, schizophrenia patients required more facial information than controls to accurately differentiate between angry and neutral facial expressions, and they relied on different facial features and spatial frequencies to differentiate these facial expressions. Specifically, schizophrenia patients underutilized the eye regions, overutilized the nose and mouth regions, and virtually ignored information presented at the lowest levels of spatial frequency. In addition, a post hoc one-tailed t test revealed a positive relationship of moderate strength between the degree of divergence from "normal" visual facial information usage in the eye region and lower overall social functioning. These findings provide direct support for aberrant patterns of visual facial information usage in schizophrenia in differentiating between socially salient emotional states. © 2013 American Psychological Association

  13. Facial expression system on video using Widrow-Hoff

    NASA Astrophysics Data System (ADS)

    Jannah, M.; Zarlis, M.; Mawengkang, H.

    2018-03-01

    Facial expression recognition is an active and interesting research area. It connects human feelings to computer applications such as human-computer interaction, data compression, facial animation, and face detection from video. The purpose of this research is to create a facial expression system that captures images from a video camera. The system uses the Widrow-Hoff learning rule to train and test images with an Adaptive Linear Neuron (ADALINE) approach. System performance is evaluated by two parameters: detection rate and false-positive rate. The system's accuracy depends on the technique used and on the position of the faces in the training and testing images.
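
    The Widrow-Hoff (least-mean-squares) rule behind ADALINE updates a linear unit's weights in proportion to the prediction error. A minimal NumPy sketch, with toy feature vectors standing in for face-image features (not the paper's data):

      import numpy as np

      def train_adaline(X, y, lr=0.01, epochs=50):
          """Widrow-Hoff/LMS training of a single linear unit."""
          w, b = np.zeros(X.shape[1]), 0.0
          for _ in range(epochs):
              for xi, target in zip(X, y):
                  output = xi @ w + b      # linear activation during training
                  error = target - output
                  w += lr * error * xi     # the Widrow-Hoff update
                  b += lr * error
          return w, b

      rng = np.random.default_rng(0)
      X = rng.normal(size=(40, 16))        # toy stand-ins for image features
      y = np.sign(X[:, 0])                 # toy labels in {-1, +1}
      w, b = train_adaline(X, y)
      predictions = np.sign(X @ w + b)     # threshold applied at test time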

  14. Individual and Maturational Differences in Infant Expressivity.

    ERIC Educational Resources Information Center

    Field, Tiffany

    1989-01-01

    Reports that, even though young infants can discriminate among different facial expressions, there are individual differences in infants' expressivity and ability to produce and discriminate facial expressions. (PCB)

  15. Utility of optical facial feature and arm movement tracking systems to enable text communication in critically ill patients who cannot otherwise communicate.

    PubMed

    Muthuswamy, M B; Thomas, B N; Williams, D; Dingley, J

    2014-09-01

    Patients recovering from critical illness, especially those with critical illness related neuropathy, myopathy, or burns to the face, arms and hands, are often unable to communicate by writing, speech (due to tracheostomy) or lip reading. This may frustrate both patient and staff. Two low-cost movement tracking systems, based around a laptop webcam and a laser/optical gaming system sensor, were utilised as control inputs for on-screen text creation software, and both were evaluated as communication tools in volunteers. Two methods were used to control an on-screen cursor to create short sentences via an on-screen keyboard: (i) webcam-based facial feature tracking, and (ii) arm movement tracking by a laser/camera gaming sensor and modified software. Sixteen volunteers, with simulated tracheostomy and bandaged arms to simulate communication via gross movements of a burned limb, communicated 3 standard messages using each system (total 48 per system) in random sequence. Ten and 13 minor typographical errors occurred with the two systems respectively; however, all messages were comprehensible. The average speed of sentence formation was 81s (range 58-120) with the facial feature tracking system and 104s (range 60-160) with the arm movement tracking system (P<0.001, 2-tailed independent sample t-test). Both devices may be potentially useful communication aids for patients in general and burns critical care units who cannot communicate by conventional means, due to the nature of their injuries. Copyright © 2014 Elsevier Ltd and ISBI. All rights reserved.
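
    The general mechanism of the first system (webcam facial tracking driving an on-screen cursor) can be sketched with OpenCV as below; this is a simplified illustration, not the authors' software, and the display resolution, detector, and selection logic are assumptions.

      import cv2

      cascade = cv2.CascadeClassifier(
          cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
      cap = cv2.VideoCapture(0)
      screen_w, screen_h = 1920, 1080      # assumed display resolution

      while True:
          ok, frame = cap.read()
          if not ok:
              break
          gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
          faces = cascade.detectMultiScale(gray, 1.3, 5)
          for (x, y, w, h) in faces[:1]:
              cx, cy = x + w // 2, y + h // 2          # face centre in the frame
              cursor_x = int(cx / frame.shape[1] * screen_w)
              cursor_y = int(cy / frame.shape[0] * screen_h)
              # an on-screen keyboard would highlight the key under
              # (cursor_x, cursor_y) and select it after a dwell time
          if cv2.waitKey(1) == 27:                     # Esc to quit
              break
      cap.release()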

  16. Measuring facial expression of emotion.

    PubMed

    Wolf, Karsten

    2015-12-01

    Research into emotions has increased in recent decades, especially on the subject of recognition of emotions. However, studies of the facial expression of emotion were long compromised by technical problems with visible video analysis and electromyography in experimental settings; these have only recently been overcome. There have been new developments in the field of automated computerized facial recognition, allowing real-time identification of facial expressions in social environments. This review addresses three approaches to measuring facial expression of emotion and describes their specific contributions to understanding emotion in the healthy population and in persons with mental illness. Despite recent progress, studies on human emotions have been hindered by the lack of consensus on an emotion theory suited to examining the dynamic aspects of emotion and its expression. Studying the expression of emotion in patients with mental health conditions for diagnostic and therapeutic purposes will profit from theoretical and methodological progress.

  17. Role of electrical stimulation added to conventional therapy in patients with idiopathic facial (Bell) palsy.

    PubMed

    Tuncay, Figen; Borman, Pinar; Taşer, Burcu; Ünlü, İlhan; Samim, Erdal

    2015-03-01

    The aim of this study was to determine the efficacy of electrical stimulation when added to conventional physical therapy with regard to clinical and neurophysiologic changes in patients with Bell palsy. This was a randomized controlled trial. Sixty patients diagnosed with Bell palsy (39 right sided, 21 left sided) were included in the study. Patients were randomly divided into two therapy groups. Group 1 received physical therapy applying hot pack, facial expression exercises, and massage to the facial muscles, whereas group 2 received electrical stimulation treatment in addition to the physical therapy, 5 days per week for a period of 3 wks. Patients were evaluated clinically and electrophysiologically before treatment (at the fourth week of the palsy) and again 3 mos later. Outcome measures included the House-Brackmann scale and Facial Disability Index scores, as well as facial nerve latencies and amplitudes of compound muscle action potentials derived from the frontalis and orbicularis oris muscles. Twenty-nine men (48.3%) and 31 women (51.7%) with Bell palsy were included in the study. In group 1, 16 (57.1%) patients had no axonal degeneration and 12 (42.9%) had axonal degeneration, compared with 17 (53.1%) and 15 (46.9%) patients in group 2, respectively. The baseline House-Brackmann and Facial Disability Index scores were similar between the groups. At 3 mos after onset, the Facial Disability Index scores were improved similarly in both groups. The classification of patients according to the House-Brackmann scale revealed greater improvement in group 2 than in group 1. The mean motor nerve latencies and compound muscle action potential amplitudes of both facial muscles were statistically shorter in group 2, whereas only the mean motor latency of the frontalis muscle decreased in group 1. The addition of 3 wks of daily electrical stimulation shortly after facial palsy onset (4 wks) improved functional facial movements and electrophysiologic outcome measures at the 3-mo follow-up in patients with Bell palsy. Further research focused on determining the most effective dosage and length of intervention with electrical stimulation is warranted.

  18. Decoding facial blends of emotion: visual field, attentional and hemispheric biases.

    PubMed

    Ross, Elliott D; Shayya, Luay; Champlain, Amanda; Monnot, Marilee; Prodan, Calin I

    2013-12-01

    Most clinical research assumes that modulation of facial expressions is lateralized predominantly across the right-left hemiface. However, social psychological research suggests that facial expressions are organized predominantly across the upper-lower face. Because humans learn to cognitively control facial expression for social purposes, the lower face may display a false emotion, typically a smile, to enable approach behavior. In contrast, the upper face may leak a person's true feeling state by producing a brief facial blend of emotion, i.e. a different emotion on the upper versus lower face. Previous studies from our laboratory have shown that upper facial emotions are processed preferentially by the right hemisphere under conditions of directed attention if facial blends of emotion are presented tachistoscopically to the mid left and right visual fields. This paper explores how facial blends are processed within the four visual quadrants. The results, combined with our previous research, demonstrate that lower more so than upper facial emotions are perceived best when presented to the viewer's left and right visual fields just above the horizontal axis. Upper facial emotions are perceived best when presented to the viewer's left visual field just above the horizontal axis under conditions of directed attention. Thus, by gazing at a person's left ear, which also avoids the social stigma of eye-to-eye contact, one's ability to decode facial expressions should be enhanced. Published by Elsevier Inc.

  19. The Influence of Personality Traits on Emotion Expression in Bulimic Spectrum Disorders: A Pilot Study.

    PubMed

    Giner-Bartolomé, Cristina; Steward, Trevor; Wolz, Ines; Jiménez-Murcia, Susana; Granero, Roser; Tárrega, Salomé; Fernández-Formoso, José Antonio; Soriano-Mas, Carles; Menchón, José M; Fernández-Aranda, Fernando

    2016-07-01

    Facial expressions are critical in forming social bonds and in signalling one's emotional state to others. In eating disorder patients, impairments in facial emotion recognition have been associated with eating psychopathology severity. Little research however has been carried out on how bulimic spectrum disorder (BSD) patients spontaneously express emotions. Our aim was to investigate emotion expression in BSD patients and to explore the influence of personality traits. Our study comprised 28 BSD women and 15 healthy controls. Facial expressions were recorded while participants played a serious video game. Expressions of anger and joy were used as outcome measures. Overall, BSD participants displayed less facial expressiveness than controls. Among BSD women, expressions of joy were positively associated with reward dependence, novelty seeking and self-directedness, whereas expressions of anger were associated with lower self-directedness. Our findings suggest that specific personality traits are associated with altered emotion facial expression in patients with BSD. Copyright © 2016 John Wiley & Sons, Ltd and Eating Disorders Association.

  20. Cognitive penetrability and emotion recognition in human facial expressions

    PubMed Central

    Marchi, Francesco

    2015-01-01

    Do our background beliefs, desires, and mental images influence our perceptual experience of the emotions of others? In this paper, we will address the possibility of cognitive penetration (CP) of perceptual experience in the domain of social cognition. In particular, we focus on emotion recognition based on the visual experience of facial expressions. After introducing the current debate on CP, we review examples of perceptual adaptation for facial expressions of emotion. This evidence supports the idea that facial expressions are perceptually processed as wholes. That is, the perceptual system integrates lower-level facial features, such as eyebrow orientation, mouth angle etc., into facial compounds. We then present additional experimental evidence showing that in some cases, emotion recognition on the basis of facial expression is sensitive to and modified by the background knowledge of the subject. We argue that such sensitivity is best explained as a difference in the visual experience of the facial expression, not just as a modification of the judgment based on this experience. The difference in experience is characterized as the result of the interference of background knowledge with the perceptual integration process for faces. Thus, according to the best explanation, we have to accept CP in some cases of emotion recognition. Finally, we discuss a recently proposed mechanism for CP in the face-based recognition of emotion. PMID:26150796

  1. Facial reanimation with masseteric nerve: babysitter or permanent procedure? Preliminary results.

    PubMed

    Faria, Jose Carlos Marques; Scopel, Gean Paulo; Ferreira, Marcus Castro

    2010-01-01

    The authors present a series of 10 cases of complete unilateral facial paralysis submitted to (I) end-to-end microsurgical coaptation of the masseteric branch of the trigeminal nerve and distal branches of the paralyzed facial nerve, and (II) cross-face sural nerve graft. The ages of the patients ranged from 5 to 63 years (mean: 44.1 years), and 8 (80%) of the patients were females. The duration of paralysis was no longer than 18 months (mean: 9.7 months). Follow-up varied from 6 to 18 months (mean: 12.6 months). Initial voluntary facial movements were observed between 3 and 6 months postoperatively (mean: 4.3 months). All patients were able to produce the appearance of a smile when asked to clench their teeth. Comparing the definition of the nasolabial fold and the degree of movement of the modiolus on both sides of the face, the voluntary smile was considered symmetrical in 8 cases. Recovery of the capacity to blink spontaneously was not observed. However, 8 patients were able to reduce or suspend the application of artificial tears. The authors suggest consideration of masseteric-facial nerve coaptation, whether temporary (babysitter) or permanent, as the principal alternative for reconstruction of facial paralysis due to irreversible nerve lesion with less than 18 months of duration.

  2. Ectopic application of recombinant BMP-2 and BMP-4 can change patterning of developing chick facial primordia.

    PubMed

    Barlow, A J; Francis-West, P H

    1997-01-01

    The facial primordia initially consist of buds of undifferentiated mesenchyme, which give rise to a variety of tissues including cartilage, muscle and nerve. These must be arranged in a precise spatial order for correct function. The signals that control facial outgrowth and patterning are largely unknown. The bone morphogenetic proteins Bmp-2 and Bmp-4 are expressed in discrete regions at the distal tips of the early facial primordia suggesting possible roles for BMP-2 and BMP-4 during chick facial development. We show that expression of Bmp-4 and Bmp-2 is correlated with the expression of Msx-1 and Msx-2 and that ectopic application of BMP-2 and BMP-4 can activate Msx-1 and Msx-2 gene expression in the developing facial primordia. We correlate this activation of gene expression with changes in skeletal development. For example, activation of Msx-1 gene expression across the distal tip of the mandibular primordium is associated with an extension of Fgf-4 expression in the epithelium and bifurcation of Meckel's cartilage. In the maxillary primordium, extension of the normal domain of Msx-1 gene expression is correlated with extended epithelial expression of shh and bifurcation of the palatine bone. We also show that application of BMP-2 can increase cell proliferation of the mandibular primordia. Our data suggest that BMP-2 and BMP-4 are part of a signalling cascade that controls outgrowth and patterning of the facial primordia.

  3. [INVITED] Non-intrusive optical imaging of face to probe physiological traits in Autism Spectrum Disorder

    NASA Astrophysics Data System (ADS)

    Samad, Manar D.; Bobzien, Jonna L.; Harrington, John W.; Iftekharuddin, Khan M.

    2016-03-01

    Autism Spectrum Disorders (ASD) can impair non-verbal communication, including the variety and extent of facial expressions in social and interpersonal communication. These impairments may appear as differential traits in the physiology of facial muscles of an individual with ASD when compared to a typically developing individual. The differential traits in facial expressions, shown as facial muscle-specific changes (also known as 'facial oddity' in subjects with ASD), may be measured visually. However, this mode of measurement may not discern the subtlety in facial oddity distinctive to ASD. Earlier studies have used intrusive electrophysiological sensors on the facial skin to gauge facial muscle actions from quantitative physiological data. This study demonstrates, for the first time in the literature, novel quantitative measures for facial oddity recognition using non-intrusive facial imaging sensors such as video and 3D optical cameras. An Institutional Review Board (IRB)-approved pilot study was conducted on eight participants with ASD and eight typically developing participants in a control group, capturing their facial images in response to visual stimuli. The proposed computational techniques and statistical analyses reveal a higher mean level of action in the facial muscles of the ASD group versus the control group. The facial muscle-specific evaluation reveals intense yet asymmetric facial responses as facial oddity in participants with ASD. This finding may objectively define measurable differential markers in the facial expressions of individuals with ASD.

  4. Emotional Facial and Vocal Expressions during Story Retelling by Children and Adolescents with High-Functioning Autism

    ERIC Educational Resources Information Center

    Grossman, Ruth B.; Edelson, Lisa R.; Tager-Flusberg, Helen

    2013-01-01

    Purpose: People with high-functioning autism (HFA) have qualitative differences in facial expression and prosody production, which are rarely systematically quantified. The authors' goals were to qualitatively and quantitatively analyze prosody and facial expression productions in children and adolescents with HFA. Method: Participants were 22…

  5. Does Gaze Direction Modulate Facial Expression Processing in Children with Autism Spectrum Disorder?

    ERIC Educational Resources Information Center

    Akechi, Hironori; Senju, Atsushi; Kikuchi, Yukiko; Tojo, Yoshikuni; Osanai, Hiroo; Hasegawa, Toshikazu

    2009-01-01

    Two experiments investigated whether children with autism spectrum disorder (ASD) integrate relevant communicative signals, such as gaze direction, when decoding a facial expression. In Experiment 1, typically developing children (9-14 years old; n = 14) were faster at detecting a facial expression accompanying a gaze direction with a congruent…

  6. Preschooler's Faces in Spontaneous Emotional Contexts--How Well Do They Match Adult Facial Expression Prototypes?

    ERIC Educational Resources Information Center

    Gaspar, Augusta; Esteves, Francisco G.

    2012-01-01

    Prototypical facial expressions of emotion, also known as universal facial expressions, are the underpinnings of most research concerning recognition of emotions in both adults and children. Data on natural occurrences of these prototypes in natural emotional contexts are rare and difficult to obtain in adults. By recording naturalistic…

  7. Recognition of Facial Expressions and Prosodic Cues with Graded Emotional Intensities in Adults with Asperger Syndrome

    ERIC Educational Resources Information Center

    Doi, Hirokazu; Fujisawa, Takashi X.; Kanai, Chieko; Ohta, Haruhisa; Yokoi, Hideki; Iwanami, Akira; Kato, Nobumasa; Shinohara, Kazuyuki

    2013-01-01

    This study investigated the ability of adults with Asperger syndrome to recognize emotional categories of facial expressions and emotional prosodies with graded emotional intensities. The individuals with Asperger syndrome showed poorer recognition performance for angry and sad expressions from both facial and vocal information. The group…

  8. The Facial Expression Coding System (FACES): Development, Validation, and Utility

    ERIC Educational Resources Information Center

    Kring, Ann M.; Sloan, Denise M.

    2007-01-01

    This article presents information on the development and validation of the Facial Expression Coding System (FACES; A. M. Kring & D. Sloan, 1991). Grounded in a dimensional model of emotion, FACES provides information on the valence (positive, negative) of facial expressive behavior. In 5 studies, reliability and validity data from 13 diverse…

  9. Effectiveness of Teaching Naming Facial Expression to Children with Autism via Video Modeling

    ERIC Educational Resources Information Center

    Akmanoglu, Nurgul

    2015-01-01

    This study aims to examine the effectiveness of teaching naming emotional facial expression via video modeling to children with autism. Teaching the naming of emotions (happy, sad, scared, disgusted, surprised, feeling physical pain, and bored) was made by creating situations that lead to the emergence of facial expressions to children…

  10. Luminance sticker based facial expression recognition using discrete wavelet transform for physically disabled persons.

    PubMed

    Nagarajan, R; Hariharan, M; Satiyan, M

    2012-08-01

    Developing tools to assist physically disabled and immobilized people through facial expression is a challenging area of research and has attracted many researchers recently. In this paper, luminance-sticker-based facial expression recognition is proposed. Recognition of facial expression is carried out by employing the Discrete Wavelet Transform (DWT) as a feature extraction method. Different wavelet families with their different orders (db1 to db20, Coif1 to Coif5 and Sym2 to Sym8) are utilized to investigate their performance in recognizing facial expression and to evaluate their computational time. The standard deviation is computed for the coefficients of the first level of wavelet decomposition for every order of wavelet family. This standard deviation is used to form a set of feature vectors for classification. In this study, conventional validation and cross-validation are performed to evaluate the efficiency of the suggested feature vectors. Three different classifiers, namely Artificial Neural Network (ANN), k-Nearest Neighbor (kNN) and Linear Discriminant Analysis (LDA), are used to classify a set of eight facial expressions. The experimental results demonstrate that the proposed method gives very promising classification accuracies.
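
    The feature-extraction step described (standard deviation of first-level DWT coefficients, fed to a classifier) maps onto a few lines of Python; the images and labels below are synthetic placeholders rather than the sticker-marked recordings used in the study.

      import numpy as np
      import pywt
      from sklearn.neighbors import KNeighborsClassifier

      def dwt_std_features(image, wavelet="db4"):
          """Std. dev. of each first-level 2-D wavelet subband."""
          cA, (cH, cV, cD) = pywt.dwt2(image, wavelet)
          return np.array([band.std() for band in (cA, cH, cV, cD)])

      rng = np.random.default_rng(0)
      images = rng.normal(size=(80, 64, 64))   # stand-ins for face frames
      labels = rng.integers(0, 8, size=80)     # eight expression classes

      X = np.stack([dwt_std_features(img) for img in images])
      knn = KNeighborsClassifier(n_neighbors=3).fit(X, labels)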

  11. Towards a Transcription System of Sign Language for 3D Virtual Agents

    NASA Astrophysics Data System (ADS)

    Do Amaral, Wanessa Machado; de Martino, José Mario

    Accessibility is a growing concern in computer science. Since virtual information is mostly presented visually, it may seem that access for deaf people is not an issue. However, for prelingually deaf individuals, those who have been deaf since before acquiring and formally learning a language, written information is often less accessible than information presented in signing. Further, for this community, signing is their language of choice, and reading text in a spoken language is akin to using a foreign language. Sign language uses gestures and facial expressions and is widely used by deaf communities. To enable efficient production of signed content in virtual environments, it is necessary to make written records of signs. Transcription systems have been developed to describe sign languages in written form, but these systems have limitations. Since they were not originally designed with computer animation in mind, the recognition and reproduction of signs in these systems is, in general, an easy task only for those who know the system deeply. The aim of this work is to develop a transcription system to provide signed content in virtual environments. To animate a virtual avatar, a transcription system must encode sufficiently explicit information, such as movement speed, sign concatenation, the sequence of each hold and movement, and facial expressions, so that articulation approaches reality. Although many important studies of sign languages have been published, the transcription problem remains a challenge. Thus, a notation to describe, store and play signed content in virtual environments offers a multidisciplinary study and research tool, which may help linguistic studies to understand the structure and grammar of sign languages.
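
    The kind of machine-readable record the authors argue for (explicit movement speed, hold-and-movement sequencing, and facial expression, sufficient to drive a 3D avatar) might be modeled as follows; every field name here is invented for illustration.

      from dataclasses import dataclass, field

      @dataclass
      class SignSegment:
          kind: str                     # "hold" or "movement"
          handshape: str                # handshape code from the notation
          location: tuple               # (x, y, z) relative to the signer
          speed: float = 1.0            # playback-speed multiplier
          facial_expression: str = "neutral"

      @dataclass
      class SignRecord:
          gloss: str                    # the sign's written label
          segments: list = field(default_factory=list)

      record = SignRecord(gloss="HOUSE", segments=[
          SignSegment("hold", "B", (0.0, 1.4, 0.3)),
          SignSegment("movement", "B", (0.2, 1.2, 0.3), speed=0.8,
                      facial_expression="raised-brows"),
      ])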

  12. Does Facial Amimia Impact the Recognition of Facial Emotions? An EMG Study in Parkinson’s Disease

    PubMed Central

    Argaud, Soizic; Delplanque, Sylvain; Houvenaghel, Jean-François; Auffret, Manon; Duprez, Joan; Vérin, Marc; Grandjean, Didier; Sauleau, Paul

    2016-01-01

    According to embodied simulation theory, understanding other people’s emotions is fostered by facial mimicry. However, studies assessing the effect of facial mimicry on the recognition of emotion are still controversial. In Parkinson’s disease (PD), one of the most distinctive clinical features is facial amimia, a reduction in facial expressiveness, but patients also show emotional disturbances. The present study used the pathological model of PD to examine the role of facial mimicry on emotion recognition by investigating EMG responses in PD patients during a facial emotion recognition task (anger, joy, neutral). Our results evidenced a significant decrease in facial mimicry for joy in PD, essentially linked to the absence of reaction of the zygomaticus major and the orbicularis oculi muscles in response to happy avatars, whereas facial mimicry for expressions of anger was relatively preserved. We also confirmed that PD patients were less accurate in recognizing positive and neutral facial expressions and highlighted a beneficial effect of facial mimicry on the recognition of emotion. We thus provide additional arguments for embodied simulation theory suggesting that facial mimicry is a potential lever for therapeutic actions in PD even if it seems not to be necessarily required in recognizing emotion as such. PMID:27467393

  13. Greater perceptual sensitivity to happy facial expression.

    PubMed

    Maher, Stephen; Ekstrom, Tor; Chen, Yue

    2014-01-01

    Perception of subtle facial expressions is essential for social functioning; yet it is unclear if human perceptual sensitivities differ in detecting varying types of facial emotions. Evidence diverges as to whether salient negative versus positive emotions (such as sadness versus happiness) are preferentially processed. Here, we measured perceptual thresholds for the detection of four types of emotion in faces--happiness, fear, anger, and sadness--using psychophysical methods. We also evaluated the association of the perceptual performances with facial morphological changes between neutral and respective emotion types. Human observers were highly sensitive to happiness compared with the other emotional expressions. Further, this heightened perceptual sensitivity to happy expressions can be attributed largely to the emotion-induced morphological change of a particular facial feature (end-lip raise).
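
    Thresholds of this kind are commonly estimated by fitting a psychometric function to detection rates measured at graded expression intensities; the abstract does not spell out the procedure, so the sketch below is a generic illustration with made-up data.

        import numpy as np
        from scipy.optimize import curve_fit
        from scipy.stats import norm

        # Proportion of "emotion detected" responses at graded morph levels
        # (0 = neutral face, 1 = full-blown expression); illustrative data.
        intensity = np.array([0.05, 0.10, 0.20, 0.40, 0.60, 0.80])
        p_detect = np.array([0.08, 0.15, 0.40, 0.75, 0.93, 0.99])

        def psychometric(x, mu, sigma):
            """Cumulative-Gaussian psychometric function."""
            return norm.cdf(x, loc=mu, scale=sigma)

        (mu, sigma), _ = curve_fit(psychometric, intensity, p_detect, p0=[0.3, 0.1])
        print(f"Detection threshold (50% point): {mu:.3f}")
        # A lower threshold for happy than for fearful, angry or sad morphs
        # would correspond to the heightened sensitivity reported above.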

  14. [Emotional facial expression recognition impairment in Parkinson disease].

    PubMed

    Lachenal-Chevallet, Karine; Bediou, Benoit; Bouvard, Martine; Thobois, Stéphane; Broussolle, Emmanuel; Vighetto, Alain; Krolak-Salmon, Pierre

    2006-03-01

    Some behavioral disturbances observed in Parkinson's disease (PD) could be related to impaired recognition of various social messages, particularly emotional facial expressions. Facial expression recognition was assessed using morphed faces (five emotions: happiness, fear, anger, disgust, neutral) and compared to gender recognition and general cognitive assessment in 12 patients with Parkinson's disease and 14 control subjects. Facial expression recognition was impaired among patients, whereas gender recognition, visuo-perceptive capacities and total efficiency were preserved. Post hoc analyses disclosed a deficit in fear and disgust recognition compared to control subjects. The impairment of emotional facial expression recognition in PD appears independent of other cognitive deficits. It may be related to the dopaminergic depletion of the basal ganglia and limbic brain regions, and it could play a part in the psycho-behavioral disorders, particularly the communication disorders, observed in Parkinson's disease patients.

  15. Synthesis of Speaker Facial Movement to Match Selected Speech Sequences

    NASA Technical Reports Server (NTRS)

    Scott, K. C.; Kagels, D. S.; Watson, S. H.; Rom, H.; Wright, J. R.; Lee, M.; Hussey, K. J.

    1994-01-01

    A system is described which allows for the synthesis of a video sequence of a realistic-appearing talking human head. A phonic based approach is used to describe facial motion; image processing rather than physical modeling techniques are used to create video frames.

  16. Unconscious Processing of Facial Expressions in Individuals with Internet Gaming Disorder.

    PubMed

    Peng, Xiaozhe; Cui, Fang; Wang, Ting; Jiao, Can

    2017-01-01

    Internet Gaming Disorder (IGD) is characterized by impairments in social communication and the avoidance of social contact. Facial expression processing is the basis of social communication. However, few studies have investigated how individuals with IGD process facial expressions, and whether they have deficits in emotional facial processing remains unclear. The aim of the present study was to explore these two issues by investigating the time course of emotional facial processing in individuals with IGD. A backward masking task was used to investigate the differences between individuals with IGD and normal controls (NC) in the processing of subliminally presented facial expressions (sad, happy, and neutral) with event-related potentials (ERPs). The behavioral results showed that individuals with IGD are slower than NC in response to both sad and neutral expressions in the sad-neutral context. The ERP results showed that individuals with IGD exhibit decreased amplitudes in the ERP component N170 (an index of early face processing) in response to neutral expressions compared to happy expressions in the happy-neutral expressions context, which might be due to their expectancies for positive emotional content. The NC, on the other hand, exhibited comparable N170 amplitudes in response to both happy and neutral expressions in the happy-neutral expressions context, as well as to sad and neutral expressions in the sad-neutral expressions context. Both individuals with IGD and NC showed comparable ERP amplitudes during the processing of sad expressions and neutral expressions. The present study revealed that individuals with IGD have different unconscious neutral facial processing patterns compared with normal individuals and suggested that individuals with IGD may expect more positive emotion in the happy-neutral expressions context.
    • The present study investigated whether the unconscious processing of facial expressions is influenced by excessive online gaming. A validated backward masking paradigm was used to investigate whether individuals with Internet Gaming Disorder (IGD) and normal controls (NC) exhibit different patterns of facial expression processing.
    • The results demonstrated that individuals with IGD respond differently to facial expressions compared with NC at a preattentive level. Behaviorally, individuals with IGD are slower than NC in response to both sad and neutral expressions in the sad-neutral context. The ERP results further showed (1) decreased amplitudes of the N170 component (an index of early face processing) in individuals with IGD when they process neutral expressions compared with happy expressions in the happy-neutral expressions context, whereas the NC exhibited comparable N170 amplitudes in response to these two expressions; (2) both the IGD and NC groups demonstrated similar N170 amplitudes in response to sad and neutral faces in the sad-neutral expressions context.
    • The decreased amplitude of the N170 to neutral faces relative to happy faces in individuals with IGD might be due to their weaker expectancy of neutral content in the happy-neutral expressions context, while individuals with IGD may have no differing expectancies for neutral and sad faces in the sad-neutral expressions context.
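
    Analytically, the N170 comparison reported here boils down to averaging epoched EEG across trials and measuring amplitude in a post-stimulus window at occipito-temporal sites. The numpy sketch below illustrates that reduction with placeholder data; the sampling rate and the 150-200 ms window are assumptions, not parameters from the study.

        import numpy as np

        fs = 500                              # sampling rate in Hz (assumed)
        t = np.arange(-0.2, 0.6, 1 / fs)      # epoch from -200 ms to +600 ms

        def n170_amplitude(epochs, t, window=(0.15, 0.20)):
            """Mean amplitude of the trial-averaged ERP in the N170 window."""
            erp = epochs.mean(axis=0)                 # average across trials
            mask = (t >= window[0]) & (t < window[1])
            return erp[mask].mean()

        # epochs_*: (n_trials, n_samples) arrays from one occipito-temporal
        # channel; random placeholders stand in for recorded data.
        rng = np.random.default_rng(0)
        epochs_neutral = rng.normal(size=(60, t.size))
        epochs_happy = rng.normal(size=(60, t.size))

        diff = n170_amplitude(epochs_neutral, t) - n170_amplitude(epochs_happy, t)
        print(f"N170 neutral-minus-happy difference: {diff:.3f} (arbitrary units)")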

  17. Management of synkinesis and asymmetry in facial nerve palsy: a review article.

    PubMed

    Pourmomeny, Abbas Ali; Asadi, Sahar

    2014-10-01

    The important sequelae of facial nerve palsy are synkinesis, asymmetry, hypertonicity and contracture, all of which have psychosocial effects on patients. Synkinesis, caused by mal-regeneration, produces involuntary movements during voluntary movements. Previous studies have advocated treatment using physiotherapy modalities alone or combined with exercise therapy, but no consensus exists on the optimal approach. This review therefore summarizes controlled clinical studies of the management of synkinesis and asymmetry in facial nerve palsy. Case-controlled clinical studies of patients at the acute stage of injury were selected for this review article. Data were obtained from English-language databases from 1980 until mid-2013. Among 124 articles initially captured, six randomized controlled trials involving 269 patients met the inclusion criteria. The results of all of these studies emphasized the benefit of exercise therapy. Four studies considered electromyogram (EMG) biofeedback to be effective through neuromuscular re-education. Synkinesis and asymmetry of facial muscles can be treated with educational exercise therapy, and EMG biofeedback is a suitable tool for this exercise therapy.

  18. Impaired recognition of happy facial expressions in bipolar disorder.

    PubMed

    Lawlor-Savage, Linette; Sponheim, Scott R; Goghari, Vina M

    2014-08-01

    The ability to accurately judge facial expressions is important in social interactions. Individuals with bipolar disorder have been found to be impaired in emotion recognition; however, the specifics of the impairment are unclear. This study investigated whether facial emotion recognition difficulties in bipolar disorder reflect general cognitive, or emotion-specific, impairments. Impairment in the recognition of particular emotions and the role of processing speed in facial emotion recognition were also investigated. Clinically stable bipolar patients (n = 17) and healthy controls (n = 50) judged five facial expressions in two presentation types, time-limited and self-paced. An age recognition condition was used as an experimental control. Bipolar patients' overall facial recognition ability was unimpaired. However, patients' specific ability to judge happy expressions under time constraints was impaired. Findings suggest a deficit in happy emotion recognition impacted by processing speed. Given the limited sample size, further investigation with a larger patient sample is warranted.

  19. Organization of the central control of muscles of facial expression in man

    PubMed Central

    Root, A A; Stephens, J A

    2003-01-01

    Surface EMGs were recorded simultaneously from ipsilateral pairs of facial muscles while subjects made three different common facial expressions: the smile, a sad expression and an expression of horror, and three contrived facial expressions. Central peaks were found in the cross-correlograms of EMG activity recorded from the orbicularis oculi and zygomaticus major during smiling, the corrugator and depressor anguli oris during the sad look and the frontalis and mentalis during the horror look. The size of the central peak was significantly greater between the orbicularis oculi and zygomaticus major during smiling. It is concluded that co-contraction of facial muscles during some facial expressions is accompanied by the presence of common synaptic drive to the motoneurones supplying the muscles involved. Central peaks were found in the cross-correlograms of EMG activity recorded from the frontalis and depressor anguli oris during a contrived expression. However, no central peaks were found in the cross-correlograms of EMG activity recorded from the frontalis and orbicularis oculi or from the frontalis and zygomaticus major during the other two contrived expressions. It is concluded that a common synaptic drive is not present between all possible facial muscle pairs, which suggests a functional role for the synergy. The origin of the common drive is discussed. It is concluded that activity in branches of common-stem last-order presynaptic input fibres to motoneurones innervating the different facial muscles and presynaptic synchronization of input activity to the different motoneurone pools are involved. The former probably contributes more to the drive to the orbicularis oculi and zygomaticus major during smiling, while the latter is probably more prevalent in the corrugator and depressor anguli oris during the sad look, the frontalis and mentalis during the horror look and the frontalis and depressor anguli oris during one of the contrived expressions. The strength of common synaptic drive is inversely related to the degree of separate control that can be exhibited by the facial muscles involved. PMID:12692176
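
    The central peaks described here come from cross-correlating simultaneously recorded EMG from two muscles: a peak centred on zero lag indicates common synaptic drive to the two motoneurone pools. A minimal numpy sketch of that computation on rectified signals (an illustration, not the authors' exact pipeline):

        import numpy as np

        def cross_correlogram(x, y, fs, max_lag_ms=50):
            """Normalized cross-correlation of two rectified EMG signals over
            lags of +/- max_lag_ms; a central peak suggests common drive."""
            x = (x - x.mean()) / x.std()
            y = (y - y.mean()) / y.std()
            max_lag = int(fs * max_lag_ms / 1000)
            lags = np.arange(-max_lag, max_lag + 1)
            r = np.array([np.mean(x[max(0, -k):len(x) - max(0, k)] *
                                  y[max(0, k):len(y) - max(0, -k)])
                          for k in lags])
            return lags / fs * 1000, r    # lags in ms, correlation values

        fs = 2000                                  # EMG sampling rate (assumed)
        rng = np.random.default_rng(1)
        drive = rng.normal(size=fs * 5)            # simulated shared central drive
        ooc = np.abs(drive + rng.normal(size=drive.size))  # orbicularis oculi
        zyg = np.abs(drive + rng.normal(size=drive.size))  # zygomaticus major

        lags_ms, r = cross_correlogram(ooc, zyg, fs)
        print(f"Peak correlation {r.max():.2f} at lag {lags_ms[np.argmax(r)]:.1f} ms")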

  20. Internal representations reveal cultural diversity in expectations of facial expressions of emotion.

    PubMed

    Jack, Rachael E; Caldara, Roberto; Schyns, Philippe G

    2012-02-01

    Facial expressions have long been considered the "universal language of emotion." Yet consistent cultural differences in the recognition of facial expressions contradict such notions (e.g., R. E. Jack, C. Blais, C. Scheepers, P. G. Schyns, & R. Caldara, 2009). Rather, culture--as an intricate system of social concepts and beliefs--could generate different expectations (i.e., internal representations) of facial expression signals. To investigate, we used a powerful psychophysical technique (reverse correlation) to estimate the observer-specific internal representations of the 6 basic facial expressions of emotion (i.e., happy, surprise, fear, disgust, anger, and sad) in two culturally distinct groups (i.e., Western Caucasian [WC] and East Asian [EA]). Using complementary statistical image analyses, cultural specificity was directly revealed in these representations. Specifically, whereas WC internal representations predominantly featured the eyebrows and mouth, EA internal representations showed a preference for expressive information in the eye region. Closer inspection of the EA observer preference revealed a surprising feature: changes of gaze direction, shown primarily among the EA group. For the first time, it is revealed directly that culture can finely shape the internal representations of common facial expressions of emotion, challenging notions of a biologically hardwired "universal language of emotion."
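
    Reverse correlation estimates an observer's internal representation by averaging the random noise fields that drove a given response. The toy sketch below shows the classification-image logic with a simulated observer; it is a drastic simplification of the actual technique, and the "mouth-sensitive" observer is invented for illustration.

        import numpy as np

        rng = np.random.default_rng(2)
        n_trials, h, w = 2000, 32, 32

        # Each trial: a base face (here simply zeros) plus random pixel noise.
        noise = rng.normal(size=(n_trials, h, w))

        # Simulated observer who reports "happy" when the mouth region is bright.
        mouth = np.zeros((h, w))
        mouth[24:28, 8:24] = 1.0
        says_happy = (noise * mouth).sum(axis=(1, 2)) > 0

        # Classification image: mean noise on "happy" trials minus the rest.
        # Bright regions approximate the observer's internal "happy" template.
        ci = noise[says_happy].mean(axis=0) - noise[~says_happy].mean(axis=0)
        print("template energy, mouth vs elsewhere:",
              float(np.abs(ci[mouth == 1]).mean()),
              float(np.abs(ci[mouth == 0]).mean()))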

  1. Behavioral dissociation between emotional and non-emotional facial expressions in congenital prosopagnosia

    PubMed Central

    Daini, Roberta; Comparetti, Chiara M.; Ricciardelli, Paola

    2014-01-01

    Neuropsychological and neuroimaging studies have shown that facial recognition and emotional expressions are dissociable. However, it is unknown if a single system supports the processing of emotional and non-emotional facial expressions. We aimed to understand if individuals with impairment in face recognition from birth (congenital prosopagnosia, CP) can use non-emotional facial expressions to recognize a face as an already seen one, and thus, process this facial dimension independently from features (which are impaired in CP), and basic emotional expressions. To this end, we carried out a behavioral study in which we compared the performance of 6 CP individuals to that of typically developing individuals, using upright and inverted faces. Four avatar faces with a neutral expression were presented in the initial phase. The target faces presented in the recognition phase, in which a recognition task was requested (2AFC paradigm), could be identical (neutral) to those of the initial phase or present biologically plausible changes to features, non-emotional expressions, or emotional expressions. After this task, a second task was performed, in which the participants had to detect whether the recognized face exactly matched the study face or showed any difference. The results confirmed the CPs' impairment in the configural processing of the invariant aspects of the face, but also showed a spared configural processing of non-emotional facial expressions (task 1). Interestingly and unlike the non-emotional expressions, the configural processing of emotional expressions was compromised in CPs and did not improve their change detection ability (task 2). These new results have theoretical implications for face perception models since they suggest that, at least in CPs, non-emotional expressions are processed configurally, can be dissociated from other facial dimensions, and may serve as a compensatory strategy to achieve face recognition. PMID:25520643

  3. Unattractive infant faces elicit negative affect from adults.

    PubMed

    Schein, Stevie S; Langlois, Judith H

    2015-02-01

    We examined the relationship between infant attractiveness and adult affect by investigating whether differing levels of infant facial attractiveness elicit facial muscle movement correlated with positive and negative affect from adults (N=87) using electromyography. Unattractive infant faces evoked significantly more corrugator supercilii and levator labii superioris movement (physiological correlates of negative affect) than attractive infant faces. These results suggest that unattractive infants may be at risk for negative affective responses from adults, though the relationship between those responses and caregiving behavior remains elusive. Copyright © 2015 Elsevier Inc. All rights reserved.

  4. Violent media consumption and the recognition of dynamic facial expressions.

    PubMed

    Kirsh, Steven J; Mounts, Jeffrey R W; Olczak, Paul V

    2006-05-01

    This study assessed the speed of recognition of facial emotional expressions (happy and angry) as a function of violent media consumption. Color photos of calm facial expressions were morphed into either an angry or a happy facial expression. Participants were asked to make a speeded identification of the emotion (happiness or anger) during the morph. Results indicated that, independent of trait aggressiveness, participants high in violent media consumption responded slower to depictions of happiness and faster to depictions of anger than participants low in violent media consumption. Implications of these findings are discussed with respect to current models of aggressive behavior.
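
    Morphed stimuli of this sort are usually produced by geometric warping between landmark sets plus a cross-dissolve of pixel values. Leaving out the warping step, the dissolve alone can be sketched as below; this is a simplification for illustration, not the stimulus software used in the study.

        import numpy as np

        def cross_dissolve(neutral, emotional, alpha):
            """Blend two aligned face images; alpha=0 is fully calm,
            alpha=1 is the full-blown expression."""
            return (1.0 - alpha) * neutral + alpha * emotional

        # Placeholder arrays; in practice these are aligned photographs.
        neutral = np.zeros((64, 64))
        emotional = np.ones((64, 64))

        # A morph sequence stepping from calm toward the full expression, as
        # would be shown while participants make a speeded identification.
        frames = [cross_dissolve(neutral, emotional, a)
                  for a in np.linspace(0.0, 1.0, num=20)]
        print(len(frames), "frames; mean intensity of final frame:",
              frames[-1].mean())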

  5. Contextual interference processing during fast categorisations of facial expressions.

    PubMed

    Frühholz, Sascha; Trautmann-Lengsfeld, Sina A; Herrmann, Manfred

    2011-09-01

    We examined interference effects of emotionally associated background colours during fast valence categorisations of negative, neutral and positive expressions. According to implicitly learned colour-emotion associations, facial expressions were presented with colours that either matched the valence of these expressions or not. Experiment 1 included infrequent non-matching trials and Experiment 2 a balanced ratio of matching and non-matching trials. Besides general modulatory effects of contextual features on the processing of facial expressions, we found differential effects depending on the valence of the target facial expressions. Whereas performance accuracy was mainly affected for neutral expressions, performance speed was specifically modulated by emotional expressions, indicating some susceptibility of emotional expressions to contextual features. Experiment 3 used two further colour-emotion combinations, but revealed only marginal interference effects, most likely due to missing colour-emotion associations. The results are discussed with respect to the inherent processing demands of emotional and neutral expressions and their susceptibility to contextual interference.

  6. Acute alcohol effects on facial expressions of emotions in social drinkers: a systematic review

    PubMed Central

    Capito, Eva Susanne; Lautenbacher, Stefan; Horn-Hofmann, Claudia

    2017-01-01

    Background: As known from everyday experience and experimental research, alcohol modulates emotions. Particularly regarding social interaction, the effects of alcohol on the facial expression of emotion might be of relevance. However, these effects have not been systematically studied. We performed a systematic review of acute alcohol effects on social drinkers’ facial expressions of induced positive and negative emotions. Materials and methods: With a predefined algorithm, we searched three electronic databases (PubMed, PsycInfo, and Web of Science) for studies conducted on social drinkers that used acute alcohol administration, emotion induction, and standardized methods to record facial expressions. We excluded those studies that failed common quality standards, and finally selected 13 investigations for this review. Results: Overall, alcohol exerted effects on facial expressions of emotions in social drinkers. These effects were not generally disinhibiting, but varied depending on the valence of emotion and on social interaction. When consumed within social groups, alcohol mostly influenced facial expressions of emotions in a socially desirable way, thus underscoring the view of alcohol as a social lubricant. However, methodological differences in alcohol administration between the studies complicated comparability. Conclusion: Our review highlighted the relevance of emotional valence and social-context factors for acute alcohol effects on social drinkers’ facial expressions of emotions. Future research should investigate how these alcohol effects influence the development of problematic drinking behavior in social drinkers. PMID:29255375

  7. Impaired social brain network for processing dynamic facial expressions in autism spectrum disorders

    PubMed Central

    2012-01-01

    Background: Impairment of social interaction via facial expressions represents a core clinical feature of autism spectrum disorders (ASD). However, the neural correlates of this dysfunction remain unidentified. Because this dysfunction is manifested in real-life situations, we hypothesized that the observation of dynamic, compared with static, facial expressions would reveal abnormal brain functioning in individuals with ASD. We presented dynamic and static facial expressions of fear and happiness to individuals with high-functioning ASD and to age- and sex-matched typically developing controls and recorded their brain activities using functional magnetic resonance imaging (fMRI). Results: Regional analysis revealed reduced activation of several brain regions in the ASD group compared with controls in response to dynamic versus static facial expressions, including the middle temporal gyrus (MTG), fusiform gyrus, amygdala, medial prefrontal cortex, and inferior frontal gyrus (IFG). Dynamic causal modeling analyses revealed that bi-directional effective connectivity involving the primary visual cortex–MTG–IFG circuit was enhanced in response to dynamic as compared with static facial expressions in the control group. Group comparisons revealed that all these modulatory effects were weaker in the ASD group than in the control group. Conclusions: These results suggest that weak activity and connectivity of the social brain network underlie the impairment in social interaction involving dynamic facial expressions in individuals with ASD. PMID:22889284

  8. Facial expression drawings and the full cup test: valid tools for the measurement of swelling after dental surgery.

    PubMed

    Al-Samman, A A; Othman, H A

    2017-01-01

    Assessment of postoperative swelling is subjective and depends on the patient's opinion. The aim of this study was to evaluate the validity of facial expression drawings and the full cup test and to compare their performance with that of other scales in measuring postoperative swelling. Fifty patients who had one of several procedures were included. All patients were asked to fill in a form for six days postoperatively (including the day of operation) that contained four scales to rate the amount of swelling: facial expression drawings, the full cup test, the visual analogue scale (VAS), and the verbal rating scale (VRS). Seven patients did not know how to use some of the scales; however, all patients successfully used the facial expression drawings. There was a significant difference between the scale that patients found easiest to use and the others (p<0.008): fourteen patients selected the facial expression drawings and five the VRS. The correlations between the scales were good (p<0.01). Facial expression drawings and the full cup test are valid tools that could be used interchangeably with other rating scales to assess postoperative swelling, and some patients found the facial expression drawings the easiest to use. Copyright © 2016 The British Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.

  9. Laterality of facial expressions of emotion: Universal and culture-specific influences.

    PubMed

    Mandal, Manas K; Ambady, Nalini

    2004-01-01

    Recent research indicates that (a) the perception and expression of facial emotion are lateralized to a great extent in the right hemisphere, and (b) whereas facial expressions of emotion embody universal signals, culture-specific learning moderates the expression and interpretation of these emotions. In the present article, we review the literature on laterality and universality, and propose that, although some components of facial expressions of emotion are governed biologically, others are culturally influenced. We suggest that the left side of the face is more expressive of emotions, is more uninhibited, and displays culture-specific emotional norms. The right side of the face, on the other hand, is less susceptible to cultural display norms and exhibits more universal emotional signals. Copyright 2004 IOS Press

  10. Perceptual and affective mechanisms in facial expression recognition: An integrative review.

    PubMed

    Calvo, Manuel G; Nummenmaa, Lauri

    2016-09-01

    Facial expressions of emotion involve a physical component of morphological changes in a face and an affective component conveying information about the expresser's internal feelings. It remains unresolved how much recognition and discrimination of expressions rely on the perception of morphological patterns or the processing of affective content. This review of research on the role of visual and emotional factors in expression recognition reached three major conclusions. First, behavioral, neurophysiological, and computational measures indicate that basic expressions are reliably recognized and discriminated from one another, although the effect may be inflated by the use of prototypical expression stimuli and forced-choice responses. Second, affective content along the dimensions of valence and arousal is extracted early from facial expressions, although this coarse affective representation contributes minimally to categorical recognition of specific expressions. Third, the physical configuration and visual saliency of facial features contribute significantly to expression recognition, with "emotionless" computational models being able to reproduce some of the basic phenomena demonstrated in human observers. We conclude that facial expression recognition, as it has been investigated in conventional laboratory tasks, depends to a greater extent on perceptual than affective information and mechanisms.

  11. Impaired Perception of Emotional Expression in Amyotrophic Lateral Sclerosis.

    PubMed

    Oh, Seong Il; Oh, Ki Wook; Kim, Hee Jin; Park, Jin Seok; Kim, Seung Hyun

    2016-07-01

    The increasing recognition that deficits in social emotions occur in amyotrophic lateral sclerosis (ALS) is helping to explain the spectrum of neuropsychological dysfunctions, thus supporting the view of ALS as a multisystem disorder involving neuropsychological deficits as well as motor deficits. The aim of this study was to characterize the emotion perception abilities of Korean patients with ALS based on the recognition of facial expressions. Twenty-four patients with ALS and 24 age- and sex-matched healthy controls completed neuropsychological tests and facial emotion recognition tasks [ChaeLee Korean Facial Expressions of Emotions (ChaeLee-E)]. The ChaeLee-E test includes facial expressions for seven emotions: happiness, sadness, anger, disgust, fear, surprise, and neutral. The ability to perceive facial emotions was significantly worse among ALS patients than among healthy controls [65.2±18.0% vs. 77.1±6.6% (mean±SD), p=0.009]. Eight of the 24 patients (33%) scored below the 5th percentile score of controls for recognizing facial emotions. Emotion perception deficits occur in Korean ALS patients, particularly regarding facial expressions of emotion. These findings expand the spectrum of cognitive and behavioral dysfunction associated with ALS into emotion processing dysfunction.

  12. Responses in the right posterior superior temporal sulcus show a feature-based response to facial expression.

    PubMed

    Flack, Tessa R; Andrews, Timothy J; Hymers, Mark; Al-Mosaiwi, Mohammed; Marsden, Samuel P; Strachan, James W A; Trakulpipat, Chayanit; Wang, Liang; Wu, Tian; Young, Andrew W

    2015-08-01

    The face-selective region of the right posterior superior temporal sulcus (pSTS) plays an important role in analysing facial expressions. However, it is less clear how facial expressions are represented in this region. In this study, we used the face composite effect to explore whether the pSTS contains a holistic or feature-based representation of facial expression. Aligned and misaligned composite images were created from the top and bottom halves of faces posing different expressions. In Experiment 1, participants performed a behavioural matching task in which they judged whether the top half of two images was the same or different. The ability to discriminate the top half of the face was affected by changes in the bottom half of the face when the images were aligned, but not when they were misaligned. This shows a holistic behavioural response to expression. In Experiment 2, we used fMR-adaptation to ask whether the pSTS has a corresponding holistic neural representation of expression. Aligned or misaligned images were presented in blocks that involved repeating the same image or in which the top or bottom half of the images changed. Increased neural responses were found in the right pSTS regardless of whether the change occurred in the top or bottom of the image, showing that changes in expression were detected across all parts of the face. However, in contrast to the behavioural data, the pattern did not differ between aligned and misaligned stimuli. This suggests that the pSTS does not encode facial expressions holistically. In contrast to the pSTS, a holistic pattern of response to facial expression was found in the right inferior frontal gyrus (IFG). Together, these results suggest that pSTS reflects an early stage in the processing of facial expression in which facial features are represented independently. Copyright © 2015 Elsevier Ltd. All rights reserved.

  13. Dogs Evaluate Threatening Facial Expressions by Their Biological Validity – Evidence from Gazing Patterns

    PubMed Central

    Somppi, Sanni; Törnqvist, Heini; Kujala, Miiamaaria V.; Hänninen, Laura; Krause, Christina M.; Vainio, Outi

    2016-01-01

    Appropriate response to companions’ emotional signals is important for all social creatures. The emotional expressions of humans and non-human animals have analogies in their form and function, suggesting shared evolutionary roots, but very little is known about how animals other than primates view and process facial expressions. In primates, threat-related facial expressions evoke exceptional viewing patterns compared with neutral or positive stimuli. Here, we explore whether domestic dogs (Canis familiaris) have such an attentional bias toward threatening social stimuli and whether observed emotional expressions affect dogs’ gaze fixation distribution among the facial features (eyes, midface and mouth). We recorded the voluntary eye gaze of 31 domestic dogs during viewing of facial photographs of humans and dogs with three emotional expressions (threatening, pleasant and neutral). We found that dogs’ gaze fixations spread systematically among facial features. The distribution of fixations was altered by the seen expression, but eyes were the most probable targets of the first fixations and gathered longer looking durations than the mouth regardless of the viewed expression. The examination of the inner facial features as a whole revealed more pronounced scanning differences among expressions. This suggests that dogs do not base their perception of facial expressions on the viewing of single structures, but on the interpretation of the composition formed by the eyes, midface and mouth. Dogs evaluated social threat rapidly, and this evaluation led to an attentional bias that depended on the depicted species: threatening conspecifics’ faces evoked heightened attention, whereas threatening human faces evoked an avoidance response. We propose that threatening signals carrying differential biological validity are processed via distinctive neurocognitive pathways. Both of these mechanisms may have an adaptive significance for domestic dogs. The findings provide a novel perspective on understanding the processing of emotional expressions and sensitivity to social threat in non-primates. PMID:26761433

  14. Do Dynamic Facial Expressions Convey Emotions to Children Better than Do Static Ones?

    ERIC Educational Resources Information Center

    Widen, Sherri C.; Russell, James A.

    2015-01-01

    Past research has shown that children recognize emotions from facial expressions poorly and improve only gradually with age, but the stimuli in such studies have been static faces. Because dynamic faces include more information, it may well be that children more readily recognize emotions from dynamic facial expressions. The current study of…

  15. The Relationship between Processing Facial Identity and Emotional Expression in 8-Month-Old Infants

    ERIC Educational Resources Information Center

    Schwarzer, Gudrun; Jovanovic, Bianca

    2010-01-01

    In Experiment 1, it was investigated whether infants process facial identity and emotional expression independently or in conjunction with one another. Eight-month-old infants were habituated to two upright or two inverted faces varying in facial identity and emotional expression. Infants were tested with a habituation face, a switch face, and a…

  16. Skilful communication: Emotional facial expressions recognition in very old adults.

    PubMed

    María Sarabia-Cobo, Carmen; Navas, María José; Ellgring, Heiner; García-Rodríguez, Beatriz

    2016-02-01

    The main objective of this study was to assess the changes associated with ageing in the ability to identify emotional facial expressions, and to what extent such age-related changes depend on the intensity with which each basic emotion is manifested. A randomised controlled trial was carried out on 107 subjects, who performed a six-alternative forced-choice emotional expression identification task. The stimuli consisted of 270 virtual emotional faces expressing the six basic emotions (happiness, sadness, surprise, fear, anger and disgust) at three different levels of intensity (low, pronounced and maximum). The virtual faces were generated by facial surface changes, as described in the Facial Action Coding System (FACS). A progressive age-related decline in the ability to identify emotional facial expressions was detected. The ability to recognise the intensity of expressions was one of the variables most strongly impaired with age, and the valence of emotion was also poorly identified, particularly in terms of recognising negative emotions. Nurses should be mindful of how ageing affects communication with older patients. In this study, very old adults displayed more difficulties in identifying emotional facial expressions, especially low-intensity expressions and those associated with difficult emotions like disgust or fear. Copyright © 2015 Elsevier Ltd. All rights reserved.

  17. Face-selective regions differ in their ability to classify facial expressions

    PubMed Central

    Zhang, Hui; Japee, Shruti; Nolan, Rachel; Chu, Carlton; Liu, Ning; Ungerleider, Leslie G

    2016-01-01

    Recognition of facial expressions is crucial for effective social interactions. Yet, the extent to which the various face-selective regions in the human brain classify different facial expressions remains unclear. We used functional magnetic resonance imaging (fMRI) and support vector machine pattern classification analysis to determine how well face-selective brain regions are able to decode different categories of facial expression. Subjects participated in a slow event-related fMRI experiment in which they were shown 32 face pictures, portraying four different expressions: neutral, fearful, angry, and happy and belonging to eight different identities. Our results showed that only the amygdala and the posterior superior temporal sulcus (STS) were able to accurately discriminate between these expressions, albeit in different ways: The amygdala discriminated fearful faces from non-fearful faces, whereas STS discriminated neutral from emotional (fearful, angry and happy) faces. In contrast to these findings on the classification of emotional expression, only the fusiform face area (FFA) and anterior inferior temporal cortex (aIT) could discriminate among the various facial identities. Further, the amygdala and STS were better than FFA and aIT at classifying expression, while FFA and aIT were better than the amygdala and STS at classifying identity. Taken together, our findings indicate that the decoding of facial emotion and facial identity occurs in different neural substrates: the amygdala and STS for the former and FFA and aIT for the latter. PMID:26826513
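
    The decoding analysis amounts to training a support vector machine on voxel patterns extracted from each face-selective region and testing classification with cross-validation. A minimal scikit-learn sketch under those assumptions (ROI extraction and fMRI preprocessing omitted; the data are random placeholders):

        import numpy as np
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(3)
        n_trials, n_voxels = 128, 200    # e.g. trial-wise patterns from one ROI
        X = rng.normal(size=(n_trials, n_voxels))      # placeholder voxel data
        y = np.repeat(["neutral", "fearful", "angry", "happy"], n_trials // 4)

        clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
        scores = cross_val_score(clf, X, y, cv=8)
        print(f"Decoding accuracy: {scores.mean():.2f} (chance = 0.25)")
        # Repeating this per ROI (amygdala, STS, FFA, aIT) and per label set
        # (expression vs. identity) yields the kind of double dissociation
        # reported above.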

  19. French-speaking children’s freely produced labels for facial expressions

    PubMed Central

    Maassarani, Reem; Gosselin, Pierre; Montembeault, Patricia; Gagnon, Mathieu

    2014-01-01

    In this study, we investigated the labeling of facial expressions in French-speaking children. The participants were 137 French-speaking children, between the ages of 5 and 11 years, recruited from three elementary schools in Ottawa, Ontario, Canada. The facial expressions included expressions of happiness, sadness, fear, surprise, anger, and disgust. Participants were shown one facial expression at a time, and asked to say what the stimulus person was feeling. Participants’ responses were coded by two raters who made judgments concerning the specific emotion category in which the responses belonged. 5- and 6-year-olds were quite accurate in labeling facial expressions of happiness, anger, and sadness but far less accurate for facial expressions of fear, surprise, and disgust. An improvement in accuracy as a function of age was found for fear and surprise only. Labeling facial expressions of disgust proved to be very difficult for the children, even for the 11-year-olds. In order to examine the fit between the model proposed by Widen and Russell (2003) and our data, we looked at the number of participants who had the predicted response patterns. Overall, 88.52% of the participants did. Most of the participants used between 3 and 5 labels, with correspondence percentages varying between 80.00% and 100.00%. Our results suggest that the model proposed by Widen and Russell (2003) is not limited to English-speaking children, but also accounts for the sequence of emotion labeling in French-Canadian children. PMID:24926281

  20. Facial reactions to violent and comedy films: Association with callous-unemotional traits and impulsive aggression.

    PubMed

    Fanti, Kostas A; Kyranides, Melina Nicole; Panayiotou, Georgia

    2017-02-01

    The current study adds to prior research by investigating specific (happiness, sadness, surprise, disgust, anger and fear) and general (corrugator and zygomatic muscle activity) facial reactions to violent and comedy films among individuals with varying levels of callous-unemotional (CU) traits and impulsive aggression (IA). Participants at differential risk of CU traits and IA were selected from a sample of 1225 young adults. In Experiment 1, participants' (N = 82) facial expressions were recorded while they watched violent and comedy films. Video footage of participants' facial expressions was analysed using FaceReader, a facial coding software that classifies facial reactions. Findings suggested that individuals with elevated CU traits showed reduced facial reactions of sadness and disgust to violent films, indicating low empathic concern in response to victims' distress. In contrast, impulsive aggressors produced specifically more angry facial expressions when viewing violent and comedy films. In Experiment 2 (N = 86), facial reactions were measured by monitoring facial electromyography activity. FaceReader findings were verified by the reduced facial electromyography at the corrugator, but not the zygomatic, muscle in response to violent films shown by individuals high in CU traits. Additional analysis suggested that sympathy to victims explained the association between CU traits and reduced facial reactions to violent films.

  1. Brain synchronization during perception of facial emotional expressions with natural and unnatural dynamics

    PubMed Central

    Perdikis, Dionysios; Volhard, Jakob; Müller, Viktor; Kaulard, Kathrin; Brick, Timothy R.; Wallraven, Christian; Lindenberger, Ulman

    2017-01-01

    Research on the perception of facial emotional expressions (FEEs) often uses static images that do not capture the dynamic character of social coordination in natural settings. Recent behavioral and neuroimaging studies suggest that dynamic FEEs (videos or morphs) enhance emotion perception. To identify mechanisms associated with the perception of FEEs with natural dynamics, the present EEG (electroencephalography) study compared (i) ecologically valid stimuli of angry and happy FEEs with natural dynamics to (ii) FEEs with unnatural dynamics, and to (iii) static FEEs. FEEs with unnatural dynamics showed faces moving in a biologically possible but unpredictable and atypical manner, generally resulting in ambivalent emotional content. Participants were asked to explicitly recognize FEEs. Using whole power (WP) and phase synchrony (Phase Locking Index, PLI), we found that brain responses discriminated between natural and unnatural FEEs (both static and dynamic). Differences were primarily observed in the timing and brain topographies of delta and theta PLI and WP, and in alpha and beta WP. Our results support the view that biologically plausible, albeit atypical, FEEs are processed by the brain by different mechanisms than natural FEEs. We conclude that natural movement dynamics are essential for the perception of FEEs and the associated brain processes. PMID:28723957
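
    A phase locking index of the kind used here can be computed, in one common formulation (which may differ in detail from the authors'), by band-pass filtering each trial, extracting instantaneous phase with the Hilbert transform, and taking the modulus of the trial-averaged unit phase vectors: 0 means random phase across trials, 1 means perfect locking.

        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert

        def phase_locking_index(trials, fs, band):
            """Inter-trial phase locking per time point: band-pass, Hilbert
            phase, then the modulus of the mean unit phase vector."""
            b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)],
                          btype="band")
            filtered = filtfilt(b, a, trials, axis=1)
            phase = np.angle(hilbert(filtered, axis=1))
            return np.abs(np.mean(np.exp(1j * phase), axis=0))

        fs = 250
        t = np.arange(0, 1, 1 / fs)
        rng = np.random.default_rng(4)
        # 40 trials of a phase-consistent 6 Hz (theta) response plus noise.
        trials = np.sin(2 * np.pi * 6 * t) + rng.normal(size=(40, t.size))

        pli = phase_locking_index(trials, fs, band=(4, 8))
        print(f"Mean theta PLI: {pli.mean():.2f}")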

  3. A dynamic appearance descriptor approach to facial actions temporal modeling.

    PubMed

    Jiang, Bihan; Valstar, Michel; Martinez, Brais; Pantic, Maja

    2014-02-01

    Both the configuration and the dynamics of facial expressions are crucial for the interpretation of human facial behavior. Yet to date, the vast majority of reported efforts in the field either do not take the dynamics of facial expressions into account, or focus only on prototypic facial expressions of the six basic emotions. Facial dynamics can be explicitly analyzed by detecting the constituent temporal segments of Facial Action Coding System (FACS) Action Units (AUs): onset, apex, and offset. In this paper, we present a novel approach to explicit analysis of the temporal dynamics of facial actions using the dynamic appearance descriptor Local Phase Quantization from Three Orthogonal Planes (LPQ-TOP). Temporal segments are detected by combining a discriminative classifier that detects the temporal segments on a frame-by-frame basis with Markov models that enforce temporal consistency over the whole episode. The system is evaluated in detail in database-dependent experiments on the MMI facial expression database, the UNBC-McMaster pain database, the SAL database and the GEMEP-FERA dataset, and in cross-database experiments using the Cohn-Kanade and SEMAINE databases. The comparison with other state-of-the-art methods shows that the proposed LPQ-TOP method outperforms the other approaches for the problem of AU temporal segment detection, and that overall AU activation detection benefits from dynamic appearance information.
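
    The two-stage design described here (a frame-level classifier whose outputs are smoothed by a Markov model enforcing the legal neutral-onset-apex-offset ordering) can be illustrated with a small Viterbi decoder. The transition matrix below is an invented example, not the paper's trained model.

        import numpy as np

        states = ["neutral", "onset", "apex", "offset"]
        # Transitions allow staying put or advancing through the legal cycle
        # neutral -> onset -> apex -> offset -> neutral.
        A = np.array([[0.90, 0.10, 0.00, 0.00],
                      [0.00, 0.80, 0.20, 0.00],
                      [0.00, 0.00, 0.85, 0.15],
                      [0.25, 0.00, 0.00, 0.75]])

        def viterbi(frame_probs, A, start=0):
            """Most likely segment sequence given per-frame posteriors."""
            T, S = frame_probs.shape
            logd = np.full((T, S), -np.inf)
            back = np.zeros((T, S), dtype=int)
            logd[0, start] = np.log(frame_probs[0, start] + 1e-12)
            logA = np.log(A + 1e-12)
            for t in range(1, T):
                scores = logd[t - 1][:, None] + logA     # (prev, next)
                back[t] = scores.argmax(axis=0)
                logd[t] = scores.max(axis=0) + np.log(frame_probs[t] + 1e-12)
            path = [int(logd[-1].argmax())]
            for t in range(T - 1, 0, -1):
                path.append(int(back[t, path[-1]]))
            return [states[s] for s in reversed(path)]

        # Noisy per-frame posteriors (rows sum to 1), standing in for the
        # output of a discriminative classifier fed appearance features.
        rng = np.random.default_rng(5)
        probs = rng.dirichlet(np.ones(4), size=30)
        print(viterbi(probs, A))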

  4. Functionally dissociated aspects in anterior and posterior electrocortical processing of facial threat.

    PubMed

    Schutter, Dennis J L G; de Haan, Edward H F; van Honk, Jack

    2004-06-01

    The angry facial expression is an important socially threatening stimulus argued to have evolved to regulate social hierarchies. In the present study, event-related potentials (ERP) were used to investigate the involvement and temporal dynamics of the frontal and parietal regions in the processing of angry facial expressions. Angry, happy and neutral faces were shown to eighteen healthy right-handed volunteers in a passive viewing task. Stimulus-locked ERPs were recorded from the frontal and parietal scalp sites. The P200, N300 and early contingent negativity variation (eCNV) components of the electric brain potentials were investigated. Analyses revealed statistically significant reductions in P200 amplitudes for the angry facial expression at both frontal and parietal electrode sites. Furthermore, apart from being strongly associated with the anterior P200, the N300 was also more negative for the angry facial expression in the anterior regions. Finally, the eCNV was more pronounced over the parietal sites for the angry facial expressions. The present study demonstrated specific electrocortical correlates underlying the processing of angry facial expressions in the anterior and posterior brain sectors. The P200 is argued to indicate valence tagging by a fast and early detection mechanism. The lowered N300 with an anterior distribution for the angry facial expressions indicates more elaborate evaluation of stimulus relevance. The fact that the P200 and the N300 are highly correlated suggests that they reflect different stages of the same anterior evaluation mechanism. The more pronounced posterior eCNV suggests sustained attention to socially threatening information. Copyright 2004 Elsevier B.V.

  5. Speech-like orofacial oscillations in stump-tailed macaque (Macaca arctoides) facial and vocal signals.

    PubMed

    Toyoda, Aru; Maruhashi, Tamaki; Malaivijitnond, Suchinda; Koda, Hiroki

    2017-10-01

    Speech is unique to humans and is characterized by facial actions with ∼5 Hz oscillations of lip, mouth or jaw movements. Lip-smacking, a facial display of primates characterized by oscillatory actions involving the vertical opening and closing of the jaw and lips, exhibits stable 5-Hz oscillation patterns, matching that of speech, suggesting that lip-smacking is a precursor of speech. We tested whether facial or vocal actions exhibiting the same oscillation rate are found across a wide range of facial and vocal displays in various social contexts, exhibiting diversity among species. We observed facial and vocal actions of wild stump-tailed macaques (Macaca arctoides), and selected video clips including facial displays (teeth chattering; TC), panting calls, and feeding. Ten open-to-open mouth durations during TC and feeding and five amplitude peak-to-peak durations in panting were analyzed. The facial display (TC) and vocalization (panting) oscillated at 5.74 ± 1.19 and 6.71 ± 2.91 Hz, respectively, similar to the reported lip-smacking of long-tailed macaques and the speech of humans. These results indicate a common mechanism for the central pattern generator underlying orofacial movements, which would later evolve into speech. Similar oscillations in panting, which evolved from muscular control different from that of orofacial actions, suggest sensory foundations for the perceptual saliency particular to 5-Hz rhythms in macaques. This supports the pre-adaptation hypothesis of speech evolution, which states that a central pattern generator for 5-Hz facial oscillations and a perceptual background tuned to 5-Hz actions existed in the common ancestors of macaques and humans before the emergence of speech. © 2017 Wiley Periodicals, Inc.
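
    The oscillation rates reported here can be recovered from a mouth-aperture time series by simple spectral analysis: the dominant frequency of the signal should sit near 5 Hz for speech-like displays. A brief sketch with a synthetic signal (the frame rate and frequency are illustrative, not the study's measurements):

        import numpy as np

        fs = 30                          # video frame rate, frames/s (assumed)
        t = np.arange(0, 10, 1 / fs)     # 10 s of mouth-aperture samples
        rng = np.random.default_rng(6)
        aperture = np.sin(2 * np.pi * 5.7 * t) + 0.3 * rng.normal(size=t.size)

        spectrum = np.abs(np.fft.rfft(aperture - aperture.mean()))
        freqs = np.fft.rfftfreq(t.size, d=1 / fs)
        peak = freqs[spectrum.argmax()]
        print(f"Dominant orofacial rhythm: {peak:.1f} Hz")   # ~5.7 Hz here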

  6. Neural processing of fearful and happy facial expressions during emotion-relevant and emotion-irrelevant tasks: a fixation-to-feature approach

    PubMed Central

    Neath-Tavares, Karly N.; Itier, Roxane J.

    2017-01-01

    Research suggests an important role of the eyes and mouth for discriminating facial expressions of emotion. A gaze-contingent procedure was used to test the impact of fixation to facial features on the neural response to fearful, happy and neutral facial expressions in an emotion discrimination (Exp. 1) and an oddball detection (Exp. 2) task. The N170 was the only eye-sensitive ERP component, and this sensitivity did not vary across facial expressions. In both tasks, compared to neutral faces, responses to happy expressions were seen as early as 100–120 ms occipitally, while responses to fearful expressions started around 150 ms, on or after the N170, at both occipital and lateral-posterior sites. Analyses of scalp topographies revealed different distributions of these two emotion effects across most of the epoch. Emotion processing interacted with fixation location at different times between tasks. Results suggest a role of both the eyes and mouth in the neural processing of fearful expressions and of the mouth in the processing of happy expressions, before 350 ms. PMID:27430934

  7. Effects of dynamic information in recognising facial expressions on dimensional and categorical judgments.

    PubMed

    Fujimura, Tomomi; Suzuki, Naoto

    2010-01-01

    We investigated the effects of dynamic information on decoding facial expressions. A dynamic face entailed a change from a neutral to a full-blown expression, whereas a static face included only the full-blown expression. Sixty-eight participants were divided into two groups, the dynamic condition and the static condition. The facial stimuli expressed eight kinds of emotions (excited, happy, calm, sleepy, sad, angry, fearful, and surprised) according to a dimensional perspective. Participants evaluated each facial stimulus using two methods, the Affect Grid (Russell et al., 1989, Personality and Social Psychology, 29, 497-510) and the forced-choice task, allowing for dimensional and categorical judgment interpretations. For activation ratings in dimensional judgments, the results indicated that dynamic calm faces (low-activation expressions) were rated as less activated than static faces. For categorical judgments, dynamic excited, happy, and fearful faces, which are high- and middle-activation expressions, had higher ratings than those under the static condition. These results suggest that the beneficial effect of dynamic information depends on the emotional properties of facial expressions.

  8. Patterns of Eye Movements When Observers Judge Female Facial Attractiveness

    PubMed Central

    Zhang, Yan; Wang, Xiaoying; Wang, Juan; Zhang, Lili; Xiang, Yu

    2017-01-01

    The purpose of the present study was to explore a fixed model for explicit judgments of attractiveness and to infer which features are important in judging facial attractiveness. Behavioral studies on the perceptual cues for female facial attractiveness have implied three potentially important features: averageness, symmetry, and sexual dimorphism. However, these studies did not explain which regions of facial images influence judgments of attractiveness. Therefore, the present research recorded the eye movements of 24 male participants and 19 female participants as they rated a series of 30 photographs of female faces for attractiveness. Results demonstrated the following: (1) fixation is longer and more frequent on the noses of female faces than on their eyes and mouths (no difference exists between the eyes and the mouth); (2) the average pupil diameter at the nose region is bigger than that at the eyes and mouth (no difference exists between the eyes and the mouth); (3) the number of fixations of male participants was significantly greater than that of female participants; (4) observers first fixate on the eyes and mouth (no difference exists between the eyes and the mouth) before fixating on the nose area. In general, participants attend predominantly to the nose to form attractiveness judgments. The results of this study add a new dimension to the existing literature on the judgment of facial attractiveness. The major contribution of the present study is the finding that the area of the nose is vital in the judgment of facial attractiveness. This finding establishes a contribution of partial processing to female facial attractiveness judgments during eye-tracking. PMID:29209242
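
    Gaze analyses of this kind reduce to assigning each fixation to an area of interest (eyes, nose, mouth) and aggregating counts, durations and pupil diameters per region. The bookkeeping can be sketched as follows; the AOI boxes and fixation records are invented for illustration.

        import numpy as np

        # Areas of interest as (x_min, y_min, x_max, y_max) image coordinates.
        AOIS = {"eyes": (20, 30, 80, 45),
                "nose": (40, 45, 60, 65),
                "mouth": (35, 65, 65, 80)}

        def aoi_of(x, y):
            for name, (x0, y0, x1, y1) in AOIS.items():
                if x0 <= x <= x1 and y0 <= y <= y1:
                    return name
            return "other"

        # Fixations as (x, y, duration_ms, pupil_mm); placeholder records.
        fixations = [(50, 55, 310, 3.4), (30, 38, 180, 3.1),
                     (48, 60, 420, 3.6), (50, 72, 150, 3.0)]

        stats = {}
        for x, y, dur, pupil in fixations:
            s = stats.setdefault(aoi_of(x, y), {"n": 0, "dur": 0, "pupil": []})
            s["n"] += 1
            s["dur"] += dur
            s["pupil"].append(pupil)

        for name, s in stats.items():
            print(f"{name}: {s['n']} fixations, {s['dur']} ms total, "
                  f"mean pupil {np.mean(s['pupil']):.2f} mm")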

  9. Patterns of Eye Movements When Observers Judge Female Facial Attractiveness.

    PubMed

    Zhang, Yan; Wang, Xiaoying; Wang, Juan; Zhang, Lili; Xiang, Yu

    2017-01-01

    The purpose of the present study was to explore a fixed model of the eye movements underlying explicit attractiveness judgments and to infer which features are important in judging facial attractiveness. Behavioral studies on the perceptual cues for female facial attractiveness have implied three potentially important features: averageness, symmetry, and sexual dimorphism. However, these studies did not explain which regions of facial images influence judgments of attractiveness. The present research therefore recorded the eye movements of 24 male and 19 female participants as they rated a series of 30 photographs of female faces for attractiveness. Results demonstrated the following: (1) fixation was longer and more frequent on the nose of female faces than on the eyes or mouth (with no difference between the eyes and the mouth); (2) the average pupil diameter was larger at the nose region than at the eyes or mouth (again with no difference between the eyes and the mouth); (3) male participants made significantly more fixations than female participants; and (4) observers fixated first on the eyes and mouth (with no difference between them) before fixating on the nose area. In general, participants attended predominantly to the nose when forming attractiveness judgments. These results add a new dimension to the existing literature on judgments of facial attractiveness. The major contribution of the present study is the finding that the nose area is vital in judging facial attractiveness, which establishes a contribution of partial (featural) processing to female facial attractiveness judgments during eye tracking.

  10. Facial emotion recognition ability: psychiatry nurses versus nurses from other departments.

    PubMed

    Gultekin, Gozde; Kincir, Zeliha; Kurt, Merve; Catal, Yasir; Acil, Asli; Aydin, Aybike; Özcan, Mualla; Delikkaya, Busra N; Kacar, Selma; Emul, Murat

    2016-12-01

    Facial emotion recognition is a basic element of non-verbal communication. Although some researchers have shown that recognizing facial expressions may be important in the interaction between doctors and patients, there are no studies concerning facial emotion recognition in nurses. Here, we aimed to investigate facial emotion recognition ability in nurses and to compare this ability between nurses from psychiatry and other departments. In this cross-sectional study, sixty-seven nurses were divided into two groups according to their departments: psychiatry (n=31) and other departments (n=36). A Facial Emotion Recognition Test, constructed from a set of photographs from Ekman and Friesen's book "Pictures of Facial Affect", was administered to all participants. In the whole group, the most accurately recognized facial emotion was happiness (99.14%), while the least accurately recognized was fear (47.71%). There were no significant differences between the two groups in mean accuracy rates for happy, sad, fearful, angry, or surprised facial expressions (all p>0.05). The ability to recognize disgusted and neutral facial emotions tended to be better in the other nurses than in the psychiatry nurses (p=0.052 and p=0.053, respectively). Conclusion: This study was the first to reveal no difference in facial emotion recognition ability between psychiatry nurses and non-psychiatry nurses. In medical education curricula throughout the world, no specific training program is scheduled for recognizing patients' emotional cues. We consider that improving the ability of medical staff to recognize facial emotion expressions might be beneficial in reducing inappropriate patient-staff interactions.

  11. [Apoptosis and expression of apoptosis-related proteins in experimental different denervated guinea-pig facial muscle].

    PubMed

    Hui, Lian; Wei, Hong-Quan; Li, Xiao-Tian; Guan, Chao; Ren, Zhong

    2005-02-01

    To study apoptosis and the expression of apoptosis-related proteins in differently denervated guinea-pig facial muscle. An experimental model was established in guinea pigs by compressing the facial nerve for 30 seconds (reinnervated group) or resecting the facial nerve (denervated group). The TUNEL method and an immunohistochemical technique (SABC) were applied to detect apoptosis and the expression of the apoptosis-related proteins bcl-2 and bax from the 1st to the 8th week after operation. Denervated facial muscle showed a consistent increase in DNA fragmentation, on average from (34.4 +/- 4.6)% to (38.2 +/- 10.6)%, from the 1st to the 8th week after operation. Reinnervated facial muscle showed a temporary increase in DNA fragmentation, after which the muscle fiber nuclei showed decreasing DNA fragmentation as facial nerve function recovered, eventually returning to normal, on average from (32.0 +/- 8.03)% to (5.6 +/- 3.5)% over the same period. In the denervated group, bcl-2 and bax were both strongly expressed; in the reinnervated group, bcl-2 was expressed consistently, while bax expression disappeared later as facial nerve function recovered. DNA fragmentation and the expression of apoptosis-related proteins in denervated muscle are a general reaction to denervation. bcl-2 may allow early apoptotic muscle fibers to survive until reinnervation. It is concluded that the proteins controlling apoptosis may inform possible therapeutic interventions to reduce the rate of muscle fiber death in denervation atrophy in the absence of effective primary treatment.

  12. Parental catastrophizing about children's pain and selective attention to varying levels of facial expression of pain in children: a dot-probe study.

    PubMed

    Vervoort, Tine; Caes, Line; Crombez, Geert; Koster, Ernst; Van Damme, Stefaan; Dewitte, Marieke; Goubert, Liesbet

    2011-08-01

    The attentional demand of pain has primarily been investigated within an intrapersonal context; little is known about observers' attentional processing of another's pain. The present study investigated, within a sample of parents (n=65; 51 mothers, 14 fathers) of school children, parental selective attention to children's facial displays of pain and the moderating roles of the child's facial expressiveness of pain and of parental catastrophizing about their child's pain. Parents performed a dot-probe task in which children's facial displays of pain (of varying expressiveness) were presented. Findings provided evidence of parental selective attention to child pain displays. Low facial displays of pain were sufficient to engage parents' attention toward the location of threat, and did so as effectively as higher facial displays of pain. Severity of facial displays of pain had a nonspatial effect on attention; that is, interference (i.e., delayed responding) increased with increasing facial expressiveness. This interference effect was particularly pronounced for high-catastrophizing parents, suggesting that being confronted with increasing child pain displays becomes particularly demanding for them. Finally, parents with higher levels of catastrophizing increasingly attended away from low pain expressions, whereas selective attention to high-pain expressions did not differ between high- and low-catastrophizing parents. Theoretical implications and further research directions are discussed. Copyright © 2011 International Association for the Study of Pain. Published by Elsevier B.V. All rights reserved.
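
    In a dot-probe design like this, selective attention is conventionally indexed by a bias score: mean reaction time on incongruent trials (probe replaces the neutral face) minus congruent trials (probe replaces the pain face). A minimal sketch with a hypothetical trial format:

    ```python
    # Standard dot-probe attentional-bias index. The trial dict format and
    # the numbers are assumptions for illustration, not the study's data.

    def bias_score(trials):
        """trials: list of dicts with 'congruent' (bool) and 'rt_ms' (float).
        Positive score = attention toward the pain face (faster responding
        when the probe replaces it); negative = attentional avoidance."""
        cong = [t["rt_ms"] for t in trials if t["congruent"]]
        incong = [t["rt_ms"] for t in trials if not t["congruent"]]
        return sum(incong) / len(incong) - sum(cong) / len(cong)

    trials = [
        {"congruent": True,  "rt_ms": 512.0},
        {"congruent": True,  "rt_ms": 498.0},
        {"congruent": False, "rt_ms": 540.0},
        {"congruent": False, "rt_ms": 525.0},
    ]
    print(f"attentional bias: {bias_score(trials):+.1f} ms")  # +27.5 ms here
    ```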

  13. Brain Responses to Dynamic Facial Expressions: A Normative Meta-Analysis.

    PubMed

    Zinchenko, Oksana; Yaple, Zachary A; Arsalidou, Marie

    2018-01-01

    Identifying facial expressions is crucial for social interactions. Functional neuroimaging studies show that a set of brain areas, such as the fusiform gyrus and amygdala, become active when viewing emotional facial expressions. The majority of functional magnetic resonance imaging (fMRI) studies investigating face perception employ static images of faces; however, studies that use dynamic facial expressions (e.g., videos) are accumulating and suggest that dynamic presentation may be more sensitive and more ecologically valid for investigating face processing. Using quantitative fMRI meta-analysis, the present study examined the concordance of brain regions associated with viewing dynamic facial expressions. We analyzed data from 216 participants across 14 studies, which reported coordinates for 28 experiments. Our analysis revealed concordant activation in the bilateral fusiform and middle temporal gyri, the left amygdala, the left declive of the cerebellum and the right inferior frontal gyrus. These regions are discussed in terms of their relation to models of face processing.
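
    Coordinate-based meta-analyses of this kind typically follow the activation-likelihood-estimation (ALE) logic: each reported focus is smoothed with a 3D Gaussian, per-experiment maps are combined, and voxels where experiments converge score highly. The abstract does not name the exact algorithm, so the sketch below is a toy illustration with an assumed grid and kernel width, not the authors' pipeline.

    ```python
    import numpy as np

    GRID = (40, 48, 40)        # coarse voxel grid (assumption)
    SIGMA_VOX = 2.0            # Gaussian kernel width in voxels (assumption)

    def modeled_activation(foci):
        """foci: list of (x, y, z) voxel coordinates from one experiment.
        Returns a map where each focus is blurred with a 3D Gaussian."""
        xs, ys, zs = np.indices(GRID)
        ma = np.zeros(GRID)
        for fx, fy, fz in foci:
            d2 = (xs - fx) ** 2 + (ys - fy) ** 2 + (zs - fz) ** 2
            ma = np.maximum(ma, np.exp(-d2 / (2 * SIGMA_VOX ** 2)))
        return ma

    # Two toy "experiments" with nearby foci -> convergence at that site.
    experiments = [[(20, 30, 18)], [(21, 29, 19), (10, 12, 25)]]
    maps = [modeled_activation(f) for f in experiments]

    # ALE combines per-experiment maps as 1 - prod(1 - MA_i), i.e. the
    # probability that at least one experiment activates each voxel.
    ale = 1.0 - np.prod([1.0 - m for m in maps], axis=0)
    print(ale.max(), np.unravel_index(ale.argmax(), GRID))
    ```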

  14. The Automaticity of Emotional Face-Context Integration

    PubMed Central

    Aviezer, Hillel; Dudarev, Veronica; Bentin, Shlomo; Hassin, Ran R.

    2011-01-01

    Recent studies have demonstrated that context can dramatically influence the recognition of basic facial expressions, yet the nature of this phenomenon is largely unknown. In the present paper we begin to characterize the underlying process of face-context integration; specifically, we examine whether it is a relatively controlled or an automatic process. In Experiment 1, participants were motivated and instructed to avoid using the context while categorizing contextualized facial expressions, or were led to believe that the context was irrelevant. Nevertheless, they were unable to disregard the context, which exerted a strong effect on their emotion recognition. In Experiment 2, participants categorized contextualized facial expressions while engaged in a concurrent working memory task. Despite the load, the context exerted a strong influence on their recognition of facial expressions. These results suggest that facial expressions and their body contexts are integrated in an unintentional, uncontrollable, and relatively effortless manner. PMID:21707150

  15. Emotion elicitor or emotion messenger? Subliminal priming reveals two faces of facial expressions.

    PubMed

    Ruys, Kirsten I; Stapel, Diederik A

    2008-06-01

    Facial emotional expressions can serve both as emotional stimuli and as communicative signals. The research reported here was conducted to illustrate how responses to both roles of facial emotional expressions unfold over time. As an emotion elicitor, a facial emotional expression (e.g., a disgusted face) activates a response that is similar to responses to other emotional stimuli of the same valence (e.g., a dirty, nonflushed toilet). As an emotion messenger, the same facial expression (e.g., a disgusted face) serves as a communicative signal by also activating the knowledge that the sender is experiencing a specific emotion (e.g., the sender feels disgusted). By varying the duration of exposure to disgusted, fearful, angry, and neutral faces in two subliminal-priming studies, we demonstrated that responses to faces as emotion elicitors occur prior to responses to faces as emotion messengers, and that both types of responses may unfold unconsciously.

  16. Automatic Contour Extraction of Facial Organs for Frontal Facial Images with Various Facial Expressions

    NASA Astrophysics Data System (ADS)

    Kobayashi, Hiroshi; Suzuki, Seiji; Takahashi, Hisanori; Tange, Akira; Kikuchi, Kohki

    This study deals with a method for automatic contour extraction of facial features such as the eyebrows, eyes and mouth from time-series frontal face images with various facial expressions. Because Snakes, one of the best-known contour-extraction methods, has several disadvantages, we propose a new method to overcome these issues. We define an elastic contour model in order to hold the contour shape, and determine the elastic energy from the amount of deformation of this model. We also utilize an image energy obtained from the brightness differences at the control points on the elastic contour model. Applying dynamic programming, we determine the contour position at which the total of the elastic energy and the image energy becomes minimum. Using frontal facial image sequences, sampled at 1/30 s and changing from neutral to one of six typical facial expressions, obtained from 20 subjects, we have evaluated our method and found that it enables highly accurate automatic contour extraction of facial features.
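
    The optimization step described here is a classic dynamic-programming (Viterbi-style) search: each control point may move to one of several candidate positions, pairwise elastic terms penalize deformation, and image terms reward strong brightness evidence. The sketch below implements that idea for an open contour; the specific energy definitions, weights and toy data are illustrative assumptions, not the paper's exact formulation.

    ```python
    import numpy as np

    def fit_contour(candidates, edge_strength, alpha=1.0, beta=1.0):
        """candidates: (n_points, n_cands, 2) candidate xy positions.
        edge_strength: (n_points, n_cands) image evidence per candidate.
        Returns the chosen candidate index for every control point."""
        n_pts, n_cands, _ = candidates.shape
        e_img = -beta * edge_strength        # low energy where edges strong
        cost = e_img[0].copy()
        back = np.zeros((n_pts, n_cands), dtype=int)
        for i in range(1, n_pts):
            # elastic energy: squared stretch between consecutive points;
            # entry [j, k] links previous candidate j to current candidate k
            d = candidates[i][None, :, :] - candidates[i - 1][:, None, :]
            e_el = alpha * (d ** 2).sum(axis=2)
            total = cost[:, None] + e_el + e_img[i][None, :]
            back[i] = total.argmin(axis=0)
            cost = total.min(axis=0)
        # backtrack the globally minimum-energy path
        path = [int(cost.argmin())]
        for i in range(n_pts - 1, 0, -1):
            path.append(int(back[i][path[-1]]))
        return path[::-1]

    rng = np.random.default_rng(1)
    cands = rng.uniform(0, 100, (8, 5, 2))   # 8 control points, 5 candidates
    edges = rng.uniform(0, 1, (8, 5))        # toy brightness-gradient scores
    print(fit_contour(cands, edges))
    ```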

  17. Automated decoding of facial expressions reveals marked differences in children when telling antisocial versus prosocial lies.

    PubMed

    Zanette, Sarah; Gao, Xiaoqing; Brunet, Megan; Bartlett, Marian Stewart; Lee, Kang

    2016-10-01

    The current study used computer vision technology to examine the nonverbal facial expressions of children (6–11 years old) telling antisocial and prosocial lies. Children in the antisocial lying group completed a temptation resistance paradigm in which they were asked not to peek at a gift being wrapped for them; all children peeked at the gift and subsequently lied about their behavior. Children in the prosocial lying group were given an undesirable gift and asked if they liked it; all children lied about liking the gift. Nonverbal behavior was analyzed using the Computer Expression Recognition Toolbox (CERT), which employs the Facial Action Coding System (FACS) to automatically code children's facial expressions while lying. Using CERT, children's facial expressions during antisocial and prosocial lying were differentiated reliably and significantly above chance accuracy. The basic expressions of emotion that distinguished antisocial lies from prosocial lies were joy and contempt: children expressed more joy in prosocial than in antisocial lying; girls showed more joy and less contempt than boys when telling prosocial lies; and boys showed more contempt when telling prosocial lies than when telling antisocial lies. The key action units (AUs) that differentiated children's antisocial and prosocial lies were blink/eye closure, lip pucker, and lip raise on the right side. Together, these findings indicate that children's facial expressions differ when telling antisocial versus prosocial lies. The reliability of CERT in detecting such differences suggests the viability of using computer vision technology in deception research. Copyright © 2016 Elsevier Inc. All rights reserved.
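
    The classification idea is straightforward to sketch: aggregate per-frame AU intensities into one feature vector per clip and train a classifier to separate the two lie types, scoring it with cross-validation. Below, scikit-learn and synthetic data stand in for CERT output; every feature and number is a placeholder, not the study's actual analysis.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(7)
    n_clips, n_aus = 80, 20          # e.g., 20 AU channels averaged per clip
    X = rng.normal(0, 1, (n_clips, n_aus))   # stand-in for CERT AU features
    y = rng.integers(0, 2, n_clips)  # 0 = antisocial lie, 1 = prosocial lie

    # With real AU data, cross-validated accuracy significantly above chance
    # (0.5) would mirror the finding that the two lie types are separable
    # from facial behavior; random features hover near chance, as expected.
    clf = LogisticRegression(max_iter=1000)
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"cross-validated accuracy: {scores.mean():.2f}")
    ```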

  18. Expression-dependent susceptibility to face distortions in processing of facial expressions of emotion.

    PubMed

    Guo, Kun; Soornack, Yoshi; Settle, Rebecca

    2018-03-05

    Our capability of recognizing facial expressions of emotion under different viewing conditions implies the existence of an invariant expression representation. As natural visual signals are often distorted and our perceptual strategy changes with external noise level, it is essential to understand how expression perception is susceptible to face distortion and whether the same facial cues are used to process high- and low-quality face images. We systematically manipulated face image resolution (experiment 1) and blur (experiment 2), and measured participants' expression categorization accuracy, perceived expression intensity and associated gaze patterns. Our analysis revealed a reasonable tolerance to face distortion in expression perception: reducing image resolution down to 48 × 64 pixels, or increasing image blur up to 15 cycles/image, had little impact on expression assessment and the associated gaze behaviour. Further distortion led to decreased expression categorization accuracy and intensity ratings, increased reaction times and fixation durations, and a stronger central fixation bias that was not driven by distortion-induced changes in local image saliency. Interestingly, the observed distortion effects were expression-dependent, with less of a deteriorating impact on happy and surprise expressions, suggesting that this distortion-invariant facial expression perception might be achieved through a categorical model involving a non-linear configural combination of local facial features. Copyright © 2018 Elsevier Ltd. All rights reserved.
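
    The two stimulus manipulations are easy to reproduce in outline: resolution reduction by downsampling and re-upsampling, and low-pass filtering by blurring. A minimal Pillow sketch follows; the Gaussian-blur radius is only a stand-in for the paper's cycles/image cutoff, and the file names are hypothetical.

    ```python
    from PIL import Image, ImageFilter

    def reduce_resolution(img, size=(48, 64)):
        """Downsample, then upsample with nearest-neighbour so the
        pixelation (reduced resolution) is preserved at display size."""
        small = img.resize(size, Image.BILINEAR)
        return small.resize(img.size, Image.NEAREST)

    def low_pass(img, radius=4.0):
        """Approximate low-pass filtering with a Gaussian blur; the
        radius-to-cycles/image mapping here is an assumption."""
        return img.filter(ImageFilter.GaussianBlur(radius=radius))

    face = Image.open("face.png").convert("L")   # hypothetical stimulus file
    reduce_resolution(face).save("face_lowres.png")
    low_pass(face).save("face_blurred.png")
    ```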

  19. Face-body integration of intense emotional expressions of victory and defeat.

    PubMed

    Wang, Lili; Xia, Lisheng; Zhang, Dandan

    2017-01-01

    Human facial expressions can be recognized rapidly and effortlessly. However, for intense emotions from real life, positive and negative facial expressions are difficult to discriminate, and the judgment of facial expressions is biased towards simultaneously perceived body expressions. This study employed event-related potentials (ERPs) to investigate the neural dynamics involved in the integration of emotional signals from facial and body expressions of victory and defeat. Emotional expressions of professional players were used to create pictures of face-body compounds, with either matched or mismatched emotional expressions in faces and bodies. Behavioral results showed that congruent emotional information from face and body facilitated the recognition of facial expressions. ERP data revealed larger P1 amplitudes for incongruent compared to congruent stimuli. A main effect of body valence on the P1 was also observed, with enhanced amplitudes for stimuli with losing compared to winning bodies. A main effect of body expression was likewise observed on the N170 and N2, with winning bodies producing larger N170/N2 amplitudes. At a later stage, a significant interaction of congruence by body valence was found on the P3 component: winning bodies elicited larger P3 amplitudes than losing bodies when face and body conveyed congruent emotional signals. Beyond what is known from prototypical facial and body expressions, these results help us understand the complexity of emotion evaluation and categorization outside the laboratory.

  20. Face-body integration of intense emotional expressions of victory and defeat

    PubMed Central

    Wang, Lili; Xia, Lisheng; Zhang, Dandan

    2017-01-01

    Human facial expressions can be recognized rapidly and effortlessly. However, for intense emotions from real life, positive and negative facial expressions are difficult to discriminate, and the judgment of facial expressions is biased towards simultaneously perceived body expressions. This study employed event-related potentials (ERPs) to investigate the neural dynamics involved in the integration of emotional signals from facial and body expressions of victory and defeat. Emotional expressions of professional players were used to create pictures of face-body compounds, with either matched or mismatched emotional expressions in faces and bodies. Behavioral results showed that congruent emotional information from face and body facilitated the recognition of facial expressions. ERP data revealed larger P1 amplitudes for incongruent compared to congruent stimuli. A main effect of body valence on the P1 was also observed, with enhanced amplitudes for stimuli with losing compared to winning bodies. A main effect of body expression was likewise observed on the N170 and N2, with winning bodies producing larger N170/N2 amplitudes. At a later stage, a significant interaction of congruence by body valence was found on the P3 component: winning bodies elicited larger P3 amplitudes than losing bodies when face and body conveyed congruent emotional signals. Beyond what is known from prototypical facial and body expressions, these results help us understand the complexity of emotion evaluation and categorization outside the laboratory. PMID:28245245
