Science.gov

Sample records for 3d facial expression

  1. FaceWarehouse: a 3D facial expression database for visual computing.

    PubMed

    Cao, Chen; Weng, Yanlin; Zhou, Shun; Tong, Yiying; Zhou, Kun

    2014-03-01

    We present FaceWarehouse, a database of 3D facial expressions for visual computing applications. We use Kinect, an off-the-shelf RGBD camera, to capture 150 individuals aged 7-80 from various ethnic backgrounds. For each person, we captured the RGBD data of her different expressions, including the neutral expression and 19 other expressions such as mouth-opening, smile, kiss, etc. For every RGBD raw data record, a set of facial feature points on the color image such as eye corners, mouth contour, and the nose tip are automatically localized, and manually adjusted if better accuracy is required. We then deform a template facial mesh to fit the depth data as closely as possible while matching the feature points on the color image to their corresponding points on the mesh. Starting from these fitted face meshes, we construct a set of individual-specific expression blendshapes for each person. These meshes with consistent topology are assembled as a rank-3 tensor to build a bilinear face model with two attributes: identity and expression. Compared with previous 3D facial databases, for every person in our database, there is a much richer matching collection of expressions, enabling depiction of most human facial actions. We demonstrate the potential of FaceWarehouse for visual computing with four applications: facial image manipulation, face component transfer, real-time performance-based facial image animation, and facial animation retargeting from video to image. PMID:24434222
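
    The bilinear model described above can be illustrated with a small sketch: the fitted meshes form a rank-3 data tensor (vertices x identities x expressions), and a new face is synthesized by contracting a reduced core tensor with an identity weight vector and an expression weight vector. This is a minimal illustration under assumed (toy) array sizes, not the authors' code or data.

```python
import numpy as np

n_vertices = 2000                       # toy mesh resolution (3 coordinates per vertex)
n_identity, n_expression = 30, 20       # toy reduced mode sizes

# Core tensor: (3 * vertices) x identity modes x expression modes
core = np.random.rand(3 * n_vertices, n_identity, n_expression)

def synthesize_face(core, w_id, w_exp):
    """Contract the core tensor with identity and expression weights."""
    return np.einsum('vie,i,e->v', core, w_id, w_exp)

w_id = np.random.dirichlet(np.ones(n_identity))      # convex identity weights
w_exp = np.random.dirichlet(np.ones(n_expression))    # convex expression weights
mesh = synthesize_face(core, w_id, w_exp).reshape(-1, 3)
print(mesh.shape)                                      # (n_vertices, 3)
```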

  2. FaceWarehouse: A 3D Facial Expression Database for Visual Computing.

    PubMed

    Cao, Chen; Weng, Yanlin; Zhou, Shun; Tong, Yiying; Zhou, Kun

    2013-10-25

    We present FaceWarehouse, a database of 3D facial expressions for visual computing applications. We use Microsoft's Kinect system to capture 150 individuals from various ethnic backgrounds. For each person, we captured the RGBD data of her different expressions, including the neutral expression and 19 other expressions. For every raw data record, a set of facial feature points on the color image such as eye corners and mouth contour are automatically localized, and manually adjusted if better accuracy is required. We then deform a template facial mesh to fit the depth data as closely as possible while matching the feature points on the color image to their corresponding points on the mesh. Starting from these fitted face meshes, we construct a set of individual-specific expression blendshapes for each person. These meshes with consistent topology are assembled as a rank-three tensor to build a bilinear face model with two attributes, identity and expression. Compared with previous 3D facial databases, for every person in our database, there is a much richer matching collection of expressions, enabling depiction of most human facial actions. We demonstrate the potential of FaceWarehouse with four applications: facial image manipulation, face component transfer, real-time performance-based facial image animation, and facial animation retargeting from video to image. PMID:24166613

  3. 3D facial expression recognition using maximum relevance minimum redundancy geometrical features

    NASA Astrophysics Data System (ADS)

    Rabiu, Habibu; Saripan, M. Iqbal; Mashohor, Syamsiah; Marhaban, Mohd Hamiruce

    2012-12-01

    In recent years, facial expression recognition (FER) has become an attractive research area which, besides the fundamental challenges it poses, finds application in areas such as human-computer interaction, clinical psychology, lie detection, pain assessment, and neurology. Generally, approaches to FER consist of three main steps: face detection, feature extraction and expression recognition. The recognition accuracy of FER hinges immensely on the relevance of the selected features in representing the target expressions. In this article, we present a person- and gender-independent 3D facial expression recognition method using maximum relevance minimum redundancy geometrical features. The aim is to detect a compact set of features that sufficiently represents the most discriminative features between the target classes. A multi-class one-against-one SVM classifier was employed to recognize the seven facial expressions: neutral, happy, sad, angry, fear, disgust, and surprise. An average recognition accuracy of 92.2% was recorded. Furthermore, inter-database homogeneity was investigated between two independent databases, BU-3DFE and UPM-3DFE; the results showed a strong homogeneity between the two databases.
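
    As a rough, hedged sketch of the pipeline described above (not the authors' implementation): geometric features are ranked by a greedy maximum-relevance minimum-redundancy criterion, and a one-against-one SVM classifies the seven expressions. Here relevance is estimated with mutual information and redundancy with absolute feature correlation, a common approximation; the data shapes are placeholders.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.svm import SVC

def mrmr_select(X, y, k):
    relevance = mutual_info_classif(X, y)            # relevance of each feature to the label
    corr = np.abs(np.corrcoef(X, rowvar=False))       # pairwise feature redundancy proxy
    selected = [int(np.argmax(relevance))]
    while len(selected) < k:
        candidates = [j for j in range(X.shape[1]) if j not in selected]
        # score = relevance minus mean redundancy with already selected features
        scores = [relevance[j] - corr[j, selected].mean() for j in candidates]
        selected.append(candidates[int(np.argmax(scores))])
    return selected

# X: geometric distance/angle features per 3D face scan, y: expression labels (0..6)
X = np.random.rand(200, 60)
y = np.random.randint(0, 7, size=200)
features = mrmr_select(X, y, k=15)
clf = SVC(kernel='rbf', decision_function_shape='ovo').fit(X[:, features], y)
print(clf.predict(X[:5, :][:, features]))
```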

  4. Survey on RGB, 3D, Thermal, and Multimodal Approaches for Facial Expression Recognition: History, Trends, and Affect-Related Applications.

    PubMed

    Corneanu, Ciprian Adrian; Simon, Marc Oliu; Cohn, Jeffrey F; Guerrero, Sergio Escalera

    2016-08-01

    Facial expressions are an important way through which humans interact socially. Building a system capable of automatically recognizing facial expressions from images and video has been an intense field of study in recent years. Interpreting such expressions remains challenging, and much research is needed about the way they relate to human affect. This paper presents a general overview of automatic RGB, 3D, thermal and multimodal facial expression analysis. We define a new taxonomy for the field, encompassing all steps from face detection to facial expression recognition, and describe and classify the state-of-the-art methods accordingly. We also present the important datasets and the benchmarking of most influential methods. We conclude with a general discussion about trends, important questions and future lines of research. PMID:26761193

  5. 3D Face Model Dataset: Automatic Detection of Facial Expressions and Emotions for Educational Environments

    ERIC Educational Resources Information Center

    Chickerur, Satyadhyan; Joshi, Kartik

    2015-01-01

    Emotion detection using facial images is a technique that researchers have been using for the last two decades to try to analyze a person's emotional state given his/her image. Detection of various kinds of emotion using facial expressions of students in educational environment is useful in providing insight into the effectiveness of tutoring…

  6. Quantitative anatomical analysis of facial expression using a 3D motion capture system: Application to cosmetic surgery and facial recognition technology.

    PubMed

    Lee, Jae-Gi; Jung, Su-Jin; Lee, Hyung-Jin; Seo, Jung-Hyuk; Choi, You-Jin; Bae, Hyun-Sook; Park, Jong-Tae; Kim, Hee-Jin

    2015-09-01

    The topography of the facial muscles differs between males and females and among individuals of the same gender. To explain the unique expressions that people can make, it is important to define the shapes of the muscles, their associations with the skin, and their relative functions. Three-dimensional (3D) motion-capture analysis, often used to study facial expression, was used in this study to identify characteristic skin movements in males and females when they made six representative basic expressions. The movements of 44 reflective markers (RMs) positioned on anatomical landmarks were measured. Their mean displacement was large in males [ranging from 14.31 mm (fear) to 41.15 mm (anger)], and 3.35-4.76 mm smaller in females [ranging from 9.55 mm (fear) to 37.80 mm (anger)]. The percentages of RMs involved in the ten highest mean maximum displacement values in making at least one expression were 47.6% in males and 61.9% in females. The movements of the RMs were thus larger in males than in females, but involved a smaller proportion of the markers. Expanding our understanding of facial expression requires morphological studies of facial muscles and studies of related complex functionality. Conducting these together with quantitative analyses, as in the present study, will yield data valuable for medicine, dentistry, and engineering, for example, for surgical operations on facial regions, software for predicting changes in facial features and expressions after corrective surgery, and the development of face-mimicking robots. PMID:25872024

  7. 3D face recognition based on matching of facial surfaces

    NASA Astrophysics Data System (ADS)

    Echeagaray-Patrón, Beatriz A.; Kober, Vitaly

    2015-09-01

    Face recognition is an important task in pattern recognition and computer vision. In this work, a method for 3D face recognition in the presence of facial expression and pose variations is proposed. The method uses 3D shape data without color or texture information. A new matching algorithm is suggested, based on conformal mapping of original facial surfaces onto a Riemannian manifold followed by comparison of conformal and isometric invariants computed in the manifold. Experimental results are presented using common 3D face databases that contain a significant amount of expression and pose variation.

  8. Improving Social Understanding of Individuals of Intellectual and Developmental disabilities through a 3D-Facial Expression Intervention Program

    ERIC Educational Resources Information Center

    Cheng, Yufang; Chen, Shuhui

    2010-01-01

    Individuals with intellectual and developmental disabilities (IDD) have specific difficulties in cognitive social-emotional capability, which affect numerous aspects of social competence. This study evaluated the learning effects of using a 3D-emotion-system intervention program for individuals with IDD in learning socially based emotion capabilities…

  9. Real Time 3D Facial Movement Tracking Using a Monocular Camera.

    PubMed

    Dong, Yanchao; Wang, Yanming; Yue, Jiguang; Hu, Zhencheng

    2016-01-01

    The paper proposes a robust framework for 3D facial movement tracking in real time using a monocular camera. It is designed to estimate the 3D face pose and local facial animation such as eyelid movement and mouth movement. The framework first utilizes the Discriminative Shape Regression method to locate the facial feature points on the 2D image, then fuses the 2D data with a 3D face model using an Extended Kalman Filter to yield 3D facial movement information. An alternating optimizing strategy is adopted to fit to different persons automatically. Experiments show that the proposed framework can track the 3D facial movement across various poses and illumination conditions. Given the real face scale, the framework can track the eyelid with an error of 1 mm and the mouth with an error of 2 mm. The tracking result is reliable for expression analysis or mental state inference. PMID:27463714
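
    The fusion step above relies on an Extended Kalman Filter; the following much-simplified sketch shows only a linear, constant-velocity Kalman filter smoothing a single facial parameter (say, eyelid opening) from noisy per-frame measurements. All matrices and noise levels are illustrative assumptions, not values from the paper.

```python
import numpy as np

dt = 1.0 / 30.0                        # video frame interval
F = np.array([[1.0, dt], [0.0, 1.0]])  # state transition (position, velocity)
H = np.array([[1.0, 0.0]])             # only the position is observed
Q = np.eye(2) * 1e-4                   # process noise covariance
R = np.array([[1e-2]])                 # measurement noise covariance

x = np.zeros((2, 1))                   # state estimate
P = np.eye(2)                          # state covariance

def kalman_step(x, P, z):
    # Predict the next state
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the new measurement z
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (np.array([[z]]) - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Toy measurement stream: a slow opening/closing motion plus sensor noise
for z in np.sin(np.linspace(0, 3, 90)) + np.random.randn(90) * 0.1:
    x, P = kalman_step(x, P, z)
print(x.ravel())   # smoothed position and velocity estimate
```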

  10. Facial-paralysis diagnostic system based on 3D reconstruction

    NASA Astrophysics Data System (ADS)

    Khairunnisaa, Aida; Basah, Shafriza Nisha; Yazid, Haniza; Basri, Hassrizal Hassan; Yaacob, Sazali; Chin, Lim Chee

    2015-05-01

    The diagnostic process of facial paralysis requires qualitative assessment for classification and treatment planning. This results in inconsistent assessments that can potentially affect treatment planning. We developed a facial-paralysis diagnostic system based on 3D reconstruction of RGB and depth data using a standard structured-light camera - Kinect 360 - and an implementation of Active Appearance Models (AAM). We also proposed a quantitative assessment for facial paralysis based on a triangular model. In this paper, we report on the design and development process, including preliminary experimental results. Our preliminary experimental results demonstrate the feasibility of our quantitative assessment system for diagnosing facial paralysis.

  11. Quasi-Facial Communication for Online Learning Using 3D Modeling Techniques

    ERIC Educational Resources Information Center

    Wang, Yushun; Zhuang, Yueting

    2008-01-01

    Online interaction with 3D facial animation is an alternative way of face-to-face communication for distance education. 3D facial modeling is essential for establishing virtual educational environments. This article presents a novel 3D facial modeling solution that facilitates quasi-facial communication for online learning. Our algorithm builds…

  12. Holistic facial expression classification

    NASA Astrophysics Data System (ADS)

    Ghent, John; McDonald, J.

    2005-06-01

    This paper details a procedure for classifying facial expressions. This is a growing and relatively new type of problem within computer vision. One of the fundamental problems when classifying facial expressions in previous approaches is the lack of a consistent method of measuring expression. This paper solves this problem by computing the Facial Expression Shape Model (FESM). This statistical model of facial expression is based on an anatomical analysis of facial expression called the Facial Action Coding System (FACS). We use the term Action Unit (AU) to describe a movement of one or more muscles of the face, and all expressions can be described using the AUs defined by FACS. The shape model is calculated by marking the face with 122 landmark points. We use Principal Component Analysis (PCA) to analyse how the landmark points move with respect to each other and to lower the dimensionality of the problem. Using the FESM in conjunction with Support Vector Machines (SVM), we classify facial expressions. SVMs are a powerful machine learning technique based on optimisation theory. This project is largely concerned with statistical models, machine learning techniques and psychological tools used in the classification of facial expression. This holistic approach to expression classification provides a means for a level of interaction with a computer that is a significant step forward in human-computer interaction.
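
    A minimal sketch of the shape-model step described above, under assumed data shapes (not the original FESM code): the 122 landmark points per face are flattened into shape vectors, PCA captures how the landmarks co-vary and reduces dimensionality, and an SVM classifies expressions in the reduced shape space.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

n_faces, n_landmarks = 300, 122
shapes = np.random.rand(n_faces, n_landmarks * 2)   # flattened (x, y) landmark vectors
labels = np.random.randint(0, 6, size=n_faces)      # expression / AU-combination labels

pca = PCA(n_components=0.95)                         # keep 95% of the shape variance
shape_params = pca.fit_transform(shapes)             # low-dimensional expression shape parameters
print(shape_params.shape, pca.explained_variance_ratio_[:5])

clf = SVC(kernel='rbf').fit(shape_params, labels)    # classify expressions in shape-parameter space
```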

  13. Large-scale objective phenotyping of 3D facial morphology

    PubMed Central

    Hammond, Peter; Suttie, Michael

    2012-01-01

    Abnormal phenotypes have played significant roles in the discovery of gene function, but organized collection of phenotype data has been overshadowed by developments in sequencing technology. In order to study phenotypes systematically, large-scale projects with standardized objective assessment across populations are considered necessary. The report of the 2006 Human Variome Project meeting recommended documentation of phenotypes through electronic means by collaborative groups of computational scientists and clinicians using standard, structured descriptions of disease-specific phenotypes. In this report, we describe progress over the past decade in 3D digital imaging and shape analysis of the face, and future prospects for large-scale facial phenotyping. Illustrative examples are given throughout using a collection of 1107 3D face images of healthy controls and individuals with a range of genetic conditions involving facial dysmorphism. PMID:22434506

  14. An optical real-time 3D measurement for analysis of facial shape and movement

    NASA Astrophysics Data System (ADS)

    Zhang, Qican; Su, Xianyu; Chen, Wenjing; Cao, Yiping; Xiang, Liqun

    2003-12-01

    Optical non-contact 3-D shape measurement provides a novel and useful tool for the analysis of facial shape and movement in pre-surgical and post-surgical regular checks. In this article we present a system which allows a precise 3-D visualization of the patient's face before and after craniofacial surgery. We discuss real-time 3-D image capture and processing, and the 3-D phase-unwrapping method used to recover complex shape deformation during mouth movement. The results of real-time measurement of facial shape and movement will be helpful in achieving better outcomes in plastic surgery.

  15. Facial expression recognition with facial parts based sparse representation classifier

    NASA Astrophysics Data System (ADS)

    Zhi, Ruicong; Ruan, Qiuqi

    2009-10-01

    Facial expressions play an important role in human communication. The understanding of facial expressions is a basic requirement in the development of next-generation human-computer interaction systems. Research shows that intrinsic facial features lie in low-dimensional facial subspaces. This paper presents a facial-parts-based facial expression recognition system with a sparse representation classifier. The sparse representation classifier exploits sparse representation to select face features and classify facial expressions. The sparse solution is obtained by solving an l1-norm minimization problem with a linear combination constraint. Experimental results show that sparse representation is efficient for facial expression recognition and that the sparse representation classifier obtains much higher recognition accuracies than the other methods compared.
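
    A hedged sketch of a sparse representation classifier of the kind described above (not the authors' code): the test feature vector is coded as a sparse linear combination of the training samples, here with an l1-regularized least-squares (Lasso) solver standing in for the exact l1-norm minimization, and the class whose samples give the smallest reconstruction residual is returned.

```python
import numpy as np
from sklearn.linear_model import Lasso

def src_predict(train_X, train_y, test_x, alpha=0.01):
    # Dictionary columns are the training feature vectors
    D = train_X.T
    coder = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
    coder.fit(D, test_x)
    code = coder.coef_
    residuals = {}
    for c in np.unique(train_y):
        mask = (train_y == c)
        partial = np.zeros_like(code)
        partial[mask] = code[mask]
        residuals[c] = np.linalg.norm(test_x - D @ partial)   # class-wise reconstruction error
    return min(residuals, key=residuals.get)

train_X = np.random.rand(120, 64)       # facial-part feature vectors (toy data)
train_y = np.random.randint(0, 6, 120)  # expression labels
print(src_predict(train_X, train_y, np.random.rand(64)))
```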

  16. Analyzing the relevance of shape descriptors in automated recognition of facial gestures in 3D images

    NASA Astrophysics Data System (ADS)

    Rodriguez A., Julian S.; Prieto, Flavio

    2013-03-01

    This paper presents and analyzes the results of applying shape descriptors (DESIRE and Spherical Spin Image) to the recognition of facial gestures in 3D images. DESIRE is a descriptor composed of depth images, silhouettes and rays extended from a polygonal mesh, whereas the Spherical Spin Image (SSI), associated with a polygonal mesh point, is a 2D histogram built from neighboring points using position information that captures features of the local shape. The database used contains images of facial expressions which, on average, were recognized at 88.16% using a neural network and 91.11% with a Bayesian classifier in the case of the first descriptor; in contrast, the second descriptor only achieves, on average, 32% and 23.6% using the same classifiers, respectively.

  17. Multi-curve spectrum representation of facial movements and expressions

    NASA Astrophysics Data System (ADS)

    Pei, Li; Zhang, Zhijiang; Chen, Zhixiang; Zeng, Dan

    2009-07-01

    This paper presents a method for multi-curve spectrum representation of facial movements and expressions. Based on a 3DMCF (3D muscle-controlled facial) model, facial movements and expressions are controlled by 21 virtual muscles, so they can be described by a group of time-varying curves of normalized muscle contraction, called a multi-curve spectrum. The structure and basic characteristics of the multi-curve spectrum are introduced. The proposed method requires only a small quantity of data, is easy to apply, and its performance is among the best. It can also be used to transfer facial animation between different faces.

  18. 3D face recognition under expressions, occlusions, and pose variations.

    PubMed

    Drira, Hassen; Ben Amor, Boulbaba; Srivastava, Anuj; Daoudi, Mohamed; Slama, Rim

    2013-09-01

    We propose a novel geometric framework for analyzing 3D faces, with the specific goals of comparing, matching, and averaging their shapes. Here we represent facial surfaces by radial curves emanating from the nose tips and use elastic shape analysis of these curves to develop a Riemannian framework for analyzing shapes of full facial surfaces. This representation, along with the elastic Riemannian metric, seems natural for measuring facial deformations and is robust to challenges such as large facial expressions (especially those with open mouths), large pose variations, missing parts, and partial occlusions due to glasses, hair, and so on. This framework is shown to be promising from both empirical and theoretical perspectives. In terms of the empirical evaluation, our results match or improve upon the state-of-the-art methods on three prominent databases: FRGCv2, GavabDB, and Bosphorus, each posing a different type of challenge. From a theoretical perspective, this framework allows for formal statistical inferences, such as the estimation of missing facial parts using PCA on tangent spaces and computing average shapes. PMID:23868784

  19. Anthropological facial approximation in three dimensions (AFA3D): computer-assisted estimation of the facial morphology using geometric morphometrics.

    PubMed

    Guyomarc'h, Pierre; Dutailly, Bruno; Charton, Jérôme; Santos, Frédéric; Desbarats, Pascal; Coqueugniot, Hélène

    2014-11-01

    This study presents Anthropological Facial Approximation in Three Dimensions (AFA3D), a new computerized method for estimating face shape based on computed tomography (CT) scans of 500 French individuals. Facial soft tissue depths are estimated based on age, sex, corpulence, and craniometrics, and projected using reference planes to obtain the global facial appearance. Position and shape of the eyes, nose, mouth, and ears are inferred from cranial landmarks through geometric morphometrics. The 100 estimated cutaneous landmarks are then used to warp a generic face to the target facial approximation. A validation by re-sampling on a subsample demonstrated an average accuracy of c. 4 mm for the overall face. The resulting approximation is an objective probable facial shape, but is also synthetic (i.e., without texture), and therefore needs to be enhanced artistically prior to its use in forensic cases. AFA3D, integrated in the TIVMI software, is available freely for further testing. PMID:25088006

  20. Spontaneous Facial Mimicry in Response to Dynamic Facial Expressions

    ERIC Educational Resources Information Center

    Sato, Wataru; Yoshikawa, Sakiko

    2007-01-01

    Based on previous neuroscientific evidence indicating activation of the mirror neuron system in response to dynamic facial actions, we hypothesized that facial mimicry would occur while subjects viewed dynamic facial expressions. To test this hypothesis, dynamic/static facial expressions of anger/happiness were presented using computer-morphing…

  1. Landmark detection from 3D mesh facial models for image-based analysis of dysmorphology.

    PubMed

    Chendeb, Marwa; Tortorici, Claudio; Al Muhairi, Hassan; Al Safar, Habiba; Linguraru, Marius; Werghi, Naoufel

    2015-01-01

    Facial landmark detection is a task of interest for facial dysmorphology, an important factor in the diagnosis of genetic conditions. In this paper, we propose a framework for feature point detection from 3D face images. The method is based on a 3D Constrained Local Model (CLM) which learns both global variations in the 3D facial scan and local changes around every vertex landmark. Compared to state-of-the-art methods, our framework is distinguished by the following novel aspects: 1) It operates on facial surfaces, 2) It allows fusion of shape and color information on the mesh surface, 3) It introduces the use of LBP descriptors on the mesh. We showcase our landmark detection framework on a set of scans including Down syndrome and control cases. We also validate our method through a series of quantitative experiments conducted with the publicly available Bosphorus database. PMID:26736227

  2. Measuring facial expression of emotion

    PubMed Central

    Wolf, Karsten

    2015-01-01

    Research into emotions has increased in recent decades, especially on the subject of recognition of emotions. However, studies of the facial expressions of emotion were compromised by technical problems with visible video analysis and electromyography in experimental settings. These have only recently been overcome. There have been new developments in the field of automated computerized facial recognition, allowing real-time identification of facial expressions in social environments. This review addresses three approaches to measuring facial expression of emotion and describes their specific contributions to understanding emotion in the healthy population and in persons with mental illness. Despite recent progress, studies on human emotions have been hindered by the lack of consensus on an emotion theory suited to examining the dynamic aspects of emotion and its expression. Studying expression of emotion in patients with mental health conditions for diagnostic and therapeutic purposes will profit from theoretical and methodological progress. PMID:26869846

  3. Facial dynamics and emotional expressions in facial aging treatments.

    PubMed

    Michaud, Thierry; Gassia, Véronique; Belhaouari, Lakhdar

    2015-03-01

    Facial expressions convey emotions that form the foundation of interpersonal relationships, and many of these emotions promote and regulate our social linkages. Hence, the facial aging symptomatological analysis and the treatment plan must of necessity include knowledge of the facial dynamics and the emotional expressions of the face. This approach aims to more closely meet patients' expectations of natural-looking results, by correcting age-related negative expressions while observing the emotional language of the face. This article will successively describe patients' expectations, the role of facial expressions in relational dynamics, the relationship between facial structures and facial expressions, and the way facial aging mimics negative expressions. Eventually, therapeutic implications for facial aging treatment will be addressed. PMID:25620090

  4. Improved facial outcome assessment using a 3D anthropometric mask.

    PubMed

    Claes, P; Walters, M; Clement, J

    2012-03-01

    The capacity to process three-dimensional facial surfaces to objectively assess outcomes of craniomaxillofacial care is urgently required. Available surface registration techniques depart from conventional facial anthropometrics by not including anatomical relationships in their analysis. Current registrations rely on the manual selection of areas or points that have not moved during surgery, introducing subjectivity. An improved technique is proposed based on the concept of an anthropometric mask (AM) combined with robust superimposition. The AM is equivalent to the landmark definitions used in traditional anthropometrics, but described in a spatially dense way using (~10,000) quasi-landmarks. A robust superimposition is performed to align surface images, facilitating accurate measurement of spatial differences between corresponding quasi-landmarks. The assessment describes the magnitude and direction of change objectively and can be displayed graphically. The technique was applied to three patients, without any modification and prior knowledge: a 4-year-old boy with Treacher-Collins syndrome in a resting and smiling pose; surgical correction for hemimandibular hypoplasia; and mandibular hypoplasia with staged orthognathic procedures. Comparisons were made with a reported closest-point (CP) strategy. Contrasting outcomes were found where the CP strategy resulted in anatomical implausibility whilst the AM technique was parsimonious to expected differences. PMID:22103995
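
    The superimposition step can be illustrated with a minimal sketch (assumed toy data, not the authors' robust implementation): two sets of corresponding quasi-landmarks are rigidly aligned with a Kabsch/Procrustes rotation and translation, after which per-quasi-landmark differences can be measured; the paper's robust variant additionally down-weights landmarks that have changed.

```python
import numpy as np

def rigid_superimpose(source, target):
    """Return source points rigidly aligned onto target (both n x 3)."""
    src_c, tgt_c = source.mean(0), target.mean(0)
    A, B = source - src_c, target - tgt_c
    U, _, Vt = np.linalg.svd(A.T @ B)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # avoid reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return (R @ A.T).T + tgt_c

pre = np.random.rand(10000, 3)                       # ~10,000 quasi-landmarks, pre-treatment
post = pre + np.random.randn(10000, 3) * 0.2         # simulated post-treatment surface
aligned = rigid_superimpose(pre, post)
difference = np.linalg.norm(aligned - post, axis=1)  # per-quasi-landmark change (toy units)
print(difference.mean())
```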

  5. Learning deformation model for expression-robust 3D face recognition

    NASA Astrophysics Data System (ADS)

    Guo, Zhe; Liu, Shu; Wang, Yi; Lei, Tao

    2015-12-01

    Expression change is the major cause of local plastic deformation of the facial surface. Under large expression changes, intra-class differences can exceed inter-class differences, making it difficult to recognize the same individual across facial expression changes. In this paper, an expression-robust 3D face recognition method is proposed by learning an expression deformation model. The expressions of the individuals in the training set are modeled by principal component analysis, and the main components are retained to construct the facial deformation model. For a test 3D face, the shape difference between the test face and the neutral face in the training set is used to reconstruct the expression change with the constructed deformation model. The reconstruction residual error is used for face recognition. The average recognition rate on GavabDB and a self-built database reaches 85.1% and 83%, respectively, which shows strong robustness to expression changes.
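
    A hedged sketch of the recognition rule outlined above, with assumed shapes and variable names: expression deformations (expression minus neutral) from the training set are modeled with PCA, a test face's difference from each neutral gallery face is reconstructed in that subspace, and the smallest reconstruction residual indicates the matching identity.

```python
import numpy as np
from sklearn.decomposition import PCA

train_deformations = np.random.rand(200, 3000)     # expression minus neutral, flattened vertices
expr_model = PCA(n_components=20).fit(train_deformations)

def residual(test_face, neutral_candidate):
    diff = (test_face - neutral_candidate).reshape(1, -1)
    recon = expr_model.inverse_transform(expr_model.transform(diff))
    return np.linalg.norm(diff - recon)             # small residual -> likely the same person

test_face = np.random.rand(3000)
gallery = {f"id_{i}": np.random.rand(3000) for i in range(5)}   # neutral gallery faces
best = min(gallery, key=lambda k: residual(test_face, gallery[k]))
print(best)
```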

  6. Accurate landmarking of three-dimensional facial data in the presence of facial expressions and occlusions using a three-dimensional statistical facial feature model.

    PubMed

    Zhao, Xi; Dellandréa, Emmanuel; Chen, Liming; Kakadiaris, Ioannis A

    2011-10-01

    Three-dimensional face landmarking aims at automatically localizing facial landmarks and has a wide range of applications (e.g., face recognition, face tracking, and facial expression analysis). Existing methods assume neutral facial expressions and unoccluded faces. In this paper, we propose a general learning-based framework for reliable landmark localization on 3-D facial data under challenging conditions (i.e., facial expressions and occlusions). Our approach relies on a statistical model, called 3-D statistical facial feature model, which learns both the global variations in configurational relationships between landmarks and the local variations of texture and geometry around each landmark. Based on this model, we further propose an occlusion classifier and a fitting algorithm. Results from experiments on three publicly available 3-D face databases (FRGC, BU-3-DFE, and Bosphorus) demonstrate the effectiveness of our approach, in terms of landmarking accuracy and robustness, in the presence of expressions and occlusions. PMID:21622076

  7. Digital 3D facial reconstruction of George Washington

    NASA Astrophysics Data System (ADS)

    Razdan, Anshuman; Schwartz, Jeff; Tocheri, Mathew; Hansford, Dianne

    2006-02-01

    PRISM is a focal point of interdisciplinary research in geometric modeling, computer graphics and visualization at Arizona State University. Many projects in the last ten years have involved laser scanning, geometric modeling and feature extraction from data such as archaeological vessels, bones, and human faces. This paper gives a brief overview of a recently completed project on the 3D reconstruction of George Washington (GW). The project brought together forensic anthropologists, digital artists and computer scientists in the 3D digital reconstruction of GW at ages 57, 45 and 19, including detailed heads and bodies. Although many other scanning projects, such as the Michelangelo project, have successfully captured fine details via laser scanning, our project took it a step further: to predict what the individual depicted in the sculpture might have looked like in both later and earlier years, specifically the process of accounting for reverse aging. Our base data were GW's face mask at the Morgan Library and Houdon's bust of GW at Mount Vernon, both made when GW was 53. Additionally, we scanned the statue at the Capitol in Richmond, VA, various dentures, and other items. Other measurements came from clothing and even portraits of GW. The digital GWs were then milled in high-density foam for a studio to complete the work. These will be unveiled at the opening of the new education center at Mount Vernon in fall 2006.

  8. A Multiscale Constraints Method Localization of 3D Facial Feature Points

    PubMed Central

    Li, Hong-an; Zhang, Yongxin; Li, Zhanli; Li, Huilin

    2015-01-01

    Locating facial feature points is an important task due to the widespread application of 3D human face models in medical fields. In this paper, we propose a 3D facial feature point localization method that combines relative angle histograms with multiscale constraints. First, the relative angle histogram of each vertex in a 3D point distribution model is calculated; then the cluster set of the facial feature points is determined using a clustering algorithm. Finally, the feature points are located precisely according to multiscale integral features. The experimental results show that the feature point localization accuracy of this algorithm is better than that of the localization method using relative angle histograms alone. PMID:26539244

  9. Cortical control of facial expression.

    PubMed

    Müri, René M

    2016-06-01

    The present Review deals with the motor control of facial expressions in humans. Facial expressions are a central part of human communication. Emotional face expressions have a crucial role in human nonverbal behavior, allowing a rapid transfer of information between individuals. Facial expressions can be either voluntarily or emotionally controlled. Recent studies in nonhuman primates and humans have revealed that the motor control of facial expressions has a distributed neural representation. At least five cortical regions on the medial and lateral aspects of each hemisphere are involved: the primary motor cortex, the ventral lateral premotor cortex, the supplementary motor area on the medial wall, and the rostral and caudal cingulate cortex. The results of studies in humans and nonhuman primates suggest that the innervation of the face is bilaterally controlled for the upper part and mainly contralaterally controlled for the lower part. Furthermore, the primary motor cortex, the ventral lateral premotor cortex, and the supplementary motor area are essential for the voluntary control of facial expressions. In contrast, the cingulate cortical areas are important for emotional expression, because they receive input from different structures of the limbic system. PMID:26418049

  10. Evolution of 3D surface imaging systems in facial plastic surgery.

    PubMed

    Tzou, Chieh-Han John; Frey, Manfred

    2011-11-01

    Recent advancements in computer technologies have propelled the development of 3D imaging systems. 3D surface-imaging is taking surgeons to a new level of communication with patients; moreover, it provides quick and standardized image documentation. This article recounts the chronologic evolution of 3D surface imaging, and summarizes the current status of today's facial surface capturing technology. This article also discusses current 3D surface imaging hardware and software, and their different techniques, technologies, and scientific validation, which provides surgeons with the background information necessary for evaluating the systems and knowledge about the systems they might incorporate into their own practice. PMID:22004854

  11. Compound facial expressions of emotion.

    PubMed

    Du, Shichuan; Tao, Yong; Martinez, Aleix M

    2014-04-15

    Understanding the different categories of facial expressions of emotion regularly used by us is essential to gain insights into human cognition and affect as well as for the design of computational models and perceptual interfaces. Past research on facial expressions of emotion has focused on the study of six basic categories--happiness, surprise, anger, sadness, fear, and disgust. However, many more facial expressions of emotion exist and are used regularly by humans. This paper describes an important group of expressions, which we call compound emotion categories. Compound emotions are those that can be constructed by combining basic component categories to create new ones. For instance, happily surprised and angrily surprised are two distinct compound emotion categories. The present work defines 21 distinct emotion categories. Sample images of their facial expressions were collected from 230 human subjects. A Facial Action Coding System analysis shows the production of these 21 categories is different but consistent with the subordinate categories they represent (e.g., a happily surprised expression combines muscle movements observed in happiness and surprise). We show that these differences are sufficient to distinguish between the 21 defined categories. We then use a computational model of face perception to demonstrate that most of these categories are also visually discriminable from one another. PMID:24706770

  12. Compound facial expressions of emotion

    PubMed Central

    Du, Shichuan; Tao, Yong; Martinez, Aleix M.

    2014-01-01

    Understanding the different categories of facial expressions of emotion regularly used by us is essential to gain insights into human cognition and affect as well as for the design of computational models and perceptual interfaces. Past research on facial expressions of emotion has focused on the study of six basic categories--happiness, surprise, anger, sadness, fear, and disgust. However, many more facial expressions of emotion exist and are used regularly by humans. This paper describes an important group of expressions, which we call compound emotion categories. Compound emotions are those that can be constructed by combining basic component categories to create new ones. For instance, happily surprised and angrily surprised are two distinct compound emotion categories. The present work defines 21 distinct emotion categories. Sample images of their facial expressions were collected from 230 human subjects. A Facial Action Coding System analysis shows the production of these 21 categories is different but consistent with the subordinate categories they represent (e.g., a happily surprised expression combines muscle movements observed in happiness and surprise). We show that these differences are sufficient to distinguish between the 21 defined categories. We then use a computational model of face perception to demonstrate that most of these categories are also visually discriminable from one another. PMID:24706770

  13. Simultaneous facial feature tracking and facial expression recognition.

    PubMed

    Li, Yongqiang; Wang, Shangfei; Zhao, Yongping; Ji, Qiang

    2013-07-01

    The tracking and recognition of facial activities from images or videos have attracted great attention in the computer vision field. Facial activities are characterized at three levels. First, at the bottom level, facial feature points around each facial component, i.e., eyebrow, mouth, etc., capture the detailed face shape information. Second, at the middle level, facial action units, defined in the facial action coding system, represent the contraction of a specific set of facial muscles, i.e., lid tightener, eyebrow raiser, etc. Finally, at the top level, six prototypical facial expressions represent the global facial muscle movement and are commonly used to describe the human emotion states. In contrast to the mainstream approaches, which usually only focus on one or two levels of facial activities and track (or recognize) them separately, this paper introduces a unified probabilistic framework based on a dynamic Bayesian network to simultaneously and coherently represent the facial evolvement at different levels, their interactions and their observations. Advanced machine learning methods are introduced to learn the model based on both training data and subjective prior knowledge. Given the model and the measurements of facial motions, all three levels of facial activities are simultaneously recognized through probabilistic inference. Extensive experiments are performed to illustrate the feasibility and effectiveness of the proposed model on all three levels of facial activities. PMID:23529088

  14. Robust and regional 3D facial asymmetry assessment in hemimandibular hyperplasia and hemimandibular elongation anomalies.

    PubMed

    Walters, M; Claes, P; Kakulas, E; Clement, J G

    2013-01-01

    Hemimandibular hyperplasia (HH) and hemimandibular elongation (HE) anomalies present with facial asymmetry and deranged occlusion. Currently, diagnosis and assessment of the facial dysmorphology are based on subjective clinical evaluation, supported by radiological scans. Advancements in objective assessment of facial asymmetry from three-dimensional (3D) facial scans facilitate a re-evaluation of the patterns of facial dysmorphology. Automated, robust and localised asymmetry assessments were obtained by comparing a 3D facial scan with its reflected image using a weighted least-squares superimposition. This robust superimposition is insensitive to severe asymmetries. This provides an estimation of the anatomical midline and a spatially dense vector map visualising localised directional differences between the left and right hemifaces. Analysis was conducted on three condylar hyperplasia phenotypes confirmed by clinical and CT evaluation: HH, HE, and a hybrid phenotype. The midline extraction revealed chin point displacements in all cases. Deviation of the upper-lip philtrum and nose tip to the affected side and a marked asymmetry of the midface were noted in cases involving HE. Downward and medial rotation of the mandible with minor involvement of the midface was seen in the HH-associated deformity. The hybrid phenotype case exhibited asymmetry features of both HH and HE cases. PMID:22749574

  15. Observer success rates for identification of 3D surface reconstructed facial images and implications for patient privacy and security

    NASA Astrophysics Data System (ADS)

    Chen, Joseph J.; Siddiqui, Khan M.; Fort, Leslie; Moffitt, Ryan; Juluru, Krishna; Kim, Woojin; Safdar, Nabile; Siegel, Eliot L.

    2007-03-01

    3D and multi-planar reconstruction of CT images have become indispensable in the routine practice of diagnostic imaging. These tools not only enhance our ability to diagnose diseases, but can also assist in therapeutic planning. The technology utilized to create these images can also render surface reconstructions, which may have the undesired potential of providing sufficient detail to allow recognition of facial features and consequently patient identity, leading to violation of patient privacy rights as described in the HIPAA (Health Insurance Portability and Accountability Act) legislation. The purpose of this study is to evaluate whether 3D reconstructed images of a patient's facial features can indeed be used to reliably or confidently identify that specific patient. Surface-reconstructed images of the study participants were created and used as candidates for matching with digital photographs of the participants. Data analysis was performed to determine the ability of observers to successfully match 3D surface-reconstructed images of the face with facial photographs. The amount of time required to perform the match was recorded as well. We also plan to investigate the ability of digital masks or physical drapes to conceal patient identity. The recently expressed concerns over the inability to truly "anonymize" CT (and MRI) studies of the head/face/brain are yet to be tested in a prospective study. We believe that it is important to establish whether these reconstructed images are a "threat" to patient privacy/security and, if so, whether minimal interventions from a clinical perspective can substantially reduce this possibility.

  16. Mapping and Manipulating Facial Expression

    ERIC Educational Resources Information Center

    Theobald, Barry-John; Matthews, Iain; Mangini, Michael; Spies, Jeffrey R.; Brick, Timothy R.; Cohn, Jeffrey F.; Boker, Steven M.

    2009-01-01

    Nonverbal visual cues accompany speech to supplement the meaning of spoken words, signify emotional state, indicate position in discourse, and provide back-channel feedback. This visual information includes head movements, facial expressions and body gestures. In this article we describe techniques for manipulating both verbal and nonverbal facial…

  17. Analysis of Facial Expression by Taste Stimulation

    NASA Astrophysics Data System (ADS)

    Tobitani, Kensuke; Kato, Kunihito; Yamamoto, Kazuhiko

    In this study, we focused on basic taste stimulation for the analysis of real facial expressions. We considered that the expressions caused by taste stimulation are unaffected by individuality or emotion, that is, such expressions are involuntary. We analyzed the movement of facial muscles under taste stimulation and compared real expressions with artificial expressions. From the results, we identified an obvious difference between real and artificial expressions. Thus, our method could be a new approach for facial expression recognition.

  18. Realistic facial animation generation based on facial expression mapping

    NASA Astrophysics Data System (ADS)

    Yu, Hui; Garrod, Oliver; Jack, Rachael; Schyns, Philippe

    2014-01-01

    Facial expressions reflect a character's internal emotional states or responses to social communications. Though much effort has been devoted to generating realistic facial expressions, this remains a challenging topic due to human beings' sensitivity to subtle facial movements. In this paper, we present a method for facial animation generation which reflects true facial muscle movements with high fidelity. An intermediate model space is introduced to transfer captured static AU peak frames, based on FACS, to the conformed target face. Then dynamic parameters derived using a psychophysics method are integrated to generate facial animation, which is assumed to represent the natural correlation of multiple AUs. Finally, the animation sequence in the intermediate model space is mapped to the target face to produce the final animation.

  19. A 2D range Hausdorff approach to 3D facial recognition.

    SciTech Connect

    Koch, Mark William; Russ, Trina Denise; Little, Charles Quentin

    2004-11-01

    This paper presents a 3D facial recognition algorithm based on the Hausdorff distance metric. The standard 3D formulation of the Hausdorff matching algorithm has been modified to operate on a 2D range image, enabling a reduction in computation from O(N²) to O(N) without large storage requirements. The Hausdorff distance is known for its robustness to data outliers and inconsistent data between two data sets, making it a suitable choice for dealing with the inherent problems in many 3D datasets due to sensor noise and object self-occlusion. For optimal performance, the algorithm assumes a good initial alignment between probe and template datasets. However, to minimize the error between two faces, the alignment can be iteratively refined. Results from the algorithm are presented using 3D face images from the Face Recognition Grand Challenge database version 1.0.
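
    A hedged sketch of Hausdorff-style matching between two faces given as 2D range (depth) images: the range images are lifted to 3D point sets and compared with SciPy's generic directed Hausdorff distance. This illustrates the metric only; it does not reproduce the paper's optimized O(N) range-image formulation, and the toy arrays stand in for real sensor output.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def range_image_to_points(depth):
    """Lift an H x W range image to an (N, 3) point set, skipping invalid pixels."""
    ys, xs = np.nonzero(np.isfinite(depth))
    return np.column_stack([xs, ys, depth[ys, xs]])

probe = np.random.rand(64, 64)                     # toy range images; real ones come from a 3D sensor
template = probe + np.random.randn(64, 64) * 0.01

p, t = range_image_to_points(probe), range_image_to_points(template)
score = max(directed_hausdorff(p, t)[0], directed_hausdorff(t, p)[0])  # symmetric Hausdorff distance
print(score)
```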

  20. Facial Expressions, Emotions, and Sign Languages

    PubMed Central

    Elliott, Eeva A.; Jacobs, Arthur M.

    2013-01-01

    Facial expressions are used by humans to convey various types of meaning in various contexts. The range of meanings spans basic possibly innate socio-emotional concepts such as “surprise” to complex and culture specific concepts such as “carelessly.” The range of contexts in which humans use facial expressions spans responses to events in the environment to particular linguistic constructions within sign languages. In this mini review we summarize findings on the use and acquisition of facial expressions by signers and present a unified account of the range of facial expressions used by referring to three dimensions on which facial expressions vary: semantic, compositional, and iconic. PMID:23482994

  1. An Automatic 3D Facial Landmarking Algorithm Using 2D Gabor Wavelets.

    PubMed

    de Jong, Markus A; Wollstein, Andreas; Ruff, Clifford; Dunaway, David; Hysi, Pirro; Spector, Tim; Liu, Fan; Niessen, Wiro; Koudstaal, Maarten J; Kayser, Manfred; Wolvius, Eppo B; Bohringer, Stefan

    2016-02-01

    In this paper, we present a novel approach to automatic 3D facial landmarking using 2D Gabor wavelets. Our algorithm considers the face to be a surface and uses map projections to derive 2D features from raw data. Extracted features include texture, relief map, and transformations thereof. We extend an established 2D landmarking method for simultaneous evaluation of these data. The method is validated by performing landmarking experiments on two data sets using 21 landmarks and compared with an active shape model implementation. On average, landmarking error for our method was 1.9 mm, whereas the active shape model resulted in an average landmarking error of 2.3 mm. A second study investigating facial shape heritability in related individuals concludes that automatic landmarking is on par with manual landmarking for some landmarks. Our algorithm can be trained in 30 min to automatically landmark 3D facial data sets of any size, and allows for fast and robust landmarking of 3D faces. PMID:26540684
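
    The 2D Gabor responses such landmarking methods build on can be sketched as follows (kernel parameters and map names are assumptions, not the authors' settings): a small bank of Gabor kernels at several orientations is convolved with a 2D projection of the face (a texture or relief map), and the filter responses at candidate positions feed the landmark search.

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(size=21, wavelength=6.0, theta=0.0, sigma=3.0):
    """Real (cosine) part of a 2D Gabor kernel at orientation theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * xr / wavelength)

relief_map = np.random.rand(128, 128)   # toy 2D projection of a 3D facial surface
responses = [fftconvolve(relief_map, gabor_kernel(theta=t), mode='same')
             for t in np.linspace(0, np.pi, 4, endpoint=False)]
print(np.stack(responses).shape)        # (orientations, H, W) response maps
```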

  2. Recognizing Facial Expressions Automatically from Video

    NASA Astrophysics Data System (ADS)

    Shan, Caifeng; Braspenning, Ralph

    Facial expressions, resulting from movements of the facial muscles, are the changes in the face in response to a person's internal emotional states, intentions, or social communications. There is a considerable history associated with the study of facial expressions. Darwin [22] was the first to describe in detail the specific facial expressions associated with emotions in animals and humans, arguing that all mammals show emotions reliably in their faces. Since then, facial expression analysis has been an area of great research interest for behavioral scientists [27]. Psychological studies [48, 3] suggest that facial expressions, as the main mode of nonverbal communication, play a vital role in human face-to-face communication. For illustration, we show some examples of facial expressions in Fig. 1.

  3. Neuroticism Delays Detection of Facial Expressions

    PubMed Central

    Sawada, Reiko; Sato, Wataru; Uono, Shota; Kochiyama, Takanori; Kubota, Yasutaka; Yoshimura, Sayaka; Toichi, Motomi

    2016-01-01

    The rapid detection of emotional signals from facial expressions is fundamental for human social interaction. The personality factor of neuroticism modulates the processing of various types of emotional facial expressions; however, its effect on the detection of emotional facial expressions remains unclear. In this study, participants with high- and low-neuroticism scores performed a visual search task to detect normal expressions of anger and happiness, and their anti-expressions within a crowd of neutral expressions. Anti-expressions contained an amount of visual changes equivalent to those found in normal expressions compared to neutral expressions, but they were usually recognized as neutral expressions. Subjective emotional ratings in response to each facial expression stimulus were also obtained. Participants with high-neuroticism showed an overall delay in the detection of target facial expressions compared to participants with low-neuroticism. Additionally, the high-neuroticism group showed higher levels of arousal to facial expressions compared to the low-neuroticism group. These data suggest that neuroticism modulates the detection of emotional facial expressions in healthy participants; high levels of neuroticism delay overall detection of facial expressions and enhance emotional arousal in response to facial expressions. PMID:27073904

  4. Man-machine collaboration using facial expressions

    NASA Astrophysics Data System (ADS)

    Dai, Ying; Katahera, S.; Cai, D.

    2002-09-01

    For realizing flexible man-machine collaboration, understanding facial expressions and gestures is indispensable. In our method, we propose a hierarchical recognition approach for the understanding of human emotions. According to this method, the facial AFs (action features) are first extracted and recognized using histograms of optical flow. Then, based on the facial AFs, facial expressions are classified into two classes, one representing positive emotions and the other negative ones. The expressions belonging to the positive class, and those belonging to the negative class, are in turn classified into the more complex emotions they reveal. Finally, a system architecture coordinating the recognition of facial action features and facial expressions for man-machine collaboration is proposed.

  5. Mapping and Manipulating Facial Expression

    PubMed Central

    Theobald, Barry-John; Matthews, Iain; Mangini, Michael; Spies, Jeffrey R.; Brick, Timothy R.; Cohn, Jeffrey F.; Boker, Steven M.

    2009-01-01

    Non-verbal visual cues accompany speech to supplement the meaning of spoken words, signify emotional state, indicate position in discourse, and provide back-channel feedback. This visual information includes head movements, facial expressions and body gestures. In this paper we describe techniques for manipulating both verbal and non-verbal facial gestures in video sequences of people engaged in conversation. We are developing a system for use in psychological experiments, where the effects of manipulating individual components of non-verbal visual behaviour during live face-to-face conversation can be studied. In particular, the techniques we describe operate in real-time at video frame-rate and the manipulation can be applied so both participants in a conversation are kept blind to the experimental conditions. PMID:19624037

  6. Fast and Accurate Digital Morphometry of Facial Expressions.

    PubMed

    Grewe, Carl Martin; Schreiber, Lisa; Zachow, Stefan

    2015-10-01

    Facial surgery deals with a part of the human body that is of particular importance in everyday social interactions. The perception of a person's natural, emotional, and social appearance is significantly influenced by one's expression. This is why facial dynamics has been increasingly studied by both artists and scholars since the mid-Renaissance. Currently, facial dynamics and their importance in the perception of a patient's identity play a fundamental role in planning facial surgery. Assistance is needed for patient information and communication, and documentation and evaluation of the treatment as well as during the surgical procedure. Here, the quantitative assessment of morphological features has been facilitated by the emergence of diverse digital imaging modalities in the last decades. Unfortunately, the manual data preparation usually needed for further quantitative analysis of the digitized head models (surface registration, landmark annotation) is time-consuming, and thus inhibits its use for treatment planning and communication. In this article, we refer to historical studies on facial dynamics, briefly present related work from the field of facial surgery, and draw implications for further developments in this context. A prototypical stereophotogrammetric system for high-quality assessment of patient-specific 3D dynamic morphology is described. An individual statistical model of several facial expressions is computed, and possibilities to address a broad range of clinical questions in facial surgery are demonstrated. PMID:26579859

  7. Averaging facial expression over time

    PubMed Central

    Haberman, Jason; Harp, Tom; Whitney, David

    2010-01-01

    The visual system groups similar features, objects, and motion (e.g., Gestalt grouping). Recent work suggests that the computation underlying perceptual grouping may be one of summary statistical representation. Summary representation occurs for low-level features, such as size, motion, and position, and even for high level stimuli, including faces; for example, observers accurately perceive the average expression in a group of faces (J. Haberman & D. Whitney, 2007, 2009). The purpose of the present experiments was to characterize the time-course of this facial integration mechanism. In a series of three experiments, we measured observers’ abilities to recognize the average expression of a temporal sequence of distinct faces. Faces were presented in sets of 4, 12, or 20, at temporal frequencies ranging from 1.6 to 21.3 Hz. The results revealed that observers perceived the average expression in a temporal sequence of different faces as precisely as they perceived a single face presented repeatedly. The facial averaging was independent of temporal frequency or set size, but depended on the total duration of exposed faces, with a time constant of ~800 ms. These experiments provide evidence that the visual system is sensitive to the ensemble characteristics of complex objects presented over time. PMID:20053064

  8. Social Use of Facial Expressions in Hylobatids.

    PubMed

    Scheider, Linda; Waller, Bridget M; Oña, Leonardo; Burrows, Anne M; Liebal, Katja

    2016-01-01

    Non-human primates use various communicative means in interactions with others. While primate gestures are commonly considered to be intentionally and flexibly used signals, facial expressions are often referred to as inflexible, automatic expressions of affective internal states. To explore whether and how non-human primates use facial expressions in specific communicative interactions, we studied five species of small apes (gibbons) by employing a newly established Facial Action Coding System for hylobatid species (GibbonFACS). We found that, despite individuals often being in close proximity to each other, in social (as opposed to non-social contexts) the duration of facial expressions was significantly longer when gibbons were facing another individual compared to non-facing situations. Social contexts included grooming, agonistic interactions and play, whereas non-social contexts included resting and self-grooming. Additionally, gibbons used facial expressions while facing another individual more often in social contexts than non-social contexts where facial expressions were produced regardless of the attentional state of the partner. Also, facial expressions were more likely 'responded to' by the partner's facial expressions when facing another individual than non-facing. Taken together, our results indicate that gibbons use their facial expressions differentially depending on the social context and are able to use them in a directed way in communicative interactions with other conspecifics. PMID:26978660

  9. Social Use of Facial Expressions in Hylobatids

    PubMed Central

    Scheider, Linda; Waller, Bridget M.; Oña, Leonardo; Burrows, Anne M.; Liebal, Katja

    2016-01-01

    Non-human primates use various communicative means in interactions with others. While primate gestures are commonly considered to be intentionally and flexibly used signals, facial expressions are often referred to as inflexible, automatic expressions of affective internal states. To explore whether and how non-human primates use facial expressions in specific communicative interactions, we studied five species of small apes (gibbons) by employing a newly established Facial Action Coding System for hylobatid species (GibbonFACS). We found that, despite individuals often being in close proximity to each other, in social (as opposed to non-social) contexts the duration of facial expressions was significantly longer when gibbons were facing another individual compared to non-facing situations. Social contexts included grooming, agonistic interactions and play, whereas non-social contexts included resting and self-grooming. Additionally, gibbons used facial expressions while facing another individual more often in social contexts than in non-social contexts, where facial expressions were produced regardless of the attentional state of the partner. Also, facial expressions were more likely ‘responded to’ by the partner’s facial expressions when facing another individual than non-facing. Taken together, our results indicate that gibbons use their facial expressions differentially depending on the social context and are able to use them in a directed way in communicative interactions with other conspecifics. PMID:26978660

  10. Facial expression recognition on a people-dependent personal facial expression space (PFES)

    NASA Astrophysics Data System (ADS)

    Chandrasiri, N. P.; Park, Min Chul; Naemura, Takeshi; Harashima, Hiroshi

    2000-04-01

    In this paper, a person-specific facial expression recognition method based on a Personal Facial Expression Space (PFES) is presented. Multidimensional scaling maps facial images to points in a lower-dimensional PFES. The PFES reflects the individuality of facial expressions because it is built from the peak-instant expression images of a specific person. In constructing the PFES for a person, his/her whole normalized facial image is treated as a single pattern without block segmentation, and differences of 2-D DCT coefficients from the neutral facial image of the same person are used as features. Therefore, in the early part of the paper, the separation characteristics of facial expressions in the frequency domain are analyzed using a still facial image database which consists of neutral, smile, anger, surprise and sadness facial images for each of 60 Japanese males (300 facial images). Results show that facial expression categories are well separated in the low-frequency domain. The PFES is then constructed by applying multidimensional scaling to these low-frequency 2-D DCT coefficient differences. On the PFES, the trajectory of a facial image sequence of a person can be calculated in real time, and facial expressions can be recognized from this trajectory. Experimental results show the effectiveness of this method.
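
    A minimal sketch of the pipeline summarized above, assuming 64 x 64 normalized grey-level images, an 8 x 8 low-frequency DCT block and scikit-learn's MDS as the multidimensional scaling step (all illustrative choices, not the authors' implementation):

        import numpy as np
        from scipy.fft import dctn                  # type-II 2-D DCT
        from sklearn.manifold import MDS

        def dct_diff_features(face, neutral, low=8):
            """Difference of low-frequency 2-D DCT coefficients from the neutral face."""
            d = dctn(face.astype(float), norm="ortho") - dctn(neutral.astype(float), norm="ortho")
            return d[:low, :low].ravel()            # keep only the low x low low-frequency block

        # toy data: one neutral face plus ten expression frames (64 x 64 grey-level images)
        rng = np.random.default_rng(0)
        neutral = rng.random((64, 64))
        frames = [neutral + 0.05 * t * rng.random((64, 64)) for t in range(1, 11)]

        X = np.stack([dct_diff_features(f, neutral) for f in frames])
        pfes = MDS(n_components=2, random_state=0)  # person-specific 2-D expression space
        trajectory = pfes.fit_transform(X)          # one point per frame of the sequence
        print(trajectory.shape)                     # (10, 2)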

  11. 3D animation of facial plastic surgery based on computer graphics

    NASA Astrophysics Data System (ADS)

    Zhang, Zonghua; Zhao, Yan

    2013-12-01

    More and more people, especially women, desire to be more beautiful than ever. To some extent this has become possible because facial plastic surgery has been practiced since the early 20th century and even earlier, when doctors mainly dealt with war injuries of the face. However, the outcome of an operation is not always satisfying, since no animation of the result can be shown to the patient beforehand. In this paper, by combining facial plastic surgery and computer graphics, a novel method for simulating the post-operative appearance is presented, which demonstrates the modified face from different viewpoints. The 3D human face data are obtained using 3D fringe-pattern imaging systems and CT imaging systems and then converted into the STL (STereo Lithography) file format. An STL file is made up of small 3D triangular primitives. The triangular mesh can be reconstructed by using a hash function. The frontmost triangles in depth are selected from the full set of triangles by a ray-casting technique. Mesh deformation is based on this front triangular mesh during the simulation, deforming the region of interest instead of control points. Experiments on a face model show that the proposed 3D animation of facial plastic surgery can effectively demonstrate the simulated post-operative appearance.
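
    The hash-based mesh reconstruction step can be illustrated with a small sketch: STL files store each triangle's corners independently, so a dictionary keyed on rounded coordinates can merge duplicated corners into shared vertex indices. The rounding tolerance and the two toy triangles below are assumptions.

        import numpy as np

        def index_triangle_soup(triangles, decimals=6):
            """Merge duplicated per-triangle corners into shared vertex indices via a dict (hash)."""
            vertex_index, vertices, faces = {}, [], []
            for tri in triangles:                   # tri: 3 x 3 array of corner coordinates
                face = []
                for corner in tri:
                    key = tuple(np.round(corner, decimals))
                    if key not in vertex_index:     # first time this position is seen
                        vertex_index[key] = len(vertices)
                        vertices.append(corner)
                    face.append(vertex_index[key])
                faces.append(face)
            return np.array(vertices), np.array(faces)

        # two triangles sharing an edge, stored independently as in an STL file
        soup = [np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0]], float),
                np.array([[1, 0, 0], [1, 1, 0], [0, 1, 0]], float)]
        V, F = index_triangle_soup(soup)
        print(V.shape, F.shape)                     # (4, 3) (2, 3): shared corners merged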

  12. Measured symmetry of facial 3D shape and perceived facial symmetry and attractiveness before and after orthognathic surgery.

    PubMed

    Ostwald, Julia; Berssenbrügge, Philipp; Dirksen, Dieter; Runte, Christoph; Wermker, Kai; Kleinheinz, Johannes; Jung, Susanne

    2015-05-01

    One aim of cranio-maxillo-facial surgery is to strive for an esthetic appearance. Do facial symmetry and attractiveness correlate? How are they affected by surgery? Within this study faces of patients with orthognathic surgery were captured and analyzed regarding their symmetry. A total of 25 faces of patients were measured three-dimensionally by an optical sensor using the fringe projection technique before and after orthognathic surgery. Based upon these data, an asymmetry index was calculated for each case. In order to gather subjective ratings, each face was presented to 100 independent test subjects in a 3D rotation sequence. Those were asked to rate the symmetry and the attractiveness of the faces. It was analyzed to what extent the ratings correlate with the measured asymmetry indices and whether pre- and post-surgical data differ. The measured asymmetry indices correlate significantly with the subjective ratings of both items. The measured symmetry as well as the rated symmetry and attractiveness increased on average after surgery. The increase of the ratings was even statistically significant. A larger enhancement of symmetry is achieved in faces that were strongly asymmetric before surgery than in rather symmetric faces. PMID:25841308
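
    For illustration only, one simple landmark-based asymmetry index is the mean distance between left-side landmarks and their mirrored right-side counterparts; the study's actual index was derived from dense 3D surface data, so the sketch below only conveys the general idea.

        import numpy as np

        def asymmetry_index(left_pts, right_pts):
            """Mean 3D distance between left landmarks and right landmarks mirrored in the plane x = 0."""
            mirrored = right_pts * np.array([-1.0, 1.0, 1.0])   # reflect across the facial midplane
            return float(np.mean(np.linalg.norm(left_pts - mirrored, axis=1)))

        left = np.array([[-30.0, 10.0, 55.0], [-25.0, -40.0, 60.0]])   # e.g. eye corner, mouth corner
        right = np.array([[31.0, 11.0, 54.0], [24.5, -41.0, 59.0]])    # contralateral counterparts
        print(round(asymmetry_index(left, right), 2))                  # 0 would mean perfect symmetry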

  13. A coordinate-free method for the analysis of 3D facial change

    NASA Astrophysics Data System (ADS)

    Mao, Zhili; Siebert, Jan Paul; Cockshott, W. Paul; Ayoub, Ashraf Farouk

    2004-05-01

    Euclidean Distance Matrix Analysis (EDMA) is widely held as the most important coordinate-free method by which to analyze landmarks. It has been used extensively in the field of medical anthropometry and has already produced many useful results. Unfortunately this method renders little information regarding the surface on which these points are located and accordingly is inadequate for the 3D analysis of surface anatomy. Here we shall present a new inverse surface flatness metric, the ratio between the Geodesic and the Euclidean inter-landmark distances. Because this metric also only reflects one aspect of three-dimensional shape, i.e. surface flatness, we have combined it with the Euclidean distance to investigate 3D facial change. The goal of this investigation is to be able to analyze three-dimensional facial change in terms of bilateral symmetry as encoded both by surface flatness and by geometric configuration. Our initial study, based on 25 models of surgically managed children (unilateral cleft lip repair) and 40 models of control children at the age of 2 years, indicates that the faces of the surgically managed group were found to be significantly less symmetric than those of the control group in terms of surface flatness, geometric configuration and overall symmetry.
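
    A rough sketch of the inverse surface-flatness metric, with the geodesic distance approximated by shortest paths along mesh edges (a common simplification) and a toy tent-shaped mesh standing in for a facial surface:

        import numpy as np
        from scipy.sparse import coo_matrix
        from scipy.sparse.csgraph import dijkstra

        def flatness_ratio(vertices, faces, i, j):
            """Edge-graph geodesic distance between vertices i and j divided by their Euclidean distance."""
            edges = np.vstack([faces[:, [0, 1]], faces[:, [1, 2]], faces[:, [2, 0]]])
            weights = np.linalg.norm(vertices[edges[:, 0]] - vertices[edges[:, 1]], axis=1)
            n = len(vertices)
            graph = coo_matrix((weights, (edges[:, 0], edges[:, 1])), shape=(n, n))
            geodesic = dijkstra(graph, directed=False, indices=i)[j]
            euclidean = np.linalg.norm(vertices[i] - vertices[j])
            return geodesic / euclidean             # 1.0 on a flat patch, > 1.0 on a curved surface

        # tiny tent-shaped mesh: the path from vertex 0 to vertex 2 must climb over the ridge at vertex 1
        V = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.8], [2.0, 0.0, 0.0], [1.0, 1.0, 0.0]])
        F = np.array([[0, 1, 3], [1, 2, 3]])
        print(round(flatness_ratio(V, F, 0, 2), 3)) # > 1: the surface bulges between the two landmarks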

  14. A new method for automatic tracking of facial landmarks in 3D motion captured images (4D).

    PubMed

    Al-Anezi, T; Khambay, B; Peng, M J; O'Leary, E; Ju, X; Ayoub, A

    2013-01-01

    The aim of this study was to validate the automatic tracking of facial landmarks in 3D image sequences. 32 subjects (16 males and 16 females) aged 18-35 years were recruited. 23 anthropometric landmarks were marked on the face of each subject with non-permanent ink using a 0.5 mm pen. The subjects were asked to perform three facial animations (maximal smile, lip purse and cheek puff) from rest position. Each animation was captured by the 3D imaging system. A single operator manually digitised the landmarks on the 3D facial models and their locations were compared with those of the automatically tracked ones. To investigate the accuracy of manual digitisation, the operator re-digitised the same set of 3D images of 10 subjects (5 male and 5 female) at a 1-month interval. The discrepancies in x, y and z coordinates between the 3D positions of the manually digitised landmarks and those of the automatically tracked facial landmarks were within 0.17 mm. The mean distance between the manually digitised and the automatically tracked landmarks using the tracking software was within 0.55 mm. The automatic tracking of facial landmarks demonstrated satisfactory accuracy, which would facilitate the analysis of the dynamic motion during facial animations. PMID:23218511

  15. Human Facial Expressions as Adaptations:Evolutionary Questions in Facial Expression Research

    PubMed Central

    SCHMIDT, KAREN L.; COHN, JEFFREY F.

    2007-01-01

    The importance of the face in social interaction and social intelligence is widely recognized in anthropology. Yet the adaptive functions of human facial expression remain largely unknown. An evolutionary model of human facial expression as behavioral adaptation can be constructed, given the current knowledge of the phenotypic variation, ecological contexts, and fitness consequences of facial behavior. Studies of facial expression are available, but results are not typically framed in an evolutionary perspective. This review identifies the relevant physical phenomena of facial expression and integrates the study of this behavior with the anthropological study of communication and sociality in general. Anthropological issues with relevance to the evolutionary study of facial expression include: facial expressions as coordinated, stereotyped behavioral phenotypes, the unique contexts and functions of different facial expressions, the relationship of facial expression to speech, the value of facial expressions as signals, and the relationship of facial expression to social intelligence in humans and in nonhuman primates. Human smiling is used as an example of adaptation, and testable hypotheses concerning the human smile, as well as other expressions, are proposed. PMID:11786989

  16. Mutual information-based facial expression recognition

    NASA Astrophysics Data System (ADS)

    Hazar, Mliki; Hammami, Mohamed; Hanêne, Ben-Abdallah

    2013-12-01

    This paper introduces a novel low-computation discriminative-regions representation for the expression analysis task. The proposed approach relies on studies in psychology which show that most of the descriptive regions responsible for facial expression are located around a few face parts. The contribution of this work lies in the proposition of a new approach that supports automatic facial expression recognition based on automatic region selection. The region selection step aims to select the descriptive regions responsible for facial expression and was performed using a Mutual Information (MI) technique. For facial feature extraction, we applied Local Binary Patterns (LBP) on the gradient image to encode salient micro-patterns of facial expressions. Experimental studies have shown that using discriminative regions provides better results than using the whole face region while reducing the feature vector dimension.
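
    The region selection idea can be sketched as ranking face regions by the mutual information between a per-region feature and the expression label. The block feature used below (mean gradient magnitude per grid cell), the grid size and the random toy data are assumptions, not the paper's exact setup.

        import numpy as np
        from sklearn.feature_selection import mutual_info_classif

        def block_features(img, grid=4):
            """Mean gradient magnitude in each cell of a grid x grid partition of the face."""
            gy, gx = np.gradient(img.astype(float))
            mag = np.hypot(gx, gy)
            h, w = mag.shape
            return np.array([mag[r*h//grid:(r+1)*h//grid, c*w//grid:(c+1)*w//grid].mean()
                             for r in range(grid) for c in range(grid)])

        rng = np.random.default_rng(1)
        faces = rng.random((60, 64, 64))            # stand-in face images
        labels = rng.integers(0, 6, size=60)        # stand-in expression labels (6 classes)
        X = np.stack([block_features(f) for f in faces])

        mi = mutual_info_classif(X, labels, random_state=0)
        top_regions = np.argsort(mi)[::-1][:5]      # indices of the 5 most informative regions
        print(top_regions)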

  17. Robust facial expression recognition via compressive sensing.

    PubMed

    Zhang, Shiqing; Zhao, Xiaoming; Lei, Bicheng

    2012-01-01

    Recently, compressive sensing (CS) has attracted increasing attention in the areas of signal processing, computer vision and pattern recognition. In this paper, a new method based on the CS theory is presented for robust facial expression recognition. The CS theory is used to construct a sparse representation classifier (SRC). The effectiveness and robustness of the SRC method is investigated on clean and occluded facial expression images. Three typical facial features, i.e., the raw pixels, Gabor wavelets representation and local binary patterns (LBP), are extracted to evaluate the performance of the SRC method. Compared with the nearest neighbor (NN), linear support vector machines (SVM) and the nearest subspace (NS), experimental results on the popular Cohn-Kanade facial expression database demonstrate that the SRC method obtains better performance and stronger robustness to corruption and occlusion on robust facial expression recognition tasks. PMID:22737035
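
    A minimal sketch of a sparse representation classifier of this kind: code the test sample as a sparse combination of all training samples (an L1-penalized fit) and assign the class whose training columns reconstruct it with the smallest residual. Using scikit-learn's Lasso as the L1 solver and the toy Gaussian data are assumptions; the paper's optimization details may differ.

        import numpy as np
        from sklearn.linear_model import Lasso

        def src_predict(X_train, y_train, x_test, alpha=0.01):
            """Classify x_test by the class whose training samples give the smallest sparse-coding residual."""
            D = X_train.T                           # dictionary: one column per training sample
            coder = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000).fit(D, x_test)
            residuals = {}
            for c in np.unique(y_train):
                coef_c = np.where(y_train == c, coder.coef_, 0.0)   # keep only class-c coefficients
                residuals[c] = np.linalg.norm(x_test - D @ coef_c)
            return min(residuals, key=residuals.get)

        rng = np.random.default_rng(2)
        centres = rng.random((3, 50))               # three expression "classes" in a 50-D feature space
        X_train = np.vstack([c + 0.05 * rng.standard_normal((20, 50)) for c in centres])
        y_train = np.repeat([0, 1, 2], 20)
        x_test = centres[1] + 0.05 * rng.standard_normal(50)
        print(src_predict(X_train, y_train, x_test))    # expected: 1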

  18. Do Facial Expressions Develop before Birth?

    PubMed Central

    Reissland, Nadja; Francis, Brian; Mason, James; Lincoln, Karen

    2011-01-01

    Background Fetal facial development is essential not only for postnatal bonding between parents and child, but also theoretically for the study of the origins of affect. However, how such movements become coordinated is poorly understood. 4-D ultrasound visualisation allows an objective coding of fetal facial movements. Methodology/Findings Based on research using facial muscle movements to code recognisable facial expressions in adults and adapted for infants, we defined two distinct fetal facial movements, namely “cry-face-gestalt” and “laughter- gestalt,” both made up of up to 7 distinct facial movements. In this conceptual study, two healthy fetuses were then scanned at different gestational ages in the second and third trimester. We observed that the number and complexity of simultaneous movements increased with gestational age. Thus, between 24 and 35 weeks the mean number of co-occurrences of 3 or more facial movements increased from 7% to 69%. Recognisable facial expressions were also observed to develop. Between 24 and 35 weeks the number of co-occurrences of 3 or more movements making up a “cry-face gestalt” facial movement increased from 0% to 42%. Similarly the number of co-occurrences of 3 or more facial movements combining to a “laughter-face gestalt” increased from 0% to 35%. These changes over age were all highly significant. Significance This research provides the first evidence of developmental progression from individual unrelated facial movements toward fetal facial gestalts. We propose that there is considerable potential of this method for assessing fetal development: Subsequent discrimination of normal and abnormal fetal facial development might identify health problems in utero. PMID:21904607

  19. The identification of unfolding facial expressions.

    PubMed

    Fiorentini, Chiara; Schmidt, Susanna; Viviani, Paolo

    2012-01-01

    We asked whether the identification of emotional facial expressions (FEs) involves the simultaneous perception of the facial configuration or the detection of emotion-specific diagnostic cues. We recorded at high speed (500 frames/s) the unfolding of the FE in five actors, each expressing six emotions (anger, surprise, happiness, disgust, fear, sadness). Recordings were coded every 10 frames (20 ms of real time) with the Facial Action Coding System (FACS, Ekman et al 2002, Salt Lake City, UT: Research Nexus eBook) to identify the facial actions contributing to each expression, and their intensity changes over time. Recordings were shown in slow motion (1/20 of recording speed) to one hundred observers in a forced-choice identification task. Participants were asked to identify the emotion during the presentation as soon as they felt confident to do so. Responses were recorded along with the associated response times (RTs). The RT probability density functions for both correct and incorrect responses were correlated with the facial activity during the presentation. There were systematic correlations between facial activities, response probabilities, and RT peaks, and significant differences in RT distributions for correct and incorrect answers. The results show that a reliable response is possible long before the full FE configuration is reached. This suggests that identification is reached by integrating in time individual diagnostic facial actions, and does not require perceiving the full apex configuration. PMID:23025158

  20. The MPI Facial Expression Database — A Validated Database of Emotional and Conversational Facial Expressions

    PubMed Central

    Kaulard, Kathrin; Cunningham, Douglas W.; Bülthoff, Heinrich H.; Wallraven, Christian

    2012-01-01

    The ability to communicate is one of the core aspects of human life. For this, we use not only verbal but also nonverbal signals of remarkable complexity. Among the latter, facial expressions belong to the most important information channels. Despite the large variety of facial expressions we use in daily life, research on facial expressions has so far mostly focused on the emotional aspect. Consequently, most databases of facial expressions available to the research community also include only emotional expressions, neglecting the largely unexplored aspect of conversational expressions. To fill this gap, we present the MPI facial expression database, which contains a large variety of natural emotional and conversational expressions. The database contains 55 different facial expressions performed by 19 German participants. Expressions were elicited with the help of a method-acting protocol, which guarantees both well-defined and natural facial expressions. The method-acting protocol was based on every-day scenarios, which are used to define the necessary context information for each expression. All facial expressions are available in three repetitions, in two intensities, as well as from three different camera angles. A detailed frame annotation is provided, from which a dynamic and a static version of the database have been created. In addition to describing the database in detail, we also present the results of an experiment with two conditions that serve to validate the context scenarios as well as the naturalness and recognizability of the video sequences. Our results provide clear evidence that conversational expressions can be recognized surprisingly well from visual information alone. The MPI facial expression database will enable researchers from different research fields (including the perceptual and cognitive sciences, but also affective computing, as well as computer vision) to investigate the processing of a wider range of natural facial expressions

  1. Facial Expressivity in Infants of Depressed Mothers.

    ERIC Educational Resources Information Center

    Pickens, Jeffrey; Field, Tiffany

    1993-01-01

    Facial expressions were examined in 84 3-month-old infants of mothers classified as depressed, nondepressed, or low scoring on the Beck Depression Inventory. Infants of both depressed and low-scoring mothers showed significantly more sadness and anger expressions and fewer interest expressions than infants of nondepressed mothers. (Author/MDM)

  2. Facial expression recognition in perceptual color space.

    PubMed

    Lajevardi, Seyed Mehdi; Wu, Hong Ren

    2012-08-01

    This paper introduces a tensor perceptual color framework (TPCF) for facial expression recognition (FER), which is based on information contained in color facial images. The TPCF enables multi-linear image analysis in different color spaces and demonstrates that color components provide additional information for robust FER. Using this framework, the components (in either RGB, YCbCr, CIELab or CIELuv space) of color images are unfolded to two-dimensional (2-D) tensors based on multi-linear algebra and tensor concepts, from which the features are extracted by Log-Gabor filters. The mutual information quotient (MIQ) method is employed for feature selection. These features are classified using a multi-class linear discriminant analysis (LDA) classifier. The effectiveness of color information on FER using low-resolution facial expression images with illumination variations is assessed for performance evaluation. Experimental results demonstrate that color information has significant potential to improve emotion recognition performance due to the complementary characteristics of image textures. Furthermore, the perceptual color spaces (CIELab and CIELuv) are better overall for facial expression recognition than the other color spaces, providing more efficient and robust performance on facial images with illumination variation. PMID:22575677
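
    The colour-space unfolding step alone can be sketched as follows, assuming scikit-image's rgb2lab for the CIELab conversion and a channel-mode unfolding of the (height, width, 3) tensor into a 2-D matrix; sizes and data are toy values.

        import numpy as np
        from skimage.color import rgb2lab

        rng = np.random.default_rng(3)
        face_rgb = rng.random((64, 64, 3))          # stand-in for a normalised colour face image

        face_lab = rgb2lab(face_rgb)                # perceptual colour space (L*, a*, b*)
        mode3_unfold = face_lab.reshape(-1, 3).T    # 3 x (H*W): channel-mode unfolding of the tensor
        print(mode3_unfold.shape)                   # (3, 4096), ready for 2-D filtering and selection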

  3. Generating Facial Expressions Using an Anatomically Accurate Biomechanical Model.

    PubMed

    Wu, Tim; Hung, Alice; Mithraratne, Kumar

    2014-11-01

    This paper presents a computational framework for modelling the biomechanics of human facial expressions. A detailed high-order (Cubic-Hermite) finite element model of the human head was constructed using anatomical data segmented from magnetic resonance images. The model includes a superficial soft-tissue continuum consisting of skin, the subcutaneous layer and the superficial Musculo-Aponeurotic system. Embedded within this continuum mesh, are 20 pairs of facial muscles which drive facial expressions. These muscles were treated as transversely-isotropic and their anatomical geometries and fibre orientations were accurately depicted. In order to capture the relative composition of muscles and fat, material heterogeneity was also introduced into the model. Complex contact interactions between the lips, eyelids, and between superficial soft tissue continuum and deep rigid skeletal bones were also computed. In addition, this paper investigates the impact of incorporating material heterogeneity and contact interactions, which are often neglected in similar studies. Four facial expressions were simulated using the developed model and the results were compared with surface data obtained from a 3D structured-light scanner. Predicted expressions showed good agreement with the experimental data. PMID:26355331

  4. Hereditary family signature of facial expression

    PubMed Central

    Peleg, Gili; Katzir, Gadi; Peleg, Ofer; Kamara, Michal; Brodsky, Leonid; Hel-Or, Hagit; Keren, Daniel; Nevo, Eviatar

    2006-01-01

    Although facial expressions of emotion are universal, individual differences create a facial expression “signature” for each person; but, is there a unique family facial expression signature? Only a few family studies on the heredity of facial expressions have been performed, none of which compared the gestalt of movements in various emotional states; they compared only a few movements in one or two emotional states. No studies, to our knowledge, have compared movements of congenitally blind subjects with those of their relatives. Using two types of analyses, we show a correlation between movements of congenitally blind subjects with those of their relatives in think-concentrate, sadness, anger, disgust, joy, and surprise and provide evidence for a unique family facial expression signature. In the analysis “in-out family test,” a particular movement was compared each time across subjects. Results show that the frequency of occurrence of a movement of a congenitally blind subject in his family is significantly higher than that outside of his family in think-concentrate, sadness, and anger. In the analysis “the classification test,” in which congenitally blind subjects were classified to their families according to the gestalt of movements, results show 80% correct classification over the entire interview and 75% in anger. Analysis of the movements' frequencies in anger revealed a correlation between the movements' frequencies of congenitally blind individuals and those of their relatives. This study anticipates discovering genes that influence facial expressions, understanding their evolutionary significance, and elucidating repair mechanisms for syndromes lacking facial expression, such as autism. PMID:17043232

  5. A framework for the recognition of 3D faces and expressions

    NASA Astrophysics Data System (ADS)

    Li, Chao; Barreto, Armando

    2006-04-01

    Face recognition technology has been a focus both in academia and industry for the last couple of years because of its wide potential applications and its importance to meet the security needs of today's world. Most of the systems developed are based on 2D face recognition technology, which uses pictures for data processing. With the development of 3D imaging technology, 3D face recognition emerges as an alternative to overcome the difficulties inherent with 2D face recognition, i.e. sensitivity to illumination conditions and orientation positioning of the subject. But 3D face recognition still needs to tackle the problem of deformation of facial geometry that results from the expression changes of a subject. To deal with this issue, a 3D face recognition framework is proposed in this paper. It is composed of three subsystems: an expression recognition system, a system for the identification of faces with expression, and neutral face recognition system. A system for the recognition of faces with one type of expression (happiness) and neutral faces was implemented and tested on a database of 30 subjects. The results proved the feasibility of this framework.

  6. Realistic Facial Expression of Virtual Human Based on Color, Sweat, and Tears Effects

    PubMed Central

    Alkawaz, Mohammed Hazim; Basori, Ahmad Hoirul; Mohamad, Dzulkifli; Mohamed, Farhan

    2014-01-01

    Generating extreme appearances such as sweating when scared, tears when crying, and blushing in anger or happiness is the key issue in achieving high-quality facial animation. The effects of sweat, tears, and colors are integrated into a single animation model to create realistic facial expressions of a 3D avatar. The physical properties of muscles, emotions, or the fluid properties with sweating and tears initiators are incorporated. The action units (AUs) of the facial action coding system are merged with autonomous AUs to create expressions including sadness, anger with blushing, happiness with blushing, and fear. Fluid effects such as sweat and tears are simulated using the particle system and smoothed-particle hydrodynamics (SPH) methods, which are combined with a facial animation technique to produce complex facial expressions. The effects of oxygenation on the facial skin color appearance are measured using a pulse oximeter system and a 3D skin analyzer. The results show that the virtual human facial expression is enhanced by mimicking actual sweating and tears simulations for all extreme expressions. The proposed method contributes to the development of the facial animation and game industries as well as computer graphics. PMID:25136663

  7. Realistic facial expression of virtual human based on color, sweat, and tears effects.

    PubMed

    Alkawaz, Mohammed Hazim; Basori, Ahmad Hoirul; Mohamad, Dzulkifli; Mohamed, Farhan

    2014-01-01

    Generating extreme appearances such as sweating when scared, tears when crying, and blushing in anger or happiness is the key issue in achieving high-quality facial animation. The effects of sweat, tears, and colors are integrated into a single animation model to create realistic facial expressions of a 3D avatar. The physical properties of muscles, emotions, or the fluid properties with sweating and tears initiators are incorporated. The action units (AUs) of the facial action coding system are merged with autonomous AUs to create expressions including sadness, anger with blushing, happiness with blushing, and fear. Fluid effects such as sweat and tears are simulated using the particle system and smoothed-particle hydrodynamics (SPH) methods, which are combined with a facial animation technique to produce complex facial expressions. The effects of oxygenation on the facial skin color appearance are measured using a pulse oximeter system and a 3D skin analyzer. The results show that the virtual human facial expression is enhanced by mimicking actual sweating and tears simulations for all extreme expressions. The proposed method contributes to the development of the facial animation and game industries as well as computer graphics. PMID:25136663

  8. Biased Facial Expression Interpretation in Shy Children

    ERIC Educational Resources Information Center

    Kokin, Jessica; Younger, Alastair; Gosselin, Pierre; Vaillancourt, Tracy

    2016-01-01

    The relationship between shyness and the interpretations of the facial expressions of others was examined in a sample of 123 children aged 12 to 14 years. Participants viewed faces displaying happiness, fear, anger, disgust, sadness, surprise, as well as a neutral expression, presented on a computer screen. The children identified each expression…

  9. Visualization and Analysis of 3D Gene Expression Data

    SciTech Connect

    Bethel, E. Wes; Rubel, Oliver; Weber, Gunther H.; Hamann, Bernd; Hagen, Hans

    2007-10-25

    Recent methods for extracting precise measurements of spatial gene expression patterns from three-dimensional (3D) image data open the way for new analysis of the complex gene regulatory networks controlling animal development. To support analysis of this novel and highly complex data we developed PointCloudXplore (PCX), an integrated visualization framework that supports dedicated multi-modal, physical and information visualization views along with algorithms to aid in analyzing the relationships between gene expression levels. Using PCX, we helped our science stakeholders to address many questions in 3D gene expression research, e.g., to objectively define spatial pattern boundaries and temporal profiles of genes and to analyze how mRNA patterns are controlled by their regulatory transcription factors.

  10. Stereoscopy Amplifies Emotions Elicited by Facial Expressions

    PubMed Central

    Kätsyri, Jari; Häkkinen, Jukka

    2015-01-01

    Mediated facial expressions do not elicit emotions as strongly as real-life facial expressions, possibly due to the low fidelity of pictorial presentations in typical mediation technologies. In the present study, we investigated the extent to which stereoscopy amplifies emotions elicited by images of neutral, angry, and happy facial expressions. The emotional self-reports of positive and negative valence (which were evaluated separately) and arousal of 40 participants were recorded. The magnitude of perceived depth in the stereoscopic images was manipulated by varying the camera base at 15, 40, 65, 90, and 115 mm. The analyses controlled for participants’ gender, gender match, emotional empathy, and trait alexithymia. The results indicated that stereoscopy significantly amplified the negative valence and arousal elicited by angry expressions at the most natural (65 mm) camera base, whereas stereoscopy amplified the positive valence elicited by happy expressions in both the narrowed and most natural (15–65 mm) base conditions. Overall, the results indicate that stereoscopy amplifies the emotions elicited by mediated emotional facial expressions when the depth geometry is close to natural. The findings highlight the sensitivity of the visual system to depth and its effect on emotions. PMID:27551358

  11. Stereoscopy Amplifies Emotions Elicited by Facial Expressions.

    PubMed

    Hakala, Jussi; Kätsyri, Jari; Häkkinen, Jukka

    2015-12-01

    Mediated facial expressions do not elicit emotions as strongly as real-life facial expressions, possibly due to the low fidelity of pictorial presentations in typical mediation technologies. In the present study, we investigated the extent to which stereoscopy amplifies emotions elicited by images of neutral, angry, and happy facial expressions. The emotional self-reports of positive and negative valence (which were evaluated separately) and arousal of 40 participants were recorded. The magnitude of perceived depth in the stereoscopic images was manipulated by varying the camera base at 15, 40, 65, 90, and 115 mm. The analyses controlled for participants' gender, gender match, emotional empathy, and trait alexithymia. The results indicated that stereoscopy significantly amplified the negative valence and arousal elicited by angry expressions at the most natural (65 mm) camera base, whereas stereoscopy amplified the positive valence elicited by happy expressions in both the narrowed and most natural (15-65 mm) base conditions. Overall, the results indicate that stereoscopy amplifies the emotions elicited by mediated emotional facial expressions when the depth geometry is close to natural. The findings highlight the sensitivity of the visual system to depth and its effect on emotions. PMID:27551358

  12. LBP and SIFT based facial expression recognition

    NASA Astrophysics Data System (ADS)

    Sumer, Omer; Gunes, Ece O.

    2015-02-01

    This study compares the performance of local binary patterns (LBP) and scale invariant feature transform (SIFT) with support vector machines (SVM) in automatic classification of discrete facial expressions. Facial expression recognition is a multiclass classification problem, and seven classes (happiness, anger, sadness, disgust, surprise, fear and contempt) are classified. Using SIFT feature vectors and a linear SVM, 93.1% mean accuracy is achieved on the CK+ database. On the other hand, the performance of an LBP-based classifier with a linear SVM is reported on SFEW using a strictly person-independent (SPI) protocol. Seven-class mean accuracy on SFEW is 59.76%. Experiments on both databases showed that LBP features can be used in a fairly descriptive way if a good localization of facial points and partitioning strategy are followed.
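
    A rough sketch of an LBP-plus-linear-SVM expression classifier of the kind compared here: uniform LBP histograms over a block grid fed to a linear SVM. The parameters (8 neighbours, radius 1, 4 x 4 grid) and the random stand-in data are assumptions, not the paper's protocol or databases.

        import numpy as np
        from skimage.feature import local_binary_pattern
        from sklearn.svm import LinearSVC

        def lbp_grid_histogram(img, P=8, R=1, grid=4):
            """Concatenated uniform-LBP histograms over a grid x grid partition of the face."""
            lbp = local_binary_pattern((img * 255).astype(np.uint8), P, R, method="uniform")
            h, w = lbp.shape
            feats = []
            for r in range(grid):
                for c in range(grid):
                    cell = lbp[r*h//grid:(r+1)*h//grid, c*w//grid:(c+1)*w//grid]
                    hist, _ = np.histogram(cell, bins=P + 2, range=(0, P + 2), density=True)
                    feats.append(hist)
            return np.concatenate(feats)

        rng = np.random.default_rng(4)
        images = rng.random((40, 64, 64))           # stand-in grey-level face crops in [0, 1]
        labels = rng.integers(0, 7, size=40)        # seven expression classes
        X = np.stack([lbp_grid_histogram(im) for im in images])
        clf = LinearSVC(max_iter=10000).fit(X, labels)
        print(clf.predict(X[:3]))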

  13. Violent Media Consumption and the Recognition of Dynamic Facial Expressions

    ERIC Educational Resources Information Center

    Kirsh, Steven J.; Mounts, Jeffrey R. W.; Olczak, Paul V.

    2006-01-01

    This study assessed the speed of recognition of facial emotional expressions (happy and angry) as a function of violent media consumption. Color photos of calm facial expressions morphed to either an angry or a happy facial expression. Participants were asked to make a speeded identification of the emotion (happiness or anger) during the morph.…

  14. Categorical Perception of Affective and Linguistic Facial Expressions

    ERIC Educational Resources Information Center

    McCullough, Stephen; Emmorey, Karen

    2009-01-01

    Two experiments investigated categorical perception (CP) effects for affective facial expressions and linguistic facial expressions from American Sign Language (ASL) for Deaf native signers and hearing non-signers. Facial expressions were presented in isolation (Experiment 1) or in an ASL verb context (Experiment 2). Participants performed ABX…

  15. A generic 3D kinetic model of gene expression

    NASA Astrophysics Data System (ADS)

    Zhdanov, Vladimir P.

    2012-04-01

    Recent experiments show that mRNAs and proteins can be localized both in prokaryotic and eukaryotic cells. To describe such situations, I present a 3D mean-field kinetic model aimed primarily at gene expression in prokaryotic cells, including the formation of mRNA, its translation into protein, and slow diffusion of these species. Under steady-state conditions, the mRNA and protein spatial distribution is described by simple exponential functions. The protein concentration near the gene transcribed into mRNA is shown to depend on the protein and mRNA diffusion coefficients and degradation rate constants.
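
    The exponential steady-state profiles can be made concrete with a short calculation: for diffusion with first-order degradation, D c'' = k c away from the source, so the concentration falls off as exp(-x/L) with decay length L = sqrt(D/k). The numerical values below are arbitrary, not taken from the paper.

        import numpy as np

        D = 0.05    # diffusion coefficient, illustrative value
        k = 0.002   # degradation rate constant, illustrative value
        L = np.sqrt(D / k)                          # decay length of the steady-state profile

        x = np.linspace(0, 20, 5)
        profile = np.exp(-x / L)                    # concentration relative to its value at the gene
        print(round(L, 2), np.round(profile, 3))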

  16. The Relationships between Processing Facial Identity, Emotional Expression, Facial Speech, and Gaze Direction during Development

    ERIC Educational Resources Information Center

    Spangler, Sibylle M.; Schwarzer, Gudrun; Korell, Monika; Maier-Karius, Johanna

    2010-01-01

    Four experiments were conducted with 5- to 11-year-olds and adults to investigate whether facial identity, facial speech, emotional expression, and gaze direction are processed independently of or in interaction with one another. In a computer-based, speeded sorting task, participants sorted faces according to facial identity while disregarding…

  17. Mapping the development of facial expression recognition.

    PubMed

    Rodger, Helen; Vizioli, Luca; Ouyang, Xinyi; Caldara, Roberto

    2015-11-01

    Reading the non-verbal cues from faces to infer the emotional states of others is central to our daily social interactions from very early in life. Despite the relatively well-documented ontogeny of facial expression recognition in infancy, our understanding of the development of this critical social skill throughout childhood into adulthood remains limited. To this end, using a psychophysical approach we implemented the QUEST threshold-seeking algorithm to parametrically manipulate the quantity of signals available in faces normalized for contrast and luminance displaying the six emotional expressions, plus neutral. We thus determined observers' perceptual thresholds for effective discrimination of each emotional expression from 5 years of age up to adulthood. Consistent with previous studies, happiness was most easily recognized with minimum signals (35% on average), whereas fear required the maximum signals (97% on average) across groups. Overall, recognition improved with age for all expressions except happiness and fear, for which all age groups including the youngest remained within the adult range. Uniquely, our findings characterize the recognition trajectories of the six basic emotions into three distinct groupings: expressions that show a steep improvement with age - disgust, neutral, and anger; expressions that show a more gradual improvement with age - sadness, surprise; and those that remain stable from early childhood - happiness and fear, indicating that the coding for these expressions is already mature by 5 years of age. Altogether, our data provide for the first time a fine-grained mapping of the development of facial expression recognition. This approach significantly increases our understanding of the decoding of emotions across development and offers a novel tool to measure impairments for specific facial expressions in developmental clinical populations. PMID:25704672

  18. Automatic recognition of emotions from facial expressions

    NASA Astrophysics Data System (ADS)

    Xue, Henry; Gertner, Izidor

    2014-06-01

    In the human-computer interaction (HCI) process it is desirable to have an artificial intelligent (AI) system that can identify and categorize human emotions from facial expressions. Such systems can be used in security, in entertainment industries, and also to study visual perception, social interactions and disorders (e.g. schizophrenia and autism). In this work we survey and compare the performance of different feature extraction algorithms and classification schemes. We introduce a faster feature extraction method that resizes and applies a set of filters to the data images without sacrificing the accuracy. In addition, we have enhanced SVM to multiple dimensions while retaining the high accuracy rate of SVM. The algorithms were tested using the Japanese Female Facial Expression (JAFFE) Database and the Database of Faces (AT&T Faces).

  19. Objectifying facial expressivity assessment of Parkinson's patients: preliminary study.

    PubMed

    Wu, Peng; Gonzalez, Isabel; Patsis, Georgios; Jiang, Dongmei; Sahli, Hichem; Kerckhofs, Eric; Vandekerckhove, Marie

    2014-01-01

    Patients with Parkinson's disease (PD) can exhibit a reduction of spontaneous facial expression, designated as "facial masking," a symptom in which facial muscles become rigid. To improve clinical assessment of facial expressivity of PD, this work attempts to quantify the dynamic facial expressivity (facial activity) of PD by automatically recognizing facial action units (AUs) and estimating their intensity. Spontaneous facial expressivity was assessed by comparing 7 PD patients with 8 control participants. To voluntarily produce spontaneous facial expressions that resemble those typically triggered by emotions, six emotions (amusement, sadness, anger, disgust, surprise, and fear) were elicited using movie clips. During the movie clips, physiological signals (facial electromyography (EMG) and electrocardiogram (ECG)) and frontal face video of the participants were recorded. The participants were asked to report on their emotional states throughout the experiment. We first examined the effectiveness of the emotion manipulation by evaluating the participant's self-reports. Disgust-induced emotions were significantly higher than the other emotions. Thus we focused on the analysis of the recorded data during watching disgust movie clips. The proposed facial expressivity assessment approach captured differences in facial expressivity between PD patients and controls. Also differences between PD patients with different progression of Parkinson's disease have been observed. PMID:25478003

  20. Objectifying Facial Expressivity Assessment of Parkinson's Patients: Preliminary Study

    PubMed Central

    Patsis, Georgios; Jiang, Dongmei; Sahli, Hichem; Kerckhofs, Eric; Vandekerckhove, Marie

    2014-01-01

    Patients with Parkinson's disease (PD) can exhibit a reduction of spontaneous facial expression, designated as “facial masking,” a symptom in which facial muscles become rigid. To improve clinical assessment of facial expressivity of PD, this work attempts to quantify the dynamic facial expressivity (facial activity) of PD by automatically recognizing facial action units (AUs) and estimating their intensity. Spontaneous facial expressivity was assessed by comparing 7 PD patients with 8 control participants. To voluntarily produce spontaneous facial expressions that resemble those typically triggered by emotions, six emotions (amusement, sadness, anger, disgust, surprise, and fear) were elicited using movie clips. During the movie clips, physiological signals (facial electromyography (EMG) and electrocardiogram (ECG)) and frontal face video of the participants were recorded. The participants were asked to report on their emotional states throughout the experiment. We first examined the effectiveness of the emotion manipulation by evaluating the participant's self-reports. Disgust-induced emotions were significantly higher than the other emotions. Thus we focused on the analysis of the recorded data during watching disgust movie clips. The proposed facial expressivity assessment approach captured differences in facial expressivity between PD patients and controls. Also differences between PD patients with different progression of Parkinson's disease have been observed. PMID:25478003

  1. Categorical perception of affective and linguistic facial expressions

    PubMed Central

    McCullough, Stephen; Emmorey, Karen

    2009-01-01

    Two experiments investigated categorical perception (CP) effects for affective facial expressions and linguistic facial expressions from American Sign Language (ASL) for Deaf native signers and hearing non-signers. Facial expressions were presented in isolation (Experiment 1) or in an ASL verb context (Experiment 2). Participants performed ABX discrimination and identification tasks on morphed affective and linguistic facial expression continua. The continua were created by morphing end-point photo exemplars into 11 images, changing linearly from one expression to another in equal steps. For both affective and linguistic expressions, hearing non-signers exhibited better discrimination across category boundaries than within categories for both experiments, thus replicating previous results with affective expressions and demonstrating CP effects for non-canonical facial expressions. Deaf signers, however, showed significant CP effects only for linguistic facial expressions. Subsequent analyses indicated that order of presentation influenced signers' response time performance for affective facial expressions: viewing linguistic facial expressions first slowed response time for affective facial expressions. We conclude that CP effects for affective facial expressions can be influenced by language experience. PMID:19111287

  2. Role of facial expressions in social interactions.

    PubMed

    Frith, Chris

    2009-12-12

    The expressions we see in the faces of others engage a number of different cognitive processes. Emotional expressions elicit rapid responses, which often imitate the emotion in the observed face. These effects can even occur for faces presented in such a way that the observer is not aware of them. We are also very good at explicitly recognizing and describing the emotion being expressed. A recent study, contrasting human and humanoid robot facial expressions, suggests that people can recognize the expressions made by the robot explicitly, but may not show the automatic, implicit response. The emotional expressions presented by faces are not simply reflexive, but also have a communicative component. For example, empathic expressions of pain are not simply a reflexive response to the sight of pain in another, since they are exaggerated when the empathizer knows he or she is being observed. It seems that we want people to know that we are empathic. Of especial importance among facial expressions are ostensive gestures such as the eyebrow flash, which indicate the intention to communicate. These gestures indicate, first, that the sender is to be trusted and, second, that any following signals are of importance to the receiver. PMID:19884140

  3. Suitable models for face geometry normalization in facial expression recognition

    NASA Astrophysics Data System (ADS)

    Sadeghi, Hamid; Raie, Abolghasem A.

    2015-01-01

    Recently, facial expression recognition has attracted much attention in machine vision research because of its various applications. Accordingly, many facial expression recognition systems have been proposed. However, the majority of existing systems suffer from a critical problem: geometric variability. It directly affects the performance of geometric feature-based facial expression recognition approaches. Furthermore, it is a crucial challenge in appearance feature-based techniques. This variability appears in both neutral faces and facial expressions. Appropriate face geometry normalization can improve the accuracy of each facial expression recognition system. Therefore, this paper proposes different geometric models or shapes for normalization. Face geometry normalization removes the geometric variability of facial images, and consequently appearance feature extraction methods can be accurately utilized to represent facial images. Thus, some expression-based geometric models are proposed for facial image normalization. Next, local binary patterns and local phase quantization are used for appearance feature extraction. A combination of an effective geometric normalization with accurate appearance representations results in more than a 4% accuracy improvement compared to several state-of-the-art methods in facial expression recognition. Moreover, utilizing the geometric model with larger mouth and eye regions gives higher accuracy due to the importance of these regions in facial expression.
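
    One common geometric normalization (not necessarily one of the models proposed in the paper) maps detected eye and mouth landmarks onto fixed canonical positions with a similarity transform and warps the face accordingly, so that appearance features are extracted from a geometry-normalized crop. The canonical coordinates and toy image below are assumptions.

        import numpy as np
        from skimage.transform import SimilarityTransform, warp

        canonical = np.array([[22.0, 24.0], [42.0, 24.0], [32.0, 48.0]])   # left eye, right eye, mouth (x, y)
        detected = np.array([[60.0, 70.0], [98.0, 64.0], [84.0, 110.0]])   # same landmarks in the raw image

        tform = SimilarityTransform()
        tform.estimate(canonical, detected)         # maps canonical -> raw coordinates, as warp expects

        rng = np.random.default_rng(5)
        raw = rng.random((160, 160))                # stand-in for the raw grey-level face image
        normalised = warp(raw, tform, output_shape=(64, 64))
        print(normalised.shape)                     # (64, 64) geometry-normalised face crop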

  4. Face recognition using 3D facial shape and color map information: comparison and combination

    NASA Astrophysics Data System (ADS)

    Godil, Afzal; Ressler, Sandy; Grother, Patrick

    2004-08-01

    In this paper, we investigate the use of 3D surface geometry for face recognition and compare it to one based on color map information. The 3D surface and color map data are from the CAESAR anthropometric database. We find that the recognition performance is not very different between 3D surface and color map information using a principal component analysis algorithm. We also discuss the different techniques for the combination of the 3D surface and color map information for multi-modal recognition by using different fusion approaches and show that there is significant improvement in results. The effectiveness of various techniques is compared and evaluated on a dataset with 200 subjects in two different positions.
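
    A toy sketch of score-level fusion of the two channels: project each modality with PCA, compute nearest-neighbour distances per modality, and combine them with a simple weighted sum before picking the gallery identity. The weights, dimensions and random data are assumptions rather than the paper's fusion rules.

        import numpy as np
        from sklearn.decomposition import PCA

        def pca_distances(gallery, probe, n_components=10):
            """Distances from the probe to every gallery subject in a PCA subspace of one modality."""
            pca = PCA(n_components=n_components).fit(gallery)
            g, p = pca.transform(gallery), pca.transform(probe[None, :])[0]
            return np.linalg.norm(g - p, axis=1)

        rng = np.random.default_rng(6)
        gallery_shape, gallery_col = rng.random((20, 300)), rng.random((20, 500))   # 20 enrolled subjects
        probe_id = 7
        probe_shape = gallery_shape[probe_id] + 0.01 * rng.standard_normal(300)
        probe_col = gallery_col[probe_id] + 0.01 * rng.standard_normal(500)

        d_shape = pca_distances(gallery_shape, probe_shape)
        d_col = pca_distances(gallery_col, probe_col)
        fused = 0.5 * d_shape / d_shape.max() + 0.5 * d_col / d_col.max()           # simple sum rule
        print(int(np.argmin(fused)))                # expected: 7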

  5. A comparison of facial expression properties in five hylobatid species.

    PubMed

    Scheider, Linda; Liebal, Katja; Oña, Leonardo; Burrows, Anne; Waller, Bridget

    2014-07-01

    Little is known about facial communication of lesser apes (family Hylobatidae) and how their facial expressions (and their use) relate to social organization. We investigated facial expressions (defined as combinations of facial movements) in social interactions of mated pairs in five different hylobatid species belonging to three different genera using a recently developed objective coding system, the Facial Action Coding System for hylobatid species (GibbonFACS). We described three important properties of their facial expressions and compared them between genera. First, we compared the rate of facial expressions, which was defined as the number of facial expressions per unit of time. Second, we compared their repertoire size, defined as the number of different types of facial expressions used, independent of their frequency. Third, we compared the diversity of expression, defined as the repertoire weighted by the rate of use for each type of facial expression. We observed a higher rate and diversity of facial expression, but no larger repertoire, in Symphalangus (siamangs) compared to Hylobates and Nomascus species. In line with previous research, these results suggest siamangs differ from other hylobatids in certain aspects of their social behavior. To investigate whether differences in facial expressions are linked to hylobatid socio-ecology, we used a Phylogenetic General Least Square (PGLS) regression analysis to correlate those properties with two social factors: group size and level of monogamy. No relationship between the properties of facial expressions and these socio-ecological factors was found. One explanation could be that facial expressions in hylobatid species are subject to phylogenetic inertia and do not differ sufficiently between species to reveal correlations with factors such as group size and monogamy level. PMID:24395677

  6. Automated Facial Action Coding System for Dynamic Analysis of Facial Expressions in Neuropsychiatric Disorders

    PubMed Central

    Hamm, Jihun; Kohler, Christian G.; Gur, Ruben C.; Verma, Ragini

    2011-01-01

    Facial expression is widely used to evaluate emotional impairment in neuropsychiatric disorders. Ekman and Friesen’s Facial Action Coding System (FACS) encodes movements of individual facial muscles from distinct momentary changes in facial appearance. Unlike facial expression ratings based on categorization of expressions into prototypical emotions (happiness, sadness, anger, fear, disgust, etc.), FACS can encode ambiguous and subtle expressions, and therefore is potentially more suitable for analyzing the small differences in facial affect. However, FACS rating requires extensive training, and is time consuming and subjective thus prone to bias. To overcome these limitations, we developed an automated FACS based on advanced computer science technology. The system automatically tracks faces in a video, extracts geometric and texture features, and produces temporal profiles of each facial muscle movement. These profiles are quantified to compute frequencies of single and combined Action Units (AUs) in videos, which can facilitate statistical study of large populations in disorders affecting facial expression. We derived quantitative measures of flat and inappropriate facial affect automatically from temporal AU profiles. Applicability of the automated FACS was illustrated in a pilot study, by applying it to data of videos from eight schizophrenia patients and controls. We created temporal AU profiles that provided rich information on the dynamics of facial muscle movements for each subject. The quantitative measures of flatness and inappropriateness showed clear differences between patients and the controls, highlighting their potential in automatic and objective quantification of symptom severity. PMID:21741407

  7. Automated Video Based Facial Expression Analysis of Neuropsychiatric Disorders

    PubMed Central

    Wang, Peng; Barrett, Frederick; Martin, Elizabeth; Milanova, Marina; Gur, Raquel E.; Gur, Ruben C.; Kohler, Christian; Verma, Ragini

    2008-01-01

    Deficits in emotional expression are prominent in several neuropsychiatric disorders, including schizophrenia. Available clinical facial expression evaluations provide subjective and qualitative measurements, which are based on static 2D images that do not capture the temporal dynamics and subtleties of expression changes. Therefore, there is a need for automated, objective and quantitative measurements of facial expressions captured using videos. This paper presents a computational framework that creates probabilistic expression profiles for video data and can potentially help to automatically quantify emotional expression differences between patients with neuropsychiatric disorders and healthy controls. Our method automatically detects and tracks facial landmarks in videos, and then extracts geometric features to characterize facial expression changes. To analyze temporal facial expression changes, we employ probabilistic classifiers that analyze facial expressions in individual frames, and then propagate the probabilities throughout the video to capture the temporal characteristics of facial expressions. The applications of our method to healthy controls and case studies of patients with schizophrenia and Asperger’s syndrome demonstrate the capability of the video-based expression analysis method in capturing subtleties of facial expression. Such results can pave the way for a video based method for quantitative analysis of facial expressions in clinical research of disorders that cause affective deficits. PMID:18045693
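
    As a stand-in for the probabilistic propagation described above, the sketch below simply smooths per-frame class probabilities recursively over time to obtain a temporal expression profile; the actual method is more sophisticated, and the Dirichlet toy data are an assumption.

        import numpy as np

        def smooth_probabilities(frame_probs, alpha=0.8):
            """Blend each frame's class probabilities with the running temporal estimate."""
            smoothed = [frame_probs[0]]
            for p in frame_probs[1:]:
                s = alpha * smoothed[-1] + (1 - alpha) * p
                smoothed.append(s / s.sum())        # keep a valid probability distribution
            return np.array(smoothed)

        rng = np.random.default_rng(7)
        raw = rng.dirichlet(np.ones(4), size=30)    # noisy per-frame probabilities for 4 expressions
        profile = smooth_probabilities(raw)
        print(profile.shape, profile[-1].round(2))  # temporal expression profile for the clip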

  8. Slowing down Presentation of Facial Movements and Vocal Sounds Enhances Facial Expression Recognition and Induces Facial-Vocal Imitation in Children with Autism

    ERIC Educational Resources Information Center

    Tardif, Carole; Laine, France; Rodriguez, Melissa; Gepner, Bruno

    2007-01-01

    This study examined the effects of slowing down presentation of facial expressions and their corresponding vocal sounds on facial expression recognition and facial and/or vocal imitation in children with autism. Twelve autistic children and twenty-four normal control children were presented with emotional and non-emotional facial expressions on…

  9. Sex differences in perception of invisible facial expressions.

    PubMed

    Hong, Sang Wook; Yoon, K Lira; Peaco, Sophia

    2015-01-01

    Previous research indicates that women are better at recognizing facial expressions than men. In the current study, we examined whether this female advantage in the processing of facial expressions also occurs at the unconscious level. In two studies, participants performed a simple detection task and a 4-AFC task while faces were rendered invisible by continuous flash suppression. When faces with full intensity expressions were suppressed, there was no significant sex difference in the time of breakup of suppression (Study 1). However, when suppressed faces depicted low intensity expressions, suppression broke up earlier in men than women, indicating that men may be more sensitive to facial features related to mild facial expressions (Study 2). The current findings suggest that the female advantage in processing of facial expressions is absent in unconscious processing of emotional information. The female advantage in facial expression processing may require conscious perception of faces. PMID:25883583

  10. Some Methods of Applied Numerical Analysis to 3d Facial Reconstruction Software

    NASA Astrophysics Data System (ADS)

    Roşu, Şerban; Ianeş, Emilia; Roşu, Doina

    2010-09-01

    This paper deals with the collective work performed by medical doctors from the University of Medicine and Pharmacy Timisoara and engineers from the Politechnical Institute Timisoara in the effort to create the first Romanian 3D reconstruction software based on CT or MRI scans and to test the created software in clinical practice.

  11. The Role of Active Exploration of 3D Face Stimuli on Recognition Memory of Facial Information

    ERIC Educational Resources Information Center

    Liu, Chang Hong; Ward, James; Markall, Helena

    2007-01-01

    Research on face recognition has mainly relied on methods in which observers are relatively passive viewers of face stimuli. This study investigated whether active exploration of three-dimensional (3D) face stimuli could facilitate recognition memory. A standard recognition task and a sequential matching task were employed in a yoked design.…

  12. Adults' responsiveness to children's facial expressions.

    PubMed

    Aradhye, Chinmay; Vonk, Jennifer; Arida, Danielle

    2015-07-01

    We investigated the effect of young children's (hereafter children's) facial expressions on adult responsiveness. In Study 1, 131 undergraduate students from a midsized university in the midwestern United States rated children's images and videos with smiling, crying, or neutral expressions on cuteness, likelihood to adopt, and participants' experienced distress. Looking times at images and videos along with perception of cuteness, likelihood to adopt, and experienced distress using 10-point Likert scales were measured. Videos of smiling children were rated as cuter and more likely to be adopted and were viewed for longer times compared with videos of crying children, which evoked more distress. In Study 2, we recorded responses from 101 of the same participants in an online survey measuring gender role identity, empathy, and perspective taking. Higher levels of femininity (as measured by Bem's Sex Role Inventory) predicted higher "likely to adopt" ratings for crying images. These findings indicate that adult perception of children and motivation to nurture are affected by both children's facial expressions and adult characteristics and build on existing literature to demonstrate that children may use expressions to manipulate the motivations of even non-kin adults to direct attention toward and perhaps nurture young children. PMID:25838165

  13. Altering sensorimotor feedback disrupts visual discrimination of facial expressions.

    PubMed

    Wood, Adrienne; Lupyan, Gary; Sherrin, Steven; Niedenthal, Paula

    2016-08-01

    Looking at another person's facial expression of emotion can trigger the same neural processes involved in producing the expression, and such responses play a functional role in emotion recognition. Disrupting individuals' facial action, for example, interferes with verbal emotion recognition tasks. We tested the hypothesis that facial responses also play a functional role in the perceptual processing of emotional expressions. We altered the facial action of participants with a gel facemask while they performed a task that involved distinguishing target expressions from highly similar distractors. Relative to control participants, participants in the facemask condition demonstrated inferior perceptual discrimination of facial expressions, but not of nonface stimuli. The findings suggest that somatosensory/motor processes involving the face contribute to the visual perceptual, and not just conceptual, processing of facial expressions. More broadly, our study contributes to growing evidence for the fundamentally interactive nature of the perceptual inputs from different sensory modalities. PMID:26542827

  14. The Neuropsychology of Facial Identity and Facial Expression in Children with Mental Retardation

    ERIC Educational Resources Information Center

    Singh, Nirbhay N.; Oswald, Donald P.; Lancioni, Giulio E.; Ellis, Cynthia R.; Sage, Monica; Ferris, Jennifer R.

    2005-01-01

    We indirectly determined how children with mental retardation analyze facial identity and facial expression, and if these analyses of identity and expression were controlled by independent cognitive processes. In a reaction time study, 20 children with mild mental retardation were required to determine if simultaneously presented photographs of…

  15. Facial expression recognition using constructive neural networks

    NASA Astrophysics Data System (ADS)

    Ma, Liying; Khorasani, Khashayar

    2001-08-01

    The computer-based recognition of facial expressions has been an active area of research for quite a long time. The ultimate goal is to realize intelligent and transparent communications between human beings and machines. The neural network (NN) based recognition methods have been found to be particularly promising, since NN is capable of implementing mapping from the feature space of face images to the facial expression space. However, finding a proper network size has always been a frustrating and time consuming experience for NN developers. In this paper, we propose to use the constructive one-hidden-layer feedforward neural networks (OHL-FNNs) to overcome this problem. The constructive OHL-FNN will obtain in a systematic way a proper network size which is required by the complexity of the problem being considered. Furthermore, the computational cost involved in network training can be considerably reduced when compared to standard back-propagation (BP) based FNNs. In our proposed technique, the 2-dimensional discrete cosine transform (2-D DCT) is applied over the entire difference face image for extracting relevant features for recognition purposes. The lower-frequency 2-D DCT coefficients obtained are then used to train a constructive OHL-FNN. An input-side pruning technique previously proposed by the authors is also incorporated into the constructive learning process to reduce the network size without sacrificing the performance of the resulting network. The proposed technique is applied to a database consisting of images of 60 men, each having 5 facial expression images (neutral, smile, anger, sadness, and surprise). Images of 40 men are used for network training, and the remaining images are used for generalization and
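
    As a minimal sketch of the feature-extraction step described above, the following computes a low-frequency block of 2-D DCT coefficients from a difference face image. The image size and the number of coefficients kept are assumptions for illustration; the constructive OHL-FNN training itself is not reproduced.

```python
# Sketch: low-frequency 2-D DCT coefficients of a difference face image as a
# feature vector. Image size and the number of coefficients kept are
# illustrative assumptions, not the settings used in the paper.
import numpy as np
from scipy.fft import dctn

def dct_features(neutral_img, expression_img, keep=8):
    """Flatten the top-left keep x keep block of the 2-D DCT of the
    difference image into a feature vector."""
    diff = expression_img.astype(float) - neutral_img.astype(float)
    coeffs = dctn(diff, norm="ortho")        # 2-D DCT over the whole image
    return coeffs[:keep, :keep].ravel()      # keep only low frequencies

# Toy example on random 64x64 "images".
rng = np.random.default_rng(1)
neutral = rng.integers(0, 256, size=(64, 64))
smile = rng.integers(0, 256, size=(64, 64))
print(dct_features(neutral, smile).shape)    # (64,)
```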

  16. Dynamic facial expressions are processed holistically, but not more holistically than static facial expressions.

    PubMed

    Tobin, Alanna; Favelle, Simone; Palermo, Romina

    2016-09-01

    There is evidence that facial expressions are perceived holistically and featurally. The composite task is a direct measure of holistic processing (although the absence of a composite effect implies the use of other types of processing). Most composite task studies have used static images, despite the fact that movement is an important aspect of facial expressions and there is some evidence that movement may facilitate recognition. We created static and dynamic composites, in which emotions were reliably identified from each half of the face. The magnitude of the composite effect was similar for static and dynamic expressions identified from the top half (anger, sadness and surprise) but was reduced in dynamic as compared to static expressions identified from the bottom half (fear, disgust and joy). Thus, any advantage in recognising dynamic over static expressions is not likely to stem from enhanced holistic processing, rather motion may emphasise or disambiguate diagnostic featural information. PMID:26208146

  17. Effects of facial expression on working memory.

    PubMed

    Stiernströmer, Emelie S; Wolgast, Martin; Johansson, Mikael

    2016-08-01

    In long-term memory (LTM), emotional content may both enhance and impair memory; however, disagreement remains as to whether emotional content exerts different effects on the ability to maintain and manipulate information over short intervals. Using a working-memory (WM) recognition task requiring the monitoring of faces displaying facial expressions of emotion, participants judged each face as identical (target) or not (non-target) to that presented 2 trials back (2-back). Negative expressions were recognised better and faster, as reflected in higher target discriminability and target detection. Positive and negative expressions also induced a more liberal detection bias compared with neutral ones. Taking the preceding item into account, additional accuracy impairment (negative preceding negative target) and enhancement effects (negative or positive preceding neutral target) appeared. This illustrates a differential modulation of WM based on the affective tone of the target (mirroring LTM enhancement and recognition-bias effects), and of the preceding item (enhanced and impaired target detection). PMID:26238683

  18. Impaired Overt Facial Mimicry in Response to Dynamic Facial Expressions in High-Functioning Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Yoshimura, Sayaka; Sato, Wataru; Uono, Shota; Toichi, Motomi

    2015-01-01

    Previous electromyographic studies have reported that individuals with autism spectrum disorders (ASD) exhibited atypical patterns of facial muscle activity in response to facial expression stimuli. However, whether such activity is expressed in visible facial mimicry remains unknown. To investigate this issue, we videotaped facial responses in…

  19. From facial expressions to bodily gestures

    PubMed Central

    2016-01-01

    This article aims to determine to what extent photographic practices in psychology, psychiatry and physiology contributed to the definition of the external bodily signs of passions and emotions in the second half of the 19th century in France. Bridging the gap between recent research in the history of emotions and photographic history, the following analyses focus on the photographic production of scientists and photographers who made significant contributions to the study of expressions and gestures, namely Duchenne de Boulogne, Charles Darwin, Paul Richer and Albert Londe. This article argues that photography became a key technology in their works due to the adequateness of the exposure time of different cameras to the duration of the bodily manifestations to be recorded, and that these uses constituted facial expressions and bodily gestures as particular objects for the scientific study. PMID:26900264

  20. Facial expression recognition in crested macaques (Macaca nigra).

    PubMed

    Micheletta, Jérôme; Whitehouse, Jamie; Parr, Lisa A; Waller, Bridget M

    2015-07-01

    Facial expressions are a main communication channel used by many different species of primate. Despite this, we know relatively little about how primates discriminate between different facial expressions, and most of what we do know comes from a restricted number of well-studied species. In this study, three crested macaques (Macaca nigra) took part in matching-to-sample tasks where they had to discriminate different facial expressions. In a first experiment, the macaques had to match a photograph of a facial expression to another exemplar of the same expression produced by a different individual, against examples of one of three other types of expressions and neutral faces. In a second experiment, they had to match a dynamic video recording of a facial expression to a still photograph of another exemplar of the same facial expression produced by another individual, also against one of four other expressions. The macaques performed above chance in both tasks, identifying expressions as belonging to the same category regardless of individual identity. Using matrix correlations and multidimensional scaling, we analysed the pattern of errors to see whether overall similarity between facial expressions and/or specific morphological features caused the macaques to confuse facial expressions. Overall similarity, measured with the macaque facial action coding system (maqFACS), did not correlate with performances. Instead, functional similarities between facial expressions could be responsible for the observed pattern of error. These results expand previous findings to a novel primate species and highlight the potential of using video stimuli to investigate the perception and categorisation of visual signals in primates. PMID:25821924

  1. The Facial Expression Coding System (FACES): Development, Validation, and Utility

    ERIC Educational Resources Information Center

    Kring, Ann M.; Sloan, Denise M.

    2007-01-01

    This article presents information on the development and validation of the Facial Expression Coding System (FACES; A. M. Kring & D. Sloan, 1991). Grounded in a dimensional model of emotion, FACES provides information on the valence (positive, negative) of facial expressive behavior. In 5 studies, reliability and validity data from 13 diverse…

  2. Dynamic Facial Expression Recognition With Atlas Construction and Sparse Representation.

    PubMed

    Guo, Yimo; Zhao, Guoying; Pietikainen, Matti

    2016-05-01

    In this paper, a new dynamic facial expression recognition method is proposed. Dynamic facial expression recognition is formulated as a longitudinal groupwise registration problem. The main contributions of this method lie in the following aspects: 1) subject-specific facial feature movements of different expressions are described by a diffeomorphic growth model; 2) salient longitudinal facial expression atlas is built for each expression by a sparse groupwise image registration method, which can describe the overall facial feature changes among the whole population and can suppress the bias due to large intersubject facial variations; and 3) both the image appearance information in spatial domain and topological evolution information in temporal domain are used to guide recognition by a sparse representation method. The proposed framework has been extensively evaluated on five databases for different applications: the extended Cohn-Kanade, MMI, FERA, and AFEW databases for dynamic facial expression recognition, and UNBC-McMaster database for spontaneous pain expression monitoring. This framework is also compared with several state-of-the-art dynamic facial expression recognition methods. The experimental results demonstrate that the recognition rates of the new method are consistently higher than other methods under comparison. PMID:26955032
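
    The sparse-representation step can be illustrated with a small sketch: a test sample is coded as a sparse combination of training samples, and the class whose atoms reconstruct it best wins. This uses a plain Lasso solver on toy data and does not reproduce the paper's diffeomorphic growth model, atlas construction, or registration; all names and settings below are assumptions.

```python
# Sketch: sparse-representation classification (SRC). A test sample is coded as
# a sparse combination of training atoms; the class whose atoms give the
# smallest reconstruction error is chosen. Data and settings are toy assumptions.
import numpy as np
from sklearn.linear_model import Lasso

def src_classify(D, labels, x, alpha=0.01):
    """D: (n_features, n_atoms) dictionary of training samples as columns;
    labels: class of each atom; x: test sample of length n_features."""
    coder = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
    coder.fit(D, x)                           # sparse code of x over D
    code = coder.coef_
    errors = {}
    for c in np.unique(labels):
        mask = labels == c
        errors[c] = np.linalg.norm(x - D[:, mask] @ code[mask])
    return min(errors, key=errors.get)

rng = np.random.default_rng(2)
D = rng.normal(size=(50, 30))                 # 30 atoms, 50-dimensional features
labels = np.repeat(np.arange(3), 10)          # 3 expression classes, 10 atoms each
x = D[:, 4] + 0.05 * rng.normal(size=50)      # noisy copy of a class-0 atom
print(src_classify(D, labels, x))             # most likely prints 0
```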

  3. A new 3D method for measuring cranio-facial relationships with cone beam computed tomography (CBCT)

    PubMed Central

    Cibrián, Rosa; Gandia, Jose L.; Paredes, Vanessa

    2013-01-01

    Objectives: CBCT systems, with their high precision 3D reconstructions, 1:1 images and accuracy in locating cephalometric landmarks, allow us to evaluate measurements from craniofacial structures, so enabling us to replace the anthropometric methods or bidimensional methods used until now. The aims are to analyse cranio-facial relationships in a sample of patients who had previously undergone a CBCT and create a new 3D cephalometric method for assessing and measuring patients. Study Design: 90 patients who had a CBCT (i-Cat®) as a diagnostic register were selected. 12 cephalometric landmarks on the three spatial planes (X,Y,Z) were defined and 21 linear measurements were established. Using these measurements, 7 triangles were described and analysed. With the sides of the triangles: (CdR-Me-CdL); (FzR-Me-FzL); (GoR-N-GoL); and the Gl-Me distance, the ratios between them were analysed. In addition, 4 triangles in the mandible were measured (body: GoR-DB-Me and GoL-DB-Me and ramus: KrR-CdR-GoR and KrL-CdL-GoL). Results: When analyzing the sides of the CdR-Me-CdL triangle, it was found that 69.33% of the patients could be considered symmetric. Regarding the ratios between the sides of the following triangles: CdR-Me-CdL, FzR-Me-FzL, GoR-N-GoL and the Gl-Me distance, it was found that almost all ratios were close to 1:1 except those between the CdR-CdL side and the rest of the sides. With regard to the ratios of the 4 triangles of the mandible, it was found that the most symmetrical relationships were those corresponding to the sides of the body of the mandible and the most asymmetrical ones were those corresponding to the base of such triangles. Conclusions: A new method for assessing cranio-facial relationships using CBCT has been established. It could be used for diverse purposes including diagnosis and treatment planning. Key words: Craniofacial relationship, CBCT, 3D cephalometry. PMID:23524427

  4. Robust facial expression recognition algorithm based on local metric learning

    NASA Astrophysics Data System (ADS)

    Jiang, Bin; Jia, Kebin

    2016-01-01

    In facial expression recognition tasks, different facial expressions are often confused with each other. Motivated by the fact that a learned metric can significantly improve the accuracy of classification, a facial expression recognition algorithm based on local metric learning is proposed. First, k-nearest neighbors of the given testing sample are determined from the total training data. Second, chunklets are selected from the k-nearest neighbors. Finally, the optimal transformation matrix is computed by maximizing the total variance between different chunklets and minimizing the total variance of instances in the same chunklet. The proposed algorithm can find the suitable distance metric for every testing sample and improve the performance on facial expression recognition. Furthermore, the proposed algorithm can be used for vector-based and matrix-based facial expression recognition. Experimental results demonstrate that the proposed algorithm could achieve higher recognition rates and be more robust than baseline algorithms on the JAFFE, CK, and RaFD databases.
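
    A rough sketch of the local-metric idea follows: the k nearest training neighbours of a test sample are grouped by class into chunklets, and an RCA-style whitening of the within-chunklet scatter serves as the local metric. This approximates, rather than reproduces, the paper's objective of maximizing between-chunklet variance while minimizing within-chunklet variance; the data and parameters are toy assumptions.

```python
# Sketch: a local, RCA-style metric for one test sample. Its k nearest training
# neighbours are grouped by class into chunklets, and the inverse of the
# within-chunklet scatter is used as a Mahalanobis-style metric. This only
# approximates the optimisation described in the abstract.
import numpy as np
from scipy.spatial.distance import cdist

def local_metric(X_train, y_train, x_test, k=20, reg=1e-3):
    idx = np.argsort(cdist([x_test], X_train)[0])[:k]    # k nearest neighbours
    Xk, yk = X_train[idx], y_train[idx]
    d = X_train.shape[1]
    scatter = np.zeros((d, d))
    for c in np.unique(yk):                              # one chunklet per class
        chunk = Xk[yk == c]
        centred = chunk - chunk.mean(axis=0)
        scatter += centred.T @ centred
    scatter = scatter / len(Xk) + reg * np.eye(d)
    return np.linalg.inv(scatter)

def classify(X_train, y_train, x_test, k=20):
    M = local_metric(X_train, y_train, x_test, k)
    diffs = X_train - x_test
    dists = np.einsum("ij,jk,ik->i", diffs, M, diffs)    # (x - xi)^T M (x - xi)
    return y_train[np.argmin(dists)]

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 1, (50, 6)), rng.normal(3, 1, (50, 6))])
y = np.array([0] * 50 + [1] * 50)
print(classify(X, y, X[3] + 0.1))                        # prints 0
```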

  5. Enhanced subliminal emotional responses to dynamic facial expressions.

    PubMed

    Sato, Wataru; Kubota, Yasutaka; Toichi, Motomi

    2014-01-01

    Emotional processing without conscious awareness plays an important role in human social interaction. Several behavioral studies reported that subliminal presentation of photographs of emotional facial expressions induces unconscious emotional processing. However, it was difficult to elicit strong and robust effects using this method. We hypothesized that dynamic presentations of facial expressions would enhance subliminal emotional effects and tested this hypothesis with two experiments. Fearful or happy facial expressions were presented dynamically or statically in either the left or the right visual field for 20 (Experiment 1) and 30 (Experiment 2) ms. Nonsense target ideographs were then presented, and participants reported their preference for them. The results consistently showed that dynamic presentations of emotional facial expressions induced more evident emotional biases toward subsequent targets than did static ones. These results indicate that dynamic presentations of emotional facial expressions induce more evident unconscious emotional processing. PMID:25250001

  6. The efficiency of dynamic and static facial expression recognition

    PubMed Central

    Gold, Jason M.; Barker, Jarrett D.; Barr, Shawn; Bittner, Jennifer L.; Bromfield, W. Drew; Chu, Nicole; Goode, Roy A.; Lee, Doori; Simmons, Michael; Srinath, Aparna

    2012-01-01

    Unlike frozen snapshots of facial expressions that we often see in photographs, natural facial expressions are dynamic events that unfold in a particular fashion over time. But how important are the temporal properties of expressions for our ability to reliably extract information about a person's emotional state? We addressed this question experimentally by gauging human performance in recognizing facial expressions with varying temporal properties relative to that of a statistically optimal (“ideal”) observer. We found that people recognized emotions just as efficiently when viewing them as naturally evolving dynamic events, temporally reversed events, temporally randomized events, or single images frozen in time. Our results suggest that the dynamic properties of human facial movements may play a surprisingly small role in people's ability to infer the emotional states of others from their facial expressions. PMID:23620533

  7. Facial Expressivity at 4 Months: A Context by Expression Analysis.

    PubMed

    Bennett, David S; Bendersky, Margaret; Lewis, Michael

    2002-01-01

    The specificity predicted by differential emotions theory (DET) for early facial expressions in response to 5 different eliciting situations was studied in a sample of 4-month-old infants (n = 150). Infants were videotaped during tickle, sour taste, jack-in-the-box, arm restraint, and masked-stranger situations and their expressions were coded second by second. Infants showed a variety of facial expressions in each situation; however, more infants exhibited positive (joy and surprise) than negative expressions (anger, disgust, fear, and sadness) across all situations except sour taste. Consistent with DET-predicted specificity, joy expressions were the most common in response to tickling, and were less common in response to other situations. Surprise expressions were the most common in response to the jack-in-the-box, as predicted, but also were the most common in response to the arm restraint and masked-stranger situations, indicating a lack of specificity. No evidence of predicted specificity was found for anger, disgust, fear, and sadness expressions. Evidence of individual differences in expressivity within situations, as well as stability in the pattern across situations, underscores the need to examine both child and contextual factors in studying emotional development. The results provide little support for the DET postulate of situational specificity and suggest that a synthesis of differential emotions and dynamic systems theories of emotional expression should be considered. PMID:16878184

  8. The face is not an empty canvas: how facial expressions interact with facial appearance

    PubMed Central

    Hess, Ursula; Adams, Reginald B.; Kleck, Robert E.

    2009-01-01

    Faces are not simply blank canvases upon which facial expressions write their emotional messages. In fact, facial appearance and facial movement are both important social signalling systems in their own right. We here provide multiple lines of evidence for the notion that the social signals derived from facial appearance on the one hand and facial movement on the other interact in a complex manner, sometimes reinforcing and sometimes contradicting one another. Faces provide information on who a person is. Sex, age, ethnicity, personality and other characteristics that can define a person and the social group the person belongs to can all be derived from the face alone. The present article argues that faces interact with the perception of emotion expressions because this information informs a decoder's expectations regarding an expresser's probable emotional reactions. Facial appearance also interacts more directly with the interpretation of facial movement because some of the features that are used to derive personality or sex information are also features that closely resemble certain emotional expressions, thereby enhancing or diluting the perceived strength of particular expressions. PMID:19884144

  9. Four not six: Revealing culturally common facial expressions of emotion.

    PubMed

    Jack, Rachael E; Sun, Wei; Delis, Ioannis; Garrod, Oliver G B; Schyns, Philippe G

    2016-06-01

    As a highly social species, humans generate complex facial expressions to communicate a diverse range of emotions. Since Darwin's work, identifying among these complex patterns which are common across cultures and which are culture-specific has remained a central question in psychology, anthropology, philosophy, and more recently machine vision and social robotics. Classic approaches to addressing this question typically tested the cross-cultural recognition of theoretically motivated facial expressions representing 6 emotions, and reported universality. Yet, variable recognition accuracy across cultures suggests a narrower cross-cultural communication supported by sets of simpler expressive patterns embedded in more complex facial expressions. We explore this hypothesis by modeling the facial expressions of over 60 emotions across 2 cultures, and segregating out the latent expressive patterns. Using a multidisciplinary approach, we first map the conceptual organization of a broad spectrum of emotion words by building semantic networks in 2 cultures. For each emotion word in each culture, we then model and validate its corresponding dynamic facial expression, producing over 60 culturally valid facial expression models. We then apply to the pooled models a multivariate data reduction technique, revealing 4 latent and culturally common facial expression patterns that each communicates specific combinations of valence, arousal, and dominance. We then reveal the face movements that accentuate each latent expressive pattern to create complex facial expressions. Our data questions the widely held view that 6 facial expression patterns are universal, instead suggesting 4 latent expressive patterns with direct implications for emotion communication, social psychology, cognitive neuroscience, and social robotics. PMID:27077757

  10. The not face: A grammaticalization of facial expressions of emotion.

    PubMed

    Benitez-Quiroz, C Fabian; Wilbur, Ronnie B; Martinez, Aleix M

    2016-05-01

    Facial expressions of emotion are thought to have evolved from the development of facial muscles used in sensory regulation and later adapted to express moral judgment. Negative moral judgment includes the expressions of anger, disgust and contempt. Here, we study the hypothesis that these facial expressions of negative moral judgment have further evolved into a facial expression of negation regularly used as a grammatical marker in human language. Specifically, we show that people from different cultures expressing negation use the same facial muscles as those employed to express negative moral judgment. We then show that this nonverbal signal is used as a co-articulator in speech and that, in American Sign Language, it has been grammaticalized as a non-manual marker. Furthermore, this facial expression of negation exhibits the theta oscillation (3-8 Hz) universally seen in syllable and mouthing production in speech and signing. These results provide evidence for the hypothesis that some components of human language have evolved from facial expressions of emotion, and suggest an evolutionary route for the emergence of grammatical markers. PMID:26872248

  11. Parameterized Facial Expression Synthesis Based on MPEG-4

    NASA Astrophysics Data System (ADS)

    Raouzaiou, Amaryllis; Tsapatsoulis, Nicolas; Karpouzis, Kostas; Kollias, Stefanos

    2002-12-01

    The MPEG-4 framework allows applications in which virtual agents, drawing on both textual and multisensory data, including facial expressions and nonverbal speech, help systems adapt to the actual feelings of the user. Applications of this technology are expected in educational environments, virtual collaborative workplaces, communities, and interactive entertainment. Facial animation has gained much interest within the MPEG-4 framework, with implementation details remaining an open research area (Tekalp, 1999). In this paper, we describe a method for enriching human-computer interaction, focusing on the analysis and synthesis of primary and intermediate facial expressions (Ekman and Friesen, 1978). To achieve this goal, we utilize facial animation parameters (FAPs) to model primary expressions and describe a rule-based technique for handling intermediate ones. A relation between FAPs and the activation parameter proposed in classical psychological studies is established, leading to parameterized notions of facial expression analysis and synthesis compatible with the MPEG-4 standard.
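
    The idea of intermediate expressions can be illustrated by scaling a primary-expression FAP profile with an activation level, as in the sketch below. The FAP names and amplitudes are placeholders for illustration, not values from the MPEG-4 tables or from the rule-based technique described in the paper.

```python
# Sketch: scaling a primary-expression FAP profile by an activation level to
# obtain intermediate expressions. FAP names and amplitudes below are
# placeholders, not values from the MPEG-4 tables or from the paper.
JOY_PROFILE = {
    "raise_l_cornerlip": 120,
    "raise_r_cornerlip": 120,
    "open_jaw": 40,
    "squeeze_l_eyebrow": -20,
    "squeeze_r_eyebrow": -20,
}

def intermediate_expression(profile, activation):
    """activation in [0, 1]: 0 = neutral face, 1 = full-blown expression."""
    activation = max(0.0, min(1.0, activation))
    return {fap: round(value * activation) for fap, value in profile.items()}

print(intermediate_expression(JOY_PROFILE, 0.5))   # a "mild joy" FAP set
```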

  12. Viewing distance matter to perceived intensity of facial expressions

    PubMed Central

    Gerhardsson, Andreas; Högman, Lennart; Fischer, Håkan

    2015-01-01

    In our daily perception of facial expressions, we depend on an ability to generalize across the varied distances at which they may appear. This is important to how we interpret the quality and the intensity of the expression. Previous research has not investigated whether this so called perceptual constancy also applies to the experienced intensity of facial expressions. Using a psychophysical measure (Borg CR100 scale) the present study aimed to further investigate perceptual constancy of happy and angry facial expressions at varied sizes, which is a proxy for varying viewing distances. Seventy-one (42 females) participants rated the intensity and valence of facial expressions varying in distance and intensity. The results demonstrated that the perceived intensity (PI) of the emotional facial expression was dependent on the distance of the face and the person perceiving it. An interaction effect was noted, indicating that close-up faces are perceived as more intense than faces at a distance and that this effect is stronger the more intense the facial expression truly is. The present study raises considerations regarding constancy of the PI of happy and angry facial expressions at varied distances. PMID:26191035

  13. Shadows Alter Facial Expressions of Noh Masks

    PubMed Central

    Kawai, Nobuyuki; Miyata, Hiromitsu; Nishimura, Ritsuko; Okanoya, Kazuo

    2013-01-01

    Background A Noh mask, worn by expert actors during performance on the Japanese traditional Noh drama, conveys various emotional expressions despite its fixed physical properties. How does the mask change its expressions? Shadows change subtly during the actual Noh drama, which plays a key role in creating elusive artistic enchantment. We here describe evidence from two experiments regarding how attached shadows of the Noh masks influence the observers’ recognition of the emotional expressions. Methodology/Principal Findings In Experiment 1, neutral-faced Noh masks having the attached shadows of the happy/sad masks were recognized as bearing happy/sad expressions, respectively. This was true for all four types of masks each of which represented a character differing in sex and age, even though the original characteristics of the masks also greatly influenced the evaluation of emotions. Experiment 2 further revealed that frontal Noh mask images having shadows of upward/downward tilted masks were evaluated as sad/happy, respectively. This was consistent with outcomes from preceding studies using actually tilted Noh mask images. Conclusions/Significance Results from the two experiments concur that purely manipulating attached shadows of the different types of Noh masks significantly alters the emotion recognition. These findings go in line with the mysterious facial expressions observed in Western paintings, such as the elusive qualities of Mona Lisa’s smile. They also agree with the aesthetic principle of Japanese traditional art “yugen (profound grace and subtlety)”, which highly appreciates subtle emotional expressions in the darkness. PMID:23940748

  14. Discrimination of gender using facial image with expression change

    NASA Astrophysics Data System (ADS)

    Kuniyada, Jun; Fukuda, Takahiro; Terada, Kenji

    2005-12-01

    Through marketing research, managers of large department stores and small convenience stores obtain information such as the ratio of male to female visitors and their age groups, and use it to improve their management plans. However, this work is usually done manually and becomes a heavy burden for small stores. In this paper, the authors propose a method for discriminating men from women by extracting differences in facial expression change from color facial images. Many methods exist for automatic recognition of individuals from moving or still facial images, but gender is difficult to discriminate under the influence of hairstyle, clothing, and similar factors. We therefore propose a method that is unaffected by individual characteristics such as the size and position of facial parts, by focusing instead on how the expression changes. The method requires two facial images, one expressive and one expressionless. First, the facial surface region and the regions of facial parts such as the eyes, nose, and mouth are extracted from the facial image using hue and saturation information in the HSV color system together with emphasized edge information. Next, features are extracted by calculating the rate of change of each facial part caused by the expression change. Finally, the feature values of the input data are compared against a database and the gender is discriminated. Experiments on laughing and smiling expressions gave good results for gender discrimination.
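
    The change-rate feature described above can be illustrated with a short sketch that compares facial-part measurements between an expressionless and an expressive image. The segmentation step (HSV thresholding and edge emphasis) is not reproduced; the part names and pixel sizes are illustrative assumptions.

```python
# Sketch: change-rate features between an expressionless and an expressive face.
# Facial-part measurements are assumed to have been extracted already; the HSV
# segmentation and edge-emphasis steps are not reproduced here.
def change_rate_features(neutral_parts, expressive_parts):
    """Each argument maps a facial part to (width, height) in pixels.
    Returns the relative change of each dimension per part."""
    feats = {}
    for part, (w0, h0) in neutral_parts.items():
        w1, h1 = expressive_parts[part]
        feats[part] = ((w1 - w0) / w0, (h1 - h0) / h0)
    return feats

neutral = {"left_eye": (30, 12), "right_eye": (30, 12), "mouth": (55, 18)}
smiling = {"left_eye": (30, 9), "right_eye": (30, 9), "mouth": (70, 22)}
print(change_rate_features(neutral, smiling))
```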

  15. Expression-invariant three-dimensional face reconstruction from a single image by facial expression generic elastic models

    NASA Astrophysics Data System (ADS)

    Moeini, Ali; Faez, Karim; Moeini, Hossein

    2014-09-01

    An efficient method for expression-invariant three-dimensional (3-D) face reconstruction from a frontal face image with a variety of facial expressions (FE) using the FE generic elastic model (GEM) is proposed. Three generic models are employed for FE modeling in the generic elastic model (GEM) framework, which are combined based on the similarity distance around the lips. Exclusively, FE-GEM demonstrated that it is more precisely able to estimate a 3-D model of a frontal face, attaining a more robust and better quality 3-D face reconstruction under a variety of FEs compared to the original GEM approach. It is tested on an available 3-D face database and its accuracy and robustness are demonstrated compared to the GEM approach under a variety of FEs. Also, the FE-GEM method is tested on available two-dimensional face databases and a new synthesized pose is generated from gallery images for handling pose variations in face recognition.

  16. Facial expression of emotions in borderline personality disorder and depression.

    PubMed

    Renneberg, Babette; Heyn, Katrin; Gebhard, Rita; Bachmann, Silke

    2005-09-01

    Borderline personality disorder (BPD) is characterized by marked problems in interpersonal relationships and emotion regulation. The assumption of emotional hyper-reactivity in BPD is tested regarding the facial expression of emotions, an aspect highly relevant for communication processes and a central feature of emotion regulation. Facial expressions of emotions are examined in a group of 30 female inpatients with BPD, 27 women with major depression and 30 non-patient female controls. Participants were videotaped while watching two short movie sequences, inducing either positive or negative emotions. Frequency of emotional facial expressions and intensity of happiness expressions were examined, using the Emotional Facial Action Coding System (EMFACS-7, Friesen & Ekman, EMFACS-7: Emotional Facial Action Coding System, Version 7. Unpublished manual, 1984). Group differences were analyzed for the negative and the positive mood-induction procedure separately. Results indicate that BPD patients reacted similar to depressed patients with reduced facial expressiveness to both films. The highest emotional facial activity to both films and most intense happiness expressions were displayed by the non-clinical control group. Current findings contradict the assumption of a general hyper-reactivity to emotional stimuli in patients with BPD. PMID:15950175

  17. Automatic decoding of facial movements reveals deceptive pain expressions

    PubMed Central

    Bartlett, Marian Stewart; Littlewort, Gwen C.; Frank, Mark G.; Lee, Kang

    2014-01-01

    In highly social species such as humans, faces have evolved to convey rich information for social interaction, including expressions of emotions and pain [1–3]. Two motor pathways control facial movement [4–7]. A subcortical extrapyramidal motor system drives spontaneous facial expressions of felt emotions. A cortical pyramidal motor system controls voluntary facial expressions. The pyramidal system enables humans to simulate facial expressions of emotions not actually experienced. Their simulation is so successful that they can deceive most observers [8–11]. Machine vision may, however, be able to distinguish deceptive from genuine facial signals by identifying the subtle differences between pyramidally and extrapyramidally driven movements. Here we show that human observers could not discriminate real from faked expressions of pain better than chance, and after training, improved accuracy to a modest 55%. However, a computer vision system that automatically measures facial movements and performs pattern recognition on those movements attained 85% accuracy. The machine system’s superiority is attributable to its ability to differentiate the dynamics of genuine from faked expressions. Thus by revealing the dynamics of facial action through machine vision systems, our approach has the potential to elucidate behavioral fingerprints of neural control systems involved in emotional signaling. PMID:24656830

  18. Impaired overt facial mimicry in response to dynamic facial expressions in high-functioning autism spectrum disorders.

    PubMed

    Yoshimura, Sayaka; Sato, Wataru; Uono, Shota; Toichi, Motomi

    2015-05-01

    Previous electromyographic studies have reported that individuals with autism spectrum disorders (ASD) exhibited atypical patterns of facial muscle activity in response to facial expression stimuli. However, whether such activity is expressed in visible facial mimicry remains unknown. To investigate this issue, we videotaped facial responses in high-functioning individuals with ASD and controls to dynamic and static facial expressions of anger and happiness. Visual coding of facial muscle activity and the subjective impression ratings showed reduced congruent responses to dynamic expressions in the ASD group. Additionally, this decline was related to social dysfunction. These results suggest that impairment in overt facial mimicry in response to others' dynamic facial expressions may underlie difficulties in reciprocal social interaction among individuals with ASD. PMID:25374131

  19. Perception of temporal asymmetries in dynamic facial expressions

    PubMed Central

    Reinl, Maren; Bartels, Andreas

    2015-01-01

    In the current study we examined whether timeline-reversals and emotional direction of dynamic facial expressions affect subjective experience of human observers. We recorded natural movies of faces that increased or decreased their expressions of fear, and played them either in the natural frame order or reversed from last to first frame (reversed timeline). This led to four conditions of increasing or decreasing fear, either following the natural or reversed temporal trajectory of facial dynamics. This 2-by-2 factorial design controlled for visual low-level properties, static visual content, and motion energy across the different factors. It allowed us to examine perceptual consequences that would occur if the timeline trajectory of facial muscle movements during the increase of an emotion are not the exact mirror of the timeline during the decrease. It additionally allowed us to study perceptual differences between increasing and decreasing emotional expressions. Perception of these time-dependent asymmetries have not yet been quantified. We found that three emotional measures, emotional intensity, artificialness of facial movement, and convincingness or plausibility of emotion portrayal, were affected by timeline-reversals as well as by the emotional direction of the facial expressions. Our results imply that natural dynamic facial expressions contain temporal asymmetries, and show that deviations from the natural timeline lead to a reduction of perceived emotional intensity and convincingness, and to an increase of perceived artificialness of the dynamic facial expression. In addition, they show that decreasing facial expressions are judged as less plausible than increasing facial expressions. Our findings are of relevance for both, behavioral as well as neuroimaging studies, as processing and perception are influenced by temporal asymmetries. PMID:26300807

  20. How Facial Expressions of Emotion Affect Distance Perception.

    PubMed

    Kim, Nam-Gyoon; Son, Heejung

    2015-01-01

    Facial expressions of emotion are thought to convey expressers' behavioral intentions, thus priming observers' approach and avoidance tendencies appropriately. The present study examined whether detecting expressions of behavioral intent influences perceivers' estimation of the expresser's distance from them. Eighteen undergraduates (nine male and nine female) participated in the study. Six facial expressions were chosen on the basis of degree of threat-anger, hate (threatening expressions), shame, surprise (neutral expressions), pleasure, and joy (safe expressions). Each facial expression was presented on a tablet PC held by an assistant covered by a black drape who stood 1, 2, or 3 m away from participants. Participants performed a visual matching task to report the perceived distance. Results showed that facial expression influenced distance estimation, with faces exhibiting threatening or safe expressions judged closer than those showing neutral expressions. Females' judgments were more likely to be influenced; but these influences largely disappeared beyond the 2 m distance. These results suggest that facial expressions of emotion (particularly threatening or safe emotions) influence others' (especially females') distance estimations but only within close proximity. PMID:26635708

  1. Cognitive penetrability and emotion recognition in human facial expressions

    PubMed Central

    Marchi, Francesco

    2015-01-01

    Do our background beliefs, desires, and mental images influence our perceptual experience of the emotions of others? In this paper, we will address the possibility of cognitive penetration (CP) of perceptual experience in the domain of social cognition. In particular, we focus on emotion recognition based on the visual experience of facial expressions. After introducing the current debate on CP, we review examples of perceptual adaptation for facial expressions of emotion. This evidence supports the idea that facial expressions are perceptually processed as wholes. That is, the perceptual system integrates lower-level facial features, such as eyebrow orientation, mouth angle etc., into facial compounds. We then present additional experimental evidence showing that in some cases, emotion recognition on the basis of facial expression is sensitive to and modified by the background knowledge of the subject. We argue that such sensitivity is best explained as a difference in the visual experience of the facial expression, not just as a modification of the judgment based on this experience. The difference in experience is characterized as the result of the interference of background knowledge with the perceptual integration process for faces. Thus, according to the best explanation, we have to accept CP in some cases of emotion recognition. Finally, we discuss a recently proposed mechanism for CP in the face-based recognition of emotion. PMID:26150796

  2. Face Processing in Children with Autism Spectrum Disorder: Independent or Interactive Processing of Facial Identity and Facial Expression?

    ERIC Educational Resources Information Center

    Krebs, Julia F.; Biswas, Ajanta; Pascalis, Olivier; Kamp-Becker, Inge; Remschmidt, Helmuth; Schwarzer, Gudrun

    2011-01-01

    The current study investigated if deficits in processing emotional expression affect facial identity processing and vice versa in children with autism spectrum disorder. Children with autism and IQ and age matched typically developing children classified faces either by emotional expression, thereby ignoring facial identity or by facial identity…

  3. Postauricular and eyeblink startle responses to facial expressions.

    PubMed

    Hess, Ursula; Sabourin, Gabrielle; Kleck, Robert E

    2007-05-01

    Emotional facial expressions have affective significance. Smiles, for example, are perceived as positive and responded to with increased happiness, whereas angry expressions are perceived as negative and threatening. Yet, these perceptions are modulated in part by facial morphological cues related to the sex of the expresser. The present research assessed both eyeblink startle and the postauricular reflex during happy and angry expressions by men and women. For this, 14 male and 16 female undergraduates saw happy, neutral, and angry facial expressions as well as positive and negative pictures. The postauricular reflex was potentiated during happy expressions and inhibited during anger expressions; however, as expected, this pattern was more clearly found for female expressers. Conversely, the expected pattern of eyeblink startle potentiation during angry faces and inhibition during happy faces was found only for male expressers. PMID:17371491

  4. Macaques can predict social outcomes from facial expressions.

    PubMed

    Waller, Bridget M; Whitehouse, Jamie; Micheletta, Jérôme

    2016-09-01

    There is widespread acceptance that facial expressions are useful in social interactions, but empirical demonstration of their adaptive function has remained elusive. Here, we investigated whether macaques can use the facial expressions of others to predict the future outcomes of social interaction. Crested macaques (Macaca nigra) were shown an approach between two unknown individuals on a touchscreen and were required to choose between one of two potential social outcomes. The facial expressions of the actors were manipulated in the last frame of the video. One subject reached the experimental stage and accurately predicted different social outcomes depending on which facial expressions the actors displayed. The bared-teeth display (homologue of the human smile) was most strongly associated with predicted friendly outcomes. Contrary to our predictions, screams and threat faces were not associated more with conflict outcomes. Overall, therefore, the presence of any facial expression (compared to neutral) caused the subject to choose friendly outcomes more than negative outcomes. Facial expression in general, therefore, indicated a reduced likelihood of social conflict. The findings dispute traditional theories that view expressions only as indicators of present emotion and instead suggest that expressions form part of complex social interactions where individuals think beyond the present. PMID:27155662

  5. Facial expression recognition using kernel canonical correlation analysis (KCCA).

    PubMed

    Zheng, Wenming; Zhou, Xiaoyan; Zou, Cairong; Zhao, Li

    2006-01-01

    In this correspondence, we address the facial expression recognition problem using kernel canonical correlation analysis (KCCA). Following the method proposed by Lyons et al. and Zhang et al., we manually locate 34 landmark points from each facial image and then convert these geometric points into a labeled graph (LG) vector using the Gabor wavelet transformation method to represent the facial features. On the other hand, for each training facial image, the semantic ratings describing the basic expressions are combined into a six-dimensional semantic expression vector. Learning the correlation between the LG vector and the semantic expression vector is performed by KCCA. According to this correlation, we estimate the associated semantic expression vector of a given test image and then perform the expression classification according to this estimated semantic expression vector. Moreover, we also propose an improved KCCA algorithm to tackle the singularity problem of the Gram matrix. The experimental results on the Japanese female facial expression database and the Ekman's "Pictures of Facial Affect" database illustrate the effectiveness of the proposed method. PMID:16526490
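
    A simplified sketch of the correlation-learning step is given below, using linear CCA from scikit-learn in place of the paper's kernelized version, and random toy features standing in for the Gabor-jet labeled-graph (LG) vectors. The label set, dimensions, and noisy one-hot "semantic ratings" are assumptions made only for illustration.

```python
# Sketch: linear CCA (instead of the paper's kernel CCA) between feature vectors
# and six-dimensional semantic expression vectors. Random toy data stands in for
# the Gabor-jet labeled-graph (LG) vectors and the human semantic ratings.
import numpy as np
from sklearn.cross_decomposition import CCA

EXPRESSIONS = ["happy", "sad", "surprise", "anger", "disgust", "fear"]

rng = np.random.default_rng(4)
X = rng.normal(size=(120, 40))                       # toy LG feature vectors
labels = rng.integers(0, 6, size=120)
Y = np.eye(6)[labels] + 0.1 * np.abs(rng.normal(size=(120, 6)))  # toy ratings

cca = CCA(n_components=5).fit(X, Y)                  # learn the correlation

def predict_expression(x):
    """Estimate the semantic expression vector of a test sample and return
    the label of its dominant component."""
    y_hat = cca.predict(x.reshape(1, -1))[0]
    return EXPRESSIONS[int(np.argmax(y_hat))]

print(predict_expression(X[0]))
```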

  6. Expression intensity, gender and facial emotion recognition: Women recognize only subtle facial emotions better than men.

    PubMed

    Hoffmann, Holger; Kessler, Henrik; Eppel, Tobias; Rukavina, Stefanie; Traue, Harald C

    2010-11-01

    Two experiments were conducted in order to investigate the effect of expression intensity on gender differences in the recognition of facial emotions. The first experiment compared recognition accuracy between female and male participants when emotional faces were shown with full-blown (100% emotional content) or subtle expressiveness (50%). In a second experiment more finely grained analyses were applied in order to measure recognition accuracy as a function of expression intensity (40%-100%). The results show that although women were more accurate than men in recognizing subtle facial displays of emotion, there was no difference between male and female participants when recognizing highly expressive stimuli. PMID:20728864

  7. Children's Representations of Facial Expression and Identity: Identity-Contingent Expression Aftereffects

    ERIC Educational Resources Information Center

    Vida, Mark D.; Mondloch, Catherine J.

    2009-01-01

    This investigation used adaptation aftereffects to examine developmental changes in the perception of facial expressions. Previous studies have shown that adults' perceptions of ambiguous facial expressions are biased following adaptation to intense expressions. These expression aftereffects are strong when the adapting and probe expressions share…

  8. Automatic Facial Expression Recognition and Operator Functional State

    NASA Technical Reports Server (NTRS)

    Blanson, Nina

    2012-01-01

    The prevalence of human error in safety-critical occupations remains a major challenge to mission success despite increasing automation in control processes. Although various methods have been proposed to prevent incidences of human error, none of these have been developed to employ the detection and regulation of Operator Functional State (OFS), or the optimal condition of the operator while performing a task, in work environments due to drawbacks such as obtrusiveness and impracticality. A video-based system with the ability to infer an individual's emotional state from facial feature patterning mitigates some of the problems associated with other methods of detecting OFS, like obtrusiveness and impracticality in integration with the mission environment. This paper explores the utility of facial expression recognition as a technology for inferring OFS by first expounding on the intricacies of OFS and the scientific background behind emotion and its relationship with an individual's state. Then, descriptions of the feedback loop and the emotion protocols proposed for the facial recognition program are explained. A basic version of the facial expression recognition program uses Haar classifiers and OpenCV libraries to automatically locate key facial landmarks during a live video stream. Various methods of creating facial expression recognition software are reviewed to guide future extensions of the program. The paper concludes with an examination of the steps necessary in the research of emotion and recommendations for the creation of an automatic facial expression recognition program for use in real-time, safety-critical missions.
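
    A basic version of the detection step described above can be sketched with OpenCV's bundled Haar cascades, as below. This is only a face and eye locator on a live stream, not the full OFS-inference program; the cascade files are OpenCV's standard ones and webcam index 0 is an assumption.

```python
# Sketch: locating faces (and eyes as coarse landmarks) in a live video stream
# with OpenCV Haar cascades, in the spirit of the basic program described above.
# Uses the cascade XML files bundled with OpenCV; webcam index 0 is assumed.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        roi = gray[y:y + h, x:x + w]
        for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(roi):
            cv2.rectangle(frame, (x + ex, y + ey),
                          (x + ex + ew, y + ey + eh), (255, 0, 0), 1)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```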

  9. Automatic Facial Expression Recognition and Operator Functional State

    NASA Technical Reports Server (NTRS)

    Blanson, Nina

    2011-01-01

    The prevalence of human error in safety-critical occupations remains a major challenge to mission success despite increasing automation in control processes. Although various methods have been proposed to prevent incidences of human error, none of these have been developed to employ the detection and regulation of Operator Functional State (OFS), or the optimal condition of the operator while performing a task, in work environments due to drawbacks such as obtrusiveness and impracticality. A video-based system with the ability to infer an individual's emotional state from facial feature patterning mitigates some of the problems associated with other methods of detecting OFS, like obtrusiveness and impracticality in integration with the mission environment. This paper explores the utility of facial expression recognition as a technology for inferring OFS by first expounding on the intricacies of OFS and the scientific background behind emotion and its relationship with an individual's state. Then, descriptions of the feedback loop and the emotion protocols proposed for the facial recognition program are explained. A basic version of the facial expression recognition program uses Haar classifiers and OpenCV libraries to automatically locate key facial landmarks during a live video stream. Various methods of creating facial expression recognition software are reviewed to guide future extensions of the program. The paper concludes with an examination of the steps necessary in the research of emotion and recommendations for the creation of an automatic facial expression recognition program for use in real-time, safety-critical missions.

  10. How Facial Expressions of Emotion Affect Distance Perception

    PubMed Central

    Kim, Nam-Gyoon; Son, Heejung

    2015-01-01

    Facial expressions of emotion are thought to convey expressers’ behavioral intentions, thus priming observers’ approach and avoidance tendencies appropriately. The present study examined whether detecting expressions of behavioral intent influences perceivers’ estimation of the expresser’s distance from them. Eighteen undergraduates (nine male and nine female) participated in the study. Six facial expressions were chosen on the basis of degree of threat—anger, hate (threatening expressions), shame, surprise (neutral expressions), pleasure, and joy (safe expressions). Each facial expression was presented on a tablet PC held by an assistant covered by a black drape who stood 1, 2, or 3 m away from participants. Participants performed a visual matching task to report the perceived distance. Results showed that facial expression influenced distance estimation, with faces exhibiting threatening or safe expressions judged closer than those showing neutral expressions. Females’ judgments were more likely to be influenced; but these influences largely disappeared beyond the 2 m distance. These results suggest that facial expressions of emotion (particularly threatening or safe emotions) influence others’ (especially females’) distance estimations but only within close proximity. PMID:26635708

  11. Identification of emotional facial expressions following recovery from depression.

    PubMed

    LeMoult, Joelle; Joormann, Jutta; Sherdell, Lindsey; Wright, Yamanda; Gotlib, Ian H

    2009-11-01

    This study investigated the identification of facial expressions of emotion in currently nondepressed participants who had a history of recurrent depressive episodes (recurrent major depression; RMD) and never-depressed control participants (CTL). Following a negative mood induction, participants were presented with faces whose expressions slowly changed from neutral to full intensity. Identification of facial expressions was measured by the intensity of the expression at which participants could accurately identify whether faces expressed happiness, sadness, or anger. There were no group differences in the identification of sad or angry expressions. Compared with CTL participants, however, RMD participants required significantly greater emotional intensity in the faces to correctly identify happy expressions. These results indicate that biases in the processing of emotional facial expressions are evident even after individuals have recovered from a depressive episode. PMID:19899852

  12. Top-down guidance in visual search for facial expressions.

    PubMed

    Hahn, Sowon; Gronlund, Scott D

    2007-02-01

    Using a visual search paradigm, we investigated how a top-down goal modified attentional bias for threatening facial expressions. In two experiments, participants searched for a facial expression either based on stimulus characteristics or a top-down goal. In Experiment 1 participants searched for a discrepant facial expression in a homogenous crowd of faces. Consistent with previous research, we obtained a shallower response time (RT) slope when the target face was angry than when it was happy. In Experiment 2, participants searched for a specific type of facial expression (allowing a top-down goal). When the display included a target, we found a shallower RT slope for the angry than for the happy face search. However, when an angry or happy face was present in the display in opposition to the task goal, we obtained equivalent RT slopes, suggesting that the mere presence of an angry face in opposition to the task goal did not support the well-known angry face superiority effect. Furthermore, RT distribution analyses supported the special status of an angry face only when it was combined with the top-down goal. On the basis of these results, we suggest that a threatening facial expression may guide attention as a high-priority stimulus in the absence of a specific goal; however, in the presence of a specific goal, the efficiency of facial expression search is dependent on the combined influence of a top-down goal and the stimulus characteristics. PMID:17546747

  13. Facial expressions of singers influence perceived pitch relations.

    PubMed

    Thompson, William Forde; Russo, Frank A; Livingstone, Steven R

    2010-06-01

    In four experiments, we examined whether facial expressions used while singing carry musical information that can be "read" by viewers. In Experiment 1, participants saw silent video recordings of sung melodic intervals and judged the size of the interval they imagined the performers to be singing. Participants discriminated interval sizes on the basis of facial expression and discriminated large from small intervals when only head movements were visible. Experiments 2 and 3 confirmed that facial expressions influenced judgments even when the auditory signal was available. When matched with the facial expressions used to perform a large interval, audio recordings of sung intervals were judged as being larger than when matched with the facial expressions used to perform a small interval. The effect was not diminished when a secondary task was introduced, suggesting that audio-visual integration is not dependent on attention. Experiment 4 confirmed that the secondary task reduced participants' ability to make judgments that require conscious attention. The results provide the first evidence that facial expressions influence perceived pitch relations. PMID:20551352

  14. Do Dynamic Compared to Static Facial Expressions of Happiness and Anger Reveal Enhanced Facial Mimicry?

    PubMed Central

    Rymarczyk, Krystyna; Żurawski, Łukasz; Jankowiak-Siuda, Kamila; Szatkowska, Iwona

    2016-01-01

    Facial mimicry is the spontaneous response to others’ facial expressions by mirroring or matching the interaction partner. Recent evidence has suggested that mimicry may not only be an automatic reaction but could depend on many factors, including social context, the type of task in which the participant is engaged, or stimulus properties (dynamic vs static presentation). In the present study, we investigated the impact of dynamic facial expression and sex differences on facial mimicry and judgment of emotional intensity. Electromyographic activity was recorded from the corrugator supercilii, zygomaticus major, and orbicularis oculi muscles during passive observation of static and dynamic images of happiness and anger. The ratings of the emotional intensity of facial expressions were also analysed. As predicted, dynamic expressions were rated as more intense than static ones. Compared to static images, dynamic displays of happiness also evoked stronger activity in the zygomaticus major and orbicularis oculi, suggesting that subjects experienced positive emotion. No muscles showed mimicry activity in response to angry faces. Moreover, we found that women exhibited greater zygomaticus major muscle activity in response to dynamic happiness stimuli than to static stimuli. Our data support the hypothesis that people mimic positive emotions and confirm the importance of dynamic stimuli in some emotional processing. PMID:27390867

  15. Do Dynamic Compared to Static Facial Expressions of Happiness and Anger Reveal Enhanced Facial Mimicry?

    PubMed

    Rymarczyk, Krystyna; Żurawski, Łukasz; Jankowiak-Siuda, Kamila; Szatkowska, Iwona

    2016-01-01

    Facial mimicry is the spontaneous response to others' facial expressions by mirroring or matching the interaction partner. Recent evidence has suggested that mimicry may not only be an automatic reaction but could depend on many factors, including social context, the type of task in which the participant is engaged, or stimulus properties (dynamic vs static presentation). In the present study, we investigated the impact of dynamic facial expression and sex differences on facial mimicry and judgment of emotional intensity. Electromyographic activity was recorded from the corrugator supercilii, zygomaticus major, and orbicularis oculi muscles during passive observation of static and dynamic images of happiness and anger. The ratings of the emotional intensity of facial expressions were also analysed. As predicted, dynamic expressions were rated as more intense than static ones. Compared to static images, dynamic displays of happiness also evoked stronger activity in the zygomaticus major and orbicularis oculi, suggesting that subjects experienced positive emotion. No muscles showed mimicry activity in response to angry faces. Moreover, we found that women exhibited greater zygomaticus major muscle activity in response to dynamic happiness stimuli than to static stimuli. Our data support the hypothesis that people mimic positive emotions and confirm the importance of dynamic stimuli in some emotional processing. PMID:27390867

  16. Rapid Facial Reactions to Emotional Facial Expressions in Typically Developing Children and Children with Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Beall, Paula M.; Moody, Eric J.; McIntosh, Daniel N.; Hepburn, Susan L.; Reed, Catherine L.

    2008-01-01

    Typical adults mimic facial expressions within 1000ms, but adults with autism spectrum disorder (ASD) do not. These rapid facial reactions (RFRs) are associated with the development of social-emotional abilities. Such interpersonal matching may be caused by motor mirroring or emotional responses. Using facial electromyography (EMG), this study…

  17. Improved categorization of subtle facial expressions modulates Late Positive Potential.

    PubMed

    Pollux, P M J

    2016-05-13

    Biases in facial expression recognition can be reduced successfully using feedback-based training tasks. Here we investigate with event-related potentials (ERPs) at which stages of stimulus processing emotion-related modulations are influenced by training. Categorization of subtle facial expressions (morphed from neutral to happy, sad or surprise) was trained with correct-response feedback on each trial. ERPs were recorded before and after training while participants categorized facial expressions without response feedback. Behavioral data demonstrated large improvements in categorization of subtle facial expression which transferred to new face models not used during training. ERPs were modulated by training from 450 ms post-stimulus onward, characterized by a more gradual increase in P3b/Late Positive Potential (LPP) amplitude as expression intensity increased. This effect was indistinguishable for faces used for training and for new faces. It was proposed that training elicited a more fine-grained analysis of facial information for all subtle expressions, resulting in improved recognition and enhanced emotional motivational salience (reflected in P3b/LPP amplitude) of faces previously categorized as expressing no emotion. PMID:26912280

  18. Detecting deception in facial expressions of pain: accuracy and training.

    PubMed

    Hill, Marilyn L; Craig, Kenneth D

    2004-01-01

    Clinicians tend to assign greater weight to nonverbal expression than to patient self-report when judging the location and severity of pain. However, patients can be successful at dissimulating facial expressions of pain, as posed expressions resemble genuine expressions in the frequency and intensity of pain-related facial actions. The present research examined individual differences in the ability to discriminate genuine and deceptive facial pain displays and whether different models of training in cues to deception would improve detection skills. Judges (60 male, 60 female) were randomly assigned to 1 of 4 experimental groups: 1) control; 2) corrective feedback; 3) deception training; and 4) deception training plus feedback. Judges were shown 4 videotaped facial expressions for each chronic pain patient: neutral expressions, genuine pain instigated by physiotherapy range of motion assessment, masked pain, and faked pain. For each condition, the participants rated pain intensity and unpleasantness, decided which category each of the 4 video clips represented, and described cues they used to arrive at decisions. There were significant individual differences in accuracy, with females more accurate than males, but accuracy was unrelated to past pain experience, empathy, or the number or type of facial cues used. Immediate corrective feedback led to significant improvements in participants' detection accuracy, whereas there was no support for the use of an information-based training program. PMID:15502685

  19. Impaired holistic coding of facial expression and facial identity in congenital prosopagnosia

    PubMed Central

    Palermo, Romina; Willis, Megan L.; Rivolta, Davide; McKone, Elinor; Wilson, C. Ellie; Calder, Andrew J.

    2011-01-01

    We test 12 individuals with congenital prosopagnosia (CP), who replicate a common pattern of showing severe difficulty in recognising facial identity in conjunction with normal recognition of facial expressions (both basic and ‘social’). Strength of holistic processing was examined using standard expression composite and identity composite tasks. Compared to age- and sex-matched controls, group analyses demonstrated that CPs showed weaker holistic processing, for both expression and identity information. Implications are (a) normal expression recognition in CP can derive from compensatory strategies (e.g., over-reliance on non-holistic cues to expression); (b) the split between processing of expression and identity information may take place after a common stage of holistic processing; and (c) contrary to a recent claim, holistic processing of identity is functionally involved in face identification ability. PMID:21333662

  20. Learning Multiscale Active Facial Patches for Expression Analysis.

    PubMed

    Zhong, Lin; Liu, Qingshan; Yang, Peng; Huang, Junzhou; Metaxas, Dimitris N

    2015-08-01

    In this paper, we present a new idea for analyzing facial expressions by exploring the common and specific information shared among different expressions. Inspired by the observation that only a few facial parts are active in expression disclosure (e.g., around the mouth and eyes), we try to discover the common and specific patches that are important for discriminating all the expressions and a particular expression, respectively. A two-stage multitask sparse learning (MTSL) framework is proposed to efficiently locate those discriminative patches. In the first stage of MTSL, expression recognition tasks, each aiming to find the dominant patches for one expression, are combined to locate the common patches. In the second stage, two related tasks, facial expression recognition and face verification, are coupled to learn the specific facial patches for each individual expression. The two-stage patch learning is performed on patches sampled by a multiscale strategy. Extensive experiments validate the existence and significance of common and specific patches. Utilizing these learned patches, we achieve superior expression recognition performance compared to state-of-the-art methods. PMID:25291808
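
    The "common patch" idea above can be illustrated with a group-sparse multitask regression. The sketch below is a minimal approximation using scikit-learn's MultiTaskLasso (an L2,1 penalty shared across expression tasks); it is not the authors' two-stage MTSL framework, and the patch descriptors, labels, and regularization strength are placeholder assumptions.

      # Minimal sketch of joint ("common") patch selection across expression
      # tasks, approximated with an L2,1 multitask penalty; random placeholder
      # data stand in for real per-patch appearance descriptors.
      import numpy as np
      from sklearn.linear_model import MultiTaskLasso

      rng = np.random.default_rng(0)
      n_faces, n_patches, n_expressions = 200, 64, 6

      # One descriptor value per facial patch (e.g., pooled texture energy).
      X = rng.normal(size=(n_faces, n_patches))
      # One-vs-rest targets: column k is 1 when the face shows expression k.
      labels = rng.integers(0, n_expressions, size=n_faces)
      Y = np.eye(n_expressions)[labels]

      model = MultiTaskLasso(alpha=0.05).fit(X, Y)  # coef_: (tasks, patches)

      # The L2,1 penalty zeroes whole columns of coef_: a patch is either used
      # by all expression tasks or by none, mimicking "common" active patches.
      patch_norms = np.linalg.norm(model.coef_, axis=0)
      common_patches = np.flatnonzero(patch_norms > 1e-8)
      print("patches shared across expression tasks:", common_patches)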

  1. Facial expression recognition in rhesus monkeys, Macaca mulatta

    PubMed Central

    Parr, Lisa A.; Heintz, Matthew

    2010-01-01

    The ability to recognize and accurately interpret facial expressions is critically important for nonhuman primates that rely on these nonverbal signals for social communication. Despite this, little is known about how nonhuman primates, particularly monkeys, discriminate between facial expressions. In the present study, seven rhesus monkeys were required to discriminate four categories of conspecific facial expressions using a matching-to-sample task. In experiment 1, the matching pair showed identical photographs of facial expressions, paired with every other expression type as the nonmatch. The identity of the nonmatching stimulus monkey differed from the one in the sample. Subjects performed above chance on session 1, with no difference in performance across the four expression types. In experiment 2, the identity of all three monkeys differed in each trial, and a neutral portrait was also included as the nonmatching stimulus. Monkeys discriminated expressions across individual identity when the non-match was a neutral stimulus, but they had difficulty when the nonmatch was another expression type. We analysed the degree to which specific feature redundancy could account for these error patterns using a multidimensional scaling analysis which plotted the perceived dissimilarity between expression dyads along a two-dimensional axis. One axis appeared to represent mouth shape, stretched open versus funnelled, while the other appeared to represent a combination of lip retraction and mouth opening. These features alone, however, could not account for overall performance and suggest that monkeys do not rely solely on distinctive features to discriminate among different expressions. PMID:20228886

  2. Recognition, Expression, and Understanding Facial Expressions of Emotion in Adolescents with Nonverbal and General Learning Disabilities

    ERIC Educational Resources Information Center

    Bloom, Elana; Heath, Nancy

    2010-01-01

    Children with nonverbal learning disabilities (NVLD) have been found to be worse at recognizing facial expressions than children with verbal learning disabilities (LD) and without LD. However, little research has been done with adolescents. In addition, expressing and understanding facial expressions is yet to be studied among adolescents with LD…

  3. Visualization and analysis of 3D gene expression patterns in zebrafish using web services

    NASA Astrophysics Data System (ADS)

    Potikanond, D.; Verbeek, F. J.

    2012-01-01

    The analysis of gene expression patterns plays an important role in developmental biology and molecular genetics. Visualizing both quantitative and spatio-temporal aspects of gene expression patterns together with referenced anatomical structures of a model organism in 3D can help identify how a group of genes is expressed at a certain location at a particular developmental stage of an organism. In this paper, we present an approach to providing an online visualization of gene expression data in zebrafish (Danio rerio) within a 3D reconstruction model of zebrafish at different developmental stages. We developed web services that provide programmable access to the 3D reconstruction data and spatio-temporal gene expression data maintained in our local repositories. To demonstrate this work, we developed a web application that uses these web services to retrieve data from our local information systems. The web application also retrieves relevant analyses of microarray gene expression data from an external community resource, the ArrayExpress Atlas. All the relevant gene expression pattern data are subsequently integrated with the reconstruction data of the zebrafish atlas using ontology-based mapping. The resulting visualization provides quantitative and spatial information on patterns of gene expression in a 3D graphical representation of the zebrafish atlas at a certain developmental stage. To deliver the visualization to the user, we developed a Java-based 3D viewer client that can be integrated in a web interface, allowing the user to visualize the integrated information over the Internet.
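
    The client side of such a service-oriented design can be sketched briefly. The snippet below fetches expression records and atlas geometry from two web services and joins them on a shared anatomy-ontology term; the endpoint URLs and JSON field names are hypothetical placeholders rather than the actual zebrafish services described in the paper.

      # Hedged sketch: retrieve spatial gene-expression data and 3D-atlas
      # structures from (hypothetical) web services, then map expression onto
      # atlas structures via shared ontology identifiers.
      import requests

      EXPRESSION_API = "https://example.org/zebrafish/expression"  # hypothetical
      ATLAS_API = "https://example.org/zebrafish/atlas"            # hypothetical

      def fetch_json(url, **params):
          response = requests.get(url, params=params, timeout=30)
          response.raise_for_status()
          return response.json()

      def expression_on_atlas(gene, stage):
          """Join expression measurements to atlas structures by ontology ID."""
          expression = fetch_json(EXPRESSION_API, gene=gene, stage=stage)
          atlas = fetch_json(ATLAS_API, stage=stage)
          structures = {s["ontology_id"]: s for s in atlas["structures"]}
          merged = []
          for record in expression["patterns"]:
              structure = structures.get(record["ontology_id"])
              if structure is not None:  # keep terms present in the 3D atlas
                  merged.append({"structure": structure["name"],
                                 "mesh_url": structure["mesh_url"],
                                 "level": record["expression_level"]})
          return merged

      # The merged records could then be passed to a 3D viewer for rendering.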

  4. Compatibility between tones, head movements, and facial expressions.

    PubMed

    Horstmann, Gernot; Ansorge, Ulrich

    2011-08-01

    The study tests the hypothesis of an embodied associative triangle among relative tone pitch (i.e., high or low tones), vertical movement, and facial emotion. In particular, it is tested whether relative pitch automatically activates facial expressions of happiness and anger as well as vertical head movements. Results show robust congruency effects: happiness expressions and upward head tilts are imitated faster when paired with high rather than low tones, while anger expressions and downward head tilts are imitated faster when paired with low rather than high tones. The results add to the growing evidence favoring an embodiment account that emphasizes multimodal representations as the basis of cognition, emotion, and action. PMID:21604874

  5. Three-year-olds' rapid facial electromyographic responses to emotional facial expressions and body postures.

    PubMed

    Geangu, Elena; Quadrelli, Ermanno; Conte, Stefania; Croci, Emanuela; Turati, Chiara

    2016-04-01

    Rapid facial reactions (RFRs) to observed emotional expressions are proposed to be involved in a wide array of socioemotional skills, from empathy to social communication. Two of the most persuasive theoretical accounts propose RFRs to rely either on motor resonance mechanisms or on more complex mechanisms involving affective processes. Previous studies demonstrated that presentation of facial and bodily expressions can generate rapid changes in adult and school-age children's muscle activity. However, to date there is little to no evidence to suggest the existence of emotional RFRs from infancy to preschool age. To investigate whether RFRs are driven by motor mimicry or could also be a result of emotional appraisal processes, we recorded facial electromyographic (EMG) activation from the zygomaticus major and frontalis medialis muscles to presentation of static facial and bodily expressions of emotions (i.e., happiness, anger, fear, and neutral) in 3-year-old children. Results showed no specific EMG activation in response to bodily emotion expressions. However, observing others' happy faces led to increased activation of the zygomaticus major and decreased activation of the frontalis medialis, whereas observing others' angry faces elicited the opposite pattern of activation. This study suggests that RFRs are the result of complex mechanisms in which both affective processes and motor resonance may play an important role. PMID:26687335

  6. Training Facial Expression Production in Children on the Autism Spectrum

    ERIC Educational Resources Information Center

    Gordon, Iris; Pierce, Matthew D.; Bartlett, Marian S.; Tanaka, James W.

    2014-01-01

    Children with autism spectrum disorder (ASD) show deficits in their ability to produce facial expressions. In this study, a group of children with ASD and IQ-matched, typically developing (TD) children were trained to produce "happy" and "angry" expressions with the FaceMaze computer game. FaceMaze uses an automated computer…

  7. Comparison of emotion recognition from facial expression and music.

    PubMed

    Gaspar, Tina; Labor, Marina; Jurić, Iva; Dumancić, Dijana; Ilakovac, Vesna; Heffer, Marija

    2011-01-01

    The recognition of basic emotions in everyday communication involves the interpretation of different visual and auditory clues. The ability to recognize emotions is not clearly determined, as their presentation is usually very short (micro expressions), whereas the recognition itself does not have to be a conscious process. We assumed that recognition from facial expressions is selected over the recognition of emotions communicated through music. In order to compare the success rate in recognizing emotions presented as facial expressions or in classical music works, we conducted a survey that included 90 elementary school and 87 high school students from Osijek (Croatia). The participants had to match 8 photographs of different emotions expressed on the face and 8 pieces of classical music with 8 offered emotions. The recognition of emotions expressed through classical music pieces was significantly less successful than the recognition of emotional facial expressions. The high school students were significantly better at recognizing facial emotions than the elementary school students, whereas girls were better than boys. The success rate in recognizing emotions from music pieces was associated with higher grades in mathematics. Basic emotions are far better recognized if presented on human faces than in music, possibly because the understanding of facial emotions is one of the oldest communication skills in human society. The female advantage in emotion recognition was selected for due to the necessity of their communication with newborns during early development. Proficiency in recognizing the emotional content of music and mathematical skills probably share some general cognitive skills, such as attention, memory and motivation. Music pieces were processed differently in the brain than facial expressions and, consequently, were probably evaluated differently as relevant emotional clues. PMID:21648329

  8. In vivo biomarker expression patterns are preserved in 3D cultures of Prostate Cancer

    SciTech Connect

    Windus, Louisa C.E.; Kiss, Debra L.; Glover, Tristan; Avery, Vicky M.

    2012-11-15

    Here we report that Prostate Cancer (PCa) cell-lines DU145, PC3, LNCaP and RWPE-1 grown in 3D matrices, in contrast to conventional 2D monolayers, display distinct differences in cell morphology, proliferation and expression of important biomarker proteins associated with cancer progression. Consistent with in vivo growth rates, in 3D cultures all PCa cell-lines were found to proliferate at significantly lower rates in comparison to their 2D counterparts. Moreover, when grown in a 3D matrix, metastatic PC3 cell-lines were found to mimic more precisely the protein expression patterns of metastatic tumour formation found in vivo. In comparison to the prostate epithelial cell-line RWPE-1, metastatic PC3 cell-lines exhibited a down-regulation of E-cadherin and α6 integrin expression and an up-regulation of N-cadherin, Vimentin and β1 integrin expression, and re-expressed non-transcriptionally active AR. In comparison to the non-invasive LNCaP cell-lines, PC3 cells were found to have an up-regulation of chemokine receptor CXCR4, consistent with a metastatic phenotype. In 2D cultures, there was little distinction in protein expression between metastatic, non-invasive and epithelial cells. These results suggest that 3D cultures are more representative of in vivo morphology and may serve as a more biologically relevant model in the drug discovery pipeline. Highlights: • We developed and optimised 3D culturing techniques for Prostate Cancer cell-lines. • We investigated biomarker expression in 2D versus 3D culture techniques. • Metastatic PC3 cells re-expressed non-transcriptionally active androgen receptor. • Metastatic PCa cell lines retain in vivo-like antigenic profiles in 3D cultures.

  9. Development of a System for Automatic Facial Expression Analysis

    NASA Astrophysics Data System (ADS)

    Diago, Luis A.; Kitaoka, Tetsuko; Hagiwara, Ichiro

    Automatic recognition of facial expressions can be an important component of natural human-machine interaction. Although many samples are desirable for estimating a person's feelings (e.g., likeness) about a machine interface more accurately, in real-world situations only a small number of samples can be obtained, because of the high cost of collecting emotional responses from the observed person. This paper proposes a system that solves this problem while accommodating individual differences. A new method is developed for facial expression classification based on the combination of Holographic Neural Networks (HNN) and Type-2 Fuzzy Logic. For the recognition of emotions induced by facial expressions, the proposed method achieved better generalization performance than earlier HNN and Support Vector Machine (SVM) classifiers, while requiring less learning time than the SVM classifiers.

  10. Affective Simon effects using facial expressions as affective stimuli.

    PubMed

    De Houwer, J; Hermans, D; Eelen, P

    1998-01-01

    Two experiments are reported in which facial expressions were presented and participants were asked to respond with the word POSITIVE or NEGATIVE on the basis of a relevant feature of the facial stimuli while ignoring the valence of the expression. Results showed that reaction times were influenced by the match between the valence of the facial expression and the valence of the correct response when the identity of the presented person had to be determined in order to select the correct response, but not when the gender of the presented person was relevant. The present experiments illustrate the flexibility of the affective Simon paradigm and provide a further demonstration of the generalizability of the affective Simon effect. PMID:9677856

  11. Warsaw set of emotional facial expression pictures: a validation study of facial display photographs

    PubMed Central

    Olszanowski, Michal; Pochwatko, Grzegorz; Kuklinski, Krzysztof; Scibor-Rylski, Michal; Lewinski, Peter; Ohme, Rafal K.

    2015-01-01

    Emotional facial expressions play a critical role in theories of emotion and figure prominently in research on almost every aspect of emotion. This article provides a background for a new database of basic emotional expressions. The goal in creating this set was to provide high quality photographs of genuine facial expressions. Thus, after proper training, participants were inclined to express “felt” emotions. The novel approach taken in this study was also used to establish whether a given expression was perceived as intended by untrained judges. The judgment task for perceivers was designed to be sensitive to subtle changes in meaning caused by the way an emotional display was evoked and expressed. Consequently, this allowed us to measure the purity and intensity of emotional displays, which are parameters that validation methods used by other researchers do not capture. The final set comprises those pictures that received the highest recognition marks (e.g., accuracy with intended display) from independent judges, totaling 210 high quality photographs of 30 individuals. Descriptions of the accuracy, intensity, and purity of displayed emotion, as well as FACS AU codes, are provided for each picture. Given the unique methodology applied to gathering and validating this set of pictures, it may be a useful tool for research using face stimuli. The Warsaw Set of Emotional Facial Expression Pictures (WSEFEP) is freely accessible to the scientific community for non-commercial use by request at http://www.emotional-face.org. PMID:25601846

  12. Training facial expression production in children on the autism spectrum.

    PubMed

    Gordon, Iris; Pierce, Matthew D; Bartlett, Marian S; Tanaka, James W

    2014-10-01

    Children with autism spectrum disorder (ASD) show deficits in their ability to produce facial expressions. In this study, a group of children with ASD and IQ-matched, typically developing (TD) children were trained to produce "happy" and "angry" expressions with the FaceMaze computer game. FaceMaze uses an automated computer recognition system that analyzes the child's facial expression in real time. Before and after playing the Angry and Happy versions of FaceMaze, children posed "happy" and "angry" expressions. Naïve raters judged the post-FaceMaze "happy" and "angry" expressions of the ASD group as higher in quality than their pre-FaceMaze productions. Moreover, the post-game expressions of the ASD group were rated as equal in quality as the expressions of the TD group. PMID:24777287

  13. Human and computer recognition of facial expressions of emotion.

    PubMed

    Susskind, J M; Littlewort, G; Bartlett, M S; Movellan, J; Anderson, A K

    2007-01-01

    Neuropsychological and neuroimaging evidence suggests that the human brain contains facial expression recognition detectors specialized for specific discrete emotions. However, some human behavioral data suggest that humans recognize expressions as similar and not discrete entities. This latter observation has been taken to indicate that internal representations of facial expressions may be best characterized as varying along continuous underlying dimensions. To examine the potential compatibility of these two views, the present study compared human and support vector machine (SVM) facial expression recognition performance. Separate SVMs were trained to develop fully automatic optimal recognition of one of six basic emotional expressions in real-time with no explicit training on expression similarity. Performance revealed high recognition accuracy for expression prototypes. Without explicit training of similarity detection, magnitude of activation across each emotion-specific SVM captured human judgments of expression similarity. This evidence suggests that combinations of expert classifiers from separate internal neural representations result in similarity judgments between expressions, supporting the appearance of a continuous underlying dimensionality. Further, these data suggest similarity in expression meaning is supported by superficial similarities in expression appearance. PMID:16765997
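
    The classifier arrangement described above, one expert SVM per basic emotion whose graded activation doubles as a similarity signal, can be sketched in a few lines. The snippet below is an illustrative reconstruction with placeholder features and labels, not the study's trained models.

      # Illustrative sketch: one one-vs-rest SVM per basic emotion; the vector
      # of decision values serves as a graded "activation" profile, and the
      # correlation of mean profiles gives a similarity structure between
      # expression categories. Features and labels are random stand-ins.
      import numpy as np
      from sklearn.svm import SVC

      rng = np.random.default_rng(1)
      emotions = ["anger", "disgust", "fear", "happiness", "sadness", "surprise"]
      X = rng.normal(size=(300, 50))                # placeholder appearance features
      y = rng.integers(0, len(emotions), size=300)  # placeholder emotion labels

      experts = {k: SVC(kernel="linear").fit(X, (y == k).astype(int))
                 for k in range(len(emotions))}     # one expert per emotion

      def activation_profile(samples):
          """Decision value of every expert for every sample (samples x emotions)."""
          return np.column_stack([experts[k].decision_function(samples)
                                  for k in range(len(emotions))])

      profiles = activation_profile(X)
      mean_profiles = np.vstack([profiles[y == k].mean(axis=0)
                                 for k in range(len(emotions))])
      similarity = np.corrcoef(mean_profiles)       # category-by-category similarity
      print(np.round(similarity, 2))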

  14. Moving to continuous facial expression space using the MPEG-4 facial definition parameter (FDP) set

    NASA Astrophysics Data System (ADS)

    Karpouzis, Kostas; Tsapatsoulis, Nicolas; Kollias, Stefanos D.

    2000-06-01

    Research in facial expression has concluded that at least six emotions, conveyed by human faces, are universally associated with distinct expressions. Sadness, anger, joy, fear, disgust and surprise are categories of expressions that are recognizable across cultures. In this work we form a relation between the description of the universal expressions and the MPEG-4 Facial Definition Parameter Set (FDP). We also investigate the relation between the movement of basic FDPs and the parameters that describe emotion-related words according to some classical psychological studies. In particular, Whissel suggested that emotions are points in a space that seems to occupy two dimensions: activation and evaluation. We show that some of the MPEG-4 Facial Animation Parameters (FAPs), approximated by the motion of the corresponding FDPs, can be combined by means of a fuzzy rule system to estimate the activation parameter. In this way, variations of the six archetypal emotions can be achieved. Moreover, Plutchik concluded that emotion terms are unevenly distributed through the space defined by dimensions like Whissel's; instead, they tend to form an approximately circular pattern, called the 'emotion wheel,' modeled using an angular measure. The 'emotion wheel' can be defined as a reference for creating intermediate expressions from the universal ones, by interpolating the movement of dominant FDP points between neighboring basic expressions. By exploiting the relation between the movement of the basic FDP points and the activation and angular parameters, we can model more emotions than the primary ones and achieve efficient recognition in video sequences.
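
    The angular 'emotion wheel' interpolation mentioned above can be made concrete with a small sketch. The angles and FAP displacement profiles below are illustrative placeholders, not MPEG-4 reference data, and the blending rule is a simple linear interpolation between the two neighbouring archetypal expressions, scaled by an activation level.

      # Illustrative sketch of emotion-wheel interpolation between archetypal
      # expressions; archetype angles and FAP profiles are placeholders.
      import numpy as np

      wheel = {  # angle (degrees) of each archetype on the wheel, placeholder
          "joy": 10, "surprise": 80, "fear": 150,
          "anger": 200, "disgust": 250, "sadness": 310,
      }
      fap_profiles = {name: np.random.default_rng(i).normal(size=68)
                      for i, name in enumerate(wheel)}  # placeholder FAP vectors

      def intermediate_expression(angle_deg, activation=1.0):
          """Blend the two archetypes adjacent to angle_deg on the wheel."""
          names = sorted(wheel, key=wheel.get)
          angles = np.array([wheel[n] for n in names])
          nxt = int(np.searchsorted(angles, angle_deg % 360) % len(names))
          prv = (nxt - 1) % len(names)
          span = (angles[nxt] - angles[prv]) % 360          # handles wrap-around
          w = ((angle_deg - angles[prv]) % 360) / span      # 0 at prv, 1 at nxt
          blend = (1 - w) * fap_profiles[names[prv]] + w * fap_profiles[names[nxt]]
          return activation * blend                         # scale by activation

      faps = intermediate_expression(45, activation=0.7)    # between joy and surprise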

  15. Emotional facial expressions reduce neural adaptation to face identity.

    PubMed

    Gerlicher, Anna M V; van Loon, Anouk M; Scholte, H Steven; Lamme, Victor A F; van der Leij, Andries R

    2014-05-01

    In human social interactions, facial emotional expressions are a crucial source of information. Repeatedly presented information typically leads to an adaptation of neural responses. However, processing seems sustained with emotional facial expressions. Therefore, we tested whether sustained processing of emotional expressions, especially threat-related expressions, would attenuate neural adaptation. Neutral and emotional expressions (happy, mixed and fearful) of same and different identity were presented at 3 Hz. We used electroencephalography to record the evoked steady-state visual potentials (ssVEP) and tested to what extent the ssVEP amplitude adapts to the same when compared with different face identities. We found adaptation to the identity of a neutral face. However, for emotional faces, adaptation was reduced, decreasing linearly with negative valence, with the least adaptation to fearful expressions. This short and straightforward method may prove to be a valuable new tool in the study of emotional processing. PMID:23512931

  16. Processing emotional facial expressions: the role of anxiety and awareness.

    PubMed

    Fox, Elaine

    2002-03-01

    In this paper, the role of self-reported anxiety and degree of conscious awareness as determinants of the selective processing of affective facial expressions is investigated. In two experiments, an attentional bias toward fearful facial expressions was observed, although this bias was apparent only for those reporting high levels of trait anxiety and only when the emotional face was presented in the left visual field. This pattern was especially strong when the participants were unaware of the presence of the facial stimuli. In Experiment 3, a patient with right-hemisphere brain damage and visual extinction was presented with photographs of faces and fruits on unilateral and bilateral trials. On bilateral trials, it was found that faces produced less extinction than did fruits. Moreover, faces portraying a fearful or a happy expression tended to produce less extinction than did neutral expressions. This suggests that emotional facial expressions may be less dependent on attention to achieve awareness. The implications of these results for understanding the relations between attention, emotion, and anxiety are discussed. PMID:12452584

  17. Morphologic Analysis of the Temporomandibular Joint Between Patients With Facial Asymmetry and Asymptomatic Subjects by 2D and 3D Evaluation: A Preliminary Study.

    PubMed

    Zhang, Yuan-Li; Song, Jin-Lin; Xu, Xian-Chao; Zheng, Lei-Lei; Wang, Qing-Yuan; Fan, Yu-Bo; Liu, Zhan

    2016-03-01

    Signs and symptoms of temporomandibular joint (TMJ) dysfunction are commonly found in patients with facial asymmetry. Previous studies on the TMJ position have been limited to 2-dimensional (2D) radiographs, computed tomography (CT), or cone-beam computed tomography (CBCT). The purpose of this study was to compare the differences of TMJ position by using 2D CBCT and 3D model measurement methods. In addition, the differences of TMJ positions between patients with facial asymmetry and asymptomatic subjects were investigated. We prospectively recruited 5 patients (cases, mean age, 24.8 ± 2.9 years) diagnosed with facial asymmetry and 5 asymptomatic subjects (controls, mean age, 26 ± 1.2 years). The TMJ spaces, condylar and ramus angles were assessed by using 2D and 3D methods. The 3D models of mandible, maxilla, and teeth were reconstructed with the 3D image software. The variables in each group were assessed by t-test and the level of significance was 0.05. There was a significant difference in the horizontal condylar angle (HCA), coronal condylar angle (CCA), sagittal ramus angle (SRA), medial joint space (MJS), lateral joint space (LJS), superior joint space (SJS), and anterior joint space (AJS) measured in the 2D CBCT and in the 3D models (P < 0.05). The case group had significantly smaller SJS compared to the controls on both nondeviation side (P = 0.009) and deviation side (P = 0.004). In the case group, the nondeviation SRA was significantly larger than the deviation side (P = 0.009). There was no significant difference in the coronal condylar width (CCW) in either group. In addition, the anterior disc displacement (ADD) was more likely to occur on the deviated side in the case group. In conclusion, the 3D measurement method is more accurate and effective for clinicians to investigate the morphology of TMJ than the 2D method. PMID:27043669

  18. Morphologic Analysis of the Temporomandibular Joint Between Patients With Facial Asymmetry and Asymptomatic Subjects by 2D and 3D Evaluation

    PubMed Central

    Zhang, Yuan-Li; Song, Jin-Lin; Xu, Xian-Chao; Zheng, Lei-Lei; Wang, Qing-Yuan; Fan, Yu-Bo; Liu, Zhan

    2016-01-01

    Abstract Signs and symptoms of temporomandibular joint (TMJ) dysfunction are commonly found in patients with facial asymmetry. Previous studies on the TMJ position have been limited to 2-dimensional (2D) radiographs, computed tomography (CT), or cone-beam computed tomography (CBCT). The purpose of this study was to compare the differences of TMJ position by using 2D CBCT and 3D model measurement methods. In addition, the differences of TMJ positions between patients with facial asymmetry and asymptomatic subjects were investigated. We prospectively recruited 5 patients (cases, mean age, 24.8 ± 2.9 years) diagnosed with facial asymmetry and 5 asymptomatic subjects (controls, mean age, 26 ± 1.2 years). The TMJ spaces, condylar and ramus angles were assessed by using 2D and 3D methods. The 3D models of mandible, maxilla, and teeth were reconstructed with the 3D image software. The variables in each group were assessed by t-test and the level of significance was 0.05. There was a significant difference in the horizontal condylar angle (HCA), coronal condylar angle (CCA), sagittal ramus angle (SRA), medial joint space (MJS), lateral joint space (LJS), superior joint space (SJS), and anterior joint space (AJS) measured in the 2D CBCT and in the 3D models (P < 0.05). The case group had significantly smaller SJS compared to the controls on both nondeviation side (P = 0.009) and deviation side (P = 0.004). In the case group, the nondeviation SRA was significantly larger than the deviation side (P = 0.009). There was no significant difference in the coronal condylar width (CCW) in either group. In addition, the anterior disc displacement (ADD) was more likely to occur on the deviated side in the case group. In conclusion, the 3D measurement method is more accurate and effective for clinicians to investigate the morphology of TMJ than the 2D method. PMID:27043669

  19. Language and affective facial expression in children with perinatal stroke

    PubMed Central

    Lai, Philip T.; Reilly, Judy S.

    2015-01-01

    Children with perinatal stroke (PS) provide a unique opportunity to understand developing brain-behavior relations. Previous research has noted distinctive differences in behavioral sequelae between children with PS and adults with acquired stroke: children fare better, presumably due to the plasticity of the developing brain for adaptive reorganization. Whereas we are beginning to understand language development, we know little about another communicative domain, emotional expression. The current study investigates the use and integration of language and facial expression during an interview. As anticipated, the language performance of the five- and six-year-old PS group is comparable to that of their typically developing (TD) peers; however, their affective profiles are distinctive: those with right hemisphere injury are less expressive with respect to affective language and affective facial expression than either those with left hemisphere injury or the TD group. The two distinctive profiles for language and emotional expression in these children suggest gradients of neuroplasticity in the developing brain. PMID:26117314

  20. 3D spheroid cultures improve the metabolic gene expression profiles of HepaRG cells

    PubMed Central

    Takahashi, Yu; Hori, Yuji; Yamamoto, Tomohisa; Urashima, Toshiki; Ohara, Yasunori; Tanaka, Hideo

    2015-01-01

    3D (three-dimensional) cultures are considered to be an effective method for toxicological studies; however, little evidence has been reported on whether 3D cultures have an impact on hepatocellular physiology with regard to lipid or glucose metabolism. In the present study, we conducted a physiological characterization of the hepatoma cell lines HepG2 and HepaRG cultured in 3D conditions using a hanging drop method to verify the effect of the culture environment on cellular responses. ApoB (apolipoprotein B) as well as albumin secretion was augmented by 3D cultures. Expression of genes related not only to drug, but also to glucose and lipid metabolism was significantly enhanced in 3D cultured HepaRG spheroids. Furthermore, mRNA levels of CYP (cytochrome P450) enzymes following exposure to corresponding inducers increased under the 3D condition. These data suggest that this simple 3D culture system without any special biomaterials can improve liver-specific characteristics including lipid metabolism. Considering that the system enables high-throughput assay, it may become a powerful tool for compound screening concerning hepatocellular responses in order to identify potential drugs. PMID:26182370

  1. Specificity of Facial Expression Labeling Deficits in Childhood Psychopathology

    ERIC Educational Resources Information Center

    Guyer, Amanda E.; McClure, Erin B.; Adler, Abby D.; Brotman, Melissa A.; Rich, Brendan A.; Kimes, Alane S.; Pine, Daniel S.; Ernst, Monique; Leibenluft, Ellen

    2007-01-01

    Background: We examined whether face-emotion labeling deficits are illness-specific or an epiphenomenon of generalized impairment in pediatric psychiatric disorders involving mood and behavioral dysregulation. Method: Two hundred fifty-two youths (7-18 years old) completed child and adult facial expression recognition subtests from the Diagnostic…

  2. Categorical Perception of Emotional Facial Expressions in Preschoolers

    ERIC Educational Resources Information Center

    Cheal, Jenna L.; Rutherford, M. D.

    2011-01-01

    Adults perceive emotional facial expressions categorically. In this study, we explored categorical perception in 3.5-year-olds by creating a morphed continuum of emotional faces and tested preschoolers' discrimination and identification of them. In the discrimination task, participants indicated whether two examples from the continuum "felt the…

  3. Categorical Representation of Facial Expressions in the Infant Brain

    ERIC Educational Resources Information Center

    Leppanen, Jukka M.; Richmond, Jenny; Vogel-Farley, Vanessa K.; Moulson, Margaret C.; Nelson, Charles A.

    2009-01-01

    Categorical perception, demonstrated as reduced discrimination of within-category relative to between-category differences in stimuli, has been found in a variety of perceptual domains in adults. To examine the development of categorical perception in the domain of facial expression processing, we used behavioral and event-related potential (ERP)…

  4. Perceived Bias in the Facial Expressions of Television News Broadcasters.

    ERIC Educational Resources Information Center

    Friedman, Howard S.; And Others

    1980-01-01

    Studied the nuances of perceived media bias by examining the television reporting of the 1976 Presidential election campaign by comparing the adjudged positivity of the facial expressions of network anchorpersons as they named or referred to either of the two candidates. (JMF)

  5. Teachers' Perception Regarding Facial Expressions as an Effective Teaching Tool

    ERIC Educational Resources Information Center

    Butt, Muhammad Naeem; Iqbal, Mohammad

    2011-01-01

    The major objective of the study was to explore teachers' perceptions about the importance of facial expression in the teaching-learning process. All the teachers of government secondary schools constituted the population of the study. A sample of 40 teachers, both male and female, in rural and urban areas of district Peshawar, were selected…

  6. Further Evidence on Preschoolers' Interpretation of Facial Expressions.

    ERIC Educational Resources Information Center

    Bullock, Merry; Russell, James A.

    1985-01-01

    Assessed through two studies the organization and basis for preschool children's (n=240) and adults' (n=60) categorization of emotions. In one, children and adults chose facial expressions that exemplify emotion categories such as fear, anger, and happiness. In another they grouped emotions differing in arousal level or pleasure-displeasure…

  7. Facial Expressions in Context: Contributions to Infant Emotion Theory.

    ERIC Educational Resources Information Center

    Camras, Linda A.

    To make the point that infant emotions are more dynamic than suggested by Differential Emotions Theory, which maintains that infants show the same prototypical facial expressions for emotions as adults do, this paper explores two questions: (1) when infants experience an emotion, do they always show the corresponding prototypical facial…

  8. High expression of Rab3D predicts poor prognosis and associates with tumor progression in colorectal cancer.

    PubMed

    Luo, Yang; Ye, Guang-Yao; Qin, Shao-Lan; Mu, Yi-Fei; Zhang, Lei; Qi, Yang; Qiu, Yi-Er; Yu, Min-Hao; Zhong, Ming

    2016-06-01

    Rab3D belongs to the Rab protein family. Previous reports have shown that the expression of Rab3D is dysregulated in various types of cancer. However, little is known about the role of Rab3D in the carcinogenesis and progression of colorectal cancer (CRC). Here, we first evaluated the expression of Rab3D in 32 fresh CRC and matched normal tissues and found that Rab3D was dramatically increased in CRC tissues compared to normal tissues (p<0.001). Furthermore, immunochemistry was used to investigate Rab3D expression in 300 CRC tissue specimens. The expression of Rab3D was significantly and positively correlated with tumor size (p=0.041), CEA level (p=0.007), tumor classification (p=0.030), lymphatic metastasis (p<0.001), distant metastasis (p=0.013) and clinical stage (p=0.003). We also demonstrated that overall survival is poor in CRC patients with high expression of Rab3D (p<0.001). Finally, we showed that Rab3D activated the Akt/GSK3β/Snail pathway and induced the EMT process in colorectal cancer cells. In conclusion, this study establishes that increased Rab3D expression is associated with the invasiveness of CRC cells, and that Rab3D expression status may serve as a reliable prognostic biomarker in CRC patients. PMID:27046094

  9. Facial Expression Recognition Deficits and Faulty Learning: Implications for Theoretical Models and Clinical Applications

    ERIC Educational Resources Information Center

    Sheaffer, Beverly L.; Golden, Jeannie A.; Averett, Paige

    2009-01-01

    The ability to recognize facial expressions of emotion is integral in social interaction. Although the importance of facial expression recognition is reflected in increased research interest as well as in popular culture, clinicians may know little about this topic. The purpose of this article is to discuss facial expression recognition literature…

  10. Using Video Modeling to Teach Children with PDD-NOS to Respond to Facial Expressions

    ERIC Educational Resources Information Center

    Axe, Judah B.; Evans, Christine J.

    2012-01-01

    Children with autism spectrum disorders often exhibit delays in responding to facial expressions, and few studies have examined teaching responding to subtle facial expressions to this population. We used video modeling to train 3 participants with PDD-NOS (age 5) to respond to eight facial expressions: approval, bored, calming, disapproval,…

  11. Interference between conscious and unconscious facial expression information.

    PubMed

    Ye, Xing; He, Sheng; Hu, Ying; Yu, Yong Qiang; Wang, Kai

    2014-01-01

    There is ample evidence to show that many types of visual information, including emotional information, can be processed in the absence of visual awareness. For example, it has been shown that masked subliminal facial expressions can induce priming and adaptation effects. However, stimuli made invisible in different ways could be processed to different extents and have differential effects. In this study, we adopted a flanker-type behavioral method to investigate whether a flanker rendered invisible through Continuous Flash Suppression (CFS) could induce a congruency effect on the discrimination of a visible target. Specifically, during the experiment, participants judged the expression (either happy or fearful) of a visible face in the presence of a nearby invisible face (with a happy or fearful expression). Results show that participants were slower and less accurate in discriminating the expression of the visible face when the expression of the invisible flanker face was incongruent. Thus, facial expression information rendered invisible with CFS and presented at a different spatial location could enhance or interfere with consciously processed facial expression information. PMID:25162153

  12. The Enfacement Illusion Is Not Affected by Negative Facial Expressions

    PubMed Central

    Beck, Brianna; Cardini, Flavia; Làdavas, Elisabetta; Bertini, Caterina

    2015-01-01

    Enfacement is an illusion wherein synchronous visual and tactile inputs update the mental representation of one’s own face to assimilate another person’s face. Emotional facial expressions, serving as communicative signals, may influence enfacement by increasing the observer’s motivation to understand the mental state of the expresser. Fearful expressions, in particular, might increase enfacement because they are valuable for adaptive behavior and more strongly represented in somatosensory cortex than other emotions. In the present study, a face was seen being touched at the same time as the participant’s own face. This face was either neutral, fearful, or angry. Anger was chosen as an emotional control condition for fear because it is similarly negative but induces less somatosensory resonance, and requires additional knowledge (i.e., contextual information and social contingencies) to effectively guide behavior. We hypothesized that seeing a fearful face (but not an angry one) would increase enfacement because of greater somatosensory resonance. Surprisingly, neither fearful nor angry expressions modulated the degree of enfacement relative to neutral expressions. Synchronous interpersonal visuo-tactile stimulation led to assimilation of the other’s face, but this assimilation was not modulated by facial expression processing. This finding suggests that dynamic, multisensory processes of self-face identification operate independently of facial expression processing. PMID:26291532

  13. Fashioning the Face: Sensorimotor Simulation Contributes to Facial Expression Recognition.

    PubMed

    Wood, Adrienne; Rychlowska, Magdalena; Korb, Sebastian; Niedenthal, Paula

    2016-03-01

    When we observe a facial expression of emotion, we often mimic it. This automatic mimicry reflects underlying sensorimotor simulation that supports accurate emotion recognition. Why this is so is becoming more obvious: emotions are patterns of expressive, behavioral, physiological, and subjective feeling responses. Activation of one component can therefore automatically activate other components. When people simulate a perceived facial expression, they partially activate the corresponding emotional state in themselves, which provides a basis for inferring the underlying emotion of the expresser. We integrate recent evidence in favor of a role for sensorimotor simulation in emotion recognition. We then connect this account to a domain-general understanding of how sensory information from multiple modalities is integrated to generate perceptual predictions in the brain. PMID:26876363

  14. Drug effects on responses to emotional facial expressions: recent findings

    PubMed Central

    Miller, Melissa A.; Bershad, Anya K.; de Wit, Harriet

    2016-01-01

    Many psychoactive drugs increase social behavior and enhance social interactions, which may, in turn, increase their attractiveness to users. Although the psychological mechanisms by which drugs affect social behavior are not fully understood, there is some evidence that drugs alter the perception of emotions in others. Drugs can affect the ability to detect, attend to, and respond to emotional facial expressions, which in turn may influence their use in social settings. Either increased reactivity to positive expressions or decreased response to negative expressions may facilitate social interaction. This article reviews evidence that psychoactive drugs alter the processing of emotional facial expressions using subjective, behavioral, and physiological measures. The findings lay the groundwork for better understanding how drugs alter social processing and social behavior more generally. PMID:26226144

  15. Drug effects on responses to emotional facial expressions: recent findings.

    PubMed

    Miller, Melissa A; Bershad, Anya K; de Wit, Harriet

    2015-09-01

    Many psychoactive drugs increase social behavior and enhance social interactions, which may, in turn, increase their attractiveness to users. Although the psychological mechanisms by which drugs affect social behavior are not fully understood, there is some evidence that drugs alter the perception of emotions in others. Drugs can affect the ability to detect, attend to, and respond to emotional facial expressions, which in turn may influence their use in social settings. Either increased reactivity to positive expressions or decreased response to negative expressions may facilitate social interaction. This article reviews evidence that psychoactive drugs alter the processing of emotional facial expressions using subjective, behavioral, and physiological measures. The findings lay the groundwork for better understanding how drugs alter social processing and social behavior more generally. PMID:26226144

  16. Extreme Facial Expressions Classification Based on Reality Parameters

    NASA Astrophysics Data System (ADS)

    Rahim, Mohd Shafry Mohd; Rad, Abdolvahab Ehsani; Rehman, Amjad; Altameem, Ayman

    2014-09-01

    Extreme expressions are a type of emotional expression stimulated by strong emotion; an example of such an extreme expression is one accompanied by tears. To provide these types of features, additional elements such as a fluid mechanism (particle system) and physics techniques such as Smoothed Particle Hydrodynamics (SPH) are introduced. The fusion of facial animation with SPH exhibits promising results. Accordingly, the proposed fluid technique combined with facial animation is the core of this research for obtaining complex expressions, such as laughing, smiling, crying (with the emergence of tears), or sadness intense enough to provoke strong crying, as a classification of the extreme expressions that can appear on the human face.
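
    As a very rough illustration of the kind of particle-based fluid element mentioned above, the toy sketch below advances a small 2D droplet with an SPH-style density estimate, a clamped pressure term, and gravity. It is an assumption-laden stand-in, not the paper's system: it omits viscosity, surface tension, and collision with the animated face mesh, and all constants are arbitrary.

      # Toy 2D SPH-style particle step (density + clamped pressure + gravity);
      # illustrative only, not the paper's tear-simulation system.
      import numpy as np

      N, H, DT = 60, 0.06, 0.004              # particles, kernel radius, time step
      MASS, REST_RHO, STIFF = 1.0, 800.0, 50.0
      G = np.array([0.0, -9.81])              # gravity

      rng = np.random.default_rng(2)
      pos = rng.uniform(0.45, 0.55, size=(N, 2))  # a small droplet of particles
      vel = np.zeros((N, 2))

      def poly6(r2):
          """2D poly6 smoothing kernel evaluated on squared distances."""
          coeff = 4.0 / (np.pi * H ** 8)
          return np.where(r2 < H * H, coeff * (H * H - r2) ** 3, 0.0)

      def step(pos, vel):
          diff = pos[:, None, :] - pos[None, :, :]           # pairwise offsets
          r2 = np.einsum("ijk,ijk->ij", diff, diff)          # squared distances
          w = poly6(r2)
          rho = MASS * w.sum(axis=1)                         # SPH density estimate
          press = STIFF * np.maximum(rho - REST_RHO, 0.0)    # clamped pressure
          # Crude repulsive force: push overlapping pairs apart along their
          # offset, weighted by kernel value and mean pair pressure.
          pij = 0.5 * (press[:, None] + press[None, :])
          force = (MASS * (pij / rho[None, :])[:, :, None]
                   * w[:, :, None] * diff).sum(axis=1)
          vel = vel + DT * (force / rho[:, None] + G)
          return pos + DT * vel, vel

      for _ in range(100):   # the droplet falls and spreads under gravity
          pos, vel = step(pos, vel)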

  17. Reconstructing dynamic mental models of facial expressions in prosopagnosia reveals distinct representations for identity and expression.

    PubMed

    Richoz, Anne-Raphaëlle; Jack, Rachael E; Garrod, Oliver G B; Schyns, Philippe G; Caldara, Roberto

    2015-04-01

    The human face transmits a wealth of signals that readily provide crucial information for social interactions, such as facial identity and emotional expression. Yet, a fundamental question remains unresolved: does the face information for identity and emotional expression categorization tap into common or distinct representational systems? To address this question we tested PS, a pure case of acquired prosopagnosia with bilateral occipitotemporal lesions anatomically sparing the regions that are assumed to contribute to facial expression (de)coding (i.e., the amygdala, the insula and the posterior superior temporal sulcus--pSTS). We previously demonstrated that PS does not use information from the eye region to identify faces, but relies on the suboptimal mouth region. PS's abnormal information use for identity, coupled with her neural dissociation, provides a unique opportunity to probe the existence of a dichotomy in the face representational system. To reconstruct the mental models of the six basic facial expressions of emotion in PS and age-matched healthy observers, we used a novel reverse correlation technique tracking information use on dynamic faces. PS was comparable to controls, using all facial features to (de)code facial expressions with the exception of fear. PS's normal (de)coding of dynamic facial expressions suggests that the face system relies either on distinct representational systems for identity and expression, or dissociable cortical pathways to access them. Interestingly, PS showed a selective impairment for categorizing many static facial expressions, which could be accounted for by her lesion in the right inferior occipital gyrus. PS's advantage for dynamic facial expressions might instead relate to a functionally distinct and sufficient cortical pathway directly connecting the early visual cortex to the spared pSTS. Altogether, our data provide critical insights on the healthy and impaired face systems, question evidence of deficits

  18. Facial morphology and children's categorization of facial expressions of emotions: a comparison between Asian and Caucasian faces.

    PubMed

    Gosselin, P; Larocque, C

    2000-09-01

    The effects of Asian and Caucasian facial morphology were examined by having Canadian children categorize pictures of facial expressions of basic emotions. The pictures were selected from the Japanese and Caucasian Facial Expressions of Emotion set developed by D. Matsumoto and P. Ekman (1989). Sixty children between the ages of 5 and 10 years were presented with short stories and an array of facial expressions, and were asked to point to the expression that best depicted the specific emotion experienced by the characters. The results indicated that expressions of fear and surprise were better categorized from Asian faces, whereas expressions of disgust were better categorized from Caucasian faces. These differences originated in some specific confusions between expressions. PMID:10971913

  19. Facial Expressions and the Evolution of the Speech Rhythm

    PubMed Central

    Ghazanfar, Asif A.; Takahashi, Daniel Y.

    2015-01-01

    In primates, different vocalizations are produced, at least in part, by making different facial expressions. Not surprisingly, humans, apes, and monkeys all recognize the correspondence between vocalizations and the facial postures associated with them. However, one major dissimilarity between monkey vocalizations and human speech is that, in the latter, the acoustic output and associated movements of the mouth are both rhythmic (in the 3- to 8-Hz range) and tightly correlated, whereas monkey vocalizations have a similar acoustic rhythmicity but lack the concomitant rhythmic facial motion. This raises the question of how we evolved from a presumptive ancestral acoustic-only vocal rhythm to the one that is audiovisual with improved perceptual sensitivity. According to one hypothesis, this bisensory speech rhythm evolved through the rhythmic facial expressions of ancestral primates. If this hypothesis has any validity, we expect that the extant nonhuman primates produce at least some facial expressions with a speech-like rhythm in the 3- to 8-Hz frequency range. Lip smacking, an affiliative signal observed in many genera of primates, satisfies this criterion. We review a series of studies using developmental, x-ray cineradiographic, EMG, and perceptual approaches with macaque monkeys producing lip smacks to further investigate this hypothesis. We then explore its putative neural basis and remark on important differences between lip smacking and speech production. Overall, the data support the hypothesis that lip smacking may have been an ancestral expression that was linked to vocal output to produce the original rhythmic audiovisual speech-like utterances in the human lineage. PMID:24456390

  20. Mining biological information from 3D short time-series gene expression data: the OPTricluster algorithm

    PubMed Central

    2012-01-01

    Background Nowadays, it is possible to collect expression levels of a set of genes from a set of biological samples during a series of time points. Such data have three dimensions: gene-sample-time (GST). Thus they are called 3D microarray gene expression data. To take advantage of the 3D data collected, and to fully understand the biological knowledge hidden in the GST data, novel subspace clustering algorithms have to be developed to effectively address the biological problem in the corresponding space. Results We developed a subspace clustering algorithm called Order Preserving Triclustering (OPTricluster), for 3D short time-series data mining. OPTricluster is able to identify 3D clusters with coherent evolution from a given 3D dataset using a combinatorial approach on the sample dimension, and the order preserving (OP) concept on the time dimension. The fusion of the two methodologies allows one to study similarities and differences between samples in terms of their temporal expression profile. OPTricluster has been successfully applied to four case studies: immune response in mice infected by malaria (Plasmodium chabaudi), systemic acquired resistance in Arabidopsis thaliana, similarities and differences between inner and outer cotyledon in Brassica napus during seed development, and to Brassica napus whole seed development. These studies showed that OPTricluster is robust to noise and is able to detect the similarities and differences between biological samples. Conclusions Our analysis showed that OPTricluster generally outperforms other well known clustering algorithms such as the TRICLUSTER, gTRICLUSTER and K-means; it is robust to noise and can effectively mine the biological knowledge hidden in the 3D short time-series gene expression data. PMID:22475802
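
    The order-preserving idea on the time dimension can be sketched as follows: each gene's time series within a sample is reduced to the permutation that sorts its values, and genes or samples sharing the same permutation form candidate groups. This is a minimal illustration with made-up data, not the OPTricluster implementation.

```python
import numpy as np
from collections import defaultdict

def op_pattern(time_series):
    """Order-preserving signature: the permutation of time points sorted
    by expression value (ties broken by time index)."""
    return tuple(np.argsort(time_series, kind="stable"))

def group_by_pattern(expr, sample):
    """expr: 3D array (genes x samples x time points).
    Returns {pattern: [gene indices]} for one sample."""
    groups = defaultdict(list)
    for g in range(expr.shape[0]):
        groups[op_pattern(expr[g, sample, :])].append(g)
    return groups

# Toy GST data: 6 genes, 2 samples, 4 time points.
rng = np.random.default_rng(0)
expr = rng.random((6, 2, 4))

groups_s0 = group_by_pattern(expr, sample=0)
groups_s1 = group_by_pattern(expr, sample=1)

# Genes that keep the same temporal ordering in both samples ("conserved"),
# as opposed to genes whose ordering differs between samples ("divergent").
conserved = [g for g in range(expr.shape[0])
             if op_pattern(expr[g, 0, :]) == op_pattern(expr[g, 1, :])]
print(conserved)
```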

  1. Recognition of facial expressions in obsessive-compulsive disorder.

    PubMed

    Corcoran, Kathleen M; Woody, Sheila R; Tolin, David F

    2008-01-01

    Sprengelmeyer et al. [Sprengelmeyer, R., Young, A. W., Pundt, I., Sprengelmeyer, A., Calder, A. J., Berrios, G., et al. (1997). Disgust implicated in obsessive-compulsive disorder. Proceedings of the Royal Society of London, 264, 1767-1773] found that patients with OCD showed severely impaired recognition of facial expressions of disgust. This result has potential to provide a unique window into the psychopathology of OCD, but several published attempts to replicate this finding have failed. The current study compared OCD patients to normal controls and panic disorder patients on ability to recognize facial expressions of negative emotions. Overall, the OCD patients were impaired in their ability to recognize disgust expressions, but only 33% of patients showed this deficit. These deficits were related to OCD symptom severity and general functioning, factors that may account for the inconsistent findings observed in different laboratories. PMID:17320346

  2. Forming impressions: effects of facial expression and gender stereotypes.

    PubMed

    Hack, Tay

    2014-04-01

    The present study of 138 participants explored how facial expressions and gender stereotypes influence impressions. It was predicted that images of smiling women would be evaluated more favorably on traits reflecting warmth, and that images of non-smiling men would be evaluated more favorably on traits reflecting competence. As predicted, smiling female faces were rated as more warm; however, contrary to prediction, perceived competence of male faces was not affected by facial expression. Participants' female stereotype endorsement was a significant predictor for evaluations of female faces; those who ascribed more strongly to traditional female stereotypes reported the most positive impressions of female faces displaying a smiling expression. However, a similar effect was not found for images of men; endorsement of traditional male stereotypes did not predict participants' impressions of male faces. PMID:24897907

  3. Facial expressions of emotion are not culturally universal

    PubMed Central

    Jack, Rachael E.; Garrod, Oliver G. B.; Yu, Hui; Caldara, Roberto; Schyns, Philippe G.

    2012-01-01

    Since Darwin’s seminal works, the universality of facial expressions of emotion has remained one of the longest standing debates in the biological and social sciences. Briefly stated, the universality hypothesis claims that all humans communicate six basic internal emotional states (happy, surprise, fear, disgust, anger, and sad) using the same facial movements by virtue of their biological and evolutionary origins [Susskind JM, et al. (2008) Nat Neurosci 11:843–850]. Here, we refute this assumed universality. Using a unique computer graphics platform that combines generative grammars [Chomsky N (1965) MIT Press, Cambridge, MA] with visual perception, we accessed the mind’s eye of 30 Western and Eastern culture individuals and reconstructed their mental representations of the six basic facial expressions of emotion. Cross-cultural comparisons of the mental representations challenge universality on two separate counts. First, whereas Westerners represent each of the six basic emotions with a distinct set of facial movements common to the group, Easterners do not. Second, Easterners represent emotional intensity with distinctive dynamic eye activity. By refuting the long-standing universality hypothesis, our data highlight the powerful influence of culture on shaping basic behaviors once considered biologically hardwired. Consequently, our data open a unique nature–nurture debate across broad fields from evolutionary psychology and social neuroscience to social networking via digital avatars. PMID:22509011

  4. Facial expressions of emotion are not culturally universal.

    PubMed

    Jack, Rachael E; Garrod, Oliver G B; Yu, Hui; Caldara, Roberto; Schyns, Philippe G

    2012-05-01

    Since Darwin's seminal works, the universality of facial expressions of emotion has remained one of the longest standing debates in the biological and social sciences. Briefly stated, the universality hypothesis claims that all humans communicate six basic internal emotional states (happy, surprise, fear, disgust, anger, and sad) using the same facial movements by virtue of their biological and evolutionary origins [Susskind JM, et al. (2008) Nat Neurosci 11:843-850]. Here, we refute this assumed universality. Using a unique computer graphics platform that combines generative grammars [Chomsky N (1965) MIT Press, Cambridge, MA] with visual perception, we accessed the mind's eye of 30 Western and Eastern culture individuals and reconstructed their mental representations of the six basic facial expressions of emotion. Cross-cultural comparisons of the mental representations challenge universality on two separate counts. First, whereas Westerners represent each of the six basic emotions with a distinct set of facial movements common to the group, Easterners do not. Second, Easterners represent emotional intensity with distinctive dynamic eye activity. By refuting the long-standing universality hypothesis, our data highlight the powerful influence of culture on shaping basic behaviors once considered biologically hardwired. Consequently, our data open a unique nature-nurture debate across broad fields from evolutionary psychology and social neuroscience to social networking via digital avatars. PMID:22509011

  5. Morphing between expressions dissociates continuous from categorical representations of facial expression in the human brain

    PubMed Central

    Harris, Richard J.; Young, Andrew W.; Andrews, Timothy J.

    2012-01-01

    Whether the brain represents facial expressions as perceptual continua or as emotion categories remains controversial. Here, we measured the neural response to morphed images to directly address how facial expressions of emotion are represented in the brain. We found that face-selective regions in the posterior superior temporal sulcus and the amygdala responded selectively to changes in facial expression, independent of changes in identity. We then asked whether the responses in these regions reflected categorical or continuous neural representations of facial expression. Participants viewed images from continua generated by morphing between faces posing different expressions such that the expression could be the same, could involve a physical change but convey the same emotion, or could differ by the same physical amount but be perceived as two different emotions. We found that the posterior superior temporal sulcus was equally sensitive to all changes in facial expression, consistent with a continuous representation. In contrast, the amygdala was only sensitive to changes in expression that altered the perceived emotion, demonstrating a more categorical representation. These results offer a resolution to the controversy about how facial expression is processed in the brain by showing that both continuous and categorical representations underlie our ability to extract this important social cue. PMID:23213218

  6. Differences in facial expressions of four universal emotions.

    PubMed

    Kohler, Christian G; Turner, Travis; Stolar, Neal M; Bilker, Warren B; Brensinger, Colleen M; Gur, Raquel E; Gur, Ruben C

    2004-10-30

    The facial action coding system (FACS) was used to examine recognition rates in 105 healthy young men and women who viewed 128 facial expressions of posed and evoked happy, sad, angry and fearful emotions in color photographs balanced for gender and ethnicity of poser. Categorical analyses determined the specificity of individual action units for each emotion. Relationships between recognition rates for different emotions and action units were evaluated using a logistic regression model. Each emotion could be identified by a group of action units, characteristic to the emotion and distinct from other emotions. Characteristic happy expressions comprised raised inner eyebrows, tightened lower eyelid, raised cheeks, upper lip raised and lip corners turned upward. Recognition of happy faces was associated with cheek raise, lid tightening and outer brow raise. Characteristic sad expressions comprised furrowed eyebrow, opened mouth with upper lip being raised, lip corners stretched and turned down, and chin pulled up. Only brow lower and chin raise were associated with sad recognition. Characteristic anger expressions comprised lowered eyebrows, eyes wide open with tightened lower lid, lips exposing teeth and stretched lip corners. Recognition of angry faces was associated with lowered eyebrows, upper lid raise and lower lip depression. Characteristic fear expressions comprised eyes wide open, furrowed and raised eyebrows and stretched mouth. Recognition of fearful faces was most highly associated with upper lip raise and nostril dilation, although both occurred infrequently, and with inner brow raise and widened eyes. Comparisons are made with previous studies that used different facial stimuli. PMID:15541780

  7. The look of fear and anger: facial maturity modulates recognition of fearful and angry expressions.

    PubMed

    Sacco, Donald F; Hugenberg, Kurt

    2009-02-01

    The current series of studies provide converging evidence that facial expressions of fear and anger may have co-evolved to mimic mature and babyish faces in order to enhance their communicative signal. In Studies 1 and 2, fearful and angry facial expressions were manipulated to have enhanced babyish features (larger eyes) or enhanced mature features (smaller eyes) and in the context of a speeded categorization task in Study 1 and a visual noise paradigm in Study 2, results indicated that larger eyes facilitated the recognition of fearful facial expressions, while smaller eyes facilitated the recognition of angry facial expressions. Study 3 manipulated facial roundness, a stable structure that does not vary systematically with expressions, and found that congruency between maturity and expression (narrow face-anger; round face-fear) facilitated expression recognition accuracy. Results are discussed as representing a broad co-evolutionary relationship between facial maturity and fearful and angry facial expressions. PMID:19186915

  8. Optical computer recognition of facial expressions associated with stress induced by performance demands.

    PubMed

    Dinges, David F; Rider, Robert L; Dorrian, Jillian; McGlinchey, Eleanor L; Rogers, Naomi L; Cizman, Ziga; Goldenstein, Siome K; Vogler, Christian; Venkataraman, Sundara; Metaxas, Dimitris N

    2005-06-01

    Application of computer vision to track changes in human facial expressions during long-duration spaceflight may be a useful way to unobtrusively detect the presence of stress during critical operations. To develop such an approach, we applied optical computer recognition (OCR) algorithms for detecting facial changes during performance while people experienced both low- and high-stressor performance demands. Workload and social feedback were used to vary performance stress in 60 healthy adults (29 men, 31 women; mean age 30 yr). High-stressor scenarios involved more difficult performance tasks, negative social feedback, and greater time pressure relative to low workload scenarios. Stress reactions were tracked using self-report ratings, salivary cortisol, and heart rate. Subjects also completed personality, mood, and alexithymia questionnaires. To bootstrap development of the OCR algorithm, we had a human observer, blind to stressor condition, identify the expressive elements of the face of people undergoing high- vs. low-stressor performance. Different sets of videos of subjects' faces during performance conditions were used for OCR algorithm training. Subjective ratings of stress, task difficulty, effort required, frustration, and negative mood were significantly increased during high-stressor performance bouts relative to low-stressor bouts (all p < 0.01). The OCR algorithm was refined to provide robust 3-d tracking of facial expressions during head movement. Movements of eyebrows and asymmetries in the mouth were extracted. These parameters are being used in a Hidden Markov model to identify high- and low-stressor conditions. Preliminary results suggest that an OCR algorithm using mouth and eyebrow regions has the potential to discriminate high- from low-stressor performance bouts in 75-88% of subjects. The validity of the workload paradigm to induce differential levels of stress in facial expressions was established. The paradigm also provided the basic stress
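
    The final classification step described above can be sketched as scoring a sequence of tracked eyebrow/mouth features under one hidden Markov model per condition and choosing the condition with the higher likelihood. The sketch below uses a pure-NumPy forward algorithm with Gaussian emissions; all model parameters and feature names are illustrative assumptions, not the study's trained models.

```python
import numpy as np

def log_gauss(x, mean, var):
    """Log density of a diagonal Gaussian."""
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mean) ** 2 / var)

def hmm_log_likelihood(obs, pi, A, means, variances):
    """Forward algorithm in log space for a Gaussian-emission HMM.
    obs: (T, D) feature sequence, pi: (K,) initial probs, A: (K, K) transitions."""
    T, K = len(obs), len(pi)
    log_alpha = np.log(pi) + np.array(
        [log_gauss(obs[0], means[k], variances[k]) for k in range(K)])
    for t in range(1, T):
        log_emit = np.array(
            [log_gauss(obs[t], means[k], variances[k]) for k in range(K)])
        prev = log_alpha[:, None] + np.log(A)       # (from state, to state)
        m = prev.max(axis=0)
        log_alpha = m + np.log(np.exp(prev - m).sum(axis=0)) + log_emit
    m = log_alpha.max()
    return m + np.log(np.exp(log_alpha - m).sum())

# Hypothetical 2-state models for "high stress" and "low stress",
# over 2-D features (eyebrow displacement, mouth asymmetry).
pi = np.array([0.5, 0.5])
A = np.array([[0.9, 0.1], [0.1, 0.9]])
high = dict(means=np.array([[0.8, 0.6], [0.2, 0.1]]),
            variances=np.full((2, 2), 0.05))
low = dict(means=np.array([[0.1, 0.1], [0.3, 0.2]]),
           variances=np.full((2, 2), 0.05))

seq = np.array([[0.7, 0.5], [0.8, 0.7], [0.6, 0.5]])  # tracked feature sequence
label = ("high" if hmm_log_likelihood(seq, pi, A, **high)
         > hmm_log_likelihood(seq, pi, A, **low) else "low")
print(label)
```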

  9. Emotional Representation in Facial Expression and Script: A Comparison between Normal and Autistic Children

    ERIC Educational Resources Information Center

    Balconi, Michela; Carrera, Alba

    2007-01-01

    The paper explored conceptual and lexical skills with regard to emotional correlates of facial stimuli and scripts. In two different experimental phases normal and autistic children observed six facial expressions of emotions (happiness, anger, fear, sadness, surprise, and disgust) and six emotional scripts (contextualized facial expressions). In…

  10. Americans and Palestinians judge spontaneous facial expressions of emotion.

    PubMed

    Kayyal, Mary H; Russell, James A

    2013-10-01

    The claim that certain emotions are universally recognized from facial expressions is based primarily on the study of expressions that were posed. The current study was of spontaneous facial expressions shown by aborigines in Papua New Guinea (Ekman, 1980); 17 faces claimed to convey one (or, in the case of blends, two) basic emotions and five faces claimed to show other universal feelings. For each face, participants rated the degree to which each of the 12 predicted emotions or feelings was conveyed. The modal choice for English-speaking Americans (n = 60), English-speaking Palestinians (n = 60), and Arabic-speaking Palestinians (n = 44) was the predicted label for only 4, 5, and 4, respectively, of the 17 faces for basic emotions, and for only 2, 2, and 2, respectively, of the 5 faces for other feelings. Observers endorsed the predicted emotion or feeling moderately often (65%, 55%, and 44%), but also denied it moderately often (35%, 45%, and 56%). They also endorsed more than one (or, for blends, two) label(s) in each face-on average, 2.3, 2.3, and 1.5 of basic emotions and 2.6, 2.2, and 1.5 of other feelings. There were both similarities and differences across culture and language, but the emotional meaning of a facial expression is not well captured by the predicted label(s) or, indeed, by any single label. PMID:23795587

  11. Facial Expressions of Emotion: Are Angry Faces Detected More Efficiently?

    PubMed Central

    Fox, Elaine; Lester, Victoria; Russo, Riccardo; Bowles, R.J.; Pichler, Alessio; Dutton, Kevin

    2007-01-01

    The rapid detection of facial expressions of anger or threat has obvious adaptive value. In this study, we examined the efficiency of facial processing by means of a visual search task. Participants searched displays of schematic faces and were required to determine whether the faces displayed were all the same or whether one was different. Four main results were found: (1) When displays contained the same faces, people were slower in detecting the absence of a discrepant face when the faces displayed angry (or sad/angry) rather than happy expressions. (2) When displays contained a discrepant face people were faster in detecting this when the discrepant face displayed an angry rather than a happy expression. (3) Neither of these patterns for same and different displays was apparent when face displays were inverted, or when just the mouth was presented in isolation. (4) The search slopes for angry targets were significantly lower than for happy targets. These results suggest that detection of angry facial expressions is fast and efficient, although does not “pop-out” in the traditional sense. PMID:17401453

  12. Neural processing of dynamic emotional facial expressions in psychopaths.

    PubMed

    Decety, Jean; Skelly, Laurie; Yoder, Keith J; Kiehl, Kent A

    2014-02-01

    Facial expressions play a critical role in social interactions by eliciting rapid responses in the observer. Failure to perceive and experience a normal range and depth of emotion seriously impact interpersonal communication and relationships. As has been demonstrated across a number of domains, abnormal emotion processing in individuals with psychopathy plays a key role in their lack of empathy. However, the neuroimaging literature is unclear as to whether deficits are specific to particular emotions such as fear and perhaps sadness. Moreover, findings are inconsistent across studies. In the current experiment, 80 incarcerated adult males scoring high, medium, and low on the Hare Psychopathy Checklist-Revised (PCL-R) underwent functional magnetic resonance imaging (fMRI) scanning while viewing dynamic facial expressions of fear, sadness, happiness, and pain. Participants who scored high on the PCL-R showed a reduction in neuro-hemodynamic response to all four categories of facial expressions in the face processing network (inferior occipital gyrus, fusiform gyrus, and superior temporal sulcus (STS)) as well as the extended network (inferior frontal gyrus and orbitofrontal cortex (OFC)), which supports a pervasive deficit across emotion domains. Unexpectedly, the response in dorsal insula to fear, sadness, and pain was greater in psychopaths than non-psychopaths. Importantly, the orbitofrontal cortex and ventromedial prefrontal cortex (vmPFC), regions critically implicated in affective and motivated behaviors, were significantly less active in individuals with psychopathy during the perception of all four emotional expressions. PMID:24359488

  13. Younger and Older Users’ Recognition of Virtual Agent Facial Expressions

    PubMed Central

    Beer, Jenay M.; Smarr, Cory-Ann; Fisk, Arthur D.; Rogers, Wendy A.

    2015-01-01

    As technology advances, robots and virtual agents will be introduced into the home and healthcare settings to assist individuals, both young and old, with everyday living tasks. Understanding how users recognize an agent’s social cues is therefore imperative, especially in social interactions. Facial expression, in particular, is one of the most common non-verbal cues used to display and communicate emotion in on-screen agents (Cassell, Sullivan, Prevost, & Churchill, 2000). Age is important to consider because age-related differences in emotion recognition of human facial expression have been supported (Ruffman et al., 2008), with older adults showing a deficit for recognition of negative facial expressions. Previous work has shown that younger adults can effectively recognize facial emotions displayed by agents (Bartneck & Reichenbach, 2005; Courgeon et al. 2009; 2011; Breazeal, 2003); however, little research has compared in-depth younger and older adults’ ability to label a virtual agent’s facial emotions, an important consideration because social agents will be required to interact with users of varying ages. If such age-related differences exist for recognition of virtual agent facial expressions, we aim to understand if those age-related differences are influenced by the intensity of the emotion, dynamic formation of emotion (i.e., a neutral expression developing into an expression of emotion through motion), or the type of virtual character differing by human-likeness. Study 1 investigated the relationship between age-related differences, the implication of dynamic formation of emotion, and the role of emotion intensity in emotion recognition of the facial expressions of a virtual agent (iCat). Study 2 examined age-related differences in recognition expressed by three types of virtual characters differing by human-likeness (non-humanoid iCat, synthetic human, and human). Study 2 also investigated the role of configural and featural processing as a

  14. Facial expression recognition in Alzheimer's disease: a longitudinal study.

    PubMed

    Torres, Bianca; Santos, Raquel Luiza; Sousa, Maria Fernanda Barroso de; Simões Neto, José Pedro; Nogueira, Marcela Moreira Lima; Belfort, Tatiana T; Dias, Rachel; Dourado, Marcia Cristina Nascimento

    2015-05-01

    Facial recognition is one of the most important aspects of social cognition. In this study, we investigate the patterns of change and the factors involved in the ability to recognize emotion in mild Alzheimer's disease (AD). Through a longitudinal design, we assessed 30 people with AD. We used an experimental task that includes matching expressions with picture stimuli, labelling emotions and emotionally recognizing a stimulus situation. We observed a significant difference in the situational recognition task (p ≤ 0.05) between baseline and the second evaluation. The linear regression showed that cognition is a predictor of emotion recognition impairment (p ≤ 0.05). The ability to perceive emotions from facial expressions was impaired, particularly when the emotions presented were relatively subtle. Cognition is recruited to comprehend emotional situations in cases of mild dementia. PMID:26017202

  15. Production and discrimination of facial expressions by preschool children.

    PubMed

    Field, T M; Walden, T A

    1982-10-01

    Production and discrimination of the 8 basic facial expressions were investigated among 34 3-5-year-old preschool children. The children's productions were elicited and videotaped under 4 different prompt conditions (imitation of photographs of children's facial expressions, imitation of those in front of a mirror, imitation of those when given labels for the expressions, and when given only labels). Adults' "guesses" of the children's productions as well as the children's guesses of their own expressions on videotape were more accurate for the happy than afraid or angry expressions and for those expressions elicited during the imitation conditions. Greater accuracy of guessing by the adult than the child suggests that the children's productions were superior to their discriminations, although these skills appeared to be related. Children's production skills were also related to sociometric ratings by their peers and expressivity ratings by their teachers. These were not related to the child's age and only weakly related to the child's expressivity during classroom free-play observations. PMID:7140433

  16. Gaze Dynamics in the Recognition of Facial Expressions of Emotion.

    PubMed

    Barabanschikov, Vladimir A

    2015-01-01

    We studied which parts and features of the human face are preferentially fixated during recognition of facial expressions of emotion. Photographs of facial expressions were used; participants categorized them as basic emotions while their eye movements were recorded. Variation in the intensity of an expression was mirrored in the accuracy of emotion recognition, and was also reflected in several indices of oculomotor behaviour: the duration of inspection of particular areas of the face (its upper and lower or right parts, right and left sides), and the location, number and duration of fixations and the viewing trajectory. In particular, for low-intensity expressions the right side of the face was attended predominantly (right-side dominance); this right-side dominance effect was, however, absent for high-intensity expressions. For both low- and high-intensity expressions the upper part of the face was predominantly fixated, with greater fixation for high-intensity expressions. The majority of trials (70%), in line with findings in previous studies, revealed a V-shaped inspection trajectory. No relationship was found between the accuracy of recognition of emotional expressions and either the location and duration of fixations or the pattern of gaze direction across the face. PMID:26562915

  17. Inertial Sensor-Based Touch and Shake Metaphor for Expressive Control of 3D Virtual Avatars.

    PubMed

    Patil, Shashidhar; Chintalapalli, Harinadha Reddy; Kim, Dubeom; Chai, Youngho

    2015-01-01

    In this paper, we present an inertial sensor-based touch and shake metaphor for expressive control of a 3D virtual avatar in a virtual environment. An intuitive six degrees-of-freedom wireless inertial motion sensor is used as a gesture and motion control input device with a sensor fusion algorithm. The algorithm enables user hand motions to be tracked in 3D space via magnetic, angular rate, and gravity sensors. A quaternion-based complementary filter is implemented to reduce noise and drift. An algorithm based on dynamic time-warping is developed for efficient recognition of dynamic hand gestures with real-time automatic hand gesture segmentation. Our approach enables the recognition of gestures and estimates gesture variations for continuous interaction. We demonstrate the gesture expressivity using an interactive flexible gesture mapping interface for authoring and controlling a 3D virtual avatar and its motion by tracking user dynamic hand gestures. This synthesizes stylistic variations in a 3D virtual avatar, producing motions that are not present in the motion database using hand gesture sequences from a single inertial motion sensor. PMID:26094629
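
    Dynamic time warping, the matching step named in the abstract, can be sketched as follows; the toy 3D trajectories and gesture names are assumptions used only to make the example self-contained.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two trajectories
    a (N, D) and b (M, D) using Euclidean local cost."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

def classify(gesture, templates):
    """Assign the gesture to the nearest template by DTW distance."""
    return min(templates, key=lambda name: dtw_distance(gesture, templates[name]))

# Toy 3D hand trajectories (stand-ins for the fused inertial-sensor output).
t = np.linspace(0, 1, 20)[:, None]
templates = {
    "swipe_right": np.hstack([t, np.zeros_like(t), np.zeros_like(t)]),
    "raise":       np.hstack([np.zeros_like(t), t, np.zeros_like(t)]),
}
observed = np.hstack([t * 0.9, 0.05 * np.sin(6 * t), np.zeros_like(t)])
print(classify(observed, templates))
```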

  18. Inertial Sensor-Based Touch and Shake Metaphor for Expressive Control of 3D Virtual Avatars

    PubMed Central

    Patil, Shashidhar; Chintalapalli, Harinadha Reddy; Kim, Dubeom; Chai, Youngho

    2015-01-01

    In this paper, we present an inertial sensor-based touch and shake metaphor for expressive control of a 3D virtual avatar in a virtual environment. An intuitive six degrees-of-freedom wireless inertial motion sensor is used as a gesture and motion control input device with a sensor fusion algorithm. The algorithm enables user hand motions to be tracked in 3D space via magnetic, angular rate, and gravity sensors. A quaternion-based complementary filter is implemented to reduce noise and drift. An algorithm based on dynamic time-warping is developed for efficient recognition of dynamic hand gestures with real-time automatic hand gesture segmentation. Our approach enables the recognition of gestures and estimates gesture variations for continuous interaction. We demonstrate the gesture expressivity using an interactive flexible gesture mapping interface for authoring and controlling a 3D virtual avatar and its motion by tracking user dynamic hand gestures. This synthesizes stylistic variations in a 3D virtual avatar, producing motions that are not present in the motion database using hand gesture sequences from a single inertial motion sensor. PMID:26094629

  19. Modulation of incentivized dishonesty by disgust facial expressions.

    PubMed

    Lim, Julian; Ho, Paul M; Mullette-Gillman, O'Dhaniel A

    2015-01-01

    Disgust modulates moral decisions involving harming others. We recently specified that this effect is bi-directionally modulated by individual sensitivity to disgust. Here, we show that this effect generalizes to the moral domain of honesty and extends to outcomes with real-world impact. We employed a dice-rolling task in which participants were incentivized to dishonestly report outcomes to increase their potential final monetary payoff. Disgust or control facial expressions were presented subliminally on each trial. Our results reveal that the disgust facial expressions altered honest reporting as a bi-directional function moderated by individual sensitivity. Combining these data with those from prior experiments revealed that the effect of disgust presentation on both harm judgments and honesty could be accounted for by the same bidirectional function, with no significant effect of domain. This clearly demonstrates that disgust facial expressions produce the same modulation of moral judgments across different moral foundations (harm and honesty). Our results suggest strong overlap in the cognitive/neural processes of moral judgments across moral foundations, and provide a framework for further studies to specify the integration of emotional information in moral decision making. PMID:26257599

  20. Modulation of incentivized dishonesty by disgust facial expressions

    PubMed Central

    Lim, Julian; Ho, Paul M.; Mullette-Gillman, O'Dhaniel A.

    2015-01-01

    Disgust modulates moral decisions involving harming others. We recently specified that this effect is bi-directionally modulated by individual sensitivity to disgust. Here, we show that this effect generalizes to the moral domain of honesty and extends to outcomes with real-world impact. We employed a dice-rolling task in which participants were incentivized to dishonestly report outcomes to increase their potential final monetary payoff. Disgust or control facial expressions were presented subliminally on each trial. Our results reveal that the disgust facial expressions altered honest reporting as a bi-directional function moderated by individual sensitivity. Combining these data with those from prior experiments revealed that the effect of disgust presentation on both harm judgments and honesty could be accounted for by the same bidirectional function, with no significant effect of domain. This clearly demonstrates that disgust facial expressions produce the same modulation of moral judgments across different moral foundations (harm and honesty). Our results suggest strong overlap in the cognitive/neural processes of moral judgments across moral foundations, and provide a framework for further studies to specify the integration of emotional information in moral decision making. PMID:26257599

  1. Facial expression recognition based on improved DAGSVM

    NASA Astrophysics Data System (ADS)

    Luo, Yuan; Cui, Ye; Zhang, Yi

    2014-11-01

    To address the cumulative error caused by the arbitrary node ordering of traditional DAGSVM (Directed Acyclic Graph Support Vector Machine) classification, this paper presents an improved DAGSVM expression recognition method. The method uses the between-class distance and the standard deviation as the measure for ordering the classifiers, which minimizes the error rate in the upper levels of the classification graph. In addition, the paper combines the discrete cosine transform (DCT) with local binary patterns (LBP) to extract expression features, which serve as input to the improved DAGSVM classifier for recognition. Experimental results show that, compared with other multi-class support vector machine methods, the improved DAGSVM classifier achieves a higher recognition rate. When deployed on an intelligent wheelchair platform, experiments show that the method is also more robust.
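
    The DAG evaluation and the node-ordering idea can be sketched as below. The toy feature vectors stand in for the concatenated DCT + LBP descriptors, and the centroid-distance ordering is a simplification of the distance/standard-deviation criterion described in the abstract, so this is illustrative rather than a reimplementation of the paper's method.

```python
import numpy as np
from itertools import combinations
from sklearn.svm import SVC

def train_pairwise_svms(X, y):
    """One binary SVM per class pair, as used in a DAGSVM."""
    svms = {}
    for a, b in combinations(np.unique(y), 2):
        mask = (y == a) | (y == b)
        clf = SVC(kernel="linear")
        clf.fit(X[mask], y[mask])
        svms[(a, b)] = clf
    return svms

def dag_predict(x, svms, class_order):
    """Evaluate the DAG: at each node the pairwise SVM for the first and
    last remaining classes eliminates one of them. Ordering class_order so
    that well-separated classes are decided near the root reduces the
    chance of early errors propagating down the graph."""
    remaining = list(class_order)
    while len(remaining) > 1:
        a, b = remaining[0], remaining[-1]
        key = (a, b) if (a, b) in svms else (b, a)
        pred = svms[key].predict(x.reshape(1, -1))[0]
        remaining.remove(b if pred == a else a)
    return remaining[0]

# Toy expression features (stand-ins for concatenated DCT + LBP descriptors).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(loc=c, scale=0.3, size=(20, 8)) for c in range(4)])
y = np.repeat(np.arange(4), 20)

svms = train_pairwise_svms(X, y)
# Order classes by mean centroid distance (a stand-in separability measure).
centroids = np.array([X[y == c].mean(axis=0) for c in range(4)])
sep = np.linalg.norm(centroids[:, None] - centroids[None, :], axis=-1).mean(axis=1)
order = list(np.argsort(-sep))
print(dag_predict(X[5], svms, order))
```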

  2. Adaptation of video game UVW mapping to 3D visualization of gene expression patterns

    NASA Astrophysics Data System (ADS)

    Vize, Peter D.; Gerth, Victor E.

    2007-01-01

    Analysis of gene expression patterns within an organism plays a critical role in associating genes with biological processes in both health and disease. During embryonic development the analysis and comparison of different gene expression patterns allows biologists to identify candidate genes that may regulate the formation of normal tissues and organs and to search for genes associated with congenital diseases. No two individual embryos, or organs, are exactly the same shape or size so comparing spatial gene expression in one embryo to that in another is difficult. We will present our efforts in comparing gene expression data collected using both volumetric and projection approaches. Volumetric data is highly accurate but difficult to process and compare. Projection methods use UV mapping to align texture maps to standardized spatial frameworks. This approach is less accurate but is very rapid and requires very little processing. We have built a database of over 180 3D models depicting gene expression patterns mapped onto the surface of spline based embryo models. Gene expression data in different models can easily be compared to determine common regions of activity. Visualization software, both Java and OpenGL optimized for viewing 3D gene expression data will also be demonstrated.
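
    Once two expression patterns are projected onto the same UV framework, regions of common activity can be found by intersecting their binary masks on the shared texture grid. The sketch below uses random stand-in textures and an arbitrary threshold; it only illustrates such a comparison and is not the project's actual pipeline.

```python
import numpy as np

def common_activity(tex_a, tex_b, threshold=0.5):
    """Given two expression textures aligned on the same UV grid, return the
    binary mask of regions active in both and the Jaccard overlap."""
    a = tex_a > threshold
    b = tex_b > threshold
    both = a & b
    union = a | b
    jaccard = both.sum() / union.sum() if union.any() else 0.0
    return both, jaccard

# Two toy 64x64 UV textures for different genes mapped onto the same model.
rng = np.random.default_rng(2)
gene1 = rng.random((64, 64))
gene2 = rng.random((64, 64))
overlap_mask, jaccard = common_activity(gene1, gene2)
print(overlap_mask.sum(), round(jaccard, 3))
```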

  3. Covert processing of facial expressions by people with Williams syndrome.

    PubMed

    Levy, Yonata; Pluber, Hadas; Bentin, Shlomo

    2011-01-01

    Although individuals with Williams Syndrome (WS) are empathic and sociable and perform relatively well on face recognition tasks, they perform poorly on tasks of facial expression recognition. The current study sought to investigate this seeming inconsistency. Participants were tested on a Garner-type matching paradigm in which identities and expressions were manipulated simultaneously as the relevant or irrelevant dimensions. Performance of people with WS on the expression-matching task was poor and relied primarily on facilitation afforded by congruent identities. Performance on the identity matching task came close to the level of performance of matched controls and was significantly facilitated by congruent expressions. We discuss potential accounts for the discrepant processing of expressions in the task-relevant (overt) and task-irrelevant (covert) conditions, expanding on the inherently semantic-conceptual nature of overt expression matching and its dependence on general cognitive level. PMID:19853248

  4. Integrating Data Clustering and Visualization for the Analysis of 3D Gene Expression Data

    SciTech Connect

    Rubel, Oliver; Weber, Gunther H.; Huang, Min-Yu; Bethel, E. Wes; Biggin, Mark D.; Fowlkes, Charless C.; Hendriks, Cris L. Luengo; Keranen, Soile V. E.; Eisen, Michael B.; Knowles, David W.; Malik, Jitendra; Hagen, Hans; Hamann, Bernd

    2008-05-12

    The recent development of methods for extracting precise measurements of spatial gene expression patterns from three-dimensional (3D) image data opens the way for new analyses of the complex gene regulatory networks controlling animal development. We present an integrated visualization and analysis framework that supports user-guided data clustering to aid exploration of these new complex datasets. The interplay of data visualization and clustering-based data classification leads to improved visualization and enables a more detailed analysis than previously possible. We discuss (i) integration of data clustering and visualization into one framework; (ii) application of data clustering to 3D gene expression data; (iii) evaluation of the number of clusters k in the context of 3D gene expression clustering; and (iv) improvement of overall analysis quality via dedicated post-processing of clustering results based on visualization. We discuss the use of this framework to objectively define spatial pattern boundaries and temporal profiles of genes and to analyze how mRNA patterns are controlled by their regulatory transcription factors.
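
    A small sketch of the clustering step under simplified assumptions: random stand-in data in place of the 3D expression measurements, k-means from scikit-learn, and the silhouette score as one possible way to evaluate the number of clusters k. The framework described above additionally couples such clustering with interactive visualization, which is not reproduced here.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Stand-in for per-cell expression vectors extracted from 3D image data:
# each row is one cell, each column one gene's expression level.
rng = np.random.default_rng(3)
cells = np.vstack([rng.normal(loc=c, scale=0.4, size=(100, 5)) for c in (0, 2, 4)])

scores = {}
for k in range(2, 7):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(cells)
    scores[k] = silhouette_score(cells, labels)

best_k = max(scores, key=scores.get)
print(best_k, round(scores[best_k], 3))
```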

  5. Body Actions Change the Appearance of Facial Expressions

    PubMed Central

    Fantoni, Carlo; Gerbino, Walter

    2014-01-01

    Perception, cognition, and emotion do not operate along segregated pathways; rather, their adaptive interaction is supported by various sources of evidence. For instance, the aesthetic appraisal of powerful mood inducers like music can bias the facial expression of emotions towards mood congruency. In four experiments we showed similar mood-congruency effects elicited by the comfort/discomfort of body actions. Using a novel Motor Action Mood Induction Procedure, we let participants perform comfortable/uncomfortable visually-guided reaches and tested them in a facial emotion identification task. Through the alleged mediation of motor action induced mood, action comfort enhanced the quality of the participant’s global experience (a neutral face appeared happy and a slightly angry face neutral), while action discomfort made a neutral face appear angry and a slightly happy face neutral. Furthermore, uncomfortable (but not comfortable) reaching improved the sensitivity for the identification of emotional faces and reduced the identification time of facial expressions, as a possible effect of hyper-arousal from an unpleasant bodily experience. PMID:25251882

  6. Combining volumetric edge display and multiview display for expression of natural 3D images

    NASA Astrophysics Data System (ADS)

    Yasui, Ryota; Matsuda, Isamu; Kakeya, Hideki

    2006-02-01

    In the present paper the authors present a novel stereoscopic display method combining volumetric edge display technology and multiview display technology to present natural 3D images in which viewers do not suffer from the contradiction between binocular convergence and focal accommodation of the eyes, which causes eyestrain and sickness. We adopt a volumetric display method only for edge drawing, while we adopt a stereoscopic approach for the flat areas of the image. Since focal accommodation of our eyes is affected only by the edge part of the image, natural focal accommodation can be induced if the edges of the 3D image are drawn at the proper depth. Conventional stereo-matching techniques can give robust depth values for the pixels that constitute noticeable edges. Occlusion and gloss of objects can also be roughly expressed with the proposed method, since the stereoscopic approach is used for the flat areas. With this system, many users can view natural 3D objects in a consistent position and posture at the same time. A simple optometric experiment using a refractometer suggests that the proposed method can provide 3D images without contradiction between binocular convergence and focal accommodation.

  7. Lateralization for dynamic facial expressions in human superior temporal sulcus.

    PubMed

    De Winter, François-Laurent; Zhu, Qi; Van den Stock, Jan; Nelissen, Koen; Peeters, Ronald; de Gelder, Beatrice; Vanduffel, Wim; Vandenbulcke, Mathieu

    2015-02-01

    Most face processing studies in humans show stronger activation in the right compared to the left hemisphere. Evidence is largely based on studies with static stimuli focusing on the fusiform face area (FFA). Hence, the pattern of lateralization for dynamic faces is less clear. Furthermore, it is unclear whether this property is common to human and non-human primates due to predisposing processing strategies in the right hemisphere or that alternatively left sided specialization for language in humans could be the driving force behind this phenomenon. We aimed to address both issues by studying lateralization for dynamic facial expressions in monkeys and humans. Therefore, we conducted an event-related fMRI experiment in three macaques and twenty right handed humans. We presented human and monkey dynamic facial expressions (chewing and fear) as well as scrambled versions to both species. We studied lateralization in independently defined face-responsive and face-selective regions by calculating a weighted lateralization index (LIwm) using a bootstrapping method. In order to examine if lateralization in humans is related to language, we performed a separate fMRI experiment in ten human volunteers including a 'speech' expression (one syllable non-word) and its scrambled version. Both within face-responsive and selective regions, we found consistent lateralization for dynamic faces (chewing and fear) versus scrambled versions in the right human posterior superior temporal sulcus (pSTS), but not in FFA nor in ventral temporal cortex. Conversely, in monkeys no consistent pattern of lateralization for dynamic facial expressions was observed. Finally, LIwms based on the contrast between different types of dynamic facial expressions (relative to scrambled versions) revealed left-sided lateralization in human pSTS for speech-related expressions compared to chewing and emotional expressions. To conclude, we found consistent laterality effects in human posterior STS but not

  8. The effect of sad facial expressions on weight judgment

    PubMed Central

    Weston, Trent D.; Hass, Norah C.; Lim, Seung-Lark

    2015-01-01

    Although the body weight evaluation (e.g., normal or overweight) of others relies on perceptual impressions, it also can be influenced by other psychosocial factors. In this study, we explored the effect of task-irrelevant emotional facial expressions on judgments of body weight and the relationship between emotion-induced weight judgment bias and other psychosocial variables including attitudes toward obese persons. Forty-four participants were asked to quickly make binary body weight decisions for 960 randomized sad and neutral faces of varying weight levels presented on a computer screen. The results showed that sad facial expressions systematically decreased the decision threshold of overweight judgments for male faces. This perceptual decision bias by emotional expressions was positively correlated with the belief that being overweight is not under the control of obese persons. Our results provide experimental evidence that task-irrelevant emotional expressions can systematically change the decision threshold for weight judgments, demonstrating that sad expressions can make faces appear more overweight than they would otherwise be judged. PMID:25914669

  9. Facial expression training optimises viewing strategy in children and adults.

    PubMed

    Pollux, Petra M J; Hall, Sophie; Guo, Kun

    2014-01-01

    This study investigated whether training-related improvements in facial expression categorization are facilitated by spontaneous changes in gaze behaviour in adults and nine-year-old children. Four sessions of a self-paced, free-viewing training task required participants to categorize happy, sad and fear expressions of varying intensities. No instructions about eye movements were given. Eye movements were recorded in the first and fourth training sessions. New faces were introduced in session four to establish transfer effects of learning. Adults focused most on the eyes in all sessions, and the increase in expression categorization accuracy after training coincided with a strengthening of this eye bias in gaze allocation. In children, training-related behavioural improvements coincided with an overall shift in gaze focus towards the eyes (resulting in more adult-like gaze distributions) and towards the mouth for happy faces in the second fixation. Gaze distributions were not influenced by expression intensity or by the introduction of new faces. It was proposed that training enhanced the use of a uniform, predominantly eyes-biased gaze strategy in children in order to optimise the extraction of relevant cues for discrimination between subtle facial expressions. PMID:25144680

  10. Misinterpretation of Facial Expressions of Emotion in Verbal Adults with Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Eack, Shaun M.; Mazefsky, Carla A.; Minshew, Nancy J.

    2015-01-01

    Facial emotion perception is significantly affected in autism spectrum disorder, yet little is known about how individuals with autism spectrum disorder misinterpret facial expressions that result in their difficulty in accurately recognizing emotion in faces. This study examined facial emotion perception in 45 verbal adults with autism spectrum…

  11. Discriminative shared Gaussian processes for multiview and view-invariant facial expression recognition.

    PubMed

    Eleftheriadis, Stefanos; Rudovic, Ognjen; Pantic, Maja

    2015-01-01

    Images of facial expressions are often captured from various views as a result of either head movements or variable camera position. Existing methods for multiview and/or view-invariant facial expression recognition typically perform classification of the observed expression using either classifiers learned separately for each view or a single classifier learned for all views. However, these approaches ignore the fact that different views of a facial expression are just different manifestations of the same facial expression. By accounting for this redundancy, we can design more effective classifiers for the target task. To this end, we propose a discriminative shared Gaussian process latent variable model (DS-GPLVM) for multiview and view-invariant classification of facial expressions from multiple views. In this model, we first learn a discriminative manifold shared by multiple views of a facial expression. Subsequently, we perform facial expression classification in the expression manifold. Finally, classification of an observed facial expression is carried out either in the view-invariant manner (using only a single view of the expression) or in the multiview manner (using multiple views of the expression). The proposed model can also be used to perform fusion of different facial features in a principled manner. We validate the proposed DS-GPLVM on both posed and spontaneously displayed facial expressions from three publicly available datasets (MultiPIE, labeled face parts in the wild, and static facial expressions in the wild). We show that this model outperforms the state-of-the-art methods for multiview and view-invariant facial expression classification, and several state-of-the-art methods for multiview learning and feature fusion. PMID:25438312

  12. Fear Modulates Visual Awareness Similarly for Facial and Bodily Expressions

    PubMed Central

    Stienen, Bernard M. C.; de Gelder, Beatrice

    2011-01-01

    Background: Social interaction depends on a multitude of signals carrying information about the emotional state of others. But the relative importance of facial and bodily signals is still poorly understood. Past research has focused on the perception of facial expressions while perception of whole body signals has only been studied recently. In order to better understand the relative contribution of affective signals from the face only or from the whole body we performed two experiments using binocular rivalry. This method seems to be perfectly suitable to contrast two classes of stimuli to test our processing sensitivity to either stimulus and to address the question of how emotion modulates this sensitivity. Method: In the first experiment we directly contrasted fearful, angry, and neutral bodies and faces. We always presented bodies in one eye and faces in the other simultaneously for 60 s and asked participants to report what they perceived. In the second experiment we focused specifically on the role of fearful expressions of faces and bodies. Results: Taken together the two experiments show that there is no clear bias toward either the face or body when the expression of the body and face are neutral or angry. However, the perceptual dominance in favor of either the face or the body is a function of the stimulus class expressing fear. PMID:22125517

  13. Speed and accuracy of facial expression classification in avoidant personality disorder: a preliminary study.

    PubMed

    Rosenthal, M Zachary; Kim, Kwanguk; Herr, Nathaniel R; Smoski, Moria J; Cheavens, Jennifer S; Lynch, Thomas R; Kosson, David S

    2011-10-01

    The aim of this preliminary study was to examine whether individuals with avoidant personality disorder (APD) could be characterized by deficits in the classification of dynamically presented facial emotional expressions. Using a community sample of adults with APD (n = 17) and non-APD controls (n = 16), speed and accuracy of facial emotional expression recognition was investigated in a task that morphs facial expressions from neutral to prototypical expressions (Multi-Morph Facial Affect Recognition Task; Blair, Colledge, Murray, & Mitchell, 2001). Results indicated that individuals with APD were significantly more likely than controls to make errors when classifying fully expressed fear. However, no differences were found between groups in the speed to correctly classify facial emotional expressions. The findings are some of the first to investigate facial emotional processing in a sample of individuals with APD and point to an underlying deficit in processing social cues that may be involved in the maintenance of APD. PMID:22448805

  14. Deficits in the Mimicry of Facial Expressions in Parkinson's Disease

    PubMed Central

    Livingstone, Steven R.; Vezer, Esztella; McGarry, Lucy M.; Lang, Anthony E.; Russo, Frank A.

    2016-01-01

    Background: Humans spontaneously mimic the facial expressions of others, facilitating social interaction. This mimicking behavior may be impaired in individuals with Parkinson's disease, for whom the loss of facial movements is a clinical feature. Objective: To assess the presence of facial mimicry in patients with Parkinson's disease. Method: Twenty-seven non-depressed patients with idiopathic Parkinson's disease and 28 age-matched controls had their facial muscles recorded with electromyography while they observed presentations of calm, happy, sad, angry, and fearful emotions. Results: Patients exhibited reduced amplitude and delayed onset in the zygomaticus major muscle region (smiling response) following happy presentations (patients M = 0.02, 95% confidence interval [CI] −0.15 to 0.18, controls M = 0.26, CI 0.14 to 0.37, ANOVA, effect size [ES] = 0.18, p < 0.001). Although patients exhibited activation of the corrugator supercilii and medial frontalis (frowning response) following sad and fearful presentations, the frontalis response to sad presentations was attenuated relative to controls (patients M = 0.05, CI −0.08 to 0.18, controls M = 0.21, CI 0.09 to 0.34, ANOVA, ES = 0.07, p = 0.017). The amplitude of patients' zygomaticus activity in response to positive emotions was found to be negatively correlated with response times for ratings of emotional identification, suggesting a motor-behavioral link (r = –0.45, p = 0.02, two-tailed). Conclusions: Patients showed decreased mimicry overall, mimicking other peoples' frowns to some extent, but presenting with profoundly weakened and delayed smiles. These findings open a new avenue of inquiry into the “masked face” syndrome of PD. PMID:27375505

  15. Turning Avatar into Realistic Human Expression Using Linear and Bilinear Interpolations

    NASA Astrophysics Data System (ADS)

    Hazim Alkawaz, Mohammed; Mohamad, Dzulkifli; Rehman, Amjad; Basori, Ahmad Hoirul

    2014-06-01

    Facial animation based on 3D facial data is well supported by laser scanning and advanced 3D tools for producing complex facial models. However, such approaches still lack facial expressions driven by emotional state. Facial skin colour, which is closely related to human emotion, is needed to enhance the expressiveness of facial expressions. This paper presents innovative techniques for facial animation transformation using facial skin colour based on linear interpolation and bilinear interpolation. The generated expressions are close to genuine human expressions and enhance the facial expressiveness of the virtual human.
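
    The two interpolation schemes named in the abstract can be illustrated numerically on RGB skin colour as follows; the endpoint colours and weights are illustrative values, not taken from the paper.

```python
import numpy as np

def lerp(c0, c1, t):
    """Linear interpolation between two RGB colours, t in [0, 1]."""
    return (1.0 - t) * np.asarray(c0, float) + t * np.asarray(c1, float)

def bilerp(c00, c10, c01, c11, u, v):
    """Bilinear interpolation between four corner colours,
    e.g. blending along two expression/emotion axes at once."""
    return lerp(lerp(c00, c10, u), lerp(c01, c11, u), v)

neutral = [224, 172, 145]   # neutral skin tone (illustrative RGB)
flushed = [236, 128, 120]   # angry / flushed tone
pale    = [230, 200, 185]   # fearful / pale tone
blush   = [240, 160, 150]   # happy / blushing tone

print(lerp(neutral, flushed, 0.5))                     # halfway to a flushed face
print(bilerp(neutral, flushed, pale, blush, 0.3, 0.7)) # blend on both axes
```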

  16. Errors in identifying and expressing emotion in facial expressions, voices, and postures unique to social anxiety.

    PubMed

    Walker, Amy S; Nowicki, Stephen; Jones, Jeffrey; Heimann, Lisa

    2011-01-01

    The purpose of the present study was to see if 7-10-year-old socially anxious children (n = 26) made systematic errors in identifying and sending emotions in facial expressions, paralanguage, and postures as compared with the more random errors of children who were inattentive-hyperactive (n = 21). It was found that socially anxious children made more errors in identifying anger and fear in children's facial expressions and anger in adults' postures and in expressing anger in their own facial expressions than did their inattentive-hyperactive peers. Results suggest that there may be systematic difficulties specifically in visual nonverbal emotion communication that contribute to the personal and social difficulties socially anxious children experience. PMID:21902007

  17. Face recognition using facial expression: a novel approach

    NASA Astrophysics Data System (ADS)

    Singh, Deepak Kumar; Gupta, Priya; Tiwary, U. S.

    2008-04-01

    Facial expressions are undoubtedly among the most effective forms of nonverbal communication. The face has always been equated with a person's identity; it draws the demarcation line between identity and extinction, and each line on the face adds an attribute to that identity. These lines become prominent when we experience an emotion, and they do not change completely with age. In this paper we propose a new technique for face recognition which focuses on the facial expressions of the subject to identify the face, an area that has so far received little attention. According to earlier research it is difficult to alter one's natural expression, so our technique will be beneficial for identifying occluded or intentionally disguised faces. The results of the experiments conducted indicate that this technique can give a new direction to the field of face recognition, providing a strong base for the area and a core method for critical defense- and security-related applications.

  18. Shared Gaussian Process Latent Variable Models for Handling Ambiguous Facial Expressions

    NASA Astrophysics Data System (ADS)

    Ek, Carl Henrik; Jaeckel, Peter; Campbell, Neill; Lawrence, Neil D.; Melhuish, Chris

    2009-03-01

    Although, in reality, facial expressions result from muscle actions, facial expression models typically assume the inverse functional relationship, treating muscle action as a function of facial expression. Facial expression should instead be expressed as a function of muscle action, the reverse of what has previously been suggested. Furthermore, a human facial expression space and a robot's actuator space share some features, but each also has features the other lacks, which suggests modelling shared and non-shared feature variance separately. To this end we propose Shared Gaussian Process Latent Variable Models (Shared GP-LVMs) for facial expression modelling, which assume shared and private features between an input and an output space. In this work we focus on detecting ambiguities within data sets of facial behaviour. We suggest ways of modelling and mapping facial motion from a representation of human facial expressions to a robot's actuator space, aiming to compensate for ambiguities caused by interference of global with local head motion and by the constrained nature of the Active Appearance Models used for tracking.
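    As a rough illustration of the idea of a shared low-dimensional space linking two observation spaces, the sketch below uses canonical correlation analysis, a linear stand-in rather than the Shared GP-LVM of the paper, to learn a mapping from hypothetical human expression features to hypothetical robot actuator values. All data shapes and names are assumptions.

      import numpy as np
      from sklearn.cross_decomposition import CCA

      rng = np.random.default_rng(0)

      # Hypothetical data: 200 frames of 20-D human expression features (e.g. AAM
      # parameters) and corresponding 8-D robot actuator commands, both generated
      # here from a 3-D "true" shared latent space plus noise.
      latent      = rng.normal(size=(200, 3))
      human_feats = latent @ rng.normal(size=(3, 20)) + 0.1 * rng.normal(size=(200, 20))
      robot_cmds  = latent @ rng.normal(size=(3, 8))  + 0.1 * rng.normal(size=(200, 8))

      # Fit a 3-component shared (canonical) space and map new expression frames
      # to actuator commands through it.
      cca = CCA(n_components=3)
      cca.fit(human_feats, robot_cmds)
      predicted_cmds = cca.predict(human_feats[:5])      # shape (5, 8)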

  19. Face in profile view reduces perceived facial expression intensity: an eye-tracking study.

    PubMed

    Guo, Kun; Shaw, Heather

    2015-02-01

    Recent studies measuring the facial expressions of emotion have focused primarily on the perception of frontal face images. As we frequently encounter expressive faces from different viewing angles, having a mechanism which allows invariant expression perception would be advantageous to our social interactions. Although a couple of studies have indicated comparable expression categorization accuracy across viewpoints, it is unknown how perceived expression intensity and associated gaze behaviour change across viewing angles. Differences could arise because diagnostic cues from local facial features for decoding expressions could vary with viewpoint. Here we manipulated the orientation of faces (frontal, mid-profile, and profile view) displaying six common facial expressions of emotion, and measured participants' expression categorization accuracy, perceived expression intensity and associated gaze patterns. In comparison with frontal faces, profile faces slightly reduced identification rates for disgust and sad expressions, but significantly decreased perceived intensity for all tested expressions. Although, quantitatively, viewpoint had an expression-specific influence on the proportion of fixations directed at local facial features, the qualitative gaze distribution within facial features (e.g., the eyes tended to attract the highest proportion of fixations, followed by the nose and then the mouth region) was independent of viewpoint and expression type. Our results suggest that viewpoint-invariant facial expression processing is categorical in nature, and could be linked to a viewpoint-invariant holistic gaze strategy for extracting expressive facial cues. PMID:25531122

  20. Social phobics do not misinterpret facial expression of emotion.

    PubMed

    Philippot, Pierre; Douilliez, Céline

    2005-05-01

    Attentional biases in the processing of threatening facial expressions in social anxiety are well documented. It is generally assumed that these attentional biases originate in an evaluative bias: socially threatening information would be evaluated more negatively by socially anxious individuals. However, three studies have failed to evidence a negative evaluative bias in the processing of emotional facial expression (EFE) in socially anxious individuals. These studies, however, suffer from several methodological limitations that the present study has attempted to overcome. Twenty-one out-patients diagnosed with generalized social phobia were compared to 20 out-patients diagnosed with another anxiety disorder and to 39 normal controls matched for gender, age and level of education. They had to decode, on seven emotion intensity scales, a set of 40 EFE whose intensity and emotional nature were manipulated. Although sufficient statistical power was ensured, no differences among groups could be found in terms of decoding accuracy, attributed emotion intensity, or reported difficulty of the task. Based on these findings as well as on other evidence, we propose that, if they exist, evaluative biases in social anxiety should be implicit and automatic and that they might be determined by the relevance of the stimulus to the person's concerns rather than by the stimulus valence. The implications of these findings for the interpersonal processes involved in social phobia are discussed. PMID:15865918

  1. Fetal facial expression in response to intravaginal music emission

    PubMed Central

    García-Faura, Álex; Prats-Galino, Alberto

    2015-01-01

    This study compared fetal response to musical stimuli applied intravaginally (intravaginal music [IVM]) with application via emitters placed on the mother’s abdomen (abdominal music [ABM]). Responses were quantified by recording facial movements identified on 3D/4D ultrasound. One hundred and six normal pregnancies between 14 and 39 weeks of gestation were randomized to 3D/4D ultrasound with: (a) ABM with standard headphones (flute monody at 98.6 dB); (b) IVM with a specially designed device emitting the same monody at 53.7 dB; or (c) intravaginal vibration (IVV; 125 Hz) at 68 dB with the same device. Facial movements were quantified at baseline, during stimulation, and for 5 minutes after stimulation was discontinued. In fetuses at a gestational age of >16 weeks, IVM elicited mouthing (MT) and tongue expulsion (TE) in 86.7% and 46.6% of fetuses, respectively, with significant differences when compared with ABM and IVV (p = 0.002 and p = 0.004, respectively). There were no changes from baseline in ABM and IVV. TE occurred ≥5 times in 5 minutes in 13.3% with IVM. IVM was related with higher occurrence of MT (odds ratio = 10.980; 95% confidence interval = 3.105–47.546) and TE (odds ratio = 10.943; 95% confidence interval = 2.568–77.037). The frequency of TE with IVM increased significantly with gestational age (p = 0.024). Fetuses at 16–39 weeks of gestation respond to intravaginally emitted music with repetitive MT and TE movements not observed with ABM or IVV. Our findings suggest that neural pathways participating in the auditory–motor system are developed as early as gestational week 16. These findings might contribute to diagnostic methods for prenatal hearing screening, and research into fetal neurological stimulation. PMID:26539240

  2. Visualisation of BioPAX Networks using BioLayout Express 3D

    PubMed Central

    Wright, Derek W.; Angus, Tim; Enright, Anton J.; Freeman, Tom C.

    2014-01-01

    BioLayout Express 3D is a network analysis tool designed for the visualisation and analysis of graphs derived from biological data. It has proved to be powerful in the analysis of gene expression data, biological pathways and in a range of other applications. In version 3.2 of the tool we have introduced the ability to import, merge and display pathways and protein interaction networks available in the BioPAX Level 3 standard exchange format. A graphical interface allows users to search for pathways or interaction data stored in the Pathway Commons database. Queries using either gene/protein or pathway names are made via the cPath2 client and users can also define the source and/or species of information that they wish to examine. Data matching a query are listed and individual records may be viewed in isolation or merged using an ‘Advanced’ query tab. A visualisation scheme has been defined by mapping BioPAX entity types to a range of glyphs. Graphs of these data can be viewed and explored within BioLayout as 2D or 3D graph layouts, where they can be edited and/or exported for visualisation and editing within other tools. PMID:25949802

  3. Visualisation of BioPAX Networks using BioLayout Express (3D).

    PubMed

    Wright, Derek W; Angus, Tim; Enright, Anton J; Freeman, Tom C

    2014-01-01

    BioLayout Express (3D) is a network analysis tool designed for the visualisation and analysis of graphs derived from biological data. It has proved to be powerful in the analysis of gene expression data, biological pathways and in a range of other applications. In version 3.2 of the tool we have introduced the ability to import, merge and display pathways and protein interaction networks available in the BioPAX Level 3 standard exchange format. A graphical interface allows users to search for pathways or interaction data stored in the Pathway Commons database. Queries using either gene/protein or pathway names are made via the cPath2 client and users can also define the source and/or species of information that they wish to examine. Data matching a query are listed and individual records may be viewed in isolation or merged using an 'Advanced' query tab. A visualisation scheme has been defined by mapping BioPAX entity types to a range of glyphs. Graphs of these data can be viewed and explored within BioLayout as 2D or 3D graph layouts, where they can be edited and/or exported for visualisation and editing within other tools. PMID:25949802

  4. PointCloudExplore 2: Visual exploration of 3D gene expression

    SciTech Connect

    International Research Training Group Visualization of Large and Unstructured Data Sets, University of Kaiserslautern, Germany; Institute for Data Analysis and Visualization, University of California, Davis, CA; Computational Research Division, Lawrence Berkeley National Laboratory , Berkeley, CA; Genomics Division, LBNL; Computer Science Department, University of California, Irvine, CA; Computer Science Division,University of California, Berkeley, CA; Life Sciences Division, LBNL; Department of Molecular and Cellular Biology and the Center for Integrative Genomics, University of California, Berkeley, CA; Ruebel, Oliver; Rubel, Oliver; Weber, Gunther H.; Huang, Min-Yu; Bethel, E. Wes; Keranen, Soile V.E.; Fowlkes, Charless C.; Hendriks, Cris L. Luengo; DePace, Angela H.; Simirenko, L.; Eisen, Michael B.; Biggin, Mark D.; Hagen, Hand; Malik, Jitendra; Knowles, David W.; Hamann, Bernd

    2008-03-31

    To better understand how developmental regulatory networks are defined in the genome sequence, the Berkeley Drosophila Transcription Network Project (BDTNP) has developed a suite of methods to describe 3D gene expression data, i.e., the output of the network at cellular resolution for multiple time points. To allow researchers to explore these novel data sets we have developed PointCloudXplore (PCX). In PCX we have linked physical and information visualization views via the concept of brushing (cell selection). For each view, dedicated operations for performing selection of cells are available. In PCX, all cell selections are stored in a central management system. Cells selected in one view can in this way be highlighted in any view, allowing further cell subset properties to be determined. Complex cell queries can be defined by combining different cell selections using logical operations such as AND, OR, and NOT. Here we provide an overview of PointCloudXplore 2 (PCX2), the latest publicly available version of PCX. PCX2 has shown to be an effective tool for visual exploration of 3D gene expression data. We discuss (i) all views available in PCX2, (ii) different strategies to perform cell selection, (iii) the basic architecture of PCX2, and (iv) illustrate the usefulness of PCX2 using selected examples.
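    The brushing model described above, in which selections are combined with AND, OR, and NOT, maps naturally onto boolean masks. The sketch below is a minimal, hypothetical illustration of that idea rather than PCX2 code: each selection is a boolean array over cells, and queries are composed with element-wise logic. The gene names and thresholds are assumptions.

      import numpy as np

      rng = np.random.default_rng(1)
      n_cells = 6000

      # Hypothetical per-cell measurements (two expression levels and an x position).
      eve  = rng.random(n_cells)
      ftz  = rng.random(n_cells)
      xpos = rng.random(n_cells)

      # Each "brush" is a boolean selection over cells.
      sel_high_eve = eve > 0.8
      sel_high_ftz = ftz > 0.8
      sel_anterior = xpos < 0.3

      # Compose selections as in the PCX query model: AND, OR, NOT.
      query = (sel_high_eve | sel_high_ftz) & ~sel_anterior
      selected_ids = np.flatnonzero(query)
      print(f"{selected_ids.size} cells match the query")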

  5. Facial expressions in common marmosets (Callithrix jacchus) and their use by conspecifics.

    PubMed

    Kemp, Caralyn; Kaplan, Gisela

    2013-09-01

    Facial expressions have been studied mainly in chimpanzees and have been shown to be important social signals. In platyrrhine and strepsirrhine primates, it has been doubted that facial expressions are differentiated enough, or the species socially capable enough, for facial expressions to be part of their communication system. However, in a series of experiments presenting olfactory, auditory and visual stimuli, we found that common marmosets (Callithrix jacchus) displayed an unexpected variety of facial expressions. In particular, olfactory and auditory stimuli elicited obvious facial displays (such as disgust), some of which are reported here for the first time. We asked whether specific facial responses to food and predator-related stimuli might act as social signals to conspecifics. We recorded two contrasting facial expressions (fear and pleasure) as separate sets of video clips and then presented these to cage mates of those marmosets shown in the images, while tempting the subject with food. Results show that the expression of a fearful face on screen significantly reduced time spent near the food bowl compared to the duration when a face showing pleasure was screened. This responsiveness to a cage mate's facial expressions suggests that the evolution of facial signals may have occurred much earlier in primate evolution than had been thought. PMID:23412667

  6. Dynamic properties influence the perception of facial expressions.

    PubMed

    Kamachi, Miyuki; Bruce, Vicki; Mukaida, Shigeru; Gyoba, Jiro; Yoshikawa, Sakiko; Akamatsu, Shigeru

    2013-01-01

    Two experiments were conducted to investigate the role played by dynamic information in identifying facial expressions of emotion. Dynamic expression sequences were created by generating and displaying morph sequences which changed the face from neutral to a peak expression in different numbers of intervening intermediate stages, to create fast (6 frames), medium (26 frames), and slow (101 frames) sequences. In experiment 1, participants were asked to describe what the person shown in each sequence was feeling. Sadness was more accurately identified when slow sequences were shown. Happiness, and to some extent surprise, was better from faster sequences, while anger was most accurately detected from the sequences of medium pace. In experiment 2 we used an intensity-rating task and static images as well as dynamic ones to examine whether effects were due to total time of the displays or to the speed of sequence. Accuracies of expression judgments were derived from the rated intensities and the results were similar to those of experiment 1 for angry and sad expressions (surprised and happy were close to ceiling). Moreover, the effect of display time was found only for dynamic expressions and not for static ones, suggesting that it was speed, not time, which was responsible for these effects. These results suggest that representations of basic expressions of emotion encode information about dynamic as well as static properties. PMID:24601038
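    A hedged sketch of how such morph sequences might be generated follows: a simple cross-dissolve between a neutral and a peak-expression image over a chosen number of frames. The real stimuli used shape-and-texture morphing, so this is only an approximation, and the image arrays are hypothetical.

      import numpy as np

      def cross_dissolve(neutral, peak, n_frames):
          """Return a list of frames blending linearly from neutral to peak."""
          frames = []
          for k in range(n_frames):
              t = k / (n_frames - 1)                 # 0.0 at neutral, 1.0 at peak
              frames.append((1.0 - t) * neutral + t * peak)
          return frames

      # Hypothetical grey-scale images in [0, 1].
      neutral = np.random.default_rng(2).random((128, 128))
      peak    = np.clip(neutral + 0.2, 0.0, 1.0)

      fast   = cross_dissolve(neutral, peak, 6)      # 6-frame sequence
      medium = cross_dissolve(neutral, peak, 26)     # 26-frame sequence
      slow   = cross_dissolve(neutral, peak, 101)    # 101-frame sequence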

  7. Capturing Physiology of Emotion along Facial Muscles: A Method of Distinguishing Feigned from Involuntary Expressions

    NASA Astrophysics Data System (ADS)

    Khan, Masood Mehmood; Ward, Robert D.; Ingleby, Michael

    The ability to distinguish feigned from involuntary expressions of emotions could help in the investigation and treatment of neuropsychiatric and affective disorders and in the detection of malingering. This work investigates differences in emotion-specific patterns of thermal variations along the major facial muscles. Using experimental data extracted from 156 images, we attempted to classify patterns of emotion-specific thermal variations into neutral, and voluntary and involuntary expressions of positive and negative emotive states. Initial results suggest (i) each facial muscle exhibits a unique thermal response to various emotive states; (ii) the pattern of thermal variances along the facial muscles may assist in classifying voluntary and involuntary facial expressions; and (iii) facial skin temperature measurements along the major facial muscles may be used in automated emotion assessment.

  8. The Mysterious Noh Mask: Contribution of Multiple Facial Parts to the Recognition of Emotional Expressions

    PubMed Central

    Miyata, Hiromitsu; Nishimura, Ritsuko; Okanoya, Kazuo; Kawai, Nobuyuki

    2012-01-01

    Background A Noh mask worn by expert actors when performing in a Japanese traditional Noh drama is suggested to convey countless different facial expressions according to different angles of head/body orientation. The present study addressed the question of how different facial parts of a Noh mask, including the eyebrows, the eyes, and the mouth, may contribute to different emotional expressions. Both experimental situations of active creation and passive recognition of emotional facial expressions were introduced. Methodology/Principal Findings In Experiment 1, participants either created happy or sad facial expressions, or imitated a face that looked up or down, by actively changing each facial part of a Noh mask image presented on a computer screen. For an upward tilted mask, the eyebrows and the mouth shared common features with sad expressions, whereas the eyes shared common features with happy expressions. This contingency tended to be reversed for a downward tilted mask. Experiment 2 further examined which facial parts of a Noh mask are crucial in determining emotional expressions. Participants were exposed to synthesized Noh mask images with different facial parts expressing different emotions. Results clearly revealed that participants primarily used the shape of the mouth in judging emotions. The facial images having the mouth of an upward/downward tilted Noh mask strongly tended to be evaluated as sad/happy, respectively. Conclusions/Significance The results suggest that Noh masks express chimeric emotional patterns, with different facial parts conveying different emotions. This appears consistent with the principles of Noh, which highly appreciate subtle and composite emotional expressions, as well as with the mysterious facial expressions observed in Western art. It was further demonstrated that the mouth serves as a diagnostic feature in characterizing the emotional expressions. This indicates the superiority of biologically-driven factors over the traditionally

  9. Facial expression analysis for estimating patient's emotional states in RPMS.

    PubMed

    Hosseini, H Gholam; Krechowec, Z

    2004-01-01

    Currently, a range of remote patient monitoring systems (RPMS) are being developed to care for patients at home rather than in the costly hospital environment. These systems allow remote monitoring by health professionals to take place with minimal medical intervention. However, they are still not as effective as one-on-one human interaction. The face and its features can convey a patient's cognitive and emotional states faster than electrical signals, and facial expression can be considered one of the most powerful features of an RPMS. We present image pre-processing and enhancement techniques for face recognition applications. In particular, the project aims to improve the performance of RPMS by taking into account the cognitive and emotional state of patients, developing a more human-like RPMS. The techniques use the grey-scale values of the images and extract efficient facial features. The extracted information is fed into the input layer of an artificial neural network for face identification. In addition, the colour images are used by the recognition algorithm to eliminate non-skin-coloured background and reduce further processing time. A database of real images is used for testing the algorithms. PMID:17271985
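    As a rough illustration of the two steps mentioned above, colour-based removal of non-skin background followed by grey-scale features fed to a neural network, the sketch below uses OpenCV's YCrCb skin-colour heuristic and scikit-learn's MLP classifier. The threshold values, image sizes, and array names are assumptions, not the paper's settings.

      import cv2
      import numpy as np
      from sklearn.neural_network import MLPClassifier

      def skin_mask(bgr_image):
          """Crude skin segmentation via fixed Cr/Cb bounds (a common heuristic)."""
          ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
          return cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127)) > 0

      def grey_features(bgr_image, size=(32, 32)):
          """Masked grey-scale pixel values as a flat feature vector."""
          grey = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
          grey[~skin_mask(bgr_image)] = 0                # suppress non-skin background
          return cv2.resize(grey, size).astype(np.float32).ravel() / 255.0

      # Hypothetical training data: face images and identity labels.
      images = [np.random.randint(0, 255, (64, 64, 3), dtype=np.uint8) for _ in range(20)]
      labels = [i % 4 for i in range(20)]

      X = np.stack([grey_features(img) for img in images])
      clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500).fit(X, labels)
      predicted_identity = clf.predict(X[:1])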

  10. Facial expression of positive emotions in individuals with eating disorders.

    PubMed

    Dapelo, Marcela M; Hart, Sharon; Hale, Christiane; Morris, Robin; Lynch, Thomas R; Tchanturia, Kate

    2015-11-30

    A large body of research has associated Eating Disorders with difficulties in socio-emotional functioning and it has been argued that they may serve to maintain the illness. This study aimed to explore facial expressions of positive emotions in individuals with Anorexia Nervosa (AN) and Bulimia Nervosa (BN) compared to healthy controls (HC), through an examination of the Duchenne smile (DS), which has been associated with feelings of enjoyment, amusement and happiness (Ekman et al., 1990). Sixty participants (AN=20; BN=20; HC=20) were videotaped while watching a humorous film clip. The duration and intensity of DS were subsequently analyzed using the facial action coding system (FACS) (Ekman and Friesen, 2003). Participants with AN displayed DS for shorter durations than BN and HC participants, and their DS had lower intensity. In the clinical groups, lower duration and intensity of DS were associated with lower BMI, and use of psychotropic medication. The study is the first to explore DS in people with eating disorders, providing further evidence of difficulties in the socio-emotional domain in people with AN. PMID:26323166

  11. Do Dynamic Facial Expressions Convey Emotions to Children Better than Do Static Ones?

    ERIC Educational Resources Information Center

    Widen, Sherri C.; Russell, James A.

    2015-01-01

    Past research has shown that children recognize emotions from facial expressions poorly and improve only gradually with age, but the stimuli in such studies have been static faces. Because dynamic faces include more information, it may well be that children more readily recognize emotions from dynamic facial expressions. The current study of…

  12. Does Gaze Direction Modulate Facial Expression Processing in Children with Autism Spectrum Disorder?

    ERIC Educational Resources Information Center

    Akechi, Hironori; Senju, Atsushi; Kikuchi, Yukiko; Tojo, Yoshikuni; Osanai, Hiroo; Hasegawa, Toshikazu

    2009-01-01

    Two experiments investigated whether children with autism spectrum disorder (ASD) integrate relevant communicative signals, such as gaze direction, when decoding a facial expression. In Experiment 1, typically developing children (9-14 years old; n = 14) were faster at detecting a facial expression accompanying a gaze direction with a congruent…

  13. Recognition of Facial Expressions and Prosodic Cues with Graded Emotional Intensities in Adults with Asperger Syndrome

    ERIC Educational Resources Information Center

    Doi, Hirokazu; Fujisawa, Takashi X.; Kanai, Chieko; Ohta, Haruhisa; Yokoi, Hideki; Iwanami, Akira; Kato, Nobumasa; Shinohara, Kazuyuki

    2013-01-01

    This study investigated the ability of adults with Asperger syndrome to recognize emotional categories of facial expressions and emotional prosodies with graded emotional intensities. The individuals with Asperger syndrome showed poorer recognition performance for angry and sad expressions from both facial and vocal information. The group…

  14. Evaluating Posed and Evoked Facial Expressions of Emotion from Adults with Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Faso, Daniel J.; Sasson, Noah J.; Pinkham, Amy E.

    2015-01-01

    Though many studies have examined facial affect perception by individuals with autism spectrum disorder (ASD), little research has investigated how facial expressivity in ASD is perceived by others. Here, naïve female observers (n = 38) judged the intensity, naturalness and emotional category of expressions produced by adults with ASD (n = 6) and…

  15. Compound facial expressions of emotion: from basic research to clinical applications

    PubMed Central

    Du, Shichuan; Martinez, Aleix M.

    2015-01-01

    Emotions are sometimes revealed through facial expressions. When these natural facial articulations involve the contraction of the same muscle groups in people of distinct cultural upbringings, this is taken as evidence of a biological origin of these emotions. While past research had identified facial expressions associated with a single internally felt category (eg, the facial expression of happiness when we feel joyful), we have recently studied facial expressions observed when people experience compound emotions (eg, the facial expression of happy surprise when we feel joyful in a surprised way, as, for example, at a surprise birthday party). Our research has identified 17 compound expressions consistently produced across cultures, suggesting that the number of facial expressions of emotion of biological origin is much larger than previously believed. The present paper provides an overview of these findings and shows evidence supporting the view that spontaneous expressions are produced using the same facial articulations previously identified in laboratory experiments. We also discuss the implications of our results in the study of psychopathologies, and consider several open research questions. PMID:26869845

  16. Brief Report: Representational Momentum for Dynamic Facial Expressions in Pervasive Developmental Disorder

    ERIC Educational Resources Information Center

    Uono, Shota; Sato, Wataru; Toichi, Motomi

    2010-01-01

    Individuals with pervasive developmental disorder (PDD) have difficulty with social communication via emotional facial expressions, but behavioral studies involving static images have reported inconsistent findings about emotion recognition. We investigated whether dynamic presentation of facial expression would enhance subjective perception of…

  17. Mu desynchronization during observation and execution of facial expressions in 30-month-old children.

    PubMed

    Rayson, Holly; Bonaiuto, James John; Ferrari, Pier Francesco; Murray, Lynne

    2016-06-01

    Simulation theories propose that observing another's facial expression activates sensorimotor representations involved in the execution of that expression, facilitating recognition processes. The mirror neuron system (MNS) is a potential mechanism underlying simulation of facial expressions, with like neural processes activated both during observation and performance. Research with monkeys and adult humans supports this proposal, but so far there have been no investigations of facial MNS activity early in human development. The current study used electroencephalography (EEG) to explore mu rhythm desynchronization, an index of MNS activity, in 30-month-old children as they observed videos of dynamic emotional and non-emotional facial expressions, as well as scrambled versions of the same videos. We found significant mu desynchronization in central regions during observation and execution of both emotional and non-emotional facial expressions, which was right-lateralized for emotional and bilateral for non-emotional expressions during observation. These findings support previous research suggesting movement simulation during observation of facial expressions, and are the first to provide evidence for sensorimotor activation during observation of facial expressions, consistent with a functioning facial MNS at an early stage of human development. PMID:27261926
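    Mu desynchronization is typically quantified as the drop in 8-13 Hz power during observation or execution relative to a baseline period. A minimal sketch of that computation follows; it is not the study's pipeline, and the sampling rate, epoch lengths, and electrode are assumptions.

      import numpy as np
      from scipy.signal import welch

      fs = 250  # sampling rate in Hz (assumed)

      def mu_power(epoch):
          """Mean power in the 8-13 Hz mu band for one EEG epoch (1-D array)."""
          freqs, psd = welch(epoch, fs=fs, nperseg=fs)
          band = (freqs >= 8) & (freqs <= 13)
          return psd[band].mean()

      # Hypothetical central-electrode epochs (e.g. C3), 2 s each.
      rng = np.random.default_rng(3)
      baseline_epoch = rng.normal(size=2 * fs)
      observe_epoch  = rng.normal(size=2 * fs)

      # Event-related desynchronization: negative values indicate mu suppression.
      erd = (mu_power(observe_epoch) - mu_power(baseline_epoch)) / mu_power(baseline_epoch)
      print(f"mu ERD = {erd:.2%}")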

  18. Effectiveness of Teaching Naming Facial Expression to Children with Autism via Video Modeling

    ERIC Educational Resources Information Center

    Akmanoglu, Nurgul

    2015-01-01

    This study aims to examine the effectiveness of teaching naming emotional facial expression via video modeling to children with autism. Teaching the naming of emotions (happy, sad, scared, disgusted, surprised, feeling physical pain, and bored) was made by creating situations that lead to the emergence of facial expressions to children…

  19. Preschooler's Faces in Spontaneous Emotional Contexts--How Well Do They Match Adult Facial Expression Prototypes?

    ERIC Educational Resources Information Center

    Gaspar, Augusta; Esteves, Francisco G.

    2012-01-01

    Prototypical facial expressions of emotion, also known as universal facial expressions, are the underpinnings of most research concerning recognition of emotions in both adults and children. Data on natural occurrences of these prototypes in natural emotional contexts are rare and difficult to obtain in adults. By recording naturalistic…

  20. The Relationship between Processing Facial Identity and Emotional Expression in 8-Month-Old Infants

    ERIC Educational Resources Information Center

    Schwarzer, Gudrun; Jovanovic, Bianca

    2010-01-01

    In Experiment 1, it was investigated whether infants process facial identity and emotional expression independently or in conjunction with one another. Eight-month-old infants were habituated to two upright or two inverted faces varying in facial identity and emotional expression. Infants were tested with a habituation face, a switch face, and a…

  1. Can Healthy Fetuses Show Facial Expressions of “Pain” or “Distress”?

    PubMed Central

    Reissland, Nadja; Francis, Brian; Mason, James

    2013-01-01

    Background With advances of research on fetal behavioural development, the question of whether we can identify fetal facial expressions and determine their developmental progression, takes on greater importance. In this study we investigate longitudinally the increasing complexity of combinations of facial movements from 24 to 36 weeks gestation in a sample of healthy fetuses using frame-by-frame coding of 4-D ultrasound scans. The primary aim was to examine whether these complex facial movements coalesce into a recognisable facial expression of pain/distress. Methodology/Findings Fifteen fetuses (8 girls, 7 boys) were observed four times in the second and third trimester of pregnancy. Fetuses showed significant progress towards more complex facial expressions as gestational age increased. Statistical analysis of the facial movements making up a specific facial configuration namely “pain/distress” also demonstrates that this facial expression becomes significantly more complete as the fetus matures. Conclusions/Significance The study shows that one can determine the normal progression of fetal facial movements. Furthermore, our results suggest that healthy fetuses progress towards an increasingly complete pain/distress expression as they mature. We argue that this is an adaptive process which is beneficial to the fetus postnatally and has the potential to identify normal versus abnormal developmental pathways. PMID:23755245

  2. Selective attention and facial expression recognition in patients with Parkinson's disease.

    PubMed

    Alonso-Recio, Laura; Serrano, Juan M; Martín, Pilar

    2014-06-01

    Parkinson's disease (PD) has been associated with facial expression recognition difficulties. However, this impairment could be secondary to the one produced in other cognitive processes involved in recognition, such as selective attention. This study investigates the influence of two selective attention components (inhibition and visual search) on facial expression recognition in PD. We compared facial expression and non-emotional stimuli recognition abilities of 51 patients and 51 healthy controls, by means of an adapted Stroop task, and by "The Face in the Crowd" paradigm, which assess Inhibition and Visual Search abilities, respectively. Patients scored worse than controls in both tasks with facial expressions, but not with the other nonemotional stimuli, indicating specific emotional recognition impairment, not dependent on selective attention abilities. This should be taken into account in patients' neuropsychological assessment given the relevance of emotional facial expression for social communication in everyday settings. PMID:24760956

  3. Reconstruction of sedimentary environments of J2-4 reservoir rocks of the Lovin oil field by facial analysis and 3D simulation

    NASA Astrophysics Data System (ADS)

    Iagudin, R.; Minibaev, N.

    2012-04-01

    The reconstruction of the accumulation conditions of sand bodies and the determination of palaeogeographical conditions form the basis for 3D modelling of lithologically screened oil and gas reservoirs. The reconstruction of accumulation conditions is implemented through lithologic-and-facies analysis: facies types are determined from the deposits of the oil reservoir and then mapped across the reservoir's volume. The facies type is an integral characteristic, determined on the basis of a large number of research methods such as the processing and analysis of core samples, seismic data, and well logs. Mapping the reservoir's facies types allows estimating the variability of parameters important for the exploration of oil deposits, such as reservoir properties, productivity, and the distribution of effective thickness. Facies types can be mapped as individual geological units and used in 3D geological modelling. The subject of the facies analysis was the sediments of the J2-4 reservoir of the Lovin oil field (Western Lovin structure), which accumulated during the Jurassic period. Based on lithologic-and-facies analysis of core material from 6 wells (25 samples), including grain-size measurements, analysis of sediment structure, core description, and measurement of the magnetic susceptibility of the sediments, the facies types of the J2-4 reservoir were identified. Lithotype A is characterized by a sand-and-silt structure and small nodules in a halo of pyrite oxidation, indicating the presence of magnetite; it corresponds to river-bed (channel) facies. Lithotype B has a silty structure, coal interlayers, and traces of bioturbation, corresponding to sand bars of the floodplain. Lithotype C is characterized by a silty-clay structure, single siderite nodules, and faunal remains, corresponding to the boggy part of the floodplain. After analyzing the well log data of 25 wells of the Lovin oil field by the Muromtsev methodology distribution

  4. A spatiotemporal feature-based approach for facial expression recognition from depth video

    NASA Astrophysics Data System (ADS)

    Uddin, Md. Zia

    2015-07-01

    In this paper, a novel spatiotemporal feature-based method is proposed to recognize facial expressions from depth video. Independent Component Analysis (ICA) spatial features of the depth faces are first augmented with optical-flow motion features. The augmented features are then enhanced by Fisher Linear Discriminant Analysis (FLDA) to make them more robust, and modeled with Hidden Markov Models (HMMs), one per facial expression, which are later used to recognize the appropriate expression from a test depth video. The experimental results show superior performance of the proposed approach over conventional methods.
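    A hedged sketch of this kind of pipeline, with dimensionality-reduced frame features, a discriminant projection, and one HMM per expression scored on a test sequence, is shown below using scikit-learn and hmmlearn. The feature shapes, class names, and the substitution of FastICA/LDA for the paper's exact ICA/FLDA formulation are assumptions.

      import numpy as np
      from sklearn.decomposition import FastICA
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from hmmlearn import hmm

      rng = np.random.default_rng(4)
      expressions = ["happy", "sad", "angry", "surprise"]

      # Hypothetical training data: per-frame feature vectors (e.g. flattened depth-face
      # patches concatenated with optical-flow statistics) and a frame-level label.
      frames = rng.normal(size=(2000, 120))
      frame_labels = rng.integers(0, len(expressions), size=2000)

      # 1) ICA spatial features, 2) discriminant projection to sharpen class separation.
      ica = FastICA(n_components=30, random_state=0).fit(frames)
      lda = LinearDiscriminantAnalysis(n_components=3).fit(ica.transform(frames), frame_labels)

      def project(x):
          return lda.transform(ica.transform(x))

      # 3) One HMM per expression, trained on that expression's frames.
      models = {}
      for k, name in enumerate(expressions):
          seq = project(frames[frame_labels == k])
          models[name] = hmm.GaussianHMM(n_components=3, covariance_type="diag",
                                         n_iter=20).fit(seq)

      # Recognition: pick the expression whose HMM assigns the test sequence the
      # highest log-likelihood.
      test_seq = project(rng.normal(size=(40, 120)))
      predicted = max(expressions, key=lambda name: models[name].score(test_seq))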

  5. Electromyographic Responses to Emotional Facial Expressions in 6-7 Year Olds with Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Deschamps, P. K. H.; Coppes, L.; Kenemans, J. L.; Schutter, D. J. L. G.; Matthys, W.

    2015-01-01

    This study aimed to examine facial mimicry in 6-7 year old children with autism spectrum disorder (ASD) and to explore whether facial mimicry was related to the severity of impairment in social responsiveness. Facial electromyographic activity in response to angry, fearful, sad and happy facial expressions was recorded in twenty 6-7 year old…

  6. Depressive and elative mood inductions as a function of exaggerated versus contradictory facial expressions.

    PubMed

    Riccelli, P T; Antila, C E; Dale, J A; Klions, H L

    1989-04-01

    Two studies examined the relation between facial expression, cognitive induction of mood, and perception of mood in women undergraduates. In Exp. 1, 20 subjects were randomly assigned to a group who were instructed in exaggerated facial expressions (Demand Group) and 20 subjects were randomly assigned to a group who were not instructed (Nondemand Group). All subjects completed a modified Velten (1968) elation- and depression-induction sequence. Ratings of depression on the Multiple Affect Adjective Checklist increased during the depression condition and decreased during the elation condition. Subjects in the Demand Group made more facial expressions than those in the Nondemand Group, as indicated by electromyogram measures of the zygomatic and corrugator muscles and by corresponding action-unit measures from visual scoring using the Facial Action Scoring System. Subjects in the Demand Group rated their depression as more severe during the depression slides than the other group. No such effect was noted during the elation condition. In Exp. 2, 16 women were randomly assigned to a group who were instructed in facial expressions contradictory to those expected on the depression and elation tasks (Contradictory Expression Group). Another 16 women were randomly assigned to a group who were given no instructions about facial expressions (Nondemand Group). All subjects completed the depression- and elation-induction sequence mentioned in Exp. 1. No differences were reported between groups on the ratings of depression (MAACL) for the depression-induction or for the elation-induction, but both groups rated depression higher after the depression condition and lower after the elation condition. Electromyographic and facial action scores verified that subjects in the Contradictory Expression Group were making the requested contradictory facial expressions during the mood-induction sequences. It was concluded that the primary influence on emotion came from the cognitive mood

  7. Facial feedback affects valence judgments of dynamic and static emotional expressions.

    PubMed

    Hyniewska, Sylwia; Sato, Wataru

    2015-01-01

    The ability to judge others' emotions is required for the establishment and maintenance of smooth interactions in a community. Several lines of evidence suggest that the attribution of meaning to a face is influenced by the facial actions produced by an observer during the observation of a face. However, empirical studies testing causal relationships between observers' facial actions and emotion judgments have reported mixed findings. This issue was investigated by measuring emotion judgments in terms of valence and arousal dimensions while comparing dynamic vs. static presentations of facial expressions. We presented pictures and videos of facial expressions of anger and happiness. Participants (N = 36) were asked to differentiate between the gender of faces by activating the corrugator supercilii muscle (brow lowering) and zygomaticus major muscle (cheek raising). They were also asked to evaluate the internal states of the stimuli using the affect grid while maintaining the facial action until they finished responding. The cheek raising condition increased the attributed valence scores compared with the brow-lowering condition. This effect of facial actions was observed for static as well as for dynamic facial expressions. These data suggest that facial feedback mechanisms contribute to the judgment of the valence of emotional facial expressions. PMID:25852608

  8. Psychopathic traits affect the visual exploration of facial expressions.

    PubMed

    Boll, Sabrina; Gamer, Matthias

    2016-05-01

    Deficits in emotional reactivity and recognition have been reported in psychopathy. Impaired attention to the eyes along with amygdala malfunctions may underlie these problems. Here, we investigated how different facets of psychopathy modulate the visual exploration of facial expressions by assessing personality traits in a sample of healthy young adults using an eye-tracking based face perception task. Fearless Dominance (the interpersonal-emotional facet of psychopathy) and Coldheartedness scores predicted reduced face exploration consistent with findings on lowered emotional reactivity in psychopathy. Moreover, participants high on the social deviance facet of psychopathy ('Self-Centered Impulsivity') showed a reduced bias to shift attention towards the eyes. Our data suggest that facets of psychopathy modulate face processing in healthy individuals and reveal possible attentional mechanisms which might be responsible for the severe impairments of social perception and behavior observed in psychopathy. PMID:27016126

  9. Paedomorphic facial expressions give dogs a selective advantage.

    PubMed

    Waller, Bridget M; Peirce, Kate; Caeiro, Cátia C; Scheider, Linda; Burrows, Anne M; McCune, Sandra; Kaminski, Juliane

    2013-01-01

    How wolves were first domesticated is unknown. One hypothesis suggests that wolves underwent a process of self-domestication by tolerating human presence and taking advantage of scavenging possibilities. The puppy-like physical and behavioural traits seen in dogs are thought to have evolved later, as a byproduct of selection against aggression. Using speed of selection from rehoming shelters as a proxy for artificial selection, we tested whether paedomorphic features give dogs a selective advantage in their current environment. Dogs who exhibited facial expressions that enhance their neonatal appearance were preferentially selected by humans. Thus, early domestication of wolves may have occurred not only as wolf populations became tamer, but also as they exploited human preferences for paedomorphic characteristics. These findings, therefore, add to our understanding of early dog domestication as a complex co-evolutionary process. PMID:24386109

  10. Paedomorphic Facial Expressions Give Dogs a Selective Advantage

    PubMed Central

    Waller, Bridget M.; Peirce, Kate; Caeiro, Cátia C.; Scheider, Linda; Burrows, Anne M.; McCune, Sandra; Kaminski, Juliane

    2013-01-01

    How wolves were first domesticated is unknown. One hypothesis suggests that wolves underwent a process of self-domestication by tolerating human presence and taking advantage of scavenging possibilities. The puppy-like physical and behavioural traits seen in dogs are thought to have evolved later, as a byproduct of selection against aggression. Using speed of selection from rehoming shelters as a proxy for artificial selection, we tested whether paedomorphic features give dogs a selective advantage in their current environment. Dogs who exhibited facial expressions that enhance their neonatal appearance were preferentially selected by humans. Thus, early domestication of wolves may have occurred not only as wolf populations became tamer, but also as they exploited human preferences for paedomorphic characteristics. These findings, therefore, add to our understanding of early dog domestication as a complex co-evolutionary process. PMID:24386109

  11. Can Neurotypical Individuals Read Autistic Facial Expressions? Atypical Production of Emotional Facial Expressions in Autism Spectrum Disorders.

    PubMed

    Brewer, Rebecca; Biotti, Federica; Catmur, Caroline; Press, Clare; Happé, Francesca; Cook, Richard; Bird, Geoffrey

    2016-02-01

    The difficulties encountered by individuals with autism spectrum disorder (ASD) when interacting with neurotypical (NT, i.e. nonautistic) individuals are usually attributed to failure to recognize the emotions and mental states of their NT interaction partner. It is also possible, however, that at least some of the difficulty is due to a failure of NT individuals to read the mental and emotional states of ASD interaction partners. Previous research has frequently observed deficits of typical facial emotion recognition in individuals with ASD, suggesting atypical representations of emotional expressions. Relatively little research, however, has investigated the ability of individuals with ASD to produce recognizable emotional expressions, and thus, whether NT individuals can recognize autistic emotional expressions. The few studies which have investigated this have used only NT observers, making it impossible to determine whether atypical representations are shared among individuals with ASD, or idiosyncratic. This study investigated NT and ASD participants' ability to recognize emotional expressions produced by NT and ASD posers. Three posing conditions were included, to determine whether potential group differences are due to atypical cognitive representations of emotion, impaired understanding of the communicative value of expressions, or poor proprioceptive feedback. Results indicated that ASD expressions were recognized less well than NT expressions, and that this is likely due to a genuine deficit in the representation of typical emotional expressions in this population. Further, ASD expressions were equally poorly recognized by NT individuals and those with ASD, implicating idiosyncratic, rather than common, atypical representations of emotional expressions in ASD. Autism Res 2016, 9: 262-271. © 2015 International Society for Autism Research, Wiley Periodicals, Inc. PMID:26053037

  12. Can Neurotypical Individuals Read Autistic Facial Expressions? Atypical Production of Emotional Facial Expressions in Autism Spectrum Disorders

    PubMed Central

    Biotti, Federica; Catmur, Caroline; Press, Clare; Happé, Francesca; Cook, Richard; Bird, Geoffrey

    2015-01-01

    The difficulties encountered by individuals with autism spectrum disorder (ASD) when interacting with neurotypical (NT, i.e. nonautistic) individuals are usually attributed to failure to recognize the emotions and mental states of their NT interaction partner. It is also possible, however, that at least some of the difficulty is due to a failure of NT individuals to read the mental and emotional states of ASD interaction partners. Previous research has frequently observed deficits of typical facial emotion recognition in individuals with ASD, suggesting atypical representations of emotional expressions. Relatively little research, however, has investigated the ability of individuals with ASD to produce recognizable emotional expressions, and thus, whether NT individuals can recognize autistic emotional expressions. The few studies which have investigated this have used only NT observers, making it impossible to determine whether atypical representations are shared among individuals with ASD, or idiosyncratic. This study investigated NT and ASD participants’ ability to recognize emotional expressions produced by NT and ASD posers. Three posing conditions were included, to determine whether potential group differences are due to atypical cognitive representations of emotion, impaired understanding of the communicative value of expressions, or poor proprioceptive feedback. Results indicated that ASD expressions were recognized less well than NT expressions, and that this is likely due to a genuine deficit in the representation of typical emotional expressions in this population. Further, ASD expressions were equally poorly recognized by NT individuals and those with ASD, implicating idiosyncratic, rather than common, atypical representations of emotional expressions in ASD. Autism Res 2016, 9: 262–271. © 2015 International Society for Autism Research, Wiley Periodicals, Inc. PMID:26053037

  13. Pose-variant facial expression recognition using an embedded image system

    NASA Astrophysics Data System (ADS)

    Song, Kai-Tai; Han, Meng-Ju; Chang, Shuo-Hung

    2008-12-01

    In recent years, one of the most attractive research areas in human-robot interaction is automated facial expression recognition. By recognizing facial expressions, a pet robot can interact with humans in a more natural manner. In this study, we focus on the facial pose-variant problem. A novel method is proposed in this paper to recognize pose-variant facial expressions. After locating the face position in an image frame, the active appearance model (AAM) is applied to track facial features. Fourteen feature points are extracted to represent the variation of facial expressions. The distances between feature points are defined as the feature values. These feature values are sent to a support vector machine (SVM) for facial expression determination. The pose-variant facial expression is classified into happiness, neutral, sadness, surprise or anger. Furthermore, in order to evaluate the performance for practical applications, this study also built a low-resolution database (160x120 pixels) using a CMOS image sensor. Experimental results show that the recognition rate is 84% with the self-built database.
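    A minimal sketch of the classification stage described above, pairwise distances between tracked feature points fed to an SVM, follows. The number of points matches the abstract, but the class labels, data shapes, and the use of scikit-learn's SVC are assumptions rather than the paper's implementation.

      import numpy as np
      from itertools import combinations
      from sklearn.svm import SVC

      def distance_features(points):
          """Pairwise Euclidean distances between 14 (x, y) facial feature points."""
          return np.array([np.linalg.norm(points[i] - points[j])
                           for i, j in combinations(range(len(points)), 2)])

      labels = ["happiness", "neutral", "sadness", "surprise", "anger"]

      # Hypothetical training set: tracked feature points per frame plus expression label.
      rng = np.random.default_rng(5)
      train_points = rng.random((200, 14, 2))
      train_labels = rng.integers(0, len(labels), size=200)

      X = np.stack([distance_features(p) for p in train_points])   # 91 distances per frame
      clf = SVC(kernel="rbf", C=10.0).fit(X, train_labels)

      test_frame = rng.random((14, 2))
      print(labels[clf.predict(distance_features(test_frame)[None, :])[0]])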

  14. Can We Distinguish Emotions from Faces? Investigation of Implicit and Explicit Processes of Peak Facial Expressions

    PubMed Central

    Xiao, Ruiqi; Li, Xianchun; Li, Lin; Wang, Yanmei

    2016-01-01

    Most previous studies on facial expression recognition have focused on the moderate emotions; to date, few studies have been conducted to investigate the explicit and implicit processes of peak emotions. In the current study, we used transiently peak intense expression images of athletes at the winning or losing point in competition as materials, and investigated the diagnosability of peak facial expressions at both implicit and explicit levels. In Experiment 1, participants were instructed to evaluate isolated faces, isolated bodies, and the face-body compounds, and eye-tracking movement was recorded. The results revealed that the isolated body and face-body congruent images were better recognized than isolated face and face-body incongruent images, indicating that the emotional information conveyed by facial cues was ambiguous, and the body cues influenced facial emotion recognition. Furthermore, eye movement records showed that the participants displayed distinct gaze patterns for the congruent and incongruent compounds. In Experiment 2A, the subliminal affective priming task was used, with faces as primes and bodies as targets, to investigate the unconscious emotion perception of peak facial expressions. The results showed that winning face prime facilitated reaction to winning body target, whereas losing face prime inhibited reaction to winning body target, suggesting that peak facial expressions could be perceived at the implicit level. In general, the results indicate that peak facial expressions cannot be consciously recognized but can be perceived at the unconscious level. In Experiment 2B, revised subliminal affective priming task and a strict awareness test were used to examine the validity of unconscious perception of peak facial expressions found in Experiment 2A. Results of Experiment 2B showed that reaction time to both winning body targets and losing body targets was influenced by the invisibly peak facial expression primes, which indicated the

  15. The amygdalo-motor pathways and the control of facial expressions

    PubMed Central

    Gothard, Katalin M.

    2013-01-01

    Facial expressions reflect decisions about the perceived meaning of social stimuli and the expected socio-emotional outcome of responding (or not) with a reciprocating expression. The decision to produce a facial expression emerges from the joint activity of a network of structures that include the amygdala and multiple, interconnected cortical and subcortical motor areas. Reciprocal transformations between these sensory and motor signals give rise to distinct brain states that promote, or impede the production of facial expressions. The muscles of the upper and lower face are controlled by anatomically distinct motor areas. Facial expressions engage to a different extent the lower and upper face and thus require distinct patterns of neural activity distributed across multiple facial motor areas in ventrolateral frontal cortex, the supplementary motor area, and two areas in the midcingulate cortex. The distributed nature of the decision manifests in the joint activation of multiple motor areas that initiate the production of facial expression. Concomitantly multiple areas, including the amygdala, monitor ongoing overt behaviors (the expression itself) and the covert, autonomic responses that accompany emotional expressions. As the production of facial expressions is brought into the framework of formal decision making, an important challenge will be to incorporate autonomic and visceral states into decisions that govern the receiving-emitting cycle of social signals. PMID:24678289

  16. Judgment of facial expressions of emotion as a function of exposure time.

    PubMed

    Kirouac, G; Doré, F Y

    1984-08-01

    The purpose of this experiment was to study the accuracy of judgment of facial expressions of emotions that were displayed for very brief exposure times. Twenty university students were shown facial stimuli that were presented for durations ranging from 10 to 50 msec. The data showed that accuracy of judgment reached a fairly high level even at very brief exposure times and that human observers are especially competent to process very rapid changes in facial appearance. PMID:6493929

  17. The Child Affective Facial Expression (CAFE) set: validity and reliability from untrained adults

    PubMed Central

    LoBue, Vanessa; Thrasher, Cat

    2014-01-01

    Emotional development is one of the largest and most productive areas of psychological research. For decades, researchers have been fascinated by how humans respond to, detect, and interpret emotional facial expressions. Much of the research in this area has relied on controlled stimulus sets of adults posing various facial expressions. Here we introduce a new stimulus set of emotional facial expressions into the domain of research on emotional development—The Child Affective Facial Expression set (CAFE). The CAFE set features photographs of a racially and ethnically diverse group of 2- to 8-year-old children posing for six emotional facial expressions—angry, fearful, sad, happy, surprised, and disgusted—and a neutral face. In the current work, we describe the set and report validity and reliability data on the set from 100 untrained adult participants. PMID:25610415

  18. Shaped 3D Singular Spectrum Analysis for Quantifying Gene Expression, with Application to the Early Zebrafish Embryo

    PubMed Central

    Shlemov, Alex; Golyandina, Nina; Holloway, David; Spirov, Alexander

    2015-01-01

    Recent progress in microscopy technologies, biological markers, and automated processing methods is making possible the development of gene expression atlases at cellular-level resolution over whole embryos. Raw data on gene expression is usually very noisy. This noise comes from both experimental (technical/methodological) and true biological sources (from stochastic biochemical processes). In addition, the cells or nuclei being imaged are irregularly arranged in 3D space. This makes the processing, extraction, and study of expression signals and intrinsic biological noise a serious challenge for 3D data, requiring new computational approaches. Here, we present a new approach for studying gene expression in nuclei located in a thick layer around a spherical surface. The method includes depth equalization on the sphere, flattening, interpolation to a regular grid, pattern extraction by Shaped 3D singular spectrum analysis (SSA), and interpolation back to original nuclear positions. The approach is demonstrated on several examples of gene expression in the zebrafish egg (a model system in vertebrate development). The method is tested on several different data geometries (e.g., nuclear positions) and different forms of gene expression patterns. Fully 3D datasets for developmental gene expression are becoming increasingly available; we discuss the prospects of applying 3D-SSA to data processing and analysis in this growing field. PMID:26495320
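    For readers unfamiliar with SSA, the sketch below illustrates the basic 1D decomposition that Shaped 3D SSA generalizes: embed the series into a trajectory matrix, truncate its SVD, and Hankelize back. It is a toy illustration, not the Shlemov et al. implementation; the window length, rank, and synthetic signal are arbitrary choices.

      import numpy as np

      def ssa_smooth(x, L, r):
          """Basic 1D SSA: embed, decompose, reconstruct the leading r components."""
          N = len(x)
          K = N - L + 1
          # Trajectory (Hankel) matrix: columns are lagged windows of the series.
          X = np.column_stack([x[i:i + L] for i in range(K)])      # shape (L, K)
          U, s, Vt = np.linalg.svd(X, full_matrices=False)
          Xr = (U[:, :r] * s[:r]) @ Vt[:r, :]                      # rank-r reconstruction
          # Diagonal averaging (Hankelization) back to a 1D series.
          rec = np.zeros(N)
          counts = np.zeros(N)
          for j in range(K):
              rec[j:j + L] += Xr[:, j]
              counts[j:j + L] += 1
          return rec / counts

      # Toy usage: separate a smooth expression-like profile from observation noise.
      t = np.linspace(0, 1, 200)
      signal = np.exp(-((t - 0.5) ** 2) / 0.02)       # smooth "expression domain" profile
      noisy = signal + 0.2 * np.random.default_rng(6).normal(size=t.size)
      trend = ssa_smooth(noisy, L=40, r=2)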

  19. Gaze Behavior of Children with ASD toward Pictures of Facial Expressions

    PubMed Central

    Matsuda, Soichiro; Minagawa, Yasuyo; Yamamoto, Junichi

    2015-01-01

    Atypical gaze behavior in response to a face has been well documented in individuals with autism spectrum disorders (ASDs). Children with ASD appear to differ from typically developing (TD) children in gaze behavior for spoken and dynamic face stimuli but not for nonspeaking, static face stimuli. Furthermore, children with ASD and TD children show a difference in their gaze behavior for certain expressions. However, few studies have examined the relationship between autism severity and gaze behavior toward certain facial expressions. The present study replicated and extended previous studies by examining gaze behavior towards pictures of facial expressions. We presented ASD and TD children with pictures of surprised, happy, neutral, angry, and sad facial expressions. Autism severity was assessed using the Childhood Autism Rating Scale (CARS). The results showed that there was no group difference in gaze behavior when looking at pictures of facial expressions. Conversely, the children with ASD who had more severe autistic symptomatology had a tendency to gaze at angry facial expressions for a shorter duration in comparison to other facial expressions. These findings suggest that autism severity should be considered when examining atypical responses to certain facial expressions. PMID:26090223

  20. Facial Muscle Coordination in Monkeys During Rhythmic Facial Expressions and Ingestive Movements

    PubMed Central

    Shepherd, Stephen V.; Lanzilotto, Marco; Ghazanfar, Asif A.

    2012-01-01

    Evolutionary hypotheses regarding the origins of communication signals generally, and primate orofacial communication signals in particular, suggest that these signals derive by ritualization of noncommunicative behaviors, notably including ingestive behaviors such as chewing and nursing. These theories are appealing in part because of the prominent periodicities in both types of behavior. Despite their intuitive appeal, however, there are little or no data with which to evaluate these theories because the coordination of muscles innervated by the facial nucleus has not been carefully compared between communicative and ingestive movements. Such data are especially crucial for reconciling neurophysiological assumptions regarding facial motor control in communication and ingestion. We here address this gap by contrasting the coordination of facial muscles during different types of rhythmic orofacial behavior in macaque monkeys, finding that the perioral muscles innervated by the facial nucleus are rhythmically coordinated during lipsmacks and that this coordination appears distinct from that observed during ingestion. PMID:22553017

  1. Cognitive tasks during expectation affect the congruency ERP effects to facial expressions

    PubMed Central

    Lin, Huiyan; Schulz, Claudia; Straube, Thomas

    2015-01-01

    Expectancy congruency has been shown to modulate event-related potentials (ERPs) to emotional stimuli, such as facial expressions. However, it is unknown whether the congruency ERP effects to facial expressions can be modulated by cognitive manipulations during stimulus expectation. To this end, electroencephalography (EEG) was recorded while participants viewed (neutral and fearful) facial expressions. Each trial started with a cue, predicting a facial expression, followed by an expectancy interval without any cues and subsequently the face. In half of the trials, participants had to solve a cognitive task in which different letters were presented for target letter detection during the expectancy interval. Furthermore, facial expressions were congruent with the cues in 75% of all trials. ERP results revealed that for fearful faces, the cognitive task during expectation altered the congruency effect in N170 amplitude; congruent compared to incongruent fearful faces evoked larger N170 in the non-task condition but the congruency effect was not evident in the task condition. Regardless of facial expression, the congruency effect was generally altered by the cognitive task during expectation in P3 amplitude; the amplitudes were larger for incongruent compared to congruent faces in the non-task condition but the congruency effect was not shown in the task condition. The findings indicate that cognitive tasks during expectation reduce the processing of expectation and subsequently, alter congruency ERP effects to facial expressions. PMID:26578938

  2. Dissociable roles of internal feelings and face recognition ability in facial expression decoding.

    PubMed

    Zhang, Lin; Song, Yiying; Liu, Ling; Liu, Jia

    2016-05-15

    The problem of emotion recognition has been tackled by researchers in both affective computing and cognitive neuroscience. While affective computing relies on analyzing visual features from facial expressions, it has been proposed that humans recognize emotions by internally simulating the emotional states conveyed by others' expressions, in addition to perceptual analysis of facial features. Here we investigated whether and how our internal feelings contributed to the ability to decode facial expressions. In two independent large samples of participants, we observed that individuals who generally experienced richer internal feelings exhibited a higher ability to decode facial expressions, and the contribution of internal feelings was independent of face recognition ability. Further, using voxel-based morphometry, we found that the gray matter volume (GMV) of bilateral superior temporal sulcus (STS) and the right inferior parietal lobule was associated with facial expression decoding through the mediating effect of internal feelings, while the GMV of bilateral STS, precuneus, and the right central opercular cortex contributed to facial expression decoding through the mediating effect of face recognition ability. In addition, the clusters in bilateral STS involved in the two components were neighboring yet separate. Our results may provide clues about the mechanism by which internal feelings, in addition to face recognition ability, serve as an important instrument for humans in facial expression decoding. PMID:26908317

  3. Recognition of Facially Expressed Emotions and Visual Search Strategies in Adults with Asperger Syndrome

    ERIC Educational Resources Information Center

    Falkmer, Marita; Bjallmark, Anna; Larsson, Matilda; Falkmer, Torbjorn

    2011-01-01

    Can the disadvantages persons with Asperger syndrome frequently experience with reading facially expressed emotions be attributed to a different visual perception, affecting their scanning patterns? Visual search strategies, particularly regarding the importance of information from the eye area, and the ability to recognise facially expressed…

  4. Rules versus Prototype Matching: Strategies of Perception of Emotional Facial Expressions in the Autism Spectrum

    ERIC Educational Resources Information Center

    Rutherford, M. D.; McIntosh, Daniel N.

    2007-01-01

    When perceiving emotional facial expressions, people with autistic spectrum disorders (ASD) appear to focus on individual facial features rather than configurations. This paper tests whether individuals with ASD use these features in a rule-based strategy of emotional perception, rather than a typical, template-based strategy by considering…

  5. Facial Expression as an Indicator of Pain in Critically Ill Intubated Adults During Endotracheal Suctioning

    PubMed Central

    Rahu, Mamoona Arif; Grap, Mary Jo; Cohn, Jeffrey F.; Munro, Cindy L.; Lyon, Debra E.; Sessler, Curtis N.

    2013-01-01

    Background Facial expression is often used to evaluate pain in noncommunicative critically ill patients. Objectives To describe facial behavior during endotracheal suctioning, determine facial behaviors that characterize the pain response, and describe the effect of patient factors on facial behavior during pain response. Methods Fifty noncommunicative patients receiving mechanical ventilation were video recorded during 2 phases (rest and endotracheal suctioning). Pain ratings were gathered by using the Behavioral Pain Scale. Facial behaviors were coded by using the Facial Action Coding System for 30 seconds for each phase. Results Fourteen facial actions were associated more with endotracheal suctioning than with rest (z = 5.78; P < .001). The sum of intensity of the 14 actions correlated with total mean scores on the Behavioral Pain Scale (ρ = 0.71; P < .001) and with the facial expression component of the scale (ρ = 0.67; P < .001) during suctioning. In stepwise multivariate analysis, 5 pain-relevant facial behaviors (brow raiser, brow lower, nose wrinkling, head turned right, and head turned up) accounted for 71% of the variance (adjusted R2 = 0.682; P < .001) in pain response. The sum of intensity of the 5 actions correlated with total mean scores on the behavioral scale (ρ = 0.72; P < .001) and with the facial expression component of that scale (ρ = 0.61; P < .001) during suctioning. Patient factors had no association with pain intensity scores. Conclusions Upper facial expressions are most frequently activated during pain response in noncommunicative critically ill patients and might be a valid alternative to self-report ratings. PMID:23996421
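
    The reported association between summed facial-action intensity and Behavioral Pain Scale scores is a rank correlation. A minimal sketch of that computation, assuming SciPy, is shown below; the per-patient values are hypothetical placeholders, not data from the study.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical per-patient values: summed intensity of the pain-relevant facial
# action units during suctioning, and mean Behavioral Pain Scale (BPS) score.
au_intensity_sum = np.array([4, 7, 2, 9, 5, 8, 3, 6])
bps_mean_score = np.array([4.5, 7.0, 3.5, 9.0, 5.5, 8.0, 4.0, 6.5])

rho, p_value = spearmanr(au_intensity_sum, bps_mean_score)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```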

  6. Standardized mood induction with happy and sad facial expressions.

    PubMed

    Schneider, F; Gur, R C; Gur, R E; Muenz, L R

    1994-01-01

    The feasibility of applying ecologically valid and socially relevant emotional stimuli in a standardized fashion to obtain reliable mood changes in healthy subjects was examined. The stimuli consisted of happy and sad facial expressions varying in intensity. Two mood-induction procedures (happy and sad, each consisting of 40 slides) were administered to 24 young healthy subjects, who were instructed to look at each slide (self-paced) and try to feel the happy or sad mood expressed by the person in the picture. On an emotional self-rating scale, subjects rated themselves as relatively happier during the happy mood-induction condition and as relatively sadder during the sad mood-induction condition. Conversely, they reported that they were less happy during the sad mood-induction condition and less sad during the happy mood-induction condition. The effects were generalized to positive and negative affect as measured by the Positive and Negative Affect Scale. The intraindividual variability in the effect was very small. In a retest study after 1 month, the mood-induction effects showed good stability over time. The results encourage the use of this mood-induction procedure as a neurobehavioral probe in physiologic neuroimaging studies for investigating the neural substrates of emotional experience. PMID:8197269

  7. Cloning, Expression and 3D Structure Prediction of Chitinase from Chitinolyticbacter meiyuanensis SYBC-H1

    PubMed Central

    Hao, Zhikui; Wu, Hangui; Yang, Meiling; Chen, Jianjun; Xi, Limin; Zhao, Weijie; Yu, Jialin; Liu, Jiayang; Liao, Xiangru; Huang, Qingguo

    2016-01-01

    Two CHI genes from Chitinolyticbacter meiyuanensis SYBC-H1 encoding chitinases were identified and their protein 3D structures were predicted. According to the amino acid sequence alignment, the CHI1 gene, encoding 166 aa, had a structural domain similar to the GH18 type II chitinase, and the CHI2 gene, encoding 383 aa, had the same catalytic domain as the glycoside hydrolase family 19 chitinase. In this study, CHI2 chitinase was expressed in Escherichia coli BL21 cells, and this protein was purified by ammonium sulfate precipitation, DEAE-cellulose, and Sephadex G-100 chromatography. Optimal activity of CHI2 chitinase occurred at a temperature of 40 °C and a pH of 6.5. The presence of metal ions Fe3+, Fe2+, and Zn2+ inhibited CHI2 chitinase activity, while Na+ and K+ promoted its activity. Furthermore, the presence of EGTA, EDTA, and β-mercaptoethanol significantly increased the stability of CHI2 chitinase. The CHI2 chitinase was active with p-NP-GlcNAc, with Km and Vm values of 23.0 µmol/L and 9.1 mM/min at a temperature of 37 °C, respectively. Additionally, the CHI2 chitinase was characterized as an N-acetyl glucosaminidase based on the hydrolysate from chitin. Overall, our results demonstrated that CHI2 chitinase, with its remarkable biochemical properties, is suitable for bioconversion of chitin waste. PMID:27240345

  8. Cloning, Expression and 3D Structure Prediction of Chitinase from Chitinolyticbacter meiyuanensis SYBC-H1.

    PubMed

    Hao, Zhikui; Wu, Hangui; Yang, Meiling; Chen, Jianjun; Xi, Limin; Zhao, Weijie; Yu, Jialin; Liu, Jiayang; Liao, Xiangru; Huang, Qingguo

    2016-01-01

    Two CHI genes from Chitinolyticbacter meiyuanensis SYBC-H1 encoding chitinases were identified and their protein 3D structures were predicted. According to the amino acid sequence alignment, the CHI1 gene, encoding 166 aa, had a structural domain similar to the GH18 type II chitinase, and the CHI2 gene, encoding 383 aa, had the same catalytic domain as the glycoside hydrolase family 19 chitinase. In this study, CHI2 chitinase was expressed in Escherichia coli BL21 cells, and this protein was purified by ammonium sulfate precipitation, DEAE-cellulose, and Sephadex G-100 chromatography. Optimal activity of CHI2 chitinase occurred at a temperature of 40 °C and a pH of 6.5. The presence of metal ions Fe(3+), Fe(2+), and Zn(2+) inhibited CHI2 chitinase activity, while Na⁺ and K⁺ promoted its activity. Furthermore, the presence of EGTA, EDTA, and β-mercaptoethanol significantly increased the stability of CHI2 chitinase. The CHI2 chitinase was active with p-NP-GlcNAc, with Km and Vm values of 23.0 µmol/L and 9.1 mM/min at a temperature of 37 °C, respectively. Additionally, the CHI2 chitinase was characterized as an N-acetyl glucosaminidase based on the hydrolysate from chitin. Overall, our results demonstrated that CHI2 chitinase, with its remarkable biochemical properties, is suitable for bioconversion of chitin waste. PMID:27240345

  9. Effects of cultural characteristics on building an emotion classifier through facial expression analysis

    NASA Astrophysics Data System (ADS)

    da Silva, Flávio Altinier Maximiano; Pedrini, Helio

    2015-03-01

    Facial expressions are an important demonstration of human moods and emotions. Algorithms capable of recognizing facial expressions and associating them with emotions were developed and employed to compare the expressions that different cultural groups use to show their emotions. Static pictures of predominantly occidental and oriental subjects from public datasets were used to train machine learning algorithms, while local binary patterns, histograms of oriented gradients (HOG), and Gabor filters were employed to describe the facial expressions for six different basic emotions. The most consistent combination, formed by the association of the HOG descriptor and support vector machines, was then used to classify the other cultural group: there was a strong drop in accuracy, meaning that the subtle differences in facial expressions of each culture affected the classifier performance. Finally, a classifier was trained with images from both occidental and oriental subjects and its accuracy was higher on multicultural data, evidencing the need for a multicultural training set to build an efficient classifier.
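
    A minimal sketch of the HOG-plus-linear-SVM pipeline described above, assuming scikit-image and scikit-learn; the random arrays stand in for aligned grayscale face crops from one cultural group (training) and the other (testing) and are purely illustrative.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

def hog_features(images):
    """Compute one HOG descriptor per grayscale face image (assumed pre-cropped)."""
    return np.array([hog(img, orientations=8, pixels_per_cell=(8, 8),
                         cells_per_block=(2, 2)) for img in images])

# Placeholder data standing in for one cultural group's training faces and the
# other group's test faces; real use would load aligned 64x64 face crops.
rng = np.random.default_rng(0)
train_imgs = rng.random((40, 64, 64)); train_labels = rng.integers(0, 6, 40)
test_imgs = rng.random((20, 64, 64)); test_labels = rng.integers(0, 6, 20)

clf = SVC(kernel="linear").fit(hog_features(train_imgs), train_labels)
pred = clf.predict(hog_features(test_imgs))
print("cross-group accuracy:", accuracy_score(test_labels, pred))
```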

  10. Anodal tDCS targeting the right orbitofrontal cortex enhances facial expression recognition.

    PubMed

    Willis, Megan L; Murphy, Jillian M; Ridley, Nicole J; Vercammen, Ans

    2015-12-01

    The orbitofrontal cortex (OFC) has been implicated in the capacity to accurately recognise facial expressions. The aim of the current study was to determine if anodal transcranial direct current stimulation (tDCS) targeting the right OFC in healthy adults would enhance facial expression recognition, compared with a sham condition. Across two counterbalanced sessions of tDCS (i.e. anodal and sham), 20 undergraduate participants (18 female) completed a facial expression labelling task comprising angry, disgusted, fearful, happy, sad and neutral expressions, and a control (social judgement) task comprising the same expressions. Responses on the labelling task were scored for accuracy, median reaction time and overall efficiency (i.e. combined accuracy and reaction time). Anodal tDCS targeting the right OFC enhanced facial expression recognition, reflected in greater efficiency and speed of recognition across emotions, relative to the sham condition. In contrast, there was no effect of tDCS to responses on the control task. This is the first study to demonstrate that anodal tDCS targeting the right OFC boosts facial expression recognition. This finding provides a solid foundation for future research to examine the efficacy of this technique as a means to treat facial expression recognition deficits, particularly in individuals with OFC damage or dysfunction. PMID:25971602

  11. Multi-layer sparse representation for weighted LBP-patches based facial expression recognition.

    PubMed

    Jia, Qi; Gao, Xinkai; Guo, He; Luo, Zhongxuan; Wang, Yi

    2015-01-01

    In this paper, a novel facial expression recognition method based on sparse representation is proposed. Most contemporary facial expression recognition systems suffer from a limited ability to handle image nuisances such as low resolution and noise. Especially for low-intensity expressions, most of the existing training methods have quite low recognition rates. Motivated by sparse representation, the problem can be solved by finding sparse coefficients of the test image over the whole training set. Deriving an effective facial representation from original face images is a vital step for successful facial expression recognition. We evaluate a facial representation based on weighted local binary patterns, and the Fisher separation criterion is used to calculate the weights of the patches. A multi-layer sparse representation framework is proposed for multi-intensity facial expression recognition, especially for low-intensity expressions and noisy expressions in reality, a critical problem that is seldom addressed in existing work. To this end, several experiments based on low-resolution and multi-intensity expressions are carried out. Promising results on publicly available databases demonstrate the potential of the proposed approach. PMID:25808772
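
    A rough sketch of the two ingredients named above, assuming scikit-image and scikit-learn: patch-wise LBP histograms combined with per-patch weights, and classification by sparse coding of a test vector over the training set with assignment to the class giving the smallest reconstruction residual. Uniform weights stand in for the Fisher-criterion weights, and plain orthogonal matching pursuit stands in for the paper's multi-layer framework.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.linear_model import orthogonal_mp

def weighted_lbp_vector(img, patch=16, weights=None):
    """Concatenate LBP histograms from non-overlapping patches, each scaled by a
    weight (uniform here; the paper learns them with the Fisher criterion)."""
    codes = local_binary_pattern(img, P=8, R=1, method="uniform")  # values 0..9
    n_rows, n_cols = img.shape[0] // patch, img.shape[1] // patch
    if weights is None:
        weights = np.ones(n_rows * n_cols)
    feats, idx = [], 0
    for r in range(n_rows):
        for c in range(n_cols):
            block = codes[r*patch:(r+1)*patch, c*patch:(c+1)*patch]
            hist, _ = np.histogram(block, bins=10, range=(0, 10), density=True)
            feats.append(weights[idx] * hist)
            idx += 1
    return np.concatenate(feats)

def src_predict(test_vec, train_mat, train_labels, n_nonzero=10):
    """Sparse-representation classification: code the test vector over the training
    matrix (atoms as columns) and pick the class with the smallest residual."""
    coef = orthogonal_mp(train_mat, test_vec, n_nonzero_coefs=n_nonzero)
    residuals = {}
    for cls in np.unique(train_labels):
        mask = (train_labels == cls)
        residuals[cls] = np.linalg.norm(test_vec - train_mat[:, mask] @ coef[mask])
    return min(residuals, key=residuals.get)
```

    In practice the training columns would be l2-normalised before the pursuit and the per-patch weights estimated from labelled training data.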

  12. Automated decoding of facial expressions reveals marked differences in children when telling antisocial versus prosocial lies.

    PubMed

    Zanette, Sarah; Gao, Xiaoqing; Brunet, Megan; Bartlett, Marian Stewart; Lee, Kang

    2016-10-01

    The current study used computer vision technology to examine the nonverbal facial expressions of children (6-11years old) telling antisocial and prosocial lies. Children in the antisocial lying group completed a temptation resistance paradigm where they were asked not to peek at a gift being wrapped for them. All children peeked at the gift and subsequently lied about their behavior. Children in the prosocial lying group were given an undesirable gift and asked if they liked it. All children lied about liking the gift. Nonverbal behavior was analyzed using the Computer Expression Recognition Toolbox (CERT), which employs the Facial Action Coding System (FACS), to automatically code children's facial expressions while lying. Using CERT, children's facial expressions during antisocial and prosocial lying were accurately and reliably differentiated significantly above chance-level accuracy. The basic expressions of emotion that distinguished antisocial lies from prosocial lies were joy and contempt. Children expressed joy more in prosocial lying than in antisocial lying. Girls showed more joy and less contempt compared with boys when they told prosocial lies. Boys showed more contempt when they told prosocial lies than when they told antisocial lies. The key action units (AUs) that differentiate children's antisocial and prosocial lies are blink/eye closure, lip pucker, and lip raise on the right side. Together, these findings indicate that children's facial expressions differ while telling antisocial versus prosocial lies. The reliability of CERT in detecting such differences in facial expression suggests the viability of using computer vision technology in deception research. PMID:27318957

  13. Cultural similarities and differences in perceiving and recognizing facial expressions of basic emotions.

    PubMed

    Yan, Xiaoqian; Andrews, Timothy J; Young, Andrew W

    2016-03-01

    The ability to recognize facial expressions of basic emotions is often considered a universal human ability. However, recent studies have suggested that this commonality has been overestimated and that people from different cultures use different facial signals to represent expressions (Jack, Blais, Scheepers, Schyns, & Caldara, 2009; Jack, Caldara, & Schyns, 2012). We investigated this possibility by examining similarities and differences in the perception and categorization of facial expressions between Chinese and white British participants using whole-face and partial-face images. Our results showed no cultural difference in the patterns of perceptual similarity of expressions from whole-face images. When categorizing the same expressions, however, both British and Chinese participants were slightly more accurate with whole-face images of their own ethnic group. To further investigate potential strategy differences, we repeated the perceptual similarity and categorization tasks with presentation of only the upper or lower half of each face. Again, the perceptual similarity of facial expressions was similar between Chinese and British participants for both the upper and lower face regions. However, participants were slightly better at categorizing facial expressions of their own ethnic group for the lower face regions, indicating that the way in which culture shapes the categorization of facial expressions is largely driven by differences in information decoding from this part of the face. PMID:26480247

  14. A Modified Sparse Representation Method for Facial Expression Recognition.

    PubMed

    Wang, Wei; Xu, LiHong

    2016-01-01

    In this paper, we carry out research on a facial expression recognition method based on a modified sparse representation recognition (MSRR) method. In the first stage, we use Haar-like+LPP to extract features and reduce dimensionality. In the second stage, we adopt the LC-K-SVD (Label Consistent K-SVD) method to train the dictionary, instead of adopting the dictionary directly from the samples, and add block dictionary training into the training process. In the third stage, the stOMP (stagewise orthogonal matching pursuit) method is used to speed up the convergence of OMP (orthogonal matching pursuit). Besides, a dynamic regularization factor is added to the iteration process to suppress noise and enhance accuracy. We verify the proposed method with respect to training samples, dimensionality, feature extraction and dimension reduction methods, and noise, on a self-built database and on Japan's JAFFE and CMU's CK databases. Further, we compare this sparse method with classic SVM and RVM and analyze the recognition effect and time efficiency. The results of the simulation experiments show that the coefficients of the MSRR method contain classifying information, which is capable of improving computing speed and achieving a satisfying recognition result. PMID:26880878
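
    LC-K-SVD and stOMP are not available in common libraries, so the sketch below substitutes scikit-learn's MiniBatchDictionaryLearning with an OMP transform as a stand-in for the dictionary-training and sparse-coding stages; the feature matrix is a random placeholder for Haar-like+LPP features, and the label-consistency and block-dictionary aspects of the method are not reproduced.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

# Stand-in feature matrix: one row per training face after feature extraction and
# dimensionality reduction (the paper uses Haar-like features followed by LPP).
rng = np.random.default_rng(0)
X_train = rng.standard_normal((120, 60))

# Learn an overcomplete dictionary and compute sparse codes with an OMP transform;
# LC-K-SVD additionally ties atoms to class labels and stOMP accelerates the
# pursuit, neither of which is shown here.
dico = MiniBatchDictionaryLearning(n_components=80, transform_algorithm="omp",
                                   transform_n_nonzero_coefs=5, random_state=0)
codes_train = dico.fit(X_train).transform(X_train)

# A new expression image is coded against the same dictionary; its sparse
# coefficients then feed a classifier (e.g. nearest class mean over the codes).
x_new = rng.standard_normal((1, 60))
code_new = dico.transform(x_new)
```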

  15. A Modified Sparse Representation Method for Facial Expression Recognition

    PubMed Central

    Wang, Wei; Xu, LiHong

    2016-01-01

    In this paper, we carry out research on a facial expression recognition method based on a modified sparse representation recognition (MSRR) method. In the first stage, we use Haar-like+LPP to extract features and reduce dimensionality. In the second stage, we adopt the LC-K-SVD (Label Consistent K-SVD) method to train the dictionary, instead of adopting the dictionary directly from the samples, and add block dictionary training into the training process. In the third stage, the stOMP (stagewise orthogonal matching pursuit) method is used to speed up the convergence of OMP (orthogonal matching pursuit). Besides, a dynamic regularization factor is added to the iteration process to suppress noise and enhance accuracy. We verify the proposed method with respect to training samples, dimensionality, feature extraction and dimension reduction methods, and noise, on a self-built database and on Japan's JAFFE and CMU's CK databases. Further, we compare this sparse method with classic SVM and RVM and analyze the recognition effect and time efficiency. The results of the simulation experiments show that the coefficients of the MSRR method contain classifying information, which is capable of improving computing speed and achieving a satisfying recognition result. PMID:26880878

  16. Neural mechanism of unconscious perception of surprised facial expression.

    PubMed

    Duan, Xujun; Dai, Qian; Gong, Qiyong; Chen, Huafu

    2010-08-01

    Previous functional neuroimaging studies have uncovered partly separable neural substrates for perceiving different facial expressions presented below the level of conscious awareness. However, although surprise is one of the six basic emotions, the neural mechanism of unconsciously perceiving surprised faces has not yet been investigated. Using a backward masking procedure, we studied the neural activities in response to surprised faces presented below the threshold of conscious visual perception by means of functional magnetic resonance imaging (fMRI). Eighteen healthy adults were scanned while viewing surprised faces, which were presented for 33 ms and immediately "masked" by a neutral face for 467 ms. As a control, they viewed masked happy or neutral faces as well. In comparison to both control conditions, masked surprised faces yielded significantly greater activation in the parahippocampal gyrus and fusiform gyrus, regions previously associated with novelty detection. In the present study, automatic activation of these areas to masked surprised faces was investigated as a function of individual differences in the ability to identify and differentiate one's emotions, as assessed by the 20-item Toronto Alexithymia Scale (TAS-20). The correlation results showed that the Difficulty Identifying Feelings subscale was negatively correlated with the neural response of these areas to masked surprised faces, which suggests that decreased activation magnitude in specific brain regions may reflect increased difficulties in recognizing one's emotions in everyday life. Additionally, we confirmed activation of the right amygdala and right thalamus to the masked surprised faces, regions previously shown to be involved in the unconscious emotional perception system. PMID:20398771

  17. Functional integration of the posterior superior temporal sulcus correlates with facial expression recognition.

    PubMed

    Wang, Xu; Song, Yiying; Zhen, Zonglei; Liu, Jia

    2016-05-01

    Face perception is essential for daily and social activities. Neuroimaging studies have revealed a distributed face network (FN) consisting of multiple regions that exhibit preferential responses to invariant or changeable facial information. However, our understanding about how these regions work collaboratively to facilitate facial information processing is limited. Here, we focused on changeable facial information processing, and investigated how the functional integration of the FN is related to the performance of facial expression recognition. To do so, we first defined the FN as voxels that responded more strongly to faces than objects, and then used a voxel-based global brain connectivity method based on resting-state fMRI to characterize the within-network connectivity (WNC) of each voxel in the FN. By relating the WNC and performance in the "Reading the Mind in the Eyes" Test across participants, we found that individuals with stronger WNC in the right posterior superior temporal sulcus (rpSTS) were better at recognizing facial expressions. Further, the resting-state functional connectivity (FC) between the rpSTS and right occipital face area (rOFA), early visual cortex (EVC), and bilateral STS were positively correlated with the ability of facial expression recognition, and the FCs of EVC-pSTS and OFA-pSTS contributed independently to facial expression recognition. In short, our study highlights the behavioral significance of intrinsic functional integration of the FN in facial expression processing, and provides evidence for the hub-like role of the rpSTS for facial expression recognition. Hum Brain Mapp 37:1930-1940, 2016. © 2016 Wiley Periodicals, Inc. PMID:26915331

  18. Genome wide expression after different doses of irradiation of a three-dimensional (3D) model of oral mucosal

    PubMed Central

    Lambros, Maria P.; DeSalvo, Michael K.; Mulamalla, Hari Chandana; Moreno, Jonathan; Kondapalli, Lavanya

    2015-01-01

    We evaluated a three-dimensional (3D) human oral cell culture consisting of two cell types, oral keratinocytes and fibroblasts, as a model of oral mucositis, a debilitating adverse effect of chemotherapy and radiation treatment. The 3D cell culture model was irradiated with 12 or 2 Gy, and total RNA was collected 6 h after irradiation to compare global gene expression profiles via microarray analysis. Here we provide detailed methods and analysis of these microarray data, which have been deposited in Gene Expression Omnibus (GEO): GSE62395. PMID:26981390

  19. Genome wide expression after different doses of irradiation of a three-dimensional (3D) model of oral mucosal.

    PubMed

    Lambros, Maria P; DeSalvo, Michael K; Mulamalla, Hari Chandana; Moreno, Jonathan; Kondapalli, Lavanya

    2016-03-01

    We evaluated a three-dimensional (3D) human oral cell culture consisting of two cell types, oral keratinocytes and fibroblasts, as a model of oral mucositis, a debilitating adverse effect of chemotherapy and radiation treatment. The 3D cell culture model was irradiated with 12 or 2 Gy, and total RNA was collected 6 h after irradiation to compare global gene expression profiles via microarray analysis. Here we provide detailed methods and analysis of these microarray data, which have been deposited in Gene Expression Omnibus (GEO): GSE62395. PMID:26981390

  20. Dysfunctional facial emotional expression and comprehension in a patient with corticobasal degeneration.

    PubMed

    Kluger, Benzi M; Heilman, Kenneth M

    2007-06-01

    Patients with corticobasal degeneration (CBD) frequently develop orofacial apraxia but little is known about CBD's influence on emotional facial processing. We describe a patient who developed a facial apraxia including an impaired ability to voluntarily generate facial expressions with relative sparing of spontaneous emotional faces. Her ability to interpret the facial expressions of others was also severely impaired. Despite these deficits, the patient had normal affect and normal speech, including expressive and receptive emotional prosody. As patients with corticobasal degeneration are known to manifest both orofacial apraxia and visuospatial dysfunction this patient's expressive and receptive deficits may be independent manifestations of the same underlying disease process. Alternatively, these functions may share a common neuroanatomic substrate that degenerates with CBD. PMID:17786775

  1. Face-selective regions differ in their ability to classify facial expressions.

    PubMed

    Zhang, Hui; Japee, Shruti; Nolan, Rachel; Chu, Carlton; Liu, Ning; Ungerleider, Leslie G

    2016-04-15

    Recognition of facial expressions is crucial for effective social interactions. Yet, the extent to which the various face-selective regions in the human brain classify different facial expressions remains unclear. We used functional magnetic resonance imaging (fMRI) and support vector machine pattern classification analysis to determine how well face-selective brain regions are able to decode different categories of facial expression. Subjects participated in a slow event-related fMRI experiment in which they were shown 32 face pictures, portraying four different expressions: neutral, fearful, angry, and happy and belonging to eight different identities. Our results showed that only the amygdala and the posterior superior temporal sulcus (STS) were able to accurately discriminate between these expressions, albeit in different ways: the amygdala discriminated fearful faces from non-fearful faces, whereas STS discriminated neutral from emotional (fearful, angry and happy) faces. In contrast to these findings on the classification of emotional expression, only the fusiform face area (FFA) and anterior inferior temporal cortex (aIT) could discriminate among the various facial identities. Further, the amygdala and STS were better than FFA and aIT at classifying expression, while FFA and aIT were better than the amygdala and STS at classifying identity. Taken together, our findings indicate that the decoding of facial emotion and facial identity occurs in different neural substrates: the amygdala and STS for the former and FFA and aIT for the latter. PMID:26826513
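
    A minimal sketch of the decoding analysis, assuming scikit-learn: a linear SVM is cross-validated on trial-wise activity patterns from a face-selective region, and above-chance accuracy is taken as evidence that the region's patterns discriminate the expressions. The pattern matrix and labels below are random placeholders, not fMRI data.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score, StratifiedKFold

# Hypothetical ROI pattern matrix: one row per trial (voxel responses within, say,
# the amygdala) and one expression label per trial (0=neutral, 1=fear, 2=anger, 3=happy).
rng = np.random.default_rng(0)
patterns = rng.standard_normal((64, 200))
labels = np.repeat([0, 1, 2, 3], 16)

# Linear SVM decoding with stratified cross-validation.
scores = cross_val_score(SVC(kernel="linear"), patterns, labels,
                         cv=StratifiedKFold(n_splits=8, shuffle=True, random_state=0))
print("mean decoding accuracy:", scores.mean(), "(chance = 0.25)")
```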

  2. Recognition of facial expressions and prosodic cues with graded emotional intensities in adults with Asperger syndrome.

    PubMed

    Doi, Hirokazu; Fujisawa, Takashi X; Kanai, Chieko; Ohta, Haruhisa; Yokoi, Hideki; Iwanami, Akira; Kato, Nobumasa; Shinohara, Kazuyuki

    2013-09-01

    This study investigated the ability of adults with Asperger syndrome to recognize emotional categories of facial expressions and emotional prosodies with graded emotional intensities. The individuals with Asperger syndrome showed poorer recognition performance for angry and sad expressions from both facial and vocal information. The group difference in facial expression recognition was prominent for stimuli with low or intermediate emotional intensities. In contrast to this, the individuals with Asperger syndrome exhibited lower recognition accuracy than typically-developed controls mainly for emotional prosody with high emotional intensity. In facial expression recognition, Asperger and control groups showed an inversion effect for all categories. The magnitude of this effect was less in the Asperger group for angry and sad expressions, presumably attributable to reduced recruitment of the configural mode of face processing. The individuals with Asperger syndrome outperformed the control participants in recognizing inverted sad expressions, indicating enhanced processing of local facial information representing sad emotion. These results suggest that the adults with Asperger syndrome rely on modality-specific strategies in emotion recognition from facial expression and prosodic information. PMID:23371506

  3. 3D domain swapping causes extensive multimerisation of human interleukin-10 when expressed in planta.

    PubMed

    Westerhof, Lotte B; Wilbers, Ruud H P; Roosien, Jan; van de Velde, Jan; Goverse, Aska; Bakker, Jaap; Schots, Arjen

    2012-01-01

    Heterologous expression platforms of biopharmaceutical proteins have been significantly improved over the last decade. Further improvement can be established by examining the intrinsic properties of proteins. Interleukin-10 (IL-10) is an anti-inflammatory cytokine with a short half-life that plays an important role in re-establishing immune homeostasis. This homodimeric protein of 36 kDa has significant therapeutic potential to treat inflammatory and autoimmune diseases. In this study we show that the major production bottleneck of human IL-10 is not protein instability as previously suggested, but extensive multimerisation due to its intrinsic 3D domain swapping characteristic. Extensive multimerisation of human IL-10 could be visualised as granules in planta. On the other hand, mouse IL-10 hardly multimerised, which could be largely attributed to its glycosylation. By introducing a short glycine-serine-linker between the fourth and fifth alpha helix of human IL-10 a stable monomeric form of IL-10 (hIL-10(mono)) was created that no longer multimerised and increased yield up to 20-fold. However, hIL-10(mono) no longer had the ability to reduce pro-inflammatory cytokine secretion from lipopolysaccharide-stimulated macrophages. Forcing dimerisation restored biological activity. This was achieved by fusing human IL-10(mono) to the C-terminal end of constant domains 2 and 3 of human immunoglobulin A (Fcα), a natural dimer. Stable dimeric forms of IL-10, like Fcα-IL-10, may not only be a better format for improved production, but also a more suitable format for medical applications. PMID:23049703

  4. Impact of Childhood Maltreatment on the Recognition of Facial Expressions of Emotions

    PubMed Central

    Ardizzi, Martina; Martini, Francesca; Umiltà, Maria Alessandra; Evangelista, Valentina; Ravera, Roberto; Gallese, Vittorio

    2015-01-01

    The development of the explicit recognition of facial expressions of emotions can be affected by childhood maltreatment experiences. A previous study demonstrated the existence of an explicit recognition bias for angry facial expressions among a population of adolescent Sierra Leonean street-boys exposed to high levels of maltreatment. In the present study, the recognition bias for angry facial expressions was investigated in a younger population of street-children and age-matched controls. Participants performed a forced-choice facial expressions recognition task. Recognition bias was measured as participants’ tendency to over-attribute anger label to other negative facial expressions. Participants’ heart rate was assessed and related to their behavioral performance, as index of their stress-related physiological responses. Results demonstrated the presence of a recognition bias for angry facial expressions among street-children, also pinpointing a similar, although significantly less pronounced, tendency among controls. Participants’ performance was controlled for age, cognitive and educational levels and for naming skills. None of these variables influenced the recognition bias for angry facial expressions. Differently, a significant effect of heart rate on participants’ tendency to use anger label was evidenced. Taken together, these results suggest that childhood exposure to maltreatment experiences amplifies children’s “pre-existing bias” for anger labeling in forced-choice emotion recognition task. Moreover, they strengthen the thesis according to which the recognition bias for angry facial expressions is a manifestation of a functional adaptive mechanism that tunes victim’s perceptive and attentive focus on salient environmental social stimuli. PMID:26509890

  5. Patterning of Facial Expressions among Terminal Cancer Patients.

    ERIC Educational Resources Information Center

    Antonoff, Steven R.; Spilka, Bernard

    1985-01-01

    Evaluated the possible significance of nonverbal communication in 49 terminal cancer patients using the Facial Affect Scoring Technique. Results showed fear was highest in early stages of illness. Sadness increased regularly from the early to late phase. (JAC)

  6. Analysis of Gene Expression in 3D Spheroids Highlights a Survival Role for ASS1 in Mesothelioma

    PubMed Central

    Barbone, Dario; Van Dam, Loes; Follo, Carlo; Jithesh, Puthen V.; Zhang, Shu-Dong; Richards, William G.; Bueno, Raphael; Fennell, Dean A.; Broaddus, V. Courtney

    2016-01-01

    To investigate the underlying causes of chemoresistance in malignant pleural mesothelioma, we have studied mesothelioma cell lines as 3D spheroids, which acquire increased chemoresistance compared to 2D monolayers. We asked whether the gene expression of 3D spheroids would reveal mechanisms of resistance. To address this, we measured gene expression of three mesothelioma cell lines, M28, REN and VAMT, grown as 2D monolayers and 3D spheroids. A total of 209 genes were differentially expressed in common by the three cell lines in 3D (138 upregulated and 71 downregulated), although a clear resistance pathway was not apparent. We then compared the list of 3D genes with two publicly available datasets of gene expression of 56 pleural mesotheliomas compared to normal tissues. Interestingly, only three genes were increased in both 3D spheroids and human tumors: argininosuccinate synthase 1 (ASS1), annexin A4 (ANXA4) and major vault protein (MVP); of these, ASS1 was the only consistently upregulated of the three genes by qRT-PCR. To measure ASS1 protein expression, we stained 2 sets of tissue microarrays (TMA): one with 88 pleural mesothelioma samples and the other with additional 88 pleural mesotheliomas paired with matched normal tissues. Of the 176 tumors represented on the two TMAs, ASS1 was expressed in 87 (50%; staining greater than 1 up to 3+). For the paired samples, ASS1 expression in mesothelioma was significantly greater than in the normal tissues. Reduction of ASS1 expression by siRNA significantly sensitized mesothelioma spheroids to the pro-apoptotic effects of bortezomib and of cisplatin plus pemetrexed. Although mesothelioma is considered by many to be an ASS1-deficient tumor, our results show that ASS1 is elevated at the mRNA and protein levels in mesothelioma 3D spheroids and in human pleural mesotheliomas. We also have uncovered a survival role for ASS1, which may be amenable to targeting to undermine mesothelioma multicellular resistance. PMID:26982031
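
    The cross-cell-line comparison reduces to intersecting per-line lists of differentially expressed genes and splitting them by direction of change. A toy sketch in Python follows; apart from ASS1, ANXA4, and MVP, the gene names are invented placeholders.

```python
# Genes called differentially expressed (3D spheroid vs 2D monolayer) in each
# cell line, split by direction of change; entries other than ASS1/ANXA4/MVP
# are illustrative placeholders, not results from the study.
up = {
    "M28":  {"ASS1", "ANXA4", "MVP", "GENE_A"},
    "REN":  {"ASS1", "ANXA4", "MVP", "GENE_B"},
    "VAMT": {"ASS1", "ANXA4", "MVP", "GENE_C"},
}
down = {
    "M28":  {"GENE_D", "GENE_E"},
    "REN":  {"GENE_D", "GENE_F"},
    "VAMT": {"GENE_D", "GENE_G"},
}

common_up = set.intersection(*up.values())
common_down = set.intersection(*down.values())
print("upregulated in 3D in all three lines:", sorted(common_up))
print("downregulated in 3D in all three lines:", sorted(common_down))
```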

  7. On the temporal organization of facial identity and expression analysis: Inferences from event-related brain potentials.

    PubMed

    Martens, Ulla; Leuthold, Hartmut; Schweinberger, Stefan R

    2010-12-01

    In the present study, behavioral and electrophysiological markers of information processing-the lateralized readiness potential, the N170, and the P300-were recorded in order to assess the functional and temporal organization of facial identity and expression processing. A two-choice go/no-go task was used in which facial expression (happy vs. angry) determined response hand and response execution depended on facial familiarity (familiar vs. unfamiliar). The duration of facial identity and expression processing was manipulated in separate experiments. Together, the present findings in measures of overt and covert response activation indicate that facial identity is analyzed in parallel with, and typically somewhat faster than, facial expression. These data support a parallel model of face perception that assumes partial output from facial identity and expression processes to motor activation processes. PMID:21098811

  8. Electromyographic responses to emotional facial expressions in 6-7 year olds with autism spectrum disorders.

    PubMed

    Deschamps, P K H; Coppes, L; Kenemans, J L; Schutter, D J L G; Matthys, W

    2015-02-01

    This study aimed to examine facial mimicry in 6-7 year old children with autism spectrum disorder (ASD) and to explore whether facial mimicry was related to the severity of impairment in social responsiveness. Facial electromyographic activity in response to angry, fearful, sad and happy facial expressions was recorded in twenty 6-7 year old children with ASD and twenty-seven typically developing children. Even though results did not show differences in facial mimicry between children with ASD and typically developing children, impairment in social responsiveness was significantly associated with reduced fear mimicry in children with ASD. These findings demonstrate normal mimicry in children with ASD as compared to healthy controls, but that in children with ASD the degree of impairments in social responsiveness may be associated with reduced sensitivity to distress signals. PMID:23888357

  9. Dogs Evaluate Threatening Facial Expressions by Their Biological Validity--Evidence from Gazing Patterns.

    PubMed

    Somppi, Sanni; Törnqvist, Heini; Kujala, Miiamaaria V; Hänninen, Laura; Krause, Christina M; Vainio, Outi

    2016-01-01

    Appropriate response to companions' emotional signals is important for all social creatures. The emotional expressions of humans and non-human animals have analogies in their form and function, suggesting shared evolutionary roots, but very little is known about how animals other than primates view and process facial expressions. In primates, threat-related facial expressions evoke exceptional viewing patterns compared with neutral or positive stimuli. Here, we explore if domestic dogs (Canis familiaris) have such an attentional bias toward threatening social stimuli and whether observed emotional expressions affect dogs' gaze fixation distribution among the facial features (eyes, midface and mouth). We recorded the voluntary eye gaze of 31 domestic dogs during viewing of facial photographs of humans and dogs with three emotional expressions (threatening, pleasant and neutral). We found that dogs' gaze fixations spread systematically among facial features. The distribution of fixations was altered by the seen expression, but eyes were the most probable targets of the first fixations and gathered longer looking durations than mouth regardless of the viewed expression. The examination of the inner facial features as a whole revealed more pronounced scanning differences among expressions. This suggests that dogs do not base their perception of facial expressions on the viewing of single structures, but the interpretation of the composition formed by eyes, midface and mouth. Dogs evaluated social threat rapidly and this evaluation led to attentional bias, which was dependent on the depicted species: threatening conspecifics' faces evoked heightened attention but threatening human faces instead an avoidance response. We propose that threatening signals carrying differential biological validity are processed via distinctive neurocognitive pathways. Both of these mechanisms may have an adaptive significance for domestic dogs. The findings provide a novel perspective on

  10. Dogs Evaluate Threatening Facial Expressions by Their Biological Validity – Evidence from Gazing Patterns

    PubMed Central

    Somppi, Sanni; Törnqvist, Heini; Kujala, Miiamaaria V.; Hänninen, Laura; Krause, Christina M.; Vainio, Outi

    2016-01-01

    Appropriate response to companions’ emotional signals is important for all social creatures. The emotional expressions of humans and non-human animals have analogies in their form and function, suggesting shared evolutionary roots, but very little is known about how animals other than primates view and process facial expressions. In primates, threat-related facial expressions evoke exceptional viewing patterns compared with neutral or positive stimuli. Here, we explore if domestic dogs (Canis familiaris) have such an attentional bias toward threatening social stimuli and whether observed emotional expressions affect dogs’ gaze fixation distribution among the facial features (eyes, midface and mouth). We recorded the voluntary eye gaze of 31 domestic dogs during viewing of facial photographs of humans and dogs with three emotional expressions (threatening, pleasant and neutral). We found that dogs’ gaze fixations spread systematically among facial features. The distribution of fixations was altered by the seen expression, but eyes were the most probable targets of the first fixations and gathered longer looking durations than mouth regardless of the viewed expression. The examination of the inner facial features as a whole revealed more pronounced scanning differences among expressions. This suggests that dogs do not base their perception of facial expressions on the viewing of single structures, but the interpretation of the composition formed by eyes, midface and mouth. Dogs evaluated social threat rapidly and this evaluation led to attentional bias, which was dependent on the depicted species: threatening conspecifics’ faces evoked heightened attention but threatening human faces instead an avoidance response. We propose that threatening signals carrying differential biological validity are processed via distinctive neurocognitive pathways. Both of these mechanisms may have an adaptive significance for domestic dogs. The findings provide a novel

  11. Facial expression to emotional stimuli in non-psychotic disorders: A systematic review and meta-analysis.

    PubMed

    Davies, H; Wolz, I; Leppanen, J; Fernandez-Aranda, F; Schmidt, U; Tchanturia, K

    2016-05-01

    Facial expression of emotion is crucial to social interaction and emotion regulation; therefore, altered facial expressivity can be a contributing factor in social isolation, difficulties with emotion regulation and a target for therapy. This article provides a systematic review and meta-analysis of the literature on automatic emotional facial expression in people with non-psychotic disorders compared to healthy comparison groups. Studies in the review used an emotionally salient visual induction method, and reported on automatic facial expression in response to congruent stimuli. A total of 39 studies show alterations in emotional facial expression across all included disorders, except anxiety disorders. In depression, decreases in facial expression are mainly evident for positive affect. In eating disorders, a meta-analysis showed decreased facial expressivity in response to positive and negative stimuli. Studies in autism partially support generally decreased facial expressivity in this group. The data included in this review point towards decreased facial emotional expressivity in individuals with different non-psychotic disorders. This is the first review to synthesise facial expression studies across clinical disorders. PMID:26915928

  12. Inversion effects reveal dissociations in facial expression of emotion, gender, and object processing

    PubMed Central

    Pallett, Pamela M.; Meng, Ming

    2015-01-01

    To distinguish between high-level visual processing mechanisms, the degree to which holistic processing is involved in facial identity, facial expression, and object perception is often examined through measuring inversion effects. However, participants may be biased by different experimental paradigms to use more or less holistic processing. Here we take a novel psychophysical approach to directly compare human face and object processing in the same experiment, with face processing broken into two categories: variant properties and invariant properties as they were tested using facial expressions of emotion and gender, respectively. Specifically, participants completed two different perceptual discrimination tasks. One involved making judgments of stimulus similarity and the other tested the ability to detect differences between stimuli. Each task was completed for both upright and inverted stimuli. Results show significant inversion effects for the detection of differences in facial expressions of emotion and gender, but not for objects. More interestingly, participants exhibited a selective inversion deficit when making similarity judgments between different facial expressions of emotion, but not for gender or objects. These results suggest a three-way dissociation between facial expression of emotion, gender, and object processing. PMID:26283983

  13. Features classification using support vector machine for a facial expression recognition system

    NASA Astrophysics Data System (ADS)

    Patil, Rajesh A.; Sahula, Vineet; Mandal, Atanendu S.

    2012-10-01

    A methodology for automatic facial expression recognition in image sequences is proposed, which makes use of the Candide wire frame model and an active appearance algorithm for tracking, and a support vector machine (SVM) for classification. A face is detected automatically in the given image sequence, and by adapting the Candide wire frame model to the first frame of the face image sequence, facial features in the subsequent frames are tracked using an active appearance algorithm. The algorithm adapts the Candide wire frame model to the face in each of the frames and then automatically tracks the grid in consecutive video frames over time. We require that the first frame of the image sequence corresponds to the neutral facial expression, while the last frame corresponds to the greatest intensity of the facial expression. The geometrical displacement of the Candide wire frame nodes, defined as the difference of the node coordinates between the first frame and the frame of greatest expression intensity, is used as input to the SVM, which classifies the facial expression into one of the classes, viz. happy, surprise, sadness, anger, disgust, and fear.
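
    A minimal sketch of the classification step, assuming scikit-learn: the feature vector is the per-node displacement of the tracked grid between the neutral first frame and the apex frame, fed to a multi-class SVM. The node count (113, as in Candide-3) and all coordinate data below are illustrative assumptions, not outputs of the described tracker.

```python
import numpy as np
from sklearn.svm import SVC

EMOTIONS = ["happy", "surprise", "sadness", "anger", "disgust", "fear"]

def displacement_features(neutral_nodes, apex_nodes):
    """Geometric feature vector: per-node (x, y) displacement between the neutral
    first frame and the apex (maximum-intensity) frame of the tracked grid."""
    return (apex_nodes - neutral_nodes).ravel()

# Placeholder tracked node coordinates (n_sequences, n_nodes, 2); a real pipeline
# would obtain these from the active appearance tracker described in the abstract.
rng = np.random.default_rng(0)
neutral = rng.random((60, 113, 2))
apex = neutral + 0.05 * rng.standard_normal((60, 113, 2))
y = rng.integers(0, 6, 60)

X = np.array([displacement_features(n, a) for n, a in zip(neutral, apex)])
clf = SVC(kernel="rbf").fit(X, y)
print(EMOTIONS[clf.predict(X[:1])[0]])
```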

  14. Social Risk and Depression: Evidence from Manual and Automatic Facial Expression Analysis

    PubMed Central

    Girard, Jeffrey M.; Cohn, Jeffrey F.; Mahoor, Mohammad H.; Mavadati, Seyedmohammad; Rosenwald, Dean P.

    2014-01-01

    Investigated the relationship between change over time in severity of depression symptoms and facial expression. Depressed participants were followed over the course of treatment and video recorded during a series of clinical interviews. Facial expressions were analyzed from the video using both manual and automatic systems. Automatic and manual coding were highly consistent for FACS action units, and showed similar effects for change over time in depression severity. For both systems, when symptom severity was high, participants made more facial expressions associated with contempt, smiled less, and those smiles that occurred were more likely to be accompanied by facial actions associated with contempt. These results are consistent with the “social risk hypothesis” of depression. According to this hypothesis, when symptoms are severe, depressed participants withdraw from other people in order to protect themselves from anticipated rejection, scorn, and social exclusion. As their symptoms fade, participants send more signals indicating a willingness to affiliate. The finding that automatic facial expression analysis was both consistent with manual coding and produced the same pattern of depression effects suggests that automatic facial expression analysis may be ready for use in behavioral and clinical science. PMID:24598859

  15. Neurophysiology of spontaneous facial expressions: I. Motor control of the upper and lower face is behaviorally independent in adults.

    PubMed

    Ross, Elliott D; Gupta, Smita S; Adnan, Asif M; Holden, Thomas L; Havlicek, Joseph; Radhakrishnan, Sridhar

    2016-03-01

    Facial expressions are described traditionally as monolithic entities. However, humans have the capacity to produce facial blends, in which the upper and lower face simultaneously display different emotional expressions. This, in turn, has led to the Component Theory of facial expressions. Recent neuroanatomical studies in monkeys have demonstrated that there are separate cortical motor areas for controlling the upper and lower face that, presumably, also occur in humans. The lower face is represented on the posterior ventrolateral surface of the frontal lobes in the primary motor and premotor cortices and the upper face is represented on the medial surface of the posterior frontal lobes in the supplementary motor and anterior cingulate cortices. Our laboratory has been engaged in a series of studies exploring the perception and production of facial blends. Using high-speed videography, we began measuring the temporal aspects of facial expressions to develop a more complete understanding of the neurophysiology underlying facial expressions and facial blends. The goal of the research presented here was to determine if spontaneous facial expressions in adults are predominantly monolithic or exhibit independent motor control of the upper and lower face. We found that spontaneous facial expressions are very complex and that the motor control of the upper and lower face is overwhelmingly independent, thus robustly supporting the Component Theory of facial expressions. Seemingly monolithic expressions, be they full facial or facial blends, are most likely the result of a timing coincident rather than a synchronous coordination between the ventrolateral and medial cortical motor areas responsible for controlling the lower and upper face, respectively. In addition, we found evidence that the right and left face may also exhibit independent motor control, thus supporting the concept that spontaneous facial expressions are organized predominantly across the horizontal facial

  16. Development and Standardization of Extended ChaeLee Korean Facial Expressions of Emotions

    PubMed Central

    Lee, Kyoung-Uk; Kim, JiEun; Yeon, Bora; Kim, Seung-Hwan

    2013-01-01

    Objective In recent years there has been an enormous increase of neuroscience research using the facial expressions of emotion. This has led to a need for ethnically specific facial expressions data, due to differences of facial emotion processing among different ethnicities. Methods Fifty professional actors were asked to pose with each of the following facial expressions in turn: happiness, sadness, fear, anger, disgust, surprise, and neutral. A total of 283 facial pictures of 40 actors were selected to be included in the validation study. Facial expression emotion identification was performed in a validation study by 104 healthy raters who provided emotion labeling, valence ratings, and arousal ratings. Results A total of 259 images of 37 actors were selected for inclusion in the Extended ChaeLee Korean Facial Expressions of Emotions tool, based on the analysis of results. In these images, the actors' mean age was 38±11.1 years (range 26-60 years), with 16 (43.2%) males and 21 (56.8%) females. The consistency varied by emotion type, showing the highest for happiness (95.5%) and the lowest for fear (49.0%). The mean scores for the valence ratings ranged from 4.0 (happiness) to 1.9 (sadness, anger, and disgust). The mean scores for the arousal ratings ranged from 3.7 (anger and fear) to 2.5 (neutral). Conclusion We obtained facial expressions from individuals of Korean ethnicity and performed a study to validate them. Our results provide a tool for the affective neurosciences which could be used for the investigation of mechanisms of emotion processing in healthy individuals as well as in patients with various psychiatric disorders. PMID:23798964

  17. Recognition of facial expressions of emotion in adults with Down syndrome.

    PubMed

    Virji-Babul, Naznin; Watt, Kimberley; Nathoo, Farouk; Johnson, Peter

    2012-08-01

    Research on facial expressions in individuals with Down syndrome (DS) has been conducted using photographs. Our goal was to examine the effect of motion on perception of emotional expressions. Adults with DS, adults with typical development matched for chronological age (CA), and children with typical development matched for developmental age (DA) viewed photographs and video clips of facial expressions of: happy, sad, mad, and scared. The odds of accurate identification of facial expressions were 2.7 times greater for video clips compared with photographs. The odds of accurate identification of expressions of mad and scared were greater for video clips compared with photographs. The odds of accurate identification of expressions of mad and sad were greater for adults but did not differ between adults with DS and children. Adults with DS demonstrated the lowest accuracy for recognition of scared. These results support the importance of motion cues in evaluating the social skills of individuals with DS. PMID:22304421

  18. Recognizing dynamic facial expressions of emotion: Specificity and intensity effects in event-related brain potentials.

    PubMed

    Recio, Guillermo; Schacht, Annekathrin; Sommer, Werner

    2014-02-01

    Emotional facial expressions usually arise dynamically from a neutral expression. Yet, most previous research focused on static images. The present study investigated basic aspects of processing dynamic facial expressions. In two experiments, we presented short videos of facial expressions of six basic emotions and non-emotional facial movements emerging at variable and fixed rise times, attaining different intensity levels. In event-related brain potentials (ERPs), effects of emotion, and also of non-emotional movements, appeared as an early posterior negativity (EPN) between 200 and 350 ms, suggesting an overall facilitation of early visual encoding for all facial movements. These EPN effects were emotion-unspecific. In contrast, relative to happiness and neutral expressions, negative emotional expressions elicited larger late positive ERP components (LPCs), indicating more elaborate processing. Both EPN and LPC amplitudes increased with expression intensity. Effects of emotion and intensity were additive, indicating that intensity (understood as the degree of motion) increases the impact of emotional expressions but not their quality. These processes can be driven by all basic emotions, and there is little emotion-specificity even when statistical power is considerable (N = 102 in Experiment 2). PMID:24361701

  19. Interpreting text messages with graphic facial expression by deaf and hearing people.

    PubMed

    Saegusa, Chihiro; Namatame, Miki; Watanabe, Katsumi

    2015-01-01

    In interpreting verbal messages, humans use not only verbal information but also non-verbal signals such as facial expression. For example, when a person says "yes" with a troubled face, what he or she really means appears ambiguous. In the present study, we examined how deaf and hearing people differ in perceiving real meanings in texts accompanied by representations of facial expression. Deaf and hearing participants were asked to imagine that the face presented on the computer monitor was asked a question from another person (e.g., do you like her?). They observed either a realistic or a schematic face with a different magnitude of positive or negative expression on a computer monitor. A balloon that contained either a positive or negative text response to the question appeared at the same time as the face. Then, participants rated how much the individual on the monitor really meant it (i.e., perceived earnestness), using a 7-point scale. Results showed that the facial expression significantly modulated the perceived earnestness. The influence of positive expression on negative text responses was relatively weaker than that of negative expression on positive responses (i.e., "no" tended to mean "no" irrespective of facial expression) for both participant groups. However, this asymmetrical effect was stronger in the hearing group. These results suggest that the contribution of facial expression in perceiving real meanings from text messages is qualitatively similar but quantitatively different between deaf and hearing people. PMID:25883582

  20. Interpreting text messages with graphic facial expression by deaf and hearing people

    PubMed Central

    Saegusa, Chihiro; Namatame, Miki; Watanabe, Katsumi

    2015-01-01

    In interpreting verbal messages, humans use not only verbal information but also non-verbal signals such as facial expression. For example, when a person says “yes” with a troubled face, what he or she really means appears ambiguous. In the present study, we examined how deaf and hearing people differ in perceiving real meanings in texts accompanied by representations of facial expression. Deaf and hearing participants were asked to imagine that the face presented on the computer monitor was asked a question from another person (e.g., do you like her?). They observed either a realistic or a schematic face with a different magnitude of positive or negative expression on a computer monitor. A balloon that contained either a positive or negative text response to the question appeared at the same time as the face. Then, participants rated how much the individual on the monitor really meant it (i.e., perceived earnestness), using a 7-point scale. Results showed that the facial expression significantly modulated the perceived earnestness. The influence of positive expression on negative text responses was relatively weaker than that of negative expression on positive responses (i.e., “no” tended to mean “no” irrespective of facial expression) for both participant groups. However, this asymmetrical effect was stronger in the hearing group. These results suggest that the contribution of facial expression in perceiving real meanings from text messages is qualitatively similar but quantitatively different between deaf and hearing people. PMID:25883582

  1. An optimized ERP brain-computer interface based on facial expression changes

    NASA Astrophysics Data System (ADS)

    Jin, Jing; Daly, Ian; Zhang, Yu; Wang, Xingyu; Cichocki, Andrzej

    2014-06-01

    Objective. Interferences from spatially adjacent non-target stimuli are known to evoke event-related potentials (ERPs) during non-target flashes and, therefore, lead to false positives. This phenomenon was commonly seen in visual attention-based brain-computer interfaces (BCIs) using conspicuous stimuli and is known to adversely affect the performance of BCI systems. Although users try to focus on the target stimulus, they cannot help but be affected by conspicuous changes of the stimuli (such as flashes or presenting images) which were adjacent to the target stimulus. Furthermore, subjects have reported that conspicuous stimuli made them tired and annoyed. In view of this, the aim of this study was to reduce adjacent interference, annoyance and fatigue using a new stimulus presentation pattern based upon facial expression changes. Our goal was not to design a new pattern which could evoke larger ERPs than the face pattern, but to design a new pattern which could reduce adjacent interference, annoyance and fatigue, and evoke ERPs as good as those observed during the face pattern. Approach. Positive facial expressions could be changed to negative facial expressions by minor changes to the original facial image. Although the changes are minor, the contrast is big enough to evoke strong ERPs. In this paper, a facial expression change pattern between positive and negative facial expressions was used to attempt to minimize interference effects. This was compared against two different conditions, a shuffled pattern containing the same shapes and colours as the facial expression change pattern, but without the semantic content associated with a change in expression, and a face versus no face pattern. Comparisons were made in terms of classification accuracy and information transfer rate as well as user supplied subjective measures. Main results. The results showed that interferences from adjacent stimuli, annoyance and the fatigue experienced by the subjects could be
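
    The comparisons above are made in terms of classification accuracy and information transfer rate. As a point of reference, the sketch below computes the standard Wolpaw information transfer rate from the number of selectable targets, the classification accuracy, and the selection rate; the numbers plugged in are illustrative and are not results from this study.

    ```python
    import math

    def wolpaw_itr(n_classes: int, accuracy: float, selections_per_min: float) -> float:
        """Standard Wolpaw information transfer rate, in bits per minute."""
        p = accuracy
        bits = math.log2(n_classes)
        if p > 0.0:
            bits += p * math.log2(p)
        if p < 1.0:
            bits += (1 - p) * math.log2((1 - p) / (n_classes - 1))
        return max(bits, 0.0) * selections_per_min

    # Illustrative numbers only: a 6 x 6 speller, 90% accuracy, 4 selections per minute.
    print(f"{wolpaw_itr(36, 0.90, 4):.1f} bits/min")
    ```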

  2. Support vector machine-based facial-expression recognition method combining shape and appearance

    NASA Astrophysics Data System (ADS)

    Han, Eun Jung; Kang, Byung Jun; Park, Kang Ryoung; Lee, Sangyoun

    2010-11-01

    Facial expression recognition can be widely used for various applications, such as emotion-based human-machine interaction, intelligent robot interfaces, face recognition robust to expression variation, etc. Previous studies have been classified as either shape- or appearance-based recognition. The shape-based method has the disadvantage that the individual variance of facial feature points exists irrespective of similar expressions, which can cause a reduction of the recognition accuracy. The appearance-based method has a limitation in that the textural information of the face is very sensitive to variations in illumination. To overcome these problems, a new facial-expression recognition method is proposed, which combines both shape and appearance information, based on the support vector machine (SVM). This research is novel in the following three ways as compared to previous works. First, the facial feature points are automatically detected by using an active appearance model. From these, the shape-based recognition is performed by using the ratios between the facial feature points based on the facial-action coding system. Second, an SVM trained to recognize same- and different-expression classes is used to combine the two matching scores obtained from the shape- and appearance-based recognitions. Finally, a single SVM is trained to discriminate four different expressions: neutral, smile, anger, and scream. The expression of the input facial image is determined as the class for which the SVM output is minimal, which substantially enhances the accuracy of the expression recognition. The experimental results showed that the recognition accuracy of the proposed method was better than that of previous approaches and other fusion methods.
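
    As a rough illustration of the score-level fusion step described above, the sketch below trains an SVM on pairs of matching scores (one shape-based, one appearance-based) to separate same-expression from different-expression comparisons. The synthetic score distributions and the RBF kernel are assumptions made for the example; this is not the authors' implementation.

    ```python
    # Score-level fusion sketch: each sample is a pair of matching scores
    # [shape_score, appearance_score]; label 1 = same-expression pair.
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    same_pairs = rng.normal(loc=[0.8, 0.7], scale=0.1, size=(100, 2))
    diff_pairs = rng.normal(loc=[0.4, 0.3], scale=0.1, size=(100, 2))
    X = np.vstack([same_pairs, diff_pairs])
    y = np.r_[np.ones(100), np.zeros(100)]

    fusion_svm = SVC(kernel="rbf").fit(X, y)
    print(fusion_svm.predict([[0.75, 0.65], [0.35, 0.40]]))  # expected: [1. 0.]
    ```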

  3. Influence of Intensity on Children's Sensitivity to Happy, Sad, and Fearful Facial Expressions

    ERIC Educational Resources Information Center

    Gao, Xiaoqing; Maurer, Daphne

    2009-01-01

    Most previous studies investigating children's ability to recognize facial expressions used only intense exemplars. Here we compared the sensitivity of 5-, 7-, and 10-year-olds with that of adults (n = 24 per age group) for less intense expressions of happiness, sadness, and fear. The developmental patterns differed across expressions. For…

  4. Revisiting the Relationship between the Processing of Gaze Direction and the Processing of Facial Expression

    ERIC Educational Resources Information Center

    Ganel, Tzvi

    2011-01-01

    There is mixed evidence on the nature of the relationship between the perception of gaze direction and the perception of facial expressions. Major support for shared processing of gaze and expression comes from behavioral studies that showed that observers cannot process expression or gaze and ignore irrelevant variations in the other dimension.…

  5. Does Facial Expressivity Count? How Typically Developing Children Respond Initially to Children with Autism

    ERIC Educational Resources Information Center

    Stagg, Steven D.; Slavny, Rachel; Hand, Charlotte; Cardoso, Alice; Smith, Pamela

    2014-01-01

    Research investigating expressivity in children with autism spectrum disorder has reported flat affect or bizarre facial expressivity within this population; however, the impact expressivity may have on first impression formation has received little research input. We examined how videos of children with autism spectrum disorder were rated for…

  6. Facial expression and pain in the critically ill non-communicative patient: State of science review

    PubMed Central

    Arif-Rahu, Mamoona; Grap, Mary Jo

    2013-01-01

    Summary The aim of this review is to analyse the evidence related to the relationship between facial expression and pain assessment tools in the critically ill non-communicative patients. Pain assessment is a significant challenge in critically ill adults, especially those who are unable to communicate their pain level. During critical illness, many factors alter verbal communication with patients including tracheal intubation, reduced level of consciousness and administration of sedation and analgesia. The first step in providing adequate pain relief is using a systematic, consistent assessment and documentation of pain. However, no single tool is universally accepted for use in these patients. A common component of behavioural pain tools is evaluation of facial behaviours. Although use of facial expression is an important behavioural measure of pain intensity, there are inconsistencies in defining descriptors of facial behaviour. Therefore, it is important to understand facial expression in non-communicative critically ill patients experiencing pain to assist in the development of concise descriptors to enhance pain evaluation and management. This paper will provide a comprehensive review of the current state of science in the study of facial expression and its application in pain assessment tools. PMID:21051234

  7. 5-HTTLPR modulates the recognition accuracy and exploration of emotional facial expressions.

    PubMed

    Boll, Sabrina; Gamer, Matthias

    2014-01-01

    Individual genetic differences in the serotonin transporter-linked polymorphic region (5-HTTLPR) have been associated with variations in the sensitivity to social and emotional cues as well as altered amygdala reactivity to facial expressions of emotion. Amygdala activation has further been shown to trigger gaze changes toward diagnostically relevant facial features. The current study examined whether altered socio-emotional reactivity in variants of the 5-HTTLPR promoter polymorphism reflects individual differences in attending to diagnostic features of facial expressions. For this purpose, visual exploration of emotional facial expressions was compared between a low (n = 39) and a high (n = 40) 5-HTT expressing group of healthy human volunteers in an eye tracking paradigm. Emotional faces were presented while manipulating the initial fixation such that saccadic changes toward the eyes and toward the mouth could be identified. We found that the low vs. the high 5-HTT group demonstrated greater accuracy with regard to emotion classifications, particularly when faces were presented for a longer duration. No group differences in gaze orientation toward diagnostic facial features could be observed. However, participants in the low 5-HTT group exhibited more and faster fixation changes for certain emotions when faces were presented for a longer duration and overall face fixation times were reduced for this genotype group. These results suggest that the 5-HTT gene influences social perception by modulating the general vigilance to social cues rather than selectively affecting the pre-attentive detection of diagnostic facial features. PMID:25100964

  8. Facial expression recognition based on fused Feature of PCA and LDP

    NASA Astrophysics Data System (ADS)

    Yi, Zhang; Mao, Hou-lin; Luo, Yuan

    2014-11-01

    Facial expression recognition is an important part of the study of man-machine interaction. Principal component analysis (PCA) is a statistical feature-extraction method that operates on the global grayscale features of the whole image, but these global grayscale features are environmentally sensitive. In order to recognize facial expressions accurately, a fused method of principal component analysis and local directional pattern (LDP) is introduced in this paper. First, PCA extracts the global features of the whole grayscale image; LDP extracts the local grayscale texture features of the mouth and eye regions, which contribute most to facial expression recognition, to complement the global grayscale features of PCA. A support vector machine (SVM) classifier is then adopted for expression classification. Experimental results demonstrate that this method can classify different expressions more effectively and achieves a higher recognition rate than the traditional method.
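
    A minimal sketch of the feature-level fusion idea is given below: PCA features computed from whole images are concatenated with local texture histograms from the eye and mouth regions and fed to an SVM. For brevity, a plain intensity histogram stands in for the LDP descriptor (a faithful LDP would encode the dominant Kirsch edge responses per pixel), and the images, region coordinates, and labels are synthetic placeholders.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.svm import SVC

    rng = np.random.default_rng(1)
    n_samples, img_h, img_w = 200, 64, 64
    images = rng.random((n_samples, img_h, img_w))   # synthetic grayscale faces
    labels = rng.integers(0, 6, size=n_samples)      # six expression classes

    def region_histogram(img, top, bottom, left, right, bins=16):
        """Stand-in for an LDP histogram over one facial region."""
        region = img[top:bottom, left:right]
        hist, _ = np.histogram(region, bins=bins, range=(0.0, 1.0), density=True)
        return hist

    # Global features: PCA on the whole grayscale image.
    flat = images.reshape(n_samples, -1)
    global_feats = PCA(n_components=30).fit_transform(flat)

    # Local features: histograms from an eye band and a mouth region (assumed coordinates).
    local_feats = np.array([
        np.concatenate([region_histogram(im, 12, 28, 8, 56),
                        region_histogram(im, 40, 60, 16, 48)])
        for im in images
    ])

    X = np.hstack([global_feats, local_feats])
    clf = SVC(kernel="rbf").fit(X, labels)
    print("training accuracy:", clf.score(X, labels))
    ```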

  9. Young Infants Match Facial and Vocal Emotional Expressions of Other Infants

    PubMed Central

    Vaillant-Molina, Mariana; Bahrick, Lorraine E.; Flom, Ross

    2013-01-01

    Research has demonstrated that infants recognize emotional expressions of adults in the first half-year of life. We extended this research to a new domain, infant perception of the expressions of other infants. In an intermodal matching procedure, 3.5- and 5-month-old infants heard a series of infant vocal expressions (positive and negative affect) along with side-by-side dynamic videos in which one infant conveyed positive facial affect and another infant conveyed negative facial affect. Results demonstrated that 5-month-olds matched the vocal expressions with the affectively congruent facial expressions, whereas 3.5-month-olds showed no evidence of matching. These findings indicate that by 5 months of age, infants detect, discriminate, and match the facial and vocal affective displays of other infants. Further, because the facial and vocal expressions were portrayed by different infants and shared no face-voice synchrony, temporal or intensity patterning, matching was likely based on detection of a more general affective valence common to the face and voice. PMID:24302853

  10. Comparative Analysis of 3D Expression Patterns of Transcription Factor Genes and Digit Fate Maps in the Developing Chick Wing

    PubMed Central

    Delgado, Irene; Bain, Andrew; Planzer, Thorsten; Sherman, Adrian; Sang, Helen; Tickle, Cheryll

    2011-01-01

    Hoxd13, Tbx2, Tbx3, Sall1 and Sall3 genes are candidates for encoding antero-posterior positional values in the developing chick wing and specifying digit identity. In order to build up a detailed profile of gene expression patterns in cell lineages that give rise to each of the digits over time, we compared three-dimensional (3D) expression patterns of these genes during wing development and related them to digit fate maps. 3D gene expression data at stages 21, 24 and 27, spanning early bud to digital plate formation and captured from in situ hybridisation whole mounts using Optical Projection Tomography (OPT), were mapped to reference wing bud models. Grafts of wing bud tissue from GFP chicken embryos were used to fate map regions of the wing bud giving rise to each digit; 3D images of the grafts were captured using OPT and mapped onto the same models. Computational analysis of the combined computerised data revealed that Tbx2 and Tbx3 are expressed in digit 3 and 4 progenitors at all stages, consistent with encoding stable antero-posterior positional values established in the early bud; Hoxd13 and Sall1 expression is more dynamic, being associated with posterior digit 3 and 4 progenitors in the early bud but later becoming associated with anterior digit 2 progenitors in the digital plate. Sox9 expression in digit condensations lies within domains of digit progenitors defined by fate mapping; digit 3 condensations express Hoxd13 and Sall1, digit 4 condensations Hoxd13, Tbx3 and, to a lesser extent, Tbx2. Sall3 is only transiently expressed in digit 3 progenitors at stage 24, together with Sall1 and Hoxd13, and is then excluded from the digital plate. These dynamic patterns of expression suggest that these genes may play different roles in digit identity, either together or in combination, at different stages including the digit condensation stage. PMID:21526123

  11. Perceptual and affective mechanisms in facial expression recognition: An integrative review.

    PubMed

    Calvo, Manuel G; Nummenmaa, Lauri

    2016-09-01

    Facial expressions of emotion involve a physical component of morphological changes in a face and an affective component conveying information about the expresser's internal feelings. It remains unresolved how much recognition and discrimination of expressions rely on the perception of morphological patterns or the processing of affective content. This review of research on the role of visual and emotional factors in expression recognition reached three major conclusions. First, behavioral, neurophysiological, and computational measures indicate that basic expressions are reliably recognized and discriminated from one another, albeit the effect may be inflated by the use of prototypical expression stimuli and forced-choice responses. Second, affective content along the dimensions of valence and arousal is extracted early from facial expressions, although this coarse affective representation contributes minimally to categorical recognition of specific expressions. Third, the physical configuration and visual saliency of facial features contribute significantly to expression recognition, with "emotionless" computational models being able to reproduce some of the basic phenomena demonstrated in human observers. We conclude that facial expression recognition, as it has been investigated in conventional laboratory tasks, depends to a greater extent on perceptual than affective information and mechanisms. PMID:26212348

  12. Common cues to emotion in the dynamic facial expressions of speech and song

    PubMed Central

    Livingstone, Steven R.; Thompson, William F.; Wanderley, Marcelo M.; Palmer, Caroline

    2015-01-01

    Speech and song are universal forms of vocalization that may share aspects of emotional expression. Research has focused on parallels in acoustic features, overlooking facial cues to emotion. In three experiments, we compared moving facial expressions in speech and song. In Experiment 1, vocalists spoke and sang statements each with five emotions. Vocalists exhibited emotion-dependent movements of the eyebrows and lip corners that transcended speech–song differences. Vocalists’ jaw movements were coupled to their acoustic intensity, exhibiting differences across emotion and speech–song. Vocalists’ emotional movements extended beyond vocal sound to include large sustained expressions, suggesting a communicative function. In Experiment 2, viewers judged silent videos of vocalists’ facial expressions prior to, during, and following vocalization. Emotional intentions were identified accurately for movements during and after vocalization, suggesting that these movements support the acoustic message. Experiment 3 compared emotional identification in voice-only, face-only, and face-and-voice recordings. Emotions were poorly identified in voice-only singing, yet were identified accurately in all other conditions, confirming that facial expressions conveyed emotion more accurately than the voice in song, whereas the two channels were equivalent in speech. Collectively, these findings highlight broad commonalities in the facial cues to emotion in speech and song, yet highlight differences in perception and acoustic-motor production. PMID:25424388

  13. Facial EMG Responses to Emotional Expressions Are Related to Emotion Perception Ability

    PubMed Central

    Künecke, Janina; Hildebrandt, Andrea; Recio, Guillermo; Sommer, Werner; Wilhelm, Oliver

    2014-01-01

    Although most people can identify facial expressions of emotions well, they still differ in this ability. According to embodied simulation theories, understanding emotions of others is fostered by involuntarily mimicking the perceived expressions, causing a “reactivation” of the corresponding mental state. Some studies suggest automatic facial mimicry during expression viewing; however, findings on the relationship between mimicry and emotion perception abilities are equivocal. The present study investigated individual differences in emotion perception and its relationship to facial muscle responses, recorded with electromyogram (EMG), in response to emotional facial expressions. N = 269 participants completed multiple tasks measuring face and emotion perception. EMG recordings were taken from a subsample (N = 110) in an independent emotion classification task of short videos displaying six emotions. Confirmatory factor analyses of the m. corrugator supercilii in response to angry, happy, sad, and neutral expressions showed that individual differences in corrugator activity can be separated into a general response to all faces and an emotion-related response. Structural equation modeling revealed a substantial relationship between the emotion-related response and emotion perception ability, providing evidence for the role of facial muscle activation in emotion perception from an individual differences perspective. PMID:24489647

  14. The Facial Expressive Action Stimulus Test. A test battery for the assessment of face memory, face and object perception, configuration processing, and facial expression recognition

    PubMed Central

    de Gelder, Beatrice; Huis in ‘t Veld, Elisabeth M. J.; Van den Stock, Jan

    2015-01-01

    There are many ways to assess face perception skills. In this study, we describe a novel task battery FEAST (Facial Expressive Action Stimulus Test) developed to test recognition of identity and expressions of human faces as well as stimulus control categories. The FEAST consists of a neutral and emotional face memory task, a face and shoe identity matching task, a face and house part-to-whole matching task, and a human and animal facial expression matching task. The identity and part-to-whole matching tasks contain both upright and inverted conditions. The results provide reference data of a healthy sample of controls in two age groups for future users of the FEAST. PMID:26579004

  15. Learning the spherical harmonic features for 3-D face recognition.

    PubMed

    Liu, Peijiang; Wang, Yunhong; Huang, Di; Zhang, Zhaoxiang; Chen, Liming

    2013-03-01

    In this paper, a competitive method for 3-D face recognition (FR) using spherical harmonic features (SHF) is proposed. With this solution, 3-D face models are characterized by the energies contained in spherical harmonics with different frequencies, thereby enabling the capture of both gross shape and fine surface details of a 3-D facial surface. This is in clear contrast to most 3-D FR techniques which are either holistic or feature based, using local features extracted from distinctive points. First, 3-D face models are represented in a canonical representation, namely, spherical depth map, by which SHF can be calculated. Then, considering the predictive contribution of each SHF feature, especially in the presence of facial expression and occlusion, feature selection methods are used to improve the predictive performance and provide faster and more cost-effective predictors. Experiments have been carried out on three public 3-D face datasets, SHREC2007, FRGC v2.0, and Bosphorus, with increasing difficulties in terms of facial expression, pose, and occlusion, and which demonstrate the effectiveness of the proposed method. PMID:23060332
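
    The core feature computation, the energy carried by each spherical-harmonic degree of a spherical depth map, can be sketched as follows. The rectangle-rule quadrature, grid resolution, and toy depth map are assumptions made for the example; a real pipeline would build the spherical depth map from a registered 3-D face scan and add the feature-selection stage described above.

    ```python
    import numpy as np
    from scipy.special import sph_harm  # sph_harm(m, l, azimuth, polar)

    n_polar, n_azimuth, max_degree = 64, 128, 16
    polar = np.linspace(0, np.pi, n_polar)            # polar angle, 0..pi
    azimuth = np.linspace(0, 2 * np.pi, n_azimuth)    # azimuthal angle, 0..2pi
    AZ, POL = np.meshgrid(azimuth, polar)
    d_pol, d_az = np.pi / (n_polar - 1), 2 * np.pi / (n_azimuth - 1)

    # Toy "spherical depth map"; a real one would come from a registered 3-D scan.
    depth_map = 1.0 + 0.1 * np.cos(3 * POL) * np.sin(2 * AZ)

    def sh_energy_features(f, l_max=max_degree):
        """Per-degree spherical-harmonic energies of a function on the sphere."""
        energies = []
        for l in range(l_max + 1):
            e = 0.0
            for m in range(-l, l + 1):
                Y = sph_harm(m, l, AZ, POL)
                coeff = np.sum(f * np.conj(Y) * np.sin(POL)) * d_pol * d_az
                e += abs(coeff) ** 2
            energies.append(e)
        return np.array(energies)

    print(sh_energy_features(depth_map)[:5])
    ```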

  16. The Development of Dynamic Facial Expression Recognition at Different Intensities in 4- to 18-Year-Olds

    ERIC Educational Resources Information Center

    Montirosso, Rosario; Peverelli, Milena; Frigerio, Elisa; Crespi, Monica; Borgatti, Renato

    2010-01-01

    The primary purpose of this study was to examine the effect of the intensity of emotion expression on children's developing ability to label emotion during a dynamic presentation of five facial expressions (anger, disgust, fear, happiness, and sadness). A computerized task (AFFECT--animated full facial expression comprehension test) was used to…

  17. Muscles of facial expression in the chimpanzee (Pan troglodytes): descriptive, comparative and phylogenetic contexts

    PubMed Central

    Burrows, Anne M; Waller, Bridget M; Parr, Lisa A; Bonar, Christopher J

    2006-01-01

    Facial expressions are a critical mode of non-vocal communication for many mammals, particularly non-human primates. Although chimpanzees (Pan troglodytes) have an elaborate repertoire of facial signals, little is known about the facial expression (i.e. mimetic) musculature underlying these movements, especially when compared with some other catarrhines. Here we present a detailed description of the facial muscles of the chimpanzee, framed in comparative and phylogenetic contexts, through the dissection of preserved faces using a novel approach. The arrangement and appearance of muscles were noted and compared with previous studies of chimpanzees and with prosimians, cercopithecoids and humans. The results showed 23 mimetic muscles in P. troglodytes, including a thin sphincter colli muscle, reported previously only in adult prosimians, a bi-layered zygomaticus major muscle and a distinct risorius muscle. The presence of these muscles in such definition supports previous studies that describe an elaborate and highly graded facial communication system in this species that remains qualitatively different from that reported for other non-human primate species. In addition, there are minimal anatomical differences between chimpanzees and humans, contrary to conclusions from previous studies. These results amplify the importance of understanding facial musculature in primate taxa, which may hold great taxonomic value. PMID:16441560

  18. Production of Emotional Facial Expressions in European American, Japanese, and Chinese Infants.

    ERIC Educational Resources Information Center

    Camras, Linda A.; And Others

    1998-01-01

    European American, Japanese, and Chinese 11-month-olds participated in emotion-inducing laboratory procedures. Facial responses were scored with BabyFACS, an anatomically based coding system. Overall, Chinese infants were less expressive than European American and Japanese infants, suggesting that differences in expressivity between European…

  19. The Role of Facial Expressions in Attention-Orienting in Adults and Infants

    ERIC Educational Resources Information Center

    Rigato, Silvia; Menon, Enrica; Di Gangi, Valentina; George, Nathalie; Farroni, Teresa

    2013-01-01

    Faces convey many signals (i.e., gaze or expressions) essential for interpersonal interaction. We have previously shown that facial expressions of emotion and gaze direction are processed and integrated in specific combinations early in life. These findings open a number of developmental questions and specifically in this paper we address whether…

  20. Strategies for Perceiving Facial Expressions in Adults with Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Walsh, Jennifer A.; Vida, Mark D.; Rutherford, M. D.

    2014-01-01

    Rutherford and McIntosh (J Autism Dev Disord 37:187-196, 2007) demonstrated that individuals with autism spectrum disorder (ASD) are more tolerant than controls of exaggerated schematic facial expressions, suggesting that they may use an alternative strategy when processing emotional expressions. The current study was designed to test this finding…

  1. Recognition of Facial Expressions of Emotion in Adults with Down Syndrome

    ERIC Educational Resources Information Center

    Virji-Babul, Naznin; Watt, Kimberley; Nathoo, Farouk; Johnson, Peter

    2012-01-01

    Research on facial expressions in individuals with Down syndrome (DS) has been conducted using photographs. Our goal was to examine the effect of motion on perception of emotional expressions. Adults with DS, adults with typical development matched for chronological age (CA), and children with typical development matched for developmental age (DA)…

  2. Dynamic and Static Facial Expressions Decoded from Motion-Sensitive Areas in the Macaque Monkey

    PubMed Central

    Furl, Nicholas; Hadj-Bouziane, Fadila; Liu, Ning; Averbeck, Bruno B.; Ungerleider, Leslie G.

    2012-01-01

    Humans adeptly use visual motion to recognize socially-relevant facial information. The macaque provides a model visual system for studying neural coding of expression movements, as its superior temporal sulcus (STS) possesses brain areas selective for faces and areas sensitive to visual motion. We employed functional magnetic resonance imaging and facial stimuli to localize motion-sensitive areas (Mf areas), which responded more to dynamic faces compared to static faces, and face-selective areas, which responded selectively to faces compared to objects and places. Using multivariate analysis, we found that information about both dynamic and static facial expressions could be robustly decoded from Mf areas. By contrast, face-selective areas exhibited relatively less facial expression information. Classifiers trained with expressions from one motion type (dynamic or static) showed poor generalization to the other motion type, suggesting that Mf areas employ separate and non-confusable neural codes for dynamic and static presentations of the same expressions. We also show that some of the motion sensitivity elicited by facial stimuli was not specific to faces but could also be elicited by moving dots, particularly in FST and STPm/LST, confirming their already well-established low-level motion sensitivity. A different pattern was found in anterior STS, which responded more to dynamic than static faces but was not sensitive to dot motion. Overall, we show that emotional expressions are mostly represented outside of face-selective cortex, in areas sensitive to motion. These regions may play a fundamental role in enhancing recognition of facial expression despite the complex stimulus changes associated with motion. PMID:23136433
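
    The cross-generalization analysis described above can be illustrated generically: a classifier is trained on response patterns evoked by one motion type (dynamic) and tested on patterns from the other (static), and within-type accuracy is compared with cross-type accuracy. The sketch below uses synthetic "voxel" patterns and a linear SVM purely to show the procedure; it is not the authors' decoding pipeline.

    ```python
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(4)
    n_trials, n_voxels, n_expressions = 120, 50, 3

    def simulate_patterns(offset):
        """Synthetic trial-by-voxel patterns with a weak expression signal."""
        labels = rng.integers(0, n_expressions, size=n_trials)
        patterns = rng.normal(size=(n_trials, n_voxels)) + offset
        patterns[np.arange(n_trials), labels] += 2.0   # label-dependent voxels 0-2
        return patterns, labels

    X_dyn, y_dyn = simulate_patterns(offset=0.0)   # "dynamic face" trials
    X_sta, y_sta = simulate_patterns(offset=0.5)   # "static face" trials

    clf = SVC(kernel="linear").fit(X_dyn, y_dyn)
    print("within-type accuracy:", clf.score(X_dyn, y_dyn))
    print("cross-type accuracy: ", clf.score(X_sta, y_sta))
    ```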

  3. Mining 3D patterns from gene expression temporal data: a new tricluster evaluation measure.

    PubMed

    Gutiérrez-Avilés, David; Rubio-Escudero, Cristina

    2014-01-01

    Microarrays have revolutionized biotechnological research. The analysis of new data generated represents a computational challenge due to the characteristics of these data. Clustering techniques are applied to create groups of genes that exhibit a similar behavior. Biclustering emerges as a valuable tool for microarray data analysis since it relaxes the constraints for grouping, allowing genes to be evaluated only under a subset of the conditions. However, if a third dimension appears in the data, triclustering is the appropriate tool for the analysis. This occurs in longitudinal experiments in which the genes are evaluated under conditions at several time points. All clustering, biclustering, and triclustering techniques guide their search for solutions by a measure that evaluates the quality of clusters. We present an evaluation measure for triclusters called Mean Square Residue 3D. This measure is based on the classic biclustering measure Mean Square Residue. Mean Square Residue 3D has been applied to both synthetic and real data and it has proved to be capable of extracting groups of genes with homogeneous patterns in subsets of conditions and times, and these groups have shown a high correlation level and they are also related to their functional annotations extracted from the Gene Ontology project. PMID:25143987
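
    A compact way to see how such a measure scores a tricluster is sketched below, assuming the natural three-dimensional extension of the classic biclustering residue; the exact formulation used by the authors may differ in detail. Under this definition, a perfectly additive gene x condition x time pattern has a residue of zero, while noisy data score higher.

    ```python
    import numpy as np

    def msr_3d(tricluster: np.ndarray) -> float:
        """Mean square residue of a (genes x conditions x time points) sub-array,
        using residue(i,j,k) = a_ijk - a_iJK - a_IjK - a_IJk + 2*a_IJK."""
        a = tricluster.astype(float)
        mean_i = a.mean(axis=(1, 2), keepdims=True)   # per-gene mean
        mean_j = a.mean(axis=(0, 2), keepdims=True)   # per-condition mean
        mean_k = a.mean(axis=(0, 1), keepdims=True)   # per-time-point mean
        residue = a - mean_i - mean_j - mean_k + 2 * a.mean()
        return float(np.mean(residue ** 2))

    # A perfectly additive pattern scores (numerically) zero; random data do not.
    genes = np.arange(4).reshape(4, 1, 1)
    conds = np.arange(3).reshape(1, 3, 1)
    times = np.arange(5).reshape(1, 1, 5)
    print(msr_3d(genes + conds + times))                        # ~0.0
    print(msr_3d(np.random.default_rng(2).random((4, 3, 5))))   # > 0
    ```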

  4. Mining 3D Patterns from Gene Expression Temporal Data: A New Tricluster Evaluation Measure

    PubMed Central

    2014-01-01

    Microarrays have revolutionized biotechnological research. The analysis of new data generated represents a computational challenge due to the characteristics of these data. Clustering techniques are applied to create groups of genes that exhibit a similar behavior. Biclustering emerges as a valuable tool for microarray data analysis since it relaxes the constraints for grouping, allowing genes to be evaluated only under a subset of the conditions. However, if a third dimension appears in the data, triclustering is the appropriate tool for the analysis. This occurs in longitudinal experiments in which the genes are evaluated under conditions at several time points. All clustering, biclustering, and triclustering techniques guide their search for solutions by a measure that evaluates the quality of clusters. We present an evaluation measure for triclusters called Mean Square Residue 3D. This measure is based on the classic biclustering measure Mean Square Residue. Mean Square Residue 3D has been applied to both synthetic and real data and it has proved to be capable of extracting groups of genes with homogeneous patterns in subsets of conditions and times, and these groups have shown a high correlation level and they are also related to their functional annotations extracted from the Gene Ontology project. PMID:25143987

  5. Singing emotionally: a study of pre-production, production, and post-production facial expressions

    PubMed Central

    Quinto, Lena R.; Thompson, William F.; Kroos, Christian; Palmer, Caroline

    2014-01-01

    Singing involves vocal production accompanied by a dynamic and meaningful use of facial expressions, which may serve as ancillary gestures that complement, disambiguate, or reinforce the acoustic signal. In this investigation, we examined the use of facial movements to communicate emotion, focusing on movements arising in three epochs: before vocalization (pre-production), during vocalization (production), and immediately after vocalization (post-production). The stimuli were recordings of seven vocalists' facial movements as they sang short (14 syllable) melodic phrases with the intention of communicating happiness, sadness, irritation, or no emotion. Facial movements were presented as point-light displays to 16 observers who judged the emotion conveyed. Experiment 1 revealed that the accuracy of emotional judgment varied with singer, emotion, and epoch. Accuracy was highest in the production epoch, however, happiness was well communicated in the pre-production epoch. In Experiment 2, observers judged point-light displays of exaggerated movements. The ratings suggested that the extent of facial and head movements was largely perceived as a gauge of emotional arousal. In Experiment 3, observers rated point-light displays of scrambled movements. Configural information was removed in these stimuli but velocity and acceleration were retained. Exaggerated scrambled movements were likely to be associated with happiness or irritation whereas unexaggerated scrambled movements were more likely to be identified as “neutral.” An analysis of singers' facial movements revealed systematic changes as a function of the emotional intentions of singers. The findings confirm the central role of facial expressions in vocal emotional communication, and highlight individual differences between singers in the amount and intelligibility of facial movements made before, during, and after vocalization. PMID:24808868

  6. Putting the face in context: Body expressions impact facial emotion processing in human infants.

    PubMed

    Rajhans, Purva; Jessen, Sarah; Missana, Manuela; Grossmann, Tobias

    2016-06-01

    Body expressions exert strong contextual effects on facial emotion perception in adults. Specifically, conflicting body cues hamper the recognition of emotion from faces, as evident on both the behavioral and neural level. We examined the developmental origins of the neural processes involved in emotion perception across body and face in 8-month-old infants by measuring event-related brain potentials (ERPs). We primed infants with body postures (fearful, happy) that were followed by either congruent or incongruent facial expressions. Our results revealed that body expressions impact facial emotion processing and that incongruent body cues impair the neural discrimination of emotional facial expressions. Priming effects were associated with attentional and recognition memory processes, as reflected in a modulation of the Nc and Pc evoked at anterior electrodes. These findings demonstrate that 8-month-old infants possess neural mechanisms that allow for the integration of emotion across body and face, providing evidence for the early developmental emergence of context-sensitive facial emotion perception. PMID:26974742

  7. The mediating effects of facial expression on spatial interference between gaze direction and gaze location.

    PubMed

    Jones, Steve

    2015-01-01

    Gaze direction is an important social cue that interacts with facial expression. Cañadas and Lupiáñez (2012) reported a reverse-congruency effect such that identification of gaze direction was faster when a face was presented to the left but with the eyes directed to the right, or vice versa. In two experiments, this effect is replicated and then extended to explore the relationship between this effect and facial expression. Results show that the reverse-congruency effect is replicable with speeded gaze-direction identification, and that the effect is mediated by facial expression. The reverse-congruency effect is similar for happy and angry faces, but was not found for fearful faces. Findings are discussed in relation to the similarity of processing of incongruent gaze direction and the processing of direct gaze. PMID:25832740

  8. Explicit recognition of emotional facial expressions is shaped by expertise: evidence from professional actors

    PubMed Central

    Conson, Massimiliano; Ponari, Marta; Monteforte, Eva; Ricciato, Giusy; Sarà, Marco; Grossi, Dario; Trojano, Luigi

    2013-01-01

    Can reading others' emotional states be shaped by expertise? We assessed processing of emotional facial expressions in professional actors trained either to voluntary activate mimicry to reproduce character's emotions (as foreseen by the “Mimic Method”), or to infer others' inner states from reading the emotional context (as foreseen by “Stanislavski Method”). In explicit recognition of facial expressions (Experiment 1), the two experimental groups differed from each other and from a control group with no acting experience: the Mimic group was more accurate, whereas the Stanislavski group was slower. Neither acting experience, instead, influenced implicit processing of emotional faces (Experiment 2). We argue that expertise can selectively influence explicit recognition of others' facial expressions, depending on the kind of “emotional expertise”. PMID:23825467

  9. A selective emotional decision-making bias elicited by facial expressions.

    PubMed

    Furl, Nicholas; Gallagher, Shannon; Averbeck, Bruno B

    2012-01-01

    Emotional and social information can sway otherwise rational decisions. For example, when participants decide between two faces that are probabilistically rewarded, they make biased choices that favor smiling relative to angry faces. This bias may arise because facial expressions evoke positive and negative emotional responses, which in turn may motivate social approach and avoidance. We tested a wide range of pictures that evoke emotions or convey social information, including animals, words, foods, a variety of scenes, and faces differing in trustworthiness or attractiveness, but we found only facial expressions biased decisions. Our results extend brain imaging and pharmacological findings, which suggest that a brain mechanism supporting social interaction may be involved. Facial expressions appear to exert special influence over this social interaction mechanism, one capable of biasing otherwise rational choices. These results illustrate that only specific types of emotional experiences can best sway our choices. PMID:22438936

  10. Are facial expressions of emotion produced by categorical affect programs or dynamically driven by appraisal?

    PubMed

    Scherer, Klaus R; Ellgring, Heiner

    2007-02-01

    The different assumptions made by discrete and componential emotion theories about the nature of the facial expression of emotion and the underlying mechanisms are reviewed. Explicit and implicit predictions are derived from each model. It is argued that experimental expression-production paradigms rather than recognition studies are required to critically test these differential predictions. Data from a large-scale actor portrayal study are reported to demonstrate the utility of this approach. The frequencies with which 12 professional actors use major facial muscle actions individually and in combination to express 14 major emotions show little evidence for emotion-specific prototypical affect programs. Rather, the results encourage empirical investigation of componential emotion model predictions of dynamic configurations of appraisal-driven adaptive facial actions. PMID:17352568

  11. Regional structural styles in the northeast Netherlands as expressed on 3-D data

    SciTech Connect

    Goeyenbier, H.

    1993-09-01

    The northeast Netherlands area is a highly prospective gas province, containing the Groningen gas field and a multitude of smaller fields. Some 40 three-dimensional (3-D) seismic surveys have been acquired over the last 10 years, covering a major part of this 15,000-km² area. These surveys have been combined for the first time on a Landmark workstation to produce time, depth, and horizon attribute maps from six important (overburden and reservoir) levels: base Tertiary, base Chalk, base Cretaceous, base Jurassic, top Zechstein, and base Zechstein. The structural history was reconstructed by analyzing isopach maps of the various units in combination with dip extractions along the mapped horizons to outline the active fault trends. Isopach maps of the Tertiary, Chalk, and Lower Cretaceous sediments reveal the salt movement during this interval, with depocenters in the Lauwerszee trough as a result of salt withdrawal and salt diapirism in the areas of structural weakness near existing fault trends. The dip maps at the base of these units show the en-echelon fault pattern and the presence of crestal collapse systems above the salt domes. A comparison between base Cretaceous and base Chalk isopach maps also highlights the presence of inverted Lower Cretaceous basins. By comparing the overburden fault trends with the pre-Zechstein pattern, late faults can be separated from older trends, which has helped the prediction of sealing faults. The regional 3-D data provide a powerful and unambiguous tool to unravel the structural history in the northeast Netherlands.

  12. Behavioral and neural representation of emotional facial expressions across the lifespan

    PubMed Central

    Somerville, Leah H.; Fani, Negar; McClure-Tone, Erin B.

    2011-01-01

    Humans’ experience of emotion and comprehension of affective cues varies substantially across the lifespan. Work in cognitive and affective neuroscience has begun to characterize behavioral and neural responses to emotional cues that systematically change with age. This review examines work to date characterizing the maturation of facial expression comprehension, and dynamic changes in amygdala recruitment from early childhood through late adulthood while viewing facial expressions of emotion. Recent neuroimaging work has tested amygdala and prefrontal engagement in experimental paradigms mimicking real aspects of social interactions, which we highlight briefly, along with considerations for future research. PMID:21516541

  13. Specific Impairments in the Recognition of Emotional Facial Expressions in Parkinson’s Disease

    PubMed Central

    Clark, Uraina S.; Neargarder, Sandy; Cronin-Golomb, Alice

    2008-01-01

    Studies investigating the ability to recognize emotional facial expressions in non-demented individuals with Parkinson’s disease (PD) have yielded equivocal findings. A possible reason for this variability may lie in the confounding of emotion recognition with cognitive task requirements, a confound arising from the lack of a control condition using non-emotional stimuli. The present study examined emotional facial expression recognition abilities in 20 non-demented patients with PD and 23 control participants relative to their performances on a non-emotional landscape categorization test with comparable task requirements. We found that PD participants were normal on the control task but exhibited selective impairments in the recognition of facial emotion, specifically for anger (driven by those with right hemisphere pathology) and surprise (driven by those with left hemisphere pathology), even when controlling for depression level. Male but not female PD participants further displayed specific deficits in the recognition of fearful expressions. We suggest that the neural substrates that may subserve these impairments include the ventral striatum, amygdala, and prefrontal cortices. Finally, we observed that in PD participants, deficiencies in facial emotion recognition correlated with higher levels of interpersonal distress, which calls attention to the significant psychosocial impact that facial emotion recognition impairments may have on individuals with PD. PMID:18485422

  14. Neural substrates of human facial expression of pleasant emotion induced by comic films: a PET Study.

    PubMed

    Iwase, Masao; Ouchi, Yasuomi; Okada, Hiroyuki; Yokoyama, Chihiro; Nobezawa, Shuji; Yoshikawa, Etsuji; Tsukada, Hideo; Takeda, Masaki; Yamashita, Ko; Takeda, Masatoshi; Yamaguti, Kouzi; Kuratsune, Hirohiko; Shimizu, Akira; Watanabe, Yasuyoshi

    2002-10-01

    Laughter or smile is one of the emotional expressions of pleasantness, with characteristic contraction of the facial muscles, of which the neural substrate remains to be explored. This study is the first to investigate the generation of human facial expression of pleasant emotion using positron emission tomography and H₂¹⁵O. Regional cerebral blood flow (rCBF) during laughter/smile induced by visual comics and the magnitude of laughter/smile indicated significant correlation in the bilateral supplementary motor area (SMA) and left putamen (P < 0.05, corrected), but no correlation in the primary motor area (M1). During voluntary facial movement, significant correlation between rCBF and the magnitude of EMG was found in the face area of bilateral M1 and the SMA (P < 0.001, uncorrected). Laughter/smile, as opposed to voluntary movement, activated the visual association areas, left anterior temporal cortex, left uncus, and orbitofrontal and medial prefrontal cortices (P < 0.05, corrected), whereas voluntary facial movement generated by mimicking a laughing/smiling face activated the face area of the left M1 and bilateral SMA, compared with laughter/smile (P < 0.05, corrected). We demonstrated distinct neural substrates of emotional and volitional facial expression and defined cognitive and experiential processes of a pleasant emotion, laughter/smile. PMID:12377151

  15. Proteomic comparison of 3D and 2D glioma models reveals increased HLA-E expression in 3D models is associated with resistance to NK cell-mediated cytotoxicity.

    PubMed

    He, Weiqi; Kuang, Yongqin; Xing, Xuemin; Simpson, Richard J; Huang, Haidong; Yang, Tao; Chen, Jingmin; Yang, Libin; Liu, Enyu; He, Weifeng; Gu, Jianwen

    2014-05-01

    Three-dimensional cell culture techniques can better reflect the in vivo characteristics of tumor cells compared with traditional monolayer cultures. Compared with their 2D counterparts, 3D-cultured tumor cells showed enhanced resistance to the cytotoxic T cell-mediated immune response. However, it remains unclear whether 3D-cultured tumor cells have an enhanced resistance to NK cell cytotoxicity. In this study, a total of 363 differentially expressed proteins were identified between the 2D- and 3D-cultured U251 cells by comparative proteomics, and an immune-associated protein-protein interaction (PPI) network based on these differential proteins was constructed by bioinformatics. Within the network, HLA-E, as a molecule for inhibiting NK cell activation, was significantly up-regulated in the 3D-cultured tumor cells. Then, we found that the 3D-cultured U251 cells exhibited potent resistance to NK cell cytotoxicity in vitro and were prone to tumor formation in vivo. The resistance of the 3D-cultured tumor cells to NK cell lysis was mediated by the HLA-E/NKG2A interaction because the administration of antibodies that block either HLA-E or NKG2A completely eliminated this resistance and significantly decreased tumor formation. Taken together, our findings indicate that HLA-E up-regulation in 3D-cultured cells may result in enhanced tumor resistance to NK cell-mediated immune response. PMID:24742303

  16. Sex Differences in Neural Activation to Facial Expressions Denoting Contempt and Disgust

    PubMed Central

    Aleman, André; Swart, Marte

    2008-01-01

    The facial expression of contempt has been regarded to communicate feelings of moral superiority. Contempt is an emotion that is closely related to disgust, but in contrast to disgust, contempt is inherently interpersonal and hierarchical. The aim of this study was twofold. First, to investigate the hypothesis of preferential amygdala responses to contempt expressions versus disgust. Second, to investigate whether, at a neural level, men would respond stronger to biological signals of interpersonal superiority (e.g., contempt) than women. We performed an experiment using functional magnetic resonance imaging (fMRI), in which participants watched facial expressions of contempt and disgust in addition to neutral expressions. The faces were presented as distractors in an oddball task in which participants had to react to one target face. Facial expressions of contempt and disgust activated a network of brain regions, including prefrontal areas (superior, middle and medial prefrontal gyrus), anterior cingulate, insula, amygdala, parietal cortex, fusiform gyrus, occipital cortex, putamen and thalamus. Contemptuous faces did not elicit stronger amygdala activation than did disgusted expressions. To limit the number of statistical comparisons, we confined our analyses of sex differences to the frontal and temporal lobes. Men displayed stronger brain activation than women to facial expressions of contempt in the medial frontal gyrus, inferior frontal gyrus, and superior temporal gyrus. Conversely, women showed stronger neural responses than men to facial expressions of disgust. In addition, the effect of stimulus sex differed for men versus women. Specifically, women showed stronger responses to male contemptuous faces (as compared to female expressions), in the insula and middle frontal gyrus. Contempt has been conceptualized as signaling perceived moral violations of social hierarchy, whereas disgust would signal violations of physical purity. Thus, our results suggest a

  17. Social Alienation in Schizophrenia Patients: Association with Insula Responsiveness to Facial Expressions of Disgust

    PubMed Central

    Lindner, Christian; Dannlowski, Udo; Walhöfer, Kirsten; Rödiger, Maike; Maisch, Birgit; Bauer, Jochen; Ohrmann, Patricia; Lencer, Rebekka; Zwitserlood, Pienie; Kersting, Anette; Heindel, Walter; Arolt, Volker

    2014-01-01

    Introduction Among the functional neuroimaging studies on emotional face processing in schizophrenia, few have used paradigms with facial expressions of disgust. In this study, we investigated whether schizophrenia patients show less insula activation to macro-expressions (overt, clearly visible expressions) and micro-expressions (covert, very brief expressions) of disgust than healthy controls. Furthermore, departing from the assumption that disgust faces signal social rejection, we examined whether perceptual sensitivity to disgust is related to social alienation in patients and controls. We hypothesized that high insula responsiveness to facial disgust predicts social alienation. Methods We used functional magnetic resonance imaging to measure insula activation in 36 schizophrenia patients and 40 healthy controls. During scanning, subjects passively viewed covert and overt presentations of disgust and neutral faces. To measure social alienation, a social loneliness scale and an agreeableness scale were administered. Results Schizophrenia patients exhibited reduced insula activation in response to covert facial expressions of disgust. With respect to macro-expressions of disgust, no between-group differences emerged. In patients, insula responsiveness to covert faces of disgust was positively correlated with social loneliness. Furthermore, patients' insula responsiveness to covert and overt faces of disgust was negatively correlated with agreeableness. In controls, insula responsiveness to covert expressions of disgust correlated negatively with agreeableness. Discussion Schizophrenia patients show reduced insula responsiveness to micro-expressions but not macro-expressions of disgust compared to healthy controls. In patients, low agreeableness was associated with stronger insula response to micro- and macro-expressions of disgust. Patients with a strong tendency to feel uncomfortable with social interactions appear to be characterized by a high sensitivity for

  18. Modelling the perceptual similarity of facial expressions from image statistics and neural responses.

    PubMed

    Sormaz, Mladen; Watson, David M; Smith, William A P; Young, Andrew W; Andrews, Timothy J

    2016-04-01

    The ability to perceive facial expressions of emotion is essential for effective social communication. We investigated how the perception of facial expression emerges from the image properties that convey this important social signal, and how neural responses in face-selective brain regions might track these properties. To do this, we measured the perceptual similarity between expressions of basic emotions, and investigated how this is reflected in image measures and in the neural response of different face-selective regions. We show that the perceptual similarity of different facial expressions (fear, anger, disgust, sadness, happiness) can be predicted by both surface and feature shape information in the image. Using block design fMRI, we found that the perceptual similarity of expressions could also be predicted from the patterns of neural response in the face-selective posterior superior temporal sulcus (STS), but not in the fusiform face area (FFA). These results show that the perception of facial expression is dependent on the shape and surface properties of the image and on the activity of specific face-selective regions. PMID:26825440
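    The comparison the abstract describes, behavioural similarity between expressions against similarity derived from image measures or from neural response patterns, is commonly implemented by correlating the lower triangles of (dis)similarity matrices. The sketch below illustrates that idea; the function and variable names are ours, and the exact similarity measures used in the study are not specified here.

        import numpy as np
        from scipy.stats import spearmanr

        def similarity_correlation(perceptual_rdm, candidate_rdm):
            """Correlate two (n_expressions x n_expressions) dissimilarity matrices.

            perceptual_rdm: behavioural dissimilarities between expression categories;
            candidate_rdm: dissimilarities derived either from image measures (shape,
            surface) or from multi-voxel response patterns in a face-selective region.
            Only the lower triangle is used, since the matrices are symmetric.
            """
            perceptual_rdm = np.asarray(perceptual_rdm, dtype=float)
            candidate_rdm = np.asarray(candidate_rdm, dtype=float)
            idx = np.tril_indices_from(perceptual_rdm, k=-1)
            rho, p = spearmanr(perceptual_rdm[idx], candidate_rdm[idx])
            return rho, p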

  19. Facial expression recognition based on image Euclidean distance-supervised neighborhood preserving embedding

    NASA Astrophysics Data System (ADS)

    Chen, Li; Li, Yingjie; Li, Haibin

    2014-11-01

    High-dimensional data often lie on a relatively low-dimensional manifold, and the nonlinear geometry of that manifold is embedded in the similarities between the data points. Neighborhood Preserving Embedding (NPE) captures these similarity structures effectively, but as an unsupervised method it cannot use class information to guide nonlinear dimensionality reduction, and it ignores both the geometrical structure of local data points and the spatial information of pixels, which degrades classification. To address this problem, a feature extraction method based on Image Euclidean Distance-Supervised NPE (IED-SNPE) is proposed and applied to facial expression recognition. First, Image Euclidean Distance (IED) is employed to characterize the dissimilarity between data points. The neighborhood graph of the input data is then constructed according to this dissimilarity. Finally, the prior nonlinear manifold of facial expression images is fused with class-label information to extract discriminative features for expression recognition. In classification experiments on the JAFFE facial expression database, IED-SNPE is used for feature extraction and compared with NPE, SNPE, and IED-NPE. The results reveal that IED-SNPE not only preserves the local structure of the expression manifold well but also explicitly considers the spatial relationships among pixels, so it outperforms NPE in feature extraction and is highly competitive with well-known feature extraction methods.
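    For readers unfamiliar with Image Euclidean Distance, the sketch below shows the standard IMED formulation, a pixel-difference metric weighted by a Gaussian function of pixel spatial distance, which is the kind of dissimilarity the abstract refers to. It is an illustrative implementation, not the authors' code, and is only practical for small image crops because the metric matrix grows quadratically with the number of pixels.

        import numpy as np

        def image_euclidean_distance(img_a, img_b, sigma=1.0):
            """Image Euclidean Distance (IMED) between two equally sized grayscale images.

            Unlike the plain pixel-wise Euclidean distance, IMED weights pixel
            differences by the spatial proximity of the pixels, so small spatial
            shifts produce small distances: d^2 = (x - y)^T G (x - y), where G is a
            Gaussian function of the distance between pixel locations.
            """
            h, w = img_a.shape
            ys, xs = np.mgrid[0:h, 0:w]
            coords = np.column_stack([ys.ravel(), xs.ravel()]).astype(float)
            # Pairwise squared spatial distances between all pixel locations.
            sq = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1)
            G = np.exp(-sq / (2.0 * sigma ** 2)) / (2.0 * np.pi * sigma ** 2)
            diff = (img_a.astype(float) - img_b.astype(float)).ravel()
            return float(np.sqrt(diff @ G @ diff))

    These pairwise distances would then define the supervised neighborhood graph on which the embedding is computed.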

  20. Facial Feedback Affects Perceived Intensity but Not Quality of Emotional Expressions.

    PubMed

    Lobmaier, Janek S; Fischer, Martin H

    2015-01-01

    Motivated by conflicting evidence in the literature, we re-assessed the role of facial feedback when detecting quantitative or qualitative changes in others' emotional expressions. Fifty-three healthy adults observed self-paced morph sequences where the emotional facial expression either changed quantitatively (i.e., sad-to-neutral, neutral-to-sad, happy-to-neutral, neutral-to-happy) or qualitatively (i.e. from sad to happy, or from happy to sad). Observers held a pen in their own mouth to induce smiling or frowning during the detection task. When morph sequences started or ended with neutral expressions we replicated a congruency effect: Happiness was perceived longer and sooner while smiling; sadness was perceived longer and sooner while frowning. Interestingly, no such congruency effects occurred for transitions between emotional expressions. These results suggest that facial feedback is especially useful when evaluating the intensity of a facial expression, but less so when we have to recognize which emotion our counterpart is expressing. PMID:26343732

  1. The AERO system: a 3D-like approach for recording gene expression patterns in the whole mouse embryo.

    PubMed

    Shimizu, Hirohito; Kubo, Atsushi; Uchibe, Kenta; Hashimoto, Megumi; Yokoyama, Shigetoshi; Takada, Shuji; Mitsuoka, Kazuhiko; Asahara, Hiroshi

    2013-01-01

    We have recently constructed a web-based database of gene expression in the whole mouse embryo, EMBRYS (http://embrys.jp/embrys/html/MainMenu.html). To allow examination of gene expression patterns to the fullest extent possible, this database provides both photo images and annotation data. However, since embryos develop via an intricate process of morphogenesis, it would be of great value to track embryonic gene expression from a three-dimensional perspective. Several methods have been developed to achieve this goal, but they generally require highly laborious procedures and specific operational skills. We utilized a novel microscopic technique that enables the easy capture of rotational, 3D-like images of the whole embryo. In this method, a rotary head equipped with two mirrors, designed to obtain an image tilted at 45 degrees to the microscope stage, captures serial images at 2-degree intervals. By a simple operation, 180 images are automatically collected. These 2D images obtained at multiple angles are then used to reconstruct 3D-like images, termed AERO images. By means of this system, over 800 AERO images of 191 gene expression patterns were captured. The images can be easily rotated on the computer screen using the EMBRYS database, so that researchers can virtually view an entire embryo in an unbiased, non-predetermined manner. The advantages afforded by this approach make it especially useful for generating data viewed in public databases. PMID:24146773

  2. The AERO System: A 3D-Like Approach for Recording Gene Expression Patterns in the Whole Mouse Embryo

    PubMed Central

    Hashimoto, Megumi; Yokoyama, Shigetoshi; Takada, Shuji; Mitsuoka, Kazuhiko; Asahara, Hiroshi

    2013-01-01

    We have recently constructed a web-based database of gene expression in the whole mouse embryo, EMBRYS (http://embrys.jp/embrys/html/MainMenu.html). To allow examination of gene expression patterns to the fullest extent possible, this database provides both photo images and annotation data. However, since embryos develop via an intricate process of morphogenesis, it would be of great value to track embryonic gene expression from a three-dimensional perspective. Several methods have been developed to achieve this goal, but they generally require highly laborious procedures and specific operational skills. We utilized a novel microscopic technique that enables the easy capture of rotational, 3D-like images of the whole embryo. In this method, a rotary head equipped with two mirrors, designed to obtain an image tilted at 45 degrees to the microscope stage, captures serial images at 2-degree intervals. By a simple operation, 180 images are automatically collected. These 2D images obtained at multiple angles are then used to reconstruct 3D-like images, termed AERO images. By means of this system, over 800 AERO images of 191 gene expression patterns were captured. The images can be easily rotated on the computer screen using the EMBRYS database, so that researchers can virtually view an entire embryo in an unbiased, non-predetermined manner. The advantages afforded by this approach make it especially useful for generating data viewed in public databases. PMID:24146773

  3. Automatic amygdala response to facial expression in schizophrenia: initial hyperresponsivity followed by hyporesponsivity

    PubMed Central

    2013-01-01

    Background It is well established that the amygdala is crucially involved in the processing of facial emotions. In schizophrenia patients, a number of neuroimaging findings suggest hypoactivation of the amygdala in response to facial emotion, while others indicate normal or enhanced recruitment of this region. Some of this variability may be related to the baseline condition used and the length of the experiment. There is evidence that schizophrenia patients display increased activation of the amygdala to neutral faces and that this is predominantly observed during early parts of the experiment. Recent research examining the automatic processing of facial emotion has also reported amygdala hyperactivation in schizophrenia. In the present study, we focused on the time-course of amygdala activation during the automatic processing of emotional facial expression. We hypothesized that in comparison to healthy subjects, patients would initially show hyperresponsivity of the amygdala to masked emotional and neutral faces. In addition, we expected amygdala deactivation in response to masked facial emotions from the first to the second phase to be more pronounced in patients than in controls. Results Amygdala activation in response to angry, happy, neutral, and no facial expression (presented for 33 ms) was measured by functional magnetic resonance imaging in 30 schizophrenia patients and 35 healthy controls. Across all subjects, the bilateral amygdala response to faces (relative to the no facial expression condition) was larger in the initial phase (first half of trials) than in the second phase (second half of trials). During the initial phase, schizophrenia patients exhibited an increased right amygdala response to all faces and an increased left amygdala response to neutral faces compared with controls. During the second phase, controls manifested a higher right amygdala response for all faces and a higher left amygdala response to angry faces than patients

  4. Facial Expression Recognition and Social Competence among African American Elementary School Children: An Examination of Ethnic Differences.

    ERIC Educational Resources Information Center

    Glanville, Denise N.; Nowicki, Steve

    2002-01-01

    Investigated the potential for cross-ethnic miscommunication of facial expressions, examining elementary students' ability to identify emotion in African American and white facial expressions and noting the relationship to social competence. Student data indicated that ability to read faces differing in ethnicity did not differ by children's…

  5. Emotional Facial and Vocal Expressions during Story Retelling by Children and Adolescents with High-Functioning Autism

    ERIC Educational Resources Information Center

    Grossman, Ruth B.; Edelson, Lisa R.; Tager-Flusberg, Helen

    2013-01-01

    Purpose: People with high-functioning autism (HFA) have qualitative differences in facial expression and prosody production, which are rarely systematically quantified. The authors' goals were to qualitatively and quantitatively analyze prosody and facial expression productions in children and adolescents with HFA. Method: Participants were…

  6. The Relative Power of an Emotion's Facial Expression, Label, and Behavioral Consequence to Evoke Preschoolers' Knowledge of Its Cause

    ERIC Educational Resources Information Center

    Widen, Sherri C.; Russell, James A.

    2004-01-01

    Lay people and scientists alike assume that, especially for young children, facial expressions are a strong cue to another's emotion. We report a study in which children (N=120; 3-4 years) described events that would cause basic emotions (surprise, fear, anger, disgust, sadness) presented as its facial expression, as its label, or as its…

  7. Intermodal Perception of Fully Illuminated and Point Light Displays of Dynamic Facial Expressions by 7-Month-Old Infants.

    ERIC Educational Resources Information Center

    Soken, Nelson; And Others

    This study considered two questions about infants' perception of affective expressions: (1) Can infants distinguish between happiness and anger on the basis of facial motion information alone? (2) Can infants detect a correspondence between happy and angry facial and vocal expressions by different people? A total of 40 infants of 7 months of age…

  8. Social Adjustment, Academic Adjustment, and the Ability to Identify Emotion in Facial Expressions of 7-Year-Old Children

    ERIC Educational Resources Information Center

    Goodfellow, Stephanie; Nowicki, Stephen, Jr.

    2009-01-01

    The authors aimed to examine the possible association between (a) accurately reading emotion in facial expressions and (b) social and academic competence among elementary school-aged children. Participants were 840 7-year-old children who completed a test of the ability to read emotion in facial expressions. Teachers rated children's social and…

  9. Personal identification by the comparison of facial profiles: testing the reliability of a high-resolution 3D-2D comparison model.

    PubMed

    Cattaneo, Cristina; Cantatore, Angela; Ciaffi, Romina; Gibelli, Daniele; Cigada, Alfredo; De Angelis, Danilo; Sala, Remo

    2012-01-01

    Identification from video surveillance systems is frequently requested in forensic practice. The "3D-2D" comparison has proven to be reliable in assessing identification but still requires standardization; this study concerns the validation of the 3D-2D profile comparison. The 3D models of the faces of five individuals were compared with photographs from the same subjects as well as from another 45 individuals. The differences in area and in distance between maxima (glabella, tip of nose, fore point of upper and lower lips, pogonion) and minima points (selion, subnasale, stomion, suprapogonion) were measured. The highest difference in area between the 3D model and the 2D image was between 43 and 133 mm² in the five matches and always greater than 157 mm² in mismatches; the mean distance between the points was greater than 1.96 mm in mismatches and less than 1.9 mm in the five matches (p < 0.05). These results indicate that this difference in areas may point toward a manner of distinguishing "correct" from "incorrect" matches. PMID:22074112
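    As a rough illustration of the decision rule implied by the reported figures, the sketch below applies the quoted thresholds (mean landmark distance below 1.9 mm, area difference up to about 133 mm²) to already registered profile landmarks. The landmark list, the registration step, and the polygon-based area approximation are our assumptions for illustration only, not the published procedure.

        import numpy as np

        # Illustrative landmark labels taken from the abstract; coordinates are assumed
        # to be 2D profile points (in mm) from the projected 3D model and the photograph.
        LANDMARKS = ["glabella", "selion", "nose_tip", "subnasale",
                     "upper_lip", "stomion", "lower_lip", "suprapogonion", "pogonion"]

        def profile_match(model_pts, photo_pts,
                          max_mean_dist_mm=1.9, max_area_diff_mm2=133.0):
            """Toy 3D-2D profile comparison using the thresholds quoted in the abstract.

            model_pts, photo_pts: (N, 2) arrays of corresponding profile landmarks,
            already scaled and registered to a common reference frame.
            """
            d = np.linalg.norm(model_pts - photo_pts, axis=1)
            mean_dist = d.mean()
            # Area enclosed between the two profile polylines, roughly approximated by
            # the shoelace formula on the closed polygon model-profile + reversed
            # photo-profile (inexact if the two curves cross each other).
            poly = np.vstack([model_pts, photo_pts[::-1]])
            x, y = poly[:, 0], poly[:, 1]
            area_diff = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
            is_match = mean_dist < max_mean_dist_mm and area_diff <= max_area_diff_mm2
            return is_match, mean_dist, area_diff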

  10. Perceptions of social dominance through facial emotion expressions in euthymic patients with bipolar I disorder.

    PubMed

    Kim, Sung Hwa; Ryu, Vin; Ha, Ra Yeon; Lee, Su Jin; Cho, Hyun-Sang

    2016-04-01

    The ability to accurately perceive dominance in the social hierarchy is important for successful social interactions. However, little is known about dominance perception of emotional stimuli in bipolar disorder. The aim of this study was to investigate the perception of social dominance in patients with bipolar I disorder in response to six facial emotional expressions. Participants included 35 euthymic patients and 45 healthy controls. Bipolar patients showed a lower perception of social dominance based on anger, disgust, fear, and neutral facial emotional expressions compared to healthy controls. A negative correlation was observed between motivation to pursue goals or residual manic symptoms and perceived dominance of negative facial emotions such as anger, disgust, and fear in bipolar patients. These results suggest that bipolar patients have an altered perception of social dominance that might result in poor interpersonal functioning. Training of appropriate dominance perception using various emotional stimuli may be helpful in improving social relationships for individuals with bipolar disorder. PMID:26995253

  11. Interaction between musical emotion and facial expression as measured by event-related potentials.

    PubMed

    Kamiyama, Keiko S; Abla, Dilshat; Iwanaga, Koichi; Okanoya, Kazuo

    2013-02-01

    We examined the integrative process between emotional facial expressions and musical excerpts by using an affective priming paradigm. Happy or sad musical stimuli were presented after happy or sad facial images during electroencephalography (EEG) recordings. We asked participants to judge the affective congruency of the presented face-music pairs. The congruency of emotionally congruent pairs was judged more rapidly than that of incongruent pairs. In addition, the EEG data showed that incongruent musical targets elicited a larger N400 component than congruent pairs. Furthermore, these effects occurred in nonmusicians as well as musicians. In sum, emotional integrative processing of face-music pairs was facilitated in congruent music targets and inhibited in incongruent music targets; this process was not significantly modulated by individual musical experience. This is the first study on musical stimuli primed by facial expressions to demonstrate that the N400 component reflects the affective priming effect. PMID:23220447

  12. Exploring the seismic expression of fault zones in 3D seismic volumes

    NASA Astrophysics Data System (ADS)

    Iacopini, D.; Butler, R. W. H.; Purves, S.; McArdle, N.; De Freslon, N.

    2016-08-01

    Mapping and understanding distributed deformation is a major challenge for the structural interpretation of seismic data. However, volumes of seismic signal disturbance with low signal/noise ratio are systematically observed within 3D seismic datasets around fault systems. These seismic disturbance zones (SDZ) are commonly characterized by complex perturbations of the signal and occur at sub-seismic (tens of metres) to seismic (hundreds of metres) scales. They may store important information on deformation distributed around those larger scale structures that may be readily interpreted in conventional amplitude displays of seismic data. We introduce a method to detect fault-related disturbance zones and to discriminate between this and other noise sources such as those associated with the seismic acquisition (footprint noise). Two case studies from the Taranaki basin and deep-water Niger delta are presented. These resolve SDZs using tensor and semblance attributes along with conventional seismic mapping. The tensor attribute is more efficient in tracking volumes containing structural displacements while structurally-oriented semblance coherency is commonly disturbed by small waveform variations around the fault throw. We propose a workflow to map and cross-plot seismic waveform signal properties extracted from the seismic disturbance zone as a tool to investigate the seismic signature and explore seismic facies of an SDZ.
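    A minimal version of the semblance coherency attribute mentioned in this workflow can be sketched as follows: scan a small lateral/temporal window around every sample of an amplitude cube and return values near 1 for coherent reflectors and lower values inside disturbed zones. This is a naive, dip-unsteered illustration, not the attribute implementation used in the study, and the window sizes are arbitrary.

        import numpy as np

        def semblance(volume, win=(3, 3, 9)):
            """Naive semblance coherency over a 3D seismic amplitude volume.

            volume: (inline, xline, time) amplitudes.  For each sample, semblance is
            computed over a window of neighbouring traces and time samples:
            S = sum_t (sum_traces a)^2 / (N * sum_t sum_traces a^2).
            """
            ni, nx, nt = win
            pi, px, pt = ni // 2, nx // 2, nt // 2
            pad = np.pad(volume, ((pi, pi), (px, px), (pt, pt)), mode="edge")
            I, X, T = volume.shape
            out = np.ones_like(volume, dtype=float)
            for i in range(I):
                for x in range(X):
                    # All traces in the lateral window, full padded time axis.
                    traces = pad[i:i + ni, x:x + nx, :].reshape(ni * nx, -1)
                    for t in range(T):
                        w = traces[:, t:t + nt]          # (n_traces, nt) window
                        num = (w.sum(axis=0) ** 2).sum()
                        den = w.shape[0] * (w ** 2).sum()
                        out[i, x, t] = num / den if den > 0 else 1.0
            return out

    Low-semblance volumes extracted this way could then be cross-plotted against other waveform properties, in the spirit of the proposed workflow.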

  13. Exploring the seismic expression of fault zones in 3D seismic volumes

    NASA Astrophysics Data System (ADS)

    Iacopini, David; Butler, Rob; Purves, Steve

    2016-04-01

    Mapping and understanding distributed deformation is a major challenge for the structural interpretation of seismic data. However, volumes of seismic signal disturbance with low signal/noise ratio are systematically observed within 3D seismic datasets around fault systems. These seismic disturbance zones (SDZ) are commonly characterized by complex perturbations of the signal and occur at the sub-seismic to seismic scale. They may store important information on deformation distributed around those larger scale structures that may be readily interpreted in conventional amplitude displays of seismic data. We introduce a method to detect fault-related disturbance zones and to discriminate between this and other noise sources such as those associated with the seismic acquisition (footprint noise). Two case studies, from the Taranaki basin and the deep-water Niger delta, are presented. These resolve structure within SDZs using tensor and semblance attributes along with conventional seismic mapping. The tensor attribute is more efficient in tracking volumes containing structural displacements while structurally-oriented semblance coherency is commonly disturbed by small waveform variations around the fault throw. We propose a workflow to map and cross-plot seismic waveform signal properties extracted from the seismic disturbance zone as a tool to investigate the seismic signature and explore seismic facies of an SDZ.

  14. Multiple faces of pain: effects of chronic pain on the brain regulation of facial expression.

    PubMed

    Vachon-Presseau, Etienne; Roy, Mathieu; Woo, Choong-Wan; Kunz, Miriam; Martel, Marc-Olivier; Sullivan, Michael J; Jackson, Philip L; Wager, Tor D; Rainville, Pierre

    2016-08-01

    Pain behaviors are shaped by social demands and learning processes, and chronic pain has been previously suggested to affect their meaning. In this study, we combined functional magnetic resonance imaging with in-scanner video recording during thermal pain stimulations and used multilevel mediation analyses to study the brain mediators of pain facial expressions and the perception of pain intensity (self-reports) in healthy individuals and patients with chronic back pain (CBP). Behavioral data showed that the relation between pain expression and pain report was disrupted in CBP. In both patients with CBP and healthy controls, brain activity varying on a trial-by-trial basis with pain facial expressions was mainly located in the primary motor cortex and completely dissociated from the pattern of brain activity varying with pain intensity ratings. Stronger activity was observed in CBP specifically during pain facial expressions in several nonmotor brain regions such as the medial prefrontal cortex, the precuneus, and the medial temporal lobe. In sharp contrast, no moderating effect of chronic pain was observed on brain activity associated with pain intensity ratings. Our results demonstrate that pain facial expressions and pain intensity ratings reflect different aspects of pain processing and support psychosocial models of pain suggesting that distinctive mechanisms are involved in the regulation of pain behaviors in chronic pain. PMID:27411160

  15. Human amygdala response to dynamic facial expressions of positive and negative surprise.

    PubMed

    Vrticka, Pascal; Lordier, Lara; Bediou, Benoît; Sander, David

    2014-02-01

    Although brain imaging evidence accumulates to suggest that the amygdala plays a key role in the processing of novel stimuli, only little is known about its role in processing expressed novelty conveyed by surprised faces, and even less about possible interactive encoding of novelty and valence. Those investigations that have already probed human amygdala involvement in the processing of surprised facial expressions either used static pictures displaying negative surprise (as contained in fear) or "neutral" surprise, and manipulated valence by contextually priming or subjectively associating static surprise with either negative or positive information. Therefore, it still remains unresolved how the human amygdala differentially processes dynamic surprised facial expressions displaying either positive or negative surprise. Here, we created new artificial dynamic 3-dimensional facial expressions conveying surprise with an intrinsic positive (wonderment) or negative (fear) connotation, but also intrinsic positive (joy) or negative (anxiety) emotions not containing any surprise, in addition to neutral facial displays either containing ("typical surprise" expression) or not containing ("neutral") surprise. Results showed heightened amygdala activity to faces containing positive (vs. negative) surprise, which may either correspond to a specific wonderment effect as such, or to the computation of a negative expected value prediction error. Findings are discussed in the light of data obtained from a closely matched nonsocial lottery task, which revealed overlapping activity within the left amygdala to unexpected positive outcomes. PMID:24219397

  16. Beyond pleasure and pain: Facial expression ambiguity in adults and children during intense situations.

    PubMed

    Wenzler, Sofia; Levine, Sarah; van Dick, Rolf; Oertel-Knöchel, Viola; Aviezer, Hillel

    2016-09-01

    According to psychological models as well as common intuition, intense positive and negative situations evoke highly distinct emotional expressions. Nevertheless, recent work has shown that when judging isolated faces, the affective valence of winning and losing professional tennis players is hard to differentiate. However, expressions produced by professional athletes during publicly broadcasted sports events may be strategically controlled. To shed light on this matter we examined if ordinary people's spontaneous facial expressions evoked during highly intense situations are diagnostic for the situational valence. In Experiment 1 we compared reactions to highly intense positive situations (surprise soldier reunions) versus highly intense negative situations (terror attacks). In Experiment 2, we turned to children and compared facial reactions to highly positive situations (e.g., a child receiving a surprise trip to Disneyland) versus highly negative situations (e.g., a child discovering her parents ate up all her Halloween candy). The results demonstrate that facial expressions of both adults and children are often not diagnostic for the valence of the situation. These findings demonstrate the ambiguity of extreme facial expressions and highlight the importance of context in everyday emotion perception. PMID:27337681

  17. EMOTION RECOGNITION OF VIRTUAL AGENTS' FACIAL EXPRESSIONS: THE EFFECTS OF AGE AND EMOTION INTENSITY

    PubMed Central

    Beer, Jenay M.; Fisk, Arthur D.; Rogers, Wendy A.

    2014-01-01

    People make determinations about the social characteristics of an agent (e.g., robot or virtual agent) by interpreting social cues displayed by the agent, such as facial expressions. Although a considerable amount of research has been conducted investigating age-related differences in emotion recognition of human faces (e.g., Sullivan, & Ruffman, 2004), the effect of age on emotion identification of virtual agent facial expressions has been largely unexplored. Age-related differences in emotion recognition of facial expressions are an important factor to consider in the design of agents that may assist older adults in a recreational or healthcare setting. The purpose of the current research was to investigate whether age-related differences in facial emotion recognition can extend to emotion-expressive virtual agents. Younger and older adults performed a recognition task with a virtual agent expressing six basic emotions. Larger age-related differences were expected for virtual agents displaying negative emotions, such as anger, sadness, and fear. In fact, the results indicated that older adults showed a decrease in emotion recognition accuracy for a virtual agent's emotions of anger, fear, and happiness. PMID:25552896

  18. Investigating the brain basis of facial expression perception using multi-voxel pattern analysis.

    PubMed

    Wegrzyn, Martin; Riehle, Marcel; Labudda, Kirsten; Woermann, Friedrich; Baumgartner, Florian; Pollmann, Stefan; Bien, Christian G; Kissler, Johanna

    2015-08-01

    Humans can readily decode emotion expressions from faces and perceive them in a categorical manner. The model by Haxby and colleagues proposes a number of different brain regions with each taking over specific roles in face processing. One key question is how these regions directly compare to one another in successfully discriminating between various emotional facial expressions. To address this issue, we compared the predictive accuracy of all key regions from the Haxby model using multi-voxel pattern analysis (MVPA) of functional magnetic resonance imaging (fMRI) data. Regions of interest were extracted using independent meta-analytical data. Participants viewed four classes of facial expressions (happy, angry, fearful and neutral) in an event-related fMRI design, while performing an orthogonal gender recognition task. Activity in all regions allowed for robust above-chance predictions. When directly comparing the regions to one another, fusiform gyrus and superior temporal sulcus (STS) showed highest accuracies. These results underscore the role of the fusiform gyrus as a key region in perception of facial expressions, alongside STS. The study suggests the need for further specification of the relative role of the various brain areas involved in the perception of facial expression. Face processing appears to rely on more interactive and functionally overlapping neural mechanisms than previously conceptualised. PMID:26046623
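    The region-by-region comparison described above follows the usual MVPA recipe: fit a linear classifier on single-trial voxel patterns within each region of interest and compare cross-validated decoding accuracies. The sketch below shows that recipe with scikit-learn; the ROI names, the choice of a linear SVM, and the cross-validation scheme are illustrative assumptions rather than the study's exact pipeline.

        import numpy as np
        from sklearn.svm import LinearSVC
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.model_selection import cross_val_score

        def roi_decoding_accuracies(roi_patterns, labels, cv=5):
            """Cross-validated decoding of expression category per region of interest.

            roi_patterns: dict mapping an ROI name (e.g. 'FFA', 'STS') to an
            (n_trials, n_voxels) array of single-trial response estimates;
            labels: (n_trials,) array of expression categories (happy, angry, ...).
            Returns mean cross-validated accuracy for each ROI, which can then be
            compared against chance level and between regions.
            """
            labels = np.asarray(labels)
            clf = make_pipeline(StandardScaler(), LinearSVC())
            return {roi: cross_val_score(clf, np.asarray(X), labels, cv=cv).mean()
                    for roi, X in roi_patterns.items()}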

  19. Single-trial ERP analysis reveals facial expression category in a three-stage scheme.

    PubMed

    Zhang, Dandan; Luo, Wenbo; Luo, Yuejia

    2013-05-28

    Emotional faces are salient stimuli that play a critical role in social interactions. Following up on previous research suggesting that event-related potentials (ERPs) show differential amplitudes in response to various facial expressions, the current study used trial-to-trial variability assembled from six discriminating ERP components to predict the facial expression categories in individual trials. In an experiment involving 17 participants, fearful trials were differentiated from non-fearful trials as early as the intervals of N1 and P1, with a mean predictive accuracy of 87%. Single-trial features in the occurrence of N170 and vertex positive potential could distinguish between emotional and neutral expressions (accuracy=90%). Finally, the trials associated with fearful, happy, and neutral faces were completely separated during the window of N3 and P3 (accuracy=83%). These categorization findings elucidated the temporal evolution of facial expression extraction, and demonstrated that the spatio-temporal characteristics of single-trial ERPs can distinguish facial expressions according to a three-stage scheme, with "fear popup," "emotional/unemotional discrimination," and "complete separation" as processing stages. This work constitutes the first examination of neural processing dynamics beyond multitrial ERP averaging, and directly relates the prediction performance of single-trial classifiers to the progressive brain functions of emotional face discrimination. PMID:23566819
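    A simplified version of the single-trial idea, predicting the expression category of each trial from mean amplitudes in component time windows, might look like the sketch below. The component windows and the classifier are hypothetical placeholders; the study itself used six discriminating ERP components and a three-stage scheme rather than this toy pipeline.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import cross_val_score

        # Hypothetical component windows in seconds after face onset.
        WINDOWS = {"P1": (0.08, 0.13), "N170": (0.13, 0.20), "P3": (0.30, 0.50)}

        def component_features(epochs, times):
            """Mean amplitude per trial, channel and component window.

            epochs: (n_trials, n_channels, n_times) single-trial EEG;
            times: (n_times,) sample times in seconds.
            Returns an (n_trials, n_channels * n_windows) feature matrix.
            """
            feats = []
            for lo, hi in WINDOWS.values():
                mask = (times >= lo) & (times < hi)
                feats.append(epochs[:, :, mask].mean(axis=2))
            return np.concatenate(feats, axis=1)

        def expression_decoding_accuracy(epochs, times, labels, cv=10):
            """Cross-validated accuracy of predicting the expression category per trial."""
            X = component_features(epochs, np.asarray(times))
            return cross_val_score(LinearDiscriminantAnalysis(), X, labels, cv=cv).mean()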

  20. Does Parkinson's disease lead to alterations in the facial expression of pain?

    PubMed

    Priebe, Janosch A; Kunz, Miriam; Morcinek, Christian; Rieckmann, Peter; Lautenbacher, Stefan

    2015-12-15

    Hypomimia, which refers to reduced facial expressiveness, is a common sign of Parkinson's disease (PD). The objective of our study was to investigate how hypomimia affects PD patients' facial expression of pain. The facial expressions of 23 idiopathic PD patients in the Off-phase (without dopaminergic medication) and On-phase (after dopaminergic medication intake) and of 23 matched controls in response to phasic heat pain and a temporal summation procedure were recorded and analyzed for overall and specific alterations using the Facial Action Coding System (FACS). We found reduced overall facial activity in response to pain in PD patients in the Off-phase, which was less pronounced in the On-phase. In particular, the highly pain-relevant eye-narrowing occurred less frequently in PD patients than in controls in both phases, while the frequencies of other pain-relevant movements, such as upper lip raise (in the On-phase) and contraction of the eyebrows (in both phases), did not differ between groups. Moreover, opening of the mouth (which is often not considered pain-relevant) was the most frequently displayed movement in PD patients, whereas eye-narrowing was the most frequent movement in controls. Thus, not only overall quantitative changes in the degree of facial pain expressiveness but also qualitative changes were found in PD patients. The latter refer to a strongly affected encoding of the sensory dimension of pain (eye-narrowing), while the encoding of the affective dimension of pain (contraction of the eyebrows) was preserved. This imbalanced pain signal might affect pain communication and pain assessment. PMID:26671119

  1. Beyond face value: does involuntary emotional anticipation shape the perception of dynamic facial expressions?

    PubMed

    Palumbo, Letizia; Jellema, Tjeerd

    2013-01-01

    Emotional facial expressions are immediate indicators of the affective dispositions of others. Recently it has been shown that early stages of social perception can already be influenced by (implicit) attributions made by the observer about the agent's mental state and intentions. In the current study possible mechanisms underpinning distortions in the perception of dynamic, ecologically-valid, facial expressions were explored. In four experiments we examined to what extent basic perceptual processes such as contrast/context effects, adaptation and representational momentum underpinned the perceptual distortions, and to what extent 'emotional anticipation', i.e. the involuntary anticipation of the other's emotional state of mind on the basis of the immediate perceptual history, might have played a role. Neutral facial expressions displayed at the end of short video-clips, in which an initial facial expression of joy or anger gradually morphed into a neutral expression, were misjudged as being slightly angry or slightly happy, respectively (Experiment 1). This response bias disappeared when the actor's identity changed in the final neutral expression (Experiment 2). Videos depicting neutral-to-joy-to-neutral and neutral-to-anger-to-neutral sequences again produced biases but in opposite direction (Experiment 3). The bias survived insertion of a 400 ms blank (Experiment 4). These results suggested that the perceptual distortions were not caused by any of the low-level perceptual mechanisms (adaptation, representational momentum and contrast effects). We speculate that especially when presented with dynamic, facial expressions, perceptual distortions occur that reflect 'emotional anticipation' (a low-level mindreading mechanism), which overrules low-level visual mechanisms. Underpinning neural mechanisms are discussed in relation to the current debate on action and emotion understanding. PMID:23409112

  2. The role of spatial frequency information in the recognition of facial expressions of pain.

    PubMed

    Wang, Shan; Eccleston, Christopher; Keogh, Edmund

    2015-09-01

    Being able to detect pain from facial expressions is critical for pain communication. Alongside identifying the specific facial codes used in pain recognition, there are other types of more basic perceptual features, such as spatial frequency (SF), which refers to the amount of detail in a visual display. Low SF carries coarse information, which can be seen from a distance, and high SF carries fine-detailed information that can only be perceived when viewed close up. As this type of basic information has not been considered in the recognition of pain, we therefore investigated the role of low-SF and high-SF information in the decoding of facial expressions of pain. Sixty-four pain-free adults completed 2 independent tasks: a multiple expression identification task of pain and core emotional expressions and a dual expression "either-or" task (pain vs fear, pain vs happiness). Although both low-SF and high-SF information make the recognition of pain expressions possible, low-SF information seemed to play a more prominent role. This general low-SF bias would seem an advantageous way of potential threat detection, as facial displays will be degraded if viewed from a distance or in peripheral vision. One exception was found, however, in the "pain-fear" task, where responses were not affected by SF type. Together, this not only indicates a flexible role for SF information that depends on task parameters (goal context) but also suggests that in challenging visual conditions, we perceive an overall affective quality of pain expressions rather than detailed facial features. PMID:26075962

  3. Linking Advanced Visualization and MATLAB for the Analysis of 3D Gene Expression Data

    SciTech Connect

    Ruebel, Oliver; Keranen, Soile V.E.; Biggin, Mark; Knowles, David W.; Weber, Gunther H.; Hagen, Hans; Hamann, Bernd; Bethel, E. Wes

    2011-03-30

    Three-dimensional gene expression PointCloud data generated by the Berkeley Drosophila Transcription Network Project (BDTNP) provides quantitative information about the spatial and temporal expression of genes in early Drosophila embryos at cellular resolution. The BDTNP team visualizes and analyzes PointCloud data using the software application PointCloudXplore (PCX). To maximize the impact of novel, complex data sets, such as PointClouds, the data needs to be accessible to biologists and comprehensible to developers of analysis functions. We address this challenge by linking PCX and MATLAB via a dedicated interface, thereby providing biologists seamless access to advanced data analysis functions and giving bioinformatics researchers the opportunity to integrate their analysis directly into the visualization application. To demonstrate the usefulness of this approach, we computationally model parts of the expression pattern of the gene even skipped using a genetic algorithm implemented in MATLAB and integrated into PCX via our MATLAB interface.
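    The abstract mentions fitting part of the even-skipped expression pattern with a genetic algorithm; the sketch below shows what such a loop can look like (the published work used MATLAB through the PCX interface, whereas this is a generic Python illustration). The Gaussian stripe model, population size, and mutation scheme are purely illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(0)

        def stripe_model(params, x):
            """Toy expression model: a sum of Gaussian stripes along the embryo axis x.

            The genome `params` concatenates stripe centres, widths and heights; it
            stands in for whatever mechanistic model one would actually fit.
            """
            centres, widths, heights = np.split(params, 3)
            widths = np.abs(widths) + 1e-3               # keep widths positive
            return sum(h * np.exp(-((x - c) ** 2) / (2 * w ** 2))
                       for c, w, h in zip(centres, widths, heights))

        def fit_pattern_ga(x, target, n_stripes=7, pop=200, gens=300,
                           elite=20, mut_sigma=0.02):
            """Minimal genetic algorithm: keep the best genomes, recombine, mutate."""
            dim = 3 * n_stripes
            population = rng.random((pop, dim))
            for _ in range(gens):
                errors = np.array([np.mean((stripe_model(p, x) - target) ** 2)
                                   for p in population])
                parents = population[np.argsort(errors)[:elite]]
                mums = parents[rng.integers(0, elite, pop)]
                dads = parents[rng.integers(0, elite, pop)]
                mask = rng.random((pop, dim)) < 0.5      # uniform crossover
                population = np.where(mask, mums, dads)
                population += rng.normal(0.0, mut_sigma, (pop, dim))
                population[:elite] = parents             # elitism: keep the best as-is
            errors = np.array([np.mean((stripe_model(p, x) - target) ** 2)
                               for p in population])
            return population[errors.argmin()]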

  4. Effects of exposure to facial expression variation in face learning and recognition.

    PubMed

    Liu, Chang Hong; Chen, Wenfeng; Ward, James

    2015-11-01

    Facial expression is a major source of image variation in face images. Linking numerous expressions to the same face can be a huge challenge for face learning and recognition. It remains largely unknown what level of exposure to this image variation is critical for expression-invariant face recognition. We examined this issue in a recognition memory task, where the number of facial expressions of each face being exposed during a training session was manipulated. Faces were either trained with multiple expressions or a single expression, and they were later tested in either the same or different expressions. We found that recognition performance after learning three emotional expressions had no improvement over learning a single emotional expression (Experiments 1 and 2). However, learning three emotional expressions improved recognition compared to learning a single neutral expression (Experiment 3). These findings reveal both the limitation and the benefit of multiple exposures to variations of emotional expression in achieving expression-invariant face recognition. The transfer of expression training to a new type of expression is likely to depend on a relatively extensive level of training and a certain degree of variation across the types of expressions. PMID:25398479

  5. Lossless 3-D reconstruction and registration of semi-quantitative gene expression data in the mouse brain.

    PubMed

    Enlow, Matthew A; Ju, Tao; Kakadiaris, Ioannis A; Carson, James P

    2011-01-01

    As imaging, computing, and data storage technologies improve, there is an increasing opportunity for multiscale analysis of three-dimensional datasets (3-D). Such analysis enables, for example, microscale elements of multiple macroscale specimens to be compared throughout the entire macroscale specimen. Spatial comparisons require bringing datasets into co-alignment. One approach for co-alignment involves elastic deformations of data in addition to rigid alignments. The elastic deformations distort space, and if not accounted for, can distort the information at the microscale. The algorithms developed in this work address this issue by allowing multiple data points to be encoded into a single image pixel, appropriately tracking each data point to ensure lossless data mapping during elastic spatial deformation. This approach was developed and implemented for both 2-D and 3D registration of images. Lossless reconstruction and registration was applied to semi-quantitative cellular gene expression data in the mouse brain, enabling comparison of multiple spatially registered 3-D datasets without any augmentation of the cellular data. Standard reconstruction and registration without the lossless approach resulted in errors in cellular quantities of ∼ 8%. PMID:22256218
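    The core idea, letting each target pixel carry every data point that maps into it so that nothing is averaged away by the elastic warp, can be sketched very compactly. The sketch below is our illustration of that bookkeeping with hypothetical names; it is not the published algorithm.

        from collections import defaultdict

        def map_points_lossless(points, deform, shape):
            """Map individual cellular data points through an elastic deformation without loss.

            points: iterable of (cell_id, value, (x, y)) in source coordinates;
            deform: function mapping a source (x, y) to target-space coordinates;
            shape: (height, width) of the target image grid.

            Instead of resampling an image (which averages or drops data when several
            cells land in one pixel), every target pixel keeps the full list of data
            points that mapped into it, so per-cell quantities survive the warp intact.
            """
            pixel_contents = defaultdict(list)
            for cell_id, value, (x, y) in points:
                tx, ty = deform((x, y))
                col, row = int(round(tx)), int(round(ty))
                if 0 <= row < shape[0] and 0 <= col < shape[1]:
                    pixel_contents[(row, col)].append((cell_id, value))
            return pixel_contents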

  6. Hypertrophy, gene expression, and beating of neonatal cardiac myocytes are affected by microdomain heterogeneity in 3D

    PubMed Central

    Curtis, Matthew W.; Sharma, Sadhana; Desai, Tejal A.

    2011-01-01

    Cardiac myocytes are known to be influenced by the rigidity and topography of their physical microenvironment. It was hypothesized that 3D heterogeneity introduced by purely physical microdomains regulates cardiac myocyte size and contraction. This was tested in vitro using polymeric microstructures (G′=1.66 GPa) suspended with random orientation in 3D by a soft Matrigel matrix (G′=22.9 Pa). After 10 days of culture, the presence of 100 μm-long microstructures in 3D gels induced fold increases in neonatal rat ventricular myocyte size (1.61±0.06, p<0.01) and total protein/cell ratios (1.43± 0.08, p<0.05) that were comparable to those induced chemically by 50 μM phenylephrine treatment. Upon attachment to microstructures, individual myocytes also had larger cross-sectional areas (1.57±0.05, p<0.01) and higher average rates of spontaneous contraction (2.01±0.08, p<0.01) than unattached myocytes. Furthermore, the inclusion of microstructures in myocyte-seeded gels caused significant increases in the expression of beta-1 adrenergic receptor (β1-AR, 1.19±0.01), cardiac ankyrin repeat protein (CARP, 1.26±0.02), and sarcoplasmic reticulum calcium-ATPase (SERCA2, 1.59±0.12, p<0.05), genes implicated in hypertrophy and contractile activity. Together, the results demonstrate that cardiac myocyte behavior can be controlled through local 3D microdomains alone. This approach of defining physical cues as independent features may help to advance the elemental design considerations for scaffolds in cardiac tissue engineering and therapeutic microdevices. PMID:20668947

  7. Dissociation between recognition and detection advantage for facial expressions: a meta-analysis.

    PubMed

    Nummenmaa, Lauri; Calvo, Manuel G

    2015-04-01

    Happy facial expressions are recognized faster and more accurately than other expressions in categorization tasks, whereas detection in visual search tasks is widely believed to be faster for angry than happy faces. We used meta-analytic techniques for resolving this categorization versus detection advantage discrepancy for positive versus negative facial expressions. Effect sizes were computed on the basis of the r statistic for a total of 34 recognition studies with 3,561 participants and 37 visual search studies with 2,455 participants, yielding a total of 41 effect sizes for recognition accuracy, 25 for recognition speed, and 125 for visual search speed. Random effects meta-analysis was conducted to estimate effect sizes at population level. For recognition tasks, an advantage in recognition accuracy and speed for happy expressions was found for all stimulus types. In contrast, for visual search tasks, moderator analysis revealed that a happy face detection advantage was restricted to photographic faces, whereas a clear angry face advantage was found for schematic and "smiley" faces. A robust detection advantage for nonhappy faces was observed even when stimulus emotionality was distorted by inversion or rearrangement of the facial features, suggesting that visual features primarily drive the search. We conclude that the recognition advantage for happy faces is a genuine phenomenon related to processing of facial expression category and affective valence. In contrast, detection advantages toward either happy (photographic stimuli) or nonhappy (schematic) faces are contingent on visual stimulus features rather than facial expression, and may not involve categorical or affective processing. PMID:25706834
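    For readers who want to reproduce this kind of pooling, a standard DerSimonian-Laird random-effects combination of r-based effect sizes (via Fisher's z) looks like the sketch below. It is a generic textbook recipe under the usual assumptions, not the authors' analysis code, and it omits the moderator analyses reported in the abstract.

        import numpy as np

        def random_effects_meta_r(r_values, n_values):
            """Random-effects pooling of correlation effect sizes (DerSimonian-Laird).

            r_values, n_values: per-study effect sizes (r) and sample sizes.  Each r is
            Fisher z-transformed (variance 1/(n-3)), a between-study variance tau^2 is
            estimated, and the weighted mean z is transformed back to r.
            Returns the pooled r and its 95% confidence interval.
            """
            r = np.asarray(r_values, dtype=float)
            n = np.asarray(n_values, dtype=float)
            z = np.arctanh(r)                      # Fisher z transform
            v = 1.0 / (n - 3.0)                    # within-study variance of z
            w = 1.0 / v
            z_fixed = np.sum(w * z) / np.sum(w)
            q = np.sum(w * (z - z_fixed) ** 2)     # heterogeneity statistic Q
            c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
            tau2 = max(0.0, (q - (len(r) - 1)) / c)
            w_star = 1.0 / (v + tau2)
            z_random = np.sum(w_star * z) / np.sum(w_star)
            se = np.sqrt(1.0 / np.sum(w_star))
            return (np.tanh(z_random),
                    np.tanh(z_random - 1.96 * se),
                    np.tanh(z_random + 1.96 * se))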

  8. A Model of the Perception of Facial Expressions of Emotion by Humans: Research Overview and Perspectives.

    PubMed

    Martinez, Aleix; Du, Shichuan

    2012-05-01

    In cognitive science and neuroscience, there have been two leading models describing how humans perceive and classify facial expressions of emotion: the continuous and the categorical model. The continuous model defines each facial expression of emotion as a feature vector in a face space. This model explains, for example, how expressions of emotion can be seen at different intensities. In contrast, the categorical model consists of C classifiers, each tuned to a specific emotion category. This model explains, among other findings, why the images in a morphing sequence between a happy and a surprise face are perceived as either happy or surprise but not something in between. While the continuous model has a more difficult time justifying this latter finding, the categorical model is not as good when it comes to explaining how expressions are recognized at different intensities or modes. Most importantly, both models have problems explaining how one can recognize combinations of emotion categories such as happily surprised versus angrily surprised versus surprise. To resolve these issues, in the past several years, we have worked on a revised model that justifies the results reported in the cognitive science and neuroscience literature. This model consists of C distinct continuous spaces. Multiple (compound) emotion categories can be recognized by linearly combining these C face spaces. The dimensions of these spaces are shown to be mostly configural. According to this model, the major task for the classification of facial expressions of emotion is precise, detailed detection of facial landmarks rather than recognition. We provide an overview of the literature justifying the model, show how the resulting model can be employed to build algorithms for the recognition of facial expression of emotion, and propose research directions for machine learning and computer vision researchers to keep pushing the state of the art in these areas. We also discuss how the model can …

  9. Children's Scripts for Social Emotions: Causes and Consequences Are More Central than Are Facial Expressions

    ERIC Educational Resources Information Center

    Widen, Sherri C.; Russell, James A.

    2010-01-01

    Understanding and recognition of emotions relies on emotion concepts, which are narrative structures (scripts) specifying facial expressions, causes, consequences, label, etc. organized in a temporal and causal order. Scripts and their development are revealed by examining which components better tap which concepts at which ages. This study…

  10. Recognition of Facial Expressions of Emotion by Children with Attention-Deficit Hyperactivity Disorder.

    ERIC Educational Resources Information Center

    Singh, Subhashni D.; Ellis, Cynthia R.; Winton, Alan S. W.; Singh, Nirbhay N.; Leung, Jin Pang; Oswald, Donald P.

    1998-01-01

    Difficulties with a wide range of social interactions are experienced by children with ADHD. Fifty children and adolescents were tested for ability to recognize the six basic facial expressions of emotion. Results are reported; implications for remediation of social skill deficits commonly seen in children with ADHD are discussed. (Author/EMK)

  11. Enhancing the Recognition and Production of Facial Expressions of Emotion by Children with Mental Retardation.

    ERIC Educational Resources Information Center

    Stewart, Claire A.; Singh, Nirbhay N.

    1995-01-01

    In 2 experiments with 6 boys (ages 12 and 13) having mild and moderate mental retardation, directed rehearsal was used to teach subjects to either recognize or produce 6 basic facial expressions of emotion. Training in both skills was effective, and the recognition training was maintained at 8-week and 12-week assessments following instruction.…

  12. Understanding Emotions from Standardized Facial Expressions in Autism and Normal Development

    ERIC Educational Resources Information Center

    Castelli, Fulvia

    2005-01-01

    The study investigated the recognition of standardized facial expressions of emotion (anger, fear, disgust, happiness, sadness, surprise) at a perceptual level (experiment 1) and at a semantic level (experiments 2 and 3) in children with autism (N= 20) and normally developing children (N= 20). Results revealed that children with autism were as…

  13. The Umeå University Database of Facial Expressions: A Validation Study

    PubMed Central

    2012-01-01

    Background A set of face stimuli, called the Umeå University Database of Facial Expressions, is described. The set consists of 30 female and 30 male models aged 17–67 years (M = 30.19, SD = 10.66). Each model shows seven different facial expressions (angry, surprised, happy, sad, neutral, afraid, and disgusted). Most models are ethnic Swedes but models of Central European, Arabic, and Asian origin are also included. Objective Creating and validating a new database of facial expressions that can be used for scientific experiments. Methods The images, presented in random order one at a time, were validated by 526 volunteers rating on average 125 images on seven 10-point Likert-type scales ranging from “completely disagree” to “completely agree” for each emotion. Results The proportion of the aggregated results that were correctly classified was considered to be high (M = 88%). Conclusions The results lend empirical support for the validity of this set of facial expressions. The set can be used freely by the scientific community. PMID:23047935

  14. Cradling Side Preference Is Associated with Lateralized Processing of Baby Facial Expressions in Females

    ERIC Educational Resources Information Center

    Huggenberger, Harriet J.; Suter, Susanne E.; Reijnen, Ester; Schachinger, Hartmut

    2009-01-01

    Women's cradling side preference has been related to contralateral hemispheric specialization for processing emotional signals, but not for processing a baby's facial expression. Therefore, 46 nulliparous female volunteers were characterized as left or non-left holders (HG) during a doll-holding task. During a signal detection task they were then…

  15. Automated Measurement of Facial Expression in Infant-Mother Interaction: A Pilot Study

    ERIC Educational Resources Information Center

    Messinger, Daniel S.; Mahoor, Mohammad H.; Chow, Sy-Miin; Cohn, Jeffrey F.

    2009-01-01

    Automated facial measurement using computer vision has the potential to objectively document continuous changes in behavior. To examine emotional expression and communication, we used automated measurements to quantify smile strength, eye constriction, and mouth opening in two 6-month-old infant-mother dyads who each engaged in a face-to-face…

  16. Recognition of Emotional and Nonemotional Facial Expressions: A Comparison between Williams Syndrome and Autism

    ERIC Educational Resources Information Center

    Lacroix, Agnes; Guidetti, Michele; Roge, Bernadette; Reilly, Judy

    2009-01-01

    The aim of our study was to compare two neurodevelopmental disorders (Williams syndrome and autism) in terms of the ability to recognize emotional and nonemotional facial expressions. The comparison of these two disorders is particularly relevant to the investigation of face processing and should contribute to a better understanding of social…

  17. Facial Expression Recognition: Can Preschoolers with Cochlear Implants and Hearing Aids Catch It?

    ERIC Educational Resources Information Center

    Wang, Yifang; Su, Yanjie; Fang, Ping; Zhou, Qingxia

    2011-01-01

    Tager-Flusberg and Sullivan (2000) presented a cognitive model of theory of mind (ToM), in which they thought ToM included two components--a social-perceptual component and a social-cognitive component. Facial expression recognition (FER) is an ability tapping the social-perceptual component. Previous findings suggested that normal hearing…

  18. Abnormal Amygdala and Prefrontal Cortex Activation to Facial Expressions in Pediatric Bipolar Disorder

    ERIC Educational Resources Information Center

    Garrett, Amy S.; Reiss, Allan L.; Howe, Meghan E.; Kelley, Ryan G.; Singh, Manpreet K.; Adleman, Nancy E.; Karchemskiy, Asya; Chang, Kiki D.

    2012-01-01

    Objective: Previous functional magnetic resonance imaging (fMRI) studies in pediatric bipolar disorder (BD) have reported greater amygdala and less dorsolateral prefrontal cortex (DLPFC) activation to facial expressions compared to healthy controls. The current study investigates whether these differences are associated with the early or late…

  19. Culture shapes 7-month-olds' perceptual strategies in discriminating facial expressions of emotion.

    PubMed

    Geangu, Elena; Ichikawa, Hiroko; Lao, Junpeng; Kanazawa, So; Yamaguchi, Masami K; Caldara, Roberto; Turati, Chiara

    2016-07-25

    Emotional facial expressions are thought to have evolved because they play a crucial role in species' survival. From infancy, humans develop dedicated neural circuits [1] to exhibit and recognize a variety of facial expressions [2]. But there is increasing evidence that culture specifies when and how certain emotions can be expressed - social norms - and that the mature perceptual mechanisms used to transmit and decode the visual information from emotional signals differ between Western and Eastern adults [3-5]. Specifically, the mouth is more informative for transmitting emotional signals in Westerners and the eye region for Easterners [4], generating culture-specific fixation biases towards these features [5]. During development, it is recognized that cultural differences can be observed at the level of emotional reactivity and regulation [6], and to the culturally dominant modes of attention [7]. Nonetheless, to our knowledge no study has explored whether culture shapes the processing of facial emotional signals early in development. The data we report here show that, by 7 months, infants from both cultures visually discriminate facial expressions of emotion by relying on culturally distinct fixation strategies, resembling those used by the adults from the environment in which they develop [5]. PMID:27458908

  20. Assessment of Learners' Attention to E-Learning by Monitoring Facial Expressions for Computer Network Courses

    ERIC Educational Resources Information Center

    Chen, Hong-Ren

    2012-01-01

    Recognition of students' facial expressions can be used to understand their level of attention. In a traditional classroom setting, teachers guide the classes and continuously monitor and engage the students to evaluate their understanding and progress. Given the current popularity of e-learning environments, it has become important to assess the…

  1. Concealing of facial expressions by a wild Barbary macaque (Macaca sylvanus).

    PubMed

    Thunström, Maria; Kuchenbuch, Paul; Young, Christopher

    2014-07-01

    Behavioural research on non-vocal communication among non-human primates and its possible links to the origin of human language is a long-standing research topic. Because human language is under voluntary control, it is of interest whether this is also true for any communicative signals of other species. It has been argued that the behaviour of hiding a facial expression with one's hand supports the idea that gestures might be under more voluntary control than facial expressions among non-human primates, and it has also been interpreted as a sign of intentionality. So far, the behaviour has only been reported twice, for single gorilla and chimpanzee individuals, both in captivity. Here, we report the first observation of concealing of facial expressions by a monkey, a Barbary macaque (Macaca sylvanus), living in the wild. On eight separate occasions between 2009 and 2011 an adult male was filmed concealing two different facial expressions associated with play and aggression ("play face" and "scream face"), 22 times in total. The videos were analysed in detail, including gaze direction, hand usage, duration, and individuals present. This male was the only individual in his group to manifest this behaviour, which always occurred in the presence of a dominant male. Several possible interpretations of the function of the behaviour are discussed. The observations in this study indicate that the gestural communication and cognitive abilities of monkeys warrant more research attention. PMID:24770588

  2. Similarities and Differences in the Perceptual Structure of Facial Expressions of Children and Adults

    ERIC Educational Resources Information Center

    Gao, Xiaoqing; Maurer, Daphne; Nishimura, Mayu

    2010-01-01

    We explored the perceptual structure of facial expressions of six basic emotions, varying systematically in intensity, in adults and children aged 7 and 14 years. Multidimensional scaling suggested that three- or four-dimensional structures were optimal for all groups. Two groups of adults demonstrated nearly identical structure, which had…
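
    For readers unfamiliar with the technique, the snippet below shows the general form of the multidimensional scaling analysis mentioned in the abstract; the 6 x 6 dissimilarity matrix is synthetic and the use of scikit-learn is an assumption about tooling, not the authors' software.

```python
import numpy as np
from sklearn.manifold import MDS

# Build a synthetic symmetric dissimilarity matrix over the six basic
# expressions, then embed it in three dimensions (the kind of solution the
# abstract describes as optimal).
rng = np.random.default_rng(0)
d = rng.random((6, 6))
dissim = (d + d.T) / 2.0
np.fill_diagonal(dissim, 0.0)

mds = MDS(n_components=3, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissim)
print(coords.shape)   # (6, 3): one point per expression in the recovered space
```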

  3. Influence of Emotional Facial Expressions on 3-5-Year-Olds' Face Recognition

    ERIC Educational Resources Information Center

    Freitag, Claudia; Schwarzer, Gudrun

    2011-01-01

    Three experiments examined 3- and 5-year-olds' recognition of faces in constant and varied emotional expressions. Children were asked to identify repeatedly presented target faces, distinguishing them from distractor faces, during an immediate recognition test and during delayed assessments after 10 min and one week. Emotional facial expression…

  4. Spatiotemporal neural network dynamics for the processing of dynamic facial expressions

    PubMed Central

    Sato, Wataru; Kochiyama, Takanori; Uono, Shota

    2015-01-01

    The dynamic facial expressions of emotion automatically elicit multifaceted psychological activities; however, the temporal profiles and dynamic interaction patterns of brain activities remain unknown. We investigated these issues using magnetoencephalography. Participants passively observed dynamic facial expressions of fear and happiness, or dynamic mosaics. Source-reconstruction analyses utilizing functional magnetic-resonance imaging data revealed higher activation in broad regions of the bilateral occipital and temporal cortices in response to dynamic facial expressions than in response to dynamic mosaics at 150–200 ms and some later time points. The right inferior frontal gyrus exhibited higher activity for dynamic faces versus mosaics at 300–350 ms. Dynamic causal-modeling analyses revealed that dynamic faces activated the dual visual routes and visual–motor route. Superior influences of feedforward and feedback connections were identified before and after 200 ms, respectively. These results indicate that hierarchical, bidirectional neural network dynamics within a few hundred milliseconds implement the processing of dynamic facial expressions. PMID:26206708

  5. Effects of Context and Facial Expression on Imitation Tasks in Preschool Children with Autism

    ERIC Educational Resources Information Center

    Markodimitraki, Maria; Kypriotaki, Maria; Ampartzaki, Maria; Manolitsis, George

    2013-01-01

    The present study explored the effect of the context in which an imitation act occurs (elicited/spontaneous) and the experimenter's facial expression (neutral or smiling) during the imitation task with young children with autism and typically developing children. The participants were 10 typically developing children and 10 children with…

  6. The Effects of Early Institutionalization on the Discrimination of Facial Expressions of Emotion in Young Children

    ERIC Educational Resources Information Center

    Jeon, Hana; Moulson, Margaret C.; Fox, Nathan; Zeanah, Charles; Nelson, Charles A., III

    2010-01-01

    The current study examined the effects of institutionalization on the discrimination of facial expressions of emotion in three groups of 42-month-old children. One group consisted of children abandoned at birth who were randomly assigned to Care-as-Usual (institutional care) following a baseline assessment. Another group consisted of children…

  7. Processing of Facial Expressions of Emotions by Adults with Down Syndrome and Moderate Intellectual Disability

    ERIC Educational Resources Information Center

    Carvajal, Fernando; Fernandez-Alcaraz, Camino; Rueda, Maria; Sarrion, Louise

    2012-01-01

    The processing of facial expressions of emotions by 23 adults with Down syndrome and moderate intellectual disability was compared with that of adults with intellectual disability of other etiologies (24 matched in cognitive level and 26 with mild intellectual disability). Each participant performed 4 tasks of the Florida Affect Battery and an…

  8. The Facial Expression of Anger in Seven-Month-Old Infants.

    ERIC Educational Resources Information Center

    Stenberg, Craig R.; And Others

    1983-01-01

    Investigated whether, in a sample of 30 infants, anger could reliably be observed in facial expressions as early as seven months of age. Also considered was the influence of several variables on anger responses: infants' familiarity with the frustrator, repetition of trials, and sex of the child. (Author/RH)

  9. Infants' Intermodal Perception of Canine ("Canis familiaris") Facial Expressions and Vocalizations

    ERIC Educational Resources Information Center

    Flom, Ross; Whipple, Heather; Hyde, Daniel

    2009-01-01

    From birth, human infants are able to perceive a wide range of intersensory relationships. The current experiment examined whether infants between 6 months and 24 months old perceive the intermodal relationship between aggressive and nonaggressive canine vocalizations (i.e., barks) and appropriate canine facial expressions. Infants simultaneously…

  10. Developmental Changes in Infants' Processing of Happy and Angry Facial Expressions: A Neurobehavioral Study

    ERIC Educational Resources Information Center

    Grossmann, Tobias; Striano, Tricia; Friederici, Angela D.

    2007-01-01

    Event-related brain potentials were measured in 7- and 12-month-old infants to examine the development of processing happy and angry facial expressions. In 7-month-olds a larger negativity to happy faces was observed at frontal, central, temporal and parietal sites (Experiment 1), whereas 12-month-olds showed a larger negativity to angry faces at…

  11. Facial Expression of Affect in Children with Cornelia de Lange Syndrome

    ERIC Educational Resources Information Center

    Collis, L.; Moss, J.; Jutley, J.; Cornish, K.; Oliver, C.

    2008-01-01

    Background: Individuals with Cornelia de Lange syndrome (CdLS) have been reported to show comparatively high levels of flat and negative affect but there have been no empirical evaluations. In this study, we use an objective measure of facial expression to compare affect in CdLS with that seen in Cri du Chat syndrome (CDC) and a group of…

  12. Neural evidence for cultural differences in the valuation of positive facial expressions.

    PubMed

    Park, BoKyung; Tsai, Jeanne L; Chim, Louise; Blevins, Elizabeth; Knutson, Brian

    2016-02-01

    European Americans value excitement more and calm less than Chinese. Within cultures, European Americans value excited and calm states similarly, whereas Chinese value calm more than excited states. To examine how these cultural differences influence people's immediate responses to excited vs calm facial expressions, we combined a facial rating task with functional magnetic resonance imaging. During scanning, European American (n = 19) and Chinese (n = 19) females viewed and rated faces that varied by expression (excited, calm), ethnicity (White, Asian) and gender (male, female). As predicted, European Americans showed greater activity in circuits associated with affect and reward (bilateral ventral striatum, left caudate) while viewing excited vs calm expressions than did Chinese. Within cultures, European Americans responded to excited vs calm expressions similarly, whereas Chinese showed greater activity in these circuits in response to calm vs excited expressions regardless of targets' ethnicity or gender. Across cultural groups, greater ventral striatal activity while viewing excited vs. calm expressions predicted greater preference for excited vs calm expressions months later. These findings provide neural evidence that people find viewing the specific positive facial expressions valued by their cultures to be rewarding and relevant. PMID:26342220

  13. Hemodynamic response of children with attention-deficit and hyperactive disorder (ADHD) to emotional facial expressions.

    PubMed

    Ichikawa, Hiroko; Nakato, Emi; Kanazawa, So; Shimamura, Keiichi; Sakuta, Yuiko; Sakuta, Ryoichi; Yamaguchi, Masami K; Kakigi, Ryusuke

    2014-10-01

    Children with attention-deficit/hyperactivity disorder (ADHD) have difficulty recognizing facial expressions. They identify angry expressions less accurately than typically developing (TD) children, yet little is known about the atypical neural basis of their facial expression recognition. Here, we used near-infrared spectroscopy (NIRS) to examine the distinctive cerebral hemodynamics of ADHD and TD children while they viewed happy and angry expressions. We measured the hemodynamic responses of 13 ADHD boys and 13 TD boys to happy and angry expressions at their bilateral temporal areas, which are sensitive to face processing. The ADHD children showed an increased concentration of oxy-Hb for happy faces but not for angry faces, while TD children showed increased oxy-Hb for both faces. Moreover, the individual peak latency of hemodynamic response in the right temporal area showed significantly greater variance in the ADHD group than in the TD group. Such atypical brain activity observed in ADHD boys may relate to their preserved ability to recognize a happy expression and their difficulty recognizing an angry expression. This is the first demonstration that NIRS can be used to detect atypical hemodynamic responses to facial expressions in children with ADHD. PMID:25152531

  14. Association between facial expression and PTSD symptoms among young children exposed to the Great East Japan Earthquake: a pilot study.

    PubMed

    Fujiwara, Takeo; Mizuki, Rie; Miki, Takahiro; Chemtob, Claude

    2015-01-01

    "Emotional numbing" is a symptom of post-traumatic stress disorder (PTSD) characterized by a loss of interest in usually enjoyable activities, feeling detached from others, and an inability to express a full range of emotions. Emotional numbing is usually assessed through self-report, and is particularly difficult to ascertain among young children. We conducted a pilot study to explore the use of facial expression ratings in response to a comedy video clip to assess emotional reactivity among preschool children directly exposed to the Great East Japan Earthquake. This study included 23 child participants. Child PTSD symptoms were measured using a modified version of the Parent's Report of the Child's Reaction to Stress scale. Children were filmed while watching a 2-min video compilation of natural scenes ('baseline video') followed by a 2-min video clip from a television comedy ('comedy video'). Children's facial expressions were processed the using Noldus FaceReader software, which implements the Facial Action Coding System (FACS). We investigated the association between PTSD symptom scores and facial emotion reactivity using linear regression analysis. Children with higher PTSD symptom scores showed a significantly greater proportion of neutral facial expressions, controlling for sex, age, and baseline facial expression (p < 0.05). This pilot study suggests that facial emotion reactivity, measured using facial expression recognition software, has the potential to index emotional numbing in young children. This pilot study adds to the emerging literature on using experimental psychopathology methods to characterize children's reactions to disasters. PMID:26528206

  15. A new approach to measuring individual differences in sensitivity to facial expressions: influence of temperamental shyness and sociability.

    PubMed

    Gao, Xiaoqing; Chiesa, Julia; Maurer, Daphne; Schmidt, Louis A

    2014-01-01

    To examine individual differences in adults' sensitivity to facial expressions, we used a novel method that has proved revealing in studies of developmental change. Using static faces morphed to show different intensities of facial expressions, we calculated two measures: (1) the threshold to detect that a low intensity facial expression is different from neutral, and (2) accuracy in recognizing the specific facial expression in faces above the detection threshold. We conducted two experiments with young adult females varying in reported temperamental shyness and sociability - the former trait is known to influence the recognition of facial expressions during childhood. In both experiments, the measures had good split half reliability. Because shyness was significantly negatively correlated with sociability, we used partial correlations to examine the relation of each to sensitivity to facial expressions. Sociability was negatively related to threshold to detect fear (Experiment 1) and to misidentify fear as another expression or happy expressions as fear (Experiment 2). Both patterns are consistent with hypervigilance by less sociable individuals. Shyness was positively related to misidentification of fear as another emotion (Experiment 2), a pattern consistent with a history of avoidance. We discuss the advantages and limitations of this new approach for studying individual differences in sensitivity to facial expressions. PMID:24550857
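
    The partial-correlation step mentioned in the abstract can be illustrated with a generic residualization approach; the numbers below are synthetic and the variable roles (sociability, shyness, fear-detection threshold) are only examples, not the authors' data or code.

```python
import numpy as np
from scipy import stats

def partial_corr(x, y, z):
    """Correlation between x and y after removing the linear effect of a
    single covariate z from both variables."""
    x, y, z = map(np.asarray, (x, y, z))
    rx = x - np.polyval(np.polyfit(z, x, 1), z)   # residualize x on z
    ry = y - np.polyval(np.polyfit(z, y, 1), z)   # residualize y on z
    return stats.pearsonr(rx, ry)

# Synthetic example: sociability vs. a fear-detection threshold, controlling
# for shyness (which is negatively correlated with sociability).
rng = np.random.default_rng(1)
shyness = rng.normal(size=40)
sociability = -0.5 * shyness + rng.normal(size=40)
fear_threshold = -0.4 * sociability + rng.normal(size=40)
print(partial_corr(sociability, fear_threshold, shyness))
```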

  16. A review of recent advances in 3D face recognition

    NASA Astrophysics Data System (ADS)

    Luo, Jing; Geng, Shuze; Xiao, Zhaoxia; Xiu, Chunbo

    2015-03-01

    Face recognition based on machine vision has achieved great advances and is widely used in various fields. However, face recognition still faces challenges such as facial pose, variations in illumination, and facial expression. This paper therefore reviews recent advances in 3D face recognition. 3D face recognition approaches are categorized into four groups: minutiae approaches, space-transform approaches, geometric-feature approaches, and model-based approaches. Several typical approaches are compared in detail, including feature extraction, recognition algorithms, and algorithm performance. Finally, the paper summarizes the remaining challenges in 3D face recognition and future trends, aiming to help researchers working on face recognition.

  17. Fluid Intelligence and Automatic Neural Processes in Facial Expression Perception: An Event-Related Potential Study.

    PubMed

    Liu, Tongran; Xiao, Tong; Li, Xiaoyan; Shi, Jiannong

    2015-01-01

    The relationship between human fluid intelligence and social-emotional abilities has been a topic of considerable interest. The current study investigated whether adolescents with different intellectual levels had different automatic neural processing of facial expressions. Two groups of adolescent males were enrolled: a high IQ group and an average IQ group. Age and parental socioeconomic status were matched between the two groups. Participants counted the number of central cross changes while paired facial expressions were presented bilaterally in an oddball paradigm. There were two experimental conditions: a happy condition, in which neutral expressions were standard stimuli (p = 0.8) and happy expressions were deviant stimuli (p = 0.2), and a fearful condition, in which neutral expressions were standard stimuli (p = 0.8) and fearful expressions were deviant stimuli (p = 0.2). Participants were required to concentrate on the primary task of counting the central cross changes and to ignore the expressions to ensure that facial expression processing was automatic. Event-related potentials (ERPs) were obtained during the tasks. The visual mismatch negativity (vMMN) components were analyzed to index the automatic neural processing of facial expressions. For the early vMMN (50-130 ms), the high IQ group showed more negative vMMN amplitudes than the average IQ group in the happy condition. For the late vMMN (320-450 ms), the high IQ group had greater vMMN responses than the average IQ group over frontal and occipito-temporal areas in the fearful condition, and the average IQ group evoked larger vMMN amplitudes than the high IQ group over occipito-temporal areas in the happy condition. The present study elucidated the close relationship between fluid intelligence and pre-attentive change detection of social-emotional information. PMID:26375031

  18. Fluid Intelligence and Automatic Neural Processes in Facial Expression Perception: An Event-Related Potential Study

    PubMed Central

    Liu, Tongran; Xiao, Tong; Li, Xiaoyan; Shi, Jiannong

    2015-01-01

    The relationship between human fluid intelligence and social-emotional abilities has been a topic of considerable interest. The current study investigated whether adolescents with different intellectual levels had different automatic neural processing of facial expressions. Two groups of adolescent males were enrolled: a high IQ group and an average IQ group. Age and parental socioeconomic status were matched between the two groups. Participants counted the number of central cross changes while paired facial expressions were presented bilaterally in an oddball paradigm. There were two experimental conditions: a happy condition, in which neutral expressions were standard stimuli (p = 0.8) and happy expressions were deviant stimuli (p = 0.2), and a fearful condition, in which neutral expressions were standard stimuli (p = 0.8) and fearful expressions were deviant stimuli (p = 0.2). Participants were required to concentrate on the primary task of counting the central cross changes and to ignore the expressions to ensure that facial expression processing was automatic. Event-related potentials (ERPs) were obtained during the tasks. The visual mismatch negativity (vMMN) components were analyzed to index the automatic neural processing of facial expressions. For the early vMMN (50–130 ms), the high IQ group showed more negative vMMN amplitudes than the average IQ group in the happy condition. For the late vMMN (320–450 ms), the high IQ group had greater vMMN responses than the average IQ group over frontal and occipito-temporal areas in the fearful condition, and the average IQ group evoked larger vMMN amplitudes than the high IQ group over occipito-temporal areas in the happy condition. The present study elucidated the close relationship between fluid intelligence and pre-attentive change detection of social-emotional information. PMID:26375031
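
    As a rough illustration of how a vMMN amplitude of the kind reported above is typically quantified (the deviant-minus-standard difference wave averaged within a time window), consider the sketch below; the sampling rate, epoch limits, and random stand-in ERPs are assumptions and do not reproduce the authors' pipeline.

```python
import numpy as np

fs = 500                                    # assumed sampling rate in Hz
t = np.arange(-0.1, 0.6, 1.0 / fs)          # assumed epoch time axis in seconds

# Random arrays standing in for the averaged ERPs to standard and deviant faces.
rng = np.random.default_rng(0)
standard_erp = rng.normal(0.0, 1.0, t.size)
deviant_erp = rng.normal(0.0, 1.0, t.size)

diff_wave = deviant_erp - standard_erp      # the (v)MMN difference wave

def mean_in_window(signal, times, lo, hi):
    return float(signal[(times >= lo) & (times <= hi)].mean())

early_vmmn = mean_in_window(diff_wave, t, 0.050, 0.130)   # 50-130 ms window
late_vmmn = mean_in_window(diff_wave, t, 0.320, 0.450)    # 320-450 ms window
print(early_vmmn, late_vmmn)
```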

  19. Pulmonary surfactant expression analysis--role of cell-cell interactions and 3-D tissue-like architecture.

    PubMed

    Nandkumar, Maya A; Ashna, U; Thomas, Lynda V; Nair, Prabha D

    2015-03-01

    Surfactant production is important in maintaining alveolar function both in vivo and in vitro, but surfactant expression is the primary property lost by alveolar type II pneumocytes in culture, and its maintenance is a functional requirement. To develop a functional tissue-like model, the in vivo cell-cell interactions and three-dimensional architecture have to be reproduced. To this end, a 3D button-shaped synthetic gelatin vinyl acetate (GeVAc) co-polymer scaffold was seeded with different types of lung cells. Functionality of the construct was studied under both static and dynamic conditions. The construct was characterized by environmental scanning electron microscopy and fluorescence microscopy, and functionality of the system was analyzed by studying mRNA modulation of all four surfactant genes (A, B, C, and D) with real-time PCR under varying culture conditions. The scaffold supports alveolar cell adhesion, maintenance of cuboidal morphology, and the alveolar-specific property of surfactant synthesis, which would otherwise be rapidly lost in culture. This is a novel 3D system that expresses all four surfactants for a culture duration of 3 weeks. PMID:25262918

  20. Dynamic facial expressions evoke distinct activation in the face perception network: a connectivity analysis study.

    PubMed

    Foley, Elaine; Rippon, Gina; Thai, Ngoc Jade; Longe, Olivia; Senior, Carl

    2012-02-01

    Very little is known about the neural structures involved in the perception of realistic dynamic facial expressions. In the present study, a unique set of naturalistic dynamic facial emotional expressions was created. Through fMRI and connectivity analysis, a dynamic face perception network was identified, which is demonstrated to extend Haxby et al.'s [Haxby, J. V., Hoffman, E. A., & Gobbini, M. I. The distributed human neural system for face perception. Trends in Cognitive Science, 4, 223-233, 2000] distributed neural system for face perception. This network includes early visual regions, such as the inferior occipital gyrus, which is identified as insensitive to motion or affect but sensitive to the visual stimulus, the STS, identified as specifically sensitive to motion, and the amygdala, recruited to process affect. Measures of effective connectivity between these regions revealed that dynamic facial stimuli were associated with specific increases in connectivity between early visual regions, such as the inferior occipital gyrus and the STS, along with coupling between the STS and the amygdala, as well as the inferior frontal gyrus. These findings support the presence of a distributed network of cortical regions that mediate the perception of different dynamic facial expressions. PMID:21861684

  1. Facial expression recognition and histograms of oriented gradients: a comprehensive study.

    PubMed

    Carcagnì, Pierluigi; Del Coco, Marco; Leo, Marco; Distante, Cosimo

    2015-01-01

    Automatic facial expression recognition (FER) is a topic of growing interest, mainly due to the rapid spread of assistive technology applications such as human-robot interaction, where robust emotional awareness is key to accomplishing the assistive task. This paper proposes a comprehensive study of the application of the histogram of oriented gradients (HOG) descriptor to the FER problem, highlighting how this powerful technique can be effectively exploited for this purpose. In particular, the paper shows that a proper setting of the HOG parameters can make this descriptor one of the most suitable for characterizing facial expression peculiarities. A large experimental session, divided into three phases, was carried out using a consolidated algorithmic pipeline. The first phase aimed to prove the suitability of the HOG descriptor for characterizing facial expression traits; to this end, a successful comparison with the most commonly used FER frameworks was carried out. In the second phase, different publicly available facial datasets were used to test the system on images acquired under different conditions (e.g. image resolution, lighting conditions, etc.). As a final phase, a test on continuous data streams was carried out on-line in order to validate the system under real-world operating conditions simulating real-time human-machine interaction. PMID:26543779
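
    A minimal sketch of a HOG-plus-linear-SVM pipeline of the kind studied in the paper is shown below; the specific HOG parameters, image size, and the scikit-image/scikit-learn toolchain are assumptions, and the random arrays only stand in for aligned, cropped face images with expression labels.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

def hog_features(faces, pixels_per_cell=(8, 8), cells_per_block=(2, 2)):
    """Compute a HOG descriptor per grayscale face image (parameters are placeholders)."""
    return np.array([
        hog(img, orientations=9,
            pixels_per_cell=pixels_per_cell,
            cells_per_block=cells_per_block,
            block_norm="L2-Hys")
        for img in faces
    ])

def evaluate(faces, labels):
    """Cross-validated accuracy of a linear SVM on HOG features."""
    return cross_val_score(LinearSVC(), hog_features(faces), labels, cv=5).mean()

# Synthetic stand-in data: 70 random 64x64 "faces" over 7 expression classes,
# just to show the call pattern.
rng = np.random.default_rng(0)
faces = rng.random((70, 64, 64))
labels = np.repeat(np.arange(7), 10)
print(evaluate(faces, labels))
```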

  2. The facilitative effect of facial expression on the self-generation of emotion.

    PubMed

    Hess, U; Kappas, A; McHugo, G J; Lanzetta, J T; Kleck, R E

    1992-05-01

    Twenty-seven female undergraduates completed three tasks: (1) feel four emotions (happiness, sadness, anger, peacefulness); (2) express these emotions, without trying to feel them; and (3) feel and express clearly these four emotions. During each trial subjects pressed a button to indicate when they had reached the required state, and the latency from emotion cue to button press was measured. Heart rate, skin conductance and EMG from four facial sites (brow, cheek, jaw and mouth) were recorded for 15 s before and after the button press and during a baseline period prior to each trial. Self-reports were obtained after each trial. Facial EMG and patterns of autonomic arousal differentiated among the four emotions within each task. Shorter self-generation latency in the Feel-and-Show versus the Feel condition indicated the facilitative effect of facial expression on the self-generation of emotion. Furthermore, the presence of autonomic changes and self-reported affect in the Show condition supports the sufficiency version of the facial feedback hypothesis. The self-generation method employed as an emotion elicitor was shown to reliably induce emotional reactions and is proposed as a useful technique for the elicitation of various emotional states in the laboratory. PMID:1639672

  3. An Effective 3D Ear Acquisition System

    PubMed Central

    Liu, Yahui; Lu, Guangming; Zhang, David

    2015-01-01

    The human ear is a relatively new biometric feature with several merits over the more common face, fingerprint and iris biometrics. It can easily be captured from a distance without a fully cooperative subject, and the ear has a relatively stable structure that does not change much with age or facial expression. In this paper, we present a novel 3D ear acquisition system based on the triangulation imaging principle, and the experimental results show that this design is efficient and can be used for ear recognition. PMID:26061553

  4. An Effective 3D Ear Acquisition System.

    PubMed

    Liu, Yahui; Lu, Guangming; Zhang, David

    2015-01-01

    The human ear is a relatively new biometric feature with several merits over the more common face, fingerprint and iris biometrics. It can easily be captured from a distance without a fully cooperative subject, and the ear has a relatively stable structure that does not change much with age or facial expression. In this paper, we present a novel 3D ear acquisition system based on the triangulation imaging principle, and the experimental results show that this design is efficient and can be used for ear recognition. PMID:26061553

  5. Pulsatile exposure to simulated reflux leads to changes in gene expression in a 3D model of oesophageal mucosa

    PubMed Central

    Green, Nicola H; Nicholls, Zoe; Heath, Paul R; Cooper-Knock, Jonathan; Corfe, Bernard M; MacNeil, Sheila; Bury, Jonathan P

    2014-01-01

    Oesophageal exposure to duodenogastroesophageal refluxate is implicated in the development of Barrett's metaplasia (BM), with increased risk of progression to oesophageal adenocarcinoma. The literature proposes that reflux exposure activates NF-κB, driving the aberrant expression of intestine-specific caudal-related homeobox (CDX) genes. However, early events in the pathogenesis of BM from normal epithelium are poorly understood. To investigate this, our study subjected a 3D model of the normal human oesophageal mucosa to repeated, pulsatile exposure to specific bile components and examined changes in gene expression. Initial 2D experiments with a range of bile salts observed that taurochenodeoxycholate (TCDC) impacted upon NF-κB activation without causing cell death. Informed by this, the 3D oesophageal model was repeatedly exposed to TCDC in the presence and absence of acid, and the epithelial cells underwent gene expression profiling. We identified ∼300 differentially expressed genes following each treatment, with a large and significant overlap between treatments. Enrichment analysis (Broad GSEA, DAVID and Metacore™; GeneGo Inc) identified multiple gene sets related to cell signalling, inflammation, proliferation, differentiation and cell adhesion. Specifically NF-κB activation, Wnt signalling, cell adhesion and targets for the transcription factors PTF1A and HNF4α were highlighted. Our data suggest that HNF4α isoform switching may be an early event in Barrett's pathogenesis. CDX1/2 targets were, however, not enriched, suggesting that although CDX1/2 activation reportedly plays a role in BM development, it may not be an initial event. Our findings highlight new areas for investigation in the earliest stages of BM pathogenesis of oesophageal diseases and new potential therapeutic targets. PMID:24713057

  6. Enhanced simultaneous detection of ractopamine and salbutamol via electrochemical-facile deposition of MnO2 nanoflowers onto 3D RGO/Ni foam templates.

    PubMed

    Wang, Ming Yan; Zhu, Wei; Ma, Lin; Ma, Juan Juan; Zhang, Dong En; Tong, Zhi Wei; Chen, Jun

    2016-04-15

    In this paper, we report a facile method to fabricate MnO2 nanoflowers loaded onto 3D RGO@nickel foam, showing enhanced biosensing activity due to the improved structural integration of the different electrode material components. When the as-prepared 3D hybrid electrodes were investigated as a binder-free biosensor, two well-defined and separate differential pulse voltammetric peaks for ractopamine (RAC) and salbutamol (SAL) were observed, indicating that simultaneous, selective detection of both β-agonists is possible. The MnO2/RGO@NF sensor also demonstrated a linear relationship over a wide concentration range of 17 nM to 962 nM (R=0.9997) for RAC and 42 nM to 1463 nM (R=0.9996) for SAL, with detection limits of 11.6 nM for RAC and 23.0 nM for SAL. In addition, the developed MnO2/RGO@NF sensor was further used to detect RAC and SAL in pork samples, giving results comparable to those obtained by HPLC. PMID:26623510

  7. 3D fast wavelet network model-assisted 3D face recognition

    NASA Astrophysics Data System (ADS)

    Said, Salwa; Jemai, Olfa; Zaied, Mourad; Ben Amar, Chokri

    2015-12-01

    In recent years, 3D shape has emerged in face recognition because of its robustness to pose and illumination changes. These attractive benefits do not remove all obstacles to achieving a satisfactory recognition rate: other challenges, such as facial expressions and the computing time of matching algorithms, remain to be addressed. In this context, we propose a 3D face recognition approach using 3D wavelet networks. Our approach contains two stages: a learning stage and a recognition stage. For training we propose a novel algorithm based on the 3D fast wavelet transform. From the 3D coordinates of the face (x, y, z), we voxelize the data to obtain a 3D volume, decompose it with the 3D fast wavelet transform, and model it with a wavelet network; the associated weights are then taken as the feature vector representing each training face. In the recognition stage, a face of unknown identity is projected onto all the training wavelet networks to obtain a new feature vector after every projection, and a similarity score is computed between the stored and the obtained feature vectors. To show the efficiency of our approach, experiments were performed on the full FRGC v.2 benchmark.
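
    The feature-and-matching idea can be sketched as follows; PyWavelets' n-dimensional DWT stands in for the paper's 3D fast wavelet transform, the wavelet-network training stage is not reproduced, and the random point clouds only stand in for scanned faces.

```python
import numpy as np
import pywt

def voxelize(points, grid=32):
    """Rasterize an (N, 3) point cloud into a binary grid x grid x grid volume."""
    vol = np.zeros((grid, grid, grid), dtype=np.float32)
    p = points - points.min(axis=0)
    p = p / (p.max() + 1e-9) * (grid - 1)
    idx = np.round(p).astype(int)
    vol[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0
    return vol

def wavelet_features(points, level=2):
    """Low-frequency 3D wavelet coefficients of the voxelized face as a feature vector."""
    coeffs = pywt.wavedecn(voxelize(points), wavelet="haar", level=level)
    return coeffs[0].ravel()            # approximation band only

def similarity(f1, f2):
    """Cosine similarity between two feature vectors."""
    return float(np.dot(f1, f2) / (np.linalg.norm(f1) * np.linalg.norm(f2) + 1e-9))

# Toy comparison of two random point clouds standing in for 3D face scans.
rng = np.random.default_rng(0)
face_a = rng.random((5000, 3))
face_b = rng.random((5000, 3))
print(similarity(wavelet_features(face_a), wavelet_features(face_b)))
```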

  8. Impaired Facial Expression Recognition in Children with Temporal Lobe Epilepsy: Impact of Early Seizure Onset on Fear Recognition

    ERIC Educational Resources Information Center

    Golouboff, Nathalie; Fiori, Nicole; Delalande, Olivier; Fohlen, Martine; Dellatolas, Georges; Jambaque, Isabelle

    2008-01-01

    The amygdala has been implicated in the recognition of facial emotions, especially fearful expressions, in adults with early-onset right temporal lobe epilepsy (TLE). The present study investigates the recognition of facial emotions in children and adolescents, 8-16 years old, with epilepsy. Twenty-nine subjects had TLE (13 right, 16 left) and…

  9. Suboptimal Exposure to Facial Expressions When Viewing Video Messages From a Small Screen: Effects on Emotion, Attention, and Memory

    ERIC Educational Resources Information Center

    Ravaja, Niklas; Kallinen, Kari; Saari, Timo; Keltikangas-Jarvinen, Liisa

    2004-01-01

    The authors examined the effects of suboptimally presented facial expressions on emotional and attentional responses and memory among 39 young adults viewing video (business news) messages from a small screen. Facial electromyography (EMG) and respiratory sinus arrhythmia were used as physiological measures of emotion and attention, respectively.…

  10. 3D Organotypic Co-culture Model Supporting Medullary Thymic Epithelial Cell Proliferation, Differentiation and Promiscuous Gene Expression.

    PubMed

    Pinto, Sheena; Stark, Hans-Jürgen; Martin, Iris; Boukamp, Petra; Kyewski, Bruno

    2015-01-01

    Intra-thymic T cell development requires an intricate three-dimensional meshwork composed of various stromal cells, i.e., non-T cells. Thymocytes traverse this scaffold in a highly coordinated temporal and spatial order while sequentially passing obligatory check points, i.e., T cell lineage commitment, followed by T cell receptor repertoire generation and selection prior to their export into the periphery. The two major resident cell types forming this scaffold are cortical (cTECs) and medullary thymic epithelial cells (mTECs). A key feature of mTECs is the so-called promiscuous expression of numerous tissue-restricted antigens. These tissue-restricted antigens are presented to immature thymocytes directly or indirectly by mTECs or thymic dendritic cells, respectively resulting in self-tolerance. Suitable in vitro models emulating the developmental pathways and functions of cTECs and mTECs are currently lacking. This lack of adequate experimental models has for instance hampered the analysis of promiscuous gene expression, which is still poorly understood at the cellular and molecular level. We adapted a 3D organotypic co-culture model to culture ex vivo isolated mTECs. This model was originally devised to cultivate keratinocytes in such a way as to generate a skin equivalent in vitro. The 3D model preserved key functional features of mTEC biology: (i) proliferation and terminal differentiation of CD80(lo), Aire-negative into CD80(hi), Aire-positive mTECs, (ii) responsiveness to RANKL, and (iii) sustained expression of FoxN1, Aire and tissue-restricted genes in CD80(hi) mTECs. PMID:26275017

  11. Functionally relevant responses to human facial expressions of emotion in the domestic horse (Equus caballus).

    PubMed

    Smith, Amy Victoria; Proops, Leanne; Grounds, Kate; Wathan, Jennifer; McComb, Karen

    2016-02-01

    Whether non-human animals can recognize human signals, including emotions, has both scientific and applied importance, and is particularly relevant for domesticated species. This study presents the first evidence of horses' abilities to spontaneously discriminate between positive (happy) and negative (angry) human facial expressions in photographs. Our results showed that the angry faces induced responses indicative of a functional understanding of the stimuli: horses displayed a left-gaze bias (a lateralization generally associated with stimuli perceived as negative) and a quicker increase in heart rate (HR) towards these photographs. Such lateralized responses towards human emotion have previously only been documented in dogs, and effects of facial expressions on HR have not been shown in any heterospecific studies. Alongside the insights that these findings provide into interspecific communication, they raise interesting questions about the generality and adaptiveness of emotional expression and perception across species. PMID:26864784

  12. Asymmetrical facial expressions in portraits and hemispheric laterality: a literature review.

    PubMed

    Powell, W R; Schirillo, J A

    2009-11-01

    Studies of facial asymmetry have revealed that the left and the right sides of the face differ in emotional attributes. This paper reviews many of these distinctions to determine how these asymmetries influence portrait paintings. It does so by relating research involving emotional expression to aesthetic pleasantness in portraits. For example, facial expressions are often asymmetrical: the left side of the face is more emotionally expressive and more often connotes negative emotions than the right side. Interestingly, artists tend to expose more of their poser's left cheek than their right. This is significant, in that artists also portray more females than males with their left cheek exposed. Reasons for these psychological findings lead to explanations for the aesthetic leftward bias in portraiture. PMID:19214864

  13. Facial expression, size, and clutter: Inferences from movie structure to emotion judgments and back.

    PubMed

    Cutting, James E; Armstrong, Kacie L

    2016-04-01

    The perception of facial expressions and objects at a distance are entrenched psychological research venues, but their intersection is not. We were motivated to study them together because of their joint importance in the physical composition of popular movies: shots that show a larger image of a face typically have shorter durations than those in which the face is smaller. For static images, we explore the time it takes viewers to categorize the valence of different facial expressions as a function of their visual size. In two studies, we find that smaller faces take longer to categorize than those that are larger, and this pattern interacts with local background clutter. More clutter creates crowding and impedes the interpretation of expressions for more distant faces but not proximal ones. Filmmakers at least tacitly know this. In two other studies, we show that contemporary movies lengthen shots that show smaller faces, and even more so with increased clutter. PMID:26728045

  14. A facial expression image database and norm for Asian population: a preliminary report

    NASA Astrophysics Data System (ADS)

    Chen, Chien-Chung; Cho, Shu-ling; Horszowska, Katarzyna; Chen, Mei-Yen; Wu, Chia-Ching; Chen, Hsueh-Chih; Yeh, Yi-Yu; Cheng, Chao-Min

    2009-01-01

    We collected 6604 images of 30 models in eight types of facial expression: happiness, anger, sadness, disgust, fear, surprise, contempt and neutral. Among them, the 406 most representative images from 12 models were rated by more than 200 human raters for perceived emotion category and intensity. Such a large number of emotion categories, models and raters is sufficient for most serious expression recognition research in both psychology and computer science. All the models and raters are of Asian background, so this database can also be used when cultural background is a concern. In addition, 43 landmarks on each of the 291 rated frontal-view images were identified and recorded. This information should facilitate feature-based research on facial expression. Overall, the diversity of the images and the richness of the accompanying information should make our database and norm useful for a wide range of research.

  15. Facial animation on an anatomy-based hierarchical face model

    NASA Astrophysics Data System (ADS)

    Zhang, Yu; Prakash, Edmond C.; Sung, Eric

    2003-04-01

    In this paper we propose a new hierarchical 3D facial model based on anatomical knowledge that provides high fidelity for realistic facial expression animation. Like a real human face, the facial model has a hierarchical biomechanical structure, incorporating a physically based approximation to facial skin tissue, a set of anatomically motivated facial muscle actuators, and an underlying skull structure. The deformable skin model has a multi-layer structure to approximate different types of soft tissue; it takes into account the nonlinear stress-strain relationship of the skin and the fact that soft tissue is almost incompressible. Different types of muscle models have been developed to simulate the distribution of muscle force on the skin due to muscle contraction. Thanks to the skull model, our facial model achieves both more accurate facial deformation and consideration of facial anatomy during the interactive definition of facial muscles. Under the muscular forces, the deformation of the facial skin is evaluated by numerical integration of the governing dynamic equations. The dynamic facial animation algorithm runs at an interactive rate and generates flexible and realistic facial expressions.
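
    The final step, numerical integration of the governing dynamic equations, can be illustrated with a toy damped mass-spring skin node driven by a constant muscle force; the equation of motion and all parameter values below are assumptions for illustration only, not the paper's biomechanical model.

```python
import numpy as np

def step(x, v, rest, f_muscle, m=1.0, k=50.0, c=2.0, dt=1e-3):
    """One semi-implicit Euler step for skin nodes ((N, 3) arrays), integrating
    m*a = -k*(x - rest) - c*v + f_muscle."""
    a = (-k * (x - rest) - c * v + f_muscle) / m
    v = v + dt * a
    x = x + dt * v
    return x, v

# Toy usage: four skin nodes at rest; a constant "muscle" force pulls node 0.
rest = np.zeros((4, 3))
x, v = rest.copy(), np.zeros_like(rest)
f = np.zeros_like(rest)
f[0] = [0.0, 0.5, 0.0]
for _ in range(1000):
    x, v = step(x, v, rest, f)
print(x[0])   # node 0 settles near its static equilibrium f/k, damped by c
```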

  16. Perceptual, Categorical, and Affective Processing of Ambiguous Smiling Facial Expressions

    ERIC Educational Resources Information Center

    Calvo, Manuel G.; Fernandez-Martin, Andres; Nummenmaa, Lauri

    2012-01-01

    Why is a face with a smile but non-happy eyes likely to be interpreted as happy? We used blended expressions in which a smiling mouth was incongruent with the eyes (e.g., angry eyes), as well as genuine expressions with congruent eyes and mouth (e.g., both happy or angry). Tasks involved detection of a smiling mouth (perceptual), categorization of…

  17. Discrimination and Imitation of Facial Expressions by Neonates.

    ERIC Educational Resources Information Center

    Field, Tiffany

    Findings of a series of studies on individual differences and maturational changes in expressivity at the neonatal stage and during early infancy are reported. Research results indicate that newborns are able to discriminate and imitate the basic emotional expressions: happy, sad, and surprised. Results show widened infant lips when the happy…

  18. Effects of simple and complex motion patterns on gene expression of chondrocytes seeded in 3D scaffolds.

    PubMed

    Grad, Sibylle; Gogolewski, Sylwester; Alini, Mauro; Wimmer, Markus A

    2006-11-01

    This study investigated the effect of unidirectional and multidirectional motion patterns on gene expression and molecule release of chondrocyte-seeded 3D scaffolds. Resorbable porous polyurethane scaffolds were seeded with bovine articular chondrocytes and exposed to dynamic compression, applied with a ceramic hip ball, alone (group 1), with superimposed rotation of the scaffold around its cylindrical axis (group 2), oscillation of the ball over the scaffold surface (group 3), or oscillation of ball and scaffold in phase difference (group 4). Compared with group 1, the proteoglycan 4 (PRG4) and cartilage oligomeric matrix protein (COMP) mRNA expression levels were markedly increased by ball oscillation (groups 3 and 4). Furthermore, the collagen type II mRNA expression was enhanced in the groups 3 and 4, while the aggrecan and tissue inhibitor of metalloproteinase-3 (TIMP-3) mRNA expression levels were upregulated by multidirectional articular motion (group 4). Ball oscillation (groups 3 and 4) also increased the release of PRG4, COMP, and hyaluronan (HA) into the culture media. This indicates that the applied stimuli can contribute to the maintenance of the chondrocytic phenotype of the cells. The mechanical effects causing cell stimulation by applied surface motion might be related to fluid film buildup and/or frictional shear at the scaffold-ball interface. It is suggested that the oscillating ball drags the fluid into the joint space, thereby causing biophysical effects similar to those of fluid flow. PMID:17518631

  19. The role of the cannabinoid receptor in adolescents' processing of facial expressions.

    PubMed

    Ewald, Anais; Becker, Susanne; Heinrich, Angela; Banaschewski, Tobias; Poustka, Luise; Bokde, Arun; Büchel, Christian; Bromberg, Uli; Cattrell, Anna; Conrod, Patricia; Desrivières, Sylvane; Frouin, Vincent; Papadopoulos-Orfanos, Dimitri; Gallinat, Jürgen; Garavan, Hugh; Heinz, Andreas; Walter, Henrik; Ittermann, Bernd; Gowland, Penny; Paus, Tomáš; Martinot, Jean-Luc; Paillère Martinot, Marie-Laure; Smolka, Michael N; Vetter, Nora; Whelan, Rob; Schumann, Gunter; Flor, Herta; Nees, Frauke

    2016-01-01

    The processing of emotional faces is an important prerequisite for adequate social interactions in daily life, and might thus be specifically altered in adolescence, a period marked by significant changes in social emotional processing. Previous research has shown that the cannabinoid receptor CB1R is associated with longer gaze duration and increased brain responses in the striatum to happy faces in adults; yet, for adolescents, it is not clear whether an association between CB1R and face processing exists. In the present study we investigated genetic effects of the two CB1R polymorphisms, rs1049353 and rs806377, on the processing of emotional faces in healthy adolescents. They participated in functional magnetic resonance imaging during a Faces Task, watching blocks of video clips with angry and neutral facial expressions, and completed a Morphed Faces Task in the laboratory, where they looked at different facial expressions that switched from anger to fear or sadness or from happiness to fear or sadness and labelled them according to these four emotional expressions. A-allele versus GG-carriers in rs1049353 displayed earlier recognition of facial expressions changing from anger to sadness or fear, but not of expressions changing from happiness to sadness or fear, and higher brain responses to angry, but not neutral, faces in the amygdala and insula. For rs806377 no significant effects emerged. This suggests that rs1049353 is involved in the processing of negative facial expressions related to anger in adolescence. These findings add to our understanding of social emotion-related mechanisms in this life period. PMID:26527537

  20. Comparison of facial expression in patients with obsessive-compulsive disorder and schizophrenia using the Facial Action Coding System: a preliminary study

    PubMed Central

    Bersani, Giuseppe; Bersani, Francesco Saverio; Valeriani, Giuseppe; Robiony, Maddalena; Anastasia, Annalisa; Colletti, Chiara; Liberati, Damien; Capra, Enrico; Quartini, Adele; Polli, Elisa

    2012-01-01

    Background Research shows that impairment in the expression and recognition of emotion exists in multiple psychiatric disorders. The objective of the current study was to evaluate the way that patients with schizophrenia and those with obsessive-compulsive disorder experience and display emotions in relation to specific emotional stimuli using the Facial Action Coding System (FACS). Methods Thirty individuals participated in the study, comprising 10 patients with schizophrenia, 10 with obsessive-compulsive disorder, and 10 healthy controls. All participants underwent clinical sessions to evaluate their symptoms and watched emotion-eliciting video clips while facial activity was videotaped. Congruent/incongruent feeling of emotions and facial expression in reaction to emotions were evaluated. Results Patients with schizophrenia and obsessive-compulsive disorder presented similarly incongruent emotive feelings and facial expressions (significantly worse than healthy participants). Correlations between the severity of psychopathological condition (in particular the severity of affective flattening) and impairment in recognition and expression of emotions were found. Discussion Patients with obsessive-compulsive disorder and schizophrenia seem to present a similarly relevant impairment in both experiencing and displaying of emotions; this impairment may be seen as a chronic consequence of the same neurodevelopmental origin of the two diseases. Mimic expression could be seen as a behavioral indicator of affective flattening. The FACS could be used as an objective way to evaluate clinical evolution in patients. PMID:23269872

  1. DNA vaccines expressing soluble CD4-envelope proteins fused to C3d elicit cross-reactive neutralizing antibodies to HIV-1

    SciTech Connect

    Bower, Joseph F.; Green, Thomas D.; Ross, Ted M. (E-mail: tmr15@pitt.edu)

    2004-10-25

    DNA vaccines expressing the envelope (Env) of the human immunodeficiency virus type 1 (HIV-1) have been relatively ineffective at generating high-titer, long-lasting, neutralizing antibodies in a variety of animal models. In this study, DNA vaccines were constructed to express a fusion protein of the soluble human CD4 (sCD4) and the gp120 subunit of the HIV-1 envelope. To enhance the immunogenicity of the expressed fusion protein, three copies of the murine C3d (mC3d₃) were added to the carboxyl terminus of the complex. Monoclonal antibodies that recognize CD4-induced epitopes on gp120 efficiently bound to sCD4-gp120 or sCD4-gp120-mC3d₃. In addition, both sCD4-gp120 and sCD4-gp120-mC3d₃ bound to cells expressing appropriate coreceptors in the absence of cell surface hCD4. Mice (BALB/c) vaccinated with DNA vaccines expressing either gp120-mC3d₃ or sCD4-gp120-mC3d₃ elicited antibodies that neutralized homologous virus infection. However, the use of sCD4-gp120-mC3d₃-DNA elicited the highest titers of neutralizing antibodies that persisted after depletion of anti-hCD4 antibodies. Interestingly, only mice vaccinated with DNA expressing sCD4-gp120-mC3d₃ had antibodies that elicited cross-protective neutralizing antibodies. The fusion of sCD4 to the HIV-1 envelope exposes neutralizing epitopes that elicit broad protective immunity when the fusion complex is coupled with the molecular adjuvant, C3d.

  2. Joint recognition-expression impairment of facial emotions in Huntington's disease despite intact understanding of feelings.

    PubMed

    Trinkler, Iris; Cleret de Langavant, Laurent; Bachoud-Lévi, Anne-Catherine

    2013-02-01

    Patients with Huntington's disease (HD), a neurodegenerative disorder that causes major motor impairments, also show cognitive and emotional deficits. While their deficit in recognising emotions has been explored in depth, little is known about their ability to express emotions and understand their feelings. If these faculties were impaired, patients might not only mis-read emotion expressions in others but their own emotions might be mis-interpreted by others as well, or thirdly, they might have difficulties understanding and describing their feelings. We compared the performance of recognition and expression of facial emotions in 13 HD patients with mild motor impairments but without significant bucco-facial abnormalities, and 13 controls matched for age and education. Emotion recognition was investigated in a forced-choice recognition test (FCR), and emotion expression by filming participants while they mimed the six basic emotional facial expressions (anger, disgust, fear, surprise, sadness and joy) to the experimenter. The films were then segmented into 60 stimuli per participant and four external raters performed a FCR on this material. Further, we tested understanding of feelings in self (alexithymia) and others (empathy) using questionnaires. Both recognition and expression were impaired across different emotions in HD compared to controls and recognition and expression scores were correlated. By contrast, alexithymia and empathy scores were very similar in HD and controls. This might suggest that emotion deficits in HD might be tied to the expression itself. Because similar emotion recognition-expression deficits are also found in Parkinson's Disease and vascular lesions of the striatum, our results further confirm the importance of the striatum for emotion recognition and expression, while access to the meaning of feelings relies on a different brain network, and is spared in HD. PMID:22244587

  3. How Do Typically Developing Deaf Children and Deaf Children with Autism Spectrum Disorder Use the Face When Comprehending Emotional Facial Expressions in British Sign Language?

    ERIC Educational Resources Information Center

    Denmark, Tanya; Atkinson, Joanna; Campbell, Ruth; Swettenham, John

    2014-01-01

    Facial expressions in sign language carry a variety of communicative features. While emotion can modulate a spoken utterance through changes in intonation, duration and intensity, in sign language specific facial expressions presented concurrently with a manual sign perform this function. When deaf adult signers cannot see facial features, their…

  4. Mixed emotions: alcoholics' impairments in the recognition of specific emotional facial expressions.

    PubMed

    Townshend, J M; Duka, T

    2003-01-01

    Facial expression recognition is a central feature of emotional and social behaviour and previous studies have found that alcoholics are impaired in this skill when presented with single emotions of differing intensities. The aim of this study was to explore biases in alcoholics' recognition of emotions when they were a mixture of two closely related emotions. The amygdala is intimately involved in encoding of emotions, especially those related to fear. In animals an increased number of withdrawals from alcohol leads to increased seizure sensitivity associated with facilitated transmission in the amygdala and related circuits. A further objective therefore was to explore the effect of previous alcohol detoxifications on the recognition of emotional facial expressions. Fourteen alcoholic inpatients were compared with 14 age and sex matched social drinking controls. They were asked to rate how much of each of six emotions (happiness, surprise, fear, sadness, disgust and anger) were present in morphed pictures portraying a mix of two of those emotions. The alcoholic group showed enhanced fear responses to all of the pictures compared to the controls and showed a different pattern of responding on anger and disgust. There were no differences between groups on decoding of sad, happy and surprised expressions. In addition the enhanced fear recognition found in the alcoholic group was related to the number of previous detoxifications. These results provide further evidence for impairment in facial expression recognition present in alcoholic patients. In addition, since the amygdala has been associated with the processing of facial expressions of emotion, particularly those of fear, the present data furthermore suggest that previous detoxifications may be related to changes within the amygdala. PMID:12631528

  5. Information processing of motion in facial expression and the geometry of dynamical systems

    NASA Astrophysics Data System (ADS)

    Assadi, Amir H.; Eghbalnia, Hamid; McMenamin, Brenton W.

    2004-12-01

    An interesting problem in the analysis of video data concerns the design of algorithms that detect perceptually significant features in an unsupervised manner, for instance methods of machine learning for automatic classification of human expression. A geometric formulation of this genre of problems could be modeled with the help of perceptual psychology. In this article, we outline one approach for a special case where video segments are to be classified according to expression of emotion or other similar facial motions. The encoding of realistic facial motions that convey expression of emotions for a particular person P forms a parameter space XP whose study reveals the "objective geometry" for the problem of unsupervised feature detection from video. The geometric features and discrete representation of the space XP are independent of subjective evaluations by observers. While the "subjective geometry" of XP varies from observer to observer, levels of sensitivity and variation in perception of facial expressions appear to share a certain level of universality among members of similar cultures. Therefore, statistical geometry of invariants of XP for a population sample could provide effective algorithms for extraction of such features. In cases where the frequency of events is sufficiently large in the sample data, a suitable framework could be provided to facilitate the information-theoretic organization and study of statistical invariants of such features. This article provides a general approach to encode motion in terms of a particular genre of dynamical systems and the geometry of their flow. An example is provided to illustrate the general theory.

  6. Information processing of motion in facial expression and the geometry of dynamical systems

    NASA Astrophysics Data System (ADS)

    Assadi, Amir H.; Eghbalnia, Hamid; McMenamin, Brenton W.

    2005-01-01

    An interesting problem in the analysis of video data concerns the design of algorithms that detect perceptually significant features in an unsupervised manner, for instance, methods of machine learning for automatic classification of human expression. A geometric formulation of this genre of problems could be modeled with the help of perceptual psychology. In this article, we outline one approach for a special case where video segments are to be classified according to the expression of emotion or other similar facial motions. The encoding of realistic facial motions that convey expression of emotions for a particular person P forms a parameter space XP whose study reveals the "objective geometry" for the problem of unsupervised feature detection from video. The geometric features and discrete representation of the space XP are independent of subjective evaluations by observers. While the "subjective geometry" of XP varies from observer to observer, levels of sensitivity and variation in perception of facial expressions appear to share a certain level of universality among members of similar cultures. Therefore, statistical geometry of invariants of XP for a sample of the population could provide effective algorithms for extraction of such features. In cases where the frequency of events is sufficiently large in the sample data, a suitable framework could be provided to facilitate the information-theoretic organization and study of statistical invariants of such features. This article provides a general approach to encoding motion in terms of a particular genre of dynamical systems and the geometry of their flow. An example is provided to illustrate the general theory.

  7. Caregiver accuracy in detecting deception in facial expressions of pain in children.

    PubMed

    Boerner, Katelynn E; Chambers, Christine T; Craig, Kenneth D; Pillai Riddell, Rebecca R; Parker, Jennifer A

    2013-04-01

    Facial expressions provide a primary source of inference about a child's pain. Although facial expressions typically appear spontaneous, children have some capacity to fake or suppress displays of pain, thereby potentially misleading caregiver judgments. The present study was designed to compare accuracy of different groups of caregivers in detecting deception in children's facial expressions of pain when voluntarily controlled. Caregivers (15 pediatricians, 15 pediatric nurses, and 15 parents) viewed 48 video clips of children, 12 in each of 4 conditions (genuine pain, faked pain, suppressed pain, neutral baseline), and judged which condition was apparent to them. A 3 (group: pediatrician vs pediatric nurse vs parent)×4 (condition: genuine vs faked vs suppressed vs neutral) mixed analysis of variance (ANOVA) of judgment accuracies revealed a significant main effect of group, with nurses demonstrating higher overall accuracy scores than parents, and pediatricians not differing from either group. As well, all caregivers, regardless of group, demonstrated the lowest accuracy when viewing the genuine condition, relative to the faked and suppressed conditions, with accuracy for the neutral condition not differing significantly from the other conditions. Overall, caregivers were more successful at identifying faked and suppressed than genuine expressions of pain in children, and pediatric nurses fared better overall in judgment accuracy than parents. PMID:23375511

  8. Feature-based representations of emotional facial expressions in the human amygdala.

    PubMed

    Ahs, Fredrik; Davis, Caroline F; Gorka, Adam X; Hariri, Ahmad R

    2014-09-01

    The amygdala plays a central role in processing facial affect, responding to diverse expressions and features shared between expressions. Although speculation exists regarding the nature of relationships between expression- and feature-specific amygdala reactivity, this matter has not been fully explored. We used functional magnetic resonance imaging and principal component analysis (PCA) in a sample of 300 young adults, to investigate patterns related to expression- and feature-specific amygdala reactivity to faces displaying neutral, fearful, angry or surprised expressions. The PCA revealed a two-dimensional correlation structure that distinguished emotional categories. The first principal component separated neutral and surprised from fearful and angry expressions, whereas the second principal component separated neutral and angry from fearful and surprised expressions. This two-dimensional correlation structure of amygdala reactivity may represent specific feature-based cues conserved across discrete expressions. To delineate which feature-based cues characterized this pattern, face stimuli were averaged and then subtracted according to their principal component loadings. The first principal component corresponded to displacement of the eyebrows, whereas the second principal component corresponded to increased exposure of eye whites together with movement of the brow. Our results suggest a convergent representation of facial affect in the amygdala reflecting feature-based processing of discrete expressions. PMID:23887817

  9. Sequential dynamics of culturally moderated facial expressions of emotion.

    PubMed

    Matsumoto, David; Willingham, Bob; Olide, Andres

    2009-10-01

    When emotions are aroused, the resulting displays have typically been treated as either universal or culture-specific. We investigated the idea that an individual's emotional displays in a given context can be both universal and culturally variable, as they change over time. We examined the emotional displays of Olympic athletes across time, classified their expressive styles, and tested the association between those styles and a number of characteristics associated with the countries the athletes represented. Athletes from relatively urban, individualistic cultures expressed their emotions more, whereas athletes from less urban, collectivistic cultures masked their emotions more. These culturally influenced expressions occurred within a few seconds after initial, immediate, and universal emotional displays. Thus, universal and culture-specific emotional displays can unfold across time in an individual in a single context. PMID:19754526

  10. Perceptions of Emotion from Facial Expressions are Not Culturally Universal: Evidence from a Remote Culture

    PubMed Central

    Gendron, Maria; Roberson, Debi; van der Vyver, Jacoba Marietta; Barrett, Lisa Feldman

    2014-01-01

    It is widely believed that certain emotions are universally recognized in facial expressions. Recent evidence indicates that Western perceptions (e.g., scowls as anger) depend on cues to US emotion concepts embedded in experiments. Since such cues are a standard feature of the methods used in cross-cultural experiments, we hypothesized that evidence of universality depends on this conceptual context. In our study, participants from the US and the Himba ethnic group sorted images of posed facial expressions into piles by emotion type. Without cues to emotion concepts, Himba participants did not show the presumed “universal” pattern, whereas US participants produced a pattern with presumed universal features. With cues to emotion concepts, participants in both cultures produced sorts that were closer to the presumed “universal” pattern, although substantial cultural variation persisted. Our findings indicate that perceptions of emotion are not universal, but depend on cultural and conceptual contexts. PMID:24708506

  11. Towards Emotion Detection in Educational Scenarios from Facial Expressions and Body Movements through Multimodal Approaches

    PubMed Central

    Saneiro, Mar; Salmeron-Majadas, Sergio

    2014-01-01

    We report current findings from considering video recordings of facial expressions and body movements to provide personalized affective support in an educational context, using an enriched multimodal emotion detection approach. In particular, we describe an annotation methodology to tag facial expressions and body movements that correspond to changes in the affective states of learners while dealing with cognitive tasks in a learning process. The ultimate goal is to combine these annotations with additional affective information collected during experimental learning sessions from different sources such as qualitative, self-reported, physiological, and behavioral information. Taken together, these data are used to train data mining algorithms that automatically identify changes in the learners' affective states while they deal with cognitive tasks, which in turn helps to provide personalized emotional support. PMID:24892055

  12. Test battery for measuring the perception and recognition of facial expressions of emotion

    PubMed Central

    Wilhelm, Oliver; Hildebrandt, Andrea; Manske, Karsten; Schacht, Annekathrin; Sommer, Werner

    2014-01-01

    Despite the importance of perceiving and recognizing facial expressions in everyday life, there is no comprehensive test battery for the multivariate assessment of these abilities. As a first step toward such a compilation, we present 16 tasks that measure the perception and recognition of facial emotion expressions, and data illustrating each task's difficulty and reliability. The scoring of these tasks focuses on either the speed or accuracy of performance. A sample of 269 healthy young adults completed all tasks. In general, accuracy and reaction time measures for emotion-general scores showed acceptable and high estimates of internal consistency and factor reliability. Emotion-specific scores yielded lower reliabilities, yet high enough to encourage further studies with such measures. Analyses of task difficulty revealed that all tasks are suitable for measuring emotion perception and emotion recognition related abilities in normal populations. PMID:24860528

  13. The influence of emotional facial expressions on gaze-following in grouped and solitary pedestrians

    PubMed Central

    Gallup, Andrew C.; Chong, Andrew; Kacelnik, Alex; Krebs, John R.; Couzin, Iain D.

    2014-01-01

    The mechanisms contributing to collective attention in humans remain unclear. Research indicates that pedestrians utilise the gaze direction of others nearby to acquire environmentally relevant information, but it is not known which, if any, additional social cues influence this transmission. Extending previous field studies, we investigated whether gaze cues paired with emotional facial expressions (neutral, happy, suspicious and fearful) of an oncoming walking confederate modulate gaze-following by pedestrians moving in a natural corridor. We found that pedestrians walking alone were not sensitive to this manipulation, while individuals traveling together in groups did reliably alter their response in relation to emotional cues. In particular, members of a collective were more likely to follow gaze cues indicative of a potential threat (i.e., a suspicious or fearful facial expression). This modulation of visual attention depending on whether pedestrians are in social aggregates may be important for driving adaptive exploitation of social information, particularly emotional stimuli, within natural contexts. PMID:25052060

  14. Test battery for measuring the perception and recognition of facial expressions of emotion.

    PubMed

    Wilhelm, Oliver; Hildebrandt, Andrea; Manske, Karsten; Schacht, Annekathrin; Sommer, Werner

    2014-01-01

    Despite the importance of perceiving and recognizing facial expressions in everyday life, there is no comprehensive test battery for the multivariate assessment of these abilities. As a first step toward such a compilation, we present 16 tasks that measure the perception and recognition of facial emotion expressions, and data illustrating each task's difficulty and reliability. The scoring of these tasks focuses on either the speed or accuracy of performance. A sample of 269 healthy young adults completed all tasks. In general, accuracy and reaction time measures for emotion-general scores showed acceptable and high estimates of internal consistency and factor reliability. Emotion-specific scores yielded lower reliabilities, yet high enough to encourage further studies with such measures. Analyses of task difficulty revealed that all tasks are suitable for measuring emotion perception and emotion recognition related abilities in normal populations. PMID:24860528

  15. Memory for facial expression is influenced by the background music playing during study.

    PubMed

    Woloszyn, Michael R; Ewert, Laura

    2012-01-01

    The effect of the emotional quality of study-phase background music on subsequent recall for happy and sad facial expressions was investigated. Undergraduates (N = 48) viewed a series of line drawings depicting a happy or sad child in a variety of environments that were each accompanied by happy or sad music. Although memory for faces was very accurate, emotionally incongruent background music biased subsequent memory for facial expressions, increasing the likelihood that happy faces were recalled as sad when sad music was previously heard, and that sad faces were recalled as happy when happy music was previously heard. Overall, the results indicated that when recalling a scene, the emotional tone is set by an integration of stimulus features from several modalities. PMID:22956988

  16. Geometric feature-based facial expression recognition in image sequences using multi-class AdaBoost and support vector machines.

    PubMed

    Ghimire, Deepak; Lee, Joonwhoan

    2013-01-01

    Facial expressions are widely used in the behavioral interpretation of emotions, cognitive science, and social interactions. In this paper, we present a novel method for fully automatic facial expression recognition in facial image sequences. As the facial expression evolves over time, facial landmarks are automatically tracked in consecutive video frames using displacement estimation based on elastic bunch graph matching. Feature vectors from individual landmarks, as well as from pairs of landmarks, are extracted from the tracking results and normalized with respect to the first frame in the sequence. The prototypical expression sequence for each class of facial expression is formed by taking the median of the landmark tracking results from the training facial expression sequences. Multi-class AdaBoost, with a dynamic time warping similarity distance between the feature vector of the input facial expression and the prototypical facial expression, is used as a weak classifier to select the subset of discriminative feature vectors. Finally, two methods for facial expression recognition are presented: multi-class AdaBoost with dynamic time warping, and a support vector machine on the boosted feature vectors. The results on the Cohn-Kanade (CK+) facial expression database show recognition accuracies of 95.17% and 97.35% using multi-class AdaBoost and support vector machines, respectively. PMID:23771158
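
    As a rough illustration of the similarity measure underlying this approach, the following Python sketch computes a dynamic time warping (DTW) distance between an input landmark-feature sequence and a prototypical expression sequence. The function name, the Euclidean local cost, and the random stand-in data are illustrative assumptions, not the authors' implementation.

        import numpy as np

        def dtw_distance(seq_a, seq_b):
            """Dynamic time warping distance between two per-frame feature sequences."""
            t_a, t_b = len(seq_a), len(seq_b)
            # cost[i, j]: minimal accumulated cost of aligning seq_a[:i] with seq_b[:j]
            cost = np.full((t_a + 1, t_b + 1), np.inf)
            cost[0, 0] = 0.0
            for i in range(1, t_a + 1):
                for j in range(1, t_b + 1):
                    local = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])  # Euclidean local cost
                    cost[i, j] = local + min(cost[i - 1, j],      # insertion
                                             cost[i, j - 1],      # deletion
                                             cost[i - 1, j - 1])  # match
            return cost[t_a, t_b]

        # Random data stands in for normalized landmark-displacement features.
        rng = np.random.default_rng(0)
        input_sequence = rng.normal(size=(40, 10))    # 40 frames, 10-dimensional features
        prototype_happy = rng.normal(size=(55, 10))   # e.g. median training sequence for "happy"
        print(dtw_distance(input_sequence, prototype_happy))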

  17. Human growth factor and cytokine skin cream for facial skin rejuvenation as assessed by 3D in vivo optical skin imaging.

    PubMed

    Gold, Michael H; Goldman, Mitchel P; Biron, Julie

    2007-10-01

    Growth factors, in addition to their crucial role in cutaneous wound healing, are also beneficial for skin rejuvenation. Due to their multifunctional activities such as promoting skin cell proliferation and stimulating collagen formation, growth factors may participate in skin rejuvenation at various levels. The present placebo-controlled study aimed to further investigate the antiaging effects of a novel skin cream containing a mixture of human growth factors and cytokines, which was obtained through a biotechnology process using cultured human fetal fibroblasts. Aside from clinical assessment of skin wrinkles, the skin surface topography was analyzed by 3D in vivo optical skin imaging using the Phaseshift Rapid in vivo Measurement of Skin (PRIMOS) device. This device allows fast, contact-free, and direct measurement of the skin surface topography in vivo at high resolution. This technique is quantitative and more reliable than a visual assessment of wrinkles using a scoring system, which is subjective and strongly dependent on investigator and assessment conditions. Using the PRIMOS device, which is also regarded as a more accurate method than the commonly used silicon replica technique, skin surface roughness was shown to decrease significantly, by 10% to 18% depending on the roughness parameter, after 2 months of twice-daily application of the human growth factor and cytokine cream. In comparison, treatment with the placebo formulation resulted in an approximate 10% decrease in 2 roughness parameters, whereas the remaining parameters were unchanged. We found that topical application of growth factors and cytokines is beneficial in reducing signs of skin aging. PMID:17966179

  18. Alexithymia, not autism, predicts poor recognition of emotional facial expressions.

    PubMed

    Cook, Richard; Brewer, Rebecca; Shah, Punit; Bird, Geoffrey

    2013-05-01

    Despite considerable research into whether face perception is impaired in autistic individuals, clear answers have proved elusive. In the present study, we sought to determine whether co-occurring alexithymia (characterized by difficulties interpreting emotional states) may be responsible for face-perception deficits previously attributed to autism. Two experiments were conducted using psychophysical procedures to determine the relative contributions of alexithymia and autism to identity and expression recognition. Experiment 1 showed that alexithymia correlates strongly with the precision of expression attributions, whereas autism severity was unrelated to expression-recognition ability. Experiment 2 confirmed that alexithymia is not associated with impaired ability to detect expression variation; instead, results suggested that alexithymia is associated with difficulties interpreting intact sensory descriptions. Neither alexithymia nor autism was associated with biased or imprecise identity attributions. These findings accord with the hypothesis that the emotional symptoms of autism are in fact due to co-occurring alexithymia and that existing diagnostic criteria may need to be revised. PMID:23528789

  19. Affective State Level Recognition in Naturalistic Facial and Vocal Expressions.

    PubMed

    Meng, Hongying; Bianchi-Berthouze, Nadia

    2014-03-01

    Naturalistic affective expressions change at a rate much slower than the typical rate at which video or audio is recorded. This increases the probability that consecutive recorded instants of expressions represent the same affective content. In this paper, we exploit such a relationship to improve the recognition performance of continuous naturalistic affective expressions. Using datasets of naturalistic affective expressions (AVEC 2011 audio and video dataset, PAINFUL video dataset) continuously labeled over time and over different dimensions, we analyze the transitions between levels of those dimensions (e.g., transitions in pain intensity level). We use an information theory approach to show that the transitions occur very slowly and hence suggest modeling them as first-order Markov models. The dimension levels are considered to be the hidden states in the Hidden Markov Model (HMM) framework. Their discrete transition and emission matrices are trained by using the labels provided with the training set. The recognition problem is converted into a best path-finding problem to obtain the best hidden states sequence in HMMs. This is a key difference from previous use of HMMs as classifiers. Modeling of the transitions between dimension levels is integrated in a multistage approach, where the first level performs a mapping between the affective expression features and a soft decision value (e.g., an affective dimension level), and further classification stages are modeled as HMMs that refine that mapping by taking into account the temporal relationships between the output decision labels. The experimental results for each of the unimodal datasets show overall performance to be significantly above that of a standard classification system that does not take into account temporal relationships. In particular, the results on the AVEC 2011 audio dataset outperform all other systems presented at the international competition. PMID:23757552
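
    The best-path search described above is standard Viterbi decoding over the discrete dimension levels. A minimal Python sketch follows; the three-level transition and emission matrices are toy values chosen only to reflect the slow transitions the paper reports, not the trained matrices.

        import numpy as np

        def viterbi(obs, trans, emit, init):
            """Most likely hidden level sequence given a discrete observation sequence.

            trans[i, j]: P(level j at t+1 | level i at t)
            emit[i, k]:  P(observation k | level i)
            init[i]:     P(level i at t = 0)
            """
            n_states, t_len = trans.shape[0], len(obs)
            log_delta = np.zeros((t_len, n_states))
            backptr = np.zeros((t_len, n_states), dtype=int)
            log_delta[0] = np.log(init) + np.log(emit[:, obs[0]])
            for t in range(1, t_len):
                for j in range(n_states):
                    scores = log_delta[t - 1] + np.log(trans[:, j])
                    backptr[t, j] = np.argmax(scores)
                    log_delta[t, j] = scores[backptr[t, j]] + np.log(emit[j, obs[t]])
            path = np.zeros(t_len, dtype=int)
            path[-1] = np.argmax(log_delta[-1])
            for t in range(t_len - 2, -1, -1):        # backtrack the best path
                path[t] = backptr[t + 1, path[t + 1]]
            return path

        trans = np.array([[0.90, 0.08, 0.02],         # slow level-to-level transitions
                          [0.05, 0.90, 0.05],
                          [0.02, 0.08, 0.90]])
        emit = np.array([[0.7, 0.2, 0.1],             # soft decisions quantized to 3 symbols
                         [0.2, 0.6, 0.2],
                         [0.1, 0.2, 0.7]])
        init = np.array([0.6, 0.3, 0.1])
        print(viterbi([0, 0, 1, 1, 2, 2, 2, 1], trans, emit, init))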

  20. Dissimilar processing of emotional facial expressions in human and monkey temporal cortex.

    PubMed

    Zhu, Qi; Nelissen, Koen; Van den Stock, Jan; De Winter, François-Laurent; Pauwels, Karl; de Gelder, Beatrice; Vanduffel, Wim; Vandenbulcke, Mathieu

    2013-02-01

    Emotional facial expressions play an important role in social communication across primates. Despite major progress made in our understanding of categorical information processing such as for objects and faces, little is known, however, about how the primate brain evolved to process emotional cues. In this study, we used functional magnetic resonance imaging (fMRI) to compare the processing of emotional facial expressions between monkeys and humans. We used a 2×2×2 factorial design with species (human and monkey), expression (fear and chewing) and configuration (intact versus scrambled) as factors. At the whole brain level, neural responses to conspecific emotional expressions were anatomically confined to the superior temporal sulcus (STS) in humans. Within the human STS, we found functional subdivisions with a face-selective right posterior STS area that also responded to emotional expressions of other species and a more anterior area in the right middle STS that responded specifically to human emotions. Hence, we argue that the latter region does not show a mere emotion-dependent modulation of activity but is primarily driven by human emotional facial expressions. Conversely, in monkeys, emotional responses appeared in earlier visual cortex and outside face-selective regions in inferior temporal cortex that responded also to multiple visual categories. Within monkey IT, we also found areas that were more responsive to conspecific than to non-conspecific emotional expressions but these responses were not as specific as in human middle STS. Overall, our results indicate that human STS may have developed unique properties to deal with social cues such as emotional expressions. PMID:23142071

  1. Facial expressions and EEG in infants of intrusive and withdrawn mothers with depressive symptoms.

    PubMed

    Diego, Miguel A; Field, Tiffany; Hart, Sybil; Hernandez-Reif, Maria; Jones, Nancy; Cullen, Christy; Schanberg, Saul; Kuhn, Cynthia

    2002-01-01

    When intrusive and withdrawn mothers with depressive symptoms modeled happy, surprised, and sad expressions, their 3-month-old infants did not differentially respond to these expressions or show EEG changes. When a stranger modeled these expressions, the infants of intrusive vs. withdrawn mothers looked more at the surprised and sad expressions and showed greater relative right EEG activity in response to the surprise and sad expressions as compared to the happy expressions. These findings suggest that the infants of intrusive mothers with depressive symptoms showed more differential responding to the facial expressions than the infants of withdrawn mothers. In addition, the infants of intrusive vs. infants of withdrawn mothers showed increased salivary cortisol following the interactions, suggesting that they were more stressed by the interactions. PMID:11816047

  2. Automatic 2.5-D Facial Landmarking and Emotion Annotation for Social Interaction Assistance.

    PubMed

    Zhao, Xi; Zou, Jianhua; Li, Huibin; Dellandrea, Emmanuel; Kakadiaris, Ioannis A; Chen, Liming

    2016-09-01

    People with low vision, Alzheimer's disease, and autism spectrum disorder experience difficulties in perceiving or interpreting facial expressions of emotion in their social lives. Though automatic facial expression recognition (FER) methods on 2-D videos have been extensively investigated, their performance is constrained by challenges such as head pose and lighting conditions. The shape information in 3-D facial data can reduce or even overcome these challenges. However, the high expense of 3-D cameras prevents their widespread use. Fortunately, 2.5-D facial data from emerging portable RGB-D cameras provide a good compromise. In this paper, we propose an automatic emotion annotation solution on 2.5-D facial data collected from RGB-D cameras. The solution consists of a facial landmarking method and a FER method. Specifically, we propose building a deformable partial face model and fitting the model to a 2.5-D face to localize facial landmarks automatically. For FER, a novel action unit (AU) space-based method is proposed. Facial features are extracted using the landmarks and represented as coordinates in the AU space, which are then classified into facial expressions. Evaluated on three publicly accessible facial databases, namely the EURECOM, FRGC, and Bosphorus databases, the proposed facial landmarking and expression recognition methods achieve satisfactory results. Possible real-world applications using our algorithms are also discussed. PMID:26316289
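
    The AU-space idea can be loosely illustrated in Python: landmark-derived measurements are mapped to a small vector of action-unit-like activations and then classified against per-expression prototypes. The landmark indices (assuming a 68-point annotation), the particular measurements, and the nearest-prototype classifier are all illustrative assumptions rather than the authors' method.

        import numpy as np

        def au_coordinates(landmarks):
            """Map a (68, 3) landmark array to a few AU-like activations (illustrative)."""
            mouth_open = np.linalg.norm(landmarks[66] - landmarks[62])       # inner-lip gap
            brow_raise = np.linalg.norm(landmarks[19] - landmarks[37])       # brow-to-eye distance
            lip_corner_pull = np.linalg.norm(landmarks[54] - landmarks[48])  # mouth width
            scale = np.linalg.norm(landmarks[36] - landmarks[45])            # inter-ocular distance
            return np.array([mouth_open, brow_raise, lip_corner_pull]) / scale

        def classify(au_vector, prototypes):
            """Nearest-prototype classification in AU space."""
            labels = list(prototypes)
            dists = [np.linalg.norm(au_vector - prototypes[k]) for k in labels]
            return labels[int(np.argmin(dists))]

        # Toy prototypes: mean AU coordinates of training faces for each expression.
        prototypes = {"neutral":  np.array([0.10, 0.45, 0.60]),
                      "happy":    np.array([0.20, 0.45, 0.85]),
                      "surprise": np.array([0.55, 0.65, 0.60])}
        face = np.random.default_rng(1).normal(size=(68, 3))   # stands in for fitted 2.5-D landmarks
        print(classify(au_coordinates(face), prototypes))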

  3. Facial Expressions and Ability to Recognize Emotions From Eyes or Mouth in Children

    PubMed Central

    Guarnera, Maria; Hichy, Zira; Cascio, Maura I.; Carrubba, Stefano

    2015-01-01

    This research aims to contribute to the literature on the ability to recognize anger, happiness, fear, surprise, sadness, disgust and neutral emotions from facial information. By investigating children’s performance in detecting these emotions from a specific face region, we were interested to know whether children would show differences in recognizing these expressions from the upper or lower face, and if any difference between specific facial regions depended on the emotion in question. For this purpose, a group of 6-7 year-old children was selected. Participants were asked to recognize emotions by using a labeling task with three stimulus types (region of the eyes, of the mouth, and full face). The findings seem to indicate that children correctly recognize basic facial expressions when pictures represent the whole face, except for a neutral expression, which was recognized from the mouth, and sadness, which was recognized from the eyes. Children are also able to identify anger from the eyes as well as from the whole face. With respect to gender differences, there is no female advantage in emotional recognition. The results indicate a significant interaction ‘gender x face region’ only for anger and neutral emotions. PMID:27247651

  4. Facial Expressions and Ability to Recognize Emotions From Eyes or Mouth in Children.

    PubMed

    Guarnera, Maria; Hichy, Zira; Cascio, Maura I; Carrubba, Stefano

    2015-05-01

    This research aims to contribute to the literature on the ability to recognize anger, happiness, fear, surprise, sadness, disgust and neutral emotions from facial information. By investigating children's performance in detecting these emotions from a specific face region, we were interested to know whether children would show differences in recognizing these expressions from the upper or lower face, and if any difference between specific facial regions depended on the emotion in question. For this purpose, a group of 6-7 year-old children was selected. Participants were asked to recognize emotions by using a labeling task with three stimulus types (region of the eyes, of the mouth, and full face). The findings seem to indicate that children correctly recognize basic facial expressions when pictures represent the whole face, except for a neutral expression, which was recognized from the mouth, and sadness, which was recognized from the eyes. Children are also able to identify anger from the eyes as well as from the whole face. With respect to gender differences, there is no female advantage in emotional recognition. The results indicate a significant interaction 'gender x face region' only for anger and neutral emotions. PMID:27247651

  5. Recognition of facial expressions by alcoholic patients: a systematic literature review

    PubMed Central

    Donadon, Mariana Fortunata; Osório, Flávia de Lima

    2014-01-01

    Background Alcohol abuse and dependence can cause a wide variety of cognitive, psychomotor, and visual-spatial deficits. It is questionable whether this condition is associated with impairments in the recognition of affective and/or emotional information. Such impairments may promote deficits in social cognition and, consequently, in the adaptation and interaction of alcohol abusers with their social environment. The aim of this systematic review was to systematize the literature on alcoholics’ recognition of basic facial expressions in terms of the following outcome variables: accuracy, emotional intensity, and latency time. Methods A systematic literature search in the PsycINFO, PubMed, and SciELO electronic databases, with no restrictions regarding publication year, was employed as the study methodology. Results The findings of some studies indicate that alcoholics have greater impairment in facial expression recognition tasks, while others could not differentiate the clinical group from controls. However, there was a trend toward greater deficits in alcoholics. Alcoholics displayed less accuracy in recognition of sadness and disgust and required greater emotional intensity to judge facial expressions corresponding to fear and anger. Conclusion The current study was only able to identify trends in the chosen outcome variables. Future studies that aim to provide more precise evidence for the potential influence of alcohol on social cognition are needed. PMID:25228806

  6. Age-Related Response Bias in the Decoding of Sad Facial Expressions

    PubMed Central

    Fölster, Mara; Hess, Ursula; Hühnel, Isabell; Werheid, Katja

    2015-01-01

    Recent studies have found that age is negatively associated with the accuracy of decoding emotional facial expressions; this effect of age was found for actors as well as for raters. Given that motivational differences and stereotypes may bias the attribution of emotion, the aim of the present study was to explore whether these age effects are due to response bias, that is, the unbalanced use of response categories. Thirty younger raters (19–30 years) and thirty older raters (65–81 years) viewed video clips of younger and older actors representing the same age ranges, and decoded their facial expressions. We computed both raw hit rates and bias-corrected hit rates to assess the influence of potential age-related response bias on decoding accuracy. Whereas raw hit rates indicated significant effects of both the actors’ and the raters’ ages on decoding accuracy for sadness, these age effects were no longer significant when response bias was corrected. Our results suggest that age effects on the accuracy of decoding facial expressions may be due, at least in part, to age-related response bias. PMID:26516920
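
    One common bias correction in this literature is Wagner's (1993) unbiased hit rate, which weights each hit both by how often the emotion was shown and by how often the corresponding response category was used. Whether this is the exact correction applied in the study is not stated here, so the Python sketch below should be read as a generic illustration.

        import numpy as np

        def hit_rates(confusion):
            """Raw and unbiased hit rates from a stimulus-by-response confusion matrix."""
            confusion = np.asarray(confusion, dtype=float)
            shown = confusion.sum(axis=1)       # how often each emotion was presented
            chosen = confusion.sum(axis=0)      # how often each response label was used
            hits = np.diag(confusion)
            raw = hits / shown
            unbiased = hits ** 2 / (shown * chosen)   # penalizes overuse of a response category
            return raw, unbiased

        # Toy matrix (rows: sadness/anger shown; columns: sadness/anger responded).
        print(hit_rates([[18, 2],
                         [10, 10]]))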

  7. To Capture a Face: A Novel Technique for the Analysis and Quantification of Facial Expressions in American Sign Language

    ERIC Educational Resources Information Center

    Grossman, Ruth B.; Kegl, Judy

    2006-01-01

    American Sign Language uses the face to express vital components of grammar in addition to the more universal expressions of emotion. The study of ASL facial expressions has focused mostly on the perception and categorization of various expression types by signing and nonsigning subjects. Only a few studies of the production of ASL facial…

  8. Cloning, 3D modeling and expression analysis of three vacuolar invertase genes from cassava (Manihot Esculenta Crantz).

    PubMed

    Yao, Yuan; Wu, Xiao-Hui; Geng, Meng-Ting; Li, Rui-Mei; Liu, Jiao; Hu, Xin-Wen; Guo, Jian-Chun

    2014-01-01

    Vacuolar invertase is one of the key enzymes in sucrose metabolism that irreversibly catalyzes the hydrolysis of sucrose to glucose and fructose in plants. In this research, three vacuolar invertase genes, named MeVINV1-3 and encoding proteins of 653, 660 and 639 amino acids, respectively, were cloned from cassava. The motifs NDPNG (the β-fructosidase motif), RDP and WECVD, which are conserved and essential for catalytic activity in the vacuolar invertase family, were found in MeVINV1 and MeVINV2. In MeVINV3, however, the motif NGPDG was found instead of NDPNG; its three amino acids GPD differ from those in other vacuolar invertases (DPN), which might render MeVINV3 an inactive protein. The N-terminal leader sequence of the MeVINVs contains a signal anchor, which is associated with the sorting of vacuolar invertase to the vacuole. The overall predicted 3D structure of the MeVINVs consists of a five-bladed β-propeller module at the N-terminal domain and a β-sandwich module at the C-terminal domain. The active site of the protein is situated in the β-propeller module. The MeVINVs are classified into two subfamilies, the α and β groups: the α group members MeVINV1 and 2 are highly expressed in reproductive organs and tuber roots (considered sink organs), while the β group member MeVINV3 is highly expressed in leaves (source organs). All MeVINVs are highly expressed in leaves, while only MeVINV1 and 2 are highly expressed in tubers at the cassava tuber maturity stage. Thus, MeVINV1 and 2 play an important role in sucrose unloading and starch accumulation, as well as in buffering the pools of sucrose, hexoses and sugar phosphates in leaves, specifically at later stages of plant development. PMID:24838076

  9. Do Infants Show Distinct Negative Facial Expressions for Fear and Anger? Emotional Expression in 11-Month-Old European American, Chinese, and Japanese Infants

    ERIC Educational Resources Information Center

    Camras, Linda A.; Oster, Harriet; Bakeman, Roger; Meng, Zhaolan; Ujiie, Tatsuo; Campos, Joseph J.

    2007-01-01

    Do infants show distinct negative facial expressions for different negative emotions? To address this question, European American, Chinese, and Japanese 11-month-olds were videotaped during procedures designed to elicit mild anger or frustration and fear. Facial behavior was coded using Baby FACS, an anatomically based scoring system. Infants'…

  10. Visual field bias in hearing and deaf adults during judgments of facial expression and identity.

    PubMed

    Letourneau, Susan M; Mitchell, Teresa V

    2013-01-01

    The dominance of the right hemisphere during face perception is associated with more accurate judgments of faces presented in the left rather than the right visual field (RVF). Previous research suggests that the left visual field (LVF) bias typically observed during face perception tasks is reduced in deaf adults who use sign language, for whom facial expressions convey important linguistic information. The current study examined whether visual field biases were altered in deaf adults whenever they viewed expressive faces, or only when attention was explicitly directed to expression. Twelve hearing adults and 12 deaf signers were trained to recognize a set of novel faces posing various emotional expressions. They then judged the familiarity or emotion of faces presented in the left or RVF, or both visual fields simultaneously. The same familiar and unfamiliar faces posing neutral and happy expressions were presented in the two tasks. Both groups were most accurate when faces were presented in both visual fields. Across tasks, the hearing group demonstrated a bias toward the LVF. In contrast, the deaf group showed a bias toward the LVF during identity judgments that shifted marginally toward the RVF during emotion judgments. Two secondary conditions tested whether these effects generalized to angry faces and famous faces and similar effects were observed. These results suggest that attention to facial expression, not merely the presence of emotional expression, reduces a typical LVF bias for face processing in deaf signers. PMID:23761774

  11. Upregulations of metallothionein gene expressions and tolerance to heavy metal toxicity by three dimensional cultivation of HepG2 cells on VECELL 3-D inserts.

    PubMed

    Kubo, Takashi; Kuroda, Yukie; Horiuchi, Shinichiro; Kim, Su-Ryang; Sekino, Yuko; Ishida, Seiichi

    2016-02-01

    The VECELL 3-D insert is a new culture scaffold consisting of collagen-coated ePTFE (expanded polytetrafluoroethylene) mesh. We analyzed the effects of VECELL 3-D inserts on the functionality of HepG2, a human hepatocellular carcinoma cell line. HepG2 cells cultured on VECELL 3-D inserts maintained a round shape, while those cultured on a standard culture plate or collagen-coated cell culture plate showed a flattened and cubic epithelial-like shape. HepG2 cells cultured on VECELL 3-D inserts showed upregulated expression of metallothionein genes and, in turn, a higher tolerance to heavy metal-induced toxicity. These results suggest that HepG2 cell functions were changed by the cell morphology induced by culturing on VECELL 3-D inserts. PMID:26763402

  12. Effects of face feature and contour crowding in facial expression adaptation.

    PubMed

    Liu, Pan; Montaser-Kouhsari, Leila; Xu, Hong

    2014-12-01

    Prolonged exposure to a visual stimulus, such as a happy face, biases the perception of a subsequently presented neutral face toward sadness, a phenomenon known as face adaptation. Face adaptation is affected by the visibility or awareness of the adapting face. However, whether it is affected by the discriminability of the adapting face is largely unknown. In the current study, we used crowding to manipulate the discriminability of the adapting face and tested its effect on face adaptation. Instead of presenting flanking faces near the target face, we shortened the distance between facial features (internal feature crowding) and reduced the size of the face contour (external contour crowding) to introduce crowding. In our first experiment, we were interested in whether internal feature crowding or external contour crowding is more effective in inducing a crowding effect. We found that combining internal feature and external contour crowding, but not either of them alone, induced a significant crowding effect. In Experiment 2, we went on to investigate its effect on adaptation. We found that both internal feature crowding and external contour crowding significantly reduced the facial expression aftereffect (FEA). However, we did not find a significant correlation between the discriminability of the adapting face and its FEA. Interestingly, we found a significant correlation between the discriminabilities of the adapting and test faces. Experiment 3 found that the reduced adaptation aftereffect under combined crowding by the external face contour and the internal facial features cannot be decomposed linearly into the effects of the face contour and the facial features, suggesting a nonlinear integration of facial features and face contour in face adaptation. PMID:25449164

  13. Facial expression of fear in the context of human ethology: Recognition advantage in the perception of male faces.

    PubMed

    Trnka, Radek; Tavel, Peter; Hasto, Jozef

    2015-01-01

    Facial expression is one of the core issues in the ethological approach to the study of human behaviour. This study discusses sex-specific aspects of the recognition of the facial expression of fear using results from our previously published experimental study. We conducted an experiment in which 201 participants judged seven different facial expressions: anger, contempt, disgust, fear, happiness, sadness and surprise (Trnka et al. 2007). Participants were able to recognize the facial expression of fear significantly better on a male face than on a female face. Females also recognized fear generally better than males. The present study provides a new interpretation of this sex difference in the recognition of fear. We interpret these results within the paradigm of human ethology, taking into account the adaptive function of the facial expression of fear. We argue that better detection of fear might have been crucial for females in situations of serious danger in groups of early hominids. The crucial role of females in nurturing and protecting offspring was fundamental for the reproductive potential of the group. A clear decoding of this alarm signal might thus have enabled the timely preparation of females for escape or defence to protect their health for successful reproduction. Further, it is likely that males played the role of guardians of social groups and that they were responsible for effectively warning the group in situations of serious danger. This may explain why the facial expression of fear is better recognized on the male face than on the female face. PMID:26071575

  14. Altered representation of facial expressions after early visual deprivation

    PubMed Central

    Gao, Xiaoqing; Maurer, Daphne; Nishimura, Mayu

    2013-01-01

    We investigated the effects of early visual deprivation on the underlying representation of the six basic emotions. Using multi-dimensional scaling (MDS), we compared the similarity judgments of adults who had missed early visual input because of bilateral congenital cataracts to control adults with normal vision. Participants made similarity judgments of the six basic emotional expressions, plus neutral, at three different intensities. Consistent with previous studies, the similarity judgments of typical adults could be modeled with four underlying dimensions, which can be interpreted as representing pleasure, arousal, potency and intensity of expressions. As a group, cataract-reversal patients showed a systematic structure with dimensions representing pleasure, potency, and intensity. However, an arousal dimension was not obvious in the patient group's judgments. Hierarchical clustering analysis revealed a pattern in patients seen in typical 7-year-olds but not typical 14-year-olds or adults. There was also more variability among the patients than among the controls, as evidenced by higher stress values for the MDS fit to the patients' data and more dispersed weightings on the four dimensions. The findings suggest an important role for early visual experience in shaping the later development of the representations of emotions. Since the normal underlying structure for emotion emerges postnatally and continues to be refined until late childhood, the altered representation of emotion in adult patients suggests a sleeper effect. PMID:24312071
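
    The dimensional analysis can be approximated with off-the-shelf multidimensional scaling, as in the Python sketch below using scikit-learn on a precomputed dissimilarity matrix. Note that this is plain group-level MDS on random stand-in data; it does not reproduce the individual-differences weighting reported for the patient group.

        import numpy as np
        from sklearn.manifold import MDS

        # Mean rated dissimilarity between every pair of expression stimuli
        # (random symmetric data stands in for the actual similarity judgments).
        rng = np.random.default_rng(1)
        n_stimuli = 19                          # e.g. 6 emotions x 3 intensities + neutral
        d = rng.uniform(1, 7, size=(n_stimuli, n_stimuli))
        dissim = (d + d.T) / 2
        np.fill_diagonal(dissim, 0.0)

        # Four-dimensional configuration; the stress value indexes goodness of fit.
        mds = MDS(n_components=4, dissimilarity="precomputed", random_state=0)
        coords = mds.fit_transform(dissim)
        print(coords.shape, round(float(mds.stress_), 2))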

  15. Exploring Combinations of Different Color and Facial Expression Stimuli for Gaze-Independent BCIs

    PubMed Central

    Chen, Long; Jin, Jing; Daly, Ian; Zhang, Yu; Wang, Xingyu; Cichocki, Andrzej

    2016-01-01

    Background: Some studies have proven that a conventional visual brain computer interface (BCI) based on overt attention cannot be used effectively when eye movement control is not possible. To solve this problem, a novel visual-based BCI system based on covert attention and feature attention has been proposed and was called the gaze-independent BCI. Color and shape differences between stimuli and backgrounds have generally been used in examples of gaze-independent BCIs. Recently, a new paradigm based on facial expression changes has been presented and has obtained high performance. However, some facial expressions were so similar that users could not tell them apart, especially when they were presented at the same position in a rapid serial visual presentation (RSVP) paradigm. Consequently, the performance of the BCI is reduced. New Method: In this paper, we combined facial expressions and colors to optimize stimulus presentation in the gaze-independent BCI. This optimized paradigm was called the colored dummy face pattern. It is suggested that different colors and facial expressions could help users to locate the target and evoke larger event-related potentials (ERPs). In order to evaluate the performance of this new paradigm, two other paradigms were presented, called the gray dummy face pattern and the colored ball pattern. Comparison with Existing Method(s): The key point in determining the value of the colored dummy face stimuli for BCI systems was whether they could achieve higher performance than gray face or colored ball stimuli. Ten healthy participants (seven male, aged 21–26 years, mean 24.5 ± 1.25) participated in our experiment. Online and offline results of four different paradigms were obtained and comparatively analyzed. Results: The results showed that the colored dummy face pattern could evoke higher P300 and N400 ERP amplitudes, compared with the gray dummy face pattern and the colored ball pattern. Online results showed that

  16. Responses in the right posterior superior temporal sulcus show a feature-based response to facial expression.

    PubMed

    Flack, Tessa R; Andrews, Timothy J; Hymers, Mark; Al-Mosaiwi, Mohammed; Marsden, Samuel P; Strachan, James W A; Trakulpipat, Chayanit; Wang, Liang; Wu, Tian; Young, Andrew W

    2015-08-01

    The face-selective region of the right posterior superior temporal sulcus (pSTS) plays an important role in analysing facial expressions. However, it is less clear how facial expressions are represented in this region. In this study, we used the face composite effect to explore whether the pSTS contains a holistic or feature-based representation of facial expression. Aligned and misaligned composite images were created from the top and bottom halves of faces posing different expressions. In Experiment 1, participants performed a behavioural matching task in which they judged whether the top half of two images was the same or different. The ability to discriminate the top half of the face was affected by changes in the bottom half of the face when the images were aligned, but not when they were misaligned. This shows a holistic behavioural response to expression. In Experiment 2, we used fMR-adaptation to ask whether the pSTS has a corresponding holistic neural representation of expression. Aligned or misaligned images were presented in blocks that involved repeating the same image or in which the top or bottom half of the images changed. Increased neural responses were found in the right pSTS regardless of whether the change occurred in the top or bottom of the image, showing that changes in expression were detected across all parts of the face. However, in contrast to the behavioural data, the pattern did not differ between aligned and misaligned stimuli. This suggests that the pSTS does not encode facial expressions holistically. In contrast to the pSTS, a holistic pattern of response to facial expression was found in the right inferior frontal gyrus (IFG). Together, these results suggest that pSTS reflects an early stage in the processing of facial expression in which facial features are represented independently. PMID:25967084

  17. Processing of facial emotional expression: spatio-temporal data as assessed by scalp event-related potentials.

    PubMed

    Krolak-Salmon, P; Fischer, C; Vighetto, A; Mauguière, F

    2001-03-01

    Event-related potentials (ERPs) were recorded in 10 adult volunteers, who were asked to view pictures of faces with different emotional expressions, i.e. fear, happiness, disgust, surprise and neutral expression [Ekman, P. & Friesen, W.V. (1975). Pictures of Facial Affect. Consulting Psychologist Press, Palo Alto, CA]. ERPs were recorded during two different tasks with the same stimuli. Firstly, subjects were instructed to pay attention to the gender of the faces by counting males or females. Secondly, they had to focus on facial expressions by counting faces that looked surprised. The classical scalp 'face-related potentials', i.e. a vertex-positive potential and a bilateral temporal negativity, were recorded 150 ms after the stimulus onset. Significant differences were found, firstly between late-latency ERPs to emotional faces and to neutral faces, between 250 and 550 ms of latency and, secondly, among the ERPs to the different facial expressions between 550 and 750 ms of latency. These differences appeared only during the expression discrimination task, not during the gender discrimination task. Topographic maps of these differences showed a specific right temporal activity related to each emotional expression, some particularities being observed for each expression. This study provides new data concerning the spatio-temporal features of facial expression processing, particularly a late-latency activity related to specific attention to facial expressions. PMID:11264671

  18. Affective engagement for facial expressions and emotional scenes: The influence of social anxiety

    PubMed Central

    Wangelin, Bethany C.; Bradley, Margaret M.; Kastner, Anna; Lang, Peter J.

    2012-01-01

    Pictures of emotional facial expressions or natural scenes are often used as cues in emotion research. We examined the extent to which these different stimuli engage emotion and attention, and whether the presence of social anxiety symptoms influences responding to facial cues. Sixty participants reporting high or low social anxiety viewed pictures of angry, neutral, and happy faces, as well as violent, neutral, and erotic scenes, while skin conductance and event-related potentials were recorded. Acoustic startle probes were presented throughout picture viewing, and blink magnitude, probe P3 and reaction time to the startle probe also were measured. Results indicated that viewing emotional scenes prompted strong reactions in autonomic, central, and reflex measures, whereas pictures of faces were generally weak elicitors of measurable emotional response. However, higher social anxiety was associated with modest electrodermal changes when viewing angry faces and mild startle potentiation when viewing either angry or smiling faces, compared to neutral. Taken together, pictures of facial expressions do not strongly engage fundamental affective reactions, but these cues appeared to be effective in distinguishing between high and low social anxiety participants, supporting their use in anxiety research. PMID:22643041

  19. Differential conditioning to facial emotional expressions: effects of hemispheric asymmetries and CS identification.

    PubMed

    Peper, M; Karcher, S

    2001-11-01

    Previous studies on aversive learning have suggested a right hemispheric advantage for eliciting autonomic reactions to a masked conditioned facial stimulus (CS) depicting anger. The present study investigated the effects of visual field (VF), stimulus awareness, and emotional valence of the CSs on indicators of conditioning (bilateral SCRs, HR) using a differential conditioning paradigm (N = 41). In Group 1, four different negatively valenced facial expressions (CS+) but not four positively valenced CS- were associated with an unconditioned stimulus (US, aversive vocalization, 97 dB, 3 s) during acquisition. Group 2 received a treatment reversal with positive CS+ associated with the US. In a repeated measures design, CSs were presented with or without awareness during extinction (two-week interval, order counterbalanced). SOAs were adapted for each subject and condition prior to the experiment so that identification performance approached chance level. The results revealed that both negative and positive facial expressions could be aversively conditioned, providing evidence for a generalization of learning along the valence dimension. During extinction, preattentive negative CS+ presented to the left VF showed a trend towards greater electrodermal and cardiac reactions. However, no such effect emerged under full awareness of the CSs. These results confirm and further specify the nature of hemispheric asymmetries in emotional associative learning. PMID:12240670

  20. Automated Measurement of Facial Expression in Infant-Mother Interaction: A Pilot Study

    PubMed Central

    Messinger, Daniel S.; Mahoor, Mohammad H.; Chow, Sy-Miin; Cohn, Jeffrey F.

    2009-01-01

    Automated facial measurement using computer vision has the potential to objectively document continuous changes in behavior. To examine emotional expression and communication, we used automated measurements to quantify smile strength, eye constriction, and mouth opening in two six-month-old/mother dyads who each engaged in a face-to-face interaction. Automated measurements showed high associations with anatomically based manual coding (concurrent validity); measurements of smiling showed high associations with mean ratings of positive emotion made by naive observers (construct validity). For both infants and mothers, smile strength and eye constriction (the Duchenne marker) were correlated over time, creating a continuous index of smile intensity. Infant and mother smile activity exhibited changing (nonstationary) local patterns of association, suggesting the dyadic repair and dissolution of states of affective synchrony. The study provides insights into the potential and limitations of automated measurement of facial action. PMID:19885384
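
    The changing local association between infant and mother smiling can be illustrated with a simple sliding-window correlation, sketched below in Python on synthetic per-frame smile-intensity series; the window length, frame rate, and lag are assumptions for the example only.

        import numpy as np

        def windowed_correlation(x, y, window):
            """Pearson correlation of x and y within each sliding window."""
            return np.array([np.corrcoef(x[s:s + window], y[s:s + window])[0, 1]
                             for s in range(len(x) - window + 1)])

        rng = np.random.default_rng(2)
        t = np.arange(600)                                               # e.g. 20 s at 30 frames/s
        infant = np.sin(t / 40) + 0.3 * rng.normal(size=t.size)         # infant smile intensity
        mother = np.roll(infant, 15) + 0.3 * rng.normal(size=t.size)    # mother lags slightly
        local_r = windowed_correlation(infant, mother, window=90)       # 3 s windows
        print(local_r.min(), local_r.max())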

  1. Smile to see the forest: Facially expressed positive emotions broaden cognition.

    PubMed

    Johnson, Kareem J; Waugh, Christian E; Fredrickson, Barbara L

    2010-02-19

    The broaden hypothesis, part of Fredrickson's (1998, 2001) broaden-and-build theory, proposes that positive emotions lead to broadened cognitive states. Here, we present evidence that cognitive broadening can be produced by frequent facial expressions of positive emotion. Additionally, we present a novel method of using facial electromyography (EMG) to discriminate between Duchenne (genuine) and non-Duchenne (non-genuine) smiles. Across experiments, Duchenne smiles occurred more frequently during positive emotion inductions than neutral or negative inductions. Across experiments, Duchenne smiles correlated with self-reports of specific positive emotions. In Experiment 1, high frequencies of Duchenne smiles predicted increased attentional breadth on a global-local visual processing task. In Experiment 2, high frequencies of Duchenne smiles predicted increased attentional flexibility on a covert attentional orienting task. These data underscore the value of using multiple methods to measure emotional experience in studies of emotion and cognition. PMID:23275681

  2. Facial expression reconstruction on the basis of selected vertices of triangle mesh

    NASA Astrophysics Data System (ADS)

    Peszor, Damian; Wojciechowska, Marzena

    2016-06-01

    Facial expression reconstruction is an important issue in the field of computer graphics. While it is relatively easy to create an animation based on meshes constructed from video recordings, this kind of high-quality data is often not transferred to another model because of the lack of an intermediary, anthropometry-based way to do so. However, if a high-quality mesh is sampled with sufficient density, it is possible to use the obtained feature points to encode the shape of the surrounding vertices in a way that can easily be transferred to another mesh with corresponding feature points. In this paper, we present a method for obtaining the information needed to reconstruct changes in the facial surface on the basis of selected feature points.
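
    One simple way to realize such an encoding is to describe each mesh vertex by inverse-distance weights over its nearest feature points plus a residual offset, which can then be re-applied to the corresponding feature points of another mesh. The Python sketch below is an illustrative scheme under that assumption, not the authors' encoding.

        import numpy as np

        def encode_vertices(vertices, feature_points, k=4):
            """Encode each vertex by weights over its k nearest feature points plus a residual."""
            codes = []
            for v in vertices:
                d = np.linalg.norm(feature_points - v, axis=1)
                idx = np.argsort(d)[:k]
                w = 1.0 / (d[idx] + 1e-8)
                w /= w.sum()
                residual = v - (w[:, None] * feature_points[idx]).sum(axis=0)
                codes.append((idx, w, residual))
            return codes

        def apply_to_target(codes, target_feature_points):
            """Reconstruct vertex positions from corresponding feature points on another mesh."""
            return np.array([(w[:, None] * target_feature_points[idx]).sum(axis=0) + residual
                             for idx, w, residual in codes])

        rng = np.random.default_rng(3)
        source_features = rng.normal(size=(30, 3))       # feature points on the source mesh
        source_vertices = rng.normal(size=(200, 3))      # dense vertices to be encoded
        target_features = source_features + 0.05 * rng.normal(size=(30, 3))
        codes = encode_vertices(source_vertices, source_features)
        print(apply_to_target(codes, target_features).shape)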

  3. Association between facial expression and PTSD symptoms among young children exposed to the Great East Japan Earthquake: a pilot study

    PubMed Central

    Fujiwara, Takeo; Mizuki, Rie; Miki, Takahiro; Chemtob, Claude

    2015-01-01

    “Emotional numbing” is a symptom of post-traumatic stress disorder (PTSD) characterized by a loss of interest in usually enjoyable activities, feeling detached from others, and an inability to express a full range of emotions. Emotional numbing is usually assessed through self-report, and is particularly difficult to ascertain among young children. We conducted a pilot study to explore the use of facial expression ratings in response to a comedy video clip to assess emotional reactivity among preschool children directly exposed to the Great East Japan Earthquake. This study included 23 child participants. Child PTSD symptoms were measured using a modified version of the Parent’s Report of the Child’s Reaction to Stress scale. Children were filmed while watching a 2-min video compilation of natural scenes (‘baseline video’) followed by a 2-min video clip from a television comedy (‘comedy video’). Children’s facial expressions were processed using the Noldus FaceReader software, which implements the Facial Action Coding System (FACS). We investigated the association between PTSD symptom scores and facial emotion reactivity using linear regression analysis. Children with higher PTSD symptom scores showed a significantly greater proportion of neutral facial expressions, controlling for sex, age, and baseline facial expression (p < 0.05). This pilot study suggests that facial emotion reactivity, measured using facial expression recognition software, has the potential to index emotional numbing in young children. This pilot study adds to the emerging literature on using experimental psychopathology methods to characterize children’s reactions to disasters. PMID:26528206
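
    The reported association amounts to an ordinary least-squares model of the neutral-expression proportion on the PTSD symptom score, adjusting for sex, age, and the baseline proportion. A generic Python sketch on synthetic stand-in data follows; the coefficients and sample values are not the study's.

        import numpy as np

        rng = np.random.default_rng(4)
        n = 23                                         # number of child participants
        ptsd = rng.uniform(0, 30, n)                   # PTSD symptom score
        sex = rng.integers(0, 2, n).astype(float)
        age = rng.uniform(3, 6, n)
        baseline = rng.uniform(0.2, 0.8, n)            # neutral proportion during baseline video
        neutral = 0.2 + 0.01 * ptsd + 0.4 * baseline + 0.05 * rng.normal(size=n)

        # Design matrix with an intercept column; ordinary least squares via numpy.
        X = np.column_stack([np.ones(n), ptsd, sex, age, baseline])
        beta, *_ = np.linalg.lstsq(X, neutral, rcond=None)
        print(dict(zip(["intercept", "ptsd", "sex", "age", "baseline"], np.round(beta, 3))))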

  4. Attentional control and interpretation of facial expression after oxytocin administration to typically developed male adults.

    PubMed

    Hirosawa, Tetsu; Kikuchi, Mitsuru; Okumura, Eiichi; Yoshimura, Yuko; Hiraishi, Hirotoshi; Munesue, Toshio; Takesaki, Natsumi; Furutani, Naoki; Ono, Yasuki; Higashida, Haruhiro; Minabe, Yoshio

    2015-01-01

    Deficits in attentional-inhibitory control have been reported to correlate with anger, hostility, and aggressive behavior; therefore, inhibitory control appears to play an important role in prosocial behavior. Moreover, recent studies have demonstrated that oxytocin (OT) exerts a prosocial effect (e.g., decreasing negative behaviors, such as aggression) on humans. However, it is unknown whether the positively valenced effect of OT on sociality is associated with enhanced attentional-inhibitory control. In the present study, we hypothesized that OT enhances attentional-inhibitory control and that the positively valenced effect of OT on social cognition is associated with enhanced attentional-inhibitory control. In a single-blind, placebo-controlled crossover trial, we tested this hypothesis using 20 healthy male volunteers. We considered a decrease in the hostility detection ratio, which reflects the positively valenced interpretation of other individuals' facial expressions, to be an index of the positively valenced effects of OT (we reused the results of our previously published study). As a measure of attentional-inhibitory control, we employed a modified version of the flanker task (i.e., a shorter conflict duration indicated higher inhibitory control). The results failed to demonstrate any significant behavioral effects of OT (i.e., neither a positively valenced effect on facial cognition nor an effect on attentional-inhibitory control). However, the enhancement of attentional-inhibitory control after OT administration correlated significantly with the positively valenced effects on the interpretation of uncertain facial cognition (i.e., neutral and ambiguous facial expressions). PMID:25659131

  5. Attentional Control and Interpretation of Facial Expression after Oxytocin Administration to Typically Developed Male Adults

    PubMed Central

    Hirosawa, Tetsu; Kikuchi, Mitsuru; Okumura, Eiichi; Yoshimura, Yuko; Hiraishi, Hirotoshi; Munesue, Toshio; Takesaki, Natsumi; Furutani, Naoki; Ono, Yasuki; Higashida, Haruhiro; Minabe, Yoshio

    2015-01-01

    Deficits in attentional-inhibitory control have been reported to correlate with anger, hostility, and aggressive behavior; therefore, inhibitory control appears to play an important role in prosocial behavior. Moreover, recent studies have demonstrated that oxytocin (OT) exerts a prosocial effect (e.g., decreasing negative behaviors, such as aggression) on humans. However, it is unknown whether the positively valenced effect of OT on sociality is associated with enhanced attentional-inhibitory control. In the present study, we hypothesized that OT enhances attentional-inhibitory control and that the positively valenced effect of OT on social cognition is associated with enhanced attentional-inhibitory control. In a single-blind, placebo-controlled crossover trial, we tested this hypothesis using 20 healthy male volunteers. We considered a decrease in the hostility detection ratio, which reflects the positively valenced interpretation of other individuals’ facial expressions, to be an index of the positively valenced effects of OT (we reused the results of our previously published study). As a measure of attentional-inhibitory control, we employed a modified version of the flanker task (i.e., a shorter conflict duration indicated higher inhibitory control). The results failed to demonstrate any significant behavioral effects of OT (i.e., neither a positively valenced effect on facial cognition nor an effect on attentional-inhibitory control). However, the enhancement of attentional-inhibitory control after OT administration correlated significantly with the positively valenced effects on the interpretation of uncertain facial cognition (i.e., neutral and ambiguous facial expressions). PMID:25659131

  6. Why Do Fearful Facial Expressions Elicit Behavioral Approach? Evidence From a Combined Approach-Avoidance Implicit Association Test

    PubMed Central

    Hammer, Jennifer L.; Marsh, Abigail A.

    2015-01-01

    Despite communicating a “negative” emotion, fearful facial expressions predominantly elicit behavioral approach from perceivers. It has been hypothesized that this seemingly paradoxical effect may occur due to fearful expressions’ resemblance to vulnerable, infantile faces. However, this hypothesis has not yet been tested. We used a combined approach-avoidance/implicit association test (IAT) to test this hypothesis. Participants completed an approach-avoidance lever task during which they responded to fearful and angry facial expressions as well as neutral infant and adult faces presented in an IAT format. Results demonstrated an implicit association between fearful facial expressions and infant faces and showed that both fearful expressions and infant faces primarily elicit behavioral approach. The dominance of approach responses to both fearful expressions and infant faces decreased as a function of psychopathic personality traits. Results suggest that the prosocial responses to fearful expressions observed in most individuals may stem from their associations with infantile faces. PMID:25603135

  7. Facial Expression Aftereffect Revealed by Adaption to Emotion-Invisible Dynamic Bubbled Faces

    PubMed Central

    Luo, Chengwen; Wang, Qingyun; Schyns, Philippe G.; Kingdom, Frederick A. A.; Xu, Hong

    2015-01-01

    Visual adaptation is a powerful tool to probe the short-term plasticity of the visual system. Adapting to local features such as oriented lines can distort our judgment of subsequently presented lines, an effect known as the tilt aftereffect. The tilt aftereffect is believed to be processed at low levels of the visual cortex, such as V1. Adaptation to faces, on the other hand, can produce significant aftereffects in high-level traits such as identity, expression, and ethnicity. However, whether face adaptation necessitates awareness of face features is debatable. In the current study, we investigated whether facial expression aftereffects (FEAE) can be generated by partially visible faces. We first generated partially visible faces using the bubbles technique, in which the face was seen through randomly positioned circular apertures, and selected the bubbled faces for which the subjects were unable to identify happy or sad expressions. When the subjects adapted to static displays of these partial faces, no significant FEAE was found. However, when the subjects adapted to a dynamic video display of a series of different partial faces, a significant FEAE was observed. In both conditions, subjects could not identify facial expression in the individual adapting faces. These results suggest that our visual system is able to integrate unrecognizable partial faces over a short period of time and that the integrated percept affects our judgment of subsequently presented faces. We conclude that FEAE can be generated by partial faces with minimal facial expression cues, implying either that our cognitive system fills in the missing parts during adaptation or that subcortical structures are activated by the bubbled faces without conscious recognition of emotion during adaptation. PMID:26717572

  8. Diminished facial emotion expression and associated clinical characteristics in Anorexia Nervosa.

    PubMed

    Lang, Katie; Larsson, Emma E C; Mavromara, Liza; Simic, Mima; Treasure, Janet; Tchanturia, Kate

    2016-02-28

    This study aimed to investigate emotion expression in a large group of children, adolescents and adults with Anorexia Nervosa (AN) and to examine the associated clinical correlates. One hundred and forty-one participants (AN = 66, HC = 75) were recruited, and positive and negative film clips were used to elicit emotion expressions. The Facial Activation Coding system (FACES) was used to code emotion expression, and subjective ratings of emotion were collected. Individuals with AN displayed less positive emotion during the positive film clip than healthy controls (HC). There was no significant difference between the groups on the Positive and Negative Affect Scale (PANAS). The AN group displayed emotional incongruence (reporting a different emotion from what would be expected given the stimuli, with limited facial affect to signal the emotion experienced), whereby they reported feeling significantly higher rates of negative emotion during the positive clip. There were no differences in emotion expression between the groups during the negative film clip. Despite this, individuals with AN reported feeling significantly higher levels of negative emotion during the negative clip. Diminished positive emotion expression was associated with more severe clinical symptoms, suggesting that these individuals may represent a group with serious social difficulties that require specific attention in treatment. PMID:26778369

  9. Working memory and the identification of facial expression in patients with left frontal glioma.

    PubMed

    Mu, Yong-Gao; Huang, Ling-Juan; Li, Shi-Yun; Ke, Chao; Chen, Yu; Jin, Yu; Chen, Zhong-Ping

    2012-09-01

    Patients with brain tumors may have cognitive dysfunctions, including deterioration of memory such as working memory, that affect quality of life. This study aimed to explore the presence of deficits in working memory and in the identification of facial expressions in patients with left frontal glioma. This case-control study recruited 11 matched pairs of patients and healthy control subjects (mean age ± standard deviation, 37.00 ± 10.96 years vs 36.73 ± 11.20 years; 7 male and 4 female) from March through December 2011. The psychological battery contained tests that estimate verbal/visual-spatial working memory, executive function, and the identification of facial expressions. According to the paired-samples analysis, there were no differences in the anxiety and depression scores or in the intelligence quotients between the 2 groups (P > .05). All indices of the Digit Span Test were significantly worse in patients than in control subjects (P < .05), but the Tapping Test scores did not differ between the patient and control groups. Of all 7 Wisconsin Card Sorting Test (WCST) indexes, only the Perseverative Response differed significantly between patients and control subjects (P < .05). Patients were significantly less accurate in detecting angry facial expressions than were control subjects (30.3% vs 57.6%; P < .05) but showed no deficits in the identification of other expressions. The backward indexes of the Digit Span Test were associated with emotion scores and with tumor size and grade (P < .05). Patients with left frontal glioma had deficits in verbal working memory and in the ability to identify anger. These deficits may have resulted from damage to functional regions of the frontal cortex whose roles in these 2 capabilities have not been confirmed. However, verbal working memory performance might also be affected by emotional and tumor-related factors. PMID:23095835

  10. Space-by-time manifold representation of dynamic facial expressions for emotion categorization

    PubMed Central

    Delis, Ioannis; Chen, Chaona; Jack, Rachael E.; Garrod, Oliver G. B.; Panzeri, Stefano; Schyns, Philippe G.

    2016-01-01

    Visual categorization is the brain computation that reduces high-dimensional information in the visual environment into a smaller set of meaningful categories. An important problem in visual neuroscience is to identify the visual information that the brain must represent and then use to categorize visual inputs. Here we introduce a new mathematical formalism—termed space-by-time manifold decomposition—that describes this information as a low-dimensional manifold separable in space and time. We use this decomposition to characterize the representations used by observers to categorize the six classic facial expressions of emotion (happy, surprise, fear, disgust, anger, and sad). By means of a Generative Face Grammar, we presented random dynamic facial movements on each experimental trial and used subjective human perception to identify the facial movements that correlate with each emotion category. When the random movements projected onto the categorization manifold region corresponding to one of the emotion categories, observers categorized the stimulus accordingly; otherwise they selected “other.” Using this information, we determined both the Action Unit and temporal components whose linear combinations lead to reliable categorization of each emotion. In a validation experiment, we confirmed the psychological validity of the resulting space-by-time manifold representation. Finally, we demonstrated the importance of temporal sequencing for accurate emotion categorization and identified the temporal dynamics of Action Unit components that cause typical confusions between specific emotions (e.g., fear and surprise) as well as those resolving these confusions. PMID:27305521
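
    The abstract does not give the decomposition algorithm, but a model that is separable in space and time can be written as M_s ≈ W_tem H_s W_spa, with shared temporal modules, shared spatial (Action Unit) modules, and trial-specific coefficients. The Python sketch below only illustrates this separable form by estimating trial coefficients for fixed, randomly generated modules; it is not the authors' factorization procedure, and all names and dimensions are placeholders.

        import numpy as np

        # Illustrative separable space-by-time model (not the authors' exact factorization):
        # each trial's facial-movement matrix M_s (time x Action Units) is approximated as
        #     M_s  ~=  W_tem @ H_s @ W_spa
        # with shared temporal modules W_tem (time x P), shared spatial modules
        # W_spa (N x Action Units), and trial-specific coefficients H_s (P x N).
        rng = np.random.default_rng(1)
        T, A, P, N, n_trials = 30, 12, 3, 2, 50      # time samples, AUs, module counts, trials

        W_tem = np.abs(rng.standard_normal((T, P)))  # placeholder temporal modules
        W_spa = np.abs(rng.standard_normal((N, A)))  # placeholder spatial (AU) modules
        trials = [np.abs(rng.standard_normal((T, A))) for _ in range(n_trials)]  # fake single-trial data

        def fit_trial_coefficients(M_s, W_tem, W_spa):
            """Least-squares estimate of the trial-specific coefficient matrix H_s,
            given fixed temporal and spatial modules (uses vec(AXB) = (B^T kron A) vec(X))."""
            K = np.kron(W_spa.T, W_tem)                               # (T*A, P*N)
            h, *_ = np.linalg.lstsq(K, M_s.reshape(-1, order="F"), rcond=None)
            return h.reshape((W_tem.shape[1], W_spa.shape[0]), order="F")

        H = [fit_trial_coefficients(M_s, W_tem, W_spa) for M_s in trials]
        err = np.mean([np.linalg.norm(M_s - W_tem @ H_s @ W_spa) for M_s, H_s in zip(trials, H)])
        print(f"mean reconstruction error over trials: {err:.3f}")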

  11. Perception of emotions from facial expressions in high-functioning adults with autism

    PubMed Central

    Kennedy, Daniel P.; Adolphs, Ralph

    2012-01-01

    Impairment in social communication is one of the diagnostic hallmarks of autism spectrum disorders, and a large body of research has documented aspects of impaired social cognition in autism, both at the level of the processes and the neural structures involved. Yet one of the most common social communicative abilities in everyday life, the ability to judge somebody's emotion from their facial expression, has yielded conflicting findings. To investigate this issue, we used a sensitive task that has been used to assess facial emotion perception in a number of neurological and psychiatric populations. Fifteen high-functioning adults with autism and 19 control participants rated the emotional intensity of 36 faces displaying basic emotions. Every face was rated six times, once for each emotion category. The autism group gave ratings that were significantly less sensitive to a given emotion and less reliable across repeated testing, resulting in overall decreased specificity in emotion perception. We thus demonstrate a subtle but specific pattern of impairments in facial emotion perception in people with autism. PMID:23022433

  12. Space-by-time manifold representation of dynamic facial expressions for emotion categorization.

    PubMed

    Delis, Ioannis; Chen, Chaona; Jack, Rachael E; Garrod, Oliver G B; Panzeri, Stefano; Schyns, Philippe G

    2016-06-01

    Visual categorization is the brain computation that reduces high-dimensional information in the visual environment into a smaller set of meaningful categories. An important problem in visual neuroscience is to identify the visual information that the brain must represent and then use to categorize visual inputs. Here we introduce a new mathematical formalism-termed space-by-time manifold decomposition-that describes this information as a low-dimensional manifold separable in space and time. We use this decomposition to characterize the representations used by observers to categorize the six classic facial expressions of emotion (happy, surprise, fear, disgust, anger, and sad). By means of a Generative Face Grammar, we presented random dynamic facial movements on each experimental trial and used subjective human perception to identify the facial movements that correlate with each emotion category. When the random movements projected onto the categorization manifold region corresponding to one of the emotion categories, observers categorized the stimulus accordingly; otherwise they selected "other." Using this information, we determined both the Action Unit and temporal components whose linear combinations lead to reliable categorization of each emotion. In a validation experiment, we confirmed the psychological validity of the resulting space-by-time manifold representation. Finally, we demonstrated the importance of temporal sequencing for accurate emotion categorization and identified the temporal dynamics of Action Unit components that cause typical confusions between specific emotions (e.g., fear and surprise) as well as those resolving these confusions. PMID:27305521

  13. Influence of gender in the recognition of basic facial expressions: A critical literature review

    PubMed Central

    Forni-Santos, Larissa; Osório, Flávia L

    2015-01-01

    AIM: To conduct a systematic literature review about the influence of gender on the recognition of facial expressions of six basic emotions. METHODS: We made a systematic search with the search terms (face OR facial) AND (processing OR recognition OR perception) AND (emotional OR emotion) AND (gender OR sex) in the PubMed, PsycINFO, LILACS, and SciELO electronic databases for articles assessing outcomes related to response accuracy, latency, and emotional intensity. Article selection was performed according to parameters set by Cochrane. The reference lists of the articles found through the database search were checked for additional references of interest. RESULTS: With respect to accuracy, women tend to perform better than men when all emotions are considered as a set. Regarding specific emotions, there seem to be no gender-related differences in the recognition of happiness, whereas results are quite heterogeneous for the remaining emotions, especially sadness, anger, and disgust. Fewer articles dealt with the parameters of response latency and emotional intensity, which hinders the generalization of their findings, especially in the face of their methodological differences. CONCLUSION: The analysis of the studies conducted to date does not allow for definite conclusions concerning the role of the observer’s gender in the recognition of facial emotion, mostly because of the absence of standardized methods of investigation. PMID:26425447

  14. ANS Responses and Facial Expressions Differentiate between the Taste of Commercial Breakfast Drinks

    PubMed Central

    de Wijk, René A.; He, Wei; Mensink, Manon G. J.; Verhoeven, Rob H. G.; de Graaf, Cees

    2014-01-01

    The high failure rate of new market introductions, despite initial successful testing with traditional sensory and consumer tests, necessitates the development of other tests. This study explored the ability of selected physiological and behavioral measures of the autonomic nervous system (ANS) to distinguish between repeated exposures to foods from a single category (breakfast drinks) and with similar liking ratings. In this within-subject study 19 healthy young adults sipped from five breakfast drinks, each presented five times, while ANS responses (heart rate, skin conductance response and skin temperature), facial expressions, liking, and intensities were recorded. The results showed that liking was associated with increased heart rate and skin temperature, and more neutral facial expressions. Intensity was associated with reduced heart rate and skin temperature, more neutral expressions and more negative expressions of sadness, anger and surprise. Strongest associations with liking were found after 1 second of tasting, whereas strongest associations with intensity were found after 2 seconds of tasting. Future studies should verify the contribution of the additional information to the prediction of market success. PMID:24714107

  15. The Effect of Secure Attachment State and Infant Facial Expressions on Childless Adults' Parental Motivation.

    PubMed

    Ding, Fangyuan; Zhang, Dajun; Cheng, Gang

    2016-01-01

    This study examined the association between infant facial expressions and parental motivation as well as the interaction between attachment state and expressions. Two hundred eighteen childless adults (M age = 19.22, 118 males, 100 females) were recruited. Participants completed the Chinese version of the State Adult Attachment Measure and the E-prime test, which comprised three components: (a) liking, the specific hedonic experience in reaction to laughing, neutral, and crying infant faces; (b) representational responding, actively seeking infant faces with specific expressions; and (c) evoked responding, actively retaining images of three different infant facial expressions. While the first component refers to the "liking" of infants, the second and third components entail the "wanting" of an infant. Random-intercept multilevel models with emotion nested within participants revealed a significant interaction between secure attachment state and emotion on both liking and representational responding. A hierarchical regression analysis was conducted to examine the unique contributions of secure attachment state. Findings demonstrated that, after controlling for sex and for anxious and avoidant attachment, secure attachment state positively predicted parental motivations (liking and wanting) in the neutral and crying conditions, but not the laughing condition. These findings demonstrate the significant role of secure attachment state in parental motivation, specifically when infants display uncertain and negative emotions. PMID:27582724
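
    As a rough illustration of the reported analysis (random-intercept multilevel models with emotion conditions nested within participants), the Python sketch below fits a mixed-effects model with statsmodels on hypothetical long-format data; the variable names and generated values are placeholders, not the study's data.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        # hypothetical long-format data: one row per participant x infant-expression condition
        rng = np.random.default_rng(2)
        rows = []
        for s in range(218):
            secure = rng.normal()                      # secure attachment state (standardized)
            for emotion in ("laughing", "neutral", "crying"):
                rows.append({
                    "subject": s,
                    "secure": secure,
                    "emotion": emotion,
                    "liking": rng.normal(5, 1.5),      # placeholder outcome rating
                })
        df = pd.DataFrame(rows)

        # random-intercept model: emotion conditions nested within participants,
        # testing the secure-attachment x emotion interaction on liking
        model = smf.mixedlm("liking ~ secure * emotion", data=df, groups=df["subject"]).fit()
        print(model.summary())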

  16. ANS responses and facial expressions differentiate between the taste of commercial breakfast drinks.

    PubMed

    de Wijk, René A; He, Wei; Mensink, Manon G J; Verhoeven, Rob H G; de Graaf, Cees

    2014-01-01

    The high failure rate of new market introductions, despite initial successful testing with traditional sensory and consumer tests, necessitates the development of other tests. This study explored the ability of selected physiological and behavioral measures of the autonomic nervous system (ANS) to distinguish between repeated exposures to foods from a single category (breakfast drinks) and with similar liking ratings. In this within-subject study 19 healthy young adults sipped from five breakfast drinks, each presented five times, while ANS responses (heart rate, skin conductance response and skin temperature), facial expressions, liking, and intensities were recorded. The results showed that liking was associated with increased heart rate and skin temperature, and more neutral facial expressions. Intensity was associated with reduced heart rate and skin temperature, more neutral expressions and more negative expressions of sadness, anger and surprise. Strongest associations with liking were found after 1 second of tasting, whereas strongest associations with intensity were found after 2 seconds of tasting. Future studies should verify the contribution of the additional information to the prediction of market success. PMID:24714107

  17. The Effect of Secure Attachment State and Infant Facial Expressions on Childless Adults’ Parental Motivation

    PubMed Central

    Ding, Fangyuan; Zhang, Dajun; Cheng, Gang

    2016-01-01

    This study examined the association between infant facial expressions and parental motivation as well as the interaction between attachment state and expressions. Two hundred eighteen childless adults (M age = 19.22, 118 males, 100 females) were recruited. Participants completed the Chinese version of the State Adult Attachment Measure and the E-prime test, which comprised three components: (a) liking, the specific hedonic experience in reaction to laughing, neutral, and crying infant faces; (b) representational responding, actively seeking infant faces with specific expressions; and (c) evoked responding, actively retaining images of three different infant facial expressions. While the first component refers to the “liking” of infants, the second and third components entail the “wanting” of an infant. Random-intercept multilevel models with emotion nested within participants revealed a significant interaction between secure attachment state and emotion on both liking and representational responding. A hierarchical regression analysis was conducted to examine the unique contributions of secure attachment state. Findings demonstrated that, after controlling for sex and for anxious and avoidant attachment, secure attachment state positively predicted parental motivations (liking and wanting) in the neutral and crying conditions, but not the laughing condition. These findings demonstrate the significant role of secure attachment state in parental motivation, specifically when infants display uncertain and negative emotions. PMID:27582724

  18. Influence of Matrices on 3D-Cultured Prostate Cancer Cells' Drug Response and Expression of Drug-Action Associated Proteins

    PubMed Central

    Edmondson, Rasheena; Adcock, Audrey F.; Yang, Liju

    2016-01-01

    This study investigated the effects of matrix on the behaviors of 3D-cultured cells of two prostate cancer cell lines, LNCaP and DU145. Two biologically-derived matrices, Matrigel and Cultrex BME, and one synthetic matrix, the Alvetex scaffold, were used to culture the cells. The cell proliferation rate, cellular response to anti-cancer drugs, and expression levels of proteins associated with drug sensitivity/resistance were examined and compared amongst the 3D-cultured cells on the three matrices and 2D-cultured cells. The cellular responses upon treatment with two common anti-cancer drugs, Docetaxel and Rapamycin, were examined. The expressions of epidermal growth factor receptor (EGFR) and β-III tubulin in DU145 cells and p53 in LNCaP cells were examined. The results showed that the proliferation rates of cells cultured on the three matrices varied, especially between the synthetic matrix and the biologically-derived matrices. The drug responses and the expressions of drug sensitivity-associated proteins differed between cells on various matrices as well. Among the 3D cultures on the three matrices, increased expression of β-III tubulin in DU145 cells was correlated with increased resistance to Docetaxel, and decreased expression of EGFR in DU145 cells was correlated with increased sensitivity to Rapamycin. Increased expression of a p53 dimer in 3D-cultured LNCaP cells was correlated with increased resistance to Docetaxel. Collectively, the results showed that the matrix of 3D cell culture models strongly influences cellular behaviors, which highlights the imperative need to achieve standardization of 3D cell culture technology in order to be used in drug screening and cell biology studies. PMID:27352049

  19. The Odor Context Facilitates the Perception of Low-Intensity Facial Expressions of Emotion.

    PubMed

    Leleu, Arnaud; Demily, Caroline; Franck, Nicolas; Durand, Karine; Schaal, Benoist; Baudouin, Jean-Yves

    2015-01-01

    It has been established that the recognition of facial expressions integrates contextual information. In this study, we aimed to clarify the influence of contextual odors. The participants were asked to match a target face varying in expression intensity with non-ambiguous expressive faces. Intensity variations in the target faces were designed by morphing expressive faces with neutral faces. In addition, the influence of verbal information was assessed by providing half the participants with the emotion names. Odor cues were manipulated by placing participants in a pleasant (strawberry), aversive (butyric acid), or no-odor control context. The results showed two main effects of the odor context. First, the minimum amount of visual information required to perceive an expression was lowered when the odor context was emotionally congruent: happiness was correctly perceived at lower intensities in the faces displayed in the pleasant odor context, and the same phenomenon occurred for disgust and anger in the aversive odor context. Second, the odor context influenced the false perception of expressions that were not used in target faces, with distinct patterns according to the presence of emotion names. When emotion names were provided, the aversive odor context decreased intrusions for disgust ambiguous faces but increased them for anger. When the emotion names were not provided, this effect did not occur and the pleasant odor context elicited an overall increase in intrusions for negative expressions. We conclude that olfaction plays a role in the way facial expressions are perceived in interaction with other contextual influences such as verbal information. PMID:26390036

  20. The Odor Context Facilitates the Perception of Low-Intensity Facial Expressions of Emotion

    PubMed Central

    Leleu, Arnaud; Demily, Caroline; Franck, Nicolas; Durand, Karine; Schaal, Benoist; Baudouin, Jean-Yves

    2015-01-01

    It has been established that the recognition of facial expressions integrates contextual information. In this study, we aimed to clarify the influence of contextual odors. The participants were asked to match a target face varying in expression intensity with non-ambiguous expressive faces. Intensity variations in the target faces were designed by morphing expressive faces with neutral faces. In addition, the influence of verbal information was assessed by providing half the participants with the emotion names. Odor cues were manipulated by placing participants in a pleasant (strawberry), aversive (butyric acid), or no-odor control context. The results showed two main effects of the odor context. First, the minimum amount of visual information required to perceive an expression was lowered when the odor context was emotionally congruent: happiness was correctly perceived at lower intensities in the faces displayed in the pleasant odor context, and the same phenomenon occurred for disgust and anger in the aversive odor context. Second, the odor context influenced the false perception of expressions that were not used in target faces, with distinct patterns according to the presence of emotion names. When emotion names were provided, the aversive odor context decreased intrusions for disgust ambiguous faces but increased them for anger. When the emotion names were not provided, this effect did not occur and the pleasant odor context elicited an overall increase in intrusions for negative expressions. We conclude that olfaction plays a role in the way facial expressions are perceived in interaction with other contextual influences such as verbal information. PMID:26390036

  1. Abnormal Amygdala and Prefrontal Cortex Activation to Facial Expressions in Pediatric Bipolar Disorder

    PubMed Central

    Garrett, Amy; Reiss, Allan; Howe, Meghan; Kelley, Ryan; Singh, Manpreet; Adleman, Nancy; Karchemskiy, Asya; Chang, Kiki

    2012-01-01

    Objective: Previous functional magnetic resonance imaging (fMRI) studies in pediatric bipolar disorder (BD) have reported greater amygdala and less dorsolateral prefrontal cortex (DLPFC) activation to facial expressions compared to healthy controls. The current study investigates whether these differences are associated with the early or late phase of activation, suggesting different temporal characteristics of brain responses. Method: Twenty euthymic adolescents with familial BD (14 male) and twenty-one healthy control subjects (13 male) underwent fMRI scanning during presentation of happy, sad, and neutral facial expressions. Whole-brain voxel-wise analyses were conducted in SPM5, using a 3-way analysis of variance (ANOVA) with factors group (BD and healthy control [HC]), facial expression (happy, sad, and neutral versus scrambled), and phase (early and late, corresponding to the first and second half of each block of faces). Results: There were no significant group differences in task performance, age, gender, or IQ. Significant activation from the main effect of group included greater DLPFC activation in the HC group, and greater amygdala/hippocampal activation in the BD group. The Group x Phase interaction identified clusters in the superior temporal sulcus/insula and visual cortex, where activation increased from the early to late phase of the block for the BD but not the HC group. Conclusions: These findings are consistent with previous studies that suggest deficient prefrontal cortex regulation of heightened amygdala response to emotional stimuli in pediatric BD. Increasing activation over time in superior temporal and visual cortices suggests difficulty processing or disengaging attention from emotional faces in BD. PMID:22840553

  2. Evaluation of the usability of a serious game aiming to teach facial expressions to schizophrenic patients.

    PubMed

    Isleyen, Filiz; Gulkesen, K Hakan; Cinemre, Buket; Samur, M Kemal; Zayim, Nese; Sen Kaya, Semiha

    2014-01-01

    In some psychological disorders such as autism and schizophrenia, loss of facial expression recognition skill may complicate the patient's daily life. Information technology may help to develop facial expression recognition skill through educational software and games. We designed and developed an interactive web-based educational program, for which we performed a usability study before investigating its effectiveness on schizophrenia patients' emotion perception. The purpose of this study is to describe the usability evaluation of a web-based game set designed to teach facial expressions to schizophrenic patients. The usability study was done in two steps: first, we applied heuristic evaluation, rated the violations on a scale from most to least severe, and resolved the major problems. In the second step, the think-aloud method was used and the web site was assessed by five schizophrenic patients. Eight experts participated in the heuristic evaluation, in which a total of 60 violations were identified with a mean severity of 2.77 (range: 0-4). All of the major problems (severity over 2.5) were listed, and the usability problems were solved by the development team. After the problems were solved, five users with a diagnosis of schizophrenia used the web site with the same scenario. They reported experiencing minor but different problems. In conclusion, we suggest that a combination of heuristic evaluation and the think-aloud method may be an effective and efficient approach to usability evaluation for serious games designed for special patient groups. PMID:25160269

  3. Improving social understanding of individuals of intellectual and developmental disabilities through a 3D-facial expression intervention program.

    PubMed

    Cheng, Yufang; Chen, Shuhui

    2010-01-01

    Individuals with intellectual and developmental disabilities (IDD) have specific difficulties in cognitive social-emotional capability, which affect numerous aspects of social competence. This study evaluated the learning effects of a 3D-emotion system intervention program designed to help individuals with IDD learn socially based emotion capabilities in social contexts. The 3D-emotion system involves three stages comprising 24 questions built around different designed social events. The experimental study used a single-subject design with three participants with IDD to identify the effects of the 3D-emotion system intervention program; the data collected from use of the system and informal interviews with the participants were analyzed. The results showed significant positive effects of the 3D-emotion system intervention program for all three participants, and the follow-up learning outcomes are discussed in this paper. PMID:20674267

  4. HIF-2α Expression Regulates Sprout Formation into 3D Fibrin Matrices in Prolonged Hypoxia in Human Microvascular Endothelial Cells

    PubMed Central

    Nauta, Tessa D.; Duyndam, Monique C. A.; Weijers, Ester M.; van Hinsbergh, Victor M. W.; Koolwijk, Pieter

    2016-01-01

    Background: During short-term hypoxia, Hypoxia Inducible Factors (particularly their subunits HIF-1α and HIF-2α) regulate the expression of many genes, including the potent angiogenesis stimulator VEGF. However, in some pathological conditions chronic hypoxia occurs and is accompanied by reduced angiogenesis. Objectives: We investigated the effect of prolonged hypoxia on the proliferation and sprouting ability of human microvascular endothelial cells and the involvement of the HIFs and Dll4/Notch signaling. Methods and Results: Human microvascular endothelial cells (hMVECs), cultured at 20% oxygen for 14 days and seeded on top of 3D fibrin matrices, formed sprouts when stimulated with VEGF-A/TNFα. In contrast, hMVECs precultured at 1% oxygen for 14 days were viable and proliferative, but did not form sprouts into fibrin upon VEGF-A/TNFα stimulation at 1% oxygen. Silencing of HIF-2α with si-RNA partially restored the inhibition of endothelial sprouting, whereas silencing of HIF-1α or HIF-3α by si-RNA had no effect. No involvement of the Dll4/Notch pathway in the inhibitory effect of prolonged hypoxia on endothelial sprouting was found. In addition, hypoxia decreased the production of urokinase-type plasminogen activator (uPA), needed for migration and invasion, without a significant effect on its inhibitor PAI-1. This was independent of HIF-2α, as si-HIF-2α did not counteract the uPA reduction. Conclusion: Prolonged culturing of hMVECs at 1% oxygen inhibited endothelial sprouting into fibrin. Two independent mechanisms contribute. Silencing of HIF-2α with si-RNA partially restored the inhibition of endothelial sprouting, pointing to a HIF-2α-dependent mechanism. In addition, reduction of uPA contributed to reduced endothelial tube formation in a fibrin matrix during prolonged hypoxia. PMID:27490118

  5. Inducing a Concurrent Motor Load Reduces Categorization Precision for Facial Expressions

    PubMed Central

    2015-01-01

    Motor theories of expression perception posit that observers simulate facial expressions within their own motor system, aiding perception and interpretation. Consistent with this view, reports have suggested that blocking facial mimicry induces expression labeling errors and alters patterns of ratings. Crucially, however, it is unclear whether changes in labeling and rating behavior reflect genuine perceptual phenomena (e.g., greater internal noise associated with expression perception or interpretation) or are products of response bias. In an effort to advance this literature, the present study introduces a new psychophysical paradigm for investigating motor contributions to expression perception that overcomes some of the limitations inherent in simple labeling and rating tasks. Observers were asked to judge whether smiles drawn from a morph continuum were sincere or insincere, in the presence or absence of a motor load induced by the concurrent production of vowel sounds. Having confirmed that smile sincerity judgments depend on cues from both eye and mouth regions (Experiment 1), we demonstrated that vowel production reduces the precision with which smiles are categorized (Experiment 2). In Experiment 3, we replicated this effect when observers were required to produce vowels, but not when they passively listened to the same vowel sounds. In Experiments 4 and 5, we found that gender categorizations, equated for difficulty, were unaffected by vowel production, irrespective of the presence of a smiling expression. These findings greatly advance our understanding of motor contributions to expression perception and represent a timely contribution in light of recent high-profile challenges to the existing evidence base. PMID:26618622
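
    The abstract does not report the authors' exact psychophysical model, but categorization precision along a morph continuum is commonly summarized by the spread (sigma) of a fitted cumulative-Gaussian psychometric function, with a larger sigma indicating lower precision. Below is a minimal Python sketch on hypothetical response proportions; the numbers and condition labels are placeholders, not the study's data.

        import numpy as np
        from scipy.optimize import curve_fit
        from scipy.stats import norm

        def psychometric(x, mu, sigma):
            """Cumulative-Gaussian psychometric function: P('sincere') at morph level x."""
            return norm.cdf(x, loc=mu, scale=sigma)

        # hypothetical proportions of 'sincere' judgments at 7 morph levels
        # (0 = insincere end of the continuum, 1 = sincere end)
        morph_levels = np.linspace(0, 1, 7)
        p_no_load = np.array([0.03, 0.08, 0.25, 0.55, 0.80, 0.94, 0.98])   # sharper transition
        p_load    = np.array([0.10, 0.20, 0.35, 0.52, 0.68, 0.82, 0.90])   # shallower transition

        for label, p in [("no load", p_no_load), ("motor load", p_load)]:
            (mu, sigma), _ = curve_fit(psychometric, morph_levels, p, p0=[0.5, 0.2])
            print(f"{label}: PSE = {mu:.2f}, sigma = {sigma:.2f} (larger sigma = lower precision)")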

  6. Automatic Change Detection to Facial Expressions in Adolescents: Evidence from Visual Mismatch Negativity Responses

    PubMed Central

    Liu, Tongran; Xiao, Tong; Shi, Jiannong

    2016-01-01

    Adolescence is a critical period for the neurodevelopment of social-emotional processing, wherein the automatic detection of changes in facial expressions is crucial for the development of interpersonal communication. Two groups of participants (an adolescent group and an adult group) were recruited to complete an emotional oddball task featuring one happy and one fearful condition. Event-related potentials were measured via electroencephalography and electrooculography recordings to detect visual mismatch negativity (vMMN), reflecting the automatic detection of changes in facial expressions, in the two age groups. The current findings demonstrated that the adolescent group showed more negative vMMN amplitudes than the adult group in the fronto-central region during the 120–200 ms interval. During the time window of 370–450 ms, only the adult group showed better automatic processing of fearful faces than of happy faces. The present study indicates that adolescents possess stronger automatic detection of changes in emotional expression relative to adults, and sheds light on the neurodevelopment of automatic processes concerning social-emotional information. PMID:27065927

  7. [Psychophysiological signs of high-flexible forms of set on the emotionally negative facial expression].

    PubMed

    Kostandov, E A; Cheremushkin, E A

    2013-01-01

    In a series of studies on the influence of past experience on the recognition of emotionally negative facial expressions, we obtained experimental findings that we interpret as signs that, under certain conditions, highly plastic cognitive (flexible) sets are formed ("non-fixed sets", in D.N. Uznadze's terms), whose switching or updating is not accompanied by illusory distortion of recognition. The studies of this form of set revealed that: (1) the induced theta-rhythm synchronization in response to the target stimulus was larger than in cases with a rigid set; (2) the induced alpha-rhythm response to the target stimulus was expressed as synchronization, whereas in the other cases it appeared as desynchronization; (3) when the time interval between the target and trigger stimuli was increased, alpha-rhythm synchronization was observed in the prestimulus period and in the intervals between the stimuli, which was not the case otherwise; and (4) in children, this form of set was observed at the age of 10-11 years, when the "mature" set on a facial expression is formed. PMID:23866605

  8. Impaired recognition of musical emotions and facial expressions following anteromedial temporal lobe excision.