Science.gov

Sample records for 3d facial expression

  1. 3D facial landmark detection under large yaw and expression variations.

    PubMed

    Perakis, Panagiotis; Passalis, Georgios; Theoharis, Theoharis; Kakadiaris, Ioannis A

    2013-07-01

    A 3D landmark detection method for 3D facial scans is presented and thoroughly evaluated. The main contribution of the presented method is the automatic and pose-invariant detection of landmarks on 3D facial scans under large yaw variations (that often result in missing facial data), and its robustness against large facial expressions. Three-dimensional information is exploited by using 3D local shape descriptors to extract candidate landmark points. The shape descriptors include the shape index, a continuous map of principal curvature values of a 3D object's surface, and spin images, local descriptors of the object's 3D point distribution. The candidate landmarks are identified and labeled by matching them with a Facial Landmark Model (FLM) of facial anatomical landmarks. The presented method is extensively evaluated against a variety of 3D facial databases and achieves state-of-the-art accuracy (4.5-6.3 mm mean landmark localization error), considerably outperforming previous methods, even when tested with the most challenging data.
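
    To make the shape-index descriptor mentioned above concrete, here is a minimal sketch (not the authors' code) that maps principal curvatures to the usual [-1, 1] shape-index scale; sign conventions vary between papers, and the curvature arrays are assumed to come from an earlier surface-fitting step.

```python
import numpy as np

def shape_index(k1, k2):
    """Shape index from principal curvatures (one common sign convention).

    Returns values in [-1, 1]: concave cups near -1, saddles near 0,
    convex caps near +1.  Planar points (k1 = k2 = 0) are mapped to 0,
    although the index is formally undefined there.
    """
    kmax = np.maximum(k1, k2)
    kmin = np.minimum(k1, k2)
    return (2.0 / np.pi) * np.arctan2(kmax + kmin, kmax - kmin)

# toy check: convex cap -> +1, saddle -> 0, concave cup -> -1
print(shape_index(np.array([1.0, 1.0, -1.0]),
                  np.array([1.0, -1.0, -1.0])))
```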

  2. Use of 3D faces facilitates facial expression recognition in children

    PubMed Central

    Wang, Lamei; Chen, Wenfeng; Li, Hong

    2017-01-01

    This study assessed whether presenting 3D face stimuli could facilitate children’s facial expression recognition. Seventy-one children aged between 3 and 6 participated in the study. Their task was to judge whether a face presented in each trial showed a happy or fearful expression. Half of the face stimuli were shown with 3D representations, whereas the other half of the images were shown as 2D pictures. We compared expression recognition under these conditions. The results showed that the use of 3D faces improved the speed of facial expression recognition in both boys and girls. Moreover, 3D faces improved boys’ recognition accuracy for fearful expressions. Since fear is the most difficult facial expression for children to recognize, the facilitation effect of 3D faces has important practical implications for children with difficulties in facial expression recognition. The potential benefits of 3D representation for other expressions also have implications for developing more realistic assessments of children’s expression recognition. PMID:28368008

  3. FaceWarehouse: a 3D facial expression database for visual computing.

    PubMed

    Cao, Chen; Weng, Yanlin; Zhou, Shun; Tong, Yiying; Zhou, Kun

    2014-03-01

    We present FaceWarehouse, a database of 3D facial expressions for visual computing applications. We use Kinect, an off-the-shelf RGBD camera, to capture 150 individuals aged 7-80 from various ethnic backgrounds. For each person, we captured the RGBD data of her different expressions, including the neutral expression and 19 other expressions such as mouth-opening, smile, kiss, etc. For every RGBD raw data record, a set of facial feature points on the color image such as eye corners, mouth contour, and the nose tip are automatically localized, and manually adjusted if better accuracy is required. We then deform a template facial mesh to fit the depth data as closely as possible while matching the feature points on the color image to their corresponding points on the mesh. Starting from these fitted face meshes, we construct a set of individual-specific expression blendshapes for each person. These meshes with consistent topology are assembled as a rank-3 tensor to build a bilinear face model with two attributes: identity and expression. Compared with previous 3D facial databases, for every person in our database, there is a much richer matching collection of expressions, enabling depiction of most human facial actions. We demonstrate the potential of FaceWarehouse for visual computing with four applications: facial image manipulation, face component transfer, real-time performance-based facial image animation, and facial animation retargeting from video to image.
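
    A minimal sketch of how a bilinear identity/expression model of the kind described above can be evaluated once a rank-3 core tensor has been built (e.g., by higher-order SVD); the array names, shapes, and random data below are illustrative assumptions, not FaceWarehouse's actual interface.

```python
import numpy as np

# Hypothetical core tensor from a rank-3 (vertices x identity x expression)
# decomposition: axis 0 holds flattened vertex coordinates (3 * n_vertices),
# axis 1 identity modes, axis 2 expression modes.
rng = np.random.default_rng(0)
n_vertices, n_id, n_exp = 1000, 50, 20
core = rng.standard_normal((3 * n_vertices, n_id, n_exp))

def synthesize_face(core, w_id, w_exp):
    """Contract the core tensor with identity and expression weight vectors."""
    flat = np.einsum('vie,i,e->v', core, w_id, w_exp)
    return flat.reshape(-1, 3)           # (n_vertices, 3) mesh coordinates

w_id = rng.standard_normal(n_id)         # identity coefficients for one person
w_exp = np.zeros(n_exp); w_exp[0] = 1.0  # pick one expression mode, e.g. "smile"
vertices = synthesize_face(core, w_id, w_exp)
print(vertices.shape)
```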

  4. Coupled Dictionary Learning for the Detail-Enhanced Synthesis of 3-D Facial Expressions.

    PubMed

    Liang, Haoran; Liang, Ronghua; Song, Mingli; He, Xiaofei

    2016-04-01

    The desire to reconstruct 3-D face models with expressions from 2-D face images has fostered increasing interest in the problem of face modeling. This task is important and challenging in the field of computer animation. Facial contours and wrinkles are essential to generate a face with a certain expression; however, these details are generally ignored or not seriously considered in previous studies on face model reconstruction. Thus, we employ coupled radial basis function networks to derive an intermediate 3-D face model from a single 2-D face image. To further optimize the 3-D face model through landmarks, a coupled dictionary that relates 3-D face models and their corresponding 3-D landmarks is learned from the given training set through local coordinate coding. Another coupled dictionary is then constructed to bridge the 2-D and 3-D landmarks for the transfer of vertices on the face model. As a result, the final 3-D face can be generated with the appropriate expression. In the testing phase, the 2-D input faces are converted into 3-D models that display different expressions. Experimental results indicate that the proposed approach to facial expression synthesis captures model details more effectively than previous methods.
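
    The abstract does not give enough detail to reproduce the coupled networks or dictionaries, but the general idea of deforming a template mesh from sparse landmark correspondences with radial basis functions can be sketched as follows, using SciPy's RBFInterpolator (available in SciPy 1.7+); the landmark counts, smoothing value, and random data are assumptions.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def rbf_warp(template_vertices, src_landmarks, dst_landmarks):
    """Deform all template vertices with an RBF fitted to landmark displacements."""
    displacements = dst_landmarks - src_landmarks
    rbf = RBFInterpolator(src_landmarks, displacements,
                          kernel='thin_plate_spline', smoothing=1e-6)
    return template_vertices + rbf(template_vertices)

rng = np.random.default_rng(1)
template = rng.uniform(-1, 1, size=(500, 3))        # stand-in for a face mesh
src = rng.uniform(-1, 1, size=(20, 3))              # landmarks on the template
dst = src + 0.05 * rng.standard_normal(src.shape)   # landmarks fitted to the image
warped = rbf_warp(template, src, dst)
print(warped.shape)
```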

  5. Facial expression identification using 3D geometric features from Microsoft Kinect device

    NASA Astrophysics Data System (ADS)

    Han, Dongxu; Al Jawad, Naseer; Du, Hongbo

    2016-05-01

    Facial expression identification is an important part of face recognition and is closely related to emotion detection from face images. Various solutions have been proposed in the past using different types of cameras and features. The Microsoft Kinect device has been widely used for multimedia interactions; more recently, it has been increasingly deployed to support scientific investigations. This paper explores the effectiveness of using the device to identify emotional facial expressions such as surprise, smile, and sadness, and evaluates the usefulness of 3D data points on a face mesh structure obtained from the Kinect device. We present a distance-based geometric feature component that is derived from the distances between points on the face mesh and selected reference points in a single frame. The feature components extracted across a sequence of frames starting and ending with a neutral expression represent a whole expression. The feature vector eliminates the need for complex face orientation correction, simplifying the feature extraction process and making it more efficient. We applied a kNN classifier that exploits a feature-component-based similarity measure following the principle of dynamic time warping to determine the closest neighbors. Preliminary tests on a small-scale database of different facial expressions show the promise of the newly developed features and the usefulness of the Kinect device in facial expression identification.
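
    A hedged sketch of the pipeline this abstract outlines: per-frame distance features between mesh points and reference points, dynamic time warping to compare sequences of different lengths, and a kNN vote. The reference-point indices, data layout, and neighbour count are illustrative choices, not the authors' exact settings.

```python
import numpy as np

def frame_features(mesh_points, ref_idx):
    """Distances from every mesh point to a few reference points (e.g. nose tip)."""
    refs = mesh_points[ref_idx]                                    # (r, 3)
    d = np.linalg.norm(mesh_points[:, None, :] - refs[None, :, :], axis=2)
    return d.ravel()

def dtw_distance(seq_a, seq_b):
    """Classic dynamic-time-warping distance between two feature sequences."""
    na, nb = len(seq_a), len(seq_b)
    D = np.full((na + 1, nb + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, na + 1):
        for j in range(1, nb + 1):
            cost = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[na, nb]

def knn_predict(query_seq, train_seqs, train_labels, k=1):
    """Label a query expression sequence by its k nearest training sequences."""
    dists = [dtw_distance(query_seq, s) for s in train_seqs]
    nearest = np.argsort(dists)[:k]
    labels, counts = np.unique(np.asarray(train_labels)[nearest], return_counts=True)
    return labels[np.argmax(counts)]
```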

  6. 3D facial expression recognition using maximum relevance minimum redundancy geometrical features

    NASA Astrophysics Data System (ADS)

    Rabiu, Habibu; Saripan, M. Iqbal; Mashohor, Syamsiah; Marhaban, Mohd Hamiruce

    2012-12-01

    In recent years, facial expression recognition (FER) has become an attractive research area which, besides the fundamental challenges it poses, finds application in areas such as human-computer interaction, clinical psychology, lie detection, pain assessment, and neurology. Generally, approaches to FER consist of three main steps: face detection, feature extraction, and expression recognition. The recognition accuracy of FER hinges immensely on the relevance of the selected features in representing the target expressions. In this article, we present a person- and gender-independent 3D facial expression recognition method using maximum relevance minimum redundancy geometrical features. The aim is to detect a compact set of features that sufficiently represents the most discriminative features between the target classes. A multi-class one-against-one SVM classifier was employed to recognize the seven facial expressions: neutral, happy, sad, angry, fear, disgust, and surprise. An average recognition accuracy of 92.2% was recorded. Furthermore, inter-database homogeneity was investigated between two independent databases, BU-3DFE and UPM-3DFE; the results showed a strong homogeneity between the two databases.
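
    As an illustration of the two components named above, the sketch below combines a greedy max-relevance min-redundancy selection (here using mutual information for relevance and absolute correlation for redundancy, one of several mRMR variants) with a one-against-one SVM from scikit-learn; the synthetic features merely stand in for the geometrical features the paper extracts.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.svm import SVC

def mrmr_select(X, y, n_select):
    """Greedy mRMR: maximize relevance to y, penalize redundancy (abs. correlation)."""
    relevance = mutual_info_classif(X, y, random_state=0)
    corr = np.abs(np.corrcoef(X, rowvar=False))
    selected = [int(np.argmax(relevance))]
    while len(selected) < n_select:
        remaining = [f for f in range(X.shape[1]) if f not in selected]
        scores = [relevance[f] - corr[f, selected].mean() for f in remaining]
        selected.append(remaining[int(np.argmax(scores))])
    return selected

# X: geometric features per scan, y: expression labels (0..6); synthetic here
rng = np.random.default_rng(2)
X = rng.standard_normal((200, 60))
y = rng.integers(0, 7, size=200)
feats = mrmr_select(X, y, n_select=15)
clf = SVC(kernel='rbf', decision_function_shape='ovo').fit(X[:, feats], y)
print(clf.predict(X[:5][:, feats]))
```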

  7. 3D Face Model Dataset: Automatic Detection of Facial Expressions and Emotions for Educational Environments

    ERIC Educational Resources Information Center

    Chickerur, Satyadhyan; Joshi, Kartik

    2015-01-01

    Emotion detection using facial images is a technique that researchers have been using for the last two decades to try to analyze a person's emotional state given his/her image. Detection of various kinds of emotion using facial expressions of students in educational environment is useful in providing insight into the effectiveness of tutoring…

  8. The Chinese Facial Emotion Recognition Database (CFERD): a computer-generated 3-D paradigm to measure the recognition of facial emotional expressions at different intensities.

    PubMed

    Huang, Charles Lung-Cheng; Hsiao, Sigmund; Hwu, Hai-Gwo; Howng, Shen-Long

    2012-12-30

    The Chinese Facial Emotion Recognition Database (CFERD), a computer-generated three-dimensional (3D) paradigm, was developed to measure the recognition of facial emotional expressions at different intensities. The stimuli consisted of 3D colour photographic images of six basic facial emotional expressions (happiness, sadness, disgust, fear, anger and surprise) and neutral Chinese faces. The purpose of the present study is to describe the development and validation of the CFERD with nonclinical healthy participants (N=100; 50 men; age ranging between 18 and 50 years), and to generate a normative data set. The results showed that the sensitivity index d' [d' = Z(hit rate) - Z(false alarm rate), where Z(p), p ∈ [0,1], is the inverse of the cumulative Gaussian distribution function] …
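
    The sensitivity index quoted in the truncated sentence above is the standard signal-detection d', which can be computed directly with the inverse normal CDF; the hit and false-alarm rates below are made-up numbers for illustration only.

```python
from scipy.stats import norm

def d_prime(hit_rate, false_alarm_rate):
    """Sensitivity index d' = Z(hit) - Z(false alarm), Z = inverse normal CDF."""
    return norm.ppf(hit_rate) - norm.ppf(false_alarm_rate)

# hypothetical recognition of one expression at medium intensity
print(round(d_prime(0.85, 0.20), 2))   # ~1.88
```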

  9. Quantitative anatomical analysis of facial expression using a 3D motion capture system: Application to cosmetic surgery and facial recognition technology.

    PubMed

    Lee, Jae-Gi; Jung, Su-Jin; Lee, Hyung-Jin; Seo, Jung-Hyuk; Choi, You-Jin; Bae, Hyun-Sook; Park, Jong-Tae; Kim, Hee-Jin

    2015-09-01

    The topography of the facial muscles differs between males and females and among individuals of the same gender. To explain the unique expressions that people can make, it is important to define the shapes of the muscles, their associations with the skin, and their relative functions. Three-dimensional (3D) motion-capture analysis, often used to study facial expression, was used in this study to identify characteristic skin movements in males and females when they made six representative basic expressions. The movements of 44 reflective markers (RMs) positioned on anatomical landmarks were measured. Their mean displacement was larger in males [ranging from 14.31 mm (fear) to 41.15 mm (anger)] and 3.35-4.76 mm smaller in females [ranging from 9.55 mm (fear) to 37.80 mm (anger)]. The percentages of RMs involved in the ten highest mean maximum displacement values in making at least one expression were 47.6% in males and 61.9% in females. The movements of the RMs were larger in males than in females but were more limited. Expanding our understanding of facial expression requires morphological studies of facial muscles and studies of related complex functionality. Conducting these together with quantitative analyses, as in the present study, will yield data valuable for medicine, dentistry, and engineering, for example, for surgical operations on facial regions, software for predicting changes in facial features and expressions after corrective surgery, and the development of face-mimicking robots.
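
    A small sketch of the kind of summary statistic reported above: maximum displacement of each reflective marker from a neutral reference frame, averaged over markers. The (frames x markers x 3) array layout and the synthetic take are assumptions.

```python
import numpy as np

def max_displacements(trajectories):
    """Per-marker maximum displacement from the first (neutral) frame.

    trajectories : array of shape (n_frames, n_markers, 3), in millimetres.
    """
    neutral = trajectories[0]
    disp = np.linalg.norm(trajectories - neutral, axis=2)   # (n_frames, n_markers)
    return disp.max(axis=0)                                 # (n_markers,)

rng = np.random.default_rng(3)
take = np.cumsum(rng.standard_normal((120, 44, 3)), axis=0) * 0.1  # fake 44-marker take
per_marker = max_displacements(take)
print(per_marker.mean(), per_marker.argsort()[-10:])  # mean and ten most mobile markers
```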

  10. Improving Social Understanding of Individuals of Intellectual and Developmental disabilities through a 3D-Facial Expression Intervention Program

    ERIC Educational Resources Information Center

    Cheng, Yufang; Chen, Shuhui

    2010-01-01

    Individuals with intellectual and developmental disabilities (IDD) have specific difficulties in cognitive social-emotional capability, which affect numerous aspects of social competence. This study evaluated the learning effects of using 3D-emotion system intervention program for individuals with IDD in learning socially based-emotions capability…

  11. [Pathophysiological diagnosis of facial paralysis using 3-D MRI].

    PubMed

    Ishihara, T; Hirata, K; Yuki, N; Sato, T

    2001-04-01

    Bilateral facial paralysis (facial diplegia) is often observed in Guillain-Barré syndrome (GBS) and Fisher's syndrome (FS). We used three-dimensional (3-D) MRI to observe injured facial nerves in facial diplegia due to GBS and its variants, and examined the function of the blood-nerve barrier and the clinical use of 3-D MRI for detecting injured facial nerves. In the four patients with GBS and its variants (three GBS cases, one FS case), routine brain MRI did not show any abnormal findings, whereas contrast-enhanced 3-D MRI revealed Gd-enhancement of the facial nerves. In contrast, only one of twelve cases of Bell's palsy showed enhancement on contrast-enhanced 3-D MRI. The significantly higher rate of visualization in facial paralysis due to GBS and its variants than in Bell's palsy may therefore be attributable to a difference in the mechanism of injury or to the greater severity of the disease. In conclusion, observation of the facial nerve using 3-D MRI was very useful for understanding the condition of facial diplegia in GBS and its variants.

  12. Modeling 3D facial shape from DNA.

    PubMed

    Claes, Peter; Liberton, Denise K; Daniels, Katleen; Rosana, Kerri Matthes; Quillen, Ellen E; Pearson, Laurel N; McEvoy, Brian; Bauchet, Marc; Zaidi, Arslan A; Yao, Wei; Tang, Hua; Barsh, Gregory S; Absher, Devin M; Puts, David A; Rocha, Jorge; Beleza, Sandra; Pereira, Rinaldo W; Baynam, Gareth; Suetens, Paul; Vandermeulen, Dirk; Wagner, Jennifer K; Boster, James S; Shriver, Mark D

    2014-03-01

    Human facial diversity is substantial, complex, and largely scientifically unexplained. We used spatially dense quasi-landmarks to measure face shape in population samples with mixed West African and European ancestry from three locations (United States, Brazil, and Cape Verde). Using bootstrapped response-based imputation modeling (BRIM), we uncover the relationships between facial variation and the effects of sex, genomic ancestry, and a subset of craniofacial candidate genes. The facial effects of these variables are summarized as response-based imputed predictor (RIP) variables, which are validated using self-reported sex, genomic ancestry, and observer-based facial ratings (femininity and proportional ancestry) and judgments (sex and population group). By jointly modeling sex, genomic ancestry, and genotype, the independent effects of particular alleles on facial features can be uncovered. Results on a set of 20 genes showing significant effects on facial features provide support for this approach as a novel means to identify genes affecting normal-range facial features and for approximating the appearance of a face from genetic markers.

  13. Modeling 3D Facial Shape from DNA

    PubMed Central

    Claes, Peter; Liberton, Denise K.; Daniels, Katleen; Rosana, Kerri Matthes; Quillen, Ellen E.; Pearson, Laurel N.; McEvoy, Brian; Bauchet, Marc; Zaidi, Arslan A.; Yao, Wei; Tang, Hua; Barsh, Gregory S.; Absher, Devin M.; Puts, David A.; Rocha, Jorge; Beleza, Sandra; Pereira, Rinaldo W.; Baynam, Gareth; Suetens, Paul; Vandermeulen, Dirk; Wagner, Jennifer K.; Boster, James S.; Shriver, Mark D.

    2014-01-01

    Human facial diversity is substantial, complex, and largely scientifically unexplained. We used spatially dense quasi-landmarks to measure face shape in population samples with mixed West African and European ancestry from three locations (United States, Brazil, and Cape Verde). Using bootstrapped response-based imputation modeling (BRIM), we uncover the relationships between facial variation and the effects of sex, genomic ancestry, and a subset of craniofacial candidate genes. The facial effects of these variables are summarized as response-based imputed predictor (RIP) variables, which are validated using self-reported sex, genomic ancestry, and observer-based facial ratings (femininity and proportional ancestry) and judgments (sex and population group). By jointly modeling sex, genomic ancestry, and genotype, the independent effects of particular alleles on facial features can be uncovered. Results on a set of 20 genes showing significant effects on facial features provide support for this approach as a novel means to identify genes affecting normal-range facial features and for approximating the appearance of a face from genetic markers. PMID:24651127

  14. Real Time 3D Facial Movement Tracking Using a Monocular Camera

    PubMed Central

    Dong, Yanchao; Wang, Yanming; Yue, Jiguang; Hu, Zhencheng

    2016-01-01

    The paper proposes a robust framework for 3D facial movement tracking in real time using a monocular camera. It is designed to estimate the 3D face pose and local facial animation such as eyelid movement and mouth movement. The framework firstly utilizes the Discriminative Shape Regression method to locate the facial feature points on the 2D image and fuses the 2D data with a 3D face model using Extended Kalman Filter to yield 3D facial movement information. An alternating optimizing strategy is adopted to fit to different persons automatically. Experiments show that the proposed framework could track the 3D facial movement across various poses and illumination conditions. Given the real face scale the framework could track the eyelid with an error of 1 mm and mouth with an error of 2 mm. The tracking result is reliable for expression analysis or mental state inference. PMID:27463714

  15. Facial-paralysis diagnostic system based on 3D reconstruction

    NASA Astrophysics Data System (ADS)

    Khairunnisaa, Aida; Basah, Shafriza Nisha; Yazid, Haniza; Basri, Hassrizal Hassan; Yaacob, Sazali; Chin, Lim Chee

    2015-05-01

    The diagnostic process of facial paralysis requires qualitative assessment for classification and treatment planning. This results in inconsistent assessments that can potentially affect treatment planning. We developed a facial-paralysis diagnostic system based on 3D reconstruction of RGB and depth data using a standard structured-light camera - the Kinect 360 - and an implementation of Active Appearance Models (AAM). We also propose a quantitative assessment of facial paralysis based on a triangular model. In this paper, we report on the design and development process, including preliminary experimental results. Our preliminary experimental results demonstrate the feasibility of our quantitative assessment system for diagnosing facial paralysis.

  16. Quasi-Facial Communication for Online Learning Using 3D Modeling Techniques

    ERIC Educational Resources Information Center

    Wang, Yushun; Zhuang, Yueting

    2008-01-01

    Online interaction with 3D facial animation is an alternative way of face-to-face communication for distance education. 3D facial modeling is essential for virtual educational environments establishment. This article presents a novel 3D facial modeling solution that facilitates quasi-facial communication for online learning. Our algorithm builds…

  17. Holistic facial expression classification

    NASA Astrophysics Data System (ADS)

    Ghent, John; McDonald, J.

    2005-06-01

    This paper details a procedure for classifying facial expressions, a growing and relatively new problem within computer vision. One of the fundamental problems when classifying facial expressions in previous approaches is the lack of a consistent method of measuring expression. This paper addresses this problem through the computation of the Facial Expression Shape Model (FESM). This statistical model of facial expression is based on an anatomical analysis of facial expression called the Facial Action Coding System (FACS). We use the term Action Unit (AU) to describe a movement of one or more muscles of the face, and all expressions can be described using the AUs defined by FACS. The shape model is calculated by marking the face with 122 landmark points. We use Principal Component Analysis (PCA) to analyse how the landmark points move with respect to each other and to lower the dimensionality of the problem. Using the FESM in conjunction with Support Vector Machines (SVM), we classify facial expressions. SVMs are a powerful machine learning technique based on optimisation theory. This project is largely concerned with the statistical models, machine learning techniques, and psychological tools used in the classification of facial expression. This holistic approach to expression classification provides a means of interacting with a computer that is a significant step forward in human-computer interaction.
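
    The FESM pipeline described above (122 landmark points, PCA for dimensionality reduction, SVM classification) can be approximated with a few lines of scikit-learn; the synthetic landmark vectors and class count below are placeholders, not the authors' data.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Each sample: 122 (x, y) landmark points flattened to a 244-vector,
# label: expression / action-unit class.  Data here is synthetic.
rng = np.random.default_rng(4)
X = rng.standard_normal((300, 244))
y = rng.integers(0, 6, size=300)

model = make_pipeline(PCA(n_components=0.95),   # keep 95% of shape variance
                      SVC(kernel='rbf', C=1.0))
print(cross_val_score(model, X, y, cv=5).mean())
```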

  18. Facial expression and sarcasm.

    PubMed

    Rockwell, P

    2001-08-01

    This study examined facial expression in the presentation of sarcasm. 60 responses (sarcastic responses = 30, nonsarcastic responses = 30) from 40 different speakers were coded by two trained coders. Expressions in three facial areas--eyebrow, eyes, and mouth--were evaluated. Only movement in the mouth area significantly differentiated ratings of sarcasm from nonsarcasm.

  19. Assessment of some problematic factors in facial image identification using a 2D/3D superimposition technique.

    PubMed

    Atsuchi, Masaru; Tsuji, Akiko; Usumoto, Yosuke; Yoshino, Mineo; Ikeda, Noriaki

    2013-09-01

    The number of criminal cases requiring facial image identification of a suspect has been increasing because surveillance cameras are installed throughout cities and intercoms with recording functions are installed in homes. In this study, we aimed to analyze the usefulness of a 2D/3D facial image superimposition system for image identification when facial aging, facial expression, and twins are taken into consideration. As a result, the mean values of the average distances calculated from the 16 anatomical landmarks between the 3D facial images of the 50s group and the 2D facial images of the 20s, 30s, and 40s groups were 2.6, 2.3, and 2.2 mm, respectively (facial aging). The mean values of the average distances calculated from 12 anatomical landmarks between the 3D normal facial images and four emotional expressions were 4.9 (laughter), 2.9 (anger), 2.9 (sadness), and 3.6 mm (surprise), respectively (facial expressions). The average distance obtained from 11 anatomical landmarks between images of the same person in a twin pair was 1.1 mm, while the average distance between the different persons in a twin pair was 2.0 mm (twins). Facial image identification using the 2D/3D facial image superimposition system demonstrated adequate statistical power and identified individuals with high accuracy, suggesting its usefulness. However, as computer technology for video image processing and superimposition progresses, it remains necessary to stay familiar with the underlying morphology and anatomy.
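
    A hedged sketch of the superimposition metric quoted above: rigidly align two corresponding landmark sets (a Kabsch least-squares fit) and report the mean inter-landmark distance. The actual 2D/3D superimposition system is not described in enough detail to reproduce, so both landmark sets are treated here as 3D point sets in millimetres.

```python
import numpy as np

def rigid_align(A, B):
    """Least-squares rotation + translation mapping landmark set A onto B (Kabsch)."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return (R @ (A - ca).T).T + cb

def mean_landmark_distance(A, B):
    """Average Euclidean distance between corresponding landmarks after alignment."""
    aligned = rigid_align(A, B)
    return np.linalg.norm(aligned - B, axis=1).mean()

rng = np.random.default_rng(5)
face_a = rng.uniform(0, 100, size=(16, 3))                 # 16 anatomical landmarks (mm)
face_b = face_a + rng.normal(0, 2.0, size=face_a.shape)    # same face, other session
print(mean_landmark_distance(face_a, face_b))
```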

  20. An optical real-time 3D measurement for analysis of facial shape and movement

    NASA Astrophysics Data System (ADS)

    Zhang, Qican; Su, Xianyu; Chen, Wenjing; Cao, Yiping; Xiang, Liqun

    2003-12-01

    Optical non-contact 3-D shape measurement provides a novel and useful tool for the analysis of facial shape and movement in pre-surgical and post-surgical routine checks. In this article we present a system that allows precise 3-D visualization of the patient's face before and after craniofacial surgery. We discuss real-time 3-D image capture and processing, and the 3-D phase-unwrapping method used to recover complex shape deformation during movement of the mouth. The results of real-time measurement of facial shape and movement will be helpful in achieving better outcomes in plastic surgery.

  1. 3D FaceCam: a fast and accurate 3D facial imaging device for biometrics applications

    NASA Astrophysics Data System (ADS)

    Geng, Jason; Zhuang, Ping; May, Patrick; Yi, Steven; Tunnell, David

    2004-08-01

    Human faces are fundamentally three-dimensional (3D) objects, and each face has its unique 3D geometric profile. The 3D geometric features of a human face can be used, together with its 2D texture, for rapid and accurate face recognition purposes. Due to the lack of low-cost and robust 3D sensors and effective 3D facial recognition (FR) algorithms, almost all existing FR systems use 2D face images. Genex has developed 3D solutions that overcome the inherent problems in 2D while also addressing limitations in other 3D alternatives. One important aspect of our solution is a unique 3D camera (the 3D FaceCam) that combines multiple imaging sensors within a single compact device to provide instantaneous, ear-to-ear coverage of a human face. This 3D camera uses three high-resolution CCD sensors and a color encoded pattern projection system. The RGB color information from each pixel is used to compute the range data and generate an accurate 3D surface map. The imaging system uses no moving parts and combines multiple 3D views to provide detailed and complete 3D coverage of the entire face. Images are captured within a fraction of a second and full-frame 3D data is produced within a few seconds. The described method provides much better data coverage and accuracy in areas with sharp features or fine details (such as the nose and eyes). Using this 3D data, we have been able to demonstrate that a 3D approach can significantly improve the performance of facial recognition. We have conducted tests in which we have varied the lighting conditions and angle of image acquisition in the "field." These tests have shown that the matching results are significantly improved when enrolling a 3D image rather than a single 2D image. With its 3D solutions, Genex is working toward unlocking the promise of powerful 3D FR and transferring FR from a lab technology into a real-world biometric solution.

  2. Analyzing the relevance of shape descriptors in automated recognition of facial gestures in 3D images

    NASA Astrophysics Data System (ADS)

    Rodriguez A., Julian S.; Prieto, Flavio

    2013-03-01

    The present document shows and explains the results of analyzing shape descriptors (DESIRE and Spherical Spin Image) for facial recognition in 3D images. DESIRE is a descriptor composed of depth images, silhouettes, and rays extended from a polygonal mesh, whereas the Spherical Spin Image (SSI) associated with a polygonal mesh point is a 2D histogram built from neighboring points using position information that captures features of the local shape. The database used contains images of facial expressions, which were recognized with an average accuracy of 88.16% using a neural network and 91.11% with a Bayesian classifier in the case of the first descriptor; in contrast, the second descriptor achieved average recognition rates of only 32% and 23.6% with the same classifiers, respectively.

  3. 3D face recognition under expressions, occlusions, and pose variations.

    PubMed

    Drira, Hassen; Ben Amor, Boulbaba; Srivastava, Anuj; Daoudi, Mohamed; Slama, Rim

    2013-09-01

    We propose a novel geometric framework for analyzing 3D faces, with the specific goals of comparing, matching, and averaging their shapes. Here we represent facial surfaces by radial curves emanating from the nose tips and use elastic shape analysis of these curves to develop a Riemannian framework for analyzing shapes of full facial surfaces. This representation, along with the elastic Riemannian metric, seems natural for measuring facial deformations and is robust to challenges such as large facial expressions (especially those with open mouths), large pose variations, missing parts, and partial occlusions due to glasses, hair, and so on. This framework is shown to be promising from both--empirical and theoretical--perspectives. In terms of the empirical evaluation, our results match or improve upon the state-of-the-art methods on three prominent databases: FRGCv2, GavabDB, and Bosphorus, each posing a different type of challenge. From a theoretical perspective, this framework allows for formal statistical inferences, such as the estimation of missing facial parts using PCA on tangent spaces and computing average shapes.

  4. Anthropological facial approximation in three dimensions (AFA3D): computer-assisted estimation of the facial morphology using geometric morphometrics.

    PubMed

    Guyomarc'h, Pierre; Dutailly, Bruno; Charton, Jérôme; Santos, Frédéric; Desbarats, Pascal; Coqueugniot, Hélène

    2014-11-01

    This study presents Anthropological Facial Approximation in Three Dimensions (AFA3D), a new computerized method for estimating face shape based on computed tomography (CT) scans of 500 French individuals. Facial soft tissue depths are estimated based on age, sex, corpulence, and craniometrics, and projected using reference planes to obtain the global facial appearance. Position and shape of the eyes, nose, mouth, and ears are inferred from cranial landmarks through geometric morphometrics. The 100 estimated cutaneous landmarks are then used to warp a generic face to the target facial approximation. A validation by re-sampling on a subsample demonstrated an average accuracy of c. 4 mm for the overall face. The resulting approximation is an objective probable facial shape, but is also synthetic (i.e., without texture), and therefore needs to be enhanced artistically prior to its use in forensic cases. AFA3D, integrated in the TIVMI software, is available freely for further testing.

  5. Retinotopy of facial expression adaptation.

    PubMed

    Matsumiya, Kazumichi

    2014-01-01

    The face aftereffect (FAE; the illusion of faces after adaptation to a face) has been reported to occur without retinal overlap between adaptor and test, but recent studies revealed that the FAE is not constant across all test locations, which suggests that the FAE is also retinotopic. However, it remains unclear whether the characteristic of the retinotopy of the FAE for one facial aspect is the same as that of the FAE for another facial aspect. In the research reported here, an examination of the retinotopy of the FAE for facial expression indicated that the facial expression aftereffect occurs without retinal overlap between adaptor and test, and depends on the retinal distance between them. Furthermore, the results indicate that, although dependence of the FAE on adaptation-test distance is similar between facial expression and facial identity, the FAE for facial identity is larger than that for facial expression when a test face is presented in the opposite hemifield. On the basis of these results, I discuss adaptation mechanisms underlying facial expression processing and facial identity processing for the retinotopy of the FAE.

  6. [Prosopagnosia and facial expression recognition].

    PubMed

    Koyama, Shinichi

    2014-04-01

    This paper reviews clinical neuropsychological studies that have indicated that the recognition of a person's identity and the recognition of facial expressions are processed by different cortical and subcortical areas of the brain. The fusiform gyrus, especially the right fusiform gyrus, plays an important role in the recognition of identity. The superior temporal sulcus, amygdala, and medial frontal cortex play important roles in facial-expression recognition. Both facial recognition and facial-expression recognition are highly intellectual processes that involve several regions of the brain.

  7. Spontaneous Facial Mimicry in Response to Dynamic Facial Expressions

    ERIC Educational Resources Information Center

    Sato, Wataru; Yoshikawa, Sakiko

    2007-01-01

    Based on previous neuroscientific evidence indicating activation of the mirror neuron system in response to dynamic facial actions, we hypothesized that facial mimicry would occur while subjects viewed dynamic facial expressions. To test this hypothesis, dynamic/static facial expressions of anger/happiness were presented using computer-morphing…

  8. Digital 3D facial reconstruction of George Washington

    NASA Astrophysics Data System (ADS)

    Razdan, Anshuman; Schwartz, Jeff; Tocheri, Mathew; Hansford, Dianne

    2006-02-01

    PRISM is a focal point of interdisciplinary research in geometric modeling, computer graphics, and visualization at Arizona State University. Many projects in the last ten years have involved laser scanning, geometric modeling, and feature extraction from data such as archaeological vessels, bones, and human faces. This paper gives a brief overview of a recently completed project on the 3D reconstruction of George Washington (GW). The project brought together forensic anthropologists, digital artists, and computer scientists in the 3D digital reconstruction of GW at ages 57, 45, and 19, including detailed heads and bodies. Although many other scanning projects, such as the Michelangelo project, have successfully captured fine details via laser scanning, our project took it a step further, i.e., predicting what the individual depicted in the sculpture might have looked like in both later and earlier years, specifically developing a process to account for reverse aging. Our base data were GW's face mask at the Morgan Library and Houdon's bust of GW at Mount Vernon, both made when GW was 53. Additionally, we scanned the statue at the Capitol in Richmond, VA, as well as various dentures and other items. Other measurements came from clothing and even portraits of GW. The digital GWs were then milled in high-density foam for a studio to complete the work. These will be unveiled at the opening of the new education center at Mt Vernon in fall 2006.

  9. Evolution of 3D surface imaging systems in facial plastic surgery.

    PubMed

    Tzou, Chieh-Han John; Frey, Manfred

    2011-11-01

    Recent advancements in computer technologies have propelled the development of 3D imaging systems. 3D surface-imaging is taking surgeons to a new level of communication with patients; moreover, it provides quick and standardized image documentation. This article recounts the chronologic evolution of 3D surface imaging, and summarizes the current status of today's facial surface capturing technology. This article also discusses current 3D surface imaging hardware and software, and their different techniques, technologies, and scientific validation, which provides surgeons with the background information necessary for evaluating the systems and knowledge about the systems they might incorporate into their own practice.

  10. Measuring facial expression of emotion

    PubMed Central

    Wolf, Karsten

    2015-01-01

    Research into emotions has increased in recent decades, especially on the subject of recognition of emotions. However, studies of the facial expressions of emotion were compromised by technical problems with visible video analysis and electromyography in experimental settings. These have only recently been overcome. There have been new developments in the field of automated computerized facial recognition; allowing real-time identification of facial expression in social environments. This review addresses three approaches to measuring facial expression of emotion and describes their specific contributions to understanding emotion in the healthy population and in persons with mental illness. Despite recent progress, studies on human emotions have been hindered by the lack of consensus on an emotion theory suited to examining the dynamic aspects of emotion and its expression. Studying expression of emotion in patients with mental health conditions for diagnostic and therapeutic purposes will profit from theoretical and methodological progress. PMID:26869846

  11. Measuring facial expression of emotion.

    PubMed

    Wolf, Karsten

    2015-12-01

    Research into emotions has increased in recent decades, especially on the subject of recognition of emotions. However, studies of the facial expressions of emotion were compromised by technical problems with visible video analysis and electromyography in experimental settings. These have only recently been overcome. There have been new developments in the field of automated computerized facial recognition; allowing real-time identification of facial expression in social environments. This review addresses three approaches to measuring facial expression of emotion and describes their specific contributions to understanding emotion in the healthy population and in persons with mental illness. Despite recent progress, studies on human emotions have been hindered by the lack of consensus on an emotion theory suited to examining the dynamic aspects of emotion and its expression. Studying expression of emotion in patients with mental health conditions for diagnostic and therapeutic purposes will profit from theoretical and methodological progress.

  12. Accurate landmarking of three-dimensional facial data in the presence of facial expressions and occlusions using a three-dimensional statistical facial feature model.

    PubMed

    Zhao, Xi; Dellandréa, Emmanuel; Chen, Liming; Kakadiaris, Ioannis A

    2011-10-01

    Three-dimensional face landmarking aims at automatically localizing facial landmarks and has a wide range of applications (e.g., face recognition, face tracking, and facial expression analysis). Existing methods assume neutral facial expressions and unoccluded faces. In this paper, we propose a general learning-based framework for reliable landmark localization on 3-D facial data under challenging conditions (i.e., facial expressions and occlusions). Our approach relies on a statistical model, called 3-D statistical facial feature model, which learns both the global variations in configurational relationships between landmarks and the local variations of texture and geometry around each landmark. Based on this model, we further propose an occlusion classifier and a fitting algorithm. Results from experiments on three publicly available 3-D face databases (FRGC, BU-3-DFE, and Bosphorus) demonstrate the effectiveness of our approach, in terms of landmarking accuracy and robustness, in the presence of expressions and occlusions.

  13. Development of a 3-D data acquisition system for human facial imaging

    NASA Astrophysics Data System (ADS)

    Marshall, Stephen J.; Rixon, R. C.; Whiteford, Don N.; Wells, Peter J.; Powell, S. J.

    1990-07-01

    While preparing to conduct human facial surgery, it is necessary to visualise the effects of proposed surgery on the patient's appearance. This visualisation is of great benefit to both surgeon and patient, and has traditionally been achieved by the manual manipulation of photographs. Technological developments in the areas of computer-aided design and optical sensing now make it possible to construct a computer-based imaging system which can simulate the effects of facial surgery on patients. A collaborative project with the aim of constructing a prototype facial imaging system is under way between the National Engineering Laboratory and St George's Hospital. The proposed system will acquire, display and manipulate 3-dimensional facial images of patients requiring facial surgery. The feasibility of using two NEL developed optical measurement methods for 3-D facial data acquisition had been established by their successful application to the measurement of dummy heads. The two optical measurement systems, the NEL Auto-MATE moire fringe contouring system and the NEL STRIPE laser scanning triangulation system, were further developed to adapt them for use in facial imaging and additional tests carried out in which emphasis was placed on the use of live human subjects. The knowledge gained in the execution of the tests enabled the selection of the most suitable of the two methods studied for facial data acquisition. A full description of the methods and equipment used in the study will be given. Additionally, work on the effects of the quality and quantity of measurement data on the facial image will be described. Finally, the question of how best to provide display and manipulation of the facial images will be addressed.

  14. Genetic and Environmental Contributions to Facial Morphological Variation: A 3D Population-Based Twin Study

    PubMed Central

    Djordjevic, Jelena; Zhurov, Alexei I.; Richmond, Stephen

    2016-01-01

    Introduction: Facial phenotype is influenced by genes and environment; however, little is known about their relative contributions to normal facial morphology. The aim of this study was to assess the relative genetic and environmental contributions to facial morphological variation using a three-dimensional (3D) population-based approach and the classical twin study design. Materials and Methods: 3D facial images of 1380 female twins from the TwinsUK Registry database were used. All faces were landmarked, by manually placing 37 landmark points, and Procrustes registered. Three groups of traits were extracted and analysed: 19 principal components (uPC) and 23 principal components (sPC), derived from the unscaled and scaled landmark configurations respectively, and 1275 linear distances measured between 51 landmarks (37 manually identified and 14 automatically calculated). The intraclass correlation coefficients, rMZ and rDZ, broad-sense heritability (h2), common (c2) and unique (e2) environment contributions were calculated for all traits for the monozygotic (MZ) and dizygotic (DZ) twins. Results: Heritability of 13 uPC and 17 sPC reached statistical significance, with h2 ranging from 38.8% to 78.5% in the former and 30.5% to 84.8% in the latter group. Also, 1222 distances showed evidence of genetic control. Common environment contributed to one PC in both groups and 53 linear distances (4.3%). Unique environment contributed to 17 uPC and 20 sPC and 1245 distances. Conclusions: Genetic factors can explain more than 70% of the phenotypic facial variation in facial size, nose (width, prominence and height), lips prominence and inter-ocular distance. A few traits have shown potential dominant genetic influence: the prominence and height of the nose, the lower lip prominence in relation to the chin and upper lip philtrum length. Environmental contribution to facial variation seems to be the greatest for the mandibular ramus height and horizontal facial asymmetry. PMID
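
    The h2/c2/e2 decomposition reported above can be illustrated with Falconer's classical approximations from MZ and DZ intraclass correlations; the study itself may have used a full maximum-likelihood variance-components model, and the twin-pair data below are synthetic.

```python
import numpy as np

def icc(pairs):
    """Intraclass correlation for twin pairs, shape (n_pairs, 2), via double entry."""
    a = np.concatenate([pairs[:, 0], pairs[:, 1]])
    b = np.concatenate([pairs[:, 1], pairs[:, 0]])
    return np.corrcoef(a, b)[0, 1]

def falconer(mz_pairs, dz_pairs):
    """Falconer's approximations: h2 = 2(rMZ - rDZ), c2 = 2rDZ - rMZ, e2 = 1 - rMZ."""
    r_mz, r_dz = icc(mz_pairs), icc(dz_pairs)
    return 2 * (r_mz - r_dz), 2 * r_dz - r_mz, 1 - r_mz

rng = np.random.default_rng(6)
# synthetic trait (e.g. one principal component of face shape) for 300 MZ and 300 DZ pairs
g_mz = rng.standard_normal(300)
mz = np.stack([g_mz + 0.5 * rng.standard_normal(300) for _ in range(2)], axis=1)
g_dz = rng.standard_normal(300)
dz = np.stack([0.7 * g_dz + 0.85 * rng.standard_normal(300) for _ in range(2)], axis=1)
print(falconer(mz, dz))   # approximate (h2, c2, e2) for the synthetic trait
```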

  15. Cortical control of facial expression.

    PubMed

    Müri, René M

    2016-06-01

    The present Review deals with the motor control of facial expressions in humans. Facial expressions are a central part of human communication. Emotional face expressions have a crucial role in human nonverbal behavior, allowing a rapid transfer of information between individuals. Facial expressions can be either voluntarily or emotionally controlled. Recent studies in nonhuman primates and humans have revealed that the motor control of facial expressions has a distributed neural representation. At least five cortical regions on the medial and lateral aspects of each hemisphere are involved: the primary motor cortex, the ventral lateral premotor cortex, the supplementary motor area on the medial wall, and the rostral and caudal cingulate cortex. The results of studies in humans and nonhuman primates suggest that the innervation of the face is bilaterally controlled for the upper part and mainly contralaterally controlled for the lower part. Furthermore, the primary motor cortex, the ventral lateral premotor cortex, and the supplementary motor area are essential for the voluntary control of facial expressions. In contrast, the cingulate cortical areas are important for emotional expression, because they receive input from different structures of the limbic system.

  16. Compound facial expressions of emotion

    PubMed Central

    Du, Shichuan; Tao, Yong; Martinez, Aleix M.

    2014-01-01

    Understanding the different categories of facial expressions of emotion regularly used by us is essential to gain insights into human cognition and affect as well as for the design of computational models and perceptual interfaces. Past research on facial expressions of emotion has focused on the study of six basic categories—happiness, surprise, anger, sadness, fear, and disgust. However, many more facial expressions of emotion exist and are used regularly by humans. This paper describes an important group of expressions, which we call compound emotion categories. Compound emotions are those that can be constructed by combining basic component categories to create new ones. For instance, happily surprised and angrily surprised are two distinct compound emotion categories. The present work defines 21 distinct emotion categories. Sample images of their facial expressions were collected from 230 human subjects. A Facial Action Coding System analysis shows the production of these 21 categories is different but consistent with the subordinate categories they represent (e.g., a happily surprised expression combines muscle movements observed in happiness and surprised). We show that these differences are sufficient to distinguish between the 21 defined categories. We then use a computational model of face perception to demonstrate that most of these categories are also visually discriminable from one another. PMID:24706770

  17. A 2D range Hausdorff approach to 3D facial recognition.

    SciTech Connect

    Koch, Mark William; Russ, Trina Denise; Little, Charles Quentin

    2004-11-01

    This paper presents a 3D facial recognition algorithm based on the Hausdorff distance metric. The standard 3D formulation of the Hausdorff matching algorithm has been modified to operate on a 2D range image, enabling a reduction in computation from O(N²) to O(N) without large storage requirements. The Hausdorff distance is known for its robustness to data outliers and inconsistent data between two data sets, making it a suitable choice for dealing with the inherent problems in many 3D datasets due to sensor noise and object self-occlusion. For optimal performance, the algorithm assumes a good initial alignment between probe and template datasets. However, to minimize the error between two faces, the alignment can be iteratively refined. Results from the algorithm are presented using 3D face images from the Face Recognition Grand Challenge database version 1.0.
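
    A sketch of how a directed Hausdorff distance becomes cheap on a 2D range image: precompute a Euclidean distance transform of the template's valid pixels once, then each probe pixel needs only a single lookup. For brevity this toy version works purely in the pixel domain and ignores the depth channel, which the paper's formulation would take into account.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def directed_hausdorff_2d(mask_probe, mask_template):
    """Directed Hausdorff distance (in pixels) from probe pixels to template pixels.

    distance_transform_edt gives, for every pixel, the distance to the nearest
    template pixel, so each probe pixel costs only one array lookup.
    """
    dist_to_template = distance_transform_edt(~mask_template)
    return dist_to_template[mask_probe].max()

h, w = 120, 100
template = np.zeros((h, w), dtype=bool); template[30:90, 25:75] = True   # template face pixels
probe = np.zeros((h, w), dtype=bool);    probe[34:94, 27:77] = True      # shifted probe face
print(directed_hausdorff_2d(probe, template))
```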

  18. The use of 3D planning in facial surgery: preliminary observations.

    PubMed

    Hoarau, R; Zweifel, D; Simon, C; Broome, M

    2014-12-01

    Three-dimensional (3D) planning is becoming a more commonly used tool in maxillofacial surgery. At first used only virtually, 3D planning now also enables the creation of useful intraoperative aids such as cutting guides, which decrease the operative difficulty. In our center, we have used 3D planning in various domains of facial surgery and have investigated the advantages of this technique. We have also addressed the difficulties associated with its use. 3D planning increases the accuracy of reconstructive surgery, decreases operating time, whilst maintaining excellent esthetic results. However, its use is restricted to osseous reconstruction at this stage and once planning has been undertaken, it cannot be reversed or altered intraoperatively. Despite the attractive nature of this new tool, its uses and practicalities must be further evaluated. In particular, cost-effectiveness, hospital stay, and patient perceived benefits must be assessed.

  19. 3-D finite element modelling of facial soft tissue and preliminary application in orthodontics.

    PubMed

    Chen, Si; Lou, Hangdi; Guo, Liang; Rong, Qiguo; Liu, Yi; Xu, Tian-Min

    2012-01-01

    Prediction of soft tissue aesthetics is important for achieving an optimal outcome in orthodontic treatment planning. Previously, applicable procedures were mainly restricted to 2-D profile prediction. In this study, a generic 3-D finite element (FE) model of the craniofacial soft and hard tissue was constructed, and individualisation of the generic model based on cone beam CT data and mathematical transformation was investigated. The result indicated that a patient-specific 3-D facial FE model including different layers of soft tissue could be obtained through mathematical model transformation. The average deviation between the transformed model and the real reconstructed one was 0.47 ± 0.77 mm and 0.75 ± 0.84 mm for soft and hard tissue, respectively. With boundary conditions defined according to the treatment plan, such an FE model could be used to predict the result of orthodontic treatment on facial soft tissue.

  20. Interaction between facial expression and color

    PubMed Central

    Nakajima, Kae; Minami, Tetsuto; Nakauchi, Shigeki

    2017-01-01

    Facial color varies depending on emotional state, and emotions are often described in relation to facial color. In this study, we investigated whether the recognition of facial expressions was affected by facial color and vice versa. In the facial expression task, expression morph continua were employed: fear-anger and sadness-happiness. The morphed faces were presented in three different facial colors (bluish, neutral, and reddish color). Participants identified a facial expression between the two endpoints (e.g., fear or anger) regardless of its facial color. The results showed that the perception of facial expression was influenced by facial color. In the fear-anger morphs, intermediate morphs of reddish-colored and bluish colored faces had a greater tendency to be identified as angry faces and fearful faces, respectively. In the facial color task, two bluish-to-reddish colored face continua were presented in three different facial expressions (fear-neutral-anger and sadness-neutral-happiness). Participants judged whether the facial color was reddish or bluish regardless of its expression. The faces with sad expression tended to be identified as more bluish, while the faces with other expressions did not affect facial color judgment. These results suggest that an interactive but disproportionate relationship exists between facial color and expression in face perception. PMID:28117349

  1. An Automatic 3D Facial Landmarking Algorithm Using 2D Gabor Wavelets.

    PubMed

    de Jong, Markus A; Wollstein, Andreas; Ruff, Clifford; Dunaway, David; Hysi, Pirro; Spector, Tim; Fan Liu; Niessen, Wiro; Koudstaal, Maarten J; Kayser, Manfred; Wolvius, Eppo B; Bohringer, Stefan

    2016-02-01

    In this paper, we present a novel approach to automatic 3D facial landmarking using 2D Gabor wavelets. Our algorithm considers the face to be a surface and uses map projections to derive 2D features from raw data. Extracted features include texture, relief map, and transformations thereof. We extend an established 2D landmarking method for simultaneous evaluation of these data. The method is validated by performing landmarking experiments on two data sets using 21 landmarks and compared with an active shape model implementation. On average, landmarking error for our method was 1.9 mm, whereas the active shape model resulted in an average landmarking error of 2.3 mm. A second study investigating facial shape heritability in related individuals concludes that automatic landmarking is on par with manual landmarking for some landmarks. Our algorithm can be trained in 30 min to automatically landmark 3D facial data sets of any size, and allows for fast and robust landmarking of 3D faces.
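
    To illustrate the 2D Gabor machinery mentioned above, the sketch below builds a small real-valued Gabor filter bank and samples the responses ("jet") at one pixel of a projected relief map; the kernel sizes, wavelengths, and orientation counts are illustrative, not the paper's settings.

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(size, wavelength, theta, sigma, phase=0.0):
    """Real 2D Gabor kernel: a plane wave windowed by a Gaussian."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    gauss = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return gauss * np.cos(2 * np.pi * xr / wavelength + phase)

def gabor_jet(image, row, col, wavelengths=(4, 8, 16), n_orientations=6):
    """Filter responses at one pixel across scales and orientations."""
    jet = []
    for lam in wavelengths:
        for k in range(n_orientations):
            kern = gabor_kernel(size=4 * lam + 1, wavelength=lam,
                                theta=k * np.pi / n_orientations, sigma=lam)
            resp = fftconvolve(image, kern, mode='same')
            jet.append(resp[row, col])
    return np.array(jet)

rng = np.random.default_rng(8)
relief_map = rng.standard_normal((128, 128))   # stand-in for a projected relief map
print(gabor_jet(relief_map, 64, 64).shape)     # 3 scales x 6 orientations = 18 responses
```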

  2. Detecting Genetic Association of Common Human Facial Morphological Variation Using High Density 3D Image Registration

    PubMed Central

    Hu, Sile; Zhou, Hang; Guo, Jing; Jin, Li; Tang, Kun

    2013-01-01

    Human facial morphology is a combination of many complex traits. Little is known about the genetic basis of common facial morphological variation. Existing association studies have largely used simple landmark-distances as surrogates for the complex morphological phenotypes of the face. However, this can result in decreased statistical power and unclear inference of shape changes. In this study, we applied a new image registration approach that automatically identified the salient landmarks and aligned the sample faces using high density pixel points. Based on this high density registration, three different phenotype data schemes were used to test the association between the common facial morphological variation and 10 candidate SNPs, and their performances were compared. The first scheme used traditional landmark-distances; the second relied on the geometric analysis of 15 landmarks and the third used geometric analysis of a dense registration of ∼30,000 3D points. We found that the two geometric approaches were highly consistent in their detection of morphological changes. The geometric method using dense registration further demonstrated superiority in the fine inference of shape changes and 3D face modeling. Several candidate SNPs showed potential associations with different facial features. In particular, one SNP, a known risk factor of non-syndromic cleft lips/palates, rs642961 in the IRF6 gene, was validated to strongly predict normal lip shape variation in female Han Chinese. This study further demonstrated that dense face registration may substantially improve the detection and characterization of genetic association in common facial variation. PMID:24339768

  3. Mapping and Manipulating Facial Expression

    ERIC Educational Resources Information Center

    Theobald, Barry-John; Matthews, Iain; Mangini, Michael; Spies, Jeffrey R.; Brick, Timothy R.; Cohn, Jeffrey F.; Boker, Steven M.

    2009-01-01

    Nonverbal visual cues accompany speech to supplement the meaning of spoken words, signify emotional state, indicate position in discourse, and provide back-channel feedback. This visual information includes head movements, facial expressions and body gestures. In this article we describe techniques for manipulating both verbal and nonverbal facial…

  4. Using Facial Symmetry to Handle Pose Variations in Real-World 3D Face Recognition.

    PubMed

    Passalis, Georgios; Perakis, Panagiotis; Theoharis, Theoharis; Kakadiaris, Ioannis A

    2011-10-01

    The uncontrolled conditions of real-world biometric applications pose a great challenge to any face recognition approach. The unconstrained acquisition of data from uncooperative subjects may result in facial scans with significant pose variations along the yaw axis. Such pose variations can cause extensive occlusions, resulting in missing data. In this paper, a novel 3D face recognition method is proposed that uses facial symmetry to handle pose variations. It employs an automatic landmark detector that estimates pose and detects occluded areas for each facial scan. Subsequently, an Annotated Face Model is registered and fitted to the scan. During fitting, facial symmetry is used to overcome the challenges of missing data. The result is a pose invariant geometry image. Unlike existing methods that require frontal scans, the proposed method performs comparisons among interpose scans using a wavelet-based biometric signature. It is suitable for real-world applications as it only requires half of the face to be visible to the sensor. The proposed method was evaluated using databases from the University of Notre Dame and the University of Houston that, to the best of our knowledge, include the most challenging pose variations publicly available. The average rank-one recognition rate of the proposed method in these databases was 83.7 percent.

  5. Realistic facial animation generation based on facial expression mapping

    NASA Astrophysics Data System (ADS)

    Yu, Hui; Garrod, Oliver; Jack, Rachael; Schyns, Philippe

    2014-01-01

    Facial expressions reflect the internal emotional states of a character or responses to social communications. Though much effort has been made to generate realistic facial expressions, it remains a challenging topic due to human beings' sensitivity to subtle facial movements. In this paper, we present a method for facial animation generation which reflects true facial muscle movements with high fidelity. An intermediate model space is introduced to transfer captured static AU peak frames, based on FACS, to the conformed target face. Dynamic parameters derived using a psychophysics method are then integrated to generate facial animation, which is assumed to represent the natural correlation of multiple AUs. Finally, the animation sequence in the intermediate model space is mapped to the target face to produce the final animation.

  6. 3D animation of facial plastic surgery based on computer graphics

    NASA Astrophysics Data System (ADS)

    Zhang, Zonghua; Zhao, Yan

    2013-12-01

    More and more people, especially women, wish to be more beautiful than ever. To some extent this has become attainable, since facial plastic surgery has been practised since the early 20th century and even earlier, when doctors dealt mainly with facial war injuries. However, the post-operative result is not always satisfying, since no animation of the outcome can be shown to patients beforehand. In this paper, by combining facial plastic surgery and computer graphics, a novel method for simulating the post-operative appearance is presented, demonstrating the modified face from different viewpoints. The 3D human face data are obtained using 3D fringe pattern imaging systems and CT imaging systems and then converted into the STL (STereo Lithography) file format. An STL file is made up of small 3D triangular primitives. The triangular mesh can be reconstructed by using a hash function. The frontmost triangles (those nearest in depth) are then selected from the full set of triangles using a ray-casting technique. During simulation, mesh deformation is based on this front triangular mesh and deforms the region of interest rather than individual control points. Experiments on a face model show that the proposed 3D facial plastic surgery animation can effectively demonstrate the simulated post-operative appearance.
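
    The mesh reconstruction step mentioned above (rebuilding connectivity from STL data with a hash function) amounts to de-duplicating the per-triangle vertices: STL stores each triangle's three corners explicitly, so identical coordinates are hashed to shared vertex indices. The sketch below uses a plain Python dictionary as the hash table; the paper's exact data structures are not specified, so this is only a plausible reading.

      # Sketch: rebuild an indexed triangle mesh from STL-style triangle soup.
      # STL repeats each shared vertex once per triangle; hashing the rounded
      # coordinates merges duplicates into a vertex list plus face indices.
      def index_mesh(triangles, decimals=6):
          """triangles: iterable of ((x,y,z), (x,y,z), (x,y,z)) tuples.
          Returns (vertices, faces), with faces holding indices into vertices."""
          vertex_index = {}          # hash table: rounded coordinate -> index
          vertices, faces = [], []
          for tri in triangles:
              face = []
              for v in tri:
                  key = tuple(round(c, decimals) for c in v)
                  if key not in vertex_index:
                      vertex_index[key] = len(vertices)
                      vertices.append(key)
                  face.append(vertex_index[key])
              faces.append(tuple(face))
          return vertices, faces

      # two triangles sharing an edge: 6 corners collapse to 4 unique vertices
      soup = [((0, 0, 0), (1, 0, 0), (0, 1, 0)),
              ((1, 0, 0), (1, 1, 0), (0, 1, 0))]
      verts, faces = index_mesh(soup)
      print(len(verts), faces)       # 4 [(0, 1, 2), (1, 3, 2)]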

  7. Recognizing Facial Expressions Automatically from Video

    NASA Astrophysics Data System (ADS)

    Shan, Caifeng; Braspenning, Ralph

    Facial expressions, resulting from movements of the facial muscles, are the changes in the face that occur in response to a person's internal emotional states, intentions, or social communications. The study of facial expressions has a considerable history. Darwin [22], who argued that all mammals show emotions reliably in their faces, was the first to describe in detail the specific facial expressions associated with emotions in animals and humans. Since then, facial expression analysis has been an area of great research interest for behavioral scientists [27]. Psychological studies [48, 3] suggest that facial expressions, as the main mode of nonverbal communication, play a vital role in human face-to-face communication. For illustration, we show some examples of facial expressions in Fig. 1.

  8. A coordinate-free method for the analysis of 3D facial change

    NASA Astrophysics Data System (ADS)

    Mao, Zhili; Siebert, Jan Paul; Cockshott, W. Paul; Ayoub, Ashraf Farouk

    2004-05-01

    Euclidean Distance Matrix Analysis (EDMA) is widely held to be the most important coordinate-free method for analyzing landmarks. It has been used extensively in medical anthropometry and has already produced many useful results. Unfortunately, this method conveys little information about the surface on which the landmarks are located and is therefore inadequate for the 3D analysis of surface anatomy. Here we present a new inverse surface flatness metric: the ratio between the geodesic and the Euclidean inter-landmark distances. Because this metric reflects only one aspect of three-dimensional shape, namely surface flatness, we combine it with the Euclidean distance to investigate 3D facial change. The goal of this investigation is to analyze three-dimensional facial change in terms of bilateral symmetry as encoded both by surface flatness and by geometric configuration. Our initial study, based on 25 models of surgically managed children (unilateral cleft lip repair) and 40 models of control children at the age of 2 years, indicates that the faces of the surgically managed group were significantly less symmetric than those of the control group in terms of surface flatness, geometric configuration and overall symmetry.
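
    The inverse surface flatness metric described above is the ratio of the geodesic to the Euclidean distance between a landmark pair: 1 on a flat patch, larger as the surface between the landmarks curves. The sketch below approximates the geodesic by shortest paths along mesh edges using scipy; the authors' geodesic computation may differ.

      # Sketch: surface flatness as the ratio of (edge-graph) geodesic distance
      # to straight-line Euclidean distance between two mesh vertices.
      import numpy as np
      from scipy.sparse import csr_matrix
      from scipy.sparse.csgraph import dijkstra

      def flatness_ratio(vertices, edges, i, j):
          """vertices: (N, 3) array; edges: list of (a, b) vertex index pairs.
          Approximates the geodesic by the shortest path along mesh edges."""
          n = len(vertices)
          a = np.array([e[0] for e in edges])
          b = np.array([e[1] for e in edges])
          w = np.linalg.norm(vertices[a] - vertices[b], axis=1)
          graph = csr_matrix((np.r_[w, w], (np.r_[a, b], np.r_[b, a])), shape=(n, n))
          geodesic = dijkstra(graph, indices=i)[j]
          euclidean = np.linalg.norm(vertices[i] - vertices[j])
          return geodesic / euclidean      # 1.0 on a flat patch, > 1 with curvature

      # toy example: a "tent" forces the surface path over the ridge vertex
      verts = np.array([[0, 0, 0], [1, 0, 1], [2, 0, 0]], dtype=float)
      print(flatness_ratio(verts, [(0, 1), (1, 2)], 0, 2))   # ~1.414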

  9. Neuroticism Delays Detection of Facial Expressions.

    PubMed

    Sawada, Reiko; Sato, Wataru; Uono, Shota; Kochiyama, Takanori; Kubota, Yasutaka; Yoshimura, Sayaka; Toichi, Motomi

    2016-01-01

    The rapid detection of emotional signals from facial expressions is fundamental for human social interaction. The personality factor of neuroticism modulates the processing of various types of emotional facial expressions; however, its effect on the detection of emotional facial expressions remains unclear. In this study, participants with high- and low-neuroticism scores performed a visual search task to detect normal expressions of anger and happiness, and their anti-expressions within a crowd of neutral expressions. Anti-expressions contained an amount of visual changes equivalent to those found in normal expressions compared to neutral expressions, but they were usually recognized as neutral expressions. Subjective emotional ratings in response to each facial expression stimulus were also obtained. Participants with high-neuroticism showed an overall delay in the detection of target facial expressions compared to participants with low-neuroticism. Additionally, the high-neuroticism group showed higher levels of arousal to facial expressions compared to the low-neuroticism group. These data suggest that neuroticism modulates the detection of emotional facial expressions in healthy participants; high levels of neuroticism delay overall detection of facial expressions and enhance emotional arousal in response to facial expressions.

  10. Neuroticism Delays Detection of Facial Expressions

    PubMed Central

    Sawada, Reiko; Sato, Wataru; Uono, Shota; Kochiyama, Takanori; Kubota, Yasutaka; Yoshimura, Sayaka; Toichi, Motomi

    2016-01-01

    The rapid detection of emotional signals from facial expressions is fundamental for human social interaction. The personality factor of neuroticism modulates the processing of various types of emotional facial expressions; however, its effect on the detection of emotional facial expressions remains unclear. In this study, participants with high- and low-neuroticism scores performed a visual search task to detect normal expressions of anger and happiness, and their anti-expressions within a crowd of neutral expressions. Anti-expressions contained an amount of visual changes equivalent to those found in normal expressions compared to neutral expressions, but they were usually recognized as neutral expressions. Subjective emotional ratings in response to each facial expression stimulus were also obtained. Participants with high-neuroticism showed an overall delay in the detection of target facial expressions compared to participants with low-neuroticism. Additionally, the high-neuroticism group showed higher levels of arousal to facial expressions compared to the low-neuroticism group. These data suggest that neuroticism modulates the detection of emotional facial expressions in healthy participants; high levels of neuroticism delay overall detection of facial expressions and enhance emotional arousal in response to facial expressions. PMID:27073904

  11. A new method for automatic tracking of facial landmarks in 3D motion captured images (4D).

    PubMed

    Al-Anezi, T; Khambay, B; Peng, M J; O'Leary, E; Ju, X; Ayoub, A

    2013-01-01

    The aim of this study was to validate the automatic tracking of facial landmarks in 3D image sequences. 32 subjects (16 males and 16 females) aged 18-35 years were recruited. 23 anthropometric landmarks were marked on the face of each subject with non-permanent ink using a 0.5 mm pen. The subjects were asked to perform three facial animations (maximal smile, lip purse and cheek puff) from the rest position. Each animation was captured by the 3D imaging system. A single operator manually digitised the landmarks on the 3D facial models, and their locations were compared with those of the automatically tracked landmarks. To investigate the accuracy of manual digitisation, the operator re-digitised the same set of 3D images of 10 subjects (5 male and 5 female) at a 1-month interval. The discrepancies in x, y and z coordinates between the 3D positions of the manually digitised landmarks and those of the automatically tracked facial landmarks were within 0.17 mm. The mean distance between the manually digitised and the automatically tracked landmarks was within 0.55 mm. The automatic tracking of facial landmarks demonstrated satisfactory accuracy, which would facilitate the analysis of dynamic motion during facial animations.
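
    The reported discrepancies are, in essence, per-axis differences and 3D distances between the manually digitised and the automatically tracked landmark positions. A small numpy sketch of how such errors can be summarised is shown below; the arrays are hypothetical placeholders, not the study data.

      # Sketch: summarise agreement between manually digitised and automatically
      # tracked 3D landmarks (hypothetical arrays, not the study's data).
      import numpy as np

      def tracking_errors(manual, tracked):
          """manual, tracked: (n_frames, n_landmarks, 3) coordinates in mm."""
          diff = tracked - manual
          per_axis = np.abs(diff).mean(axis=(0, 1))       # mean |dx|, |dy|, |dz|
          distances = np.linalg.norm(diff, axis=2)        # 3D error per landmark
          return per_axis, distances.mean()

      rng = np.random.default_rng(2)
      manual = rng.normal(size=(30, 23, 3))
      tracked = manual + rng.normal(scale=0.2, size=manual.shape)
      per_axis, mean_dist = tracking_errors(manual, tracked)
      print(per_axis, mean_dist)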

  12. Measured symmetry of facial 3D shape and perceived facial symmetry and attractiveness before and after orthognathic surgery.

    PubMed

    Ostwald, Julia; Berssenbrügge, Philipp; Dirksen, Dieter; Runte, Christoph; Wermker, Kai; Kleinheinz, Johannes; Jung, Susanne

    2015-05-01

    One aim of cranio-maxillo-facial surgery is to achieve an esthetic appearance. Do facial symmetry and attractiveness correlate? How are they affected by surgery? In this study, the faces of patients undergoing orthognathic surgery were captured and analyzed with respect to their symmetry. A total of 25 patients' faces were measured three-dimensionally by an optical sensor using the fringe projection technique before and after orthognathic surgery. Based on these data, an asymmetry index was calculated for each case. To gather subjective ratings, each face was presented to 100 independent test subjects in a 3D rotation sequence, who were asked to rate the symmetry and the attractiveness of the faces. It was analyzed to what extent the ratings correlate with the measured asymmetry indices and whether pre- and post-surgical data differ. The measured asymmetry indices correlate significantly with the subjective ratings of both items. The measured symmetry as well as the rated symmetry and attractiveness increased on average after surgery; the increase in the ratings was statistically significant. A larger enhancement of symmetry was achieved in faces that were strongly asymmetric before surgery than in relatively symmetric faces.
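
    An asymmetry index of this general kind can be computed by mirroring paired landmarks (or surface points) across the midsagittal plane and averaging the residual distances to their contralateral partners. The abstract does not give the exact formula used, so the sketch below is a generic version of the idea.

      # Sketch: a generic facial asymmetry index. Paired bilateral landmarks are
      # mirrored across the x = 0 midsagittal plane; the index is the mean
      # distance between each mirrored landmark and its contralateral partner.
      # (Generic illustration; the study's exact index is not specified here.)
      import numpy as np

      def asymmetry_index(left, right):
          """left, right: (n, 3) arrays of paired landmarks, midline at x = 0."""
          mirrored_left = left * np.array([-1.0, 1.0, 1.0])
          return np.linalg.norm(mirrored_left - right, axis=1).mean()

      left = np.array([[ 30.0, 10.0, 5.0], [ 25.0, -20.0, 8.0]])    # e.g. eye/mouth corners
      right = np.array([[-29.0, 11.0, 5.0], [-26.0, -19.0, 7.0]])
      print(asymmetry_index(left, right))   # in mm; 0 for a perfectly symmetric face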

  13. Man-machine collaboration using facial expressions

    NASA Astrophysics Data System (ADS)

    Dai, Ying; Katahera, S.; Cai, D.

    2002-09-01

    Realizing flexible man-machine collaboration requires an understanding of facial expressions and gestures. We propose a hierarchical recognition approach for understanding human emotions. In this approach, facial AFs (action features) are first extracted and recognized using histograms of optical flow. Then, based on the facial AFs, facial expressions are classified into two classes, one representing positive emotions and the other negative ones. Expressions within the positive or the negative class are subsequently classified into the more complex emotions they reveal. Finally, a system architecture for coordinating the recognition of facial action features and facial expressions in man-machine collaboration is proposed.

  14. Establishing point correspondence of 3D faces via sparse facial deformable model.

    PubMed

    Pan, Gang; Zhang, Xiaobo; Wang, Yueming; Hu, Zhenfang; Zheng, Xiaoxiang; Wu, Zhaohui

    2013-11-01

    Establishing a dense vertex-to-vertex anthropometric correspondence between 3D faces is an important and fundamental problem in 3D face research, and it can contribute to most applications of 3D faces. This paper proposes a sparse facial deformable model to achieve this task automatically. For an input 3D face, the basic idea is to generate a new 3D face that has the same mesh topology as a reference face, a shape highly similar to the input face, and vertices that correspond to those of the reference face in an anthropometric sense. Two constraints, 1) a shape constraint and 2) a correspondence constraint, are modeled in our method to satisfy these three requirements. The shape constraint is solved by a novel face deformation approach in which a normal-ray scheme is integrated with the closest-vertex scheme to preserve high-curvature shapes during deformation. The correspondence constraint is based on the assumption that if the vertices on 3D faces are corresponded, their shape signals lie on a manifold and each face signal can be represented sparsely by a few typical items in a dictionary. The dictionary can be learnt well and contains the distribution information of the corresponded vertices, and the correspondence information can be conveyed to the sparse representation of the generated 3D face. Thus, a patch-based sparse representation is proposed as the correspondence constraint. By solving the correspondence constraint iteratively, the vertices of the generated face are gradually adjusted to the correspondence positions. At the early iteration steps, smaller sparsity thresholds are set, which yield larger representation errors but better globally corresponded vertices. At later steps, relatively larger sparsity thresholds are used to encode local shapes. By this method, the vertices in the new face approach the correct positions progressively until the final global correspondence is reached. Our method is automatic, and manual work is needed only in the training procedure
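
    The progressive correspondence step rests on standard sparse coding: each face (or patch) signal is approximated by a few dictionary atoms, with a small sparsity budget early on (coarse but globally consistent fit) and a larger budget later (finer local shape). The sketch below shows that generic mechanism with scikit-learn's orthogonal matching pursuit on synthetic data; it is not the authors' trained dictionary or deformation scheme.

      # Sketch: sparse coding with a progressively relaxed sparsity budget, as a
      # generic stand-in for the correspondence constraint described above.
      # Synthetic dictionary and signal; not the authors' trained model.
      import numpy as np
      from sklearn.linear_model import OrthogonalMatchingPursuit

      rng = np.random.default_rng(3)
      n_dims, n_atoms = 60, 200
      dictionary = rng.normal(size=(n_dims, n_atoms))      # columns are atoms
      true_coef = np.zeros(n_atoms)
      true_coef[rng.choice(n_atoms, size=8, replace=False)] = rng.normal(size=8)
      signal = dictionary @ true_coef                      # an 8-sparse signal

      for k in (2, 4, 8):                  # sparsity budget grows across iterations
          omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False)
          omp.fit(dictionary, signal)
          err = np.linalg.norm(signal - dictionary @ omp.coef_) / np.linalg.norm(signal)
          print(f"k={k}: relative reconstruction error {err:.3f}")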

  15. 3D-Ultrasonography for evaluation of facial muscles in patients with chronic facial palsy or defective healing: a pilot study

    PubMed Central

    2014-01-01

    Background While standardized methods are established for examining the pathway from the motor cortex to the peripheral nerve in patients with facial palsy, a reliable method for evaluating the facial muscles in patients with long-term palsy for therapy planning is lacking. Methods A 3D ultrasonographic (US) acquisition system, driven by a motorized linear mover combined with a conventional US probe, was used to acquire 3D data sets of several facial muscles on both sides of the face in a healthy subject and seven patients with different types of unilateral degenerative facial nerve lesions. Results The US results were correlated with the duration of palsy and the electromyography results. Consistent 3D US-based volumetry through bilateral comparison was feasible for parts of the frontalis muscle, orbicularis oculi muscle, depressor anguli oris muscle, depressor labii inferioris muscle, and mentalis muscle. With the exception of the frontalis muscle, the facial muscle volumes were much smaller on the palsy side (minimum: 3% for the depressor labii inferioris muscle) than on the healthy side in patients with a severe facial nerve lesion. In contrast, the frontalis muscles did not show a side difference. In the two patients with defective healing after spontaneous regeneration, a decrease in muscle volume was not seen. Synkinesis and hyperkinesis were, moreover, associated with muscle hypertrophy on the palsy side compared with the healthy side. Conclusion 3D ultrasonography seems to be a promising tool for the regional and quantitative evaluation of facial muscles in patients with facial palsy receiving facial reconstructive surgery or conservative treatment. PMID:24782657

  16. 4-D facial expression recognition by learning geometric deformations.

    PubMed

    Ben Amor, Boulbaba; Drira, Hassen; Berretti, Stefano; Daoudi, Mohamed; Srivastava, Anuj

    2014-12-01

    In this paper, we present an automatic approach for facial expression recognition from 3-D video sequences. In the proposed solution, the 3-D faces are represented by collections of radial curves and a Riemannian shape analysis is applied to effectively quantify the deformations induced by the facial expressions in a given subsequence of 3-D frames. This is obtained from the dense scalar field, which denotes the shooting directions of the geodesic paths constructed between pairs of corresponding radial curves of two faces. As the resulting dense scalar fields show a high dimensionality, Linear Discriminant Analysis (LDA) transformation is applied to the dense feature space. Two methods are then used for classification: 1) 3-D motion extraction with temporal Hidden Markov model (HMM) and 2) mean deformation capturing with random forest. While a dynamic HMM on the features is trained in the first approach, the second one computes mean deformations under a window and applies multiclass random forest. Both of the proposed classification schemes on the scalar fields showed comparable results and outperformed earlier studies on facial expression recognition from 3-D video sequences.
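
    The second classification scheme described above (dimensionality reduction of the dense scalar fields with LDA, followed by a multiclass random forest on windowed mean deformations) can be sketched generically with scikit-learn. The feature arrays below are random placeholders, not real dense scalar fields, so the accuracy printed is only chance level.

      # Sketch: LDA dimensionality reduction followed by a multiclass random
      # forest, mirroring the second classification scheme described above.
      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import train_test_split
      from sklearn.pipeline import make_pipeline

      rng = np.random.default_rng(4)
      n_sequences, n_features, n_classes = 300, 500, 6
      X = rng.normal(size=(n_sequences, n_features))    # windowed mean deformations
      y = rng.integers(0, n_classes, size=n_sequences)  # expression labels

      clf = make_pipeline(LinearDiscriminantAnalysis(n_components=n_classes - 1),
                          RandomForestClassifier(n_estimators=200, random_state=0))
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
      clf.fit(X_tr, y_tr)
      print("accuracy on random placeholder data:", clf.score(X_te, y_te))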

  17. Social Use of Facial Expressions in Hylobatids

    PubMed Central

    Scheider, Linda; Waller, Bridget M.; Oña, Leonardo; Burrows, Anne M.; Liebal, Katja

    2016-01-01

    Non-human primates use various communicative means in interactions with others. While primate gestures are commonly considered to be intentionally and flexibly used signals, facial expressions are often referred to as inflexible, automatic expressions of affective internal states. To explore whether and how non-human primates use facial expressions in specific communicative interactions, we studied five species of small apes (gibbons) by employing a newly established Facial Action Coding System for hylobatid species (GibbonFACS). We found that, despite individuals often being in close proximity to each other, in social (as opposed to non-social) contexts the duration of facial expressions was significantly longer when gibbons were facing another individual compared to non-facing situations. Social contexts included grooming, agonistic interactions and play, whereas non-social contexts included resting and self-grooming. Additionally, gibbons used facial expressions while facing another individual more often in social contexts than in non-social contexts, where facial expressions were produced regardless of the attentional state of the partner. Also, facial expressions were more likely to be ‘responded to’ by the partner’s facial expressions when individuals were facing one another than when they were not. Taken together, our results indicate that gibbons use their facial expressions differentially depending on the social context and are able to use them in a directed way in communicative interactions with other conspecifics. PMID:26978660

  18. Facial recognition software success rates for the identification of 3D surface reconstructed facial images: implications for patient privacy and security.

    PubMed

    Mazura, Jan C; Juluru, Krishna; Chen, Joseph J; Morgan, Tara A; John, Majnu; Siegel, Eliot L

    2012-06-01

    Image de-identification has focused on the removal of textual protected health information (PHI). Surface reconstructions of the face have the potential to reveal a subject's identity even when textual PHI is absent. This study assessed the ability of a computer application to match research subjects' 3D facial reconstructions with conventional photographs of their face. In a prospective study, 29 subjects underwent CT scans of the head and had frontal digital photographs of their face taken. Facial reconstructions of each CT dataset were generated on a 3D workstation. In phase 1, photographs of the 29 subjects undergoing CT scans were added to a digital directory and tested for recognition using facial recognition software. In phases 2-4, additional photographs were added in groups of 50 to increase the pool of possible matches and the test for recognition was repeated. As an internal control, photographs of all subjects were tested for recognition against an identical photograph. Of 3D reconstructions, 27.5% were matched correctly to corresponding photographs (95% upper CL, 40.1%). All study subject photographs were matched correctly to identical photographs (95% lower CL, 88.6%). Of 3D reconstructions, 96.6% were recognized simply as a face by the software (95% lower CL, 83.5%). Facial recognition software has the potential to recognize features on 3D CT surface reconstructions and match these with photographs, with implications for PHI.

  19. Human Facial Expressions as Adaptations: Evolutionary Questions in Facial Expression Research

    PubMed Central

    SCHMIDT, KAREN L.; COHN, JEFFREY F.

    2007-01-01

    The importance of the face in social interaction and social intelligence is widely recognized in anthropology. Yet the adaptive functions of human facial expression remain largely unknown. An evolutionary model of human facial expression as behavioral adaptation can be constructed, given the current knowledge of the phenotypic variation, ecological contexts, and fitness consequences of facial behavior. Studies of facial expression are available, but results are not typically framed in an evolutionary perspective. This review identifies the relevant physical phenomena of facial expression and integrates the study of this behavior with the anthropological study of communication and sociality in general. Anthropological issues with relevance to the evolutionary study of facial expression include: facial expressions as coordinated, stereotyped behavioral phenotypes, the unique contexts and functions of different facial expressions, the relationship of facial expression to speech, the value of facial expressions as signals, and the relationship of facial expression to social intelligence in humans and in nonhuman primates. Human smiling is used as an example of adaptation, and testable hypotheses concerning the human smile, as well as other expressions, are proposed. PMID:11786989

  20. Mutual information-based facial expression recognition

    NASA Astrophysics Data System (ADS)

    Hazar, Mliki; Hammami, Mohamed; Hanêne, Ben-Abdallah

    2013-12-01

    This paper introduces a novel low-computation discriminative region representation for the expression analysis task. The proposed approach relies on studies in psychology which show that most of the regions descriptive of and responsible for facial expression are located around certain face parts. The contribution of this work lies in a new approach that supports automatic facial expression recognition based on automatic region selection. The region selection step aims to select the descriptive regions responsible for facial expression and is performed using a mutual information (MI) technique. For facial feature extraction, we applied Local Binary Patterns (LBP) to the gradient image to encode salient micro-patterns of facial expressions. Experimental studies have shown that using discriminative regions provides better results than using the whole face region, while reducing the feature vector dimension.
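
    Both building blocks named above are standard: LBP histograms computed on the gradient image, and region (block) selection by mutual information with the expression label. The sketch below combines scikit-image's local_binary_pattern with scikit-learn's mutual_info_classif on synthetic images; the block grid, parameters and data are illustrative assumptions, not the paper's configuration.

      # Sketch: per-block uniform-LBP histograms on the gradient image, then block
      # (region) selection by mutual information with the expression label.
      # Synthetic images; grid layout and parameters are illustrative only.
      import numpy as np
      from skimage.feature import local_binary_pattern
      from sklearn.feature_selection import mutual_info_classif

      def block_lbp_features(image, grid=4, P=8, R=1.0):
          """Per-block uniform-LBP histograms of the gradient magnitude."""
          gy, gx = np.gradient(image.astype(float))
          grad = np.hypot(gx, gy)
          grad = (255 * grad / (grad.max() + 1e-9)).astype(np.uint8)
          lbp = local_binary_pattern(grad, P, R, method="uniform")
          h, w = lbp.shape
          feats = []
          for i in range(grid):
              for j in range(grid):
                  block = lbp[i * h // grid:(i + 1) * h // grid,
                              j * w // grid:(j + 1) * w // grid]
                  hist, _ = np.histogram(block, bins=P + 2, range=(0, P + 2))
                  feats.append(hist / hist.sum())
          return np.concatenate(feats)              # grid*grid blocks, P+2 bins each

      rng = np.random.default_rng(5)
      images = rng.random(size=(100, 64, 64))       # stand-in face crops
      labels = rng.integers(0, 6, size=100)         # expression labels
      X = np.array([block_lbp_features(im) for im in images])
      mi = mutual_info_classif(X, labels, random_state=0)
      block_scores = mi.reshape(16, 10).sum(axis=1)  # total MI per block of the 4x4 grid
      print("most discriminative blocks:", np.argsort(block_scores)[::-1][:5])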

  1. Facial expressions recognition with an emotion expressive robotic head

    NASA Astrophysics Data System (ADS)

    Doroftei, I.; Adascalitei, F.; Lefeber, D.; Vanderborght, B.; Doroftei, I. A.

    2016-08-01

    The purpose of this study is to present the preliminary steps in facial expression recognition with a new version of an expressive social robotic head. In a first phase, our main goal was to reach a minimum level of emotional expressiveness, in order to obtain nonverbal communication between the robot and humans, by building six basic facial expressions. To evaluate the facial expressions, the robot was used in preliminary user studies among children and adults.

  2. 3D confocal reconstruction of gene expression in mouse.

    PubMed

    Hecksher-Sørensen, J; Sharpe, J

    2001-01-01

    Three-dimensional computer reconstructions of gene expression data will become a valuable tool in biomedical research in the near future. However, at present the process of converting in situ expression data into 3D models is a highly specialized and time-consuming procedure. Here we present a method which allows rapid reconstruction of whole-mount in situ data from mouse embryos. Mid-gestation embryos were stained with the alkaline phosphatase substrate Fast Red, which can be detected using confocal laser scanning microscopy (CLSM), and cut into 70 microm sections. Each section was then scanned and digitally reconstructed. Using this method it took two days to section, digitize and reconstruct the full expression pattern of Shh in an E9.5 embryo (a 3D model of this embryo can be seen at genex.hgu.mrc.ac.uk). Additionally we demonstrate that this technique allows gene expression to be studied at the single cell level in intact tissue.

  3. A framework for the recognition of 3D faces and expressions

    NASA Astrophysics Data System (ADS)

    Li, Chao; Barreto, Armando

    2006-04-01

    Face recognition technology has been a focus in both academia and industry for the last couple of years because of its wide potential applications and its importance in meeting the security needs of today's world. Most of the systems developed are based on 2D face recognition technology, which uses pictures for data processing. With the development of 3D imaging technology, 3D face recognition emerges as an alternative to overcome the difficulties inherent in 2D face recognition, i.e. sensitivity to illumination conditions and to the orientation of the subject. But 3D face recognition still needs to tackle the problem of deformation of facial geometry that results from the expression changes of a subject. To deal with this issue, a 3D face recognition framework is proposed in this paper. It is composed of three subsystems: an expression recognition system, a system for the identification of faces with expression, and a neutral face recognition system. A system for the recognition of faces with one type of expression (happiness) and neutral faces was implemented and tested on a database of 30 subjects. The results proved the feasibility of this framework.

  4. Horses discriminate between facial expressions of conspecifics

    PubMed Central

    Wathan, J.; Proops, L.; Grounds, K.; McComb, K.

    2016-01-01

    In humans, facial expressions are rich sources of social information and have an important role in regulating social interactions. However, the extent to which this is true in non-human animals, and particularly in non-primates, remains largely unknown. Therefore we tested whether domestic horses (Equus caballus) could discriminate between facial expressions of their conspecifics captured in different contexts, and whether viewing these expressions elicited functionally relevant reactions. Horses were more likely to approach photographic stimuli displaying facial expressions associated with positive attention and relaxation, and to avoid stimuli displaying an expression associated with aggression. Moreover, differing patterns of heart rate changes were observed in response to viewing the positive anticipation and agonistic facial expressions. These results indicate that horses spontaneously discriminate between photographs of unknown conspecifics portraying different facial expressions, showing appropriate behavioural and physiological responses. Thus horses, an animal far-removed from the primate lineage, also have the ability to use facial expressions as a means of gaining social information and potentially regulating social interactions. PMID:27995958

  5. Visualization and Analysis of 3D Gene Expression Data

    SciTech Connect

    Bethel, E. Wes; Rubel, Oliver; Weber, Gunther H.; Hamann, Bernd; Hagen, Hans

    2007-10-25

    Recent methods for extracting precise measurements of spatial gene expression patterns from three-dimensional (3D) image data open the way for new analysis of the complex gene regulatory networks controlling animal development. To support analysis of this novel and highly complex data we developed PointCloudXplore (PCX), an integrated visualization framework that supports dedicated multi-modal, physical and information visualization views along with algorithms to aid in analyzing the relationships between gene expression levels. Using PCX, we helped our science stakeholders to address many questions in 3D gene expression research, e.g., to objectively define spatial pattern boundaries and temporal profiles of genes and to analyze how mRNA patterns are controlled by their regulatory transcription factors.

  6. The identification of unfolding facial expressions.

    PubMed

    Fiorentini, Chiara; Schmidt, Susanna; Viviani, Paolo

    2012-01-01

    We asked whether the identification of emotional facial expressions (FEs) involves the simultaneous perception of the facial configuration or the detection of emotion-specific diagnostic cues. We recorded at high speed (500 frames s-1) the unfolding of the FE in five actors, each expressing six emotions (anger, surprise, happiness, disgust, fear, sadness). Recordings were coded every 10 frames (20 ms of real time) with the Facial Action Coding System (FACS, Ekman et al 2002, Salt Lake City, UT: Research Nexus eBook) to identify the facial actions contributing to each expression, and their intensity changes over time. Recordings were shown in slow motion (1/20 of recording speed) to one hundred observers in a forced-choice identification task. Participants were asked to identify the emotion during the presentation as soon as they felt confident to do so. Responses were recorded along with the associated response times (RTs). The RT probability density functions for both correct and incorrect responses were correlated with the facial activity during the presentation. There were systematic correlations between facial activities, response probabilities, and RT peaks, and significant differences in RT distributions for correct and incorrect answers. The results show that a reliable response is possible long before the full FE configuration is reached. This suggests that identification is reached by integrating in time individual diagnostic facial actions, and does not require perceiving the full apex configuration.

  7. The MPI Facial Expression Database — A Validated Database of Emotional and Conversational Facial Expressions

    PubMed Central

    Kaulard, Kathrin; Cunningham, Douglas W.; Bülthoff, Heinrich H.; Wallraven, Christian

    2012-01-01

    The ability to communicate is one of the core aspects of human life. For this, we use not only verbal but also nonverbal signals of remarkable complexity. Among the latter, facial expressions belong to the most important information channels. Despite the large variety of facial expressions we use in daily life, research on facial expressions has so far mostly focused on the emotional aspect. Consequently, most databases of facial expressions available to the research community also include only emotional expressions, neglecting the largely unexplored aspect of conversational expressions. To fill this gap, we present the MPI facial expression database, which contains a large variety of natural emotional and conversational expressions. The database contains 55 different facial expressions performed by 19 German participants. Expressions were elicited with the help of a method-acting protocol, which guarantees both well-defined and natural facial expressions. The method-acting protocol was based on every-day scenarios, which are used to define the necessary context information for each expression. All facial expressions are available in three repetitions, in two intensities, as well as from three different camera angles. A detailed frame annotation is provided, from which a dynamic and a static version of the database have been created. In addition to describing the database in detail, we also present the results of an experiment with two conditions that serve to validate the context scenarios as well as the naturalness and recognizability of the video sequences. Our results provide clear evidence that conversational expressions can be recognized surprisingly well from visual information alone. The MPI facial expression database will enable researchers from different research fields (including the perceptual and cognitive sciences, but also affective computing, as well as computer vision) to investigate the processing of a wider range of natural facial expressions

  8. A Challenge to Classical Facial Proportionality Studies: Conventional Profile and 3D Photography Versus Silhouettes

    DTIC Science & Technology

    2012-04-01

    having excellent occlusion meant that facial esthetics had to be sacrificed. The idea of natural dentition stating teeth must fit together regardless...teeth on facial esthetics has become the primary objective of orthodontic treatment. Changes in the dentition affect the soft tissue which in turn

  9. Facial mimicry is not necessary to recognize emotion: Facial expression recognition by people with Moebius syndrome.

    PubMed

    Rives Bogart, Kathleen; Matsumoto, David

    2010-01-01

    According to the reverse simulation model of embodied simulation theory, we recognize others' emotions by subtly mimicking their expressions, which allows us to feel the corresponding emotion through facial feedback. Previous studies examining whether facial mimicry is necessary for facial expression recognition were limited by potentially distracting manipulations intended to artificially restrict facial mimicry or very small samples of people with facial paralysis. We addressed these limitations by collecting the largest sample to date of people with Moebius syndrome, a condition characterized by congenital bilateral facial paralysis. In this Internet-based study, 37 adults with Moebius syndrome and 37 matched control participants completed a facial expression recognition task. People with Moebius syndrome did not differ from the control group or normative data in emotion recognition accuracy, and accuracy was not related to extent of ability to produce facial expressions. Our results do not support the hypothesis that reverse simulation with facial mimicry is necessary for facial expression recognition.

  10. Generating Facial Expressions Using an Anatomically Accurate Biomechanical Model.

    PubMed

    Wu, Tim; Hung, Alice; Mithraratne, Kumar

    2014-11-01

    This paper presents a computational framework for modelling the biomechanics of human facial expressions. A detailed high-order (Cubic-Hermite) finite element model of the human head was constructed using anatomical data segmented from magnetic resonance images. The model includes a superficial soft-tissue continuum consisting of skin, the subcutaneous layer and the superficial musculo-aponeurotic system. Embedded within this continuum mesh are 20 pairs of facial muscles that drive facial expressions. These muscles were treated as transversely isotropic, and their anatomical geometries and fibre orientations were accurately depicted. In order to capture the relative composition of muscles and fat, material heterogeneity was also introduced into the model. Complex contact interactions between the lips, the eyelids, and between the superficial soft-tissue continuum and the deep rigid skeletal bones were also computed. In addition, this paper investigates the impact of incorporating material heterogeneity and contact interactions, which are often neglected in similar studies. Four facial expressions were simulated using the developed model and the results were compared with surface data obtained from a 3D structured-light scanner. Predicted expressions showed good agreement with the experimental data.

  11. Three-dimensional face recognition in the presence of facial expressions: an annotated deformable model approach.

    PubMed

    Kakadiaris, Ioannis A; Passalis, Georgios; Toderici, George; Murtuza, Mohammed N; Lu, Yunliang; Karampatziakis, Nikos; Theoharis, Theoharis

    2007-04-01

    In this paper, we present the computational tools and a hardware prototype for 3D face recognition. Full automation is provided through the use of advanced multistage alignment algorithms, resilience to facial expressions by employing a deformable model framework, and invariance to 3D capture devices through suitable preprocessing steps. In addition, scalability in both time and space is achieved by converting 3D facial scans into compact metadata. We present our results on the largest known, and now publicly available, Face Recognition Grand Challenge 3D facial database consisting of several thousand scans. To the best of our knowledge, this is the highest performance reported on the FRGC v2 database for the 3D modality.

  12. Facial expression recognition in perceptual color space.

    PubMed

    Lajevardi, Seyed Mehdi; Wu, Hong Ren

    2012-08-01

    This paper introduces a tensor perceptual color framework (TPCF) for facial expression recognition (FER), which is based on information contained in color facial images. The TPCF enables multi-linear image analysis in different color spaces and demonstrates that color components provide additional information for robust FER. Using this framework, the components (in either RGB, YCbCr, CIELab or CIELuv space) of color images are unfolded to two-dimensional (2-D) tensors based on multi-linear algebra and tensor concepts, from which the features are extracted by Log-Gabor filters. The mutual information quotient (MIQ) method is employed for feature selection. These features are classified using a multi-class linear discriminant analysis (LDA) classifier. The effectiveness of color information on FER using low-resolution facial expression images with illumination variations is assessed for performance evaluation. Experimental results demonstrate that color information has significant potential to improve emotion recognition performance due to the complementary characteristics of image textures. Furthermore, the perceptual color spaces (CIELab and CIELuv) are better overall for facial expression recognition than the other color spaces, providing more efficient and robust performance for facial images with illumination variation.

  13. The Effect of Ethnicity on 2D and 3D Frontomaxillary Facial Angle Measurement in the First Trimester

    PubMed Central

    Clarke, Jill

    2013-01-01

    Objectives. To determine the existence and extent of ethnic differences in 2D or 3D fetal frontomaxillary facial angle (FMFA) measurements. Methods. During routine 11–14 weeks nuchal translucency screening undertaken in a private ultrasound practice in Sydney, Australia, 2D images and 3D volumes of the fetal profile were collected from consenting patients. FMFA was measured on a frozen 2D ultrasound image in the appropriate plane and, after a delay of at least 48 hours, was also measured on the reconstructed 3D ultrasound volume offline. Results. Overall 416 patients were included in the study; 220 Caucasian, 108 north Asian, 36 east Asian and 52 south Asian patients. Caucasians had significantly lower median FMFA measurements than Asians in both 2D (2.2°; P < 0.001) and 3D (3.4°; P < 0.001) images. Median 2D measurements were significantly higher than 3D measurements in the Caucasian and south Asian groups (P < 0.001 and P = 0.04), but not in north and east Asian groups (P = 0.08 and P = 0.41). Conclusions. Significant ethnic variations in both 2D and 3D FMFA measurements exist. These differences may indicate the need to establish ethnic-specific reference ranges for both 2D and 3D imaging. PMID:24288543

  14. Shape-based classification of 3D facial data to support 22q11.2DS craniofacial research.

    PubMed

    Wilamowska, Katarzyna; Wu, Jia; Heike, Carrie; Shapiro, Linda

    2012-06-01

    3D imaging systems are used to construct high-resolution meshes of patient's heads that can be analyzed by computer algorithms. Our work starts with such 3D head meshes and produces both global and local descriptors of 3D shape. Since these descriptors are numeric feature vectors, they can be used in both classification and quantification of various different abnormalities. In this paper, we define these descriptors, describe our methodology for constructing them from 3D head meshes, and show through a set of classification experiments involving cases and controls for a genetic disorder called 22q11.2 deletion syndrome that they are suitable for use in craniofacial research studies. The main contributions of this work include: automatic generation of novel global and local data representations, robust automatic placement of anthropometric landmarks, generation of local descriptors for nasal and oral facial features from landmarks, use of local descriptors for predicting various local facial features, and use of global features for 22q11.2DS classification, showing their potential use as descriptors in craniofacial research.

  15. Realistic facial expression of virtual human based on color, sweat, and tears effects.

    PubMed

    Alkawaz, Mohammed Hazim; Basori, Ahmad Hoirul; Mohamad, Dzulkifli; Mohamed, Farhan

    2014-01-01

    Generating extreme appearances, such as sweating when scared, tears when crying, and blushing with anger or happiness, is a key issue in achieving high-quality facial animation. The effects of sweat, tears, and color are integrated into a single animation model to create realistic facial expressions for a 3D avatar. The physical properties of muscles and emotions, as well as fluid properties with sweating and tear initiators, are incorporated. The action units (AUs) of the facial action coding system are merged with autonomous AUs to create expressions including sadness, anger with blushing, happiness with blushing, and fear. Fluid effects such as sweat and tears are simulated using the particle system and smoothed-particle hydrodynamics (SPH) methods, which are combined with the facial animation technique to produce complex facial expressions. The effects of oxygenation on facial skin color appearance are measured using a pulse oximeter system and a 3D skin analyzer. The results show that virtual human facial expressions are enhanced by mimicking actual sweating and tear simulations for all extreme expressions. The proposed method contributes to the development of the facial animation and game industries as well as computer graphics.

  16. Accuracy and precision of the three-dimensional assessment of the facial surface using a 3-D laser scanner.

    PubMed

    Kovacs, L; Zimmermann, A; Brockmann, G; Baurecht, H; Schwenzer-Zimmerer, K; Papadopulos, N A; Papadopoulos, M A; Sader, R; Biemer, E; Zeilhofer, H F

    2006-06-01

    Three-dimensional (3-D) recording of the surface of the human body or anatomical areas has gained importance in many medical specialties. Thus, it is important to determine scanner precision and accuracy in defined medical applications and to establish standards for the recording procedure. Here we evaluated the precision and accuracy of 3-D assessment of the facial area with the Minolta Vivid 910 3D Laser Scanner. We also investigated the influence of factors related to the recording procedure and the processing of scanner data on final results. These factors include lighting, alignment of scanner and object, the examiner, and the software used to convert measurements into virtual images. To assess scanner accuracy, we compared scanner data to those obtained by manual measurements on a dummy. Less than 7% of all results with the scanner method were outside a range of error of 2 mm when compared to corresponding reference measurements. Accuracy, thus, proved to be good enough to satisfy requirements for numerous clinical applications. Moreover, the experiments completed with the dummy yielded valuable information for optimizing recording parameters for best results. Thus, under defined conditions, precision and accuracy of surface models of the human face recorded with the Minolta Vivid 910 3D Scanner presumably can also be enhanced. Future studies will involve verification of our findings using test persons. The current findings indicate that the Minolta Vivid 910 3D Scanner might be used with benefit in medicine when recording the 3-D surface structures of the face.

  17. Facial expression recognition and subthalamic nucleus stimulation

    PubMed Central

    Schroeder, U; Kuehler, A; Hennenlotter, A; Haslinger, B; Tronnier, V; Krause, M; Pfister, R; Sprengelmeyer, R; Lange, K; Ceballos-Baumann, A

    2004-01-01

    Objective: To study the impact of STN stimulation in Parkinson's disease on perception of facial expressions. Results: There was a selective reduction in recognition of angry faces, but not other expressions, during STN stimulation. Conclusions: The findings may have important implications for social adjustment in these patients. PMID:15026519

  18. Biased Facial Expression Interpretation in Shy Children

    ERIC Educational Resources Information Center

    Kokin, Jessica; Younger, Alastair; Gosselin, Pierre; Vaillancourt, Tracy

    2016-01-01

    The relationship between shyness and the interpretations of the facial expressions of others was examined in a sample of 123 children aged 12 to 14 years. Participants viewed faces displaying happiness, fear, anger, disgust, sadness, surprise, as well as a neutral expression, presented on a computer screen. The children identified each expression…

  19. Emotional Empathy and Facial Mimicry for Static and Dynamic Facial Expressions of Fear and Disgust.

    PubMed

    Rymarczyk, Krystyna; Żurawski, Łukasz; Jankowiak-Siuda, Kamila; Szatkowska, Iwona

    2016-01-01

    Facial mimicry is the tendency to imitate the emotional facial expressions of others. Increasing evidence suggests that the perception of dynamic displays leads to enhanced facial mimicry, especially for happiness and anger. However, little is known about the impact of dynamic stimuli on facial mimicry for fear and disgust. To investigate this issue, facial EMG responses were recorded in the corrugator supercilii, levator labii, and lateral frontalis muscles, while participants viewed static (photos) and dynamic (videos) facial emotional expressions. Moreover, we tested whether emotional empathy modulated facial mimicry for emotional facial expressions. In accordance with our predictions, the highly empathic group responded with larger activity in the corrugator supercilii and levator labii muscles. Moreover, dynamic compared to static facial expressions of fear revealed enhanced mimicry in the high-empathic group in the frontalis and corrugator supercilii muscles. In the low-empathic group the facial reactions were not differentiated between fear and disgust for both dynamic and static facial expressions. We conclude that highly empathic subjects are more sensitive in their facial reactions to the facial expressions of fear and disgust compared to low empathetic counterparts. Our data confirms that personal characteristics, i.e., empathy traits as well as modality of the presented stimuli, modulate the strength of facial mimicry. In addition, measures of EMG activity of the levator labii and frontalis muscles may be a useful index of empathic responses of fear and disgust.

  20. Emotional Empathy and Facial Mimicry for Static and Dynamic Facial Expressions of Fear and Disgust

    PubMed Central

    Rymarczyk, Krystyna; Żurawski, Łukasz; Jankowiak-Siuda, Kamila; Szatkowska, Iwona

    2016-01-01

    Facial mimicry is the tendency to imitate the emotional facial expressions of others. Increasing evidence suggests that the perception of dynamic displays leads to enhanced facial mimicry, especially for happiness and anger. However, little is known about the impact of dynamic stimuli on facial mimicry for fear and disgust. To investigate this issue, facial EMG responses were recorded in the corrugator supercilii, levator labii, and lateral frontalis muscles, while participants viewed static (photos) and dynamic (videos) facial emotional expressions. Moreover, we tested whether emotional empathy modulated facial mimicry for emotional facial expressions. In accordance with our predictions, the highly empathic group responded with larger activity in the corrugator supercilii and levator labii muscles. Moreover, dynamic compared to static facial expressions of fear revealed enhanced mimicry in the high-empathic group in the frontalis and corrugator supercilii muscles. In the low-empathic group the facial reactions were not differentiated between fear and disgust for both dynamic and static facial expressions. We conclude that highly empathic subjects are more sensitive in their facial reactions to the facial expressions of fear and disgust compared to low empathetic counterparts. Our data confirms that personal characteristics, i.e., empathy traits as well as modality of the presented stimuli, modulate the strength of facial mimicry. In addition, measures of EMG activity of the levator labii and frontalis muscles may be a useful index of empathic responses of fear and disgust. PMID:27933022

  1. Stereoscopy Amplifies Emotions Elicited by Facial Expressions.

    PubMed

    Hakala, Jussi; Kätsyri, Jari; Häkkinen, Jukka

    2015-12-01

    Mediated facial expressions do not elicit emotions as strongly as real-life facial expressions, possibly due to the low fidelity of pictorial presentations in typical mediation technologies. In the present study, we investigated the extent to which stereoscopy amplifies emotions elicited by images of neutral, angry, and happy facial expressions. The emotional self-reports of positive and negative valence (which were evaluated separately) and arousal of 40 participants were recorded. The magnitude of perceived depth in the stereoscopic images was manipulated by varying the camera base at 15, 40, 65, 90, and 115 mm. The analyses controlled for participants' gender, gender match, emotional empathy, and trait alexithymia. The results indicated that stereoscopy significantly amplified the negative valence and arousal elicited by angry expressions at the most natural (65 mm) camera base, whereas stereoscopy amplified the positive valence elicited by happy expressions in both the narrowed and most natural (15-65 mm) base conditions. Overall, the results indicate that stereoscopy amplifies the emotions elicited by mediated emotional facial expressions when the depth geometry is close to natural. The findings highlight the sensitivity of the visual system to depth and its effect on emotions.

  2. Stereoscopy Amplifies Emotions Elicited by Facial Expressions

    PubMed Central

    Kätsyri, Jari; Häkkinen, Jukka

    2015-01-01

    Mediated facial expressions do not elicit emotions as strongly as real-life facial expressions, possibly due to the low fidelity of pictorial presentations in typical mediation technologies. In the present study, we investigated the extent to which stereoscopy amplifies emotions elicited by images of neutral, angry, and happy facial expressions. The emotional self-reports of positive and negative valence (which were evaluated separately) and arousal of 40 participants were recorded. The magnitude of perceived depth in the stereoscopic images was manipulated by varying the camera base at 15, 40, 65, 90, and 115 mm. The analyses controlled for participants’ gender, gender match, emotional empathy, and trait alexithymia. The results indicated that stereoscopy significantly amplified the negative valence and arousal elicited by angry expressions at the most natural (65 mm) camera base, whereas stereoscopy amplified the positive valence elicited by happy expressions in both the narrowed and most natural (15–65 mm) base conditions. Overall, the results indicate that stereoscopy amplifies the emotions elicited by mediated emotional facial expressions when the depth geometry is close to natural. The findings highlight the sensitivity of the visual system to depth and its effect on emotions. PMID:27551358

  3. Computer-enhanced emotion in facial expressions.

    PubMed Central

    Calder, A J; Young, A W; Rowland, D; Perrett, D I

    1997-01-01

    Benson & Perrett's (1991 b) computer-based caricature procedure was used to alter the positions of anatomical landmarks in photographs of emotional facial expressions with respect to their locations in a reference norm face (e.g. a neutral expression). Exaggerating the differences between an expression and its norm produces caricatured images, whereas reducing the differences produces 'anti-caricatures'. Experiment 1 showed that caricatured (+50% different from neutral) expressions were recognized significantly faster than the veridical (0%, undistorted) expressions. This held for all six basic emotions from the Ekman & Friesen (1976) series, and the effect generalized across different posers. For experiment 2, caricatured (+50%) and anti-caricatured (-50%) images were prepared using two types of reference norm; a neutral-expression norm, which would be optimal if facial expression recognition involves monitoring changes in the positioning of underlying facial muscles, and a perceptually-based norm involving an average of the expressions of six basic emotions (excluding neutral) in the Ekman & Friesen (1976) series. The results showed that the caricatured images were identified significantly faster, and the anti-caricatured images significantly slower, than the veridical expressions. Furthermore, the neutral-expression and average-expression norm caricatures produced the same pattern of results. PMID:9265191
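
    The caricature procedure amounts to moving each landmark away from (or toward) its position in the chosen norm face by a fixed percentage of the difference; the image is then warped to the displaced landmarks. A minimal numpy sketch of the landmark transformation is given below (the warping step is omitted).

      # Sketch: landmark-level caricaturing. Each landmark is displaced along the
      # vector from its norm-face position by a percentage; +50% exaggerates the
      # expression, -50% gives an "anti-caricature". Image warping is omitted.
      import numpy as np

      def caricature(landmarks, norm, percent):
          """landmarks, norm: (n_points, 2) arrays of image coordinates."""
          return norm + (1.0 + percent / 100.0) * (landmarks - norm)

      norm = np.array([[100.0, 120.0], [140.0, 120.0], [120.0, 160.0]])  # neutral norm
      expr = np.array([[ 98.0, 118.0], [142.0, 118.0], [120.0, 168.0]])  # expression
      print(caricature(expr, norm, +50))    # exaggerated (caricatured) landmarks
      print(caricature(expr, norm, -50))    # anti-caricature, closer to the norm
      print(np.allclose(caricature(expr, norm, 0), expr))                # veridical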

  4. Effect of sitting, standing, and supine body positions on facial soft tissue: detailed 3D analysis.

    PubMed

    Ozsoy, U; Sekerci, R; Ogut, E

    2015-10-01

    Medical imaging techniques require various body positions. Gravity causes changes in the facial soft tissue and acts in different directions according to the position of the head during imaging. The aim of this study was to evaluate the effect of positional changes on the facial soft tissue. The faces of subjects were scanned in the standing, sitting, and supine body positions. Differences in the positions were compared using the root mean square (RMS), mean absolute deviation (MAD), and mean signed distance (MSD). The displacement of 15 midsagittal and 20 bilateral landmarks was evaluated. The RMS, MAD, and MSD values of the sitting-standing comparison were significantly lower than those of the sitting-supine and standing-supine comparisons. There were no significant differences between the sitting-supine and standing-supine comparisons. Sixteen out of 135 measurements (12%) of the midsagittal landmarks and 94 out of 180 (52%) measurements of the bilateral landmarks showed significant displacements among the body positions. These results demonstrate a significant change in the facial soft tissue caused by body position. Furthermore, these data show the different susceptibilities of the facial soft tissue landmarks to the effect of body position along the x, y, and z axes.
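
    The three surface-comparison statistics used above follow directly from the signed point-to-surface distances between two registered scans: RMS emphasises large deviations, MAD is the mean absolute deviation, and MSD keeps the sign and therefore indicates the net direction of soft tissue displacement. A numpy sketch over hypothetical signed distances follows.

      # Sketch: RMS, MAD and MSD over signed distances between two registered
      # face scans (hypothetical values, in mm).
      import numpy as np

      def rms_mad_msd(signed_distances):
          d = np.asarray(signed_distances, dtype=float)
          rms = np.sqrt(np.mean(d ** 2))   # root mean square deviation
          mad = np.mean(np.abs(d))         # mean absolute deviation
          msd = np.mean(d)                 # mean signed distance (net direction)
          return rms, mad, msd

      # e.g. soft tissue mostly displaced in one direction between two positions
      print(rms_mad_msd([-0.8, -0.5, -0.3, 0.1, -1.2, 0.4, -0.6]))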

  5. LBP and SIFT based facial expression recognition

    NASA Astrophysics Data System (ADS)

    Sumer, Omer; Gunes, Ece O.

    2015-02-01

    This study compares the performance of local binary patterns (LBP) and the scale invariant feature transform (SIFT), combined with support vector machines (SVM), in the automatic classification of discrete facial expressions. Facial expression recognition is a multiclass classification problem and seven classes (happiness, anger, sadness, disgust, surprise, fear and contempt) are classified. Using SIFT feature vectors and a linear SVM, 93.1% mean accuracy is achieved on the CK+ database. On the other hand, the performance of the LBP-based classifier with a linear SVM is reported on SFEW using the strictly person independent (SPI) protocol. The seven-class mean accuracy on SFEW is 59.76%. Experiments on both databases showed that LBP features can be used in a fairly descriptive way if good localization of facial points and a good partitioning strategy are followed.

  6. Face recognition using 3D facial shape and color map information: comparison and combination

    NASA Astrophysics Data System (ADS)

    Godil, Afzal; Ressler, Sandy; Grother, Patrick

    2004-08-01

    In this paper, we investigate the use of 3D surface geometry for face recognition and compare it to one based on color map information. The 3D surface and color map data are from the CAESAR anthropometric database. We find that the recognition performance is not very different between 3D surface and color map information using a principal component analysis algorithm. We also discuss the different techniques for the combination of the 3D surface and color map information for multi-modal recognition by using different fusion approaches and show that there is significant improvement in results. The effectiveness of various techniques is compared and evaluated on a dataset with 200 subjects in two different positions.
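
    The comparison and fusion described above can be sketched with two PCA matchers (one on 3D surface features, one on color-map features) whose similarity scores are min-max normalised and summed. The data below are random stand-ins and the fusion rule is one common choice, not necessarily the one used in the paper.

      # Sketch: PCA matchers on 3D shape and color-map features with score-level
      # fusion (weighted sum of min-max normalised similarities). Random stand-in
      # data; the paper's exact fusion approach is not reproduced here.
      import numpy as np
      from sklearn.decomposition import PCA

      def match_scores(gallery, probe, n_components=20):
          """Negative Euclidean distance in PCA space (higher = more similar)."""
          pca = PCA(n_components=n_components).fit(gallery)
          g = pca.transform(gallery)
          p = pca.transform(probe.reshape(1, -1))
          return -np.linalg.norm(g - p, axis=1)

      def minmax(s):
          return (s - s.min()) / (s.max() - s.min() + 1e-12)

      rng = np.random.default_rng(6)
      n_subjects, true_id = 200, 17
      shape_gallery = rng.normal(size=(n_subjects, 3000))   # 3D surface features
      color_gallery = rng.normal(size=(n_subjects, 3000))   # color-map features
      shape_probe = shape_gallery[true_id] + rng.normal(scale=0.1, size=3000)
      color_probe = color_gallery[true_id] + rng.normal(scale=0.1, size=3000)

      fused = (0.5 * minmax(match_scores(shape_gallery, shape_probe)) +
               0.5 * minmax(match_scores(color_gallery, color_probe)))
      print("predicted identity:", int(np.argmax(fused)), "true:", true_id)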

  7. The 3D Tele Motion Tracking for the Orthodontic Facial Analysis

    PubMed Central

    Nota, Alessandro; Marchetti, Enrico; Padricelli, Giuseppe; Marzo, Giuseppe

    2016-01-01

Aim. This study aimed to evaluate the reliability of 3D-TMT, previously used only for dynamic testing, in a static cephalometric evaluation. Material and Method. A group of 40 patients (20 males and 20 females; mean age 14.2 ± 1.2 years; 12–18 years old) was included in the study. For each subject, the measurements obtained by 3D-TMT cephalometric analysis were compared with those of a conventional frontal cephalometric analysis. Nine passive reflective markers were positioned on the facial skin to detect the profile of the patient. Through the acquisition of these points, the corresponding planes for three-dimensional posterior-anterior cephalometric analysis were obtained. Results. The cephalometric results obtained with 3D-TMT and with traditional posterior-anterior cephalometric analysis showed that the 3D-TMT values were slightly but statistically significantly higher than the values measured on radiographs; nevertheless, their correlation was very high. Conclusion. The values obtained using the 3D-TMT analysis correlated with the cephalometric analysis, with small but statistically significant differences. The Dahlberg errors were always lower than the mean difference between the 2D and 3D measurements. During the clinical monitoring of a patient, a clinician should always use the same method, to avoid comparing different millimetre magnitudes. PMID:28044130
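
    For reference, the Dahlberg error mentioned above is the standard method-error statistic for duplicate measurements, d = sqrt(sum((x1 - x2)^2) / (2n)); the sketch below computes it for invented duplicate measurements.

```python
# Dahlberg error for duplicate measurements; the example values are invented.
import numpy as np

def dahlberg_error(first_measurement, second_measurement):
    x1, x2 = np.asarray(first_measurement), np.asarray(second_measurement)
    return np.sqrt(np.sum((x1 - x2) ** 2) / (2 * len(x1)))

# Duplicate cephalometric measurements (mm) for five landmark distances
print(dahlberg_error([42.1, 55.3, 61.0, 38.4, 47.2],
                     [42.5, 54.9, 61.4, 38.1, 47.6]))
```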

  8. The 3D Facial Norms Database: Part 1. A Web-Based Craniofacial Anthropometric and Image Repository for the Clinical and Research Community.

    PubMed

    Weinberg, Seth M; Raffensperger, Zachary D; Kesterke, Matthew J; Heike, Carrie L; Cunningham, Michael L; Hecht, Jacqueline T; Kau, Chung How; Murray, Jeffrey C; Wehby, George L; Moreno, Lina M; Marazita, Mary L

    2016-11-01

    With the current widespread use of three-dimensional (3D) facial surface imaging in clinical and research environments, there is a growing demand for high-quality craniofacial norms based on 3D imaging technology. The principal goal of the 3D Facial Norms (3DFN) project was to create an interactive, Web-based repository of 3D facial images and measurements. Unlike other repositories, users can gain access to both summary-level statistics and individual-level data, including 3D facial landmark coordinates, 3D-derived anthropometric measurements, 3D facial surface images, and genotypes from every individual in the dataset. The 3DFN database currently consists of 2454 male and female participants ranging in age from 3 to 40 years. The subjects were recruited at four US sites and screened for a history of craniofacial conditions. The goal of this article is to introduce readers to the 3DFN repository by providing a general overview of the project, explaining the rationale behind the creation of the database, and describing the methods used to collect the data. Sex- and age-specific summary statistics (means and standard deviations) and growth curves for every anthropometric measurement in the 3DFN dataset are provided as a supplement available online. These summary statistics and growth curves can aid clinicians in the assessment of craniofacial dysmorphology.
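
    The summary statistics described above are straightforward to reproduce from individual-level data; the sketch below shows one way to compute sex- and age-specific means and standard deviations with pandas, using invented column names and toy records rather than the actual 3DFN data dictionary.

```python
# Illustrative sex- and age-specific summary statistics; toy data only.
import pandas as pd

records = pd.DataFrame({
    "sex": ["F", "F", "M", "M", "F", "F", "M", "M"],
    "age_years": [5, 5, 5, 5, 12, 12, 12, 12],
    "nasal_width_mm": [24.1, 25.0, 25.6, 26.2, 29.1, 28.4, 30.2, 31.4],
})

summary = (records
           .groupby(["sex", "age_years"])["nasal_width_mm"]
           .agg(["mean", "std", "count"]))
print(summary)
```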

  9. Re-thinking 3D printing: A novel approach to guided facial contouring.

    PubMed

    Darwood, Alastair; Collier, Jonathan; Joshi, Naresh; Grant, William E; Sauret-Jackson, Veronique; Richards, Robin; Dawood, Andrew; Kirkpatrick, Niall

    2015-09-01

Rapid-prototyped or three-dimensionally printed (3D printed) patient-specific guides are of great use in many craniofacial and maxillofacial procedures and are extensively described in the literature. These guides are relatively easy to produce and cost-effective. However, existing designs are limited in that they cannot be used in procedures requiring the 3D contouring of patient tissues. This paper presents a novel design and approach for the use of three-dimensional printing in the production of a patient-specific guide capable of fully guiding intraoperative 3D tissue contouring based on a pre-operative plan. We present a case, with encouraging results, in which the technique was used on a patient with an extensive osseous tumour resulting from fibrous dysplasia.

  10. Violent Media Consumption and the Recognition of Dynamic Facial Expressions

    ERIC Educational Resources Information Center

    Kirsh, Steven J.; Mounts, Jeffrey R. W.; Olczak, Paul V.

    2006-01-01

    This study assessed the speed of recognition of facial emotional expressions (happy and angry) as a function of violent media consumption. Color photos of calm facial expressions morphed to either an angry or a happy facial expression. Participants were asked to make a speeded identification of the emotion (happiness or anger) during the morph.…

  11. Categorical Perception of Affective and Linguistic Facial Expressions

    ERIC Educational Resources Information Center

    McCullough, Stephen; Emmorey, Karen

    2009-01-01

    Two experiments investigated categorical perception (CP) effects for affective facial expressions and linguistic facial expressions from American Sign Language (ASL) for Deaf native signers and hearing non-signers. Facial expressions were presented in isolation (Experiment 1) or in an ASL verb context (Experiment 2). Participants performed ABX…

  12. Some Methods of Applied Numerical Analysis to 3d Facial Reconstruction Software

    NASA Astrophysics Data System (ADS)

    Roşu, Şerban; Ianeş, Emilia; Roşu, Doina

    2010-09-01

This paper deals with the collective work performed by medical doctors from the University of Medicine and Pharmacy Timisoara and engineers from the Politechnical Institute Timisoara in the effort to create the first Romanian 3D reconstruction software based on CT or MRI scans and to test the created software in clinical practice.

  13. The Role of Active Exploration of 3D Face Stimuli on Recognition Memory of Facial Information

    ERIC Educational Resources Information Center

    Liu, Chang Hong; Ward, James; Markall, Helena

    2007-01-01

    Research on face recognition has mainly relied on methods in which observers are relatively passive viewers of face stimuli. This study investigated whether active exploration of three-dimensional (3D) face stimuli could facilitate recognition memory. A standard recognition task and a sequential matching task were employed in a yoked design.…

  14. 3-D Facial Landmark Localization With Asymmetry Patterns and Shape Regression from Incomplete Local Features.

    PubMed

    Sukno, Federico M; Waddington, John L; Whelan, Paul F

    2015-09-01

We present a method for the automatic localization of facial landmarks that integrates nonrigid deformation with the ability to handle missing points. The algorithm generates sets of candidate locations from feature detectors and performs a combinatorial search constrained by a flexible shape model. A key assumption of our approach is that for some landmarks there might not be an accurate candidate in the input set. This is tackled by detecting partial subsets of landmarks and inferring those that are missing, so that the probability of the flexible model is maximized. The ability of the model to work with incomplete information makes it possible to limit the number of candidates that need to be retained, drastically reducing the number of combinations to be tested with respect to the alternative of trying to always detect the complete set of landmarks. We demonstrate the accuracy of the proposed method on the Face Recognition Grand Challenge database, where we obtain average errors of approximately 3.5 mm when targeting 14 prominent facial landmarks. For the majority of these, our method produces the most accurate results reported to date on this database. Handling of occlusions and surfaces with missing parts is demonstrated with tests on the Bosphorus database, where we achieve overall errors of 4.81 and 4.25 mm for data with and without occlusions, respectively. To investigate potential limits in the accuracy that could be reached, we also report experiments on a database of 144 facial scans acquired in the context of clinical research, with manual annotations performed by experts, where we obtain an overall error of 2.3 mm, with averages per landmark below 3.4 mm for all 14 targeted points and within 2 mm for half of them. The coordinates of automatically located landmarks are made available online.
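
    A heavily reduced sketch of the core idea, choosing the combination of per-landmark candidates (possibly with missing landmarks) that maximises the probability under a Gaussian flexible shape model, is given below; pose normalisation, the real feature detectors, and the inference of missing points are omitted, and all numbers are invented.

```python
# Toy candidate-combination search under a Gaussian shape model, not the
# authors' algorithm: brute force over candidates, marginal likelihood over
# the observed coordinates, and a penalty for declaring a landmark missing.
import itertools
import numpy as np
from scipy.stats import multivariate_normal

n_landmarks = 3
mean_shape = np.array([0.0, 0.0, 1.0, 0.0, 0.5, 1.0])  # flattened (x, y) for 3 landmarks
cov = 0.05 * np.eye(2 * n_landmarks)                    # shape covariance (toy)

# Candidate locations per landmark; None stands for "no reliable candidate"
candidates = [
    [np.array([0.05, -0.02]), np.array([0.8, 0.9])],
    [np.array([1.02, 0.05]), None],
    [np.array([0.48, 1.1]), np.array([-0.3, -0.4]), None],
]

missing_penalty = -5.0  # discourages dropping landmarks needlessly
best = (-np.inf, None)
for combo in itertools.product(*candidates):
    observed = [i for i, c in enumerate(combo) if c is not None]
    if not observed:
        continue
    idx = np.concatenate([[2 * i, 2 * i + 1] for i in observed])
    x = np.concatenate([combo[i] for i in observed])
    # Marginal Gaussian over the observed coordinates only
    score = multivariate_normal.logpdf(x, mean_shape[idx], cov[np.ix_(idx, idx)])
    score += missing_penalty * (n_landmarks - len(observed))
    if score > best[0]:
        best = (score, combo)

print("best score:", best[0])
print("selected candidates:", best[1])
```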

  15. 3D mechanical modeling of facial soft tissue for surgery simulation.

    PubMed

    Mazza, Edoardo; Barbarino, Giuseppe Giovanni

    2011-11-01

State-of-the-art medical image acquisition, image analysis procedures, and numerical calculation techniques are used to realize a computer model of the face capable of realistically representing the force-deformation characteristics of soft tissue. The model includes a representation of the superficial layers of the face (skin, superficial musculoaponeurotic system, fat) and most facial muscles. The whole procedure is illustrated: determining geometrical information, assigning mechanical properties to each soft tissue represented in the model, and validating model predictions against experimental observations. The capabilities, limitations, and possible future use of this approach are discussed.

  16. The Relationships between Processing Facial Identity, Emotional Expression, Facial Speech, and Gaze Direction during Development

    ERIC Educational Resources Information Center

    Spangler, Sibylle M.; Schwarzer, Gudrun; Korell, Monika; Maier-Karius, Johanna

    2010-01-01

    Four experiments were conducted with 5- to 11-year-olds and adults to investigate whether facial identity, facial speech, emotional expression, and gaze direction are processed independently of or in interaction with one another. In a computer-based, speeded sorting task, participants sorted faces according to facial identity while disregarding…

  17. Judgments of subtle facial expressions of emotion.

    PubMed

    Matsumoto, David; Hwang, Hyisung C

    2014-04-01

    Most studies on judgments of facial expressions of emotion have primarily utilized prototypical, high-intensity expressions. This paper examines judgments of subtle facial expressions of emotion, including not only low-intensity versions of full-face prototypes but also variants of those prototypes. A dynamic paradigm was used in which observers were shown a neutral expression followed by the target expression to judge, and then the neutral expression again, allowing for a simulation of the emergence of the expression from and then return to a baseline. We also examined how signal and intensity clarities of the expressions (explained more fully in the Introduction) were associated with judgment agreement levels. Low-intensity, full-face prototypical expressions of emotion were judged as the intended emotion at rates significantly greater than chance. A number of the proposed variants were also judged as the intended emotions. Both signal and intensity clarities were individually associated with agreement rates; when their interrelationships were taken into account, signal clarity independently predicted agreement rates but intensity clarity did not. The presence or absence of specific muscles appeared to be more important to agreement rates than their intensity levels, with the exception of the intensity of zygomatic major, which was positively correlated with agreement rates for judgments of joy.

  18. Automatic recognition of emotions from facial expressions

    NASA Astrophysics Data System (ADS)

    Xue, Henry; Gertner, Izidor

    2014-06-01

In the human-computer interaction (HCI) process it is desirable to have an artificially intelligent (AI) system that can identify and categorize human emotions from facial expressions. Such systems can be used in security, in the entertainment industry, and also to study visual perception, social interactions and disorders (e.g. schizophrenia and autism). In this work we survey and compare the performance of different feature extraction algorithms and classification schemes. We introduce a faster feature extraction method that resizes and applies a set of filters to the data images without sacrificing accuracy. In addition, we extended the SVM to multiple dimensions while retaining its high accuracy rate. The algorithms were tested using the Japanese Female Facial Expression (JAFFE) Database and the Database of Faces (AT&T Faces).
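
    The sketch below illustrates, under stated assumptions, the general pipeline described in the abstract: resize face images, apply a small filter bank, and classify the flattened responses with a multiclass SVM. The specific filters, image size, and random toy data are assumptions; the JAFFE and AT&T datasets are not included here.

```python
# Loose sketch: downscale, filter bank, multiclass SVM. Not the authors' exact method.
import numpy as np
from scipy import ndimage
from sklearn.svm import SVC

def filter_bank_features(image, size=(32, 32)):
    """Downscale the image and stack a few simple filter responses."""
    zoom = (size[0] / image.shape[0], size[1] / image.shape[1])
    small = ndimage.zoom(image.astype(float), zoom)
    responses = [
        ndimage.gaussian_filter(small, sigma=1.0),
        ndimage.sobel(small, axis=0),
        ndimage.sobel(small, axis=1),
        ndimage.laplace(small),
    ]
    return np.concatenate([r.ravel() for r in responses])

rng = np.random.default_rng(2)
X = np.array([filter_bank_features(rng.integers(0, 256, (96, 96))) for _ in range(60)])
y = np.repeat(np.arange(6), 10)  # six emotion classes, 10 samples each
clf = SVC(kernel="linear", decision_function_shape="ovr").fit(X, y)
print(clf.score(X, y))
```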

  19. The relationships between processing facial identity, emotional expression, facial speech, and gaze direction during development.

    PubMed

    Spangler, Sibylle M; Schwarzer, Gudrun; Korell, Monika; Maier-Karius, Johanna

    2010-01-01

    Four experiments were conducted with 5- to 11-year-olds and adults to investigate whether facial identity, facial speech, emotional expression, and gaze direction are processed independently of or in interaction with one another. In a computer-based, speeded sorting task, participants sorted faces according to facial identity while disregarding facial speech, emotional expression, and gaze direction or, alternatively, according to facial speech, emotional expression, and gaze direction while disregarding facial identity. Reaction times showed that children and adults were able to direct their attention selectively to facial identity despite variations of other kinds of face information, but when sorting according to facial speech and emotional expression, they were unable to ignore facial identity. In contrast, gaze direction could be processed independently of facial identity in all age groups. Apart from shorter reaction times and fewer classification errors, no substantial change in processing facial information was found to be correlated with age. We conclude that adult-like face processing routes are employed from 5 years of age onward.

  20. 3D imaging acquisition, modeling, and prototyping for facial defects reconstruction

    NASA Astrophysics Data System (ADS)

    Sansoni, Giovanna; Trebeschi, Marco; Cavagnini, Gianluca; Gastaldi, Giorgio

    2009-01-01

A novel approach that combines optical three-dimensional imaging, reverse engineering (RE), and rapid prototyping (RP) for mold production in the reconstruction of facial prostheses is presented. A commercial laser-stripe digitizer is used to perform the multiview acquisition of the patient's face; the point clouds are aligned and merged in order to obtain a polygonal model, which is then edited to sculpt the virtual prosthesis. Two physical models, of the deformed face and of the 'repaired' face, are obtained: they differ only in the defect zone. Depending on the material used for the actual prosthesis, the two prototypes can be used either to directly cast the final prosthesis or to fabricate the positive wax pattern. Two case studies are presented, referring to prosthetic reconstructions of an eye and of a nose. The results demonstrate the advantages over conventional techniques as well as the improvements with respect to known automated manufacturing techniques in mold construction. The proposed method results in decreased patient discomfort, reduced dependence on the anaplastologist's skill, and increased repeatability and efficiency of the whole process.
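
    The point-cloud alignment step mentioned above can be illustrated with a bare-bones iterative closest point (ICP) routine; the sketch below is a toy version assuming roughly overlapping clouds and a reasonable initial pose, not the commercial digitizer software used in the study.

```python
# Minimal rigid ICP sketch for aligning two point clouds before merging.
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch)."""
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst.mean(0) - R @ src.mean(0)
    return R, t

def icp(src, dst, iterations=20):
    tree = cKDTree(dst)
    current = src.copy()
    for _ in range(iterations):
        _, idx = tree.query(current)  # closest-point correspondences
        R, t = best_rigid_transform(current, dst[idx])
        current = current @ R.T + t
    return current

# Toy usage: recover a known rotation/translation of a random cloud
rng = np.random.default_rng(3)
cloud = rng.normal(size=(500, 3))
angle = np.deg2rad(10)
Rz = np.array([[np.cos(angle), -np.sin(angle), 0],
               [np.sin(angle), np.cos(angle), 0],
               [0, 0, 1]])
moved = cloud @ Rz.T + np.array([0.05, -0.02, 0.1])
aligned = icp(cloud, moved)
print("mean residual:", np.linalg.norm(aligned - moved, axis=1).mean())
```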

  1. Slowing down Presentation of Facial Movements and Vocal Sounds Enhances Facial Expression Recognition and Induces Facial-Vocal Imitation in Children with Autism

    ERIC Educational Resources Information Center

    Tardif, Carole; Laine, France; Rodriguez, Melissa; Gepner, Bruno

    2007-01-01

    This study examined the effects of slowing down presentation of facial expressions and their corresponding vocal sounds on facial expression recognition and facial and/or vocal imitation in children with autism. Twelve autistic children and twenty-four normal control children were presented with emotional and non-emotional facial expressions on…

  2. Sex differences in perception of invisible facial expressions.

    PubMed

    Hong, Sang Wook; Yoon, K Lira; Peaco, Sophia

    2015-01-01

    Previous research indicates that women are better at recognizing facial expressions than men. In the current study, we examined whether this female advantage in the processing of facial expressions also occurs at the unconscious level. In two studies, participants performed a simple detection task and a 4-AFC task while faces were rendered invisible by continuous flash suppression. When faces with full intensity expressions were suppressed, there was no significant sex difference in the time of breakup of suppression (Study 1). However, when suppressed faces depicted low intensity expressions, suppression broke up earlier in men than women, indicating that men may be more sensitive to facial features related to mild facial expressions (Study 2). The current findings suggest that the female advantage in processing of facial expressions is absent in unconscious processing of emotional information. The female advantage in facial expression processing may require conscious perception of faces.

  3. A new 3D method for measuring cranio-facial relationships with cone beam computed tomography (CBCT)

    PubMed Central

    Cibrián, Rosa; Gandia, Jose L.; Paredes, Vanessa

    2013-01-01

Objectives: CBCT systems, with their high-precision 3D reconstructions, 1:1 images and accuracy in locating cephalometric landmarks, allow us to evaluate measurements of craniofacial structures, enabling us to replace the anthropometric or two-dimensional methods used until now. The aims were to analyse cranio-facial relationships in a sample of patients who had previously undergone CBCT and to create a new 3D cephalometric method for assessing and measuring patients. Study Design: 90 patients who had a CBCT (i-Cat®) as a diagnostic record were selected. 12 cephalometric landmarks on the three spatial planes (X, Y, Z) were defined and 21 linear measurements were established. Using these measurements, 7 triangles were described and analysed. The ratios between the sides of the triangles (CdR-Me-CdL), (FzR-Me-FzL) and (GoR-N-GoL) and the Gl-Me distance were analysed. In addition, 4 triangles in the mandible were measured (body: GoR-DB-Me and GoL-DB-Me; ramus: KrR-CdR-GoR and KrL-CdL-GoL). Results: When analysing the sides of the CdR-Me-CdL triangle, it was found that 69.33% of the patients could be considered symmetric. Regarding the ratios between the sides of the triangles CdR-Me-CdL, FzR-Me-FzL and GoR-N-GoL and the Gl-Me distance, almost all ratios were close to 1:1, except those between the CdR-CdL side and the rest of the sides. With regard to the ratios of the 4 triangles of the mandible, the most symmetrical relationships were those corresponding to the sides of the body of the mandible, and the most asymmetrical ones were those corresponding to the bases of those triangles. Conclusions: A new method for assessing cranio-facial relationships using CBCT has been established. It could be used for diverse purposes including diagnosis and treatment planning. Key words: Craniofacial relationships, CBCT, 3D cephalometry. PMID:23524427
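
    As a simple illustration of the kind of 3D measurement used in this method, the sketch below computes distances between paired cephalometric landmarks and a left-right symmetry ratio for one landmark triangle; the coordinates are invented, whereas real values would come from CBCT landmarking.

```python
# Toy 3D landmark distances and symmetry ratios for a CdR-Me-CdL triangle.
import numpy as np

landmarks = {                                # x, y, z in mm (illustrative only)
    "CdR": np.array([58.0, 12.0, 40.0]),     # right condyle
    "CdL": np.array([-57.0, 11.5, 39.0]),    # left condyle
    "Me":  np.array([0.5, -48.0, -60.0]),    # menton
}

def dist(a, b):
    return float(np.linalg.norm(landmarks[a] - landmarks[b]))

right_side = dist("CdR", "Me")
left_side = dist("CdL", "Me")
base = dist("CdR", "CdL")
print(f"CdR-Me = {right_side:.1f} mm, CdL-Me = {left_side:.1f} mm, CdR-CdL = {base:.1f} mm")
print(f"right/left symmetry ratio = {right_side / left_side:.3f}")
```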

  4. Altering sensorimotor feedback disrupts visual discrimination of facial expressions.

    PubMed

    Wood, Adrienne; Lupyan, Gary; Sherrin, Steven; Niedenthal, Paula

    2016-08-01

Looking at another person's facial expression of emotion can trigger the same neural processes involved in producing the expression, and such responses play a functional role in emotion recognition. Disrupting individuals' facial action, for example, interferes with verbal emotion recognition tasks. We tested the hypothesis that facial responses also play a functional role in the perceptual processing of emotional expressions. We altered the facial action of participants with a gel facemask while they performed a task that involved distinguishing target expressions from highly similar distractors. Relative to control participants, participants in the facemask condition demonstrated inferior perceptual discrimination of facial expressions, but not of nonface stimuli. The findings suggest that somatosensory/motor processes involving the face contribute to the visual perceptual, and not just conceptual, processing of facial expressions. More broadly, our study contributes to growing evidence for the fundamentally interactive nature of the perceptual inputs from different sensory modalities.

  5. Adults' responsiveness to children's facial expressions.

    PubMed

    Aradhye, Chinmay; Vonk, Jennifer; Arida, Danielle

    2015-07-01

    We investigated the effect of young children's (hereafter children's) facial expressions on adult responsiveness. In Study 1, 131 undergraduate students from a midsized university in the midwestern United States rated children's images and videos with smiling, crying, or neutral expressions on cuteness, likelihood to adopt, and participants' experienced distress. Looking times at images and videos along with perception of cuteness, likelihood to adopt, and experienced distress using 10-point Likert scales were measured. Videos of smiling children were rated as cuter and more likely to be adopted and were viewed for longer times compared with videos of crying children, which evoked more distress. In Study 2, we recorded responses from 101 of the same participants in an online survey measuring gender role identity, empathy, and perspective taking. Higher levels of femininity (as measured by Bem's Sex Role Inventory) predicted higher "likely to adopt" ratings for crying images. These findings indicate that adult perception of children and motivation to nurture are affected by both children's facial expressions and adult characteristics and build on existing literature to demonstrate that children may use expressions to manipulate the motivations of even non-kin adults to direct attention toward and perhaps nurture young children.

  6. The Neuropsychology of Facial Identity and Facial Expression in Children with Mental Retardation

    ERIC Educational Resources Information Center

    Singh, Nirbhay N.; Oswald, Donald P.; Lancioni, Giulio E.; Ellis, Cynthia R.; Sage, Monica; Ferris, Jennifer R.

    2005-01-01

    We indirectly determined how children with mental retardation analyze facial identity and facial expression, and if these analyses of identity and expression were controlled by independent cognitive processes. In a reaction time study, 20 children with mild mental retardation were required to determine if simultaneously presented photographs of…

  7. Facial Expression Generation from Speaker's Emotional States in Daily Conversation

    NASA Astrophysics Data System (ADS)

    Mori, Hiroki; Ohshima, Koh

A framework for generating facial expressions from emotional states in daily conversation is described. It provides a mapping between emotional states and facial expressions, where the former are represented by vectors with psychologically defined abstract dimensions and the latter are coded with the Facial Action Coding System. In order to obtain the mapping, parallel data with rated emotional states and facial expressions were collected for utterances of a female speaker, and a neural network was trained with the data. The effectiveness of the proposed method was verified by a subjective evaluation test: the Mean Opinion Score with respect to the suitability of the generated facial expressions was 3.86 for the speaker, which was close to that of hand-made facial expressions.
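
    A toy sketch of the described mapping, a small neural network regressing from an emotional-state vector to a vector of action-unit intensities, is shown below; the dimensions, the action-unit set, and the training data are all invented, and the original study's rated conversational data are not used.

```python
# Toy emotion-dimension -> action-unit mapping with a small MLP; invented data.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)
n_samples = 200
emotion_states = rng.uniform(-1, 1, size=(n_samples, 3))  # e.g. valence, arousal, dominance
# Pretend ground-truth intensities for 5 action units depend nonlinearly on the state
au_targets = np.clip(np.tanh(emotion_states @ rng.normal(size=(3, 5)))
                     + 0.05 * rng.normal(size=(n_samples, 5)), -1, 1)

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(emotion_states, au_targets)

# Predict a facial-expression code for a new emotional state
print(model.predict([[0.8, 0.2, -0.1]]))
```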

  8. Dynamic facial expressions are processed holistically, but not more holistically than static facial expressions.

    PubMed

    Tobin, Alanna; Favelle, Simone; Palermo, Romina

    2016-09-01

    There is evidence that facial expressions are perceived holistically and featurally. The composite task is a direct measure of holistic processing (although the absence of a composite effect implies the use of other types of processing). Most composite task studies have used static images, despite the fact that movement is an important aspect of facial expressions and there is some evidence that movement may facilitate recognition. We created static and dynamic composites, in which emotions were reliably identified from each half of the face. The magnitude of the composite effect was similar for static and dynamic expressions identified from the top half (anger, sadness and surprise) but was reduced in dynamic as compared to static expressions identified from the bottom half (fear, disgust and joy). Thus, any advantage in recognising dynamic over static expressions is not likely to stem from enhanced holistic processing, rather motion may emphasise or disambiguate diagnostic featural information.

  9. Impaired Overt Facial Mimicry in Response to Dynamic Facial Expressions in High-Functioning Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Yoshimura, Sayaka; Sato, Wataru; Uono, Shota; Toichi, Motomi

    2015-01-01

    Previous electromyographic studies have reported that individuals with autism spectrum disorders (ASD) exhibited atypical patterns of facial muscle activity in response to facial expression stimuli. However, whether such activity is expressed in visible facial mimicry remains unknown. To investigate this issue, we videotaped facial responses in…

  10. Deciphering the enigmatic face: the importance of facial dynamics in interpreting subtle facial expressions.

    PubMed

    Ambadar, Zara; Schooler, Jonathan W; Cohn, Jeffrey F

    2005-05-01

    Most studies investigating the recognition of facial expressions have focused on static displays of intense expressions. Consequently, researchers may have underestimated the importance of motion in deciphering the subtle expressions that permeate real-life situations. In two experiments, we examined the effect of motion on perception of subtle facial expressions and tested the hypotheses that motion improves affect judgment by (a) providing denser sampling of expressions, (b) providing dynamic information, (c) facilitating configural processing, and (d) enhancing the perception of change. Participants viewed faces depicting subtle facial expressions in four modes (single-static, multi-static, dynamic, and first-last). Experiment 1 demonstrated a robust effect of motion and suggested that this effect was due to the dynamic property of the expression. Experiment 2 showed that the beneficial effect of motion may be due more specifically to its role in perception of change. Together, these experiments demonstrated the importance of motion in identifying subtle facial expressions.

  11. The Communicative Function of Sad Facial Expressions.

    PubMed

    Reed, Lawrence Ian; DeScioli, Peter

    2017-01-01

What are the communicative functions of sad facial expressions? Research shows that people feel sadness in response to losses, but it is unclear whether sad expressions function to communicate losses to others and, if so, what makes these signals credible. Here we use economic games to test the hypothesis that sad expressions lend credibility to claims of loss. Participants play the role of either a proposer or a recipient in a game with a fictional backstory and real monetary payoffs. The proposers view a (fictional) video of the recipient's character displaying either a neutral or a sad expression paired with a claim of loss. The proposer then decides how much money to give to the recipient. In three experiments, we test alternative theories by using situations in which the recipient's losses were uncertain (Experiment 1), the recipient's losses were certain (Experiment 2), or the recipient claimed failed gains rather than losses (Experiment 3). Overall, we find that participants gave more money to recipients who displayed sad expressions compared to neutral expressions, but only under conditions of uncertain loss. This finding supports the hypothesis that sad expressions function to increase the credibility of claims of loss.

  12. Face processing in children with autism spectrum disorder: independent or interactive processing of facial identity and facial expression?

    PubMed

    Krebs, Julia F; Biswas, Ajanta; Pascalis, Olivier; Kamp-Becker, Inge; Remschmidt, Helmuth; Schwarzer, Gudrun

    2011-06-01

    The current study investigated if deficits in processing emotional expression affect facial identity processing and vice versa in children with autism spectrum disorder. Children with autism and IQ and age matched typically developing children classified faces either by emotional expression, thereby ignoring facial identity or by facial identity disregarding emotional expression. Typically developing children processed facial identity independently from facial expressions but processed facial expressions in interaction with identity. Children with autism processed both facial expression and identity independently of each other. They selectively directed their attention to one facial parameter despite variations in the other. Results indicate that there is no interaction in processing facial identity and emotional expression in autism spectrum disorder.

  13. From facial expressions to bodily gestures

    PubMed Central

    2016-01-01

This article aims to determine to what extent photographic practices in psychology, psychiatry and physiology contributed to the definition of the external bodily signs of passions and emotions in the second half of the 19th century in France. Bridging the gap between recent research in the history of emotions and photographic history, the following analyses focus on the photographic production of scientists and photographers who made significant contributions to the study of expressions and gestures, namely Duchenne de Boulogne, Charles Darwin, Paul Richer and Albert Londe. This article argues that photography became a key technology in their work because the exposure times of different cameras were well matched to the durations of the bodily manifestations to be recorded, and that these uses established facial expressions and bodily gestures as particular objects of scientific study. PMID:26900264

  14. Facial identity and facial expression are initially integrated at visual perceptual stages of face processing.

    PubMed

    Fisher, Katie; Towler, John; Eimer, Martin

    2016-01-08

    It is frequently assumed that facial identity and facial expression are analysed in functionally and anatomically distinct streams within the core visual face processing system. To investigate whether expression and identity interact during the visual processing of faces, we employed a sequential matching procedure where participants compared either the identity or the expression of two successively presented faces, and ignored the other irrelevant dimension. Repetitions versus changes of facial identity and expression were varied independently across trials, and event-related potentials (ERPs) were recorded during task performance. Irrelevant facial identity and irrelevant expression both interfered with performance in the expression and identity matching tasks. These symmetrical interference effects show that neither identity nor expression can be selectively ignored during face matching, and suggest that they are not processed independently. N250r components to identity repetitions that reflect identity matching mechanisms in face-selective visual cortex were delayed and attenuated when there was an expression change, demonstrating that facial expression interferes with visual identity matching. These findings provide new evidence for interactions between facial identity and expression within the core visual processing system, and question the hypothesis that these two attributes are processed independently.

  15. Automatic decoding of facial movements reveals deceptive pain expressions.

    PubMed

    Bartlett, Marian Stewart; Littlewort, Gwen C; Frank, Mark G; Lee, Kang

    2014-03-31

    In highly social species such as humans, faces have evolved to convey rich information for social interaction, including expressions of emotions and pain [1-3]. Two motor pathways control facial movement [4-7]: a subcortical extrapyramidal motor system drives spontaneous facial expressions of felt emotions, and a cortical pyramidal motor system controls voluntary facial expressions. The pyramidal system enables humans to simulate facial expressions of emotions not actually experienced. Their simulation is so successful that they can deceive most observers [8-11]. However, machine vision may be able to distinguish deceptive facial signals from genuine facial signals by identifying the subtle differences between pyramidally and extrapyramidally driven movements. Here, we show that human observers could not discriminate real expressions of pain from faked expressions of pain better than chance, and after training human observers, we improved accuracy to a modest 55%. However, a computer vision system that automatically measures facial movements and performs pattern recognition on those movements attained 85% accuracy. The machine system's superiority is attributable to its ability to differentiate the dynamics of genuine expressions from faked expressions. Thus, by revealing the dynamics of facial action through machine vision systems, our approach has the potential to elucidate behavioral fingerprints of neural control systems involved in emotional signaling.

  16. The Facial Expression Coding System (FACES): Development, Validation, and Utility

    ERIC Educational Resources Information Center

    Kring, Ann M.; Sloan, Denise M.

    2007-01-01

    This article presents information on the development and validation of the Facial Expression Coding System (FACES; A. M. Kring & D. Sloan, 1991). Grounded in a dimensional model of emotion, FACES provides information on the valence (positive, negative) of facial expressive behavior. In 5 studies, reliability and validity data from 13 diverse…

  17. Processing of facial emotion expression in major depression: a review.

    PubMed

    Bourke, Cecilia; Douglas, Katie; Porter, Richard

    2010-08-01

    Processing of facial expressions of emotion is central to human interaction, and has important effects on behaviour and affective state. A range of methods and paradigms have been used to investigate various aspects of abnormal processing of facial expressions in major depression, including emotion specific deficits in recognition accuracy, response biases and attentional biases. The aim of this review is to examine and interpret data from studies of facial emotion processing in major depression, in the context of current knowledge about the neural correlates of facial expression processing of primary emotions. The review also discusses the methodologies used to examine facial expression processing. Studies of facial emotion processing and facial emotion recognition were identified up to December 2009 utilizing MEDLINE and Web of Science. Although methodological variations complicate interpretation of findings, there is reasonably consistent evidence of a negative response bias towards sadness in individuals with major depression, so that positive (happy), neutral or ambiguous facial expressions tend to be evaluated as more sad or less happy compared with healthy control groups. There is also evidence of increased vigilance and selective attention towards sad expressions and away from happy expressions, but less evidence of reduced general or emotion-specific recognition accuracy. Data is complicated by the use of multiple paradigms and the heterogeneity of major depression. Future studies should address methodological problems, including variations in patient characteristics, testing paradigms and procedures, and statistical methods used to analyse findings.

  18. The face is not an empty canvas: how facial expressions interact with facial appearance

    PubMed Central

    Hess, Ursula; Adams, Reginald B.; Kleck, Robert E.

    2009-01-01

    Faces are not simply blank canvases upon which facial expressions write their emotional messages. In fact, facial appearance and facial movement are both important social signalling systems in their own right. We here provide multiple lines of evidence for the notion that the social signals derived from facial appearance on the one hand and facial movement on the other interact in a complex manner, sometimes reinforcing and sometimes contradicting one another. Faces provide information on who a person is. Sex, age, ethnicity, personality and other characteristics that can define a person and the social group the person belongs to can all be derived from the face alone. The present article argues that faces interact with the perception of emotion expressions because this information informs a decoder's expectations regarding an expresser's probable emotional reactions. Facial appearance also interacts more directly with the interpretation of facial movement because some of the features that are used to derive personality or sex information are also features that closely resemble certain emotional expressions, thereby enhancing or diluting the perceived strength of particular expressions. PMID:19884144

  19. The face is not an empty canvas: how facial expressions interact with facial appearance.

    PubMed

    Hess, Ursula; Adams, Reginald B; Kleck, Robert E

    2009-12-12

    Faces are not simply blank canvases upon which facial expressions write their emotional messages. In fact, facial appearance and facial movement are both important social signalling systems in their own right. We here provide multiple lines of evidence for the notion that the social signals derived from facial appearance on the one hand and facial movement on the other interact in a complex manner, sometimes reinforcing and sometimes contradicting one another. Faces provide information on who a person is. Sex, age, ethnicity, personality and other characteristics that can define a person and the social group the person belongs to can all be derived from the face alone. The present article argues that faces interact with the perception of emotion expressions because this information informs a decoder's expectations regarding an expresser's probable emotional reactions. Facial appearance also interacts more directly with the interpretation of facial movement because some of the features that are used to derive personality or sex information are also features that closely resemble certain emotional expressions, thereby enhancing or diluting the perceived strength of particular expressions.

  20. Dielectric elastomer actuators for facial expression

    NASA Astrophysics Data System (ADS)

    Wang, Yuzhe; Zhu, Jian

    2016-04-01

Dielectric elastomer actuators have the advantage of mimicking the salient feature of life: movement in response to stimuli. In this paper we explore the application of dielectric elastomer actuators to artificial muscles. These artificial muscles can mimic the natural masseter to control jaw movements, which are key components of facial expressions, especially during talking and singing. This paper investigates the optimal design of the dielectric elastomer actuator. It is found that an actuator with embedded plastic fibers can avert electromechanical instability and greatly improve its actuation. Two actuators are then installed in a robotic skull to drive jaw movements, mimicking the masseters in a human jaw. Experiments show that the maximum vertical displacement of the robotic jaw, driven by artificial muscles, is comparable to that of the natural human jaw during speech activities. Theoretical simulations are conducted to analyze the performance of the actuator and are quantitatively consistent with the experimental observations.

  1. Development and reproducibility of a 3D stereophotogrammetric reference frame for facial soft tissue growth of babies and young children with and without orofacial clefts.

    PubMed

    Brons, S; van Beusichem, M E; Maal, T J J; Plooij, J M; Bronkhorst, E M; Bergé, S J; Kuijpers-Jagtman, A M

    2013-01-01

The aim of this study was to develop a reference frame for three-dimensional (3D) facial soft tissue growth analysis in children and to determine its reproducibility. Two observers twice placed the reference frame on 39 3D stereophotogrammetric facial images of children with orofacial clefts and control children. The observers' performance was analyzed by calculating the mean distance, distance variability, and P95 between the same facial surfaces at two different time points. Correlations between observers were analyzed with Pearson's correlation coefficient. The influence of the presence of a cleft, the absence of one ear in the photograph, and age on the reproducibility of the reference frame was checked using Student's t test. Results of intraobserver comparisons showed a mean distance of <0.40 mm, distance variability of <0.51 mm, and P95 of <0.80 mm. For interobserver reliability, the mean distance was <0.52 mm, distance variability was <0.53 mm, and P95 was <1.10 mm. Presence of a cleft, age, and absence of one ear on the 3D photograph did not have a significant influence on the reproducibility of placing the reference frame. The children's reference frame is a reproducible method for superimposing 3D soft tissue stereophotogrammetric images of growing individuals with and without orofacial clefts.

  2. The Not Face: A grammaticalization of facial expressions of emotion

    PubMed Central

    Benitez-Quiroz, C. Fabian; Wilbur, Ronnie B.; Martinez, Aleix M.

    2016-01-01

    Facial expressions of emotion are thought to have evolved from the development of facial muscles used in sensory regulation and later adapted to express moral judgment. Negative moral judgment includes the expressions of anger, disgust and contempt. Here, we study the hypothesis that these facial expressions of negative moral judgment have further evolved into a facial expression of negation regularly used as a grammatical marker in human language. Specifically, we show that people from different cultures expressing negation use the same facial muscles as those employed to express negative moral judgment. We then show that this nonverbal signal is used as a co-articulator in speech and that, in American Sign Language, it has been grammaticalized as a non-manual marker. Furthermore, this facial expression of negation exhibits the theta oscillation (3–8 Hz) universally seen in syllable and mouthing production in speech and signing. These results provide evidence for the hypothesis that some components of human language have evolved from facial expressions of emotion, and suggest an evolutionary route for the emergence of grammatical markers. PMID:26872248

  3. The not face: A grammaticalization of facial expressions of emotion.

    PubMed

    Benitez-Quiroz, C Fabian; Wilbur, Ronnie B; Martinez, Aleix M

    2016-05-01

    Facial expressions of emotion are thought to have evolved from the development of facial muscles used in sensory regulation and later adapted to express moral judgment. Negative moral judgment includes the expressions of anger, disgust and contempt. Here, we study the hypothesis that these facial expressions of negative moral judgment have further evolved into a facial expression of negation regularly used as a grammatical marker in human language. Specifically, we show that people from different cultures expressing negation use the same facial muscles as those employed to express negative moral judgment. We then show that this nonverbal signal is used as a co-articulator in speech and that, in American Sign Language, it has been grammaticalized as a non-manual marker. Furthermore, this facial expression of negation exhibits the theta oscillation (3-8 Hz) universally seen in syllable and mouthing production in speech and signing. These results provide evidence for the hypothesis that some components of human language have evolved from facial expressions of emotion, and suggest an evolutionary route for the emergence of grammatical markers.

  4. Parameterized Facial Expression Synthesis Based on MPEG-4

    NASA Astrophysics Data System (ADS)

    Raouzaiou, Amaryllis; Tsapatsoulis, Nicolas; Karpouzis, Kostas; Kollias, Stefanos

    2002-12-01

In the framework of MPEG-4, one can include applications where virtual agents, utilizing both textual and multisensory data, including facial expressions and nonverbal speech, help systems become attuned to the actual feelings of the user. Applications of this technology are expected in educational environments, virtual collaborative workplaces, communities, and interactive entertainment. Facial animation has gained much interest within the MPEG-4 framework, with implementation details being an open research area (Tekalp, 1999). In this paper, we describe a method for enriching human-computer interaction, focusing on the analysis and synthesis of primary and intermediate facial expressions (Ekman and Friesen, 1978). To achieve this goal, we utilize facial animation parameters (FAPs) to model primary expressions and describe a rule-based technique for handling intermediate ones. A relation between FAPs and the activation parameter proposed in classical psychological studies is established, leading to parameterized notions of facial expression analysis and synthesis that are compatible with the MPEG-4 standard.
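
    The parameterised-synthesis idea can be sketched as scaling and blending FAP profiles of primary expressions with an activation level, as below; the FAP names and profile values are placeholders rather than the MPEG-4 normative tables or the paper's actual rule set.

```python
# Toy FAP-profile scaling and blending for intermediate expressions.
import numpy as np

fap_names = ["raise_l_i_eyebrow", "raise_r_i_eyebrow",
             "stretch_l_cornerlip", "stretch_r_cornerlip"]
profiles = {
    "joy":      np.array([120.0, 120.0, 260.0, 260.0]),  # placeholder FAP amplitudes
    "surprise": np.array([480.0, 480.0, 0.0, 0.0]),
}

def intermediate_expression(primary, activation):
    """Scale a primary-expression FAP profile by an activation level in [0, 1]."""
    return activation * profiles[primary]

def blend_expressions(first, second, weight):
    """Linear blend of two primary profiles (a simple rule for in-between states)."""
    return weight * profiles[first] + (1.0 - weight) * profiles[second]

print(dict(zip(fap_names, intermediate_expression("joy", 0.4))))
print(dict(zip(fap_names, blend_expressions("joy", "surprise", 0.7))))
```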

  5. Understanding Facial Expressions of Pain in Patients With Depression.

    PubMed

    Lautenbacher, Stefan; Bär, Karl-Juergen; Eisold, Patricia; Kunz, Miriam

    2016-12-02

Although depression is associated with more clinical pain complaints, psychophysical data sometimes point to hypoalgesic alterations. Studying the more reflex-like facial expression of pain in patients with depression may offer a new perspective. Facial and psychophysical responses to nonpainful and painful heat stimuli were studied in 23 patients with major depressive disorder (MDD) and 23 matched control participants. As psychophysical data, pain thresholds, tolerance thresholds, and self-report were assessed. Facial responses were videotaped and subjected offline to Facial Action Coding System analysis. One of the key facial responses to pain, contraction of the eyebrows, which is a known facial signal of negative affect, was significantly increased in MDD patients. Moreover, facial expressions and pain ratings were strongly correlated in MDD patients, whereas these two response systems were, in line with established findings, only weakly related in healthy participants. Pain psychophysics was unaltered in MDD patients compared with healthy control participants. In conclusion, the facial expression of pain in MDD patients indicates hyper- rather than hypoalgesia, with enhanced affective pain processing. Moreover, the linkage between subjective and facial responses was much stronger in MDD patients, which may be due to a reduced influence of social display rules, which normally complicate this relationship.

  6. Modelling of facial growth in Czech children based on longitudinal data: Age progression from 12 to 15 years using 3D surface models.

    PubMed

    Koudelová, Jana; Dupej, Ján; Brůžek, Jaroslav; Sedlak, Petr; Velemínská, Jana

    2015-03-01

Dealing with the increasing number of long-term missing children and juveniles requires more precise and objective age progression techniques for the prediction of their current appearance. Our contribution provides detailed, real facial growth information used for modelling age progression during adolescence. This study was based on an evaluation of 180 three-dimensional (3D) facial scans of Czech children (23 boys, 22 girls) who were studied longitudinally from 12 to 15 years of age and thus revealed real growth-related changes. The boys underwent more marked changes than the girls, especially in the regions of the eyebrow ridges, nose and chin. Using modern geometric morphometric methods, together with their applications, we modelled the ageing and allometric trajectories for both sexes and simulated the age-progressed effects on facial scans. The facial parts that are important for facial recognition (eyes, nose, mouth and chin) all deviated less than 0.75 mm, whereas the areas with the largest deviations were situated on the marginal parts of the face. The mean error between the predicted and real facial morphology obtained by modelling the children from 12 to 15 years of age was 1.92 mm in girls and 1.86 mm in boys. This study is beneficial for forensic artists as it reduces the subjectivity of age progression methods.
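
    A minimal geometric-morphometrics-flavoured sketch related to the methods above is given below: Procrustes superimposition of a 12-year-old and a 15-year-old landmark configuration and the per-landmark residuals after alignment; the coordinates are invented, and the study itself used dense 3D scans rather than a handful of landmarks.

```python
# Procrustes superimposition of two toy landmark configurations.
import numpy as np
from scipy.spatial import procrustes

face_age12 = np.array([[0.0, 0.0, 0.0],        # e.g. nasion
                       [0.0, -45.0, 10.0],      # subnasale
                       [0.0, -70.0, 5.0],       # pogonion
                       [35.0, -10.0, -20.0],    # right exocanthion
                       [-35.0, -10.0, -20.0]])  # left exocanthion
growth = np.array([[0, 1.0, 0.5], [0, -2.5, 1.5], [0, -6.0, 2.0],
                   [1.5, -0.5, -0.5], [-1.5, -0.5, -0.5]])
face_age15 = face_age12 + growth

m1, m2, disparity = procrustes(face_age12, face_age15)
per_landmark = np.linalg.norm(m1 - m2, axis=1)   # residuals in normalised shape units
print("Procrustes disparity:", disparity)
print("per-landmark residuals:", np.round(per_landmark, 4))
```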

  7. Measuring the speed of recognising facially expressed emotions.

    PubMed

    Hildebrandt, Andrea; Schacht, Annekathrin; Sommer, Werner; Wilhelm, Oliver

    2012-01-01

Faces provide identity- and emotion-related information, basic cues for mastering social interactions. Traditional models of face recognition suggest that, after a very first initial stage, the processing streams for facial identity and expression diverge. In the present study we extended our previous multivariate investigations of face identity processing abilities to the speed of recognising facially expressed emotions. Analyses are based on a sample of N=151 young adults. First, we established a measurement model with a higher-order factor for the speed of recognising facially expressed emotions (SRE). This model has acceptable fit without specifying emotion-specific relations between indicators. Next, we assessed whether SRE can be reliably distinguished from the speed of recognising facial identity (SRI) and found latent factors for SRE and SRI to be perfectly correlated. In contrast, SRE and SRI were both only moderately related to a latent factor for the speed of recognising non-face stimuli (SRNF). We conclude that the processing of facial stimuli, and not the processing of facially expressed basic emotions, is the critical component of SRE. These findings are at variance with suggestions of separate routes for processing facial identity and emotional facial expressions and suggest much more commonality between these streams, at least as far as processing speed is concerned.

  8. Discrimination of gender using facial image with expression change

    NASA Astrophysics Data System (ADS)

    Kuniyada, Jun; Fukuda, Takahiro; Terada, Kenji

    2005-12-01

By carrying out marketing research, the managers of large department stores or small convenience stores obtain information such as the ratio of male to female visitors and their age groups, and use it to improve their management plans. However, this work is carried out manually and becomes a large burden for small stores. In this paper, the authors propose a method for discriminating between men and women by extracting differences in facial expression change from color facial images. Many methods for the automatic recognition of individuals using moving or still facial images exist in the field of image processing, but it is very difficult to discriminate gender under the influence of hairstyle, clothes, etc. Therefore, we propose a method that is not affected by individual characteristics such as the size and position of facial parts, by paying attention to changes in expression. The method requires two facial images, one with an expression and one expressionless. First, the facial surface region and the regions of facial parts such as the eyes, nose, and mouth are extracted from the facial image using hue and saturation information in the HSV color system and emphasized edge information. Next, features are extracted by calculating the rate of change of each facial part caused by the expression change. In the last step, the feature values of the input data are compared with those in the database and the gender is discriminated. Experiments were carried out for laughing and smiling expressions, and good gender discrimination results were obtained.
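
    A rough OpenCV sketch of the first step described above, extracting a candidate skin region using hue and saturation in HSV together with an edge map, is shown below; the threshold values and the input file name are assumptions chosen for illustration, and real thresholds would need tuning to the data.

```python
# HSV skin-region extraction plus edge map; thresholds and file name are illustrative.
import cv2
import numpy as np

image = cv2.imread("face.jpg")                      # hypothetical input image
hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)

# Loose skin-tone band in hue/saturation (illustrative values, not universal)
lower = np.array([0, 30, 60], dtype=np.uint8)
upper = np.array([25, 180, 255], dtype=np.uint8)
skin_mask = cv2.inRange(hsv, lower, upper)
skin_mask = cv2.morphologyEx(skin_mask, cv2.MORPH_CLOSE, np.ones((7, 7), np.uint8))

# Edge information to help delimit facial parts (eyes, nose, mouth)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 60, 150)
parts_candidates = cv2.bitwise_and(edges, edges, mask=skin_mask)

cv2.imwrite("skin_mask.png", skin_mask)
cv2.imwrite("parts_candidates.png", parts_candidates)
```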

  9. The own-sex effect in facial expression recognition.

    PubMed

    Doi, Hirokazu; Amamoto, Takaaki; Okishige, Yuuka; Kato, Mikako; Shinohara, Kazuyuki

    2010-06-02

Responses to smiling and nonsmiling expressions are influenced by the sex of both viewer and expresser. This study investigated the stage of neural processing at which the sexes of viewer and expresser modulate the recognition of smiling and nonsmiling expressions by measuring event-related potentials. The results showed that the late positive component was larger in response to the neutral expression of own-sex faces than to that of opposite-sex faces. These results indicate that the neural correlates of facial expression recognition are influenced by the sexes of both the viewer and the expresser at the stage of cognitive evaluation.

  10. Perception of temporal asymmetries in dynamic facial expressions

    PubMed Central

    Reinl, Maren; Bartels, Andreas

    2015-01-01

In the current study we examined whether timeline reversals and the emotional direction of dynamic facial expressions affect the subjective experience of human observers. We recorded natural movies of faces that increased or decreased their expressions of fear, and played them either in the natural frame order or reversed from last to first frame (reversed timeline). This led to four conditions of increasing or decreasing fear, following either the natural or the reversed temporal trajectory of facial dynamics. This 2-by-2 factorial design controlled for visual low-level properties, static visual content, and motion energy across the different factors. It allowed us to examine perceptual consequences that would occur if the timeline trajectory of facial muscle movements during the increase of an emotion is not the exact mirror of the timeline during the decrease. It additionally allowed us to study perceptual differences between increasing and decreasing emotional expressions. Perception of these time-dependent asymmetries has not yet been quantified. We found that three emotional measures, emotional intensity, artificialness of facial movement, and convincingness or plausibility of emotion portrayal, were affected by timeline reversals as well as by the emotional direction of the facial expressions. Our results imply that natural dynamic facial expressions contain temporal asymmetries, and show that deviations from the natural timeline lead to a reduction of perceived emotional intensity and convincingness, and to an increase of perceived artificialness of the dynamic facial expression. In addition, they show that decreasing facial expressions are judged as less plausible than increasing facial expressions. Our findings are relevant for both behavioral and neuroimaging studies, as processing and perception are influenced by temporal asymmetries. PMID:26300807

  11. Face Processing in Children with Autism Spectrum Disorder: Independent or Interactive Processing of Facial Identity and Facial Expression?

    ERIC Educational Resources Information Center

    Krebs, Julia F.; Biswas, Ajanta; Pascalis, Olivier; Kamp-Becker, Inge; Remschmidt, Helmuth; Schwarzer, Gudrun

    2011-01-01

    The current study investigated if deficits in processing emotional expression affect facial identity processing and vice versa in children with autism spectrum disorder. Children with autism and IQ and age matched typically developing children classified faces either by emotional expression, thereby ignoring facial identity or by facial identity…

  12. Visualization and analysis of 3D gene expression patterns in zebrafish using web services

    NASA Astrophysics Data System (ADS)

    Potikanond, D.; Verbeek, F. J.

    2012-01-01

The analysis of gene expression patterns plays an important role in developmental biology and molecular genetics. Visualizing both quantitative and spatio-temporal aspects of gene expression patterns together with referenced anatomical structures of a model organism in 3D can help identify how a group of genes is expressed at a certain location at a particular developmental stage of an organism. In this paper, we present an approach to provide an online visualization of gene expression data in zebrafish (Danio rerio) within 3D reconstruction models of zebrafish at different developmental stages. We developed web services that provide programmable access to the 3D reconstruction data and spatio-temporal gene expression data maintained in our local repositories. To demonstrate this work, we developed a web application that uses these web services to retrieve data from our local information systems. The web application also retrieves relevant analyses of microarray gene expression data from an external community resource, i.e. the ArrayExpress Atlas. All relevant gene expression pattern data are subsequently integrated with the reconstruction data of the zebrafish atlas using ontology-based mapping. The resulting visualization provides quantitative and spatial information on patterns of gene expression in a 3D graphical representation of the zebrafish atlas at a certain developmental stage. To deliver the visualization to the user, we developed a Java-based 3D viewer client that can be integrated into a web interface, allowing the user to visualize the integrated information over the Internet.

  13. Contextual interference processing during fast categorisations of facial expressions.

    PubMed

    Frühholz, Sascha; Trautmann-Lengsfeld, Sina A; Herrmann, Manfred

    2011-09-01

    We examined interference effects of emotionally associated background colours during fast valence categorisations of negative, neutral and positive expressions. According to implicitly learned colour-emotion associations, facial expressions were presented with colours that either matched the valence of these expressions or not. Experiment 1 included infrequent non-matching trials and Experiment 2 a balanced ratio of matching and non-matching trials. Besides general modulatory effects of contextual features on the processing of facial expressions, we found differential effects depending on the valence of the target facial expressions. Whereas performance accuracy was mainly affected for neutral expressions, performance speed was specifically modulated by emotional expressions, indicating some susceptibility of emotional expressions to contextual features. Experiment 3 used two further colour-emotion combinations, but revealed only marginal interference effects, most likely due to missing colour-emotion associations. The results are discussed with respect to inherent processing demands of emotional and neutral expressions and their susceptibility to contextual interference.

  14. Automatic Facial Expression Recognition and Operator Functional State

    NASA Technical Reports Server (NTRS)

    Blanson, Nina

    2011-01-01

    The prevalence of human error in safety-critical occupations remains a major challenge to mission success despite increasing automation in control processes. Although various methods have been proposed to prevent incidences of human error, none of these have been developed to employ the detection and regulation of Operator Functional State (OFS), or the optimal condition of the operator while performing a task, in work environments due to drawbacks such as obtrusiveness and impracticality. A video-based system with the ability to infer an individual's emotional state from facial feature patterning mitigates some of the problems associated with other methods of detecting OFS, like obtrusiveness and impracticality in integration with the mission environment. This paper explores the utility of facial expression recognition as a technology for inferring OFS by first expounding on the intricacies of OFS and the scientific background behind emotion and its relationship with an individual's state. Then, descriptions of the feedback loop and the emotion protocols proposed for the facial recognition program are explained. A basic version of the facial expression recognition program uses Haar classifiers and OpenCV libraries to automatically locate key facial landmarks during a live video stream. Various methods of creating facial expression recognition software are reviewed to guide future extensions of the program. The paper concludes with an examination of the steps necessary in the research of emotion and recommendations for the creation of an automatic facial expression recognition program for use in real-time, safety-critical missions.
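
    A minimal sketch of the basic pipeline described above is shown below: OpenCV Haar cascades locating face and eye regions in a live video stream. The cascade files ship with the OpenCV distribution; the emotion-inference and OFS feedback logic are omitted, and the window name and detector thresholds are arbitrary.

      import cv2

      face_cascade = cv2.CascadeClassifier(
          cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
      eye_cascade = cv2.CascadeClassifier(
          cv2.data.haarcascades + "haarcascade_eye.xml")

      cap = cv2.VideoCapture(0)  # default webcam
      while True:
          ok, frame = cap.read()
          if not ok:
              break
          gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
          for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
              cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)
              roi = gray[y:y + h, x:x + w]
              for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(roi):
                  cv2.rectangle(frame, (x + ex, y + ey),
                                (x + ex + ew, y + ey + eh), (0, 255, 0), 2)
          cv2.imshow("landmarks", frame)
          if cv2.waitKey(1) & 0xFF == ord("q"):
              break
      cap.release()
      cv2.destroyAllWindows()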

  15. Automatic Facial Expression Recognition and Operator Functional State

    NASA Technical Reports Server (NTRS)

    Blanson, Nina

    2012-01-01

    The prevalence of human error in safety-critical occupations remains a major challenge to mission success despite increasing automation in control processes. Although various methods have been proposed to prevent incidences of human error, none of these have been developed to employ the detection and regulation of Operator Functional State (OFS), or the optimal condition of the operator while performing a task, in work environments due to drawbacks such as obtrusiveness and impracticality. A video-based system with the ability to infer an individual's emotional state from facial feature patterning mitigates some of the problems associated with other methods of detecting OFS, like obtrusiveness and impracticality in integration with the mission environment. This paper explores the utility of facial expression recognition as a technology for inferring OFS by first expounding on the intricacies of OFS and the scientific background behind emotion and its relationship with an individual's state. Then, descriptions of the feedback loop and the emotion protocols proposed for the facial recognition program are explained. A basic version of the facial expression recognition program uses Haar classifiers and OpenCV libraries to automatically locate key facial landmarks during a live video stream. Various methods of creating facial expression recognition software are reviewed to guide future extensions of the program. The paper concludes with an examination of the steps necessary in the research of emotion and recommendations for the creation of an automatic facial expression recognition program for use in real-time, safety-critical missions.

  16. Children's Representations of Facial Expression and Identity: Identity-Contingent Expression Aftereffects

    ERIC Educational Resources Information Center

    Vida, Mark D.; Mondloch, Catherine J.

    2009-01-01

    This investigation used adaptation aftereffects to examine developmental changes in the perception of facial expressions. Previous studies have shown that adults' perceptions of ambiguous facial expressions are biased following adaptation to intense expressions. These expression aftereffects are strong when the adapting and probe expressions share…

  17. In vivo biomarker expression patterns are preserved in 3D cultures of Prostate Cancer

    SciTech Connect

    Windus, Louisa C.E.; Kiss, Debra L.; Glover, Tristan; Avery, Vicky M.

    2012-11-15

    Here we report that Prostate Cancer (PCa) cell-lines DU145, PC3, LNCaP and RWPE-1 grown in 3D matrices, in contrast to conventional 2D monolayers, display distinct differences in cell morphology, proliferation and expression of important biomarker proteins associated with cancer progression. Consistent with in vivo growth rates, in 3D cultures all PCa cell-lines were found to proliferate at significantly lower rates in comparison to their 2D counterparts. Moreover, when grown in a 3D matrix, metastatic PC3 cell-lines were found to mimic more precisely protein expression patterns of metastatic tumour formation as found in vivo. In comparison to the prostate epithelial cell-line RWPE-1, metastatic PC3 cell-lines exhibited a down-regulation of E-cadherin and α6 integrin expression and an up-regulation of N-cadherin, Vimentin and β1 integrin expression, and re-expressed non-transcriptionally active AR. In comparison to the non-invasive LNCaP cell-lines, PC3 cells were found to have an up-regulation of the chemokine receptor CXCR4, consistent with a metastatic phenotype. In 2D cultures, there was little distinction in protein expression between metastatic, non-invasive and epithelial cells. These results suggest that 3D cultures are more representative of in vivo morphology and may serve as a more biologically relevant model in the drug discovery pipeline. Highlights: • We developed and optimised 3D culturing techniques for Prostate Cancer cell-lines. • We investigated biomarker expression in 2D versus 3D culture techniques. • Metastatic PC3 cells re-expressed non-transcriptionally active androgen receptor. • Metastatic PCa cell lines retain in vivo-like antigenic profiles in 3D cultures.

  18. Weapon identification using antemortem CT with 3D reconstruction, is it always possible?--A report in a case of facial blunt and sharp injuries using an ashtray.

    PubMed

    Aromatario, Mariarosaria; Cappelletti, Simone; Bottoni, Edoardo; Fiore, Paola Antonella; Ciallella, Costantino

    2016-01-01

    An interesting case of homicide involving the use of a heavy glass ashtray is described. The victim, an 81-year-old woman, survived for a few days and died in hospital. The external examination of the victim showed extensive blunt and sharp facial injuries and defense injuries on both hands. The autopsy examination showed numerous tears on the face, as well as multiple fractures of the facial bones. A computed tomography scan with 3D reconstruction, performed in hospital before death, was used to identify the weapon used for the crime. In recent years new diagnostic tools such as computed tomography have been widely used, especially in cases involving sharp and blunt force. Computed tomography has proven to be very valuable in analyzing fractures of the cranial theca for forensic purposes; in particular, antemortem computed tomography with 3D reconstruction is becoming an important tool in the process of weapon identification, thanks to the possibility of identifying and comparing the shape of the object used to commit the crime, the injury, and the objects found during the investigation. No previous reports on the use of this technique for the weapon identification process in cases of isolated facial fractures have been described. We report a case in which, despite the correct use of this technique, it was not possible for the forensic pathologist to identify the weapon used to commit the crime. The authors want to highlight the limits encountered in the use of computed tomography with 3D reconstruction as a tool for weapon identification when facial fractures have occurred.

  19. Top-down guidance in visual search for facial expressions.

    PubMed

    Hahn, Sowon; Gronlund, Scott D

    2007-02-01

    Using a visual search paradigm, we investigated how a top-down goal modified attentional bias for threatening facial expressions. In two experiments, participants searched for a facial expression either based on stimulus characteristics or a top-down goal. In Experiment 1 participants searched for a discrepant facial expression in a homogenous crowd of faces. Consistent with previous research, we obtained a shallower response time (RT) slope when the target face was angry than when it was happy. In Experiment 2, participants searched for a specific type of facial expression (allowing a top-down goal). When the display included a target, we found a shallower RT slope for the angry than for the happy face search. However, when an angry or happy face was present in the display in opposition to the task goal, we obtained equivalent RT slopes, suggesting that the mere presence of an angry face in opposition to the task goal did not support the well-known angry face superiority effect. Furthermore, RT distribution analyses supported the special status of an angry face only when it was combined with the top-down goal. On the basis of these results, we suggest that a threatening facial expression may guide attention as a high-priority stimulus in the absence of a specific goal; however, in the presence of a specific goal, the efficiency of facial expression search is dependent on the combined influence of a top-down goal and the stimulus characteristics.

  20. NON-INVASIVE 3D FACIAL ANALYSIS AND SURFACE ELECTROMYOGRAPHY DURING FUNCTIONAL PRE-ORTHODONTIC THERAPY: A PRELIMINARY REPORT

    PubMed Central

    Tartaglia, Gianluca M.; Grandi, Gaia; Mian, Fabrizio; Sforza, Chiarella; Ferrario, Virgilio F.

    2009-01-01

    Objectives: Functional orthodontic devices can modify oral function, thus permitting more adequate growth processes. The assessment of their effects should include both facial morphology and muscle function. This preliminary study investigated whether a preformed functional orthodontic device could induce variations in facial morphology and function along with correction of oral dysfunction in a group of orthodontic patients in the mixed and early permanent dentitions. Material and Methods: The three-dimensional coordinates of 50 facial landmarks (forehead, eyes, nose, cheeks, mouth, jaw and ears) were collected in 10 orthodontic male patients aged 8-13 years, and in 89 healthy reference boys of the same age. Soft tissue facial angles, distances, and ratios were computed. Surface electromyography of the masseter and temporalis muscles was performed, and standardized symmetry, muscular torque and activity were calculated. Soft-tissue facial modifications were analyzed non-invasively before and after a 6-month treatment with a functional device. Comparisons were made with z-scores and paired Student's t-tests. Results: The 6-month treatment stimulated mandibular growth in the anterior and inferior directions, with significant variations in three-dimensional facial divergence and facial convexity. The modifications were larger in the patients than in the reference children. On several occasions, the discrepancies relative to the norm were no longer significant after treatment. No significant variations in standardized muscular activity were found. Conclusions: Preliminary results showed that the continuous and correct use of the functional device induced measurable intraoral (dental arches) and extraoral (face) morphological modifications. The device did not modify the functional equilibrium of the masticatory muscles. PMID:19936531

  1. Do Dynamic Compared to Static Facial Expressions of Happiness and Anger Reveal Enhanced Facial Mimicry?

    PubMed Central

    Rymarczyk, Krystyna; Żurawski, Łukasz; Jankowiak-Siuda, Kamila; Szatkowska, Iwona

    2016-01-01

    Facial mimicry is the spontaneous response to others’ facial expressions by mirroring or matching the interaction partner. Recent evidence suggested that mimicry may not only be an automatic reaction but could depend on many factors, including social context, the type of task in which the participant is engaged, or stimulus properties (dynamic vs static presentation). In the present study, we investigated the impact of dynamic facial expression and sex differences on facial mimicry and judgment of emotional intensity. Electromyographic activity was recorded from the corrugator supercilii, zygomaticus major, and orbicularis oculi muscles during passive observation of static and dynamic images of happiness and anger. The ratings of the emotional intensity of facial expressions were also analysed. As predicted, dynamic expressions were rated as more intense than static ones. Compared to static images, dynamic displays of happiness also evoked stronger activity in the zygomaticus major and orbicularis oculi, suggesting that subjects experienced positive emotion. No muscles showed mimicry activity in response to angry faces. Moreover, we found that women exhibited greater zygomaticus major muscle activity in response to dynamic happiness stimuli than static stimuli. Our data support the hypothesis that people mimic positive emotions and confirm the importance of dynamic stimuli in some emotional processing. PMID:27390867

  2. Do Dynamic Compared to Static Facial Expressions of Happiness and Anger Reveal Enhanced Facial Mimicry?

    PubMed

    Rymarczyk, Krystyna; Żurawski, Łukasz; Jankowiak-Siuda, Kamila; Szatkowska, Iwona

    2016-01-01

    Facial mimicry is the spontaneous response to others' facial expressions by mirroring or matching the interaction partner. Recent evidence suggested that mimicry may not only be an automatic reaction but could depend on many factors, including social context, the type of task in which the participant is engaged, or stimulus properties (dynamic vs static presentation). In the present study, we investigated the impact of dynamic facial expression and sex differences on facial mimicry and judgment of emotional intensity. Electromyographic activity was recorded from the corrugator supercilii, zygomaticus major, and orbicularis oculi muscles during passive observation of static and dynamic images of happiness and anger. The ratings of the emotional intensity of facial expressions were also analysed. As predicted, dynamic expressions were rated as more intense than static ones. Compared to static images, dynamic displays of happiness also evoked stronger activity in the zygomaticus major and orbicularis oculi, suggesting that subjects experienced positive emotion. No muscles showed mimicry activity in response to angry faces. Moreover, we found that women exhibited greater zygomaticus major muscle activity in response to dynamic happiness stimuli than static stimuli. Our data support the hypothesis that people mimic positive emotions and confirm the importance of dynamic stimuli in some emotional processing.

  3. Rapid Facial Reactions to Emotional Facial Expressions in Typically Developing Children and Children with Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Beall, Paula M.; Moody, Eric J.; McIntosh, Daniel N.; Hepburn, Susan L.; Reed, Catherine L.

    2008-01-01

    Typical adults mimic facial expressions within 1000ms, but adults with autism spectrum disorder (ASD) do not. These rapid facial reactions (RFRs) are associated with the development of social-emotional abilities. Such interpersonal matching may be caused by motor mirroring or emotional responses. Using facial electromyography (EMG), this study…

  4. Looking with different eyes: The psychological meaning of categorisation goals moderates facial reactivity to facial expressions.

    PubMed

    van Dillen, Lotte F; Harris, Lasana T; van Dijk, Wilco W; Rotteveel, Mark

    2015-01-01

    In the present research we examined whether the psychological meaning of people's categorisation goals affects facial muscle activity in response to facial expressions of emotion. We had participants associate eye colour (blue, brown) with either a personality trait (extraversion) or a physical trait (light frequency) and asked them to use these associations in a speeded categorisation task of angry, disgusted, happy and neutral faces while assessing participants' response times and facial muscle activity. We predicted that participants would respond differentially to the emotional faces when the categorisation criteria allowed for inferences about a target's thoughts, feelings or behaviour (i.e., when categorising extraversion), but not when these lacked any social meaning (i.e., when categorising light frequency). Indeed, emotional faces triggered facial reactions to facial expressions when participants categorised extraversion, but not when they categorised light frequency. In line with this, only when categorising extraversion did participants' response times indicate a negativity bias replicating previous results. Together, these findings provide further evidence for the contextual nature of people's selective responses to the emotions expressed by others.

  5. Morphologic Analysis of the Temporomandibular Joint Between Patients With Facial Asymmetry and Asymptomatic Subjects by 2D and 3D Evaluation

    PubMed Central

    Zhang, Yuan-Li; Song, Jin-Lin; Xu, Xian-Chao; Zheng, Lei-Lei; Wang, Qing-Yuan; Fan, Yu-Bo; Liu, Zhan

    2016-01-01

    Abstract Signs and symptoms of temporomandibular joint (TMJ) dysfunction are commonly found in patients with facial asymmetry. Previous studies on the TMJ position have been limited to 2-dimensional (2D) radiographs, computed tomography (CT), or cone-beam computed tomography (CBCT). The purpose of this study was to compare the differences of TMJ position by using 2D CBCT and 3D model measurement methods. In addition, the differences of TMJ positions between patients with facial asymmetry and asymptomatic subjects were investigated. We prospectively recruited 5 patients (cases, mean age, 24.8 ± 2.9 years) diagnosed with facial asymmetry and 5 asymptomatic subjects (controls, mean age, 26 ± 1.2 years). The TMJ spaces, condylar and ramus angles were assessed by using 2D and 3D methods. The 3D models of mandible, maxilla, and teeth were reconstructed with the 3D image software. The variables in each group were assessed by t-test and the level of significance was 0.05. There was a significant difference in the horizontal condylar angle (HCA), coronal condylar angle (CCA), sagittal ramus angle (SRA), medial joint space (MJS), lateral joint space (LJS), superior joint space (SJS), and anterior joint space (AJS) measured in the 2D CBCT and in the 3D models (P < 0.05). The case group had significantly smaller SJS compared to the controls on both nondeviation side (P = 0.009) and deviation side (P = 0.004). In the case group, the nondeviation SRA was significantly larger than the deviation side (P = 0.009). There was no significant difference in the coronal condylar width (CCW) in either group. In addition, the anterior disc displacement (ADD) was more likely to occur on the deviated side in the case group. In conclusion, the 3D measurement method is more accurate and effective for clinicians to investigate the morphology of TMJ than the 2D method. PMID:27043669

  6. Impaired holistic coding of facial expression and facial identity in congenital prosopagnosia.

    PubMed

    Palermo, Romina; Willis, Megan L; Rivolta, Davide; McKone, Elinor; Wilson, C Ellie; Calder, Andrew J

    2011-04-01

    We test 12 individuals with congenital prosopagnosia (CP), who replicate a common pattern of showing severe difficulty in recognising facial identity in conjunction with normal recognition of facial expressions (both basic and 'social'). Strength of holistic processing was examined using standard expression composite and identity composite tasks. Compared to age- and sex-matched controls, group analyses demonstrated that CPs showed weaker holistic processing, for both expression and identity information. Implications are (a) normal expression recognition in CP can derive from compensatory strategies (e.g., over-reliance on non-holistic cues to expression); (b) the split between processing of expression and identity information may take place after a common stage of holistic processing; and (c) contrary to a recent claim, holistic processing of identity is functionally involved in face identification ability.

  7. Recognition, Expression, and Understanding Facial Expressions of Emotion in Adolescents with Nonverbal and General Learning Disabilities

    ERIC Educational Resources Information Center

    Bloom, Elana; Heath, Nancy

    2010-01-01

    Children with nonverbal learning disabilities (NVLD) have been found to be worse at recognizing facial expressions than children with verbal learning disabilities (LD) and without LD. However, little research has been done with adolescents. In addition, expressing and understanding facial expressions is yet to be studied among adolescents with LD…

  8. Emotion Unchained: Facial Expression Modulates Gaze Cueing under Cognitive Load

    PubMed Central

    Petrucci, Manuel

    2016-01-01

    Direction of eye gaze cues spatial attention, and typically this cueing effect is not modulated by the expression of a face unless top-down processes are explicitly or implicitly involved. To investigate the role of cognitive control on gaze cueing by emotional faces, participants performed a gaze cueing task with happy, angry, or neutral faces under high (i.e., counting backward by 7) or low cognitive load (i.e., counting forward by 2). Results show that high cognitive load enhances gaze cueing effects for angry facial expressions. In addition, cognitive load reduces gaze cueing for neutral faces, whereas happy facial expressions and gaze affected object preferences regardless of load. This evidence clearly indicates a differential role of cognitive control in processing gaze direction and facial expression, suggesting that under typical conditions, when we shift attention based on social cues from another person, cognitive control processes are used to reduce interference from emotional information. PMID:27959925

  9. Regional Brain Responses Are Biased Toward Infant Facial Expressions Compared to Adult Facial Expressions in Nulliparous Women

    PubMed Central

    Zhang, Dajun; Wei, Dongtao; Qiao, Lei; Wang, Xiangpeng; Che, Xianwei

    2016-01-01

    Recent neuroimaging studies suggest that neutral infant faces compared to neutral adult faces elicit greater activity in brain areas associated with face processing, attention, empathic response, reward, and movement. However, whether infant facial expressions evoke larger brain responses than adult facial expressions remains unclear. Here, we performed event-related functional magnetic resonance imaging in nulliparous women while they were presented with images of matched unfamiliar infant and adult facial expressions (happy, neutral, and uncomfortable/sad) in a pseudo-randomized order. We found that the bilateral fusiform and right lingual gyrus were overall more activated during the presentation of infant facial expressions compared to adult facial expressions. Uncomfortable infant faces compared to sad adult faces evoked greater activation in the bilateral fusiform gyrus, precentral gyrus, postcentral gyrus, posterior cingulate cortex-thalamus, and precuneus. Neutral infant faces activated larger brain responses in the left fusiform gyrus compared to neutral adult faces. Happy infant faces compared to happy adult faces elicited larger responses in areas of the brain associated with emotion and reward processing using a more liberal threshold of p < 0.005 uncorrected. Furthermore, the level of the test subjects’ Interest-In-Infants was positively associated with the intensity of right fusiform gyrus response to infant faces and uncomfortable infant faces compared to sad adult faces. In addition, the Perspective Taking subscale score on the Interpersonal Reactivity Index-Chinese was significantly correlated with precuneus activity during uncomfortable infant faces compared to sad adult faces. Our findings suggest that regional brain areas may bias cognitive and emotional responses to infant facial expressions compared to adult facial expressions among nulliparous women, and this bias may be modulated by individual differences in Interest-In-Infants and

  10. Unseen facial and bodily expressions trigger fast emotional reactions

    PubMed Central

    Tamietto, Marco; Castelli, Lorys; Vighetti, Sergio; Perozzo, Paola; Geminiani, Giuliano; Weiskrantz, Lawrence; de Gelder, Beatrice

    2009-01-01

    The spontaneous tendency to synchronize our facial expressions with those of others is often termed emotional contagion. It is unclear, however, whether emotional contagion depends on visual awareness of the eliciting stimulus and which processes underlie the unfolding of expressive reactions in the observer. It has been suggested either that emotional contagion is driven by motor imitation (i.e., mimicry), or that it is one observable aspect of the emotional state arising when we see the corresponding emotion in others. Emotional contagion reactions to different classes of consciously seen and “unseen” stimuli were compared by presenting pictures of facial or bodily expressions either to the intact or blind visual field of two patients with unilateral destruction of the visual cortex and ensuing phenomenal blindness. Facial reactions were recorded using electromyography, and arousal responses were measured with pupil dilatation. Passive exposure to unseen expressions evoked faster facial reactions and higher arousal compared with seen stimuli, therefore indicating that emotional contagion occurs also when the triggering stimulus cannot be consciously perceived because of cortical blindness. Furthermore, stimuli that are very different in their visual characteristics, such as facial and bodily gestures, induced highly similar expressive responses. This shows that the patients did not simply imitate the motor pattern observed in the stimuli, but resonated to their affective meaning. Emotional contagion thus represents an instance of truly affective reactions that may be mediated by visual pathways of old evolutionary origin bypassing cortical vision while still providing a cornerstone for emotion communication and affect sharing. PMID:19805044

  11. Training Facial Expression Production in Children on the Autism Spectrum

    ERIC Educational Resources Information Center

    Gordon, Iris; Pierce, Matthew D.; Bartlett, Marian S.; Tanaka, James W.

    2014-01-01

    Children with autism spectrum disorder (ASD) show deficits in their ability to produce facial expressions. In this study, a group of children with ASD and IQ-matched, typically developing (TD) children were trained to produce "happy" and "angry" expressions with the FaceMaze computer game. FaceMaze uses an automated computer…

  12. Comparison of emotion recognition from facial expression and music.

    PubMed

    Gaspar, Tina; Labor, Marina; Jurić, Iva; Dumancić, Dijana; Ilakovac, Vesna; Heffer, Marija

    2011-01-01

    The recognition of basic emotions in everyday communication involves the interpretation of different visual and auditory cues. The ability to recognize emotions is not easily determined, as their presentation is usually very short (micro expressions), and the recognition itself does not have to be a conscious process. We assumed that recognition of emotions from facial expressions has been selected over the recognition of emotions communicated through music. In order to compare the success rate in recognizing emotions presented as facial expressions or in classical music works, we conducted a survey which included 90 elementary school and 87 high school students from Osijek (Croatia). The participants had to match 8 photographs of different emotions expressed on the face and 8 pieces of classical music works with 8 offered emotions. The recognition of emotions expressed through classical music pieces was significantly less successful than the recognition of emotional facial expressions. The high school students were significantly better at recognizing facial emotions than the elementary school students, and girls were better than boys. The success rate in recognizing emotions from music pieces was associated with higher grades in mathematics. Basic emotions are far better recognized when presented on human faces than in music, possibly because the understanding of facial emotions is one of the oldest communication skills in human society. The female advantage in emotion recognition was presumably selected for because of the necessity of communicating with newborns during early development. Proficiency in recognizing the emotional content of music and mathematical skills probably share some general cognitive skills such as attention, memory and motivation. Music pieces are probably processed differently in the brain than facial expressions and, consequently, evaluated differently as relevant emotional cues.

  13. [The neural networks of facial expression].

    PubMed

    Gordillo, F; Mestas, L; Castillo, G; Perez, M A; Lopez, R M; Arana, J M

    2017-02-01

    Introduction. Face perception involves a wide network of connections between cortical and subcortical regions that exchange and synchronize information through white matter tracts. This precise communication system can be affected both through the structures themselves and through the pathways that connect them. Aims. To delimit the neural substrate underlying the perception of facial expression and to analyze the different factors that modulate the integrity of this neural network, with the aim of proposing improvements to rehabilitation programmes. Development. When the complex network of connections involved in the perception of facial expression is altered by trauma, neurodegenerative pathologies, or developmental disorders, or even by social isolation or negative contexts, the ability to interact adaptively with the environment also deteriorates. Conclusions. Restoring the integrity of the neural network responsible for processing facial expression requires taking into account different variables that, to a greater or lesser degree, have been shown to be able to modify the structure or functionality of neural networks, such as aerobic training, transcranial magnetic stimulation, transcranial electrical stimulation and learning, although these variables would be conditioned by age, the type and course of the disorder, and the context in which it arose, which raises the need for rehabilitation protocols that are tailored and oriented toward delimiting the neural substrate of the deficit.

  14. Warsaw set of emotional facial expression pictures: a validation study of facial display photographs

    PubMed Central

    Olszanowski, Michal; Pochwatko, Grzegorz; Kuklinski, Krzysztof; Scibor-Rylski, Michal; Lewinski, Peter; Ohme, Rafal K.

    2015-01-01

    Emotional facial expressions play a critical role in theories of emotion and figure prominently in research on almost every aspect of emotion. This article provides a background for a new database of basic emotional expressions. The goal in creating this set was to provide high quality photographs of genuine facial expressions. Thus, after proper training, participants were inclined to express “felt” emotions. The novel approach taken in this study was also used to establish whether a given expression was perceived as intended by untrained judges. The judgment task for perceivers was designed to be sensitive to subtle changes in meaning caused by the way an emotional display was evoked and expressed. Consequently, this allowed us to measure the purity and intensity of emotional displays, which are parameters that validation methods used by other researchers do not capture. The final set is comprised of those pictures that received the highest recognition marks (e.g., accuracy with intended display) from independent judges, totaling 210 high quality photographs of 30 individuals. Descriptions of the accuracy, intensity, and purity of the displayed emotion, as well as FACS AU codes, are provided for each picture. Given the unique methodology applied to gathering and validating this set of pictures, it may be a useful tool for research using face stimuli. The Warsaw Set of Emotional Facial Expression Pictures (WSEFEP) is freely accessible to the scientific community for non-commercial use by request at http://www.emotional-face.org. PMID:25601846

  15. Training facial expression production in children on the autism spectrum.

    PubMed

    Gordon, Iris; Pierce, Matthew D; Bartlett, Marian S; Tanaka, James W

    2014-10-01

    Children with autism spectrum disorder (ASD) show deficits in their ability to produce facial expressions. In this study, a group of children with ASD and IQ-matched, typically developing (TD) children were trained to produce "happy" and "angry" expressions with the FaceMaze computer game. FaceMaze uses an automated computer recognition system that analyzes the child's facial expression in real time. Before and after playing the Angry and Happy versions of FaceMaze, children posed "happy" and "angry" expressions. Naïve raters judged the post-FaceMaze "happy" and "angry" expressions of the ASD group as higher in quality than their pre-FaceMaze productions. Moreover, the post-game expressions of the ASD group were rated as equal in quality as the expressions of the TD group.

  16. Visual decodification of some facial expressions through microimitation.

    PubMed

    Ruggieri, V; Fiorenza, M; Sabatini, N

    1986-04-01

    We examined the level of muscular tension of the mentalis muscle in 36 graphic design students at rest and during the presentation of three slides reproducing facial expressions. Analysis showed an increase in the myographic level of the mentalis muscle from the third second of measurement onwards after the presentation of the slide in which contraction of the chin was involved. We interpret this result by hypothesizing that the decodification of some facial expressions is realized through a microreproduction of the stimulus by the decodifying subject.

  17. Moving to continuous facial expression space using the MPEG-4 facial definition parameter (FDP) set

    NASA Astrophysics Data System (ADS)

    Karpouzis, Kostas; Tsapatsoulis, Nicolas; Kollias, Stefanos D.

    2000-06-01

    Research in facial expression has concluded that at least six emotions, conveyed by human faces, are universally associated with distinct expressions. Sadness, anger, joy, fear, disgust and surprise are categories of expressions that are recognizable across cultures. In this work we form a relation between the description of the universal expressions and the MPEG-4 Facial Definition Parameter Set (FDP). We also investigate the relation between the movement of basic FDPs and the parameters that describe emotion-related words according to some classical psychological studies. In particular Whissel suggested that emotions are points in a space, which seem to occupy two dimensions: activation and evaluation. We show that some of the MPEG-4 Facial Animation Parameters (FAPs), approximated by the motion of the corresponding FDPs, can be combined by means of a fuzzy rule system to estimate the activation parameter. In this way variations of the six archetypal emotions can be achieved. Moreover, Plutchik concluded that emotion terms are unevenly distributed through the space defined by dimensions like Whissel's; instead they tend to form an approximately circular pattern, called 'emotion wheel,' modeled using an angular measure. The 'emotion wheel' can be defined as a reference for creating intermediate expressions from the universal ones, by interpolating the movement of dominant FDP points between neighboring basic expressions. By exploiting the relation between the movement of the basic FDP point and the activation and angular parameters we can model more emotions than the primary ones and achieve efficient recognition in video sequences.
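
    The sketch below is a simplified stand-in for the approach described above: normalized FAP magnitudes (approximated from FDP motion) are combined with ad hoc weights into an activation estimate, and an intermediate expression is obtained by interpolating FAP profiles between two neighbouring archetypal emotions along the wheel angle. All weights, FAP names and archetype profiles are invented for illustration and are not the fuzzy rule system of the paper.

      import numpy as np

      FAP_WEIGHTS = {"raise_l_i_eyebrow": 0.3, "open_jaw": 0.4, "stretch_l_cornerlip": 0.3}

      ARCHETYPES = {      # wheel angle (degrees) -> illustrative FAP profile
          "joy":      (0.0,  np.array([0.2, 0.3, 0.9])),
          "surprise": (60.0, np.array([0.9, 0.8, 0.1])),
      }

      def activation(fap_values):
          """Weighted combination of normalized FAP magnitudes (values in 0..1)."""
          return sum(FAP_WEIGHTS[name] * abs(v) for name, v in fap_values.items())

      def intermediate_expression(angle_deg):
          """Linear interpolation of FAP profiles between two neighbouring archetypes."""
          (a0, p0), (a1, p1) = ARCHETYPES["joy"], ARCHETYPES["surprise"]
          t = (angle_deg - a0) / (a1 - a0)
          return (1 - t) * p0 + t * p1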

  18. Does the Organization of Emotional Expression Change over Time? Facial Expressivity from 4 to 12 Months

    ERIC Educational Resources Information Center

    Bennett, David S.; Bendersky, Margaret; Lewis, Michael

    2005-01-01

    Differentiation models contend that the organization of facial expressivity increases during infancy. Accordingly, infants are believed to exhibit increasingly specific facial expressions in response to stimuli as a function of development. This study tested this hypothesis in a sample of 151 infants (83 boys and 68 girls) observed in 4 situations…

  19. Increased lipid accumulation and adipogenic gene expression of adipocytes in 3D bioprinted nanocellulose scaffolds.

    PubMed

    Henriksson, I; Gatenholm, P; Hägg, D A

    2017-02-21

    Compared to standard 2D culture systems, new methods for 3D cell culture of adipocytes could provide more physiologically accurate data and a deeper understanding of metabolic diseases such as diabetes. By resuspending living cells in a bioink of nanocellulose and hyaluronic acid, we were able to print 3D scaffolds with uniform cell distribution. After one week in culture, cell viability was 95%, and after two weeks the cells displayed a more mature phenotype with larger lipid droplets than standard 2D cultured cells. Unlike cells in 2D culture, the 3D bioprinted cells did not detach upon lipid accumulation. After two weeks, the gene expression of the adipogenic marker genes PPARγ and FABP4 was increased 2.0- and 2.2-fold, respectively, for cells in 3D bioprinted constructs compared with 2D cultured cells. Our 3D bioprinted culture system produces better adipogenic differentiation of mesenchymal stem cells and a more mature cell phenotype than conventional 2D culture systems.

  20. Emotional facial expressions reduce neural adaptation to face identity.

    PubMed

    Gerlicher, Anna M V; van Loon, Anouk M; Scholte, H Steven; Lamme, Victor A F; van der Leij, Andries R

    2014-05-01

    In human social interactions, facial emotional expressions are a crucial source of information. Repeatedly presented information typically leads to an adaptation of neural responses. However, processing seems sustained with emotional facial expressions. Therefore, we tested whether sustained processing of emotional expressions, especially threat-related expressions, would attenuate neural adaptation. Neutral and emotional expressions (happy, mixed and fearful) of same and different identity were presented at 3 Hz. We used electroencephalography to record the evoked steady-state visual potentials (ssVEP) and tested to what extent the ssVEP amplitude adapts to the same when compared with different face identities. We found adaptation to the identity of a neutral face. However, for emotional faces, adaptation was reduced, decreasing linearly with negative valence, with the least adaptation to fearful expressions. This short and straightforward method may prove to be a valuable new tool in the study of emotional processing.
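
    As an illustration of the ssVEP measure, the following sketch extracts the spectral amplitude at the 3 Hz stimulation frequency from a single EEG epoch; the sampling rate, windowing and scaling choices are assumptions, not the authors' analysis pipeline.

      import numpy as np

      def ssvep_amplitude(epoch, sfreq=512.0, stim_freq=3.0):
          """epoch: 1-D EEG time series; returns spectral amplitude at stim_freq."""
          epoch = np.asarray(epoch, float)
          n = len(epoch)
          spectrum = np.abs(np.fft.rfft(epoch * np.hanning(n))) / n
          freqs = np.fft.rfftfreq(n, d=1.0 / sfreq)
          return spectrum[np.argmin(np.abs(freqs - stim_freq))]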

  1. Processing emotional facial expressions: The role of anxiety and awareness

    PubMed Central

    FOX, ELAINE

    2012-01-01

    In this paper, the role of self-reported anxiety and degree of conscious awareness as determinants of the selective processing of affective facial expressions is investigated. In two experiments, an attentional bias toward fearful facial expressions was observed, although this bias was apparent only for those reporting high levels of trait anxiety and only when the emotional face was presented in the left visual field. This pattern was especially strong when the participants were unaware of the presence of the facial stimuli. In Experiment 3, a patient with right-hemisphere brain damage and visual extinction was presented with photographs of faces and fruits on unilateral and bilateral trials. On bilateral trials, it was found that faces produced less extinction than did fruits. Moreover, faces portraying a fearful or a happy expression tended to produce less extinction than did neutral expressions. This suggests that emotional facial expressions may be less dependent on attention to achieve awareness. The implications of these results for understanding the relations between attention, emotion, and anxiety are discussed. PMID:12452584

  2. Neonatal pain facial expression: evaluating the primal face of pain.

    PubMed

    Schiavenato, Martin; Byers, Jacquie F; Scovanner, Paul; McMahon, James M; Xia, Yinglin; Lu, Naiji; He, Hua

    2008-08-31

    The primal face of pain (PFP) is postulated to be a common and universal facial expression to pain, hardwired and present at birth. We evaluated its presence by applying a computer-based methodology consisting of "point-pair" comparisons captured from video to measure facial movement in the pain expression by way of change across two images: one image before and one image after a painful stimulus (heel-stick). Similarity of facial expression was analyzed in a sample of 57 neonates representing both sexes and 3 ethnic backgrounds (African American, Caucasian and Hispanic/Latino) while controlling for these extraneous and potentially modulating factors: feeding type (bottle, breast, or both), behavioral state (awake or asleep), and use of epidural and/or other perinatal anesthesia. The PFP is consistent with previous reports of expression of pain in neonates and is characterized by opening of the mouth, drawing in of the brows, and closing of the eyes. Although facial expression was not identical across or among groups, our analyses showed no particular clustering or unique display by sex, or ethnicity. The clinical significance of this commonality of pain display, and of the origin of its potential individual variation begs further evaluation.
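
    A minimal sketch of the point-pair idea follows, under the assumption that facial landmarks have already been digitized in the two video frames (one before and one after the heel-stick); the landmark names and coordinates are illustrative, not the study's coding scheme.

      import numpy as np

      def point_pair_changes(before, after):
          """before / after: dicts mapping landmark name -> (x, y) pixel coordinates.
          Returns per-landmark displacement magnitudes between the two images."""
          return {name: float(np.linalg.norm(np.subtract(after[name], before[name])))
                  for name in before}

      # Example: mouth opening and brow drawing dominate the displacement profile.
      before = {"mouth_lower": (120, 200), "brow_inner_l": (90, 120)}
      after  = {"mouth_lower": (120, 215), "brow_inner_l": (92, 124)}
      print(point_pair_changes(before, after))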

  3. Exploring emotional and cognitive conflict using speeded voluntary facial expressions.

    PubMed

    Chiew, Kimberly S; Braver, Todd S

    2010-12-01

    Affective conflict and control may have important parallels to cognitive conflict and control, but these processes have been difficult to quantitatively study with emotionally naturalistic laboratory paradigms. The current study examines a modification of the AX-Continuous Performance Task (AX-CPT), a well-validated probe of cognitive conflict and control, for the study of emotional conflict. In the Emotional AX-CPT, speeded emotional facial expressions measured with electromyography (EMG) were used as the primary response modality, and index of emotional conflict. Bottom-up emotional conflict occurred on trials in which precued facial expressions were incongruent with the valence of an emotionally evocative picture probe (e.g., smiling to a negative picture). A second form of top-down conflict occurred in which the facial expression and picture probe were congruent, but the opposite expression was expected based on the precue. A matched version of the task was also performed (in a separate group of participants) with affectively neutral probe stimuli. Behavioral interference was observed, in terms of response latencies and errors, on all conflict trials. However, bottom-up conflict was stronger in the emotional version of the task compared to the neutral version; top-down conflict was similar across the two versions. The results suggest that voluntary facial expressions may be more sensitive to indexing emotional than nonemotional conflict, and importantly, may provide an ecologically valid method of examining how emotional conflict may manifest in behavior and brain activity.

  4. Inertial Sensor-Based Touch and Shake Metaphor for Expressive Control of 3D Virtual Avatars.

    PubMed

    Patil, Shashidhar; Chintalapalli, Harinadha Reddy; Kim, Dubeom; Chai, Youngho

    2015-06-18

    In this paper, we present an inertial sensor-based touch and shake metaphor for expressive control of a 3D virtual avatar in a virtual environment. An intuitive six degrees-of-freedom wireless inertial motion sensor is used as a gesture and motion control input device with a sensor fusion algorithm. The algorithm enables user hand motions to be tracked in 3D space via magnetic, angular rate, and gravity sensors. A quaternion-based complementary filter is implemented to reduce noise and drift. An algorithm based on dynamic time-warping is developed for efficient recognition of dynamic hand gestures with real-time automatic hand gesture segmentation. Our approach enables the recognition of gestures and estimates gesture variations for continuous interaction. We demonstrate the gesture expressivity using an interactive flexible gesture mapping interface for authoring and controlling a 3D virtual avatar and its motion by tracking user dynamic hand gestures. This synthesizes stylistic variations in a 3D virtual avatar, producing motions that are not present in the motion database using hand gesture sequences from a single inertial motion sensor.
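
    A generic dynamic time-warping (DTW) matcher of the kind the gesture-recognition step relies on is sketched below; it is a textbook implementation under assumed inputs, not the authors' optimized real-time variant with automatic segmentation.

      import numpy as np

      def dtw_distance(a, b):
          """a, b: sequences of per-frame feature vectors (e.g. orientations or
          accelerations). Returns the accumulated alignment cost."""
          a, b = np.asarray(a, float), np.asarray(b, float)
          n, m = len(a), len(b)
          cost = np.full((n + 1, m + 1), np.inf)
          cost[0, 0] = 0.0
          for i in range(1, n + 1):
              for j in range(1, m + 1):
                  d = np.linalg.norm(a[i - 1] - b[j - 1])
                  cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                       cost[i, j - 1],      # deletion
                                       cost[i - 1, j - 1])  # match
          return cost[n, m]

      def classify(gesture, templates):
          """Label an incoming gesture with the nearest stored template."""
          return min(templates, key=lambda name: dtw_distance(gesture, templates[name]))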

  5. Inertial Sensor-Based Touch and Shake Metaphor for Expressive Control of 3D Virtual Avatars

    PubMed Central

    Patil, Shashidhar; Chintalapalli, Harinadha Reddy; Kim, Dubeom; Chai, Youngho

    2015-01-01

    In this paper, we present an inertial sensor-based touch and shake metaphor for expressive control of a 3D virtual avatar in a virtual environment. An intuitive six degrees-of-freedom wireless inertial motion sensor is used as a gesture and motion control input device with a sensor fusion algorithm. The algorithm enables user hand motions to be tracked in 3D space via magnetic, angular rate, and gravity sensors. A quaternion-based complementary filter is implemented to reduce noise and drift. An algorithm based on dynamic time-warping is developed for efficient recognition of dynamic hand gestures with real-time automatic hand gesture segmentation. Our approach enables the recognition of gestures and estimates gesture variations for continuous interaction. We demonstrate the gesture expressivity using an interactive flexible gesture mapping interface for authoring and controlling a 3D virtual avatar and its motion by tracking user dynamic hand gestures. This synthesizes stylistic variations in a 3D virtual avatar, producing motions that are not present in the motion database using hand gesture sequences from a single inertial motion sensor. PMID:26094629

  6. Microscale technologies for imaging endogenous gene expression in individual cells within 3D tissues

    NASA Astrophysics Data System (ADS)

    Ye, Ting; Luo, Zhen; Ma, Yunzhe; Gill, Harvinder Singh; Nitin, N.

    2013-05-01

    The goal of this study was to develop an innovative approach to image gene expression in intact 3D tissues. Imaging gene expression of individual cells in 3D tissues is expected to have a significant impact on both clinical diagnostic applications and fundamental biological science and engineering applications in a laboratory setting. To achieve this goal, we have developed an integrated approach that combines: 1) microneedle-based minimally invasive intra-tissue delivery of oligonucleotide (ON) probes and Streptolysin O (SLO) or cell-penetrating peptides (CPPs); 2) SLO as a pore-forming permeation enhancer to enable intracellular delivery of ON probes, and CPPs, which can also transport conjugated cargo into cells; and 3) a fluorescence resonance energy transfer (FRET) pair of ON probes to improve the specificity and sensitivity of RNA detection in tissue models. The results of this study demonstrate uniform coating and rapid release of ON probes from microneedles in a tissue environment. Microneedle-assisted delivery of ON probes in 3D tissue does not result in cell damage and the ON probes are uniformly delivered in the tissue. The results also demonstrate the feasibility of FRET imaging of ON probes in 3D tissue and highlight the potential for imaging 28S rRNA in individual living cells.

  7. 3D printed facial laser scans for the production of localised radiotherapy treatment masks - A case study.

    PubMed

    Briggs, Matthew; Clements, Helen; Wynne, Neil; Rennie, Allan; Kellett, Darren

    This study investigates the use of 3D printing for patients who require localised radiotherapy treatment to the face. The current process involves producing a lead mask in order to protect the healthy tissue from the effects of the radiotherapy. The mask is produced by applying a thermoplastic sheet to the patient's face and allowing it to set hard. This can then be used as a mould to create a plaster impression of the patient's face. A sheet of lead is then hammered onto the plaster to create a bespoke fitted face mask. This process can be distressing for patients and can be problematic because the patient is required to remain motionless for a prolonged time while the thermoplastic sets. In this study, a 1:1 scale 3D print of a patient's face was generated using a laser scanner. The lead was hammered directly onto the surface of the 3D print in order to create a bespoke fitted treatment mask. This eliminated the thermoplastic moulding stage and significantly reduced the time the patient needed to be in clinic. The higher-definition impression of the face resulted in a more accurate, better fitting treatment mask.

  8. Neuropsychological Studies of Linguistic and Affective Facial Expressions in Deaf Signers.

    ERIC Educational Resources Information Center

    Corina, David P.; Bellugi, Ursula; Reilly, Judy

    1999-01-01

    Presents two studies that explore facial expression production in deaf signers. An experimental paradigm uses chimeric stimuli of American Sign Language linguistic and facial expressions to explore patterns of productive asymmetries in brain-intact signers. (Author/VWL)

  9. Processing of emotional facial expressions in Korsakoff's syndrome.

    PubMed

    Montagne, Barbara; Kessels, Roy P C; Wester, Arie J; de Haan, Edward H F

    2006-07-01

    Interpersonal contacts depend to a large extent on understanding emotional facial expressions of others. Several neurological conditions may affect proficiency in emotional expression recognition. It has been shown that chronic alcoholics are impaired in labelling emotional expressions. More specifically, they mislabel sad expressions, regarding them as more hostile. Surprisingly, there has been relatively little research on patients with Korsakoff's syndrome as a result of chronic alcohol abuse. The current study investigated 23 patients diagnosed with Korsakoff's syndrome compared to 23 matched control participants. This study is the first to make use of a newly developed sensitive paradigm to measure emotion recognition for several emotions (anger, disgust, fear, happiness, sadness and surprise). The results show that patients with Korsakoff's syndrome are impaired at recognizing angry, fearful and surprised facial emotional expressions. These deficits might be due to the reported sub-cortical brain dysfunction in Korsakoff's syndrome.

  10. Language and affective facial expression in children with perinatal stroke

    PubMed Central

    Lai, Philip T.; Reilly, Judy S.

    2015-01-01

    Children with perinatal stroke (PS) provide a unique opportunity to understand developing brain-behavior relations. Previous research has noted distinctive differences in behavioral sequelae between children with PS and adults with acquired stroke: children fare better, presumably due to the plasticity of the developing brain for adaptive reorganization. Whereas we are beginning to understand language development, we know little about another communicative domain, emotional expression. The current study investigates the use and integration of language and facial expression during an interview. As anticipated, the language performance of the five- and six-year-old PS group is comparable to that of their typically developing (TD) peers; however, their affective profiles are distinctive: those with right hemisphere injury are less expressive with respect to affective language and affective facial expression than either those with left hemisphere injury or the TD group. The two distinctive profiles for language and emotional expression in these children suggest gradients of neuroplasticity in the developing brain. PMID:26117314

  11. Detection of Deception in Adults and Children via Facial Expressions.

    ERIC Educational Resources Information Center

    Feldman, Robert S.; And Others

    1979-01-01

    Examines the effect of age of encoder (first graders, seventh graders, and college students) on the decoding of nonverbal facial expressions indicative of verbal deception. Results showed the ratings of untrained, naive adult judges to be more accurate in decoding the first-grade stimulus persons than the older ones. (JMB)

  12. Perceived Bias in the Facial Expressions of Television News Broadcasters.

    ERIC Educational Resources Information Center

    Friedman, Howard S.; And Others

    1980-01-01

    Studied the nuances of perceived media bias by examining the television reporting of the 1976 Presidential election campaign by comparing the adjudged positivity of the facial expressions of network anchorpersons as they named or referred to either of the two candidates. (JMF)

  13. Teachers' Perception Regarding Facial Expressions as an Effective Teaching Tool

    ERIC Educational Resources Information Center

    Butt, Muhammad Naeem; Iqbal, Mohammad

    2011-01-01

    The major objective of the study was to explore teachers' perceptions about the importance of facial expression in the teaching-learning process. All the teachers of government secondary schools constituted the population of the study. A sample of 40 teachers, both male and female, in rural and urban areas of district Peshawar, were selected…

  14. Categorical Representation of Facial Expressions in the Infant Brain

    ERIC Educational Resources Information Center

    Leppanen, Jukka M.; Richmond, Jenny; Vogel-Farley, Vanessa K.; Moulson, Margaret C.; Nelson, Charles A.

    2009-01-01

    Categorical perception, demonstrated as reduced discrimination of within-category relative to between-category differences in stimuli, has been found in a variety of perceptual domains in adults. To examine the development of categorical perception in the domain of facial expression processing, we used behavioral and event-related potential (ERP)…

  15. Categorical Perception of Emotional Facial Expressions in Preschoolers

    ERIC Educational Resources Information Center

    Cheal, Jenna L.; Rutherford, M. D.

    2011-01-01

    Adults perceive emotional facial expressions categorically. In this study, we explored categorical perception in 3.5-year-olds by creating a morphed continuum of emotional faces and tested preschoolers' discrimination and identification of them. In the discrimination task, participants indicated whether two examples from the continuum "felt the…

  16. Facial Expressions in Context: Contributions to Infant Emotion Theory.

    ERIC Educational Resources Information Center

    Camras, Linda A.

    To make the point that infant emotions are more dynamic than suggested by Differential Emotions Theory, which maintains that infants show the same prototypical facial expressions for emotions as adults do, this paper explores two questions: (1) when infants experience an emotion, do they always show the corresponding prototypical facial…

  17. Specificity of Facial Expression Labeling Deficits in Childhood Psychopathology

    ERIC Educational Resources Information Center

    Guyer, Amanda E.; McClure, Erin B.; Adler, Abby D.; Brotman, Melissa A.; Rich, Brendan A.; Kimes, Alane S.; Pine, Daniel S.; Ernst, Monique; Leibenluft, Ellen

    2007-01-01

    Background: We examined whether face-emotion labeling deficits are illness-specific or an epiphenomenon of generalized impairment in pediatric psychiatric disorders involving mood and behavioral dysregulation. Method: Two hundred fifty-two youths (7-18 years old) completed child and adult facial expression recognition subtests from the Diagnostic…

  18. Projecting 2D gene expression data into 3D and 4D space.

    PubMed

    Gerth, Victor E; Katsuyama, Kaori; Snyder, Kevin A; Bowes, Jeff B; Kitayama, Atsushi; Ueno, Naoto; Vize, Peter D

    2007-04-01

    Video games typically generate virtual 3D objects by texture mapping an image onto a 3D polygonal frame. The feeling of movement is then achieved by mathematically simulating camera movement relative to the polygonal frame. We have built customized scripts that adapt video game authoring software to texture mapping images of gene expression data onto b-spline based embryo models. This approach, known as UV mapping, associates two-dimensional (U and V) coordinates within images to the three dimensions (X, Y, and Z) of a b-spline model. B-spline model frameworks were built either from confocal data or de novo extracted from 2D images, once again using video game authoring approaches. This system was then used to build 3D models of 182 genes expressed in developing Xenopus embryos and to implement these in a web-accessible database. Models can be viewed via simple Internet browsers and utilize openGL hardware acceleration via a Shockwave plugin. Not only does this database display static data in a dynamic and scalable manner, the UV mapping system also serves as a method to align different images to a common framework, an approach that may make high-throughput automated comparisons of gene expression patterns possible. Finally, video game systems also have elegant methods for handling movement, allowing biomechanical algorithms to drive the animation of models. With further development, these biomechanical techniques offer practical methods for generating virtual embryos that recapitulate morphogenesis.
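    The UV-mapping step described in this record can be illustrated with a minimal sketch: each mesh vertex carries (U, V) coordinates into the 2D image, and its colour is obtained by bilinear sampling of that image. The sketch below assumes NumPy arrays for the image and the per-vertex UV coordinates; the array shapes and names are illustrative, not taken from the paper's pipeline.

      import numpy as np

      def sample_texture_bilinear(image, uv):
          """Colour each mesh vertex by bilinear sampling of a 2D expression image.

          image : (H, W, 3) float array, the 2D gene-expression picture
          uv    : (N, 2) float array in [0, 1], per-vertex texture coordinates
          returns an (N, 3) array of per-vertex colours
          """
          h, w = image.shape[:2]
          # Map normalized UV coordinates to continuous pixel coordinates.
          x = uv[:, 0] * (w - 1)
          y = uv[:, 1] * (h - 1)
          x0, y0 = np.floor(x).astype(int), np.floor(y).astype(int)
          x1, y1 = np.minimum(x0 + 1, w - 1), np.minimum(y0 + 1, h - 1)
          fx, fy = (x - x0)[:, None], (y - y0)[:, None]
          # Blend the four surrounding pixels.
          top = image[y0, x0] * (1 - fx) + image[y0, x1] * fx
          bottom = image[y1, x0] * (1 - fx) + image[y1, x1] * fx
          return top * (1 - fy) + bottom * fy

      # Hypothetical usage: a 512x512 expression image and 1000 model vertices.
      image = np.random.rand(512, 512, 3)
      uv = np.random.rand(1000, 2)
      vertex_colours = sample_texture_bilinear(image, uv)
      print(vertex_colours.shape)  # (1000, 3)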

  19. Integrating Data Clustering and Visualization for the Analysis of 3D Gene Expression Data

    SciTech Connect

    Rubel, Oliver; Weber, Gunther H.; Huang, Min-Yu; Bethel, E. Wes; Biggin, Mark D.; Fowlkes, Charless C.; Hendriks, Cris L. Luengo; Keranen, Soile V. E.; Eisen, Michael B.; Knowles, David W.; Malik, Jitendra; Hagen, Hans; Hamann, Bernd (Berkeley Drosophila Transcription Network Project, Lawrence Berkeley National Laboratory; University of California, Davis; University of California, Berkeley; University of California, Irvine; University of Kaiserslautern)

    2008-05-12

    The recent development of methods for extracting precise measurements of spatial gene expression patterns from three-dimensional (3D) image data opens the way for new analyses of the complex gene regulatory networks controlling animal development. We present an integrated visualization and analysis framework that supports user-guided data clustering to aid exploration of these new complex datasets. The interplay of data visualization and clustering-based data classification leads to improved visualization and enables a more detailed analysis than previously possible. We discuss (i) integration of data clustering and visualization into one framework; (ii) application of data clustering to 3D gene expression data; (iii) evaluation of the number of clusters k in the context of 3D gene expression clustering; and (iv) improvement of overall analysis quality via dedicated post-processing of clustering results based on visualization. We discuss the use of this framework to objectively define spatial pattern boundaries and temporal profiles of genes and to analyze how mRNA patterns are controlled by their regulatory transcription factors.

  20. Using Video Modeling to Teach Children with PDD-NOS to Respond to Facial Expressions

    ERIC Educational Resources Information Center

    Axe, Judah B.; Evans, Christine J.

    2012-01-01

    Children with autism spectrum disorders often exhibit delays in responding to facial expressions, and few studies have examined teaching responding to subtle facial expressions to this population. We used video modeling to train 3 participants with PDD-NOS (age 5) to respond to eight facial expressions: approval, bored, calming, disapproval,…

  1. Facial Expression Recognition Deficits and Faulty Learning: Implications for Theoretical Models and Clinical Applications

    ERIC Educational Resources Information Center

    Sheaffer, Beverly L.; Golden, Jeannie A.; Averett, Paige

    2009-01-01

    The ability to recognize facial expressions of emotion is integral in social interaction. Although the importance of facial expression recognition is reflected in increased research interest as well as in popular culture, clinicians may know little about this topic. The purpose of this article is to discuss facial expression recognition literature…

  2. Combining volumetric edge display and multiview display for expression of natural 3D images

    NASA Astrophysics Data System (ADS)

    Yasui, Ryota; Matsuda, Isamu; Kakeya, Hideki

    2006-02-01

    In the present paper the authors present a novel stereoscopic display method combining volumetric edge display technology and multiview display technology to realize presentation of natural 3D images where the viewers do not suffer from contradiction between binocular convergence and focal accommodation of the eyes, which causes eyestrain and sickness. We adopt volumetric display method only for edge drawing, while we adopt stereoscopic approach for flat areas of the image. Since focal accommodation of our eyes is affected only by the edge part of the image, natural focal accommodation can be induced if the edges of the 3D image are drawn on the proper depth. The conventional stereo-matching technique can give us robust depth values of the pixels which constitute noticeable edges. Also occlusion and gloss of the objects can be roughly expressed with the proposed method since we use stereoscopic approach for the flat area. We can attain a system where many users can view natural 3D objects at the consistent position and posture at the same time in this system. A simple optometric experiment using a refractometer suggests that the proposed method can give us 3-D images without contradiction between binocular convergence and focal accommodation.
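    The edge-depth idea in this record can be sketched with off-the-shelf tools: compute a dense disparity map by conventional block-matching stereo, detect the noticeable edges, and keep depth only at edge pixels (flat regions would be rendered stereoscopically). The sketch below uses OpenCV on a synthetic image pair and illustrates only the stereo-matching step, not the authors' display hardware; image sizes and thresholds are arbitrary assumptions.

      import cv2
      import numpy as np

      # Synthetic 8-bit stereo pair; in practice these would be the left/right
      # views of the scene to be displayed.
      rng = np.random.default_rng(0)
      left = (rng.random((240, 320)) * 255).astype(np.uint8)
      right = np.roll(left, -4, axis=1)  # crude horizontal shift as a stand-in disparity

      # Dense disparity from conventional block-matching stereo.
      stereo = cv2.StereoBM_create(numDisparities=16, blockSize=15)
      disparity = stereo.compute(left, right).astype(np.float32) / 16.0

      # Detect the noticeable edges whose depth would drive focal accommodation.
      edges = cv2.Canny(left, 50, 150) > 0

      # Keep depth values only at edge pixels; flat areas stay undefined here.
      edge_depth = np.full(disparity.shape, np.nan, dtype=np.float32)
      edge_depth[edges] = disparity[edges]
      print(float(np.nanmean(edge_depth[edges])))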

  3. The Enfacement Illusion Is Not Affected by Negative Facial Expressions

    PubMed Central

    Beck, Brianna; Cardini, Flavia; Làdavas, Elisabetta; Bertini, Caterina

    2015-01-01

    Enfacement is an illusion wherein synchronous visual and tactile inputs update the mental representation of one’s own face to assimilate another person’s face. Emotional facial expressions, serving as communicative signals, may influence enfacement by increasing the observer’s motivation to understand the mental state of the expresser. Fearful expressions, in particular, might increase enfacement because they are valuable for adaptive behavior and more strongly represented in somatosensory cortex than other emotions. In the present study, a face was seen being touched at the same time as the participant’s own face. This face was either neutral, fearful, or angry. Anger was chosen as an emotional control condition for fear because it is similarly negative but induces less somatosensory resonance, and requires additional knowledge (i.e., contextual information and social contingencies) to effectively guide behavior. We hypothesized that seeing a fearful face (but not an angry one) would increase enfacement because of greater somatosensory resonance. Surprisingly, neither fearful nor angry expressions modulated the degree of enfacement relative to neutral expressions. Synchronous interpersonal visuo-tactile stimulation led to assimilation of the other’s face, but this assimilation was not modulated by facial expression processing. This finding suggests that dynamic, multisensory processes of self-face identification operate independently of facial expression processing. PMID:26291532

  4. The Enfacement Illusion Is Not Affected by Negative Facial Expressions.

    PubMed

    Beck, Brianna; Cardini, Flavia; Làdavas, Elisabetta; Bertini, Caterina

    2015-01-01

    Enfacement is an illusion wherein synchronous visual and tactile inputs update the mental representation of one's own face to assimilate another person's face. Emotional facial expressions, serving as communicative signals, may influence enfacement by increasing the observer's motivation to understand the mental state of the expresser. Fearful expressions, in particular, might increase enfacement because they are valuable for adaptive behavior and more strongly represented in somatosensory cortex than other emotions. In the present study, a face was seen being touched at the same time as the participant's own face. This face was either neutral, fearful, or angry. Anger was chosen as an emotional control condition for fear because it is similarly negative but induces less somatosensory resonance, and requires additional knowledge (i.e., contextual information and social contingencies) to effectively guide behavior. We hypothesized that seeing a fearful face (but not an angry one) would increase enfacement because of greater somatosensory resonance. Surprisingly, neither fearful nor angry expressions modulated the degree of enfacement relative to neutral expressions. Synchronous interpersonal visuo-tactile stimulation led to assimilation of the other's face, but this assimilation was not modulated by facial expression processing. This finding suggests that dynamic, multisensory processes of self-face identification operate independently of facial expression processing.

  5. Drug effects on responses to emotional facial expressions: recent findings

    PubMed Central

    Miller, Melissa A.; Bershad, Anya K.; de Wit, Harriet

    2016-01-01

    Many psychoactive drugs increase social behavior and enhance social interactions, which may, in turn, increase their attractiveness to users. Although the psychological mechanisms by which drugs affect social behavior are not fully understood, there is some evidence that drugs alter the perception of emotions in others. Drugs can affect the ability to detect, attend to, and respond to emotional facial expressions, which in turn may influence their use in social settings. Either increased reactivity to positive expressions or decreased response to negative expressions may facilitate social interaction. This article reviews evidence that psychoactive drugs alter the processing of emotional facial expressions using subjective, behavioral, and physiological measures. The findings lay the groundwork for better understanding how drugs alter social processing and social behavior more generally. PMID:26226144

  6. Drug effects on responses to emotional facial expressions: recent findings.

    PubMed

    Miller, Melissa A; Bershad, Anya K; de Wit, Harriet

    2015-09-01

    Many psychoactive drugs increase social behavior and enhance social interactions, which may, in turn, increase their attractiveness to users. Although the psychological mechanisms by which drugs affect social behavior are not fully understood, there is some evidence that drugs alter the perception of emotions in others. Drugs can affect the ability to detect, attend to, and respond to emotional facial expressions, which in turn may influence their use in social settings. Either increased reactivity to positive expressions or decreased response to negative expressions may facilitate social interaction. This article reviews evidence that psychoactive drugs alter the processing of emotional facial expressions using subjective, behavioral, and physiological measures. The findings lay the groundwork for better understanding how drugs alter social processing and social behavior more generally.

  7. Fashioning the Face: Sensorimotor Simulation Contributes to Facial Expression Recognition.

    PubMed

    Wood, Adrienne; Rychlowska, Magdalena; Korb, Sebastian; Niedenthal, Paula

    2016-03-01

    When we observe a facial expression of emotion, we often mimic it. This automatic mimicry reflects underlying sensorimotor simulation that supports accurate emotion recognition. Why this is so is becoming more obvious: emotions are patterns of expressive, behavioral, physiological, and subjective feeling responses. Activation of one component can therefore automatically activate other components. When people simulate a perceived facial expression, they partially activate the corresponding emotional state in themselves, which provides a basis for inferring the underlying emotion of the expresser. We integrate recent evidence in favor of a role for sensorimotor simulation in emotion recognition. We then connect this account to a domain-general understanding of how sensory information from multiple modalities is integrated to generate perceptual predictions in the brain.

  8. Reconstructing dynamic mental models of facial expressions in prosopagnosia reveals distinct representations for identity and expression.

    PubMed

    Richoz, Anne-Raphaëlle; Jack, Rachael E; Garrod, Oliver G B; Schyns, Philippe G; Caldara, Roberto

    2015-04-01

    The human face transmits a wealth of signals that readily provide crucial information for social interactions, such as facial identity and emotional expression. Yet, a fundamental question remains unresolved: does the face information for identity and emotional expression categorization tap into common or distinct representational systems? To address this question we tested PS, a pure case of acquired prosopagnosia with bilateral occipitotemporal lesions anatomically sparing the regions that are assumed to contribute to facial expression (de)coding (i.e., the amygdala, the insula and the posterior superior temporal sulcus--pSTS). We previously demonstrated that PS does not use information from the eye region to identify faces, but relies on the suboptimal mouth region. PS's abnormal information use for identity, coupled with her neural dissociation, provides a unique opportunity to probe the existence of a dichotomy in the face representational system. To reconstruct the mental models of the six basic facial expressions of emotion in PS and age-matched healthy observers, we used a novel reverse correlation technique tracking information use on dynamic faces. PS was comparable to controls, using all facial features to (de)code facial expressions with the exception of fear. PS's normal (de)coding of dynamic facial expressions suggests that the face system relies either on distinct representational systems for identity and expression, or dissociable cortical pathways to access them. Interestingly, PS showed a selective impairment for categorizing many static facial expressions, which could be accounted for by her lesion in the right inferior occipital gyrus. PS's advantage for dynamic facial expressions might instead relate to a functionally distinct and sufficient cortical pathway directly connecting the early visual cortex to the spared pSTS. Altogether, our data provide critical insights on the healthy and impaired face systems, question evidence of deficits

  9. Facial expressions and the evolution of the speech rhythm.

    PubMed

    Ghazanfar, Asif A; Takahashi, Daniel Y

    2014-06-01

    In primates, different vocalizations are produced, at least in part, by making different facial expressions. Not surprisingly, humans, apes, and monkeys all recognize the correspondence between vocalizations and the facial postures associated with them. However, one major dissimilarity between monkey vocalizations and human speech is that, in the latter, the acoustic output and associated movements of the mouth are both rhythmic (in the 3- to 8-Hz range) and tightly correlated, whereas monkey vocalizations have a similar acoustic rhythmicity but lack the concomitant rhythmic facial motion. This raises the question of how we evolved from a presumptive ancestral acoustic-only vocal rhythm to the one that is audiovisual with improved perceptual sensitivity. According to one hypothesis, this bisensory speech rhythm evolved through the rhythmic facial expressions of ancestral primates. If this hypothesis has any validity, we expect that the extant nonhuman primates produce at least some facial expressions with a speech-like rhythm in the 3- to 8-Hz frequency range. Lip smacking, an affiliative signal observed in many genera of primates, satisfies this criterion. We review a series of studies using developmental, x-ray cineradiographic, EMG, and perceptual approaches with macaque monkeys producing lip smacks to further investigate this hypothesis. We then explore its putative neural basis and remark on important differences between lip smacking and speech production. Overall, the data support the hypothesis that lip smacking may have been an ancestral expression that was linked to vocal output to produce the original rhythmic audiovisual speech-like utterances in the human lineage.

  10. Facial Expressions and the Evolution of the Speech Rhythm

    PubMed Central

    Ghazanfar, Asif A.; Takahashi, Daniel Y.

    2015-01-01

    In primates, different vocalizations are produced, at least in part, by making different facial expressions. Not surprisingly, humans, apes, and monkeys all recognize the correspondence between vocalizations and the facial postures associated with them. However, one major dissimilarity between monkey vocalizations and human speech is that, in the latter, the acoustic output and associated movements of the mouth are both rhythmic (in the 3- to 8-Hz range) and tightly correlated, whereas monkey vocalizations have a similar acoustic rhythmicity but lack the concomitant rhythmic facial motion. This raises the question of how we evolved from a presumptive ancestral acoustic-only vocal rhythm to the one that is audiovisual with improved perceptual sensitivity. According to one hypothesis, this bisensory speech rhythm evolved through the rhythmic facial expressions of ancestral primates. If this hypothesis has any validity, we expect that the extant nonhuman primates produce at least some facial expressions with a speech-like rhythm in the 3- to 8-Hz frequency range. Lip smacking, an affiliative signal observed in many genera of primates, satisfies this criterion. We review a series of studies using developmental, x-ray cineradiographic, EMG, and perceptual approaches with macaque monkeys producing lip smacks to further investigate this hypothesis. We then explore its putative neural basis and remark on important differences between lip smacking and speech production. Overall, the data support the hypothesis that lip smacking may have been an ancestral expression that was linked to vocal output to produce the original rhythmic audiovisual speech-like utterances in the human lineage. PMID:24456390

  11. Facial expressions of emotion are not culturally universal.

    PubMed

    Jack, Rachael E; Garrod, Oliver G B; Yu, Hui; Caldara, Roberto; Schyns, Philippe G

    2012-05-08

    Since Darwin's seminal works, the universality of facial expressions of emotion has remained one of the longest standing debates in the biological and social sciences. Briefly stated, the universality hypothesis claims that all humans communicate six basic internal emotional states (happy, surprise, fear, disgust, anger, and sad) using the same facial movements by virtue of their biological and evolutionary origins [Susskind JM, et al. (2008) Nat Neurosci 11:843-850]. Here, we refute this assumed universality. Using a unique computer graphics platform that combines generative grammars [Chomsky N (1965) MIT Press, Cambridge, MA] with visual perception, we accessed the mind's eye of 30 Western and Eastern culture individuals and reconstructed their mental representations of the six basic facial expressions of emotion. Cross-cultural comparisons of the mental representations challenge universality on two separate counts. First, whereas Westerners represent each of the six basic emotions with a distinct set of facial movements common to the group, Easterners do not. Second, Easterners represent emotional intensity with distinctive dynamic eye activity. By refuting the long-standing universality hypothesis, our data highlight the powerful influence of culture on shaping basic behaviors once considered biologically hardwired. Consequently, our data open a unique nature-nurture debate across broad fields from evolutionary psychology and social neuroscience to social networking via digital avatars.

  12. Facial expressions of emotion are not culturally universal

    PubMed Central

    Jack, Rachael E.; Garrod, Oliver G. B.; Yu, Hui; Caldara, Roberto; Schyns, Philippe G.

    2012-01-01

    Since Darwin’s seminal works, the universality of facial expressions of emotion has remained one of the longest standing debates in the biological and social sciences. Briefly stated, the universality hypothesis claims that all humans communicate six basic internal emotional states (happy, surprise, fear, disgust, anger, and sad) using the same facial movements by virtue of their biological and evolutionary origins [Susskind JM, et al. (2008) Nat Neurosci 11:843–850]. Here, we refute this assumed universality. Using a unique computer graphics platform that combines generative grammars [Chomsky N (1965) MIT Press, Cambridge, MA] with visual perception, we accessed the mind’s eye of 30 Western and Eastern culture individuals and reconstructed their mental representations of the six basic facial expressions of emotion. Cross-cultural comparisons of the mental representations challenge universality on two separate counts. First, whereas Westerners represent each of the six basic emotions with a distinct set of facial movements common to the group, Easterners do not. Second, Easterners represent emotional intensity with distinctive dynamic eye activity. By refuting the long-standing universality hypothesis, our data highlight the powerful influence of culture on shaping basic behaviors once considered biologically hardwired. Consequently, our data open a unique nature–nurture debate across broad fields from evolutionary psychology and social neuroscience to social networking via digital avatars. PMID:22509011

  13. Forming impressions: effects of facial expression and gender stereotypes.

    PubMed

    Hack, Tay

    2014-04-01

    The present study of 138 participants explored how facial expressions and gender stereotypes influence impressions. It was predicted that images of smiling women would be evaluated more favorably on traits reflecting warmth, and that images of non-smiling men would be evaluated more favorably on traits reflecting competence. As predicted, smiling female faces were rated as more warm; however, contrary to prediction, perceived competence of male faces was not affected by facial expression. Participants' female stereotype endorsement was a significant predictor for evaluations of female faces; those who ascribed more strongly to traditional female stereotypes reported the most positive impressions of female faces displaying a smiling expression. However, a similar effect was not found for images of men; endorsement of traditional male stereotypes did not predict participants' impressions of male faces.

  14. Facial expression influences face identity recognition during the attentional blink.

    PubMed

    Bach, Dominik R; Schmidt-Daffy, Martin; Dolan, Raymond J

    2014-12-01

    Emotional stimuli (e.g., negative facial expressions) enjoy prioritized memory access when task relevant, consistent with their ability to capture attention. Whether emotional expression also impacts on memory access when task-irrelevant is important for arbitrating between feature-based and object-based attentional capture. Here, the authors address this question in 3 experiments using an attentional blink task with face photographs as first and second target (T1, T2). They demonstrate reduced neutral T2 identity recognition after angry or happy T1 expression, compared to neutral T1, and this supports attentional capture by a task-irrelevant feature. Crucially, after neutral T1, T2 identity recognition was enhanced and not suppressed when T2 was angry, suggesting that attentional capture by this task-irrelevant feature may be object-based and not feature-based. As an unexpected finding, both angry and happy facial expressions suppress memory access for competing objects, but only angry facial expression enjoyed privileged memory access. This could imply that these 2 processes are relatively independent from one another.

  15. Emotional Representation in Facial Expression and Script: A Comparison between Normal and Autistic Children

    ERIC Educational Resources Information Center

    Balconi, Michela; Carrera, Alba

    2007-01-01

    The paper explored conceptual and lexical skills with regard to emotional correlates of facial stimuli and scripts. In two different experimental phases normal and autistic children observed six facial expressions of emotions (happiness, anger, fear, sadness, surprise, and disgust) and six emotional scripts (contextualized facial expressions). In…

  16. Neural processing of dynamic emotional facial expressions in psychopaths

    PubMed Central

    Decety, Jean; Skelly, Laurie; Yoder, Keith J.; Kiehl, Kent A.

    2014-01-01

    Facial expressions play a critical role in social interactions by eliciting rapid responses in the observer. Failure to perceive and experience a normal range and depth of emotion seriously impacts interpersonal communication and relationships. As has been demonstrated across a number of domains, abnormal emotion processing in individuals with psychopathy plays a key role in their lack of empathy. However, the neuroimaging literature is unclear as to whether deficits are specific to particular emotions such as fear and perhaps sadness. Moreover, findings are inconsistent across studies. In the current experiment, eighty adult incarcerated males scoring high, medium, and low on the Hare Psychopathy Checklist-Revised (PCL-R) underwent fMRI scanning while viewing dynamic facial expressions of fear, sadness, happiness and pain. Participants who scored high on the PCL-R showed a reduction in neuro-hemodynamic response to all four categories of facial expressions in the face processing network (inferior occipital gyrus, fusiform gyrus, STS) as well as the extended network (inferior frontal gyrus and orbitofrontal cortex), which supports a pervasive deficit across emotion domains. Unexpectedly, the response in dorsal insula to fear, sadness and pain was greater in psychopaths than non-psychopaths. Importantly, the orbitofrontal cortex and ventromedial prefrontal cortex, regions critically implicated in affective and motivated behaviors, were significantly less active in individuals with psychopathy during the perception of all four emotional expressions. PMID:24359488

  17. Neural processing of dynamic emotional facial expressions in psychopaths.

    PubMed

    Decety, Jean; Skelly, Laurie; Yoder, Keith J; Kiehl, Kent A

    2014-02-01

    Facial expressions play a critical role in social interactions by eliciting rapid responses in the observer. Failure to perceive and experience a normal range and depth of emotion seriously impacts interpersonal communication and relationships. As has been demonstrated across a number of domains, abnormal emotion processing in individuals with psychopathy plays a key role in their lack of empathy. However, the neuroimaging literature is unclear as to whether deficits are specific to particular emotions such as fear and perhaps sadness. Moreover, findings are inconsistent across studies. In the current experiment, 80 incarcerated adult males scoring high, medium, and low on the Hare Psychopathy Checklist-Revised (PCL-R) underwent functional magnetic resonance imaging (fMRI) scanning while viewing dynamic facial expressions of fear, sadness, happiness, and pain. Participants who scored high on the PCL-R showed a reduction in neuro-hemodynamic response to all four categories of facial expressions in the face processing network (inferior occipital gyrus, fusiform gyrus, and superior temporal sulcus (STS)) as well as the extended network (inferior frontal gyrus and orbitofrontal cortex (OFC)), which supports a pervasive deficit across emotion domains. Unexpectedly, the response in dorsal insula to fear, sadness, and pain was greater in psychopaths than non-psychopaths. Importantly, the orbitofrontal cortex and ventromedial prefrontal cortex (vmPFC), regions critically implicated in affective and motivated behaviors, were significantly less active in individuals with psychopathy during the perception of all four emotional expressions.

  18. Younger and Older Users' Recognition of Virtual Agent Facial Expressions.

    PubMed

    Beer, Jenay M; Smarr, Cory-Ann; Fisk, Arthur D; Rogers, Wendy A

    2015-03-01

    As technology advances, robots and virtual agents will be introduced into the home and healthcare settings to assist individuals, both young and old, with everyday living tasks. Understanding how users recognize an agent's social cues is therefore imperative, especially in social interactions. Facial expression, in particular, is one of the most common non-verbal cues used to display and communicate emotion in on-screen agents (Cassell, Sullivan, Prevost, & Churchill, 2000). Age is important to consider because age-related differences in emotion recognition of human facial expression have been supported (Ruffman et al., 2008), with older adults showing a deficit for recognition of negative facial expressions. Previous work has shown that younger adults can effectively recognize facial emotions displayed by agents (Bartneck & Reichenbach, 2005; Courgeon et al. 2009; 2011; Breazeal, 2003); however, little research has compared in-depth younger and older adults' ability to label a virtual agent's facial emotions, an important consideration because social agents will be required to interact with users of varying ages. If such age-related differences exist for recognition of virtual agent facial expressions, we aim to understand if those age-related differences are influenced by the intensity of the emotion, dynamic formation of emotion (i.e., a neutral expression developing into an expression of emotion through motion), or the type of virtual character differing by human-likeness. Study 1 investigated the relationship between age-related differences, the implication of dynamic formation of emotion, and the role of emotion intensity in emotion recognition of the facial expressions of a virtual agent (iCat). Study 2 examined age-related differences in recognition expressed by three types of virtual characters differing by human-likeness (non-humanoid iCat, synthetic human, and human). Study 2 also investigated the role of configural and featural processing as a possible

  19. Younger and Older Users’ Recognition of Virtual Agent Facial Expressions

    PubMed Central

    Beer, Jenay M.; Smarr, Cory-Ann; Fisk, Arthur D.; Rogers, Wendy A.

    2015-01-01

    As technology advances, robots and virtual agents will be introduced into the home and healthcare settings to assist individuals, both young and old, with everyday living tasks. Understanding how users recognize an agent’s social cues is therefore imperative, especially in social interactions. Facial expression, in particular, is one of the most common non-verbal cues used to display and communicate emotion in on-screen agents (Cassell, Sullivan, Prevost, & Churchill, 2000). Age is important to consider because age-related differences in emotion recognition of human facial expression have been supported (Ruffman et al., 2008), with older adults showing a deficit for recognition of negative facial expressions. Previous work has shown that younger adults can effectively recognize facial emotions displayed by agents (Bartneck & Reichenbach, 2005; Courgeon et al. 2009; 2011; Breazeal, 2003); however, little research has compared in-depth younger and older adults’ ability to label a virtual agent’s facial emotions, an important consideration because social agents will be required to interact with users of varying ages. If such age-related differences exist for recognition of virtual agent facial expressions, we aim to understand if those age-related differences are influenced by the intensity of the emotion, dynamic formation of emotion (i.e., a neutral expression developing into an expression of emotion through motion), or the type of virtual character differing by human-likeness. Study 1 investigated the relationship between age-related differences, the implication of dynamic formation of emotion, and the role of emotion intensity in emotion recognition of the facial expressions of a virtual agent (iCat). Study 2 examined age-related differences in recognition expressed by three types of virtual characters differing by human-likeness (non-humanoid iCat, synthetic human, and human). Study 2 also investigated the role of configural and featural processing as a

  20. Facial expression recognition in Alzheimer's disease: a longitudinal study.

    PubMed

    Torres, Bianca; Santos, Raquel Luiza; Sousa, Maria Fernanda Barroso de; Simões Neto, José Pedro; Nogueira, Marcela Moreira Lima; Belfort, Tatiana T; Dias, Rachel; Dourado, Marcia Cristina Nascimento

    2015-05-01

    Facial recognition is one of the most important aspects of social cognition. In this study, we investigate the patterns of change and the factors involved in the ability to recognize emotion in mild Alzheimer's disease (AD). Through a longitudinal design, we assessed 30 people with AD. We used an experimental task that includes matching expressions with picture stimuli, labelling emotions and emotionally recognizing a stimulus situation. We observed a significant difference in the situational recognition task (p ≤ 0.05) between baseline and the second evaluation. The linear regression showed that cognition is a predictor of emotion recognition impairment (p ≤ 0.05). The ability to perceive emotions from facial expressions was impaired, particularly when the emotions presented were relatively subtle. Cognition is recruited to comprehend emotional situations in cases of mild dementia.

  1. Common impairments of emotional facial expression recognition in schizophrenia across French and Japanese cultures.

    PubMed

    Okada, Takashi; Kubota, Yasutaka; Sato, Wataru; Murai, Toshiya; Pellion, Fréderic; Gorog, Françoise

    2015-01-01

    To address whether the recognition of emotional facial expressions is impaired in schizophrenia across different cultures, patients with schizophrenia and age-matched normal controls in France and Japan were tested with a labeling task of emotional facial expressions and a matching task of unfamiliar faces. Schizophrenia patients in both France and Japan were less accurate in labeling fearful facial expressions. There was no correlation between the scores of facial emotion labeling and face matching. These results suggest that the impaired recognition of emotional facial expressions in schizophrenia is common across different cultures.

  2. Common impairments of emotional facial expression recognition in schizophrenia across French and Japanese cultures

    PubMed Central

    Okada, Takashi; Kubota, Yasutaka; Sato, Wataru; Murai, Toshiya; Pellion, Fréderic; Gorog, Françoise

    2015-01-01

    To address whether the recognition of emotional facial expressions is impaired in schizophrenia across different cultures, patients with schizophrenia and age-matched normal controls in France and Japan were tested with a labeling task of emotional facial expressions and a matching task of unfamiliar faces. Schizophrenia patients in both France and Japan were less accurate in labeling fearful facial expressions. There was no correlation between the scores of facial emotion labeling and face matching. These results suggest that the impaired recognition of emotional facial expressions in schizophrenia is common across different cultures. PMID:26257678

  3. Making faces: posed facial expression, self-competence, and personality.

    PubMed

    Browne, B A

    1994-03-01

    This study examined the relationships between posed facial expression, children's perceived self-competence, and teachers' perceptions of competence. Third- and fifth-grade children completed the Self-Perception Profile for Children, the Eysenck Personality Questionnaire, and the Junior Self-Monitoring Scale for Children. Individual differences in posing accuracy were determined with a videotaped acting task. Children who were more able to produce prototypical expressions obtained higher teacher ratings of academic competence; however, posing ability bore little relationship to children's self-competence. Gender differences in feelings of self-competence, but not in sending ability, were observed. Extraversion and self-monitoring were unrelated to ability to pose emotional expressions.

  4. Gaze Dynamics in the Recognition of Facial Expressions of Emotion.

    PubMed

    Barabanschikov, Vladimir A

    2015-01-01

    We studied which parts and features of the human face are preferentially fixated during the recognition of facial expressions of emotion. Photographs of facial expressions were used. Participants categorized these as basic emotions while their eye movements were recorded. Variation in the intensity of an expression was mirrored in the accuracy of emotion recognition and was also reflected in several indices of oculomotor function: the duration of inspection of certain areas of the face (its upper and lower parts, right and left sides), and the location, number, and duration of fixations and the viewing trajectory. In particular, for low-intensity expressions the right side of the face was attended to predominantly (right-side dominance); this right-side dominance effect was, however, absent for high-intensity expressions. For both low- and high-intensity expressions the upper part of the face was predominantly fixated, with greater fixation for high-intensity expressions. The majority of trials (70%), in line with findings in previous studies, revealed a V-shaped inspection trajectory. No relationship was found, however, between the accuracy of recognition of emotional expressions and either the location and duration of fixations or the pattern of gaze direction across the face.

  5. Modulation of incentivized dishonesty by disgust facial expressions

    PubMed Central

    Lim, Julian; Ho, Paul M.; Mullette-Gillman, O'Dhaniel A.

    2015-01-01

    Disgust modulates moral decisions involving harming others. We recently specified that this effect is bi-directionally modulated by individual sensitivity to disgust. Here, we show that this effect generalizes to the moral domain of honesty and extends to outcomes with real-world impact. We employed a dice-rolling task in which participants were incentivized to dishonestly report outcomes to increase their potential final monetary payoff. Disgust or control facial expressions were presented subliminally on each trial. Our results reveal that the disgust facial expressions altered honest reporting as a bi-directional function moderated by individual sensitivity. Combining these data with those from prior experiments revealed that the effect of disgust presentation on both harm judgments and honesty could be accounted for by the same bidirectional function, with no significant effect of domain. This clearly demonstrates that disgust facial expressions produce the same modulation of moral judgments across different moral foundations (harm and honesty). Our results suggest strong overlap in the cognitive/neural processes of moral judgments across moral foundations, and provide a framework for further studies to specify the integration of emotional information in moral decision making. PMID:26257599

  6. Plain faces are more expressive: comparative study of facial colour, mobility and musculature in primates.

    PubMed

    Santana, Sharlene E; Dobson, Seth D; Diogo, Rui

    2014-05-01

    Facial colour patterns and facial expressions are among the most important phenotypic traits that primates use during social interactions. While colour patterns provide information about the sender's identity, expressions can communicate its behavioural intentions. Extrinsic factors, including social group size, have shaped the evolution of facial coloration and mobility, but intrinsic relationships and trade-offs likely operate in their evolution as well. We hypothesize that complex facial colour patterning could reduce how salient facial expressions appear to a receiver, and thus species with highly expressive faces would have evolved uniformly coloured faces. We test this hypothesis through a phylogenetic comparative study, and explore the underlying morphological factors of facial mobility. Supporting our hypothesis, we find that species with highly expressive faces have plain facial colour patterns. The number of facial muscles does not predict facial mobility; instead, species that are larger and have a larger facial nucleus have more expressive faces. This highlights a potential trade-off between facial mobility and colour patterning in primates and reveals complex relationships between facial features during primate evolution.

  7. Facial expression recognition based on improved DAGSVM

    NASA Astrophysics Data System (ADS)

    Luo, Yuan; Cui, Ye; Zhang, Yi

    2014-11-01

    To address the cumulative error caused by the arbitrary ordering of classifiers in traditional DAGSVM (Directed Acyclic Graph Support Vector Machine) classification, this paper presents an improved DAGSVM expression recognition method. The method uses the between-class distance and the standard deviation as the measure for ordering the classifiers, minimizing the error rate in the upper levels of the classification structure. In addition, the paper combines the discrete cosine transform (DCT) with local binary patterns (LBP) to extract expression features, which are fed to the improved DAGSVM classifier for recognition. Experimental results show that, compared with other multi-class support vector machine methods, the improved DAGSVM classifier achieves a higher recognition rate. Experiments on an intelligent wheelchair platform also show that the method is more robust.
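    The decision structure of a DAGSVM can be illustrated with a short sketch: one binary SVM is trained per class pair, and at test time each pairwise decision eliminates one candidate class until a single expression label remains. The sketch below uses scikit-learn, a plain candidate ordering rather than the paper's distance/standard-deviation criterion, and random vectors standing in for DCT/LBP features; all names and sizes are illustrative.

      from itertools import combinations

      import numpy as np
      from sklearn.svm import SVC

      def train_pairwise(X, y, classes):
          """Train one binary SVM per pair of expression classes."""
          models = {}
          for a, b in combinations(classes, 2):
              mask = np.isin(y, [a, b])
              models[(a, b)] = SVC(kernel="rbf").fit(X[mask], y[mask])
          return models

      def dag_predict(models, classes, x):
          """Traverse the DAG: each pairwise test eliminates one candidate class."""
          remaining = list(classes)
          while len(remaining) > 1:
              a, b = remaining[0], remaining[-1]
              winner = models[(a, b)].predict(x.reshape(1, -1))[0]
              # Drop the class that lost this pairwise comparison.
              remaining.remove(b if winner == a else a)
          return remaining[0]

      # Hypothetical usage: random vectors stand in for DCT/LBP expression features.
      rng = np.random.default_rng(0)
      X, y = rng.normal(size=(120, 59)), rng.integers(0, 6, size=120)
      classes = sorted(set(y))
      models = train_pairwise(X, y, classes)
      print(dag_predict(models, classes, X[0]))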

  8. PointCloudExplore 2: Visual exploration of 3D gene expression

    SciTech Connect

    Ruebel, Oliver; Weber, Gunther H.; Huang, Min-Yu; Bethel, E. Wes; Keranen, Soile V.E.; Fowlkes, Charless C.; Hendriks, Cris L. Luengo; DePace, Angela H.; Simirenko, L.; Eisen, Michael B.; Biggin, Mark D.; Hagen, Hans; Malik, Jitendra; Knowles, David W.; Hamann, Bernd (Lawrence Berkeley National Laboratory; University of California, Davis; University of California, Berkeley; University of California, Irvine; University of Kaiserslautern)

    2008-03-31

    To better understand how developmental regulatory networks are defined in the genome sequence, the Berkeley Drosophila Transcription Network Project (BDTNP) has developed a suite of methods to describe 3D gene expression data, i.e., the output of the network at cellular resolution for multiple time points. To allow researchers to explore these novel data sets we have developed PointCloudXplore (PCX). In PCX we have linked physical and information visualization views via the concept of brushing (cell selection). For each view, dedicated operations for performing selection of cells are available. In PCX, all cell selections are stored in a central management system. Cells selected in one view can in this way be highlighted in any view, allowing further cell subset properties to be determined. Complex cell queries can be defined by combining different cell selections using logical operations such as AND, OR, and NOT. Here we provide an overview of PointCloudXplore 2 (PCX2), the latest publicly available version of PCX. PCX2 has proven to be an effective tool for visual exploration of 3D gene expression data. We discuss (i) all views available in PCX2, (ii) different strategies to perform cell selection, (iii) the basic architecture of PCX2, and (iv) the usefulness of PCX2, illustrated with selected examples.
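    The selection-combination idea at the core of PCX can be illustrated with boolean masks: if each cell selection is stored as a boolean array over the cells, complex queries reduce to element-wise logical operations. The sketch below assumes NumPy arrays and hypothetical selections; it is not PCX2's actual API.

      import numpy as np

      n_cells = 6000  # hypothetical number of cells in one embryo data set

      # Two hypothetical selections ("brushes") over the same cells, e.g. cells
      # with high expression of gene A and cells lying in an anterior region.
      high_gene_a = np.zeros(n_cells, dtype=bool)
      high_gene_a[:2500] = True
      anterior = np.zeros(n_cells, dtype=bool)
      anterior[2000:4000] = True

      # Complex queries are element-wise logical combinations of stored selections.
      both = high_gene_a & anterior                 # AND
      either = high_gene_a | anterior               # OR
      a_but_not_anterior = high_gene_a & ~anterior  # AND NOT

      print(both.sum(), either.sum(), a_but_not_anterior.sum())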

  9. Active AU Based Patch Weighting for Facial Expression Recognition

    PubMed Central

    Xie, Weicheng; Shen, Linlin; Yang, Meng; Lai, Zhihui

    2017-01-01

    Facial expression has many applications in human-computer interaction. Although feature extraction and selection have been well studied, the specificity of each expression variation is not fully explored in state-of-the-art works. In this work, the problem of multiclass expression recognition is converted into triplet-wise expression recognition. For each expression triplet, a new feature optimization model based on action unit (AU) weighting and patch weight optimization is proposed to represent the specificity of the expression triplet. The sparse representation-based approach is then proposed to detect the active AUs of the testing sample for better generalization. The algorithm achieved competitive accuracies of 89.67% and 94.09% for the Jaffe and Cohn–Kanade (CK+) databases, respectively. Better cross-database performance has also been observed. PMID:28146094

  10. Body Actions Change the Appearance of Facial Expressions

    PubMed Central

    Fantoni, Carlo; Gerbino, Walter

    2014-01-01

    Perception, cognition, and emotion do not operate along segregated pathways; rather, their adaptive interaction is supported by various sources of evidence. For instance, the aesthetic appraisal of powerful mood inducers like music can bias the facial expression of emotions towards mood congruency. In four experiments we showed similar mood-congruency effects elicited by the comfort/discomfort of body actions. Using a novel Motor Action Mood Induction Procedure, we let participants perform comfortable/uncomfortable visually-guided reaches and tested them in a facial emotion identification task. Through the alleged mediation of motor action induced mood, action comfort enhanced the quality of the participant’s global experience (a neutral face appeared happy and a slightly angry face neutral), while action discomfort made a neutral face appear angry and a slightly happy face neutral. Furthermore, uncomfortable (but not comfortable) reaching improved the sensitivity for the identification of emotional faces and reduced the identification time of facial expressions, as a possible effect of hyper-arousal from an unpleasant bodily experience. PMID:25251882

  11. Role of bioactive 3D hybrid fibrous scaffolds on mechanical behavior and spatiotemporal osteoblast gene expression.

    PubMed

    Allo, Bedilu A; Lin, Shigang; Mequanint, Kibret; Rizkalla, Amin S

    2013-08-14

    Three-dimensional (3D) bioactive organic-inorganic (O/I) hybrid fibrous scaffolds are attractive extracellular matrix (ECM) surrogates for bone tissue engineering. With the aim of regulating osteoblast gene expression in 3D, a new class of hybrid fibrous scaffolds with two distinct fiber diameters (260 and 600 nm) and excellent physico-mechanical properties was fabricated from tertiary (SiO2-CaO-P2O5) bioactive glass (BG) and poly (ε-caprolactone) (PCL) by an in situ sol-gel and electrospinning process. The PCL/BG hybrid fibrous scaffolds exhibited accelerated wetting properties, enhanced pore sizes and porosity, and superior mechanical properties that were dependent on fiber diameter. Contrary to control PCL fibrous scaffolds that were devoid of bonelike apatite particles, incubating PCL/BG hybrid fibrous scaffolds in simulated body fluid (SBF) revealed bonelike apatite deposition. Osteoblast cells cultured on PCL/BG hybrid fibrous scaffolds spread with multiple attachments and actively proliferated, suggesting that the low temperature in situ sol-gel and electrospinning process did not have a detrimental effect. Targeted bone-associated gene expressions by rat calvarial osteoblasts seeded on these hybrid scaffolds demonstrated remarkable spatiotemporal gene activation. Transcriptional-level gene expressions for alkaline phosphatase (ALP), osteopontin (OPN), bone sialoprotein (BSP), and osteocalcin (OCN) were significantly higher on the hybrid fibrous scaffolds (p < 0.001) and were largely dependent on fiber diameter. Taken together, our results suggest that PCL/BG fibrous scaffolds may accelerate bone formation by providing a favorable microenvironment.

  12. Misinterpretation of Facial Expressions of Emotion in Verbal Adults with Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Eack, Shaun M.; Mazefsky, Carla A.; Minshew, Nancy J.

    2015-01-01

    Facial emotion perception is significantly affected in autism spectrum disorder, yet little is known about how individuals with autism spectrum disorder misinterpret facial expressions that result in their difficulty in accurately recognizing emotion in faces. This study examined facial emotion perception in 45 verbal adults with autism spectrum…

  13. Lateralization for dynamic facial expressions in human superior temporal sulcus.

    PubMed

    De Winter, François-Laurent; Zhu, Qi; Van den Stock, Jan; Nelissen, Koen; Peeters, Ronald; de Gelder, Beatrice; Vanduffel, Wim; Vandenbulcke, Mathieu

    2015-02-01

    Most face processing studies in humans show stronger activation in the right compared to the left hemisphere. Evidence is largely based on studies with static stimuli focusing on the fusiform face area (FFA). Hence, the pattern of lateralization for dynamic faces is less clear. Furthermore, it is unclear whether this property is common to human and non-human primates due to predisposing processing strategies in the right hemisphere or that alternatively left sided specialization for language in humans could be the driving force behind this phenomenon. We aimed to address both issues by studying lateralization for dynamic facial expressions in monkeys and humans. Therefore, we conducted an event-related fMRI experiment in three macaques and twenty right handed humans. We presented human and monkey dynamic facial expressions (chewing and fear) as well as scrambled versions to both species. We studied lateralization in independently defined face-responsive and face-selective regions by calculating a weighted lateralization index (LIwm) using a bootstrapping method. In order to examine if lateralization in humans is related to language, we performed a separate fMRI experiment in ten human volunteers including a 'speech' expression (one syllable non-word) and its scrambled version. Both within face-responsive and selective regions, we found consistent lateralization for dynamic faces (chewing and fear) versus scrambled versions in the right human posterior superior temporal sulcus (pSTS), but not in FFA nor in ventral temporal cortex. Conversely, in monkeys no consistent pattern of lateralization for dynamic facial expressions was observed. Finally, LIwms based on the contrast between different types of dynamic facial expressions (relative to scrambled versions) revealed left-sided lateralization in human pSTS for speech-related expressions compared to chewing and emotional expressions. To conclude, we found consistent laterality effects in human posterior STS but not

  14. The effect of sad facial expressions on weight judgment

    PubMed Central

    Weston, Trent D.; Hass, Norah C.; Lim, Seung-Lark

    2015-01-01

    Although the body weight evaluation (e.g., normal or overweight) of others relies on perceptual impressions, it also can be influenced by other psychosocial factors. In this study, we explored the effect of task-irrelevant emotional facial expressions on judgments of body weight and the relationship between emotion-induced weight judgment bias and other psychosocial variables including attitudes toward obese persons. Forty-four participants were asked to quickly make binary body weight decisions for 960 randomized sad and neutral faces of varying weight levels presented on a computer screen. The results showed that sad facial expressions systematically decreased the decision threshold of overweight judgments for male faces. This perceptual decision bias by emotional expressions was positively correlated with the belief that being overweight is not under the control of obese persons. Our results provide experimental evidence that task-irrelevant emotional expressions can systematically change the decision threshold for weight judgments, demonstrating that sad expressions can make faces appear more overweight than they would otherwise be judged. PMID:25914669

  15. Facial Expression Training Optimises Viewing Strategy in Children and Adults

    PubMed Central

    Pollux, Petra M. J.; Hall, Sophie; Guo, Kun

    2014-01-01

    This study investigated whether training-related improvements in facial expression categorization are facilitated by spontaneous changes in gaze behaviour in adults and nine-year old children. Four sessions of a self-paced, free-viewing training task required participants to categorize happy, sad and fear expressions with varying intensities. No instructions about eye movements were given. Eye-movements were recorded in the first and fourth training session. New faces were introduced in session four to establish transfer-effects of learning. Adults focused most on the eyes in all sessions and increased expression categorization accuracy after training coincided with a strengthening of this eye-bias in gaze allocation. In children, training-related behavioural improvements coincided with an overall shift in gaze-focus towards the eyes (resulting in more adult-like gaze-distributions) and towards the mouth for happy faces in the second fixation. Gaze-distributions were not influenced by the expression intensity or by the introduction of new faces. It was proposed that training enhanced the use of a uniform, predominantly eyes-biased, gaze strategy in children in order to optimise extraction of relevant cues for discrimination between subtle facial expressions. PMID:25144680

  16. Discriminative shared Gaussian processes for multiview and view-invariant facial expression recognition.

    PubMed

    Eleftheriadis, Stefanos; Rudovic, Ognjen; Pantic, Maja

    2015-01-01

    Images of facial expressions are often captured from various views as a result of either head movements or variable camera position. Existing methods for multiview and/or view-invariant facial expression recognition typically perform classification of the observed expression using either classifiers learned separately for each view or a single classifier learned for all views. However, these approaches ignore the fact that different views of a facial expression are just different manifestations of the same facial expression. By accounting for this redundancy, we can design more effective classifiers for the target task. To this end, we propose a discriminative shared Gaussian process latent variable model (DS-GPLVM) for multiview and view-invariant classification of facial expressions from multiple views. In this model, we first learn a discriminative manifold shared by multiple views of a facial expression. Subsequently, we perform facial expression classification in the expression manifold. Finally, classification of an observed facial expression is carried out either in the view-invariant manner (using only a single view of the expression) or in the multiview manner (using multiple views of the expression). The proposed model can also be used to perform fusion of different facial features in a principled manner. We validate the proposed DS-GPLVM on both posed and spontaneously displayed facial expressions from three publicly available datasets (MultiPIE, labeled face parts in the wild, and static facial expressions in the wild). We show that this model outperforms the state-of-the-art methods for multiview and view-invariant facial expression classification, and several state-of-the-art methods for multiview learning and feature fusion.

  17. Turning Avatar into Realistic Human Expression Using Linear and Bilinear Interpolations

    NASA Astrophysics Data System (ADS)

    Hazim Alkawaz, Mohammed; Mohamad, Dzulkifli; Rehman, Amjad; Basori, Ahmad Hoirul

    2014-06-01

    Facial animation based on 3D facial data is well supported by laser scanning and advanced 3D tools for producing complex facial models. However, existing approaches still lack facial expressions driven by emotional state. Facial skin colour, which is closely related to human emotion, can be used to enhance the rendered expression. This paper presents techniques for facial animation transformation that manipulate facial skin colour using linear and bilinear interpolation. The generated expressions closely resemble genuine human expressions and enhance the facial expressiveness of the virtual human.
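    A minimal sketch of the two interpolation schemes named above, assuming RGB skin-colour keyframes per expression (the colour values and parameter names are illustrative, not taken from the paper):

      # Linear interpolation blends two skin colours by an emotion intensity t; bilinear
      # interpolation blends four colours over two expression axes. Colours are invented.
      import numpy as np

      def lerp(c0, c1, t):
          # Linear interpolation between two RGB colours, t in [0, 1].
          return (1.0 - t) * np.asarray(c0, float) + t * np.asarray(c1, float)

      def bilerp(c00, c10, c01, c11, u, v):
          # Bilinear interpolation over a 2x2 grid of colours, u and v in [0, 1].
          return lerp(lerp(c00, c10, u), lerp(c01, c11, u), v)

      neutral = (224, 172, 148)     # illustrative neutral skin tone
      angry   = (236, 130, 120)     # flushed tone for anger
      fearful = (230, 200, 185)     # pale tone for fear
      happy   = (238, 180, 150)     # warm tone for joy

      print(lerp(neutral, angry, 0.5))                         # halfway toward an angry flush
      print(bilerp(neutral, angry, fearful, happy, 0.3, 0.7))  # blend along two emotion axes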

  18. Fear Modulates Visual Awareness Similarly for Facial and Bodily Expressions

    PubMed Central

    Stienen, Bernard M. C.; de Gelder, Beatrice

    2011-01-01

    Background: Social interaction depends on a multitude of signals carrying information about the emotional state of others. But the relative importance of facial and bodily signals is still poorly understood. Past research has focused on the perception of facial expressions while perception of whole body signals has only been studied recently. In order to better understand the relative contribution of affective signals from the face only or from the whole body we performed two experiments using binocular rivalry. This method is well suited to contrasting two classes of stimuli, testing our processing sensitivity to either stimulus, and addressing the question of how emotion modulates this sensitivity. Method: In the first experiment we directly contrasted fearful, angry, and neutral bodies and faces. We always presented bodies in one eye and faces in the other simultaneously for 60 s and asked participants to report what they perceived. In the second experiment we focused specifically on the role of fearful expressions of faces and bodies. Results: Taken together the two experiments show that there is no clear bias toward either the face or body when the expression of the body and face are neutral or angry. However, the perceptual dominance in favor of either the face or the body is a function of the stimulus class expressing fear. PMID:22125517

  19. Deficits in the Mimicry of Facial Expressions in Parkinson's Disease

    PubMed Central

    Livingstone, Steven R.; Vezer, Esztella; McGarry, Lucy M.; Lang, Anthony E.; Russo, Frank A.

    2016-01-01

    Background: Humans spontaneously mimic the facial expressions of others, facilitating social interaction. This mimicking behavior may be impaired in individuals with Parkinson's disease, for whom the loss of facial movements is a clinical feature. Objective: To assess the presence of facial mimicry in patients with Parkinson's disease. Method: Twenty-seven non-depressed patients with idiopathic Parkinson's disease and 28 age-matched controls had their facial muscles recorded with electromyography while they observed presentations of calm, happy, sad, angry, and fearful emotions. Results: Patients exhibited reduced amplitude and delayed onset in the zygomaticus major muscle region (smiling response) following happy presentations (patients M = 0.02, 95% confidence interval [CI] −0.15 to 0.18, controls M = 0.26, CI 0.14 to 0.37, ANOVA, effect size [ES] = 0.18, p < 0.001). Although patients exhibited activation of the corrugator supercilii and medial frontalis (frowning response) following sad and fearful presentations, the frontalis response to sad presentations was attenuated relative to controls (patients M = 0.05, CI −0.08 to 0.18, controls M = 0.21, CI 0.09 to 0.34, ANOVA, ES = 0.07, p = 0.017). The amplitude of patients' zygomaticus activity in response to positive emotions was found to be negatively correlated with response times for ratings of emotional identification, suggesting a motor-behavioral link (r = –0.45, p = 0.02, two-tailed). Conclusions: Patients showed decreased mimicry overall, mimicking other people's frowns to some extent, but presenting with profoundly weakened and delayed smiles. These findings open a new avenue of inquiry into the "masked face" syndrome of PD. PMID:27375505

  20. Exaggerated perception of facial expressions is increased in individuals with schizotypal traits

    PubMed Central

    Uono, Shota; Sato, Wataru; Toichi, Motomi

    2015-01-01

    Emotional facial expressions are indispensable communicative tools, and social interactions involving facial expressions are impaired in some psychiatric disorders. Recent studies revealed that the perception of dynamic facial expressions was exaggerated in normal participants, and this exaggerated perception is weakened in autism spectrum disorder (ASD). Based on the notion that ASD and schizophrenia spectrum disorder are at two extremes of the continuum with respect to social impairment, we hypothesized that schizophrenic characteristics would strengthen the exaggerated perception of dynamic facial expressions. To test this hypothesis, we investigated the relationship between the perception of facial expressions and schizotypal traits in a normal population. We presented dynamic and static facial expressions, and asked participants to change an emotional face display to match the perceived final image. The presence of schizotypal traits was positively correlated with the degree of exaggeration for dynamic, as well as static, facial expressions. Among its subscales, the paranoia trait was positively correlated with the exaggerated perception of facial expressions. These results suggest that schizotypal traits, specifically the tendency to over-attribute mental states to others, exaggerate the perception of emotional facial expressions. PMID:26135081

  1. Exaggerated perception of facial expressions is increased in individuals with schizotypal traits.

    PubMed

    Uono, Shota; Sato, Wataru; Toichi, Motomi

    2015-07-02

    Emotional facial expressions are indispensable communicative tools, and social interactions involving facial expressions are impaired in some psychiatric disorders. Recent studies revealed that the perception of dynamic facial expressions was exaggerated in normal participants, and this exaggerated perception is weakened in autism spectrum disorder (ASD). Based on the notion that ASD and schizophrenia spectrum disorder are at two extremes of the continuum with respect to social impairment, we hypothesized that schizophrenic characteristics would strengthen the exaggerated perception of dynamic facial expressions. To test this hypothesis, we investigated the relationship between the perception of facial expressions and schizotypal traits in a normal population. We presented dynamic and static facial expressions, and asked participants to change an emotional face display to match the perceived final image. The presence of schizotypal traits was positively correlated with the degree of exaggeration for dynamic, as well as static, facial expressions. Among its subscales, the paranoia trait was positively correlated with the exaggerated perception of facial expressions. These results suggest that schizotypal traits, specifically the tendency to over-attribute mental states to others, exaggerate the perception of emotional facial expressions.

  2. Shaped 3D singular spectrum analysis for quantifying gene expression, with application to the early zebrafish embryo.

    PubMed

    Shlemov, Alex; Golyandina, Nina; Holloway, David; Spirov, Alexander

    2015-01-01

    Recent progress in microscopy technologies, biological markers, and automated processing methods is making possible the development of gene expression atlases at cellular-level resolution over whole embryos. Raw data on gene expression is usually very noisy. This noise comes from both experimental (technical/methodological) and true biological sources (from stochastic biochemical processes). In addition, the cells or nuclei being imaged are irregularly arranged in 3D space. This makes the processing, extraction, and study of expression signals and intrinsic biological noise a serious challenge for 3D data, requiring new computational approaches. Here, we present a new approach for studying gene expression in nuclei located in a thick layer around a spherical surface. The method includes depth equalization on the sphere, flattening, interpolation to a regular grid, pattern extraction by Shaped 3D singular spectrum analysis (SSA), and interpolation back to original nuclear positions. The approach is demonstrated on several examples of gene expression in the zebrafish egg (a model system in vertebrate development). The method is tested on several different data geometries (e.g., nuclear positions) and different forms of gene expression patterns. Fully 3D datasets for developmental gene expression are becoming increasingly available; we discuss the prospects of applying 3D-SSA to data processing and analysis in this growing field.
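    For readers unfamiliar with SSA, the sketch below shows the pattern-extraction step in its simplest one-dimensional form (embed, decompose, group, reconstruct). The paper's Shaped 3D-SSA operates on irregularly positioned nuclei over a spherical shell and is considerably more involved; this is only a toy illustration on a synthetic expression profile.

      # Toy 1-D singular spectrum analysis: extract a smooth pattern from a noisy profile.
      # This is a simplification for illustration, not the Shaped 3D-SSA of the paper.
      import numpy as np

      def ssa_smooth(series, window, n_components):
          series = np.asarray(series, float)
          n, k = len(series), len(series) - window + 1
          # 1. Embedding: Hankel trajectory matrix of sliding windows.
          traj = np.column_stack([series[i:i + window] for i in range(k)])
          # 2. Decomposition: singular value decomposition of the trajectory matrix.
          u, s, vt = np.linalg.svd(traj, full_matrices=False)
          # 3. Grouping: keep the leading components (the smooth expression pattern).
          low = (u[:, :n_components] * s[:n_components]) @ vt[:n_components]
          # 4. Reconstruction: average over anti-diagonals to map back to a series.
          recon, counts = np.zeros(n), np.zeros(n)
          for j in range(k):
              recon[j:j + window] += low[:, j]
              counts[j:j + window] += 1
          return recon / counts

      x = np.linspace(0, 4 * np.pi, 200)
      noisy_profile = np.sin(x) + 0.3 * np.random.default_rng(1).normal(size=x.size)
      pattern = ssa_smooth(noisy_profile, window=40, n_components=2)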

  3. A longitudinal analysis of the development of infant facial expressions in response to acute pain: immediate and regulatory expressions.

    PubMed

    Ahola Kohut, Sara; Pillai Riddell, Rebecca; Flora, David B; Oster, Harriet

    2012-12-01

    Facial expressions during infancy are important to examine, as infants do not have the language skills to describe their experiences. This is particularly vital in the context of pain, where infants depend solely on their caregivers for relief. The objective of the current study was to investigate the development of negative infant facial expressions in response to immunization pain over the first year of life. Infant facial expressions were examined longitudinally using a subsample of 100 infants that were each videotaped during their 2-, 4-, 6-, and 12-month routine immunization appointments. Infant facial expressions were coded using BabyFACS (facial action coding system) for the first minute after a painful needle prick. Facial expressions were examined with a catalogue of the most commonly occurring facial expressions. Results demonstrated that clear differences were seen over ages. Infants display a variety of facial expressions with some of the components of adult pain expressions immediately after the needle and they abate shortly after. However, infants did not display adult expressions of discrete negative emotions. Instead, infants displayed a variety of generalized pain and distress faces aimed at gaining caregiver aid. The development of nonverbal communication in infants, particularly facial expressions, remains an important area of inquiry. Further study into accurately measuring infant negative emotions, pain, and distress is warranted.

  4. Facial expression recognition and emotional regulation in narcolepsy with cataplexy.

    PubMed

    Bayard, Sophie; Croisier Langenier, Muriel; Dauvilliers, Yves

    2013-04-01

    Cataplexy is pathognomonic of narcolepsy with cataplexy, and defined by a transient loss of muscle tone triggered by strong emotions. Recent researches suggest abnormal amygdala function in narcolepsy with cataplexy. Emotion treatment and emotional regulation strategies are complex functions involving cortical and limbic structures, like the amygdala. As the amygdala has been shown to play a role in facial emotion recognition, we tested the hypothesis that patients with narcolepsy with cataplexy would have impaired recognition of facial emotional expressions compared with patients affected with central hypersomnia without cataplexy and healthy controls. We also aimed to determine whether cataplexy modulates emotional regulation strategies. Emotional intensity, arousal and valence ratings on Ekman faces displaying happiness, surprise, fear, anger, disgust, sadness and neutral expressions of 21 drug-free patients with narcolepsy with cataplexy were compared with 23 drug-free sex-, age- and intellectual level-matched adult patients with hypersomnia without cataplexy and 21 healthy controls. All participants underwent polysomnography recording and multiple sleep latency tests, and completed depression, anxiety and emotional regulation questionnaires. Performance of patients with narcolepsy with cataplexy did not differ from patients with hypersomnia without cataplexy or healthy controls on both intensity rating of each emotion on its prototypical label and mean ratings for valence and arousal. Moreover, patients with narcolepsy with cataplexy did not use different emotional regulation strategies. The level of depressive and anxious symptoms in narcolepsy with cataplexy did not differ from the other groups. Our results demonstrate that narcolepsy with cataplexy accurately perceives and discriminates facial emotions, and regulates emotions normally. The absence of alteration of perceived affective valence remains a major clinical interest in narcolepsy with cataplexy

  5. Fetal facial expression in response to intravaginal music emission

    PubMed Central

    García-Faura, Álex; Prats-Galino, Alberto

    2015-01-01

    This study compared fetal response to musical stimuli applied intravaginally (intravaginal music [IVM]) with application via emitters placed on the mother’s abdomen (abdominal music [ABM]). Responses were quantified by recording facial movements identified on 3D/4D ultrasound. One hundred and six normal pregnancies between 14 and 39 weeks of gestation were randomized to 3D/4D ultrasound with: (a) ABM with standard headphones (flute monody at 98.6 dB); (b) IVM with a specially designed device emitting the same monody at 53.7 dB; or (c) intravaginal vibration (IVV; 125 Hz) at 68 dB with the same device. Facial movements were quantified at baseline, during stimulation, and for 5 minutes after stimulation was discontinued. In fetuses at a gestational age of >16 weeks, IVM elicited mouthing (MT) and tongue expulsion (TE) in 86.7% and 46.6% of fetuses, respectively, with significant differences when compared with ABM and IVV (p = 0.002 and p = 0.004, respectively). There were no changes from baseline in ABM and IVV. TE occurred ≥5 times in 5 minutes in 13.3% with IVM. IVM was associated with a higher occurrence of MT (odds ratio = 10.980; 95% confidence interval = 3.105–47.546) and TE (odds ratio = 10.943; 95% confidence interval = 2.568–77.037). The frequency of TE with IVM increased significantly with gestational age (p = 0.024). Fetuses at 16–39 weeks of gestation respond to intravaginally emitted music with repetitive MT and TE movements not observed with ABM or IVV. Our findings suggest that neural pathways participating in the auditory–motor system are developed as early as gestational week 16. These findings might contribute to diagnostic methods for prenatal hearing screening, and research into fetal neurological stimulation. PMID:26539240

  6. Contributions of feature shapes and surface cues to the recognition of facial expressions.

    PubMed

    Sormaz, Mladen; Young, Andrew W; Andrews, Timothy J

    2016-10-01

    Theoretical accounts of face processing often emphasise feature shapes as the primary visual cue to the recognition of facial expressions. However, changes in facial expression also affect the surface properties of the face. In this study, we investigated whether this surface information can also be used in the recognition of facial expression. First, participants identified facial expressions (fear, anger, disgust, sadness, happiness) from images that were manipulated such that they varied mainly in shape or mainly in surface properties. We found that the categorization of facial expression is possible in either type of image, but that different expressions are relatively dependent on surface or shape properties. Next, we investigated the relative contributions of shape and surface information to the categorization of facial expressions. This employed a complementary method that involved combining the surface properties of one expression with the shape properties from a different expression. Our results showed that the categorization of facial expressions in these hybrid images was equally dependent on the surface and shape properties of the image. Together, these findings provide a direct demonstration that both feature shape and surface information make significant contributions to the recognition of facial expressions.

  7. Can Neurotypical Individuals Read Autistic Facial Expressions? Atypical Production of Emotional Facial Expressions in Autism Spectrum Disorders.

    PubMed

    Brewer, Rebecca; Biotti, Federica; Catmur, Caroline; Press, Clare; Happé, Francesca; Cook, Richard; Bird, Geoffrey

    2016-02-01

    The difficulties encountered by individuals with autism spectrum disorder (ASD) when interacting with neurotypical (NT, i.e. nonautistic) individuals are usually attributed to failure to recognize the emotions and mental states of their NT interaction partner. It is also possible, however, that at least some of the difficulty is due to a failure of NT individuals to read the mental and emotional states of ASD interaction partners. Previous research has frequently observed deficits of typical facial emotion recognition in individuals with ASD, suggesting atypical representations of emotional expressions. Relatively little research, however, has investigated the ability of individuals with ASD to produce recognizable emotional expressions, and thus, whether NT individuals can recognize autistic emotional expressions. The few studies which have investigated this have used only NT observers, making it impossible to determine whether atypical representations are shared among individuals with ASD, or idiosyncratic. This study investigated NT and ASD participants' ability to recognize emotional expressions produced by NT and ASD posers. Three posing conditions were included, to determine whether potential group differences are due to atypical cognitive representations of emotion, impaired understanding of the communicative value of expressions, or poor proprioceptive feedback. Results indicated that ASD expressions were recognized less well than NT expressions, and that this is likely due to a genuine deficit in the representation of typical emotional expressions in this population. Further, ASD expressions were equally poorly recognized by NT individuals and those with ASD, implicating idiosyncratic, rather than common, atypical representations of emotional expressions in ASD.

  8. Face in profile view reduces perceived facial expression intensity: an eye-tracking study.

    PubMed

    Guo, Kun; Shaw, Heather

    2015-02-01

    Recent studies measuring the facial expressions of emotion have focused primarily on the perception of frontal face images. As we frequently encounter expressive faces from different viewing angles, having a mechanism which allows invariant expression perception would be advantageous to our social interactions. Although a couple of studies have indicated comparable expression categorization accuracy across viewpoints, it is unknown how perceived expression intensity and associated gaze behaviour change across viewing angles. Differences could arise because diagnostic cues from local facial features for decoding expressions could vary with viewpoints. Here we manipulated orientation of faces (frontal, mid-profile, and profile view) displaying six common facial expressions of emotion, and measured participants' expression categorization accuracy, perceived expression intensity and associated gaze patterns. In comparison with frontal faces, profile faces slightly reduced identification rates for disgust and sad expressions, but significantly decreased perceived intensity for all tested expressions. Although quantitatively viewpoint had expression-specific influence on the proportion of fixations directed at local facial features, the qualitative gaze distribution within facial features (e.g., the eyes tended to attract the highest proportion of fixations, followed by the nose and then the mouth region) was independent of viewpoint and expression type. Our results suggest that the viewpoint-invariant facial expression processing is categorical perception, which could be linked to a viewpoint-invariant holistic gaze strategy for extracting expressive facial cues.

  9. The CD8alpha from sea bass (Dicentrarchus labrax L.): Cloning, expression and 3D modelling.

    PubMed

    Buonocore, Francesco; Randelli, Elisa; Bird, Steve; Secombes, Chris J; Costantini, Susan; Facchiano, Angelo; Mazzini, Massimo; Scapigliati, Giuseppe

    2006-04-01

    In this paper we describe the cloning, expression and structural study by modelling techniques of the CD8alpha from sea bass (Dicentrarchus labrax L.). The sea bass CD8alpha cDNA is comprised of 1490 bp and is translated in one reading frame to give a protein of 217 amino acids, with a predicted 26 amino acids signal peptide, a 88 bp 5'-UTR and a 748 bp 3'-UTR. A multiple alignment of CD8alpha from sea bass with other known CD8alpha sequences shows the conservation of most amino acid residues involved in the peculiar structural domains found within CD8alpha's. Cysteine residues that are involved in disulfide bonding to form the V domain are conserved. In contrast, an extra cysteine residue found in most mammals in this region is not present in sea bass. The transmembrane and cytoplasmic regions are the most conserved regions within the molecule in the alignment analysis. However, the motif (CXCP) that is thought to be responsible for binding p56lck is missing in the sea bass sequence. Phylogenetic analysis conducted using amino acid sequences showed that sea bass CD8alpha grouped with other known teleost sequences and that three different clusters were formed by the mammalian, avian and fish CD8alpha sequences. The thymus was the tissue with the highest CD8alpha expression, followed by gut, gills, peripheral blood leukocytes and spleen. Lower CD8alpha mRNA levels were found in head kidney, liver and brain. It was possible to create a partial 3D model using the human and mouse structures as template. The CD8alpha 11-120 amino acid region was taken into consideration and the best obtained 3D model shows the presence of ten beta-strands, involving about 50% of the sequence. The global structure was defined as an immunoglobulin-like beta-sandwich made of two anti-parallel sheets. Two cysteines were present in this region and they were at a suitable distance to form an S-S bond as seen in the template human and mouse structures.

  10. Altered saccadic targets when processing facial expressions under different attentional and stimulus conditions.

    PubMed

    Boutsen, Frank A; Dvorak, Justin D; Pulusu, Vinay K; Ross, Elliott D

    2017-03-13

    Depending on a subject's attentional bias, robust changes in emotional perception occur when facial blends (different emotions expressed on upper/lower face) are presented tachistoscopically. If no instructions are given, subjects overwhelmingly identify the lower facial expression when blends are presented to either visual field. If asked to attend to the upper face, subjects overwhelmingly identify the upper facial expression in the left visual field but remain slightly biased to the lower facial expression in the right visual field. The current investigation sought to determine whether differences in initial saccadic targets could help explain the perceptual biases described above. Ten subjects were presented with full and blend facial expressions under different attentional conditions. No saccadic differences were found for left versus right visual field presentations or for full facial versus blend stimuli. When asked to identify the presented emotion, saccades were directed to the lower face. When asked to attend to the upper face, saccades were directed to the upper face. When asked to attend to the upper face and try to identify the emotion, saccades were directed to the upper face but to a lesser degree. Thus, saccadic behavior supports the concept that there are cognitive-attentional pre-attunements when subjects visually process facial expressions. However, these pre-attunements do not fully explain the perceptual superiority of the left visual field for identifying the upper facial expression when facial blends are presented tachistoscopically. Hence other perceptual factors must be in play, such as the phenomenon of virtual scanning.

  11. Can an anger face also be scared? Malleability of facial expressions.

    PubMed

    Widen, Sherri C; Naab, Pamela

    2012-10-01

    Do people always interpret a facial expression as communicating a single emotion (e.g., the anger face as only angry) or is that interpretation malleable? The current study investigated preschoolers' (N = 60; 3-4 years) and adults' (N = 20) categorization of facial expressions. On each of five trials, participants selected from an array of 10 facial expressions (an open-mouthed, high arousal expression and a closed-mouthed, low arousal expression each for happiness, sadness, anger, fear, and disgust) all those that displayed the target emotion. Children's interpretation of facial expressions was malleable: 48% of children who selected the fear, anger, sadness, and disgust faces for the "correct" category also selected these same faces for another emotion category; 47% of adults did so for the sadness and disgust faces. The emotion children and adults attribute to facial expressions is influenced by the emotion category for which they are looking.

  12. Cloning, Expression and 3D Structure Prediction of Chitinase from Chitinolyticbacter meiyuanensis SYBC-H1

    PubMed Central

    Hao, Zhikui; Wu, Hangui; Yang, Meiling; Chen, Jianjun; Xi, Limin; Zhao, Weijie; Yu, Jialin; Liu, Jiayang; Liao, Xiangru; Huang, Qingguo

    2016-01-01

    Two CHI genes from Chitinolyticbacter meiyuanensis SYBC-H1 encoding chitinases were identified and their protein 3D structures were predicted. According to the amino acid sequence alignment, the CHI1 gene, encoding 166 aa, had a structural domain similar to the GH18 type II chitinase, and the CHI2 gene, encoding 383 aa, had the same catalytic domain as the glycoside hydrolase family 19 chitinase. In this study, CHI2 chitinase was expressed in Escherichia coli BL21 cells, and this protein was purified by ammonium sulfate precipitation, DEAE-cellulose, and Sephadex G-100 chromatography. Optimal activity of CHI2 chitinase occurred at a temperature of 40 °C and a pH of 6.5. The presence of metal ions Fe3+, Fe2+, and Zn2+ inhibited CHI2 chitinase activity, while Na+ and K+ promoted its activity. Furthermore, the presence of EGTA, EDTA, and β-mercaptoethanol significantly increased the stability of CHI2 chitinase. The CHI2 chitinase was active with p-NP-GlcNAc, with Km and Vm values of 23.0 µmol/L and 9.1 mM/min, respectively, at 37 °C. Additionally, the CHI2 chitinase was characterized as an N-acetyl glucosaminidase based on the hydrolysate from chitin. Overall, our results demonstrate that CHI2 chitinase, with its remarkable biochemical properties, is suitable for the bioconversion of chitin waste. PMID:27240345
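    Assuming the standard Michaelis-Menten rate law (implied by the reporting of Km and Vm above), the reported constants can be turned into predicted reaction velocities as a worked example; the substrate concentrations below are illustrative:

      # Worked example of v = Vmax * [S] / (Km + [S]) using the values reported above for
      # CHI2 chitinase on p-NP-GlcNAc (Km = 23.0 umol/L, Vm = 9.1 mM/min). Substrate
      # concentrations are chosen for illustration; at [S] = Km the velocity is Vmax / 2.
      def michaelis_menten(s_umol_per_l, km_umol_per_l=23.0, vmax_mm_per_min=9.1):
          return vmax_mm_per_min * s_umol_per_l / (km_umol_per_l + s_umol_per_l)

      for s in (5.0, 23.0, 230.0):
          print(f"[S] = {s:6.1f} umol/L  ->  v = {michaelis_menten(s):.2f} mM/min")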

  13. Do Dynamic Facial Expressions Convey Emotions to Children Better than Do Static Ones?

    ERIC Educational Resources Information Center

    Widen, Sherri C.; Russell, James A.

    2015-01-01

    Past research has shown that children recognize emotions from facial expressions poorly and improve only gradually with age, but the stimuli in such studies have been static faces. Because dynamic faces include more information, it may well be that children more readily recognize emotions from dynamic facial expressions. The current study of…

  14. Recognition of Facial Expressions and Prosodic Cues with Graded Emotional Intensities in Adults with Asperger Syndrome

    ERIC Educational Resources Information Center

    Doi, Hirokazu; Fujisawa, Takashi X.; Kanai, Chieko; Ohta, Haruhisa; Yokoi, Hideki; Iwanami, Akira; Kato, Nobumasa; Shinohara, Kazuyuki

    2013-01-01

    This study investigated the ability of adults with Asperger syndrome to recognize emotional categories of facial expressions and emotional prosodies with graded emotional intensities. The individuals with Asperger syndrome showed poorer recognition performance for angry and sad expressions from both facial and vocal information. The group…

  15. Brief Report: Representational Momentum for Dynamic Facial Expressions in Pervasive Developmental Disorder

    ERIC Educational Resources Information Center

    Uono, Shota; Sato, Wataru; Toichi, Motomi

    2010-01-01

    Individuals with pervasive developmental disorder (PDD) have difficulty with social communication via emotional facial expressions, but behavioral studies involving static images have reported inconsistent findings about emotion recognition. We investigated whether dynamic presentation of facial expression would enhance subjective perception of…

  16. The Relationship between Processing Facial Identity and Emotional Expression in 8-Month-Old Infants

    ERIC Educational Resources Information Center

    Schwarzer, Gudrun; Jovanovic, Bianca

    2010-01-01

    In Experiment 1, it was investigated whether infants process facial identity and emotional expression independently or in conjunction with one another. Eight-month-old infants were habituated to two upright or two inverted faces varying in facial identity and emotional expression. Infants were tested with a habituation face, a switch face, and a…

  17. Does Gaze Direction Modulate Facial Expression Processing in Children with Autism Spectrum Disorder?

    ERIC Educational Resources Information Center

    Akechi, Hironori; Senju, Atsushi; Kikuchi, Yukiko; Tojo, Yoshikuni; Osanai, Hiroo; Hasegawa, Toshikazu

    2009-01-01

    Two experiments investigated whether children with autism spectrum disorder (ASD) integrate relevant communicative signals, such as gaze direction, when decoding a facial expression. In Experiment 1, typically developing children (9-14 years old; n = 14) were faster at detecting a facial expression accompanying a gaze direction with a congruent…

  18. Preschooler's Faces in Spontaneous Emotional Contexts--How Well Do They Match Adult Facial Expression Prototypes?

    ERIC Educational Resources Information Center

    Gaspar, Augusta; Esteves, Francisco G.

    2012-01-01

    Prototypical facial expressions of emotion, also known as universal facial expressions, are the underpinnings of most research concerning recognition of emotions in both adults and children. Data on natural occurrences of these prototypes in natural emotional contexts are rare and difficult to obtain in adults. By recording naturalistic…

  19. Evaluating Posed and Evoked Facial Expressions of Emotion from Adults with Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Faso, Daniel J.; Sasson, Noah J.; Pinkham, Amy E.

    2015-01-01

    Though many studies have examined facial affect perception by individuals with autism spectrum disorder (ASD), little research has investigated how facial expressivity in ASD is perceived by others. Here, naïve female observers (n = 38) judged the intensity, naturalness and emotional category of expressions produced by adults with ASD (n = 6) and…

  20. Effectiveness of Teaching Naming Facial Expression to Children with Autism via Video Modeling

    ERIC Educational Resources Information Center

    Akmanoglu, Nurgul

    2015-01-01

    This study aims to examine the effectiveness of teaching naming emotional facial expression via video modeling to children with autism. Teaching the naming of emotions (happy, sad, scared, disgusted, surprised, feeling physical pain, and bored) was made by creating situations that lead to the emergence of facial expressions to children…

  1. Compound facial expressions of emotion: from basic research to clinical applications.

    PubMed

    Du, Shichuan; Martinez, Aleix M

    2015-12-01

    Emotions are sometimes revealed through facial expressions. When these natural facial articulations involve the contraction of the same muscle groups in people of distinct cultural upbringings, this is taken as evidence of a biological origin of these emotions. While past research had identified facial expressions associated with a single internally felt category (eg, the facial expression of happiness when we feel joyful), we have recently studied facial expressions observed when people experience compound emotions (eg, the facial expression of happy surprise when we feel joyful in a surprised way, as, for example, at a surprise birthday party). Our research has identified 17 compound expressions consistently produced across cultures, suggesting that the number of facial expressions of emotion of biological origin is much larger than previously believed. The present paper provides an overview of these findings and shows evidence supporting the view that spontaneous expressions are produced using the same facial articulations previously identified in laboratory experiments. We also discuss the implications of our results in the study of psychopathologies, and consider several open research questions.

  2. Impact of Childhood Maltreatment on the Recognition of Facial Expressions of Emotions.

    PubMed

    Ardizzi, Martina; Martini, Francesca; Umiltà, Maria Alessandra; Evangelista, Valentina; Ravera, Roberto; Gallese, Vittorio

    2015-01-01

    The development of the explicit recognition of facial expressions of emotions can be affected by childhood maltreatment experiences. A previous study demonstrated the existence of an explicit recognition bias for angry facial expressions among a population of adolescent Sierra Leonean street-boys exposed to high levels of maltreatment. In the present study, the recognition bias for angry facial expressions was investigated in a younger population of street-children and age-matched controls. Participants performed a forced-choice facial expression recognition task. Recognition bias was measured as participants' tendency to over-attribute the anger label to other negative facial expressions. Participants' heart rate was assessed and related to their behavioral performance, as an index of their stress-related physiological responses. Results demonstrated the presence of a recognition bias for angry facial expressions among street-children, also pinpointing a similar, although significantly less pronounced, tendency among controls. Participants' performance was controlled for age, cognitive and educational levels, and naming skills. None of these variables influenced the recognition bias for angry facial expressions. In contrast, a significant effect of heart rate on participants' tendency to use the anger label was found. Taken together, these results suggest that childhood exposure to maltreatment experiences amplifies children's "pre-existing bias" for anger labeling in a forced-choice emotion recognition task. Moreover, they strengthen the thesis that the recognition bias for angry facial expressions is a manifestation of a functional adaptive mechanism that tunes the victim's perceptual and attentional focus to salient environmental social stimuli.

  3. Can Healthy Fetuses Show Facial Expressions of “Pain” or “Distress”?

    PubMed Central

    Reissland, Nadja; Francis, Brian; Mason, James

    2013-01-01

    Background With advances of research on fetal behavioural development, the question of whether we can identify fetal facial expressions and determine their developmental progression, takes on greater importance. In this study we investigate longitudinally the increasing complexity of combinations of facial movements from 24 to 36 weeks gestation in a sample of healthy fetuses using frame-by-frame coding of 4-D ultrasound scans. The primary aim was to examine whether these complex facial movements coalesce into a recognisable facial expression of pain/distress. Methodology/Findings Fifteen fetuses (8 girls, 7 boys) were observed four times in the second and third trimester of pregnancy. Fetuses showed significant progress towards more complex facial expressions as gestational age increased. Statistical analysis of the facial movements making up a specific facial configuration namely “pain/distress” also demonstrates that this facial expression becomes significantly more complete as the fetus matures. Conclusions/Significance The study shows that one can determine the normal progression of fetal facial movements. Furthermore, our results suggest that healthy fetuses progress towards an increasingly complete pain/distress expression as they mature. We argue that this is an adaptive process which is beneficial to the fetus postnatally and has the potential to identify normal versus abnormal developmental pathways. PMID:23755245

  4. Selective attention and facial expression recognition in patients with Parkinson's disease.

    PubMed

    Alonso-Recio, Laura; Serrano, Juan M; Martín, Pilar

    2014-06-01

    Parkinson's disease (PD) has been associated with facial expression recognition difficulties. However, this impairment could be secondary to the one produced in other cognitive processes involved in recognition, such as selective attention. This study investigates the influence of two selective attention components (inhibition and visual search) on facial expression recognition in PD. We compared facial expression and non-emotional stimuli recognition abilities of 51 patients and 51 healthy controls, by means of an adapted Stroop task, and by "The Face in the Crowd" paradigm, which assess Inhibition and Visual Search abilities, respectively. Patients scored worse than controls in both tasks with facial expressions, but not with the other nonemotional stimuli, indicating specific emotional recognition impairment, not dependent on selective attention abilities. This should be taken into account in patients' neuropsychological assessment given the relevance of emotional facial expression for social communication in everyday settings.

  5. The Child Affective Facial Expression (CAFE) set: validity and reliability from untrained adults.

    PubMed

    LoBue, Vanessa; Thrasher, Cat

    2014-01-01

    Emotional development is one of the largest and most productive areas of psychological research. For decades, researchers have been fascinated by how humans respond to, detect, and interpret emotional facial expressions. Much of the research in this area has relied on controlled stimulus sets of adults posing various facial expressions. Here we introduce a new stimulus set of emotional facial expressions into the domain of research on emotional development-The Child Affective Facial Expression set (CAFE). The CAFE set features photographs of a racially and ethnically diverse group of 2- to 8-year-old children posing for six emotional facial expressions-angry, fearful, sad, happy, surprised, and disgusted-and a neutral face. In the current work, we describe the set and report validity and reliability data on the set from 100 untrained adult participants.

  6. Modulation of perception and brain activity by predictable trajectories of facial expressions.

    PubMed

    Furl, N; van Rijsbergen, N J; Kiebel, S J; Friston, K J; Treves, A; Dolan, R J

    2010-03-01

    People track facial expression dynamics with ease to accurately perceive distinct emotions. Although the superior temporal sulcus (STS) appears to possess mechanisms for perceiving changeable facial attributes such as expressions, the nature of the underlying neural computations is not known. Motivated by novel theoretical accounts, we hypothesized that visual and motor areas represent expressions as anticipated motion trajectories. Using magnetoencephalography, we show predictable transitions between fearful and neutral expressions (compared with scrambled and static presentations) heighten activity in visual cortex as quickly as 165 ms poststimulus onset and later (237 ms) engage fusiform gyrus, STS and premotor areas. Consistent with proposed models of biological motion representation, we suggest that visual areas predictively represent coherent facial trajectories. We show that such representations bias emotion perception of subsequent static faces, suggesting that facial movements elicit predictions that bias perception. Our findings reveal critical processes evoked in the perception of dynamic stimuli such as facial expressions, which can endow perception with temporal continuity.

  7. Electromyographic Responses to Emotional Facial Expressions in 6-7 Year Olds with Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Deschamps, P. K. H.; Coppes, L.; Kenemans, J. L.; Schutter, D. J. L. G.; Matthys, W.

    2015-01-01

    This study aimed to examine facial mimicry in 6-7 year old children with autism spectrum disorder (ASD) and to explore whether facial mimicry was related to the severity of impairment in social responsiveness. Facial electromyographic activity in response to angry, fearful, sad and happy facial expressions was recorded in twenty 6-7 year old…

  8. Effects of Intensity of Facial Expressions on Amygdalar Activation Independently of Valence.

    PubMed

    Lin, Huiyan; Mueller-Bardorff, Miriam; Mothes-Lasch, Martin; Buff, Christine; Brinkmann, Leonie; Miltner, Wolfgang H R; Straube, Thomas

    2016-01-01

    For several stimulus categories (e.g., pictures, odors, and words), the arousal of both negative and positive stimuli has been shown to modulate amygdalar activation. In contrast, previous studies did not observe similar amygdalar effects in response to negative and positive facial expressions of varying intensity. Reasons for this discrepancy may be related to analytical strategies, experimental design, and stimuli. Therefore, the present study aimed at re-investigating whether the intensity of facial expressions modulates amygdalar activation by circumventing limitations of previous research. Event-related functional magnetic resonance imaging was used to assess brain activation while participants observed a static neutral expression and positive (happy) and negative (angry) expressions of either high or low intensity from an ecologically valid, novel stimulus set. The ratings of arousal and intensity were highly correlated. We found that amygdalar activation followed a U-shaped pattern, with the highest activation to high-intensity facial expressions as compared to low-intensity facial expressions and to the neutral expression, irrespective of valence, suggesting a critical role of the amygdala in valence-independent arousal processing of facial expressions. Additionally, consistent with previous studies, intensity effects were also found in visual areas, and generally increased activation to angry versus happy faces was found in visual cortex and insula, indicating enhanced visual representations of highly arousing facial expressions and increased visual and somatosensory representations of threat.

  9. Effects of Intensity of Facial Expressions on Amygdalar Activation Independently of Valence

    PubMed Central

    Lin, Huiyan; Mueller-Bardorff, Miriam; Mothes-Lasch, Martin; Buff, Christine; Brinkmann, Leonie; Miltner, Wolfgang H. R.; Straube, Thomas

    2016-01-01

    For several stimulus categories (e.g., pictures, odors, and words), the arousal of both negative and positive stimuli has been shown to modulate amygdalar activation. In contrast, previous studies did not observe similar amygdalar effects in response to negative and positive facial expressions of varying intensity. Reasons for this discrepancy may be related to analytical strategies, experimental design, and stimuli. Therefore, the present study aimed at re-investigating whether the intensity of facial expressions modulates amygdalar activation by circumventing limitations of previous research. Event-related functional magnetic resonance imaging was used to assess brain activation while participants observed a static neutral expression and positive (happy) and negative (angry) expressions of either high or low intensity from an ecologically valid, novel stimulus set. The ratings of arousal and intensity were highly correlated. We found that amygdalar activation followed a U-shaped pattern, with the highest activation to high-intensity facial expressions as compared to low-intensity facial expressions and to the neutral expression, irrespective of valence, suggesting a critical role of the amygdala in valence-independent arousal processing of facial expressions. Additionally, consistent with previous studies, intensity effects were also found in visual areas, and generally increased activation to angry versus happy faces was found in visual cortex and insula, indicating enhanced visual representations of highly arousing facial expressions and increased visual and somatosensory representations of threat. PMID:28066216

  10. Paedomorphic facial expressions give dogs a selective advantage.

    PubMed

    Waller, Bridget M; Peirce, Kate; Caeiro, Cátia C; Scheider, Linda; Burrows, Anne M; McCune, Sandra; Kaminski, Juliane

    2013-01-01

    How wolves were first domesticated is unknown. One hypothesis suggests that wolves underwent a process of self-domestication by tolerating human presence and taking advantage of scavenging possibilities. The puppy-like physical and behavioural traits seen in dogs are thought to have evolved later, as a byproduct of selection against aggression. Using speed of selection from rehoming shelters as a proxy for artificial selection, we tested whether paedomorphic features give dogs a selective advantage in their current environment. Dogs who exhibited facial expressions that enhance their neonatal appearance were preferentially selected by humans. Thus, early domestication of wolves may have occurred not only as wolf populations became tamer, but also as they exploited human preferences for paedomorphic characteristics. These findings, therefore, add to our understanding of early dog domestication as a complex co-evolutionary process.

  11. Paedomorphic Facial Expressions Give Dogs a Selective Advantage

    PubMed Central

    Waller, Bridget M.; Peirce, Kate; Caeiro, Cátia C.; Scheider, Linda; Burrows, Anne M.; McCune, Sandra; Kaminski, Juliane

    2013-01-01

    How wolves were first domesticated is unknown. One hypothesis suggests that wolves underwent a process of self-domestication by tolerating human presence and taking advantage of scavenging possibilities. The puppy-like physical and behavioural traits seen in dogs are thought to have evolved later, as a byproduct of selection against aggression. Using speed of selection from rehoming shelters as a proxy for artificial selection, we tested whether paedomorphic features give dogs a selective advantage in their current environment. Dogs who exhibited facial expressions that enhance their neonatal appearance were preferentially selected by humans. Thus, early domestication of wolves may have occurred not only as wolf populations became tamer, but also as they exploited human preferences for paedomorphic characteristics. These findings, therefore, add to our understanding of early dog domestication as a complex co-evolutionary process. PMID:24386109

  12. Can Neurotypical Individuals Read Autistic Facial Expressions? Atypical Production of Emotional Facial Expressions in Autism Spectrum Disorders

    PubMed Central

    Biotti, Federica; Catmur, Caroline; Press, Clare; Happé, Francesca; Cook, Richard; Bird, Geoffrey

    2015-01-01

    The difficulties encountered by individuals with autism spectrum disorder (ASD) when interacting with neurotypical (NT, i.e. nonautistic) individuals are usually attributed to failure to recognize the emotions and mental states of their NT interaction partner. It is also possible, however, that at least some of the difficulty is due to a failure of NT individuals to read the mental and emotional states of ASD interaction partners. Previous research has frequently observed deficits of typical facial emotion recognition in individuals with ASD, suggesting atypical representations of emotional expressions. Relatively little research, however, has investigated the ability of individuals with ASD to produce recognizable emotional expressions, and thus, whether NT individuals can recognize autistic emotional expressions. The few studies which have investigated this have used only NT observers, making it impossible to determine whether atypical representations are shared among individuals with ASD, or idiosyncratic. This study investigated NT and ASD participants’ ability to recognize emotional expressions produced by NT and ASD posers. Three posing conditions were included, to determine whether potential group differences are due to atypical cognitive representations of emotion, impaired understanding of the communicative value of expressions, or poor proprioceptive feedback. Results indicated that ASD expressions were recognized less well than NT expressions, and that this is likely due to a genuine deficit in the representation of typical emotional expressions in this population. Further, ASD expressions were equally poorly recognized by NT individuals and those with ASD, implicating idiosyncratic, rather than common, atypical representations of emotional expressions in ASD. Autism Res 2016, 9: 262–271. © 2015 International Society for Autism Research, Wiley Periodicals, Inc. PMID:26053037

  13. Gamma-band activity reflects attentional guidance by facial expression.

    PubMed

    Müsch, Kathrin; Siegel, Markus; Engel, Andreas K; Schneider, Till R

    2017-02-01

    Facial expressions attract attention due to their motivational significance. Previous work focused on attentional biases towards threat-related, fearful faces, although healthy participants tend to avoid mild threat. Growing evidence suggests that neuronal gamma (>30Hz) and alpha-band activity (8-12Hz) play an important role in attentional selection, but it is unknown if such oscillatory activity is involved in the guidance of attention through facial expressions. Thus, in this magnetoencephalography (MEG) study we investigated whether attention is shifted towards or away from fearful faces and characterized the underlying neuronal activity in these frequency ranges in forty-four healthy volunteers. We employed a covert spatial attention task using neutral and fearful faces as task-irrelevant distractors and emotionally neutral Gabor patches as targets. Participants had to indicate the tilt direction of the target. Analysis of the neuronal data was restricted to the responses to target Gabor patches. We performed statistical analysis at the sensor level and used subsequent source reconstruction to localize the observed effects. Spatially selective attention effects in the alpha and gamma band were revealed in parieto-occipital regions. We observed an attentional cost of processing the face distractors, as reflected in lower task performance on targets with short stimulus onset asynchrony (SOA <150ms) between faces and targets. On the neuronal level, attentional orienting to face distractors led to enhanced gamma band activity in bilateral occipital and parietal regions, when fearful faces were presented in the same hemifield as targets, but only in short SOA trials. Our findings provide evidence that both top-down and bottom-up attentional biases are reflected in parieto-occipital gamma-band activity.

  14. Pose-variant facial expression recognition using an embedded image system

    NASA Astrophysics Data System (ADS)

    Song, Kai-Tai; Han, Meng-Ju; Chang, Shuo-Hung

    2008-12-01

    In recent years, one of the most attractive research areas in human-robot interaction is automated facial expression recognition. Through recognizing the facial expression, a pet robot can interact with humans in a more natural manner. In this study, we focus on the facial pose-variant problem. A novel method is proposed in this paper to recognize pose-variant facial expressions. After locating the face position in an image frame, the active appearance model (AAM) is applied to track facial features. Fourteen feature points are extracted to represent the variation of facial expressions. The distances between feature points are defined as the feature values. These feature values are sent to a support vector machine (SVM) for facial expression determination. The pose-variant facial expression is classified into happiness, neutral, sadness, surprise or anger. Furthermore, in order to evaluate the performance for practical applications, this study also built a low-resolution database (160x120 pixels) using a CMOS image sensor. Experimental results show that the recognition rate is 84% with the self-built database.
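    The feature and classifier stage described above (pairwise distances between 14 tracked feature points fed to an SVM) can be sketched as follows; the landmark coordinates are random placeholders standing in for AAM output, so this is an illustration of the pipeline rather than a reproduction of the paper's system:

      # Hedged sketch: pairwise distances between 14 facial feature points as the feature
      # vector for an SVM expression classifier. Landmarks and labels are placeholders.
      import numpy as np
      from itertools import combinations
      from sklearn.svm import SVC

      def landmark_distances(points):
          # All pairwise Euclidean distances between the 14 (x, y) feature points.
          return np.array([np.linalg.norm(points[i] - points[j])
                           for i, j in combinations(range(len(points)), 2)])

      rng = np.random.default_rng(0)
      labels = ["happiness", "neutral", "sadness", "surprise", "anger"]
      X = np.array([landmark_distances(rng.uniform(0, 160, size=(14, 2)))
                    for _ in range(200)])               # 200 synthetic "faces"
      y = rng.integers(0, len(labels), size=200)        # placeholder expression labels

      clf = SVC(kernel="rbf").fit(X, y)                 # train the expression classifier
      print(labels[clf.predict(X[:1])[0]])              # classify one face's feature vector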

  15. High frequency of facial expressions corresponding to confusion, concentration, and worry in an analysis of naturally occurring facial expressions of Americans.

    PubMed

    Rozin, Paul; Cohen, Adam B

    2003-03-01

    College students were instructed to observe symmetric and asymmetric facial expressions and to report the target's judgment of the "emotion" she or he was expressing, the facial movements involved, and the more expressive side. For both asymmetric and symmetric expressions, some of the most common emotions or states reported are neither included in standard taxonomies of emotion nor studied as important signals. Confusion is the most common descriptor reported for asymmetric expressions and is commonly reported for symmetrical expressions as well. Other frequent descriptors were think-concentrate and worry. Confusion is characterized principally by facial movements around the eyes and has many properties usually attributed to emotions. There was no evidence for lateralization of positive versus negative valenced states.

  16. Can We Distinguish Emotions from Faces? Investigation of Implicit and Explicit Processes of Peak Facial Expressions.

    PubMed

    Xiao, Ruiqi; Li, Xianchun; Li, Lin; Wang, Yanmei

    2016-01-01

    Most previous studies on facial expression recognition have focused on the moderate emotions; to date, few studies have been conducted to investigate the explicit and implicit processes of peak emotions. In the current study, we used transiently peak intense expression images of athletes at the winning or losing point in competition as materials, and investigated the diagnosability of peak facial expressions at both implicit and explicit levels. In Experiment 1, participants were instructed to evaluate isolated faces, isolated bodies, and the face-body compounds, and eye-tracking movement was recorded. The results revealed that the isolated body and face-body congruent images were better recognized than isolated face and face-body incongruent images, indicating that the emotional information conveyed by facial cues was ambiguous, and the body cues influenced facial emotion recognition. Furthermore, eye movement records showed that the participants displayed distinct gaze patterns for the congruent and incongruent compounds. In Experiment 2A, the subliminal affective priming task was used, with faces as primes and bodies as targets, to investigate the unconscious emotion perception of peak facial expressions. The results showed that winning face prime facilitated reaction to winning body target, whereas losing face prime inhibited reaction to winning body target, suggesting that peak facial expressions could be perceived at the implicit level. In general, the results indicate that peak facial expressions cannot be consciously recognized but can be perceived at the unconscious level. In Experiment 2B, revised subliminal affective priming task and a strict awareness test were used to examine the validity of unconscious perception of peak facial expressions found in Experiment 2A. Results of Experiment 2B showed that reaction time to both winning body targets and losing body targets was influenced by the invisibly peak facial expression primes, which indicated the

  17. Can We Distinguish Emotions from Faces? Investigation of Implicit and Explicit Processes of Peak Facial Expressions

    PubMed Central

    Xiao, Ruiqi; Li, Xianchun; Li, Lin; Wang, Yanmei

    2016-01-01

    Most previous studies on facial expression recognition have focused on the moderate emotions; to date, few studies have been conducted to investigate the explicit and implicit processes of peak emotions. In the current study, we used transiently peak intense expression images of athletes at the winning or losing point in competition as materials, and investigated the diagnosability of peak facial expressions at both implicit and explicit levels. In Experiment 1, participants were instructed to evaluate isolated faces, isolated bodies, and the face-body compounds, and eye-tracking movement was recorded. The results revealed that the isolated body and face-body congruent images were better recognized than isolated face and face-body incongruent images, indicating that the emotional information conveyed by facial cues was ambiguous, and the body cues influenced facial emotion recognition. Furthermore, eye movement records showed that the participants displayed distinct gaze patterns for the congruent and incongruent compounds. In Experiment 2A, the subliminal affective priming task was used, with faces as primes and bodies as targets, to investigate the unconscious emotion perception of peak facial expressions. The results showed that winning face prime facilitated reaction to winning body target, whereas losing face prime inhibited reaction to winning body target, suggesting that peak facial expressions could be perceived at the implicit level. In general, the results indicate that peak facial expressions cannot be consciously recognized but can be perceived at the unconscious level. In Experiment 2B, revised subliminal affective priming task and a strict awareness test were used to examine the validity of unconscious perception of peak facial expressions found in Experiment 2A. Results of Experiment 2B showed that reaction time to both winning body targets and losing body targets was influenced by the invisibly peak facial expression primes, which indicated the

  18. The amygdalo-motor pathways and the control of facial expressions

    PubMed Central

    Gothard, Katalin M.

    2013-01-01

    Facial expressions reflect decisions about the perceived meaning of social stimuli and the expected socio-emotional outcome of responding (or not) with a reciprocating expression. The decision to produce a facial expression emerges from the joint activity of a network of structures that include the amygdala and multiple, interconnected cortical and subcortical motor areas. Reciprocal transformations between these sensory and motor signals give rise to distinct brain states that promote or impede the production of facial expressions. The muscles of the upper and lower face are controlled by anatomically distinct motor areas. Facial expressions engage the lower and upper face to different extents and thus require distinct patterns of neural activity distributed across multiple facial motor areas in ventrolateral frontal cortex, the supplementary motor area, and two areas in the midcingulate cortex. The distributed nature of the decision manifests in the joint activation of multiple motor areas that initiate the production of facial expression. Concomitantly, multiple areas, including the amygdala, monitor ongoing overt behaviors (the expression itself) and the covert, autonomic responses that accompany emotional expressions. As the production of facial expressions is brought into the framework of formal decision making, an important challenge will be to incorporate autonomic and visceral states into decisions that govern the receiving-emitting cycle of social signals. PMID:24678289

  19. The Child Affective Facial Expression (CAFE) set: validity and reliability from untrained adults

    PubMed Central

    LoBue, Vanessa; Thrasher, Cat

    2014-01-01

    Emotional development is one of the largest and most productive areas of psychological research. For decades, researchers have been fascinated by how humans respond to, detect, and interpret emotional facial expressions. Much of the research in this area has relied on controlled stimulus sets of adults posing various facial expressions. Here we introduce a new stimulus set of emotional facial expressions into the domain of research on emotional development—The Child Affective Facial Expression set (CAFE). The CAFE set features photographs of a racially and ethnically diverse group of 2- to 8-year-old children posing for six emotional facial expressions—angry, fearful, sad, happy, surprised, and disgusted—and a neutral face. In the current work, we describe the set and report validity and reliability data on the set from 100 untrained adult participants. PMID:25610415

  20. Empathy, but not mimicry restriction, influences the recognition of change in emotional facial expressions.

    PubMed

    Kosonogov, Vladimir; Titova, Alisa; Vorobyeva, Elena

    2015-01-01

    The current study addressed the hypothesis that empathy and restriction of the observers' facial muscles can influence the recognition of emotional facial expressions. A sample of 74 participants identified the subjective onset of emotional facial expressions (anger, disgust, fear, happiness, sadness, surprise, and neutral) in a series of morphed face photographs showing a gradual change (frame by frame) from one expression to another. High-empathy participants (as measured by the Empathy Quotient) recognized emotional facial expressions at earlier photographs in the series than did low-empathy participants, but there was no difference in exploration time. Restricting the observers' facial muscles (with plasters and a stick held in the mouth) did not influence the responses. We discuss these findings in the context of embodied simulation theory and previous data on empathy.

  1. The effect of emotional facial expressions on children's working memory: associations with age and behavior.

    PubMed

    Augusti, Else-Marie; Torheim, Hanna Karoline; Melinder, Annika

    2014-01-01

    Studies on adults have revealed a disadvantageous effect of negative emotional stimuli on executive functions (EF), and it has been suggested that this effect is amplified in children. The present study's aim was to assess how emotional facial expressions affected working memory in 9- to 12-year-olds, using a working memory task with emotional facial expressions as stimuli. Additionally, we explored how the degree of internalizing and externalizing symptoms in typically developing children was related to performance on the same task. Before employing the working memory task, an independent sample of 9- to 12-year-olds was asked to recognize the facial expressions intended to serve as stimuli and to rate them for the degree to which the emotion was expressed and for arousal, in order to obtain a baseline for how children of this age recognize and react to facial expressions. This baseline study revealed that children rated the facial expressions with similar intensity and arousal across ages. When the working memory task with facial expressions was employed, results revealed that negatively valenced expressions impaired working memory more than neutral and positively valenced expressions. The ability to successfully complete the working memory task increased between 9 and 12 years of age. Children's total problems were associated with poorer performance on the working memory task with facial expressions. Results on the effect of emotion on working memory are discussed in light of recent models and empirical findings on how emotional information might interact with and interfere with cognitive processes such as working memory.

  2. A detailed investigation of facial expression processing in congenital prosopagnosia as compared to acquired prosopagnosia.

    PubMed

    Humphreys, Kate; Avidan, Galia; Behrmann, Marlene

    2007-01-01

    Whether the ability to recognize facial expression can be preserved in the absence of the recognition of facial identity remains controversial. The current study reports the results of a detailed investigation of facial expression recognition in three congenital prosopagnosic (CP) participants, in comparison with two patients with acquired prosopagnosia (AP) and a large group of 30 neurologically normal participants, including individually age- and gender-matched controls. Participants completed a fine-grained expression recognition paradigm requiring a six-alternative forced-choice response to continua of morphs of six different basic facial expressions (e.g. happiness and surprise). Accuracy, sensitivity and reaction times were measured. The performance of all three CP individuals was indistinguishable from that of controls, even for the most subtle expressions. In contrast, both individuals with AP displayed pronounced difficulties with the majority of expressions. The results from the CP participants attest to the dissociability of the processing of facial identity and of facial expression. Whether this remarkably good expression recognition is achieved through normal, or compensatory, mechanisms remains to be determined. Either way, this normal level of performance does not extend to include facial identity.

  3. Cognitive tasks during expectation affect the congruency ERP effects to facial expressions

    PubMed Central

    Lin, Huiyan; Schulz, Claudia; Straube, Thomas

    2015-01-01

    Expectancy congruency has been shown to modulate event-related potentials (ERPs) to emotional stimuli, such as facial expressions. However, it is unknown whether the congruency ERP effects to facial expressions can be modulated by cognitive manipulations during stimulus expectation. To this end, electroencephalography (EEG) was recorded while participants viewed (neutral and fearful) facial expressions. Each trial started with a cue, predicting a facial expression, followed by an expectancy interval without any cues and subsequently the face. In half of the trials, participants had to solve a cognitive task in which different letters were presented for target letter detection during the expectancy interval. Furthermore, facial expressions were congruent with the cues in 75% of all trials. ERP results revealed that for fearful faces, the cognitive task during expectation altered the congruency effect in N170 amplitude; congruent compared to incongruent fearful faces evoked larger N170 in the non-task condition but the congruency effect was not evident in the task condition. Regardless of facial expression, the congruency effect was generally altered by the cognitive task during expectation in P3 amplitude; the amplitudes were larger for incongruent compared to congruent faces in the non-task condition but the congruency effect was not shown in the task condition. The findings indicate that cognitive tasks during expectation reduce the processing of expectation and subsequently, alter congruency ERP effects to facial expressions. PMID:26578938

  4. Cognitive tasks during expectation affect the congruency ERP effects to facial expressions.

    PubMed

    Lin, Huiyan; Schulz, Claudia; Straube, Thomas

    2015-01-01

    Expectancy congruency has been shown to modulate event-related potentials (ERPs) to emotional stimuli, such as facial expressions. However, it is unknown whether the congruency ERP effects to facial expressions can be modulated by cognitive manipulations during stimulus expectation. To this end, electroencephalography (EEG) was recorded while participants viewed (neutral and fearful) facial expressions. Each trial started with a cue, predicting a facial expression, followed by an expectancy interval without any cues and subsequently the face. In half of the trials, participants had to solve a cognitive task in which different letters were presented for target letter detection during the expectancy interval. Furthermore, facial expressions were congruent with the cues in 75% of all trials. ERP results revealed that for fearful faces, the cognitive task during expectation altered the congruency effect in N170 amplitude; congruent compared to incongruent fearful faces evoked larger N170 in the non-task condition but the congruency effect was not evident in the task condition. Regardless of facial expression, the congruency effect was generally altered by the cognitive task during expectation in P3 amplitude; the amplitudes were larger for incongruent compared to congruent faces in the non-task condition but the congruency effect was not shown in the task condition. The findings indicate that cognitive tasks during expectation reduce the processing of expectation and subsequently, alter congruency ERP effects to facial expressions.

  5. Dissociable roles of internal feelings and face recognition ability in facial expression decoding.

    PubMed

    Zhang, Lin; Song, Yiying; Liu, Ling; Liu, Jia

    2016-05-15

    The problem of emotion recognition has been tackled by researchers in both affective computing and cognitive neuroscience. While affective computing relies on analyzing visual features from facial expressions, it has been proposed that humans recognize emotions by internally simulating the emotional states conveyed by others' expressions, in addition to perceptual analysis of facial features. Here we investigated whether and how our internal feelings contributed to the ability to decode facial expressions. In two independent large samples of participants, we observed that individuals who generally experienced richer internal feelings exhibited a higher ability to decode facial expressions, and the contribution of internal feelings was independent of face recognition ability. Further, using voxel-based morphometry, we found that the gray matter volume (GMV) of bilateral superior temporal sulcus (STS) and the right inferior parietal lobule was associated with facial expression decoding through the mediating effect of internal feelings, while the GMV of bilateral STS, precuneus, and the right central opercular cortex contributed to facial expression decoding through the mediating effect of face recognition ability. In addition, the clusters in bilateral STS involved in the two components were neighboring yet separate. Our results may provide clues about the mechanism by which internal feelings, in addition to face recognition ability, serve as an important instrument for humans in facial expression decoding.

  6. Recognition of Facially Expressed Emotions and Visual Search Strategies in Adults with Asperger Syndrome

    ERIC Educational Resources Information Center

    Falkmer, Marita; Bjallmark, Anna; Larsson, Matilda; Falkmer, Torbjorn

    2011-01-01

    Can the disadvantages persons with Asperger syndrome frequently experience with reading facially expressed emotions be attributed to a different visual perception, affecting their scanning patterns? Visual search strategies, particularly regarding the importance of information from the eye area, and the ability to recognise facially expressed…

  7. Rules versus Prototype Matching: Strategies of Perception of Emotional Facial Expressions in the Autism Spectrum

    ERIC Educational Resources Information Center

    Rutherford, M. D.; McIntosh, Daniel N.

    2007-01-01

    When perceiving emotional facial expressions, people with autistic spectrum disorders (ASD) appear to focus on individual facial features rather than configurations. This paper tests whether individuals with ASD use these features in a rule-based strategy of emotional perception, rather than a typical, template-based strategy by considering…

  8. Brief report: Representational momentum for dynamic facial expressions in pervasive developmental disorder.

    PubMed

    Uono, Shota; Sato, Wataru; Toichi, Motomi

    2010-03-01

    Individuals with pervasive developmental disorder (PDD) have difficulty with social communication via emotional facial expressions, but behavioral studies involving static images have reported inconsistent findings about emotion recognition. We investigated whether dynamic presentation of facial expression would enhance subjective perception of expressed emotion in 13 individuals with PDD and 13 typically developing controls. We presented dynamic and static emotional (fearful and happy) expressions. Participants were asked to match a changeable emotional face display with the last presented image. The results showed that both groups perceived the last image of dynamic facial expression to be more emotionally exaggerated than the static facial expression. This finding suggests that individuals with PDD have an intact perceptual mechanism for processing dynamic information in another individual's face.

  9. [Emotional intelligence and oscillatory responses on the emotional facial expressions].

    PubMed

    Kniazev, G G; Mitrofanova, L G; Bocharov, A V

    2013-01-01

    Emotional intelligence-related differences in oscillatory responses to emotional facial expressions were investigated in 48 subjects (26 men and 22 women) aged 18-30 years. Participants were instructed to evaluate the emotional expression (angry, happy and neutral) of each presented face on an analog scale ranging from -100 (very hostile) to +100 (very friendly). High emotional intelligence (EI) participants were found to be more sensitive to the emotional content of the stimuli. This was evident both in their subjective evaluations of the stimuli and in stronger EEG theta synchronization at an earlier processing stage (between 100 and 500 ms after face presentation). Source localization using sLORETA showed that this effect was localized in the fusiform gyrus upon the presentation of angry faces and in the posterior cingulate gyrus upon the presentation of happy faces. At a later processing stage (500-870 ms), event-related theta synchronization in high-EI subjects was higher in the left prefrontal cortex upon the presentation of happy faces, but lower in the anterior cingulate cortex upon the presentation of angry faces. This suggests the existence of a mechanism that can selectively increase positive emotions and reduce negative emotions.

  10. Dynamic facial expressions of emotion transmit an evolving hierarchy of signals over time.

    PubMed

    Jack, Rachael E; Garrod, Oliver G B; Schyns, Philippe G

    2014-01-20

    Designed by biological and social evolutionary pressures, facial expressions of emotion comprise specific facial movements to support a near-optimal system of signaling and decoding. Although facial expressions are highly dynamic, little is known about the form and function of their temporal dynamics. Do facial expressions transmit diagnostic signals simultaneously to optimize categorization of the six classic emotions, or sequentially to support a more complex communication system of successive categorizations over time? Our data support the latter. Using a combination of perceptual expectation modeling, information theory, and Bayesian classifiers, we show that dynamic facial expressions of emotion transmit an evolving hierarchy of "biologically basic to socially specific" information over time. Early in the signaling dynamics, facial expressions systematically transmit few, biologically rooted face signals supporting the categorization of fewer elementary categories (e.g., approach/avoidance). Later transmissions comprise more complex signals that support categorization of a larger number of socially specific categories (i.e., the six classic emotions). Here, we show that dynamic facial expressions of emotion provide a sophisticated signaling system, questioning the widely accepted notion that emotion communication comprises six basic (i.e., psychologically irreducible) categories, and instead suggesting four.

  11. Individual differences in the recognition of facial expressions: an event-related potentials study.

    PubMed

    Tamamiya, Yoshiyuki; Hiraki, Kazuo

    2013-01-01

    Previous studies have shown that early posterior components of event-related potentials (ERPs) are modulated by facial expressions. The goal of the current study was to investigate individual differences in the recognition of facial expressions by examining the relationship between ERP components and the discrimination of facial expressions. Pictures of three facial expressions (angry, happy, and neutral) were presented to 36 young adults during ERP recording. Participants were asked to respond with a button press as soon as they recognized the expression depicted. A multiple regression analysis, with ERP components as predictor variables, assessed hits and reaction times in response to the facial expressions as dependent variables. The N170 amplitudes significantly predicted accuracy for angry and happy expressions, and the N170 latencies predicted accuracy for neutral expressions. The P2 amplitudes significantly predicted reaction time. The P2 latencies significantly predicted reaction times only for neutral faces. These results suggest that individual differences in the recognition of facial expressions emerge from early components of visual processing.

  12. Effects of cultural characteristics on building an emotion classifier through facial expression analysis

    NASA Astrophysics Data System (ADS)

    da Silva, Flávio Altinier Maximiano; Pedrini, Helio

    2015-03-01

    Facial expressions are an important demonstration of human moods and emotions. Algorithms capable of recognizing facial expressions and associating them with emotions were developed and employed to compare the expressions that different cultural groups use to show their emotions. Static pictures of predominantly occidental and oriental subjects from public datasets were used to train machine learning algorithms, with local binary patterns, histograms of oriented gradients (HOG), and Gabor filters employed to describe the facial expressions for six basic emotions. The most consistent combination, formed by the HOG descriptor and support vector machines, was then used to classify the other cultural group: there was a strong drop in accuracy, meaning that the subtle differences in the facial expressions of each culture affected classifier performance. Finally, a classifier was trained with images from both occidental and oriental subjects, and its accuracy was higher on multicultural data, evidencing the need for a multicultural training set to build an effective classifier.
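
    To make the pipeline concrete, the following is a minimal Python sketch (not the authors' exact implementation) of HOG feature extraction plus a linear SVM, trained on one cultural group and scored on another; the face-image arrays and emotion labels are assumed to be supplied by the caller.

      import numpy as np
      from skimage.feature import hog
      from sklearn.svm import LinearSVC
      from sklearn.metrics import accuracy_score

      def hog_features(images):
          # Compute HOG descriptors for equally sized grayscale face crops.
          return np.array([
              hog(img, orientations=9, pixels_per_cell=(8, 8),
                  cells_per_block=(2, 2), block_norm="L2-Hys")
              for img in images
          ])

      def cross_group_accuracy(train_imgs, train_labels, test_imgs, test_labels):
          # Train an expression classifier on faces from one cultural group and
          # report its accuracy on faces from another group.
          clf = LinearSVC(C=1.0, max_iter=10000)
          clf.fit(hog_features(train_imgs), train_labels)
          return accuracy_score(test_labels, clf.predict(hog_features(test_imgs)))

    A drop in cross-group accuracy relative to within-group accuracy would mirror the cultural sensitivity reported above.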

  13. Comparative analysis of 3D expression patterns of transcription factor genes and digit fate maps in the developing chick wing.

    PubMed

    Fisher, Malcolm; Downie, Helen; Welten, Monique C M; Delgado, Irene; Bain, Andrew; Planzer, Thorsten; Sherman, Adrian; Sang, Helen; Tickle, Cheryll

    2011-04-22

    Hoxd13, Tbx2, Tbx3, Sall1 and Sall3 genes are candidates for encoding antero-posterior positional values in the developing chick wing and specifying digit identity. In order to build up a detailed profile of gene expression patterns in cell lineages that give rise to each of the digits over time, we compared three-dimensional (3D) expression patterns of these genes during wing development and related them to digit fate maps. 3D gene expression data at stages 21, 24 and 27, spanning early bud to digital plate formation, captured from in situ hybridisation whole mounts using Optical Projection Tomography (OPT), were mapped to reference wing bud models. Grafts of wing bud tissue from GFP chicken embryos were used to fate map regions of the wing bud giving rise to each digit; 3D images of the grafts were captured using OPT and mapped on to the same models. Computational analysis of the combined computerised data revealed that Tbx2 and Tbx3 are expressed in digit 3 and 4 progenitors at all stages, consistent with encoding stable antero-posterior positional values established in the early bud; Hoxd13 and Sall1 expression is more dynamic, being associated with posterior digit 3 and 4 progenitors in the early bud but later becoming associated with anterior digit 2 progenitors in the digital plate. Sox9 expression in digit condensations lies within domains of digit progenitors defined by fate mapping; digit 3 condensations express Hoxd13 and Sall1, digit 4 condensations Hoxd13, Tbx3 and to a lesser extent Tbx2. Sall3 is only transiently expressed in digit 3 progenitors at stage 24 together with Sall1 and Hoxd13, and then becomes excluded from the digital plate. These dynamic patterns of expression suggest that these genes may play different roles in digit identity either together or in combination at different stages including the digit condensation stage.

  14. Anodal tDCS targeting the right orbitofrontal cortex enhances facial expression recognition

    PubMed Central

    Murphy, Jillian M.; Ridley, Nicole J.; Vercammen, Ans

    2015-01-01

    The orbitofrontal cortex (OFC) has been implicated in the capacity to accurately recognise facial expressions. The aim of the current study was to determine if anodal transcranial direct current stimulation (tDCS) targeting the right OFC in healthy adults would enhance facial expression recognition, compared with a sham condition. Across two counterbalanced sessions of tDCS (i.e. anodal and sham), 20 undergraduate participants (18 female) completed a facial expression labelling task comprising angry, disgusted, fearful, happy, sad and neutral expressions, and a control (social judgement) task comprising the same expressions. Responses on the labelling task were scored for accuracy, median reaction time and overall efficiency (i.e. combined accuracy and reaction time). Anodal tDCS targeting the right OFC enhanced facial expression recognition, reflected in greater efficiency and speed of recognition across emotions, relative to the sham condition. In contrast, there was no effect of tDCS on responses in the control task. This is the first study to demonstrate that anodal tDCS targeting the right OFC boosts facial expression recognition. This finding provides a solid foundation for future research to examine the efficacy of this technique as a means to treat facial expression recognition deficits, particularly in individuals with OFC damage or dysfunction. PMID:25971602

  15. The Effects of Early Institutionalization on the Discrimination of Facial Expressions of Emotion in Young Children.

    PubMed

    Jeon, Hana; Moulson, Margaret C; Fox, Nathan; Zeanah, Charles; Nelson, Charles A

    2010-03-01

    The current study examined the effects of institutionalization on the discrimination of facial expressions of emotion in 3 groups of 42-month-old children. One group consisted of children abandoned at birth who were randomly assigned to Care as Usual (institutional care) following a baseline assessment. Another group consisted of children abandoned at birth who were randomly assigned to high-quality foster care following a baseline assessment. A third group consisted of never-institutionalized children who were reared by their biological parents. All children were familiarized to happy, sad, fearful, and neutral facial expressions and tested on their ability to discriminate familiar versus novel facial expressions. Contrary to our prediction, all three groups of children were equally able to discriminate among the different expressions. Furthermore, in contrast to findings at 13-30 months of age, these same children showed familiarity rather than novelty preferences toward different expressions. There were also asymmetries in children's discrimination of facial expressions depending on which facial expression served as the familiar versus the novel stimulus. Collectively, early institutionalization appears not to impact the development of the ability to discriminate facial expressions of emotion, at least when preferential looking serves as the dependent measure. These findings are discussed in the context of the myriad domains that are affected by early institutionalization.

  16. Automated decoding of facial expressions reveals marked differences in children when telling antisocial versus prosocial lies.

    PubMed

    Zanette, Sarah; Gao, Xiaoqing; Brunet, Megan; Bartlett, Marian Stewart; Lee, Kang

    2016-10-01

    The current study used computer vision technology to examine the nonverbal facial expressions of children (6-11 years old) telling antisocial and prosocial lies. Children in the antisocial lying group completed a temptation resistance paradigm where they were asked not to peek at a gift being wrapped for them. All children peeked at the gift and subsequently lied about their behavior. Children in the prosocial lying group were given an undesirable gift and asked if they liked it. All children lied about liking the gift. Nonverbal behavior was analyzed using the Computer Expression Recognition Toolbox (CERT), which employs the Facial Action Coding System (FACS), to automatically code children's facial expressions while lying. Using CERT, children's facial expressions during antisocial and prosocial lying were reliably differentiated with accuracy significantly above chance level. The basic expressions of emotion that distinguished antisocial lies from prosocial lies were joy and contempt. Children expressed joy more in prosocial lying than in antisocial lying. Girls showed more joy and less contempt compared with boys when they told prosocial lies. Boys showed more contempt when they told prosocial lies than when they told antisocial lies. The key action units (AUs) that differentiate children's antisocial and prosocial lies are blink/eye closure, lip pucker, and lip raise on the right side. Together, these findings indicate that children's facial expressions differ while telling antisocial versus prosocial lies. The reliability of CERT in detecting such differences in facial expression suggests the viability of using computer vision technology in deception research.

  17. Anodal tDCS targeting the right orbitofrontal cortex enhances facial expression recognition.

    PubMed

    Willis, Megan L; Murphy, Jillian M; Ridley, Nicole J; Vercammen, Ans

    2015-12-01

    The orbitofrontal cortex (OFC) has been implicated in the capacity to accurately recognise facial expressions. The aim of the current study was to determine if anodal transcranial direct current stimulation (tDCS) targeting the right OFC in healthy adults would enhance facial expression recognition, compared with a sham condition. Across two counterbalanced sessions of tDCS (i.e. anodal and sham), 20 undergraduate participants (18 female) completed a facial expression labelling task comprising angry, disgusted, fearful, happy, sad and neutral expressions, and a control (social judgement) task comprising the same expressions. Responses on the labelling task were scored for accuracy, median reaction time and overall efficiency (i.e. combined accuracy and reaction time). Anodal tDCS targeting the right OFC enhanced facial expression recognition, reflected in greater efficiency and speed of recognition across emotions, relative to the sham condition. In contrast, there was no effect of tDCS on responses in the control task. This is the first study to demonstrate that anodal tDCS targeting the right OFC boosts facial expression recognition. This finding provides a solid foundation for future research to examine the efficacy of this technique as a means to treat facial expression recognition deficits, particularly in individuals with OFC damage or dysfunction.

  18. Cultural similarities and differences in perceiving and recognizing facial expressions of basic emotions.

    PubMed

    Yan, Xiaoqian; Andrews, Timothy J; Young, Andrew W

    2016-03-01

    The ability to recognize facial expressions of basic emotions is often considered a universal human ability. However, recent studies have suggested that this commonality has been overestimated and that people from different cultures use different facial signals to represent expressions (Jack, Blais, Scheepers, Schyns, & Caldara, 2009; Jack, Caldara, & Schyns, 2012). We investigated this possibility by examining similarities and differences in the perception and categorization of facial expressions between Chinese and white British participants using whole-face and partial-face images. Our results showed no cultural difference in the patterns of perceptual similarity of expressions from whole-face images. When categorizing the same expressions, however, both British and Chinese participants were slightly more accurate with whole-face images of their own ethnic group. To further investigate potential strategy differences, we repeated the perceptual similarity and categorization tasks with presentation of only the upper or lower half of each face. Again, the perceptual similarity of facial expressions was similar between Chinese and British participants for both the upper and lower face regions. However, participants were slightly better at categorizing facial expressions of their own ethnic group for the lower face regions, indicating that the way in which culture shapes the categorization of facial expressions is largely driven by differences in information decoding from this part of the face.

  19. How Do We Update Faces? Effects of Gaze Direction and Facial Expressions on Working Memory Updating

    PubMed Central

    Artuso, Caterina; Palladino, Paola; Ricciardelli, Paola

    2012-01-01

    The aim of the study was to investigate how the biological binding between different facial dimensions, and their social and communicative relevance, may impact updating processes in working memory (WM). We focused on WM updating because it plays a key role in ongoing processing. Gaze direction and facial expression are crucial and changeable components of face processing. Direct gaze enhances the processing of approach-oriented facial emotional expressions (e.g., joy), while averted gaze enhances the processing of avoidance-oriented facial emotional expressions (e.g., fear). Thus, the way in which these two facial dimensions are combined communicates to the observer important behavioral and social information. Updating of these two facial dimensions and their bindings has not been investigated before, despite the fact that they provide a piece of social information essential for building and maintaining an internal ongoing representation of our social environment. In Experiment 1 we created a task in which the binding between gaze direction and facial expression was manipulated: high binding conditions (e.g., joy-direct gaze) were compared to low binding conditions (e.g., joy-averted gaze). Participants had to study and update continuously a number of faces, displaying different bindings between the two dimensions. In Experiment 2 we tested whether updating was affected by the social and communicative value of the facial dimension binding; to this end, we manipulated bindings between eye and hair color, two less communicative facial dimensions. Two new results emerged. First, faster response times were found in updating combinations of facial dimensions highly bound together. Second, our data showed that the ease of the ongoing updating processing varied depending on the communicative meaning of the binding that had to be updated. The results are discussed with reference to the role of WM updating in social cognition and appraisal processes. PMID:23060832

  20. Functional integration of the posterior superior temporal sulcus correlates with facial expression recognition.

    PubMed

    Wang, Xu; Song, Yiying; Zhen, Zonglei; Liu, Jia

    2016-05-01

    Face perception is essential for daily and social activities. Neuroimaging studies have revealed a distributed face network (FN) consisting of multiple regions that exhibit preferential responses to invariant or changeable facial information. However, our understanding about how these regions work collaboratively to facilitate facial information processing is limited. Here, we focused on changeable facial information processing, and investigated how the functional integration of the FN is related to the performance of facial expression recognition. To do so, we first defined the FN as voxels that responded more strongly to faces than objects, and then used a voxel-based global brain connectivity method based on resting-state fMRI to characterize the within-network connectivity (WNC) of each voxel in the FN. By relating the WNC and performance in the "Reading the Mind in the Eyes" Test across participants, we found that individuals with stronger WNC in the right posterior superior temporal sulcus (rpSTS) were better at recognizing facial expressions. Further, the resting-state functional connectivity (FC) between the rpSTS and right occipital face area (rOFA), early visual cortex (EVC), and bilateral STS were positively correlated with the ability of facial expression recognition, and the FCs of EVC-pSTS and OFA-pSTS contributed independently to facial expression recognition. In short, our study highlights the behavioral significance of intrinsic functional integration of the FN in facial expression processing, and provides evidence for the hub-like role of the rpSTS for facial expression recognition. Hum Brain Mapp 37:1930-1940, 2016. © 2016 Wiley Periodicals, Inc.
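
    In simplified form, the within-network connectivity measure described above is each voxel's average resting-state correlation with every other voxel of the face network. A minimal numpy sketch, omitting preprocessing and any thresholding the authors may have applied:

      import numpy as np

      def within_network_connectivity(timeseries):
          # timeseries: (n_timepoints, n_voxels) resting-state signals from
          # the voxels of the face-selective network.
          r = np.corrcoef(timeseries.T)   # voxel-by-voxel correlation matrix
          np.fill_diagonal(r, np.nan)     # drop each voxel's self-correlation
          return np.nanmean(r, axis=1)    # mean correlation with all other voxels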

  1. Processing of Individual Items during Ensemble Coding of Facial Expressions

    PubMed Central

    Li, Huiyun; Ji, Luyan; Tong, Ke; Ren, Naixin; Chen, Wenfeng; Liu, Chang Hong; Fu, Xiaolan

    2016-01-01

    There is growing evidence that human observers are able to extract the mean emotion or other types of information from a set of faces. The most intriguing aspect of this phenomenon is that observers often fail to identify or form a representation for individual faces in a face set. However, most of these results were based on judgments under limited processing resources. We examined a wider range of exposure times and observed how the relationship between the extraction of a mean and the representation of individual facial expressions would change. The results showed that with an exposure time of 50 ms for the faces, observers were more sensitive to the mean representation than to individual representations, replicating the typical findings in the literature. With longer exposure times, however, observers were able to extract both individual and mean representations more accurately. Furthermore, diffusion model analysis revealed that the mean representation is more prone to the noise accumulated during the additional processing time and leads to a more conservative decision bias, whereas individual representations seem more resistant to this noise. The results suggest that the encoding of emotional information from multiple faces may take two forms: single face processing and crowd face processing. PMID:27656154

  2. The relation of expression recognition and affective experience in facial expression processing: an event-related potential study

    PubMed Central

    Dong, Guangheng; Lu, Shenglan

    2010-01-01

    The present study investigates the relationship between expression recognition and affective experience during facial expression processing using event-related potentials (ERPs). The facial expressions used in the present study can be divided into three categories: positive (happy), neutral (neutral), and negative (angry). Participants were asked to complete two facial recognition tasks: one easy and the other difficult. In the easy task, significant main effects were found for the different valence conditions, meaning that emotions were evoked effectively when participants recognized the expressions. However, no difference was found in the difficult task, meaning that even if participants had identified the expressions correctly, no relevant emotion was evoked during the process. The findings suggest that emotional experience was not simultaneous with expression identification in facial expression processing, and that the affective experience process could be suppressed in challenging cognitive tasks. The results indicate that we should pay attention to the level of cognitive load when using facial expressions as emotion-eliciting materials in emotion studies; otherwise, the emotion may not be evoked effectively. PMID:22110330

  3. Attention for emotional facial expressions in dysphoria: an eye-movement registration study.

    PubMed

    Leyman, Lemke; De Raedt, Rudi; Vaeyens, Roel; Philippaerts, Renaat M

    2011-01-01

    Previous research has demonstrated that depression is associated with dysfunctional attentional processing of emotional information. Most studies examined this bias by registering response latencies. The present study employed an ecologically valid measurement of attentive processing, using eye-movement registration. Dysphoric and non-dysphoric participants viewed slides presenting sad, angry, happy and neutral facial expressions. For each type of expression, three components of visual attention were analysed: relative fixation frequency, fixation time and glance duration. Attentional biases were also investigated for inverted facial expressions to ensure that they were not related to eye-catching facial features. Results indicated that non-dysphoric individuals were characterised by longer fixation and dwell times on happy faces. Dysphoric individuals demonstrated longer dwelling on sad and neutral faces. These results were not found for inverted facial expressions. The present findings are in line with the assumption that depression is associated with prolonged attentional elaboration on negative information.

  4. Recognition of facial expressions and prosodic cues with graded emotional intensities in adults with Asperger syndrome.

    PubMed

    Doi, Hirokazu; Fujisawa, Takashi X; Kanai, Chieko; Ohta, Haruhisa; Yokoi, Hideki; Iwanami, Akira; Kato, Nobumasa; Shinohara, Kazuyuki

    2013-09-01

    This study investigated the ability of adults with Asperger syndrome to recognize emotional categories of facial expressions and emotional prosodies with graded emotional intensities. The individuals with Asperger syndrome showed poorer recognition performance for angry and sad expressions from both facial and vocal information. The group difference in facial expression recognition was prominent for stimuli with low or intermediate emotional intensities. In contrast to this, the individuals with Asperger syndrome exhibited lower recognition accuracy than typically-developed controls mainly for emotional prosody with high emotional intensity. In facial expression recognition, Asperger and control groups showed an inversion effect for all categories. The magnitude of this effect was less in the Asperger group for angry and sad expressions, presumably attributable to reduced recruitment of the configural mode of face processing. The individuals with Asperger syndrome outperformed the control participants in recognizing inverted sad expressions, indicating enhanced processing of local facial information representing sad emotion. These results suggest that the adults with Asperger syndrome rely on modality-specific strategies in emotion recognition from facial expression and prosodic information.

  5. Facial expression recognition in the wild based on multimodal texture features

    NASA Astrophysics Data System (ADS)

    Sun, Bo; Li, Liandong; Zhou, Guoyan; He, Jun

    2016-11-01

    Facial expression recognition in the wild is a very challenging task. We describe our work in static and continuous facial expression recognition in the wild. We evaluate the recognition results of gray deep features and color deep features, and explore the fusion of multimodal texture features. For continuous facial expression recognition, we design two temporal-spatial dense scale-invariant feature transform (SIFT) features and combine multimodal features to recognize expression from image sequences. For static facial expression recognition based on video frames, we extract dense SIFT and some deep convolutional neural network (CNN) features, including our proposed CNN architecture. We train linear support vector machine and partial least squares classifiers for those kinds of features on the static facial expression in the wild (SFEW) and acted facial expression in the wild (AFEW) datasets, and we propose a fusion network to combine all the extracted features at decision level. The final recognition rates achieved are 56.32% on the SFEW testing set and 50.67% on the AFEW validation set, which are much better than the baseline rates of 35.96% and 36.08%.
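
    The decision-level fusion step can be illustrated with a weighted combination of the score vectors produced by the feature-specific classifiers; this is a simplified stand-in for the fusion network described above, not its actual architecture.

      import numpy as np

      def fuse_decisions(score_list, weights=None):
          # score_list: list of (n_samples, n_classes) score/probability arrays,
          #             one per feature-specific classifier (e.g. SIFT, CNN).
          # weights:    optional per-classifier weights; uniform if None.
          scores = np.stack(score_list)                  # (n_classifiers, n_samples, n_classes)
          if weights is None:
              weights = np.ones(len(score_list))
          weights = np.asarray(weights, dtype=float) / np.sum(weights)
          fused = np.tensordot(weights, scores, axes=1)  # weighted sum over classifiers
          return fused.argmax(axis=1)                    # fused class index per sample

    In practice the per-classifier weights could be tuned on held-out data such as a validation split.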

  6. Face-selective regions differ in their ability to classify facial expressions.

    PubMed

    Zhang, Hui; Japee, Shruti; Nolan, Rachel; Chu, Carlton; Liu, Ning; Ungerleider, Leslie G

    2016-04-15

    Recognition of facial expressions is crucial for effective social interactions. Yet, the extent to which the various face-selective regions in the human brain classify different facial expressions remains unclear. We used functional magnetic resonance imaging (fMRI) and support vector machine pattern classification analysis to determine how well face-selective brain regions are able to decode different categories of facial expression. Subjects participated in a slow event-related fMRI experiment in which they were shown 32 face pictures, portraying four different expressions: neutral, fearful, angry, and happy and belonging to eight different identities. Our results showed that only the amygdala and the posterior superior temporal sulcus (STS) were able to accurately discriminate between these expressions, albeit in different ways: the amygdala discriminated fearful faces from non-fearful faces, whereas STS discriminated neutral from emotional (fearful, angry and happy) faces. In contrast to these findings on the classification of emotional expression, only the fusiform face area (FFA) and anterior inferior temporal cortex (aIT) could discriminate among the various facial identities. Further, the amygdala and STS were better than FFA and aIT at classifying expression, while FFA and aIT were better than the amygdala and STS at classifying identity. Taken together, our findings indicate that the decoding of facial emotion and facial identity occurs in different neural substrates: the amygdala and STS for the former and FFA and aIT for the latter.
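
    As an illustration of the ROI-based pattern classification described above (a sketch under simplifying assumptions, not the authors' exact analysis), a linear SVM can be cross-validated on single-trial voxel patterns from one face-selective region, for example to ask whether fearful faces are discriminable from non-fearful ones:

      import numpy as np
      from sklearn.svm import SVC
      from sklearn.model_selection import StratifiedKFold, cross_val_score

      def decode_expression(patterns, labels, n_splits=8):
          # patterns: (n_trials, n_voxels) response patterns from one ROI.
          # labels:   (n_trials,) binary labels, e.g. 1 = fearful, 0 = other.
          cv = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=0)
          clf = SVC(kernel="linear", C=1.0)
          return cross_val_score(clf, patterns, labels, cv=cv).mean()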

  7. Synthesis of Facial Image with Expression Based on Muscular Contraction Parameters Using Linear Muscle and Sphincter Muscle

    NASA Astrophysics Data System (ADS)

    Ahn, Seonju; Ozawa, Shinji

    We aim to synthesize individual facial images with expressions based on muscular contraction parameters. We previously proposed a method for calculating muscular contraction parameters from an arbitrary face image without per-individual learning. As a result, we could generate not only an individual's facial expression but also the facial expressions of various persons. In this paper, we propose a muscle-based facial model in which the facial muscles comprise both linear muscles and a novel sphincter muscle. Additionally, we propose a method for synthesizing an individual facial image with expression based on the muscular contraction parameters. First, the individual facial model with expression is generated by fitting to the arbitrary face image. Next, the muscular contraction parameters corresponding to the expression displacements of the input face image are calculated. Finally, the facial expression is synthesized through vertex displacements of a neutral facial model driven by the calculated muscular contraction parameters. Experimental results reveal that the model with the novel sphincter muscle can synthesize facial expressions that correspond to the actual face image, including arbitrary mouth and eye expressions.
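
    A simplified sketch of how a single linear muscle could drive vertex displacements of a neutral facial mesh, loosely in the spirit of classic muscle-based face models rather than the authors' exact formulation; the attachment point, insertion point, radius of influence, and cosine falloff are illustrative assumptions:

      import numpy as np

      def apply_linear_muscle(vertices, attach, insert, contraction, radius):
          # vertices:    (n, 3) neutral mesh vertex positions.
          # attach:      (3,) fixed attachment point (e.g. on the skull).
          # insert:      (3,) insertion point in the skin.
          # contraction: scalar in [0, 1], a muscular contraction parameter.
          # radius:      zone of influence measured from the insertion point.
          out = vertices.copy()
          dist = np.linalg.norm(vertices - insert, axis=1)
          inside = dist < radius
          falloff = np.cos(dist[inside] / radius * np.pi / 2)  # 1 at insertion, 0 at the edge
          pull = attach - vertices[inside]                     # direction toward the attachment
          out[inside] += contraction * falloff[:, None] * pull
          return out

    Summing the displacements from several such muscles (plus a sphincter-type muscle contracting toward a center point) would approximate the parameter-driven synthesis described above.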

  8. Intact recognition of facial expression, gender, and age in patients with impaired recognition of face identity.

    PubMed

    Tranel, D; Damasio, A R; Damasio, H

    1988-05-01

    We conducted a series of experiments to assess the ability to recognize the meaning of facial expressions, gender, and age in four patients with severe impairments of the recognition of facial identity. In three patients the recognition of face identity could be dissociated from that of facial expression, age, and gender. In one, all forms of face recognition were impaired. Thus, a given lesion may preclude one type of recognition but not another. We conclude that (1) the cognitive demands posed by different forms of recognition are met at different processing levels, and (2) different levels depend on different neural substrates.

  9. The role of context in interpreting facial expression: comment on Russell and Fehr (1987).

    PubMed

    Ekman, P; O'Sullivan, M

    1988-03-01

    In their article, "Relativity in the Perception of Emotion in Facial Expressions," Russell and Fehr (1987) argued that context is the principal determinant in interpreting facial expressions of emotion. They questioned the biological bases for emotion suggested by Darwin and supported by many cross-cultural studies. We suggest that their results occurred because the target faces they used were emotionally neutral or ambiguous. We also argue that their findings can be interpreted as supporting the communicative importance of the face.

  10. Impact of Childhood Maltreatment on the Recognition of Facial Expressions of Emotions

    PubMed Central

    Ardizzi, Martina; Martini, Francesca; Umiltà, Maria Alessandra; Evangelista, Valentina; Ravera, Roberto; Gallese, Vittorio

    2015-01-01

    The development of the explicit recognition of facial expressions of emotions can be affected by childhood maltreatment experiences. A previous study demonstrated the existence of an explicit recognition bias for angry facial expressions among a population of adolescent Sierra Leonean street-boys exposed to high levels of maltreatment. In the present study, the recognition bias for angry facial expressions was investigated in a younger population of street-children and age-matched controls. Participants performed a forced-choice facial expressions recognition task. Recognition bias was measured as participants’ tendency to over-attribute anger label to other negative facial expressions. Participants’ heart rate was assessed and related to their behavioral performance, as index of their stress-related physiological responses. Results demonstrated the presence of a recognition bias for angry facial expressions among street-children, also pinpointing a similar, although significantly less pronounced, tendency among controls. Participants’ performance was controlled for age, cognitive and educational levels and for naming skills. None of these variables influenced the recognition bias for angry facial expressions. Differently, a significant effect of heart rate on participants’ tendency to use anger label was evidenced. Taken together, these results suggest that childhood exposure to maltreatment experiences amplifies children’s “pre-existing bias” for anger labeling in forced-choice emotion recognition task. Moreover, they strengthen the thesis according to which the recognition bias for angry facial expressions is a manifestation of a functional adaptive mechanism that tunes victim’s perceptive and attentive focus on salient environmental social stimuli. PMID:26509890

  11. Mining 3D Patterns from Gene Expression Temporal Data: A New Tricluster Evaluation Measure

    PubMed Central

    2014-01-01

    Microarrays have revolutionized biotechnological research. The analysis of the newly generated data represents a computational challenge due to the characteristics of these data. Clustering techniques are applied to create groups of genes that exhibit similar behavior. Biclustering emerges as a valuable tool for microarray data analysis since it relaxes the constraints for grouping, allowing genes to be evaluated only under a subset of the conditions. However, if a third dimension appears in the data, triclustering is the appropriate tool for the analysis. This occurs in longitudinal experiments in which the genes are evaluated under conditions at several time points. All clustering, biclustering, and triclustering techniques guide their search for solutions by a measure that evaluates the quality of clusters. We present an evaluation measure for triclusters called Mean Square Residue 3D. This measure is based on the classic biclustering measure Mean Square Residue. Mean Square Residue 3D has been applied to both synthetic and real data and it has proved to be capable of extracting groups of genes with homogeneous patterns in subsets of conditions and times; these groups show a high level of correlation and are also related to their functional annotations extracted from the Gene Ontology project. PMID:25143987
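
    A compact numpy sketch of a three-dimensional mean square residue for a genes × conditions × time-points tricluster, following one natural generalization of the classic 2D biclustering residue (the published measure may differ in detail); a perfectly additive tricluster yields a residue of zero:

      import numpy as np

      def mean_square_residue_3d(block):
          # block: (genes, conditions, times) sub-array of the expression cube.
          gene_mean = block.mean(axis=(1, 2), keepdims=True)  # per-gene mean
          cond_mean = block.mean(axis=(0, 2), keepdims=True)  # per-condition mean
          time_mean = block.mean(axis=(0, 1), keepdims=True)  # per-time mean
          overall = block.mean()                              # mean of the whole block
          residue = block - gene_mean - cond_mean - time_mean + 2.0 * overall
          return np.mean(residue ** 2)                        # lower = more coherent tricluster

    A triclustering search would then prefer candidate (gene, condition, time) sub-blocks with low values of this measure.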

  12. Facial Emotion Recognition and Expression in Parkinson’s Disease: An Emotional Mirror Mechanism?

    PubMed Central

    Ricciardi, Lucia; Visco-Comandini, Federica; Erro, Roberto; Morgante, Francesca; Bologna, Matteo; Fasano, Alfonso; Ricciardi, Diego; Edwards, Mark J.; Kilner, James

    2017-01-01

    Background and aim: Parkinson's disease (PD) patients have impaired facial expressivity (hypomimia) and difficulties in interpreting the emotional facial expressions produced by others, especially for aversive emotions. We aimed to evaluate the ability to produce facial emotional expressions and to recognize facial emotional expressions produced by others in a group of PD patients and a group of healthy participants, in order to explore the relationship between these two abilities and any differences between the two groups. Methods: Twenty non-demented, non-depressed PD patients and twenty healthy participants (HC) matched for demographic characteristics were studied. The ability to recognize emotional facial expressions was assessed with the Ekman 60-faces test (emotion recognition task). Participants were video-recorded while posing facial expressions of six primary emotions (happiness, sadness, surprise, disgust, fear and anger). The most expressive pictures for each emotion were derived from the videos. Ten healthy raters were asked to look at the pictures displayed on a computer screen in pseudo-random fashion and to identify the emotional label in a six-alternative forced-choice response format (emotion expressivity task). Reaction time (RT) and response accuracy were recorded. At the end of each trial the participant was asked to rate his/her confidence in the perceived accuracy of the response. Results: For emotion recognition, PD patients scored lower than HC on the Ekman total score (p<0.001) and on the single-emotion sub-scores for happiness, fear, anger, sadness (p<0.01) and surprise (p = 0.02). In the facial emotion expressivity task, PD and HC differed significantly in the total score (p = 0.05) and in the sub-scores for happiness, sadness and anger (all p<0.001). RT and the level of confidence showed significant differences between PD and HC for the same emotions. There was a significant positive correlation between the emotion facial recognition and

  13. Differential Expression of Wound Fibrotic Factors between Facial and Trunk Dermal Fibroblasts

    PubMed Central

    Kurita, Masakazu; Okazaki, Mutsumi; Kaminishi-Tanikawa, Akiko; Niikura, Mamoru; Takushima, Akihiko; Harii, Kiyonori

    2012-01-01

    Clinically, wounds on the face tend to heal with less scarring than those on the trunk, but the causes of this difference have not been clarified. Fibroblasts obtained from different parts of the body are known to show different properties. To investigate whether the characteristic properties of facial and trunk wound healing are caused by differences in local fibroblasts, we comparatively analyzed the functional properties of superficial and deep dermal fibroblasts obtained from the facial and trunk skin of seven individuals, with an emphasis on tendency for fibrosis. Proliferation kinetics and mRNA and protein expression of 11 fibrosis-associated factors were investigated. The proliferation kinetics of facial and trunk fibroblasts were identical, but the expression and production levels of profibrotic factors, such as extracellular matrix, transforming growth factor-β1, and connective tissue growth factor mRNA, were lower in facial fibroblasts when compared with trunk fibroblasts, while the expression of antifibrotic factors, such as collagenase, basic fibroblast growth factor, and hepatocyte growth factor, showed no clear trends. The differences in functional properties of facial and trunk dermal fibroblasts were consistent with the clinical tendencies of healing of facial and trunk wounds. Thus, the differences between facial and trunk scarring are at least partly related to the intrinsic nature of the local dermal fibroblasts. PMID:22260504

  14. Differential expression of wound fibrotic factors between facial and trunk dermal fibroblasts.

    PubMed

    Kurita, Masakazu; Okazaki, Mutsumi; Kaminishi-Tanikawa, Akiko; Niikura, Mamoru; Takushima, Akihiko; Harii, Kiyonori

    2012-01-01

    Clinically, wounds on the face tend to heal with less scarring than those on the trunk, but the causes of this difference have not been clarified. Fibroblasts obtained from different parts of the body are known to show different properties. To investigate whether the characteristic properties of facial and trunk wound healing are caused by differences in local fibroblasts, we comparatively analyzed the functional properties of superficial and deep dermal fibroblasts obtained from the facial and trunk skin of seven individuals, with an emphasis on tendency for fibrosis. Proliferation kinetics and mRNA and protein expression of 11 fibrosis-associated factors were investigated. The proliferation kinetics of facial and trunk fibroblasts were identical, but the expression and production levels of profibrotic factors, such as extracellular matrix, transforming growth factor-β1, and connective tissue growth factor mRNA, were lower in facial fibroblasts when compared with trunk fibroblasts, while the expression of antifibrotic factors, such as collagenase, basic fibroblast growth factor, and hepatocyte growth factor, showed no clear trends. The differences in functional properties of facial and trunk dermal fibroblasts were consistent with the clinical tendencies of healing of facial and trunk wounds. Thus, the differences between facial and trunk scarring are at least partly related to the intrinsic nature of the local dermal fibroblasts.

  15. Development and validation of an Argentine set of facial expressions of emotion.

    PubMed

    Vaiman, Marcelo; Wagner, Mónica Anna; Caicedo, Estefanía; Pereno, Germán Leandro

    2017-02-01

    Pictures of facial expressions of emotion are used in a wide range of experiments. The last decade has seen an increase in the number of studies presenting local sets of emotion stimuli. However, only a few existing sets contain pictures of Latin Americans, despite the growing attention emotion research is receiving in this region. Here we present the development and validation of the Universidad Nacional de Cordoba, Expresiones de Emociones Faciales (UNCEEF), a Facial Action Coding System (FACS)-verified set of pictures of Argentineans expressing the six basic emotions, plus neutral expressions. FACS scores, recognition rates, Hu scores, and discrimination indices are reported. Evidence of convergent validity was obtained using the Pictures of Facial Affect in an Argentine sample. However, recognition accuracy was greater for UNCEEF. The importance of local sets of emotion pictures is discussed.

  16. Does gaze direction modulate facial expression processing in children with autism spectrum disorder?

    PubMed

    Akechi, Hironori; Senju, Atsushi; Kikuchi, Yukiko; Tojo, Yoshikuni; Osanai, Hiroo; Hasegawa, Toshikazu

    2009-01-01

    Two experiments investigated whether children with autism spectrum disorder (ASD) integrate relevant communicative signals, such as gaze direction, when decoding a facial expression. In Experiment 1, typically developing children (9-14 years old; n = 14) were faster at detecting a facial expression accompanying a gaze direction with a congruent motivational tendency (i.e., an avoidant facial expression with averted eye gaze) than those with an incongruent motivational tendency. Children with ASD (9-14 years old; n = 14) were not affected by the gaze direction of facial stimuli. This finding was replicated in Experiment 2, which presented only the eye region of the face to typically developing children (n = 10) and children with ASD (n = 10). These results demonstrated that children with ASD do not encode and/or integrate multiple communicative signals based on their affective or motivational tendency.

  17. Dogs Evaluate Threatening Facial Expressions by Their Biological Validity – Evidence from Gazing Patterns

    PubMed Central

    Somppi, Sanni; Törnqvist, Heini; Kujala, Miiamaaria V.; Hänninen, Laura; Krause, Christina M.; Vainio, Outi

    2016-01-01

    Appropriate response to companions’ emotional signals is important for all social creatures. The emotional expressions of humans and non-human animals have analogies in their form and function, suggesting shared evolutionary roots, but very little is known about how animals other than primates view and process facial expressions. In primates, threat-related facial expressions evoke exceptional viewing patterns compared with neutral or positive stimuli. Here, we explore if domestic dogs (Canis familiaris) have such an attentional bias toward threatening social stimuli and whether observed emotional expressions affect dogs’ gaze fixation distribution among the facial features (eyes, midface and mouth). We recorded the voluntary eye gaze of 31 domestic dogs during viewing of facial photographs of humans and dogs with three emotional expressions (threatening, pleasant and neutral). We found that dogs’ gaze fixations spread systematically among facial features. The distribution of fixations was altered by the seen expression, but eyes were the most probable targets of the first fixations and gathered longer looking durations than mouth regardless of the viewed expression. The examination of the inner facial features as a whole revealed more pronounced scanning differences among expressions. This suggests that dogs do not base their perception of facial expressions on the viewing of single structures, but the interpretation of the composition formed by eyes, midface and mouth. Dogs evaluated social threat rapidly and this evaluation led to attentional bias, which was dependent on the depicted species: threatening conspecifics’ faces evoked heightened attention but threatening human faces instead an avoidance response. We propose that threatening signals carrying differential biological validity are processed via distinctive neurocognitive pathways. Both of these mechanisms may have an adaptive significance for domestic dogs. The findings provide a novel

  18. Dogs Evaluate Threatening Facial Expressions by Their Biological Validity--Evidence from Gazing Patterns.

    PubMed

    Somppi, Sanni; Törnqvist, Heini; Kujala, Miiamaaria V; Hänninen, Laura; Krause, Christina M; Vainio, Outi

    2016-01-01

    Appropriate response to companions' emotional signals is important for all social creatures. The emotional expressions of humans and non-human animals have analogies in their form and function, suggesting shared evolutionary roots, but very little is known about how animals other than primates view and process facial expressions. In primates, threat-related facial expressions evoke exceptional viewing patterns compared with neutral or positive stimuli. Here, we explore if domestic dogs (Canis familiaris) have such an attentional bias toward threatening social stimuli and whether observed emotional expressions affect dogs' gaze fixation distribution among the facial features (eyes, midface and mouth). We recorded the voluntary eye gaze of 31 domestic dogs during viewing of facial photographs of humans and dogs with three emotional expressions (threatening, pleasant and neutral). We found that dogs' gaze fixations spread systematically among facial features. The distribution of fixations was altered by the seen expression, but eyes were the most probable targets of the first fixations and gathered longer looking durations than mouth regardless of the viewed expression. The examination of the inner facial features as a whole revealed more pronounced scanning differences among expressions. This suggests that dogs do not base their perception of facial expressions on the viewing of single structures, but the interpretation of the composition formed by eyes, midface and mouth. Dogs evaluated social threat rapidly and this evaluation led to attentional bias, which was dependent on the depicted species: threatening conspecifics' faces evoked heightened attention but threatening human faces instead an avoidance response. We propose that threatening signals carrying differential biological validity are processed via distinctive neurocognitive pathways. Both of these mechanisms may have an adaptive significance for domestic dogs. The findings provide a novel perspective on

  19. Regional structural styles in the northeast Netherlands as expressed on 3-D data

    SciTech Connect

    Goeyenbier, H. )

    1993-09-01

    The northeast Netherlands area is a highly prospective gas province, containing the Groningen gas field and a multitude of smaller fields. Some 40 three-dimensional (3-D) seismic surveys have been acquired over the last 10 yr, covering a major part of this 15,000-km² area. These surveys have been combined for the first time on a Landmark workstation to produce time, depth, and horizon attribute maps from six important (overburden and reservoir) levels: base Tertiary, base Chalk, base Cretaceous, base Jurassic, top Zechstein and base Zechstein. The structural history was reconstructed by analyzing isopach maps of the various units in combination with dip extractions along the mapped horizons to outline the active fault trends. Isopach maps of the Tertiary, Chalk, and Lower Cretaceous sediments reveal the salt movement during this interval with depocenters in the Lauwerszee trough as a result of salt withdrawal and salt diapirism in the areas of structural weakness near existing fault trends. The dip maps at the base of these units show the en-echelon fault pattern and the presence of crestal collapse systems above the salt domes. A comparison between base Cretaceous and base Chalk isopach maps also highlights the presence of inverted Lower Cretaceous basins. By comparing the overburden fault trends with the pre-Zechstein pattern, late faults can be separated from older trends, which has helped the prediction of sealing faults. The regional 3-D data provide a powerful and unambiguous tool to unravel the structural history in the northeast Netherlands.

  20. Facial age cues and emotional expression interact asymmetrically: age cues moderate emotion categorisation.

    PubMed

    Craig, Belinda M; Lipp, Ottmar V

    2017-04-03

    Facial attributes such as race, sex, and age can interact with emotional expressions; however, only a couple of studies have investigated the nature of the interaction between facial age cues and emotional expressions and these have produced inconsistent results. Additionally, these studies have not addressed the mechanism/s driving the influence of facial age cues on emotional expression or vice versa. In the current study, participants categorised young and older adult faces expressing happiness and anger (Experiment 1) or sadness (Experiment 2) by their age and their emotional expression. Age cues moderated categorisation of happiness vs. anger and sadness in the absence of an influence of emotional expression on age categorisation times. This asymmetrical interaction suggests that facial age cues are obligatorily processed prior to emotional expressions. Finding a categorisation advantage for happiness expressed on young faces relative to both anger and sadness which are negative in valence but different in their congruence with old age stereotypes or structural overlap with age cues suggests that the observed influence of facial age cues on emotion perception is due to the congruence between relatively positive evaluations of young faces and happy expressions.

  1. Neurophysiology of spontaneous facial expressions: I. Motor control of the upper and lower face is behaviorally independent in adults.

    PubMed

    Ross, Elliott D; Gupta, Smita S; Adnan, Asif M; Holden, Thomas L; Havlicek, Joseph; Radhakrishnan, Sridhar

    2016-03-01

    Facial expressions are described traditionally as monolithic entities. However, humans have the capacity to produce facial blends, in which the upper and lower face simultaneously display different emotional expressions. This, in turn, has led to the Component Theory of facial expressions. Recent neuroanatomical studies in monkeys have demonstrated that there are separate cortical motor areas for controlling the upper and lower face that, presumably, also occur in humans. The lower face is represented on the posterior ventrolateral surface of the frontal lobes in the primary motor and premotor cortices and the upper face is represented on the medial surface of the posterior frontal lobes in the supplementary motor and anterior cingulate cortices. Our laboratory has been engaged in a series of studies exploring the perception and production of facial blends. Using high-speed videography, we began measuring the temporal aspects of facial expressions to develop a more complete understanding of the neurophysiology underlying facial expressions and facial blends. The goal of the research presented here was to determine if spontaneous facial expressions in adults are predominantly monolithic or exhibit independent motor control of the upper and lower face. We found that spontaneous facial expressions are very complex and that the motor control of the upper and lower face is overwhelmingly independent, thus robustly supporting the Component Theory of facial expressions. Seemingly monolithic expressions, be they full facial or facial blends, are most likely the result of a timing coincidence rather than a synchronous coordination between the ventrolateral and medial cortical motor areas responsible for controlling the lower and upper face, respectively. In addition, we found evidence that the right and left face may also exhibit independent motor control, thus supporting the concept that spontaneous facial expressions are organized predominantly across the horizontal facial

  2. Social Risk and Depression: Evidence from Manual and Automatic Facial Expression Analysis

    PubMed Central

    Girard, Jeffrey M.; Cohn, Jeffrey F.; Mahoor, Mohammad H.; Mavadati, Seyedmohammad; Rosenwald, Dean P.

    2014-01-01

    Investigated the relationship between change over time in severity of depression symptoms and facial expression. Depressed participants were followed over the course of treatment and video recorded during a series of clinical interviews. Facial expressions were analyzed from the video using both manual and automatic systems. Automatic and manual coding were highly consistent for FACS action units, and showed similar effects for change over time in depression severity. For both systems, when symptom severity was high, participants made more facial expressions associated with contempt, smiled less, and those smiles that occurred were more likely to be accompanied by facial actions associated with contempt. These results are consistent with the “social risk hypothesis” of depression. According to this hypothesis, when symptoms are severe, depressed participants withdraw from other people in order to protect themselves from anticipated rejection, scorn, and social exclusion. As their symptoms fade, participants send more signals indicating a willingness to affiliate. The finding that automatic facial expression analysis was both consistent with manual coding and produced the same pattern of depression effects suggests that automatic facial expression analysis may be ready for use in behavioral and clinical science. PMID:24598859

  3. Interpreting text messages with graphic facial expression by deaf and hearing people

    PubMed Central

    Saegusa, Chihiro; Namatame, Miki; Watanabe, Katsumi

    2015-01-01

    In interpreting verbal messages, humans use not only verbal information but also non-verbal signals such as facial expression. For example, when a person says “yes” with a troubled face, what he or she really means appears ambiguous. In the present study, we examined how deaf and hearing people differ in perceiving real meanings in texts accompanied by representations of facial expression. Deaf and hearing participants were asked to imagine that the face presented on the computer monitor was asked a question from another person (e.g., do you like her?). They observed either a realistic or a schematic face with a different magnitude of positive or negative expression on a computer monitor. A balloon that contained either a positive or negative text response to the question appeared at the same time as the face. Then, participants rated how much the individual on the monitor really meant it (i.e., perceived earnestness), using a 7-point scale. Results showed that the facial expression significantly modulated the perceived earnestness. The influence of positive expression on negative text responses was relatively weaker than that of negative expression on positive responses (i.e., “no” tended to mean “no” irrespective of facial expression) for both participant groups. However, this asymmetrical effect was stronger in the hearing group. These results suggest that the contribution of facial expression in perceiving real meanings from text messages is qualitatively similar but quantitatively different between deaf and hearing people. PMID:25883582

  4. Social Context Modulates Facial Imitation of Children’s Emotional Expressions

    PubMed Central

    Jap-Tjong, Nadine; Spencer, Hannah; Hofman, Dennis

    2016-01-01

    Children use emotional facial expressions of others for guiding their behavior, a process which is important to a child’s social-emotional development. Earlier studies on facial interaction demonstrate that imitation of emotional expressions of others is automatic, yet can be dynamically modulated depending on contextual information. Considering the value of emotional expressions for children especially, we tested whether and to what extent information about children’s temperament and domestic situation alters mimicry of their emotional expressions. Results show that angry expressions of children displaying negative behavior resulted in stronger imitation, which may serve as a corrective signal. Sad facial expressions resulted in stronger imitation towards those behaving positively but only when exposed to a difficult domestic situation, indicating increased empathy towards these children. These findings shed new light on the dynamic implicit communicative processes that shape interaction with children of different social-emotional backgrounds. PMID:27930714

  5. An optimized ERP brain-computer interface based on facial expression changes

    NASA Astrophysics Data System (ADS)

    Jin, Jing; Daly, Ian; Zhang, Yu; Wang, Xingyu; Cichocki, Andrzej

    2014-06-01

    Objective. Interferences from spatially adjacent non-target stimuli are known to evoke event-related potentials (ERPs) during non-target flashes and, therefore, lead to false positives. This phenomenon was commonly seen in visual attention-based brain-computer interfaces (BCIs) using conspicuous stimuli and is known to adversely affect the performance of BCI systems. Although users try to focus on the target stimulus, they cannot help but be affected by conspicuous changes of the stimuli (such as flashes or presenting images) which were adjacent to the target stimulus. Furthermore, subjects have reported that conspicuous stimuli made them tired and annoyed. In view of this, the aim of this study was to reduce adjacent interference, annoyance and fatigue using a new stimulus presentation pattern based upon facial expression changes. Our goal was not to design a new pattern which could evoke larger ERPs than the face pattern, but to design a new pattern which could reduce adjacent interference, annoyance and fatigue, and evoke ERPs as good as those observed during the face pattern. Approach. Positive facial expressions could be changed to negative facial expressions by minor changes to the original facial image. Although the changes are minor, the contrast is big enough to evoke strong ERPs. In this paper, a facial expression change pattern between positive and negative facial expressions was used to attempt to minimize interference effects. This was compared against two different conditions, a shuffled pattern containing the same shapes and colours as the facial expression change pattern, but without the semantic content associated with a change in expression, and a face versus no face pattern. Comparisons were made in terms of classification accuracy and information transfer rate as well as user supplied subjective measures. Main results. The results showed that interferences from adjacent stimuli, annoyance and the fatigue experienced by the subjects could be
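
    The stimulus patterns above are compared in terms of classification accuracy and information transfer rate (ITR). As a point of reference only, the following minimal Python sketch computes the standard Wolpaw ITR from the number of selectable targets, classification accuracy, and selection rate; the example values (12 targets, 90% accuracy, 4 selections per minute) are hypothetical and not taken from the study.

```python
import math

def wolpaw_itr(n_classes: int, accuracy: float, selections_per_min: float) -> float:
    """Wolpaw information transfer rate in bits/min; clamped to zero when
    accuracy is at or below chance, where the formula goes negative."""
    if accuracy <= 0.0:
        return 0.0
    bits = math.log2(n_classes)
    if accuracy < 1.0:
        bits += accuracy * math.log2(accuracy)
        bits += (1.0 - accuracy) * math.log2((1.0 - accuracy) / (n_classes - 1))
    return max(0.0, bits) * selections_per_min

# Hypothetical numbers: a 12-target speller classified at 90% accuracy,
# making 4 selections per minute.
print(f"{wolpaw_itr(12, 0.90, 4):.2f} bits/min")
```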

  6. A closed-form expression of the positional uncertainty for 3D point clouds.

    PubMed

    Bae, Kwang-Ho; Belton, David; Lichti, Derek D

    2009-04-01

    We present a novel closed-form expression of positional uncertainty measured by a near-monostatic and time-of-flight laser range finder with consideration of its measurement uncertainties. An explicit form of the angular variance of the estimated surface normal vector is also derived. This expression is useful for the precise estimation of the surface normal vector and the outlier detection for finding correspondence in order to register multiple three-dimensional point clouds. Two practical algorithms using these expressions are presented: a method for finding optimal local neighbourhood size which minimizes the variance of the estimated normal vector and a resampling method of point clouds.
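
    The closed-form variance expressions themselves are derived in the paper; as a rough illustration of the surrounding workflow only, the sketch below estimates a local surface normal by PCA plane fitting and picks, by brute force, the neighbourhood size whose bootstrap-resampled normals are most stable. This empirical stability criterion is a stand-in for the paper's analytic angular-variance expression, and the candidate neighbourhood sizes are arbitrary.

```python
import numpy as np

def pca_normal(neighbors: np.ndarray) -> np.ndarray:
    """Estimate the local surface normal as the eigenvector of the neighbourhood
    covariance matrix with the smallest eigenvalue (standard PCA plane fit)."""
    centered = neighbors - neighbors.mean(axis=0)
    cov = centered.T @ centered / len(neighbors)
    eigvals, eigvecs = np.linalg.eigh(cov)      # eigenvalues in ascending order
    return eigvecs[:, 0]                        # direction of least variance

def best_neighborhood_size(point: np.ndarray, cloud: np.ndarray,
                           k_candidates=(8, 16, 32, 64), n_boot=20, seed=0) -> int:
    """Choose the neighbourhood size whose bootstrap-resampled normals are most
    stable (an empirical stand-in for minimizing the angular variance of the
    estimated normal vector)."""
    rng = np.random.default_rng(seed)
    order = np.argsort(np.linalg.norm(cloud - point, axis=1))
    best_k, best_spread = k_candidates[0], np.inf
    for k in k_candidates:
        nbrs = cloud[order[:k]]
        normals = []
        for _ in range(n_boot):
            sample = nbrs[rng.integers(0, k, size=k)]
            n = pca_normal(sample)
            if normals and n @ normals[0] < 0:
                n = -n                          # resolve the eigenvector sign ambiguity
            normals.append(n)
        spread = 1.0 - np.linalg.norm(np.mean(normals, axis=0))  # 0 when all agree
        if spread < best_spread:
            best_k, best_spread = k, spread
    return best_k
```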

  7. Human Empathy, Personality and Experience Affect the Emotion Ratings of Dog and Human Facial Expressions.

    PubMed

    Kujala, Miiamaaria V; Somppi, Sanni; Jokela, Markus; Vainio, Outi; Parkkonen, Lauri

    2017-01-01

    Facial expressions are important for humans in communicating emotions to the conspecifics and enhancing interpersonal understanding. Many muscles producing facial expressions in humans are also found in domestic dogs, but little is known about how humans perceive dog facial expressions, and which psychological factors influence people's perceptions. Here, we asked 34 observers to rate the valence, arousal, and the six basic emotions (happiness, sadness, surprise, disgust, fear, and anger/aggressiveness) from images of human and dog faces with Pleasant, Neutral and Threatening expressions. We investigated how the subjects' personality (the Big Five Inventory), empathy (Interpersonal Reactivity Index) and experience of dog behavior affect the ratings of dog and human faces. Ratings of both species followed similar general patterns: human subjects classified dog facial expressions from pleasant to threatening very similarly to human facial expressions. Subjects with higher emotional empathy evaluated Threatening faces of both species as more negative in valence and higher in anger/aggressiveness. More empathetic subjects also rated the happiness of Pleasant humans but not dogs higher, and they were quicker in their valence judgments of Pleasant human, Threatening human and Threatening dog faces. Experience with dogs correlated positively with ratings of Pleasant and Neutral dog faces. Personality also had a minor effect on the ratings of Pleasant and Neutral faces in both species. The results imply that humans perceive human and dog facial expression in a similar manner, and the perception of both species is influenced by psychological factors of the evaluators. Especially empathy affects both the speed and intensity of rating dogs' emotional facial expressions.

  8. Human Empathy, Personality and Experience Affect the Emotion Ratings of Dog and Human Facial Expressions

    PubMed Central

    Kujala, Miiamaaria V.; Somppi, Sanni; Jokela, Markus; Vainio, Outi; Parkkonen, Lauri

    2017-01-01

    Facial expressions are important for humans in communicating emotions to the conspecifics and enhancing interpersonal understanding. Many muscles producing facial expressions in humans are also found in domestic dogs, but little is known about how humans perceive dog facial expressions, and which psychological factors influence people’s perceptions. Here, we asked 34 observers to rate the valence, arousal, and the six basic emotions (happiness, sadness, surprise, disgust, fear, and anger/aggressiveness) from images of human and dog faces with Pleasant, Neutral and Threatening expressions. We investigated how the subjects’ personality (the Big Five Inventory), empathy (Interpersonal Reactivity Index) and experience of dog behavior affect the ratings of dog and human faces. Ratings of both species followed similar general patterns: human subjects classified dog facial expressions from pleasant to threatening very similarly to human facial expressions. Subjects with higher emotional empathy evaluated Threatening faces of both species as more negative in valence and higher in anger/aggressiveness. More empathetic subjects also rated the happiness of Pleasant humans but not dogs higher, and they were quicker in their valence judgments of Pleasant human, Threatening human and Threatening dog faces. Experience with dogs correlated positively with ratings of Pleasant and Neutral dog faces. Personality also had a minor effect on the ratings of Pleasant and Neutral faces in both species. The results imply that humans perceive human and dog facial expression in a similar manner, and the perception of both species is influenced by psychological factors of the evaluators. Especially empathy affects both the speed and intensity of rating dogs’ emotional facial expressions. PMID:28114335

  9. Exploring the seismic expression of fault zones in 3D seismic volumes

    NASA Astrophysics Data System (ADS)

    Iacopini, D.; Butler, R. W. H.; Purves, S.; McArdle, N.; De Freslon, N.

    2016-08-01

    Mapping and understanding distributed deformation is a major challenge for the structural interpretation of seismic data. However, volumes of seismic signal disturbance with low signal/noise ratio are systematically observed within 3D seismic datasets around fault systems. These seismic disturbance zones (SDZ) are commonly characterized by complex perturbations of the signal and occur at the sub-seismic (10 s m) to seismic scale (100 s m). They may store important information on deformation distributed around those larger scale structures that may be readily interpreted in conventional amplitude displays of seismic data. We introduce a method to detect fault-related disturbance zones and to discriminate between this and other noise sources such as those associated with the seismic acquisition (footprint noise). Two case studies from the Taranaki basin and deep-water Niger delta are presented. These resolve SDZs using tensor and semblance attributes along with conventional seismic mapping. The tensor attribute is more efficient in tracking volumes containing structural displacements while structurally-oriented semblance coherency is commonly disturbed by small waveform variations around the fault throw. We propose a workflow to map and cross-plot seismic waveform signal properties extracted from the seismic disturbance zone as a tool to investigate the seismic signature and explore seismic facies of a SDZ.
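
    For readers unfamiliar with the coherency attributes mentioned here, the sketch below computes a basic semblance coherence map over a small lateral window of a 3-D amplitude cube. This is the textbook semblance measure only, not the authors' tensor attribute or their specific workflow, and the window size is arbitrary.

```python
import numpy as np

def semblance(traces: np.ndarray) -> float:
    """Semblance coherence of a set of seismic traces (shape: n_traces x n_samples).
    Values near 1.0 mean highly coherent traces; values drop near faults and noise."""
    stacked = traces.sum(axis=0)                      # stack over traces, per time sample
    num = np.sum(stacked ** 2)
    den = traces.shape[0] * np.sum(traces ** 2)
    return float(num / den) if den > 0 else 0.0

def semblance_map(cube: np.ndarray, win: int = 3) -> np.ndarray:
    """Slide a win x win lateral window over an (inline, xline, time) cube and
    compute semblance from the traces inside each window."""
    ni, nx, nt = cube.shape
    out = np.zeros((ni, nx))
    h = win // 2
    for i in range(h, ni - h):
        for x in range(h, nx - h):
            window = cube[i - h:i + h + 1, x - h:x + h + 1, :].reshape(-1, nt)
            out[i, x] = semblance(window)
    return out
```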

  10. Exploring the seismic expression of fault zones in 3D seismic volumes

    NASA Astrophysics Data System (ADS)

    Iacopini, David; Butler, Rob; Purves, Steve

    2016-04-01

    Mapping and understanding distributed deformation is a major challenge for the structural interpretation of seismic data. However, volumes of seismic signal disturbance with low signal/noise ratio are systematically observed within 3D seismic datasets around fault systems. These seismic disturbance zones (SDZ) are commonly characterized by complex perturbations of the signal and occur at the sub-seismic to seismic scale. They may store important information on deformation distributed around those larger scale structures that may be readily interpreted in conventional amplitude displays of seismic data. We introduce a method to detect fault-related disturbance zones and to discriminate between this and other noise sources such as those associated with the seismic acquisition (footprint noise). Two case studies, from the Taranaki basin and deep-water Niger delta, are presented. These resolve structure within SDZs using tensor and semblance attributes along with conventional seismic mapping. The tensor attribute is more efficient in tracking volumes containing structural displacements while structurally-oriented semblance coherency is commonly disturbed by small waveform variations around the fault throw. We propose a workflow to map and cross-plot seismic waveform signal properties extracted from the seismic disturbance zone as a tool to investigate the seismic signature and explore seismic facies of a SDZ.

  11. Changes in morphology of actin filaments and expression of alkaline phosphatase at 3D cultivation of MG-63 osteoblast-like cells on mineralized fibroin scaffolds.

    PubMed

    Goncharenko, A V; Malyuchenko, N V; Moisenovich, A M; Kotlyarova, M S; Arkhipova, A Yu; Kon'kov, A S; Agapov, I I; Molochkov, A V; Moisenovich, M M; Kirpichnikov, M P

    2016-09-01

    3D cultivation of MG-63 osteoblast-like cells on mineralized fibroin scaffolds leads to an increase in the expression of alkaline phosphatase, an early marker of bone formation. Increased expression is associated with the actin cytoskeleton reorganization under the influence of 3D cultivation and osteogenic calcium phosphate component of the microcarrier.

  12. Does Facial Expressivity Count? How Typically Developing Children Respond Initially to Children with Autism

    ERIC Educational Resources Information Center

    Stagg, Steven D.; Slavny, Rachel; Hand, Charlotte; Cardoso, Alice; Smith, Pamela

    2014-01-01

    Research investigating expressivity in children with autism spectrum disorder has reported flat affect or bizarre facial expressivity within this population; however, the impact expressivity may have on first impression formation has received little research input. We examined how videos of children with autism spectrum disorder were rated for…

  13. Revisiting the Relationship between the Processing of Gaze Direction and the Processing of Facial Expression

    ERIC Educational Resources Information Center

    Ganel, Tzvi

    2011-01-01

    There is mixed evidence on the nature of the relationship between the perception of gaze direction and the perception of facial expressions. Major support for shared processing of gaze and expression comes from behavioral studies that showed that observers cannot process expression or gaze and ignore irrelevant variations in the other dimension.…

  14. Personal identification by the comparison of facial profiles: testing the reliability of a high-resolution 3D-2D comparison model.

    PubMed

    Cattaneo, Cristina; Cantatore, Angela; Ciaffi, Romina; Gibelli, Daniele; Cigada, Alfredo; De Angelis, Danilo; Sala, Remo

    2012-01-01

    Identification from video surveillance systems is frequently requested in forensic practice. The "3D-2D" comparison has proven to be reliable in assessing identification but still requires standardization; this study concerns the validation of the 3D-2D profile comparison. The 3D models of the faces of five individuals were compared with photographs from the same subjects as well as from another 45 individuals. The differences in area and distance between maxima (glabella, tip of nose, fore point of upper and lower lips, pogonion) and minima points (selion, subnasale, stomion, suprapogonion) were measured. The highest difference in area between the 3D model and the 2D image was between 43 and 133 mm² in the five matches, and always greater than 157 mm² in mismatches; the mean distance between the points was greater than 1.96 mm in mismatches and <1.9 mm in the five matches (p < 0.05). These results indicate that this difference in areas may point toward a manner of distinguishing "correct" from "incorrect" matches.
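
    As a schematic illustration of the kind of measurements reported above (not the authors' validated protocol), the sketch below computes the mean Euclidean distance between corresponding profile landmarks and the area enclosed between two profile polylines, assuming the projected 3-D profile and the photographic profile have already been aligned and scaled to millimetres.

```python
import numpy as np

def mean_landmark_distance(profile_a: np.ndarray, profile_b: np.ndarray) -> float:
    """Mean Euclidean distance (mm) between corresponding 2D profile landmarks,
    e.g. glabella, nose tip, lips and pogonion sampled from a projected 3D model
    and from the photograph (both arrays shaped n_points x 2, already aligned
    and in the same millimetre scale)."""
    return float(np.linalg.norm(profile_a - profile_b, axis=1).mean())

def enclosed_area(profile_a: np.ndarray, profile_b: np.ndarray) -> float:
    """Area (mm^2) enclosed between two non-crossing profile polylines, obtained
    by closing them into a single polygon and applying the shoelace formula."""
    poly = np.vstack([profile_a, profile_b[::-1]])
    x, y = poly[:, 0], poly[:, 1]
    return float(0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1))))
```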

  15. Posed versus spontaneous facial expressions are modulated by opposite cerebral hemispheres.

    PubMed

    Ross, Elliott D; Pulusu, Vinay K

    2013-05-01

    Clinical research has indicated that the left face is more expressive than the right face, suggesting that modulation of facial expressions is lateralized to the right hemisphere. The findings, however, are controversial because the results explain, on average, approximately 4% of the data variance. Using high-speed videography, we sought to determine if movement-onset asymmetry was a more powerful research paradigm than terminal movement asymmetry. The results were very robust, explaining up to 70% of the data variance. Posed expressions began overwhelmingly on the right face whereas spontaneous expressions began overwhelmingly on the left face. This dichotomy was most robust for upper facial expressions. In addition, movement-onset asymmetries did not predict terminal movement asymmetries, which were not significantly lateralized. The results support recent neuroanatomic observations that upper versus lower facial movements have different forebrain motor representations and recent behavioral constructs that posed versus spontaneous facial expressions are modulated preferentially by opposite cerebral hemispheres and that spontaneous facial expressions are graded rather than non-graded movements.

  16. Exposure to the self-face facilitates identification of dynamic facial expressions: influences on individual differences.

    PubMed

    Li, Yuan Hang; Tottenham, Nim

    2013-04-01

    A growing literature suggests that the self-face is involved in processing the facial expressions of others. The authors experimentally activated self-face representations to assess its effects on the recognition of dynamically emerging facial expressions of others. They exposed participants to videos of either their own faces (self-face prime) or faces of others (nonself-face prime) prior to a facial expression judgment task. Their results show that experimentally activating self-face representations results in earlier recognition of dynamically emerging facial expression. As a group, participants in the self-face prime condition recognized expressions earlier (when less affective perceptual information was available) compared to participants in the nonself-face prime condition. There were individual differences in performance, such that poorer expression identification was associated with higher autism traits (in this neurocognitively healthy sample). However, when randomized into the self-face prime condition, participants with high autism traits performed as well as those with low autism traits. Taken together, these data suggest that the ability to recognize facial expressions in others is linked with the internal representations of our own faces.

  17. Analysis and evaluation of facial expression and perceived age for designing automotive frontal views

    NASA Astrophysics Data System (ADS)

    Fujiwara, Takayuki; Kawasumi, Mikiko; Koshimizu, Hiroyasu

    2007-01-01

    We propose a method for quantifying the design of automotive frontal views, based on research into the human visual impression of facial expressions. We evaluated the automotive frontal face using the facial words and the perceived age, and experimentally verified how effectively line drawing images and coche-PICASSO images could serve as image stimuli. The results show that some of the facial words were strongly correlated with both the facial expressions and the perceived age in the line drawing images. In addition, the perceived age of the coche-PICASSO images was consistently younger than that of the line drawing images.

  18. Behavioral dissociation between emotional and non-emotional facial expressions in congenital prosopagnosia

    PubMed Central

    Daini, Roberta; Comparetti, Chiara M.; Ricciardelli, Paola

    2014-01-01

    Neuropsychological and neuroimaging studies have shown that facial recognition and emotional expressions are dissociable. However, it is unknown if a single system supports the processing of emotional and non-emotional facial expressions. We aimed to understand if individuals with impairment in face recognition from birth (congenital prosopagnosia, CP) can use non-emotional facial expressions to recognize a face as an already seen one, and thus, process this facial dimension independently from features (which are impaired in CP), and basic emotional expressions. To this end, we carried out a behavioral study in which we compared the performance of 6 CP individuals to that of typical development individuals, using upright and inverted faces. Four avatar faces with a neutral expression were presented in the initial phase. The target faces presented in the recognition phase, in which a recognition task was requested (2AFC paradigm), could be identical (neutral) to those of the initial phase or present biologically plausible changes to features, non-emotional expressions, or emotional expressions. After this task, a second task was performed, in which the participants had to detect whether or not the recognized face exactly matched the study face or showed any difference. The results confirmed the CPs' impairment in the configural processing of the invariant aspects of the face, but also showed a spared configural processing of non-emotional facial expression (task 1). Interestingly and unlike the non-emotional expressions, the configural processing of emotional expressions was compromised in CPs and did not improve their change detection ability (task 2). These new results have theoretical implications for face perception models since they suggest that, at least in CPs, non-emotional expressions are processed configurally, can be dissociated from other facial dimensions, and may serve as a compensatory strategy to achieve face recognition. PMID:25520643

  19. Behavioral dissociation between emotional and non-emotional facial expressions in congenital prosopagnosia.

    PubMed

    Daini, Roberta; Comparetti, Chiara M; Ricciardelli, Paola

    2014-01-01

    Neuropsychological and neuroimaging studies have shown that facial recognition and emotional expressions are dissociable. However, it is unknown if a single system supports the processing of emotional and non-emotional facial expressions. We aimed to understand if individuals with impairment in face recognition from birth (congenital prosopagnosia, CP) can use non-emotional facial expressions to recognize a face as an already seen one, and thus, process this facial dimension independently from features (which are impaired in CP), and basic emotional expressions. To this end, we carried out a behavioral study in which we compared the performance of 6 CP individuals to that of typical development individuals, using upright and inverted faces. Four avatar faces with a neutral expression were presented in the initial phase. The target faces presented in the recognition phase, in which a recognition task was requested (2AFC paradigm), could be identical (neutral) to those of the initial phase or present biologically plausible changes to features, non-emotional expressions, or emotional expressions. After this task, a second task was performed, in which the participants had to detect whether or not the recognized face exactly matched the study face or showed any difference. The results confirmed the CPs' impairment in the configural processing of the invariant aspects of the face, but also showed a spared configural processing of non-emotional facial expression (task 1). Interestingly and unlike the non-emotional expressions, the configural processing of emotional expressions was compromised in CPs and did not improve their change detection ability (task 2). These new results have theoretical implications for face perception models since they suggest that, at least in CPs, non-emotional expressions are processed configurally, can be dissociated from other facial dimensions, and may serve as a compensatory strategy to achieve face recognition.

  20. Linking Advanced Visualization and MATLAB for the Analysis of 3D Gene Expression Data

    SciTech Connect

    Ruebel, Oliver; Keranen, Soile V.E.; Biggin, Mark; Knowles, David W.; Weber, Gunther H.; Hagen, Hans; Hamann, Bernd; Bethel, E. Wes

    2011-03-30

    Three-dimensional gene expression PointCloud data generated by the Berkeley Drosophila Transcription Network Project (BDTNP) provides quantitative information about the spatial and temporal expression of genes in early Drosophila embryos at cellular resolution. The BDTNP team visualizes and analyzes Point-Cloud data using the software application PointCloudXplore (PCX). To maximize the impact of novel, complex data sets, such as PointClouds, the data needs to be accessible to biologists and comprehensible to developers of analysis functions. We address this challenge by linking PCX and Matlab via a dedicated interface, thereby providing biologists seamless access to advanced data analysis functions and giving bioinformatics researchers the opportunity to integrate their analysis directly into the visualization application. To demonstrate the usefulness of this approach, we computationally model parts of the expression pattern of the gene even skipped using a genetic algorithm implemented in Matlab and integrated into PCX via our Matlab interface.

  1. Impaired recognition of prosody and subtle emotional facial expressions in Parkinson's disease.

    PubMed

    Buxton, Sharon L; MacDonald, Lorraine; Tippett, Lynette J

    2013-04-01

    Accurately recognizing the emotional states of others is crucial for successful social interactions and social relationships. Individuals with Parkinson's disease (PD) have shown deficits in emotional recognition abilities although findings have been inconsistent. This study examined recognition of emotions from prosody and from facial emotional expressions with three levels of subtlety, in 30 individuals with PD (without dementia) and 30 control participants. The PD group were impaired on the prosody task, with no differential impairments in specific emotions. PD participants were also impaired at recognizing facial expressions of emotion, with a significant association between how well they could recognize emotions in the two modalities, even after controlling for disease severity. When recognizing facial expressions, the PD group had no difficulty identifying prototypical Ekman and Friesen (1976) emotional faces, but were poorer than controls at recognizing the moderate and difficult levels of subtle expressions. They were differentially impaired at recognizing moderately subtle expressions of disgust and sad expressions at the difficult level. Notably, however, they were impaired at recognizing happy expressions at both levels of subtlety. Furthermore how well PD participants identified happy expressions conveyed by either face or voice was strongly related to accuracy in the other modality. This suggests dysfunction of overlapping components of the circuitry processing happy expressions in PD. This study demonstrates the usefulness of including subtle expressions of emotion, likely to be encountered in everyday life, when assessing recognition of facial expressions.

  2. Depth-expression characteristics of multi-projection 3D display systems [invited].

    PubMed

    Park, Soon-gi; Hong, Jong-Young; Lee, Chang-Kun; Miranda, Matheus; Kim, Youngmin; Lee, Byoungho

    2014-09-20

    A multi-projection display consists of multiple projection units. Because of the large amount of data, a multi-projection system shows large, high-quality images. According to the projection geometry and the optical configuration, multi-projection systems show different viewing characteristics for generated three-dimensional images. In this paper, we analyzed the various projection geometries of multi-projection systems, and explained the different depth-expression characteristics for each individual projection geometry. We also demonstrated the depth-expression characteristic of an experimental multi-projection system.

  3. Lossless 3-D reconstruction and registration of semi-quantitative gene expression data in the mouse brain

    PubMed Central

    Enlow, Matthew A.; Ju, Tao; Kakadiaris, Ioannis A.; Carson, James P.

    2012-01-01

    As imaging, computing, and data storage technologies improve, there is an increasing opportunity for multiscale analysis of three-dimensional datasets (3-D). Such analysis enables, for example, microscale elements of multiple macroscale specimens to be compared throughout the entire macroscale specimen. Spatial comparisons require bringing datasets into co-alignment. One approach for co-alignment involves elastic deformations of data in addition to rigid alignments. The elastic deformations distort space, and if not accounted for, can distort the information at the microscale. The algorithms developed in this work address this issue by allowing multiple data points to be encoded into a single image pixel, appropriately tracking each data point to ensure lossless data mapping during elastic spatial deformation. This approach was developed and implemented for both 2-D and 3-D registration of images. Lossless reconstruction and registration was applied to semi-quantitative cellular gene expression data in the mouse brain, enabling comparison of multiple spatially registered 3-D datasets without any augmentation of the cellular data. Standard reconstruction and registration without the lossless approach resulted in errors in cellular quantities of ~ 8%. PMID:22256218

  4. Lossless 3-D reconstruction and registration of semi-quantitative gene expression data in the mouse brain.

    PubMed

    Enlow, Matthew A; Ju, Tao; Kakadiaris, Ioannis A; Carson, James P

    2011-01-01

    As imaging, computing, and data storage technologies improve, there is an increasing opportunity for multiscale analysis of three-dimensional datasets (3-D). Such analysis enables, for example, microscale elements of multiple macroscale specimens to be compared throughout the entire macroscale specimen. Spatial comparisons require bringing datasets into co-alignment. One approach for co-alignment involves elastic deformations of data in addition to rigid alignments. The elastic deformations distort space, and if not accounted for, can distort the information at the microscale. The algorithms developed in this work address this issue by allowing multiple data points to be encoded into a single image pixel, appropriately tracking each data point to ensure lossless data mapping during elastic spatial deformation. This approach was developed and implemented for both 2-D and 3-D registration of images. Lossless reconstruction and registration was applied to semi-quantitative cellular gene expression data in the mouse brain, enabling comparison of multiple spatially registered 3-D datasets without any augmentation of the cellular data. Standard reconstruction and registration without the lossless approach resulted in errors in cellular quantities of ~ 8%.
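
    A minimal sketch of the lossless idea described above, under simplifying assumptions: each pixel stores the full list of point records that fall in it, and a (here, trivially per-pixel) displacement moves the records themselves rather than resampling intensities, so no cellular measurements are merged or lost. The displacement function and the example data are hypothetical, not the authors' implementation.

```python
import numpy as np
from collections import defaultdict

def bin_points_lossless(points, values):
    """Bin scattered data points (e.g. per-cell expression measurements) into a
    sparse pixel grid, keeping every record: each pixel holds a list of values
    rather than a single averaged intensity."""
    grid = defaultdict(list)
    for (x, y), v in zip(points, values):
        grid[(int(round(y)), int(round(x)))].append(v)
    return grid                                   # dict: (row, col) -> [values]

def deform_lossless(grid, displacement):
    """Re-bin stored records after a spatial deformation. `displacement(r, c)`
    returns the (row, col) offset for that pixel; every underlying record follows
    the mapping individually, so record counts are preserved exactly."""
    out = defaultdict(list)
    for (r, c), records in grid.items():
        dr, dc = displacement(r, c)
        out[(int(round(r + dr)), int(round(c + dc)))].extend(records)
    return out

# Illustrative use: a rigid two-pixel shift keeps all three records intact,
# including the two records sharing one pixel.
pts = np.array([[10.2, 5.1], [10.4, 5.3], [30.0, 8.0]])   # (x, y) coordinates
vals = [0.8, 0.6, 0.1]                                     # expression levels
warped = deform_lossless(bin_points_lossless(pts, vals), lambda r, c: (0, 2))
assert sum(len(v) for v in warped.values()) == len(vals)
```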

  5. Quantitative analysis of Euclidean distance to complement qualitative analysis of facial expression during deception

    PubMed Central

    Mondal, Ananya; Mukhopadhyay, Pritha; Basu, Nabanita; Bandyopadhyay, Samir Kumar; Chatterjee, Tanima

    2016-01-01

    Background: Accurate evaluation of an individuals' veracity is a fundamental aspect of social functioning that allows individuals to act in adaptive ways. The domain of deception detection ability is still young, and many components in this field are yet to be touched which demands more research in this field. Aims: The present study aims at deciphering the structural composition of face during felt, posed, and deceived emotions in facial expression unique to Indian culture, using Facial Action Coding System (FACS). Quantitative analysis of Euclidean distance has been done to complement qualitative FACS analysis. Methods: In this study, thirty female, young adults with age range of 23–27 years were chosen randomly for portraying their (felt, posed, and deceived) facial expression. All facial expressions were captured through instruction, and videos were converted into static images. The static images were coded on the basis of FACS to decipher the felt, posed, and deceived expressions. Quantitative analysis of the data has been done using MATLAB to meet the objectives of the study and to complement the qualitative analysis. Results: Felt and posed emotions differ in terms of intensity of the expression and subjective experience. Posed emotional and deceived expressions differ in intent. Facial asymmetry is an important indicator for detecting deception. PMID:28163412
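
    The quantitative analysis described above was done in MATLAB; as a loose, language-translated illustration (Python here, with made-up landmark coordinates and landmark pairs), the sketch below computes one simple Euclidean-distance-based asymmetry index of the kind that could complement FACS coding: each right-side landmark is mirrored across a vertical midline and its distance to the matching left-side landmark is averaged.

```python
import numpy as np

def asymmetry_score(landmarks: np.ndarray, pairs, midline_x: float) -> float:
    """Simple facial-asymmetry index: mirror each right-side landmark across a
    vertical midline and take the mean Euclidean distance to its left-side
    counterpart. `pairs` lists (left_idx, right_idx) landmark index pairs."""
    total = 0.0
    for left_idx, right_idx in pairs:
        left = landmarks[left_idx]
        right = landmarks[right_idx].copy()
        right[0] = 2.0 * midline_x - right[0]   # reflect across x = midline_x
        total += np.linalg.norm(left - right)
    return total / len(pairs)

# Hypothetical landmarks: outer eye corners and mouth corners (x, y in pixels).
lm = np.array([[100., 120.], [200., 118.], [115., 200.], [185., 206.]])
print(asymmetry_score(lm, pairs=[(0, 1), (2, 3)], midline_x=150.0))
```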

  6. Long-term academic stress enhances early processing of facial expressions.

    PubMed

    Zhang, Liang; Qin, Shaozheng; Yao, Zhuxi; Zhang, Kan; Wu, Jianhui

    2016-11-01

    Exposure to long-term stress can lead to a variety of emotional and behavioral problems. Although widely investigated, the neural basis of how long-term stress impacts emotional processing in humans remains largely elusive. Using event-related brain potentials (ERPs), we investigated the effects of long-term stress on the neural dynamics of emotionally facial expression processing. Thirty-nine male college students undergoing preparation for a major examination and twenty-one matched controls performed a gender discrimination task for faces displaying angry, happy, and neutral expressions. The results of the Perceived Stress Scale showed that participants in the stress group perceived higher levels of long-term stress relative to the control group. ERP analyses revealed differential effects of long-term stress on two early stages of facial expression processing: 1) long-term stress generally augmented posterior P1 amplitudes to facial stimuli irrespective of expression valence, suggesting that stress can increase sensitization to visual inputs in general, and 2) long-term stress selectively augmented fronto-central P2 amplitudes for angry but not for neutral or positive facial expressions, suggesting that stress may lead to increased attentional prioritization to processing negative emotional stimuli. Together, our findings suggest that long-term stress has profound impacts on the early stages of facial expression processing, with an increase at the very early stage of general information inputs and a subsequent attentional bias toward processing emotionally negative stimuli.

  7. Young Infants Match Facial and Vocal Emotional Expressions of Other Infants

    PubMed Central

    Vaillant-Molina, Mariana; Bahrick, Lorraine E.; Flom, Ross

    2013-01-01

    Research has demonstrated that infants recognize emotional expressions of adults in the first half-year of life. We extended this research to a new domain, infant perception of the expressions of other infants. In an intermodal matching procedure, 3.5- and 5-month-old infants heard a series of infant vocal expressions (positive and negative affect) along with side-by-side dynamic videos in which one infant conveyed positive facial affect and another infant conveyed negative facial affect. Results demonstrated that 5-month-olds matched the vocal expressions with the affectively congruent facial expressions, whereas 3.5-month-olds showed no evidence of matching. These findings indicate that by 5 months of age, infants detect, discriminate, and match the facial and vocal affective displays of other infants. Further, because the facial and vocal expressions were portrayed by different infants and shared no face-voice synchrony, temporal or intensity patterning, matching was likely based on detection of a more general affective valence common to the face and voice. PMID:24302853

  8. Young Infants Match Facial and Vocal Emotional Expressions of Other Infants.

    PubMed

    Vaillant-Molina, Mariana; Bahrick, Lorraine E; Flom, Ross

    2013-08-01

    Research has demonstrated that infants recognize emotional expressions of adults in the first half-year of life. We extended this research to a new domain, infant perception of the expressions of other infants. In an intermodal matching procedure, 3.5- and 5-month-old infants heard a series of infant vocal expressions (positive and negative affect) along with side-by-side dynamic videos in which one infant conveyed positive facial affect and another infant conveyed negative facial affect. Results demonstrated that 5-month-olds matched the vocal expressions with the affectively congruent facial expressions, whereas 3.5-month-olds showed no evidence of matching. These findings indicate that by 5 months of age, infants detect, discriminate, and match the facial and vocal affective displays of other infants. Further, because the facial and vocal expressions were portrayed by different infants and shared no face-voice synchrony, temporal or intensity patterning, matching was likely based on detection of a more general affective valence common to the face and voice.

  9. Internal representations reveal cultural diversity in expectations of facial expressions of emotion.

    PubMed

    Jack, Rachael E; Caldara, Roberto; Schyns, Philippe G

    2012-02-01

    Facial expressions have long been considered the "universal language of emotion." Yet consistent cultural differences in the recognition of facial expressions contradict such notions (e.g., R. E. Jack, C. Blais, C. Scheepers, P. G. Schyns, & R. Caldara, 2009). Rather, culture--as an intricate system of social concepts and beliefs--could generate different expectations (i.e., internal representations) of facial expression signals. To investigate, they used a powerful psychophysical technique (reverse correlation) to estimate the observer-specific internal representations of the 6 basic facial expressions of emotion (i.e., happy, surprise, fear, disgust, anger, and sad) in two culturally distinct groups (i.e., Western Caucasian [WC] and East Asian [EA]). Using complementary statistical image analyses, cultural specificity was directly revealed in these representations. Specifically, whereas WC internal representations predominantly featured the eyebrows and mouth, EA internal representations showed a preference for expressive information in the eye region. Closer inspection of the EA observer preference revealed a surprising feature: changes of gaze direction, shown primarily among the EA group. For the first time, it is revealed directly that culture can finely shape the internal representations of common facial expressions of emotion, challenging notions of a biologically hardwired "universal language of emotion."
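
    As a generic illustration of the reverse-correlation technique referred to above (not the authors' specific stimulus generation or statistical image analyses), the sketch below computes a basic classification image: the random noise fields from trials the observer assigned to a given emotion category are averaged and contrasted with the noise from all other trials. The trial counts and simulated responses are placeholders.

```python
import numpy as np

def classification_image(noise_fields: np.ndarray, responses: np.ndarray,
                         target_label: int) -> np.ndarray:
    """Basic reverse-correlation classification image: average the random noise
    shown on trials the observer labelled `target_label`, minus the average noise
    on all other trials. Bright/dark regions indicate pixels the observer relied
    on for that expression category."""
    chosen = noise_fields[responses == target_label]
    other = noise_fields[responses != target_label]
    return chosen.mean(axis=0) - other.mean(axis=0)

# Hypothetical experiment: 2000 trials of 64x64 white-noise masks over a base face,
# with simulated responses; real studies use observers' actual categorizations.
rng = np.random.default_rng(1)
noise = rng.standard_normal((2000, 64, 64))
resp = rng.integers(0, 6, size=2000)           # six basic-emotion response options
ci_happy = classification_image(noise, resp, target_label=0)
```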

  10. Dissociation between recognition and detection advantage for facial expressions: a meta-analysis.

    PubMed

    Nummenmaa, Lauri; Calvo, Manuel G

    2015-04-01

    Happy facial expressions are recognized faster and more accurately than other expressions in categorization tasks, whereas detection in visual search tasks is widely believed to be faster for angry than happy faces. We used meta-analytic techniques for resolving this categorization versus detection advantage discrepancy for positive versus negative facial expressions. Effect sizes were computed on the basis of the r statistic for a total of 34 recognition studies with 3,561 participants and 37 visual search studies with 2,455 participants, yielding a total of 41 effect sizes for recognition accuracy, 25 for recognition speed, and 125 for visual search speed. Random effects meta-analysis was conducted to estimate effect sizes at population level. For recognition tasks, an advantage in recognition accuracy and speed for happy expressions was found for all stimulus types. In contrast, for visual search tasks, moderator analysis revealed that a happy face detection advantage was restricted to photographic faces, whereas a clear angry face advantage was found for schematic and "smiley" faces. Robust detection advantage for nonhappy faces was observed even when stimulus emotionality was distorted by inversion or rearrangement of the facial features, suggesting that visual features primarily drive the search. We conclude that the recognition advantage for happy faces is a genuine phenomenon related to processing of facial expression category and affective valence. In contrast, detection advantages toward either happy (photographic stimuli) or nonhappy (schematic) faces is contingent on visual stimulus features rather than facial expression, and may not involve categorical or affective processing.
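
    For readers who want to see the mechanics behind an r-based random-effects meta-analysis like the one summarized above, the sketch below applies the standard Fisher z-transform and DerSimonian-Laird pooling; the study-level correlations and sample sizes in the example are invented and have no relation to the datasets analysed in the paper.

```python
import numpy as np

def random_effects_meta_r(r_values, sample_sizes):
    """DerSimonian-Laird random-effects pooling of correlation effect sizes:
    Fisher z-transform each r, estimate between-study variance tau^2, combine
    with inverse-variance weights, and back-transform the pooled z to r."""
    r = np.asarray(r_values, dtype=float)
    n = np.asarray(sample_sizes, dtype=float)
    z = np.arctanh(r)                      # Fisher z
    v = 1.0 / (n - 3.0)                    # within-study variance of z
    w = 1.0 / v
    z_fixed = np.sum(w * z) / np.sum(w)
    q = np.sum(w * (z - z_fixed) ** 2)     # Cochran's Q
    df = len(z) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)          # DerSimonian-Laird estimate
    w_star = 1.0 / (v + tau2)
    z_random = np.sum(w_star * z) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return np.tanh(z_random), (np.tanh(z_random - 1.96 * se), np.tanh(z_random + 1.96 * se))

# Hypothetical studies: per-study r and N (illustrative only).
pooled, ci = random_effects_meta_r([0.30, 0.45, 0.20, 0.38], [80, 120, 60, 100])
print(f"pooled r = {pooled:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```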

  11. Sex differences in the perception of affective facial expressions: do men really lack emotional sensitivity?

    PubMed

    Montagne, Barbara; Kessels, Roy P C; Frigerio, Elisa; de Haan, Edward H F; Perrett, David I

    2005-06-01

    There is evidence that men and women display differences in both cognitive and affective functions. Recent studies have examined the processing of emotions in males and females. However, the findings are inconclusive, possibly the result of methodological differences. The aim of this study was to investigate the perception of emotional facial expressions in men and women. Video clips of neutral faces, gradually morphing into full-blown expressions were used. By doing this, we were able to examine both the accuracy and the sensitivity in labelling emotional facial expressions. Furthermore, all participants completed an anxiety and a depression rating scale. Research participants were 40 female students and 28 male students. Results revealed that men were less accurate, as well as less sensitive in labelling facial expressions. Thus, men show an overall worse performance compared to women on a task measuring the processing of emotional faces. This result is discussed in relation to recent findings.

  12. In Your Face: Startle to Emotional Facial Expressions Depends on Face Direction

    PubMed Central

    Åsli, Ole; Michalsen, Henriette; Øvervoll, Morten

    2017-01-01

    Although faces are often included in the broad category of emotional visual stimuli, the affective impact of different facial expressions is not well documented. The present experiment investigated startle electromyographic responses to pictures of neutral, happy, angry, and fearful facial expressions, with a frontal face direction (directed) and at a 45° angle to the left (averted). Results showed that emotional facial expressions interact with face direction to produce startle potentiation: Greater responses were found for angry expressions, compared with fear and neutrality, with directed faces. When faces were averted, fear and neutrality produced larger responses compared with anger and happiness. These results are in line with the notion that startle is potentiated to stimuli signaling threat. That is, a forward directed angry face may signal a threat toward the observer, and a fearful face directed to the side may signal a possible threat in the environment. PMID:28321290

  13. In Your Face: Startle to Emotional Facial Expressions Depends on Face Direction.

    PubMed

    Åsli, Ole; Michalsen, Henriette; Øvervoll, Morten

    2017-01-01

    Although faces are often included in the broad category of emotional visual stimuli, the affective impact of different facial expressions is not well documented. The present experiment investigated startle electromyographic responses to pictures of neutral, happy, angry, and fearful facial expressions, with a frontal face direction (directed) and at a 45° angle to the left (averted). Results showed that emotional facial expressions interact with face direction to produce startle potentiation: Greater responses were found for angry expressions, compared with fear and neutrality, with directed faces. When faces were averted, fear and neutrality produced larger responses compared with anger and happiness. These results are in line with the notion that startle is potentiated to stimuli signaling threat. That is, a forward directed angry face may signal a threat toward the observer, and a fearful face directed to the side may signal a possible threat in the environment.

  14. Facial EMG responses to emotional expressions are related to emotion perception ability.

    PubMed

    Künecke, Janina; Hildebrandt, Andrea; Recio, Guillermo; Sommer, Werner; Wilhelm, Oliver

    2014-01-01

    Although most people can identify facial expressions of emotions well, they still differ in this ability. According to embodied simulation theories, understanding emotions of others is fostered by involuntarily mimicking the perceived expressions, causing a "reactivation" of the corresponding mental state. Some studies suggest automatic facial mimicry during expression viewing; however, findings on the relationship between mimicry and emotion perception abilities are equivocal. The present study investigated individual differences in emotion perception and its relationship to facial muscle responses, recorded with electromyogram (EMG), in response to emotional facial expressions. N = 269 participants completed multiple tasks measuring face and emotion perception. EMG recordings were taken from a subsample (N = 110) in an independent emotion classification task of short videos displaying six emotions. Confirmatory factor analyses of the m. corrugator supercilii in response to angry, happy, sad, and neutral expressions showed that individual differences in corrugator activity can be separated into a general response to all faces and an emotion-related response. Structural equation modeling revealed a substantial relationship between the emotion-related response and emotion perception ability, providing evidence for the role of facial muscle activation in emotion perception from an individual differences perspective.

  15. Gaze behavior predicts memory bias for angry facial expressions in stable dysphoria.

    PubMed

    Wells, Tony T; Beevers, Christopher G; Robison, Adrienne E; Ellis, Alissa J

    2010-12-01

    Interpersonal theories suggest that depressed individuals are sensitive to signs of interpersonal rejection, such as angry facial expressions. The present study examined memory bias for happy, sad, angry, and neutral facial expressions in stably dysphoric and stably nondysphoric young adults. Participants' gaze behavior (i.e., fixation duration, number of fixations, and distance between fixations) while viewing these facial expressions was also assessed. Signal detection analyses showed that the dysphoric group had better accuracy on a surprise recognition task for angry faces than the nondysphoric group. Further, mediation analyses indicated that greater breadth of attentional focus (i.e., distance between fixations) accounted for enhanced recall of angry faces among the dysphoric group. There were no differences between dysphoria groups in gaze behavior or memory for sad, happy, or neutral facial expressions. Findings from this study identify a specific cognitive mechanism (i.e., breadth of attentional focus) that accounts for biased recall of angry facial expressions in dysphoria. This work also highlights the potential for integrating cognitive and interpersonal theories of depression.
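
    As a brief illustration of the signal detection analysis mentioned above, recognition accuracy for a given expression can be summarized as d', the difference between z-transformed hit and false-alarm rates; the counts below are hypothetical and are not the study's data.

        from scipy.stats import norm

        def d_prime(hits, misses, false_alarms, correct_rejections):
            # Log-linear correction keeps rates away from 0 and 1 (avoids infinite z-scores).
            hit_rate = (hits + 0.5) / (hits + misses + 1.0)
            fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
            return norm.ppf(hit_rate) - norm.ppf(fa_rate)

        # E.g., a surprise recognition test for previously seen angry faces:
        print(d_prime(hits=18, misses=6, false_alarms=5, correct_rejections=19))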

  16. Perception of stereoscopic direct gaze: The effects of interaxial distance and emotional facial expressions.

    PubMed

    Hakala, Jussi; Kätsyri, Jari; Takala, Tapio; Häkkinen, Jukka

    2016-07-01

    Gaze perception has received considerable research attention due to its importance in social interaction. The majority of recent studies have utilized monoscopic pictorial gaze stimuli. However, a monoscopic direct gaze differs from a live or stereoscopic gaze. In the monoscopic condition, both eyes of the observer receive a direct gaze, whereas in live and stereoscopic conditions, only one eye receives a direct gaze. In the present study, we examined the implications of the difference between monoscopic and stereoscopic direct gaze. Moreover, because research has shown that stereoscopy affects the emotions elicited by facial expressions, and facial expressions affect the range of directions where an observer perceives mutual gaze (the cone of gaze), we studied the interaction effect of stereoscopy and facial expressions on gaze perception. Forty observers viewed stereoscopic images wherein one eye of the observer received a direct gaze while the other eye received a horizontally averted gaze at five different angles corresponding to five interaxial distances between the cameras in stimulus acquisition. In addition to monoscopic and stereoscopic conditions, the stimuli included neutral, angry, and happy facial expressions. The observers judged the gaze direction and mutual gaze of four lookers. Our results show that the mean of the directions received by the left and right eyes approximated the perceived gaze direction in the stereoscopic semidirect gaze condition. The probability of perceiving mutual gaze in the stereoscopic condition was substantially lower compared with monoscopic direct gaze. Furthermore, stereoscopic semidirect gaze significantly widened the cone of gaze for happy facial expressions.

  17. Facial expression: An under-utilised tool for the assessment of welfare in mammals.

    PubMed

    Descovich, Kris A; Wathan, Jennifer; Leach, Matthew C; Buchanan-Smith, Hannah M; Flecknell, Paul; Farningham, David; Vick, Sarah-Jane

    2017-02-08

    Animal welfare is a key issue for industries that use or impact upon animals. The accurate identification of welfare states is particularly relevant to the field of bioscience, where the 3Rs framework encourages refinement of experimental procedures involving animal models. The assessment and improvement of welfare states in animals are reliant on reliable and valid measurement tools. Behavioural measures (activity, attention, posture and vocalisation) are frequently used because they are immediate and non-invasive; however, no single indicator can yield a complete picture of the internal state of an animal. Facial expressions are extensively studied in humans as a measure of psychological and emotional experiences but are infrequently used in animal studies, with the exception of emerging research on pain behaviour. In this review, we discuss current evidence for facial representations of underlying affective states, and how communicative or functional expressions can be useful within welfare assessments. Validated tools for measuring facial movement are outlined, and the potential of expressions as honest signals is discussed, alongside other challenges and limitations to facial expression measurement within the context of animal welfare. We conclude that facial expression determination in animals is a useful but underutilised measure that complements existing tools in the assessment of welfare.

  18. Facial EMG Responses to Emotional Expressions Are Related to Emotion Perception Ability

    PubMed Central

    Künecke, Janina; Hildebrandt, Andrea; Recio, Guillermo; Sommer, Werner; Wilhelm, Oliver

    2014-01-01

    Although most people can identify facial expressions of emotions well, they still differ in this ability. According to embodied simulation theories, understanding emotions of others is fostered by involuntarily mimicking the perceived expressions, causing a “reactivation” of the corresponding mental state. Some studies suggest automatic facial mimicry during expression viewing; however, findings on the relationship between mimicry and emotion perception abilities are equivocal. The present study investigated individual differences in emotion perception and its relationship to facial muscle responses, recorded with electromyogram (EMG), in response to emotional facial expressions. N = 269 participants completed multiple tasks measuring face and emotion perception. EMG recordings were taken from a subsample (N = 110) in an independent emotion classification task of short videos displaying six emotions. Confirmatory factor analyses of the m. corrugator supercilii in response to angry, happy, sad, and neutral expressions showed that individual differences in corrugator activity can be separated into a general response to all faces and an emotion-related response. Structural equation modeling revealed a substantial relationship between the emotion-related response and emotion perception ability, providing evidence for the role of facial muscle activation in emotion perception from an individual differences perspective. PMID:24489647

  19. The Facial Expressive Action Stimulus Test. A test battery for the assessment of face memory, face and object perception, configuration processing, and facial expression recognition.

    PubMed

    de Gelder, Beatrice; Huis In 't Veld, Elisabeth M J; Van den Stock, Jan

    2015-01-01

    There are many ways to assess face perception skills. In this study, we describe a novel task battery FEAST (Facial Expressive Action Stimulus Test) developed to test recognition of identity and expressions of human faces as well as stimulus control categories. The FEAST consists of a neutral and emotional face memory task, a face and shoe identity matching task, a face and house part-to-whole matching task, and a human and animal facial expression matching task. The identity and part-to-whole matching tasks contain both upright and inverted conditions. The results provide reference data of a healthy sample of controls in two age groups for future users of the FEAST.

  20. The Facial Expressive Action Stimulus Test. A test battery for the assessment of face memory, face and object perception, configuration processing, and facial expression recognition

    PubMed Central

    de Gelder, Beatrice; Huis in ‘t Veld, Elisabeth M. J.; Van den Stock, Jan

    2015-01-01

    There are many ways to assess face perception skills. In this study, we describe a novel task battery FEAST (Facial Expressive Action Stimulus Test) developed to test recognition of identity and expressions of human faces as well as stimulus control categories. The FEAST consists of a neutral and emotional face memory task, a face and shoe identity matching task, a face and house part-to-whole matching task, and a human and animal facial expression matching task. The identity and part-to-whole matching tasks contain both upright and inverted conditions. The results provide reference data of a healthy sample of controls in two age groups for future users of the FEAST. PMID:26579004

  1. The Development of Dynamic Facial Expression Recognition at Different Intensities in 4- to 18-Year-Olds

    ERIC Educational Resources Information Center

    Montirosso, Rosario; Peverelli, Milena; Frigerio, Elisa; Crespi, Monica; Borgatti, Renato

    2010-01-01

    The primary purpose of this study was to examine the effect of the intensity of emotion expression on children's developing ability to label emotion during a dynamic presentation of five facial expressions (anger, disgust, fear, happiness, and sadness). A computerized task (AFFECT--animated full facial expression comprehension test) was used to…

  2. Signal Characteristics of Spontaneous Facial Expressions: Automatic Movement in Solitary and Social Smiles

    PubMed Central

    Schmidt, Karen L.; Cohn, Jeffrey F.; Tian, Yingli

    2009-01-01

    The assumption that the smile is an evolved facial display suggests that there may be universal features of smiling in addition to the basic facial configuration. We show that smiles include not only a stable configuration of features, but also temporally consistent movement patterns. In spontaneous smiles from two social contexts, duration of lip corner movement during the onset phase was independent of social context and the presence of other facial movements, including dampening. These additional movements produced variation in both peak and offset duration. Both onsets and offsets had dynamic properties similar to automatically controlled movements, with a consistent relation between maximum velocity and amplitude of lip corner movement in smiles from two distinct contexts. Despite the effects of individual and social factors on facial expression timing overall, consistency in onset and offset phases suggests that portions of the smile display are relatively stereotyped and may be automatically produced. PMID:14638288

  3. Muscles of facial expression in the chimpanzee (Pan troglodytes): descriptive, comparative and phylogenetic contexts

    PubMed Central

    Burrows, Anne M; Waller, Bridget M; Parr, Lisa A; Bonar, Christopher J

    2006-01-01

    Facial expressions are a critical mode of non-vocal communication for many mammals, particularly non-human primates. Although chimpanzees (Pan troglodytes) have an elaborate repertoire of facial signals, little is known about the facial expression (i.e. mimetic) musculature underlying these movements, especially when compared with some other catarrhines. Here we present a detailed description of the facial muscles of the chimpanzee, framed in comparative and phylogenetic contexts, through the dissection of preserved faces using a novel approach. The arrangement and appearance of muscles were noted and compared with previous studies of chimpanzees and with prosimians, cercopithecoids and humans. The results showed 23 mimetic muscles in P. troglodytes, including a thin sphincter colli muscle, reported previously only in adult prosimians, a bi-layered zygomaticus major muscle and a distinct risorius muscle. The presence of these muscles in such definition supports previous studies that describe an elaborate and highly graded facial communication system in this species that remains qualitatively different from that reported for other non-human primate species. In addition, there are minimal anatomical differences between chimpanzees and humans, contrary to conclusions from previous studies. These results amplify the importance of understanding facial musculature in primate taxa, which may hold great taxonomic value. PMID:16441560

  4. An Effective 3D Ear Acquisition System.

    PubMed

    Liu, Yahui; Lu, Guangming; Zhang, David

    2015-01-01

    The human ear is a new feature in biometrics that has several merits over the more common face, fingerprint and iris biometrics. It can be easily captured from a distance without a fully cooperative subject. Also, the ear has a relatively stable structure that does not change much with age and facial expressions. In this paper, we present a novel 3D ear acquisition system based on the triangulation imaging principle, and the experimental results show that this design is efficient and can be used for ear recognition.

  5. An Effective 3D Ear Acquisition System

    PubMed Central

    Liu, Yahui; Lu, Guangming; Zhang, David

    2015-01-01

    The human ear is a new feature in biometrics that has several merits over the more common face, fingerprint and iris biometrics. It can be easily captured from a distance without a fully cooperative subject. Also, the ear has a relatively stable structure that does not change much with age and facial expressions. In this paper, we present a novel 3D ear acquisition system based on the triangulation imaging principle, and the experimental results show that this design is efficient and can be used for ear recognition. PMID:26061553

  6. Strategies for Perceiving Facial Expressions in Adults with Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Walsh, Jennifer A.; Vida, Mark D.; Rutherford, M. D.

    2014-01-01

    Rutherford and McIntosh (J Autism Dev Disord 37:187-196, 2007) demonstrated that individuals with autism spectrum disorder (ASD) are more tolerant than controls of exaggerated schematic facial expressions, suggesting that they may use an alternative strategy when processing emotional expressions. The current study was designed to test this finding…

  7. Production of Emotional Facial Expressions in European American, Japanese, and Chinese Infants.

    ERIC Educational Resources Information Center

    Camras, Linda A.; And Others

    1998-01-01

    European American, Japanese, and Chinese 11-month-olds participated in emotion-inducing laboratory procedures. Facial responses were scored with BabyFACS, an anatomically based coding system. Overall, Chinese infants were less expressive than European American and Japanese infants, suggesting that differences in expressivity between European…

  8. Positive, but Not Negative, Facial Expressions Facilitate 3-Month-Olds' Recognition of an Individual Face

    ERIC Educational Resources Information Center

    Brenna, Viola; Proietti, Valentina; Montirosso, Rosario; Turati, Chiara

    2013-01-01

    The current study examined whether and how the presence of a positive or a negative emotional expression may affect the face recognition process at 3 months of age. Using a familiarization procedure, Experiment 1 demonstrated that positive (i.e., happiness), but not negative (i.e., fear and anger) facial expressions facilitate infants' ability to…

  9. Nine-year-old children use norm-based coding to visually represent facial expression.

    PubMed

    Burton, Nichola; Jeffery, Linda; Skinner, Andrew L; Benton, Christopher P; Rhodes, Gillian

    2013-10-01

    Children are less skilled than adults at making judgments about facial expression. This could be because they have not yet developed adult-like mechanisms for visually representing faces. Adults are thought to represent faces in a multidimensional face-space, and have been shown to code the expression of a face relative to the norm or average face in face-space. Norm-based coding is economical and adaptive, and may be what makes adults more sensitive to facial expression than children. This study investigated the coding system that children use to represent facial expression. An adaptation aftereffect paradigm was used to test 24 adults and 18 children (9 years 2 months to 9 years 11 months old). Participants adapted to weak and strong antiexpressions. They then judged the expression of an average expression. Adaptation created aftereffects that made the test face look like the expression opposite that of the adaptor. Consistent with the predictions of norm-based but not exemplar-based coding, aftereffects were larger for strong than weak adaptors for both age groups. Results indicate that, like adults, children's coding of facial expressions is norm-based.

  10. Recognition of Facial Expressions of Emotion in Adults with Down Syndrome

    ERIC Educational Resources Information Center

    Virji-Babul, Naznin; Watt, Kimberley; Nathoo, Farouk; Johnson, Peter

    2012-01-01

    Research on facial expressions in individuals with Down syndrome (DS) has been conducted using photographs. Our goal was to examine the effect of motion on perception of emotional expressions. Adults with DS, adults with typical development matched for chronological age (CA), and children with typical development matched for developmental age (DA)…

  11. The Role of Facial Expressions in Attention-Orienting in Adults and Infants

    ERIC Educational Resources Information Center

    Rigato, Silvia; Menon, Enrica; Di Gangi, Valentina; George, Nathalie; Farroni, Teresa

    2013-01-01

    Faces convey many signals (i.e., gaze or expressions) essential for interpersonal interaction. We have previously shown that facial expressions of emotion and gaze direction are processed and integrated in specific combinations early in life. These findings open a number of developmental questions and specifically in this paper we address whether…

  12. Singing emotionally: a study of pre-production, production, and post-production facial expressions

    PubMed Central

    Quinto, Lena R.; Thompson, William F.; Kroos, Christian; Palmer, Caroline

    2014-01-01

    Singing involves vocal production accompanied by a dynamic and meaningful use of facial expressions, which may serve as ancillary gestures that complement, disambiguate, or reinforce the acoustic signal. In this investigation, we examined the use of facial movements to communicate emotion, focusing on movements arising in three epochs: before vocalization (pre-production), during vocalization (production), and immediately after vocalization (post-production). The stimuli were recordings of seven vocalists' facial movements as they sang short (14-syllable) melodic phrases with the intention of communicating happiness, sadness, irritation, or no emotion. Facial movements were presented as point-light displays to 16 observers who judged the emotion conveyed. Experiment 1 revealed that the accuracy of emotional judgment varied with singer, emotion, and epoch. Accuracy was highest in the production epoch; however, happiness was well communicated in the pre-production epoch. In Experiment 2, observers judged point-light displays of exaggerated movements. The ratings suggested that the extent of facial and head movements was largely perceived as a gauge of emotional arousal. In Experiment 3, observers rated point-light displays of scrambled movements. Configural information was removed in these stimuli but velocity and acceleration were retained. Exaggerated scrambled movements were likely to be associated with happiness or irritation whereas unexaggerated scrambled movements were more likely to be identified as “neutral.” An analysis of singers' facial movements revealed systematic changes as a function of the emotional intentions of singers. The findings confirm the central role of facial expressions in vocal emotional communication, and highlight individual differences between singers in the amount and intelligibility of facial movements made before, during, and after vocalization. PMID:24808868

  13. 3D fast wavelet network model-assisted 3D face recognition

    NASA Astrophysics Data System (ADS)

    Said, Salwa; Jemai, Olfa; Zaied, Mourad; Ben Amar, Chokri

    2015-12-01

    In recent years, 3D shape has emerged in face recognition because of its robustness to pose and illumination changes. These benefits do not, however, remove all the challenges to achieving a satisfactory recognition rate. Other challenges, such as facial expressions and the computing time of matching algorithms, remain to be explored. In this context, we propose a 3D face recognition approach using 3D wavelet networks. Our approach contains two stages: a learning stage and a recognition stage. For the learning stage, we propose a novel algorithm based on the 3D fast wavelet transform. From the 3D coordinates of the face (x, y, z), we perform voxelization to obtain a 3D volume, which is decomposed by the 3D fast wavelet transform and then modeled with a wavelet network; the associated network weights serve as the feature vector representing each training face. For the recognition stage, a face of unknown identity is projected onto all the trained wavelet networks, yielding a new feature vector after each projection; a similarity score is then computed between the stored and the obtained feature vectors. To show the efficiency of our approach, experiments were performed on the FRGC v.2 benchmark.
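
    A simplified sketch of the feature-extraction pipeline outlined above: voxelize the 3D coordinates, apply a 3D fast wavelet transform, and compare the resulting feature vectors with a similarity score. It substitutes a plain 3D discrete wavelet transform (pywt) for the authors' wavelet-network modelling, and the grid size and wavelet choice are assumptions.

        import numpy as np
        import pywt

        def face_features(points, grid=32):
            """points: (N, 3) array of x, y, z coordinates of one face scan."""
            volume, _ = np.histogramdd(points, bins=(grid, grid, grid))    # voxelization
            coeffs = pywt.dwtn(volume, wavelet="haar")                     # 3D fast wavelet transform
            return np.concatenate([c.ravel() for c in coeffs.values()])    # feature vector

        def similarity(f1, f2):
            return float(np.dot(f1, f2) / (np.linalg.norm(f1) * np.linalg.norm(f2) + 1e-12))

        # Recognition: a probe scan is assigned to the gallery identity with the highest score.
        # scores = {name: similarity(face_features(probe_pts), feats) for name, feats in gallery.items()}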

  14. The processing of facial identity and expression is interactive, but dependent on task and experience

    PubMed Central

    Yankouskaya, Alla; Humphreys, Glyn W.; Rotshtein, Pia

    2014-01-01

    Facial identity and emotional expression are two important sources of information for daily social interaction. However the link between these two aspects of face processing has been the focus of an unresolved debate for the past three decades. Three views have been advocated: (1) separate and parallel processing of identity and emotional expression signals derived from faces; (2) asymmetric processing with the computation of emotion in faces depending on facial identity coding but not vice versa; and (3) integrated processing of facial identity and emotion. We present studies with healthy participants that primarily apply methods from mathematical psychology, formally testing the relations between the processing of facial identity and emotion. Specifically, we focused on the “Garner” paradigm, the composite face effect and the divided attention tasks. We further ask whether the architecture of face-related processes is fixed or flexible and whether (and how) it can be shaped by experience. We conclude that formal methods of testing the relations between processes show that the processing of facial identity and expressions interact, and hence are not fully independent. We further demonstrate that the architecture of the relations depends on experience; where experience leads to higher degree of inter-dependence in the processing of identity and expressions. We propose that this change occurs as integrative processes are more efficient than parallel. Finally, we argue that the dynamic aspects of face processing need to be incorporated into theories in this field. PMID:25452722

  15. Putting the face in context: Body expressions impact facial emotion processing in human infants.

    PubMed

    Rajhans, Purva; Jessen, Sarah; Missana, Manuela; Grossmann, Tobias

    2016-06-01

    Body expressions exert strong contextual effects on facial emotion perception in adults. Specifically, conflicting body cues hamper the recognition of emotion from faces, as evident on both the behavioral and neural level. We examined the developmental origins of the neural processes involved in emotion perception across body and face in 8-month-old infants by measuring event-related brain potentials (ERPs). We primed infants with body postures (fearful, happy) that were followed by either congruent or incongruent facial expressions. Our results revealed that body expressions impact facial emotion processing and that incongruent body cues impair the neural discrimination of emotional facial expressions. Priming effects were associated with attentional and recognition memory processes, as reflected in a modulation of the Nc and Pc evoked at anterior electrodes. These findings demonstrate that 8-month-old infants possess neural mechanisms that allow for the integration of emotion across body and face, providing evidence for the early developmental emergence of context-sensitive facial emotion perception.

  16. Explicit recognition of emotional facial expressions is shaped by expertise: evidence from professional actors

    PubMed Central

    Conson, Massimiliano; Ponari, Marta; Monteforte, Eva; Ricciato, Giusy; Sarà, Marco; Grossi, Dario; Trojano, Luigi

    2013-01-01

    Can reading others' emotional states be shaped by expertise? We assessed processing of emotional facial expressions in professional actors trained either to voluntarily activate mimicry to reproduce a character's emotions (as foreseen by the “Mimic Method”), or to infer others' inner states from reading the emotional context (as foreseen by the “Stanislavski Method”). In explicit recognition of facial expressions (Experiment 1), the two experimental groups differed from each other and from a control group with no acting experience: the Mimic group was more accurate, whereas the Stanislavski group was slower. Neither type of acting experience, however, influenced implicit processing of emotional faces (Experiment 2). We argue that expertise can selectively influence explicit recognition of others' facial expressions, depending on the kind of “emotional expertise”. PMID:23825467

  17. Explicit recognition of emotional facial expressions is shaped by expertise: evidence from professional actors.

    PubMed

    Conson, Massimiliano; Ponari, Marta; Monteforte, Eva; Ricciato, Giusy; Sarà, Marco; Grossi, Dario; Trojano, Luigi

    2013-01-01

    Can reading others' emotional states be shaped by expertise? We assessed processing of emotional facial expressions in professional actors trained either to voluntarily activate mimicry to reproduce a character's emotions (as foreseen by the "Mimic Method"), or to infer others' inner states from reading the emotional context (as foreseen by the "Stanislavski Method"). In explicit recognition of facial expressions (Experiment 1), the two experimental groups differed from each other and from a control group with no acting experience: the Mimic group was more accurate, whereas the Stanislavski group was slower. Neither type of acting experience, however, influenced implicit processing of emotional faces (Experiment 2). We argue that expertise can selectively influence explicit recognition of others' facial expressions, depending on the kind of "emotional expertise".

  18. A Face Attention Technique for a Robot Able to Interpret Facial Expressions

    NASA Astrophysics Data System (ADS)

    Simplício, Carlos; Prado, José; Dias, Jorge

    Automatic facial expression recognition using vision is an important subject for human-robot interaction. Here, a human face focus-of-attention technique and a facial expression classifier (a Dynamic Bayesian Network) are proposed for incorporation into an autonomous mobile agent whose hardware comprises a robotic platform and a robotic head. The focus-of-attention technique is based on the symmetry presented by human faces. Using the output of this module, the autonomous agent always keeps the human face targeted frontally; to accomplish this, the robot platform moves along an arc centered at the human, and the robotic head moves in synchrony when necessary. In the proposed probabilistic classifier, information is propagated from the previous instant to the current instant in a lower level of the network. Moreover, the classifier uses not only positive but also negative evidence to recognize facial expressions.
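
    A toy sketch of the temporal propagation idea described above, written as a first-order Bayesian filter over a small set of expression states; the state list, transition matrix, and likelihood values are illustrative assumptions, not the network used in the paper.

        import numpy as np

        states = ["neutral", "happy", "angry", "surprised"]
        T = np.full((4, 4), 0.05) + np.eye(4) * 0.80        # sticky frame-to-frame transitions (rows sum to 1)
        belief = np.full(4, 0.25)                           # uniform prior at the first instant

        def update(belief, likelihood):
            """likelihood[i]: P(observed evidence | state i), combining positive and negative cues."""
            predicted = T.T @ belief                        # propagate belief from the previous instant
            posterior = predicted * likelihood
            return posterior / posterior.sum()

        belief = update(belief, np.array([0.1, 0.7, 0.1, 0.1]))   # a frame with smile-like evidence
        print(states[int(np.argmax(belief))], belief.round(3))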

  19. A selective emotional decision-making bias elicited by facial expressions.

    PubMed

    Furl, Nicholas; Gallagher, Shannon; Averbeck, Bruno B

    2012-01-01

    Emotional and social information can sway otherwise rational decisions. For example, when participants decide between two faces that are probabilistically rewarded, they make biased choices that favor smiling relative to angry faces. This bias may arise because facial expressions evoke positive and negative emotional responses, which in turn may motivate social approach and avoidance. We tested a wide range of pictures that evoke emotions or convey social information, including animals, words, foods, a variety of scenes, and faces differing in trustworthiness or attractiveness, but we found only facial expressions biased decisions. Our results extend brain imaging and pharmacological findings, which suggest that a brain mechanism supporting social interaction may be involved. Facial expressions appear to exert special influence over this social interaction mechanism, one capable of biasing otherwise rational choices. These results illustrate that only specific types of emotional experiences can best sway our choices.

  20. Improving subspace learning for facial expression recognition using person dependent and geometrically enriched training sets.

    PubMed

    Maronidis, Anastasios; Bolis, Dimitris; Tefas, Anastasios; Pitas, Ioannis

    2011-10-01

    In this paper, the robustness of appearance-based subspace learning techniques to geometrical transformations of the images is explored. A number of such techniques are presented and tested using four facial expression databases. A strong correlation between the recognition accuracy and the image registration error has been observed. Although it is common knowledge that appearance-based methods are sensitive to image registration errors, no systematic experiments have been reported in the literature. As a result of these experiments, enrichment of the training set with translated, scaled, and rotated images is proposed to counteract the low robustness of these techniques in facial expression recognition. Moreover, person-dependent training is shown to be much more accurate for facial expression recognition than generic learning.
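
    A minimal sketch of the training-set enrichment proposed above: each training image is augmented with small translations, rotations, and scalings before an appearance-based subspace is fitted. The perturbation ranges, the handling of scaled images, and the subspace size are assumptions made for illustration.

        import numpy as np
        from scipy.ndimage import shift, rotate, zoom
        from sklearn.decomposition import PCA

        def enrich(img):
            """Yield the original image plus translated, rotated, and scaled variants."""
            yield img
            for dx, dy in [(-2, 0), (2, 0), (0, -2), (0, 2)]:
                yield shift(img, (dy, dx), mode="nearest")
            for angle in (-5, 5):
                yield rotate(img, angle, reshape=False, mode="nearest")
            for factor in (0.95, 1.05):
                scaled = zoom(img, factor)
                canvas = np.zeros_like(img)
                h = min(img.shape[0], scaled.shape[0])
                w = min(img.shape[1], scaled.shape[1])
                canvas[:h, :w] = scaled[:h, :w]
                yield canvas

        # train_imgs: list of (H, W) face images, one expression label per image.
        # X = np.array([v.ravel() for img in train_imgs for v in enrich(img)])    # 9 variants per image
        # subspace = PCA(n_components=50).fit(X)   # appearance-based subspace learned on the enriched set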

  1. Neural substrates of human facial expression of pleasant emotion induced by comic films: a PET Study.

    PubMed

    Iwase, Masao; Ouchi, Yasuomi; Okada, Hiroyuki; Yokoyama, Chihiro; Nobezawa, Shuji; Yoshikawa, Etsuji; Tsukada, Hideo; Takeda, Masaki; Yamashita, Ko; Takeda, Masatoshi; Yamaguti, Kouzi; Kuratsune, Hirohiko; Shimizu, Akira; Watanabe, Yasuyoshi

    2002-10-01

    Laughter or smiling is one of the emotional expressions of pleasantness, with characteristic contraction of the facial muscles, whose neural substrate remains to be explored. The present study is the first to investigate the generation of human facial expressions of pleasant emotion using positron emission tomography and H₂¹⁵O. Regional cerebral blood flow (rCBF) during laughter/smile induced by visual comics correlated significantly with the magnitude of laughter/smile in the bilateral supplementary motor area (SMA) and left putamen (P < 0.05, corrected), but not in the primary motor area (M1). During voluntary facial movement, a significant correlation between rCBF and the magnitude of EMG was found in the face area of bilateral M1 and the SMA (P < 0.001, uncorrected). Laughter/smile, as opposed to voluntary movement, activated the visual association areas, left anterior temporal cortex, left uncus, and orbitofrontal and medial prefrontal cortices (P < 0.05, corrected), whereas voluntary facial movement generated by mimicking a laughing/smiling face activated the face area of the left M1 and bilateral SMA, compared with laughter/smile (P < 0.05, corrected). We demonstrated distinct neural substrates of emotional and volitional facial expression and characterized the cognitive and experiential processes of a pleasant emotion, laughter/smile.

  2. Specific impairments in the recognition of emotional facial expressions in Parkinson's disease.

    PubMed

    Clark, Uraina S; Neargarder, Sandy; Cronin-Golomb, Alice

    2008-01-01

    Studies investigating the ability to recognize emotional facial expressions in non-demented individuals with Parkinson's disease (PD) have yielded equivocal findings. A possible reason for this variability may lie in the confounding of emotion recognition with cognitive task requirements, a confound arising from the lack of a control condition using non-emotional stimuli. The present study examined emotional facial expression recognition abilities in 20 non-demented patients with PD and 23 control participants relative to their performance on a non-emotional landscape categorization test with comparable task requirements. We found that PD participants were normal on the control task but exhibited selective impairments in the recognition of facial emotion, specifically for anger (driven by those with right hemisphere pathology) and surprise (driven by those with left hemisphere pathology), even when controlling for depression level. Male but not female PD participants further displayed specific deficits in the recognition of fearful expressions. We suggest that the neural substrates that may subserve these impairments include the ventral striatum, amygdala, and prefrontal cortices. Finally, we observed that in PD participants, deficiencies in facial emotion recognition correlated with higher levels of interpersonal distress, which calls attention to the significant psychosocial impact that facial emotion recognition impairments may have on individuals with PD.

  3. Distributed representations of dynamic facial expressions in the superior temporal sulcus

    PubMed Central

    Said, Christopher P.; Moore, Christopher D.; Engell, Andrew D.; Todorov, Alexander; Haxby, James V.

    2011-01-01

    Previous research on the superior temporal sulcus (STS) has shown that it responds more to facial expressions than to neutral faces. Here, we extend our understanding of the STS in two ways. First, using targeted high-resolution fMRI measurements of the lateral cortex and multivoxel pattern analysis, we show that the response to seven categories of dynamic facial expressions can be decoded in both the posterior STS (pSTS) and anterior STS (aSTS). We were also able to decode patterns corresponding to these expressions in the frontal operculum (FO), a structure that has also been shown to respond to facial expressions. Second, we measured the similarity structure of these representations and found that the similarity structure in the pSTS significantly correlated with the perceptual similarity structure of the expressions. This was the case regardless of whether we used pattern classification or more traditional correlation techniques to extract the neural similarity structure. These results suggest that distributed representations in the pSTS could underlie the perception of facial expressions. PMID:20616141

  4. Facial Feedback Affects Perceived Intensity but Not Quality of Emotional Expressions

    PubMed Central

    Lobmaier, Janek S.; Fischer, Martin H.

    2015-01-01

    Motivated by conflicting evidence in the literature, we re-assessed the role of facial feedback when detecting quantitative or qualitative changes in others’ emotional expressions. Fifty-three healthy adults observed self-paced morph sequences where the emotional facial expression either changed quantitatively (i.e., sad-to-neutral, neutral-to-sad, happy-to-neutral, neutral-to-happy) or qualitatively (i.e. from sad to happy, or from happy to sad). Observers held a pen in their own mouth to induce smiling or frowning during the detection task. When morph sequences started or ended with neutral expressions we replicated a congruency effect: Happiness was perceived longer and sooner while smiling; sadness was perceived longer and sooner while frowning. Interestingly, no such congruency effects occurred for transitions between emotional expressions. These results suggest that facial feedback is especially useful when evaluating the intensity of a facial expression, but less so when we have to recognize which emotion our counterpart is expressing. PMID:26343732

  5. Modelling the perceptual similarity of facial expressions from image statistics and neural responses.

    PubMed

    Sormaz, Mladen; Watson, David M; Smith, William A P; Young, Andrew W; Andrews, Timothy J

    2016-04-01

    The ability to perceive facial expressions of emotion is essential for effective social communication. We investigated how the perception of facial expression emerges from the image properties that convey this important social signal, and how neural responses in face-selective brain regions might track these properties. To do this, we measured the perceptual similarity between expressions of basic emotions, and investigated how this is reflected in image measures and in the neural response of different face-selective regions. We show that the perceptual similarity of different facial expressions (fear, anger, disgust, sadness, happiness) can be predicted by both surface and feature shape information in the image. Using block design fMRI, we found that the perceptual similarity of expressions could also be predicted from the patterns of neural response in the face-selective posterior superior temporal sulcus (STS), but not in the fusiform face area (FFA). These results show that the perception of facial expression is dependent on the shape and surface properties of the image and on the activity of specific face-selective regions.
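
    A small sketch of the comparison described above: pairwise dissimilarity structures for the five expressions are built from image measures and from voxel response patterns, and the two structures are then correlated. The feature and pattern matrices below are random placeholders standing in for real image measures and fMRI data.

        import numpy as np
        from scipy.spatial.distance import pdist
        from scipy.stats import spearmanr

        expressions = ["fear", "anger", "disgust", "sadness", "happiness"]
        rng = np.random.default_rng(0)

        image_features = rng.normal(size=(5, 200))    # e.g. shape/surface measures per expression
        roi_patterns = rng.normal(size=(5, 500))      # e.g. voxel response pattern per expression

        image_rdm = pdist(image_features, metric="correlation")    # condensed dissimilarity vectors
        neural_rdm = pdist(roi_patterns, metric="correlation")

        rho, p = spearmanr(image_rdm, neural_rdm)     # do the two similarity structures align?
        print(f"rank correlation between image-based and neural structures: {rho:.2f} (p = {p:.3f})")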

  6. Generalized hostile interpretation bias regarding facial expressions: Characteristic of pathological aggressive behavior.

    PubMed

    Smeijers, Danique; Rinck, Mike; Bulten, Erik; van den Heuvel, Thom; Verkes, Robbert-Jan

    2017-02-12

    Individuals with aggression regulation disorders tend to attribute hostility to others in socially ambiguous situations. Previous research suggests that this "hostile attribution bias" is a powerful cause of aggression. Facial expressions form important cues in the appreciation of others' intentions. Furthermore, accurate processing of facial expressions is fundamental to normal socialization. However, research on interpretation biases in facial affect is limited. It is asserted that a hostile interpretation bias (HIB) is likely to be displayed by individuals with antisocial (ASPD) or borderline personality disorder (BPD), and probably also by those with intermittent explosive disorder (IED). However, little is known about the extent to which this bias is displayed by each of these patient groups. The present study investigated whether a HIB regarding emotional facial expressions was displayed by forensic psychiatric outpatients (FPOs) and whether it was associated with ASPD and BPD in general or, more specifically, with a disposition to react with pathological aggression. Participants from five different groups were recruited: FPOs with ASPD, BPD, or IED, non-forensic patients with BPD (nFPOs-BPD), and healthy, non-aggressive controls (HCs). Results suggest that only FPOs with ASPD, BPD, or IED exhibit a HIB regarding emotional facial expressions. Moreover, this bias was associated with type and severity of aggression, trait aggression, and cognitive distortions. The results suggest that a HIB regarding facial expressions is an important characteristic of pathological aggressive behavior. Interventions that modify the HIB might help to reduce the recurrence of aggression.

  7. Intermodal Perception of Fully Illuminated and Point Light Displays of Dynamic Facial Expressions by 7-Month-Old Infants.

    ERIC Educational Resources Information Center

    Soken, Nelson; And Others

    This study considered two questions about infants' perception of affective expressions: (1) Can infants distinguish between happiness and anger on the basis of facial motion information alone? (2) Can infants detect a correspondence between happy and angry facial and vocal expressions by different people? A total of 40 infants of 7 months of age…

  8. Social Adjustment, Academic Adjustment, and the Ability to Identify Emotion in Facial Expressions of 7-Year-Old Children

    ERIC Educational Resources Information Center

    Goodfellow, Stephanie; Nowicki, Stephen, Jr.

    2009-01-01

    The authors aimed to examine the possible association between (a) accurately reading emotion in facial expressions and (b) social and academic competence among elementary school-aged children. Participants were 840 7-year-old children who completed a test of the ability to read emotion in facial expressions. Teachers rated children's social and…

  9. Emotional Facial and Vocal Expressions during Story Retelling by Children and Adolescents with High-Functioning Autism

    ERIC Educational Resources Information Center

    Grossman, Ruth B.; Edelson, Lisa R.; Tager-Flusberg, Helen

    2013-01-01

    Purpose: People with high-functioning autism (HFA) have qualitative differences in facial expression and prosody production, which are rarely systematically quantified. The authors' goals were to qualitatively and quantitatively analyze prosody and facial expression productions in children and adolescents with HFA. Method: Participants were 22…

  10. Perceptions of social dominance through facial emotion expressions in euthymic patients with bipolar I disorder.

    PubMed

    Kim, Sung Hwa; Ryu, Vin; Ha, Ra Yeon; Lee, Su Jin; Cho, Hyun-Sang

    2016-04-01

    The ability to accurately perceive dominance in the social hierarchy is important for successful social interactions. However, little is known about dominance perception of emotional stimuli in bipolar disorder. The aim of this study was to investigate the perception of social dominance in patients with bipolar I disorder in response to six facial emotional expressions. Participants included 35 euthymic patients and 45 healthy controls. Bipolar patients showed a lower perception of social dominance based on anger, disgust, fear, and neutral facial emotional expressions compared to healthy controls. A negative correlation was observed between motivation to pursue goals or residual manic symptoms and perceived dominance of negative facial emotions such as anger, disgust, and fear in bipolar patients. These results suggest that bipolar patients have an altered perception of social dominance that might result in poor interpersonal functioning. Training of appropriate dominance perception using various emotional stimuli may be helpful in improving social relationships for individuals with bipolar disorder.

  11. Seeing a haptically explored face: visual facial-expression aftereffect from haptic adaptation to a face.

    PubMed

    Matsumiya, Kazumichi

    2013-10-01

    Current views on face perception assume that the visual system receives only visual facial signals. However, I show that the visual perception of faces is systematically biased by adaptation to a haptically explored face. Recently, face aftereffects (FAEs; the altered perception of faces after adaptation to a face) have been demonstrated not only in visual perception but also in haptic perception; therefore, I combined the two FAEs to examine whether the visual system receives face-related signals from the haptic modality. I found that adaptation to a haptically explored facial expression on a face mask produced a visual FAE for facial expression. This cross-modal FAE was not due to explicitly imaging a face, response bias, or adaptation to local features. Furthermore, FAEs transferred from vision to haptics. These results indicate that visual face processing depends on substrates adapted by haptic faces, which suggests that face processing relies on shared representation underlying cross-modal interactions.

  12. DNA vaccines expressing soluble CD4-envelope proteins fused to C3d elicit cross-reactive neutralizing antibodies to HIV-1

    SciTech Connect

    Bower, Joseph F.; Green, Thomas D.; Ross, Ted M. (E-mail: tmr15@pitt.edu)

    2004-10-25

    DNA vaccines expressing the envelope (Env) of the human immunodeficiency virus type 1 (HIV-1) have been relatively ineffective at generating high-titer, long-lasting, neutralizing antibodies in a variety of animal models. In this study, DNA vaccines were constructed to express a fusion protein of the soluble human CD4 (sCD4) and the gp120 subunit of the HIV-1 envelope. To enhance the immunogenicity of the expressed fusion protein, three copies of the murine C3d (mC3d₃) were added to the carboxyl terminus of the complex. Monoclonal antibodies that recognize CD4-induced epitopes on gp120 efficiently bound to sCD4-gp120 or sCD4-gp120-mC3d₃. In addition, both sCD4-gp120 and sCD4-gp120-mC3d₃ bound to cells expressing appropriate coreceptors in the absence of cell surface hCD4. Mice (BALB/c) vaccinated with DNA vaccines expressing either gp120-mC3d₃ or sCD4-gp120-mC3d₃ elicited antibodies that neutralized homologous virus infection. However, the use of sCD4-gp120-mC3d₃-DNA elicited the highest titers of neutralizing antibodies that persisted after depletion of anti-hCD4 antibodies. Interestingly, only mice vaccinated with DNA expressing sCD4-gp120-mC3d₃ had antibodies that elicited cross-protective neutralizing antibodies. The fusion of sCD4 to the HIV-1 envelope exposes neutralizing epitopes that elicit broad protective immunity when the fusion complex is coupled with the molecular adjuvant, C3d.

  13. Single-trial ERP analysis reveals facial expression category in a three-stage scheme.

    PubMed

    Zhang, Dandan; Luo, Wenbo; Luo, Yuejia

    2013-05-28

    Emotional faces are salient stimuli that play a critical role in social interactions. Following up on previous research suggesting that event-related potentials (ERPs) show differential amplitudes in response to various facial expressions, the current study used trial-to-trial variability assembled from six discriminating ERP components to predict the facial expression categories in individual trials. In an experiment involving 17 participants, fearful trials were differentiated from non-fearful trials as early as the intervals of N1 and P1, with a mean predictive accuracy of 87%. Single-trial features in the occurrence of N170 and vertex positive potential could distinguish between emotional and neutral expressions (accuracy=90%). Finally, the trials associated with fearful, happy, and neutral faces were completely separated during the window of N3 and P3 (accuracy=83%). These categorization findings elucidated the temporal evolution of facial expression extraction, and demonstrated that the spatio-temporal characteristics of single-trial ERPs can distinguish facial expressions according to a three-stage scheme, with "fear popup," "emotional/unemotional discrimination," and "complete separation" as processing stages. This work constitutes the first examination of neural processing dynamics beyond multitrial ERP averaging, and directly relates the prediction performance of single-trial classifiers to the progressive brain functions of emotional face discrimination.
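
    A rough sketch of single-trial classification from ERP component windows, in the spirit of the study above. The component time windows, sampling rate, baseline length, classifier, and synthetic data are all assumptions; a real analysis would use the recorded epochs and the components identified in the paper.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        FS = 250                                                                  # sampling rate in Hz (assumed)
        WINDOWS = {"P1": (0.08, 0.13), "N170": (0.13, 0.20), "P3": (0.30, 0.50)}  # seconds after stimulus onset

        def component_features(epochs, baseline=0.1):
            """Mean amplitude per channel in each component window; epochs: trials x channels x samples."""
            feats = []
            for start, stop in WINDOWS.values():
                a, b = int((start + baseline) * FS), int((stop + baseline) * FS)
                feats.append(epochs[:, :, a:b].mean(axis=2))                      # trials x channels
            return np.concatenate(feats, axis=1)

        # Fearful vs. non-fearful trials (placeholder data: 200 trials, 32 channels, 0.8 s epochs):
        epochs = np.random.randn(200, 32, 200)
        labels = np.random.randint(0, 2, 200)
        acc = cross_val_score(LogisticRegression(max_iter=1000),
                              component_features(epochs), labels, cv=5).mean()
        print(f"cross-validated single-trial accuracy: {acc:.2f}")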

  14. Time-Delay Neural Network for Continuous Emotional Dimension Prediction From Facial Expression Sequences.

    PubMed

    Meng, Hongying; Bianchi-Berthouze, Nadia; Deng, Yangdong; Cheng, Jinkuang; Cosmas, John P

    2016-04-01

    Automatic continuous affective state prediction from naturalistic facial expression is a very challenging research topic but very important in human-computer interaction. One of the main challenges is modeling the dynamics that characterize naturalistic expressions. In this paper, a novel two-stage automatic system is proposed to continuously predict affective dimension values from facial expression videos. In the first stage, traditional regression methods are used to classify each individual video frame, while in the second stage, a time-delay neural network (TDNN) is proposed to model the temporal relationships between consecutive predictions. The two-stage approach separates the emotional state dynamics modeling from an individual emotional state prediction step based on input features. In doing so, the temporal information used by the TDNN is not biased by the high variability between features of consecutive frames and allows the network to more easily exploit the slow changing dynamics between emotional states. The system was fully tested and evaluated on three different facial expression video datasets. Our experimental results demonstrate that the use of a two-stage approach combined with the TDNN to take into account previously classified frames significantly improves the overall performance of continuous emotional state estimation in naturalistic facial expressions. The proposed approach has won the affect recognition sub-challenge of the Third International Audio/Visual Emotion Recognition Challenge.
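
    A condensed sketch of the two-stage approach described above: a first model predicts the affective dimension frame by frame from facial features, and a second model re-predicts each value from a short window of delayed first-stage outputs (a small MLP over delayed inputs is used here as a stand-in for the paper's TDNN). The window length, the specific models, and the synthetic data are assumptions.

        import numpy as np
        from sklearn.linear_model import Ridge
        from sklearn.neural_network import MLPRegressor

        def delay_embed(seq, delays=10):
            """Stack each value with its previous `delays` values (zero-padded at the start)."""
            padded = np.concatenate([np.zeros(delays), seq])
            return np.stack([padded[i:i + delays + 1] for i in range(len(seq))])

        frame_feats = np.random.randn(1000, 40)                                   # per-frame facial features
        valence = np.convolve(np.random.randn(1000), np.ones(25) / 25, mode="same")  # slow-changing target

        stage1 = Ridge().fit(frame_feats, valence)                                # stage 1: per-frame regression
        per_frame = stage1.predict(frame_feats)

        stage2 = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
        stage2.fit(delay_embed(per_frame), valence)                               # stage 2: model over delayed outputs
        smoothed = stage2.predict(delay_embed(per_frame))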

  15. Beyond pleasure and pain: Facial expression ambiguity in adults and children during intense situations.

    PubMed

    Wenzler, Sofia; Levine, Sarah; van Dick, Rolf; Oertel-Knöchel, Viola; Aviezer, Hillel

    2016-09-01

    According to psychological models as well as common intuition, intense positive and negative situations evoke highly distinct emotional expressions. Nevertheless, recent work has shown that when judging isolated faces, the affective valence of winning and losing professional tennis players is hard to differentiate. However, expressions produced by professional athletes during publicly broadcasted sports events may be strategically controlled. To shed light on this matter we examined if ordinary people's spontaneous facial expressions evoked during highly intense situations are diagnostic for the situational valence. In Experiment 1, we compared reactions to highly intense positive situations (surprise soldier reunions) with reactions to highly intense negative situations (terror attacks). In Experiment 2, we turned to children and compared facial reactions to highly positive situations (e.g., a child receiving a surprise trip to Disneyland) with reactions to highly negative situations (e.g., a child discovering her parents ate up all her Halloween candy). The results demonstrate that facial expressions of both adults and children are often not diagnostic for the valence of the situation. These findings demonstrate the ambiguity of extreme facial expressions and highlight the importance of context in everyday emotion perception.

  16. Multiple faces of pain: effects of chronic pain on the brain regulation of facial expression.

    PubMed

    Vachon-Presseau, Etienne; Roy, Mathieu; Woo, Choong-Wan; Kunz, Miriam; Martel, Marc-Olivier; Sullivan, Michael J; Jackson, Philip L; Wager, Tor D; Rainville, Pierre

    2016-08-01

    Pain behaviors are shaped by social demands and learning processes, and chronic pain has been previously suggested to affect their meaning. In this study, we combined functional magnetic resonance imaging with in-scanner video recording during thermal pain stimulations and used multilevel mediation analyses to study the brain mediators of pain facial expressions and the perception of pain intensity (self-reports) in healthy individuals and patients with chronic back pain (CBP). Behavioral data showed that the relation between pain expression and pain report was disrupted in CBP. In both patients with CBP and healthy controls, brain activity varying on a trial-by-trial basis with pain facial expressions was mainly located in the primary motor cortex and completely dissociated from the pattern of brain activity varying with pain intensity ratings. Stronger activity was observed in CBP specifically during pain facial expressions in several nonmotor brain regions such as the medial prefrontal cortex, the precuneus, and the medial temporal lobe. In sharp contrast, no moderating effect of chronic pain was observed on brain activity associated with pain intensity ratings. Our results demonstrate that pain facial expressions and pain intensity ratings reflect different aspects of pain processing and support psychosocial models of pain suggesting that distinctive mechanisms are involved in the regulation of pain behaviors in chronic pain.

  17. Investigating the brain basis of facial expression perception using multi-voxel pattern analysis.

    PubMed

    Wegrzyn, Martin; Riehle, Marcel; Labudda, Kirsten; Woermann, Friedrich; Baumgartner, Florian; Pollmann, Stefan; Bien, Christian G; Kissler, Johanna

    2015-08-01

    Humans can readily decode emotion expressions from faces and perceive them in a categorical manner. The model by Haxby and colleagues proposes a number of different brain regions with each taking over specific roles in face processing. One key question is how these regions directly compare to one another in successfully discriminating between various emotional facial expressions. To address this issue, we compared the predictive accuracy of all key regions from the Haxby model using multi-voxel pattern analysis (MVPA) of functional magnetic resonance imaging (fMRI) data. Regions of interest were extracted using independent meta-analytical data. Participants viewed four classes of facial expressions (happy, angry, fearful and neutral) in an event-related fMRI design, while performing an orthogonal gender recognition task. Activity in all regions allowed for robust above-chance predictions. When directly comparing the regions to one another, fusiform gyrus and superior temporal sulcus (STS) showed highest accuracies. These results underscore the role of the fusiform gyrus as a key region in perception of facial expressions, alongside STS. The study suggests the need for further specification of the relative role of the various brain areas involved in the perception of facial expression. Face processing appears to rely on more interactive and functionally overlapping neural mechanisms than previously conceptualised.
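
    A schematic version of the region comparison above: four expression classes are decoded from voxel patterns in each region of interest with a cross-validated linear classifier, and the regions are compared by their accuracies. Two of the region names (fusiform gyrus, STS) follow the abstract; the third region and all pattern matrices are placeholders rather than fMRI data.

        import numpy as np
        from sklearn.svm import LinearSVC
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(1)
        n_trials = 160
        labels = np.repeat(["happy", "angry", "fearful", "neutral"], n_trials // 4)

        rois = {                                      # trials x voxels pattern matrix per region
            "fusiform gyrus": rng.normal(size=(n_trials, 300)),
            "STS": rng.normal(size=(n_trials, 300)),
            "amygdala": rng.normal(size=(n_trials, 150)),
        }

        for name, patterns in rois.items():
            acc = cross_val_score(LinearSVC(max_iter=5000), patterns, labels, cv=8).mean()
            print(f"{name}: mean decoding accuracy {acc:.2f} (chance = 0.25)")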

  18. Face or body? Oxytocin improves perception of emotions from facial expressions in incongruent emotional body context.

    PubMed

    Perry, Anat; Aviezer, Hillel; Goldstein, Pavel; Palgi, Sharon; Klein, Ehud; Shamay-Tsoory, Simone G

    2013-11-01

    The neuropeptide oxytocin (OT) has been repeatedly reported to play an essential role in the regulation of social cognition in humans in general, and specifically in enhancing the recognition of emotions from facial expressions. The latter has been assessed in paradigms that rely primarily on isolated and decontextualized emotional faces. However, recent evidence has indicated that the perception of basic facial expressions is not context invariant and can be categorically altered by context, especially body context, at early perceptual levels. Body context has a strong effect on our perception of emotional expressions, especially when the actual target face and the contextually expected face are perceptually similar. To examine whether and how OT affects emotion recognition, we investigated the role of OT in categorizing facial expressions in incongruent body contexts. Our results show that in the combined process of deciphering emotions from facial expressions and from context, OT gives an advantage to the face. This advantage is most evident when the target face and the contextually expected face are perceptually similar.

  19. EMOTION RECOGNITION OF VIRTUAL AGENTS FACIAL EXPRESSIONS: THE EFFECTS OF AGE AND EMOTION INTENSITY

    PubMed Central

    Beer, Jenay M.; Fisk, Arthur D.; Rogers, Wendy A.

    2014-01-01

    People make determinations about the social characteristics of an agent (e.g., robot or virtual agent) by interpreting social cues displayed by the agent, such as facial expressions. Although a considerable amount of research has been conducted investigating age-related differences in emotion recognition of human faces (e.g., Sullivan & Ruffman, 2004), the effect of age on emotion identification of virtual agent facial expressions has been largely unexplored. Age-related differences in emotion recognition of facial expressions are an important factor to consider in the design of agents that may assist older adults in a recreational or healthcare setting. The purpose of the current research was to investigate whether age-related differences in facial emotion recognition extend to emotion-expressive virtual agents. Younger and older adults performed a recognition task with a virtual agent expressing six basic emotions. Larger age-related differences were expected for virtual agents displaying negative emotions, such as anger, sadness, and fear. In fact, the results indicated that older adults showed a decrease in emotion recognition accuracy for a virtual agent's emotions of anger, fear, and happiness. PMID:25552896

  20. Does Parkinson's disease lead to alterations in the facial expression of pain?

    PubMed

    Priebe, Janosch A; Kunz, Miriam; Morcinek, Christian; Rieckmann, Peter; Lautenbacher, Stefan

    2015-12-15

    Hypomimia, which refers to a reduced degree of facial expressiveness, is a common sign in Parkinson's disease (PD). The objective of our study was to investigate how hypomimia affects PD patients' facial expression of pain. The facial expressions of 23 idiopathic PD patients in the Off-phase (without dopaminergic medication) and On-phase (after dopaminergic medication intake) and 23 matched controls in response to phasic heat-pain and a temporal summation procedure were recorded and analyzed for overall and specific alterations using the Facial Action Coding System (FACS). We found reduced overall facial activity in response to pain in PD patients in the Off-phase, which was less pronounced in the On-phase. In particular, the highly pain-relevant eye-narrowing occurred less frequently in PD patients than in controls in both phases, while frequencies of other pain-relevant movements, like upper lip raise (in the On-phase) and contraction of the eyebrows (in both phases), did not differ between groups. Moreover, opening of the mouth (which is often not considered pain-relevant) was the most frequently displayed movement in PD patients, whereas eye-narrowing was the most frequent movement in controls. Not only overall quantitative changes in the degree of facial pain expressiveness occurred in PD patients, but qualitative changes were also found. The latter refer to a strongly affected encoding of the sensory dimension of pain (eye-narrowing), while the encoding of the affective dimension of pain (contraction of the eyebrows) was preserved. This imbalanced pain signal might affect pain communication and pain assessment.
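
    A minimal sketch of how a frequency comparison of FACS-coded movements might look, assuming a long-format table with one row per observed action unit; the column names, action-unit labels, and group labels are illustrative, not the study's coding scheme.

        # Illustrative sketch (not the study's code): relative frequency of FACS
        # action units per group, from a table of coded facial responses.
        import pandas as pd

        # Example coded data: one row per observed facial movement.
        data = pd.DataFrame({
            "group": ["PD", "PD", "PD", "control", "control", "control"],
            "action_unit": ["AU26_mouth_open", "AU4_brow_lower", "AU26_mouth_open",
                            "AU6_7_eye_narrow", "AU6_7_eye_narrow", "AU4_brow_lower"],
        })

        # Relative frequency of each action unit within each group
        freq = (data.groupby("group")["action_unit"]
                    .value_counts(normalize=True)
                    .rename("relative_frequency")
                    .reset_index())
        print(freq)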

  1. The role of spatial frequency information in the recognition of facial expressions of pain.

    PubMed

    Wang, Shan; Eccleston, Christopher; Keogh, Edmund

    2015-09-01

    Being able to detect pain from facial expressions is critical for pain communication. Alongside identifying the specific facial codes used in pain recognition, there are other types of more basic perceptual features, such as spatial frequency (SF), which refers to the amount of detail in a visual display. Low SF carries coarse information, which can be seen from a distance, and high SF carries fine-detailed information that can only be perceived when viewed close up. Because this type of basic information has not been considered in the recognition of pain, we investigated the role of low-SF and high-SF information in the decoding of facial expressions of pain. Sixty-four pain-free adults completed 2 independent tasks: a multiple expression identification task of pain and core emotional expressions and a dual expression "either-or" task (pain vs fear, pain vs happiness). Although both low-SF and high-SF information make the recognition of pain expressions possible, low-SF information seemed to play a more prominent role. This general low-SF bias would seem an advantageous way of potential threat detection, as facial displays will be degraded if viewed from a distance or in peripheral vision. One exception was found, however, in the "pain-fear" task, where responses were not affected by SF type. Together, this not only indicates a flexible role for SF information that depends on task parameters (goal context) but also suggests that in challenging visual conditions, we perceive an overall affective quality of pain expressions rather than detailed facial features.
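
    To make the low-SF/high-SF distinction concrete, the sketch below produces a coarse (low-pass) and a fine-detail (high-pass) version of an image with Gaussian filtering. The cutoff values and the placeholder image are arbitrary choices, not those used in the study.

        # Sketch of producing low-SF and high-SF versions of a grayscale image.
        import numpy as np
        from scipy.ndimage import gaussian_filter

        face = np.random.rand(256, 256)                   # placeholder for a grayscale face image

        low_sf = gaussian_filter(face, sigma=8)           # blur keeps only coarse information
        high_sf = face - gaussian_filter(face, sigma=2)   # residual keeps fine detail

        print(low_sf.shape, high_sf.shape)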

  2. Beyond face value: does involuntary emotional anticipation shape the perception of dynamic facial expressions?

    PubMed

    Palumbo, Letizia; Jellema, Tjeerd

    2013-01-01

    Emotional facial expressions are immediate indicators of the affective dispositions of others. Recently it has been shown that early stages of social perception can already be influenced by (implicit) attributions made by the observer about the agent's mental state and intentions. In the current study possible mechanisms underpinning distortions in the perception of dynamic, ecologically-valid, facial expressions were explored. In four experiments we examined to what extent basic perceptual processes such as contrast/context effects, adaptation and representational momentum underpinned the perceptual distortions, and to what extent 'emotional anticipation', i.e. the involuntary anticipation of the other's emotional state of mind on the basis of the immediate perceptual history, might have played a role. Neutral facial expressions displayed at the end of short video-clips, in which an initial facial expression of joy or anger gradually morphed into a neutral expression, were misjudged as being slightly angry or slightly happy, respectively (Experiment 1). This response bias disappeared when the actor's identity changed in the final neutral expression (Experiment 2). Videos depicting neutral-to-joy-to-neutral and neutral-to-anger-to-neutral sequences again produced biases but in opposite direction (Experiment 3). The bias survived insertion of a 400 ms blank (Experiment 4). These results suggested that the perceptual distortions were not caused by any of the low-level perceptual mechanisms (adaptation, representational momentum and contrast effects). We speculate that especially when presented with dynamic, facial expressions, perceptual distortions occur that reflect 'emotional anticipation' (a low-level mindreading mechanism), which overrules low-level visual mechanisms. Underpinning neural mechanisms are discussed in relation to the current debate on action and emotion understanding.

  3. Transcranial magnetic stimulation disrupts the perception and embodiment of facial expressions.

    PubMed

    Pitcher, David; Garrido, Lúcia; Walsh, Vincent; Duchaine, Bradley C

    2008-09-03

    Theories of embodied cognition propose that recognizing facial expressions requires visual processing followed by simulation of the somatovisceral responses associated with the perceived expression. To test this proposal, we targeted the right occipital face area (rOFA) and the face region of right somatosensory cortex (rSC) with repetitive transcranial magnetic stimulation (rTMS) while participants discriminated facial expressions. rTMS selectively impaired discrimination of facial expressions at both sites but had no effect on a matched face identity task. Site specificity within the rSC was demonstrated by targeting rTMS at the face and finger regions while participants performed the expression discrimination task. rTMS targeted at the face region impaired task performance relative to rTMS targeted at the finger region. To establish the temporal course of visual and somatosensory contributions to expression processing, double-pulse TMS was delivered at different times to rOFA and rSC during expression discrimination. Accuracy dropped when pulses were delivered at 60-100 ms at rOFA and at 100-140 and 130-170 ms at rSC. These sequential impairments at rOFA and rSC support embodied accounts of expression recognition as well as hierarchical models of face processing. The results also demonstrate that nonvisual cortical areas contribute during early stages of expression processing.

  4. Reduced expression of regeneration associated genes in chronically axotomized facial motoneurons.

    PubMed

    Gordon, T; You, S; Cassar, S L; Tetzlaff, W

    2015-02-01

    Chronically axotomized motoneurons progressively fail to regenerate their axons. Since axonal regeneration is associated with the increased expression of tubulin, actin and GAP-43, we examined whether the regenerative failure is due to failure of chronically axotomized motoneurons to express and sustain the expression of these regeneration associated genes (RAGs). Chronically axotomized facial motoneurons were subjected to a second axotomy to mimic the clinical surgical procedure of refreshing the proximal nerve stump prior to nerve repair. Expression of α1-tubulin, actin and GAP-43 was analyzed in axotomized motoneurons using in situ hybridization followed by autoradiography and silver grain quantification. The expression of these RAGs by acutely axotomized motoneurons declined over several months. The chronically injured motoneurons responded to a refreshment axotomy with a re-increase in RAG expression. However, this response to a refreshment axotomy of chronically injured facial motoneurons was less than that seen in acutely axotomized facial motoneurons. These data demonstrate that the neuronal RAG expression can be induced by injury-related signals and does not require acute deprivation of target derived factors. The transient expression is consistent with a transient inflammatory response to the injury. We conclude that the transient RAG expression in chronically axotomized motoneurons and their weak response to a refreshment axotomy provide a plausible explanation for the progressive decline in the regenerative capacity of chronically axotomized motoneurons.

  5. Effects of exposure to facial expression variation in face learning and recognition.

    PubMed

    Liu, Chang Hong; Chen, Wenfeng; Ward, James

    2015-11-01

    Facial expression is a major source of image variation in face images. Linking numerous expressions to the same face can be a huge challenge for face learning and recognition. It remains largely unknown what level of exposure to this image variation is critical for expression-invariant face recognition. We examined this issue in a recognition memory task, where the number of facial expressions of each face being exposed during a training session was manipulated. Faces were either trained with multiple expressions or a single expression, and they were later tested in either the same or different expressions. We found that recognition performance after learning three emotional expressions showed no improvement over learning a single emotional expression (Experiments 1 and 2). However, learning three emotional expressions improved recognition compared to learning a single neutral expression (Experiment 3). These findings reveal both the limitation and the benefit of multiple exposures to variations of emotional expression in achieving expression-invariant face recognition. The transfer of expression training to a new type of expression is likely to depend on a relatively extensive level of training and a certain degree of variation across the types of expressions.

  6. Image sequence coding using 3D scene models

    NASA Astrophysics Data System (ADS)

    Girod, Bernd

    1994-09-01

    The implicit and explicit use of 3D models for image sequence coding is discussed. For implicit use, a 3D model can be incorporated into motion compensating prediction. A scheme that estimates the displacement vector field with a rigid body motion constraint by recovering epipolar lines from an unconstrained displacement estimate and then repeating block matching along the epipolar line is proposed. Experimental results show that an improved displacement vector field can be obtained with a rigid body motion constraint. As an example for explicit use, various results with a facial animation model for videotelephony are discussed. A 13 × 16 B-spline mask can be adapted automatically to individual faces and is used to generate facial expressions based on FACS. A depth-from-defocus range camera suitable for real-time facial motion tracking is described. Finally, the real-time facial animation system 'Traugott' is presented that has been used to generate several hours of broadcast video. Experiments suggest that a videophone system based on facial animation might require a transmission bitrate of 1 kbit/s or below.
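
    A rough, illustrative sketch (not the paper's implementation) of block matching constrained to an epipolar line, assuming the epipolar direction for a block has already been recovered from an unconstrained displacement estimate as described above. Frame sizes, block size, and search range are placeholders.

        # Block matching restricted to a line: step along the (assumed) epipolar
        # direction and keep the displacement minimising the sum of absolute differences.
        import numpy as np

        def match_along_line(prev, curr, top_left, block=16, direction=(1.0, 0.2), search=8):
            """Find the displacement along `direction` minimising the SAD of a block."""
            y0, x0 = top_left
            ref = curr[y0:y0 + block, x0:x0 + block].astype(float)
            d = np.asarray(direction, dtype=float)
            d /= np.linalg.norm(d)
            best_t, best_cost = 0, np.inf
            for t in range(-search, search + 1):          # step along the epipolar line
                dy, dx = int(round(t * d[1])), int(round(t * d[0]))
                y, x = y0 + dy, x0 + dx
                if y < 0 or x < 0 or y + block > prev.shape[0] or x + block > prev.shape[1]:
                    continue
                cand = prev[y:y + block, x:x + block].astype(float)
                cost = np.abs(ref - cand).sum()           # sum of absolute differences
                if cost < best_cost:
                    best_cost, best_t = cost, t
            return best_t * d                             # displacement (dx, dy) along the line

        prev = np.random.rand(144, 176)                   # placeholder QCIF-sized frames
        curr = np.random.rand(144, 176)
        print(match_along_line(prev, curr, (64, 80)))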

  7. Facial Expression of Affect in Children with Cornelia de Lange Syndrome

    ERIC Educational Resources Information Center

    Collis, L.; Moss, J.; Jutley, J.; Cornish, K.; Oliver, C.

    2008-01-01

    Background: Individuals with Cornelia de Lange syndrome (CdLS) have been reported to show comparatively high levels of flat and negative affect but there have been no empirical evaluations. In this study, we use an objective measure of facial expression to compare affect in CdLS with that seen in Cri du Chat syndrome (CDC) and a group of…

  8. Improving Accuracy of Decoding Emotions from Facial Expressions by Cooperative Learning Techniques, Two Experimental Studies.

    ERIC Educational Resources Information Center

    Klinzing, Hans Gerhard

    A program was developed for the improvement of social competence in general among professionals with the improvement of the accuracy of decoding emotions from facial expressions as the specific focus. It was integrated as a laboratory experience into traditional lectures at two German universities where studies were conducted to assess the…

  9. Inferring Attributes of a Situation from the Facial Expressions of Peers.

    ERIC Educational Resources Information Center

    Abramovitch, Rona; Daly, Eleanor M.

    1979-01-01

    Assesses the ability of four-year-old children to judge certain social situations from the facial expressions of peers. The children were presented with soundless videotapes of the face and upper torso of classmates and unknown peers interacting with peers and adults who were strange or familiar. (JMB)

  10. Children's Scripts for Social Emotions: Causes and Consequences Are More Central than Are Facial Expressions

    ERIC Educational Resources Information Center

    Widen, Sherri C.; Russell, James A.

    2010-01-01

    Understanding and recognition of emotions relies on emotion concepts, which are narrative structures (scripts) specifying facial expressions, causes, consequences, label, etc. organized in a temporal and causal order. Scripts and their development are revealed by examining which components better tap which concepts at which ages. This study…

  11. Spatiotemporal neural network dynamics for the processing of dynamic facial expressions

    PubMed Central

    Sato, Wataru; Kochiyama, Takanori; Uono, Shota

    2015-01-01

    The dynamic facial expressions of emotion automatically elicit multifaceted psychological activities; however, the temporal profiles and dynamic interaction patterns of brain activities remain unknown. We investigated these issues using magnetoencephalography. Participants passively observed dynamic facial expressions of fear and happiness, or dynamic mosaics. Source-reconstruction analyses utilizing functional magnetic-resonance imaging data revealed higher activation in broad regions of the bilateral occipital and temporal cortices in response to dynamic facial expressions than in response to dynamic mosaics at 150–200 ms and some later time points. The right inferior frontal gyrus exhibited higher activity for dynamic faces versus mosaics at 300–350 ms. Dynamic causal-modeling analyses revealed that dynamic faces activated the dual visual routes and visual–motor route. Superior influences of feedforward and feedback connections were identified before and after 200 ms, respectively. These results indicate that hierarchical, bidirectional neural network dynamics within a few hundred milliseconds implement the processing of dynamic facial expressions. PMID:26206708

  12. Social communication in siamangs (Symphalangus syndactylus): use of gestures and facial expressions.

    PubMed

    Liebal, Katja; Pika, Simone; Tomasello, Michael

    2004-01-01

    The current study represents the first systematic investigation of the social communication of captive siamangs (Symphalangus syndactylus). The focus was on intentional signals, including tactile and visual gestures, as well as facial expressions and actions. Fourteen individuals from different groups were observed and the signals used by individuals were recorded. Thirty-one different signals, consisting of 12 tactile gestures, 8 visual gestures, 7 actions, and 4 facial expressions, were observed, with tactile gestures and facial expressions appearing most frequently. The range of the signal repertoire increased steadily until the age of six, but declined afterwards in adults. The proportions of the different signal categories used within communicative interactions, in particular actions and facial expressions, also varied depending on age. Group differences could be traced back mainly to social factors or housing conditions. Differences in the repertoire of males and females were most obvious in the sexual context. Overall, most signals were used flexibly, with the majority performed in three or more social contexts and almost one-third of signals used in combination with other signals. Siamangs also adjusted their signals appropriately for the recipient, for example, using visual signals most often when the recipient was already attending (audience effects). These observations are discussed in the context of siamang ecology, social structure, and cognition.

  13. Assessment of Learners' Attention to E-Learning by Monitoring Facial Expressions for Computer Network Courses

    ERIC Educational Resources Information Center

    Chen, Hong-Ren

    2012-01-01

    Recognition of students' facial expressions can be used to understand their level of attention. In a traditional classroom setting, teachers guide the classes and continuously monitor and engage the students to evaluate their understanding and progress. Given the current popularity of e-learning environments, it has become important to assess the…

  14. Cradling Side Preference Is Associated with Lateralized Processing of Baby Facial Expressions in Females

    ERIC Educational Resources Information Center

    Huggenberger, Harriet J.; Suter, Susanne E.; Reijnen, Ester; Schachinger, Hartmut

    2009-01-01

    Women's cradling side preference has been related to contralateral hemispheric specialization of processing emotional signals; but not of processing baby's facial expression. Therefore, 46 nulliparous female volunteers were characterized as left or non-left holders (HG) during a doll holding task. During a signal detection task they were then…

  15. Recognition of Emotional and Nonemotional Facial Expressions: A Comparison between Williams Syndrome and Autism

    ERIC Educational Resources Information Center

    Lacroix, Agnes; Guidetti, Michele; Roge, Bernadette; Reilly, Judy

    2009-01-01

    The aim of our study was to compare two neurodevelopmental disorders (Williams syndrome and autism) in terms of the ability to recognize emotional and nonemotional facial expressions. The comparison of these two disorders is particularly relevant to the investigation of face processing and should contribute to a better understanding of social…

  16. Facial Expression Recognition: Can Preschoolers with Cochlear Implants and Hearing Aids Catch It?

    ERIC Educational Resources Information Center

    Wang, Yifang; Su, Yanjie; Fang, Ping; Zhou, Qingxia

    2011-01-01

    Tager-Flusberg and Sullivan (2000) presented a cognitive model of theory of mind (ToM), in which they thought ToM included two components--a social-perceptual component and a social-cognitive component. Facial expression recognition (FER) is an ability tapping the social-perceptual component. Previous findings suggested that normal hearing…

  17. Influence of Emotional Facial Expressions on 3-5-Year-Olds' Face Recognition

    ERIC Educational Resources Information Center

    Freitag, Claudia; Schwarzer, Gudrun

    2011-01-01

    Three experiments examined 3- and 5-year-olds' recognition of faces in constant and varied emotional expressions. Children were asked to identify repeatedly presented target faces, distinguishing them from distractor faces, during an immediate recognition test and during delayed assessments after 10 min and one week. Emotional facial expression…

  18. Abnormal Amygdala and Prefrontal Cortex Activation to Facial Expressions in Pediatric Bipolar Disorder

    ERIC Educational Resources Information Center

    Garrett, Amy S.; Reiss, Allan L.; Howe, Meghan E.; Kelley, Ryan G.; Singh, Manpreet K.; Adleman, Nancy E.; Karchemskiy, Asya; Chang, Kiki D.

    2012-01-01

    Objective: Previous functional magnetic resonance imaging (fMRI) studies in pediatric bipolar disorder (BD) have reported greater amygdala and less dorsolateral prefrontal cortex (DLPFC) activation to facial expressions compared to healthy controls. The current study investigates whether these differences are associated with the early or late…

  19. The Effects of Early Institutionalization on the Discrimination of Facial Expressions of Emotion in Young Children

    ERIC Educational Resources Information Center

    Jeon, Hana; Moulson, Margaret C.; Fox, Nathan; Zeanah, Charles; Nelson, Charles A., III

    2010-01-01

    The current study examined the effects of institutionalization on the discrimination of facial expressions of emotion in three groups of 42-month-old children. One group consisted of children abandoned at birth who were randomly assigned to Care-as-Usual (institutional care) following a baseline assessment. Another group consisted of children…

  20. Effects of Context and Facial Expression on Imitation Tasks in Preschool Children with Autism

    ERIC Educational Resources Information Center

    Markodimitraki, Maria; Kypriotaki, Maria; Ampartzaki, Maria; Manolitsis, George

    2013-01-01

    The present study explored the effect of the context in which an imitation act occurs (elicited/spontaneous) and the experimenter's facial expression (neutral or smiling) during the imitation task with young children with autism and typically developing children. The participants were 10 typically developing children and 10 children with autism…

  1. Automated Measurement of Facial Expression in Infant-Mother Interaction: A Pilot Study

    ERIC Educational Resources Information Center

    Messinger, Daniel S.; Mahoor, Mohammad H.; Chow, Sy-Miin; Cohn, Jeffrey F.

    2009-01-01

    Automated facial measurement using computer vision has the potential to objectively document continuous changes in behavior. To examine emotional expression and communication, we used automated measurements to quantify smile strength, eye constriction, and mouth opening in two 6-month-old infant-mother dyads who each engaged in a face-to-face…

  2. Concealing of facial expressions by a wild Barbary macaque (Macaca sylvanus).

    PubMed

    Thunström, Maria; Kuchenbuch, Paul; Young, Christopher

    2014-07-01

    Behavioural research on non-vocal communication among non-human primates and its possible links to the origin of human language is a long-standing research topic. Because human language is under voluntary control, it is of interest whether this is also true for any communicative signals of other species. It has been argued that the behaviour of hiding a facial expression with one's hand supports the idea that gestures might be under more voluntary control than facial expressions among non-human primates, and it has also been interpreted as a sign of intentionality. So far, the behaviour has only been reported twice, for single gorilla and chimpanzee individuals, both in captivity. Here, we report the first observation of concealing of facial expressions by a monkey, a Barbary macaque (Macaca sylvanus), living in the wild. On eight separate occasions between 2009 and 2011 an adult male was filmed concealing two different facial expressions associated with play and aggression ("play face" and "scream face"), 22 times in total. The videos were analysed in detail, including gaze direction, hand usage, duration, and individuals present. This male was the only individual in his group to manifest this behaviour, which always occurred in the presence of a dominant male. Several possible interpretations of the function of the behaviour are discussed. The observations in this study indicate that the gestural communication and cognitive abilities of monkeys warrant more research attention.

  3. Interpretation of Facial Expressions of Affect in Children with Learning Disabilities with Verbal or Nonverbal Deficits.

    ERIC Educational Resources Information Center

    Dimitrovsky, Lilly; Spector, Hedva; Levy-Shiff, Rachel; Vakil, Eli

    1998-01-01

    The ability to identify six facial expressions was studied in 48 nondisabled children and 76 children with learning disabilities (LD) ages 9 through 12. Overall, the nondisabled group had better interpretive ability. Among LD children, those with verbal deficits had better ability than either those with nonverbal deficits or those with both…

  4. Infants' Intermodal Perception of Canine ("Canis Familiaris") Facial Expressions and Vocalizations

    ERIC Educational Resources Information Center

    Flom, Ross; Whipple, Heather; Hyde, Daniel

    2009-01-01

    From birth, human infants are able to perceive a wide range of intersensory relationships. The current experiment examined whether infants between 6 months and 24 months old perceive the intermodal relationship between aggressive and nonaggressive canine vocalizations (i.e., barks) and appropriate canine facial expressions. Infants simultaneously…

  5. Does Facial Expression Recognition Provide a Toehold for the Development of Emotion Understanding?

    ERIC Educational Resources Information Center

    Strand, Paul S.; Downs, Andrew; Barbosa-Leiker, Celestina

    2016-01-01

    The authors explored predictions from basic emotion theory (BET) that facial emotion expression recognition skills are insular with respect to their own development, and yet foundational to the development of emotional perspective-taking skills. Participants included 417 preschool children for whom estimates of these 2 emotion understanding…

  6. Processing of Facial Expressions of Emotions by Adults with Down Syndrome and Moderate Intellectual Disability

    ERIC Educational Resources Information Center

    Carvajal, Fernando; Fernandez-Alcaraz, Camino; Rueda, Maria; Sarrion, Louise

    2012-01-01

    The processing of facial expressions of emotions by 23 adults with Down syndrome and moderate intellectual disability was compared with that of adults with intellectual disability of other etiologies (24 matched in cognitive level and 26 with mild intellectual disability). Each participant performed 4 tasks of the Florida Affect Battery and an…

  7. Hemodynamic response of children with attention-deficit and hyperactive disorder (ADHD) to emotional facial expressions.

    PubMed

    Ichikawa, Hiroko; Nakato, Emi; Kanazawa, So; Shimamura, Keiichi; Sakuta, Yuiko; Sakuta, Ryoichi; Yamaguchi, Masami K; Kakigi, Ryusuke

    2014-10-01

    Children with attention-deficit/hyperactivity disorder (ADHD) have difficulty recognizing facial expressions. They identify angry expressions less accurately than typically developing (TD) children, yet little is known about their atypical neural basis for the recognition of facial expressions. Here, we used near-infrared spectroscopy (NIRS) to examine the distinctive cerebral hemodynamics of ADHD and TD children while they viewed happy and angry expressions. We measured the hemodynamic responses of 13 ADHD boys and 13 TD boys to happy and angry expressions at their bilateral temporal areas, which are sensitive to face processing. The ADHD children showed an increased concentration of oxy-Hb for happy faces but not for angry faces, while TD children showed increased oxy-Hb for both faces. Moreover, the individual peak latency of hemodynamic response in the right temporal area showed significantly greater variance in the ADHD group than in the TD group. Such atypical brain activity observed in ADHD boys may relate to their preserved ability to recognize a happy expression and their difficulty recognizing an angry expression. We demonstrated for the first time that NIRS can be used to detect atypical hemodynamic responses to facial expressions in ADHD children.

  8. Neural evidence for cultural differences in the valuation of positive facial expressions

    PubMed Central

    Park, BoKyung; Chim, Louise; Blevins, Elizabeth; Knutson, Brian

    2016-01-01

    European Americans value excitement more and calm less than Chinese. Within cultures, European Americans value excited and calm states similarly, whereas Chinese value calm more than excited states. To examine how these cultural differences influence people’s immediate responses to excited vs calm facial expressions, we combined a facial rating task with functional magnetic resonance imaging. During scanning, European American (n = 19) and Chinese (n = 19) females viewed and rated faces that varied by expression (excited, calm), ethnicity (White, Asian) and gender (male, female). As predicted, European Americans showed greater activity in circuits associated with affect and reward (bilateral ventral striatum, left caudate) while viewing excited vs calm expressions than did Chinese. Within cultures, European Americans responded to excited vs calm expressions similarly, whereas Chinese showed greater activity in these circuits in response to calm vs excited expressions regardless of targets’ ethnicity or gender. Across cultural groups, greater ventral striatal activity while viewing excited vs. calm expressions predicted greater preference for excited vs calm expressions months later. These findings provide neural evidence that people find viewing the specific positive facial expressions valued by their cultures to be rewarding and relevant. PMID:26342220

  9. Molecular weight specific impact of soluble and immobilized hyaluronan on CD44 expressing melanoma cells in 3D collagen matrices.

    PubMed

    Sapudom, Jiranuwat; Ullm, Franziska; Martin, Steve; Kalbitzer, Liv; Naab, Johanna; Möller, Stephanie; Schnabelrauch, Matthias; Anderegg, Ulf; Schmidt, Stephan; Pompe, Tilo

    2017-03-01

    Hyaluronan (HA) and its principal receptor CD44 are known to be involved in regulating tumor cell dissemination and metastasis. The direct correlation of CD44-HA interaction on proliferation and invasion of tumor cells in dependence on the molecular weight and the presentation form of HA is not fully understood because of the lack of appropriate matrix models. To address this issue, we reconstituted 3D collagen (Coll I) matrices and functionalized them with HA with molecular weights of 30-50 kDa (low molecular weight; LMW-HA) and 500-750 kDa (high molecular weight; HMW-HA). A post-modification strategy was applied to covalently immobilize HA to reconstituted fibrillar Coll I matrices, resulting in a non-altered Coll I network microstructure and stable immobilization over days. Functionalized Coll I matrices were characterized regarding topological and mechanical characteristics as well as HA amount using confocal laser scanning microscopy, colloidal probe force spectroscopy and quantitative Alcian blue assay, respectively. To elucidate HA dependent tumor cell behavior, BRO melanoma cell lines with and without CD44 receptor expression were used for in vitro cell experiments. We demonstrated that only soluble LMW-HA promoted cell proliferation in a CD44 dependent manner, while HMW-HA and immobilized LMW-HA did not. Furthermore, an enhanced cell invasion was found only for immobilized LMW-HA. Both findings correlated with a very strong and specific adhesive interaction of LMW-HA and CD44+ cells quantified in single cell adhesion measurements using soft colloidal force spectroscopy. Overall, our results introduce an in vitro biomaterials model that allows testing of the presentation mode and molecular weight specificity of HA in a 3D fibrillar matrix, thus mimicking important in vivo features of tumor microenvironments.

  10. Cloning, 3D modeling and expression analysis of three vacuolar invertase genes from cassava (Manihot Esculenta Crantz).

    PubMed

    Yao, Yuan; Wu, Xiao-Hui; Geng, Meng-Ting; Li, Rui-Mei; Liu, Jiao; Hu, Xin-Wen; Guo, Jian-Chun

    2014-05-15

    Vacuolar invertase is one of the key enzymes in sucrose metabolism that irreversibly catalyzes the hydrolysis of sucrose to glucose and fructose in plants. In this research, three vacuolar invertase genes, named MeVINV1-3 and encoding proteins of 653, 660 and 639 amino acids, respectively, were cloned from cassava. The motifs of NDPNG (β-fructosidase motif), RDP and WECVD, which are conserved and essential for catalytic activity in the vacuolar invertase family, were found in MeVINV1 and MeVINV2. Meanwhile, in MeVINV3, instead of NDPNG we found the motif NGPDG, in which the three amino acids GPD are different from those in other vacuolar invertases (DPN), which might result in MeVINV3 being an inactivated protein. The N-terminal leader sequence of MeVINVs contains a signal anchor, which is associated with the sorting of vacuolar invertase to the vacuole. The overall predicted 3D structure of the MeVINVs consists of a five-bladed β-propeller module at the N-terminal domain and a β-sandwich module at the C-terminal domain. The active site of the protein is situated in the β-propeller module. MeVINVs are classified in two subfamilies, α and β groups, in which α group members MeVINV1 and 2 are highly expressed in reproductive organs and tuber roots (considered as sink organs), while β group member MeVINV3 is highly expressed in leaves (source organs). All MeVINVs are highly expressed in leaves, while only MeVINV1 and 2 are highly expressed in tubers at the cassava tuber maturity stage. Thus, MeVINV1 and 2 play an important role in sucrose unloading and starch accumulation, as well as in buffering the pools of sucrose, hexoses and sugar phosphates in leaves, specifically at later stages of plant development.

  11. Fluid Intelligence and Automatic Neural Processes in Facial Expression Perception: An Event-Related Potential Study

    PubMed Central

    Liu, Tongran; Xiao, Tong; Li, Xiaoyan; Shi, Jiannong

    2015-01-01

    The relationship between human fluid intelligence and social-emotional abilities has been a topic of considerable interest. The current study investigated whether adolescents with different intellectual levels had different automatic neural processing of facial expressions. Two groups of adolescent males were enrolled: a high IQ group and an average IQ group. Age and parental socioeconomic status were matched between the two groups. Participants counted the numbers of the central cross changes while paired facial expressions were presented bilaterally in an oddball paradigm. There were two experimental conditions: a happy condition, in which neutral expressions were standard stimuli (p = 0.8) and happy expressions were deviant stimuli (p = 0.2), and a fearful condition, in which neutral expressions were standard stimuli (p = 0.8) and fearful expressions were deviant stimuli (p = 0.2). Participants were required to concentrate on the primary task of counting the central cross changes and to ignore the expressions to ensure that facial expression processing was automatic. Event-related potentials (ERPs) were obtained during the tasks. The visual mismatch negativity (vMMN) components were analyzed to index the automatic neural processing of facial expressions. For the early vMMN (50–130 ms), the high IQ group showed more negative vMMN amplitudes than the average IQ group in the happy condition. For the late vMMN (320–450 ms), the high IQ group had greater vMMN responses than the average IQ group over frontal and occipito-temporal areas in the fearful condition, and the average IQ group evoked larger vMMN amplitudes than the high IQ group over occipito-temporal areas in the happy condition. The present study elucidated the close relationships between fluid intelligence and pre-attentive change detection on social-emotional information. PMID:26375031
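
    A minimal sketch of the vMMN computation described (deviant-minus-standard difference wave, with mean amplitudes in the early and late windows), using simulated epochs. The sampling rate, channel count, and trial numbers are assumptions; only the window bounds follow the abstract.

        # vMMN sketch: difference wave and windowed mean amplitudes per channel.
        import numpy as np

        fs = 500                                       # Hz (assumed)
        times = np.arange(-0.1, 0.6, 1 / fs)           # epoch from -100 ms to 600 ms
        n_std, n_dev, n_ch = 160, 40, 32

        rng = np.random.default_rng(2)
        standard = rng.normal(size=(n_std, n_ch, times.size))
        deviant = rng.normal(size=(n_dev, n_ch, times.size))

        # vMMN = deviant ERP minus standard ERP (averaged over trials)
        vmmn = deviant.mean(axis=0) - standard.mean(axis=0)

        def mean_amplitude(wave, t0, t1):
            mask = (times >= t0) & (times <= t1)
            return wave[:, mask].mean(axis=1)          # one value per channel

        early = mean_amplitude(vmmn, 0.050, 0.130)     # 50-130 ms window
        late = mean_amplitude(vmmn, 0.320, 0.450)      # 320-450 ms window
        print(early.shape, late.shape)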

  13. Damage to association fiber tracts impairs recognition of the facial expression of emotion.

    PubMed

    Philippi, Carissa L; Mehta, Sonya; Grabowski, Thomas; Adolphs, Ralph; Rudrauf, David

    2009-12-02

    An array of cortical and subcortical structures have been implicated in the recognition of emotion from facial expressions. It remains unknown how these regions communicate as parts of a system to achieve recognition, but white matter tracts are likely critical to this process. We hypothesized that (1) damage to white matter tracts would be associated with recognition impairment and (2) the degree of disconnection of association fiber tracts [inferior longitudinal fasciculus (ILF) and/or inferior fronto-occipital fasciculus (IFOF)] connecting the visual cortex with emotion-related regions would negatively correlate with recognition performance. One hundred three patients with focal, stable brain lesions mapped onto a reference brain were tested on their recognition of six basic emotional facial expressions. Association fiber tracts from a probabilistic atlas were coregistered to the reference brain. Parameters estimating disconnection were entered in a general linear model to predict emotion recognition impairments, accounting for lesion size and cortical damage. Damage associated with the right IFOF significantly predicted an overall facial emotion recognition impairment and specific impairments for sadness, anger, and fear. One subject had a pure white matter lesion in the location of the right IFOF and ILF. He presented specific, unequivocal emotion recognition impairments. Additional analysis suggested that impairment in fear recognition can result from damage to the IFOF and not the amygdala. Our findings demonstrate the key role of white matter association tracts in the recognition of the facial expression of emotion and identify specific tracts that may be most critical.
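
    The regression step can be sketched as an ordinary least-squares general linear model predicting recognition performance from tract disconnection estimates while controlling for lesion size and cortical damage. All variables below are simulated stand-ins for the measures described, not the study's data.

        # GLM sketch: recognition score ~ IFOF disconnection + ILF disconnection
        #             + lesion size + cortical damage (simulated data).
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(3)
        n = 103
        ifof_disconnection = rng.uniform(0, 1, n)
        ilf_disconnection = rng.uniform(0, 1, n)
        lesion_size = rng.uniform(0, 1, n)
        cortical_damage = rng.uniform(0, 1, n)
        recognition = 1.0 - 0.4 * ifof_disconnection + rng.normal(scale=0.2, size=n)

        X = sm.add_constant(np.column_stack(
            [ifof_disconnection, ilf_disconnection, lesion_size, cortical_damage]))
        model = sm.OLS(recognition, X).fit()
        print(model.summary())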

  14. Dynamic facial expressions evoke distinct activation in the face perception network: a connectivity analysis study.

    PubMed

    Foley, Elaine; Rippon, Gina; Thai, Ngoc Jade; Longe, Olivia; Senior, Carl

    2012-02-01

    Very little is known about the neural structures involved in the perception of realistic dynamic facial expressions. In the present study, a unique set of naturalistic dynamic facial emotional expressions was created. Through fMRI and connectivity analysis, a dynamic face perception network was identified, which is demonstrated to extend Haxby et al.'s [Haxby, J. V., Hoffman, E. A., & Gobbini, M. I. The distributed human neural system for face perception. Trends in Cognitive Science, 4, 223-233, 2000] distributed neural system for face perception. This network includes early visual regions, such as the inferior occipital gyrus, which is identified as insensitive to motion or affect but sensitive to the visual stimulus, the STS, identified as specifically sensitive to motion, and the amygdala, recruited to process affect. Measures of effective connectivity between these regions revealed that dynamic facial stimuli were associated with specific increases in connectivity between early visual regions, such as the inferior occipital gyrus and the STS, along with coupling between the STS and the amygdala, as well as the inferior frontal gyrus. These findings support the presence of a distributed network of cortical regions that mediate the perception of different dynamic facial expressions.

  15. Facial animation on an anatomy-based hierarchical face model

    NASA Astrophysics Data System (ADS)

    Zhang, Yu; Prakash, Edmond C.; Sung, Eric

    2003-04-01

    In this paper we propose a new hierarchical 3D facial model based on anatomical knowledge that provides high fidelity for realistic facial expression animation. Like a real human face, the facial model has a hierarchical biomechanical structure, incorporating a physically-based approximation to facial skin tissue, a set of anatomically-motivated facial muscle actuators and an underlying skull structure. The deformable skin model has a multi-layer structure to approximate different types of soft tissue. It takes into account the nonlinear stress-strain relationship of the skin and the fact that soft tissue is almost incompressible. Different types of muscle models have been developed to simulate the distribution of muscle force on the skin due to muscle contraction. Owing to the presence of the skull model, our facial model achieves both more accurate facial deformation and consideration of facial anatomy during the interactive definition of facial muscles. Under the muscular force, the deformation of the facial skin is evaluated using numerical integration of the governing dynamic equations. The dynamic facial animation algorithm runs at an interactive rate and generates flexible and realistic facial expressions.
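
    A heavily simplified sketch of the numerical idea: advance a mass-spring skin layer one time step with semi-implicit Euler integration under an external muscle force. The paper's model is multi-layer and nonlinear, so this is only a schematic illustration with made-up parameters.

        # One integration step for a toy mass-spring "skin" under a muscle force.
        import numpy as np

        def step(pos, vel, rest, springs, muscle_force, k=50.0, damping=0.5, mass=1.0, dt=1e-3):
            """pos, vel: (N, 3) vertex arrays; springs: list of (i, j) index pairs."""
            force = muscle_force.copy()
            for i, j in springs:
                d = pos[j] - pos[i]
                length = np.linalg.norm(d)
                if length < 1e-9:
                    continue
                f = k * (length - rest[(i, j)]) * d / length   # linear spring force
                force[i] += f
                force[j] -= f
            force -= damping * vel
            vel = vel + dt * force / mass                      # semi-implicit Euler
            pos = pos + dt * vel
            return pos, vel

        # Tiny two-vertex example
        pos = np.array([[0.0, 0.0, 0.0], [1.2, 0.0, 0.0]])
        vel = np.zeros_like(pos)
        springs = [(0, 1)]
        rest = {(0, 1): 1.0}
        muscle = np.zeros_like(pos)
        pos, vel = step(pos, vel, rest, springs, muscle)
        print(pos)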

  16. Evolution of Ciona intestinalis Tumor necrosis factor alpha (CiTNFα): Polymorphism, tissues expression, and 3D modeling.

    PubMed

    Vizzini, Aiti; Giovanna, Parisi Maria; Cardinale, Laura; Testasecca, Lelia; Cammarata, Matteo

    2017-02-01

    Although the Tumor necrosis factor gene superfamily seems to be very conserved in vertebrates, phylogeny, tissue expression, genomic and gene organization, protein domain and polymorphism analyses showed that strong changes have occurred mostly in invertebrates, in which protochordates were a constraint during the history and evolution of immune molecules. RT-PCR was used to investigate differential gene expression in different tissues. Expression was greatest in the pharynx. Single-nucleotide polymorphisms were investigated in Ciona intestinalis Tumor necrosis factor alpha (CiTNFα) mRNA isolated from the pharynx of 30 ascidians collected from Licata, Sicily (Italy), by denaturing gradient gel electrophoresis (DGGE). For this analysis, the CiTNFα nucleotide sequence was separated into two fragments, TNF-1 and -2, of 630 and 540 bp, respectively. We defined 23 individual DGGE patterns (named 1 to 10 for TNF-1 and 1 to 13 for TNF-2). Five patterns for TNF-1 accounted for <10% of the individuals, whereas pattern 13 of TNF-2 accounted for >20% of the individuals. All the patterns were verified by direct sequencing. Single base-pair mutations were observed mainly within the COOH-terminus, leading to 30 nucleotide sequence variants and 30 different coding sequences segregating in two main different clusters. Although most of the base mutations were silent, four propeptide variants were detected and six amino acid replacements occurred within the COOH-terminus. Statistical tests for neutrality indicated negative selection pressure on the signal and mature peptide domains, but possible positive selection pressure on the COOH-terminus domain. Finally, we present an in silico 3D structure analysis including the CiTNFα variable region.

  17. Virtual friend or threat? The effects of facial expression and gaze interaction on psychophysiological responses and emotional experience.

    PubMed

    Schrammel, Franziska; Pannasch, Sebastian; Graupner, Sven-Thomas; Mojzisch, Andreas; Velichkovsky, Boris M

    2009-09-01

    The present study aimed to investigate the impact of facial expression, gaze interaction, and gender on attention allocation, physiological arousal, facial muscle responses, and emotional experience in simulated social interactions. Participants viewed animated virtual characters varying in terms of gender, gaze interaction, and facial expression. We recorded facial EMG, fixation duration, pupil size, and subjective experience. Subjects' rapid facial reactions (RFRs) differentiated more clearly between the character's happy and angry expression in the condition of mutual eye-to-eye contact. This finding provides evidence for the idea that RFRs are not simply motor responses, but part of an emotional reaction. Eye movement data showed that fixations were longer in response to both angry and neutral faces than to happy faces, thereby suggesting that attention is preferentially allocated to cues indicating potential threat during social interaction.

  18. Suboptimal Exposure to Facial Expressions When Viewing Video Messages From a Small Screen: Effects on Emotion, Attention, and Memory

    ERIC Educational Resources Information Center

    Ravaja, Niklas; Kallinen, Kari; Saari, Timo; Keltikangas-Jarvinen, Liisa

    2004-01-01

    The authors examined the effects of suboptimally presented facial expressions on emotional and attentional responses and memory among 39 young adults viewing video (business news) messages from a small screen. Facial electromyography (EMG) and respiratory sinus arrhythmia were used as physiological measures of emotion and attention, respectively.…

  19. Impaired Facial Expression Recognition in Children with Temporal Lobe Epilepsy: Impact of Early Seizure Onset on Fear Recognition

    ERIC Educational Resources Information Center

    Golouboff, Nathalie; Fiori, Nicole; Delalande, Olivier; Fohlen, Martine; Dellatolas, Georges; Jambaque, Isabelle

    2008-01-01

    The amygdala has been implicated in the recognition of facial emotions, especially fearful expressions, in adults with early-onset right temporal lobe epilepsy (TLE). The present study investigates the recognition of facial emotions in children and adolescents, 8-16 years old, with epilepsy. Twenty-nine subjects had TLE (13 right, 16 left) and…

  20. That "poker face" just might lose you the game! The impact of expressive suppression and mimicry on sensitivity to facial expressions of emotion.

    PubMed

    Schneider, Kristin G; Hempel, Roelie J; Lynch, Thomas R

    2013-10-01

    Successful interpersonal functioning often requires both the ability to mask inner feelings and the ability to accurately recognize others' expressions--but what if effortful control of emotional expressions impacts the ability to accurately read others? In this study, we examined the influence of self-controlled expressive suppression and mimicry on facial affect sensitivity--the speed with which one can accurately identify gradually intensifying facial expressions of emotion. Muscle activity of the brow (corrugator, related to anger), upper lip (levator, related to disgust), and cheek (zygomaticus, related to happiness) were recorded using facial electromyography while participants randomized to one of three conditions (Suppress, Mimic, and No-Instruction) viewed a series of six distinct emotional expressions (happiness, sadness, fear, anger, surprise, and disgust) as they morphed from neutral to full expression. As hypothesized, individuals instructed to suppress their own facial expressions showed impairment in facial affect sensitivity. Conversely, mimicry of emotion expressions appeared to facilitate facial affect sensitivity. Results suggest that it is difficult for a person to be able to simultaneously mask inner feelings and accurately "read" the facial expressions of others, at least when these expressions are at low intensity. The combined behavioral and physiological data suggest that the strategies an individual selects to control his or her own expression of emotion have important implications for interpersonal functioning.

  1. An Assessment of How Facial Mimicry Can Change Facial Morphology: Implications for Identification.

    PubMed

    Gibelli, Daniele; De Angelis, Danilo; Poppa, Pasquale; Sforza, Chiarella; Cattaneo, Cristina

    2017-03-01

    The assessment of facial mimicry is important in forensic anthropology; in addition, the application of modern 3D image acquisition systems may aid the analysis of facial surfaces. This study presents a novel method for comparing 3D profiles in different facial expressions. Ten male adults, aged between 30 and 40 years, underwent acquisitions by stereophotogrammetry (VECTRA-3D®) with different expressions (neutral, happy, sad, angry, surprised). The acquisition of each individual was then superimposed on the neutral one according to nine landmarks, and the root mean square (RMS) value between the two expressions was calculated. The highest difference in comparison with the neutral standard was shown by the happy expression (RMS 4.11 mm), followed by the surprised (RMS 2.74 mm), sad (RMS 1.3 mm), and angry ones (RMS 1.21 mm). This pilot study shows that the 3D-3D superimposition may provide reliable results concerning facial alteration due to mimicry.
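
    The kind of comparison described can be sketched as a rigid, landmark-based superimposition (Kabsch algorithm) followed by a root-mean-square distance. The nine landmark coordinates below are synthetic, and a real analysis would compute the RMS over dense surface points rather than the landmarks alone.

        # Rigid superimposition of one expression's landmarks onto the neutral set,
        # then RMS distance between the aligned configurations.
        import numpy as np

        def kabsch_align(moving, fixed):
            """Return `moving` rigidly aligned (rotation + translation) onto `fixed`."""
            mc, fc = moving.mean(axis=0), fixed.mean(axis=0)
            H = (moving - mc).T @ (fixed - fc)
            U, _, Vt = np.linalg.svd(H)
            d = np.sign(np.linalg.det(Vt.T @ U.T))
            R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
            return (moving - mc) @ R.T + fc

        rng = np.random.default_rng(4)
        neutral = rng.normal(size=(9, 3))                       # nine 3D landmarks
        happy = neutral + rng.normal(scale=0.5, size=(9, 3))    # displaced expression

        aligned = kabsch_align(happy, neutral)
        rms = np.sqrt(((aligned - neutral) ** 2).sum(axis=1).mean())
        print(f"RMS after superimposition: {rms:.2f}")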

  2. Deep Pain: Exploiting Long Short-Term Memory Networks for Facial Expression Classification.

    PubMed

    Rodriguez, Pau; Cucurull, Guillem; Gonzalez, Jordi; Gonfaus, Josep M; Nasrollahi, Kamal; Moeslund, Thomas B; Roca, F Xavier

    2017-02-09

    Pain is an unpleasant feeling that has been shown to be an important factor for the recovery of patients. Since measuring it is costly in human resources and difficult to do objectively, there is a need for automatic systems to measure it. In this paper, contrary to current state-of-the-art techniques in pain assessment, which are based on facial features only, we suggest that performance can be enhanced by feeding the raw frames to deep learning models, outperforming the latest state-of-the-art results while also directly facing the problem of imbalanced data. As a baseline, our approach first uses convolutional neural networks (CNNs) to learn facial features from VGG_Faces, which are then linked to a long short-term memory to exploit the temporal relation between video frames. We further compare the performance of the popular schema based on the canonically normalized appearance versus taking the whole image into account. As a result, we outperform the current state-of-the-art area-under-the-curve performance in the UNBC-McMaster Shoulder Pain Expression Archive Database. In addition, to evaluate the generalization properties of our proposed methodology on facial motion recognition, we also report competitive results in the Cohn-Kanade+ facial expression database.
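
    A minimal PyTorch sketch of the general architecture described, with a small CNN extracting per-frame features that feed an LSTM. The backbone, layer sizes, and output head are placeholders rather than the authors' configuration (which builds on VGG_Faces features).

        # Tiny CNN-per-frame + LSTM-over-time model for sequence-level pain scoring.
        import torch
        import torch.nn as nn

        class CnnLstmPain(nn.Module):
            def __init__(self, feat_dim=128, hidden=64):
                super().__init__()
                self.cnn = nn.Sequential(                 # tiny stand-in for a face CNN
                    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                    nn.Linear(32, feat_dim), nn.ReLU(),
                )
                self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
                self.head = nn.Linear(hidden, 1)          # per-sequence pain score

            def forward(self, frames):                    # frames: (batch, time, 3, H, W)
                b, t, c, h, w = frames.shape
                feats = self.cnn(frames.reshape(b * t, c, h, w)).reshape(b, t, -1)
                out, _ = self.lstm(feats)
                return self.head(out[:, -1])              # use the last time step

        model = CnnLstmPain()
        dummy = torch.randn(2, 8, 3, 64, 64)              # 2 clips of 8 frames
        print(model(dummy).shape)                          # torch.Size([2, 1])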

  3. Spheroid arrays for high-throughput single-cell analysis of spatial patterns and biomarker expression in 3D

    PubMed Central

    Ivanov, Delyan P.; Grabowska, Anna M.

    2017-01-01

    We describe and share a device, methodology and image analysis algorithms, which allow up to 66 spheroids to be arranged into a gel-based array directly from a culture plate for downstream processing and analysis. Compared to processing individual samples, the technique uses 11-fold fewer reagents, saves time and enables automated imaging. To illustrate the power of the technology, we showcase applications of the methodology for investigating 3D spheroid morphology and marker expression and for in vitro safety and efficacy screens. First, spheroid arrays of 11 cell-lines were rapidly assessed for differences in spheroid morphology. Second, highly-positive (SOX-2), moderately-positive (Ki-67) and weakly-positive (βIII-tubulin) protein targets were detected and quantified. Third, the arrays enabled screening of ten media compositions for inducing differentiation in human neurospheres. Last, the application of spheroid microarrays for spheroid-based drug screens was demonstrated by quantifying the dose-dependent drop in proliferation and increase in differentiation in etoposide-treated neurospheres. PMID:28134245

  4. A facial expression image database and norm for Asian population: a preliminary report

    NASA Astrophysics Data System (ADS)

    Chen, Chien-Chung; Cho, Shu-ling; Horszowska, Katarzyna; Chen, Mei-Yen; Wu, Chia-Ching; Chen, Hsueh-Chih; Yeh, Yi-Yu; Cheng, Chao-Min

    2009-01-01

    We collected 6604 images of 30 models in eight types of facial expression: happiness, anger, sadness, disgust, fear, surprise, contempt and neutral. Among them, 406 most representative images from 12 models were rated by more than 200 human raters for perceived emotion category and intensity. Such a large number of emotion categories, models and raters is sufficient for most serious expression recognition research both in psychology and in computer science. All the models and raters are of Asian background. Hence, this database can also be used when the cultural background is a concern. In addition, 43 landmarks for each of the 291 rated frontal-view images were identified and recorded. This information should facilitate feature-based research of facial expression. Overall, the diversity in images and richness in information should make our database and norm useful for a wide range of research.
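
    One way such a norm might be summarised, assuming a long-format table with one row per (image, rater) judgement; the column names and example values are assumptions, not the database's actual schema.

        # Derive a simple per-image norm: modal rated category, rater agreement,
        # and mean rated intensity.
        import pandas as pd

        ratings = pd.DataFrame({
            "image_id": ["img001", "img001", "img001", "img002", "img002"],
            "rated_category": ["happiness", "happiness", "surprise", "anger", "anger"],
            "intensity": [6, 7, 5, 4, 5],
        })

        norm = ratings.groupby("image_id").agg(
            modal_category=("rated_category", lambda s: s.mode().iloc[0]),
            agreement=("rated_category", lambda s: (s == s.mode().iloc[0]).mean()),
            mean_intensity=("intensity", "mean"),
        )
        print(norm)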

  5. Facial expression, size, and clutter: Inferences from movie structure to emotion judgments and back.

    PubMed

    Cutting, James E; Armstrong, Kacie L

    2016-04-01

    The perception of facial expressions and the perception of objects at a distance are entrenched psychological research venues, but their intersection is not. We were motivated to study them together because of their joint importance in the physical composition of popular movies: shots that show a larger image of a face typically have shorter durations than those in which the face is smaller. For static images, we explore the time it takes viewers to categorize the valence of different facial expressions as a function of their visual size. In two studies, we find that smaller faces take longer to categorize than larger ones, and this pattern interacts with local background clutter. More clutter creates crowding and impedes the interpretation of expressions for more distant faces but not for proximal ones. Filmmakers at least tacitly know this. In two other studies, we show that contemporary movies lengthen shots that show smaller faces, and even more so with increased clutter.

  6. Functionally relevant responses to human facial expressions of emotion in the domestic horse (Equus caballus).

    PubMed

    Smith, Amy Victoria; Proops, Leanne; Grounds, Kate; Wathan, Jennifer; McComb, Karen

    2016-02-01

    Whether non-human animals can recognize human signals, including emotions, has both scientific and applied importance, and is particularly relevant for domesticated species. This study presents the first evidence of horses' abilities to spontaneously discriminate between positive (happy) and negative (angry) human facial expressions in photographs. Our results showed that the angry faces induced responses indicative of a functional understanding of the stimuli: horses displayed a left-gaze bias (a lateralization generally associated with stimuli perceived as negative) and a quicker increase in heart rate (HR) towards these photographs. Such lateralized responses towards human emotion have previously only been documented in dogs, and effects of facial expressions on HR have not been shown in any heterospecific studies. Alongside the insights that these findings provide into interspecific communication, they raise interesting questions about the generality and adaptiveness of emotional expression and perception across species.

  7. I feel your fear: shared touch between faces facilitates recognition of fearful facial expressions.

    PubMed

    Maister, Lara; Tsiakkas, Eleni; Tsakiris, Manos

    2013-02-01

    Embodied simulation accounts of emotion recognition claim that we vicariously activate somatosensory representations to simulate, and eventually understand, how others feel. Interestingly, mirror-touch synesthetes, who experience touch when observing others being touched, show both enhanced somatosensory simulation and superior recognition of emotional facial expressions. We employed synchronous visuotactile stimulation to experimentally induce a similar experience of "mirror touch" in nonsynesthetic participants. Seeing someone else's face being touched at the same time as one's own face results in the "enfacement illusion," which has been previously shown to blur self-other boundaries. We demonstrate that the enfacement illusion also facilitates emotion recognition, and, importantly, this facilitatory effect is specific to fearful facial expressions. Shared synchronous multisensory experiences may experimentally facilitate somatosensory simulation mechanisms involved in the recognition of fearful emotional expressions.

  8. Effect of topical tretinoin, chemical peeling and dermabrasion on p53 expression in facial skin.

    PubMed

    El-Domyati, Moetaz M; Attia, Sameh K; Saleh, Fatma Y; Ahmad, Hesham M; Gasparro, Frances P; Uitto, Jouni J

    2003-01-01

    The tumour suppressor protein p53 is a phosphoprotein that is activated by DNA damage. It is involved in the decision as to whether cells should stop replicating and proceed to repair their DNA, or die by apoptosis. In the present study, we evaluate the effect of several treatment modalities on the expression of p53 in facial skin. Biopsy specimens were obtained from the facial skin of 20 patients before and after treatment using topical tretinoin (11 cases), TCA chemical peeling (5 cases) and dermabrasion (4 cases). Biopsy specimens were also obtained from 12 control subjects representing the same age groups as the patients. Topical tretinoin therapy was found to induce a significant decrease in the expression of p53 up to 6 months of therapy, followed by a significant increase after 10 months of therapy. In contrast, superficial TCA peeling did not induce any statistically significant change in the expression of p53. Dermabrasion, on the other hand, induced a significant decrease in the level of p53 expression in biopsies obtained after complete re-epithelialization, followed by a significant increase. These changes in the expression of p53 may play a role in mediating the effects of such treatment modalities on the epidermis, as well as in the prevention of actinic neoplasia, by adjusting any disturbance in the proliferation/apoptosis balance observed in photoaged facial skin.

  9. Neural correlates of the perception of dynamic versus static facial expressions of emotion

    PubMed Central

    Kessler, Henrik; Doyen-Waldecker, Cornelia; Hofer, Christian; Hoffmann, Holger; Traue, Harald C.; Abler, Birgit

    2011-01-01

    Aim: This study investigated brain areas involved in the perception of dynamic facial expressions of emotion. Methods: A group of 30 healthy subjects was measured with fMRI when passively viewing prototypical facial expressions of fear, disgust, sadness and happiness. Using morphing techniques, all faces were displayed as still images and also dynamically as a film clip with the expressions evolving from neutral to emotional. Results: Irrespective of a specific emotion, dynamic stimuli selectively activated bilateral superior temporal sulcus, visual area V5, fusiform gyrus, thalamus and other frontal and parietal areas. Interaction effects of emotion and mode of presentation (static/dynamic) were only found for the expression of happiness, where static faces evoked greater activity in the medial prefrontal cortex. Conclusions: Our results confirm previous findings on neural correlates of the perception of dynamic facial expressions and are in line with studies showing the importance of the superior temporal sulcus and V5 in the perception of biological motion. Differential activation in the fusiform gyrus for dynamic stimuli stands in contrast to classical models of face perception but is coherent with new findings arguing for a more general role of the fusiform gyrus in the processing of socially relevant stimuli. PMID:21522486

  10. Facial expression recognition in peripheral versus central vision: role of the eyes and the mouth.

    PubMed

    Calvo, Manuel G; Fernández-Martín, Andrés; Nummenmaa, Lauri

    2014-03-01

    This study investigated facial expression recognition in peripheral relative to central vision, and the factors accounting for the recognition advantage of some expressions in the visual periphery. Whole faces or only the eyes or the mouth regions were presented for 150 ms, either at fixation or extrafoveally (2.5° or 6°), followed by a backward mask and a probe word. Results indicated that (a) all the basic expressions were recognized above chance level, although performance in peripheral vision was less impaired for happy than for non-happy expressions, (b) the happy face advantage remained when only the mouth region was presented, and (c) the smiling mouth was the most visually salient and most distinctive facial feature of all expressions. This suggests that the saliency and the diagnostic value of the smile account for the advantage in happy face recognition in peripheral vision. Because of saliency, the smiling mouth accrues sensory gain and becomes resistant to visual degradation due to stimulus eccentricity, thus remaining accessible extrafoveally. Because of diagnostic value, the smile provides a distinctive single cue of facial happiness, thus bypassing integration of face parts and reducing susceptibility to breakdown of configural processing in peripheral vision.

  11. Spontaneous facial expression in unscripted social interactions can be measured automatically

    PubMed Central

    Girard, Jeffrey M.; Cohn, Jeffrey F.; Jeni, Laszlo A.; Sayette, Michael A.; De la Torre, Fernando

    2014-01-01

    Methods to assess individual facial actions have potential to shed light on important behavioral phenomena ranging from emotion and social interaction to psychological disorders and health. However, manual coding of such actions is labor intensive and requires extensive training. To date, establishing reliable automated coding of unscripted facial actions has been a daunting challenge impeding development of psychological theories and applications requiring facial expression assessment. It is therefore essential that automated coding systems be developed with enough precision and robustness to ease the burden of manual coding in challenging data involving variation in participant gender, ethnicity, head pose, speech, and occlusion. We report a major advance in automated coding of spontaneous facial actions during an unscripted social interaction involving three strangers. For each participant (n = 80, 47 % women, 15 % Nonwhite), 25 facial action units (AUs) were manually coded from video using the Facial Action Coding System. Twelve AUs occurred more than 3 % of the time and were processed using automated FACS coding. Automated coding showed very strong reliability for the proportion of time that each AU occurred (mean intraclass correlation = 0.89), and the more stringent criterion of frame-by-frame reliability was moderate to strong (mean Matthews correlation = 0.61). With few exceptions, differences in AU detection related to gender, ethnicity, pose, and average pixel intensity were small. Fewer than 6 % of frames could be coded manually but not automatically. These findings suggest automated FACS coding has progressed sufficiently to be applied to observational research in emotion and related areas of study. PMID:25488104
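
    The frame-by-frame agreement statistic quoted above (the Matthews correlation) is straightforward to compute from paired manual and automated AU codes; a minimal sketch with toy data, assuming scikit-learn, follows.

        # Hypothetical sketch: frame-by-frame manual vs automated AU coding agreement
        # summarised with the Matthews correlation coefficient (toy data only).
        import numpy as np
        from sklearn.metrics import matthews_corrcoef

        manual    = np.array([0, 0, 1, 1, 1, 0, 1, 0, 0, 1])   # manual FACS codes per frame
        automated = np.array([0, 0, 1, 1, 0, 0, 1, 0, 1, 1])   # automated detector output

        mcc = matthews_corrcoef(manual, automated)
        print(f"MCC = {mcc:.2f}; AU occurrence: manual {manual.mean():.0%}, "
              f"automated {automated.mean():.0%}")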

  12. Discrimination and Imitation of Facial Expressions by Neonates.

    ERIC Educational Resources Information Center

    Field, Tiffany

    Findings of a series of studies on individual differences and maturational changes in expressivity at the neonatal stage and during early infancy are reported. Research results indicate that newborns are able to discriminate and imitate the basic emotional expressions: happy, sad, and surprised. Results show widened infant lips when the happy…

  13. Perceptual, Categorical, and Affective Processing of Ambiguous Smiling Facial Expressions

    ERIC Educational Resources Information Center

    Calvo, Manuel G.; Fernandez-Martin, Andres; Nummenmaa, Lauri

    2012-01-01

    Why is a face with a smile but non-happy eyes likely to be interpreted as happy? We used blended expressions in which a smiling mouth was incongruent with the eyes (e.g., angry eyes), as well as genuine expressions with congruent eyes and mouth (e.g., both happy or angry). Tasks involved detection of a smiling mouth (perceptual), categorization of…

  14. Detection of Nausea-Like Response in Rats by Monitoring Facial Expression

    PubMed Central

    Yamamoto, Kouichi; Tatsutani, Soichi; Ishida, Takayuki

    2017-01-01

    Patients receiving cancer chemotherapy experience nausea and vomiting. They are not life-threatening symptoms, but their insufficient control reduces the patients’ quality of life. To identify methods for the management of nausea and vomiting in preclinical studies, the objective evaluation of these symptoms in laboratory animals is required. Unlike vomiting, nausea is defined as a subjective feeling described as recognition of the need to vomit; thus, determination of the severity of nausea in laboratory animals is considered to be difficult. However, since we observed that rats grimace after the administration of cisplatin, we hypothesized that changes in facial expression can be used as a method to detect nausea. In this study, we monitored the changes in the facial expression of rats after the administration of cisplatin and investigated the effect of anti-emetic drugs on the prevention of cisplatin-induced changes in facial expression. Rats were housed in individual cages with free access to food and tap water, and their facial expressions were continuously recorded by infrared video camera. On the day of the experiment, rats received cisplatin (0, 3, and 6 mg/kg, i.p.) with or without a daily injection of a 5-HT3 receptor antagonist (granisetron: 0.1 mg/kg, i.p.) or a neurokinin NK1 receptor antagonist (fosaprepitant: 2 mg/kg, i.p.), and their eye-opening index (the ratio between longitudinal and axial lengths of the eye) in the recorded video image was calculated. Cisplatin significantly and dose-dependently induced a decrease of the eye-opening index 6 h after the cisplatin injection, and the decrease continued for 2 days. The acute phase (day 1), but not the delayed phase (day 2), of the decreased eye-opening index was inhibited by treatment with granisetron; however, fosaprepitant abolished both phases of changes. The time-course of changes in facial expression are similar to clinical evidence of cisplatin-induced nausea in humans. These findings
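
    The eye-opening index described above is a simple ratio of two eye dimensions measured in the video frame; a minimal sketch follows, assuming the two lengths have already been measured in pixels and treating the longitudinal length as the vertical opening of the eye (an interpretation, not stated in the record).

        # Hypothetical sketch: eye-opening index as the ratio between the longitudinal
        # and axial lengths of the eye, both measured in pixels from a video frame.
        def eye_opening_index(longitudinal_px: float, axial_px: float) -> float:
            """Smaller values indicate a more narrowed (grimacing) eye."""
            if axial_px <= 0:
                raise ValueError("axial length must be positive")
            return longitudinal_px / axial_px

        baseline = eye_opening_index(longitudinal_px=18.0, axial_px=42.0)
        after_cisplatin = eye_opening_index(longitudinal_px=11.0, axial_px=42.0)
        print(f"baseline {baseline:.2f} -> after cisplatin {after_cisplatin:.2f}")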

  15. Facial expressions, their communicatory functions and neuro-cognitive substrates.

    PubMed Central

    Blair, R J R

    2003-01-01

    Human emotional expressions serve a crucial communicatory role allowing the rapid transmission of valence information from one individual to another. This paper will review the literature on the neural mechanisms necessary for this communication: both the mechanisms involved in the production of emotional expressions and those involved in the interpretation of the emotional expressions of others. Finally, reference to the neuro-psychiatric disorders of autism, psychopathy and acquired sociopathy will be made. In these conditions, the appropriate processing of emotional expressions is impaired. In autism, it is argued that the basic response to emotional expressions remains intact but that there is impaired ability to represent the referent of the individual displaying the emotion. In psychopathy, the response to fearful and sad expressions is attenuated and this interferes with socialization resulting in an individual who fails to learn to avoid actions that result in harm to others. In acquired sociopathy, the response to angry expressions in particular is attenuated resulting in reduced regulation of social behaviour. PMID:12689381

  16. Joint recognition-expression impairment of facial emotions in Huntington's disease despite intact understanding of feelings.

    PubMed

    Trinkler, Iris; Cleret de Langavant, Laurent; Bachoud-Lévi, Anne-Catherine

    2013-02-01

    Patients with Huntington's disease (HD), a neurodegenerative disorder that causes major motor impairments, also show cognitive and emotional deficits. While their deficit in recognising emotions has been explored in depth, little is known about their ability to express emotions and understand their feelings. If these faculties were impaired, patients might not only misread emotion expressions in others but their own emotions might be misinterpreted by others as well, or thirdly, they might have difficulties understanding and describing their feelings. We compared the performance of recognition and expression of facial emotions in 13 HD patients with mild motor impairments but without significant bucco-facial abnormalities, and 13 controls matched for age and education. Emotion recognition was investigated in a forced-choice recognition test (FCR), and emotion expression by filming participants while they mimed the six basic emotional facial expressions (anger, disgust, fear, surprise, sadness and joy) to the experimenter. The films were then segmented into 60 stimuli per participant and four external raters performed an FCR on this material. Further, we tested understanding of feelings in self (alexithymia) and others (empathy) using questionnaires. Both recognition and expression were impaired across different emotions in HD compared to controls, and recognition and expression scores were correlated. By contrast, alexithymia and empathy scores were very similar in HD and controls. This suggests that emotion deficits in HD might be tied to the expression itself. Because similar emotion recognition-expression deficits are also found in Parkinson's disease and vascular lesions of the striatum, our results further confirm the importance of the striatum for emotion recognition and expression, while access to the meaning of feelings relies on a different brain network, and is spared in HD.

  17. How Do Typically Developing Deaf Children and Deaf Children with Autism Spectrum Disorder Use the Face When Comprehending Emotional Facial Expressions in British Sign Language?

    ERIC Educational Resources Information Center

    Denmark, Tanya; Atkinson, Joanna; Campbell, Ruth; Swettenham, John

    2014-01-01

    Facial expressions in sign language carry a variety of communicative features. While emotion can modulate a spoken utterance through changes in intonation, duration and intensity, in sign language specific facial expressions presented concurrently with a manual sign perform this function. When deaf adult signers cannot see facial features, their…

  18. Red enhances the processing of facial expressions of anger.

    PubMed

    Young, Steven G; Elliot, Andrew J; Feltman, Roger; Ambady, Nalini

    2013-06-01

    Emotional expressions convey important social information. Given the social importance of decoding emotions, expressive faces wield great influence on cognition and perception. However, contextual factors also exert a top-down influence on emotion detection, privileging particular expressions over others. The current research investigates how the psychological meaning implied by the color red biases the processing of anger expressions. Red has been shown to carry the meaning of threat and danger, and in two experiments we find that exposure to red enhances the perception and identification of anger. In Experiment 1, the identification of anger, relative to happiness, was facilitated when faces were viewed on a red background. In Experiment 2, the red-anger facilitation effect was replicated and shown to not generalize to another high arousal negative emotion, fear. These results document a novel influence of color on emotion detection processes.

  19. Alexithymia, not autism, predicts poor recognition of emotional facial expressions.

    PubMed

    Cook, Richard; Brewer, Rebecca; Shah, Punit; Bird, Geoffrey

    2013-05-01

    Despite considerable research into whether face perception is impaired in autistic individuals, clear answers have proved elusive. In the present study, we sought to determine whether co-occurring alexithymia (characterized by difficulties interpreting emotional states) may be responsible for face-perception deficits previously attributed to autism. Two experiments were conducted using psychophysical procedures to determine the relative contributions of alexithymia and autism to identity and expression recognition. Experiment 1 showed that alexithymia correlates strongly with the precision of expression attributions, whereas autism severity was unrelated to expression-recognition ability. Experiment 2 confirmed that alexithymia is not associated with impaired ability to detect expression variation; instead, results suggested that alexithymia is associated with difficulties interpreting intact sensory descriptions. Neither alexithymia nor autism was associated with biased or imprecise identity attributions. These findings accord with the hypothesis that the emotional symptoms of autism are in fact due to co-occurring alexithymia and that existing diagnostic criteria may need to be revised.

  1. Information processing of motion in facial expression and the geometry of dynamical systems

    NASA Astrophysics Data System (ADS)

    Assadi, Amir H.; Eghbalnia, Hamid; McMenamin, Brenton W.

    2005-01-01

    An interesting problem in analysis of video data concerns design of algorithms that detect perceptually significant features in an unsupervised manner, for instance, methods of machine learning for automatic classification of human expression. A geometric formulation of this genre of problems could be modeled with the help of perceptual psychology. In this article, we outline one approach for a special case where video segments are to be classified according to expression of emotion or other similar facial motions. The encoding of realistic facial motions that convey expression of emotions for a particular person P forms a parameter space X_P whose study reveals the "objective geometry" for the problem of unsupervised feature detection from video. The geometric features and discrete representation of the space X_P are independent of subjective evaluations by observers. While the "subjective geometry" of X_P varies from observer to observer, levels of sensitivity and variation in perception of facial expressions appear to share a certain level of universality among members of similar cultures. Therefore, statistical geometry of invariants of X_P for a sample of the population could provide effective algorithms for extraction of such features. In cases where the frequency of events is sufficiently large in the sample data, a suitable framework could be provided to facilitate the information-theoretic organization and study of statistical invariants of such features. This article provides a general approach to encode motion in terms of a particular genre of dynamical systems and the geometry of their flow. An example is provided to illustrate the general theory.

  2. Influence of Matrices on 3D-Cultured Prostate Cancer Cells' Drug Response and Expression of Drug-Action Associated Proteins.

    PubMed

    Edmondson, Rasheena; Adcock, Audrey F; Yang, Liju

    2016-01-01

    This study investigated the effects of matrix on the behaviors of 3D-cultured cells of two prostate cancer cell lines, LNCaP and DU145. Two biologically-derived matrices, Matrigel and Cultrex BME, and one synthetic matrix, the Alvetex scaffold, were used to culture the cells. The cell proliferation rate, cellular response to anti-cancer drugs, and expression levels of proteins associated with drug sensitivity/resistance were examined and compared amongst the 3D-cultured cells on the three matrices and 2D-cultured cells. The cellular responses upon treatment with two common anti-cancer drugs, Docetaxel and Rapamycin, were examined. The expressions of epidermal growth factor receptor (EGFR) and β-III tubulin in DU145 cells and p53 in LNCaP cells were examined. The results showed that the proliferation rates of cells cultured on the three matrices varied, especially between the synthetic matrix and the biologically-derived matrices. The drug responses and the expressions of drug sensitivity-associated proteins differed between cells on various matrices as well. Among the 3D cultures on the three matrices, increased expression of β-III tubulin in DU145 cells was correlated with increased resistance to Docetaxel, and decreased expression of EGFR in DU145 cells was correlated with increased sensitivity to Rapamycin. Increased expression of a p53 dimer in 3D-cultured LNCaP cells was correlated with increased resistance to Docetaxel. Collectively, the results showed that the matrix of 3D cell culture models strongly influences cellular behaviors, which highlights the imperative need to achieve standardization of 3D cell culture technology in order to be used in drug screening and cell biology studies.

  3. Influence of Matrices on 3D-Cultured Prostate Cancer Cells' Drug Response and Expression of Drug-Action Associated Proteins

    PubMed Central

    Edmondson, Rasheena; Adcock, Audrey F.; Yang, Liju

    2016-01-01

    This study investigated the effects of matrix on the behaviors of 3D-cultured cells of two prostate cancer cell lines, LNCaP and DU145. Two biologically-derived matrices, Matrigel and Cultrex BME, and one synthetic matrix, the Alvetex scaffold, were used to culture the cells. The cell proliferation rate, cellular response to anti-cancer drugs, and expression levels of proteins associated with drug sensitivity/resistance were examined and compared amongst the 3D-cultured cells on the three matrices and 2D-cultured cells. The cellular responses upon treatment with two common anti-cancer drugs, Docetaxel and Rapamycin, were examined. The expressions of epidermal growth factor receptor (EGFR) and β-III tubulin in DU145 cells and p53 in LNCaP cells were examined. The results showed that the proliferation rates of cells cultured on the three matrices varied, especially between the synthetic matrix and the biologically-derived matrices. The drug responses and the expressions of drug sensitivity-associated proteins differed between cells on various matrices as well. Among the 3D cultures on the three matrices, increased expression of β-III tubulin in DU145 cells was correlated with increased resistance to Docetaxel, and decreased expression of EGFR in DU145 cells was correlated with increased sensitivity to Rapamycin. Increased expression of a p53 dimer in 3D-cultured LNCaP cells was correlated with increased resistance to Docetaxel. Collectively, the results showed that the matrix of 3D cell culture models strongly influences cellular behaviors, which highlights the imperative need to achieve standardization of 3D cell culture technology in order to be used in drug screening and cell biology studies. PMID:27352049

  4. Analysis of differences between Western and East-Asian faces based on facial region segmentation and PCA for facial expression recognition

    NASA Astrophysics Data System (ADS)

    Benitez-Garcia, Gibran; Nakamura, Tomoaki; Kaneko, Masahide

    2017-01-01

    Darwin was the first to assert that facial expressions are innate and universal, recognized across all cultures. However, some recent cross-cultural studies have questioned this assumed universality. Therefore, this paper presents an analysis of the differences between Western and East-Asian faces for the six basic expressions (anger, disgust, fear, happiness, sadness and surprise), focused on three individual facial regions: eyes-eyebrows, nose and mouth. The analysis is conducted by applying PCA to two feature extraction methods: appearance-based, using the pixel intensities of facial parts, and geometric-based, handling 125 feature points from the face. Both methods are evaluated using four standard databases for both racial groups, and the results are compared with a cross-cultural human study applied to 20 participants. Our analysis reveals that differences between Westerners and East-Asians exist mainly in the eyes-eyebrows region for expressions of fear and in the mouth region for disgust. This work presents important findings for a better design of automatic facial expression recognition systems based on the differences between the two racial groups.
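
    A minimal sketch of the geometric-based route (PCA over facial feature point coordinates), assuming scikit-learn; the random placeholder landmarks, dataset size and number of components are illustrative only, and the per-region comparison would simply restrict the point set to eyes-eyebrows, nose or mouth.

        # Hypothetical sketch: PCA on flattened (x, y) coordinates of 125 facial
        # feature points, one of the two feature-extraction methods described above.
        import numpy as np
        from sklearn.decomposition import PCA

        n_faces, n_points = 200, 125
        rng = np.random.default_rng(0)
        landmarks = rng.normal(size=(n_faces, n_points, 2))    # placeholder coordinates

        X = landmarks.reshape(n_faces, -1)        # (n_faces, 250) feature matrix
        X = X - X.mean(axis=0)                    # centre before PCA

        pca = PCA(n_components=20)
        scores = pca.fit_transform(X)             # low-dimensional geometric features
        print("explained variance ratios:", pca.explained_variance_ratio_[:5].round(3))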

  5. Feature-based representations of emotional facial expressions in the human amygdala.

    PubMed

    Ahs, Fredrik; Davis, Caroline F; Gorka, Adam X; Hariri, Ahmad R

    2014-09-01

    The amygdala plays a central role in processing facial affect, responding to diverse expressions and features shared between expressions. Although speculation exists regarding the nature of relationships between expression- and feature-specific amygdala reactivity, this matter has not been fully explored. We used functional magnetic resonance imaging and principal component analysis (PCA) in a sample of 300 young adults, to investigate patterns related to expression- and feature-specific amygdala reactivity to faces displaying neutral, fearful, angry or surprised expressions. The PCA revealed a two-dimensional correlation structure that distinguished emotional categories. The first principal component separated neutral and surprised from fearful and angry expressions, whereas the second principal component separated neutral and angry from fearful and surprised expressions. This two-dimensional correlation structure of amygdala reactivity may represent specific feature-based cues conserved across discrete expressions. To delineate which feature-based cues characterized this pattern, face stimuli were averaged and then subtracted according to their principal component loadings. The first principal component corresponded to displacement of the eyebrows, whereas the second principal component corresponded to increased exposure of eye whites together with movement of the brow. Our results suggest a convergent representation of facial affect in the amygdala reflecting feature-based processing of discrete expressions.

  6. Geometric Feature-Based Facial Expression Recognition in Image Sequences Using Multi-Class AdaBoost and Support Vector Machines

    PubMed Central

    Ghimire, Deepak; Lee, Joonwhoan

    2013-01-01

    Facial expressions are widely used in the behavioral interpretation of emotions, cognitive science, and social interactions. In this paper, we present a novel method for fully automatic facial expression recognition in facial image sequences. As the facial expression evolves over time, facial landmarks are automatically tracked in consecutive video frames using displacement estimation based on elastic bunch graph matching. Feature vectors from individual landmarks, as well as from pairs of landmarks, are extracted from the tracking results and normalized with respect to the first frame in the sequence. The prototypical expression sequence for each class of facial expression is formed by taking the median of the landmark tracking results from the training facial expression sequences. Multi-class AdaBoost, with the dynamic time warping similarity distance between the feature vector of the input facial expression and the prototypical facial expression as its weak classifier, is used to select the subset of discriminative feature vectors. Finally, two methods for facial expression recognition are presented: using multi-class AdaBoost with dynamic time warping, or using a support vector machine on the boosted feature vectors. The results on the Cohn-Kanade (CK+) facial expression database show recognition accuracies of 95.17% and 97.35% using multi-class AdaBoost and support vector machines, respectively. PMID:23771158
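
    The dynamic time warping similarity used to compare an input landmark-feature trajectory with a prototypical expression sequence can be sketched as follows; this is a plain 1-D DTW recursion on toy sequences, not the paper's full AdaBoost/SVM pipeline.

        # Hypothetical sketch: DTW distance between an input feature trajectory and a
        # prototypical (median) expression trajectory; toy 1-D sequences only.
        import numpy as np

        def dtw_distance(a, b):
            n, m = len(a), len(b)
            D = np.full((n + 1, m + 1), np.inf)
            D[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    cost = abs(a[i - 1] - b[j - 1])
                    D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
            return float(D[n, m])

        prototype = np.array([0.0, 0.1, 0.4, 0.8, 1.0])        # median "happiness" trajectory
        sequence  = np.array([0.0, 0.0, 0.2, 0.5, 0.9, 1.0])   # input expression sequence
        print(f"DTW distance to prototype: {dtw_distance(sequence, prototype):.3f}")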

  7. Test battery for measuring the perception and recognition of facial expressions of emotion

    PubMed Central

    Wilhelm, Oliver; Hildebrandt, Andrea; Manske, Karsten; Schacht, Annekathrin; Sommer, Werner

    2014-01-01

    Despite the importance of perceiving and recognizing facial expressions in everyday life, there is no comprehensive test battery for the multivariate assessment of these abilities. As a first step toward such a compilation, we present 16 tasks that measure the perception and recognition of facial emotion expressions, and data illustrating each task's difficulty and reliability. The scoring of these tasks focuses on either the speed or accuracy of performance. A sample of 269 healthy young adults completed all tasks. In general, accuracy and reaction time measures for emotion-general scores showed acceptable and high estimates of internal consistency and factor reliability. Emotion-specific scores yielded lower reliabilities, yet high enough to encourage further studies with such measures. Analyses of task difficulty revealed that all tasks are suitable for measuring emotion perception and emotion recognition related abilities in normal populations. PMID:24860528

  8. Perceptions of emotion from facial expressions are not culturally universal: evidence from a remote culture.

    PubMed

    Gendron, Maria; Roberson, Debi; van der Vyver, Jacoba Marietta; Barrett, Lisa Feldman

    2014-04-01

    It is widely believed that certain emotions are universally recognized in facial expressions. Recent evidence indicates that Western perceptions (e.g., scowls as anger) depend on cues to U.S. emotion concepts embedded in experiments. Because such cues are standard features in methods used in cross-cultural experiments, we hypothesized that evidence of universality depends on this conceptual context. In our study, participants from the United States and the Himba ethnic group from the Keunene region of northwestern Namibia sorted images of posed facial expressions into piles by emotion type. Without cues to emotion concepts, Himba participants did not show the presumed "universal" pattern, whereas U.S. participants produced a pattern with presumed universal features. With cues to emotion concepts, participants in both cultures produced sorts that were closer to the presumed "universal" pattern, although substantial cultural variation persisted. Our findings indicate that perceptions of emotion are not universal, but depend on cultural and conceptual contexts.

  9. Children's ability to recognize emotions from partial and complete facial expressions.

    PubMed

    Gagnon, Mathieu; Gosselin, Pierre; Maassarani, Reem

    2014-01-01

    The authors investigated children's ability to recognize emotions from the information available in the lower, middle, or upper face. School-age children were shown partial or complete facial expressions and asked to say whether they corresponded to a given emotion (anger, fear, surprise, or disgust). The results indicate that 5-year-olds were able to recognize fear, anger, and surprise from partial facial expressions. Fear was better recognized from the information located in the upper face than from that located in the lower face. A similar pattern of results was found for anger, but only in girls. Recognition improved between 5 and 10 years of age for surprise and anger, but not for fear and disgust.

  10. Towards Emotion Detection in Educational Scenarios from Facial Expressions and Body Movements through Multimodal Approaches

    PubMed Central

    Saneiro, Mar; Salmeron-Majadas, Sergio

    2014-01-01

    We report current findings from considering video recordings of facial expressions and body movements to provide affective personalized support in an educational context, using an enriched multimodal emotion detection approach. In particular, we describe an annotation methodology to tag facial expressions and body movements that correspond to changes in the affective states of learners while dealing with cognitive tasks in a learning process. The ultimate goal is to combine these annotations with additional affective information collected during experimental learning sessions from different sources such as qualitative, self-reported, physiological, and behavioral information. These data are then used altogether to train data mining algorithms that serve to automatically identify changes in the learners' affective states when dealing with cognitive tasks, which helps to provide personalized emotional support. PMID:24892055

  11. The influence of emotional facial expressions on gaze-following in grouped and solitary pedestrians.

    PubMed

    Gallup, Andrew C; Chong, Andrew; Kacelnik, Alex; Krebs, John R; Couzin, Iain D

    2014-07-23

    The mechanisms contributing to collective attention in humans remain unclear. Research indicates that pedestrians utilise the gaze direction of others nearby to acquire environmentally relevant information, but it is not known which, if any, additional social cues influence this transmission. Extending upon previous field studies, we investigated whether gaze cues paired with emotional facial expressions (neutral, happy, suspicious and fearsome) of an oncoming walking confederate modulate gaze-following by pedestrians moving in a natural corridor. We found that pedestrians walking alone were not sensitive to this manipulation, while individuals traveling together in groups did reliably alter their response in relation to emotional cues. In particular, members of a collective were more likely to follow gaze cues indicative of a potential threat (i.e., suspicious or fearful facial expression). This modulation of visual attention dependent on whether pedestrians are in social aggregates may be important to drive adaptive exploitation of social information, and particularly emotional stimuli within natural contexts.

  12. Judging emotional congruency: Explicit attention to situational context modulates processing of facial expressions of emotion.

    PubMed

    Diéguez-Risco, Teresa; Aguado, Luis; Albert, Jacobo; Hinojosa, José Antonio

    2015-12-01

    The influence of explicit evaluative processes on the contextual integration of facial expressions of emotion was studied in a procedure that required the participants to judge the congruency of happy and angry faces with preceding sentences describing emotion-inducing situations. Judgments were faster on congruent trials in the case of happy faces and on incongruent trials in the case of angry faces. At the electrophysiological level, a congruency effect was observed in the face-sensitive N170 component that showed larger amplitudes on incongruent trials. An interactive effect of congruency and emotion appeared on the LPP (late positive potential), with larger amplitudes in response to happy faces that followed anger-inducing situations. These results show that the deliberate intention to judge the contextual congruency of facial expressions influences not only processes involved in affective evaluation such as those indexed by the LPP but also earlier processing stages that are involved in face perception.

  13. Perceptions of Emotion from Facial Expressions are Not Culturally Universal: Evidence from a Remote Culture

    PubMed Central

    Gendron, Maria; Roberson, Debi; van der Vyver, Jacoba Marietta; Barrett, Lisa Feldman

    2014-01-01

    It is widely believed that certain emotions are universally recognized in facial expressions. Recent evidence indicates that Western perceptions (e.g., scowls as anger) depend on cues to US emotion concepts embedded in experiments. Since such cues are a standard feature in methods used in cross-cultural experiments, we hypothesized that evidence of universality depends on this conceptual context. In our study, participants from the US and the Himba ethnic group sorted images of posed facial expressions into piles by emotion type. Without cues to emotion concepts, Himba participants did not show the presumed “universal” pattern, whereas US participants produced a pattern with presumed universal features. With cues to emotion concepts, participants in both cultures produced sorts that were closer to the presumed “universal” pattern, although substantial cultural variation persisted. Our findings indicate that perceptions of emotion are not universal, but depend on cultural and conceptual contexts. PMID:24708506

  14. Personality influences the neural responses to viewing facial expressions of emotion.

    PubMed

    Calder, Andrew J; Ewbank, Michael; Passamonti, Luca

    2011-06-12

    Cognitive research has long been aware of the relationship between individual differences in personality and performance on behavioural tasks. However, within the field of cognitive neuroscience, the way in which such differences manifest at a neural level has received relatively little attention. We review recent research addressing the relationship between personality traits and the neural response to viewing facial signals of emotion. In one section, we discuss work demonstrating the relationship between anxiety and the amygdala response to facial signals of threat. A second section considers research showing that individual differences in reward drive (behavioural activation system), a trait linked to aggression, influence the neural responsivity and connectivity between brain regions implicated in aggression when viewing facial signals of anger. Finally, we address recent criticisms of the correlational approach to fMRI analyses and conclude that when used appropriately, analyses examining the relationship between personality and brain activity provide a useful tool for understanding the neural basis of facial expression processing and emotion processing in general.

  15. Comparative gene expression analysis of avian embryonic facial structures reveals new candidates for human craniofacial disorders.

    PubMed

    Brugmann, S A; Powder, K E; Young, N M; Goodnough, L H; Hahn, S M; James, A W; Helms, J A; Lovett, M

    2010-03-01

    Mammals and birds have common embryological facial structures, and appear to employ the same molecular genetic developmental toolkit. We utilized natural variation found in bird beaks to investigate what genes drive vertebrate facial morphogenesis. We employed cross-species microarrays to describe the molecular genetic signatures, developmental signaling pathways and the spectrum of transcription factor (TF) gene expression changes that differ between cranial neural crest cells in the developing beaks of ducks, quails and chickens. Surprisingly, we observed that the neural crest cells established a species-specific TF gene expression profile that predates morphological differences between the species. A total of 232 genes were differentially expressed between the three species. Twenty-two of these genes, including Fgfr2, Jagged2, Msx2, Satb2 and Tgfb3, have been previously implicated in a variety of mammalian craniofacial defects. Seventy-two of the differentially expressed genes overlap with un-cloned loci for human craniofacial disorders, suggesting that our data will provide a valuable candidate gene resource for human craniofacial genetics. The most dramatic changes between species were in the Wnt signaling pathway, including a 20-fold up-regulation of Dkk2, Fzd1 and Wnt1 in the duck compared with the other two species. We functionally validated these changes by demonstrating that spatial domains of Wnt activity differ in avian beaks, and that Wnt signals regulate Bmp pathway activity and promote regional growth in facial prominences. This study is the first of its kind, extending previous work on Darwin's finches, and provides the first large-scale insights into cross-species facial morphogenesis.

  16. 3D Facial Pattern Analysis for Autism

    DTIC Science & Technology

    2010-07-01

    ...each individual's data were scaled by the geometric mean of all possible linear distances between landmarks... The first two principal... ...over traditional template matching in that it can represent geometrical and non-geometrical changes of an object in the parametric template space... a set of vertex templates can be generated from the root template by geometric or non-geometric transformation. Let M_t, t = 1, ..., M, be M normalized vertex templates...

  17. The Odor Context Facilitates the Perception of Low-Intensity Facial Expressions of Emotion.

    PubMed

    Leleu, Arnaud; Demily, Caroline; Franck, Nicolas; Durand, Karine; Schaal, Benoist; Baudouin, Jean-Yves

    2015-01-01

    It has been established that the recognition of facial expressions integrates contextual information. In this study, we aimed to clarify the influence of contextual odors. The participants were asked to match a target face varying in expression intensity with non-ambiguous expressive faces. Intensity variations in the target faces were designed by morphing expressive faces with neutral faces. In addition, the influence of verbal information was assessed by providing half the participants with the emotion names. Odor cues were manipulated by placing participants in a pleasant (strawberry), aversive (butyric acid), or no-odor control context. The results showed two main effects of the odor context. First, the minimum amount of visual information required to perceive an expression was lowered when the odor context was emotionally congruent: happiness was correctly perceived at lower intensities in the faces displayed in the pleasant odor context, and the same phenomenon occurred for disgust and anger in the aversive odor context. Second, the odor context influenced the false perception of expressions that were not used in target faces, with distinct patterns according to the presence of emotion names. When emotion names were provided, the aversive odor context decreased intrusions for disgust ambiguous faces but increased them for anger. When the emotion names were not provided, this effect did not occur and the pleasant odor context elicited an overall increase in intrusions for negative expressions. We conclude that olfaction plays a role in the way facial expressions are perceived in interaction with other contextual influences such as verbal information.

  18. Tactile Stimulation of the Face and the Production of Facial Expressions Activate Neurons in the Primate Amygdala.

    PubMed

    Mosher, Clayton P; Zimmerman, Prisca E; Fuglevand, Andrew J; Gothard, Katalin M

    2016-01-01

    The majority of neurophysiological studies that have explored the role of the primate amygdala in the evaluation of social signals have relied on visual stimuli such as images of facial expressions. Vision, however, is not the only sensory modality that carries social signals. Both humans and nonhuman primates exchange emotionally meaningful social signals through touch. Indeed, social grooming in nonhuman primates and caressing touch in humans is critical for building lasting and reassuring social bonds. To determine the role of the amygdala in processing touch, we recorded the responses of single neurons in the macaque amygdala while we applied tactile stimuli to the face. We found that one-third of the recorded neurons responded to tactile stimulation. Although we recorded exclusively from the right amygdala, the receptive fields of 98% of the neurons were bilateral. A fraction of these tactile neurons were monitored during the production of facial expressions and during facial movements elicited occasionally by touch stimuli. Firing rates arising during the production of facial expressions were similar to those elicited by tactile stimulation. In a subset of cells, combining tactile stimulation with facial movement further augmented the firing rates. This suggests that tactile neurons in the amygdala receive input from skin mechanoceptors that are activated by touch and by compressions and stretches of the facial skin during the contraction of the underlying muscles. Tactile neurons in the amygdala may play a role in extracting the valence of touch stimuli and/or monitoring the facial expressions of self during social interactions.

  19. Dissimilar processing of emotional facial expressions in human and monkey temporal cortex.

    PubMed

    Zhu, Qi; Nelissen, Koen; Van den Stock, Jan; De Winter, François-Laurent; Pauwels, Karl; de Gelder, Beatrice; Vanduffel, Wim; Vandenbulcke, Mathieu

    2013-02-01

    Emotional facial expressions play an important role in social communication across primates. Despite major progress made in our understanding of categorical information processing such as for objects and faces, little is known, however, about how the primate brain evolved to process emotional cues. In this study, we used functional magnetic resonance imaging (fMRI) to compare the processing of emotional facial expressions between monkeys and humans. We used a 2×2×2 factorial design with species (human and monkey), expression (fear and chewing) and configuration (intact versus scrambled) as factors. At the whole brain level, neural responses to conspecific emotional expressions were anatomically confined to the superior temporal sulcus (STS) in humans. Within the human STS, we found functional subdivisions with a face-selective right posterior STS area that also responded to emotional expressions of other species and a more anterior area in the right middle STS that responded specifically to human emotions. Hence, we argue that the latter region does not show a mere emotion-dependent modulation of activity but is primarily driven by human emotional facial expressions. Conversely, in monkeys, emotional responses appeared in earlier visual cortex and outside face-selective regions in inferior temporal cortex that responded also to multiple visual categories. Within monkey IT, we also found areas that were more responsive to conspecific than to non-conspecific emotional expressions but these responses were not as specific as in human middle STS. Overall, our results indicate that human STS may have developed unique properties to deal with social cues such as emotional expressions.

  20. Low doses of alcohol have a selective effect on the recognition of happy facial expressions.

    PubMed

    Kano, Michiko; Gyoba, Jiro; Kamachi, Miyuki; Mochizuki, Hideki; Hongo, Michio; Yanai, Kazuhiko

    2003-03-01

    Alcohol is one of the most widely used recreational drugs, yet it is associated with undesirable social behaviour. It is used primarily for its psychoactive properties, increasing sociability and talkativeness. We hypothesize that low doses of alcohol can improve performance related to positive emotional cognition. In this experiment, we examined the effect of low doses of alcohol on the processing of emotional facial expressions. Fifteen young male volunteers drank alcohol at volumes of 30, 60 and 120 ml (0.14, 0.28, 0.56 g/kg) and performed discrimination tasks on morphed facial emotion expressions of anger, happiness, sadness and surprise-neutral. One-way ANOVA co-varying pretreatment performance revealed significant differences between alcohol levels in happy-face discrimination (p < 0.01). Bonferroni correction demonstrated that low doses of alcohol led to significantly better discrimination of happy faces, and that performance was worse with higher doses (p < 0.001). No significant effect was observed for the other three emotional expressions. These results indicate that low doses of alcohol affect positive emotional cognition of happy facial expressions.

  1. Facial Expressions and Ability to Recognize Emotions From Eyes or Mouth in Children

    PubMed Central

    Guarnera, Maria; Hichy, Zira; Cascio, Maura I.; Carrubba, Stefano

    2015-01-01

    This research aims to contribute to the literature on the ability to recognize anger, happiness, fear, surprise, sadness, disgust and neutral emotions from facial information. By investigating children’s performance in detecting these emotions from a specific face region, we were interested to know whether children would show differences in recognizing these expressions from the upper or lower face, and if any difference between specific facial regions depended on the emotion in question. For this purpose, a group of 6-7 year-old children was selected. Participants were asked to recognize emotions by using a labeling task with three stimulus types (region of the eyes, of the mouth, and full face). The findings seem to indicate that children correctly recognize basic facial expressions when pictures represent the whole face, except for a neutral expression, which was recognized from the mouth, and sadness, which was recognized from the eyes. Children are also able to identify anger from the eyes as well as from the whole face. With respect to gender differences, there is no female advantage in emotional recognition. The results indicate a significant interaction ‘gender x face region’ only for anger and neutral emotions. PMID:27247651

  2. Effects of gaze direction, head orientation and valence of facial expression on amygdala activity.

    PubMed

    Sauer, Andreas; Mothes-Lasch, Martin; Miltner, Wolfgang H R; Straube, Thomas

    2014-08-01

    There is increasing evidence for a role of the amygdala in processing gaze direction and emotional relevance of faces. In this event-related functional magnetic resonance study we investigated amygdala responses while we orthogonally manipulated head direction, gaze direction and facial expression (angry, happy and neutral). This allowed us to investigate effects of stimulus ambiguity, low-level factors and non-emotional factors on amygdala activation. Averted vs direct gaze induced increased activation in the right dorsal amygdala regardless of facial expression and head orientation. Furthermore, valence effects were found in the ventral amygdala and strongly dependent on head orientation. We observed enhanced activation to angry and neutral vs happy faces for observer-directed faces in the left ventral amygdala while the averted head condition reversed this pattern resulting in increased activation to happy as compared to angry and neutral faces. These results suggest that gaze direction drives specifically dorsal amygdala activation regardless of facial expression, low-level perceptual factors or stimulus ambiguity. The role of the amygdala is thus not restricted to the detection of potential threat, but has a more general role in attention processes. Furthermore, valence effects are associated with activation of the ventral amygdala and strongly influenced by non-emotional factors.

  3. Direction of Amygdala-Neocortex Interaction During Dynamic Facial Expression Processing.

    PubMed

    Sato, Wataru; Kochiyama, Takanori; Uono, Shota; Yoshikawa, Sakiko; Toichi, Motomi

    2016-02-22

    Dynamic facial expressions of emotion strongly elicit multifaceted emotional, perceptual, cognitive, and motor responses. Neuroimaging studies have revealed that some subcortical (e.g., amygdala) and neocortical (e.g., superior temporal sulcus and inferior frontal gyrus) brain regions and their functional interaction are involved in processing dynamic facial expressions. However, the direction of the functional interaction between the amygdala and the neocortex remains unknown. To investigate this issue, we re-analyzed functional magnetic resonance imaging (fMRI) data from 2 studies and magnetoencephalography (MEG) data from 1 study. First, a psychophysiological interaction analysis of the fMRI data confirmed the functional interaction between the amygdala and neocortical regions. Then, dynamic causal modeling analysis was used to compare models with forward, backward, or bidirectional effective connectivity between the amygdala and neocortical networks in the fMRI and MEG data. The results consistently supported the model of effective connectivity from the amygdala to the neocortex. Furthermore, an increasing time-window analysis of the MEG data demonstrated that this model was valid from 200 ms after stimulus onset. These data suggest that emotional processing in the amygdala rapidly modulates some neocortical processing, such as perception, recognition, and motor mimicry, when observing dynamic facial expressions of emotion.

  4. Videos of conspecifics elicit interactive looking patterns and facial expressions in monkeys.

    PubMed

    Mosher, Clayton P; Zimmerman, Prisca E; Gothard, Katalin M

    2011-08-01

    A broader understanding of the neural basis of social behavior in primates requires the use of species-specific stimuli that elicit spontaneous, yet reproducible and tractable, behaviors. In this context of natural behaviors, individual variation can further inform about the factors that influence social interactions. To approximate natural social interactions similar to those documented by field studies, we used unedited video footage to induce spontaneous facial expressions and looking patterns in viewer monkeys in a laboratory setting. Three adult male monkeys (Macaca mulatta), previously characterized behaviorally and genetically (5-HTTLPR), were monitored while they watched 10-s video segments depicting unfamiliar monkeys (movie monkeys) displaying affiliative, neutral, and aggressive behaviors. The gaze and head orientation of the movie monkeys alternated between "averted" and "directed" at the viewer. The viewers were not reinforced for watching the movies, so their looking patterns indicated their interest in and social engagement with the stimuli. The behavior of the movie monkey accounted for differences in the looking patterns and facial expressions displayed by the viewers. We also found multiple significant differences in the behavior of the viewers that correlated with their interest in these stimuli. These socially relevant dynamic stimuli elicited spontaneous social behaviors, such as eye-contact-induced reciprocation of facial expression, gaze aversion, and gaze following, that were not previously observed in response to static images. This approach opens a unique opportunity for understanding the mechanisms that trigger spontaneous social behaviors in humans and nonhuman primates.

  5. Age-Related Response Bias in the Decoding of Sad Facial Expressions

    PubMed Central

    Fölster, Mara; Hess, Ursula; Hühnel, Isabell; Werheid, Katja

    2015-01-01

    Recent studies have found that age is negatively associated with the accuracy of decoding emotional facial expressions; this effect of age was found for actors as well as for raters. Given that motivational differences and stereotypes may bias the attribution of emotion, the aim of the present study was to explore whether these age effects are due to response bias, that is, the unbalanced use of response categories. Thirty younger raters (19–30 years) and thirty older raters (65–81 years) viewed video clips of younger and older actors representing the same age ranges, and decoded their facial expressions. We computed both raw hit rates and bias-corrected hit rates to assess the influence of potential age-related response bias on decoding accuracy. Whereas raw hit rates indicated significant effects of both the actors’ and the raters’ ages on decoding accuracy for sadness, these age effects were no longer significant when response bias was corrected. Our results suggest that age effects on the accuracy of decoding facial expressions may be due, at least in part, to age-related response bias. PMID:26516920
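
    The record above contrasts raw with bias-corrected hit rates. One common correction is Wagner's (1993) unbiased hit rate, which penalizes raters who over-use a response category; whether this is the exact correction used in the study is an assumption. A minimal sketch with a made-up confusion matrix:

```python
import numpy as np

def unbiased_hit_rates(confusion):
    """Wagner's (1993) unbiased hit rate Hu per emotion category.

    confusion[i, j] = number of trials in which emotion i was shown
    and the rater responded with emotion j.
    """
    confusion = np.asarray(confusion, dtype=float)
    hits = np.diag(confusion)          # correct responses per category
    shown = confusion.sum(axis=1)      # how often each emotion was shown
    chosen = confusion.sum(axis=0)     # how often each label was used
    with np.errstate(divide="ignore", invalid="ignore"):
        hu = np.where((shown > 0) & (chosen > 0),
                      hits ** 2 / (shown * chosen), np.nan)
    return hu

# Hypothetical confusion matrix for one rater (rows: sad, happy, angry shown;
# columns: sad, happy, angry chosen). A rater who over-uses the "sad" label
# inflates the raw sad hit rate but is penalized by Hu.
conf = np.array([[9, 1, 0],
                 [4, 6, 0],
                 [5, 1, 4]])
raw = np.diag(conf) / conf.sum(axis=1)
print("raw hit rates     :", np.round(raw, 2))
print("unbiased hit rates:", np.round(unbiased_hit_rates(conf), 2))
```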

  6. Emotion unfolded by motion: a role for parietal lobe in decoding dynamic facial expressions.

    PubMed

    Sarkheil, Pegah; Goebel, Rainer; Schneider, Frank; Mathiak, Klaus

    2013-12-01

    Facial expressions convey important emotional and social information and are frequently used in investigations of human affective processing. Dynamic faces may provide higher ecological validity for examining perceptual and cognitive processing of facial expressions. Higher-order processing of emotional faces was addressed by systematically varying the task and the virtual face models. Blood oxygenation level-dependent activation was assessed using functional magnetic resonance imaging in 20 healthy volunteers while they viewed dynamic face stimuli and evaluated either their emotion intensity or their gender intensity. A general linear model analysis revealed that high valence activated a network of motion-responsive areas, indicating that visual motion areas support perceptual coding of the motion-based intensity of facial expressions. The comparison of the emotion with the gender discrimination task revealed increased activation of the inferior parietal lobule, which highlights the involvement of parietal areas in processing high-level features of faces. Dynamic emotional stimuli may help to emphasize functions of the hypothesized 'extended' over the 'core' system for face processing.
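
    The general linear model analysis described above can be illustrated with a toy, self-contained example. The sketch below builds HRF-convolved regressors for two hypothetical task blocks (emotion vs gender rating), fits a simulated voxel time series by ordinary least squares, and evaluates an 'emotion > gender' contrast; the block timing, HRF shape, and noise level are invented for illustration and do not reflect the study's actual design.

```python
import numpy as np
from scipy.stats import gamma

def hrf(t):
    """Crude double-gamma approximation of the canonical haemodynamic response."""
    return gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 16)

tr, n_scans = 2.0, 200
t = np.arange(n_scans) * tr

# Hypothetical alternating 40-s blocks of the two rating tasks (emotion vs
# gender), convolved with the HRF to form the design matrix X.
emotion_blocks = ((t // 40) % 2 == 0).astype(float)
gender_blocks = 1.0 - emotion_blocks
h = hrf(np.arange(0, 32, tr))
X = np.column_stack([
    np.convolve(emotion_blocks, h)[:n_scans],
    np.convolve(gender_blocks, h)[:n_scans],
    np.ones(n_scans),                      # constant (baseline) term
])

# Simulated voxel time series that responds more strongly to the emotion task.
rng = np.random.default_rng(1)
y = X @ np.array([2.0, 0.5, 100.0]) + rng.normal(0.0, 1.0, n_scans)

# Ordinary least squares estimate of the betas and an 'emotion > gender' contrast.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
contrast = np.array([1.0, -1.0, 0.0])
print("contrast estimate (emotion > gender):", round(contrast @ beta, 2))
```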

  7. 3D expression patterns of cell cycle genes in the developing chick wing and comparison with expression patterns of genes implicated in digit specification.

    PubMed

    Welten, Monique; Pavlovska, Gordana; Chen, Yu; Teruoka, Yuko; Fisher, Malcolm; Bangs, Fiona; Towers, Matthew; Tickle, Cheryll

    2011-05-01

    Sonic hedgehog (Shh) signalling controls the integrated specification of digit pattern and growth in the chick wing, but the downstream gene networks remain to be unravelled. We analysed 3D expression patterns of genes encoding cell cycle regulators using Optical Projection Tomography. Hierarchical clustering of spatial matrices of gene expression revealed a dorsal layer of the wing bud in which almost all genes were expressed, and showed that genes encoding positive cell cycle regulators had similar expression patterns, while those of N-myc and CyclinD2 were distinct but closely related. We compared these patterns computationally with those of genes implicated in digit specification and Ptch1, 50 genes in total. Nineteen genes have posterior expression similar to Ptch1, including Hoxd13, Sall1, Hoxd11, and Bmp2, all likely Gli targets in the mouse limb, and the cell cycle genes N-myc and CyclinD2. We suggest that these genes contribute to a network integrating digit specification and growth in response to Shh.
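
    The clustering step described above can be sketched in a few lines. The example below applies average-linkage hierarchical clustering with a correlation distance to a hypothetical gene-by-voxel expression matrix; the gene names mirror the abstract, but the data, distance metric, linkage method, and cluster count are assumptions rather than the paper's actual pipeline.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# Hypothetical expression matrix: one row per gene, one column per voxel of an
# OPT-reconstructed wing bud (flattened 3D grid), values normalised to
# expression intensity.
rng = np.random.default_rng(0)
genes = ["Ptch1", "Hoxd13", "Sall1", "Hoxd11", "Bmp2", "N-myc", "CyclinD2"]
expr = rng.random((len(genes), 500))

# Correlation distance groups genes whose spatial patterns co-vary,
# regardless of absolute expression level.
dist = pdist(expr, metric="correlation")
tree = linkage(dist, method="average")

# Cut the dendrogram into a chosen number of clusters (here 3, arbitrary).
labels = fcluster(tree, t=3, criterion="maxclust")
for gene, cluster in zip(genes, labels):
    print(f"{gene}: cluster {cluster}")
```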

  8. To Capture a Face: A Novel Technique for the Analysis and Quantification of Facial Expressions in American Sign Language

    ERIC Educational Resources Information Center

    Grossman, Ruth B.; Kegl, Judy

    2006-01-01

    American Sign Language uses the face to express vital components of grammar in addition to the more universal expressions of emotion. The study of ASL facial expressions has focused mostly on the perception and categorization of various expression types by signing and nonsigning subjects. Only a few studies of the production of ASL facial…

  9. Do Infants Show Distinct Negative Facial Expressions for Fear and Anger? Emotional Expression in 11-Month-Old European American, Chinese, and Japanese Infants

    ERIC Educational Resources Information Center

    Camras, Linda A.; Oster, Harriet; Bakeman, Roger; Meng, Zhaolan; Ujiie, Tatsuo; Campos, Joseph J.

    2007-01-01

    Do infants show distinct negative facial expressions for different negative emotions? To address this question, European American, Chinese, and Japanese 11-month-olds were videotaped during procedures designed to elicit mild anger or frustration and fear. Facial behavior was coded using Baby FACS, an anatomically based scoring system. Infants'…

  10. Effects of face feature and contour crowding in facial expression adaptation.

    PubMed

    Liu, Pan; Montaser-Kouhsari, Leila; Xu, Hong

    2014-12-01

    Prolonged exposure to a visual stimulus, such as a happy face, biases the perception of a subsequently presented neutral face toward a sad percept, a phenomenon known as face adaptation. Face adaptation is affected by the visibility or awareness of the adapting face. However, whether it is affected by the discriminability of the adapting face is largely unknown. In the current study, we used crowding to manipulate the discriminability of the adapting face and tested its effect on face adaptation. Instead of presenting flanking faces near the target face, we shortened the distance between facial features (internal feature crowding) and reduced the size of the face contour (external contour crowding) to introduce crowding. In our first experiment, we examined whether internal feature crowding or external contour crowding is more effective in inducing a crowding effect. We found that combining internal feature and external contour crowding, but not either alone, induced a significant crowding effect. In Experiment 2, we went on to investigate its effect on adaptation. We found that both internal feature crowding and external contour crowding significantly reduced the facial expression aftereffect (FEA). However, we did not find a significant correlation between the discriminability of the adapting face and its FEA. Interestingly, we found a significant correlation between the discriminabilities of the adapting and test faces. Experiment 3 found that the reduced adaptation aftereffect under combined crowding by the external face contour and the internal facial features cannot be decomposed linearly into the separate effects of the face contour and the facial features. This suggests a nonlinear integration between facial features and face contour in face adaptation.

  11. Spatial and temporal analysis of gene expression during growth and fusion of the mouse facial prominences.

    PubMed

    Feng, Weiguo; Leach, Sonia M; Tipney, Hannah; Phang, Tzulip; Geraci, Mark; Spritz, Richard A; Hunter, Lawrence E; Williams, Trevor

    2009-12-16

    Orofacial malformations resulting from genetic and/or environmental causes are frequent human birth defects yet their etiology is often unclear because of insufficient information concerning the molecular, cellular and morphogenetic processes responsible for normal facial development. We have, therefore, derived a comprehensive expression dataset for mouse orofacial development, interrogating three distinct regions - the mandibular, maxillary and frontonasal prominences. To capture the dynamic changes in the transcriptome during face formation, we sampled five time points between E10.5-E12.5, spanning the developmental period from establishment of the prominences to their fusion to form the mature facial platform. Seven independent biological replicates were used for each sample ensuring robustness and quality of the dataset. Here, we provide a general overview of the dataset, characterizing aspects of gene expression changes at both the spatial and temporal level. Considerable coordinate regulation occurs across the three prominences during this period of facial growth and morphogenesis, with a switch from expression of genes involved in cell proliferation to those associated with differentiation. An accompanying shift in the expression of polycomb and trithorax genes presumably maintains appropriate patterns of gene expression in precursor or differentiated cells, respectively. Superimposed on the many coordinated changes are prominence-specific differences in the expression of genes encoding transcription factors, extracellular matrix components, and signaling molecules. Thus, the elaboration of each prominence will be driven by particular combinations of transcription factors coupled with specific cell:cell and cell:matrix interactions. The dataset also reveals several prominence-specific genes not previously associated with orofacial development, a subset of which we externally validate. Several of these latter genes are components of bidirectional

  12. Spatial and Temporal Analysis of Gene Expression during Growth and Fusion of the Mouse Facial Prominences

    PubMed Central

    Feng, Weiguo; Leach, Sonia M.; Tipney, Hannah; Phang, Tzulip; Geraci, Mark; Spritz, Richard A.; Hunter, Lawrence E.; Williams, Trevor

    2009-01-01

    Orofacial malformations resulting from genetic and/or environmental causes are frequent human birth defects yet their etiology is often unclear because of insufficient information concerning the molecular, cellular and morphogenetic processes responsible for normal facial development. We have, therefore, derived a comprehensive expression dataset for mouse orofacial development, interrogating three distinct regions – the mandibular, maxillary and frontonasal prominences. To capture the dynamic changes in the transcriptome during face formation, we sampled five time points between E10.5–E12.5, spanning the developmental period from establishment of the prominences to their fusion to form the mature facial platform. Seven independent biological replicates were used for each sample ensuring robustness and quality of the dataset. Here, we provide a general overview of the dataset, characterizing aspects of gene expression changes at both the spatial and temporal level. Considerable coordinate regulation occurs across the three prominences during this period of facial growth and morphogenesis, with a switch from expression of genes involved in cell proliferation to those associated with differentiation. An accompanying shift in the expression of polycomb and trithorax genes presumably maintains appropriate patterns of gene expression in precursor or differentiated cells, respectively. Superimposed on the many coordinated changes are prominence-specific differences in the expression of genes encoding transcription factors, extracellular matrix components, and signaling molecules. Thus, the elaboration of each prominence will be driven by particular combinations of transcription factors coupled with specific cell:cell and cell:matrix interactions. The dataset also reveals several prominence-specific genes not previously associated with orofacial development, a subset of which we externally validate. Several of these latter genes are components of bidirectional

  13. Inducing a concurrent motor load reduces categorization precision for facial expressions.

    PubMed

    Ipser, Alberta; Cook, Richard

    2016-05-01

    Motor theories of expression perception posit that observers simulate facial expressions within their own motor system, aiding perception and interpretation. Consistent with this view, reports have suggested that blocking facial mimicry induces expression labeling errors and alters patterns of ratings. Crucially, however, it is unclear whether changes in labeling and rating behavior reflect genuine perceptual phenomena (e.g., greater internal noise associated with expression perception or interpretation) or are products of response bias. In an effort to advance this literature, the present study introduces a new psychophysical paradigm for investigating motor contributions to expression perception that overcomes some of the limitations inherent in simple labeling and rating tasks. Observers were asked to judge whether smiles drawn from a morph continuum were sincere or insincere, in the presence or absence of a motor load induced by the concurrent production of vowel sounds. Having confirmed that smile sincerity judgments depend on cues from both eye and mouth regions (Experiment 1), we demonstrated that vowel production reduces the precision with which smiles are categorized (Experiment 2). In Experiment 3, we replicated this effect when observers were required to produce vowels, but not when they passively listened to the same vowel sounds. In Experiments 4 and 5, we found that gender categorizations, equated for difficulty, were unaffected by vowel production, irrespective of the presence of a smiling expression. These findings greatly advance our understanding of motor contributions to expression perception and represent a timely contribution in light of recent high-profile challenges to the existing evidence base.
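
    Categorization precision along a morph continuum is typically quantified via the slope of a fitted psychometric function. The sketch below fits a cumulative Gaussian to hypothetical proportions of 'sincere' responses with and without a concurrent motor load and reports the point of subjective equality (PSE) and precision (1/sigma); the data values and the specific cumulative-Gaussian model are assumptions, since the paper's exact fitting procedure is not given here.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(x, mu, sigma):
    """Cumulative-Gaussian psychometric function: P('sincere') at morph level x."""
    return norm.cdf(x, loc=mu, scale=sigma)

# Hypothetical data: morph levels from fully insincere (0) to fully sincere (1)
# smiles and the proportion of 'sincere' responses in each condition.
levels = np.linspace(0, 1, 7)
p_no_load = np.array([0.02, 0.05, 0.20, 0.55, 0.85, 0.95, 0.99])
p_load = np.array([0.10, 0.20, 0.35, 0.55, 0.70, 0.82, 0.90])

for label, p in [("no load", p_no_load), ("motor load", p_load)]:
    (mu, sigma), _ = curve_fit(psychometric, levels, p, p0=[0.5, 0.2],
                               bounds=([0.0, 1e-3], [1.0, 1.0]))
    # A shallower slope (larger sigma) indicates less precise categorization.
    print(f"{label}: PSE = {mu:.2f}, precision (1/sigma) = {1 / sigma:.1f}")
```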

  14. Visual field bias in hearing and deaf adults during judgments of facial expression and identity

    PubMed Central

    Letourneau, Susan M.; Mitchell, Teresa V.

    2013-01-01

    The dominance of the right hemisphere during face perception is associated with more accurate judgments of faces presented in the left rather than the right visual field (RVF). Previous research suggests that the left visual field (LVF) bias typically observed during face perception tasks is reduced in deaf adults who use sign language, for whom facial expressions convey important linguistic information. The current study examined whether visual field biases were altered in deaf adults whenever they viewed expressive faces, or only when attention was explicitly directed to expression. Twelve hearing adults and 12 deaf signers were trained to recognize a set of novel faces posing various emotional expressions. They then judged the familiarity or emotion of faces presented in the LVF, the RVF, or both visual fields simultaneously. The same familiar and unfamiliar faces posing neutral and happy expressions were presented in the two tasks. Both groups were most accurate when faces were presented in both visual fields. Across tasks, the hearing group demonstrated a bias toward the LVF. In contrast, the deaf group showed a bias toward the LVF during identity judgments that shifted marginally toward the RVF duri