Sample records for specific facial features

  1. How Do Typically Developing Deaf Children and Deaf Children with Autism Spectrum Disorder Use the Face When Comprehending Emotional Facial Expressions in British Sign Language?

    ERIC Educational Resources Information Center

    Denmark, Tanya; Atkinson, Joanna; Campbell, Ruth; Swettenham, John

    2014-01-01

    Facial expressions in sign language carry a variety of communicative features. While emotion can modulate a spoken utterance through changes in intonation, duration and intensity, in sign language specific facial expressions presented concurrently with a manual sign perform this function. When deaf adult signers cannot see facial features, their…

  2. Testosterone-mediated sex differences in the face shape during adolescence: subjective impressions and objective features.

    PubMed

    Marečková, Klára; Weinbrand, Zohar; Chakravarty, M Mallar; Lawrence, Claire; Aleong, Rosanne; Leonard, Gabriel; Perron, Michel; Pike, G Bruce; Richer, Louis; Veillette, Suzanne; Pausova, Zdenka; Paus, Tomáš

    2011-11-01

    Sex identification of a face is essential for social cognition. Still, perceptual cues indicating the sex of a face, and mechanisms underlying their development, remain poorly understood. Previously, our group described objective age- and sex-related differences in faces of healthy male and female adolescents (12-18 years of age), as derived from magnetic resonance images (MRIs) of the adolescents' heads. In this study, we presented these adolescent faces to 60 female raters to determine which facial features most reliably predicted subjective sex identification. Identification accuracy correlated highly with specific MRI-derived facial features (e.g. broader forehead, chin, jaw, and nose). Facial features that most reliably cued male identity were associated with plasma levels of testosterone (above and beyond age). Perceptible sex differences in face shape are thus associated with specific facial features whose emergence may be, in part, driven by testosterone. Copyright © 2011 Elsevier Inc. All rights reserved.

  3. Joint Facial Action Unit Detection and Feature Fusion: A Multi-conditional Learning Approach.

    PubMed

    Eleftheriadis, Stefanos; Rudovic, Ognjen; Pantic, Maja

    2016-10-05

    Automated analysis of facial expressions can benefit many domains, from marketing to clinical diagnosis of neurodevelopmental disorders. Facial expressions are typically encoded as a combination of facial muscle activations, i.e., action units. Depending on context, these action units co-occur in specific patterns, and rarely in isolation. Yet, most existing methods for automatic action unit detection fail to exploit dependencies among action units and among the corresponding facial features. To address this, we propose a novel multi-conditional latent variable model for simultaneous fusion of facial features and joint action unit detection. Specifically, the proposed model performs feature fusion in a generative fashion via a low-dimensional shared subspace, while simultaneously performing action unit detection using a discriminative classification approach. We show that by combining the merits of both approaches, the proposed methodology outperforms existing purely discriminative/generative methods for the target task. To reduce the number of parameters and avoid overfitting, a novel Bayesian learning approach based on Monte Carlo sampling is proposed to integrate out the shared subspace. We validate the proposed method on posed and spontaneous data from three publicly available datasets (CK+, DISFA and Shoulder-pain), and show that both feature fusion and joint learning of action units lead to improved performance compared with state-of-the-art methods for the task.
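
    A minimal sketch of the general recipe described here: fuse two facial feature views through a low-dimensional shared subspace, then detect several action units (AUs) jointly as a multi-label problem. It uses off-the-shelf CCA plus logistic regression on synthetic data, not the authors' multi-conditional latent variable model or their Bayesian Monte Carlo learning; every name and dimension below is an assumption.

    ```python
    # Shared-subspace feature fusion + joint multi-label AU detection (simplified).
    import numpy as np
    from sklearn.cross_decomposition import CCA
    from sklearn.linear_model import LogisticRegression
    from sklearn.multioutput import MultiOutputClassifier

    rng = np.random.default_rng(0)
    n = 300
    X_geom = rng.normal(size=(n, 40))  # view 1: e.g., landmark features (synthetic)
    X_app = rng.normal(size=(n, 60))   # view 2: e.g., appearance features (synthetic)
    Y = (rng.random((n, 5)) < 0.3).astype(int)  # 5 AU labels, possibly co-occurring

    # Fuse the two views via a shared low-dimensional subspace.
    cca = CCA(n_components=8).fit(X_geom, X_app)
    Z_geom, Z_app = cca.transform(X_geom, X_app)
    Z = np.hstack([Z_geom, Z_app])     # fused representation

    # Detect all AUs jointly from the fused features.
    clf = MultiOutputClassifier(LogisticRegression(max_iter=1000)).fit(Z, Y)
    print(clf.predict(Z[:3]))
    ```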

  4. Cerebro-facio-thoracic dysplasia (Pascual-Castroviejo syndrome): Identification of a novel mutation, use of facial recognition analysis, and review of the literature.

    PubMed

    Tender, Jennifer A F; Ferreira, Carlos R

    2018-04-13

    BACKGROUND: Cerebro-facio-thoracic dysplasia (CFTD) is a rare, autosomal recessive disorder characterized by facial dysmorphism, cognitive impairment and distinct skeletal anomalies and has been linked to the TMCO1 defect syndrome. OBJECTIVE: To describe two siblings with features consistent with CFTD with a novel homozygous p.Arg114* pathogenic variant in the TMCO1 gene. METHODS: We conducted a literature review and summarized the clinical features and laboratory results of two siblings with a novel pathogenic variant in the TMCO1 gene. Facial recognition analysis was utilized to assess the specificity of facial traits. CONCLUSION: The novel homozygous p.Arg114* pathogenic variant in the TMCO1 gene is responsible for the clinical features of CFTD in two siblings. Facial recognition analysis allows unambiguous distinction of this syndrome against controls.

  5. Face processing in chronic alcoholism: a specific deficit for emotional features.

    PubMed

    Maurage, P; Campanella, S; Philippot, P; Martin, S; de Timary, P

    2008-04-01

    It is well established that chronic alcoholism is associated with a deficit in the decoding of emotional facial expression (EFE). Nevertheless, it is still unclear whether this deficit is specific to emotions or due to a more general impairment in visual or facial processing. This study was designed to clarify this issue using multiple control tasks and the subtraction method. Eighteen patients suffering from chronic alcoholism and 18 matched healthy control subjects were asked to perform several tasks evaluating (1) basic visuo-spatial and facial identity processing; (2) simple reaction times; and (3) complex facial feature identification (namely age, emotion, gender, and race). Accuracy and reaction times were recorded. Alcoholic patients showed preserved performance in visuo-spatial and facial identity processing, but impaired performance in visuo-motor abilities and in the detection of complex facial aspects. More importantly, the subtraction method showed that alcoholism is associated with a specific EFE decoding deficit, still present when visuo-motor slowing is controlled for. These results offer a post hoc confirmation of earlier data showing an EFE decoding deficit in alcoholism and strongly suggest that this deficit is specific to emotions. This may have implications for clinical situations, where emotional impairments are frequently observed among alcoholic subjects.

  6. Internal representations reveal cultural diversity in expectations of facial expressions of emotion.

    PubMed

    Jack, Rachael E; Caldara, Roberto; Schyns, Philippe G

    2012-02-01

    Facial expressions have long been considered the "universal language of emotion." Yet consistent cultural differences in the recognition of facial expressions contradict such notions (e.g., R. E. Jack, C. Blais, C. Scheepers, P. G. Schyns, & R. Caldara, 2009). Rather, culture, as an intricate system of social concepts and beliefs, could generate different expectations (i.e., internal representations) of facial expression signals. To investigate, the authors used a powerful psychophysical technique (reverse correlation) to estimate the observer-specific internal representations of the 6 basic facial expressions of emotion (i.e., happy, surprise, fear, disgust, anger, and sad) in two culturally distinct groups (i.e., Western Caucasian [WC] and East Asian [EA]). Using complementary statistical image analyses, cultural specificity was directly revealed in these representations. Specifically, whereas WC internal representations predominantly featured the eyebrows and mouth, EA internal representations showed a preference for expressive information in the eye region. Closer inspection of the EA observer preference revealed a surprising feature: changes of gaze direction, shown primarily among the EA group. For the first time, it is revealed directly that culture can finely shape the internal representations of common facial expressions of emotion, challenging notions of a biologically hardwired "universal language of emotion."
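
    As a concrete toy illustration of the reverse-correlation technique named in this abstract: superimpose white noise on a base stimulus, record yes/no responses, and estimate the internal representation (the "classification image") as mean(noise | yes) minus mean(noise | no). The simulated observer, its template, and all sizes below are invented for the demonstration.

    ```python
    # Reverse correlation with a simulated observer whose template is known.
    import numpy as np

    rng = np.random.default_rng(1)
    size = 32                                  # 32x32 stimulus patch
    template = np.zeros((size, size))
    template[8:12, 6:14] = 1.0                 # observer secretly weights an "eye region"

    n_trials = 5000
    noise = rng.normal(size=(n_trials, size, size))
    # The observer answers "yes" when the noise correlates with its template.
    responses = (noise * template).sum(axis=(1, 2)) > 0

    # Classification image: it peaks where the hidden template is non-zero,
    # i.e., it recovers the internal representation from responses alone.
    ci = noise[responses].mean(axis=0) - noise[~responses].mean(axis=0)
    print(np.unravel_index(ci.argmax(), ci.shape))
    ```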

  7. Cerebro-facio-thoracic dysplasia (Pascual-Castroviejo syndrome): Identification of a novel mutation, use of facial recognition analysis, and review of the literature

    PubMed Central

    Tender, Jennifer A.F.; Ferreira, Carlos R.

    2018-01-01

    BACKGROUND: Cerebro-facio-thoracic dysplasia (CFTD) is a rare, autosomal recessive disorder characterized by facial dysmorphism, cognitive impairment and distinct skeletal anomalies and has been linked to the TMCO1 defect syndrome. OBJECTIVE: To describe two siblings with features consistent with CFTD with a novel homozygous p.Arg114* pathogenic variant in the TMCO1 gene. METHODS: We conducted a literature review and summarized the clinical features and laboratory results of two siblings with a novel pathogenic variant in the TMCO1 gene. Facial recognition analysis was utilized to assess the specificity of facial traits. CONCLUSION: The novel homozygous p.Arg114* pathogenic variant in the TMCO1 gene is responsible for the clinical features of CFTD in two siblings. Facial recognition analysis allows unambiguous distinction of this syndrome against controls. PMID:29682451

  8. Looking at faces from different angles: Europeans fixate different features in Asian and Caucasian faces.

    PubMed

    Brielmann, Aenne A; Bülthoff, Isabelle; Armann, Regine

    2014-07-01

    Race categorization of faces is a fast and automatic process and is known to affect further face processing profoundly and at the earliest stages. Whether processing of own- and other-race faces might rely on different facial cues, as indicated by diverging viewing behavior, is much under debate. We therefore aimed to investigate two open questions in our study: (1) Do observers consider information from distinct facial features informative for race categorization or do they prefer to gain global face information by fixating the geometrical center of the face? (2) Does the fixation pattern, or, if facial features are considered relevant, do these features differ between own- and other-race faces? We used eye tracking to test where European observers look when viewing Asian and Caucasian faces in a race categorization task. Importantly, in order to disentangle centrally located fixations from those towards individual facial features, we presented faces in frontal, half-profile and profile views. We found that observers showed no general bias towards looking at the geometrical center of faces, but rather directed their first fixations towards distinct facial features, regardless of face race. However, participants looked at the eyes more often in Caucasian faces than in Asian faces, and there were significantly more fixations to the nose for Asian compared to Caucasian faces. Thus, observers rely on information from distinct facial features rather than facial information gained by centrally fixating the face. To what extent specific features are looked at is determined by the face's race. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.

  9. An extensive analysis of various texture feature extractors to detect Diabetes Mellitus using facial specific regions.

    PubMed

    Shu, Ting; Zhang, Bob; Yan Tang, Yuan

    2017-04-01

    Researchers have recently discovered that Diabetes Mellitus can be detected through a non-invasive computerized method. However, the focus has been on facial block color features. In this paper, we extensively study the effects of texture features extracted from facial specific regions at detecting Diabetes Mellitus using eight texture extractors. The eight methods are from four texture feature families: (1) statistical texture feature family: Image Gray-scale Histogram, Gray-level Co-occurrence Matrix, and Local Binary Pattern; (2) structural texture feature family: Voronoi Tessellation; (3) signal processing based texture feature family: Gaussian, Steerable, and Gabor filters; and (4) model based texture feature family: Markov Random Field. In order to determine the most appropriate extractor with optimal parameter(s), various parameter settings of each extractor were tested. For each extractor, the same dataset (284 Diabetes Mellitus and 231 Healthy samples), classifiers (k-Nearest Neighbors and Support Vector Machines), and validation method (10-fold cross validation) are used. According to the experiments, the first and third families achieved a better outcome at detecting Diabetes Mellitus than the other two. The best texture feature extractor for Diabetes Mellitus detection is the Image Gray-scale Histogram with bin number = 256, obtaining an accuracy of 99.02%, a sensitivity of 99.64%, and a specificity of 98.26% by using SVM. Copyright © 2017 Elsevier Ltd. All rights reserved.
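
    A minimal sketch of the best-performing pipeline as this abstract reports it: a 256-bin gray-scale histogram per facial-region image, classified with an SVM under 10-fold cross-validation. The sample counts (284 vs. 231) follow the abstract; the images are random stand-ins, so the printed accuracy is meaningless here.

    ```python
    # Gray-scale histogram features + SVM with 10-fold cross-validation.
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    def grayscale_histogram(img, bins=256):
        """256-bin intensity histogram, normalized to sum to 1."""
        hist, _ = np.histogram(img, bins=bins, range=(0, 256))
        return hist / hist.sum()

    rng = np.random.default_rng(2)
    # Stand-in facial-region patches: 284 "Diabetes" and 231 "Healthy" samples.
    imgs = rng.integers(0, 256, size=(284 + 231, 64, 64))
    X = np.array([grayscale_histogram(im) for im in imgs])
    y = np.array([1] * 284 + [0] * 231)

    scores = cross_val_score(SVC(kernel="rbf"), X, y, cv=10)
    print(f"10-fold CV accuracy: {scores.mean():.3f}")
    ```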

  10. External facial features modify the representation of internal facial features in the fusiform face area.

    PubMed

    Axelrod, Vadim; Yovel, Galit

    2010-08-15

    Most studies of face identity have excluded external facial features by either removing them or covering them with a hat. However, external facial features may modify the representation of internal facial features. Here we assessed whether the representation of face identity in the fusiform face area (FFA), which has been primarily studied for internal facial features, is modified by differences in external facial features. We presented faces in which external and internal facial features were manipulated independently. Our findings show that the FFA was sensitive to differences in external facial features, but this effect was significantly larger when the external and internal features were aligned than misaligned. We conclude that the FFA generates a holistic representation in which the internal and the external facial features are integrated. These results indicate that to better understand real-life face recognition both external and internal features should be included. Copyright © 2010 Elsevier Inc. All rights reserved.

  11. Faces in-between: evaluations reflect the interplay of facial features and task-dependent fluency.

    PubMed

    Winkielman, Piotr; Olszanowski, Michal; Gola, Mateusz

    2015-04-01

    Facial features influence social evaluations. For example, faces are rated as more attractive and trustworthy when they have more smiling features and also more female features. However, the influence of facial features on evaluations should be qualified by the affective consequences of the fluency (cognitive ease) with which such features are processed. Further, fluency (along with its affective consequences) should depend on whether the current task highlights conflict between specific features. Four experiments are presented. In 3 experiments, participants saw faces varying in expressions ranging from pure anger, through mixed expression, to pure happiness. Perceivers first categorized faces either on a control dimension or an emotional dimension (angry/happy). Thus, the emotional categorization task made "pure" expressions fluent and "mixed" expressions disfluent. Next, participants made social evaluations. Results show that after emotional categorization, but not control categorization, targets with mixed expressions are relatively devalued. Further, this effect is mediated by categorization disfluency. Additional data from facial electromyography reveal that on a basic physiological level, affective devaluation of mixed expressions is driven by their objective ambiguity. The fourth experiment shows that the relative devaluation of mixed faces that vary in gender ambiguity requires a gender categorization task. Overall, these studies highlight that the impact of facial features on evaluation is qualified by their fluency, and that the fluency of features is a function of the current task. The discussion highlights the implications of these findings for research on emotional reactions to ambiguity. © 2015 APA, all rights reserved.

  12. Development of Sensitivity to Spacing Versus Feature Changes in Pictures of Houses: Evidence for Slow Development of a General Spacing Detection Mechanism?

    ERIC Educational Resources Information Center

    Robbins, Rachel A.; Shergill, Yaadwinder; Maurer, Daphne; Lewis, Terri L.

    2011-01-01

    Adults are expert at recognizing faces, in part because of exquisite sensitivity to the spacing of facial features. Children are poorer than adults at recognizing facial identity and less sensitive to spacing differences. Here we examined the specificity of the immaturity by comparing the ability of 8-year-olds, 14-year-olds, and adults to…

  13. Human Facial Shape and Size Heritability and Genetic Correlations.

    PubMed

    Cole, Joanne B; Manyama, Mange; Larson, Jacinda R; Liberton, Denise K; Ferrara, Tracey M; Riccardi, Sheri L; Li, Mao; Mio, Washington; Klein, Ophir D; Santorico, Stephanie A; Hallgrímsson, Benedikt; Spritz, Richard A

    2017-02-01

    The human face is an array of variable physical features that together make each of us unique and distinguishable. Striking familial facial similarities underscore a genetic component, but little is known of the genes that underlie facial shape differences. Numerous studies have estimated facial shape heritability using various methods. Here, we used advanced three-dimensional imaging technology and quantitative human genetics analysis to estimate narrow-sense heritability, heritability explained by common genetic variation, and pairwise genetic correlations of 38 measures of facial shape and size in normal African Bantu children from Tanzania. Specifically, we fit a linear mixed model of genetic relatedness between close and distant relatives to jointly estimate variance components that correspond to heritability explained by genome-wide common genetic variation and variance explained by uncaptured genetic variation, the sum representing total narrow-sense heritability. Our significant estimates for narrow-sense heritability of specific facial traits range from 28 to 67%, with horizontal measures being slightly more heritable than vertical or depth measures. Furthermore, for over half of facial traits, >90% of narrow-sense heritability can be explained by common genetic variation. We also find high absolute genetic correlation between most traits, indicating large overlap in underlying genetic loci. Not surprisingly, traits measured in the same physical orientation (i.e., both horizontal or both vertical) have high positive genetic correlations, whereas traits in opposite orientations have high negative correlations. The complex genetic architecture of facial shape informs our understanding of the intricate relationships among different facial features as well as overall facial development. Copyright © 2017 by the Genetics Society of America.
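
    In symbols (notation assumed here, not taken from the paper), the variance-component model the abstract describes is a linear mixed model with one random effect per relatedness matrix:

    ```latex
    % Sketch for one facial trait y:
    %   y = X\beta + g + u + \varepsilon,
    % g ~ N(0, \sigma_g^2 K_{SNP}):  genome-wide common genetic variation,
    % u ~ N(0, \sigma_u^2 K_{ped}):  uncaptured (close-relative) genetic variation,
    % \varepsilon ~ N(0, \sigma_e^2 I): residual.
    \[
      h^2_{\mathrm{SNP}}   = \frac{\sigma_g^2}{\sigma_g^2+\sigma_u^2+\sigma_e^2},
      \qquad
      h^2_{\mathrm{total}} = \frac{\sigma_g^2+\sigma_u^2}{\sigma_g^2+\sigma_u^2+\sigma_e^2},
      \qquad
      r_{g,ij} = \frac{\sigma_{g,ij}}{\sqrt{\sigma_{g,i}^2\,\sigma_{g,j}^2}}
    \]
    ```

    The sum of the two genetic variance components gives total narrow-sense heritability, and r_{g,ij} is the pairwise genetic correlation between traits i and j.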

  14. Consensus on Changing Trends, Attitudes, and Concepts of Asian Beauty.

    PubMed

    Liew, Steven; Wu, Woffles T L; Chan, Henry H; Ho, Wilson W S; Kim, Hee-Jin; Goodman, Greg J; Peng, Peter H L; Rogers, John D

    2016-04-01

    Asians increasingly seek non-surgical facial esthetic treatments, especially at younger ages. Published recommendations and clinical evidence mostly reference Western populations, but Asians differ from them in terms of attitudes to beauty, structural facial anatomy, and signs and rates of aging. A thorough knowledge of the key esthetic concerns and requirements for the Asian face is required to strategize appropriate facial esthetic treatments with botulinum toxin and hyaluronic acid (HA) fillers. The Asian Facial Aesthetics Expert Consensus Group met to develop consensus statements on concepts of facial beauty, key esthetic concerns, facial anatomy, and aging in Southeastern and Eastern Asians, as a prelude to developing consensus opinions on the cosmetic facial use of botulinum toxin and HA fillers in these populations. Beautiful and esthetically attractive people of all races share similarities in appearance while retaining distinct ethnic features. Asians between the third and sixth decades age well compared with age-matched Caucasians. Younger Asians' increasing requests for injectable treatments to improve facial shape and three-dimensionality often reflect a desire to correct underlying facial structural deficiencies or weaknesses that detract from ideals of facial beauty. Facial esthetic treatments in Asians are not aimed at Westernization, but rather the optimization of intrinsic Asian ethnic features, or correction of specific underlying structural features that are perceived as deficiencies. Thus, overall facial attractiveness is enhanced while retaining esthetic characteristics of Asian ethnicity. Because Asian patients age differently than Western patients, different management and treatment planning strategies are utilized.

  15. Behavioural Phenotype in Borjeson-Forssman-Lehmann Syndrome

    ERIC Educational Resources Information Center

    de Winter, C. F.; van Dijk, F.; Stolker, J. J.; Hennekam, R. C. M.

    2009-01-01

    Background: Borjeson-Forssman-Lehmann syndrome (BFLS) is an X-linked inherited disorder characterised by unusual facial features, abnormal fat distribution and intellectual disability. As many genetically determined disorders are characterised not only by physical features but also by specific behaviour, we studied whether a specific behavioural…

  16. Cultural perspectives on children's tadpole drawings: at the interface between representation and production.

    PubMed

    Gernhardt, Ariane; Rübeling, Hartmut; Keller, Heidi

    2015-01-01

    This study investigated tadpole self-drawings from 183 three- to six-year-old children living in seven cultural groups, representing three ecosocial contexts. Based on assumed general production principles, the influence of cultural norms and values upon specific characteristics of the tadpole drawings was examined. The results demonstrated that children from all cultural groups realized the body-proportion effect in the self-drawings, indicating universal production principles. However, children differed in single drawing characteristics, depending on the specific ecosocial context. Children from Western and non-Western urban educated contexts drew themselves rather tall, with many facial features, and preferred smiling facial expressions, while children from rural traditional contexts depicted themselves significantly smaller, with fewer facial details, and neutral facial expressions.

  17. Face in profile view reduces perceived facial expression intensity: an eye-tracking study.

    PubMed

    Guo, Kun; Shaw, Heather

    2015-02-01

    Recent studies measuring the facial expressions of emotion have focused primarily on the perception of frontal face images. As we frequently encounter expressive faces from different viewing angles, having a mechanism which allows invariant expression perception would be advantageous to our social interactions. Although a couple of studies have indicated comparable expression categorization accuracy across viewpoints, it is unknown how perceived expression intensity and associated gaze behaviour change across viewing angles. Differences could arise because diagnostic cues from local facial features for decoding expressions could vary with viewpoints. Here we manipulated the orientation of faces (frontal, mid-profile, and profile view) displaying six common facial expressions of emotion, and measured participants' expression categorization accuracy, perceived expression intensity and associated gaze patterns. In comparison with frontal faces, profile faces slightly reduced identification rates for disgust and sad expressions, but significantly decreased perceived intensity for all tested expressions. Although quantitatively viewpoint had an expression-specific influence on the proportion of fixations directed at local facial features, the qualitative gaze distribution within facial features (e.g., the eyes tended to attract the highest proportion of fixations, followed by the nose and then the mouth region) was independent of viewpoint and expression type. Our results suggest that facial expression processing is viewpoint-invariant and categorical, which could be linked to a viewpoint-invariant holistic gaze strategy for extracting expressive facial cues. Copyright © 2014 Elsevier B.V. All rights reserved.

  18. A new atlas for the evaluation of facial features: advantages, limits, and applicability.

    PubMed

    Ritz-Timme, Stefanie; Gabriel, Peter; Obertovà, Zuzana; Boguslawski, Melanie; Mayer, F; Drabik, A; Poppa, Pasquale; De Angelis, Danilo; Ciaffi, Romina; Zanotti, Benedetta; Gibelli, Daniele; Cattaneo, Cristina

    2011-03-01

    Methods for verifying the identity of offenders from video-surveillance images in criminal investigations are currently under scrutiny by forensic experts around the globe. The anthroposcopic, or morphological, approach based on facial features is the most frequently used by international forensic experts. However, a specific set of applicable features has not yet been agreed on by the experts. Furthermore, population frequencies of such features have not been recorded, and only a few validation tests have been published. To combat and prevent crime in Europe, the European Commission funded an extensive research project dedicated to the optimization of methods for facial identification of persons on photographs. Within this research project, standardized photographs of 900 males between 20 and 31 years of age from Germany, Italy, and Lithuania were acquired. Based on these photographs, 43 facial features were described and evaluated in detail. These efforts led to the development of a new morphologic atlas, called the DMV atlas ("Düsseldorf Milan Vilnius," from the participating cities). This study is the first attempt at verifying the feasibility of this atlas as a preliminary step to personal identification by exploring intra- and interobserver error. The analysis yielded mismatch percentages from 19% to 39%, which reflect the subjectivity of the approach and suggest caution in verifying personal identity only from the classification of facial features. Nonetheless, the use of the atlas leads to a significant improvement of consistency in the evaluation.

  19. Multiple Mechanisms in the Perception of Face Gender: Effect of Sex-Irrelevant Features

    ERIC Educational Resources Information Center

    Komori, Masashi; Kawamura, Satoru; Ishihara, Shigekazu

    2011-01-01

    Effects of sex-relevant and sex-irrelevant facial features on the evaluation of facial gender were investigated. Participants rated masculinity of 48 male facial photographs and femininity of 48 female facial photographs. Eighty feature points were measured on each of the facial photographs. Using a generalized Procrustes analysis, facial shapes…

  20. Facial Cosmetics Exert a Greater Influence on Processing of the Mouth Relative to the Eyes: Evidence from the N170 Event-Related Potential Component.

    PubMed

    Tanaka, Hideaki

    2016-01-01

    Cosmetic makeup significantly influences facial perception. Because faces consist of similar physical structures, cosmetic makeup is typically used to highlight individual features, particularly those of the eyes (i.e., eye shadow) and mouth (i.e., lipstick). Though event-related potentials have been utilized to study various aspects of facial processing, the influence of cosmetics on specific ERP components remains unclear. The present study aimed to investigate the relationship between the application of cosmetic makeup and the amplitudes of the P1 and N170 event-related potential components during facial perception tasks. Moreover, the influence of visual perception on N170 amplitude was evaluated under three makeup conditions: Eye Shadow, Lipstick, and No Makeup. Electroencephalography was used to monitor 17 participants who were exposed to visual stimuli under each of these three makeup conditions. The results of the present study subsequently demonstrated that the Lipstick condition elicited a significantly greater N170 amplitude than the No Makeup condition, while P1 amplitude was unaffected by any of the conditions. Such findings indicate that the application of cosmetic makeup alters general facial perception but exerts no influence on the perception of low-level visual features. Collectively, these results support the notion that the application of makeup induces subtle alterations in the processing of facial stimuli, with a particular effect on the processing of specific facial components (i.e., the mouth), as reflected by changes in N170 amplitude.
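
    A minimal, assumption-laden sketch of the N170 measurement this abstract implies: average the EEG epochs per makeup condition and take the most negative deflection in a 130-200 ms window at an occipito-temporal channel. The sampling rate, window, epoch counts, and data below are illustrative, not the paper's settings.

    ```python
    # N170 amplitude = most negative value of the trial-averaged ERP in a window.
    import numpy as np

    fs = 500                                  # sampling rate in Hz (assumed)
    t = np.arange(-0.1, 0.4, 1 / fs)          # epoch from -100 to 400 ms

    def n170_amplitude(epochs, t, window=(0.130, 0.200)):
        erp = epochs.mean(axis=0)             # average over trials
        mask = (t >= window[0]) & (t <= window[1])
        return erp[mask].min()

    rng = np.random.default_rng(3)
    results = {}
    for name in ["Lipstick", "Eye Shadow", "No Makeup"]:
        # Stand-in data: noise plus a negative bump near 170 ms.
        bump = -2e-6 * np.exp(-((t - 0.17) ** 2) / (2 * 0.02 ** 2))
        epochs = bump + rng.normal(scale=1e-6, size=(60, t.size))
        results[name] = n170_amplitude(epochs, t)
    print(results)
    ```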

  21. Facial Cosmetics Exert a Greater Influence on Processing of the Mouth Relative to the Eyes: Evidence from the N170 Event-Related Potential Component

    PubMed Central

    Tanaka, Hideaki

    2016-01-01

    Cosmetic makeup significantly influences facial perception. Because faces consist of similar physical structures, cosmetic makeup is typically used to highlight individual features, particularly those of the eyes (i.e., eye shadow) and mouth (i.e., lipstick). Though event-related potentials have been utilized to study various aspects of facial processing, the influence of cosmetics on specific ERP components remains unclear. The present study aimed to investigate the relationship between the application of cosmetic makeup and the amplitudes of the P1 and N170 event-related potential components during facial perception tasks. Moreover, the influence of visual perception on N170 amplitude was evaluated under three makeup conditions: Eye Shadow, Lipstick, and No Makeup. Electroencephalography was used to monitor 17 participants who were exposed to visual stimuli under each of these three makeup conditions. The results of the present study subsequently demonstrated that the Lipstick condition elicited a significantly greater N170 amplitude than the No Makeup condition, while P1 amplitude was unaffected by any of the conditions. Such findings indicate that the application of cosmetic makeup alters general facial perception but exerts no influence on the perception of low-level visual features. Collectively, these results support the notion that the application of makeup induces subtle alterations in the processing of facial stimuli, with a particular effect on the processing of specific facial components (i.e., the mouth), as reflected by changes in N170 amplitude. PMID:27656161

  22. Reduced Reliance on Optimal Facial Information for Identity Recognition in Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Leonard, Hayley C.; Annaz, Dagmara; Karmiloff-Smith, Annette; Johnson, Mark H.

    2013-01-01

    Previous research into face processing in autism spectrum disorder (ASD) has revealed atypical biases toward particular facial information during identity recognition. Specifically, a focus on features (or high spatial frequencies [HSFs]) has been reported for both face and nonface processing in ASD. The current study investigated the development…

  23. Extraction and representation of common feature from uncertain facial expressions with cloud model.

    PubMed

    Wang, Shuliang; Chi, Hehua; Yuan, Hanning; Geng, Jing

    2017-12-01

    Human facial expressions are a key ingredient for conveying an individual's innate emotion in communication. However, the variation of facial expressions affects the reliable identification of human emotions. In this paper, we present a cloud model to extract facial features for representing human emotion. First, the uncertainties in facial expression are analyzed in the context of the cloud model. The feature extraction and representation algorithm is then established on cloud generators. With the forward cloud generator, arbitrarily many facial expression images can be re-generated to visually represent the three extracted features, each of which plays a different role. The effectiveness of the computing model is tested on the Japanese Female Facial Expression database. Three common features are extracted from seven facial expression images. Finally, concluding remarks are given.
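
    The forward normal cloud generator this abstract relies on is compact enough to sketch directly: a concept is parameterized by (Ex, En, He), i.e. expected value, entropy, and hyper-entropy, and "cloud drops" are sampled by first drawing a per-drop entropy. Parameter values here are illustrative, not taken from the paper.

    ```python
    # Forward normal cloud generator: (Ex, En, He) -> cloud drops (x, membership).
    import numpy as np

    def forward_cloud(Ex, En, He, n_drops, rng):
        En_prime = rng.normal(En, He, size=n_drops)          # per-drop entropy
        x = rng.normal(Ex, np.abs(En_prime))                 # drop positions
        mu = np.exp(-((x - Ex) ** 2) / (2 * En_prime ** 2))  # membership degree
        return x, mu

    rng = np.random.default_rng(4)
    x, mu = forward_cloud(Ex=0.5, En=0.1, He=0.02, n_drops=1000, rng=rng)
    print(x[:5], mu[:5])
    ```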

  24. Image ratio features for facial expression recognition application.

    PubMed

    Song, Mingli; Tao, Dacheng; Liu, Zicheng; Li, Xuelong; Zhou, Mengchu

    2010-06-01

    Video-based facial expression recognition is a challenging problem in computer vision and human-computer interaction. To address this problem, texture features have been extracted and widely used, because they can capture image intensity changes caused by skin deformation. However, existing texture features encounter problems with albedo and lighting variations. To solve both problems, we propose a new texture feature called image ratio features. Compared with previously proposed texture features, e.g., high gradient component features, image ratio features are more robust to albedo and lighting variations. In addition, to further improve facial expression recognition accuracy based on image ratio features, we combine image ratio features with facial animation parameters (FAPs), which describe the geometric motions of facial feature points. The performance evaluation is based on the Carnegie Mellon University Cohn-Kanade database, our own database, and the Japanese Female Facial Expression database. Experimental results show that the proposed image ratio feature is more robust to albedo and lighting variations, and the combination of image ratio features and FAPs outperforms each feature alone. In addition, we study asymmetric facial expressions based on our own facial expression database and demonstrate the superior performance of our combined expression recognition system.
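
    The intuition behind image ratio features fits in a few lines: if pixel intensity is roughly albedo times shading (a Lambertian assumption), dividing an expression frame by a neutral frame of the same person cancels the albedo and isolates deformation-driven intensity change. The arrays below are synthetic stand-ins, not the paper's data.

    ```python
    # Ratio of expression to neutral intensity cancels per-pixel albedo.
    import numpy as np

    def image_ratio_features(expression_img, neutral_img, eps=1e-6):
        return expression_img / (neutral_img + eps)   # eps guards divide-by-zero

    rng = np.random.default_rng(5)
    albedo = rng.uniform(0.2, 1.0, size=(64, 64))     # differs across people
    neutral = albedo * 0.8                            # neutral-face shading
    smiling = albedo * 0.8 * (1 + 0.1 * rng.standard_normal((64, 64)))

    ratio = image_ratio_features(smiling, neutral)    # albedo divides out
    print(ratio.mean())                               # ~1 plus deformation term
    ```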

  25. Non-invasive health status detection system using Gabor filters based on facial block texture features.

    PubMed

    Shu, Ting; Zhang, Bob

    2015-04-01

    Blood tests allow doctors to check for certain diseases and conditions. However, using a syringe to extract the blood can be deemed invasive, slightly painful, and its analysis time consuming. In this paper, we propose a new non-invasive system to detect the health status (Healthy or Diseased) of an individual based on facial block texture features extracted using the Gabor filter. Our system first uses a non-invasive capture device to collect facial images. Next, four facial blocks are located on these images to represent them. Afterwards, each facial block is convolved with a Gabor filter bank to calculate its texture value. Classification is finally performed using K-Nearest Neighbor and Support Vector Machines via LIBSVM (a Library for Support Vector Machines, with four kernel functions). The system was tested on a dataset consisting of 100 Healthy and 100 Diseased (with 13 forms of illnesses) samples. Experimental results show that the proposed system can detect the health status with an accuracy of 93%, a sensitivity of 94%, and a specificity of 92%, using a combination of the Gabor filters and facial blocks.
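
    A rough sketch of the block-texture step as described: convolve each facial block with a small Gabor filter bank and summarize the response magnitudes into a feature vector. The kernel form, bank parameters, and block size are assumptions, not the paper's settings.

    ```python
    # Gabor filter bank -> one texture value per orientation for a facial block.
    import numpy as np
    from scipy.ndimage import convolve

    def gabor_kernel(theta, lam=8.0, sigma=3.0, size=15):
        """Real part of a Gabor kernel with orientation theta (radians)."""
        half = size // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        xr = x * np.cos(theta) + y * np.sin(theta)
        yr = -x * np.sin(theta) + y * np.cos(theta)
        return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

    def block_texture_features(block, n_orientations=4):
        feats = []
        for k in range(n_orientations):
            resp = convolve(block.astype(float), gabor_kernel(np.pi * k / n_orientations))
            feats.append(np.abs(resp).mean())    # mean response magnitude
        return np.array(feats)

    rng = np.random.default_rng(6)
    facial_block = rng.integers(0, 256, size=(48, 48))   # stand-in facial block
    print(block_texture_features(facial_block))
    ```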

  26. Facial measurements for frame design.

    PubMed

    Tang, C Y; Tang, N; Stewart, M C

    1998-04-01

    Anthropometric data for the purpose of spectacle frame design are scarce in the literature. Definitions of facial features to be measured with existing systems of facial measurement are often not specific enough for frame design and manufacturing. Currently, for individual frame design, experienced personnel collect data with facial rules or instruments. A new measuring system is proposed, making use of a template in the form of a spectacle frame. Upon fitting the template onto a subject, most of the measuring references can be defined. Such a system can be administered by less extensively trained personnel and can be used for research covering a larger population.

  27. Association of Frontal and Lateral Facial Attractiveness.

    PubMed

    Gu, Jeffrey T; Avilla, David; Devcic, Zlatko; Karimi, Koohyar; Wong, Brian J F

    2018-01-01

    Importance: Despite the large number of studies focused on defining frontal or lateral facial attractiveness, no reports have examined whether a significant association between frontal and lateral facial attractiveness exists. Objective: To examine the association between frontal and lateral facial attractiveness and to identify anatomical features that may influence discordance between frontal and lateral facial beauty. Design, Setting, and Participants: Paired frontal and lateral facial synthetic images of 240 white women (age range, 18-25 years) were evaluated from September 30, 2004, to September 29, 2008, using an internet-based focus group (n = 600) on an attractiveness Likert scale of 1 to 10, with 1 being least attractive and 10 being most attractive. Data analysis was performed from December 6, 2016, to March 30, 2017. The association between frontal and lateral attractiveness scores was determined using linear regression. Outliers were defined as data outside the 95% individual prediction interval. To identify features that contribute to score discordance between frontal and lateral attractiveness scores, each of these image pairs was scrutinized by an evaluator panel for facial features that were present in the frontal or lateral projections and absent in the other respective facial projections. Main Outcomes and Measures: Attractiveness scores obtained from internet-based focus groups. Results: For the 240 white women studied (mean [SD] age, 21.4 [2.2] years), attractiveness scores ranged from 3.4 to 9.5 for frontal images and 3.3 to 9.4 for lateral images. The mean (SD) frontal attractiveness score was 6.9 (1.4), whereas the mean (SD) lateral attractiveness score was 6.4 (1.3). Simple linear regression of frontal and lateral attractiveness scores resulted in a coefficient of determination of r² = 0.749. Eight outlier pairs were identified and analyzed by panel evaluation. Panel evaluation revealed no clinically applicable association between frontal and lateral images among outliers; however, contributory facial features were suggested. Thin upper lip, convex nose, and blunt cervicomental angle were suggested by evaluators as facial characteristics that contributed to outlier frontal or lateral attractiveness scores. Conclusions and Relevance: This study identified a strong linear association between frontal and lateral facial attractiveness. Furthermore, specific facial landmarks responsible for the discordance between frontal and lateral facial attractiveness scores were suggested. Additional studies are necessary to determine whether correction of these landmarks may increase facial harmony and attractiveness.
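
    The outlier analysis is reproducible in outline: regress lateral on frontal scores, compute r², and flag pairs falling outside the 95% individual prediction interval. The data below are synthetic, so the printed r² will not match the published 0.749.

    ```python
    # Linear regression + 95% individual prediction interval for outlier flagging.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    frontal = rng.uniform(3.4, 9.5, size=240)
    lateral = 0.8 * frontal + 0.9 + rng.normal(scale=0.6, size=240)

    res = stats.linregress(frontal, lateral)
    pred = res.intercept + res.slope * frontal
    n = frontal.size
    resid_sd = np.sqrt(np.sum((lateral - pred) ** 2) / (n - 2))
    sxx = np.sum((frontal - frontal.mean()) ** 2)
    # Half-width of the 95% *individual* prediction interval at each x:
    half = stats.t.ppf(0.975, n - 2) * resid_sd * np.sqrt(
        1 + 1 / n + (frontal - frontal.mean()) ** 2 / sxx)
    outliers = np.abs(lateral - pred) > half
    print(f"r^2 = {res.rvalue**2:.3f}, outlier pairs: {outliers.sum()}")
    ```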

  28. Cultural perspectives on children’s tadpole drawings: at the interface between representation and production

    PubMed Central

    Gernhardt, Ariane; Rübeling, Hartmut; Keller, Heidi

    2015-01-01

    This study investigated tadpole self-drawings from 183 three- to six-year-old children living in seven cultural groups, representing three ecosocial contexts. Based on assumed general production principles, the influence of cultural norms and values upon specific characteristics of the tadpole drawings was examined. The results demonstrated that children from all cultural groups realized the body-proportion effect in the self-drawings, indicating universal production principles. However, children differed in single drawing characteristics, depending on the specific ecosocial context. Children from Western and non-Western urban educated contexts drew themselves rather tall, with many facial features, and preferred smiling facial expressions, while children from rural traditional contexts depicted themselves significantly smaller, with fewer facial details, and neutral facial expressions. PMID:26136707

  29. Differences in the Communication of Affect: Members of the Same Race versus Members of a Different Race.

    ERIC Educational Resources Information Center

    Weathers, Monica D.; Frank, Elaine M.; Spell, Leigh Ann

    2002-01-01

    Examined African Americans' and Whites' ability to recognize facial expressions and vocal prosody of predominantly white stimuli at three age groups (children, young adults, and adults). Race was a significant factor in interpreting facial expressions and prosodic features. Individuals from specific ethnic groups were most accurate in decoding…

  30. Contextual interference processing during fast categorisations of facial expressions.

    PubMed

    Frühholz, Sascha; Trautmann-Lengsfeld, Sina A; Herrmann, Manfred

    2011-09-01

    We examined interference effects of emotionally associated background colours during fast valence categorisations of negative, neutral and positive expressions. According to implicitly learned colour-emotion associations, facial expressions were presented with colours that either matched the valence of these expressions or not. Experiment 1 included infrequent non-matching trials and Experiment 2 a balanced ratio of matching and non-matching trials. Besides general modulatory effects of contextual features on the processing of facial expressions, we found differential effects depending on the valence of target facial expressions. Whereas performance accuracy was mainly affected for neutral expressions, performance speed was specifically modulated by emotional expressions indicating some susceptibility of emotional expressions to contextual features. Experiment 3 used two further colour-emotion combinations, but revealed only marginal interference effects most likely due to missing colour-emotion associations. The results are discussed with respect to inherent processing demands of emotional and neutral expressions and their susceptibility to contextual interference.

  31. FaceWarehouse: a 3D facial expression database for visual computing.

    PubMed

    Cao, Chen; Weng, Yanlin; Zhou, Shun; Tong, Yiying; Zhou, Kun

    2014-03-01

    We present FaceWarehouse, a database of 3D facial expressions for visual computing applications. We use Kinect, an off-the-shelf RGBD camera, to capture 150 individuals aged 7-80 from various ethnic backgrounds. For each person, we captured the RGBD data of her different expressions, including the neutral expression and 19 other expressions such as mouth-opening, smile, kiss, etc. For every RGBD raw data record, a set of facial feature points on the color image such as eye corners, mouth contour, and the nose tip are automatically localized, and manually adjusted if better accuracy is required. We then deform a template facial mesh to fit the depth data as closely as possible while matching the feature points on the color image to their corresponding points on the mesh. Starting from these fitted face meshes, we construct a set of individual-specific expression blendshapes for each person. These meshes with consistent topology are assembled as a rank-3 tensor to build a bilinear face model with two attributes: identity and expression. Compared with previous 3D facial databases, for every person in our database, there is a much richer matching collection of expressions, enabling depiction of most human facial actions. We demonstrate the potential of FaceWarehouse for visual computing with four applications: facial image manipulation, face component transfer, real-time performance-based facial image animation, and facial animation retargeting from video to image.
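
    The bilinear model in the closing sentences can be sketched as a rank-3 tensor contraction: meshes with consistent topology are stacked into a data tensor (vertices × identity × expression), which is contracted with identity and expression weight vectors to synthesize a face. Dimensions below are illustrative, and the real pipeline first compresses the tensor (e.g., via higher-order SVD), which this sketch skips.

    ```python
    # Bilinear face model: V = C x_id w_id x_exp w_exp (mode contractions).
    import numpy as np

    rng = np.random.default_rng(8)
    n_vertices, n_id, n_exp = 3000, 150, 20             # 150 people x 20 expressions
    core = rng.normal(size=(n_vertices, n_id, n_exp))   # stand-in data tensor

    def synthesize(core, w_id, w_exp):
        out = np.tensordot(core, w_exp, axes=(2, 0))    # contract expression mode
        return np.tensordot(out, w_id, axes=(1, 0))     # contract identity mode

    w_id = rng.dirichlet(np.ones(n_id))                 # blend of identities
    w_exp = np.zeros(n_exp)
    w_exp[3] = 1.0                                      # pick one expression slot
    vertices = synthesize(core, w_id, w_exp)            # flattened face geometry
    print(vertices.shape)
    ```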

  32. Single trial classification for the categories of perceived emotional facial expressions: an event-related fMRI study

    NASA Astrophysics Data System (ADS)

    Song, Sutao; Huang, Yuxia; Long, Zhiying; Zhang, Jiacai; Chen, Gongxiang; Wang, Shuqing

    2016-03-01

    Recently, several studies have successfully applied multivariate pattern analysis methods to predict the categories of emotions. These studies are mainly focused on self-experienced emotions, such as the emotional states elicited by music or movies. In fact, most of our social interactions involve perception of emotional information from the expressions of other people, and it is an important basic skill for humans to recognize the emotional facial expressions of other people in a short time. In this study, we aimed to determine the discriminability of perceived emotional facial expressions. In a rapid event-related fMRI design, subjects were instructed to classify four categories of facial expressions (happy, disgust, angry and neutral) by pressing different buttons, and each facial expression stimulus lasted for 2 s. All participants performed 5 fMRI runs. One multivariate pattern analysis method, the support vector machine, was trained to predict the categories of facial expressions. For feature selection, ninety masks defined from the anatomical automatic labeling (AAL) atlas were first generated and each was treated as the input of the classifier; then, the most stable AAL areas were selected according to prediction accuracies and comprised the final feature sets. Results showed that for the 6 pair-wise classification conditions, the accuracy, sensitivity and specificity were all above chance prediction, among which happy vs. neutral and angry vs. disgust achieved the lowest results. These results suggest that specific neural signatures of perceived emotional facial expressions may exist, and that happy vs. neutral and angry vs. disgust might be more similar in information representation in the brain.
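
    A simplified sketch of the feature-selection scheme described above: score each of the 90 AAL masks by cross-validated SVM accuracy, keep the most stable (highest-scoring) masks, and classify on their combined voxels. Synthetic arrays stand in for the fMRI patterns, and all sizes are assumptions.

    ```python
    # Per-mask SVM scoring, then classification on the selected masks' voxels.
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    rng = np.random.default_rng(9)
    n_trials, n_masks, voxels_per_mask = 120, 90, 30
    X_masks = [rng.normal(size=(n_trials, voxels_per_mask)) for _ in range(n_masks)]
    y = rng.integers(0, 2, size=n_trials)      # e.g., happy vs. neutral trials

    # Score each anatomical mask separately as classifier input.
    mask_scores = np.array(
        [cross_val_score(SVC(kernel="linear"), X, y, cv=5).mean() for X in X_masks])
    top = np.argsort(mask_scores)[-5:]         # keep the 5 best-scoring masks
    X_final = np.hstack([X_masks[i] for i in top])
    final = cross_val_score(SVC(kernel="linear"), X_final, y, cv=5).mean()
    print("selected masks:", top, "final accuracy:", round(final, 3))
    ```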

  33. Shy children are less sensitive to some cues to facial recognition.

    PubMed

    Brunet, Paul M; Mondloch, Catherine J; Schmidt, Louis A

    2010-02-01

    Temperamental shyness in children is characterized by avoidance of faces and eye contact, beginning in infancy. We conducted two studies to determine whether temperamental shyness was associated with deficits in sensitivity to some cues to facial identity. In Study 1, 40 typically developing 10-year-old children made same/different judgments about pairs of faces that differed in the appearance of individual features, the shape of the external contour, or the spacing among features; their parent completed the Colorado childhood temperament inventory (CCTI). Children who scored higher on CCTI shyness made more errors than their non-shy counterparts only when discriminating faces based on the spacing of features. Differences in accuracy were not related to other scales of the CCTI. In Study 2, we showed that these differences were face-specific and cannot be attributed to differences in task difficulty. Findings suggest that shy children are less sensitive to some cues to facial recognition possibly underlying their inability to distinguish certain facial emotions in others, leading to a cascade of secondary negative effects in social behaviour.

  34. Ontogenetic and static allometry in the human face: contrasting Khoisan and Inuit.

    PubMed

    Freidline, Sarah E; Gunz, Philipp; Hublin, Jean-Jacques

    2015-09-01

    Regional differences in modern human facial features are present at birth, and ontogenetic allometry contributes to variation in adults. However, details regarding differential rates of growth and timing among regional groups are lacking. We explore ontogenetic and static allometry in a cross-sectional sample spanning Africa, Europe and North America, and evaluate tempo and mode in two regional groups with very different adult facial morphology, the Khoisan and Inuit. Semilandmark geometric morphometric methods, multivariate statistics and growth simulations were used to quantify and compare patterns of facial growth and development. Region-specific facial morphology develops early in ontogeny. The Inuit has the most distinct morphology and exhibits heterochronic differences in development compared to other regional groups. Allometric patterns differ during early postnatal development, when significant increases in size are coupled with large amounts of shape changes. All regional groups share a common adult static allometric trajectory, which can be attributed to sexual dimorphism, and the corresponding allometric shape changes resemble developmental patterns during later ontogeny. The amount and pattern of growth and development may not be shared between regional groups, indicating that a certain degree of flexibility is allowed for in order to achieve adult size. In early postnatal development the face is less constrained compared to other parts of the cranium, allowing for greater evolvability. The early development of region-specific facial features combined with heterochronic differences in timing or rate of growth, reflected in differences in facial size, suggest different patterns of postnatal growth. © 2015 Wiley Periodicals, Inc.

  35. Obstructive Sleep Apnea in Women: Study of Speech and Craniofacial Characteristics

    PubMed Central

    Tyan, Marina; Fernández Pozo, Rubén; Toledano, Doroteo; Lopez Gonzalo, Eduardo; Alcazar Ramirez, Jose Daniel; Hernandez Gomez, Luis Alfonso

    2017-01-01

    Background: Obstructive sleep apnea (OSA) is a common sleep disorder characterized by frequent cessation of breathing lasting 10 seconds or longer. The diagnosis of OSA is performed through an expensive procedure, which requires an overnight stay at the hospital. This has led to several proposals based on the analysis of patients’ facial images and speech recordings as an attempt to develop simpler and cheaper methods to diagnose OSA. Objective: The objective of this study was to analyze possible relationships between OSA and speech and facial features in a female population, to examine whether these possible connections may be affected by the specific clinical characteristics of the OSA population and, more specifically, to explore how the connection between OSA and speech and facial features can be affected by gender. Methods: All subjects were Spanish patients suspected of suffering from OSA and referred to a sleep disorders unit. Voice recordings and photographs were collected in a supervised but not highly controlled way, to test a scenario close to realistic clinical practice in which OSA is assessed using an app running on a mobile device. Furthermore, clinical variables such as weight, height, age, and cervical perimeter, which are usually reported as predictors of OSA, were also gathered. Acoustic analysis centered on sustained vowels. Facial analysis consisted of a set of local craniofacial features related to OSA, which were extracted from images after detecting facial landmarks using active appearance models. To study the probable connection of OSA with speech and craniofacial features, correlations among the apnea-hypopnea index (AHI), clinical variables, and acoustic and facial measurements were analyzed. Results: The results obtained for the female population indicate mainly weak correlations (r values between .20 and .39). Correlations between AHI, clinical variables, and speech features show the prevalence of formant frequencies over bandwidths, with F2/i/ being the most appropriate formant frequency for OSA prediction in women. Results obtained for the male population indicate mainly very weak correlations (r values between .01 and .19). In this case, bandwidths prevail over formant frequencies. Correlations between AHI, clinical variables, and craniofacial measurements are very weak. Conclusions: In accordance with previous studies, some clinical variables are found to be good predictors of OSA. In addition, correlations are found between AHI and some clinical variables, as well as between AHI and speech and facial features. Regarding speech features, the results show the prevalence of the formant frequency F2/i/ over the remaining features as an OSA-predictive feature in the female population. Although the correlation reported is weak, this study aims to find traces that could explain the possible connection between OSA and speech in women. In the case of craniofacial measurements, the results show that some features that can be used for predicting OSA in male patients are not suitable for testing the female population. PMID:29109068

  36. Novel Noninvasive Brain Disease Detection System Using a Facial Image Sensor

    PubMed Central

    Shu, Ting; Zhang, Bob; Tang, Yuan Yan

    2017-01-01

    Brain disease, including any condition or disability that affects the brain, is fast becoming a leading cause of death. The traditional diagnostic methods of brain disease are time-consuming, inconvenient, and not patient friendly. As more and more individuals undergo examinations to determine if they suffer from any form of brain disease, developing noninvasive, efficient, and patient friendly detection systems will be beneficial. Therefore, in this paper, we propose a novel noninvasive brain disease detection system based on the analysis of facial colors. The system consists of four components. A facial image is first captured through a specialized sensor, after which four facial key blocks are located automatically from the various facial regions. Color features are extracted from each block to form a feature vector for classification via the Probabilistic Collaborative based Classifier. To thoroughly test the system and its performance, seven facial key block combinations were evaluated experimentally. The best result was achieved using the second facial key block, which showed that the Probabilistic Collaborative based Classifier is the most suitable. The overall performance of the proposed system achieves an accuracy of 95%, a sensitivity of 94.33%, a specificity of 95.67%, and an average processing time (for one sample) of <1 min for brain disease detection. PMID:29292716
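
    The "Probabilistic Collaborative based Classifier" named here builds on collaborative representation classification (CRC). As a hedged stand-in, this is plain CRC: ridge-code a test feature vector over all training samples, then assign the class with the smallest class-wise reconstruction residual. Feature dimensions and data are invented, and the probabilistic refinement is omitted.

    ```python
    # Collaborative representation classification (base form, not the paper's variant).
    import numpy as np

    def crc_predict(D, labels, y, lam=0.01):
        """D: (d, n) dictionary of training columns; y: (d,) test vector."""
        alpha = np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ y)
        residuals = {}
        for c in np.unique(labels):
            mask = labels == c
            residuals[c] = np.linalg.norm(y - D[:, mask] @ alpha[mask])
        return min(residuals, key=residuals.get)   # class with smallest residual

    rng = np.random.default_rng(10)
    d, n_per_class = 24, 40                        # 24-dim color features per key block
    healthy = rng.normal(0.0, 1.0, size=(d, n_per_class))
    diseased = rng.normal(0.8, 1.0, size=(d, n_per_class))
    D = np.hstack([healthy, diseased])
    labels = np.array([0] * n_per_class + [1] * n_per_class)
    print(crc_predict(D, labels, diseased[:, 0] + 0.1 * rng.normal(size=d)))
    ```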

  37. A case definition and photographic screening tool for the facial phenotype of fetal alcohol syndrome.

    PubMed

    Astley, S J; Clarren, S K

    1996-07-01

    The purpose of this study was to demonstrate that a quantitative, multivariate case definition of the fetal alcohol syndrome (FAS) facial phenotype could be derived from photographs of individuals with FAS and to demonstrate how this case definition and photographic approach could be used to develop efficient, accurate, and precise screening tools, diagnostic aids, and possibly surveillance tools. Frontal facial photographs of 42 subjects (from birth to 27 years of age) with FAS were matched to 84 subjects without FAS. The study population was randomly divided in half. Group 1 was used to identify the facial features that best differentiated individuals with and without FAS. Group 2 was used for cross validation. In group 1, stepwise discriminant analysis identified three facial features (reduced palpebral fissure length/inner canthal distance ratio, smooth philtrum, and thin upper lip) as the cluster of features that differentiated individuals with and without FAS in groups 1 and 2 with 100% accuracy. Sensitivity and specificity were unaffected by race, gender, and age. The phenotypic case definition derived from photographs accurately distinguished between individuals with and without FAS, demonstrating the potential of this approach for developing screening, diagnostic, and surveillance tools. Further evaluation of the validity and generalizability of this method will be needed.
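
    A sketch of the kind of discriminant model the abstract describes, using the three features it identifies (palpebral fissure length/inner canthal distance ratio, philtrum smoothness, upper-lip thinness). Plain linear discriminant analysis stands in for the stepwise procedure, and all feature encodings and data are invented; only the 42 vs. 84 group sizes come from the abstract.

    ```python
    # LDA on three FAS-related facial measurements (synthetic illustration).
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    rng = np.random.default_rng(11)
    n = 42
    # Columns: PFL/ICD ratio, philtrum smoothness rank, upper-lip thinness rank.
    fas = np.column_stack([rng.normal(0.85, 0.05, n),
                           rng.normal(4.2, 0.6, n),
                           rng.normal(4.0, 0.7, n)])
    controls = np.column_stack([rng.normal(1.00, 0.05, 2 * n),
                                rng.normal(2.5, 0.8, 2 * n),
                                rng.normal(2.6, 0.8, 2 * n)])
    X = np.vstack([fas, controls])
    y = np.array([1] * n + [0] * 2 * n)
    lda = LinearDiscriminantAnalysis().fit(X, y)
    print("training accuracy:", lda.score(X, y))
    ```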

  38. Specific aspects of a combined approach to male face correction: botulinum toxin A and volumetric fillers.

    PubMed

    Scherer, Max-Adam

    2016-12-01

    Over the last decade, cosmetologists have faced a steadily increasing number of male patients. The necessity of a gender-adjusted approach to treating this patient category is obvious. Adequate correction requires consideration of the anatomic and physiologic features of male faces, together with a whole set of interrelated aspects: the psychologic perception of male facial esthetics, socially formed understandings of masculine features and appropriate emotional expressions, and the motivations and expectations of men who consult a cosmetologist. The author explains in detail methods of combined male face correction, developed from extensive personal experience, that use this gender-specific approach to create a natural-looking and harmonious facial expression and appearance. The botulinum therapy specifics presented concern the location of injection points and the toxin dose for each point. As a result, a distinct smoothing of the skin profile is achieved without detriment to facial expressiveness or gender-related features. The importance of, and methods for, an extremely delicate approach to volumetric plasty with stabilized hyaluronic acid-based fillers in men, avoiding hypercorrection and retaining gender-specific features, are also discussed. © 2016 Wiley Periodicals, Inc.

  19. Qualitative and Quantitative Analysis for Facial Complexion in Traditional Chinese Medicine

    PubMed Central

    Zhao, Changbo; Li, Guo-zheng; Li, Fufeng; Wang, Zhi; Liu, Chang

    2014-01-01

    Facial diagnosis is an important and very intuitive diagnostic method in Traditional Chinese Medicine (TCM). However, because of its qualitative and experience-based subjective character, traditional facial diagnosis has certain limitations in clinical medicine. Computerized inspection methods provide classification models to recognize facial complexion (including color and gloss). However, previous works have studied only the classification problem of facial complexion, which we consider qualitative analysis. Quantitative analysis, i.e., the severity or degree of facial complexion, has not yet been reported. This paper aims to provide both qualitative and quantitative analysis of facial complexion. We propose a novel feature representation of facial complexion from the whole face of patients. The features are established with four chromaticity bases split by luminance distribution in CIELAB color space. The chromaticity bases are constructed from the facial dominant colors using two-level clustering; the optimal luminance split is determined simply through experimental comparison. The features are shown to be more distinctive than previous facial complexion feature representations. Complexion recognition proceeds by training an SVM classifier with the optimal model parameters. In addition, the features are further improved by the weighted fusion of five local regions. Extensive experimental results show that the proposed features achieve the highest facial color recognition performance, with a total accuracy of 86.89%. Furthermore, the proposed recognition framework can analyze both the color and gloss degrees of facial complexion by learning a ranking function. PMID:24967342
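    A minimal sketch of the chromaticity-basis idea follows, assuming a fixed luminance split and one k-means per luminance band (the paper tunes the split experimentally and uses a two-level clustering of facial dominant colors, which is simplified here).

```python
import numpy as np
from skimage.color import rgb2lab
from sklearn.cluster import KMeans

L_SPLIT = 60.0  # assumed luminance split; the paper selects it experimentally

def fit_bases(train_faces_rgb, n_bases=4):
    """Fit chromaticity bases (a*b* cluster centres) per luminance band
    from pooled training pixels of facial skin."""
    lab = np.vstack([rgb2lab(f).reshape(-1, 3) for f in train_faces_rgb])
    low, high = lab[lab[:, 0] < L_SPLIT], lab[lab[:, 0] >= L_SPLIT]
    return (KMeans(n_clusters=n_bases, n_init=10).fit(low[:, 1:]),
            KMeans(n_clusters=n_bases, n_init=10).fit(high[:, 1:]))

def complexion_feature(face_rgb, bases):
    """Per-band histogram of pixel assignments to the chromaticity bases,
    concatenated into one vector (assumes both bands are non-empty); the
    vector is then fed to an SVM for complexion recognition."""
    lab = rgb2lab(face_rgb).reshape(-1, 3)
    feats = []
    for km, band in zip(bases, (lab[lab[:, 0] < L_SPLIT],
                                lab[lab[:, 0] >= L_SPLIT])):
        idx = km.predict(band[:, 1:])
        feats.append(np.bincount(idx, minlength=km.n_clusters) / len(band))
    return np.concatenate(feats)
```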

  20. Pose-variant facial expression recognition using an embedded image system

    NASA Astrophysics Data System (ADS)

    Song, Kai-Tai; Han, Meng-Ju; Chang, Shuo-Hung

    2008-12-01

    In recent years, one of the most attractive research areas in human-robot interaction has been automated facial expression recognition. By recognizing facial expressions, a pet robot can interact with humans in a more natural manner. In this study, we focus on the facial pose-variant problem. A novel method is proposed in this paper to recognize pose-variant facial expressions. After locating the face position in an image frame, the active appearance model (AAM) is applied to track facial features. Fourteen feature points are extracted to represent the variation of facial expressions. The distances between feature points are defined as the feature values. These feature values are sent to a support vector machine (SVM) for facial expression determination. The pose-variant facial expression is classified as happiness, neutral, sadness, surprise or anger. Furthermore, in order to evaluate the performance for practical applications, this study also built a low-resolution database (160x120 pixels) using a CMOS image sensor. Experimental results show that the recognition rate is 84% with the self-built database.
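    The feature construction reduces to pairwise distances among the 14 tracked points. The sketch below computes all 91 such distances as the feature vector; whether the authors use the full set or a subset is an assumption here.

```python
import numpy as np
from itertools import combinations
from sklearn.svm import SVC

def distance_features(points):
    """points: (14, 2) array of AAM-tracked feature coordinates; returns
    all 91 pairwise inter-point distances as the feature vector."""
    return np.array([np.linalg.norm(points[i] - points[j])
                     for i, j in combinations(range(len(points)), 2)])

# Training: X stacks one distance vector per labeled frame and y holds the
# five expression labels; clf = SVC(kernel='rbf').fit(X, y)
```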

  1. The Auditory Kuleshov Effect: Multisensory Integration in Movie Editing.

    PubMed

    Baranowski, Andreas M; Hecht, H

    2017-05-01

    Almost a hundred years ago, the Russian filmmaker Lev Kuleshov conducted his now famous editing experiment in which different objects were added to a given film scene featuring a neutral face. It is said that the audience interpreted the unchanged facial expression as a function of the added object (e.g., an added soup made the face express hunger). This interaction effect has been dubbed "Kuleshov effect." In the current study, we explored the role of sound in the evaluation of facial expressions in films. Thirty participants watched different clips of faces that were intercut with neutral scenes, featuring either happy music, sad music, or no music at all. This was crossed with the facial expressions of happy, sad, or neutral. We found that the music significantly influenced participants' emotional judgments of facial expression. Thus, the intersensory effects of music are more specific than previously thought. They alter the evaluation of film scenes and can give meaning to ambiguous situations.

  2. Aberrant patterns of visual facial information usage in schizophrenia.

    PubMed

    Clark, Cameron M; Gosselin, Frédéric; Goghari, Vina M

    2013-05-01

    Deficits in facial emotion perception have been linked to poorer functional outcome in schizophrenia. However, the relationship between abnormal emotion perception and functional outcome remains poorly understood. To better understand the nature of facial emotion perception deficits in schizophrenia, we used the Bubbles Facial Emotion Perception Task to identify differences in usage of visual facial information in schizophrenia patients (n = 20) and controls (n = 20), when differentiating between angry and neutral facial expressions. As hypothesized, schizophrenia patients required more facial information than controls to accurately differentiate between angry and neutral facial expressions, and they relied on different facial features and spatial frequencies to differentiate these facial expressions. Specifically, schizophrenia patients underutilized the eye regions, overutilized the nose and mouth regions, and virtually ignored information presented at the lowest levels of spatial frequency. In addition, a post hoc one-tailed t test revealed a positive relationship of moderate strength between the degree of divergence from "normal" visual facial information usage in the eye region and lower overall social functioning. These findings provide direct support for aberrant patterns of visual facial information usage in schizophrenia in differentiating between socially salient emotional states. © 2013 American Psychological Association

  3. Dynamic facial expression recognition based on geometric and texture features

    NASA Astrophysics Data System (ADS)

    Li, Ming; Wang, Zengfu

    2018-04-01

    Recently, dynamic facial expression recognition in videos has attracted growing attention. In this paper, we propose a novel dynamic facial expression recognition method using geometric and texture features. In our system, the facial landmark movements and texture variations across pairwise images are used to perform the dynamic facial expression recognition task. For each facial expression sequence, pairwise images are created between the first frame and each of its subsequent frames. Integrating both geometric and texture features further enhances the representation of the facial expressions. Finally, a Support Vector Machine is used for facial expression recognition. Experiments conducted on the extended Cohn-Kanade database show that our proposed method achieves performance competitive with other methods.
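    A minimal sketch of the pairwise-image feature construction follows; the texture descriptor is left abstract (e.g. an LBP histogram), and treating the pair feature as a simple concatenation of landmark displacements and descriptor differences is an assumption.

```python
import numpy as np

def pairwise_features(landmarks_first, landmarks_t, tex_first, tex_t):
    """Feature vector for the image pair (first frame, frame t): the
    geometric part is the landmark displacement field; the texture part is
    the difference of appearance descriptors of the two frames."""
    geometric = (np.asarray(landmarks_t) - np.asarray(landmarks_first)).ravel()
    texture = np.asarray(tex_t, float) - np.asarray(tex_first, float)
    return np.concatenate([geometric, texture])

# One sequence yields one pair per subsequent frame; the per-pair vectors
# (or a pooling of them) are then fed to an SVM with the expression label.
```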

  4. Human Amygdala Tracks a Feature-Based Valence Signal Embedded within the Facial Expression of Surprise.

    PubMed

    Kim, M Justin; Mattek, Alison M; Bennett, Randi H; Solomon, Kimberly M; Shin, Jin; Whalen, Paul J

    2017-09-27

    Human amygdala function has been traditionally associated with processing the affective valence (negative vs positive) of an emotionally charged event, especially those that signal fear or threat. However, this account of human amygdala function can be explained by alternative views, which posit that the amygdala might be tuned to either (1) general emotional arousal (activation vs deactivation) or (2) specific emotion categories (fear vs happy). Delineating the pure effects of valence independent of arousal or emotion category is a challenging task, given that these variables naturally covary under many circumstances. To circumvent this issue and test the sensitivity of the human amygdala to valence values specifically, we measured the dimension of valence within the single facial expression category of surprise. Given the inherent valence ambiguity of this category, we show that surprised expression exemplars are attributed valence and arousal values that are uniquely and naturally uncorrelated. We then present fMRI data from both sexes, showing that the amygdala tracks these consensus valence values. Finally, we provide evidence that these valence values are linked to specific visual features of the mouth region, isolating the signal by which the amygdala detects this valence information. SIGNIFICANCE STATEMENT There is an open question as to whether human amygdala function tracks the valence value of cues in the environment, as opposed to either a more general emotional arousal value or a more specific emotion category distinction. Here, we demonstrate the utility of surprised facial expressions because exemplars within this emotion category take on valence values spanning the dimension of bipolar valence (positive to negative) at a consistent level of emotional arousal. Functional neuroimaging data showed that amygdala responses tracked the valence of surprised facial expressions, unconfounded by arousal. Furthermore, a machine learning classifier identified particular visual features of the mouth region that predicted this valence effect, isolating the specific visual signal that might be driving this neural valence response. Copyright © 2017 the authors 0270-6474/17/379510-09$15.00/0.

  5. Evidence for Anger Saliency during the Recognition of Chimeric Facial Expressions of Emotions in Underage Ebola Survivors

    PubMed Central

    Ardizzi, Martina; Evangelista, Valentina; Ferroni, Francesca; Umiltà, Maria A.; Ravera, Roberto; Gallese, Vittorio

    2017-01-01

    One of the crucial features defining basic emotions and their prototypical facial expressions is their value for survival. Childhood traumatic experiences affect the effective recognition of facial expressions of negative emotions, normally allowing the recruitment of adequate behavioral responses to environmental threats. Specifically, anger becomes an extraordinarily salient stimulus unbalancing victims’ recognition of negative emotions. Despite the plethora of studies on this topic, to date, it is not clear whether this phenomenon reflects an overall response tendency toward anger recognition or a selective proneness to the salience of specific facial expressive cues of anger after trauma exposure. To address this issue, a group of underage Sierra Leonean Ebola virus disease survivors (mean age 15.40 years, SE 0.35; years of schooling 8.8 years, SE 0.46; 14 males) and a control group (mean age 14.55, SE 0.30; years of schooling 8.07 years, SE 0.30, 15 males) performed a forced-choice chimeric facial expression recognition task. The chimeric facial expressions were obtained by pairing upper and lower half faces of two different negative emotions (selected from anger, fear and sadness, for a total of six different combinations). Overall, results showed that upper facial expressive cues were more salient than lower facial expressive cues. This priority was lost among Ebola virus disease survivors for the chimeric facial expressions of anger. In this case, unlike controls, Ebola virus disease survivors recognized anger regardless of the upper or lower position of the facial expressive cues of this emotion. The present results demonstrate that victims’ performance in the recognition of the facial expression of anger does not reflect an overall response tendency toward anger recognition, but rather the specific greater salience of facial expressive cues of anger. Furthermore, the present results show that traumatic experiences deeply modify the perceptual analysis of phylogenetically old behavioral patterns like the facial expressions of emotions. PMID:28690565

  6. Evidence for Anger Saliency during the Recognition of Chimeric Facial Expressions of Emotions in Underage Ebola Survivors.

    PubMed

    Ardizzi, Martina; Evangelista, Valentina; Ferroni, Francesca; Umiltà, Maria A; Ravera, Roberto; Gallese, Vittorio

    2017-01-01

    One of the crucial features defining basic emotions and their prototypical facial expressions is their value for survival. Childhood traumatic experiences affect the effective recognition of facial expressions of negative emotions, normally allowing the recruitment of adequate behavioral responses to environmental threats. Specifically, anger becomes an extraordinarily salient stimulus unbalancing victims' recognition of negative emotions. Despite the plethora of studies on this topic, to date, it is not clear whether this phenomenon reflects an overall response tendency toward anger recognition or a selective proneness to the salience of specific facial expressive cues of anger after trauma exposure. To address this issue, a group of underage Sierra Leonean Ebola virus disease survivors (mean age 15.40 years, SE 0.35; years of schooling 8.8 years, SE 0.46; 14 males) and a control group (mean age 14.55, SE 0.30; years of schooling 8.07 years, SE 0.30, 15 males) performed a forced-choice chimeric facial expression recognition task. The chimeric facial expressions were obtained by pairing upper and lower half faces of two different negative emotions (selected from anger, fear and sadness, for a total of six different combinations). Overall, results showed that upper facial expressive cues were more salient than lower facial expressive cues. This priority was lost among Ebola virus disease survivors for the chimeric facial expressions of anger. In this case, unlike controls, Ebola virus disease survivors recognized anger regardless of the upper or lower position of the facial expressive cues of this emotion. The present results demonstrate that victims' performance in the recognition of the facial expression of anger does not reflect an overall response tendency toward anger recognition, but rather the specific greater salience of facial expressive cues of anger. Furthermore, the present results show that traumatic experiences deeply modify the perceptual analysis of phylogenetically old behavioral patterns like the facial expressions of emotions.

  7. Facial Contrast Is a Cross-Cultural Cue for Perceiving Age

    PubMed Central

    Porcheron, Aurélie; Mauger, Emmanuelle; Soppelsa, Frédérique; Liu, Yuli; Ge, Liezhong; Pascalis, Olivier; Russell, Richard; Morizot, Frédérique

    2017-01-01

    Age is a fundamental social dimension and a youthful appearance is of importance for many individuals, perhaps because it is a relevant predictor of aspects of health, facial attractiveness and general well-being. We recently showed that facial contrast—the color and luminance difference between facial features and the surrounding skin—is age-related and a cue to age perception of Caucasian women. Specifically, aspects of facial contrast decrease with age in Caucasian women, and Caucasian female faces with higher contrast look younger (Porcheron et al., 2013). Here we investigated faces of other ethnic groups and raters of other cultures to see whether facial contrast is a cross-cultural youth-related attribute. Using large sets of full face color photographs of Chinese, Latin American and black South African women aged 20–80, we measured the luminance and color contrast between the facial features (the eyes, the lips, and the brows) and the surrounding skin. Most aspects of facial contrast that were previously found to decrease with age in Caucasian women were also found to decrease with age in the other ethnic groups. Though the overall pattern of changes with age was common to all women, there were also some differences between the groups. In a separate study, individual faces of the 4 ethnic groups were perceived younger by French and Chinese participants when the aspects of facial contrast that vary with age in the majority of faces were artificially increased, but older when they were artificially decreased. Altogether these findings indicate that facial contrast is a cross-cultural cue to youthfulness. Because cosmetics were shown to enhance facial contrast, this work provides some support for the notion that a universal function of cosmetics is to make female faces look younger. PMID:28790941
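    A minimal sketch of one way to operationalize facial contrast follows: the difference between mean feature and mean surrounding-skin values in CIELAB, given precomputed masks. The authors' exact formula (a Michelson-style ratio in related work) may differ.

```python
import numpy as np
from skimage.color import rgb2lab

def facial_contrast(img_rgb, feature_mask, skin_mask):
    """Difference between mean CIELAB values of a facial feature (eyes,
    lips or brows) and of the surrounding skin, given boolean masks.
    A negative L* component means the feature is darker than the skin."""
    lab = rgb2lab(img_rgb)
    feature = lab[feature_mask].mean(axis=0)   # mean L*, a*, b* of feature
    skin = lab[skin_mask].mean(axis=0)         # mean L*, a*, b* of skin ring
    return feature - skin
```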

  8. Facial Contrast Is a Cross-Cultural Cue for Perceiving Age.

    PubMed

    Porcheron, Aurélie; Mauger, Emmanuelle; Soppelsa, Frédérique; Liu, Yuli; Ge, Liezhong; Pascalis, Olivier; Russell, Richard; Morizot, Frédérique

    2017-01-01

    Age is a fundamental social dimension and a youthful appearance is of importance for many individuals, perhaps because it is a relevant predictor of aspects of health, facial attractiveness and general well-being. We recently showed that facial contrast (the color and luminance difference between facial features and the surrounding skin) is age-related and a cue to age perception of Caucasian women. Specifically, aspects of facial contrast decrease with age in Caucasian women, and Caucasian female faces with higher contrast look younger (Porcheron et al., 2013). Here we investigated faces of other ethnic groups and raters of other cultures to see whether facial contrast is a cross-cultural youth-related attribute. Using large sets of full face color photographs of Chinese, Latin American and black South African women aged 20-80, we measured the luminance and color contrast between the facial features (the eyes, the lips, and the brows) and the surrounding skin. Most aspects of facial contrast that were previously found to decrease with age in Caucasian women were also found to decrease with age in the other ethnic groups. Though the overall pattern of changes with age was common to all women, there were also some differences between the groups. In a separate study, individual faces of the 4 ethnic groups were perceived younger by French and Chinese participants when the aspects of facial contrast that vary with age in the majority of faces were artificially increased, but older when they were artificially decreased. Altogether these findings indicate that facial contrast is a cross-cultural cue to youthfulness. Because cosmetics were shown to enhance facial contrast, this work provides some support for the notion that a universal function of cosmetics is to make female faces look younger.

  9. NATIONAL PREPAREDNESS: Technologies to Secure Federal Buildings

    DTIC Science & Technology

    2002-04-25

    Fragment from a table on access-control biometrics. Facial recognition: facial features are captured and compared; performance is dependent on lighting, positioning...two primary types of facial recognition technology used to create templates: 1. Local feature analysis: dozens of images from regions of the face are...an adjacent feature. User acceptance: medium, with some resistance based on sensitivity of the eye. (Attachment I, Access Control Technologies: Biometrics, Facial Recognition: How the technology works.)

  10. Multiple mechanisms in the perception of face gender: Effect of sex-irrelevant features.

    PubMed

    Komori, Masashi; Kawamura, Satoru; Ishihara, Shigekazu

    2011-06-01

    Effects of sex-relevant and sex-irrelevant facial features on the evaluation of facial gender were investigated. Participants rated masculinity of 48 male facial photographs and femininity of 48 female facial photographs. Eighty feature points were measured on each of the facial photographs. Using a generalized Procrustes analysis, facial shapes were converted into multidimensional vectors, with the average face as a starting point. Each vector was decomposed into a sex-relevant subvector and a sex-irrelevant subvector which were, respectively, parallel and orthogonal to the main male-female axis. Principal components analysis (PCA) was performed on the sex-irrelevant subvectors. One principal component was negatively correlated with both perceived masculinity and femininity, and another was correlated only with femininity, though both components were orthogonal to the male-female dimension (and thus by definition sex-irrelevant). These results indicate that evaluation of facial gender depends on sex-irrelevant as well as sex-relevant facial features.
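    The decomposition itself is a one-line projection once the male-female axis is estimated. A minimal sketch, assuming Procrustes-aligned landmark vectors centered on the average face:

```python
import numpy as np
from sklearn.decomposition import PCA

def decompose(shapes, sexes):
    """shapes: (n, d) Procrustes-aligned landmark vectors relative to the
    average face; sexes: array of 'm'/'f' labels. Splits each face into a
    sex-relevant component along the male-female axis and a sex-irrelevant
    orthogonal residual, then summarizes the residuals with PCA."""
    axis = shapes[sexes == 'm'].mean(0) - shapes[sexes == 'f'].mean(0)
    axis /= np.linalg.norm(axis)
    coef = shapes @ axis                         # sex-relevant coordinate
    irrelevant = shapes - np.outer(coef, axis)   # orthogonal subvectors
    pcs = PCA(n_components=5).fit_transform(irrelevant)
    return coef, pcs    # correlate pcs with masculinity/femininity ratings
```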

  11. Attractiveness as a Function of Skin Tone and Facial Features: Evidence from Categorization Studies.

    PubMed

    Stepanova, Elena V; Strube, Michael J

    2018-01-01

    Participants rated the attractiveness and racial typicality of male faces varying in their facial features from Afrocentric to Eurocentric and in skin tone from dark to light in two experiments. Experiment 1 provided evidence that facial features and skin tone have an interactive effect on perceptions of attractiveness and mixed-race faces are perceived as more attractive than single-race faces. Experiment 2 further confirmed that faces with medium levels of skin tone and facial features are perceived as more attractive than faces with extreme levels of these factors. Black phenotypes (combinations of dark skin tone and Afrocentric facial features) were rated as more attractive than White phenotypes (combinations of light skin tone and Eurocentric facial features); ambiguous faces (combinations of Afrocentric and Eurocentric physiognomy) with medium levels of skin tone were rated as the most attractive in Experiment 2. Perceptions of attractiveness were relatively independent of racial categorization in both experiments.

  12. Obstructive Sleep Apnea in Women: Study of Speech and Craniofacial Characteristics.

    PubMed

    Tyan, Marina; Espinoza-Cuadros, Fernando; Fernández Pozo, Rubén; Toledano, Doroteo; Lopez Gonzalo, Eduardo; Alcazar Ramirez, Jose Daniel; Hernandez Gomez, Luis Alfonso

    2017-11-06

    Obstructive sleep apnea (OSA) is a common sleep disorder characterized by frequent cessations of breathing lasting 10 seconds or longer. The diagnosis of OSA is performed through an expensive procedure that requires an overnight stay at the hospital. This has led to several proposals based on the analysis of patients' facial images and speech recordings as an attempt to develop simpler and cheaper methods of diagnosing OSA. The objective of this study was to analyze possible relationships between OSA and speech and facial features in a female population, to examine whether these possible connections are affected by the specific clinical characteristics of the OSA population, and, more specifically, to explore how the connection between OSA and speech and facial features is affected by gender. All subjects were Spanish patients suspected of suffering from OSA and referred to a sleep disorders unit. Voice recordings and photographs were collected in a supervised but not highly controlled way, in order to test a scenario close to realistic clinical practice, in which OSA is assessed using an app running on a mobile device. Furthermore, clinical variables such as weight, height, age, and cervical perimeter, which are usually reported as predictors of OSA, were also gathered. Acoustic analysis was centered on sustained vowels. Facial analysis consisted of a set of local craniofacial features related to OSA, extracted from images after detecting facial landmarks using active appearance models. To study the possible connection of OSA with speech and craniofacial features, correlations among the apnea-hypopnea index (AHI), clinical variables, and acoustic and facial measurements were analyzed. The results obtained for the female population indicate mainly weak correlations (r values between .20 and .39). Correlations between AHI, clinical variables, and speech features show the prevalence of formant frequencies over bandwidths, with F2/i/ being the most appropriate formant frequency for OSA prediction in women. Results obtained for the male population indicate mainly very weak correlations (r values between .01 and .19); in this case, bandwidths prevail over formant frequencies. Correlations between AHI, clinical variables, and craniofacial measurements are very weak. In accordance with previous studies, some clinical variables are found to be good predictors of OSA. Moreover, strong correlations are found between AHI and some clinical variables with speech and facial features. Regarding speech features, the results show the prevalence of the formant frequency F2/i/ over the rest of the features as an OSA-predictive feature for the female population. Although the correlations reported are weak, this study aims to find traces that could explain the possible connection between OSA and speech in women. In the case of craniofacial measurements, the results show that some features that can be used for predicting OSA in male patients are not suitable for the female population. ©Marina Tyan, Fernando Espinoza-Cuadros, Rubén Fernández Pozo, Doroteo Toledano, Eduardo Lopez Gonzalo, Jose Daniel Alcazar Ramirez, Luis Alfonso Hernandez Gomez. Originally published in JMIR Mhealth and Uhealth (http://mhealth.jmir.org), 06.11.2017.
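    As a hedged illustration of the formant measurements at the core of the speech analysis, the sketch below estimates formant frequencies of a sustained vowel with textbook LPC root-solving (librosa for loading and LPC); a clinical pipeline would typically use a dedicated tool such as Praat, and the sampling rate and LPC order here are assumptions.

```python
import numpy as np
import librosa

def estimate_formants(path, order=12):
    """Rough formant estimates for a sustained vowel via LPC root-solving;
    the sorted frequencies approximate F1, F2, ... (freqs[1] ~ F2)."""
    y, sr = librosa.load(path, sr=16000)     # assumed sampling rate
    a = librosa.lpc(y, order=order)          # LPC polynomial coefficients
    roots = np.roots(a)
    roots = roots[np.imag(roots) > 0]        # one root per conjugate pair
    freqs = np.sort(np.angle(roots) * sr / (2 * np.pi))
    return freqs[freqs > 90]                 # drop near-DC artifacts
```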

  13. Segmentation of human face using gradient-based approach

    NASA Astrophysics Data System (ADS)

    Baskan, Selin; Bulut, M. Mete; Atalay, Volkan

    2001-04-01

    This paper describes a method for the automatic segmentation of facial features such as eyebrows, eyes, nose, mouth and ears in color images. This work is an initial step for a wide range of feature-based applications, such as face recognition, lip-reading, gender estimation, facial expression analysis, etc. The human face can be characterized by its skin color and nearly elliptical shape. For this purpose, face detection is performed using color and shape information. Uniform illumination is assumed. No restrictions on glasses, make-up, beard, etc. are imposed. Facial features are extracted using vertically and horizontally oriented gradient projections. The gradient minimum, relative to its neighboring maxima, gives the boundaries of a facial feature. Each facial feature has a different horizontal characteristic; these characteristics were derived through extensive experimentation with many face images. Using fuzzy set theory, the similarity between the candidate and the feature characteristic under consideration is calculated. The gradient-based method is supplemented with anthropometric information for robustness. Ear detection is performed using contour-based shape descriptors. This method detects the facial features and circumscribes each facial feature with the smallest rectangle possible. The AR database is used for testing. The developed method is also suitable for real-time systems.
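    A minimal sketch of the projection idea: rows that cross features such as the eyes and mouth concentrate vertical-gradient energy, so peaks in the row-wise projection mark candidate feature bands (the thresholding and smoothing choices below are assumptions; the vertical projection for left-right localization is analogous).

```python
import numpy as np
from scipy import ndimage

def feature_bands(gray):
    """Candidate rows of facial features from the horizontal projection of
    the vertical image gradient: rows crossing eyebrows, eyes, nostrils and
    mouth concentrate gradient energy."""
    gy = ndimage.sobel(gray.astype(float), axis=0)       # vertical derivative
    profile = np.abs(gy).sum(axis=1)                     # one value per row
    profile = ndimage.uniform_filter1d(profile, size=5)  # light smoothing
    threshold = profile.mean() + profile.std()           # assumed rule
    return np.flatnonzero(profile > threshold)
```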

  14. Detection of emotional faces: salient physical features guide effective visual search.

    PubMed

    Calvo, Manuel G; Nummenmaa, Lauri

    2008-08-01

    In this study, the authors investigated how salient visual features capture attention and facilitate detection of emotional facial expressions. In a visual search task, a target emotional face (happy, disgusted, fearful, angry, sad, or surprised) was presented in an array of neutral faces. Faster detection of happy and, to a lesser extent, surprised and disgusted faces was found both under upright and inverted display conditions. Inversion slowed down the detection of these faces less than that of others (fearful, angry, and sad). Accordingly, the detection advantage involves processing of featural rather than configural information. The facial features responsible for the detection advantage are located in the mouth rather than the eye region. Computationally modeled visual saliency predicted both attentional orienting and detection. Saliency was greatest for the faces (happy) and regions (mouth) that were fixated earlier and detected faster, and there was close correspondence between the onset of the modeled saliency peak and the time at which observers initially fixated the faces. The authors conclude that visual saliency of specific facial features, especially the smiling mouth, is responsible for facilitated initial orienting, which thus shortens detection. (PsycINFO Database Record (c) 2008 APA, all rights reserved).
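    As a hedged illustration of computationally modeled saliency, the sketch below scores a facial region with OpenCV's spectral-residual model (requires opencv-contrib-python); this is a generic off-the-shelf model, not necessarily the saliency model the authors used.

```python
import cv2

def roi_saliency(image_bgr, box):
    """Mean spectral-residual saliency inside a facial region such as the
    mouth or eye box; box = (y0, y1, x0, x1) in pixel coordinates."""
    model = cv2.saliency.StaticSaliencySpectralResidual_create()
    ok, smap = model.computeSaliency(image_bgr)  # float map, higher = salient
    y0, y1, x0, x1 = box
    return float(smap[y0:y1, x0:x1].mean())
```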

  15. Facial Attractiveness Assessment using Illustrated Questionnaires

    PubMed Central

    MESAROS, ANCA; CORNEA, DANIELA; CIOARA, LIVIU; DUDEA, DIANA; MESAROS, MICHAELA; BADEA, MINDRA

    2015-01-01

    Introduction. An attractive facial appearance is considered nowadays to be a decisive factor in establishing successful interactions between humans. In relation to this topic, the scientific literature states that some facial features have more impact than others, and important authors have revealed that certain proportions between different anthropometrical landmarks are mandatory for an attractive facial appearance. Aim. Our study aims to assess whether certain facial features weigh differently in people's judgments of facial attractiveness, in correlation with factors such as age, gender, specific training and culture. Material and methods. A 5-item multiple-choice illustrated questionnaire was presented to 236 dental students. The Photoshop CS3 software was used to obtain the sets of images for the illustrated questions. The original image was handpicked from the internet by a panel of young dentists from a series of 15 pictures of people considered to have attractive faces. For each of the questions, the images presented simulated deviations from the ideally symmetric and proportionate face. The sets of images consisted of multiple variations of deviations mixed with the original photo. Junior and sophomore students of different nationalities from our dental medical school were asked to complete the questionnaire. Simple descriptive statistics were used to interpret the data. Results. A majority of students considered overdevelopment of the lower facial third unattractive, while the initial image, with perfect symmetry and proportion, was considered the most attractive by only 38.9% of the subjects. Likewise, regarding symmetry, 36.86% considered canting of the inter-commissural line unattractive. The interviewed subjects considered that, for a face to be attractive, it needs harmonious proportions between the different facial elements. Conclusions. When evaluating facial attractiveness it is important to keep in mind that such assessments are subjective and influenced by multiple factors, among which the most important are cultural background and specific training. PMID:26528052

  16. Interpretation of Appearance: The Effect of Facial Features on First Impressions and Personality

    PubMed Central

    Wolffhechel, Karin; Fagertun, Jens; Jacobsen, Ulrik Plesner; Majewski, Wiktor; Hemmingsen, Astrid Sofie; Larsen, Catrine Lohmann; Lorentzen, Sofie Katrine; Jarmer, Hanne

    2014-01-01

    Appearance is known to influence social interactions, which in turn could potentially influence personality development. In this study we focus on discovering the relationship between self-reported personality traits, first impressions and facial characteristics. The results reveal that several personality traits can be read above chance from a face, and that facial features influence first impressions. Despite this, our prediction model fails to reliably infer personality traits from either facial features or first impressions. First impressions, however, could be inferred more reliably from facial features. We have generated artificial, extreme faces visualising the characteristics that affect first impressions for several traits. In conclusion, we find a relationship between first impressions, some personality traits and facial features, and confirm that people on average assess a given face in a highly similar manner. PMID:25233221

  17. Interpretation of appearance: the effect of facial features on first impressions and personality.

    PubMed

    Wolffhechel, Karin; Fagertun, Jens; Jacobsen, Ulrik Plesner; Majewski, Wiktor; Hemmingsen, Astrid Sofie; Larsen, Catrine Lohmann; Lorentzen, Sofie Katrine; Jarmer, Hanne

    2014-01-01

    Appearance is known to influence social interactions, which in turn could potentially influence personality development. In this study we focus on discovering the relationship between self-reported personality traits, first impressions and facial characteristics. The results reveal that several personality traits can be read above chance from a face, and that facial features influence first impressions. Despite this, our prediction model fails to reliably infer personality traits from either facial features or first impressions. First impressions, however, could be inferred more reliably from facial features. We have generated artificial, extreme faces visualising the characteristics that affect first impressions for several traits. In conclusion, we find a relationship between first impressions, some personality traits and facial features, and confirm that people on average assess a given face in a highly similar manner.

  18. iFER: facial expression recognition using automatically selected geometric eye and eyebrow features

    NASA Astrophysics Data System (ADS)

    Oztel, Ismail; Yolcu, Gozde; Oz, Cemil; Kazan, Serap; Bunyak, Filiz

    2018-03-01

    Facial expressions have an important role in interpersonal communication and in the estimation of emotional states or intentions. Automatic recognition of facial expressions has led to many practical applications and has become one of the important topics in computer vision. We present a facial expression recognition system that relies on geometry-based features extracted from the eye and eyebrow regions of the face. The proposed system detects keypoints on frontal face images and forms a feature set using geometric relationships among groups of detected keypoints. The obtained feature set is refined and reduced using the sequential forward selection (SFS) algorithm and fed to a support vector machine classifier to recognize five facial expression classes. The proposed system, iFER (eye-eyebrow only facial expression recognition), is robust to lower face occlusions that may be caused by beards, mustaches, scarves, etc., and to lower face motion during speech production. Preliminary experiments on benchmark datasets produced promising results, outperforming previous facial expression recognition studies using partial face features and giving comparable results to studies using whole face information, only ~2.5% lower than the best whole-face system while using only ~1/3 of the facial region.
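    The selection stage maps directly onto scikit-learn's sequential forward selection. A minimal sketch with synthetic stand-ins for the geometric eye/eyebrow features (the feature count and the number retained are assumptions):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.feature_selection import SequentialFeatureSelector

# X: geometric eye/eyebrow features (angles, slopes, distances among
# keypoints); y: five expression labels. Synthetic stand-ins shown here.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(200, 30)), rng.integers(0, 5, 200)

svm = SVC(kernel='rbf')
sfs = SequentialFeatureSelector(svm, n_features_to_select=10,
                                direction='forward', cv=5)
X_sel = sfs.fit_transform(X, y)   # keeps the 10 most useful features
svm.fit(X_sel, y)                 # final eye/eyebrow-only classifier
```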

  19. The perceptual saliency of fearful eyes and smiles: A signal detection study

    PubMed Central

    Saban, Muhammet Ikbal; Rotshtein, Pia

    2017-01-01

    Facial features differ in the amount of expressive information they convey. Specifically, eyes are argued to be essential for fear recognition, while smiles are crucial for recognising happy expressions. In three experiments, we tested whether expression modulates the perceptual saliency of diagnostic facial features and whether a feature’s saliency depends on the face configuration. Participants were presented with masked facial features or noise at the threshold of conscious perception. The task was to indicate whether eyes (experiments 1-3A) or a mouth (experiment 3B) was present. The expression of the face and its configuration (i.e. the spatial arrangement of the features) were manipulated. Experiment 1 compared fearful with neutral expressions; experiments 2 and 3 compared fearful versus happy expressions. The detection accuracy data were analysed using Signal Detection Theory (SDT) to examine the effects of expression and configuration on perceptual precision (d’) and response bias (c) separately. Across all three experiments, fearful eyes were detected better (higher d’) than neutral and happy eyes. Eyes were detected more precisely than mouths, whereas smiles were detected better than fearful mouths. The configuration of the features had no consistent effect across the experiments on the ability to detect expressive features, but facial configuration consistently affected the response bias: participants used a more liberal criterion for detecting eyes in the canonical configuration and in fearful expressions. Finally, the power in the low spatial frequencies of a feature predicted its discriminability index. The results suggest that expressive features are perceptually more salient, with a higher d’, owing to changes in low-level visual properties, with emotion and configuration affecting perception through top-down processes, as reflected by the response bias. PMID:28267761
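    For reference, the SDT indices reported here have standard closed forms: d' = z(H) - z(F) and c = -(z(H) + z(F))/2, where H and F are the hit and false-alarm rates and z is the inverse normal CDF. A minimal sketch with a log-linear correction for extreme rates:

```python
from scipy.stats import norm

def dprime_and_criterion(hits, misses, fas, crs):
    """Signal-detection indices from a yes/no detection task.
    d' = z(H) - z(F); c = -(z(H) + z(F)) / 2. Adding 0.5 to each cell
    (log-linear correction) keeps rates away from 0 and 1."""
    h = (hits + 0.5) / (hits + misses + 1)
    f = (fas + 0.5) / (fas + crs + 1)
    return norm.ppf(h) - norm.ppf(f), -(norm.ppf(h) + norm.ppf(f)) / 2

print(dprime_and_criterion(45, 5, 10, 40))  # high precision, modest bias
```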

  20. Efficient quantitative assessment of facial paralysis using iris segmentation and active contour-based key points detection with hybrid classifier.

    PubMed

    Barbosa, Jocelyn; Lee, Kyubum; Lee, Sunwon; Lodhi, Bilal; Cho, Jae-Gu; Seo, Woo-Keun; Kang, Jaewoo

    2016-03-12

    Facial palsy or paralysis (FP) is a condition involving the loss of voluntary muscle movement on one side of the human face, which can be devastating for patients. Traditional assessment methods depend solely on the clinician's judgment and are therefore time consuming and subjective in nature. Hence, a quantitative assessment system would be invaluable for physicians beginning the rehabilitation process, yet producing a reliable and robust method is challenging and still underway. We introduce a novel approach for the quantitative assessment of facial paralysis that tackles the classification problem for FP type and degree of severity. Specifically, a novel method of quantitative assessment is presented: an algorithm that extracts the human iris and detects facial landmarks, and a hybrid approach combining a rule-based and a machine learning algorithm to analyze and prognosticate facial paralysis using the captured images. A method combining the optimized Daugman's algorithm and the Localized Active Contour (LAC) model is proposed to efficiently extract the iris and the facial landmarks or key points. To improve the performance of LAC, appropriate parameters of the initial evolving curve for facial feature segmentation are selected automatically. The symmetry score is measured by the ratio between features extracted from the two sides of the face. Hybrid classifiers (i.e., rule-based with regularized logistic regression) were employed for discriminating healthy and unhealthy subjects, for FP type classification, and for facial paralysis grading based on the House-Brackmann (H-B) scale. Quantitative analysis was performed to evaluate the performance of the proposed approach, and experiments demonstrate its efficiency. Facial movement feature extraction based on iris segmentation and LAC-based key point detection, along with a hybrid classifier, provides a more efficient way of addressing the classification problem of facial palsy type and degree of severity. Combining iris segmentation and the key point-based method has several merits that are essential for our real application. Aside from the facial key points, iris segmentation makes a significant contribution, as it describes the changes in iris exposure while performing facial expressions. It reveals the significant difference between the healthy side and the severely palsied side when raising the eyebrows with both eyes directed upward, and can model the typical changes in the iris region.
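    A minimal sketch of the symmetry scoring and the hybrid decision follows; the specific measurements, the healthy cutoff, and the use of plain L2-regularized logistic regression for House-Brackmann grading are assumptions standing in for the paper's tuned pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def symmetry_scores(left, right):
    """Per-feature symmetry: ratio of corresponding measurements (e.g. iris
    exposure, eyebrow lift, mouth-corner excursion) between the two facial
    sides; 1.0 means perfect symmetry, smaller values greater asymmetry."""
    left, right = np.asarray(left, float), np.asarray(right, float)
    return np.minimum(left, right) / np.maximum(left, right)

def grade(scores, model, healthy_cutoff=0.95):
    """Hybrid decision: a simple symmetry rule screens obviously healthy
    faces; otherwise an (L2-regularized) logistic regression, previously
    fit on labeled symmetry vectors, predicts the H-B grade."""
    if scores.min() >= healthy_cutoff:
        return 'healthy'
    return model.predict(scores[None, :])[0]

# model = LogisticRegression().fit(train_scores, train_hb_grades)
```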

  1. Feature selection from a facial image for distinction of sasang constitution.

    PubMed

    Koo, Imhoi; Kim, Jong Yeol; Kim, Myoung Geun; Kim, Keun Ho

    2009-09-01

    Recently, oriental medicine has received attention for providing personalized medicine through consideration of the unique nature and constitution of individual patients. With the eventual goal of globalization, the current trend in oriental medicine research is standardization through the adoption of Western scientific methods, which could represent a scientific revolution. The purpose of this study is to establish methods for finding statistically significant features in a facial image with respect to distinguishing constitution, and to show the meaning of those features. From facial photographs, facial elements are analyzed in terms of distances, angles and distance ratios, for which there are 1225, 61 250 and 749 700 features, respectively. Because of this very large number of facial features, it is quite difficult to determine the truly meaningful ones. We suggest a process for the efficient analysis of facial features, including the removal of outliers and the control of missing data to guarantee data confidence, and the calculation of statistical significance by applying ANOVA. We show the statistical properties of the selected features according to the different constitutions, using the nine distances, 10 angles and 10 distance ratios that are finally established. Additionally, the Sasang constitutional meaning of the selected features is shown.
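    The significance screening reduces to a one-way ANOVA per feature across constitution groups. A minimal sketch (the significance threshold and the upstream outlier/missing-data handling are omitted here):

```python
import numpy as np
from scipy.stats import f_oneway

def anova_screen(X, constitution, alpha=1e-3):
    """One-way ANOVA per facial feature (distance, angle or distance ratio)
    across Sasang constitution groups; returns indices of features whose
    group means differ significantly. X is (subjects x features)."""
    constitution = np.asarray(constitution)
    groups = [X[constitution == g] for g in np.unique(constitution)]
    pvals = np.array([f_oneway(*[g[:, j] for g in groups]).pvalue
                      for j in range(X.shape[1])])
    return np.flatnonzero(pvals < alpha)
```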

  2. Feature Selection from a Facial Image for Distinction of Sasang Constitution

    PubMed Central

    Koo, Imhoi; Kim, Jong Yeol; Kim, Myoung Geun

    2009-01-01

    Recently, oriental medicine has received attention for providing personalized medicine through consideration of the unique nature and constitution of individual patients. With the eventual goal of globalization, the current trend in oriental medicine research is standardization through the adoption of Western scientific methods, which could represent a scientific revolution. The purpose of this study is to establish methods for finding statistically significant features in a facial image with respect to distinguishing constitution, and to show the meaning of those features. From facial photographs, facial elements are analyzed in terms of distances, angles and distance ratios, for which there are 1225, 61 250 and 749 700 features, respectively. Because of this very large number of facial features, it is quite difficult to determine the truly meaningful ones. We suggest a process for the efficient analysis of facial features, including the removal of outliers and the control of missing data to guarantee data confidence, and the calculation of statistical significance by applying ANOVA. We show the statistical properties of the selected features according to the different constitutions, using the nine distances, 10 angles and 10 distance ratios that are finally established. Additionally, the Sasang constitutional meaning of the selected features is shown. PMID:19745013

  3. The occipital face area is causally involved in the formation of identity-specific face representations.

    PubMed

    Ambrus, Géza Gergely; Dotzer, Maria; Schweinberger, Stefan R; Kovács, Gyula

    2017-12-01

    Transcranial magnetic stimulation (TMS) and neuroimaging studies suggest a role of the right occipital face area (rOFA) in early facial feature processing. However, the degree to which rOFA is necessary for the encoding of facial identity has been less clear. Here we used a state-dependent TMS paradigm, where stimulation preferentially facilitates attributes encoded by less active neural populations, to investigate the role of the rOFA in face perception and specifically in image-independent identity processing. Participants performed a familiarity decision task for famous and unknown target faces, preceded by brief (200 ms) or longer (3500 ms) exposures to primes which were either an image of a different identity (DiffID), another image of the same identity (SameID), the same image (SameIMG), or a Fourier-randomized noise pattern (NOISE) while either the rOFA or the vertex as control was stimulated by single-pulse TMS. Strikingly, TMS to the rOFA eliminated the advantage of SameID over DiffID condition, thereby disrupting identity-specific priming, while leaving image-specific priming (better performance for SameIMG vs. SameID) unaffected. Our results suggest that the role of rOFA is not limited to low-level feature processing, and emphasize its role in image-independent facial identity processing and the formation of identity-specific memory traces.

  4. Changing facial phenotype in Cohen syndrome: towards clues for an earlier diagnosis.

    PubMed

    El Chehadeh-Djebbar, Salima; Blair, Edward; Holder-Espinasse, Muriel; Moncla, Anne; Frances, Anne-Marie; Rio, Marlène; Debray, François-Guillaume; Rump, Patrick; Masurel-Paulet, Alice; Gigot, Nadège; Callier, Patrick; Duplomb, Laurence; Aral, Bernard; Huet, Frédéric; Thauvin-Robinet, Christel; Faivre, Laurence

    2013-07-01

    Cohen syndrome (CS) is a rare autosomal recessive condition caused by mutations and/or large rearrangements in the VPS13B gene. CS clinical features, including developmental delay, the typical facial gestalt, chorioretinal dystrophy (CRD) and neutropenia, are well described. CS diagnosis is generally made after school age, when visual disturbances lead to CRD diagnosis and to VPS13B gene testing. This relatively late diagnosis precludes accurate genetic counselling. The aim of this study was to analyse the evolution of CS facial features in the early period of life, particularly before school age (6 years), to find clues for an earlier diagnosis. Photographs of 17 patients with molecularly confirmed CS were analysed, from birth to preschool age. By comparing their facial phenotypes as they grow, we show that there are no distinctive facial characteristics before 1 year of age. However, between 2 and 6 years, CS children already share common facial features such as a short neck, a square face with micrognathia and full cheeks, a hypotonic facial appearance, epicanthic folds, long ears with an everted upper part of the auricle and/or a prominent lobe, a relatively short philtrum, a small and open mouth with downturned corners, a thick lower lip and abnormal eye shapes. These early transient facial features evolve into the typical CS facial features with aging. These observations emphasize the importance of ophthalmological tests and neutrophil counts in children of preschool age presenting with developmental delay, hypotonia and the facial features described here, for an earlier CS diagnosis.

  5. IntraFace

    PubMed Central

    De la Torre, Fernando; Chu, Wen-Sheng; Xiong, Xuehan; Vicente, Francisco; Ding, Xiaoyu; Cohn, Jeffrey

    2016-01-01

    Within the last 20 years, there has been an increasing interest in the computer vision community in automated facial image analysis algorithms. This has been driven by applications in animation, market research, autonomous driving, surveillance, and facial editing, among others. To date, there exist several commercial packages for specific facial image analysis tasks such as facial expression recognition, facial attribute analysis or face tracking. However, free and easy-to-use software that incorporates all these functionalities is unavailable. This paper presents IntraFace (IF), a publicly available software package for automated facial feature tracking, head pose estimation, facial attribute recognition, and facial expression analysis from video. In addition, IF includes a newly developed technique for unsupervised synchrony detection to discover correlated facial behavior between two or more persons, a relatively unexplored problem in facial image analysis. In tests, IF achieved state-of-the-art results for emotion expression and action unit detection in three databases, FERA, CK+ and RU-FACS; measured audience reaction to a talk given by one of the authors; and discovered synchrony for smiling in videos of parent-infant interaction. IF is free of charge for academic use at http://www.humansensing.cs.cmu.edu/intraface/. PMID:27346987

  6. IntraFace.

    PubMed

    De la Torre, Fernando; Chu, Wen-Sheng; Xiong, Xuehan; Vicente, Francisco; Ding, Xiaoyu; Cohn, Jeffrey

    2015-05-01

    Within the last 20 years, there has been an increasing interest in the computer vision community in automated facial image analysis algorithms. This has been driven by applications in animation, market research, autonomous driving, surveillance, and facial editing, among others. To date, there exist several commercial packages for specific facial image analysis tasks such as facial expression recognition, facial attribute analysis or face tracking. However, free and easy-to-use software that incorporates all these functionalities is unavailable. This paper presents IntraFace (IF), a publicly available software package for automated facial feature tracking, head pose estimation, facial attribute recognition, and facial expression analysis from video. In addition, IF includes a newly developed technique for unsupervised synchrony detection to discover correlated facial behavior between two or more persons, a relatively unexplored problem in facial image analysis. In tests, IF achieved state-of-the-art results for emotion expression and action unit detection in three databases, FERA, CK+ and RU-FACS; measured audience reaction to a talk given by one of the authors; and discovered synchrony for smiling in videos of parent-infant interaction. IF is free of charge for academic use at http://www.humansensing.cs.cmu.edu/intraface/.

  7. The effects of acute alcohol intoxication on the cognitive mechanisms underlying false facial recognition.

    PubMed

    Colloff, Melissa F; Flowe, Heather D

    2016-06-01

    False face recognition rates are sometimes higher when faces are learned while under the influence of alcohol. Alcohol myopia theory (AMT) proposes that acute alcohol intoxication during face learning causes people to attend to only the most salient features of a face, impairing the encoding of less salient facial features. Yet, there is currently no direct evidence to support this claim. Our objective was to test whether acute alcohol intoxication impairs face learning by causing subjects to attend to a salient (i.e., distinctive) facial feature over other facial features, as per AMT. We employed a balanced placebo design (N = 100). Subjects in the alcohol group were dosed to achieve a blood alcohol concentration (BAC) of 0.06 %, whereas the no alcohol group consumed tonic water. Alcohol expectancy was controlled. Subjects studied faces with or without a distinctive feature (e.g., scar, piercing). An old-new recognition test followed. Some of the test faces were "old" (i.e., previously studied), and some were "new" (i.e., not previously studied). We varied whether the new test faces had a previously studied distinctive feature versus other familiar characteristics. Intoxicated and sober recognition accuracy was comparable, but subjects in the alcohol group made more positive identifications overall compared to the no alcohol group. The results are not in keeping with AMT. Rather, a more general cognitive mechanism appears to underlie false face recognition in intoxicated subjects. Specifically, acute alcohol intoxication during face learning results in more liberal choosing, perhaps because of an increased reliance on familiarity.

  8. Features versus context: An approach for precise and detailed detection and delineation of faces and facial features.

    PubMed

    Ding, Liya; Martinez, Aleix M

    2010-11-01

    The appearance-based approach to face detection has seen great advances in the last several years. In this approach, we learn the image statistics describing the texture pattern (appearance) of the object class we want to detect, e.g., the face. However, this approach has had limited success in providing an accurate and detailed description of the internal facial features, i.e., eyes, brows, nose, and mouth. In general, this is due to the limited information carried by the learned statistical model. While the face template is relatively rich in texture, facial features (e.g., eyes, nose, and mouth) do not carry enough discriminative information to tell them apart from all possible background images. We resolve this problem by adding the context information of each facial feature in the design of the statistical model. In the proposed approach, the context information defines the image statistics most correlated with the surroundings of each facial component. This means that when we search for a face or facial feature, we look for those locations which most resemble the feature yet are most dissimilar to its context. This dissimilarity with the context features forces the detector to gravitate toward an accurate estimate of the position of the facial feature. Learning to discriminate between feature and context templates is difficult, however, because the context and the texture of the facial features vary widely under changing expression, pose, and illumination, and may even resemble one another. We address this problem with the use of subclass divisions. We derive two algorithms to automatically divide the training samples of each facial feature into a set of subclasses, each representing a distinct construction of the same facial component (e.g., closed versus open eyes) or its context (e.g., different hairstyles). The first algorithm is based on a discriminant analysis formulation. The second algorithm is an extension of the AdaBoost approach. We provide extensive experimental results using still images and video sequences for a total of 3,930 images. We show that the results are almost as good as those obtained with manual detection.
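    A much-simplified sketch of the feature-versus-context idea follows: training patches of a feature (and of its context) are split into subclasses by clustering, and a candidate window scores high when it is close to some feature subclass yet far from every context subclass. Nearest-subclass scoring stands in for the paper's discriminant-analysis and AdaBoost formulations.

```python
import numpy as np
from sklearn.cluster import KMeans

def subclass_centers(patches, n_subclasses=3):
    """Divide training samples of one facial feature (or of its context)
    into subclasses, e.g. open versus closed eyes, by clustering the
    vectorized patches; returns the subclass centres."""
    km = KMeans(n_clusters=n_subclasses, n_init=10).fit(patches)
    return km.cluster_centers_

def detect_score(window, feature_centers, context_centers):
    """High when the window resembles some feature subclass and is far
    from every context subclass."""
    d_feat = np.linalg.norm(feature_centers - window, axis=1).min()
    d_ctx = np.linalg.norm(context_centers - window, axis=1).min()
    return d_ctx - d_feat   # larger = more feature-like, less context-like
```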

  9. Idiopathic ophthalmodynia and idiopathic rhinalgia: two topographic facial pain syndromes.

    PubMed

    Pareja, Juan A; Cuadrado, María L; Porta-Etessam, Jesús; Fernández-de-las-Peñas, César; Gili, Pablo; Caminero, Ana B; Cebrián, José L

    2010-09-01

    To describe 2 topographic facial pain conditions with the pain clearly localized in the eye (idiopathic ophthalmodynia) or in the nose (idiopathic rhinalgia), and to propose their distinction from persistent idiopathic facial pain. Persistent idiopathic facial pain, burning mouth syndrome, atypical odontalgia, and facial arthromyalgia are idiopathic facial pain syndromes that have been separated according to topographical criteria. Still, some other facial pain syndromes might have been veiled under the broad term of persistent idiopathic facial pain. Over a 10-year period we studied all patients referred to our neurological clinic because of facial pain of unknown etiology that might deviate from all well-characterized facial pain syndromes. In a group of patients we identified 2 consistent clinical pictures with pain precisely located either in the eye (n=11) or in the nose (n=7). Clinical features resembled those of other localized idiopathic facial syndromes, the key differences lying in the topographic distribution of the pain. Both idiopathic ophthalmodynia and idiopathic rhinalgia seem to be specific pain syndromes with a distinctive location, and may deserve nosologic status just as other focal pain syndromes of the face do. Whether all such focal syndromes are topographic variants of persistent idiopathic facial pain or independent disorders remains a controversial issue.

  10. Enhancing facial features by using clear facial features

    NASA Astrophysics Data System (ADS)

    Rofoo, Fanar Fareed Hanna

    2017-09-01

    The similarity of features between individuals of the same ethnicity motivated the idea of this project. The idea is to extract the features of a clear facial image and impose them on a blurred facial image of the same ethnic origin, as an approach to enhancing the blurred image. A database of clear images was assembled containing 30 individuals, equally divided among five ethnicities: Arab, African, Chinese, European and Indian. Software was built to perform pre-processing on the images in order to align the features of the clear and blurred images. The approach was to extract the features of a clear facial image, or of a template built from clear facial images, using the wavelet transform, and to impose them on the blurred image using the inverse wavelet transform. The results of this first approach were poor, as the features did not all align together: in most cases the eyes were aligned but the nose or mouth was not. In the next approach we therefore dealt with each feature separately, but in some cases the result showed a blocky effect on the features, due to the lack of closely matching features. In general, the small available database did not allow the goal results to be achieved because of the limited number of individuals. Color information and feature similarity could be investigated further to achieve better results, by using a larger database with closer matches within each ethnicity, as well as by improving the enhancement process.
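    A minimal sketch of the wavelet imposition step, assuming two aligned, same-sized grayscale faces and PyWavelets; the wavelet choice and the single-level transform are assumptions.

```python
import pywt

def impose_details(blurred, clear, wavelet='db2'):
    """Replace the detail sub-bands of an aligned blurred face with those
    of a clear face of the same ethnicity, keep the blurred approximation
    sub-band, and invert the transform to get the enhanced image."""
    approx_blurred, _ = pywt.dwt2(blurred, wavelet)
    _, details_clear = pywt.dwt2(clear, wavelet)
    return pywt.idwt2((approx_blurred, details_clear), wavelet)
```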

  11. System for face recognition under expression variations of neutral-sampled individuals using recognized expression warping and a virtual expression-face database

    NASA Astrophysics Data System (ADS)

    Petpairote, Chayanut; Madarasmi, Suthep; Chamnongthai, Kosin

    2018-01-01

    The practical identification of individuals using facial recognition techniques requires the matching of faces with specific expressions to faces from a neutral-face database. A method is proposed for facial recognition under varied expressions against neutral face samples of individuals, via recognized-expression warping and the use of a virtual expression-face database. In this method, facial expressions are recognized and the input expression faces are classified into facial expression groups. To aid facial recognition, the virtual expression-face database is organized by average facial-expression shape and by coarse- and fine-featured facial textures. Wrinkle information is also employed in classification, using a masking process to adjust input faces to match the expression-face database. We evaluate the performance of the proposed method using the CMU Multi-PIE, Cohn-Kanade, and AR expression-face databases, and we find that it provides significantly improved face recognition accuracy compared to conventional methods and is acceptable for facial recognition under expression variation.

  12. Evidence of a Shift from Featural to Configural Face Processing in Infancy

    ERIC Educational Resources Information Center

    Schwarzer, Gudrun; Zauner, Nicola; Jovanovic, Bianca

    2007-01-01

    Two experiments examined whether 4-, 6-, and 10-month-old infants process natural looking faces by feature, i.e. processing internal facial features independently of the facial context or holistically by processing the features in conjunction with the facial context. Infants were habituated to two faces and looking time was measured. After…

  13. Characterization and recognition of mixed emotional expressions in thermal face image

    NASA Astrophysics Data System (ADS)

    Saha, Priya; Bhattacharjee, Debotosh; De, Barin K.; Nasipuri, Mita

    2016-05-01

    Facial expressions in infrared imaging have been introduced to address the problem of illumination, which is an integral constituent of visual imagery. The paper investigates facial skin temperature distributions for the mixed thermal facial expressions of our created face database, of which six are basic expressions and the remaining 12 are mixtures of those basic expressions. Temperature analysis has been performed on three facial regions of interest (ROIs): periorbital, supraorbital and mouth. Temperature variability of the ROIs across different expressions has been measured using statistical parameters. The temperature variation measurements in the ROIs of a particular expression form a vector, which is later used in the recognition of mixed facial expressions. Investigations show that facial features in mixed facial expressions can be characterized by positive-emotion-induced facial features and negative-emotion-induced facial features. The supraorbital region is a useful facial region that can differentiate basic expressions from mixed expressions. Analysis and interpretation of mixed expressions have been conducted with the help of box-and-whisker plots. A facial region containing a mixture of two expressions generally induces less temperature change than the corresponding facial region in a basic expression.
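
    A minimal sketch of the ROI temperature-statistics representation described above; the ROI coordinates and the particular statistics are illustrative assumptions, not the paper's exact settings.

```python
# Each thermal face image becomes a vector of per-ROI temperature statistics;
# the concatenated vector is later fed to a mixed-expression classifier.
import numpy as np

ROIS = {  # illustrative (row-slice, column-slice) boxes in a 160x160 image
    "supraorbital": (slice(10, 40), slice(30, 130)),
    "periorbital": (slice(40, 80), slice(30, 130)),
    "mouth": (slice(120, 160), slice(60, 100)),
}

def expression_vector(thermal_face):
    """thermal_face: 2-D array of per-pixel temperatures."""
    feats = []
    for row_slice, col_slice in ROIS.values():
        roi = thermal_face[row_slice, col_slice]
        feats += [roi.mean(), roi.std(), roi.max() - roi.min()]
    return np.asarray(feats)
```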

  14. Dermatoscopic features of cutaneous non-facial non-acral lentiginous growth pattern melanomas

    PubMed Central

    Keir, Jeff

    2014-01-01

    Background: The dermatoscopic features of facial lentigo maligna (LM), facial lentigo maligna melanoma (LMM) and acral lentiginous melanoma (ALM) have been well described. This is the first description of the dermatoscopic appearance of a clinical series of cutaneous non-facial non-acral lentiginous growth pattern melanomas. Objective: To describe the dermatoscopic features of a series of cutaneous non-facial non-acral lentiginous growth pattern melanomas in an Australian skin cancer practice. Method: Single-observer retrospective analysis of dermatoscopic images of a one-year series of cutaneous non-facial, non-acral melanomas reported as having a lentiginous growth pattern, detected in an open-access primary care skin cancer clinic in Australia. Lesions were scored for the presence of classical criteria for facial LM; modified pattern analysis (“Chaos and Clues”) criteria; and the presence of two novel criteria: a lentigo-like pigment pattern lacking a lentigo-like border, and large polygons. Results: 20 melanomas occurring in 14 female and 6 male patients were included. Average patient age was 64 years (range: 44–83). Lesion distribution was: trunk 35%; upper limb 40%; and lower limb 25%. The incidences of the criteria identified were: asymmetry of color or pattern (100%); lentigo-like pigment pattern lacking a lentigo-like border (90%); asymmetrically pigmented follicular openings (APFOs) (70%); grey-blue structures (70%); large polygons (45%); eccentric structureless area (15%); bright white lines (5%). 20% of the lesions had only the novel criteria and/or APFOs. Limitations: Single-observer, single-center retrospective study. Conclusions: Cutaneous non-facial non-acral melanomas with a lentiginous growth pattern may have none or very few traditional criteria for the diagnosis of melanoma. Criteria that are logically expected in lesions with a lentiginous growth pattern (a lentigo-like pigment pattern lacking a lentigo-like border, APFOs) and the novel criterion of large polygons may be useful in increasing the sensitivity and specificity of diagnosis of these lesions. Further study is required to establish the significance of these observations. PMID:24520520

  15. Dermatoscopic features of cutaneous non-facial non-acral lentiginous growth pattern melanomas.

    PubMed

    Keir, Jeff

    2014-01-01

    The dermatoscopic features of facial lentigo maligna (LM), facial lentigo maligna melanoma (LMM) and acral lentiginous melanoma (ALM) have been well described. This is the first description of the dermatoscopic appearance of a clinical series of cutaneous non-facial non-acral lentiginous growth pattern melanomas. To describe the dermatoscopic features of a series of cutaneous non-facial non-acral lentiginous growth pattern melanomas in an Australian skin cancer practice. Single-observer retrospective analysis of dermatoscopic images of a one-year series of cutaneous non-facial, non-acral melanomas reported as having a lentiginous growth pattern, detected in an open-access primary care skin cancer clinic in Australia. Lesions were scored for the presence of classical criteria for facial LM; modified pattern analysis ("Chaos and Clues") criteria; and the presence of two novel criteria: a lentigo-like pigment pattern lacking a lentigo-like border, and large polygons. 20 melanomas occurring in 14 female and 6 male patients were included. Average patient age was 64 years (range: 44-83). Lesion distribution was: trunk 35%; upper limb 40%; and lower limb 25%. The incidences of the criteria identified were: asymmetry of color or pattern (100%); lentigo-like pigment pattern lacking a lentigo-like border (90%); asymmetrically pigmented follicular openings (APFOs) (70%); grey-blue structures (70%); large polygons (45%); eccentric structureless area (15%); bright white lines (5%). 20% of the lesions had only the novel criteria and/or APFOs. This was a single-observer, single-center retrospective study. Cutaneous non-facial non-acral melanomas with a lentiginous growth pattern may have none or very few traditional criteria for the diagnosis of melanoma. Criteria that are logically expected in lesions with a lentiginous growth pattern (a lentigo-like pigment pattern lacking a lentigo-like border, APFOs) and the novel criterion of large polygons may be useful in increasing the sensitivity and specificity of diagnosis of these lesions. Further study is required to establish the significance of these observations.

  16. Recognizing Action Units for Facial Expression Analysis

    PubMed Central

    Tian, Ying-li; Kanade, Takeo; Cohn, Jeffrey F.

    2010-01-01

    Most automatic expression analysis systems attempt to recognize a small set of prototypic expressions, such as happiness, anger, surprise, and fear. Such prototypic expressions, however, occur rather infrequently. Human emotions and intentions are more often communicated by changes in one or a few discrete facial features. In this paper, we develop an Automatic Face Analysis (AFA) system to analyze facial expressions based on both permanent facial features (brows, eyes, mouth) and transient facial features (deepening of facial furrows) in a nearly frontal-view face image sequence. The AFA system classifies fine-grained changes in facial expression into action units (AUs) of the Facial Action Coding System (FACS), instead of a few prototypic expressions. Multistate face and facial component models are proposed for tracking and modeling the various facial features, including lips, eyes, brows, cheeks, and furrows. During tracking, detailed parametric descriptions of the facial features are extracted. With these parameters as the inputs, a group of action units (neutral expression, six upper-face AUs and ten lower-face AUs) is recognized, whether the AUs occur alone or in combination. The system has achieved average recognition rates of 96.4 percent (95.4 percent if neutral expressions are excluded) for upper-face AUs and 96.7 percent (95.6 percent with neutral expressions excluded) for lower-face AUs. The generalizability of the system has been tested using independent image databases collected and FACS-coded for ground truth by different research teams. PMID:25210210
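
    A hedged sketch of the final classification stage described above, assuming the parametric feature descriptions have already been extracted by tracking; the parameter names, stand-in data, and one-hidden-layer network are illustrative, not the AFA system's code.

```python
# Multi-label classification: parametric feature descriptions in, a set of
# simultaneously active AUs out, so AU combinations are handled naturally.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X_train = rng.random((200, 6))           # stand-in parameters per frame:
                                         # lip height/width, eye opening, ...
Y_train = rng.integers(0, 2, (200, 7))   # 7 upper-face AUs, on/off each

au_net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
au_net.fit(X_train, Y_train)             # scikit-learn MLP supports multi-label
au_active = au_net.predict(rng.random((1, 6)))   # 0/1 vector, one per AU
```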

  17. What's in a "face file"? Feature binding with facial identity, emotion, and gaze direction.

    PubMed

    Fitousi, Daniel

    2017-07-01

    A series of four experiments investigated the binding of facial (i.e., facial identity, emotion, and gaze direction) and non-facial (i.e., spatial location and response location) attributes. Evidence for the creation and retrieval of temporary memory face structures across perception and action has been adduced. These episodic structures, dubbed herein "face files", consisted of both visuo-visuo and visuo-motor bindings. Feature binding was indicated by partial-repetition costs. That is, repeating a combination of facial features or alternating them altogether led to faster responses than repeating or alternating only one of the features. Taken together, the results indicate that: (a) "face files" affect both action and perception mechanisms, (b) binding can take place with facial dimensions and is not restricted to low-level features (Hommel, Visual Cognition 5:183-216, 1998), and (c) the binding of facial and non-facial attributes is facilitated if the dimensions share common spatial or motor codes. The theoretical contributions of these results to "person construal" theories (Freeman & Ambady, Psychological Science, 20(10), 1183-1188, 2011), as well as to face recognition models (Haxby, Hoffman, & Gobbini, Biological Psychiatry, 51(1), 59-67, 2000) are discussed.

  18. Perceived Attractiveness, Facial Features, and African Self-Consciousness.

    ERIC Educational Resources Information Center

    Chambers, John W., Jr.; And Others

    1994-01-01

    Investigated relationships between perceived attractiveness, facial features, and African self-consciousness (ASC) among 149 African American college students. As predicted, high ASC subjects used more positive adjectives in descriptions of strong African facial features than did medium or low ASC subjects. Results are discussed in the context of…

  19. The review and results of different methods for facial recognition

    NASA Astrophysics Data System (ADS)

    Le, Yifan

    2017-09-01

    In recent years, facial recognition has drawn much attention due to its wide potential applications. As a unique technology in biometric identification, facial recognition represents a significant improvement because it can be operated without the cooperation of the people under detection. Hence, facial recognition can be applied in defense systems, medical detection, human-behavior understanding, and other domains. Several theories and methods have been established to make progress in facial recognition: (1) a novel two-stage facial landmark localization method is proposed which achieves more accurate facial localization on specific databases; (2) a statistical face frontalization method is proposed which outperforms state-of-the-art methods for face landmark localization; (3) a general facial landmark detection algorithm is proposed to handle images with severe occlusion and images with large head poses; (4) three methods are proposed for face alignment, including a shape-augmented regression method, a pose-indexed multi-view method, and a learning-based method that regresses local binary features. The aim of this paper is to analyze previous work on different aspects of facial recognition, focusing on concrete methods and their performance on various databases. In addition, some improvements and suggestions for potential applications are put forward.

  20. Rigid Facial Motion Influences Featural, But Not Holistic, Face Processing

    PubMed Central

    Xiao, Naiqi; Quinn, Paul C.; Ge, Liezhong; Lee, Kang

    2012-01-01

    We report three experiments in which we investigated the effect of rigid facial motion on face processing. Specifically, we used the face composite effect to examine whether rigid facial motion influences primarily featural or holistic processing of faces. In Experiments 1, 2, and 3, participants were first familiarized with dynamic displays in which a target face turned from one side to another; then at test, participants judged whether the top half of a composite face (the top half of the target face aligned or misaligned with the bottom half of a foil face) belonged to the target face. We compared performance in the dynamic condition to various static control conditions in Experiments 1, 2, and 3, which differed from each other in terms of the display order of the multiple static images or the inter-stimulus interval (ISI) between the images. We found that the size of the face composite effect in the dynamic condition was significantly smaller than that in the static conditions. In other words, the dynamic face display influenced participants to process the target faces in a part-based manner, and consequently their recognition of the upper portion of the composite face at test suffered less interference from the aligned lower part of the foil face. The findings from the present experiments provide the strongest evidence to date that rigid facial motion mainly influences featural, but not holistic, face processing. PMID:22342561

  1. Subject-specific and pose-oriented facial features for face recognition across poses.

    PubMed

    Lee, Ping-Han; Hsu, Gee-Sern; Wang, Yun-Wen; Hung, Yi-Ping

    2012-10-01

    Most face recognition scenarios assume that frontal faces or mug shots are available for enrollment in the database, while faces in other poses are collected in the probe set. Given a face from the probe set, one needs to determine whether a match in the database exists. This is under the assumption that, in forensic applications, most suspects have their mug shots available in the database, and face recognition aims at recognizing the suspects when their faces in various poses are captured by a surveillance camera. This paper considers a different scenario: given a face with multiple poses available, which may or may not include a mug shot, develop a method to recognize the face in poses different from those captured. That is, given two disjoint sets of poses of a face, one for enrollment and the other for recognition, this paper reports a method suited to handling such cases. The proposed method includes feature extraction and classification. For feature extraction, we first cluster the poses of each subject's face in the enrollment set into a few pose classes and then decompose the appearance of the face in each pose class using an Embedded Hidden Markov Model, which allows us to define a set of subject-specific and pose-oriented (SSPO) facial components for each subject. For classification, an AdaBoost weighting scheme is used to fuse the component classifiers with SSPO component features. The proposed method is shown to outperform other approaches, including a component-based classifier with local facial features cropped manually, in an extensive performance evaluation study.
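
    A minimal sketch of an AdaBoost-style weighting over component classifiers, as in the fusion step described above; the EHMM-based SSPO feature extraction itself is omitted, and the data layout is an assumption.

```python
# Fuse per-component classifiers (one per SSPO facial component) by learning
# AdaBoost weights on validation data, then taking a weighted vote.
import numpy as np

def adaboost_component_weights(preds, labels, n_rounds=10):
    """preds: (n_components, n_samples) of +/-1 outputs; labels: +/-1."""
    n_comp, n = preds.shape
    w = np.full(n, 1.0 / n)                      # sample weights
    alphas = np.zeros(n_comp)
    for _ in range(n_rounds):
        errs = np.array([(w * (p != labels)).sum() for p in preds])
        k = int(errs.argmin())                   # best component this round
        err = float(np.clip(errs[k], 1e-12, 1 - 1e-12))
        a = 0.5 * np.log((1.0 - err) / err)      # classic AdaBoost weight
        alphas[k] += a
        w *= np.exp(-a * labels * preds[k])      # emphasize hard samples
        w /= w.sum()
    return alphas

def fused_decision(preds, alphas):
    return np.sign(alphas @ preds)               # weighted component vote
```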

  2. Speech-Language Evaluation and Rehabilitation Treatment in Floating-Harbor Syndrome: A Case Study

    ERIC Educational Resources Information Center

    Angelillo, Nicola; Di Costanzo, Brigida; Barillari, Umberto

    2010-01-01

    Floating-Harbor syndrome is a rare congenital disorder characterized by specific facial features, short stature associated with significantly delayed bone age and language impairment. Although language delay is a cardinal manifestation of this syndrome, few reports describe the specific language difficulties of these patients, particularly the…

  3. Orientation-sensitivity to facial features explains the Thatcher illusion.

    PubMed

    Psalta, Lilia; Young, Andrew W; Thompson, Peter; Andrews, Timothy J

    2014-10-09

    The Thatcher illusion provides a compelling example of the perceptual cost of face inversion. The Thatcher illusion is often thought to result from a disruption to the processing of spatial relations between face features. Here, we show the limitations of this account and instead demonstrate that the effect of inversion in the Thatcher illusion is better explained by a disruption to the processing of purely local facial features. Using a matching task, we found that participants were able to discriminate normal and Thatcherized versions of the same face when they were presented in an upright orientation, but not when the images were inverted. Next, we showed that the effect of inversion was also apparent when only the eye region or only the mouth region was visible. These results demonstrate that a key component of the Thatcher illusion is to be found in orientation-specific encoding of the expressive features (eyes and mouth) of the face. © 2014 ARVO.

  4. Facial soft biometric features for forensic face recognition.

    PubMed

    Tome, Pedro; Vera-Rodriguez, Ruben; Fierrez, Julian; Ortega-Garcia, Javier

    2015-12-01

    This paper proposes a functional feature-based approach useful for real forensic casework, based on the shape, orientation and size of facial traits, which can be considered a soft biometric approach. The motivation of this work is to provide a set of facial features that can be understood by non-experts such as judges, and to support the work of forensic examiners who, in practice, carry out a thorough manual comparison of face images, paying special attention to the similarities and differences in shape and size of various facial traits. This new approach constitutes a tool that automatically converts a set of facial landmarks to a set of features (shape and size) corresponding to facial regions of forensic value. These features are furthermore evaluated in a population to generate statistics to support forensic examiners. The proposed features can also be used as additional information to improve the performance of traditional face recognition systems. These features follow the forensic methodology and are obtained in a continuous and discrete manner from raw images. A statistical analysis is also carried out to study the stability, discrimination power and correlation of the proposed facial features on two realistic databases: MORPH and ATVS Forensic DB. Finally, the performance of both continuous and discrete features is analyzed using different similarity measures. Experimental results show high discrimination power and good recognition performance, especially for continuous features. A final fusion of the best system configurations achieves rank-10 match results of 100% for the ATVS database and 75% for the MORPH database, demonstrating the benefits of using this information in practice. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
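
    A minimal sketch of converting landmarks to interpretable shape/size features of the kind described above; the landmark indices and the particular ratios are illustrative assumptions, not the paper's exact feature set.

```python
# Scale-free soft-biometric measurements from 2-D landmarks, normalized by
# the inter-ocular distance so they are comparable across image resolutions.
import numpy as np

def soft_biometric_features(lm):
    """lm: (N, 2) landmark array; assumed indices: 0/1 eye centers,
    2 nose tip, 3/4 mouth corners, 5 chin."""
    dist = lambda a, b: float(np.linalg.norm(lm[a] - lm[b]))
    iod = dist(0, 1)                             # inter-ocular distance
    return {
        "mouth_width": dist(3, 4) / iod,
        "lower_face_height": dist(2, 5) / iod,   # nose tip to chin
        "face_height": dist(0, 5) / iod,
    }
```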

  5. Facial movements strategically camouflage involuntary social signals of face morphology.

    PubMed

    Gill, Daniel; Garrod, Oliver G B; Jack, Rachael E; Schyns, Philippe G

    2014-05-01

    Animals use social camouflage as a tool of deceit to increase the likelihood of survival and reproduction. We tested whether humans can also strategically deploy transient facial movements to camouflage the default social traits conveyed by the phenotypic morphology of their faces. We used the responses of 12 observers to create models of the dynamic facial signals of dominance, trustworthiness, and attractiveness. We applied these dynamic models to facial morphologies differing on perceived dominance, trustworthiness, and attractiveness to create a set of dynamic faces; new observers rated each dynamic face according to the three social traits. We found that specific facial movements camouflage the social appearance of a face by modulating the features of phenotypic morphology. A comparison of these facial expressions with those similarly derived for facial emotions showed that social-trait expressions, rather than being simple one-to-one overgeneralizations of emotional expressions, are a distinct set of signals composed of movements from different emotions. Our generative face models represent novel psychophysical laws for social sciences; these laws predict the perception of social traits on the basis of dynamic face identities.

  6. Automatic 2.5-D Facial Landmarking and Emotion Annotation for Social Interaction Assistance.

    PubMed

    Zhao, Xi; Zou, Jianhua; Li, Huibin; Dellandrea, Emmanuel; Kakadiaris, Ioannis A; Chen, Liming

    2016-09-01

    People with low vision, Alzheimer's disease, and autism spectrum disorder experience difficulties in perceiving or interpreting facial expressions of emotion in their social lives. Though automatic facial expression recognition (FER) methods on 2-D videos have been extensively investigated, their performance is constrained by challenges in head pose and lighting conditions. The shape information in 3-D facial data can reduce or even overcome these challenges. However, the high cost of 3-D cameras prevents their widespread use. Fortunately, 2.5-D facial data from emerging portable RGB-D cameras provide a good balance for this dilemma. In this paper, we propose an automatic emotion annotation solution on 2.5-D facial data collected from RGB-D cameras. The solution consists of a facial landmarking method and a FER method. Specifically, we propose building a deformable partial face model and fitting the model to a 2.5-D face for localizing facial landmarks automatically. For FER, a novel action-unit (AU) space-based method is proposed. Facial features are extracted using the landmarks and further represented as coordinates in the AU space, which are classified into facial expressions. Evaluated on three publicly accessible facial databases, namely the EURECOM, FRGC, and Bosphorus databases, the proposed facial landmarking and expression recognition methods have achieved satisfactory results. Possible real-world applications using our algorithms have also been discussed.

  7. Facial Orientation and Facial Shape in Extant Great Apes: A Geometric Morphometric Analysis of Covariation

    PubMed Central

    Neaux, Dimitri; Guy, Franck; Gilissen, Emmanuel; Coudyzer, Walter; Vignaud, Patrick; Ducrocq, Stéphane

    2013-01-01

    The organization of the bony face is complex, its morphology being influenced in part by the rest of the cranium. Characterizing the facial morphological variation and craniofacial covariation patterns in extant hominids is fundamental to the understanding of their evolutionary history. Numerous studies on hominid facial shape have proposed hypotheses concerning the relationship between anterior facial shape, facial block orientation and basicranial flexion. In this study we test these hypotheses in a sample of adult specimens belonging to three extant hominid genera (Homo, Pan and Gorilla). Intraspecific variation and covariation patterns are analyzed using geometric morphometric methods and multivariate statistics, such as partial least squares, on three-dimensional landmark coordinates. Our results indicate significant intraspecific covariation between facial shape, facial block orientation and basicranial flexion. Hominids share similar characteristics in the relationship between anterior facial shape and facial block orientation. Modern humans exhibit a specific pattern in the covariation between anterior facial shape and basicranial flexion. This peculiar feature underscores the role of modern humans' highly flexed basicranium in the overall integration of the cranium. Furthermore, our results are consistent with the hypothesis of a relationship between the reduction of the value of the cranial base angle and a downward rotation of the facial block in modern humans, and to a lesser extent in chimpanzees. PMID:23441232

  8. Task-irrelevant emotion facilitates face discrimination learning.

    PubMed

    Lorenzino, Martina; Caudek, Corrado

    2015-03-01

    We understand poorly how the ability to discriminate faces from one another is shaped by visual experience. The purpose of the present study is to determine whether face discrimination learning can be facilitated by facial emotions. To answer this question, we used a task-irrelevant perceptual learning paradigm because it closely mimics the learning processes that, in daily life, occur without a conscious intention to learn and without an attentional focus on specific facial features. We measured face discrimination thresholds before and after training. During the training phase (4 days), participants performed a contrast discrimination task on face images. They were not informed that we introduced (task-irrelevant) subtle variations in the face images from trial to trial. For the Identity group, the task-irrelevant features were variations along a morphing continuum of facial identity. For the Emotion group, the task-irrelevant features were variations along an emotional expression morphing continuum. The Control group did not undergo contrast discrimination learning and only performed the pre-training and post-training tests, with the same temporal gap between them as the other two groups. Results indicate that face discrimination improved, but only for the Emotion group. Participants in the Emotion group, moreover, showed face discrimination improvements also for stimulus variations along the facial identity dimension, even if these (task-irrelevant) stimulus features had not been presented during training. The present results highlight the importance of emotions for face discrimination learning. Copyright © 2015 Elsevier Ltd. All rights reserved.

  9. Optic nerve coloboma, Dandy-Walker malformation, microglossia, tongue hamartomata, cleft palate and apneic spells: an existing oral-facial-digital syndrome or a new variant?

    PubMed

    Toriello, Helga V; Lemire, Edmond G

    2002-01-01

    We report on a female infant with postaxial polydactyly of the hands, preaxial polydactyly of the right foot, cleft palate, microglossia and tongue hamartomata consistent with an oral-facial-digital syndrome (OFDS). The patient also had optic nerve colobomata, a Dandy-Walker malformation, micrognathia and apneic spells. This combination of clinical features has not been previously reported. This patient either expands the clinical features of one of the existing OFDS or represents a new variant. A review of the literature highlights the difficulties in making a specific diagnosis because of the different classification systems that exist in the literature.

  10. Automatic prediction of facial trait judgments: appearance vs. structural models.

    PubMed

    Rojas, Mario; Masip, David; Todorov, Alexander; Vitria, Jordi

    2011-01-01

    Evaluating other individuals with respect to personality characteristics plays a crucial role in human relations, and it is the focus of attention for research in diverse fields such as psychology and interactive computer systems. In psychology, face perception has been recognized as a key component of this evaluation system. Multiple studies suggest that observers use face information to infer personality characteristics. Interactive computer systems are trying to take advantage of these findings to increase the naturalness of interaction and to improve system performance. Here, we experimentally test whether the automatic prediction of facial trait judgments (e.g., dominance) can be made by using the full appearance information of the face, and whether a reduced representation of its structure is sufficient. We evaluate two separate approaches: a holistic representation model using the facial appearance information, and a structural model constructed from the relations among facial salient points. State-of-the-art machine learning methods are applied to (a) derive a facial trait judgment model from training data and (b) predict a facial trait value for any face. Furthermore, we address the issue of whether there are specific structural relations among facial points that predict the perception of facial traits. Experimental results over a set of labeled data (9 different trait evaluations) and classification rules (4 rules) suggest that (a) prediction of the perception of facial traits is learnable by both holistic and structural approaches; (b) the most reliable prediction of facial trait judgments is obtained by certain types of holistic descriptions of the face appearance; and (c) for some traits, such as attractiveness and extroversion, there are relationships between specific structural features and social perceptions.
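
    A hedged sketch contrasting the two model families evaluated above: a holistic model learned on raw face appearance versus a structural model learned on relations among facial salient points. The learners and data shapes are illustrative, not the paper's exact setup.

```python
# Holistic features: the full appearance vector. Structural features: all
# pairwise distances between salient points. Either is regressed onto a
# rated trait (e.g., dominance) with an off-the-shelf learner.
import numpy as np
from scipy.spatial.distance import pdist
from sklearn.svm import SVR

def holistic_features(face_img):
    return face_img.astype(float).ravel()

def structural_features(landmarks):
    return pdist(landmarks)          # relations among facial salient points

def fit_trait_model(feature_vectors, trait_ratings):
    return SVR().fit(np.stack(feature_vectors), trait_ratings)
```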

  11. Toward End-to-End Face Recognition Through Alignment Learning

    NASA Astrophysics Data System (ADS)

    Zhong, Yuanyi; Chen, Jiansheng; Huang, Bo

    2017-08-01

    Plenty of effective methods have been proposed for face recognition during the past decade. Although these methods differ essentially in many aspects, a common practice is to specifically align the facial area based on prior knowledge of human face structure before feature extraction. In most systems, the face alignment module is implemented independently. This has actually caused difficulties in the design and training of end-to-end face recognition models. In this paper we study the possibility of alignment learning in end-to-end face recognition, in which neither prior knowledge of facial landmarks nor artificially defined geometric transformations are required. Specifically, spatial transformer layers are inserted in front of the feature extraction layers in a Convolutional Neural Network (CNN) for face recognition. Only human identity clues are used for driving the neural network to automatically learn the most suitable geometric transformation and the most appropriate facial area for the recognition task. To ensure reproducibility, our model is trained purely on the publicly available CASIA-WebFace dataset, and is tested on the Labeled Faces in the Wild (LFW) dataset. We have achieved a verification accuracy of 99.08%, which is comparable to state-of-the-art single-model based methods.
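
    A minimal PyTorch sketch of the architectural idea above: a spatial transformer layer in front of the feature-extraction layers, trained only from identity labels so the alignment is learned end to end. The layer sizes are illustrative, not the paper's network.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class STNFace(nn.Module):
    def __init__(self, n_ids):
        super().__init__()
        self.loc = nn.Sequential(   # localization net -> 6 affine parameters
            nn.Conv2d(1, 8, 7), nn.MaxPool2d(2), nn.ReLU(),
            nn.Conv2d(8, 10, 5), nn.MaxPool2d(2), nn.ReLU(),
            nn.Flatten(), nn.LazyLinear(32), nn.ReLU(), nn.Linear(32, 6))
        self.loc[-1].weight.data.zero_()   # start at the identity transform
        self.loc[-1].bias.data.copy_(torch.tensor([1., 0, 0, 0, 1, 0]))
        self.features = nn.Sequential(     # stand-in recognition stack
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
            nn.Flatten(), nn.Linear(32 * 16, n_ids))

    def forward(self, x):
        theta = self.loc(x).view(-1, 2, 3)
        grid = F.affine_grid(theta, x.size(), align_corners=False)
        x = F.grid_sample(x, grid, align_corners=False)  # learned alignment
        return self.features(x)   # identity logits; cross-entropy drives both
```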

  12. Recognition of children on age-different images: Facial morphology and age-stable features.

    PubMed

    Caplova, Zuzana; Compassi, Valentina; Giancola, Silvio; Gibelli, Daniele M; Obertová, Zuzana; Poppa, Pasquale; Sala, Remo; Sforza, Chiarella; Cattaneo, Cristina

    2017-07-01

    The situation of missing children is one of the most emotional social issues worldwide. The search for and identification of missing children is often hampered, among other things, by the fact that the facial morphology of long-term missing children changes as they grow. Nowadays, the wide coverage of surveillance systems potentially provides image material for comparisons with images of missing children that may facilitate identification. The aim of the study was to identify whether facial features are stable over time and can be utilized for facial recognition, by comparing facial images of children at different ages, as well as to test the possible use of moles in recognition. The study was divided into two phases: (1) morphological classification of facial features using an Anthropological Atlas; (2) an algorithm, developed in MATLAB® R2014b, for assessing the use of moles as age-stable features. The assessment of facial features by Anthropological Atlases showed high mismatch percentages among observers. On average, the mismatch percentages were lower for features describing shape than for those describing size. The nose tip cleft and the chin dimple showed the best agreement between observers regarding both categorization and stability over time. Using the position of moles as a reference point for recognition of the same person in age-different images seems to be a useful method in terms of objectivity, and it can be concluded that moles represent age-stable facial features that may be considered for preliminary recognition. Copyright © 2017 The Chartered Society of Forensic Sciences. Published by Elsevier B.V. All rights reserved.

  13. Sotos Syndrome. Clinical Exchange.

    ERIC Educational Resources Information Center

    Shuey, Elaine M.; Jamison, Kristen

    1996-01-01

    Sotos syndrome is characterized by high birth length, rapid bone growth, distinctive facial features, and possible verbal and motor delays. It is more common in males than females. Developmental deficits, specific learning problems, and speech/language delays may also occur. (DB)

  14. Can Automated Facial Expression Analysis Show Differences Between Autism and Typical Functioning?

    PubMed

    Borsos, Zsófia; Gyori, Miklos

    2017-01-01

    Exploratory analyses of emotional expressions using a commercially available facial expression recognition software are reported, from the context of a serious game for screening purposes. Our results are based on a comparative analysis of two matched groups of kindergarten-age children (high-functioning children with autism spectrum condition: n=13; typically developing children: n=13). Results indicate that this technology has the potential to identify autism-specific emotion expression features, and may play a role in affective diagnostic and assistive technologies.

  15. A mixture of sparse coding models explaining properties of face neurons related to holistic and parts-based processing

    PubMed Central

    2017-01-01

    Experimental studies have revealed evidence of both parts-based and holistic representations of objects and faces in the primate visual system. However, it is still a mystery how such seemingly contradictory types of processing can coexist within a single system. Here, we propose a novel theory called mixture of sparse coding models, inspired by the formation of category-specific subregions in the inferotemporal (IT) cortex. We developed a hierarchical network that constructed a mixture of two sparse coding submodels on top of a simple Gabor analysis. The submodels were each trained with face or non-face object images, which resulted in separate representations of facial parts and object parts. Importantly, evoked neural activities were modeled by Bayesian inference, which had a top-down explaining-away effect that enabled recognition of an individual part to depend strongly on the category of the whole input. We show that this explaining-away effect was indeed crucial for the units in the face submodel to exhibit significant selectivity to face images over object images in a similar way to actual face-selective neurons in the macaque IT cortex. Furthermore, the model explained, qualitatively and quantitatively, several tuning properties to facial features found in the middle patch of face processing in IT as documented by Freiwald, Tsao, and Livingstone (2009). These included, in particular, tuning to only a small number of facial features that were often related to geometrically large parts like face outline and hair, preference and anti-preference of extreme facial features (e.g., very large/small inter-eye distance), and reduction of the gain of feature tuning for partial face stimuli compared to whole face stimuli. Thus, we hypothesize that the coding principle of facial features in the middle patch of face processing in the macaque IT cortex may be closely related to mixture of sparse coding models. PMID:28742816

  16. A mixture of sparse coding models explaining properties of face neurons related to holistic and parts-based processing.

    PubMed

    Hosoya, Haruo; Hyvärinen, Aapo

    2017-07-01

    Experimental studies have revealed evidence of both parts-based and holistic representations of objects and faces in the primate visual system. However, it is still a mystery how such seemingly contradictory types of processing can coexist within a single system. Here, we propose a novel theory called mixture of sparse coding models, inspired by the formation of category-specific subregions in the inferotemporal (IT) cortex. We developed a hierarchical network that constructed a mixture of two sparse coding submodels on top of a simple Gabor analysis. The submodels were each trained with face or non-face object images, which resulted in separate representations of facial parts and object parts. Importantly, evoked neural activities were modeled by Bayesian inference, which had a top-down explaining-away effect that enabled recognition of an individual part to depend strongly on the category of the whole input. We show that this explaining-away effect was indeed crucial for the units in the face submodel to exhibit significant selectivity to face images over object images in a similar way to actual face-selective neurons in the macaque IT cortex. Furthermore, the model explained, qualitatively and quantitatively, several tuning properties to facial features found in the middle patch of face processing in IT as documented by Freiwald, Tsao, and Livingstone (2009). These included, in particular, tuning to only a small number of facial features that were often related to geometrically large parts like face outline and hair, preference and anti-preference of extreme facial features (e.g., very large/small inter-eye distance), and reduction of the gain of feature tuning for partial face stimuli compared to whole face stimuli. Thus, we hypothesize that the coding principle of facial features in the middle patch of face processing in the macaque IT cortex may be closely related to mixture of sparse coding models.
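
    A heavily simplified sketch of the mixture idea, under stated assumptions: two sparse dictionaries are learned with scikit-learn's DictionaryLearning, one on face patches and one on object patches, and at test time the submodel with the lower sparse-reconstruction error "explains" the input. This is a crude stand-in for the paper's Bayesian explaining-away inference, not its implementation.

```python
# Two sparse coding submodels over a shared input representation; category
# assignment by comparing reconstruction errors under each dictionary.
import numpy as np
from sklearn.decomposition import DictionaryLearning

def fit_submodel(patches, n_atoms=64):
    # patches: (n_samples, n_features), e.g., Gabor responses per image
    return DictionaryLearning(n_components=n_atoms, alpha=1.0).fit(patches)

def explaining_model(x, face_model, object_model):
    errors = []
    for m in (face_model, object_model):
        code = m.transform(x[None, :])     # sparse code under the submodel
        recon = code @ m.components_       # reconstruction from its atoms
        errors.append(float(((x - recon) ** 2).sum()))
    return "face" if errors[0] < errors[1] else "object"
```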

  17. The Emotional Modulation of Facial Mimicry: A Kinematic Study.

    PubMed

    Tramacere, Antonella; Ferrari, Pier F; Gentilucci, Maurizio; Giuffrida, Valeria; De Marco, Doriana

    2017-01-01

    It is well established that the observation of emotional facial expressions induces facial mimicry responses in observers. However, how the interaction between the emotional and motor components of facial expressions modulates the motor behavior of the perceiver is still unknown. We developed a kinematic experiment to evaluate the effect of different oro-facial expressions on the perceiver's face movements. Participants were asked to perform two movements, i.e., lip stretching and lip protrusion, in response to the observation of four meaningful (i.e., smile, angry-mouth, kiss, and spit) and two meaningless mouth gestures. All the stimuli were characterized by different motor patterns (mouth aperture or mouth closure). Response times and kinematic parameters of the movements (amplitude, duration, and mean velocity) were recorded and analyzed. Results showed dissociated effects on reaction times and movement kinematics. We found shorter reaction times when a mouth movement was preceded by the observation of a meaningful and motorically congruent oro-facial gesture, in line with the facial mimicry effect. On the contrary, during execution, the perception of a smile was associated with facilitation, in terms of shorter duration and higher velocity, of the incongruent movement, i.e., lip protrusion. The same effect occurred in response to kiss and spit, which significantly facilitated the execution of lip stretching. We call this phenomenon the facial mimicry reversal effect: the overturning of the effect normally observed during facial mimicry. In general, the findings show that both the motor features and the type of emotional oro-facial gesture (conveying positive or negative valence) affect the kinematics of subsequent mouth movements at different levels: while congruent motor features facilitate a general motor response, motor execution can be sped up by gestures that are motorically incongruent with the observed one. Moreover, the valence effect depends on the specific movement required. Results are discussed in relation to the Basic Emotion Theory and the embodied cognition framework.

  18. Relation between facial affect recognition and configural face processing in antipsychotic-free schizophrenia.

    PubMed

    Fakra, Eric; Jouve, Elisabeth; Guillaume, Fabrice; Azorin, Jean-Michel; Blin, Olivier

    2015-03-01

    Deficit in facial affect recognition is a well-documented impairment in schizophrenia, closely connected to social outcome. This deficit could be related to psychopathology, but also to a broader dysfunction in processing facial information. In addition, patients with schizophrenia inadequately use configural information, a type of processing that relies on spatial relationships between facial features. To date, no study has specifically examined the link between symptoms and the misuse of configural information in the deficit in facial affect recognition. Unmedicated schizophrenia patients (n = 30) and matched healthy controls (n = 30) performed a facial affect recognition task and a face inversion task, which tests the aptitude to rely on configural information. In patients, regressions were carried out between facial affect recognition, symptom dimensions and the inversion effect. Patients, compared with controls, showed a deficit in facial affect recognition and a lower inversion effect. Negative symptoms and a lower inversion effect could account for 41.2% of the variance in facial affect recognition. This study confirms the presence of a deficit in facial affect recognition, and of a dysfunctional use of configural information, in antipsychotic-free patients. Negative symptoms and poor processing of configural information explained a substantial part of the deficient recognition of facial affect. We speculate that this deficit may be caused by several factors, among which psychopathology and failure in correctly manipulating configural information stand independently. PsycINFO Database Record (c) 2015 APA, all rights reserved.

  19. Person-independent facial expression analysis by fusing multiscale cell features

    NASA Astrophysics Data System (ADS)

    Zhou, Lubing; Wang, Han

    2013-03-01

    Automatic facial expression recognition is an interesting and challenging task. To achieve satisfactory accuracy, deriving a robust facial representation is especially important. A novel appearance-based feature, multiscale cell local intensity increasing patterns (MC-LIIP), is presented to represent facial images and conduct person-independent facial expression analysis. The LIIP uses a decimal number to encode the texture or intensity distribution around each pixel via pixel-to-pixel intensity comparisons. To boost noise resistance, MC-LIIP carries out the comparison computation on the average values of scalable cells instead of individual pixels. The facial descriptor fuses region-based histograms of MC-LIIP features from various scales, so as to encode not only the textural microstructures but also the macrostructures of facial images. Finally, a support vector machine classifier is applied for expression recognition. Experimental results on the CK+ and Karolinska Directed Emotional Faces databases show the superiority of the proposed method.
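
    A minimal NumPy/SciPy sketch of an LIIP-style code written from the description above (an approximation, not the paper's exact descriptor): intensity comparisons against the 8 neighbors are computed on cell-averaged values for noise resistance and packed into one decimal code per position.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def mc_liip_codes(image, cell=3):
    avg = uniform_filter(image.astype(float), size=cell)  # cell averages
    code = np.zeros(avg.shape, dtype=np.uint8)
    neighbors = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                 (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(neighbors):
        shifted = np.roll(np.roll(avg, dy, axis=0), dx, axis=1)
        code |= (shifted > avg).astype(np.uint8) << bit   # binary -> decimal
    # Region-based histograms of these codes, computed at several cell sizes
    # and concatenated, form the facial descriptor fed to the SVM.
    return code
```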

  20. A small-world network model of facial emotion recognition.

    PubMed

    Takehara, Takuma; Ochiai, Fumio; Suzuki, Naoto

    2016-01-01

    Various models have been proposed to increase understanding of the cognitive basis of facial emotions. Despite those efforts, interactions between facial emotions have received minimal attention. If collective behaviours relating to each facial emotion in the comprehensive cognitive system could be assumed, specific facial emotion relationship patterns might emerge. In this study, we demonstrate that the framework of complex networks can effectively capture those patterns. We generated 81 facial emotion images (6 prototypes and 75 morphs) and then asked participants to rate degrees of similarity in 3240 facial emotion pairs in a paired comparison task. A facial emotion network constructed on the basis of similarity clearly forms a small-world network, which features an extremely short average network distance and close connectivity. Further, even if two facial emotions have opposing valences, they are connected within only two steps. In addition, we show that intermediary morphs are crucial for maintaining full network integration, whereas prototypes are not at all important. These results suggest the existence of collective behaviours in the cognitive systems of facial emotions and also describe why people can efficiently recognize facial emotions in terms of information transmission and propagation. For comparison, we constructed three simulated networks: one based on the categorical model, one based on the dimensional model, and one random network. The results reveal that the small-world connectivity in facial emotion networks is clearly different from that of those networks, suggesting that a small-world network is the most suitable model for capturing the cognitive basis of facial emotions.
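
    A hedged sketch of the network construction and small-world checks described above, assuming the networkx package; the similarity matrix and threshold are stand-ins for the participants' paired-comparison ratings.

```python
# Nodes are the 81 facial emotion images; edges connect pairs whose rated
# similarity clears a threshold. Small-world signatures: short average
# path length together with high clustering.
import networkx as nx

def emotion_network(similarity, threshold):
    """similarity: symmetric (n, n) array of rated similarities."""
    n = similarity.shape[0]
    G = nx.Graph()
    G.add_nodes_from(range(n))
    G.add_edges_from((i, j) for i in range(n) for j in range(i + 1, n)
                     if similarity[i, j] >= threshold)
    return G

def small_world_stats(G):
    giant = G.subgraph(max(nx.connected_components(G), key=len))
    return (nx.average_shortest_path_length(giant),  # should be short
            nx.average_clustering(giant))            # should be high
```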

  1. Modeling 3D Facial Shape from DNA

    PubMed Central

    Claes, Peter; Liberton, Denise K.; Daniels, Katleen; Rosana, Kerri Matthes; Quillen, Ellen E.; Pearson, Laurel N.; McEvoy, Brian; Bauchet, Marc; Zaidi, Arslan A.; Yao, Wei; Tang, Hua; Barsh, Gregory S.; Absher, Devin M.; Puts, David A.; Rocha, Jorge; Beleza, Sandra; Pereira, Rinaldo W.; Baynam, Gareth; Suetens, Paul; Vandermeulen, Dirk; Wagner, Jennifer K.; Boster, James S.; Shriver, Mark D.

    2014-01-01

    Human facial diversity is substantial, complex, and largely scientifically unexplained. We used spatially dense quasi-landmarks to measure face shape in population samples with mixed West African and European ancestry from three locations (United States, Brazil, and Cape Verde). Using bootstrapped response-based imputation modeling (BRIM), we uncover the relationships between facial variation and the effects of sex, genomic ancestry, and a subset of craniofacial candidate genes. The facial effects of these variables are summarized as response-based imputed predictor (RIP) variables, which are validated using self-reported sex, genomic ancestry, and observer-based facial ratings (femininity and proportional ancestry) and judgments (sex and population group). By jointly modeling sex, genomic ancestry, and genotype, the independent effects of particular alleles on facial features can be uncovered. Results on a set of 20 genes showing significant effects on facial features provide support for this approach as a novel means to identify genes affecting normal-range facial features and for approximating the appearance of a face from genetic markers. PMID:24651127

  2. Young Children's Ability to Match Facial Features Typical of Race.

    ERIC Educational Resources Information Center

    Lacoste, Ronald J.

    This study examined (1) the ability of 3- and 4-year-old children to racially classify Negro and Caucasian facial features in the absence of skin color as a racial cue; and (2) the relative value attached to the facial features of Negro and Caucasian races. Subjects were 21 middle income, Caucasian children from a privately owned nursery school in…

  3. Local binary pattern variants-based adaptive texture features analysis for posed and nonposed facial expression recognition

    NASA Astrophysics Data System (ADS)

    Sultana, Maryam; Bhatti, Naeem; Javed, Sajid; Jung, Soon Ki

    2017-09-01

    Facial expression recognition (FER) is an important task for various computer vision applications. The task becomes challenging when it requires the detection and encoding of macro- and micropatterns of facial expressions. We present a two-stage texture feature extraction framework based on local binary pattern (LBP) variants and evaluate its significance in recognizing posed and nonposed facial expressions. We focus on the parametric limitations of the LBP variants and investigate their effects for optimal FER. The size of the local neighborhood is an important parameter of the LBP technique for its extraction in images. To make the LBP adaptive, we exploit the granulometric information of the facial images to find the local neighborhood size for the extraction of center-symmetric LBP (CS-LBP) features. Our two-stage texture representations consist of an LBP variant and the adaptive CS-LBP features. Among the presented two-stage texture feature extractions, the binarized statistical image features and adaptive CS-LBP features were found to give high FER rates. Evaluation shows that the adaptive texture features perform competitively with the nonadaptive features and outperform other state-of-the-art approaches.
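
    A minimal sketch of center-symmetric LBP (CS-LBP) with an adjustable neighborhood radius; in the paper the neighborhood size is chosen adaptively from granulometric information of the image, which this sketch approximates with a plain radius parameter.

```python
# CS-LBP compares the four center-symmetric pixel pairs at radius r around
# each position, giving a compact 4-bit (0..15) code per pixel.
import numpy as np

def cs_lbp_codes(img, r=1, t=0.01):
    img = img.astype(float)
    h, w = img.shape[0] - 2 * r, img.shape[1] - 2 * r   # valid interior
    pairs = [((-r, -r), (r, r)), ((-r, 0), (r, 0)),
             ((-r, r), (r, -r)), ((0, r), (0, -r))]
    code = np.zeros((h, w), dtype=np.uint8)
    for bit, ((dy1, dx1), (dy2, dx2)) in enumerate(pairs):
        p1 = img[r + dy1: r + dy1 + h, r + dx1: r + dx1 + w]
        p2 = img[r + dy2: r + dy2 + h, r + dx2: r + dx2 + w]
        code |= (p1 - p2 > t).astype(np.uint8) << bit   # thresholded contrast
    return code   # histogrammed over facial regions to form the descriptor
```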

  4. Emotion perception across cultures: the role of cognitive mechanisms

    PubMed Central

    Engelmann, Jan B.; Pogosyan, Marianna

    2012-01-01

    Despite consistently documented cultural differences in the perception of facial expressions of emotion, the role of culture in shaping cognitive mechanisms that are central to emotion perception has received relatively little attention in past research. We review recent developments in cross-cultural psychology that provide particular insights into the modulatory role of culture on cognitive mechanisms involved in interpretations of facial expressions of emotion through two distinct routes: display rules and cognitive styles. Investigations of emotion intensity perception have demonstrated that facial expressions with varying levels of intensity of positive affect are perceived and categorized differently across cultures. Specifically, recent findings indicating significant levels of differentiation between intensity levels of facial expressions among American participants, as well as deviations from clear categorization of high and low intensity expressions among Japanese and Russian participants, suggest that display rules shape mental representations of emotions, such as intensity levels of emotion prototypes. Furthermore, a series of recent studies using eye tracking as a proxy for overt attention during face perception have identified culture-specific cognitive styles, such as the propensity to attend to very specific features of the face. Together, these results suggest a cascade of cultural influences on cognitive mechanisms involved in interpretations of facial expressions of emotion, whereby cultures impart specific behavioral practices that shape the way individuals process information from the environment. These cultural influences lead to differences in cognitive styles due to culture-specific attentional biases and emotion prototypes, which partially account for the gradient of cultural agreements and disagreements obtained in past investigations of emotion perception. PMID:23486743

  5. Emotion perception across cultures: the role of cognitive mechanisms.

    PubMed

    Engelmann, Jan B; Pogosyan, Marianna

    2013-01-01

    Despite consistently documented cultural differences in the perception of facial expressions of emotion, the role of culture in shaping cognitive mechanisms that are central to emotion perception has received relatively little attention in past research. We review recent developments in cross-cultural psychology that provide particular insights into the modulatory role of culture on cognitive mechanisms involved in interpretations of facial expressions of emotion through two distinct routes: display rules and cognitive styles. Investigations of emotion intensity perception have demonstrated that facial expressions with varying levels of intensity of positive affect are perceived and categorized differently across cultures. Specifically, recent findings indicating significant levels of differentiation between intensity levels of facial expressions among American participants, as well as deviations from clear categorization of high and low intensity expressions among Japanese and Russian participants, suggest that display rules shape mental representations of emotions, such as intensity levels of emotion prototypes. Furthermore, a series of recent studies using eye tracking as a proxy for overt attention during face perception have identified culture-specific cognitive styles, such as the propensity to attend to very specific features of the face. Together, these results suggest a cascade of cultural influences on cognitive mechanisms involved in interpretations of facial expressions of emotion, whereby cultures impart specific behavioral practices that shape the way individuals process information from the environment. These cultural influences lead to differences in cognitive styles due to culture-specific attentional biases and emotion prototypes, which partially account for the gradient of cultural agreements and disagreements obtained in past investigations of emotion perception.

  6. The shape of facial features and the spacing among them generate similar inversion effects: a reply to Rossion (2008).

    PubMed

    Yovel, Galit

    2009-11-01

    It is often argued that picture-plane face inversion impairs discrimination of the spacing among face features to a greater extent than the identity of the facial features. However, several recent studies have reported similar inversion effects for both types of face manipulations. In a recent review, Rossion (2008) claimed that similar inversion effects for spacing and features are due to methodological and conceptual shortcomings and that data still support the idea that inversion impairs the discrimination of features less than that of the spacing among them. Here I will claim that when facial features differ primarily in shape, the effect of inversion on features is not smaller than the one on spacing. It is when color/contrast information is added to facial features that the inversion effect on features decreases. This obvious observation accounts for the discrepancy in the literature and suggests that the large inversion effect that was found for features that differ in shape is not a methodological artifact. These findings together with other data that are discussed are consistent with the idea that the shape of facial features and the spacing among them are integrated rather than dissociated in the holistic representation of faces.

  7. Automatic facial animation parameters extraction in MPEG-4 visual communication

    NASA Astrophysics Data System (ADS)

    Yang, Chenggen; Gong, Wanwei; Yu, Lu

    2002-01-01

    Facial Animation Parameters (FAPs) are defined in MPEG-4 to animate a facial object. The algorithm proposed in this paper to extract these FAPs is applied to very low bit-rate video communication, in which the scene is composed of a head-and-shoulders object with a complex background. This paper presents an algorithm that automatically extracts all FAPs needed to animate a generic facial model and estimates the 3D motion of the head from corresponding points. The proposed algorithm extracts the human facial region by color segmentation and intra-frame and inter-frame edge detection. Facial structure and the edge distribution of facial features, such as vertical and horizontal gradient histograms, are used to locate the facial feature regions. Parabola and circle deformable templates are employed to fit the facial features and extract part of the FAPs. A special data structure is proposed to describe the deformable templates, reducing the time consumed in computing the energy functions. The remaining FAPs, the 3D rigid head motion vectors, are estimated by a corresponding-points method. A 3D head wire-frame model provides facial semantic information for the selection of proper corresponding points, which helps to increase the accuracy of the 3D rigid object motion estimation.
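
    A hedged sketch of one step described above, locating candidate facial-feature rows and columns from vertical and horizontal gradient histograms inside a previously segmented face region; it is an illustration written from the abstract, not the paper's code.

```python
# Row-wise and column-wise gradient energy inside a segmented face region;
# eyes, brows, and mouth produce strong peaks in these histograms.
import numpy as np

def gradient_histograms(face_gray):
    gy, gx = np.gradient(face_gray.astype(float))
    magnitude = np.hypot(gx, gy)
    return magnitude.sum(axis=1), magnitude.sum(axis=0)  # vertical, horizontal

def candidate_feature_rows(face_gray, k=3):
    v_hist, _ = gradient_histograms(face_gray)
    return np.sort(np.argsort(v_hist)[-k:])   # k strongest rows, top to bottom
```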

  8. How do typically developing deaf children and deaf children with autism spectrum disorder use the face when comprehending emotional facial expressions in British sign language?

    PubMed

    Denmark, Tanya; Atkinson, Joanna; Campbell, Ruth; Swettenham, John

    2014-10-01

    Facial expressions in sign language carry a variety of communicative features. While emotion can modulate a spoken utterance through changes in intonation, duration and intensity, in sign language specific facial expressions presented concurrently with a manual sign perform this function. When deaf adult signers cannot see facial features, their ability to judge emotion in a signed utterance is impaired (Reilly et al. in Sign Lang Stud 75:113-118, 1992). We examined the role of the face in the comprehension of emotion in sign language in a group of typically developing (TD) deaf children and in a group of deaf children with autism spectrum disorder (ASD). We replicated Reilly et al.'s (Sign Lang Stud 75:113-118, 1992) adult results in the TD deaf signing children, confirming the importance of the face in understanding emotion in sign language. The ASD group performed more poorly on the emotion recognition task than the TD children. The deaf children with ASD showed a deficit in emotion recognition during sign language processing analogous to the deficit in vocal emotion recognition that has been observed in hearing children with ASD.

  9. Automated diagnosis of fetal alcohol syndrome using 3D facial image analysis

    PubMed Central

    Fang, Shiaofen; McLaughlin, Jason; Fang, Jiandong; Huang, Jeffrey; Autti-Rämö, Ilona; Fagerlund, Åse; Jacobson, Sandra W.; Robinson, Luther K.; Hoyme, H. Eugene; Mattson, Sarah N.; Riley, Edward; Zhou, Feng; Ward, Richard; Moore, Elizabeth S.; Foroud, Tatiana

    2012-01-01

    Objectives Use three-dimensional (3D) facial laser-scanned images from children with fetal alcohol syndrome (FAS) and controls to develop an automated diagnosis technique that can reliably and accurately identify individuals prenatally exposed to alcohol. Methods A detailed dysmorphology evaluation, history of prenatal alcohol exposure, and 3D facial laser scans were obtained from 149 individuals (86 FAS; 63 control) recruited from two study sites (Cape Town, South Africa and Helsinki, Finland). Computer graphics, machine learning, and pattern recognition techniques were used to automatically identify a set of facial features that best discriminated individuals with FAS from controls in each sample. Results An automated feature detection and analysis technique was developed and applied to the two study populations. A unique set of facial regions and features was identified for each population that accurately discriminated FAS and control faces without any human intervention. Conclusion Our results demonstrate that computer algorithms can be used to automatically detect facial features that can discriminate FAS and control faces. PMID:18713153

  10. Facial Asymmetry-Based Age Group Estimation: Role in Recognizing Age-Separated Face Images.

    PubMed

    Sajid, Muhammad; Taj, Imtiaz Ahmad; Bajwa, Usama Ijaz; Ratyal, Naeem Iqbal

    2018-04-23

    Face recognition aims to establish the identity of a person based on facial characteristics, whereas age group estimation is the automatic calculation of an individual's age range based on facial features. Recognizing age-separated face images is still a challenging research problem due to complex aging processes involving different types of facial tissues, skin, fat, muscles, and bones. Certain holistic and local facial features are used to recognize age-separated face images; however, most existing methods recognize face images without incorporating the knowledge learned from age group estimation. In this paper, we propose an age-assisted face recognition approach to handle aging variations. Inspired by the observation that facial asymmetry is an age-dependent intrinsic facial feature, we first use asymmetric facial dimensions to estimate the age group of a given face image. Deeply learned asymmetric facial features are then extracted for face recognition using a deep convolutional neural network (dCNN). Finally, we integrate the knowledge learned from age group estimation into the face recognition algorithm using the same dCNN, which results in a significant improvement in overall performance compared to using the face recognition algorithm alone. The experimental results on two large facial aging datasets, MORPH and FERET, show that the proposed age group estimation-based face recognition approach yields superior performance compared to some existing state-of-the-art methods. © 2018 American Academy of Forensic Sciences.
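
    A minimal sketch of how asymmetric facial dimensions might be computed from 2D landmarks is shown below; the landmark indices, pairings, and midline are hypothetical stand-ins, since the paper defines its own asymmetric dimensions and then learns deep features on top of them.

    ```python
    # Illustrative left/right asymmetry features from mirrored landmark
    # pairs. Indices follow no particular annotation scheme; they are
    # placeholders for whichever landmark set is available.
    import numpy as np

    def asymmetry_features(pts, pairs, midline_x):
        """pts: (N, 2) landmarks; pairs: (left_idx, right_idx) mirrored
        pairs; midline_x: x-coordinate of the facial midline axis.
        Returns |d_left - d_right| per pair, where d is the horizontal
        distance of a landmark from the midline."""
        feats = []
        for li, ri in pairs:
            d_left = abs(pts[li, 0] - midline_x)
            d_right = abs(pts[ri, 0] - midline_x)
            feats.append(abs(d_left - d_right))
        return np.array(feats)

    pts = np.random.rand(68, 2) * 100            # stand-in landmark array
    pairs = [(36, 45), (39, 42), (48, 54)]       # e.g. eye and mouth corners
    print(asymmetry_features(pts, pairs, midline_x=50.0))
    ```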

  11. Quantitative Anthropometric Measures of Facial Appearance of Healthy Hispanic/Latino White Children: Establishing Reference Data for Care of Cleft Lip With or Without Cleft Palate

    NASA Astrophysics Data System (ADS)

    Lee, Juhun; Ku, Brian; Combs, Patrick D.; Da Silveira, Adriana. C.; Markey, Mia K.

    2017-06-01

    Cleft lip with or without cleft palate (CL ± P) is one of the most common congenital facial deformities worldwide. To minimize the negative social consequences of CL ± P, reconstructive surgery is conducted to give the face a more typical appearance. Each racial/ethnic group requires its own facial norm data, yet no facial norm data exist for Hispanic/Latino White children. The objective of this paper is to identify measures of facial appearance relevant to planning reconstructive surgery for CL ± P in Hispanic/Latino White children. Quantitative analysis was conducted on 3D facial images of 82 (41 girls, 41 boys) healthy Hispanic/Latino White children aged 7 to 12 years. Twenty-eight facial anthropometric features related to CL ± P (mainly in the nasal and mouth area) were measured from the 3D facial images. In addition, facial aesthetic ratings were obtained from 16 non-clinical observers for the same 3D facial images using a 7-point Likert scale. Pearson correlation analysis was conducted to find features that correlated with the observers' panel ratings. Boys with a longer face and nose or thicker upper and lower lips, and girls with a less curved middle face contour, were considered more attractive. The facial landmarks associated with these features are primary focus areas for reconstructive surgery for CL ± P. This study identified anthropometric measures of facial features of Hispanic/Latino White children that are pertinent to CL ± P and that correlate with panel attractiveness ratings.

  12. Human facial neural activities and gesture recognition for machine-interfacing applications.

    PubMed

    Hamedi, M; Salleh, Sh-Hussain; Tan, T S; Ismail, K; Ali, J; Dee-Uam, C; Pavaganun, C; Yupapin, P P

    2011-01-01

    The authors present a new method of recognizing different human facial gestures through their neural activities and muscle movements, which can be used in machine-interfacing applications. Human-machine interface (HMI) technology utilizes human neural activities as input controllers for the machine. Recently, much work has been done on the specific application of facial electromyography (EMG)-based HMI, which has used limited and fixed numbers of facial gestures. In this work, a multipurpose interface is suggested that can support 2-11 control commands applicable to various HMI systems. The significance of this work is in finding the most accurate facial gestures for any application with a maximum of eleven control commands. Eleven facial gesture EMGs were recorded from ten volunteers. The detected EMGs were passed through a band-pass filter and root mean square features were extracted. Various combinations of gestures, with a different number of gestures in each group, were formed from the available facial gestures. Finally, all combinations were trained and classified by a fuzzy c-means classifier, and the combinations with the highest recognition accuracy in each group were chosen. An average accuracy above 90% for the chosen combinations demonstrates their suitability as command controllers.
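
    The preprocessing chain described, band-pass filtering followed by root-mean-square features, can be sketched in a few lines; the cut-off frequencies, sampling rate, and window length below are illustrative assumptions, and the fuzzy c-means classification stage is not shown.

    ```python
    # Band-pass filtering and windowed RMS feature extraction for an EMG
    # channel. All numeric settings are illustrative, not the paper's.
    import numpy as np
    from scipy.signal import butter, filtfilt

    def bandpass(x, fs, lo=20.0, hi=450.0, order=4):
        b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        return filtfilt(b, a, x)

    def rms_features(x, win):
        """RMS over consecutive non-overlapping windows of length win."""
        n = len(x) // win
        frames = x[: n * win].reshape(n, win)
        return np.sqrt((frames ** 2).mean(axis=1))

    fs = 1000.0                              # assumed sampling rate (Hz)
    emg = np.random.randn(int(5 * fs))       # stand-in for a recorded gesture
    feats = rms_features(bandpass(emg, fs), win=200)
    ```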

  13. Modeling first impressions from highly variable facial images.

    PubMed

    Vernon, Richard J W; Sutherland, Clare A M; Young, Andrew W; Hartley, Tom

    2014-08-12

    First impressions of social traits, such as trustworthiness or dominance, are reliably perceived in faces, and despite their questionable validity they can have considerable real-world consequences. We sought to uncover the information driving such judgments, using an attribute-based approach. Attributes (physical facial features) were objectively measured from feature positions and colors in a database of highly variable "ambient" face photographs, and then used as input for a neural network to model factor dimensions (approachability, youthful-attractiveness, and dominance) thought to underlie social attributions. A linear model based on this approach was able to account for 58% of the variance in raters' impressions of previously unseen faces, and factor-attribute correlations could be used to rank attributes by their importance to each factor. Reversing this process, neural networks were then used to predict facial attributes and corresponding image properties from specific combinations of factor scores. In this way, the factors driving social trait impressions could be visualized as a series of computer-generated cartoon face-like images, depicting how attributes change along each dimension. This study shows that despite enormous variation in ambient images of faces, a substantial proportion of the variance in first impressions can be accounted for through linear changes in objectively defined features.

  14. Research on facial expression simulation based on depth image

    NASA Astrophysics Data System (ADS)

    Ding, Sha-sha; Duan, Jin; Zhao, Yi-wu; Xiao, Bo; Wang, Hao

    2017-11-01

    Facial expression simulation is now widely used in film and television special effects, human-computer interaction, and many other fields. Facial expressions are captured with a Kinect camera, and an AAM algorithm based on statistical information is employed to detect and track faces. A 2D regression algorithm is applied to align the feature points; facial feature points are detected automatically, while the feature points of the 3D cartoon model are marked manually. The aligned feature points are mapped using keyframe techniques. To improve the animation, non-feature points are interpolated based on empirical models, and the mapping and interpolation are carried out under the constraint of Bézier curves. The feature points on the cartoon face model can thus be driven as the facial expression varies, achieving real-time cartoon facial expression simulation. The experimental results show that the proposed method simulates facial expressions accurately. Finally, our method is compared with a previous method; the measured data show that our method greatly improves implementation efficiency.

  15. Facial Emotions Recognition using Gabor Transform and Facial Animation Parameters with Neural Networks

    NASA Astrophysics Data System (ADS)

    Harit, Aditya; Joshi, J. C., Col; Gupta, K. K.

    2018-03-01

    This paper proposes an automatic facial emotion recognition algorithm comprising two main components: feature extraction and expression recognition. The algorithm applies a Gabor filter bank at fiducial points to extract facial expression features; the resulting magnitudes of the Gabor transforms, along with 14 chosen FAPs (Facial Animation Parameters), compose the feature space. There are two stages: a training phase and a recognition phase. In the training stage, the system classifies all training expressions into six classes, one for each of the six emotions considered. In the recognition phase, it applies the Gabor bank to a face image, locates the fiducial points, and feeds the resulting features to the trained neural architecture to recognize the emotion.
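
    A compact sketch of the feature-extraction stage is given below: a small Gabor filter bank is applied to the image and the response magnitudes are sampled at fiducial points (the 14 FAPs would simply be appended to this vector). The filter parameters, number of orientations, and fiducial coordinates are assumptions for illustration.

    ```python
    # Gabor-magnitude features at fiducial points, OpenCV flavor.
    import cv2
    import numpy as np

    def gabor_bank(ksize=21, sigma=4.0, lambd=10.0):
        """Eight-orientation Gabor kernel bank (parameters illustrative)."""
        return [cv2.getGaborKernel((ksize, ksize), sigma, theta, lambd, 0.5, 0)
                for theta in np.arange(0, np.pi, np.pi / 8)]

    def gabor_features(gray, points, kernels):
        """Magnitude of each Gabor response sampled at each fiducial point."""
        responses = [cv2.filter2D(gray.astype(np.float32), cv2.CV_32F, k)
                     for k in kernels]
        return np.array([[abs(r[y, x]) for r in responses] for (x, y) in points])

    gray = np.random.randint(0, 256, (256, 256), dtype=np.uint8)  # stand-in face
    points = [(120, 140), (180, 140), (150, 200)]                 # stand-in fiducials
    feature_vector = gabor_features(gray, points, gabor_bank()).ravel()
    ```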

  16. Tensor Rank Preserving Discriminant Analysis for Facial Recognition.

    PubMed

    Tao, Dapeng; Guo, Yanan; Li, Yaotang; Gao, Xinbo

    2017-10-12

    Facial recognition, one of the basic topics in computer vision and pattern recognition, has received substantial attention in recent years. However, in traditional facial recognition algorithms the facial images are reshaped into a long vector, thereby losing part of the original spatial constraints of each pixel. In this paper, a new tensor-based feature extraction algorithm termed tensor rank preserving discriminant analysis (TRPDA) for facial image recognition is proposed. The method involves two stages: in the first stage, the low-dimensional tensor subspace of the original input tensor samples is obtained; in the second stage, discriminative locality alignment is utilized to obtain the ultimate vector feature representation for subsequent facial recognition. On the one hand, the proposed TRPDA algorithm fully utilizes the natural structure of the input samples and applies an optimization criterion that can directly handle the tensor spectral analysis problem, thereby decreasing the computation cost compared with traditional tensor-based feature selection algorithms. On the other hand, the proposed TRPDA algorithm extracts features by finding a tensor subspace that preserves most of the rank-order information of the intra-class input samples. Experiments on three facial databases are performed to determine the effectiveness of the proposed TRPDA algorithm.

  17. Contributions of feature shapes and surface cues to the recognition of facial expressions.

    PubMed

    Sormaz, Mladen; Young, Andrew W; Andrews, Timothy J

    2016-10-01

    Theoretical accounts of face processing often emphasise feature shapes as the primary visual cue to the recognition of facial expressions. However, changes in facial expression also affect the surface properties of the face. In this study, we investigated whether this surface information can also be used in the recognition of facial expression. First, participants identified facial expressions (fear, anger, disgust, sadness, happiness) from images that were manipulated such that they varied mainly in shape or mainly in surface properties. We found that the categorization of facial expression is possible in either type of image, but that different expressions are relatively dependent on surface or shape properties. Next, we investigated the relative contributions of shape and surface information to the categorization of facial expressions. This employed a complementary method that involved combining the surface properties of one expression with the shape properties from a different expression. Our results showed that the categorization of facial expressions in these hybrid images was equally dependent on the surface and shape properties of the image. Together, these findings provide a direct demonstration that both feature shape and surface information make significant contributions to the recognition of facial expressions. Copyright © 2016 Elsevier Ltd. All rights reserved.

  18. Culture shapes 7-month-olds' perceptual strategies in discriminating facial expressions of emotion.

    PubMed

    Geangu, Elena; Ichikawa, Hiroko; Lao, Junpeng; Kanazawa, So; Yamaguchi, Masami K; Caldara, Roberto; Turati, Chiara

    2016-07-25

    Emotional facial expressions are thought to have evolved because they play a crucial role in species' survival. From infancy, humans develop dedicated neural circuits [1] to exhibit and recognize a variety of facial expressions [2]. But there is increasing evidence that culture specifies when and how certain emotions can be expressed - social norms - and that the mature perceptual mechanisms used to transmit and decode the visual information from emotional signals differ between Western and Eastern adults [3-5]. Specifically, the mouth is more informative for transmitting emotional signals in Westerners and the eye region for Easterners [4], generating culture-specific fixation biases towards these features [5]. During development, it is recognized that cultural differences can be observed at the level of emotional reactivity and regulation [6], and to the culturally dominant modes of attention [7]. Nonetheless, to our knowledge no study has explored whether culture shapes the processing of facial emotional signals early in development. The data we report here show that, by 7 months, infants from both cultures visually discriminate facial expressions of emotion by relying on culturally distinct fixation strategies, resembling those used by the adults from the environment in which they develop [5]. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. Performance of a Working Face Recognition Machine using Cortical Thought Theory

    DTIC Science & Technology

    1984-12-04

    …been considered (2). Recommendations from Bledsoe's study included research on facial-recognition systems that are "completely automatic (remove the… …C. L. Location of some facial features. Palo Alto: Panoramic Research, Aug 1966. 2. Bledsoe, W. W. Man-machine facial recognition: Is… …image?" It would seem that the location and size of the features left in this contrast-expanded image contain the essential information of facial…

  20. Learning representative features for facial images based on a modified principal component analysis

    NASA Astrophysics Data System (ADS)

    Averkin, Anton; Potapov, Alexey

    2013-05-01

    The paper is devoted to facial image analysis and particularly deals with the problem of automatically evaluating the attractiveness of human faces. We propose a new approach for the automatic construction of a feature space based on a modified principal component analysis. The input data for the algorithm are learning sets of facial images rated by a single person. The proposed approach makes it possible to extract features of that individual's subjective perception of facial beauty and to predict attractiveness values for new facial images not included in the learning set. The Pearson correlation coefficient between the values predicted by our method for new facial images and the personal attractiveness estimates equals 0.89. This indicates that the proposed approach is promising and can be used for predicting subjective face attractiveness values in real facial image analysis systems.
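
    The general idea, though not the paper's specific modification of PCA, can be sketched with standard components: project flattened face images onto principal components and regress a single rater's scores on the coefficients. The data below are random stand-ins.

    ```python
    # PCA feature space plus a linear read-out of one rater's scores.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import Ridge
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(0)
    X = rng.random((60, 64 * 64))            # stand-in: 60 flattened face images
    y = rng.random(60) * 10                  # stand-in: one rater's scores

    model = make_pipeline(PCA(n_components=20), Ridge(alpha=1.0))
    model.fit(X[:50], y[:50])
    predicted = model.predict(X[50:])        # attractiveness of unseen faces
    ```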

  1. Does skull shape mediate the relationship between objective features and subjective impressions about the face?

    PubMed

    Marečková, Klára; Chakravarty, M Mallar; Huang, Mei; Lawrence, Claire; Leonard, Gabriel; Perron, Michel; Pike, Bruce G; Richer, Louis; Veillette, Suzanne; Pausova, Zdenka; Paus, Tomáš

    2013-10-01

    In our previous work, we described facial features associated with a successful recognition of the sex of the face (Marečková et al., 2011). These features were based on landmarks placed on the surface of faces reconstructed from magnetic resonance (MR) images; their position was therefore influenced by both the soft tissue (fat and muscle) and the bone structure of the skull. Here, we ask whether bone structure has dissociable influences on observers' identification of the sex of the face. To answer this question, we used a novel method of studying skull morphology using MR images and explored the relationship between skull features, facial features, and sex recognition in a large sample of adolescents (n=876; including 475 adolescents from our original report). To determine whether skull features mediate the relationship between facial features and identification accuracy, we performed mediation analysis using bootstrapping. In males, skull features fully mediated the relationship between facial features and sex judgments. In females, the skull mediated this relationship only after adjusting facial features for the amount of body fat (estimated with bioimpedance). While body fat had a very slight positive influence on correct sex judgments about male faces, there was a robust negative influence of body fat on correct sex judgments about female faces. Overall, these results suggest that craniofacial bone structure is essential for correct sex judgments about a male face. In females, body fat negatively influences the accuracy of sex judgments, and craniofacial bone structure alone cannot explain the relationship between facial features and identification of a face as female. Copyright © 2013 Elsevier Inc. All rights reserved.
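
    For readers unfamiliar with mediation analysis using bootstrapping, a bare-bones version of the logic is sketched below: estimate the indirect effect (facial feature to skull feature to sex judgment) as the product of the two regression paths and bootstrap a percentile confidence interval for it. Variable names and data are generic stand-ins, not the study's measures or its specific mediation framework.

    ```python
    # Minimal bootstrap test of an indirect (mediated) effect a*b.
    import numpy as np

    def indirect_effect(x, m, y):
        a = np.polyfit(x, m, 1)[0]                        # path x -> mediator
        design = np.column_stack([m, x, np.ones_like(x)])
        b = np.linalg.lstsq(design, y, rcond=None)[0][0]  # mediator -> y, x controlled
        return a * b

    def bootstrap_ci(x, m, y, n_boot=5000, seed=0):
        rng = np.random.default_rng(seed)
        n, stats = len(x), []
        for _ in range(n_boot):
            idx = rng.integers(0, n, n)                   # resample with replacement
            stats.append(indirect_effect(x[idx], m[idx], y[idx]))
        return np.percentile(stats, [2.5, 97.5])          # 95% percentile CI

    rng = np.random.default_rng(1)
    x = rng.normal(size=200)
    m = 0.5 * x + rng.normal(size=200)                    # mediator driven by x
    y = 0.4 * m + rng.normal(size=200)                    # outcome driven by mediator
    print(bootstrap_ci(x, m, y))                          # CI excluding 0 => mediation
    ```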

  2. Improved facial affect recognition in schizophrenia following an emotion intervention, but not training attention-to-facial-features or treatment-as-usual.

    PubMed

    Tsotsi, Stella; Kosmidis, Mary H; Bozikas, Vasilis P

    2017-08-01

    In schizophrenia, impaired facial affect recognition (FAR) has been associated with patients' overall social functioning. Interventions targeting attention or FAR per se have invariably yielded improved FAR performance in these patients. Here, we compared the effects of two interventions, one targeting FAR and one targeting attention-to-facial-features, with treatment-as-usual on patients' FAR performance. Thirty-nine outpatients with schizophrenia were randomly assigned to one of three groups: FAR intervention (training to recognize emotional information, conveyed by changes in facial features), attention-to-facial-features intervention (training to detect changes in facial features), and treatment-as-usual. Also, 24 healthy controls, matched for age and education, were assigned to one of the two interventions. Two FAR measurements, baseline and post-intervention, were conducted using an original experimental procedure with alternative sets of stimuli. We found improved FAR performance following the intervention targeting FAR in comparison to the other patient groups, which in fact was comparable to the pre-intervention performance of healthy controls in the corresponding intervention group. This improvement was more pronounced in recognizing fear. Our findings suggest that compared to interventions targeting attention, and treatment-as-usual, training programs targeting FAR can be more effective in improving FAR in patients with schizophrenia, particularly assisting them in perceiving threat-related information more accurately. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.

  3. Effects of face feature and contour crowding in facial expression adaptation.

    PubMed

    Liu, Pan; Montaser-Kouhsari, Leila; Xu, Hong

    2014-12-01

    Prolonged exposure to a visual stimulus, such as a happy face, biases the perception of a subsequently presented neutral face toward sadness, the phenomenon known as face adaptation. Face adaptation is affected by the visibility or awareness of the adapting face; however, whether it is affected by the discriminability of the adapting face is largely unknown. In the current study, we used crowding to manipulate the discriminability of the adapting face and tested its effect on face adaptation. Instead of presenting flanking faces near the target face, we shortened the distance between facial features (internal feature crowding) and reduced the size of the face contour (external contour crowding) to introduce crowding. In our first experiment, we asked whether internal feature crowding or external contour crowding is more effective in inducing a crowding effect, and found that combining internal feature and external contour crowding, but neither of them alone, induced a significant crowding effect. In Experiment 2, we further investigated its effect on adaptation and found that both internal feature crowding and external contour crowding significantly reduced the facial expression aftereffect (FEA). However, we did not find a significant correlation between the discriminability of the adapting face and its FEA. Interestingly, we did find a significant correlation between the discriminabilities of the adapting and test faces. Experiment 3 showed that the reduced adaptation aftereffect under combined crowding cannot be decomposed linearly into the effects of the face contour and the facial features, suggesting a nonlinear integration between facial features and face contour in face adaptation.

  4. Rules versus Prototype Matching: Strategies of Perception of Emotional Facial Expressions in the Autism Spectrum

    ERIC Educational Resources Information Center

    Rutherford, M. D.; McIntosh, Daniel N.

    2007-01-01

    When perceiving emotional facial expressions, people with autistic spectrum disorders (ASD) appear to focus on individual facial features rather than configurations. This paper tests whether individuals with ASD use these features in a rule-based strategy of emotional perception, rather than a typical, template-based strategy by considering…

  5. Quantitative analysis of facial paralysis using local binary patterns in biomedical videos.

    PubMed

    He, Shu; Soraghan, John J; O'Reilly, Brian F; Xing, Dongshan

    2009-07-01

    Facial paralysis is the loss of voluntary muscle movement of one side of the face. A quantitative, objective, and reliable assessment system would be an invaluable tool for clinicians treating patients with this condition. This paper presents a novel framework for objective measurement of facial paralysis. The motion information in the horizontal and vertical directions and the appearance features on the apex frames are extracted based on the local binary patterns (LBPs) on the temporal-spatial domain in each facial region. These features are temporally and spatially enhanced by the application of novel block processing schemes. A multiresolution extension of uniform LBP is proposed to efficiently combine the micropatterns and large-scale patterns into a feature vector. The symmetry of facial movements is measured by the resistor-average distance (RAD) between LBP features extracted from the two sides of the face. Support vector machine is applied to provide quantitative evaluation of facial paralysis based on the House-Brackmann (H-B) scale. The proposed method is validated by experiments with 197 subject videos, which demonstrates its accuracy and efficiency.
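
    A stripped-down sketch of the symmetry measure is given below: uniform LBP histograms are computed for corresponding left/right facial regions and compared with the resistor-average distance, the "parallel resistor" combination of the two KL divergences. Region extraction, the temporal-spatial blocks, and the multiresolution extension are omitted, and the inputs here are random stand-ins.

    ```python
    # Uniform-LBP histograms compared by the resistor-average distance (RAD).
    import numpy as np
    from skimage.feature import local_binary_pattern

    def lbp_hist(region, P=8, R=1.0):
        lbp = local_binary_pattern(region, P, R, method="uniform")
        hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
        return hist + 1e-10                    # avoid zeros in the KL terms

    def kl(p, q):
        return float(np.sum(p * np.log(p / q)))

    def rad(p, q):
        d1, d2 = kl(p, q), kl(q, p)
        return d1 * d2 / (d1 + d2)             # parallel-resistor combination

    left = np.random.randint(0, 256, (64, 64)).astype(np.uint8)   # stand-in regions
    right = np.random.randint(0, 256, (64, 64)).astype(np.uint8)
    asymmetry = rad(lbp_hist(left), lbp_hist(np.fliplr(right)))   # mirror right side
    ```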

  6. Novel method to predict body weight in children based on age and morphological facial features.

    PubMed

    Huang, Ziyin; Barrett, Jeffrey S; Barrett, Kyle; Barrett, Ryan; Ng, Chee M

    2015-04-01

    A new approach to predicting the body weight of children based on age and morphological facial features, using a three-layer feed-forward artificial neural network (ANN) model, is reported. The model takes four input parameters: the age-based CDC-inferred median body weight and three facial feature distances measured from digital facial images. In this study, thirty-nine volunteer subjects, aged 6-18 years with body weights of 18.6-96.4 kg, were used for model development and validation. The final model has a mean prediction error of 0.48, a mean squared error of 18.43, and a coefficient of correlation of 0.94, showing a significant improvement in prediction accuracy over several age-based body weight prediction methods. Combined with a facial recognition algorithm that can detect, extract, and measure the facial features used in this study, mobile applications incorporating this body weight prediction method could be developed for clinical investigations where access to scales is limited. © 2014, The American College of Clinical Pharmacology.
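
    The overall shape of such a model, four inputs mapped through one hidden layer to a weight estimate, can be sketched with a standard multilayer perceptron; the architecture, data, and scaling below are stand-ins rather than the published model.

    ```python
    # Small feed-forward regressor: [CDC median weight, d1, d2, d3] -> weight (kg).
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    X = rng.random((39, 4))                          # stand-in scaled inputs
    y = 18.6 + rng.random(39) * (96.4 - 18.6)        # stand-in weights in kg

    net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
    net.fit(X, y)
    print(net.predict(X[:3]))                        # predicted body weights
    ```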

  7. Contributions of individual face features to face discrimination.

    PubMed

    Logan, Andrew J; Gordon, Gael E; Loffler, Gunter

    2017-08-01

    Faces are highly complex stimuli that contain a host of information. Such complexity poses the following questions: (a) do observers exhibit preferences for specific information? (b) how does sensitivity to individual face parts compare? These questions were addressed by quantifying sensitivity to different face features. Discrimination thresholds were determined for synthetic faces under the following conditions: (i) 'full face': all face features visible; (ii) 'isolated feature': single feature presented in isolation; (iii) 'embedded feature': all features visible, but only one feature modified. Mean threshold elevations for isolated features, relative to full-faces, were 0.84x, 1.08, 2.12, 3.34, 4.07 and 4.47 for head-shape, hairline, nose, mouth, eyes and eyebrows respectively. Hence, when two full faces can be discriminated at threshold, the difference between the eyes is about four times less than what is required when discriminating between isolated eyes. In all cases, sensitivity was higher when features were presented in isolation than when they were embedded within a face context (threshold elevations of 0.94x, 1.74, 2.67, 2.90, 5.94 and 9.94). This reveals a specific pattern of sensitivity to face information. Observers are between two and four times more sensitive to external than internal features. The pattern for internal features (higher sensitivity for the nose, compared to mouth, eyes and eyebrows) is consistent with lower sensitivity for those parts affected by facial dynamics (e.g. facial expressions). That isolated features are easier to discriminate than embedded features supports a holistic face processing mechanism which impedes extraction of information about individual features from full faces. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. Facial expression identification using 3D geometric features from Microsoft Kinect device

    NASA Astrophysics Data System (ADS)

    Han, Dongxu; Al Jawad, Naseer; Du, Hongbo

    2016-05-01

    Facial expression identification is an important part of face recognition and is closely related to emotion detection from face images. Various solutions have been proposed in the past using different types of cameras and features. The Microsoft Kinect device has been widely used for multimedia interactions and, more recently, has been increasingly deployed in support of scientific investigations. This paper explores the effectiveness of using the device to identify emotional facial expressions such as surprise, smile, and sadness, and evaluates the usefulness of the 3D data points on a face mesh structure obtained from the Kinect device. We present a distance-based geometric feature component derived from the distances between points on the face mesh and selected reference points in a single frame. The feature components extracted across a sequence of frames starting and ending with a neutral expression represent a whole expression. The feature vector eliminates the need for complex face orientation correction, simplifying the feature extraction process and making it more efficient. We applied a kNN classifier that exploits a feature-component-based similarity measure following the principle of dynamic time warping to determine the closest neighbors. Preliminary tests on a small-scale database of different facial expressions show the promise of the newly developed features and the usefulness of the Kinect device in facial expression identification.
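
    Since the classifier compares whole sequences of per-frame feature vectors, a dynamic-time-warping distance is the core ingredient; a compact version is sketched below. The mesh-based feature extraction itself is not shown, and the sequences here are random stand-ins.

    ```python
    # Dynamic time warping distance between two feature sequences.
    import numpy as np

    def dtw(seq_a, seq_b):
        """seq_a: (n, d) and seq_b: (m, d) sequences of frame features."""
        n, m = len(seq_a), len(seq_b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m]

    a = np.random.rand(30, 12)      # 30 frames, 12 distance features each
    b = np.random.rand(25, 12)
    print(dtw(a, b))                # smaller = more similar expressions
    ```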

  9. Biomedical visual data analysis to build an intelligent diagnostic decision support system in medical genetics.

    PubMed

    Kuru, Kaya; Niranjan, Mahesan; Tunca, Yusuf; Osvank, Erhan; Azim, Tayyaba

    2014-10-01

    In general, medical geneticists aim to pre-diagnose underlying syndromes based on facial features before performing cytological or molecular analyses where a genotype-phenotype interrelation is possible. However, determining correct genotype-phenotype interrelationships among many syndromes is tedious and labor-intensive, especially for extremely rare syndromes. Thus, a computer-aided system for pre-diagnosis can facilitate effective and efficient decision support, particularly when few similar cases are available or in remote rural districts where diagnostic knowledge of syndromes is not readily available. The proposed methodology, a visual diagnostic decision support system (visual diagnostic DSS), employs machine learning (ML) algorithms and digital image processing techniques in a hybrid approach for automated diagnosis in medical genetics. This approach uses facial features in reference images of disorders to identify visual genotype-phenotype interrelationships. Our statistical method describes facial image data as principal component features and diagnoses syndromes using these features. The proposed system was trained using a real dataset of previously published face images of subjects with syndromes, which provided accurate diagnostic information. The method was tested using a leave-one-out cross-validation scheme with 15 different syndromes, each comprising 5-9 cases (92 cases in total). An accuracy rate of 83% was achieved with this automated diagnosis technique, which was statistically significant (p<0.01); the sensitivity and specificity were 0.857 and 0.870, respectively. Our results show that accurate classification of syndromes is feasible using ML techniques. Thus, a large number of syndromes with characteristic facial anomaly patterns could be diagnosed with diagnostic DSSs similar to the visual diagnostic DSS described in the present study, demonstrating the benefits of hybrid image processing and ML-based computer-aided diagnostics for identifying facial phenotypes. Copyright © 2014. Published by Elsevier B.V.

  10. Why 8-Year-Olds Cannot Tell the Difference between Steve Martin and Paul Newman: Factors Contributing to the Slow Development of Sensitivity to the Spacing of Facial Features

    ERIC Educational Resources Information Center

    Mondloch, Catherine J.; Dobson, Kate S.; Parsons, Julie; Maurer, Daphne

    2004-01-01

    Children are nearly as sensitive as adults to some cues to facial identity (e.g., differences in the shape of internal features and the external contour), but children are much less sensitive to small differences in the spacing of facial features. To identify factors that contribute to this pattern, we compared 8-year-olds' sensitivity to spacing…

  11. Personal Identification of Deceased Persons: An Overview of the Current Methods Based on Physical Appearance.

    PubMed

    Caplova, Zuzana; Obertova, Zuzana; Gibelli, Daniele M; De Angelis, Danilo; Mazzarelli, Debora; Sforza, Chiarella; Cattaneo, Cristina

    2018-05-01

    The use of the physical appearance of the deceased has become more important because the available antemortem information for comparison may consist only of a physical description and photographs. Twenty-one articles dealing with identification based on the physiognomic features of the human body were selected for review and divided into four sections: (i) visual recognition, (ii) specific facial/body areas, (iii) biometrics, and (iv) dental superimposition. While opinions about the reliability of visual recognition differ, the search showed that it has been used in mass disasters even without testing of its objectivity and reliability. Specific facial areas are being explored for the identification of the dead, but their practical use is questionable, as is that of soft biometrics. The emerging technique of dental superimposition seems to be the only standardized and successfully applied identification method so far. More research is needed into the potential use of individualizing features, considering that postmortem changes and technical difficulties may affect identification. © 2017 American Academy of Forensic Sciences.

  12. Down syndrome detection from facial photographs using machine learning techniques

    NASA Astrophysics Data System (ADS)

    Zhao, Qian; Rosenbaum, Kenneth; Sze, Raymond; Zand, Dina; Summar, Marshall; Linguraru, Marius George

    2013-02-01

    Down syndrome is the most commonly occurring chromosomal condition; one in every 691 babies in the United States is born with it. Patients with Down syndrome have an increased risk of heart defects and respiratory and hearing problems, and early detection of the syndrome is fundamental for managing the disease. Clinically, facial appearance is an important indicator in diagnosing Down syndrome, and it paves the way for computer-aided diagnosis based on facial image analysis. In this study, we propose a novel method to detect Down syndrome using photography for computer-assisted image-based facial dysmorphology. Geometric features based on facial anatomical landmarks, local texture features based on the Contourlet transform, and local binary patterns are investigated to represent facial characteristics. A support vector machine classifier is then used to discriminate normal and abnormal cases; accuracy, precision, and recall are used to evaluate the method. The comparison among geometric, local texture, and combined features was performed using leave-one-out validation. Our method achieved 97.92% accuracy with high precision and recall for the combined features; the detection results were higher than using only geometric or texture features. These promising results indicate that our method has the potential for automated assessment of Down syndrome from simple, noninvasive imaging data.

  13. Folliculotropism in pigmented facial macules: Differential diagnosis with reflectance confocal microscopy.

    PubMed

    Persechino, Flavia; De Carvalho, Nathalie; Ciardo, Silvana; De Pace, Barbara; Casari, Alice; Chester, Johanna; Kaleci, Shaniko; Stanganelli, Ignazio; Longo, Caterina; Farnetani, Francesca; Pellacani, Giovanni

    2018-03-01

    Pigmented facial macules are common on sun-damaged skin. The diagnosis of early-stage lentigo maligna (LM) and lentigo maligna melanoma (LMM) is challenging. Reflectance confocal microscopy (RCM) has been proven to increase the diagnostic accuracy for facial lesions. A total of 154 pigmented facial macules, retrospectively collected, were evaluated for the presence of previously described RCM features and new parameters depicting aspects of the follicle. Melanocytic nests, roundish pagetoid cells, follicular infiltration, bulging from the follicles, and many bright dendrites infiltrating the hair follicle (ie, folliculotropism) were found to be indicative of LM/LMM compared with non-melanocytic skin neoplasms (NMSNs), with an overall sensitivity of 96% and specificity of 83%. Among NMSNs, solar lentigo and lichen planus-like keratosis were better distinguishable from LM/LMM because they usually lack malignant features and present characteristic diagnostic parameters, such as an epidermal cobblestone pattern and polycyclic papillary contours. In contrast, the distinction of pigmented actinic keratosis (PAK) was more difficult and required evaluation of hair follicle infiltration and bulging structures, owing to the frequent observation of a few bright dendrites in the epidermis that predominantly do not infiltrate the hair follicle (estimated specificity for PAK 53%). A detailed evaluation of the components of folliculotropism may help to improve diagnostic accuracy. The classification of the type, distribution, and amount of cells, and the presence of bulging around the follicles, seem to represent important tools for differentiating PAK from LM/LMM in RCM analysis. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  14. Facial expression recognition based on improved local ternary pattern and stacked auto-encoder

    NASA Astrophysics Data System (ADS)

    Wu, Yao; Qiu, Weigen

    2017-08-01

    To enhance the robustness of facial expression recognition, we propose a facial expression recognition method based on an improved Local Ternary Pattern (LTP) combined with a Stacked Auto-Encoder (SAE). The method extracts features with the improved LTP operator and then uses an improved deep belief network as the detector and classifier of the extracted LTP features, realizing the combination of LTP and an improved deep network for facial expression recognition. The recognition rate on the CK+ database improves significantly.
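
    For reference, the basic (unimproved) local ternary pattern on a 3x3 neighborhood is sketched below, split into its conventional upper/lower binary codes; the threshold t and this plain formulation are illustrative only, and the paper's improvements and the stacked auto-encoder stage are not reproduced.

    ```python
    # Basic local ternary pattern, split into upper/lower binary codes.
    import numpy as np

    def ltp_codes(img, t=5):
        """Return (upper, lower) LTP code images for the interior pixels."""
        img = img.astype(np.int32)
        center = img[1:-1, 1:-1]
        offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                   (1, 1), (1, 0), (1, -1), (0, -1)]
        upper = np.zeros_like(center)
        lower = np.zeros_like(center)
        for bit, (dy, dx) in enumerate(offsets):
            nb = img[1 + dy: img.shape[0] - 1 + dy,
                     1 + dx: img.shape[1] - 1 + dx]
            upper += (nb >= center + t).astype(np.int32) << bit  # ternary +1
            lower += (nb <= center - t).astype(np.int32) << bit  # ternary -1
        return upper, lower

    gray = np.random.randint(0, 256, (64, 64))       # stand-in face patch
    up, lo = ltp_codes(gray)                         # histogram these per block
    ```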

  15. Clinical and experimental evidence suggest a link between KIF7 and C5orf42-related ciliopathies through Sonic Hedgehog signaling.

    PubMed

    Asadollahi, Reza; Strauss, Justin E; Zenker, Martin; Beuing, Oliver; Edvardson, Simon; Elpeleg, Orly; Strom, Tim M; Joset, Pascal; Niedrist, Dunja; Otte, Christine; Oneda, Beatrice; Boonsawat, Paranchai; Azzarello-Burri, Silvia; Bartholdi, Deborah; Papik, Michael; Zweier, Markus; Haas, Cordula; Ekici, Arif B; Baumer, Alessandra; Boltshauser, Eugen; Steindl, Katharina; Nothnagel, Michael; Schinzel, Albert; Stoeckli, Esther T; Rauch, Anita

    2018-02-01

    Acrocallosal syndrome (ACLS) is an autosomal recessive neurodevelopmental disorder caused by KIF7 defects and belongs to the heterogeneous group of ciliopathies related to Joubert syndrome (JBTS). While ACLS is characterized by macrocephaly, prominent forehead, depressed nasal bridge, and hypertelorism, facial dysmorphism has not been emphasized in JBTS cohorts with molecular diagnosis. To evaluate the specificity and etiology of the ACLS craniofacial features, we performed whole exome or targeted Sanger sequencing in patients with the aforementioned overlapping craniofacial appearance but variable additional ciliopathy features, followed by functional studies. We found (likely) pathogenic variants of KIF7 in 5 of 9 families, including the original ACLS patients, and delineated 1000- to 4000-year-old Swiss founder alleles. Three of the remaining families had (likely) pathogenic variants in the JBTS gene C5orf42, and one patient had a novel de novo frameshift variant in SHH, known to cause autosomal dominant holoprosencephaly. In accordance with the patients' craniofacial anomalies, we showed facial midline widening after silencing of C5orf42 in chicken embryos. We further supported the link between KIF7, SHH, and C5orf42 by demonstrating abnormal primary cilia and a diminished response to an SHH agonist in fibroblasts of C5orf42-mutated patients, as well as axonal pathfinding errors in C5orf42-silenced chicken embryos similar to those observed after perturbation of Shh signaling. Our findings therefore suggest that, besides the neurodevelopmental features, macrocephaly and facial widening are likely more general signs of disturbed SHH signaling. Nevertheless, long-term follow-up revealed that C5orf42-mutated patients showed catch-up development and fading of facial features, contrary to KIF7-mutated patients.

  16. Factors contributing to the adaptation aftereffects of facial expression.

    PubMed

    Butler, Andrea; Oruc, Ipek; Fox, Christopher J; Barton, Jason J S

    2008-01-29

    Previous studies have demonstrated the existence of adaptation aftereffects for facial expressions. Here we investigated which aspects of facial stimuli contribute to these aftereffects. In Experiment 1, we examined the role of local adaptation to image elements such as curvature, shape and orientation, independent of expression, by using hybrid faces constructed from either the same or opposing expressions. While hybrid faces made with consistent expressions generated aftereffects as large as those with normal faces, there were no aftereffects from hybrid faces made from different expressions, despite the fact that these contained the same local image elements. In Experiment 2, we examined the role of facial features independent of the normal face configuration by contrasting adaptation with whole faces to adaptation with scrambled faces. We found that scrambled faces also generated significant aftereffects, indicating that expressive features without a normal facial configuration could generate expression aftereffects. In Experiment 3, we examined the role of facial configuration by using schematic faces made from line elements that in isolation do not carry expression-related information (e.g. curved segments and straight lines) but that convey an expression when arranged in a normal facial configuration. We obtained a significant aftereffect for facial configurations but not scrambled configurations of these line elements. We conclude that facial expression aftereffects are not due to local adaptation to image elements but due to high-level adaptation of neural representations that involve both facial features and facial configuration.

  17. Objective grading of facial paralysis using Local Binary Patterns in video processing.

    PubMed

    He, Shu; Soraghan, John J; O'Reilly, Brian F

    2008-01-01

    This paper presents a novel framework for objective measurement of facial paralysis in biomedical videos. The motion information in the horizontal and vertical directions and the appearance features on the apex frames are extracted based on Local Binary Patterns (LBP) in the temporal-spatial domain of each facial region. These features are temporally and spatially enhanced by the application of block schemes. A multi-resolution extension of uniform LBP is proposed to efficiently combine the micro-patterns and large-scale patterns into a feature vector, which increases the algorithmic robustness and reduces noise effects while retaining computational simplicity. The symmetry of facial movements is measured by the Resistor-Average Distance (RAD) between LBP features extracted from the two sides of the face. A Support Vector Machine (SVM) is applied to provide quantitative evaluation of facial paralysis based on the House-Brackmann (H-B) Scale. The proposed method is validated by experiments with 197 subject videos, which demonstrate its accuracy and efficiency.

  18. Selective Transfer Machine for Personalized Facial Action Unit Detection

    PubMed Central

    Chu, Wen-Sheng; De la Torre, Fernando; Cohn, Jeffery F.

    2014-01-01

    Automatic facial action unit (AFA) detection from video is a long-standing problem in facial expression analysis. Most approaches emphasize choices of features and classifiers and neglect individual differences in target persons. People vary markedly in facial morphology (e.g., heavy versus delicate brows, smooth versus deeply etched wrinkles) and behavior, and individual differences can dramatically influence how well generic classifiers generalize to previously unseen persons. While a possible solution would be to train person-specific classifiers, that is often neither feasible nor theoretically compelling. The alternative that we propose is to personalize a generic classifier in an unsupervised manner (no additional labels for the test subjects are required). We introduce a transductive learning method, which we refer to as the Selective Transfer Machine (STM), to personalize a generic classifier by attenuating person-specific biases. STM achieves this effect by simultaneously learning a classifier and re-weighting the training samples that are most relevant to the test subject. To evaluate the effectiveness of STM, we compared STM to generic classifiers and to cross-domain learning methods on three major databases: CK+ [20], GEMEP-FERA [32] and RU-FACS [2]. STM outperformed generic classifiers in all three. PMID:25242877
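
    A much-simplified stand-in for the personalization idea is sketched below: training frames are re-weighted by their similarity to the unlabeled test subject's frames, and a weighted SVM is then trained. STM itself learns the weights and the classifier jointly with a kernel-mean-matching-style objective, which this sketch does not reproduce; all data and parameters are illustrative.

    ```python
    # Similarity-weighted SVM as a crude proxy for STM-style personalization.
    import numpy as np
    from sklearn.metrics.pairwise import rbf_kernel
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(300, 10))            # frames from many subjects
    y_train = rng.integers(0, 2, 300)               # AU present / absent
    X_test = rng.normal(loc=0.3, size=(80, 10))     # unlabeled test-subject frames

    # Weight each training frame by its average similarity to the test frames.
    weights = rbf_kernel(X_train, X_test, gamma=0.1).mean(axis=1)
    weights /= weights.mean()                       # keep the average weight at 1

    clf = SVC(kernel="rbf", gamma=0.1)
    clf.fit(X_train, y_train, sample_weight=weights)
    pred = clf.predict(X_test)
    ```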

  19. Long-term assessment of facial features and functions needing more attention in treatment of Treacher Collins syndrome.

    PubMed

    Plomp, Raul G; Versnel, Sarah L; van Lieshout, Manouk J S; Poublon, Rene M L; Mathijssen, Irene M J

    2013-08-01

    This study aimed to determine which facial features and functions need more attention in the long-term surgical treatment of Treacher Collins syndrome (TCS). A cross-sectional cohort study was conducted to compare 23 TCS patients with 206 controls (all ≥18 years) regarding satisfaction with their face. The adjusted Body Cathexis Scale was used to determine satisfaction with the appearance of the different facial features and functions, and the desire for further treatment of these items was surveyed. For each patient, an overview was made of all facial operations performed, the affected facial features, and the objective severity of the facial deformities. Patients were least satisfied with the appearance of the ears, facial profile, and eyelids, and with the functions of hearing and nasal patency (P<0.001). Residual deformity of the reconstructed facial areas remained a problem, mainly in the orbital area. The desire for further treatment and dissatisfaction were high among operated patients, predominantly for eyelid reconstructions; another significant wish was for improvement of hearing. In patients with TCS, functional deficits of the face are shown to be as important as facial appearance. In particular, nasal patency and hearing are frequently impaired and require routine screening and treatment from intake onwards. Furthermore, correction of ear deformities and midface hypoplasia should be offered and performed more frequently. Residual deformity and dissatisfaction remain a problem, especially in reconstructed eyelids. Copyright © 2013 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.

  20. Minor physical anomalies and craniofacial measures in patients with treatment-resistant schizophrenia.

    PubMed

    Lin, A-S; Chang, S-S; Lin, S-H; Peng, Y-C; Hwu, H-G; Chen, W J

    2015-07-01

    Schizophrenia patients have higher rates of minor physical anomalies (MPAs) than controls, particularly in the craniofacial region; this difference lends support to the neurodevelopmental model of schizophrenia. Whether MPAs are associated with treatment response in schizophrenia remains unknown. The aim of this case-control study was to investigate whether more MPAs and specific quantitative craniofacial features in patients with schizophrenia are associated with operationally defined treatment resistance. A comprehensive scale, consisting of both qualitatively measured MPAs and quantitative measurements of the head and face, was applied in 108 patients with treatment-resistant schizophrenia (TRS) and in 104 non-TRS patients. Treatment resistance was determined according to the criteria proposed by Conley & Kelly (2001; Biological Psychiatry 50, 898-911). Our results revealed that patients with TRS had higher MPA scores in the mouth region than non-TRS patients, and the two groups also differed in four quantitative measurements (facial width, lower facial height, facial height, and length of the philtrum), after controlling for multiple comparisons using the false discovery rate. Among these dysmorphological measurements, three MPA item types (mouth MPA score, facial width, and lower facial height) and earlier disease onset were further demonstrated to have good discriminant validity in distinguishing TRS from non-TRS patients in a multivariable logistic regression analysis, with an area under the curve of 0.84 and a generalized R² of 0.32. These findings suggest that certain MPAs and craniofacial features may serve as useful markers for identifying TRS at early stages of the illness.

  1. Accurate landmarking of three-dimensional facial data in the presence of facial expressions and occlusions using a three-dimensional statistical facial feature model.

    PubMed

    Zhao, Xi; Dellandréa, Emmanuel; Chen, Liming; Kakadiaris, Ioannis A

    2011-10-01

    Three-dimensional face landmarking aims at automatically localizing facial landmarks and has a wide range of applications (e.g., face recognition, face tracking, and facial expression analysis). Existing methods assume neutral facial expressions and unoccluded faces. In this paper, we propose a general learning-based framework for reliable landmark localization on 3-D facial data under challenging conditions (i.e., facial expressions and occlusions). Our approach relies on a statistical model, called 3-D statistical facial feature model, which learns both the global variations in configurational relationships between landmarks and the local variations of texture and geometry around each landmark. Based on this model, we further propose an occlusion classifier and a fitting algorithm. Results from experiments on three publicly available 3-D face databases (FRGC, BU-3-DFE, and Bosphorus) demonstrate the effectiveness of our approach, in terms of landmarking accuracy and robustness, in the presence of expressions and occlusions.

  2. Meta-Analysis of Facial Emotion Recognition in Behavioral Variant Frontotemporal Dementia: Comparison With Alzheimer Disease and Healthy Controls.

    PubMed

    Bora, Emre; Velakoulis, Dennis; Walterfang, Mark

    2016-07-01

    Behavioral disturbances and lack of empathy are distinctive clinical features of behavioral variant frontotemporal dementia (bvFTD) in comparison to Alzheimer disease (AD). The aim of this meta-analytic review was to compare facial emotion recognition performances of bvFTD with healthy controls and AD. The current meta-analysis included a total of 19 studies and involved comparisons of 288 individuals with bvFTD and 329 healthy controls and 162 bvFTD and 147 patients with AD. Facial emotion recognition was significantly impaired in bvFTD in comparison to the healthy controls (d = 1.81) and AD (d = 1.23). In bvFTD, recognition of negative emotions, especially anger (d = 1.48) and disgust (d = 1.41), were severely impaired. Emotion recognition was significantly impaired in bvFTD in comparison to AD in all emotions other than happiness. Impairment of emotion recognition is a relatively specific feature of bvFTD. Routine assessment of social-cognitive abilities including emotion recognition can be helpful in better differentiating between cortical dementias such as bvFTD and AD. © The Author(s) 2016.

  3. Modelling the perceptual similarity of facial expressions from image statistics and neural responses.

    PubMed

    Sormaz, Mladen; Watson, David M; Smith, William A P; Young, Andrew W; Andrews, Timothy J

    2016-04-01

    The ability to perceive facial expressions of emotion is essential for effective social communication. We investigated how the perception of facial expression emerges from the image properties that convey this important social signal, and how neural responses in face-selective brain regions might track these properties. To do this, we measured the perceptual similarity between expressions of basic emotions, and investigated how this is reflected in image measures and in the neural response of different face-selective regions. We show that the perceptual similarity of different facial expressions (fear, anger, disgust, sadness, happiness) can be predicted by both surface and feature shape information in the image. Using block design fMRI, we found that the perceptual similarity of expressions could also be predicted from the patterns of neural response in the face-selective posterior superior temporal sulcus (STS), but not in the fusiform face area (FFA). These results show that the perception of facial expression is dependent on the shape and surface properties of the image and on the activity of specific face-selective regions. Copyright © 2016 Elsevier Inc. All rights reserved.

  4. Spoofing detection on facial images recognition using LBP and GLCM combination

    NASA Astrophysics Data System (ADS)

    Sthevanie, F.; Ramadhani, K. N.

    2018-03-01

    A challenge for face-based security systems is detecting facial image falsification, such as facial image spoofing. Spoofing occurs when someone tries to pass as a registered user to obtain illegal access and gain advantage from the protected system. This research implements a facial image spoofing detection method based on analyzing image texture. The proposed method combines the Local Binary Pattern (LBP) and Gray Level Co-occurrence Matrix (GLCM) methods for texture analysis. The experimental results show that spoofing detection using the LBP and GLCM combination achieves a higher detection rate than using only the LBP feature or the GLCM feature.
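
    The combined texture descriptor can be sketched with standard scikit-image building blocks: a uniform-LBP histogram concatenated with a few GLCM statistics, which would then be fed to a classifier. Parameter choices are illustrative; the function names follow recent scikit-image releases (older versions spell them greycomatrix/greycoprops).

    ```python
    # LBP histogram + GLCM statistics as one spoofing feature vector.
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops, local_binary_pattern

    def spoof_features(gray, P=8, R=1.0):
        lbp = local_binary_pattern(gray, P, R, method="uniform")
        lbp_hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2),
                                   density=True)
        glcm = graycomatrix(gray, distances=[1], angles=[0, np.pi / 2],
                            levels=256, symmetric=True, normed=True)
        glcm_feats = np.hstack([graycoprops(glcm, prop).ravel()
                                for prop in ("contrast", "homogeneity",
                                             "energy", "correlation")])
        return np.hstack([lbp_hist, glcm_feats])     # input to a classifier

    gray = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # stand-in face crop
    print(spoof_features(gray).shape)
    ```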

  5. Stickler Syndrome

    MedlinePlus

    ... Children who have Stickler syndrome often have distinctive facial features — prominent eyes, a small nose with a scooped ... develop ear infections than are children with normal facial features. Deafness. Hearing loss may worsen with time and ...

  6. Combining facial dynamics with appearance for age estimation.

    PubMed

    Dibeklioglu, Hamdi; Alnajar, Fares; Ali Salah, Albert; Gevers, Theo

    2015-06-01

    Estimating the age of a human from captured images of his/her face is a challenging problem. In general, the existing approaches to this problem use appearance features only. In this paper, we show that in addition to appearance information, facial dynamics can be leveraged in age estimation. We propose a method to extract and use dynamic features for age estimation, using a person's smile. Our approach is tested on a large, gender-balanced database with 400 subjects, with ages ranging from 8 to 76 years. In addition, we introduce a new database of posed disgust expressions with 324 subjects in the same age range, and evaluate the reliability of the proposed approach when used with another expression. State-of-the-art appearance-based age estimation methods from the literature are implemented as baselines. We demonstrate that for each of these methods, the addition of the proposed dynamic features results in statistically significant improvement. We further propose a novel hierarchical age estimation architecture based on adaptive age grouping. We test our approach extensively, including an exploration of spontaneous versus posed smile dynamics, and gender-specific age estimation. We show that using spontaneity information reduces the mean absolute error by up to 21%, advancing the state of the art for facial age estimation.

  7. Modeling first impressions from highly variable facial images

    PubMed Central

    Vernon, Richard J. W.; Sutherland, Clare A. M.; Young, Andrew W.; Hartley, Tom

    2014-01-01

    First impressions of social traits, such as trustworthiness or dominance, are reliably perceived in faces, and despite their questionable validity they can have considerable real-world consequences. We sought to uncover the information driving such judgments, using an attribute-based approach. Attributes (physical facial features) were objectively measured from feature positions and colors in a database of highly variable “ambient” face photographs, and then used as input for a neural network to model factor dimensions (approachability, youthful-attractiveness, and dominance) thought to underlie social attributions. A linear model based on this approach was able to account for 58% of the variance in raters’ impressions of previously unseen faces, and factor-attribute correlations could be used to rank attributes by their importance to each factor. Reversing this process, neural networks were then used to predict facial attributes and corresponding image properties from specific combinations of factor scores. In this way, the factors driving social trait impressions could be visualized as a series of computer-generated cartoon face-like images, depicting how attributes change along each dimension. This study shows that despite enormous variation in ambient images of faces, a substantial proportion of the variance in first impressions can be accounted for through linear changes in objectively defined features. PMID:25071197
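
    The core of the modelling step above is a linear mapping from objectively measured attributes to factor scores, evaluated on unseen faces. A minimal sketch with synthetic stand-ins (the scikit-learn calls are real; all data and dimensions are hypothetical):

      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.model_selection import cross_val_score

      # Rows: face photographs; columns: measured attributes (feature
      # positions, colours, ...). y: a factor score such as rated
      # approachability averaged over raters. All values synthetic here.
      rng = np.random.default_rng(1)
      X = rng.normal(size=(500, 65))
      y = X @ rng.normal(size=65) + rng.normal(scale=2.0, size=500)

      # Held-out R^2 estimates how much rating variance a linear
      # attribute model explains for previously unseen faces.
      r2 = cross_val_score(LinearRegression(), X, y, cv=5, scoring="r2")
      print(f"mean held-out R^2 = {r2.mean():.2f}")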

  8. Is the emotion recognition deficit associated with frontotemporal dementia caused by selective inattention to diagnostic facial features?

    PubMed

    Oliver, Lindsay D; Virani, Karim; Finger, Elizabeth C; Mitchell, Derek G V

    2014-07-01

    Frontotemporal dementia (FTD) is a debilitating neurodegenerative disorder characterized by severely impaired social and emotional behaviour, including emotion recognition deficits. Though fear recognition impairments seen in particular neurological and developmental disorders can be ameliorated by reallocating attention to critical facial features, the possibility that similar benefits can be conferred to patients with FTD has yet to be explored. In the current study, we examined the impact of presenting distinct regions of the face (whole face, eyes-only, and eyes-removed) on the ability to recognize expressions of anger, fear, disgust, and happiness in 24 patients with FTD and 24 healthy controls. A recognition deficit was demonstrated across emotions by patients with FTD relative to controls. Crucially, removal of diagnostic facial features resulted in an appropriate decline in performance for both groups; furthermore, patients with FTD demonstrated a lack of disproportionate improvement in emotion recognition accuracy as a result of isolating critical facial features relative to controls. Thus, unlike some neurological and developmental disorders featuring amygdala dysfunction, the emotion recognition deficit observed in FTD is not likely driven by selective inattention to critical facial features. Patients with FTD also mislabelled negative facial expressions as happy more often than controls, providing further evidence for abnormalities in the representation of positive affect in FTD. This work suggests that the emotional expression recognition deficit associated with FTD is unlikely to be rectified by adjusting selective attention to diagnostic features, as has proven useful in other select disorders. Copyright © 2014 Elsevier Ltd. All rights reserved.

  9. Facial contrast is a cue for perceiving health from the face.

    PubMed

    Russell, Richard; Porcheron, Aurélie; Sweda, Jennifer R; Jones, Alex L; Mauger, Emmanuelle; Morizot, Frederique

    2016-09-01

    How healthy someone appears has important social consequences. Yet the visual cues that determine perceived health remain poorly understood. Here we report evidence that facial contrast (the luminance and color contrast between internal facial features and the surrounding skin) is a cue for the perception of health from the face. Facial contrast was measured from a large sample of Caucasian female faces, and was found to predict ratings of perceived health. Most aspects of facial contrast were positively related to perceived health, meaning that faces with higher facial contrast appeared healthier. In 2 subsequent experiments, we manipulated facial contrast and found that participants perceived faces with increased facial contrast as appearing healthier than faces with decreased facial contrast. These results support the idea that facial contrast is a cue for perceived health. This finding adds to the growing knowledge about perceived health from the face, and helps to ground our understanding of perceived health in terms of lower-level perceptual features such as contrast. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
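
    One plausible formulation of the measure, sketched below, is a Michelson-style contrast between a feature region and the surrounding skin; the study's exact definition and colour-channel treatment may differ, and the region masks are assumed given (e.g. from landmarking):

      import numpy as np

      def luminance_contrast(img_lum, feature_mask, skin_mask):
          """Contrast between a facial feature (e.g. lips or brows) and
          surrounding skin, from a 2-D luminance image and boolean masks."""
          l_feat = img_lum[feature_mask].mean()
          l_skin = img_lum[skin_mask].mean()
          return (l_skin - l_feat) / (l_skin + l_feat)

      # Higher values mean darker features against lighter skin; such
      # values would then be related to health ratings across faces.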

  10. Does my face FIT?: a face image task reveals structure and distortions of facial feature representation.

    PubMed

    Fuentes, Christina T; Runa, Catarina; Blanco, Xenxo Alvarez; Orvalho, Verónica; Haggard, Patrick

    2013-01-01

    Despite extensive research on face perception, few studies have investigated individuals' knowledge about the physical features of their own face. In this study, 50 participants indicated the location of key features of their own face, relative to an anchor point corresponding to the tip of the nose, and the results were compared to the true location of the same individual's features from a standardised photograph. Horizontal and vertical errors were analysed separately. An overall bias to underestimate vertical distances revealed a distorted face representation, with reduced face height. Factor analyses were used to identify separable subconfigurations of facial features with correlated localisation errors. Independent representations of upper and lower facial features emerged from the data pattern. The major source of variation across individuals was in representation of face shape, with a spectrum from tall/thin to short/wide representation. Visual identification of one's own face is excellent, and facial features are routinely used for establishing personal identity. However, our results show that spatial knowledge of one's own face is remarkably poor, suggesting that face representation may not contribute strongly to self-awareness.

  11. Tuning the developing brain to social signals of emotions

    PubMed Central

    Leppänen, Jukka M.; Nelson, Charles A.

    2010-01-01

    Humans in diverse cultures develop a similar capacity to recognize the emotional signals of different facial expressions. This capacity is mediated by a brain network that involves emotion-related brain circuits and higher-level visual representation areas. Recent studies suggest that the key components of this network begin to emerge early in life. The studies also suggest that initial biases in emotion-related brain circuits and the early coupling of these circuits with cortical perceptual areas provide a foundation for the rapid acquisition of representations of those facial features that denote specific emotions. PMID:19050711

  12. Selective attention to a facial feature with and without facial context: an ERP-study.

    PubMed

    Wijers, A A; Van Besouw, N J P; Mulder, G

    2002-04-01

    The present experiment addressed the question of whether selectively attending to a facial feature (mouth shape) would benefit from the presence of a correct facial context. Subjects attended selectively to one of two possible mouth shapes belonging to photographs of a face with a happy or sad expression, respectively. These mouths were presented randomly either in isolation, embedded in the original photos, or in an exchanged facial context. The ERP effect of attending to mouth shape was a lateral posterior negativity and anterior positivity with an onset latency of 160-200 ms; this effect was completely unaffected by the type of facial context. When the mouth shape and the facial context conflicted, this resulted in a medial parieto-occipital positivity with an onset latency of 180 ms, independent of the relevance of the mouth shape. Finally, there was a late (onset at approx. 400 ms) expression (happy vs. sad) effect, which was strongly lateralized to the right posterior hemisphere and was most prominent for attended stimuli in the correct facial context. For the isolated mouth stimuli, a similarly distributed expression effect was observed at an earlier latency range (180-240 ms). These data suggest the existence of separate, independent and neuroanatomically segregated processors engaged in the selective processing of facial features and the detection of contextual congruence and emotional expression of face stimuli. The data do not support the idea that early selective attention processes benefit from top-down constraints provided by the correct facial context.

  13. Facial Paralysis in Patients With Hemifacial Microsomia: Frequency, Distribution, and Association With Other OMENS Abnormalities.

    PubMed

    Li, Qiang; Zhou, Xu; Wang, Yue; Qian, Jin; Zhang, Qingguo

    2018-05-15

    Although facial paralysis is a fundamental feature of hemifacial microsomia, the frequency and distribution of nerve abnormalities in patients with hemifacial microsomia remain unclear. In this study, the authors classified 1125 cases with microtia (including 339 patients with hemifacial microsomia and 786 with isolated microtia) according to the Orbital Distortion, Mandibular Hypoplasia, Ear Anomaly, Nerve Involvement, Soft Tissue Deficiency (OMENS) scheme. The authors then performed an independent analysis to describe the distribution of nerve abnormalities and to reveal possible relationships between facial paralysis and the other 4 fundamental features in the OMENS system. Results revealed that facial paralysis is present in 23.9% of patients with hemifacial microsomia. The frontal-temporal branch is the most vulnerable branch in the total 1125 cases with microtia. The occurrence of facial paralysis is positively correlated with mandibular hypoplasia and soft tissue deficiency, both in the total 1125 cases and in the hemifacial microsomia patients. Orbital asymmetry is related to facial paralysis only in the total microtia cases, and ear deformity is related to facial paralysis only in hemifacial microsomia patients. No significant association was found between the severity of facial paralysis and any of the other 4 OMENS anomalies. These data suggest that the occurrence of facial paralysis may be associated with other OMENS abnormalities. The presence of serious mandibular hypoplasia or soft tissue deficiency should alert the clinician to a high possibility, but not a high severity, of facial paralysis.

  14. Facial expression recognition based on improved deep belief networks

    NASA Astrophysics Data System (ADS)

    Wu, Yao; Qiu, Weigen

    2017-08-01

    In order to improve the robustness of facial expression recognition, a method based on Local Binary Patterns (LBP) combined with improved deep belief networks (DBNs) is proposed. The method uses LBP to extract texture features, and then uses the improved deep belief networks as the detector and classifier of those LBP features, realizing the combination of LBP and improved DBNs for facial expression recognition. On the JAFFE (Japanese Female Facial Expression) database, the recognition rate improved significantly.
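
    scikit-learn has no full DBN with supervised fine-tuning, but stacked restricted Boltzmann machines with a logistic-regression head give a rough stand-in for the LBP-plus-DBN pipeline sketched above. All names and layer sizes below are illustrative:

      from sklearn.linear_model import LogisticRegression
      from sklearn.neural_network import BernoulliRBM
      from sklearn.pipeline import Pipeline

      # Two RBM layers learn a stacked representation of LBP histogram
      # features (scaled to [0, 1]); logistic regression classifies.
      dbn_like = Pipeline([
          ("rbm1", BernoulliRBM(n_components=256, learning_rate=0.05, n_iter=20)),
          ("rbm2", BernoulliRBM(n_components=128, learning_rate=0.05, n_iter=20)),
          ("clf", LogisticRegression(max_iter=1000)),
      ])

      # X_train: rows of LBP histograms; y_train: expression labels
      # (e.g. the seven JAFFE categories). Hypothetical variable names.
      # dbn_like.fit(X_train, y_train)
      # print(dbn_like.score(X_test, y_test))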

  15. Evaluation of facial expression in acute pain in cats.

    PubMed

    Holden, E; Calvo, G; Collins, M; Bell, A; Reid, J; Scott, E M; Nolan, A M

    2014-12-01

    To describe the development of a facial expression tool differentiating pain-free cats from those in acute pain. Observers shown facial images from painful and pain-free cats were asked to identify if they were in pain or not. From facial images, anatomical landmarks were identified and distances between these were mapped. Selected distances underwent statistical analysis to identify features discriminating pain-free and painful cats. Additionally, thumbnail photographs were reviewed by two experts to identify discriminating facial features between the groups. Observers (n = 68) had difficulty in identifying pain-free from painful cats, with only 13% of observers being able to discriminate more than 80% of painful cats. Analysis of 78 facial landmarks and 80 distances identified six significant factors differentiating pain-free and painful faces including ear position and areas around the mouth/muzzle. Standardised mouth and ear distances when combined showed excellent discrimination properties, correctly differentiating pain-free and painful cats in 98% of cases. Expert review supported these findings and a cartoon-type picture scale was developed from thumbnail images. Initial investigation into facial features of painful and pain-free cats suggests potentially good discrimination properties of facial images. Further testing is required for development of a clinical tool. © 2014 British Small Animal Veterinary Association.
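
    The geometry behind the tool is simple: pairwise distances between annotated landmarks, standardised and screened for discriminative power. A minimal sketch (the paper's exact normalisation rule is not given here, so the reference distance is an assumption):

      import numpy as np
      from itertools import combinations

      def landmark_distances(landmarks):
          """All pairwise Euclidean distances between facial landmarks
          ((n_points, 2) array), the measurements screened statistically
          for pain-discriminating features."""
          pairs = list(combinations(range(len(landmarks)), 2))
          d = np.array([np.linalg.norm(landmarks[i] - landmarks[j])
                        for i, j in pairs])
          return d, pairs

      # To compare across photographs, distances are typically divided by
      # an internal reference (e.g. inter-ocular distance) before the
      # statistical screening for group differences.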

  16. Impaired recognition of facial emotions from low-spatial frequencies in Asperger syndrome.

    PubMed

    Kätsyri, Jari; Saalasti, Satu; Tiippana, Kaisa; von Wendt, Lennart; Sams, Mikko

    2008-01-01

    The theory of 'weak central coherence' [Happe, F., & Frith, U. (2006). The weak coherence account: Detail-focused cognitive style in autism spectrum disorders. Journal of Autism and Developmental Disorders, 36(1), 5-25] implies that persons with autism spectrum disorders (ASDs) have a perceptual bias for local but not for global stimulus features. The recognition of emotional facial expressions representing various different levels of detail has not been studied previously in ASDs. We analyzed the recognition of four basic emotional facial expressions (anger, disgust, fear and happiness) from low-spatial frequencies (overall global shapes without local features) in adults with an ASD. A group of 20 participants with Asperger syndrome (AS) was compared to a group of non-autistic age- and sex-matched controls. Emotion recognition was tested from static and dynamic facial expressions whose spatial frequency contents had been manipulated by low-pass filtering at two levels. The two groups recognized emotions similarly from non-filtered faces and from dynamic vs. static facial expressions. In contrast, the participants with AS were less accurate than controls in recognizing facial emotions from very low-spatial frequencies. The results suggest intact recognition of basic facial emotions and dynamic facial information, but impaired visual processing of global features in ASDs.
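
    Low-pass filtering of this kind removes local features while preserving global shape. A minimal sketch with a Gaussian filter, where the mapping from a cycles-per-face cutoff to the Gaussian sigma is an illustrative convention rather than the paper's exact procedure:

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def low_pass_face(gray, cycles_per_face, face_width_px):
          """Blur a face image so that content above roughly
          `cycles_per_face` is attenuated."""
          # Cutoff in cycles/pixel, converted to a spatial sigma so the
          # filter response has fallen substantially at that frequency.
          f_c = cycles_per_face / face_width_px
          sigma = 1.0 / (2.0 * np.pi * f_c)
          return gaussian_filter(gray.astype(float), sigma=sigma)

      # e.g. low_pass_face(img, cycles_per_face=8, face_width_px=256)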

  17. Body Emotion Recognition Disproportionately Depends on Vertical Orientations during Childhood

    ERIC Educational Resources Information Center

    Balas, Benjamin; Auen, Amanda; Saville, Alyson; Schmidt, Jamie

    2018-01-01

    Children's ability to recognize emotional expressions from faces and bodies develops during childhood. However, the low-level features that support accurate body emotion recognition during development have not been well characterized. This is in marked contrast to facial emotion recognition, which is known to depend upon specific spatial frequency…

  18. Assessment of the facial features and chin development of fetuses with use of serial three-dimensional sonography and the mandibular size monogram in a Chinese population.

    PubMed

    Tsai, Meng-Yin; Lan, Kuo-Chung; Ou, Chia-Yo; Chen, Jen-Huang; Chang, Shiuh-Young; Hsu, Te-Yao

    2004-02-01

    Our purpose was to evaluate whether serial three-dimensional (3D) sonography and the mandibular size monogram allow observation of dynamic changes in facial features, as well as chin development, in utero. The mandibular size monogram was established through a cross-sectional study involving 183 fetal images. Serial changes in facial features and chin development were assessed in a cohort study involving 40 patients. The monogram reveals that the biparietal distance (BPD)/mandibular body length (MBL) ratio gradually decreases with advancing gestational age. The cohort study conducted with serial 3D sonography shows the same tendency. Both the images and the paired-samples t-test results (P<.001) suggest that fetuses develop wider chins and broader facial features in later weeks. Serial 3D sonography and the mandibular size monogram display disproportionate growth of the fetal head and chin that leads to changes in facial features in late gestation. This must be considered when evaluating fetuses at risk for development of micrognathia.

  19. A geometric morphometric study of regional differences in the ontogeny of the modern human facial skeleton.

    PubMed

    Vioarsdóttir, Una Strand; O'Higgins, Paul; Stringer, Chris

    2002-09-01

    This study examines interpopulation variations in the facial skeleton of 10 modern human populations and places these in an ontogenetic perspective. It aims to establish the extent to which the distinctive features of adult representatives of these populations are present in the early postnatal period and to what extent population differences in ontogenetic scaling and allometric trajectories contribute to distinct facial forms. The analyses utilize configurations of facial landmarks and are carried out using geometric morphometric methods. The results of this study show that modern human populations can be distinguished based on facial shape alone, irrespective of age or sex, indicating the early presence of differences. Additionally, some populations have statistically distinct facial ontogenetic trajectories that lead to the development of further differences later in ontogeny. We conclude that population-specific facial morphologies develop principally through distinctions in facial shape probably already present at birth and further accentuated and modified to variable degrees during growth. These findings raise interesting questions regarding the plasticity of facial growth patterns in modern humans. Further, they have important implications in relation to the study of growth in the face of fossil hominins and in relation to the possibility of developing effective discriminant functions for the identification of population affinities of immature facial skeletal material. Such tools would be of value in archaeological, forensic and anthropological applications. The findings of this study underline the need to examine more deeply, and in more detail, the ontogenetic basis of other causes of craniometric variation, such as sexual dimorphism and hominin species differentiation.
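
    Geometric morphometrics compares landmark configurations after removing location, scale and rotation. SciPy's procrustes() performs this superimposition for a pair of shapes; a minimal sketch with hypothetical landmark data (a full analysis would align many configurations and regress shape on age to estimate ontogenetic trajectories):

      import numpy as np
      from scipy.spatial import procrustes

      rng = np.random.default_rng(2)
      juvenile = rng.normal(size=(20, 2))              # hypothetical landmarks
      adult = juvenile + 0.05 * rng.normal(size=(20, 2))

      # Disparity: sum of squared differences between the two
      # configurations after Procrustes superimposition.
      aligned_a, aligned_b, disparity = procrustes(juvenile, adult)
      print(f"shape difference after superimposition: {disparity:.4f}")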

  20. Mutual information-based facial expression recognition

    NASA Astrophysics Data System (ADS)

    Hazar, Mliki; Hammami, Mohamed; Hanêne, Ben-Abdallah

    2013-12-01

    This paper introduces a novel low-computation discriminative region representation for the expression analysis task. The proposed approach relies on studies in psychology which show that most of the descriptive regions responsible for facial expression are located around certain face parts. The contribution of this work lies in the proposition of a new approach that supports automatic facial expression recognition based on automatic region selection. The region selection step aims to select the descriptive regions responsible for facial expression and was performed using the Mutual Information (MI) technique. For facial feature extraction, we applied Local Binary Patterns (LBP) to the gradient image to encode salient micro-patterns of facial expressions. Experimental studies have shown that using discriminative regions provides better results than using the whole face region, while reducing the feature vector dimension.
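
    The region-selection step can be sketched with scikit-learn's mutual-information scorer: score each candidate region's feature summary against the expression label and keep the top-ranked regions. Data and shapes below are hypothetical:

      import numpy as np
      from sklearn.feature_selection import mutual_info_classif

      rng = np.random.default_rng(4)
      X = rng.normal(size=(300, 50))     # 50 region-wise feature summaries
      y = rng.integers(0, 6, size=300)   # 6 expression classes

      # MI between each region's feature and the label; keep the best 10.
      mi = mutual_info_classif(X, y, random_state=0)
      top_regions = np.argsort(mi)[::-1][:10]
      print("selected regions:", top_regions)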

  1. LBP and SIFT based facial expression recognition

    NASA Astrophysics Data System (ADS)

    Sumer, Omer; Gunes, Ece O.

    2015-02-01

    This study compares the performance of local binary patterns (LBP) and the scale invariant feature transform (SIFT) with support vector machines (SVM) in the automatic classification of discrete facial expressions. Facial expression recognition is a multiclass classification problem; seven classes (happiness, anger, sadness, disgust, surprise, fear and contempt) are classified. Using SIFT feature vectors and a linear SVM, 93.1% mean accuracy is achieved on the CK+ database. The performance of the LBP-based classifier with a linear SVM is reported on SFEW using the strictly person independent (SPI) protocol; seven-class mean accuracy on SFEW is 59.76%. Experiments on both databases showed that LBP features can be used in a fairly descriptive way provided that good localization of facial points and a suitable partitioning strategy are followed.

  2. A de novo 11q23 deletion in a patient presenting with severe ophthalmologic findings, psychomotor retardation and facial dysmorphism.

    PubMed

    Şimşek-Kiper, Pelin Özlem; Bayram, Yavuz; Ütine, Gülen Eda; Alanay, Yasemin; Boduroğlu, Koray

    2014-01-01

    Distal 11q deletion, previously known as Jacobsen syndrome, is caused by segmental aneusomy for the distal end of the long arm of chromosome 11. Typical clinical features include facial dysmorphism, mild-to-moderate psychomotor retardation, trigonocephaly, cardiac defects, and thrombocytopenia. There is a significant variability in the range of clinical features. We report herein a five-year-old girl with severe ophthalmological findings, facial dysmorphism, and psychomotor retardation with normal platelet function, in whom a de novo 11q23 deletion was detected, suggesting that distal 11q monosomy should be kept in mind in patients presenting with dysmorphic facial features and psychomotor retardation even in the absence of hematological findings.

  3. Effective Heart Disease Detection Based on Quantitative Computerized Traditional Chinese Medicine Using Representation Based Classifiers.

    PubMed

    Shu, Ting; Zhang, Bob; Tang, Yuan Yan

    2017-01-01

    At present, heart disease is the number one cause of death worldwide. Traditionally, heart disease is detected using blood tests, electrocardiogram, cardiac computerized tomography scan, cardiac magnetic resonance imaging, and so on. However, these traditional diagnostic methods are time consuming and/or invasive. In this paper, we propose an effective noninvasive computerized method based on facial images to quantitatively detect heart disease. Specifically, facial key block color features are extracted from facial images and analyzed using the Probabilistic Collaborative Representation Based Classifier. The idea of facial key block color analysis is founded in Traditional Chinese Medicine. A new dataset consisting of 581 heart disease and 581 healthy samples was used to evaluate the proposed method. In order to optimize the Probabilistic Collaborative Representation Based Classifier, an analysis of its parameters was performed. According to the experimental results, the proposed method obtains the highest accuracy compared with other classifiers and proves effective at heart disease detection.
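
    The probabilistic classifier used above builds on the basic collaborative representation classifier (CRC), which is easy to sketch: code the test sample over all training samples with ridge regularisation, then assign the class whose own samples best reconstruct it. A minimal version, without the probabilistic extension:

      import numpy as np

      def crc_classify(D, labels, x, lam=1e-3):
          """D: one training feature vector per column (here, facial
          key-block colour features); labels: class of each column;
          x: test feature vector."""
          # Ridge-regularised coding: alpha = (D^T D + lam I)^(-1) D^T x
          n = D.shape[1]
          alpha = np.linalg.solve(D.T @ D + lam * np.eye(n), D.T @ x)

          best_class, best_res = None, np.inf
          for c in np.unique(labels):
              mask = labels == c
              # Score each class by how well its own samples and
              # coefficients reconstruct the test sample.
              res = np.linalg.norm(x - D[:, mask] @ alpha[mask])
              if res < best_res:
                  best_class, best_res = c, res
          return best_class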

  4. Cost-Sensitive Local Binary Feature Learning for Facial Age Estimation.

    PubMed

    Lu, Jiwen; Liong, Venice Erin; Zhou, Jie

    2015-12-01

    In this paper, we propose a cost-sensitive local binary feature learning (CS-LBFL) method for facial age estimation. Unlike the conventional facial age estimation methods that employ hand-crafted descriptors or holistically learned descriptors for feature representation, our CS-LBFL method learns discriminative local features directly from raw pixels for face representation. Motivated by the fact that facial age estimation is a cost-sensitive computer vision problem and local binary features are more robust to illumination and expression variations than holistic features, we learn a series of hashing functions to project raw pixel values extracted from face patches into low-dimensional binary codes, where binary codes with similar chronological ages are projected as close as possible, and those with dissimilar chronological ages are projected as far as possible. Then, we pool and encode these local binary codes within each face image as a real-valued histogram feature for face representation. Moreover, we propose a cost-sensitive local binary multi-feature learning method to jointly learn multiple sets of hashing functions using face patches extracted from different scales to exploit complementary information. Our methods achieve competitive performance on four widely used face aging data sets.

  5. The look of fear and anger: facial maturity modulates recognition of fearful and angry expressions.

    PubMed

    Sacco, Donald F; Hugenberg, Kurt

    2009-02-01

    The current series of studies provides converging evidence that facial expressions of fear and anger may have co-evolved to mimic mature and babyish faces in order to enhance their communicative signal. In Studies 1 and 2, fearful and angry facial expressions were manipulated to have enhanced babyish features (larger eyes) or enhanced mature features (smaller eyes); in the context of a speeded categorization task in Study 1 and a visual noise paradigm in Study 2, results indicated that larger eyes facilitated the recognition of fearful facial expressions, while smaller eyes facilitated the recognition of angry facial expressions. Study 3 manipulated facial roundness, a stable structure that does not vary systematically with expressions, and found that congruency between maturity and expression (narrow face-anger; round face-fear) facilitated expression recognition accuracy. Results are discussed as representing a broad co-evolutionary relationship between facial maturity and fearful and angry facial expressions. (c) 2009 APA, all rights reserved

  6. Automatic Contour Extraction of Facial Organs for Frontal Facial Images with Various Facial Expressions

    NASA Astrophysics Data System (ADS)

    Kobayashi, Hiroshi; Suzuki, Seiji; Takahashi, Hisanori; Tange, Akira; Kikuchi, Kohki

    This study deals with a method to realize automatic contour extraction of facial features such as the eyebrows, eyes and mouth from time-series frontal face images with various facial expressions. Because Snakes, one of the best-known contour extraction methods, has several disadvantages, we propose a new method to overcome these issues. We define an elastic contour model in order to hold the contour shape, and then determine the elastic energy acquired from the amount of deformation of the elastic contour model. We also utilize the image energy obtained from brightness differences at the control points on the elastic contour model. Applying dynamic programming, we determine the contour position where the total of the elastic energy and the image energy becomes minimal. Employing frontal facial images captured at 1/30 s intervals, changing from neutral to one of six typical facial expressions, obtained from 20 subjects, we evaluated our method and found that it enables highly accurate automatic contour extraction of facial features.
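
    The minimisation can be done exactly with dynamic programming when each control point chooses among a finite set of candidate positions. A minimal sketch for an open contour, with a simple squared-displacement elastic term standing in for the paper's elastic contour model:

      import numpy as np

      def fit_contour(candidates, image_energy, beta=1.0):
          """candidates: (n_points, k, 2) candidate coordinates per control
          point; image_energy: (n_points, k) energy of each candidate
          (e.g. from brightness differences). Returns chosen indices."""
          n, k, _ = candidates.shape
          cost = image_energy[0].copy()
          back = np.zeros((n, k), dtype=int)

          for i in range(1, n):
              # Elastic energy: squared distance between consecutive choices
              # (axis 0 indexes the previous point, axis 1 the current one).
              d = candidates[i][None, :, :] - candidates[i - 1][:, None, :]
              elastic = beta * (d ** 2).sum(axis=2)
              total = cost[:, None] + elastic
              back[i] = total.argmin(axis=0)
              cost = total.min(axis=0) + image_energy[i]

          # Backtrack the minimising sequence of candidate indices.
          path = [int(cost.argmin())]
          for i in range(n - 1, 0, -1):
              path.append(int(back[i][path[-1]]))
          return path[::-1]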

  7. Temporal lobe structures and facial emotion recognition in schizophrenia patients and nonpsychotic relatives.

    PubMed

    Goghari, Vina M; Macdonald, Angus W; Sponheim, Scott R

    2011-11-01

    Temporal lobe abnormalities and emotion recognition deficits are prominent features of schizophrenia and appear related to the diathesis of the disorder. This study investigated whether temporal lobe structural abnormalities were associated with facial emotion recognition deficits in schizophrenia and related to genetic liability for the disorder. Twenty-seven schizophrenia patients, 23 biological family members, and 36 controls participated. Several temporal lobe regions (fusiform, superior temporal, middle temporal, amygdala, and hippocampus) previously associated with face recognition in normative samples and found to be abnormal in schizophrenia were evaluated using volumetric analyses. Participants completed a facial emotion recognition task and an age recognition control task under time-limited and self-paced conditions. Temporal lobe volumes were tested for associations with task performance. Group status explained 23% of the variance in temporal lobe volume. Left fusiform gray matter volume was decreased by 11% in patients and 7% in relatives compared with controls. Schizophrenia patients additionally exhibited smaller hippocampal and middle temporal volumes. Patients were unable to improve facial emotion recognition performance with unlimited time to make a judgment but were able to improve age recognition performance. Patients additionally showed a relationship between reduced temporal lobe gray matter and poor facial emotion recognition. For the middle temporal lobe region, the relationship between greater volume and better task performance was specific to facial emotion recognition and not age recognition. Because schizophrenia patients exhibited a specific deficit in emotion recognition not attributable to a generalized impairment in face perception, impaired emotion recognition may serve as a target for interventions.

  8. Facial feature tracking: a psychophysiological measure to assess exercise intensity?

    PubMed

    Miles, Kathleen H; Clark, Bradley; Périard, Julien D; Goecke, Roland; Thompson, Kevin G

    2018-04-01

    The primary aim of this study was to determine whether facial feature tracking reliably measures changes in facial movement across varying exercise intensities. Fifteen cyclists completed three incremental-intensity cycling trials to exhaustion while their faces were recorded with video cameras. Facial feature tracking was found to be a moderately reliable measure of facial movement during incremental-intensity cycling (intra-class correlation coefficient = 0.65-0.68). Facial movement (whole face (WF), upper face (UF), lower face (LF) and head movement (HM)) increased with exercise intensity, from lactate threshold one (LT1) until attainment of maximal aerobic power (MAP) (WF 3464 ± 3364 mm, P < 0.005; UF 1961 ± 1779 mm, P = 0.002; LF 1608 ± 1404 mm, P = 0.002; HM 849 ± 642 mm, P < 0.001). UF movement was greater than LF movement at all exercise intensities (UF minus LF at: LT1, 1048 ± 383 mm; LT2, 1208 ± 611 mm; MAP, 1401 ± 712 mm; P < 0.001). Significant medium to large non-linear relationships were found between facial movement and power output (r² = 0.24-0.31), HR (r² = 0.26-0.33), [La⁻] (r² = 0.33-0.44) and RPE (r² = 0.38-0.45). The findings demonstrate the potential utility of facial feature tracking as a non-invasive, psychophysiological measure to assess exercise intensity.

  9. Facial emotion perception impairments in schizophrenia patients with comorbid antisocial personality disorder.

    PubMed

    Tang, Dorothy Y Y; Liu, Amy C Y; Lui, Simon S Y; Lam, Bess Y H; Siu, Bonnie W M; Lee, Tatia M C; Cheung, Eric F C

    2016-02-28

    Impairment in facial emotion perception is believed to be associated with aggression. Schizophrenia patients with antisocial features are more impaired in facial emotion perception than their counterparts without these features. However, previous studies did not define the comorbidity of antisocial personality disorder (ASPD) using stringent criteria. We recruited 30 participants with dual diagnoses of ASPD and schizophrenia, 30 participants with schizophrenia and 30 controls. We employed the Facial Emotional Recognition paradigm to measure facial emotion perception, and administered a battery of neurocognitive tests. The Life History of Aggression scale was used. ANOVAs and ANCOVAs were conducted to examine group differences in facial emotion perception, and control for the effect of other neurocognitive dysfunctions on facial emotion perception. Correlational analyses were conducted to examine the association between facial emotion perception and aggression. Patients with dual diagnoses performed worst in facial emotion perception among the three groups. The group differences in facial emotion perception remained significant, even after other neurocognitive impairments were controlled for. Severity of aggression was correlated with impairment in perceiving negative-valenced facial emotions in patients with dual diagnoses. Our findings support the presence of facial emotion perception impairment and its association with aggression in schizophrenia patients with comorbid ASPD. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  10. The face is not an empty canvas: how facial expressions interact with facial appearance.

    PubMed

    Hess, Ursula; Adams, Reginald B; Kleck, Robert E

    2009-12-12

    Faces are not simply blank canvases upon which facial expressions write their emotional messages. In fact, facial appearance and facial movement are both important social signalling systems in their own right. We here provide multiple lines of evidence for the notion that the social signals derived from facial appearance on the one hand and facial movement on the other interact in a complex manner, sometimes reinforcing and sometimes contradicting one another. Faces provide information on who a person is. Sex, age, ethnicity, personality and other characteristics that can define a person and the social group the person belongs to can all be derived from the face alone. The present article argues that faces interact with the perception of emotion expressions because this information informs a decoder's expectations regarding an expresser's probable emotional reactions. Facial appearance also interacts more directly with the interpretation of facial movement because some of the features that are used to derive personality or sex information are also features that closely resemble certain emotional expressions, thereby enhancing or diluting the perceived strength of particular expressions.

  11. SparCLeS: dynamic l₁ sparse classifiers with level sets for robust beard/moustache detection and segmentation.

    PubMed

    Le, T Hoang Ngan; Luu, Khoa; Savvides, Marios

    2013-08-01

    Robust facial hair detection and segmentation is a highly valued soft biometric attribute for carrying out forensic facial analysis. In this paper, we propose a novel and fully automatic system, called SparCLeS, for beard/moustache detection and segmentation in challenging facial images. SparCLeS uses the multiscale self-quotient (MSQ) algorithm to preprocess facial images and deal with illumination variation. Histogram of oriented gradients (HOG) features are extracted from the preprocessed images and a dynamic sparse classifier is built using these features to classify a facial region as either containing skin or facial hair. A level set based approach, which makes use of the advantages of both global and local information, is then used to segment the regions of a face containing facial hair. Experimental results demonstrate the effectiveness of our proposed system in detecting and segmenting facial hair regions in images drawn from three databases, i.e., the NIST Multiple Biometric Grand Challenge (MBGC) still face database, the NIST Color Facial Recognition Technology FERET database, and the Labeled Faces in the Wild (LFW) database.
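
    The skin-versus-hair feature step maps directly onto scikit-image's HOG routine; the parameters below are common defaults rather than the paper's exact settings:

      from skimage.feature import hog

      def patch_descriptor(patch):
          """HOG descriptor for one grayscale facial patch, the kind of
          feature used to classify a region as skin or facial hair."""
          return hog(patch, orientations=9, pixels_per_cell=(8, 8),
                     cells_per_block=(2, 2), block_norm="L2-Hys")

      # Usage sketch: train any classifier on labelled patch descriptors,
      # then score sliding-window patches of a new face before the
      # level-set refinement stage.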

  12. Facial emotion recognition and borderline personality pathology.

    PubMed

    Meehan, Kevin B; De Panfilis, Chiara; Cain, Nicole M; Antonucci, Camilla; Soliani, Antonio; Clarkin, John F; Sambataro, Fabio

    2017-09-01

    The impact of borderline personality pathology on facial emotion recognition has been in dispute; with impaired, comparable, and enhanced accuracy found in high borderline personality groups. Discrepancies are likely driven by variations in facial emotion recognition tasks across studies (stimuli type/intensity) and heterogeneity in borderline personality pathology. This study evaluates facial emotion recognition for neutral and negative emotions (fear/sadness/disgust/anger) presented at varying intensities. Effortful control was evaluated as a moderator of facial emotion recognition in borderline personality. Non-clinical multicultural undergraduates (n = 132) completed a morphed facial emotion recognition task of neutral and negative emotional expressions across different intensities (100% Neutral; 25%/50%/75% Emotion) and self-reported borderline personality features and effortful control. Greater borderline personality features related to decreased accuracy in detecting neutral faces, but increased accuracy in detecting negative emotion faces, particularly at low-intensity thresholds. This pattern was moderated by effortful control; for individuals with low but not high effortful control, greater borderline personality features related to misattributions of emotion to neutral expressions, and enhanced detection of low-intensity emotional expressions. Individuals with high borderline personality features may therefore exhibit a bias toward detecting negative emotions that are not or barely present; however, good self-regulatory skills may protect against this potential social-cognitive vulnerability. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.

  13. 3D facial expression recognition using maximum relevance minimum redundancy geometrical features

    NASA Astrophysics Data System (ADS)

    Rabiu, Habibu; Saripan, M. Iqbal; Mashohor, Syamsiah; Marhaban, Mohd Hamiruce

    2012-12-01

    In recent years, facial expression recognition (FER) has become an attractive research area which, besides the fundamental challenges it poses, finds application in areas such as human-computer interaction, clinical psychology, lie detection, pain assessment, and neurology. Generally, approaches to FER consist of three main steps: face detection, feature extraction and expression recognition. The recognition accuracy of FER hinges immensely on the relevance of the selected features in representing the target expressions. In this article, we present a person- and gender-independent 3D facial expression recognition method using maximum relevance minimum redundancy geometrical features. The aim is to detect a compact set of features that sufficiently represents the most discriminative features between the target classes. A multi-class one-against-one SVM classifier was employed to recognize the seven facial expressions: neutral, happy, sad, angry, fear, disgust, and surprise. An average recognition accuracy of 92.2% was recorded. Furthermore, inter-database homogeneity was investigated between two independent databases, BU-3DFE and UPM-3DFE; the results showed a strong homogeneity between the two databases.

  14. Luminance sticker based facial expression recognition using discrete wavelet transform for physically disabled persons.

    PubMed

    Nagarajan, R; Hariharan, M; Satiyan, M

    2012-08-01

    Developing tools to assist physically disabled and immobilized people through facial expression is a challenging area of research that has recently attracted many researchers. In this paper, luminance-sticker-based facial expression recognition is proposed. Recognition of facial expression is carried out by employing the Discrete Wavelet Transform (DWT) as a feature extraction method. Different wavelet families with different orders (db1 to db20, Coif1 to Coif5 and Sym2 to Sym8) are utilized to investigate their performance in recognizing facial expressions and to evaluate their computational time. The standard deviation is computed for the coefficients of the first level of wavelet decomposition for every order of wavelet family. These standard deviations are used to form a set of feature vectors for classification. In this study, conventional validation and cross-validation are performed to evaluate the efficiency of the suggested feature vectors. Three different classifiers, namely Artificial Neural Network (ANN), k-Nearest Neighbor (kNN) and Linear Discriminant Analysis (LDA), are used to classify a set of eight facial expressions. The experimental results demonstrate that the proposed method gives very promising classification accuracies.
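
    The feature construction maps directly onto PyWavelets; a minimal sketch (whether the approximation band is included alongside the three detail bands is an assumption here):

      import numpy as np
      import pywt

      def dwt_std_features(gray, wavelet="db4"):
          """One-level 2-D DWT of a face image; standard deviations of the
          first-level coefficient bands form the feature vector."""
          cA, (cH, cV, cD) = pywt.dwt2(gray.astype(float), wavelet)
          return np.array([band.std() for band in (cA, cH, cV, cD)])

      # Sweeping `wavelet` over db1..db20, coif1..coif5 and sym2..sym8
      # reproduces the family/order comparison; the resulting vectors feed
      # an ANN, kNN or LDA classifier over the eight expression classes.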

  15. Selective Transfer Machine for Personalized Facial Expression Analysis

    PubMed Central

    Chu, Wen-Sheng; De la Torre, Fernando; Cohn, Jeffrey F.

    2017-01-01

    Automatic facial action unit (AU) and expression detection from videos is a long-standing problem. The problem is challenging in part because classifiers must generalize to previously unknown subjects that differ markedly in behavior and facial morphology (e.g., heavy versus delicate brows, smooth versus deeply etched wrinkles) from those on which the classifiers are trained. While some progress has been achieved through improvements in choices of features and classifiers, the challenge occasioned by individual differences among people remains. Person-specific classifiers would be a possible solution, but sufficient training data for them is typically unavailable. This paper addresses the problem of how to personalize a generic classifier without additional labels from the test subject. We propose a transductive learning method, which we refer to as a Selective Transfer Machine (STM), to personalize a generic classifier by attenuating person-specific mismatches. STM achieves this effect by simultaneously learning a classifier and re-weighting the training samples that are most relevant to the test subject. We compared STM to both generic classifiers and cross-domain learning methods on four benchmarks: CK+ [44], GEMEP-FERA [67], RU-FACS [4] and GFT [57]. STM outperformed generic classifiers in all. PMID:28113267

  16. Phenotype-genotype correlation in potential female carriers of X-linked developmental cataract (Nance-Horan syndrome).

    PubMed

    Khan, Arif O; Aldahmesh, Mohammed A; Mohamed, Jawahir Y; Alkuraya, Fowzan S

    2012-06-01

    To correlate clinical examination with underlying genotype in asymptomatic females who are potential carriers of X-linked developmental cataract (Nance-Horan syndrome). An ophthalmologist blind to the pedigree performed comprehensive ophthalmic examination for 16 available family members (two affected and six asymptomatic females, five affected and three asymptomatic males). Facial features were also noted. Venous blood was collected for sequencing of the gene NHS. All seven affected family members had congenital or infantile cataract and facial dysmorphism (long face, bulbous nose, abnormal dentition). The six asymptomatic females ranged in age from 4-35 years old. Four had posterior Y-suture centered lens opacities; these four also exhibited the facial dysmorphism of the seven affected family members. The fifth asymptomatic girl had scattered fine punctate lens opacities (not centered on the Y-suture) while the sixth had clear lenses, and neither exhibited the facial dysmorphism. A novel NHS mutation (p.Lys744AsnfsX15 [c.2232delG]) was found in the seven patients with congenital or infantile cataract. This mutation was also present in the four asymptomatic girls with Y-centered lens opacities but not in the other two asymptomatic girls or in the three asymptomatic males (who had clear lenses). Lens opacities centered around the posterior Y-suture in the context of certain facial features were sensitive and specific clinical signs of carrier status for NHS mutation in asymptomatic females. Lens opacities that did not have this characteristic morphology in a suspected female carrier were not a carrier sign, even in the context of her affected family members.

  17. The aging African-American face.

    PubMed

    Brissett, Anthony E; Naylor, Michelle C

    2010-05-01

    With the desire to create a more youthful appearance, patients of all races and ethnicities are increasingly seeking nonsurgical and surgical rejuvenation. In particular, facial rejuvenation procedures have grown significantly within the African-American population. This increase has resulted in a paradigm shift in facial plastic surgery as one considers rejuvenation procedures in those of African descent, as the aging process of various racial groups differs from traditional models. The purpose of this article is to draw attention to the facial features unique to those of African descent and the role these features play in the aging process, taking care to highlight the differences from traditional models of facial aging. In addition, this article will briefly describe the nonsurgical and surgical options for facial rejuvenation taking into consideration the previously discussed facial aging differences and postoperative considerations. Thieme Medical Publishers.

  18. Incongruence Between Observers’ and Observed Facial Muscle Activation Reduces Recognition of Emotional Facial Expressions From Video Stimuli

    PubMed Central

    Wingenbach, Tanja S. H.; Brosnan, Mark; Pfaltz, Monique C.; Plichta, Michael M.; Ashwin, Chris

    2018-01-01

    According to embodied cognition accounts, viewing others’ facial emotion can elicit the respective emotion representation in observers which entails simulations of sensory, motor, and contextual experiences. In line with that, published research found viewing others’ facial emotion to elicit automatic matched facial muscle activation, which was further found to facilitate emotion recognition. Perhaps making congruent facial muscle activity explicit produces an even greater recognition advantage. If there is conflicting sensory information, i.e., incongruent facial muscle activity, this might impede recognition. The effects of actively manipulating facial muscle activity on facial emotion recognition from videos were investigated across three experimental conditions: (a) explicit imitation of viewed facial emotional expressions (stimulus-congruent condition), (b) pen-holding with the lips (stimulus-incongruent condition), and (c) passive viewing (control condition). It was hypothesised that (1) experimental condition (a) and (b) result in greater facial muscle activity than (c), (2) experimental condition (a) increases emotion recognition accuracy from others’ faces compared to (c), (3) experimental condition (b) lowers recognition accuracy for expressions with a salient facial feature in the lower, but not the upper face area, compared to (c). Participants (42 males, 42 females) underwent a facial emotion recognition experiment (ADFES-BIV) while electromyography (EMG) was recorded from five facial muscle sites. The experimental conditions’ order was counter-balanced. Pen-holding caused stimulus-incongruent facial muscle activity for expressions with facial feature saliency in the lower face region, which reduced recognition of lower face region emotions. Explicit imitation caused stimulus-congruent facial muscle activity without modulating recognition. Methodological implications are discussed. PMID:29928240

  20. A Report of Two Cases of Solid Facial Edema in Acne.

    PubMed

    Kuhn-Régnier, Sarah; Mangana, Joanna; Kerl, Katrin; Kamarachev, Jivko; French, Lars E; Cozzio, Antonio; Navarini, Alexander A

    2017-03-01

    Solid facial edema (SFE) is a rare complication of acne vulgaris. Our aim was to examine the clinical features of acne patients with solid facial edema, and to give an overview of the outcomes of previous topical and systemic treatments in the cases published so far. We report two cases from Switzerland, both young men with initially papulopustular acne resistant to topical retinoids. Both cases responded to oral isotretinoin, in one case combined with oral steroids. Our cases show a strikingly similar clinical appearance to the cases described by Connelly and Winkelmann in 1985 (Connelly MG, Winkelmann RK. Solid facial edema as a complication of acne vulgaris. Arch Dermatol. 1985;121(1):87), as well as to cases of Morbihan's disease, which occurs as a rare complication of rosacea. Even 30 years later, the cause of the edema remains unknown. In two of the original four cases, a potential triggering factor such as facial trauma or insect bites was identified; however, our two patients did not report such occurrences. The rare cases of solid facial edema in both acne and rosacea might hold the key to understanding the specific inflammatory pattern that creates both persisting inflammation and disturbed fluid homeostasis, which can occur in slightly different presentations in dermatomyositis, angioedema, Heerfordt's syndrome and other conditions.

  1. Influence of gravity upon some facial signs.

    PubMed

    Flament, F; Bazin, R; Piot, B

    2015-06-01

    Facial clinical signs and their integration are the basis of the perception that others have of us, notably the age they imagine us to be. Objective measurement of facial modifications in motion, before and after application of a skin regimen, is essential for going further in evaluating efficacy in facial dynamics. Quantifying how the face changes with respect to gravity allows us to address the 'control' of facial shape in daily activities. Standardized photographs of the faces of 30 Caucasian female subjects of various ages (24-73 years) were successively taken in upright and supine positions within a short time interval. All these pictures were then reframed, so that any bias due to facial features was avoided when evaluating a single sign, for clinical grading by trained experts of several facial signs against published standardized photographic scales. For all subjects, the supine position increased facial width but not height, giving a fuller appearance to the face. More importantly, the supine position changed the severity of facial ageing features (e.g. wrinkles) compared with the upright position, and whether these features were attenuated or exacerbated depended on their facial location. The supine position mostly modifies signs of the lower half of the face, whereas those of the upper half appear unchanged or slightly accentuated. These changes appear much more marked in the older groups, where some deep labial folds almost vanish. These alterations decreased the perceived ages of the subjects by an average of 3.8 years. Although preliminary, this study suggests that a 90° rotation of the facial skin with respect to gravity induces rapid rearrangements, among which changes in tensional forces within and across the face, motility of interstitial free water in underlying skin tissue and/or alterations of facial Langer lines likely play a significant role. © 2015 Society of Cosmetic Scientists and the Société Française de Cosmétologie.

  2. Music-Elicited Emotion Identification Using Optical Flow Analysis of Human Face

    NASA Astrophysics Data System (ADS)

    Kniaz, V. V.; Smirnova, Z. N.

    2015-05-01

    Human emotion identification from image sequences is in high demand nowadays. Possible applications range from the automatic smile-shutter function of consumer-grade digital cameras to Biofied Building technologies, which enable communication between a building space and its residents. The highly perceptual nature of human emotions makes their classification and identification complex. The main question arises from the subjective quality of the emotional classification of events that elicit human emotions. A variety of methods for the formal classification of emotions have been developed in musical psychology. This work focuses on identifying human emotions evoked by musical pieces using human face tracking and optical flow analysis. A facial feature tracking algorithm used for estimating facial feature speed and position is presented. Facial features were extracted from each image sequence using face tracking with local binary pattern (LBP) features. Accurate relative speeds of facial features were estimated using optical flow analysis. The obtained relative positions and speeds were used as the output facial emotion vector. The algorithm was tested using original software and recorded image sequences. The proposed technique proves to give robust identification of human emotions elicited by musical pieces. The estimated models could be used for human emotion identification from image sequences in such fields as emotion-based musical backgrounds or mood-dependent radio.
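
    The tracking-plus-flow step can be sketched with OpenCV: detect trackable points on the face in the first frame, follow them with pyramidal Lucas-Kanade optical flow, and record per-frame speeds as input to the emotion vector. Detector choice and parameters are illustrative, not the paper's:

      import cv2
      import numpy as np

      def feature_speeds(frames):
          """frames: list of 8-bit grayscale images of one face. Returns
          the mean displacement (px/frame) of tracked points per frame."""
          pts = cv2.goodFeaturesToTrack(frames[0], maxCorners=50,
                                        qualityLevel=0.01, minDistance=7)
          speeds = []
          for prev, curr in zip(frames[:-1], frames[1:]):
              nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev, curr, pts, None)
              ok = status.ravel() == 1
              # Mean speed of successfully tracked points this frame.
              speeds.append(np.linalg.norm((nxt - pts)[ok], axis=2).mean())
              pts = nxt
          return np.array(speeds)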

  3. Evaluation of appearance transfer and persistence in central face transplantation: a computer simulation analysis.

    PubMed

    Pomahac, Bohdan; Aflaki, Pejman; Nelson, Charles; Balas, Benjamin

    2010-05-01

    Partial facial allotransplantation is an emerging option in the reconstruction of central facial defects, providing function and aesthetic appearance. Ethical debate partly stems from uncertainty surrounding the identity aspects of the procedure. There is no objective evidence regarding the effect of donors' transplanted facial structures on the appearance change of recipients and its influence on facial recognition of donors and recipients. Full-face frontal-view color photographs of 100 volunteers were taken at a distance of 150 cm with a digital camera (Nikon/DX80). Photographs were taken in front of a blue background and with a neutral facial expression. Using image-editing software (Adobe-Photoshop-CS3), central facial transplantation was simulated between participants. Twenty observers performed a familiar-face recognition task to identify 40 post-transplant composite faces presented individually on a screen at a viewing distance of 60 cm, with an exposure time of 5 s. Each composite face comprised a face familiar to the observers and an unfamiliar one. Trials were run with and without external facial features (head contour, hair and ears). Two variables were defined: 'Appearance Transfer' refers to the transfer of the donor's appearance to the recipient; 'Appearance Persistence' deals with the extent of the recipient's appearance change post-transplantation. A t-test was run to determine whether the rates of Appearance Transfer differed from those of Appearance Persistence. The average Appearance Transfer rate (2.6%) was significantly lower than the Appearance Persistence rate (66%) (P<0.001), indicating that the transfer of the donor's appearance to the recipient is negligible, whereas recipients will be identified the majority of the time. External facial features were important in the facial recognition of recipients, evidenced by a significant rise in Appearance Persistence from 19% in the absence of external features to 66% when those features were present (P<0.01). This study may be helpful in the informed consent process for prospective recipients. It is beneficial for the education of donors' families and is expected to positively affect their decision to consent to facial tissue donation. Copyright (c) 2009 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.

  4. Perceptual and affective mechanisms in facial expression recognition: An integrative review.

    PubMed

    Calvo, Manuel G; Nummenmaa, Lauri

    2016-09-01

    Facial expressions of emotion involve a physical component of morphological changes in a face and an affective component conveying information about the expresser's internal feelings. It remains unresolved how much recognition and discrimination of expressions rely on the perception of morphological patterns or the processing of affective content. This review of research on the role of visual and emotional factors in expression recognition reached three major conclusions. First, behavioral, neurophysiological, and computational measures indicate that basic expressions are reliably recognized and discriminated from one another, albeit the effect may be inflated by the use of prototypical expression stimuli and forced-choice responses. Second, affective content along the dimensions of valence and arousal is extracted early from facial expressions, although this coarse affective representation contributes minimally to categorical recognition of specific expressions. Third, the physical configuration and visual saliency of facial features contribute significantly to expression recognition, with "emotionless" computational models being able to reproduce some of the basic phenomena demonstrated in human observers. We conclude that facial expression recognition, as it has been investigated in conventional laboratory tasks, depends to a greater extent on perceptual than affective information and mechanisms.

  5. Associations among facial masculinity, physical strength, fluctuating asymmetry and attractiveness in young men and women.

    PubMed

    Van Dongen, Stefan

    2014-01-01

    Studies of the process of human mate selection and attractiveness have assumed that selection favours morphological features that correlate with (genetic) quality. Degree of masculinity/femininity and fluctuating asymmetry (FA) may signal (genetic) quality, but what information they harbour and how they relate to fitness is still debated. We studied the strength of associations between facial masculinity/femininity, facial FA, attractiveness, and physical strength in humans. Two hundred young males and females were studied by measuring facial asymmetry and masculinity on the basis of frontal photographs. Attractiveness was determined from scores given by an anonymous panel, and physical strength from hand-grip strength. Patterns differed markedly between males and females and with the analysis method used (univariate vs. multivariate). Overall, no associations between FA and attractiveness, masculinity, or physical strength were found. In females, but not males, masculinity and attractiveness correlated negatively, and masculinity and physical strength correlated positively. Further research into the differences between males and females in the associations between facial morphology, attractiveness, and physical strength is clearly needed. The use of a multivariate approach can increase our understanding of which regions of the face harbour specific information about hormone levels and perhaps behavioural traits.

  6. Appearance-based human gesture recognition using multimodal features for human computer interaction

    NASA Astrophysics Data System (ADS)

    Luo, Dan; Gao, Hua; Ekenel, Hazim Kemal; Ohya, Jun

    2011-03-01

    The use of gesture as a natural interface plays a crucially important role in achieving intelligent Human Computer Interaction (HCI). Human gestures include different components of visual actions, such as motion of the hands, facial expression, and torso, to convey meaning. So far, most previous work in the field of gesture recognition has focused on the manual component of gestures. In this paper, we present an appearance-based multimodal gesture recognition framework that combines different groups of features, such as facial expression features and hand motion features, extracted from image frames captured by a single web camera. We consider 12 classes of human gestures with facial expressions conveying neutral, negative, and positive meanings, drawn from American Sign Language (ASL). We combine the features at two levels by employing two fusion strategies. At the feature level, an early feature combination is performed by concatenating and weighting the different feature groups, and LDA is used to choose the most discriminative elements by projecting the features onto a discriminative expression space. The second strategy is applied at the decision level: weighted decisions from the single modalities are fused at a later stage. A condensation-based algorithm is adopted for classification. We collected a data set with three to seven recording sessions and conducted experiments with the combination techniques. Experimental results showed that facial analysis improves hand gesture recognition, and that decision-level fusion performs better than feature-level fusion.
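    The two fusion strategies can be summarized in a few lines. The sketch below, which assumes scikit-learn and illustrative modality weights (not the authors' learned values), contrasts early concatenation with LDA projection against late weighted averaging of per-modality decisions.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        def feature_level_fusion(face_feats, hand_feats, labels, w_face=0.4, w_hand=0.6):
            # Early fusion: weight and concatenate the feature groups, then let LDA
            # project them onto a discriminative expression space.
            fused = np.hstack([w_face * face_feats, w_hand * hand_feats])
            return LinearDiscriminantAnalysis().fit(fused, labels)

        def decision_level_fusion(p_face, p_hand, w_face=0.4, w_hand=0.6):
            # Late fusion: combine per-modality class posteriors with fixed weights.
            return np.argmax(w_face * p_face + w_hand * p_hand, axis=1)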

  7. Hepatitis Diagnosis Using Facial Color Image

    NASA Astrophysics Data System (ADS)

    Liu, Mingjia; Guo, Zhenhua

    Facial color diagnosis is an important diagnostic method in traditional Chinese medicine (TCM). However, due to its qualitative, subjective, and experience-based nature, traditional facial color diagnosis has very limited application in clinical medicine. To circumvent the subjective and qualitative problems of the facial color diagnosis of TCM, in this paper we present a novel computer-aided facial color diagnosis method (CAFCDM). The method has three parts: a face image database, an image preprocessing module, and a diagnosis engine. The face image database was built from a group of 116 patients affected by two kinds of liver disease and 29 healthy volunteers. Quantitative color features are extracted from the facial images using popular digital image processing techniques. A KNN classifier is then employed to model the relationship between the quantitative color features and the diseases. The results show that the method can properly identify three groups (healthy, severe hepatitis with jaundice, and severe hepatitis without jaundice) with accuracy higher than 73%.
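    The diagnosis engine reduces to a color feature plus a nearest-neighbor model. The sketch below, assuming scikit-learn, uses the mean color of pre-segmented facial skin as the quantitative feature; the feature choice and the value of k are illustrative, not the paper's exact settings.

        import numpy as np
        from sklearn.neighbors import KNeighborsClassifier

        def color_feature(face_pixels):
            # face_pixels: (n, 3) array of segmented facial skin pixels; the mean
            # color is one simple quantitative feature for TCM-style diagnosis.
            return face_pixels.reshape(-1, 3).mean(axis=0)

        def train_diagnosis_engine(X, y, k=5):
            # X: one color feature per subject; y: 0 = healthy,
            # 1 = severe hepatitis with jaundice, 2 = severe hepatitis without.
            return KNeighborsClassifier(n_neighbors=k).fit(X, y)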

  8. Facial Affect Recognition Using Regularized Discriminant Analysis-Based Algorithms

    NASA Astrophysics Data System (ADS)

    Lee, Chien-Cheng; Huang, Shin-Sheng; Shih, Cheng-Yuan

    2010-12-01

    This paper presents a novel and effective method for facial expression recognition covering happiness, disgust, fear, anger, sadness, surprise, and the neutral state. The proposed method utilizes a regularized discriminant analysis-based boosting algorithm (RDAB) with effective Gabor features to recognize the facial expressions. An entropy criterion is applied to select effective Gabor features, a subset of informative and non-redundant Gabor features. The proposed RDAB algorithm uses RDA as the learner in the boosting algorithm. RDA combines the strengths of linear discriminant analysis (LDA) and quadratic discriminant analysis (QDA): it solves the small-sample-size and ill-posed problems suffered by QDA and LDA through a regularization technique. Additionally, this study uses the particle swarm optimization (PSO) algorithm to estimate the optimal parameters in RDA. Experimental results demonstrate that our approach can accurately and robustly recognize facial expressions.
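    At the heart of RDA is a doubly regularized covariance estimate that interpolates between QDA (class-specific covariance) and LDA (pooled covariance), with additional shrinkage toward a scaled identity. A minimal sketch following Friedman's standard formulation is given below; in the paper the two parameters are tuned by PSO, whereas fixed defaults are used here for illustration.

        import numpy as np

        def rda_covariance(X_class, X_all, lam=0.5, gamma=0.1):
            # lam blends class-specific (QDA-like) and pooled (LDA-like) covariance;
            # gamma shrinks the result toward a scaled identity to fix ill-posedness.
            cov_k = np.cov(X_class, rowvar=False)
            cov_pooled = np.cov(X_all, rowvar=False)
            cov = (1 - lam) * cov_k + lam * cov_pooled
            d = cov.shape[0]
            return (1 - gamma) * cov + gamma * (np.trace(cov) / d) * np.eye(d)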

  9. Discrimination of emotional facial expressions by tufted capuchin monkeys (Sapajus apella).

    PubMed

    Calcutt, Sarah E; Rubin, Taylor L; Pokorny, Jennifer J; de Waal, Frans B M

    2017-02-01

    Tufted or brown capuchin monkeys (Sapajus apella) have been shown to recognize conspecific faces as well as categorize them according to group membership. Little is known, though, about their capacity to differentiate between emotionally charged facial expressions or whether facial expressions are processed as a collection of features or configurally (i.e., as a whole). In 3 experiments, we examined whether tufted capuchins (a) differentiate photographs of neutral faces from either affiliative or agonistic expressions, (b) use relevant facial features to make such choices or view the expression as a whole, and (c) demonstrate an inversion effect for facial expressions suggestive of configural processing. Using an oddity paradigm presented on a computer touchscreen, we collected data from 9 adult and subadult monkeys. Subjects discriminated between emotional and neutral expressions with an exceptionally high success rate, including differentiating open-mouth threats from neutral expressions even when the latter contained varying degrees of visible teeth and mouth opening. They also showed an inversion effect for facial expressions, results that may indicate that quickly recognizing expressions does not originate solely from feature-based processing but likely a combination of relational processes. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  10. High-resolution face verification using pore-scale facial features.

    PubMed

    Li, Dong; Zhou, Huiling; Lam, Kin-Man

    2015-08-01

    Face recognition methods, which usually represent face images using holistic or local facial features, rely heavily on alignment. Their performance also suffers severe degradation under variations in expression or pose, especially when there is only one gallery image per subject. With the easy access to high-resolution (HR) face images nowadays, some HR face databases have recently been developed. However, few studies have tackled the use of HR information for face recognition or verification. In this paper, we propose a pose-invariant face-verification method, robust to alignment errors, that uses the HR information based on pore-scale facial features. A new keypoint descriptor, namely pore-Principal Component Analysis (PCA)-Scale Invariant Feature Transform (PPCASIFT), adapted from PCA-SIFT, is devised for the extraction of a compact set of distinctive pore-scale facial features. Having matched the pore-scale features of two face regions, an effective robust-fitting scheme is proposed for the face-verification task. Experiments show that, with only one frontal-view gallery image per subject, our proposed method outperforms a number of standard verification methods and can achieve excellent accuracy even when the faces are under large variations in expression and pose.
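    The descriptor construction follows the PCA-SIFT recipe: gradient-based local descriptors at fine-scale keypoints, compressed by a pre-learned PCA basis. The sketch below uses OpenCV's stock SIFT as a stand-in for the authors' adapted pore-scale descriptor, so the detector settings are assumptions.

        import cv2
        from sklearn.decomposition import PCA

        def pore_scale_descriptors(gray_face, pca):
            # A low contrast threshold encourages dense, pore-scale keypoints.
            sift = cv2.SIFT_create(nfeatures=2000, contrastThreshold=0.01)
            keypoints, desc = sift.detectAndCompute(gray_face, None)
            # pca: a PCA basis fitted offline on training descriptors; projection
            # yields the compact features that are then matched between face regions.
            return keypoints, pca.transform(desc)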

  11. Familiarity effects in the construction of facial-composite images using modern software systems.

    PubMed

    Frowd, Charlie D; Skelton, Faye C; Butt, Neelam; Hassan, Amal; Fields, Stephen; Hancock, Peter J B

    2011-12-01

    We investigate the effect of target familiarity on the construction of facial composites, as used by law enforcement to locate criminal suspects. Two popular software construction methods were investigated. Participants were shown a target face that was either familiar or unfamiliar to them and constructed a composite of it from memory using a typical 'feature' system, involving selection of individual facial features, or one of the newer 'holistic' types, involving repeated selection and breeding from arrays of whole faces. The study found that composites constructed of a familiar face were named more successfully than composites of an unfamiliar face; also, naming of composites of internal and external features was equivalent for unfamiliar targets, but internal features were better named than external features for familiar targets. These findings applied to both systems, although a benefit emerged for the holistic type due to more accurate construction of internal features and evidence of a whole-face advantage. STATEMENT OF RELEVANCE: This work is of relevance to practitioners who construct facial composites with witnesses to and victims of crime, as well as to software designers, to help them improve the effectiveness of their composite systems.

  12. Using Event Related Potentials to Explore Stages of Facial Affect Recognition Deficits in Schizophrenia

    PubMed Central

    Wynn, Jonathan K.; Lee, Junghee; Horan, William P.; Green, Michael F.

    2008-01-01

    Schizophrenia patients show impairments in identifying facial affect; however, it is not known at what stage facial affect processing is impaired. We evaluated 3 event-related potentials (ERPs) to explore the stages of facial affect processing in schizophrenia patients. Twenty-six schizophrenia patients and 27 normal controls participated. In separate blocks, subjects identified the gender of a face, the emotion of a face, or whether a building had 1 or 2 stories. Three ERPs were examined: (1) P100, to examine basic visual processing; (2) N170, to examine facial feature encoding; and (3) N250, to examine affect decoding. Behavioral performance on each task was also measured. Results showed that the schizophrenia patients' P100 was comparable to that of the controls during all 3 identification tasks. Both patients and controls exhibited a comparable N170 that was largest during processing of faces and smallest during processing of buildings. For both groups, the N250 was largest during the emotion identification task and smallest during the building identification task. However, the patients produced a smaller N250 compared with the controls across the 3 tasks. The groups did not differ in behavioral performance on any of the 3 identification tasks. The pattern of intact P100 and N170 suggests that patients maintain basic visual processing and facial feature encoding abilities. The abnormal N250 suggests that schizophrenia patients are less efficient at decoding facial affect features. Our results imply that abnormalities in the later stage of feature decoding could potentially underlie emotion identification deficits in schizophrenia. PMID:18499704

  13. Chronic neuropathic facial pain after intense pulsed light hair removal. Clinical features and pharmacological management.

    PubMed

    Gay-Escoda, Cosme; Párraga-Manzol, Gabriela; Sánchez-Torres, Alba; Moreno-Arias, Gerardo

    2015-10-01

    Intense pulsed light (IPL) photodepilation is usually performed as a hair removal method. The treatment should be indicated by a physician, depending on each patient and his or her characteristics. However, the use of laser devices by medical laypersons is frequent, and it can pose a risk of injury to patients. Most side effects associated with IPL photodepilation are transient and minimal and disappear without sequelae. However, permanent side effects can occur. Some of the complications are laser-related, but many are caused by operator error or mismanagement. In this work, we report the clinical case of a patient who developed chronic neuropathic facial pain following IPL hair removal for unwanted hair on the upper lip. The specific diagnosis was painful post-traumatic trigeminal neuropathy, reference 13.1.2.3 according to the International Headache Society (IHS). Key words: neuropathic facial pain, photodepilation, intense pulsed light.

  14. [Petrous plasmacytoma revealed by a painful peripheral facial palsy].

    PubMed

    Lagarde, J; Cret, C; Karlin, L; Ameri, A

    2011-01-01

    The classical hypothesis of Bell's palsy, tempting in cases of peripheral facial palsy of rapid onset, must nevertheless be evoked with caution, particularly if intense pain is present, which should prompt a search for a tumor of the skull base, especially of the petrous bone. A 43-year-old man presented with a peripheral facial palsy of rapidly progressive onset. A petrous bone tumor was diagnosed on CT scan, with an appearance suggesting a glomus tumor or a metastatic lesion. The final histological diagnosis was plasmacytoma. This type of tumor has rarely been reported in this location. The radiological features are not at all specific, underlining the importance of searching for associated signs such as a monoclonal protein and of performing a histological examination when a firm diagnosis of a systemic disease like multiple myeloma has not been possible. Copyright © 2010 Elsevier Masson SAS. All rights reserved.

  15. [Advances in the research of pressure therapy for pediatric burn patients with facial scar].

    PubMed

    Wei, Y T; Fu, J F; Li-Tsang, Z H P

    2017-05-20

    Facial scars and deformation caused by burn injury severely affect the physical and psychological well-being of pediatric burn patients, and call for close attention from medical workers and the patients' family members as well as early rehabilitation treatment. Pressure therapy is an important rehabilitative strategy for pediatric burn patients with facial scars, mainly involving the wearing of headgear and transparent pressure facemasks, each of which has its own features. To achieve better treatment results, pressure therapy should be chosen according to the specific condition of the patient and combined with other assistive therapies. Successful rehabilitation of pediatric burn patients relies on the cooperation of both the patients' family members and society. Rehabilitation knowledge should be provided to the parents of pediatric burn patients to secure their full support and cooperation, in order to achieve the best therapeutic effects and ultimately to rebuild the physical and psychological well-being of pediatric burn patients.

  16. Cosmetics as a feature of the extended human phenotype: modulation of the perception of biologically important facial signals.

    PubMed

    Etcoff, Nancy L; Stock, Shannon; Haley, Lauren E; Vickery, Sarah A; House, David M

    2011-01-01

    Research on the perception of faces has focused on the size, shape, and configuration of inherited features or the biological phenotype, and largely ignored the effects of adornment, or the extended phenotype. Research on the evolution of signaling has shown that animals frequently alter visual features, including color cues, to attract, intimidate or protect themselves from conspecifics. Humans engage in conscious manipulation of visual signals using cultural tools in real time rather than genetic changes over evolutionary time. Here, we investigate one tool, the use of color cosmetics. In two studies, we asked viewers to rate the same female faces with or without color cosmetics, and we varied the style of makeup from minimal (natural), to moderate (professional), to dramatic (glamorous). Each look provided increasing luminance contrast between the facial features and surrounding skin. Faces were shown for 250 ms or for unlimited inspection time, and subjects rated them for attractiveness, competence, likeability and trustworthiness. At 250 ms, cosmetics had significant positive effects on all outcomes. Length of inspection time did not change the effect for competence or attractiveness. However, with longer inspection time, the effect of cosmetics on likability and trust varied by specific makeup looks, indicating that cosmetics could impact automatic and deliberative judgments differently. The results suggest that cosmetics can create supernormal facial stimuli, and that one way they may do so is by exaggerating cues to sexual dimorphism. Our results provide evidence that judgments of facial trustworthiness and attractiveness are at least partially separable, that beauty has a significant positive effect on judgment of competence, a universal dimension of social cognition, but has a more nuanced effect on the other universal dimension of social warmth, and that the extended phenotype significantly influences perception of biologically important signals at first glance and at longer inspection.

  17. Cosmetics as a Feature of the Extended Human Phenotype: Modulation of the Perception of Biologically Important Facial Signals

    PubMed Central

    Etcoff, Nancy L.; Stock, Shannon; Haley, Lauren E.; Vickery, Sarah A.; House, David M.

    2011-01-01

    Research on the perception of faces has focused on the size, shape, and configuration of inherited features or the biological phenotype, and largely ignored the effects of adornment, or the extended phenotype. Research on the evolution of signaling has shown that animals frequently alter visual features, including color cues, to attract, intimidate or protect themselves from conspecifics. Humans engage in conscious manipulation of visual signals using cultural tools in real time rather than genetic changes over evolutionary time. Here, we investigate one tool, the use of color cosmetics. In two studies, we asked viewers to rate the same female faces with or without color cosmetics, and we varied the style of makeup from minimal (natural), to moderate (professional), to dramatic (glamorous). Each look provided increasing luminance contrast between the facial features and surrounding skin. Faces were shown for 250 ms or for unlimited inspection time, and subjects rated them for attractiveness, competence, likeability and trustworthiness. At 250 ms, cosmetics had significant positive effects on all outcomes. Length of inspection time did not change the effect for competence or attractiveness. However, with longer inspection time, the effect of cosmetics on likability and trust varied by specific makeup looks, indicating that cosmetics could impact automatic and deliberative judgments differently. The results suggest that cosmetics can create supernormal facial stimuli, and that one way they may do so is by exaggerating cues to sexual dimorphism. Our results provide evidence that judgments of facial trustworthiness and attractiveness are at least partially separable, that beauty has a significant positive effect on judgment of competence, a universal dimension of social cognition, but has a more nuanced effect on the other universal dimension of social warmth, and that the extended phenotype significantly influences perception of biologically important signals at first glance and at longer inspection. PMID:21991328

  18. Principal component analysis of three-dimensional face shape: Identifying shape features that change with age.

    PubMed

    Kurosumi, M; Mizukoshi, K

    2018-05-01

    The types of shape feature that constitute a face have not been comprehensively established, and most previous studies of age-related changes in facial shape have focused on individual characteristics, such as wrinkles and sagging skin. In this study, we quantitatively measured differences in face shape between individuals and investigated how shape features changed with age. We analyzed three-dimensionally the faces of 280 Japanese women aged 20-69 years and used principal component analysis to establish the shape features that characterize individual differences. We also evaluated the relationship between each feature and age, clarifying the shape features characteristic of different age groups. Changes in facial shape in middle age were a decreased volume of the upper face and an increased volume of the whole cheeks and around the chin. Changes in older people were an increased volume of the lower cheeks and around the chin, sagging skin, and jaw distortion. Principal component analysis was effective for identifying facial shape features that represent individual and age-related differences. The method allowed straightforward measurement, such as of the increase or decrease in the cheeks caused by soft-tissue changes or of skeletally based changes to the forehead or jaw, simply by acquiring three-dimensional facial images. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
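    The core analysis can be sketched compactly: run PCA on flattened, aligned 3D coordinates and correlate each component score with age. The array shapes and component count below are assumptions based on the abstract, not the study's exact settings.

        import numpy as np
        from sklearn.decomposition import PCA

        def age_related_shape_features(shapes, ages, n_components=10):
            # shapes: (n_subjects, n_points, 3) aligned 3D facial scans
            X = shapes.reshape(len(shapes), -1)
            pca = PCA(n_components=n_components)
            scores = pca.fit_transform(X)
            # The Pearson correlation of each PC score with age flags the
            # shape features that change across age groups.
            r = np.array([np.corrcoef(scores[:, i], ages)[0, 1]
                          for i in range(n_components)])
            return pca, r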

  19. An audiovisual emotion recognition system

    NASA Astrophysics Data System (ADS)

    Han, Yi; Wang, Guoyin; Yang, Yong; He, Kun

    2007-12-01

    Human emotions can be expressed through many biological signals; speech and facial expression are two of them. Both are regarded as emotional information that plays an important role in human-computer interaction. Based on our previous studies on emotion recognition, an audiovisual emotion recognition system was developed and is presented in this paper. The system is designed for real-time use and is supported by several integrated modules: speech enhancement for eliminating noise, rapid face detection for locating the face in the background image, example-based shape learning for facial feature alignment, and an optical flow-based tracking algorithm for facial feature tracking. It is known that irrelevant features and high dimensionality of the data can hurt classifier performance, and rough set-based feature selection is a good method for dimension reduction. Accordingly, 13 of 37 speech features and 10 of 33 facial features were selected to represent the emotional information, and 52 audiovisual features were selected, owing to synchronization, when speech and video were fused. The experimental results demonstrate that the system performs well in real time and has a high recognition rate. Our results also suggest that multimodal fusion-based recognition will become the trend in emotion recognition in the future.
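    The paper's dimension reduction is rough set-based; as a simple stand-in for that step, the sketch below ranks features by mutual information with the emotion label and keeps the top k (13 speech and 10 facial features in the paper), assuming scikit-learn.

        import numpy as np
        from sklearn.feature_selection import mutual_info_classif

        def select_features(X, y, k):
            # X: (n_samples, n_features) speech or facial features; y: emotion labels.
            mi = mutual_info_classif(X, y)
            return np.argsort(mi)[::-1][:k]  # indices of the k most informative features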

  20. Common and Unique Impairments in Facial-Expression Recognition in Pervasive Developmental Disorder-Not Otherwise Specified and Asperger's Disorder

    ERIC Educational Resources Information Center

    Uono, Shota; Sato, Wataru; Toichi, Motomi

    2013-01-01

    This study was designed to identify specific difficulties and associated features related to the problems with social interaction experienced by individuals with pervasive developmental disorder-not otherwise specified (PDD-NOS) using an emotion-recognition task. We compared individuals with PDD-NOS or Asperger's disorder (ASP) and typically…

  1. Consensus on Current Injectable Treatment Strategies in the Asian Face.

    PubMed

    Wu, Woffles T L; Liew, Steven; Chan, Henry H; Ho, Wilson W S; Supapannachart, Nantapat; Lee, Hong-Ki; Prasetyo, Adri; Yu, Jonathan Nevin; Rogers, John D

    2016-04-01

    The desire for and use of nonsurgical injectable esthetic facial treatments are increasing in Asia. The structural and anatomical features specific to the Asian face, and differences from Western populations in facial aging, necessitate unique esthetic treatment strategies, but published recommendations and clinical evidence for injectable treatments in Asians are scarce. The Asian Facial Aesthetics Expert Consensus Group met to discuss current practices and consensus opinions on the cosmetic use of botulinum toxin and hyaluronic acid (HA) fillers, alone and in combination, for facial applications in Southeastern and Eastern Asians. Consensus opinions and statements on treatment aims and current practice were developed following discussions of pre-meeting and meeting survey outcomes, the peer-reviewed literature, and the experts' clinical experience. The indications and patterns of use of injectable treatments vary among patients of different ages and among Asian countries. The combined use of botulinum toxin and fillers increases as patients age. Treatment aims in Asians and current practice regarding the use of botulinum toxin and HA fillers in the upper, middle, and lower face of patients aged 18 to >55 years are presented. In younger Asian patients, addressing proportions, structural features, and deficiencies is important to achieve the desired esthetic outcomes. In older patients, maintaining facial structure and volume and addressing lines and folds are essential to reduce the appearance of aging. This paper provides guidance on treatment strategies to address the complex esthetic requirements of Asian patients of all ages. This journal requires that the authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266.

  2. Optimal Geometrical Set for Automated Marker Placement to Virtualized Real-Time Facial Emotions

    PubMed Central

    Maruthapillai, Vasanthan; Murugappan, Murugappan

    2016-01-01

    In recent years, real-time face recognition has been a major topic of interest in developing intelligent human-machine interaction systems. Over the past several decades, researchers have proposed different algorithms for facial expression recognition, but there has been little focus on detection in real-time scenarios. The present work proposes a new algorithmic method of automated marker placement used to classify six facial expressions: happiness, sadness, anger, fear, disgust, and surprise. Emotional facial expressions were captured using a webcam, while the proposed algorithm placed a set of eight virtual markers on each subject’s face. Facial feature extraction methods, including marker distance (distance between each marker to the center of the face) and change in marker distance (change in distance between the original and new marker positions), were used to extract three statistical features (mean, variance, and root mean square) from the real-time video sequence. The initial position of each marker was subjected to the optical flow algorithm for marker tracking with each emotional facial expression. Finally, the extracted statistical features were mapped into corresponding emotional facial expressions using two simple non-linear classifiers, K-nearest neighbor and probabilistic neural network. The results indicate that the proposed automated marker placement algorithm effectively placed eight virtual markers on each subject’s face and gave a maximum mean emotion classification rate of 96.94% using the probabilistic neural network. PMID:26859884

  3. Optimal Geometrical Set for Automated Marker Placement to Virtualized Real-Time Facial Emotions.

    PubMed

    Maruthapillai, Vasanthan; Murugappan, Murugappan

    2016-01-01

    In recent years, real-time face recognition has been a major topic of interest in developing intelligent human-machine interaction systems. Over the past several decades, researchers have proposed different algorithms for facial expression recognition, but there has been little focus on detection in real-time scenarios. The present work proposes a new algorithmic method of automated marker placement used to classify six facial expressions: happiness, sadness, anger, fear, disgust, and surprise. Emotional facial expressions were captured using a webcam, while the proposed algorithm placed a set of eight virtual markers on each subject's face. Facial feature extraction methods, including marker distance (distance between each marker to the center of the face) and change in marker distance (change in distance between the original and new marker positions), were used to extract three statistical features (mean, variance, and root mean square) from the real-time video sequence. The initial position of each marker was subjected to the optical flow algorithm for marker tracking with each emotional facial expression. Finally, the extracted statistical features were mapped into corresponding emotional facial expressions using two simple non-linear classifiers, K-nearest neighbor and probabilistic neural network. The results indicate that the proposed automated marker placement algorithm effectively placed eight virtual markers on each subject's face and gave a maximum mean emotion classification rate of 96.94% using the probabilistic neural network.
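    The extracted features in the two records above reduce to three statistics over two per-marker distance series. A minimal sketch follows, assuming a (frames x 8 markers x 2 coordinates) tracking array; the face-center estimate is an illustrative simplification.

        import numpy as np

        def marker_features(markers):
            # markers: (n_frames, 8, 2) virtual marker positions from optical flow
            center = markers.mean(axis=1, keepdims=True)      # per-frame face center
            dist = np.linalg.norm(markers - center, axis=2)   # marker-to-center distance
            delta = np.abs(dist - dist[0])                    # change from initial distance
            feats = []
            for series in (dist, delta):
                feats += [series.mean(axis=0), series.var(axis=0),
                          np.sqrt((series ** 2).mean(axis=0))]  # mean, variance, RMS
            return np.concatenate(feats)  # 6 statistics x 8 markers = 48 values

    The resulting vector would then be fed to a K-nearest-neighbor or probabilistic neural network classifier, as the abstract describes.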

  4. Facial approximation-from facial reconstruction synonym to face prediction paradigm.

    PubMed

    Stephan, Carl N

    2015-05-01

    Facial approximation was first proposed as a synonym for facial reconstruction in 1987 due to dissatisfaction with the connotations the latter label held. Since its debut, facial approximation's identity has morphed as anomalies in face prediction have accumulated. Now underpinned by differences in what problems are thought to count as legitimate, facial approximation can no longer be considered a synonym for, or subclass of, facial reconstruction. Instead, two competing paradigms of face prediction have emerged, namely: facial approximation and facial reconstruction. This paper shines a Kuhnian lens across the discipline of face prediction to comprehensively review these developments and outlines the distinguishing features between the two paradigms. © 2015 American Academy of Forensic Sciences.

  5. Neural correlates of processing facial identity based on features versus their spacing.

    PubMed

    Maurer, D; O'Craven, K M; Le Grand, R; Mondloch, C J; Springer, M V; Lewis, T L; Grady, C L

    2007-04-08

    Adults' expertise in recognizing facial identity involves encoding subtle differences among faces in the shape of individual facial features (featural processing) and in the spacing among features (a type of configural processing called sensitivity to second-order relations). We used fMRI to investigate the neural mechanisms that differentiate these two types of processing. Participants made same/different judgments about pairs of faces that differed only in the shape of the eyes and mouth, with minimal differences in spacing (featural blocks), or pairs of faces that had identical features but differed in the positions of those features (spacing blocks). From a localizer scan with faces, objects, and houses, we identified regions with comparatively more activity for faces, including the fusiform face area (FFA) in the right fusiform gyrus, other extrastriate regions, and prefrontal cortices. Contrasts between the featural and spacing conditions revealed distributed patterns of activity differentiating the two conditions. A region of the right fusiform gyrus (near but not overlapping the localized FFA) showed greater activity during the spacing task, along with multiple areas of right frontal cortex, whereas left prefrontal activity increased for featural processing. These patterns of activity were not related to differences in performance between the two tasks. The results indicate that the processing of facial features is distinct from the processing of second-order relations in faces, and that these functions are mediated by separate and lateralized networks involving the right fusiform gyrus, although the FFA as defined from a localizer scan is not differentially involved.

  6. Bell's Palsy.

    PubMed

    Reich, Stephen G

    2017-04-01

    Bell's palsy is a common outpatient problem, and while the diagnosis is usually straightforward, a number of diagnostic pitfalls can occur, and a lengthy differential diagnosis exists. Recognition and management of Bell's palsy relies on knowledge of the anatomy and function of the various motor and nonmotor components of the facial nerve. Avoiding diagnostic pitfalls relies on recognizing red flags or features atypical for Bell's palsy, suggesting an alternative cause of peripheral facial palsy. The first American Academy of Neurology (AAN) evidence-based review on the treatment of Bell's palsy in 2001 concluded that corticosteroids were probably effective and that the antiviral acyclovir was possibly effective in increasing the likelihood of a complete recovery from Bell's palsy. Subsequent studies led to a revision of these recommendations in the 2012 evidence-based review, concluding that corticosteroids, when used shortly after the onset of Bell's palsy, were "highly likely" to increase the probability of recovery of facial weakness and should be offered; the addition of an antiviral to steroids may increase the likelihood of recovery but, if so, only by a very modest effect. Bell's palsy is characterized by the spontaneous acute onset of unilateral peripheral facial paresis or palsy in isolation, meaning that no features from the history, neurologic examination, or head and neck examination suggest a specific or alternative cause. In this setting, no further testing is necessary. Even without treatment, the outcome of Bell's palsy is favorable, but treatment with corticosteroids significantly increases the likelihood of improvement.

  7. Recovering Faces from Memory: The Distracting Influence of External Facial Features

    ERIC Educational Resources Information Center

    Frowd, Charlie D.; Skelton, Faye; Atherton, Chris; Pitchford, Melanie; Hepton, Gemma; Holden, Laura; McIntyre, Alex H.; Hancock, Peter J. B.

    2012-01-01

    Recognition memory for unfamiliar faces is facilitated when contextual cues (e.g., head pose, background environment, hair and clothing) are consistent between study and test. By contrast, inconsistencies in external features, especially hair, promote errors in unfamiliar face-matching tasks. For the construction of facial composites, as carried…

  8. Chondromyxoid fibroma of the mastoid facial nerve canal mimicking a facial nerve schwannoma.

    PubMed

    Thompson, Andrew L; Bharatha, Aditya; Aviv, Richard I; Nedzelski, Julian; Chen, Joseph; Bilbao, Juan M; Wong, John; Saad, Reda; Symons, Sean P

    2009-07-01

    Chondromyxoid fibroma of the skull base is a rare entity. Involvement of the temporal bone is particularly rare. We present an unusual case of progressive facial nerve paralysis with imaging and clinical findings most suggestive of a facial nerve schwannoma. The lesion was tubular in appearance, expanded the mastoid facial nerve canal, protruded out of the stylomastoid foramen, and enhanced homogeneously. The only unusual imaging feature was minor calcification within the tumor. Surgery revealed an irregular, cystic lesion. Pathology diagnosed a chondromyxoid fibroma involving the mastoid portion of the facial nerve canal, destroying the facial nerve.

  9. Odor Valence Linearly Modulates Attractiveness, but Not Age Assessment, of Invariant Facial Features in a Memory-Based Rating Task

    PubMed Central

    Seubert, Janina; Gregory, Kristen M.; Chamberland, Jessica; Dessirier, Jean-Marc; Lundström, Johan N.

    2014-01-01

    Scented cosmetic products are used across cultures as a way to favorably influence one's appearance. While crossmodal effects of odor valence on perceived attractiveness of facial features have been demonstrated experimentally, it is unknown whether they represent a phenomenon specific to affective processing. In this experiment, we presented odors in the context of a face battery with systematic feature manipulations during a speeded response task. Modulatory effects of linear increases of odor valence were investigated by juxtaposing subsequent memory-based ratings tasks – one predominantly affective (attractiveness) and a second, cognitive (age). The linear modulation pattern observed for attractiveness was consistent with additive effects of face and odor appraisal. Effects of odor valence on age perception were not linearly modulated and may be the result of cognitive interference. Affective and cognitive processing of faces thus appear to differ in their susceptibility to modulation by odors, likely as a result of privileged access of olfactory stimuli to affective brain networks. These results are critically discussed with respect to potential biases introduced by the preceding speeded response task. PMID:24874703

  10. The Eyes Have It: Young Children's Discrimination of Age in Masked and Unmasked Facial Photographs.

    ERIC Educational Resources Information Center

    Jones, Gillian; Smith, Peter K.

    1984-01-01

    Investigates preschool children's ability (n = 30) to discriminate age, and subject's use of different facial areas in ranking facial photographs into age order. Results indicate subjects from 3 to 9 years can successfully rank the photos. Compared with other facial features, the eye region was most important for success in the age ranking task.…

  11. Vascular Leiomyoma and Geniculate Ganglion

    PubMed Central

    Magliulo, Giuseppe; Iannella, Giannicola; Valente, Michele; Greco, Antonio; Appiani, Mario Ciniglio

    2013-01-01

    Objectives: Discussion of a rare case of angioleiomyoma involving the geniculate ganglion and the intratemporal facial nerve segment, and its surgical treatment. Design: Case report. Setting: An expansive lesion encompassing the geniculate ganglion, without any lesion of the cerebellopontine angle. Participants: A 45-year-old man with a grade III facial paralysis according to the House-Brackmann scale. Main Outcome Measures: Surgical pathology, radiologic appearance, histological features, and postoperative facial function. Results: Removal of the entire lesion was achieved, preserving the anatomic integrity of the nerve; no nerve graft was necessary. Postoperative histology and immunohistochemical studies revealed features indicative of a solid vascular leiomyoma. Conclusion: Angioleiomyoma should be considered in the differential diagnosis of geniculate ganglion lesions. Optimal postoperative facial function is possible only by preserving the anatomical and functional integrity of the facial nerve. PMID:23943721

  12. Effects of facial emotion recognition remediation on visual scanning of novel face stimuli.

    PubMed

    Marsh, Pamela J; Luckett, Gemma; Russell, Tamara; Coltheart, Max; Green, Melissa J

    2012-11-01

    Previous research shows that emotion recognition in schizophrenia can be improved with targeted remediation that draws attention to important facial features (eyes, nose, mouth). Moreover, the effects of training have been shown to last for up to one month. The aim of this study was to investigate whether improved emotion recognition of novel faces is associated with concomitant changes in visual scanning of these same novel facial expressions. Thirty-nine participants with schizophrenia received emotion recognition training using Ekman's Micro-Expression Training Tool (METT), with emotion recognition and visual scanpath (VSP) recordings to face stimuli collected simultaneously. Baseline ratings of interpersonal and cognitive functioning were also collected from all participants. Post-METT training, participants showed changes in foveal attention to the features of facial expressions not used in METT training, generally consistent with the information about important features from the METT. In particular, there were changes in how participants looked at the features of surprised, disgusted, fearful, happy, and neutral expressions, demonstrating that improved emotion recognition is paralleled by changes in the way participants with schizophrenia view novel facial expressions of emotion. However, there were overall decreases in foveal attention to sad and neutral faces, indicating that more intensive instruction might be needed for these faces during training. Most importantly, the evidence shows that participant gender may affect training outcomes. Copyright © 2012 Elsevier B.V. All rights reserved.

  13. Morphological Integration of Soft-Tissue Facial Morphology in Down Syndrome and Siblings

    PubMed Central

    Starbuck, John; Reeves, Roger H.; Richtsmeier, Joan

    2011-01-01

    Down syndrome (DS), resulting from trisomy of chromosome 21, is the most common live-born human aneuploidy. The phenotypic expression of trisomy 21 produces variable, though characteristic, facial morphology. Although certain facial features have been documented quantitatively and qualitatively as characteristic of DS (e.g., epicanthic folds, macroglossia, and hypertelorism), all of these traits occur in other craniofacial conditions with an underlying genetic cause. We hypothesize that the typical DS face is integrated differently than the face of non-DS siblings, and that the pattern of morphological integration unique to individuals with DS will yield information about underlying developmental associations between facial regions. We statistically compared morphological integration patterns of immature DS faces (N = 53) with those of non-DS siblings (N = 54), aged 6–12 years, using 31 distances estimated from 3D coordinate data representing 17 anthropometric landmarks recorded on 3D digital photographic images. Facial features are affected differentially in DS, as evidenced by statistically significant differences in integration both within and between facial regions. Our results suggest a differential effect of trisomy on facial prominences during craniofacial development. PMID:21996933

  14. Morphological integration of soft-tissue facial morphology in Down Syndrome and siblings.

    PubMed

    Starbuck, John; Reeves, Roger H; Richtsmeier, Joan

    2011-12-01

    Down syndrome (DS), resulting from trisomy of chromosome 21, is the most common live-born human aneuploidy. The phenotypic expression of trisomy 21 produces variable, though characteristic, facial morphology. Although certain facial features have been documented quantitatively and qualitatively as characteristic of DS (e.g., epicanthic folds, macroglossia, and hypertelorism), all of these traits occur in other craniofacial conditions with an underlying genetic cause. We hypothesize that the typical DS face is integrated differently than the face of non-DS siblings, and that the pattern of morphological integration unique to individuals with DS will yield information about underlying developmental associations between facial regions. We statistically compared morphological integration patterns of immature DS faces (N = 53) with those of non-DS siblings (N = 54), aged 6-12 years, using 31 distances estimated from 3D coordinate data representing 17 anthropometric landmarks recorded on 3D digital photographic images. Facial features are affected differentially in DS, as evidenced by statistically significant differences in integration both within and between facial regions. Our results suggest a differential effect of trisomy on facial prominences during craniofacial development. 2011 Wiley Periodicals, Inc.

  15. Intact Rapid Facial Mimicry as well as Generally Reduced Mimic Responses in Stable Schizophrenia Patients

    PubMed Central

    Chechko, Natalya; Pagel, Alena; Otte, Ellen; Koch, Iring; Habel, Ute

    2016-01-01

    Spontaneous emotional expressions (rapid facial mimicry) perform both emotional and social functions. In the current study, we sought to test whether automatic mimic responses to emotional facial expressions are deficient in 15 patients with stable schizophrenia compared to 15 controls. In a perception-action interference paradigm (the Simon task; first experiment), and in the context of a dual-task paradigm (second experiment), the task-relevant stimulus feature was the gender of a face, which, however, displayed a smiling or frowning expression (the task-irrelevant stimulus feature). We measured electromyographic activity in the corrugator supercilii and zygomaticus major muscle regions in response to either compatible or incompatible stimuli (i.e., when the required response did or did not correspond to the depicted facial expression). The compatibility effect, based on interactions between the implicit processing of a task-irrelevant emotional facial expression and the conscious production of an emotional facial expression, did not differ between the groups. In the stable patients, in spite of a generally reduced mimic reaction, we observed an intact capacity to respond spontaneously to facial emotional stimuli. PMID:27303335

  16. The extraction and use of facial features in low bit-rate visual communication.

    PubMed

    Pearson, D

    1992-01-29

    A review is given of experimental investigations by the author and his collaborators into methods of extracting binary features from images of the face and hands. The aim of the research has been to enable deaf people to communicate by sign language over the telephone network. Other applications include model-based image coding and facial-recognition systems. The paper deals with the theoretical postulates underlying the successful experimental extraction of facial features. The basic philosophy has been to treat the face as an illuminated three-dimensional object and to identify features from characteristics of their Gaussian maps. It can be shown that in general a composite image operator linked to a directional-illumination estimator is required to accomplish this, although the latter can often be omitted in practice.

  17. Facial dysmorphism in Leigh syndrome with SURF-1 mutation and COX deficiency.

    PubMed

    Yüksel, Adnan; Seven, Mehmet; Cetincelik, Umran; Yeşil, Gözde; Köksal, Vedat

    2006-06-01

    Leigh syndrome is an inherited, progressive neurodegenerative disorder of infancy and childhood. Mutations in the nuclear SURF-1 gene are specifically associated with cytochrome c oxidase-deficient Leigh syndrome. This report describes two patients with similar facial features: a 2.5-year-old male and a 3-year-old male, both with a mutation in the SURF-1 gene and facial dysmorphism including frontal bossing, brachycephaly, hypertrichosis, lateral displacement of the inner canthi, esotropia, maxillary hypoplasia, hypertrophic gums, irregularly placed teeth, upturned nostrils, low-set big ears, and retrognathia. The first patient's magnetic resonance imaging at 15 months of age indicated mild symmetric T2 prolongation involving the subthalamic nuclei. His second magnetic resonance imaging, at 2 years of age, revealed symmetric T2 prolongation involving the subthalamic nuclei and substantia nigra, and medullary lesions. In the second child, the first magnetic resonance imaging, at the age of 2, documented heavy brainstem and subthalamic nuclei involvement. A second magnetic resonance imaging, performed when he was 3 years old, revealed diffuse involvement of the substantia nigra and hyperintense lesions of the central tegmental tract in addition to the previous lesions. The facial dysmorphism and magnetic resonance imaging findings observed in these cases may be specific findings in Leigh syndrome patients with cytochrome c oxidase deficiency. SURF-1 gene mutations must be particularly considered in such patients.

  18. Early and late temporo-spatial effects of contextual interference during perception of facial affect.

    PubMed

    Frühholz, Sascha; Fehr, Thorsten; Herrmann, Manfred

    2009-10-01

    Contextual features during recognition of facial affect are assumed to modulate the temporal course of emotional face processing. Here, we simultaneously presented colored backgrounds during valence categorizations of facial expressions. Subjects incidentally learned to perceive negative, neutral and positive expressions within a specific colored context. Subsequently, subjects made fast valence judgments while presented with the same face-color-combinations as in the first run (congruent trials) or with different face-color-combinations (incongruent trials). Incongruent trials induced significantly increased response latencies and significantly decreased performance accuracy. Contextual incongruent information during processing of neutral expressions modulated the P1 and the early posterior negativity (EPN) both localized in occipito-temporal areas. Contextual congruent information during emotional face perception revealed an emotion-related modulation of the P1 for positive expressions and of the N170 and the EPN for negative expressions. Highest amplitude of the N170 was found for negative expressions in a negatively associated context and the N170 amplitude varied with the amount of overall negative information. Incongruent trials with negative expressions elicited a parietal negativity which was localized to superior parietal cortex and which most likely represents a posterior manifestation of the N450 as an indicator of conflict processing. A sustained activation of the late LPP over parietal cortex for all incongruent trials might reflect enhanced engagement with facial expression during task conditions of contextual interference. In conclusion, whereas early components seem to be sensitive to the emotional valence of facial expression in specific contexts, late components seem to subserve interference resolution during emotional face processing.

  19. Facial Nerve Schwannoma: A Case Report, Radiological Features and Literature Review.

    PubMed

    Pilloni, Giulia; Mico, Barbara Massa; Altieri, Roberto; Zenga, Francesco; Ducati, Alessandro; Garbossa, Diego; Tartara, Fulvio

    2017-12-22

    Facial nerve schwannoma localized in the middle fossa is a rare lesion. We report a case of a facial nerve schwannoma in a 30-year-old male presenting with facial nerve palsy. Magnetic resonance imaging (MRI) showed a 3 cm diameter tumor of the right middle fossa. The tumor was removed using a sub-temporal approach. Intraoperative monitoring allowed for identification of the facial nerve, so it was not damaged during the surgical excision. Neurological clinical examination at discharge demonstrated moderate facial nerve improvement (Grade III House-Brackmann).

  20. Comparison of self-reported signs of facial ageing among Caucasian women in Australia versus those in the USA, the UK and Canada.

    PubMed

    Goodman, Greg J; Armour, Katherine S; Kolodziejczyk, Julia K; Santangelo, Samantha; Gallagher, Conor J

    2018-05-01

    Australians are more exposed to higher solar UV radiation levels that accelerate signs of facial ageing than individuals who live in temperate northern countries. The severity and course of self-reported facial ageing among fair-skinned Australian women were compared with those living in Canada, the UK and the USA. Women voluntarily recruited into a proprietary opt-in survey panel completed an internet-based questionnaire about their facial ageing. Participants aged 18-75 years compared their features against photonumeric rating scales depicting degrees of severity for forehead, crow's feet and glabellar lines, tear troughs, midface volume loss, nasolabial folds, oral commissures and perioral lines. Data from Caucasian and Asian women with Fitzpatrick skin types I-III were analysed by linear regression for the impact of country (Australia versus Canada, the UK and the USA) on ageing severity for each feature, after controlling for age and race. Among 1472 women, Australians reported higher rates of change and significantly more severe facial lines (P ≤ 0.040) and volume-related features like tear troughs and nasolabial folds (P ≤ 0.03) than women from the other countries. More Australians also reported moderate to severe ageing for all features one to two decades earlier than US women. Australian women reported more severe signs of facial ageing sooner than other women and volume-related changes up to 20 years earlier than those in the USA, which may suggest that environmental factors also impact volume-related ageing. These findings have implications for managing their facial aesthetic concerns. © 2017 The Authors. Australasian Journal of Dermatology published by John Wiley and Sons Australia, Ltd on behalf of The Australasian College of Dermatologists.

  1. Hierarchical Spatio-Temporal Probabilistic Graphical Model with Multiple Feature Fusion for Binary Facial Attribute Classification in Real-World Face Videos.

    PubMed

    Demirkus, Meltem; Precup, Doina; Clark, James J; Arbel, Tal

    2016-06-01

    Recent literature shows that facial attributes, i.e., contextual facial information, can be beneficial for improving the performance of real-world applications such as face verification, face recognition, and image search. Examples of facial attributes include gender, skin color, facial hair, etc. How to robustly obtain these facial attributes (traits) is still an open problem, especially in the presence of the challenges of real-world environments: non-uniform illumination conditions, arbitrary occlusions, motion blur, and background clutter. What makes this problem even more difficult is the enormous variability presented by the same subject, due to arbitrary face scales, head poses, and facial expressions. In this paper, we focus on the problem of facial trait classification in real-world face videos. We have developed a fully automatic hierarchical and probabilistic framework that models the collective set of frame class distributions and feature spatial information over a video sequence. The experiments are conducted on a large real-world face video database that we have collected, labelled, and made publicly available. The proposed method is flexible enough to be applied to any facial classification problem. Experiments on this large, real-world video database, McGillFaces [1], of 18,000 video frames reveal that the proposed framework outperforms alternative approaches by up to 16.96% and 10.13% for the facial attributes of gender and facial hair, respectively.
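    The full model is a hierarchical probabilistic graphical model; purely to illustrate the frame-to-video aggregation step it performs, the sketch below pools per-frame attribute posteriors into one video-level probability via mean log-odds. This is a deliberate simplification, not the paper's inference procedure.

        import numpy as np

        def video_level_posterior(frame_probs, eps=1e-6):
            # frame_probs: (n_frames,) per-frame P(attribute present | frame)
            logits = np.log((frame_probs + eps) / (1 - frame_probs + eps))
            return 1.0 / (1.0 + np.exp(-logits.mean()))  # pooled video-level probability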

  2. Principal component analysis for surface reflection components and structure in facial images and synthesis of facial images for various ages

    NASA Astrophysics Data System (ADS)

    Hirose, Misa; Toyota, Saori; Ojima, Nobutoshi; Ogawa-Ochiai, Keiko; Tsumura, Norimichi

    2017-08-01

    In this paper, principal component analysis is applied to the distributions of pigmentation, surface reflectance, and landmarks in whole facial images to obtain feature values. The relationship between the obtained feature vectors and the age of the face is then estimated by multiple regression analysis so that facial images can be modulated for women aged 10-70. In a previous study, we analyzed only the distribution of pigmentation, and the reproduced images appeared younger than the apparent age of the initial images. We believe that this happened because we did not modulate the facial structures and detailed surfaces, such as wrinkles. By considering landmarks and surface reflectance over the entire face, we were able to analyze the variation in the distributions of facial structure, fine asperity, and pigmentation. As a result, our method is able to appropriately modulate the appearance of a face so that it appears to be the correct age.
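
    The pipeline above (PCA for feature extraction, multiple regression against age, and modulation of a face along the regression direction) can be sketched as follows. This is a minimal illustration with synthetic arrays, not the authors' implementation; all array names, shapes, and component counts are assumptions.

    ```python
    # A minimal sketch of PCA + age regression + age modulation, assuming each
    # face is already flattened into one vector of pigmentation, surface
    # reflectance and landmark values. Shapes are illustrative, not the paper's.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    faces = rng.normal(size=(200, 5000))   # 200 subjects x concatenated features
    ages = rng.uniform(10, 70, size=200)   # subject ages in years

    pca = PCA(n_components=20)
    scores = pca.fit_transform(faces)      # per-face feature values (PC scores)

    # Multiple regression: predict PC scores from age, one model per component.
    reg = LinearRegression().fit(ages.reshape(-1, 1), scores)

    def synthesize(face, current_age, target_age):
        """Shift a face's PC scores along the age-regression direction,
        then reconstruct the feature vector at the target age."""
        s = pca.transform(face.reshape(1, -1))
        delta = reg.predict([[target_age]]) - reg.predict([[current_age]])
        return pca.inverse_transform(s + delta)

    aged = synthesize(faces[0], current_age=25.0, target_age=60.0)
    ```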

  3. Ethnic and Gender Considerations in the Use of Facial Injectables: Asian Patients.

    PubMed

    Liew, Steven

    2015-11-01

    Asians have distinct facial characteristics due to underlying skeletal and morphological features that differ greatly from those of whites. This, together with the higher sun protection factor and the differences in the quality of the skin and soft tissue, creates a profound effect on their aging process. Understanding these differences and their effects on the aging process in Asians is crucial in determining effective utilization and placement of injectable products to ensure optimal aesthetic outcomes. For younger Asian women, the main treatment goal is to address the inherent structural deficits through reshaping and the provision of facial support. Facial injectables are used to provide anterior projection, to reduce facial width, and to lengthen facial height. In the older group, the aim is rejuvenation and also to address the underlying structural issues that have been compounded by age-related volume loss. Asian women requesting cosmetic procedures do not want to be Westernized but rather seek to enhance and optimize their Asian ethnic features.

  4. Alteration of Occlusal Plane in Orthognathic Surgery: Clinical Features to Help Treatment Planning on Class III Patients

    PubMed Central

    Costa, Tony Eduardo; Barbosa, Saulo de Matos; Pereira, Rodrigo Alvitos; Chaves Netto, Henrique Duque de Miranda

    2018-01-01

    Dentofacial deformities (DFD) present mainly as Class III malocclusions that require orthognathic surgery as part of definitive treatment. Class III patients can have obvious signs such as increased chin projection and chin-throat length, nasolabial folds, reverse overjet, and lack of upper lip support. However, Class III patients can present different facial patterns depending on the angulation of the occlusal plane (OP), and bite correction alone does not always improve facial esthetics. We describe two Class III patients with different clinical features and OP inclinations who underwent different treatment planning based on six clinical features: (I) facial type; (II) upper incisor display at rest; (III) dental and gingival display on smile; (IV) soft tissue support; (V) chin projection; and (VI) lower lip projection. These patients were submitted to orthognathic surgery with different treatment plans: a clockwise rotation and a counterclockwise rotation of the OP, according to their facial features. The clinical features and OP inclination helped to define treatment planning by clockwise and counterclockwise rotations of the maxillomandibular complex, and the two patients who underwent bimaxillary orthognathic surgery showed harmonious outcomes that remained stable after 2 years of follow-up. PMID:29854480

  5. Orientations for the successful categorization of facial expressions and their link with facial features.

    PubMed

    Duncan, Justin; Gosselin, Frédéric; Cobarro, Charlène; Dugas, Gabrielle; Blais, Caroline; Fiset, Daniel

    2017-12-01

    Horizontal information was recently suggested to be crucial for face identification. In the present paper, we expand on this finding and investigate the role of orientations for all the basic facial expressions and neutrality. To this end, we developed orientation bubbles to quantify utilization of the orientation spectrum by the visual system in a facial expression categorization task. We first validated the procedure in Experiment 1 with a simple plaid-detection task. In Experiment 2, we used orientation bubbles to reveal the diagnostic (i.e., task-relevant) orientations for the basic facial expressions and neutrality. Overall, we found that horizontal information was highly diagnostic for expressions, surprise excepted. We also found that utilization of horizontal information strongly predicted performance level in this task. Despite the recent surge of research on horizontals, the link with local features remains unexplored. We were thus also interested in investigating this link. In Experiment 3, location bubbles were used to reveal the diagnostic features for the basic facial expressions. Crucially, Experiments 2 and 3 were run in parallel on the same participants, in an interleaved fashion. This way, we were able to correlate individual orientation and local diagnostic profiles. Our results indicate that individual differences in horizontal tuning are best predicted by utilization of the eyes.

  6. Unfakeable facial configurations affect strategic choices in trust games with or without information about past behavior.

    PubMed

    Rezlescu, Constantin; Duchaine, Brad; Olivola, Christopher Y; Chater, Nick

    2012-01-01

    Many human interactions are built on trust, so widespread confidence in first impressions generally favors individuals with trustworthy-looking appearances. However, few studies have explicitly examined: 1) the contribution of unfakeable facial features to trust-based decisions, and 2) how these cues are integrated with information about past behavior. Using highly controlled stimuli and an improved experimental procedure, we show that unfakeable facial features associated with the appearance of trustworthiness attract higher investments in trust games. The facial trustworthiness premium is large for decisions based solely on faces, with trustworthy identities attracting 42% more money (Study 1), and remains significant though reduced to 6% when reputational information is also available (Study 2). The face trustworthiness premium persists with real (rather than virtual) currency and when higher payoffs are at stake (Study 3). Our results demonstrate that cooperation may be affected not only by controllable appearance cues (e.g., clothing, facial expressions) as shown previously, but also by features that are impossible to mimic (e.g., individual facial structure). This unfakeable face trustworthiness effect is not limited to the rare situations where people lack any information about their partners, but survives in richer environments where relevant details about partner past behavior are available.

  7. Unfakeable Facial Configurations Affect Strategic Choices in Trust Games with or without Information about Past Behavior

    PubMed Central

    Rezlescu, Constantin; Duchaine, Brad; Olivola, Christopher Y.; Chater, Nick

    2012-01-01

    Background Many human interactions are built on trust, so widespread confidence in first impressions generally favors individuals with trustworthy-looking appearances. However, few studies have explicitly examined: 1) the contribution of unfakeable facial features to trust-based decisions, and 2) how these cues are integrated with information about past behavior. Methodology/Principal Findings Using highly controlled stimuli and an improved experimental procedure, we show that unfakeable facial features associated with the appearance of trustworthiness attract higher investments in trust games. The facial trustworthiness premium is large for decisions based solely on faces, with trustworthy identities attracting 42% more money (Study 1), and remains significant though reduced to 6% when reputational information is also available (Study 2). The face trustworthiness premium persists with real (rather than virtual) currency and when higher payoffs are at stake (Study 3). Conclusions/Significance Our results demonstrate that cooperation may be affected not only by controllable appearance cues (e.g., clothing, facial expressions) as shown previously, but also by features that are impossible to mimic (e.g., individual facial structure). This unfakeable face trustworthiness effect is not limited to the rare situations where people lack any information about their partners, but survives in richer environments where relevant details about partner past behavior are available. PMID:22470553

  8. Support vector machine-based facial-expression recognition method combining shape and appearance

    NASA Astrophysics Data System (ADS)

    Han, Eun Jung; Kang, Byung Jun; Park, Kang Ryoung; Lee, Sangyoun

    2010-11-01

    Facial expression recognition can be widely used for various applications, such as emotion-based human-machine interaction, intelligent robot interfaces, face recognition robust to expression variation, etc. Previous studies have been classified as either shape- or appearance-based recognition. The shape-based method has the disadvantage that individual variance in facial feature points exists even across similar expressions, which can reduce recognition accuracy. The appearance-based method has the limitation that the textural information of the face is very sensitive to variations in illumination. To overcome these problems, a new facial-expression recognition method is proposed, which combines both shape and appearance information based on the support vector machine (SVM). This research is novel in three ways compared with previous works. First, the facial feature points are automatically detected by using an active appearance model. From these, shape-based recognition is performed by using the ratios between the facial feature points based on the facial action coding system. Second, an SVM, trained to recognize same and different expression classes, is proposed to combine the two matching scores obtained from the shape- and appearance-based recognition. Finally, a single SVM is trained to discriminate four different expressions: neutral, smile, anger, and scream. By determining the expression of the input facial image whose SVM output is at a minimum, the accuracy of expression recognition is much enhanced. The experimental results showed that the recognition accuracy of the proposed method was better than that of previous research and other fusion methods.
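
    The second contribution (an SVM that fuses the shape-based and appearance-based matching scores) can be illustrated with a minimal sketch. The score distributions and labels below are synthetic placeholders, not the paper's data, and scikit-learn's SVC stands in for the authors' SVM.

    ```python
    # A minimal sketch of SVM-based score fusion, assuming precomputed
    # shape-based and appearance-based matching scores for face pairs.
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(1)
    n = 400
    # Column 0: shape-based matching score, column 1: appearance-based score.
    same = rng.normal(loc=[0.8, 0.7], scale=0.1, size=(n, 2))  # same expression
    diff = rng.normal(loc=[0.4, 0.3], scale=0.1, size=(n, 2))  # different
    X = np.vstack([same, diff])
    y = np.hstack([np.ones(n), np.zeros(n)])

    # The SVM learns a fused decision boundary over the two matching scores.
    fusion_svm = SVC(kernel="rbf", probability=True).fit(X, y)

    pair_scores = np.array([[0.75, 0.65]])  # scores for a new face pair
    fused = fusion_svm.predict_proba(pair_scores)[0, 1]
    print(f"fused same-expression probability: {fused:.2f}")
    ```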

  9. Characterizing facial features in individuals with craniofacial microsomia: A systematic approach for clinical research.

    PubMed

    Heike, Carrie L; Wallace, Erin; Speltz, Matthew L; Siebold, Babette; Werler, Martha M; Hing, Anne V; Birgfeld, Craig B; Collett, Brent R; Leroux, Brian G; Luquetti, Daniela V

    2016-11-01

    Craniofacial microsomia (CFM) is a congenital condition with wide phenotypic variability, including hypoplasia of the mandible and external ear. We assembled a cohort of children with facial features within the CFM spectrum and children without known craniofacial anomalies. We sought to develop a standardized approach to assess and describe the facial characteristics of the study cohort, using multiple sources of information gathered over the course of this longitudinal study and to create case subgroups with shared phenotypic features. Participants were enrolled between 1996 and 2002. We classified the facial phenotype from photographs, ratings using a modified version of the Orbital, Ear, Mandible, Nerve, Soft tissue (OMENS) pictorial system, data from medical record abstraction, and health history questionnaires. The participant sample included 142 cases and 290 controls. The average age was 13.5 years (standard deviation, 1.3 years; range, 11.1-17.1 years). Sixty-one percent of cases were male, 74% were white non-Hispanic. Among cases, the most common features were microtia (66%) and mandibular hypoplasia (50%). Case subgroups with meaningful group definitions included: (1) microtia without other CFM-related features (n = 24), (2) microtia with mandibular hypoplasia (n = 46), (3) other combinations of CFM- related facial features (n = 51), and (4) atypical features (n = 21). We developed a standardized approach for integrating multiple data sources to phenotype individuals with CFM, and created subgroups based on clinically-meaningful, shared characteristics. We hope that this system can be used to explore associations between phenotype and clinical outcomes of children with CFM and to identify the etiology of CFM. Birth Defects Research (Part A) 106:915-926, 2016.© 2016 Wiley Periodicals, Inc. © 2016 Wiley Periodicals, Inc.

  10. Developmental Change in Infant Categorization: The Perception of Correlations among Facial Features.

    ERIC Educational Resources Information Center

    Younger, Barbara

    1992-01-01

    Tested 7- and 10-month-olds for perception of correlations among facial features. After habituation to faces displaying a pattern of correlation, 10-month-olds generalized to a novel face that preserved the pattern of correlation but showed increased attention to a novel face that violated the pattern. (BC)

  11. Millennial Filipino Student Engagement Analyzer Using Facial Feature Classification

    NASA Astrophysics Data System (ADS)

    Manseras, R.; Eugenio, F.; Palaoag, T.

    2018-03-01

    Millennials are a frequent topic of conversation and a target market for many companies nowadays. In the Philippines, they comprise one third of the total population, and most of them are still in school. Having a good education system is important for preparing this generation for better careers. A good education system depends on quality instruction as one of its input component indicators. In a classroom environment, teachers use facial features to gauge the affect state of the class. Emerging technologies like affective computing are among today's trends for improving quality instruction delivery. This, together with computer vision, can be used to analyze the affect states of students and improve instruction delivery. This paper proposes a system for classifying student engagement using facial features. Identifying affect state, specifically Millennial Filipino student engagement, is one of the main priorities of every educator, and this directed the authors to develop a tool to assess engagement percentage. A multiple face detection framework using Face API was employed to detect as many student faces as possible and gauge the current engagement percentage of the whole class. A binary classifier model using the Support Vector Machine (SVM) was primarily set in the conceptual framework of this study. To assess the accuracy of this model, SVM was compared with two of the most widely used binary classifiers. Results show that SVM bested the RandomForest and Naive Bayesian algorithms in most of the experiments across the different test datasets.
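
    A hedged sketch of the reported classifier comparison, using scikit-learn stand-ins for the SVM, RandomForest, and naive Bayes models on synthetic engagement features; the feature set and labels are illustrative assumptions, since the paper's actual inputs come from its Face API pipeline.

    ```python
    # Compare three binary classifiers with cross-validation on synthetic
    # per-face engagement features (e.g. action-unit-like intensities).
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.naive_bayes import GaussianNB

    rng = np.random.default_rng(2)
    X = rng.normal(size=(300, 12))               # 12 facial features per face
    y = (X[:, :3].sum(axis=1) > 0).astype(int)   # 1 = engaged, 0 = not engaged

    for name, clf in [("SVM", SVC(kernel="rbf")),
                      ("RandomForest", RandomForestClassifier(n_estimators=100)),
                      ("NaiveBayes", GaussianNB())]:
        acc = cross_val_score(clf, X, y, cv=5).mean()
        print(f"{name}: mean CV accuracy = {acc:.3f}")
    ```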

  12. Facial Expression Influences Face Identity Recognition During the Attentional Blink

    PubMed Central

    2014-01-01

    Emotional stimuli (e.g., negative facial expressions) enjoy prioritized memory access when task relevant, consistent with their ability to capture attention. Whether emotional expression also impacts on memory access when task-irrelevant is important for arbitrating between feature-based and object-based attentional capture. Here, the authors address this question in 3 experiments using an attentional blink task with face photographs as first and second target (T1, T2). They demonstrate reduced neutral T2 identity recognition after angry or happy T1 expression, compared to neutral T1, and this supports attentional capture by a task-irrelevant feature. Crucially, after neutral T1, T2 identity recognition was enhanced and not suppressed when T2 was angry, suggesting that attentional capture by this task-irrelevant feature may be object-based and not feature-based. As an unexpected finding, both angry and happy facial expressions suppress memory access for competing objects, but only angry facial expression enjoyed privileged memory access. This could imply that these 2 processes are relatively independent of one another. PMID:25286076

  13. Facial expression influences face identity recognition during the attentional blink.

    PubMed

    Bach, Dominik R; Schmidt-Daffy, Martin; Dolan, Raymond J

    2014-12-01

    Emotional stimuli (e.g., negative facial expressions) enjoy prioritized memory access when task relevant, consistent with their ability to capture attention. Whether emotional expression also impacts on memory access when task-irrelevant is important for arbitrating between feature-based and object-based attentional capture. Here, the authors address this question in 3 experiments using an attentional blink task with face photographs as first and second target (T1, T2). They demonstrate reduced neutral T2 identity recognition after angry or happy T1 expression, compared to neutral T1, and this supports attentional capture by a task-irrelevant feature. Crucially, after neutral T1, T2 identity recognition was enhanced and not suppressed when T2 was angry, suggesting that attentional capture by this task-irrelevant feature may be object-based and not feature-based. As an unexpected finding, both angry and happy facial expressions suppress memory access for competing objects, but only angry facial expression enjoyed privileged memory access. This could imply that these 2 processes are relatively independent of one another.

  14. Cephalometric features in isolated growth hormone deficiency.

    PubMed

    Oliveira-Neto, Luiz Alves; Melo, Maria de Fátima B; Franco, Alexandre A; Oliveira, Alaíde H A; Souza, Anita H O; Valença, Eugênia H O; Britto, Isabela M P A; Salvatori, Roberto; Aguiar-Oliveira, Manuel H

    2011-07-01

    To analyze cephalometric features in adults with isolated growth hormone (GH) deficiency (IGHD). Nine adult IGHD individuals (7 males and 2 females; mean age, 37.8 ± 13.8 years) underwent a cross-sectional cephalometric study, including 9 linear and 5 angular measurements. Posterior facial height/anterior facial height and lower-anterior facial height/anterior facial height ratios were calculated. To pool cephalometric measurements in both genders, results were normalized by standard deviation scores (SDS), using the population means from an atlas of the normal Brazilian population. All linear measurements were reduced in IGHD subjects. Total maxillary length was the most reduced parameter (-6.5 ± 1.7), followed by a cluster of six measurements: posterior cranial base length (-4.9 ± 1.1), total mandibular length (-4.4 ± 0.7), total posterior facial height (-4.4 ± 1.1), total anterior facial height (-4.3 ± 0.9), mandibular corpus length (-4.2 ± 0.8), and anterior cranial base length (-4.1 ± 1.7). Less affected measurements were lower-anterior facial height (-2.7 ± 0.7) and mandibular ramus height (-2.5 ± 1.5). SDS angular measurements were in the normal range, except for an increased gonial angle (+2.5 ± 1.1). Posterior facial height/anterior facial height and lower-anterior facial height/anterior facial height ratios were not different from those of the reference group. Congenital, untreated IGHD causes reduction of all linear measurements of craniofacial growth, particularly total maxillary length. Angular measurements and facial height ratios are less affected, suggesting that IGHD causes proportional blunting of craniofacial growth.
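
    The standard deviation score (SDS) normalization used to pool measurements across genders is simply the distance of a measurement from the reference-population mean, in units of the population standard deviation. A minimal sketch with placeholder population values:

    ```python
    # SDS normalization; the reference mean and SD below are hypothetical
    # placeholders, not values from the Brazilian cephalometric atlas.
    def sds(value, population_mean, population_sd):
        """Standard deviation score: how many SDs a measurement lies
        from the reference-population mean."""
        return (value - population_mean) / population_sd

    # Hypothetical example: a maxillary length of 78 mm against a reference
    # mean of 97 mm with SD 3 mm gives an SDS of about -6.3.
    print(sds(78.0, 97.0, 3.0))
    ```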

  15. Automatic processing of facial affects in patients with borderline personality disorder: associations with symptomatology and comorbid disorders.

    PubMed

    Donges, Uta-Susan; Dukalski, Bibiana; Kersting, Anette; Suslow, Thomas

    2015-01-01

    Instability of affects and interpersonal relations are important features of borderline personality disorder (BPD). Interpersonal problems of individuals suffering from BPD might develop based on abnormalities in the processing of facial affects and high sensitivity to negative affective expressions. The aims of the present study were to examine automatic evaluative shifts and latencies as a function of masked facial affects in patients with BPD compared to healthy individuals. As BPD comorbidity rates for mental and personality disorders are high, we also investigated the relationships of affective processing characteristics with specific borderline symptoms and comorbidity. Twenty-nine women with BPD and 38 healthy women participated in the study. The majority of patients suffered from additional Axis I disorders and/or additional personality disorders. In the priming experiment, an angry, happy, neutral, or no facial expression was briefly presented (for 33 ms) and masked by neutral faces that had to be evaluated. Evaluative decisions and response latencies were registered. Borderline-typical symptomatology was assessed with the Borderline Symptom List. In the total sample, valence-congruent evaluative shifts and delays of evaluative decision due to facial affect were observed. No between-group differences were obtained for evaluative decisions and latencies. The presence of comorbid anxiety disorders was found to be positively correlated with evaluative shifting owing to masked happy primes, regardless of the baseline (neutral or no facial expression) condition. The presence of comorbid depressive disorder, paranoid personality disorder, and symptoms of social isolation and self-aggression was significantly correlated with response delay due to masked angry faces, regardless of baseline. In the present affective priming study, no abnormalities in the automatic recognition and processing of facial affects were observed in BPD patients compared to healthy individuals. The presence of comorbid anxiety disorders could make patients more susceptible to the influence of a happy expression on judgment processes at an automatic processing level. Comorbid depressive disorder, paranoid personality disorder, and symptoms of social isolation and self-aggression may enhance automatic attention allocation to threatening facial expressions in BPD. Increased automatic vigilance for social threat stimuli might contribute to affective instability and interpersonal problems in specific patients with BPD.

  16. Ethnic and Gender Considerations in the Use of Facial Injectables: African-American Patients.

    PubMed

    Burgess, Cheryl; Awosika, Olabola

    2015-11-01

    The United States is becoming increasingly more diverse as the nonwhite population continues to rise faster than ever. By 2044, the US Census Bureau projects that greater than 50% of the US population will be of nonwhite descent. Ethnic patients are the fastest-growing portion of the cosmetic procedures market, with African-Americans comprising 7.1% of the 22% of ethnic minorities who received cosmetic procedures in the United States in 2014. The cosmetic concerns and natural features of this ethnic population are unique and guided by structural and aging processes that differ from those of their white counterparts. As people of color increasingly seek nonsurgical cosmetic procedures, dermatologists and cosmetic surgeons must become aware that the Westernized look does not necessarily constitute beauty in these diverse people. The use of specialized aesthetic approaches and an understanding of cultural and ethnic-specific features are warranted in the treatment of these patients. This article will review the key principles to consider when treating African-American patients, including the average facial structure of African-Americans, the impact of ethnicity on facial aging and structure, and soft-tissue augmentation strategies specific to African-American skin.

  17. Facial feedback and autonomic responsiveness reflect impaired emotional processing in Parkinson's Disease.

    PubMed

    Balconi, Michela; Pala, Francesca; Manenti, Rosa; Brambilla, Michela; Cobelli, Chiara; Rosini, Sandra; Benussi, Alberto; Padovani, Alessandro; Borroni, Barbara; Cotelli, Maria

    2016-08-11

    Emotional deficits are part of the non-motor features of Parkinson's disease, but little attention has been paid to specific aspects such as subjective emotional experience and autonomic responses. This study aimed to investigate the mechanisms of emotional recognition in Parkinson's Disease (PD) at the following levels: explicit evaluation of emotions (Self-Assessment Manikin) and implicit reactivity (Skin Conductance Response; electromyographic measurement of facial feedback from the zygomaticus and corrugator muscles). Twenty PD patients and 34 healthy controls were required to observe and evaluate affective pictures while physiological parameters were recorded. In PD, the appraisal process for both the valence and arousal features of emotional cues was preserved, but we found significant impairment in autonomic responses. Specifically, in comparison to healthy controls, PD patients revealed lower Skin Conductance Response values to negative and highly arousing emotional stimuli. In addition, the electromyographic measures showed defective responses limited exclusively to the negative and highly arousing emotional category: PD patients did not show the increase in corrugator activity in response to negative emotions seen in healthy controls. PD subjects responded inadequately to the emotional categories considered more "salient": their appraisal process was preserved, but their automatic ability to distinguish between different emotional contexts was impaired.

  18. Recognizing Facial Slivers.

    PubMed

    Gilad-Gutnick, Sharon; Harmatz, Elia Samuel; Tsourides, Kleovoulos; Yovel, Galit; Sinha, Pawan

    2018-07-01

    We report here an unexpectedly robust ability of healthy human individuals (n = 40) to recognize extremely distorted needle-like facial images, challenging the well-entrenched notion that veridical spatial configuration is necessary for extracting facial identity. In face identification tasks of parametrically compressed internal and external features, we found that the sum of performances on each cue falls significantly short of performance on full faces, despite the equal visual information available from both measures (with full faces essentially being a superposition of internal and external features). We hypothesize that this large deficit stems from the use of positional information about how the internal features are positioned relative to the external features. To test this, we systematically changed the relations between internal and external features and found preferential encoding of vertical but not horizontal spatial relationships in facial representations (n = 20). Finally, we employ magnetoencephalography imaging (n = 20) to demonstrate a close mapping between the behavioral psychometric curve and the amplitude of the face-familiarity M250, but not the face-sensitive M170, evoked response field component, providing evidence that the M250 can be modulated by faces that are perceptually identifiable, irrespective of extreme distortions to the face's veridical configuration. We theorize that the tolerance to compressive distortions has evolved from the need to recognize faces across varying viewpoints. Our findings help clarify the important, but poorly defined, concept of facial configuration and also enable an association between behavioral performance and previously reported neural correlates of face perception.

  19. Recent Advances in Face Lift to Achieve Facial Balance.

    PubMed

    Ilankovan, Velupillai

    2017-03-01

    Facial balance is achieved by correction of the facial proportions and the facial contour. Ageing affects this balance, in addition to other factors. We have strived to describe all the recent advances in providing this balance. The anatomy of ageing, including various changes in clinical features, is described. The procedures are explained on the basis of the upper, middle and lower face. Different face lift and neck lift procedures with innovative techniques are demonstrated. The aim is to provide an unoperated, balanced facial proportion with zero complications.

  20. Hierarchical ensemble of global and local classifiers for face recognition.

    PubMed

    Su, Yu; Shan, Shiguang; Chen, Xilin; Gao, Wen

    2009-08-01

    In the literature of psychophysics and neurophysiology, many studies have shown that both global and local features are crucial for face representation and recognition. This paper proposes a novel face recognition method which exploits both global and local discriminative features. In this method, global features are extracted from the whole face images by keeping the low-frequency coefficients of Fourier transform, which we believe encodes the holistic facial information, such as facial contour. For local feature extraction, Gabor wavelets are exploited considering their biological relevance. After that, Fisher's linear discriminant (FLD) is separately applied to the global Fourier features and each local patch of Gabor features. Thus, multiple FLD classifiers are obtained, each embodying different facial evidences for face recognition. Finally, all these classifiers are combined to form a hierarchical ensemble classifier. We evaluate the proposed method using two large-scale face databases: FERET and FRGC version 2.0. Experiments show that the results of our method are impressively better than the best known results with the same evaluation protocol.
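
    The global branch of this method (low-frequency Fourier coefficients followed by Fisher's linear discriminant) can be sketched as below. The images, labels, and the number of retained coefficients are synthetic assumptions, and scikit-learn's LinearDiscriminantAnalysis stands in for FLD.

    ```python
    # Keep low-frequency 2-D Fourier coefficients of each face as a holistic
    # feature, then fit a Fisher-style linear discriminant over them.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    rng = np.random.default_rng(3)
    images = rng.normal(size=(100, 64, 64))   # 100 face images, 64x64 pixels
    labels = rng.integers(0, 10, size=100)    # 10 identities

    def global_fourier_features(img, keep=8):
        """Return magnitudes of the lowest-frequency FFT coefficients."""
        spectrum = np.fft.fftshift(np.fft.fft2(img))
        c = img.shape[0] // 2
        low = spectrum[c - keep:c + keep, c - keep:c + keep]
        return np.abs(low).ravel()

    X = np.array([global_fourier_features(img) for img in images])
    fld = LinearDiscriminantAnalysis().fit(X, labels)  # global-feature classifier
    # In the full method, analogous FLD classifiers over local Gabor patches
    # would be combined with this one into a hierarchical ensemble.
    ```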

  1. Image-based Analysis of Emotional Facial Expressions in Full Face Transplants.

    PubMed

    Bedeloglu, Merve; Topcu, Çagdas; Akgul, Arzu; Döger, Ela Naz; Sever, Refik; Ozkan, Ozlenen; Ozkan, Omer; Uysal, Hilmi; Polat, Ovunc; Çolak, Omer Halil

    2018-01-20

    In this study, the aim is to determine, from photographs, the degree of development of emotional expression in full face transplant patients, so that a rehabilitation process can be planned accordingly in later work. As envisaged, in full face transplant cases the determination of expressions can be confused or may not reach the level of the healthy control group. To perform the image-based analysis, a control group consisting of 9 healthy males, together with 2 full-face transplant patients, participated in the study. Appearance-based Gabor Wavelet Transform (GWT) and Local Binary Pattern (LBP) methods were adopted for recognizing the neutral expression and 6 emotional expressions: angry, scared, happy, hate, confused and sad. Feature extraction was carried out using both methods and a serial combination of them. In the performed expressions, the extracted features of the most distinctive zones of the facial area, the eye and mouth regions, were used to classify the emotions. The combination of these region features was also used to improve classifier performance. Control subjects' and transplant patients' ability to perform emotional expressions was determined with a K-nearest neighbor (KNN) classifier with region-specific and method-specific decision stages, and the results were compared with the healthy group. It was observed that transplant patients do not reflect some emotional expressions and that there were confusions among expressions.
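
    A minimal sketch of the described feature pipeline, assuming pre-cropped eye/mouth regions: Gabor wavelet responses and an LBP histogram are extracted, serially concatenated, and classified with KNN. All data, filter parameters, and region sizes here are illustrative assumptions.

    ```python
    # Serial GWT + LBP feature combination with a KNN decision stage.
    import numpy as np
    from scipy import ndimage
    from skimage.filters import gabor_kernel
    from skimage.feature import local_binary_pattern
    from sklearn.neighbors import KNeighborsClassifier

    def gwt_features(region, thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
        """Mean Gabor response magnitude at several orientations."""
        feats = []
        for theta in thetas:
            kernel = gabor_kernel(frequency=0.2, theta=theta)
            real = ndimage.convolve(region, np.real(kernel), mode="wrap")
            imag = ndimage.convolve(region, np.imag(kernel), mode="wrap")
            feats.append(np.hypot(real, imag).mean())
        return np.array(feats)

    def lbp_features(region, p=8, r=1):
        """Histogram of uniform LBP codes."""
        codes = local_binary_pattern((region * 255).astype(np.uint8), P=p, R=r,
                                     method="uniform")
        hist, _ = np.histogram(codes, bins=p + 2, range=(0, p + 2), density=True)
        return hist

    rng = np.random.default_rng(4)
    regions = rng.random((60, 32, 32))       # synthetic eye/mouth crops
    labels = rng.integers(0, 7, size=60)     # neutral + 6 emotional expressions

    X = np.array([np.concatenate([gwt_features(r), lbp_features(r)])
                  for r in regions])
    knn = KNeighborsClassifier(n_neighbors=3).fit(X, labels)
    ```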

  2. An easy game for frauds? Effects of professional experience and time pressure on passport-matching performance.

    PubMed

    Wirth, Benedikt Emanuel; Carbon, Claus-Christian

    2017-06-01

    Despite extensive research on unfamiliar face matching, little is known about factors that might affect matching performance in real-life scenarios. We conducted 2 experiments to investigate the effects of several such factors on unfamiliar face-matching performance in a passport-check scenario. In Experiment 1, we assessed the effect of professional experience on passport-matching performance. The matching performance of 96 German Federal Police officers working at Munich Airport was compared with that of 48 novices without specific face-matching experience. Police officers significantly outperformed novices, but nevertheless missed a high ratio of frauds. Moreover, the effects of manipulating specific facial features (with paraphernalia like glasses and jewelry, distinctive features like moles and scars, and hairstyle) and of variations in the physical distance between the faces being matched were investigated. Whereas manipulation of physical distance did not have a significant effect, manipulations of facial features impaired matching performance. In Experiment 2, passport-matching performance was assessed in relation to time constraints. Novices matched passports either without time constraints, or under a local time limit (which is typically used in laboratory studies), or under a global time limit (which usually occurs during real-life border controls). Time pressure (especially the global time limit) significantly impaired matching performance. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  3. Emotion Estimation Algorithm from Facial Image Analyses of e-Learning Users

    NASA Astrophysics Data System (ADS)

    Shigeta, Ayuko; Koike, Takeshi; Kurokawa, Tomoya; Nosu, Kiyoshi

    This paper proposes an emotion estimation algorithm based on facial images of e-Learning users. The algorithm's characteristics are as follows. The criteria used to relate an e-Learning user's emotion to a representative emotion were obtained from time-sequential analysis of users' facial expressions. By examining the emotions of the e-Learning users and the positional changes of their facial expressions in the experimental results, the following procedures are introduced to improve estimation reliability: (1) effective feature points are chosen for the emotion estimation; (2) subjects are divided into two groups by the change rates of the face feature points; (3) eigenvectors of the variance-covariance matrices are selected (cumulative contribution rate >= 95%); (4) emotion is calculated using the Mahalanobis distance.
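
    Steps (3) and (4) amount to modeling each emotion class by its mean and variance-covariance matrix and assigning the class with the smallest Mahalanobis distance. A minimal sketch with synthetic feature-point data (the class set and feature dimensionality are assumptions):

    ```python
    # Mahalanobis-distance emotion assignment over per-class statistics.
    import numpy as np

    rng = np.random.default_rng(5)
    emotions = ["joy", "surprise", "neutral"]
    train = {e: rng.normal(loc=i, size=(40, 6)) for i, e in enumerate(emotions)}

    # Per-class mean and inverse variance-covariance matrix.
    stats = {}
    for e, X in train.items():
        mu = X.mean(axis=0)
        cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
        stats[e] = (mu, cov_inv)

    def mahalanobis(x, mu, cov_inv):
        d = x - mu
        return float(np.sqrt(d @ cov_inv @ d))

    def estimate_emotion(x):
        return min(emotions, key=lambda e: mahalanobis(x, *stats[e]))

    print(estimate_emotion(rng.normal(loc=1, size=6)))  # likely "surprise"
    ```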

  4. Feature instructions improve face-matching accuracy

    PubMed Central

    Bindemann, Markus

    2018-01-01

    Identity comparisons of photographs of unfamiliar faces are prone to error but important for applied settings, such as person identification at passport control. Finding techniques to improve face-matching accuracy is therefore an important contemporary research topic. This study investigated whether matching accuracy can be improved by instruction to attend to specific facial features. Experiment 1 showed that instruction to attend to the eyebrows enhanced matching accuracy for optimized same-day same-race face pairs but not for other-race faces. By contrast, accuracy was unaffected by instruction to attend to the eyes, and declined with instruction to attend to ears. Experiment 2 replicated the eyebrow-instruction improvement with a different set of same-race faces, comprising both optimized same-day and more challenging different-day face pairs. These findings suggest that instruction to attend to specific features can enhance face-matching accuracy, but feature selection is crucial and generalization across face sets may be limited. PMID:29543822

  5. Cone beam tomographic study of facial structures characteristics at rest and wide smile, and their correlation with the facial types.

    PubMed

    Martins, Luciana Flaquer; Vigorito, Julio Wilson

    2013-01-01

    To determine the characteristics of facial soft tissues at rest and wide smile, and their possible relation to the facial type. We analyzed a sample of forty-eight young female adults, aged between 19.1 and 40 years (mean age, 30.9 years), who had balanced profiles and passive lip seal. Cone beam computed tomographies were performed at rest and wide smile postures on the entire sample, which was divided into three groups according to individual facial types. Soft tissue features of the lips, nose, zygoma and chin were analyzed in sagittal, axial and frontal tomographic views. No differences were observed in any of the facial type variables in the static analysis of facial structures at both rest and wide smile postures. Dynamic analysis showed that brachyfacial types are more sensitive to movement, presenting greater sagittal lip contraction. However, the lip movement produced by this type of face results in a narrow smile, with a smaller tooth exposure area compared with the other facial types. Findings pointed out that the position of the upper lip should be ahead of the lower lip, and the latter ahead of the pogonion. It was also found that the facial type does not impact the positioning of these structures. Additionally, cone beam computed tomography may be a valuable method for studying craniofacial features.

  6. The Associations between Visual Attention and Facial Expression Identification in Patients with Schizophrenia.

    PubMed

    Lin, I-Mei; Fan, Sheng-Yu; Huang, Tiao-Lai; Wu, Wan-Ting; Li, Shi-Ming

    2013-12-01

    Visual search is an important attention process that precedes information processing. Visual search also mediates the relationship between cognitive function (attention) and social cognition (such as facial expression identification). However, the association between visual attention and social cognition in patients with schizophrenia remains unknown. The purposes of this study were to examine the differences in visual search performance and facial expression identification between patients with schizophrenia and normal controls, and to explore the relationship between visual search performance and facial expression identification in patients with schizophrenia. Fourteen patients with schizophrenia (mean age = 46.36 ± 6.74) and 15 normal controls (mean age = 40.87 ± 9.33) participated in this study. A visual search task, including feature search and conjunction search, and the Japanese and Caucasian Facial Expressions of Emotion were administered. Patients with schizophrenia had worse visual search performance, in both feature search and conjunction search, than normal controls, as well as worse facial expression identification, especially for surprise and sadness. In addition, there were negative associations between visual search performance and facial expression identification in patients with schizophrenia, especially for surprise and sadness; this pattern was not shown in normal controls. Patients with schizophrenia who had visual search deficits showed impaired facial expression identification. Improving visual search ability and facial expression identification may improve their social functioning and interpersonal relationships.

  7. An Inner Face Advantage in Children's Recognition of Familiar Peers

    ERIC Educational Resources Information Center

    Ge, Liezhong; Anzures, Gizelle; Wang, Zhe; Kelly, David J.; Pascalis, Olivier; Quinn, Paul C.; Slater, Alan M.; Yang, Zhiliang; Lee, Kang

    2008-01-01

    Children's recognition of familiar own-age peers was investigated. Chinese children (4-, 8-, and 14-year-olds) were asked to identify their classmates from photographs showing the entire face, the internal facial features only, the external facial features only, or the eyes, nose, or mouth only. Participants from all age groups were familiar with…

  8. Artistic shaping of key facial features in children and adolescents.

    PubMed

    Sullivan, P K; Singer, D P

    2001-12-01

    Facial aesthetics can be enhanced by otoplasty, rhinoplasty and genioplasty. Excellent outcomes can be obtained given appropriate timing, patient selection, preoperative planning, and artistic sculpting of the region with the appropriate surgical technique. Choosing a patient with mature psychological, developmental, and anatomic features that are amenable to treatment in the pediatric population can be challenging, yet rewarding.

  9. Facial expression reconstruction on the basis of selected vertices of triangle mesh

    NASA Astrophysics Data System (ADS)

    Peszor, Damian; Wojciechowska, Marzena

    2016-06-01

    Facial expression reconstruction is an important issue in the field of computer graphics. While it is relatively easy to create an animation based on meshes constructed through video recordings, this kind of high-quality data is often not transferred to another model because of the lack of an intermediary, anthropometry-based way to do so. However, if a high-quality mesh is sampled with sufficient density, it is possible to use the obtained feature points to encode the shape of surrounding vertices in a way that can be easily transferred to another mesh with corresponding feature points. In this paper we present a method for obtaining information for the purpose of reconstructing changes in the facial surface on the basis of selected feature points.

  10. A system for tracking and recognizing pedestrian faces using a network of loosely coupled cameras

    NASA Astrophysics Data System (ADS)

    Gagnon, L.; Laliberté, F.; Foucher, S.; Branzan Albu, A.; Laurendeau, D.

    2006-05-01

    A face recognition module has been developed for an intelligent multi-camera video surveillance system. The module can recognize a pedestrian face in terms of six basic emotions and the neutral state. Face and facial feature detection (eyes, nasal root, nose and mouth) is first performed using cascades of boosted classifiers. These features are used to normalize the pose and dimensions of the face image. Gabor filters are then sampled on a regular grid covering the face image to build a facial feature vector that feeds a nearest neighbor classifier with a cosine distance similarity measure for facial expression interpretation and face model construction. A graphical user interface allows the user to adjust the module parameters.

  11. An Automatic Diagnosis Method of Facial Acne Vulgaris Based on Convolutional Neural Network.

    PubMed

    Shen, Xiaolei; Zhang, Jiachi; Yan, Chenjun; Zhou, Hong

    2018-04-11

    In this paper, we present a new automatic diagnosis method for facial acne vulgaris based on convolutional neural networks (CNNs), designed to overcome a shortcoming of previous methods: their inability to classify enough types of acne vulgaris. The core of our method is to extract image features with CNNs and perform classification with a classifier. A binary classifier of skin-and-non-skin is used to detect the skin area, and a seven-class classifier is used for the classification of facial acne vulgaris and healthy skin. In the experiments, we compare the effectiveness of our CNN and the VGG16 neural network, which is pre-trained on the ImageNet dataset. We use a ROC curve to evaluate the performance of the binary classifier and a normalized confusion matrix to evaluate the performance of the seven-class classifier. The results of our experiments show that the pre-trained VGG16 neural network is effective in extracting features from facial acne vulgaris images, and these features are very useful for the follow-up classifiers. Finally, we apply both classifiers, based on the pre-trained VGG16 neural network, to assist doctors in diagnosing facial acne vulgaris.
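
    A hedged sketch of the transfer-learning setup described above: a pre-trained VGG16 is used as a fixed feature extractor and a separate classifier is trained on the extracted features. The pooling choice, classifier head, and data are illustrative assumptions, not the authors' exact configuration.

    ```python
    # Use ImageNet-pretrained VGG16 features to train a downstream classifier.
    import numpy as np
    from tensorflow.keras.applications import VGG16
    from tensorflow.keras.applications.vgg16 import preprocess_input
    from sklearn.svm import SVC

    base = VGG16(weights="imagenet", include_top=False,
                 input_shape=(224, 224, 3), pooling="avg")
    base.trainable = False                        # use ImageNet features as-is

    images = np.random.rand(8, 224, 224, 3) * 255.0    # placeholder batch
    features = base.predict(preprocess_input(images))  # shape (8, 512)

    # These 512-D vectors would feed the seven-class acne/healthy-skin
    # classifier (and, analogously, the binary skin/non-skin classifier).
    labels = np.random.randint(0, 7, size=8)           # placeholder labels
    clf = SVC().fit(features, labels)
    ```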

  12. Automatic Recognition of Fetal Facial Standard Plane in Ultrasound Image via Fisher Vector.

    PubMed

    Lei, Baiying; Tan, Ee-Leng; Chen, Siping; Zhuo, Liu; Li, Shengli; Ni, Dong; Wang, Tianfu

    2015-01-01

    Acquisition of the standard plane is the prerequisite of biometric measurement and diagnosis during the ultrasound (US) examination. In this paper, a new algorithm is developed for the automatic recognition of the fetal facial standard planes (FFSPs) such as the axial, coronal, and sagittal planes. Specifically, densely sampled root scale invariant feature transform (RootSIFT) features are extracted and then encoded by Fisher vector (FV). The Fisher network with multi-layer design is also developed to extract spatial information to boost the classification performance. Finally, automatic recognition of the FFSPs is implemented by support vector machine (SVM) classifier based on the stochastic dual coordinate ascent (SDCA) algorithm. Experimental results using our dataset demonstrate that the proposed method achieves an accuracy of 93.27% and a mean average precision (mAP) of 99.19% in recognizing different FFSPs. Furthermore, the comparative analyses reveal the superiority of the proposed method based on FV over the traditional methods.
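
    The RootSIFT transform mentioned above is a small, well-known post-processing of SIFT descriptors: L1-normalize each descriptor and take the element-wise square root, so that Euclidean comparisons approximate the Hellinger kernel. A minimal sketch with placeholder descriptors:

    ```python
    # Convert raw SIFT descriptors (n x 128) to RootSIFT.
    import numpy as np

    def root_sift(descriptors, eps=1e-7):
        """L1-normalize each descriptor, then take the square root."""
        d = descriptors / (np.abs(descriptors).sum(axis=1, keepdims=True) + eps)
        return np.sqrt(d)

    sift = np.random.rand(500, 128)   # placeholder densely sampled descriptors
    rs = root_sift(sift)              # these would then be encoded by a
                                      # GMM-based Fisher vector and fed to SVM
    ```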

  13. Facial Expression Recognition using Multiclass Ensemble Least-Square Support Vector Machine

    NASA Astrophysics Data System (ADS)

    Lawi, Armin; Sya'Rani Machrizzandi, M.

    2018-03-01

    Facial expression is one of the behavioral characteristics of human beings. The use of a biometric technology system with facial expression characteristics makes it possible to recognize a person's mood or emotion. The basic components of a facial expression analysis system are face detection, face image extraction, facial classification and facial expression recognition. This paper uses the Principal Component Analysis (PCA) algorithm to extract facial features with expression parameters, i.e., happy, sad, neutral, angry, fearful, and disgusted. Then a Multiclass Ensemble Least-Squares Support Vector Machine (MELS-SVM) is used for the classification process of facial expression. The MELS-SVM model, evaluated on our 185 different expression images of 10 persons, showed a high accuracy of 99.998% using the RBF kernel.

  14. Facial Expression Recognition with Fusion Features Extracted from Salient Facial Areas.

    PubMed

    Liu, Yanpeng; Li, Yibin; Ma, Xin; Song, Rui

    2017-03-29

    In the pattern recognition domain, deep architectures are currently widely used and have achieved fine results. However, these deep architectures make particular demands, especially in terms of big datasets and GPUs. Aiming to gain better results without deep networks, we propose a simplified algorithm framework using fusion features extracted from the salient areas of faces; the proposed algorithm has achieved a better result than some deep architectures. To extract more effective features, this paper first defines the salient areas on the faces and normalizes salient areas of the same location in different faces to the same size, so that more similar features can be extracted from different subjects. LBP and HOG features are extracted from the salient areas, the dimensionality of the fusion features is reduced by Principal Component Analysis (PCA), and several classifiers are applied to classify the six basic expressions at once. This paper proposes a salient-area definition method that uses peak expression frames compared with neutral faces, and proposes and applies the idea of normalizing the salient areas to align the specific areas which express the different expressions; as a result, the salient areas found in different subjects are the same size. In addition, the gamma correction method is first applied to LBP features in our algorithm framework, which improves our recognition rates significantly. By applying this algorithm framework, our research has gained state-of-the-art performance on the CK+ database and JAFFE database.
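
    A hedged sketch of the fusion step: LBP histograms and HOG descriptors from size-normalized salient-area crops are concatenated and reduced with PCA. Crop sizes, LBP/HOG parameters, and the number of components are illustrative assumptions.

    ```python
    # LBP + HOG feature fusion with PCA dimensionality reduction.
    import numpy as np
    from skimage.feature import local_binary_pattern, hog
    from sklearn.decomposition import PCA

    def fused_features(region, p=8, r=1):
        """Concatenate a uniform-LBP histogram with a HOG descriptor."""
        codes = local_binary_pattern((region * 255).astype(np.uint8), P=p, R=r,
                                     method="uniform")
        lbp_hist, _ = np.histogram(codes, bins=p + 2, range=(0, p + 2),
                                   density=True)
        hog_vec = hog(region, orientations=9, pixels_per_cell=(8, 8),
                      cells_per_block=(2, 2))
        return np.concatenate([lbp_hist, hog_vec])

    rng = np.random.default_rng(6)
    salient = rng.random((50, 48, 48))        # normalized salient-area crops
    X = np.array([fused_features(s) for s in salient])
    X_reduced = PCA(n_components=30).fit_transform(X)  # fused, reduced features
    ```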

  15. Hybrid generative-discriminative approach to age-invariant face recognition

    NASA Astrophysics Data System (ADS)

    Sajid, Muhammad; Shafique, Tamoor

    2018-03-01

    Age-invariant face recognition is still a challenging research problem due to the complex aging process, which involves facial tissue types such as skin, fat, muscles, and bones. Most of the related studies that have addressed the aging problem are focused on generative representation (aging simulation) or discriminative representation (feature-based approaches). Designing an appropriate hybrid approach that takes into account both the generative and discriminative representations for age-invariant face recognition remains an open problem. We perform hybrid matching to achieve robustness to aging variations. This approach automatically segments the eyes, nose-bridge, and mouth regions, which are relatively less sensitive to aging variations than the rest of the facial regions, which are age-sensitive. The aging variations of age-sensitive facial parts are compensated using a demographic-aware generative model based on a bridged denoising autoencoder. The age-insensitive facial parts are represented by pixel average vector-based local binary patterns. Deep convolutional neural networks are used to extract relative features of age-sensitive and age-insensitive facial parts. Finally, the feature vectors of age-sensitive and age-insensitive facial parts are fused to achieve the recognition results. Extensive experimental results on the morphological face database II (MORPH II), the face and gesture recognition network (FG-NET) database, and the verification subset of the cross-age celebrity dataset (CACD-VS) demonstrate the effectiveness of the proposed method for age-invariant face recognition.
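
    The generative component rests on a denoising autoencoder; a minimal, unconditional sketch is shown below (the paper's model is demographic-aware and "bridged", which is omitted here). Layer sizes, the noise model, and training settings are assumptions.

    ```python
    # A minimal denoising autoencoder: reconstruct clean patches from
    # corrupted inputs; in the paper's setting, the "corruption" to bridge
    # is the aging variation of age-sensitive facial parts.
    import numpy as np
    from tensorflow.keras import layers, models

    dim = 1024                                  # flattened facial-part patch
    inputs = layers.Input(shape=(dim,))
    encoded = layers.Dense(128, activation="relu")(inputs)
    decoded = layers.Dense(dim, activation="sigmoid")(encoded)
    dae = models.Model(inputs, decoded)
    dae.compile(optimizer="adam", loss="mse")

    x = np.random.rand(256, dim)                # placeholder clean patches
    x_noisy = np.clip(x + 0.1 * np.random.randn(256, dim), 0.0, 1.0)
    dae.fit(x_noisy, x, epochs=2, batch_size=32, verbose=0)
    ```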

  16. Observer success rates for identification of 3D surface reconstructed facial images and implications for patient privacy and security

    NASA Astrophysics Data System (ADS)

    Chen, Joseph J.; Siddiqui, Khan M.; Fort, Leslie; Moffitt, Ryan; Juluru, Krishna; Kim, Woojin; Safdar, Nabile; Siegel, Eliot L.

    2007-03-01

    3D and multi-planar reconstruction of CT images have become indispensable in the routine practice of diagnostic imaging. These tools not only enhance our ability to diagnose diseases but can also assist in therapeutic planning. The technology utilized to create these reconstructions can also render surface reconstructions, which may have the undesired potential of providing sufficient detail to allow recognition of facial features and consequently patient identity, leading to violation of patient privacy rights as described in the HIPAA (Health Insurance Portability and Accountability Act) legislation. The purpose of this study is to evaluate whether 3D reconstructed images of a patient's facial features can indeed be used to reliably or confidently identify that specific patient. Surface reconstructed images of the study participants were created and used as candidates for matching with digital photographs of the participants. Data analysis was performed to determine the ability of observers to successfully match 3D surface reconstructed images of the face with facial photographs; the amount of time required to perform the match was recorded as well. We also plan to investigate the ability of digital masks or physical drapes to conceal patient identity. The recently expressed concerns over the inability to truly "anonymize" CT (and MRI) studies of the head/face/brain are yet to be tested in a prospective study. We believe that it is important to establish whether these reconstructed images are a "threat" to patient privacy/security and, if so, whether minimal interventions from a clinical perspective can substantially reduce this possibility.

  17. A Genome-Wide Association Study Identifies Five Loci Influencing Facial Morphology in Europeans

    PubMed Central

    Liu, Fan; van der Lijn, Fedde; Schurmann, Claudia; Zhu, Gu; Chakravarty, M. Mallar; Hysi, Pirro G.; Wollstein, Andreas; Lao, Oscar; de Bruijne, Marleen; Ikram, M. Arfan; van der Lugt, Aad; Rivadeneira, Fernando; Uitterlinden, André G.; Hofman, Albert; Niessen, Wiro J.; Homuth, Georg; de Zubicaray, Greig; McMahon, Katie L.; Thompson, Paul M.; Daboul, Amro; Puls, Ralf; Hegenscheid, Katrin; Bevan, Liisa; Pausova, Zdenka; Medland, Sarah E.; Montgomery, Grant W.; Wright, Margaret J.; Wicking, Carol; Boehringer, Stefan; Spector, Timothy D.; Paus, Tomáš; Martin, Nicholas G.; Biffar, Reiner; Kayser, Manfred

    2012-01-01

    Inter-individual variation in facial shape is one of the most noticeable phenotypes in humans, and it is clearly under genetic regulation; however, almost nothing is known about the genetic basis of normal human facial morphology. We therefore conducted a genome-wide association study for facial shape phenotypes in multiple discovery and replication cohorts, considering almost ten thousand individuals of European descent from several countries. Phenotyping of facial shape features was based on landmark data obtained from three-dimensional head magnetic resonance images (MRIs) and two-dimensional portrait images. We identified five independent genetic loci associated with different facial phenotypes, suggesting the involvement of five candidate genes—PRDM16, PAX3, TP63, C5orf50, and COL17A1—in the determination of the human face. Three of them have been implicated previously in vertebrate craniofacial development and disease, and the remaining two genes potentially represent novel players in the molecular networks governing facial development. Our finding at PAX3 influencing the position of the nasion replicates a recent GWAS of facial features. In addition to the reported GWA findings, we established links between common DNA variants previously associated with NSCL/P at 2p21, 8q24, 13q31, and 17q22 and normal facial-shape variations based on a candidate gene approach. Overall our study implies that DNA variants in genes essential for craniofacial development contribute with relatively small effect size to the spectrum of normal variation in human facial morphology. This observation has important consequences for future studies aiming to identify more genes involved in the human facial morphology, as well as for potential applications of DNA prediction of facial shape such as in future forensic applications. PMID:23028347

  18. Complexion Care; Cosmetology 2: 9207.02.

    ERIC Educational Resources Information Center

    Dade County Public Schools, Miami, FL.

    Requiring 135 hours of classroom-laboratory instruction, the course develops skill in giving facial treatments, including all massage manipulations, along with knowledge of the purpose of facials and of the related anatomy and physiology. The application of make-up for all types of skin and facial features is an integral part of the program. The…

  19. Chronic neuropathic facial pain after intense pulsed light hair removal. Clinical features and pharmacological management

    PubMed Central

    Párraga-Manzol, Gabriela; Sánchez-Torres, Alba; Moreno-Arias, Gerardo

    2015-01-01

    Intense Pulsed Light (IPL) photodepilation is usually performed as a hair removal method. The treatment should be indicated by a physician, depending on each patient and their characteristics. However, the use of laser devices by medical laypersons is frequent, and it can pose a risk of harm to patients. Most side effects associated with IPL photodepilation are transient and minimal and disappear without sequelae. However, permanent side effects can occur. Some of the complications are laser related, but many of them are caused by operator error or mismanagement. In this work, we report the clinical case of a patient who developed chronic neuropathic facial pain following IPL removal of unwanted hair on the upper lip. The specific diagnosis was painful post-traumatic trigeminal neuropathy, reference 13.1.2.3 according to the International Headache Society (IHS). Key words: Neuropathic facial pain, photodepilation, intense pulsed light. PMID:26535105

  20. What Do Infants See in Faces? ERP Evidence of Different Roles of Eyes and Mouth for Face Perception in 9-Month-Old Infants

    ERIC Educational Resources Information Center

    Key, Alexandra P. F.; Stone, Wendy; Williams, Susan M.

    2009-01-01

    The study examined whether face-specific perceptual brain mechanisms in 9-month-old infants are differentially sensitive to changes in individual facial features (eyes versus mouth) and whether sensitivity to such changes is related to infants' social and communicative skills. Infants viewed photographs of a smiling unfamiliar female face. On 30%…

  1. Judgment of Nasolabial Esthetics in Cleft Lip and Palate Is Not Influenced by Overall Facial Attractiveness.

    PubMed

    Kocher, Katharina; Kowalski, Piotr; Kolokitha, Olga-Elpis; Katsaros, Christos; Fudalej, Piotr S

    2016-05-01

    To determine whether judgment of nasolabial esthetics in cleft lip and palate (CLP) is influenced by overall facial attractiveness. Experimental study. University of Bern, Switzerland. Seventy-two fused images (36 of boys, 36 of girls) were constructed. Each image comprised (1) the nasolabial region of a treated child with complete unilateral CLP (UCLP) and (2) the external facial features, i.e., the face with masked nasolabial region, of a noncleft child. Photographs of the nasolabial region of six boys and six girls with UCLP representing a wide range of esthetic outcomes, i.e., from very good to very poor appearance, were randomly chosen from a sample of 60 consecutively treated patients in whom nasolabial esthetics had been rated in a previous study. Photographs of external facial features of six boys and six girls without UCLP with various esthetics were randomly selected from patients' files. Eight lay raters evaluated the fused images using a 100-mm visual analogue scale. Method reliability was assessed by reevaluation of fused images after >1 month. A regression model was used to analyze which elements of facial esthetics influenced the perception of nasolabial appearance. Method reliability was good. A regression analysis demonstrated that only the appearance of the nasolabial area affected the esthetic scores of fused images (coefficient = -11.44; P < .001; R² = 0.464). The appearance of the external facial features did not influence perceptions of fused images. Cropping facial images for assessment of nasolabial appearance in CLP seems unnecessary. Instead, esthetic evaluation can be performed on images of full faces.
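
    The regression reported above can be reproduced in miniature with ordinary least squares: regress the raters' VAS scores of the fused images on the esthetic grades of the two image components. A sketch with invented data follows, assuming (hypothetically) one grade per component per image.

        import numpy as np

        rng = np.random.default_rng(1)
        n = 72                                        # 72 fused images, as in the study
        nasolabial = rng.uniform(1, 5, n)             # grade of the cleft nasolabial component
        external = rng.uniform(1, 5, n)               # grade of the noncleft external face
        vas = 80 - 11.44 * nasolabial + rng.normal(0, 8, n)   # scores driven by nasolabial only

        X = np.column_stack([np.ones(n), nasolabial, external])
        coef, *_ = np.linalg.lstsq(X, vas, rcond=None)
        r2 = 1 - ((vas - X @ coef) ** 2).sum() / ((vas - vas.mean()) ** 2).sum()
        print("intercept, nasolabial, external:", coef.round(2), " R^2 =", round(r2, 3))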

  2. Brief Report: Infants Developing with ASD Show a Unique Developmental Pattern of Facial Feature Scanning

    ERIC Educational Resources Information Center

    Rutherford, M. D.; Walsh, Jennifer A.; Lee, Vivian

    2015-01-01

    Infants are interested in eyes, but look preferentially at mouths toward the end of the first year, when word learning begins. Language delays are characteristic of children developing with autism spectrum disorder (ASD). We measured how infants at risk for ASD, control infants, and infants who later reached ASD criterion scanned facial features.…

  3. Reading Faces: From Features to Recognition.

    PubMed

    Guntupalli, J Swaroop; Gobbini, M Ida

    2017-12-01

    Chang and Tsao recently reported that the monkey face patch system encodes facial identity in a space of facial features as opposed to exemplars. Here, we discuss how such coding might contribute to face recognition, emphasizing the critical role of learning and interactions with other brain areas for optimizing the recognition of familiar faces. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. Mapping the emotional face. How individual face parts contribute to successful emotion recognition.

    PubMed

    Wegrzyn, Martin; Vogt, Maria; Kireclioglu, Berna; Schneider, Julia; Kissler, Johanna

    2017-01-01

    Which facial features allow human observers to successfully recognize expressions of emotion? While the eyes and mouth have frequently been shown to be of high importance, research on facial action units has made more precise predictions about the areas involved in displaying each emotion. The present research investigated, at a fine-grained level, which physical features are most relied on when decoding facial expressions. In the experiment, individual faces expressing the basic emotions according to Ekman were hidden behind a mask of 48 tiles, which was sequentially uncovered. Participants were instructed to stop the sequence as soon as they recognized the facial expression and to assign it the correct label. For each part of the face, its contribution to successful recognition was computed, making it possible to visualize the importance of different face areas for each expression. Overall, observers relied mostly on the eye and mouth regions when successfully recognizing an emotion. Furthermore, the difference in the importance of eyes and mouth allowed the expressions to be grouped in a continuous space, ranging from sadness and fear (reliance on the eyes) to disgust and happiness (mouth). The face parts with the highest diagnostic value for expression identification were typically located in areas corresponding to action units from the facial action coding system. A similarity analysis of the usefulness of different face parts for expression recognition demonstrated that faces cluster according to the emotion they express, rather than by low-level physical features. Also, expressions relying more on the eyes or mouth region were in close proximity in the constructed similarity space. These analyses help to better understand how human observers process expressions of emotion, by delineating the mapping from facial features to psychological representation.
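
    The per-tile contribution measure described above can be approximated as the change in recognition accuracy when a given tile happened to be uncovered at the moment of response. A small sketch with simulated trials; the 6 x 8 tile layout is an assumption for illustration.

        import numpy as np

        rng = np.random.default_rng(2)
        n_trials, n_tiles = 400, 48
        # simulated data: which tiles were uncovered when the observer answered,
        # and whether the chosen emotion label was correct
        uncovered = rng.random((n_trials, n_tiles)) < 0.4
        correct = rng.random(n_trials) < 0.7

        base_rate = correct.mean()
        contribution = np.array([
            correct[uncovered[:, t]].mean() - base_rate   # accuracy gain when tile t was visible
            for t in range(n_tiles)
        ])
        heatmap = contribution.reshape(6, 8)              # assumed 6 x 8 face-mask layout
        print(np.round(heatmap, 2))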

  5. Mapping the emotional face. How individual face parts contribute to successful emotion recognition

    PubMed Central

    Wegrzyn, Martin; Vogt, Maria; Kireclioglu, Berna; Schneider, Julia; Kissler, Johanna

    2017-01-01

    Which facial features allow human observers to successfully recognize expressions of emotion? While the eyes and mouth have frequently been shown to be of high importance, research on facial action units has made more precise predictions about the areas involved in displaying each emotion. The present research investigated, at a fine-grained level, which physical features are most relied on when decoding facial expressions. In the experiment, individual faces expressing the basic emotions according to Ekman were hidden behind a mask of 48 tiles, which was sequentially uncovered. Participants were instructed to stop the sequence as soon as they recognized the facial expression and to assign it the correct label. For each part of the face, its contribution to successful recognition was computed, making it possible to visualize the importance of different face areas for each expression. Overall, observers relied mostly on the eye and mouth regions when successfully recognizing an emotion. Furthermore, the difference in the importance of eyes and mouth allowed the expressions to be grouped in a continuous space, ranging from sadness and fear (reliance on the eyes) to disgust and happiness (mouth). The face parts with the highest diagnostic value for expression identification were typically located in areas corresponding to action units from the facial action coding system. A similarity analysis of the usefulness of different face parts for expression recognition demonstrated that faces cluster according to the emotion they express, rather than by low-level physical features. Also, expressions relying more on the eyes or mouth region were in close proximity in the constructed similarity space. These analyses help to better understand how human observers process expressions of emotion, by delineating the mapping from facial features to psychological representation. PMID:28493921

  6. Learning the spherical harmonic features for 3-D face recognition.

    PubMed

    Liu, Peijiang; Wang, Yunhong; Huang, Di; Zhang, Zhaoxiang; Chen, Liming

    2013-03-01

    In this paper, a competitive method for 3-D face recognition (FR) using spherical harmonic features (SHF) is proposed. With this solution, 3-D face models are characterized by the energies contained in spherical harmonics of different frequencies, enabling the capture of both the gross shape and fine surface details of a 3-D facial surface. This is in clear contrast to most 3-D FR techniques, which are either holistic or feature based, using local features extracted from distinctive points. First, 3-D face models are converted to a canonical representation, namely a spherical depth map, from which SHF can be calculated. Then, considering the predictive contribution of each SHF feature, especially in the presence of facial expression and occlusion, feature selection methods are used to improve predictive performance and provide faster, more cost-effective predictors. Experiments were carried out on three public 3-D face datasets, SHREC2007, FRGC v2.0, and Bosphorus, with increasing difficulty in terms of facial expression, pose, and occlusion; the results demonstrate the effectiveness of the proposed method.
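
    The core SHF computation is a projection of the spherical depth map onto spherical harmonics, keeping the energy per frequency l. A rough sketch of that projection, assuming a toy depth map on a regular (colatitude, azimuth) grid; the paper's canonical-representation and feature-selection steps are omitted.

        import numpy as np
        from scipy.special import sph_harm

        # toy spherical depth map on a regular (colatitude, azimuth) grid
        n_th, n_az = 64, 128
        th = np.linspace(0, np.pi, n_th)                  # colatitude
        az = np.linspace(0, 2 * np.pi, n_az, endpoint=False)
        TH, AZ = np.meshgrid(th, az, indexing="ij")
        depth = np.cos(3 * TH) + 0.1 * np.sin(2 * AZ)     # stand-in for a face's depth map
        dA = (np.pi / n_th) * (2 * np.pi / n_az) * np.sin(TH)   # area element

        def sh_energies(l_max):
            """Energy per frequency l: sum over m of |<depth, Y_lm>|^2."""
            out = []
            for l in range(l_max + 1):
                coeffs = [np.sum(depth * np.conj(sph_harm(m, l, AZ, TH)) * dA)
                          for m in range(-l, l + 1)]
                out.append(sum(abs(c) ** 2 for c in coeffs))
            return np.array(out)

        shf = sh_energies(15)        # low l ~ gross shape, high l ~ fine surface detail
        print(shf.round(3))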

  7. Variation of facial features among three African populations: Body height match analyses.

    PubMed

    Taura, M G; Adamu, L H; Gudaji, A

    2017-01-01

    Body height is one of the variables that correlate with facial craniometry. Here we seek to discriminate three populations (Nigerians, Ugandans, and Kenyans) using facial craniometry within different categories of adult male body height. A total of 513 individuals participated, comprising 234 Nigerians, 169 Ugandans, and 110 Kenyans, with a mean age of 25.27 years (s = 5.13; range 18-40 years). Paired and unpaired facial features were measured using direct craniometry. Multivariate and stepwise discriminant function analyses were used to differentiate the three populations. The results showed significant overall facial differences among the three populations in all body height categories. Skull height, total facial height, outer canthal distance, exophthalmometry, right ear width, and nasal length differed significantly among the three populations irrespective of body height category. Other variables were sensitive to body height. Stepwise discriminant function analyses included a maximum of six variables for better discrimination between the three populations. The single best discriminator of the groups was total facial height; however, for body height >1.70 m the single best discriminator was nasal length. Most of the variables performed better on function 1, hence giving better discrimination than function 2. In conclusion, adult body height, in addition to other factors such as age, sex, and ethnicity, should be considered when making decisions based on facial craniometry. However, not all facial linear dimensions were sensitive to body height. Copyright © 2016 Elsevier GmbH. All rights reserved.
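
    The discriminant analysis itself is standard; below is a minimal analogue with scikit-learn's linear discriminant analysis on hypothetical craniometric measurements. The group sizes mirror the sample, but the measurement names and values are invented.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        rng = np.random.default_rng(3)
        n_per = [234, 169, 110]      # Nigerians, Ugandans, Kenyans, mirroring the sample sizes
        # invented group means (mm) for: total facial height, nasal length, outer canthal distance
        means = np.array([[120.0, 50.0, 95.0], [116.0, 48.0, 93.0], [118.0, 52.0, 94.0]])
        X = np.vstack([rng.normal(m, 4.0, size=(n, 3)) for n, m in zip(n_per, means)])
        y = np.repeat([0, 1, 2], n_per)

        lda = LinearDiscriminantAnalysis().fit(X, y)
        print("training accuracy:", round(lda.score(X, y), 3))
        functions = lda.transform(X)  # projections onto discriminant functions 1 and 2
        print("function scores shape:", functions.shape)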

  8. Photogrammetric Analysis of Attractiveness in Indian Faces

    PubMed Central

    Duggal, Shveta; Kapoor, DN; Verma, Santosh; Sagar, Mahesh; Lee, Yung-Seop; Moon, Hyoungjin

    2016-01-01

    Background The objective of this study was to assess the attractive facial features of the Indian population. We tried to evaluate subjective ratings of facial attractiveness and identify which facial aesthetic subunits were important for facial attractiveness. Methods A cross-sectional study of 150 subjects (referred to as candidates) was conducted. Frontal photographs were analyzed. An orthodontist, a prosthodontist, an oral surgeon, a dentist, an artist, a photographer and two laymen (estimators) subjectively evaluated candidates' faces using visual analog scale (VAS) scores. As an objective method for facial analysis, we used balanced angular proportional analysis (BAPA). Using SAS 10.1 (SAS Institute Inc.), Tukey's studentized range test and Pearson correlation analysis were performed to detect between-group differences in VAS scores (Experiment 1), to identify correlations between VAS scores and BAPA scores (Experiment 2), and to analyze the characteristic features of facial attractiveness and gender differences (Experiment 3); the significance level was set at P=0.05. Results Experiment 1 revealed some differences in VAS scores according to professional characteristics. In Experiment 2, BAPA scores were found to behave similarly to subjective ratings of facial beauty, but showed a relatively weak correlation coefficient with the VAS scores. Experiment 3 found that the decisive factors for facial attractiveness were different for men and women. Composite images of attractive Indian male and female faces were constructed. Conclusions Our photogrammetric study, statistical analysis, and average composite faces of an Indian population provide valuable information about subjective perceptions of facial beauty and attractive facial structures in the Indian population. PMID:27019809
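
    The weak association reported in Experiment 2 is a plain Pearson correlation between the objective and subjective scores. A sketch with invented BAPA and VAS values:

        import numpy as np
        from scipy.stats import pearsonr

        rng = np.random.default_rng(4)
        bapa = rng.uniform(60, 100, 150)             # objective BAPA score per candidate
        vas = 0.3 * bapa + rng.normal(0, 8, 150)     # subjective VAS ratings, weakly coupled

        r, p = pearsonr(bapa, vas)
        print(f"r = {r:.2f}, p = {p:.3g}")           # 'relatively weak' correlation analogue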

  9. Is NF-1 gene deletion the molecular mechanism of neurofibromatosis type 1 with distinctive facies?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leppig, K.A.; Stephens, K.G.; Viskochill, D.

    We have studied a patient with neurofibromatosis type 1 and unusual facial features using fluorescence in situ hybridization (FISH) and found that the patient had a deletion that minimally encompasses exons 2-11 of the NF-1 gene. The patient was one of two individuals initially described by Kaplan and Rosenblatt, who suggested that another condition aside from neurofibromatosis type 1 might account for the unusual facial features observed in these patients. FISH studies were performed using a P1 clone probe, P1-9, which contains exons 2-11 of the NF-1 gene, on chromosomes prepared from the patient. In all 20 metaphase cells analyzed, one of the chromosome 17 homologues was deleted for the P1-9 probe. Therefore, this patient had neurofibromatosis type 1 and unusual facial features as the result of a deletion that minimally includes exons 2-11 of the NF-1 gene. The extent of the deletion is being mapped by FISH and somatic cell hybrid analysis. The patient studied was a 7-year-old male with mild developmental delays, normal growth parameters, and physical findings consistent with neurofibromatosis type 1, including multiple cafe au lait spots, several cutaneous neurofibromas, and speckling of the irises. In addition, his unusual facial features consisted of telecanthus, antimongoloid slant of the palpebral fissures, a broad nasal base, low-set and mildly posteriorly rotated ears, thick helices, a high arched palate, a short and pointed chin, and a low posterior hairline. We propose that deletions of the NF-1 gene and/or contiguous genes are the etiology of neurofibromatosis type 1 with unusual facial features. This particular facial appearance was inherited from the patient's mother and has been described in other individuals with neurofibromatosis type 1. We are using FISH to rapidly screen patients with this phenotype for large deletions involving the NF-1 gene and flanking DNA sequences.

  10. Computer-Aided Recognition of Facial Attributes for Fetal Alcohol Spectrum Disorders.

    PubMed

    Valentine, Matthew; Bihm, Dustin C J; Wolf, Lior; Hoyme, H Eugene; May, Philip A; Buckley, David; Kalberg, Wendy; Abdul-Rahman, Omar A

    2017-12-01

    To compare the detection of facial attributes in 2-D images by computer-based facial recognition software against standard manual examination in fetal alcohol spectrum disorders (FASD). Participants were gathered from the Fetal Alcohol Syndrome Epidemiology Research database. Standard frontal and oblique photographs of children were obtained during a manual, in-person dysmorphology assessment. Images were submitted for facial analysis conducted by the facial dysmorphology novel analysis technology (an automated system), which assesses ratios of measurements between various facial landmarks to determine the presence of dysmorphic features. Manual blinded dysmorphology assessments were compared with those obtained via the computer-aided system. Areas under the curve for individual receiver-operating characteristic curves revealed the computer-aided system (0.88 ± 0.02) to be comparable to the manual method (0.86 ± 0.03) in detecting patients with FASD. Interestingly, cases of alcohol-related neurodevelopmental disorder (ARND) were identified more efficiently by the computer-aided system (0.84 ± 0.07) than by the manual method (0.74 ± 0.04). A facial gestalt analysis of patients with ARND also identified more generalized facial findings compared to the cardinal facial features seen in more severe forms of FASD. We found increased diagnostic accuracy for ARND with our computer-aided method. As this category has been historically difficult to diagnose, we believe our experiment demonstrates that facial dysmorphology novel analysis technology can potentially improve ARND diagnosis by introducing a standardized metric for recognizing FASD-associated facial anomalies. Earlier recognition of these patients will lead to earlier intervention with improved patient outcomes. Copyright © 2017 by the American Academy of Pediatrics.
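
    The comparison above reduces to computing the area under the ROC curve for each method's scores against the diagnostic labels. A sketch with simulated data (the labels, scores, and effect sizes are invented):

        import numpy as np
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(5)
        labels = rng.integers(0, 2, 200)                      # 1 = FASD, 0 = control
        auto = labels * 1.2 + rng.normal(0, 1, 200)           # automated system's score
        manual = labels * 1.0 + rng.normal(0, 1, 200)         # manual assessment's score

        print("computer-aided AUC:", round(roc_auc_score(labels, auto), 2))
        print("manual AUC:        ", round(roc_auc_score(labels, manual), 2))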

  11. Discrimination of gender using facial image with expression change

    NASA Astrophysics Data System (ADS)

    Kuniyada, Jun; Fukuda, Takahiro; Terada, Kenji

    2005-12-01

    Marketing research gives the managers of large department stores and small convenience stores information such as the ratio of male to female visitors and their age distribution, which they use to improve their management plans. However, this work is carried out manually and is a heavy burden on small stores. In this paper, the authors propose a method for discriminating between men and women by extracting differences in facial expression change from color facial images. In the field of image processing there are many methods for automatic recognition of individuals using moving or still facial images. However, it is very difficult to discriminate gender under the influence of hairstyle, clothes, etc. We therefore propose a method that is not affected by individual characteristics, such as the size and position of facial parts, by paying attention to changes of expression. The method requires two facial images: one with an expression and one expressionless. First, the facial surface region and the regions of facial parts such as the eyes, nose, and mouth are extracted from the facial image using hue and saturation information in the HSV color system and emphasized edge information. Next, features are extracted by calculating the rate of change of each facial part caused by the expression change. In the last step, the feature values of the input data are compared with those in a database, and the gender is discriminated. Experiments on laughing and smiling expressions gave good gender discrimination results.
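
    The gender decision in this method rests on rate-of-change features between an expressionless and an expressive image of the same face, matched against a database. A toy sketch of that idea with a nearest-neighbour match; the four "facial part" measurements and the gender-dependent change magnitudes are invented for illustration.

        import numpy as np

        rng = np.random.default_rng(6)

        def change_features(neutral, expressive):
            """Rate of change of each facial-part measurement between the
            expressionless and the expressive image."""
            return (expressive - neutral) / neutral

        # database: 100 people, 4 invented facial-part measurements, two states each
        labels = np.repeat([0, 1], 50)                # 0 = male, 1 = female
        neutral = rng.uniform(20, 60, size=(100, 4))
        expressive = neutral * (1.10 + 0.05 * labels[:, None] + rng.normal(0, 0.02, (100, 4)))
        db = change_features(neutral, expressive)

        def classify(q_neutral, q_expressive):
            f = change_features(q_neutral, q_expressive)
            nearest = np.argmin(np.linalg.norm(db - f, axis=1))
            return "female" if labels[nearest] else "male"

        q = rng.uniform(20, 60, 4)
        print(classify(q, q * 1.16))                  # larger change -> female-like here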

  12. Facial Features: What Women Perceive as Attractive and What Men Consider Attractive.

    PubMed

    Muñoz-Reyes, José Antonio; Iglesias-Julios, Marta; Pita, Miguel; Turiegano, Enrique

    2015-01-01

    Attractiveness plays an important role in social exchange and in the ability to attract potential mates, especially for women. Several facial traits have been described as reliable indicators of attractiveness in women, but very few studies consider the influence of several measurements simultaneously. In addition, most studies consider just one of two assessments to directly measure attractiveness: either self-evaluation or men's ratings. We explored the relationship between these two estimators of attractiveness and a set of facial traits in a sample of 266 young Spanish women. These traits are: facial fluctuating asymmetry, facial averageness, facial sexual dimorphism, and facial maturity. We took advantage of recently developed methodologies that enabled us to measure these variables in real faces. We also controlled for three other widely used variables: age, body mass index and waist-to-hip ratio. The inclusion of many different variables allowed us to detect any possible interaction between the features described that could affect attractiveness perception. Our results show that facial fluctuating asymmetry is related both to self-perceived and male-rated attractiveness. Other facial traits are related only to one direct attractiveness measurement: facial averageness and facial maturity only affect men's ratings. Unmodified faces are closer to natural stimuli than are manipulated photographs, and therefore our results support the importance of employing unmodified faces to analyse the factors affecting attractiveness. We also discuss the relatively low equivalence between self-perceived and male-rated attractiveness and how various anthropometric traits are relevant to them in different ways. Finally, we highlight the need to perform integrated-variable studies to fully understand female attractiveness.

  13. Facial Features: What Women Perceive as Attractive and What Men Consider Attractive

    PubMed Central

    Muñoz-Reyes, José Antonio; Iglesias-Julios, Marta; Pita, Miguel; Turiegano, Enrique

    2015-01-01

    Attractiveness plays an important role in social exchange and in the ability to attract potential mates, especially for women. Several facial traits have been described as reliable indicators of attractiveness in women, but very few studies consider the influence of several measurements simultaneously. In addition, most studies consider just one of two assessments to directly measure attractiveness: either self-evaluation or men's ratings. We explored the relationship between these two estimators of attractiveness and a set of facial traits in a sample of 266 young Spanish women. These traits are: facial fluctuating asymmetry, facial averageness, facial sexual dimorphism, and facial maturity. We took advantage of recently developed methodologies that enabled us to measure these variables in real faces. We also controlled for three other widely used variables: age, body mass index and waist-to-hip ratio. The inclusion of many different variables allowed us to detect any possible interaction between the features described that could affect attractiveness perception. Our results show that facial fluctuating asymmetry is related both to self-perceived and male-rated attractiveness. Other facial traits are related only to one direct attractiveness measurement: facial averageness and facial maturity only affect men's ratings. Unmodified faces are closer to natural stimuli than are manipulated photographs, and therefore our results support the importance of employing unmodified faces to analyse the factors affecting attractiveness. We also discuss the relatively low equivalence between self-perceived and male-rated attractiveness and how various anthropometric traits are relevant to them in different ways. Finally, we highlight the need to perform integrated-variable studies to fully understand female attractiveness. PMID:26161954

  14. A three-dimensional soft tissue analysis of Class III malocclusion: a case-controlled cross-sectional study.

    PubMed

    Johal, Ama; Chaggar, Amrit; Zou, Li Fong

    2018-03-01

    The present study used optical surface laser scanning to compare the facial features of patients aged 8-18 years presenting with Class I and Class III incisor relationships in a case-control design. Subjects with a Class III incisor relationship, aged 8-18 years, were age- and gender-matched with Class I controls and underwent a 3-dimensional (3-D) optical surface scan of the facial soft tissues. Landmark analysis revealed that Class III subjects displayed greater mean dimensions than the control group, most notably between the ages of 8-10 and 17-18 years in both males and females, in respect of antero-posterior (P = 0.01) and vertical (P = 0.006) facial dimensions. Surface-based analysis revealed the greatest difference in the lower facial region, followed by the mid-face, whilst the upper face remained fairly consistent. Significant detectable differences were found in the surface facial features of developing Class III subjects.

  15. Dissociable roles of internal feelings and face recognition ability in facial expression decoding.

    PubMed

    Zhang, Lin; Song, Yiying; Liu, Ling; Liu, Jia

    2016-05-15

    The problem of emotion recognition has been tackled by researchers in both affective computing and cognitive neuroscience. While affective computing relies on analyzing visual features from facial expressions, it has been proposed that humans recognize emotions by internally simulating the emotional states conveyed by others' expressions, in addition to perceptual analysis of facial features. Here we investigated whether and how our internal feelings contributed to the ability to decode facial expressions. In two independent large samples of participants, we observed that individuals who generally experienced richer internal feelings exhibited a higher ability to decode facial expressions, and the contribution of internal feelings was independent of face recognition ability. Further, using voxel-based morphometry, we found that the gray matter volume (GMV) of bilateral superior temporal sulcus (STS) and the right inferior parietal lobule was associated with facial expression decoding through the mediating effect of internal feelings, while the GMV of bilateral STS, precuneus, and the right central opercular cortex contributed to facial expression decoding through the mediating effect of face recognition ability. In addition, the clusters in bilateral STS involved in the two components were neighboring yet separate. Our results may provide clues about the mechanism by which internal feelings, in addition to face recognition ability, serve as an important instrument for humans in facial expression decoding. Copyright © 2016 Elsevier Inc. All rights reserved.

  16. Improving the Quality of Facial Composites Using a Holistic Cognitive Interview

    ERIC Educational Resources Information Center

    Frowd, Charlie D.; Bruce, Vicki; Smith, Ashley J.; Hancock, Peter J. B.

    2008-01-01

    Witnesses to and victims of serious crime are normally asked to describe the appearance of a criminal suspect, using a Cognitive Interview (CI), and to construct a facial composite, a visual representation of the face. Research suggests that focusing on the global aspects of a face, as opposed to its facial features, facilitates recognition and…

  17. Feature Selection on Hyperspectral Data for Dismount Skin Analysis

    DTIC Science & Technology

    2014-03-27

    [Document excerpt; extraction fragments from the report's table of contents and body.] Contents include 2.4.1 Melanosome Estimation and 2.4.2 Facial Recognition using hyperspectral imagery. The recoverable text notes that subjects normally require compliant interaction in order to establish their identification, that traditional facial recognition systems have been enhanced by hyperspectral imaging (HSI), that melanosome levels can be calculated as a fundamental method to differentiate between people, and that facial recognition has benefited from rich spectral information.

  18. Enhanced facial texture illumination normalization for face recognition.

    PubMed

    Luo, Yong; Guan, Ye-Peng

    2015-08-01

    An uncontrolled lighting condition is one of the most critical challenges for practical face recognition applications. An enhanced facial texture illumination normalization method is put forward to resolve this challenge. An adaptive relighting algorithm is developed to improve the brightness uniformity of face images. Facial texture is extracted by using an illumination estimation difference algorithm. An anisotropic histogram-stretching algorithm is proposed to minimize the intraclass distance of facial skin and maximize the dynamic range of facial texture distribution. Compared with the existing methods, the proposed method can more effectively eliminate the redundant information of facial skin and illumination. Extensive experiments show that the proposed method has superior performance in normalizing illumination variation and enhancing facial texture features for illumination-insensitive face recognition.
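
    The histogram-stretching component has a simple generic analogue: a percentile-based contrast stretch that expands the dynamic range of a grayscale face image. The sketch below shows only that generic operation, not the paper's anisotropic, skin-aware algorithm.

        import numpy as np

        def stretch(gray, low_pct=1, high_pct=99):
            """Percentile-based histogram stretch of a grayscale face image."""
            lo, hi = np.percentile(gray, [low_pct, high_pct])
            out = np.clip((gray.astype(float) - lo) / max(hi - lo, 1e-6), 0, 1)
            return (out * 255).astype(np.uint8)

        rng = np.random.default_rng(7)
        dim_face = (rng.random((64, 64)) * 60 + 40).astype(np.uint8)   # under-lit, range ~[40, 100]
        norm_face = stretch(dim_face)
        print(dim_face.min(), dim_face.max(), "->", norm_face.min(), norm_face.max())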

  19. Adaptive weighted local textural features for illumination, expression, and occlusion invariant face recognition

    NASA Astrophysics Data System (ADS)

    Cui, Chen; Asari, Vijayan K.

    2014-03-01

    Biometric features such as fingerprints, iris patterns, and facial features help to identify people and restrict access to secure areas through advanced pattern analysis and matching. Face recognition is one of the most promising biometric methodologies for human identification in a non-cooperative security environment. However, the recognition results obtained by face recognition systems are affected by several variations that may happen to the patterns in an unrestricted environment. As a result, several algorithms have been developed for extracting different facial features for face recognition. Due to the various possible challenges of data captured under different lighting conditions, viewing angles, facial expressions, and partial occlusions in natural environments, automatic facial recognition still remains a difficult issue to resolve. In this paper, we propose a novel approach to tackling some of these issues by analyzing local textural descriptions for facial feature representation. The textural information is extracted by an enhanced local binary pattern (ELBP) description of all the local regions of the face. The relationship of each pixel with respect to its neighborhood is extracted and employed to calculate the new representation. ELBP reconstructs a much better textural feature extraction vector from an original gray level image under different lighting conditions. The dimensionality of the texture image is reduced by principal component analysis performed on each local face region. Each low-dimensional vector representing a local region is then weighted based on the significance of the sub-region. The weight of each sub-region is determined from the local variance estimate of the respective region, which represents the significance of the region. The final facial textural feature vector is obtained by concatenating the reduced-dimensional weight sets of all the modules (sub-regions) of the face image. Experiments conducted on various popular face databases show promising performance of the proposed algorithm under varying lighting, expression, and partial occlusion conditions. Four databases were used for testing the performance of the proposed system: the Yale Face database, the Extended Yale Face database B, the Japanese Female Facial Expression database, and the CMU AMP Facial Expression database. The experimental results on all four databases show the effectiveness of the proposed system. The computation cost is also lower because of the simplified calculation steps. Research is progressing to investigate the effectiveness of the proposed face recognition method under pose-varying conditions as well. It is envisaged that a multilane approach of trained frameworks at different pose bins and an appropriate voting strategy would lead to a good recognition rate in such situations.
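
    The overall pipeline (local texture histograms per sub-region, per-region PCA, variance-based weighting, concatenation) can be sketched with a standard local binary pattern standing in for the paper's ELBP. Grid size, patch counts, and PCA dimensionality below are assumptions.

        import numpy as np
        from skimage.feature import local_binary_pattern
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(8)
        faces = rng.random((40, 64, 64))                  # stand-in gallery of grayscale faces

        def region_histograms(img, grid=4, P=8, R=1):
            """Plain LBP histograms per sub-region (standing in for ELBP)."""
            lbp = local_binary_pattern(img, P, R, method="uniform")
            step = img.shape[0] // grid
            hists = []
            for i in range(grid):
                for j in range(grid):
                    block = lbp[i * step:(i + 1) * step, j * step:(j + 1) * step]
                    h, _ = np.histogram(block, bins=P + 2, range=(0, P + 2), density=True)
                    hists.append(h)
            return np.array(hists)                        # (grid*grid, P+2)

        H = np.stack([region_histograms(f) for f in faces])   # (n_faces, 16, 10)

        parts = []
        for r in range(H.shape[1]):
            low = PCA(n_components=5).fit_transform(H[:, r, :])   # per-region reduction
            parts.append(H[:, r, :].var() * low)                  # local variance as region weight
        descriptor = np.concatenate(parts, axis=1)                # concatenated face descriptor
        print(descriptor.shape)                                   # (40, 80)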

  20. [Facial palsy].

    PubMed

    Cavoy, R

    2013-09-01

    Facial palsy is a daily challenge for clinicians. Determining whether facial nerve palsy is peripheral or central is a key step in the diagnosis. Central nervous system lesions can cause facial palsy that is usually easily differentiated from peripheral palsy. The next question is whether a peripheral facial paralysis is idiopathic or symptomatic; a good knowledge of the anatomy of the facial nerve is helpful here. A structured approach is given to identify additional features that distinguish symptomatic facial palsy from the idiopathic form. The main cause of peripheral facial palsy is the idiopathic form, Bell's palsy, which remains a diagnosis of exclusion. The most common cause of symptomatic peripheral facial palsy is Ramsay-Hunt syndrome. Early identification of symptomatic facial palsy is important because of its often worse outcome and different management. The prognosis of Bell's palsy is on the whole favorable and is improved with a prompt tapering course of prednisone. In Ramsay-Hunt syndrome, antiviral therapy is added to prednisone. We also discuss current treatment recommendations and review the short- and long-term complications of peripheral facial palsy.

  1. Comparing Facial 3D Analysis With DNA Testing to Determine Zygosities of Twins.

    PubMed

    Vuollo, Ville; Sidlauskas, Mantas; Sidlauskas, Antanas; Harila, Virpi; Salomskiene, Loreta; Zhurov, Alexei; Holmström, Lasse; Pirttiniemi, Pertti; Heikkinen, Tuomo

    2015-06-01

    The aim of this study was to compare facial 3D analysis with DNA testing in twin zygosity determination. Facial 3D images of 106 pairs of young adult Lithuanian twins were taken with a stereophotogrammetric device (3dMD, Atlanta, Georgia) and zygosity was determined according to similarity of facial form. Statistical pattern recognition methodology was used for classification. The results showed that in 75% to 90% of cases, zygosity determinations agreed with DNA-based results. There were 81 different classification scenarios, combining 3 groups, 3 features, 3 scaling methods, and 3 threshold levels. Coincidence with 0.5 mm tolerance proved the most suitable feature for classification. Also, leaving out scaling improved results in most cases. Scaling was expected to equalize the magnitude of differences and therefore lead to better recognition performance. Still, better classification features and a more effective scaling method, or classification in different facial areas, could further improve the results. In most cases, zygosity recognition for male pairs was better than for female pairs. Erroneously classified twin pairs appear to be obvious outliers in the sample. In particular, faces of young dizygotic (DZ) twins may be so similar that it is very hard to define a feature that would classify the pair as DZ. Correspondingly, monozygotic (MZ) twins may have faces with quite different shapes. Such anomalous twin pairs are interesting exceptions, but they form a considerable portion of both zygosity groups.
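
    The "0.5 mm tolerance" feature can be read as the fraction of corresponding surface points of two registered facial scans that lie within 0.5 mm of each other, thresholded to call a pair MZ or DZ. A toy sketch follows; the point correspondences, noise levels, and decision threshold are invented.

        import numpy as np

        def coincidence(a, b, tol=0.5):
            """Fraction of corresponding surface points within `tol` mm."""
            return (np.linalg.norm(a - b, axis=1) <= tol).mean()

        rng = np.random.default_rng(9)
        base = rng.random((5000, 3)) * 100                  # registered 3-D facial surface (mm)
        co_twin_mz = base + rng.normal(0, 0.2, base.shape)  # very similar co-twin
        co_twin_dz = base + rng.normal(0, 1.0, base.shape)  # less similar co-twin

        threshold = 0.6                                     # assumed decision threshold
        for name, twin in [("pair A", co_twin_mz), ("pair B", co_twin_dz)]:
            c = coincidence(base, twin)
            print(name, round(c, 2), "->", "MZ" if c >= threshold else "DZ")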

  2. Emotion categories and dimensions in the facial communication of affect: An integrated approach.

    PubMed

    Mehu, Marc; Scherer, Klaus R

    2015-12-01

    We investigated the role of facial behavior in emotional communication, using both categorical and dimensional approaches. We used a corpus of enacted emotional expressions (GEMEP) in which professional actors are instructed, with the help of scenarios, to communicate a variety of emotional experiences. The results of Study 1 replicated earlier findings showing that only a minority of facial action units are associated with specific emotional categories. Likewise, facial behavior did not show a specific association with particular emotional dimensions. Study 2 showed that facial behavior plays a significant role both in the detection of emotions and in the judgment of their dimensional aspects, such as valence, arousal, dominance, and unpredictability. In addition, a mediation model revealed that the association between facial behavior and recognition of the signaler's emotional intentions is mediated by perceived emotional dimensions. We conclude that, from a production perspective, facial action units convey neither specific emotions nor specific emotional dimensions, but are associated with several emotions and several dimensions. From the perceiver's perspective, facial behavior facilitated both dimensional and categorical judgments, and the former mediated the effect of facial behavior on recognition accuracy. The classification of emotional expressions into discrete categories may, therefore, rely on the perception of more general dimensions such as valence and arousal and, presumably, the underlying appraisals that are inferred from facial movements. (c) 2015 APA, all rights reserved.

  3. Plain faces are more expressive: comparative study of facial colour, mobility and musculature in primates

    PubMed Central

    Santana, Sharlene E.; Dobson, Seth D.; Diogo, Rui

    2014-01-01

    Facial colour patterns and facial expressions are among the most important phenotypic traits that primates use during social interactions. While colour patterns provide information about the sender's identity, expressions can communicate its behavioural intentions. Extrinsic factors, including social group size, have shaped the evolution of facial coloration and mobility, but intrinsic relationships and trade-offs likely operate in their evolution as well. We hypothesize that complex facial colour patterning could reduce how salient facial expressions appear to a receiver, and thus species with highly expressive faces would have evolved uniformly coloured faces. We test this hypothesis through a phylogenetic comparative study, and explore the underlying morphological factors of facial mobility. Supporting our hypothesis, we find that species with highly expressive faces have plain facial colour patterns. The number of facial muscles does not predict facial mobility; instead, species that are larger and have a larger facial nucleus have more expressive faces. This highlights a potential trade-off between facial mobility and colour patterning in primates and reveals complex relationships between facial features during primate evolution. PMID:24850898

  4. When Age Matters: Differences in Facial Mimicry and Autonomic Responses to Peers' Emotions in Teenagers and Adults

    PubMed Central

    Ardizzi, Martina; Sestito, Mariateresa; Martini, Francesca; Umiltà, Maria Alessandra; Ravera, Roberto; Gallese, Vittorio

    2014-01-01

    Age-group membership effects on explicit recognition of emotional facial expressions have been widely demonstrated. In this study we investigated whether age-group membership could also affect implicit physiological responses, such as facial mimicry and autonomic regulation, to the observation of emotional facial expressions. To this aim, facial electromyography (EMG) and respiratory sinus arrhythmia (RSA) were recorded from teenager and adult participants during the observation of facial expressions performed by teenager and adult models. Results highlighted that teenagers exhibited greater facial EMG responses to peers' facial expressions, whereas adults showed higher RSA responses to adult facial expressions. The different physiological modalities through which teenagers and adults respond to peers' emotional expressions are likely to reflect two different ways of engaging in social interactions with people of one's own age. The findings confirm that age is an important and powerful social feature that modulates interpersonal interactions by influencing low-level physiological responses. PMID:25337916

  5. Aspects of Facial Contrast Decrease with Age and Are Cues for Age Perception

    PubMed Central

    Porcheron, Aurélie; Mauger, Emmanuelle; Russell, Richard

    2013-01-01

    Age is a primary social dimension. We behave differently toward people as a function of how old we perceive them to be. Age perception relies on cues that are correlated with age, such as wrinkles. Here we report that aspects of facial contrast (the contrast between facial features and the surrounding skin) decreased with age in a large sample of adult Caucasian females. These same aspects of facial contrast were also significantly correlated with the perceived age of the faces. Individual faces were perceived as younger when these aspects of facial contrast were artificially increased, but older when they were artificially decreased. These findings show that facial contrast plays a role in age perception, and that faces with greater facial contrast look younger. Because facial contrast is increased by typical cosmetics use, we infer that cosmetics function in part by making the face appear younger. PMID:23483959
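
    Facial contrast in this sense can be computed from the mean luminance of a feature region and of the surrounding skin, for example as a Michelson-style ratio. A sketch with invented luminance samples, arranged to show the kind of contrast decrease the study reports:

        import numpy as np

        def facial_contrast(feature_lum, skin_lum):
            """Michelson-style contrast between a facial feature (e.g. lips or
            brows) and the surrounding skin, from mean luminance per region."""
            f, s = feature_lum.mean(), skin_lum.mean()
            return (s - f) / (s + f)

        rng = np.random.default_rng(10)
        lips_young, skin_young = rng.normal(90, 5, 500), rng.normal(160, 5, 2000)
        lips_older, skin_older = rng.normal(120, 5, 500), rng.normal(150, 5, 2000)

        print("younger face:", round(facial_contrast(lips_young, skin_young), 3))
        print("older face:  ", round(facial_contrast(lips_older, skin_older), 3))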

  6. Antenatal diagnosis of complete facial duplication--a case report of a rare craniofacial defect.

    PubMed

    Rai, V S; Gaffney, G; Manning, N; Pirrone, P G; Chamberlain, P F

    1998-06-01

    We report a case of the prenatal sonographic detection of facial duplication, the diprosopus abnormality, in a twin pregnancy. The characteristic sonographic features of the condition include duplication of eyes, mouth, nose and both mid- and anterior intracranial structures. A heart-shaped abnormality of the cranial vault should prompt more detailed examination for other supportive features of this rare condition.

  7. Hallermann-Streiff Syndrome

    PubMed Central

    Thomas, Jayakar; Ragavi, B Sindhu; Raneesha, PK; Ahmed, N Ashwak; Cynthia, S; Manoharan, D; Manoharan, R

    2013-01-01

    Hallermann-Streiff syndrome (HSS) is a rare disorder characterized by dyscephalia, with facial and dental abnormalities. We report a 12-year-old female child who presented with abnormal facial features, dental abnormalities and sparse scalp hair. PMID:24082185

  8. AAEM case report #26: seventh cranial neuropathy.

    PubMed

    Gilchrist, J M

    1993-05-01

    A 25-year-old man with acute, bilateral facial palsies is presented. He had a lymphocytic meningitis, history of tick bites, and lived in an area endemic for Lyme disease, which was ultimately confirmed by serology. Electrodiagnostic investigation included facial motor nerve study, blink reflex and electromyography of facial muscles, which were indicative of a neurapraxic lesion on the right and an axonopathic lesion on the left. The clinical course was consistent with these findings as the right side fully recovered and the left remained plegic. The clinical features of Lyme associated facial neuritis are reviewed, as is the electrodiagnostic evaluation of facial palsy.

  9. Soft-tissue facial characteristics of attractive Chinese men compared to normal men.

    PubMed

    Wu, Feng; Li, Junfang; He, Hong; Huang, Na; Tang, Youchao; Wang, Yuanqing

    2015-01-01

    To compare the facial characteristics of attractive Chinese men with those of reference men. The three-dimensional coordinates of 50 facial landmarks were collected in 40 healthy reference men and in 40 "attractive" men; soft tissue facial angles, distances, areas, and volumes were computed and compared using analysis of variance. When compared with reference men, attractive men shared several similar facial characteristics: a relatively large forehead, reduced mandible, and rounded face. They had a more acute soft tissue profile, an increased upper facial width and middle facial depth, a larger mouth, and more voluminous lips than reference men. Attractive men had several facial characteristics suggesting babyness; nonetheless, each group of men was characterized by a different development of these features. Esthetic reference values can be a useful tool for clinicians but should always consider the characteristics of individual faces.

  10. The distinguishing motor features of cataplexy: a study from video-recorded attacks.

    PubMed

    Pizza, Fabio; Antelmi, Elena; Vandi, Stefano; Meletti, Stefano; Erro, Roberto; Baumann, Christian R; Bhatia, Kailash P; Dauvilliers, Yves; Edwards, Mark J; Iranzo, Alex; Overeem, Sebastiaan; Tinazzi, Michele; Liguori, Rocco; Plazzi, Giuseppe

    2018-05-01

    To describe the motor pattern of cataplexy and to determine its phenomenological differences from pseudocataplexy in the differential diagnosis of episodic falls. We selected 30 video-recorded cataplexy and 21 pseudocataplexy attacks in 17 and 10 patients evaluated for suspected narcolepsy, with final diagnoses of narcolepsy type 1 and conversion disorder, respectively, together with self-reported attack features, and asked expert neurologists to blindly evaluate the motor features of the attacks. Video-documented and self-reported attack features of cataplexy and pseudocataplexy were contrasted. Video-recorded cataplexy can be positively differentiated from pseudocataplexy by the occurrence of facial hypotonia (ptosis, mouth opening, tongue protrusion) intermingled with jerks and grimaces abruptly interrupting laughter behavior (i.e., smile, facial expression) and postural control (head drops, trunk fall) under a clear emotional trigger. Facial involvement is present in both partial and generalized cataplexy. Conversely, generalized pseudocataplexy is associated with persistence of deep tendon reflexes during the attack. Self-reported features confirmed the important role of positive emotions (laughter, telling a joke) in triggering the attacks, as well as the more frequent occurrence of partial body involvement in cataplexy compared with pseudocataplexy. Cataplexy is characterized by abrupt facial involvement during laughter behavior. Video recording of suspected cataplexy attacks allows the identification of positive clinical signs useful for diagnosis and, possibly in the future, for severity assessment.

  11. ADCY5-related dyskinesia

    PubMed Central

    Chen, Dong-Hui; Méneret, Aurélie; Friedman, Jennifer R.; Korvatska, Olena; Gad, Alona; Bonkowski, Emily S.; Stessman, Holly A.; Doummar, Diane; Mignot, Cyril; Anheim, Mathieu; Bernes, Saunder; Davis, Marie Y.; Damon-Perrière, Nathalie; Degos, Bertrand; Grabli, David; Gras, Domitille; Hisama, Fuki M.; Mackenzie, Katherine M.; Swanson, Phillip D.; Tranchant, Christine; Vidailhet, Marie; Winesett, Steven; Trouillard, Oriane; Amendola, Laura M.; Dorschner, Michael O.; Weiss, Michael; Eichler, Evan E.; Torkamani, Ali; Roze, Emmanuel

    2015-01-01

    Objective: To investigate the clinical spectrum and distinguishing features of adenylate cyclase 5 (ADCY5)–related dyskinesia and genotype–phenotype relationship. Methods: We analyzed ADCY5 in patients with choreiform or dystonic movements by exome or targeted sequencing. Suspected mosaicism was confirmed by allele-specific amplification. We evaluated clinical features in our 50 new and previously reported cases. Results: We identified 3 new families and 12 new sporadic cases with ADCY5 mutations. These mutations cause a mixed hyperkinetic disorder that includes dystonia, chorea, and myoclonus, often with facial involvement. The movements are sometimes painful and show episodic worsening on a fluctuating background. Many patients have axial hypotonia. In 2 unrelated families, a p.A726T mutation in the first cytoplasmic domain (C1) causes a relatively mild disorder of prominent facial and hand dystonia and chorea. Mutations p.R418W or p.R418Q in C1, de novo in 13 individuals and inherited in 1, produce a moderate to severe disorder with axial hypotonia, limb hypertonia, paroxysmal nocturnal or diurnal dyskinesia, chorea, myoclonus, and intermittent facial dyskinesia. Somatic mosaicism is usually associated with a less severe phenotype. In one family, a p.M1029K mutation in the C2 domain causes severe dystonia, hypotonia, and chorea. The progenitor, whose childhood-onset episodic movement disorder almost disappeared in adulthood, was mosaic for the mutation. Conclusions: ADCY5-related dyskinesia is a childhood-onset disorder with a wide range of hyperkinetic abnormal movements. Genotype-specific correlations and mosaicism play important roles in the phenotypic variability. Recurrent mutations suggest particular functional importance of residues 418 and 726 in disease pathogenesis. PMID:26537056

  12. ADCY5-related dyskinesia: Broader spectrum and genotype-phenotype correlations.

    PubMed

    Chen, Dong-Hui; Méneret, Aurélie; Friedman, Jennifer R; Korvatska, Olena; Gad, Alona; Bonkowski, Emily S; Stessman, Holly A; Doummar, Diane; Mignot, Cyril; Anheim, Mathieu; Bernes, Saunder; Davis, Marie Y; Damon-Perrière, Nathalie; Degos, Bertrand; Grabli, David; Gras, Domitille; Hisama, Fuki M; Mackenzie, Katherine M; Swanson, Phillip D; Tranchant, Christine; Vidailhet, Marie; Winesett, Steven; Trouillard, Oriane; Amendola, Laura M; Dorschner, Michael O; Weiss, Michael; Eichler, Evan E; Torkamani, Ali; Roze, Emmanuel; Bird, Thomas D; Raskind, Wendy H

    2015-12-08

    To investigate the clinical spectrum and distinguishing features of adenylate cyclase 5 (ADCY5)-related dyskinesia and genotype-phenotype relationship. We analyzed ADCY5 in patients with choreiform or dystonic movements by exome or targeted sequencing. Suspected mosaicism was confirmed by allele-specific amplification. We evaluated clinical features in our 50 new and previously reported cases. We identified 3 new families and 12 new sporadic cases with ADCY5 mutations. These mutations cause a mixed hyperkinetic disorder that includes dystonia, chorea, and myoclonus, often with facial involvement. The movements are sometimes painful and show episodic worsening on a fluctuating background. Many patients have axial hypotonia. In 2 unrelated families, a p.A726T mutation in the first cytoplasmic domain (C1) causes a relatively mild disorder of prominent facial and hand dystonia and chorea. Mutations p.R418W or p.R418Q in C1, de novo in 13 individuals and inherited in 1, produce a moderate to severe disorder with axial hypotonia, limb hypertonia, paroxysmal nocturnal or diurnal dyskinesia, chorea, myoclonus, and intermittent facial dyskinesia. Somatic mosaicism is usually associated with a less severe phenotype. In one family, a p.M1029K mutation in the C2 domain causes severe dystonia, hypotonia, and chorea. The progenitor, whose childhood-onset episodic movement disorder almost disappeared in adulthood, was mosaic for the mutation. ADCY5-related dyskinesia is a childhood-onset disorder with a wide range of hyperkinetic abnormal movements. Genotype-specific correlations and mosaicism play important roles in the phenotypic variability. Recurrent mutations suggest particular functional importance of residues 418 and 726 in disease pathogenesis. © 2015 American Academy of Neurology.

  13. Affective Computing and the Impact of Gender and Age

    PubMed Central

    Rukavina, Stefanie; Gruss, Sascha; Hoffmann, Holger; Tan, Jun-Wen; Walter, Steffen; Traue, Harald C.

    2016-01-01

    Affective computing aims at the detection of users’ mental states, in particular, emotions and dispositions during human-computer interactions. Detection can be achieved by measuring multimodal signals, namely, speech, facial expressions and/or psychobiology. Over the past years, one major approach was to identify the best features for each signal using different classification methods. Although this is of high priority, other subject-specific variables should not be neglected. In our study, we analyzed the effect of gender, age, personality and gender roles on the extracted psychobiological features (derived from skin conductance level, facial electromyography and heart rate variability) as well as the influence on the classification results. In an experimental human-computer interaction, five different affective states with picture material from the International Affective Picture System and ULM pictures were induced. A total of 127 subjects participated in the study. Among all potentially influencing variables (gender has been reported to be influential), age was the only variable that correlated significantly with psychobiological responses. In summary, the conducted classification processes resulted in 20% classification accuracy differences according to age and gender, especially when comparing the neutral condition with four other affective states. We suggest taking age and gender specifically into account for future studies in affective computing, as these may lead to an improvement of emotion recognition accuracy. PMID:26939129

  14. Comparison of facial features of DiGeorge syndrome (DGS) due to deletion 10p13-10pter with DGS due to 22q11 deletion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goodship, J.; Lynch, S.; Brown, J.

    1994-09-01

    DiGeorge syndrome (DGS) is a congenital anomaly consisting of cardiac defects, aplasia or hypoplasia of the thymus and parathyroid glands, and dysmorphic facial features. The majority of DGS cases have a submicroscopic deletion within chromosome 22q11. However, there have been a number of reports of DGS in association with other chromosomal abnormalities, including four cases with chromosome 10p deletions. We describe a further 10p deletion case and suggest that the facial features in children with DGS due to deletions of 10p are different from those associated with chromosome 22 deletions. The propositus was born at 39 weeks gestation to unrelated caucasian parents, birth weight 2580 g (10th centile), and was noted to be dysmorphic and cyanosed shortly after birth. The main dysmorphic facial features were a broad nasal bridge with very short palpebral fissures. Echocardiography revealed a large subaortic VSD and overriding aorta. She had low ionised calcium and low parathyroid hormone levels. T cell subsets and PHA response were normal. Abdominal ultrasound showed duplex kidneys, and on further investigation she was found to have reflux and raised plasma creatinine. She had an anteriorly placed anus. Her karyotype was 46,XX,-10,+der(10)t(3;10)(p23;p13)mat. The dysmorphic facial features in this baby are strikingly similar to those noted by Bridgeman and Butler in a child with DGS as the result of a 10p deletion, and distinct from the face seen in children with DiGeorge syndrome resulting from interstitial chromosome 22 deletions.

  15. Real-time face and gesture analysis for human-robot interaction

    NASA Astrophysics Data System (ADS)

    Wallhoff, Frank; Rehrl, Tobias; Mayer, Christoph; Radig, Bernd

    2010-05-01

    Human communication relies on a large number of different communication mechanisms, such as spoken language, facial expressions, and gestures. Facial expressions and gestures are among the main nonverbal communication mechanisms and pass large amounts of information between human dialog partners. Therefore, to allow for intuitive human-machine interaction, real-time capable processing and recognition of facial expressions and hand and head gestures are of great importance. We present a system that tackles these challenges. The input features for the dynamic head gestures and facial expressions are obtained from a sophisticated three-dimensional model, which is fitted to the user in a real-time capable manner. Applying this model, different kinds of information are extracted from the image data and afterwards handed over to a real-time capable data-transferring framework, the so-called Real-Time DataBase (RTDB). In addition to the head- and facial-related features, low-level image features of the human hand (optical flow, Hu moments) are also stored in the RTDB for the evaluation of hand gestures. In general, the input of a single camera is sufficient for the parallel evaluation of the different gestures and facial expressions. The real-time capable recognition of the dynamic hand and head gestures is performed via different Hidden Markov Models, which have proven to be a quick and real-time capable classification method. For the facial expressions, classical decision trees or more sophisticated support vector machines are used for classification. The results of the classification processes are again handed over to the RTDB, where other processes (like a Dialog Management Unit) can easily access them without any blocking effects. In addition, an adjustable amount of history can be stored by the RTDB buffer unit.
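
    The gesture-classification step (one hidden Markov model per gesture, decision by maximum log-likelihood) can be sketched with the hmmlearn package; the two gesture classes and the per-frame features below are invented stand-ins for the system's optical-flow and Hu-moment features.

        import numpy as np
        from hmmlearn.hmm import GaussianHMM

        rng = np.random.default_rng(11)

        def sequences(drift, n_seq=20, T=30):
            """Invented per-frame 2-D features for one gesture class."""
            return [np.cumsum(rng.normal(drift, 0.5, size=(T, 2)), axis=0)
                    for _ in range(n_seq)]

        train = {"nod": sequences(+0.3), "shake": sequences(-0.3)}

        models = {}                                   # one HMM per gesture class
        for name, seqs in train.items():
            X, lengths = np.vstack(seqs), [len(s) for s in seqs]
            models[name] = GaussianHMM(n_components=3, n_iter=20,
                                       random_state=0).fit(X, lengths)

        # classify a new sequence by the highest log-likelihood
        test = np.cumsum(rng.normal(0.3, 0.5, size=(30, 2)), axis=0)
        scores = {name: m.score(test) for name, m in models.items()}
        print("recognized gesture:", max(scores, key=scores.get))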

  16. Individual differences in Scanpaths correspond with serotonin transporter genotype and behavioral phenotype in rhesus monkeys (Macaca mulatta).

    PubMed

    Gibboni, Robert R; Zimmerman, Prisca E; Gothard, Katalin M

    2009-01-01

    Scanpaths (the succession of fixations and saccades during spontaneous viewing) contain information about the image but also about the viewer. To determine the viewer-dependent factors in the scanpaths of monkeys, we trained three adult males (Macaca mulatta) to look for 3 s at images of conspecific facial expressions with either direct or averted gaze. The subjects showed significant differences on four basic scanpath parameters (number of fixations, fixation duration, saccade length, and total scanpath length) when viewing the same facial expression/gaze direction combinations. Furthermore, we found differences between monkeys in feature preference and in the temporal order in which features were visited on different facial expressions. Overall, the between-subject variability was larger than the within-subject variability, suggesting that scanpaths reflect individual preferences in allocating visual attention to various features in aggressive, neutral, and appeasing facial expressions. Individual scanpath characteristics were brought into register with the genotype for the serotonin transporter regulatory gene (5-HTTLPR) and with behavioral characteristics such as expression of anticipatory anxiety and impulsiveness/hesitation in approaching food in the presence of a potentially dangerous object.
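
    The four basic scanpath parameters compared above are straightforward to compute from a list of fixation coordinates and durations. A sketch with invented fixation data:

        import numpy as np

        def scanpath_parameters(fix_xy, fix_dur):
            """Number of fixations, mean fixation duration, mean saccade length,
            and total scanpath length, the four parameters compared above."""
            saccades = np.linalg.norm(np.diff(fix_xy, axis=0), axis=1)
            return {"n_fixations": len(fix_xy),
                    "mean_fix_duration_ms": float(fix_dur.mean()),
                    "mean_saccade_len_px": float(saccades.mean()),
                    "total_scanpath_len_px": float(saccades.sum())}

        rng = np.random.default_rng(12)
        fix_xy = rng.uniform(0, 400, size=(12, 2))    # fixation coordinates over a 3-s trial
        fix_dur = rng.uniform(120, 350, size=12)      # fixation durations (ms)
        print(scanpath_parameters(fix_xy, fix_dur))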

  17. Reverse engineering the face space: Discovering the critical features for face identification.

    PubMed

    Abudarham, Naphtali; Yovel, Galit

    2016-01-01

    How do we identify people? What are the critical facial features that define an identity and determine whether two faces belong to the same person or different people? To answer these questions, we applied the face space framework, according to which faces are represented as points in a multidimensional feature space, such that face space distances are correlated with perceptual similarities between faces. In particular, we developed a novel method that allowed us to reveal the critical dimensions (i.e., critical features) of the face space. To that end, we constructed a concrete face space, which included 20 facial features of natural face images, and asked human observers to evaluate feature values (e.g., how thick are the lips). Next, we systematically and quantitatively changed facial features, and measured the perceptual effects of these manipulations. We found that critical features were those for which participants have high perceptual sensitivity (PS) for detecting differences across identities (e.g., which of two faces has thicker lips). Furthermore, these high-PS features vary minimally across different views of the same identity, suggesting that high-PS features support face recognition across different images of the same face. The methods described here provide an infrastructure for discovering the critical features of other face categories not studied here (e.g., Asian or familiar faces) as well as other aspects of face processing, such as attractiveness or trait inferences.
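
    The face space framework above lends itself to a compact sketch: faces become feature vectors, and a weighted distance (with weights reflecting perceptual sensitivity, so high-PS features count for more) proxies perceptual dissimilarity. The 20-dimensional vectors and the weighting scheme are illustrative assumptions, not the authors' exact model.

    import numpy as np

    def face_distance(face_a, face_b, ps_weights):
        """Weighted Euclidean distance in a concrete face space."""
        diff = np.asarray(face_a, float) - np.asarray(face_b, float)
        return float(np.sqrt(np.sum(ps_weights * diff ** 2)))

    # Two hypothetical faces rated on 20 features (e.g., lip thickness),
    # with per-feature perceptual-sensitivity weights.
    rng = np.random.default_rng(0)
    face_a, face_b, ps = rng.random(20), rng.random(20), rng.random(20)
    print(face_distance(face_a, face_b, ps))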

  18. Sad Facial Expressions Increase Choice Blindness

    PubMed Central

    Wang, Yajie; Zhao, Song; Zhang, Zhijie; Feng, Wenfeng

    2018-01-01

    Previous studies have discovered a fascinating phenomenon known as choice blindness—individuals fail to detect mismatches between the face they choose and the face replaced by the experimenter. Although previous studies have reported a couple of factors that can modulate the magnitude of choice blindness, the potential effect of facial expression on choice blindness has not yet been explored. Using faces with sad and neutral expressions (Experiment 1) and faces with happy and neutral expressions (Experiment 2) in the classic choice blindness paradigm, the present study investigated the effects of facial expressions on choice blindness. The results showed that the detection rate was significantly lower for sad faces than neutral faces, whereas no significant difference was observed between happy faces and neutral faces. The exploratory analysis of verbal reports found that participants who reported fewer facial features for sad (as compared to neutral) expressions also tended to show a lower detection rate for sad (as compared to neutral) faces. These findings indicated that sad facial expressions increased choice blindness, which might have resulted from inhibition of further processing of the detailed facial features by the less attractive sad expressions (as compared to neutral expressions). PMID:29358926

  19. Sad Facial Expressions Increase Choice Blindness.

    PubMed

    Wang, Yajie; Zhao, Song; Zhang, Zhijie; Feng, Wenfeng

    2017-01-01

    Previous studies have discovered a fascinating phenomenon known as choice blindness: individuals fail to detect mismatches between the face they choose and the face replaced by the experimenter. Although previous studies have reported a couple of factors that can modulate the magnitude of choice blindness, the potential effect of facial expression on choice blindness has not yet been explored. Using faces with sad and neutral expressions (Experiment 1) and faces with happy and neutral expressions (Experiment 2) in the classic choice blindness paradigm, the present study investigated the effects of facial expressions on choice blindness. The results showed that the detection rate was significantly lower for sad faces than neutral faces, whereas no significant difference was observed between happy faces and neutral faces. The exploratory analysis of verbal reports found that participants who reported fewer facial features for sad (as compared to neutral) expressions also tended to show a lower detection rate for sad (as compared to neutral) faces. These findings indicated that sad facial expressions increased choice blindness, which might have resulted from inhibition of further processing of the detailed facial features by the less attractive sad expressions (as compared to neutral expressions).

  20. Analysis of differences between Western and East-Asian faces based on facial region segmentation and PCA for facial expression recognition

    NASA Astrophysics Data System (ADS)

    Benitez-Garcia, Gibran; Nakamura, Tomoaki; Kaneko, Masahide

    2017-01-01

    Darwin was the first to assert that facial expressions are innate and universal, recognized across all cultures. However, some recent cross-cultural studies have questioned this assumed universality. Therefore, this paper presents an analysis of the differences between Western and East-Asian faces for the six basic expressions (anger, disgust, fear, happiness, sadness and surprise), focusing on three individual facial regions: eyes-eyebrows, nose and mouth. The analysis is conducted by applying PCA to two feature extraction methods: appearance-based, using the pixel intensities of facial parts, and geometric-based, handling 125 feature points from the face. Both methods are evaluated using 4 standard databases for both racial groups, and the results are compared with a cross-cultural human study applied to 20 participants. Our analysis reveals that differences between Westerners and East-Asians exist mainly in the regions of eyes-eyebrows and mouth for expressions of fear and disgust, respectively. This work presents important findings for a better design of automatic facial expression recognition systems that take into account the differences between the two racial groups.
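
    The two feature-extraction pipelines described above can be sketched in a few lines: PCA applied to pixel intensities of a facial region (appearance-based) and to flattened landmark coordinates (geometric-based). The array shapes and the 95% variance cutoff are illustrative assumptions.

    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    appearance = rng.random((200, 32 * 32))  # 200 mouth crops of 32x32 pixels
    geometric = rng.random((200, 125 * 2))   # 200 faces, 125 (x, y) points

    for name, X in [("appearance", appearance), ("geometric", geometric)]:
        pca = PCA(n_components=0.95)         # keep 95% of the variance
        Z = pca.fit_transform(X)
        print(name, "->", Z.shape[1], "components")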

  1. Facial Masculinity: How the Choice of Measurement Method Enables to Detect Its Influence on Behaviour

    PubMed Central

    Sanchez-Pages, Santiago; Rodriguez-Ruiz, Claudia; Turiegano, Enrique

    2014-01-01

    Recent research has explored the relationship between facial masculinity, human male behaviour and males' perceived features (i.e. attractiveness). The methods of measurement of facial masculinity employed in the literature are quite diverse. In the present paper, we use several methods of measuring facial masculinity to study the effect of this feature on risk attitudes and trustworthiness. We employ two strategic interactions to measure these two traits, a first-price auction and a trust game. We find that facial width-to-height ratio is the best predictor of trustworthiness, and that measures of masculinity which use Geometric Morphometrics are the best suited to link masculinity and bidding behaviour. However, we observe that the link between masculinity and bidding in the first-price auction might be driven by competitiveness and not by risk aversion only. Finally, we test the relationship between facial measures of masculinity and perceived masculinity. As a conclusion, we suggest that researchers in the field should measure masculinity using one of these methods in order to obtain comparable results. We also encourage researchers to revise the existing literature on this topic following these measurement methods. PMID:25389770
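
    Of the masculinity measures compared above, the facial width-to-height ratio is simple enough to sketch directly: bizygomatic width divided by upper-face height (brow to upper lip). The landmark names below are assumptions about the annotation scheme, not the authors' data format.

    import numpy as np

    def fwhr(landmarks):
        """landmarks: dict of named (x, y) points in image coordinates."""
        width = np.linalg.norm(np.subtract(landmarks["zygion_right"],
                                           landmarks["zygion_left"]))
        height = np.linalg.norm(np.subtract(landmarks["upper_lip"],
                                            landmarks["mid_brow"]))
        return width / height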

  2. Facial masculinity: how the choice of measurement method enables to detect its influence on behaviour.

    PubMed

    Sanchez-Pages, Santiago; Rodriguez-Ruiz, Claudia; Turiegano, Enrique

    2014-01-01

    Recent research has explored the relationship between facial masculinity, human male behaviour and males' perceived features (i.e. attractiveness). The methods of measurement of facial masculinity employed in the literature are quite diverse. In the present paper, we use several methods of measuring facial masculinity to study the effect of this feature on risk attitudes and trustworthiness. We employ two strategic interactions to measure these two traits, a first-price auction and a trust game. We find that facial width-to-height ratio is the best predictor of trustworthiness, and that measures of masculinity which use Geometric Morphometrics are the best suited to link masculinity and bidding behaviour. However, we observe that the link between masculinity and bidding in the first-price auction might be driven by competitiveness and not by risk aversion only. Finally, we test the relationship between facial measures of masculinity and perceived masculinity. As a conclusion, we suggest that researchers in the field should measure masculinity using one of these methods in order to obtain comparable results. We also encourage researchers to revise the existing literature on this topic following these measurement methods.

  3. Signs of Facial Aging in Men in a Diverse, Multinational Study: Timing and Preventive Behaviors.

    PubMed

    Rossi, Anthony M; Eviatar, Joseph; Green, Jeremy B; Anolik, Robert; Eidelman, Michael; Keaney, Terrence C; Narurkar, Vic; Jones, Derek; Kolodziejczyk, Julia; Drinkwater, Adrienne; Gallagher, Conor J

    2017-11-01

    Men are a growing patient population in aesthetic medicine and are increasingly seeking minimally invasive cosmetic procedures. The objective was to examine differences in the timing of facial aging and in the prevalence of preventive facial aging behaviors in men by race/ethnicity. Men aged 18 to 75 years in the United States, Canada, United Kingdom, and Australia rated their features using photonumeric rating scales for 10 facial aging characteristics. The impact of race/ethnicity (Caucasian, black, Asian, Hispanic) on the severity of each feature was assessed. Subjects also reported the frequency of dermatologic facial product use. The study included 819 men. Glabellar lines, crow's feet lines, and nasolabial folds showed the greatest change with age. Caucasian men reported more severe signs of aging and earlier onset, by 10 to 20 years, compared with Asian, Hispanic, and, particularly, black men. In all racial/ethnic groups, most men did not regularly engage in basic antiaging preventive behaviors, such as use of sunscreen. Findings from this study, conducted in a globally diverse sample, may guide clinical discussions with men about the prevention and treatment of signs of facial aging, to help men of all races/ethnicities achieve their desired aesthetic outcomes.

  4. Facial biometry of Amazon indigenous people of the Xingu River - Perspectives on genetic and environmental contributions to variation in human facial morphology.

    PubMed

    Barbosa, M; Vieira, E P; Quintão, C C A; Normando, D

    2016-08-01

    The aim was to evaluate the facial morphology of non-mixed indigenous people living in the Xingu region. Studies on these populations report that the total genetic diversity is as high as that observed for other continental populations. On the other hand, eating habits differ between indigenous and urban populations, as indigenous people still keep traditional habits. The sample consisted of 106 indigenous subjects in the permanent dentition stage, belonging to four groups: Arara-Laranjal (n = 35), Arara-Iriri (n = 20), Xikrin-Kaiapó (n = 24), and Assurini (n = 27). Standardized facial photographs were obtained, and fourteen measurements were analyzed. Intra- and intergroup homogeneities were examined by discriminant analysis, followed by ANOVA and Kruskal-Wallis tests. Sexual dimorphism in each village was analyzed by Student's t-test or Mann-Whitney test, at p < 0.05. Significant facial differences were found between males and females, indicating that sex data should not be grouped for intergroup comparison. Discriminant analysis showed a large intergroup heterogeneity, while an intragroup homogeneity was found, especially for females. It was also observed that some morphological features of the face are specific to some villages, regardless of ethnicity. Facial morphological characteristics differed strongly among groups, even when comparing villages of the same ethnicity. Furthermore, a low diversity within groups was observed. Our findings, supported by previous reports on genetics and eating habits in these populations, reinforce the role of genetic determination in craniofacial morphology. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  5. Hybrid Feature Extraction-based Approach for Facial Parts Representation and Recognition

    NASA Astrophysics Data System (ADS)

    Rouabhia, C.; Tebbikh, H.

    2008-06-01

    Face recognition is a specialized image-processing task that has attracted considerable attention in computer vision. In this article, we develop a new facial recognition system, working on images from video sequences, dedicated to identifying persons whose faces are partly occluded. This system is based on a hybrid image feature extraction technique called ACPDL2D (Rouabhia et al. 2007), which combines two-dimensional principal component analysis and two-dimensional linear discriminant analysis with a neural network. We performed the feature extraction task on the eyes and the nose images separately, and then a multilayer perceptron classifier is used. Compared to the whole face, the simulation results favor the facial parts in terms of memory capacity and recognition rate (99.41% for the eyes part, 98.16% for the nose part, and 97.25% for the whole face).
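
    One ingredient of the hybrid scheme above, two-dimensional PCA, can be sketched briefly: the top eigenvectors of the image covariance matrix project each image matrix directly, without vectorizing it first. The shapes and component count are illustrative assumptions; the authors' ACPDL2D additionally combines this with 2D LDA and a neural classifier.

    import numpy as np

    def two_d_pca(images, n_components):
        """images: array (n, h, w); returns (n, h, n_components) projections."""
        centered = images - images.mean(axis=0)
        # Image covariance matrix: mean of (A - mean)^T (A - mean).
        G = np.mean([a.T @ a for a in centered], axis=0)
        _, eigvecs = np.linalg.eigh(G)              # ascending eigenvalues
        X = eigvecs[:, ::-1][:, :n_components]      # top eigenvectors
        return images @ X                           # project every image

    # e.g., 100 eye crops of 24x32 pixels reduced to 24x5 feature matrices
    features = two_d_pca(np.random.rand(100, 24, 32), 5)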

  6. An equine pain face

    PubMed Central

    Gleerup, Karina B; Forkman, Björn; Lindegaard, Casper; Andersen, Pia H

    2015-01-01

    Objective: The objective of this study was to investigate the existence of an equine pain face and to describe it in detail. Study design: Semi-randomized, controlled, crossover trial. Animals: Six adult horses. Methods: Pain was induced with two noxious stimuli, a tourniquet on the antebrachium and topical application of capsaicin. All horses participated in two control trials and received both noxious stimuli twice, once with and once without an observer present. During all sessions their pain state was scored. The horses were filmed, and the close-up video recordings of the faces were analysed for alterations in behaviour and facial expressions. Still images from the trials were evaluated for the presence of each of the specific pain face features identified from the video analysis. Results: Both noxious challenges were effective in producing a pain response resulting in significantly increased pain scores. Alterations in facial expressions were observed in all horses during all noxious stimulations. The number of pain face features present on the still images from the noxious challenges was significantly higher than for the control trial (p = 0.0001). Facial expressions representative of control and pain trials were condensed into explanatory illustrations. During pain sessions with an observer present, the horses increased their contact-seeking behaviour. Conclusions and clinical relevance: An equine pain face comprising ‘low’ and/or ‘asymmetrical’ ears, an angled appearance of the eyes, a withdrawn and/or tense stare, mediolaterally dilated nostrils and tension of the lips, chin and certain facial muscles can be recognized in horses during induced acute pain. This description of an equine pain face may be useful for improving tools for pain recognition in horses with mild to moderate pain. PMID:25082060

  7. Three-dimensional analysis of facial morphology.

    PubMed

    Liu, Yun; Kau, Chung How; Talbert, Leslie; Pan, Feng

    2014-09-01

    The objectives of this study were to evaluate sexual dimorphism for facial features within Chinese and African American populations and to compare the facial morphology by sex between these 2 populations. Three-dimensional facial images were acquired by using the portable 3dMDface System, which captured 189 subjects from 2 population groups of Chinese (n = 72) and African American (n = 117). Each population was categorized into male and female groups for evaluation. All subjects in the groups were aged between 18 and 30 years and had no apparent facial anomalies. A total of 23 anthropometric landmarks were identified on the three-dimensional faces of each subject. Twenty-one measurements in 4 regions, comprising 19 distances and 2 angles, were calculated and compared within and between the Chinese and African American populations. Student's t-test was used to analyze each data set obtained within each subgroup. Distinct facial differences were presented between the examined subgroups. When comparing the sex differences of facial morphology in the Chinese population, significant differences were noted in 71.43% of the parameters calculated, and the same proportion was found in the African American group. The facial morphologic differences between the Chinese and African American populations were evaluated by sex. The proportion of significant differences in the parameters calculated was 90.48% for females and 95.24% for males between the 2 populations. The African American population had a more convex profile and greater face width than those of the Chinese population. Sexual dimorphism for facial features was presented in both the Chinese and African American populations. In addition, there were significant differences in facial morphology between these 2 populations.

  8. Responses in the right posterior superior temporal sulcus show a feature-based response to facial expression.

    PubMed

    Flack, Tessa R; Andrews, Timothy J; Hymers, Mark; Al-Mosaiwi, Mohammed; Marsden, Samuel P; Strachan, James W A; Trakulpipat, Chayanit; Wang, Liang; Wu, Tian; Young, Andrew W

    2015-08-01

    The face-selective region of the right posterior superior temporal sulcus (pSTS) plays an important role in analysing facial expressions. However, it is less clear how facial expressions are represented in this region. In this study, we used the face composite effect to explore whether the pSTS contains a holistic or feature-based representation of facial expression. Aligned and misaligned composite images were created from the top and bottom halves of faces posing different expressions. In Experiment 1, participants performed a behavioural matching task in which they judged whether the top half of two images was the same or different. The ability to discriminate the top half of the face was affected by changes in the bottom half of the face when the images were aligned, but not when they were misaligned. This shows a holistic behavioural response to expression. In Experiment 2, we used fMR-adaptation to ask whether the pSTS has a corresponding holistic neural representation of expression. Aligned or misaligned images were presented in blocks that involved repeating the same image or in which the top or bottom half of the images changed. Increased neural responses were found in the right pSTS regardless of whether the change occurred in the top or bottom of the image, showing that changes in expression were detected across all parts of the face. However, in contrast to the behavioural data, the pattern did not differ between aligned and misaligned stimuli. This suggests that the pSTS does not encode facial expressions holistically. In contrast to the pSTS, a holistic pattern of response to facial expression was found in the right inferior frontal gyrus (IFG). Together, these results suggest that pSTS reflects an early stage in the processing of facial expression in which facial features are represented independently. Copyright © 2015 Elsevier Ltd. All rights reserved.

  9. Dogs Evaluate Threatening Facial Expressions by Their Biological Validity – Evidence from Gazing Patterns

    PubMed Central

    Somppi, Sanni; Törnqvist, Heini; Kujala, Miiamaaria V.; Hänninen, Laura; Krause, Christina M.; Vainio, Outi

    2016-01-01

    Appropriate response to companions’ emotional signals is important for all social creatures. The emotional expressions of humans and non-human animals have analogies in their form and function, suggesting shared evolutionary roots, but very little is known about how animals other than primates view and process facial expressions. In primates, threat-related facial expressions evoke exceptional viewing patterns compared with neutral or positive stimuli. Here, we explore if domestic dogs (Canis familiaris) have such an attentional bias toward threatening social stimuli and whether observed emotional expressions affect dogs’ gaze fixation distribution among the facial features (eyes, midface and mouth). We recorded the voluntary eye gaze of 31 domestic dogs during viewing of facial photographs of humans and dogs with three emotional expressions (threatening, pleasant and neutral). We found that dogs’ gaze fixations spread systematically among facial features. The distribution of fixations was altered by the seen expression, but eyes were the most probable targets of the first fixations and gathered longer looking durations than mouth regardless of the viewed expression. The examination of the inner facial features as a whole revealed more pronounced scanning differences among expressions. This suggests that dogs do not base their perception of facial expressions on the viewing of single structures, but the interpretation of the composition formed by eyes, midface and mouth. Dogs evaluated social threat rapidly and this evaluation led to attentional bias, which was dependent on the depicted species: threatening conspecifics’ faces evoked heightened attention but threatening human faces instead an avoidance response. We propose that threatening signals carrying differential biological validity are processed via distinctive neurocognitive pathways. Both of these mechanisms may have an adaptive significance for domestic dogs. The findings provide a novel perspective on understanding the processing of emotional expressions and sensitivity to social threat in non-primates. PMID:26761433

  10. Facial Structure Predicts Sexual Orientation in Both Men and Women.

    PubMed

    Skorska, Malvina N; Geniole, Shawn N; Vrysen, Brandon M; McCormick, Cheryl M; Bogaert, Anthony F

    2015-07-01

    Biological models have typically framed sexual orientation in terms of effects of variation in fetal androgen signaling on sexual differentiation, although other biological models exist. Despite marked sex differences in facial structure, the relationship between sexual orientation and facial structure is understudied. A total of 52 lesbian women, 134 heterosexual women, 77 gay men, and 127 heterosexual men were recruited at a Canadian campus and various Canadian Pride and sexuality events. We found that facial structure differed depending on sexual orientation; substantial variation in sexual orientation was predicted using facial metrics computed by a facial modelling program from photographs of White faces. At the univariate level, lesbian and heterosexual women differed in 17 facial features (out of 63) and four were unique multivariate predictors in logistic regression. Gay and heterosexual men differed in 11 facial features at the univariate level, of which three were unique multivariate predictors. Some, but not all, of the facial metrics differed between the sexes. Lesbian women had noses that were more turned up (also more turned up in heterosexual men), mouths that were more puckered, smaller foreheads, and marginally more masculine face shapes (also in heterosexual men) than heterosexual women. Gay men had more convex cheeks, shorter noses (also in heterosexual women), and foreheads that were more tilted back relative to heterosexual men. Principal components analysis and discriminant functions analysis generally corroborated these results. The mechanisms underlying variation in craniofacial structure--both related and unrelated to sexual differentiation--may thus be important in understanding the development of sexual orientation.

  11. Cerebellum and processing of negative facial emotions: cerebellar transcranial DC stimulation specifically enhances the emotional recognition of facial anger and sadness.

    PubMed

    Ferrucci, Roberta; Giannicola, Gaia; Rosa, Manuela; Fumagalli, Manuela; Boggio, Paulo Sergio; Hallett, Mark; Zago, Stefano; Priori, Alberto

    2012-01-01

    Some evidence suggests that the cerebellum participates in the complex network processing emotional facial expression. To evaluate the role of the cerebellum in recognising facial expressions we delivered transcranial direct current stimulation (tDCS) over the cerebellum and prefrontal cortex. A facial emotion recognition task was administered to 21 healthy subjects before and after cerebellar tDCS; we also tested subjects with a visual attention task and a visual analogue scale (VAS) for mood. Anodal and cathodal cerebellar tDCS both significantly enhanced sensory processing in response to negative facial expressions (anodal tDCS, p=.0021; cathodal tDCS, p=.018), but left positive emotion and neutral facial expressions unchanged (p>.05). tDCS over the right prefrontal cortex left facial expressions of both negative and positive emotion unchanged. These findings suggest that the cerebellum is specifically involved in processing facial expressions of negative emotion.

  12. Noonan syndrome - a new survey.

    PubMed

    Tafazoli, Alireza; Eshraghi, Peyman; Koleti, Zahra Kamel; Abbaszadegan, Mohammadreza

    2017-02-01

    Noonan syndrome (NS) is an autosomal dominant disorder with vast heterogeneity in clinical and genetic features. Various symptoms have been reported for this abnormality such as short stature, unusual facial characteristics, congenital heart abnormalities, developmental complications, and an elevated tumor incidence rate. Noonan syndrome shares clinical features with other rare conditions, including LEOPARD syndrome, cardio-facio-cutaneous syndrome, Noonan-like syndrome with loose anagen hair, and Costello syndrome. Germline mutations in the RAS-MAPK (mitogen-activated protein kinase) signal transduction pathway are responsible for NS and other related disorders. Noonan syndrome diagnosis is primarily based on clinical features, but molecular testing should be performed to confirm it in patients. Due to the high number of genes associated with NS and other RASopathy disorders, next-generation sequencing is the best choice for diagnostic testing. Patients with NS also have higher risk for leukemia and specific solid tumors. Age-specific guidelines for the management of NS are available.

  13. Noonan syndrome – a new survey

    PubMed Central

    Tafazoli, Alireza; Eshraghi, Peyman; Koleti, Zahra Kamel

    2016-01-01

    Noonan syndrome (NS) is an autosomal dominant disorder with vast heterogeneity in clinical and genetic features. Various symptoms have been reported for this abnormality such as short stature, unusual facial characteristics, congenital heart abnormalities, developmental complications, and an elevated tumor incidence rate. Noonan syndrome shares clinical features with other rare conditions, including LEOPARD syndrome, cardio-facio-cutaneous syndrome, Noonan-like syndrome with loose anagen hair, and Costello syndrome. Germline mutations in the RAS-MAPK (mitogen-activated protein kinase) signal transduction pathway are responsible for NS and other related disorders. Noonan syndrome diagnosis is primarily based on clinical features, but molecular testing should be performed to confirm it in patients. Due to the high number of genes associated with NS and other RASopathy disorders, next-generation sequencing is the best choice for diagnostic testing. Patients with NS also have higher risk for leukemia and specific solid tumors. Age-specific guidelines for the management of NS are available. PMID:28144274

  14. Facial measurement differences between patients with schizophrenia and non-psychiatric controls.

    PubMed

    Compton, Michael T; Brudno, Jennifer; Kryda, Aimee D; Bollini, Annie M; Walker, Elaine F

    2007-07-01

    Several previous reports suggest that facial measurements in patients with schizophrenia differ from those of non-psychiatric controls. Because the face and brain develop in concert from the same ectodermal tissue, the study of quantitative craniofacial abnormalities may give clues to genetic and/or environmental factors predisposing to schizophrenia. Using a predominantly African American sample, the present research question was two-fold: (1) Do patients differ from controls in terms of a number of specific facial measurements? And (2) does cluster analysis based on these facial measurements reveal distinct facial morphologies that significantly discriminate patients from controls? Facial dimensions were measured in 73 patients with schizophrenia and related psychotic disorders (42 males and 31 females) and 69 non-psychiatric controls (35 males and 34 females) using a 25-cm head and neck caliper. Due to differences in facial dimensions by gender, separate independent samples Student's t-tests and logistic regression analyses were employed to discern differences in facial measures between the patient and control groups in women and men. Findings were further explored using cluster analysis. Given an association between age and some facial dimensions, the effect of age was controlled. In unadjusted bivariate tests, female patients differed from female controls on several facial dimensions, though male patients did not differ significantly from male controls for any facial measure. Controlling for age using logistic regression, female patients had a greater mid-facial depth (tragus-subnasale) compared to female controls; male patients had lesser upper facial (trichion-glabella) and lower facial (subnasale-gnathion) heights compared to male controls. Among females, cluster analysis revealed two facial morphologies that significantly discriminated patients from controls, though this finding was not evident when employing further cluster analyses using secondary distance measures. When the sample was restricted to African Americans, results were similar and consistent. These findings indicate that, in a predominantly African American sample, some facial measurements differ between patients with schizophrenia and non-psychiatric controls, and these differences appear to be gender-specific. Further research on gender-specific quantitative craniofacial measurement differences between cases and controls could suggest gender-specific differences in embryologic/fetal neurodevelopmental processes underpinning schizophrenia.

  15. Proposal of Self-Learning and Recognition System of Facial Expression

    NASA Astrophysics Data System (ADS)

    Ogawa, Yukihiro; Kato, Kunihito; Yamamoto, Kazuhiko

    We describe the realization of a more complex function by using information acquired from simpler built-in functions. We propose a self-learning and recognition system for human facial expressions, developed within a natural relationship between human and robot. A robot with this system can understand human facial expressions and behave according to them after the learning process is completed. The system is modeled after the process by which a baby learns its parents’ facial expressions. Equipped with a camera, the robot acquires face images, and CdS sensors on the robot’s head capture information about human actions. Using the information from these sensors, the robot extracts features of each facial expression. After self-learning is completed, when a person changes facial expression in front of the robot, the robot performs actions associated with the relevant facial expression.

  16. Soft-tissue facial characteristics of attractive Chinese men compared to normal men

    PubMed Central

    Wu, Feng; Li, Junfang; He, Hong; Huang, Na; Tang, Youchao; Wang, Yuanqing

    2015-01-01

    Objective: To compare the facial characteristics of attractive Chinese men with those of reference men. Materials and Methods: The three-dimensional coordinates of 50 facial landmarks were collected in 40 healthy reference men and in 40 “attractive” men; soft-tissue facial angles, distances, areas, and volumes were computed and compared using analysis of variance. Results: When compared with reference men, attractive men shared several similar facial characteristics: relatively large forehead, reduced mandible, and rounded face. They had a more acute soft-tissue profile, an increased upper facial width and middle facial depth, a larger mouth, and more voluminous lips than reference men. Conclusions: Attractive men had several facial characteristics suggesting babyness. Nonetheless, each group of men was characterized by a different development of these features. Esthetic reference values can be a useful tool for clinicians, but should always consider the characteristics of individual faces. PMID:26221357

  17. Mimicking emotions: how 3-12-month-old infants use the facial expressions and eyes of a model.

    PubMed

    Soussignan, Robert; Dollion, Nicolas; Schaal, Benoist; Durand, Karine; Reissland, Nadja; Baudouin, Jean-Yves

    2018-06-01

    While there is an extensive literature on the tendency to mimic emotional expressions in adults, it is unclear how this skill emerges and develops over time. Specifically, it is unclear whether infants mimic discrete emotion-related facial actions, whether their facial displays are moderated by contextual cues and whether infants' emotional mimicry is constrained by developmental changes in the ability to discriminate emotions. We therefore investigate these questions using Baby-FACS to code infants' facial displays and eye-movement tracking to examine infants' looking times at facial expressions. Three-, 7-, and 12-month-old participants were exposed to dynamic facial expressions (joy, anger, fear, disgust, sadness) of a virtual model which either looked at the infant or had an averted gaze. Infants did not match emotion-specific facial actions shown by the model, but they produced valence-congruent facial responses to the distinct expressions. Furthermore, only the 7- and 12-month-olds displayed negative responses to the model's negative expressions and they looked more at areas of the face recruiting facial actions involved in specific expressions. Our results suggest that valence-congruent expressions emerge in infancy during a period where the decoding of facial expressions becomes increasingly sensitive to the social signal value of emotions.

  18. Sound-induced facial synkinesis following facial nerve paralysis.

    PubMed

    Ma, Ming-San; van der Hoeven, Johannes H; Nicolai, Jean-Philippe A; Meek, Marcel F

    2009-08-01

    Facial synkinesis (or synkinesia) (FS) occurs frequently after paresis or paralysis of the facial nerve and is in most cases due to aberrant regeneration of (branches of) the facial nerve. Patients suffer from inappropriate and involuntary synchronous facial muscle contractions. Here we describe two cases of sound-induced facial synkinesis (SFS) after facial nerve injury. As far as we know, this phenomenon has not been described in the English literature before. Patient A presented with right hemifacial palsy after lesion of the facial nerve due to skull base fracture. He reported involuntary muscle activity at the right corner of the mouth, specifically on hearing ringing keys. Patient B suffered from left hemifacial palsy following otitis media and developed involuntary muscle contraction in the facial musculature specifically on hearing clapping hands or a trumpet sound. Both patients were evaluated by means of video, audio and EMG analysis. Possible mechanisms in the pathophysiology of SFS are postulated and therapeutic options are discussed.

  19. Pet Face: Mechanisms Underlying Human-Animal Relationships

    PubMed Central

    Borgi, Marta; Cirulli, Francesca

    2016-01-01

    Accumulating behavioral and neurophysiological studies support the idea of infantile (cute) faces as highly biologically relevant stimuli rapidly and unconsciously capturing attention and eliciting positive/affectionate behaviors, including willingness to care. It has been hypothesized that the presence of infantile physical and behavioral features in companion (or pet) animals (i.e., dogs and cats) might form the basis of our attraction to these species. Preliminary evidence has indeed shown that the human attentional bias toward the baby schema may extend to animal facial configurations. In this review, the role of facial cues, specifically of infantile traits and facial signals (i.e., eye gaze), as emotional and communicative signals is highlighted and discussed as regulating the human-animal bond, similarly to what can be observed in the adult-infant interaction context. Particular emphasis is given to the neuroendocrine regulation of the social bond between humans and animals through oxytocin secretion. Instead of considering companion animals as mere baby substitutes for their owners, in this review we highlight the central role of cats and dogs in human lives. Specifically, we consider the ability of companion animals to bond with humans as fulfilling the need for attention and emotional intimacy, thus serving similar psychological and adaptive functions as human-human friendships. In this context, facial cuteness is viewed not just as a releaser of care/parental behavior, but, more generally, as a trait motivating social engagement. To conclude, the impact of this information for applied disciplines is briefly described, particularly in consideration of the increasing evidence of the beneficial effects of contacts with animals for human health and wellbeing. PMID:27014120

  20. Pet Face: Mechanisms Underlying Human-Animal Relationships.

    PubMed

    Borgi, Marta; Cirulli, Francesca

    2016-01-01

    Accumulating behavioral and neurophysiological studies support the idea of infantile (cute) faces as highly biologically relevant stimuli rapidly and unconsciously capturing attention and eliciting positive/affectionate behaviors, including willingness to care. It has been hypothesized that the presence of infantile physical and behavioral features in companion (or pet) animals (i.e., dogs and cats) might form the basis of our attraction to these species. Preliminary evidence has indeed shown that the human attentional bias toward the baby schema may extend to animal facial configurations. In this review, the role of facial cues, specifically of infantile traits and facial signals (i.e., eye gaze), as emotional and communicative signals is highlighted and discussed as regulating the human-animal bond, similarly to what can be observed in the adult-infant interaction context. Particular emphasis is given to the neuroendocrine regulation of the social bond between humans and animals through oxytocin secretion. Instead of considering companion animals as mere baby substitutes for their owners, in this review we highlight the central role of cats and dogs in human lives. Specifically, we consider the ability of companion animals to bond with humans as fulfilling the need for attention and emotional intimacy, thus serving similar psychological and adaptive functions as human-human friendships. In this context, facial cuteness is viewed not just as a releaser of care/parental behavior, but, more generally, as a trait motivating social engagement. To conclude, the impact of this information for applied disciplines is briefly described, particularly in consideration of the increasing evidence of the beneficial effects of contacts with animals for human health and wellbeing.

  1. Geometric morphometrics of male facial shape in relation to physical strength and perceived attractiveness, dominance, and masculinity.

    PubMed

    Windhager, Sonja; Schaefer, Katrin; Fink, Bernhard

    2011-01-01

    Evolutionary psychologists claim that women have adaptive preferences for specific male physical traits. Physical strength may be one of those traits, because recent research suggests that women rate faces of physically strong men as more masculine, dominant, and attractive. Yet, previous research has been limited in its ability to statistically map specific male facial shapes and features to corresponding physical measures (e.g., strength) and ratings (e.g., attractiveness). The associations of handgrip strength (together with measures of shoulder width, body height, and body fat) with women's ratings of male faces (concerning dominance, masculinity, and attractiveness) were studied in a sample of 26 Caucasian men (aged 18-32 years). Geometric morphometrics was used to statistically assess the covariation of male facial shape with these measures. Statistical results were visualized with thin-plate spline deformation grids along with image unwarping and image averaging. Handgrip strength together with shoulder width, body fat, dominance, and masculinity loaded positively on the first dimension of covariation with facial shape (explaining 72.6%, P < 0.05). These measures were related to rounder faces with wider eyebrows and a prominent jaw outline, while highly attractive and taller men had longer, narrower jaws and wider/fuller lips. Male physical strength was more strongly associated with changes in face shape that relate to perceived masculinity and dominance than to attractiveness. Our study adds to the growing evidence that attractiveness and dominance/masculinity may reflect different aspects of male mate quality. Copyright © 2011 Wiley-Liss, Inc.
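
    Geometric morphometrics, as used above, starts from Procrustes superimposition of landmark configurations before analysing shape covariation. Below is a minimal sketch using SciPy's built-in Procrustes alignment; the landmark arrays are synthetic placeholders.

    import numpy as np
    from scipy.spatial import procrustes

    rng = np.random.default_rng(0)
    face_a = rng.random((26, 2))  # 26 landmarks as (x, y) coordinates
    face_b = face_a + rng.normal(scale=0.02, size=(26, 2))  # noisy copy

    # Translation, scaling, and rotation are removed before comparison.
    mtx_a, mtx_b, disparity = procrustes(face_a, face_b)
    print("shape distance after alignment:", disparity)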

  2. A multiple maximum scatter difference discriminant criterion for facial feature extraction.

    PubMed

    Song, Fengxi; Zhang, David; Mei, Dayong; Guo, Zhongwei

    2007-12-01

    Maximum scatter difference (MSD) discriminant criterion is a recently presented binary discriminant criterion for pattern classification that utilizes the generalized scatter difference rather than the generalized Rayleigh quotient as a class separability measure, thereby avoiding the singularity problem when addressing small-sample-size problems. MSD classifiers based on this criterion have been quite effective on face-recognition tasks, but as they are binary classifiers, they are not as efficient on large-scale classification tasks. To address the problem, this paper generalizes the classification-oriented binary criterion to its multiple counterpart--the multiple MSD (MMSD) discriminant criterion for facial feature extraction. The MMSD feature-extraction method, which is based on this novel discriminant criterion, is a new subspace-based feature-extraction method. Unlike most other subspace-based feature-extraction methods, the MMSD computes its discriminant vectors from both the range of the between-class scatter matrix and the null space of the within-class scatter matrix. The MMSD is theoretically elegant and easy to calculate. Extensive experimental studies conducted on the benchmark database FERET show that the MMSD outperforms state-of-the-art facial feature-extraction methods such as the null space method, direct linear discriminant analysis (LDA), eigenface, Fisherface, and complete LDA.
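
    The scatter-difference idea above admits a compact sketch: because the criterion maximizes w^T(Sb - c*Sw)w rather than a Rayleigh quotient, discriminant directions are simply top eigenvectors of the scatter difference, with no matrix inversion (hence no small-sample-size singularity). The balance parameter c and the shapes are illustrative assumptions.

    import numpy as np

    def msd_directions(X, y, c=1.0, n_components=2):
        """X: (n, d) samples, y: (n,) labels; returns (d, n_components)."""
        mean_all = X.mean(axis=0)
        d = X.shape[1]
        Sb, Sw = np.zeros((d, d)), np.zeros((d, d))
        for label in np.unique(y):
            Xc = X[y == label]
            gap = (Xc.mean(axis=0) - mean_all)[:, None]
            Sb += len(Xc) * gap @ gap.T                 # between-class scatter
            Cc = Xc - Xc.mean(axis=0)
            Sw += Cc.T @ Cc                             # within-class scatter
        _, eigvecs = np.linalg.eigh(Sb - c * Sw)        # symmetric eig-problem
        return eigvecs[:, ::-1][:, :n_components]       # top eigenvectors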

  3. Influence of facial skin ageing characteristics on the perceived age in a Russian female population.

    PubMed

    Merinville, E; Grennan, G Z; Gillbro, J M; Mathieu, J; Mavon, A

    2015-10-01

    The desire for a youthful look remains a powerful motivator in the purchase of cosmetics by women globally. To develop an anti-ageing solution that targets the needs of end consumers, it is critical to understand which signs of ageing really matter to them and which influence their age perception. To date, such research has not been performed in a Russian population. The aim of this work was to identify the signs of ageing that contribute the most to an 'older' or 'younger' look for Russian women aged 40 years and above. The age of 203 Russian female volunteers was estimated from their standard photographs by a total of 629 female naïve assessors aged 20-65 years. Perceived age data were related, using linear correlation coefficients, to 23 previously measured facial skin features. Differences in the average severity of the correlating skin ageing features were evaluated between women perceived as older and women perceived as younger than their chronological age. Volunteers' responses to a ranking question on their key ageing skin concerns, collected previously, were analysed to provide an additional view on facial ageing from the consumer perspective. Nine of the 23 measured facial skin ageing features were found to correlate the most with perceived age. These results showed the importance of wrinkles in the upper part of the face (crow's feet, glabellar, under-eye and forehead wrinkles), but also of wrinkles in the lower half of the face associated with facial sagging (upper lip, nasolabial fold). Sagging was confirmed to be of key importance to female volunteers aged 41-65 years, who were mostly concerned by the sagging of their jawline, ahead of under-eye and crow's feet wrinkles. The severity of hyperpigmented spots, red and brown, was also found to contribute to perceived age, although to a weaker extent. By providing a clear view of the signs of ageing that really matter to Russian women aged 40 years and above, this research offers key information for the development of relevant anti-ageing solutions specifically targeting their needs and their desire to achieve younger-looking skin. © 2015 Society of Cosmetic Scientists and the Société Française de Cosmétologie.

  4. Facial bacterial infections: folliculitis.

    PubMed

    Laureano, Ana Cristina; Schwartz, Robert A; Cohen, Philip J

    2014-01-01

    Facial bacterial infections are most commonly caused by infections of the hair follicles. Wherever pilosebaceous units are found, folliculitis can occur, with the most frequent bacterial culprit being Staphylococcus aureus. We review different origins of facial folliculitis, distinguishing bacterial forms from other infectious and non-infectious mimickers. We distinguish folliculitis from pseudofolliculitis and perifolliculitis. Clinical features, etiology, pathology, and management options are also discussed. Copyright © 2014. Published by Elsevier Inc.

  5. Facial Redness Increases Men's Perceived Healthiness and Attractiveness.

    PubMed

    Thorstenson, Christopher A; Pazda, Adam D; Elliot, Andrew J; Perrett, David I

    2017-06-01

    Past research has shown that peripheral and facial redness influences perceptions of attractiveness for men viewing women. The current research investigated whether a parallel effect is present when women rate men with varying facial redness. In four experiments, women judged the attractiveness of men's faces, which were presented with varying degrees of redness. We also examined perceived healthiness and other candidate variables as mediators of the red-attractiveness effect. The results show that facial redness positively influences ratings of men's attractiveness. Additionally, perceived healthiness was documented as a mediator of this effect, independent of other potential mediator variables. The current research emphasizes facial coloration as an important feature of social judgments.

  6. What's in a face? The role of skin tone, facial physiognomy, and color presentation mode of facial primes in affective priming effects.

    PubMed

    Stepanova, Elena V; Strube, Michael J

    2012-01-01

    Participants (N = 106) performed an affective priming task with facial primes that varied in their skin tone and facial physiognomy and that were presented either in color or in gray-scale. Participants' racial evaluations were more positive for Eurocentric than for Afrocentric physiognomy faces. Light skin tone faces were evaluated more positively than dark skin tone faces, but the magnitude of this effect depended on the mode of color presentation. The results suggest that in affective priming tasks, faces might not be processed holistically; instead, visual features of facial priming stimuli independently affect implicit evaluations.

  7. Dissociation between recognition and detection advantage for facial expressions: a meta-analysis.

    PubMed

    Nummenmaa, Lauri; Calvo, Manuel G

    2015-04-01

    Happy facial expressions are recognized faster and more accurately than other expressions in categorization tasks, whereas detection in visual search tasks is widely believed to be faster for angry than happy faces. We used meta-analytic techniques for resolving this categorization versus detection advantage discrepancy for positive versus negative facial expressions. Effect sizes were computed on the basis of the r statistic for a total of 34 recognition studies with 3,561 participants and 37 visual search studies with 2,455 participants, yielding a total of 41 effect sizes for recognition accuracy, 25 for recognition speed, and 125 for visual search speed. Random effects meta-analysis was conducted to estimate effect sizes at the population level. For recognition tasks, an advantage in recognition accuracy and speed for happy expressions was found for all stimulus types. In contrast, for visual search tasks, moderator analysis revealed that a happy face detection advantage was restricted to photographic faces, whereas a clear angry face advantage was found for schematic and "smiley" faces. A robust detection advantage for nonhappy faces was observed even when stimulus emotionality was distorted by inversion or rearrangement of the facial features, suggesting that visual features primarily drive the search. We conclude that the recognition advantage for happy faces is a genuine phenomenon related to processing of facial expression category and affective valence. In contrast, detection advantages toward either happy (photographic stimuli) or nonhappy (schematic) faces are contingent on visual stimulus features rather than facial expression, and may not involve categorical or affective processing. (c) 2015 APA, all rights reserved.
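
    As a sketch of the pooling step in such a meta-analysis, r-based effect sizes can be Fisher-z transformed and combined with a DerSimonian-Laird random-effects model. This is a generic textbook recipe under assumed inputs, not the authors' exact analysis code.

    import numpy as np

    def pooled_r(r_values, n_values):
        """Random-effects pooled correlation from per-study r and n."""
        z = np.arctanh(np.asarray(r_values, float))    # Fisher z transform
        v = 1.0 / (np.asarray(n_values, float) - 3.0)  # sampling variances
        w = 1.0 / v
        z_fixed = np.sum(w * z) / np.sum(w)
        Q = np.sum(w * (z - z_fixed) ** 2)             # heterogeneity statistic
        c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
        tau2 = max(0.0, (Q - (len(z) - 1)) / c)        # between-study variance
        w_re = 1.0 / (v + tau2)                        # random-effects weights
        return float(np.tanh(np.sum(w_re * z) / np.sum(w_re)))

    print(pooled_r([0.30, 0.45, 0.25], [120, 80, 200]))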

  8. Neural processing of fearful and happy facial expressions during emotion-relevant and emotion-irrelevant tasks: a fixation-to-feature approach

    PubMed Central

    Neath-Tavares, Karly N.; Itier, Roxane J.

    2017-01-01

    Research suggests an important role of the eyes and mouth for discriminating facial expressions of emotion. A gaze-contingent procedure was used to test the impact of fixation to facial features on the neural response to fearful, happy and neutral facial expressions in an emotion discrimination (Exp.1) and an oddball detection (Exp.2) task. The N170 was the only eye-sensitive ERP component, and this sensitivity did not vary across facial expressions. In both tasks, compared to neutral faces, responses to happy expressions were seen as early as 100–120 ms occipitally, while responses to fearful expressions started around 150 ms, on or after the N170, at both occipital and lateral-posterior sites. Analyses of scalp topographies revealed different distributions of these two emotion effects across most of the epoch. Emotion processing interacted with fixation location at different times between tasks. Results suggest a role of both the eyes and mouth in the neural processing of fearful expressions and of the mouth in the processing of happy expressions, before 350 ms. PMID:27430934

  9. The Influence of Changes in Size and Proportion of Selected Facial Features (Eyes, Nose, Mouth) on Assessment of Similarity between Female Faces.

    PubMed

    Lewandowski, Zdzisław

    2015-09-01

    The project aimed to answer two questions: to what extent does a change in the size, height or width of selected facial features influence the assessment of likeness between an original female composite portrait and a modified one? And how does the sex of the judge affect the perception of likeness of facial features? The first stage of the project consisted of creating an image of averaged female faces. Then the basic facial features (eyes, nose and mouth) were cut out of the averaged face, and each of these features was transformed in three ways: its overall size was reduced or enlarged, its height was reduced or enlarged, and its width was widened or narrowed. In each of the six feature-alteration methods, the intensity of modification reached up to 20% of the original size, in steps of 2%. The features altered in this way were again placed onto the original faces and retouched. The third stage consisted of the assessment, performed by judges of both sexes, of the extent of likeness between the averaged composite portrait (without any changes) and the modified portraits. The results indicate that there are significant differences in the assessment of likeness between the portraits with some features modified and the original ones. The images with changes in the size and height of the nose received the lowest scores on the likeness scale, which indicates that these changes were perceived by the subjects as the most important. The images with changes in lip vermilion thickness (lip height), lip width and the height and width of the eye slit, in turn, received high likeness scores in spite of big changes, which signifies that these modifications were perceived as less important compared to the other features investigated.

  10. How do schizophrenia patients use visual information to decode facial emotion?

    PubMed

    Lee, Junghee; Gosselin, Frédéric; Wynn, Jonathan K; Green, Michael F

    2011-09-01

    Impairment in recognizing facial emotions is a prominent feature of schizophrenia patients, but the underlying mechanism of this impairment remains unclear. This study investigated the specific aspects of visual information that are critical for schizophrenia patients to recognize emotional expression. Using the Bubbles technique, we probed the use of visual information during a facial emotion discrimination task (fear vs. happy) in 21 schizophrenia patients and 17 healthy controls. Visual information was sampled through randomly located Gaussian apertures (or "bubbles") at 5 spatial frequency scales. Online calibration of the amount of face exposed through bubbles was used to ensure 75% overall accuracy for each subject. Least-square multiple linear regression analyses between sampled information and accuracy were performed to identify critical visual information that was used to identify emotional expression. To accurately identify emotional expression, schizophrenia patients required more exposure of facial areas (i.e., more bubbles) compared with healthy controls. To identify fearful faces, schizophrenia patients relied less on bilateral eye regions at high-spatial frequency compared with healthy controls. For identification of happy faces, schizophrenia patients relied on the mouth and eye regions; healthy controls did not utilize eyes and used the mouth much less than patients did. Schizophrenia patients needed more facial information to recognize emotional expression of faces. In addition, patients differed from controls in their use of high-spatial frequency information from eye regions to identify fearful faces. This study provides direct evidence that schizophrenia patients employ an atypical strategy of using visual information to recognize emotional faces.
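
    The sampling step of the Bubbles technique described above is easy to sketch: the stimulus is revealed only through randomly placed Gaussian apertures. This single-scale version (the study used five spatial frequency scales) with an assumed aperture count and width is illustrative only.

    import numpy as np

    def bubbles_mask(shape, n_bubbles=10, sigma=15.0, rng=None):
        """Return an (h, w) mask of n_bubbles Gaussian apertures in [0, 1]."""
        if rng is None:
            rng = np.random.default_rng()
        h, w = shape
        yy, xx = np.mgrid[0:h, 0:w]
        mask = np.zeros(shape)
        for cy, cx in zip(rng.integers(0, h, n_bubbles),
                          rng.integers(0, w, n_bubbles)):
            bubble = np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2)
                            / (2 * sigma ** 2))
            mask = np.maximum(mask, bubble)
        return mask  # multiply with the face image to reveal sampled regions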

  11. Automated detection of pain from facial expressions: a rule-based approach using AAM

    NASA Astrophysics Data System (ADS)

    Chen, Zhanli; Ansari, Rashid; Wilkie, Diana J.

    2012-02-01

    In this paper, we examine the problem of using video analysis to assess pain, an important problem especially for critically ill, non-communicative patients, and people with dementia. We propose and evaluate an automated method to detect the presence of pain manifested in patient videos, using a unique and large collection of cancer patient videos captured in patient homes. The method is based on detecting pain-related facial action units defined in the Facial Action Coding System (FACS), which is widely used for objective assessment in pain analysis. In our research, a person-specific Active Appearance Model (AAM) based on the Project-Out Inverse Compositional method is trained for each patient individually. A flexible representation of the shape model is used in a rule-based method that is better suited than the more commonly used classifier-based methods to the cancer patient videos, in which pain-related facial actions occur infrequently and more subtly. The rule-based method relies on feature points that provide facial action cues and are extracted from the shape vertices of the AAM, which have a natural correspondence to facial muscle movement. In this paper, we investigate the detection of a commonly used set of pain-related action units in both the upper and lower face. Our detection results show good agreement with the results obtained by three trained FACS coders who independently reviewed and scored the action units in the cancer patient videos.
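
    In the spirit of the rule-based approach above, an action unit can be flagged when a geometric cue computed from shape vertices crosses a threshold relative to a neutral frame. The landmark indices, the chosen cue, and the threshold below are hypothetical, not the paper's actual rules.

    import numpy as np

    # Hypothetical indices of inner-brow and upper-eyelid shape vertices.
    BROW_IDX = [17, 18, 19]
    EYE_IDX = [37, 38, 39]

    def au4_present(shape, neutral_shape, drop_ratio=0.10):
        """Flag AU4 (brow lowerer) when the brow-eye gap shrinks by >= 10%."""
        def brow_eye_gap(s):
            return np.linalg.norm(s[BROW_IDX] - s[EYE_IDX], axis=1).mean()
        return brow_eye_gap(shape) < (1.0 - drop_ratio) * brow_eye_gap(neutral_shape)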

  12. Facial nerve hemangiomas: vascular tumors or malformations?

    PubMed

    Benoit, Margo McKenna; North, Paula E; McKenna, Michael J; Mihm, Martin C; Johnson, Matthew M; Cunningham, Michael J

    2010-01-01

    To reclassify facial nerve hemangiomas in the context of presently accepted vascular lesion nomenclature by examining histology and immunohistochemical markers. Cohort analysis of patients diagnosed with a facial nerve hemangioma between 1990 and 2008. Collaborative analysis at a specialty hospital and a major academic hospital. Seven subjects were identified on composite review of office charts, a pathology database spanning both institutions, and an encrypted patient registry. Clinical data were compiled, and hematoxylin-eosin-stained specimens were reviewed. For six patients, archived pathological tissue was available for immunohistochemical evaluation of markers specific for infantile hemangioma (glucose transporter protein isoform 1 [GLUT1] and Lewis Y antigen) and for lymphatic endothelial cells (podoplanin). All patients presented clinically with slowly progressive facial weakness at a mean age of 45 years without prior symptomatology. Hematoxylin-eosin-stained histopathological slides showed irregularly shaped, dilated lesional vessels with flattened endothelial cells, scant smooth muscle, and no internal elastic lamina. Both podoplanin staining for lymphatic endothelial cells and GLUT1 and Lewis Y antigen staining for infantile hemangioma endothelial cells were negative in lesional vessels in all specimens for which immunohistochemical analysis was performed. Lesions of the geniculate ganglion historically referred to as "hemangiomas" do not demonstrate clinical, histopathological, or immunohistochemical features consistent with a benign vascular tumor, but instead are consistent with venous malformation. We propose that these lesions be classified as "venous vascular malformations of the facial nerve." This nomenclature should more accurately predict clinical behavior and guide therapeutic interventions.

  13. Perceived differences between chimpanzee (Pan troglodytes) and human (Homo sapiens) facial expressions are related to emotional interpretation.

    PubMed

    Waller, Bridget M; Bard, Kim A; Vick, Sarah-Jane; Smith Pasqualini, Marcia C

    2007-11-01

    Human face perception is a finely tuned, specialized process. When comparing faces between species, therefore, it is essential to consider how people make these observational judgments. Comparing facial expressions may be particularly problematic, given that people tend to consider them categorically as emotional signals, which may affect how accurately specific details are processed. The bared-teeth display (BT), observed in most primates, has been proposed as a homologue of the human smile (J. A. R. A. M. van Hooff, 1972). In this study, judgments of similarity between BT displays of chimpanzees (Pan troglodytes) and human smiles varied in relation to perceived emotional valence. When a chimpanzee BT was interpreted as fearful, observers tended to underestimate the magnitude of the relationship between certain features (the extent of lip corner raise) and human smiles. These judgments may reflect the combined effects of categorical emotional perception, configural face processing, and perceptual organization in mental imagery and may demonstrate the advantages of using standardized observational methods in comparative facial expression research. Copyright 2007 APA.

  14. Sunglass detection method for automation of video surveillance system

    NASA Astrophysics Data System (ADS)

    Sikandar, Tasriva; Samsudin, Wan Nur Azhani W.; Hawari Ghazali, Kamarul; Mohd, Izzeldin I.; Fazle Rabbi, Mohammad

    2018-04-01

    Wearing sunglasses to hide the face from surveillance cameras is common in criminal incidents. Sunglass detection from surveillance video has therefore become a pressing issue in the automation of security systems. In this paper we propose an image-processing method to detect sunglasses in surveillance images. Specifically, a unique feature based on facial height and width is employed to identify the covered region of the face. The presence of an area covered by sunglasses is evaluated using the facial height-width ratio, and a threshold on the covered-area percentage is used to classify glass-wearing faces. Two different types of glasses were considered, i.e., eyeglasses and sunglasses. The results of this study demonstrate that the proposed method detects sunglasses under two different illumination conditions: room illumination and direct sunlight. In addition, due to the multi-level checking of the facial region, the method achieved 100% accuracy in detecting sunglasses. However, in an exceptional case where fabric surrounding the face has a color similar to skin, the correct detection rate for eyeglasses was 93.33%.
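
    The covered-area test lends itself to a short sketch. The Python sketch below assumes a cropped face box and uses a crude RGB skin rule; the band limits, skin rule, and threshold are illustrative guesses, not the paper's calibrated values.

        import numpy as np

        def skin_mask(rgb):
            """Very rough RGB skin rule; a stand-in for the paper's segmentation."""
            r = rgb[..., 0].astype(int)
            g = rgb[..., 1].astype(int)
            b = rgb[..., 2].astype(int)
            return (r > 95) & (g > 40) & (b > 20) & (r > g) & (r > b) & ((r - g) > 15)

        def covered_ratio(face_rgb):
            """Fraction of non-skin pixels in the expected eye band of the face
            box; the band limits (20-50% of face height) are illustrative."""
            h = face_rgb.shape[0]
            band = face_rgb[int(0.2 * h):int(0.5 * h)]
            return 1.0 - skin_mask(band).mean()

        def wears_sunglasses(face_rgb, threshold=0.6):
            """Threshold on the covered-area percentage classifies the face."""
            return covered_ratio(face_rgb) > threshold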

  15. Adaptation effects to attractiveness of face photographs and art portraits are domain-specific

    PubMed Central

    Hayn-Leichsenring, Gregor U.; Kloth, Nadine; Schweinberger, Stefan R.; Redies, Christoph

    2013-01-01

    We studied the neural coding of facial attractiveness by investigating effects of adaptation to attractive and unattractive human faces on the perceived attractiveness of veridical human face pictures (Experiment 1) and art portraits (Experiment 2). Experiment 1 revealed a clear pattern of contrastive aftereffects. Relative to a pre-adaptation baseline, the perceived attractiveness of faces was increased after adaptation to unattractive faces, and was decreased after adaptation to attractive faces. Experiment 2 revealed similar aftereffects when art portraits rather than face photographs were used as adaptors and test stimuli, suggesting that effects of adaptation to attractiveness are not restricted to facial photographs. Additionally, we found similar aftereffects in art portraits for beauty, another aesthetic feature that, unlike attractiveness, relates to the properties of the image (rather than to the face displayed). Importantly, Experiment 3 showed that aftereffects were abolished when adaptors were art portraits and face photographs were test stimuli. These results suggest that adaptation to facial attractiveness elicits aftereffects in the perception of subsequently presented faces, for both face photographs and art portraits, and that these effects do not cross image domains. PMID:24349690

  16. Gelotophobia and the Challenges of Implementing Laughter into Virtual Agents Interactions

    PubMed Central

    Ruch, Willibald F.; Platt, Tracey; Hofmann, Jennifer; Niewiadomski, Radosław; Urbain, Jérôme; Mancini, Maurizio; Dupont, Stéphane

    2014-01-01

    This study investigated which features of avatar laughter are perceived as threatening by individuals with a fear of being laughed at (gelotophobia) and by individuals without gelotophobia. Laughter samples were systematically varied (e.g., pitch and energy for the voice, intensity of facial actions for the face) in three modalities: animated facial expressions, synthesized auditory laughter vocalizations, and motion-capture-generated puppets displaying laughter body movements. In the online study, 123 adults completed the GELOPH<15> (Ruch and Proyer, 2008a,b) and rated randomly presented videos of the three modalities for how malicious, how friendly, and how real the laughter was (0 = not at all to 8 = extremely). Additionally, an open question asked which markers led to the perception of friendliness/maliciousness. The study identified features in all modalities of laughter stimuli that were perceived as malicious in general, and some that were gelotophobia-specific. For facial expressions of avatars, medium-intensity laughs triggered the highest maliciousness ratings in gelotophobes. In the auditory stimuli, fundamental frequency modulations and variation in intensity were indicative of maliciousness. In the body, backwards and forward movements and rocking vs. jerking movements distinguished the most malicious from the least malicious laughs. From the open answers, for non-gelotophobes the shape and curling of the lips made an expression appear malicious, whereas movement around the eyes made the face appear friendly; this pattern was reversed for gelotophobes. Laughter designed for gelotophobia-savvy avatars should therefore be of high intensity, contain lip and eye movements, and use fast, non-repetitive, voiced vocalizations that are variable and of short duration. It should not contain any features that indicate down-regulation in the voice or body, or that indicate voluntary/cognitive modulation. PMID:25477803

  17. The mysterious noh mask: contribution of multiple facial parts to the recognition of emotional expressions.

    PubMed

    Miyata, Hiromitsu; Nishimura, Ritsuko; Okanoya, Kazuo; Kawai, Nobuyuki

    2012-01-01

    A Noh mask worn by expert actors when performing in a traditional Japanese Noh drama is suggested to convey countless different facial expressions according to different angles of head/body orientation. The present study addressed the question of how different facial parts of a Noh mask, including the eyebrows, the eyes, and the mouth, may contribute to different emotional expressions. Both experimental situations of active creation and passive recognition of emotional facial expressions were introduced. In Experiment 1, participants either created happy or sad facial expressions, or imitated a face that looked up or down, by actively changing each facial part of a Noh mask image presented on a computer screen. For an upward-tilted mask, the eyebrows and the mouth shared common features with sad expressions, whereas the eyes shared features with happy expressions. This contingency tended to be reversed for a downward-tilted mask. Experiment 2 further examined which facial parts of a Noh mask are crucial in determining emotional expressions. Participants were exposed to synthesized Noh mask images with different facial parts expressing different emotions. Results clearly revealed that participants primarily used the shape of the mouth in judging emotions. Facial images having the mouth of an upward/downward-tilted Noh mask strongly tended to be evaluated as sad/happy, respectively. The results suggest that Noh masks express chimeric emotional patterns, with different facial parts conveying different emotions. This appears consistent with the principles of Noh, which highly appreciate subtle and composite emotional expressions, as well as with the mysterious facial expressions observed in Western art. It was further demonstrated that the mouth serves as a diagnostic feature in characterizing the emotional expressions. This indicates the superiority of biologically driven factors over the traditionally formulated performing styles when evaluating the emotions of the Noh masks.

  18. The Mysterious Noh Mask: Contribution of Multiple Facial Parts to the Recognition of Emotional Expressions

    PubMed Central

    Miyata, Hiromitsu; Nishimura, Ritsuko; Okanoya, Kazuo; Kawai, Nobuyuki

    2012-01-01

    Background A Noh mask worn by expert actors when performing in a traditional Japanese Noh drama is suggested to convey countless different facial expressions according to different angles of head/body orientation. The present study addressed the question of how different facial parts of a Noh mask, including the eyebrows, the eyes, and the mouth, may contribute to different emotional expressions. Both experimental situations of active creation and passive recognition of emotional facial expressions were introduced. Methodology/Principal Findings In Experiment 1, participants either created happy or sad facial expressions, or imitated a face that looked up or down, by actively changing each facial part of a Noh mask image presented on a computer screen. For an upward-tilted mask, the eyebrows and the mouth shared common features with sad expressions, whereas the eyes shared features with happy expressions. This contingency tended to be reversed for a downward-tilted mask. Experiment 2 further examined which facial parts of a Noh mask are crucial in determining emotional expressions. Participants were exposed to synthesized Noh mask images with different facial parts expressing different emotions. Results clearly revealed that participants primarily used the shape of the mouth in judging emotions. Facial images having the mouth of an upward/downward-tilted Noh mask strongly tended to be evaluated as sad/happy, respectively. Conclusions/Significance The results suggest that Noh masks express chimeric emotional patterns, with different facial parts conveying different emotions. This appears consistent with the principles of Noh, which highly appreciate subtle and composite emotional expressions, as well as with the mysterious facial expressions observed in Western art. It was further demonstrated that the mouth serves as a diagnostic feature in characterizing the emotional expressions. This indicates the superiority of biologically driven factors over the traditionally formulated performing styles when evaluating the emotions of the Noh masks. PMID:23185595

  19. Sutural growth restriction and modern human facial evolution: an experimental study in a pig model

    PubMed Central

    Holton, Nathan E; Franciscus, Robert G; Nieves, Mary Ann; Marshall, Steven D; Reimer, Steven B; Southard, Thomas E; Keller, John C; Maddux, Scott D

    2010-01-01

    Facial size reduction and facial retraction are key features that distinguish modern humans from archaic Homo. In order to more fully understand the emergence of modern human craniofacial form, it is necessary to understand the underlying evolutionary basis for these defining characteristics. Although it is well established that the cranial base exerts considerable influence on the evolutionary and ontogenetic development of facial form, less emphasis has been placed on developmental factors intrinsic to the facial skeleton proper. The present analysis was designed to assess anteroposterior facial reduction in a pig model and to examine the potential role that this dynamic has played in the evolution of modern human facial form. Ten female sibship cohorts, each consisting of three individuals, were allocated to one of three groups. In the experimental group (n = 10), microplates were affixed bilaterally across the zygomaticomaxillary and frontonasomaxillary sutures at 2 months of age. The sham group (n = 10) received only screw implantation, and the controls (n = 10) underwent no surgery. Following 4 months of post-surgical growth, we assessed variation in facial form using linear measurements and principal components analysis of Procrustes-scaled landmarks. There were no differences between the control and sham groups; however, the experimental group exhibited a highly significant reduction in facial projection and overall size. These changes were associated with significant differences in the infraorbital region of the experimental group, including the presence of an infraorbital depression and an inferiorly and coronally oriented infraorbital plane, in contrast to a flat, superiorly and sagittally oriented infraorbital plane in the control and sham groups. These altered configurations are markedly similar to important additional facial features that differentiate modern humans from archaic Homo, and suggest that facial length restriction via rigid plate fixation is a potentially useful model to assess the developmental factors that underlie changing patterns in craniofacial form associated with the emergence of modern humans. PMID:19929910

  20. When your face describes your memories: facial expressions during retrieval of autobiographical memories.

    PubMed

    El Haj, Mohamad; Daoudi, Mohamed; Gallouj, Karim; Moustafa, Ahmed A; Nandrino, Jean-Louis

    2018-05-11

    Thanks to current advances in the software analysis of facial expressions, there is burgeoning interest in understanding the emotional facial expressions observed during the retrieval of autobiographical memories. This review describes research on facial expressions during autobiographical retrieval showing distinct emotional facial expressions according to the characteristics of retrieved memories. More specifically, this research demonstrates that the retrieval of emotional memories can trigger corresponding emotional facial expressions (e.g. positive memories may trigger positive facial expressions). It also demonstrates variations in facial expressions according to the specificity, self-relevance, or past versus future direction of memory construction. Besides linking research on facial expressions during autobiographical retrieval to the cognitive and affective characteristics of autobiographical memory in general, this review positions this research within the broader context of research on the physiological characteristics of autobiographical retrieval. We also provide several perspectives for clinical studies to investigate facial expressions in populations with deficits in autobiographical memory (e.g. whether autobiographical overgenerality in neurologic and psychiatric populations may trigger fewer emotional facial expressions). In sum, this review demonstrates how the evaluation of facial expressions during autobiographical retrieval may help in understanding the functioning and dysfunction of autobiographical memory.

  1. Capturing Physiology of Emotion along Facial Muscles: A Method of Distinguishing Feigned from Involuntary Expressions

    NASA Astrophysics Data System (ADS)

    Khan, Masood Mehmood; Ward, Robert D.; Ingleby, Michael

    The ability to distinguish feigned from involuntary expressions of emotions could help in the investigation and treatment of neuropsychiatric and affective disorders and in the detection of malingering. This work investigates differences in emotion-specific patterns of thermal variation along the major facial muscles. Using experimental data extracted from 156 images, we attempted to classify patterns of emotion-specific thermal variation into neutral, voluntary, and involuntary expressions of positive and negative emotive states. Initial results suggest that (i) each facial muscle exhibits a unique thermal response to various emotive states; (ii) the pattern of thermal variation along the facial muscles may assist in classifying voluntary and involuntary facial expressions; and (iii) facial skin temperature measurements along the major facial muscles may be used in automated emotion assessment.
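
    As a rough illustration of how such muscle-specific thermal features might be quantified, the following Python sketch averages thermal-image intensity over hypothetical regions of interest along major facial muscles and feeds the resulting vectors to an off-the-shelf classifier; the ROIs and the classifier choice are assumptions, not the authors' method.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        # Hypothetical rectangular ROIs (row0, row1, col0, col1) over facial muscles.
        MUSCLE_ROIS = {
            "frontalis":        (10, 40, 30, 90),
            "corrugator":       (40, 55, 45, 75),
            "zygomaticus_maj":  (70, 95, 15, 45),
            "orbicularis_oris": (100, 120, 40, 80),
        }

        def thermal_features(thermal_img):
            """Mean temperature per muscle ROI, relative to the whole-face mean."""
            base = thermal_img.mean()
            return np.array([thermal_img[r0:r1, c0:c1].mean() - base
                             for r0, r1, c0, c1 in MUSCLE_ROIS.values()])

        # X: (n_images, n_rois) features; y: labels such as neutral,
        # voluntary/involuntary positive, voluntary/involuntary negative.
        def fit_classifier(X, y):
            return LinearDiscriminantAnalysis().fit(X, y)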

  2. Impaired recognition of happy facial expressions in bipolar disorder.

    PubMed

    Lawlor-Savage, Linette; Sponheim, Scott R; Goghari, Vina M

    2014-08-01

    The ability to accurately judge facial expressions is important in social interactions. Individuals with bipolar disorder have been found to be impaired in emotion recognition; however, the specifics of the impairment are unclear. This study investigated whether facial emotion recognition difficulties in bipolar disorder reflect general cognitive or emotion-specific impairments. Impairment in the recognition of particular emotions and the role of processing speed in facial emotion recognition were also investigated. Clinically stable bipolar patients (n = 17) and healthy controls (n = 50) judged five facial expressions in two presentation types, time-limited and self-paced. An age-recognition condition was used as an experimental control. Bipolar patients' overall facial recognition ability was unimpaired; however, their specific ability to judge happy expressions under time constraints was impaired. The findings suggest a deficit in the recognition of happy expressions that is affected by processing speed. Given the limited sample size, further investigation with a larger patient sample is warranted.

  3. Humor drawings evoked temporal and spectral EEG processes

    PubMed Central

    Kuo, Hsien-Chu; Chuang, Shang-Wen

    2017-01-01

    The study aimed to explore the humor processing elicited through the manipulation of artistic drawings. Using the Comprehension-Elaboration Theory of humor as the main research background, the experiment manipulated the head portraits of celebrities based on the independent variables of facial deformation (large/small) and the addition of affective features (positive/negative). Sixty-four-channel electroencephalography (EEG) was recorded in 30 participants while they viewed the incongruous drawings of celebrities. EEG temporal and spectral responses were measured during the three stages of humor processing: incongruity detection, incongruity comprehension, and elaboration of humor. Analysis of event-related potentials indicated that for humorous vs. non-humorous drawings, facial deformation and the addition of affective features significantly affected the degree of humor elicited, specifically: large > small deformation; negative > positive affective features. The N170, N270, N400, N600-800 and N900-1200 components showed significant differences, particularly in the right prefrontal and frontal regions. Analysis of event-related spectral perturbation showed significant differences in the theta band evoked in the anterior cingulate cortex, parietal region, and posterior cingulate cortex, and in the alpha and beta bands in the motor areas. These regions are involved in emotional processing, memory retrieval, and the laughter and feelings of amusement induced by elaboration of the situation. PMID:28402573

  4. A longitudinal study of facial growth of Southern Chinese in Hong Kong: Comprehensive photogrammetric analyses.

    PubMed

    Wen, Yi Feng; Wong, Hai Ming; McGrath, Colman Patrick

    2017-01-01

    Existing studies of facial growth have been mostly cross-sectional in nature, and only a limited number of facial measurements have been investigated. The purposes of this study were to longitudinally investigate the facial growth of Chinese in Hong Kong from 12 through 15 to 18 years of age and to compare the magnitude of growth changes between genders. Standardized frontal and lateral facial photographs were taken from 266 (149 females and 117 males) and 265 (145 females and 120 males) participants, respectively, at all three age levels. Linear and angular measurements, profile inclinations, and proportion indices were recorded. Statistical analyses were performed to investigate growth changes of facial features, and comparisons were made between genders in terms of the magnitude of growth changes from ages 12 to 15, 15 to 18, and 12 to 18 years. For the overall face, all linear measurements increased significantly (p < 0.05) except for height of the lower profile in females (p = 0.069) and width of the face in males (p = 0.648). In both genders, the increase in height of the eye fissure was around 10% (p < 0.001). There was a significant decrease in the nasofrontal angle (p < 0.001) and increase in the nasofacial angle (p < 0.001) in both genders, and these changes were larger in males. The vermilion-total upper lip height index remained stable in females (p = 0.770) but increased in males (p = 0.020). The nasofrontal angle (effect size: 0.55) and lower vermilion contour index (effect size: 0.59) demonstrated a large magnitude of gender difference in the amount of growth change from 12 to 18 years. Growth changes of facial features and gender differences in the magnitude of facial growth were determined. The findings may benefit different clinical specialties and other nonclinical fields where facial growth is of interest.
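
    Angular measurements of this kind reduce to simple landmark geometry. A short Python sketch follows; the landmark names and coordinates are illustrative stand-ins, not the study's photogrammetric protocol.

        import numpy as np

        def angle_deg(a, b, c):
            """Angle (degrees) at vertex b formed by landmarks a-b-c, e.g. a
            stand-in for the nasofrontal angle with a = glabella, b = nasion,
            c = nasal tip on a lateral photograph (2D pixel coordinates)."""
            u = np.asarray(a, float) - np.asarray(b, float)
            v = np.asarray(c, float) - np.asarray(b, float)
            cosang = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
            return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

        # Toy usage with made-up landmark positions:
        print(angle_deg((100, 40), (104, 80), (140, 120)))  # ~144 degrees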

  5. [INVITED] Non-intrusive optical imaging of face to probe physiological traits in Autism Spectrum Disorder

    NASA Astrophysics Data System (ADS)

    Samad, Manar D.; Bobzien, Jonna L.; Harrington, John W.; Iftekharuddin, Khan M.

    2016-03-01

    Autism Spectrum Disorders (ASD) can impair non-verbal communication, including the variety and extent of facial expressions in social and interpersonal communication. These impairments may appear as differential traits in the physiology of the facial muscles of an individual with ASD when compared to a typically developing individual. The differential traits in facial expressions, as shown by facial muscle-specific changes (also known as 'facial oddity' for subjects with ASD), may be measured visually. However, this mode of measurement may not discern the subtlety in facial oddity distinctive to ASD. Earlier studies have used intrusive electrophysiological sensors on the facial skin to gauge facial muscle actions from quantitative physiological data. This study demonstrates, for the first time in the literature, novel quantitative measures for facial oddity recognition using non-intrusive facial imaging sensors such as video and 3D optical cameras. An Institutional Review Board (IRB)-approved pilot study was conducted on a group of eight participants with ASD and eight typically developing participants in a control group, whose facial images were captured in response to visual stimuli. The proposed computational techniques and statistical analyses reveal a higher mean of facial muscle actions in the ASD group versus the control group. The facial muscle-specific evaluation reveals intense yet asymmetric facial responses as facial oddity in participants with ASD. This finding about facial oddity may objectively define measurable differential markers in the facial expressions of individuals with ASD.

  6. Pressure Bearing Device Affects Extraction Socket Remodeling of Maxillary Anterior Tooth. A Prospective Clinical Trial.

    PubMed

    Jiang, Xi; Zhang, Yu; Chen, Bo; Lin, Ye

    2017-04-01

    Extraction socket remodeling and ridge preservation strategies have been extensively explored. To evaluate the efficacy of applying a micro-titanium stent as a pressure-bearing device on extraction socket remodeling of the maxillary anterior tooth, twenty-four patients with an extraction socket of a maxillary incisor were treated with spontaneous healing (control group) or by applying a micro-titanium stent as a facial pressure-bearing device over the facial bone wall (test group). Two virtual models obtained from cone beam computed tomography data, before extraction and 4 months after healing, were 3-dimensionally superimposed. Facial bone wall resorption, extraction socket remodeling features, and ridge width preservation rate were determined and compared between the groups. A thin facial bone wall resulted in marked resorption in both groups. The greatest palatal shifting distance of the facial bone was located at the coronal level in the control group but at the middle level in the test group. Compared with the original extraction socket, 87.61 ± 5.88% of ridge width was preserved in the test group and 55.09 ± 14.46% in the control group. Due to its facial pressure-bearing property, the rigid micro-titanium stent might preserve the ridge width and alter the resorption features of the extraction socket. © 2016 Wiley Periodicals, Inc.

  7. Amygdala lesions in rhesus macaques decrease attention to threat

    PubMed Central

    Dal Monte, Olga; Costa, Vincent D.; Noble, Pamela L.; Murray, Elisabeth A.; Averbeck, Bruno B.

    2015-01-01

    Evidence from animal and human studies has suggested that the amygdala plays a role in detecting threat and in directing attention to the eyes. Nevertheless, there has been no systematic investigation of whether the amygdala specifically facilitates attention to the eyes or whether other features can also drive attention via amygdala processing. The goal of the present study was to examine the effects of amygdala lesions in rhesus monkeys on attentional capture by specific facial features, as well as gaze patterns and changes in pupil dilation during free viewing. Here we show reduced attentional capture by threat stimuli, specifically the mouth, and reduced exploration of the eyes in free viewing in monkeys with amygdala lesions. Our findings support a role for the amygdala in detecting threat signals and in directing attention to the eye region of faces when freely viewing different expressions. PMID:26658670

  8. Anxiety from a Phylogenetic Perspective: Is there a Qualitative Difference between Human and Animal Anxiety?

    PubMed Central

    Belzung, Catherine; Philippot, Pierre

    2007-01-01

    A phylogenetic approach to anxiety is proposed. The different facets of human anxiety and their presence at different levels of the phylum are examined. All organisms, including unicellular organisms such as protozoa, can display a specific reaction to danger. The mechanisms enabling the appraisal of harmful stimuli are fully present in insects. In higher invertebrates, fear is associated with a specific physiological response. In mammals, anxiety is accompanied by specific cognitive responses. The expression of emotions diversifies in higher vertebrates, with only primates displaying facial expressions. Finally, autonoetic consciousness, a feature essential for human anxiety, appears only in great apes. This evolutionary progression parallels the progress in the complexity of the logistic systems supporting it (e.g., the vegetative and central nervous systems). The ability to assess one's coping potential, the diversification of anxiety responses, and autonoetic consciousness seem to be relevant markers in a phylogenetic perspective. PMID:17641735

  9. FaceTOON: a unified platform for feature-based cartoon expression generation

    NASA Astrophysics Data System (ADS)

    Zaharia, Titus; Marre, Olivier; Prêteux, Françoise; Monjaux, Perrine

    2008-02-01

    This paper presents the FaceTOON system, a semi-automatic platform dedicated to the creation of verbal and emotional facial expressions within the applicative framework of 2D cartoon production. The proposed FaceTOON platform makes it possible to rapidly create 3D facial animations with a minimum amount of user interaction. In contrast with existing commercial 3D modeling software, which usually requires advanced 3D graphics skills and competences from its users, the FaceTOON system is based exclusively on 2D interaction mechanisms, the 3D modeling stage being completely transparent to the user. The system takes as input a neutral 3D face model, free of any facial features, and a set of 2D drawings representing the desired facial features. A 2D/3D virtual mapping procedure makes it possible to obtain a ready-for-animation model which can be directly manipulated and deformed to generate expressions. The platform includes a complete set of dedicated tools for 2D/3D interactive deformation, pose management, key-frame interpolation, and MPEG-4 compliant animation and rendering. The FaceTOON system is currently being considered for industrial evaluation and commercialization by the Quadraxis company.

  10. Detection and inpainting of facial wrinkles using texture orientation fields and Markov random field modeling.

    PubMed

    Batool, Nazre; Chellappa, Rama

    2014-09-01

    Facial retouching is widely used in the media and entertainment industry. Professional software usually requires a minimum level of user expertise to achieve desirable results. In this paper, we present an algorithm to detect facial wrinkles/imperfections; any such algorithm would be amenable to facial retouching applications. The detection of wrinkles/imperfections can allow these skin features to be processed differently from the surrounding skin without much user interaction. For detection, Gabor filter responses along with a texture orientation field are used as image features. A bimodal Gaussian mixture model (GMM) represents the distributions of Gabor features of normal skin versus skin imperfections. A Markov random field model is then used to incorporate the spatial relationships among neighboring pixels for their GMM distributions and texture orientations. An expectation-maximization algorithm then classifies skin versus skin wrinkles/imperfections. Once detected automatically, wrinkles/imperfections are removed completely instead of being blended or blurred. We propose an exemplar-based constrained texture synthesis algorithm to inpaint the irregularly shaped gaps left by the removal of detected wrinkles/imperfections. We present results on images downloaded from the Internet to show the efficacy of our algorithms.
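
    The first two stages (Gabor features, bimodal GMM) can be sketched with standard libraries. The following Python sketch is a simplified stand-in: the filter-bank settings are guesses, the texture orientation field and the MRF/EM spatial refinement are omitted, and only the GMM labeling step is shown.

        import numpy as np
        from skimage.filters import gabor
        from sklearn.mixture import GaussianMixture

        def gabor_features(gray, frequencies=(0.1, 0.2), n_orient=4):
            """Per-pixel Gabor magnitude over a small filter bank."""
            feats = []
            for f in frequencies:
                for k in range(n_orient):
                    real, imag = gabor(gray, frequency=f, theta=k * np.pi / n_orient)
                    feats.append(np.hypot(real, imag))
            return np.stack(feats, axis=-1)        # (h, w, n_filters)

        def wrinkle_map(gray):
            """Two-component GMM over Gabor features; the higher-energy
            component is taken as wrinkles/imperfections. The MRF smoothing
            of labels used in the paper is omitted here."""
            F = gabor_features(gray)
            X = F.reshape(-1, F.shape[-1])
            gmm = GaussianMixture(n_components=2, covariance_type="diag").fit(X)
            labels = gmm.predict(X)
            wrinkle_comp = np.argmax(gmm.means_.mean(axis=1))
            return (labels == wrinkle_comp).reshape(gray.shape)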

  11. Sotos syndrome: An interesting disorder with gigantism.

    PubMed

    Nalini, A; Biswas, Arundhati

    2008-07-01

    We report the case of a 16-year-old boy diagnosed with Sotos syndrome, with the rare association of bilateral primary optic atrophy and epilepsy. He presented with accelerated linear growth, a characteristic facial gestalt with distinctive facial features, seizures, and progressive diminution of vision in both eyes. He had features of gigantism from early childhood. Brain MRI and endocrine function tests were normal. This case is of interest because clinicians should be aware of this not-so-rare disorder; in addition to the classic features, the patient showed two unusual associations with Sotos syndrome.

  12. Sotos syndrome: An interesting disorder with gigantism

    PubMed Central

    Nalini, A.; Biswas, Arundhati

    2008-01-01

    We report the case of a 16-year-old boy diagnosed with Sotos syndrome, with the rare association of bilateral primary optic atrophy and epilepsy. He presented with accelerated linear growth, a characteristic facial gestalt with distinctive facial features, seizures, and progressive diminution of vision in both eyes. He had features of gigantism from early childhood. Brain MRI and endocrine function tests were normal. This case is of interest because clinicians should be aware of this not-so-rare disorder; in addition to the classic features, the patient showed two unusual associations with Sotos syndrome. PMID:19893668

  13. Patterns of Eye Movements When Observers Judge Female Facial Attractiveness

    PubMed Central

    Zhang, Yan; Wang, Xiaoying; Wang, Juan; Zhang, Lili; Xiang, Yu

    2017-01-01

    The purpose of the present study was to explore the fixation patterns underlying explicit judgments of attractiveness and to infer which features are important in judging facial attractiveness. Behavioral studies of the perceptual cues for female facial attractiveness have implicated three potentially important features: averageness, symmetry, and sexual dimorphism. However, these studies did not explain which regions of facial images influence judgments of attractiveness. Therefore, the present research recorded the eye movements of 24 male and 19 female participants as they rated a series of 30 photographs of female faces for attractiveness. Results demonstrated the following: (1) fixation was longer and more frequent on the noses of female faces than on their eyes and mouths (no difference existed between the eyes and the mouth); (2) the average pupil diameter at the nose region was bigger than at the eyes and mouth (no difference existed between the eyes and the mouth); (3) the number of fixations of male participants was significantly greater than that of female participants; (4) observers first fixated on the eyes and mouth (no difference existed between the eyes and the mouth) before fixating on the nose area. In general, participants attended predominantly to the nose to form attractiveness judgments. The results of this study add a new dimension to the existing literature on the judgment of facial attractiveness. The major contribution of the present study is the finding that the area of the nose is vital in the judgment of facial attractiveness. This finding establishes a contribution of partial processing to female facial attractiveness judgments during eye-tracking. PMID:29209242

  14. Event-related theta synchronization predicts deficit in facial affect recognition in schizophrenia.

    PubMed

    Csukly, Gábor; Stefanics, Gábor; Komlósi, Sarolta; Czigler, István; Czobor, Pál

    2014-02-01

    Growing evidence suggests that abnormalities in the synchronized oscillatory activity of neurons in schizophrenia may lead to impaired neural activation and temporal coding, and thus to neurocognitive dysfunctions such as deficits in facial affect recognition. To gain insight into the neurobiological processes linked to facial affect recognition, we investigated both induced and evoked oscillatory activity by calculating Event-Related Spectral Perturbation (ERSP) and Inter-Trial Coherence (ITC) during facial affect recognition. Fearful and neutral faces as well as non-face patches were presented to 24 patients with schizophrenia and 24 matched healthy controls while EEG was recorded. The participants' task was to recognize facial expressions. Because previous findings with healthy controls showed that facial feature decoding was associated primarily with oscillatory activity in the theta band, we analyzed ERSP and ITC in this frequency band in the time interval of 140-200 ms, which corresponds to the N170 component. Event-related theta activity and phase-locking to facial expressions, but not to non-face patches, predicted emotion recognition performance in both controls and patients. Event-related changes in theta amplitude and phase-locking were significantly weaker in patients than in healthy controls, in line with previous investigations showing decreased neural synchronization in the low-frequency bands in patients with schizophrenia. Neural synchrony is thought to underlie distributed information processing. Our results indicate less effective functioning in the recognition of facial features, which may contribute to less effective social cognition in schizophrenia. PsycINFO Database Record (c) 2014 APA, all rights reserved.
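
    ERSP and ITC have standard definitions that fit in a few lines. Below is a single-channel, single-frequency Python sketch; the 140-200 ms theta window is taken from the abstract, while the wavelet parameters and function names are illustrative assumptions rather than the authors' exact pipeline.

        import numpy as np

        def morlet(freq, sfreq, n_cycles=5):
            """Complex Morlet wavelet at one frequency."""
            sigma_t = n_cycles / (2 * np.pi * freq)
            t = np.arange(-3 * sigma_t, 3 * sigma_t, 1.0 / sfreq)
            return np.exp(2j * np.pi * freq * t) * np.exp(-t ** 2 / (2 * sigma_t ** 2))

        def ersp_itc(trials, times, sfreq, freq=6.0):
            """trials: (n_trials, n_times) epochs of one channel; times in
            seconds with 0 = stimulus onset. Returns ERSP (dB change from the
            pre-stimulus baseline) and ITC in the 140-200 ms window; real
            analyses repeat this over channels and frequencies."""
            w = morlet(freq, sfreq)
            analytic = np.array([np.convolve(tr, w, mode="same") for tr in trials])
            power = np.abs(analytic) ** 2
            baseline = power[:, times < 0].mean()
            win = (times >= 0.14) & (times <= 0.20)
            ersp = 10 * np.log10(power[:, win].mean() / baseline)
            # ITC: length of the mean unit phase vector across trials.
            itc = np.abs(np.exp(1j * np.angle(analytic[:, win])).mean(axis=0)).mean()
            return ersp, itc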

  15. Patterns of Eye Movements When Observers Judge Female Facial Attractiveness.

    PubMed

    Zhang, Yan; Wang, Xiaoying; Wang, Juan; Zhang, Lili; Xiang, Yu

    2017-01-01

    The purpose of the present study was to explore the fixation patterns underlying explicit judgments of attractiveness and to infer which features are important in judging facial attractiveness. Behavioral studies of the perceptual cues for female facial attractiveness have implicated three potentially important features: averageness, symmetry, and sexual dimorphism. However, these studies did not explain which regions of facial images influence judgments of attractiveness. Therefore, the present research recorded the eye movements of 24 male and 19 female participants as they rated a series of 30 photographs of female faces for attractiveness. Results demonstrated the following: (1) fixation was longer and more frequent on the noses of female faces than on their eyes and mouths (no difference existed between the eyes and the mouth); (2) the average pupil diameter at the nose region was bigger than at the eyes and mouth (no difference existed between the eyes and the mouth); (3) the number of fixations of male participants was significantly greater than that of female participants; (4) observers first fixated on the eyes and mouth (no difference existed between the eyes and the mouth) before fixating on the nose area. In general, participants attended predominantly to the nose to form attractiveness judgments. The results of this study add a new dimension to the existing literature on the judgment of facial attractiveness. The major contribution of the present study is the finding that the area of the nose is vital in the judgment of facial attractiveness. This finding establishes a contribution of partial processing to female facial attractiveness judgments during eye-tracking.

  16. Avoidant decision making in social anxiety: the interaction of angry faces and emotional responses

    PubMed Central

    Pittig, Andre; Pawlikowski, Mirko; Craske, Michelle G.; Alpers, Georg W.

    2014-01-01

    Recent research indicates that angry facial expressions are preferentially processed and may facilitate automatic avoidance responses, especially in socially anxious individuals. However, few studies have examined whether this bias also expresses itself in more complex cognitive processes and behavior, such as decision making. We recently introduced a variation of the Iowa Gambling Task which allowed us to document the influence of task-irrelevant emotional cues on rational decision making. The present study used a modified gambling task to investigate the impact of angry facial expressions on decision making in 38 individuals with a wide range of social anxiety. Participants were to find out which choices were (dis-)advantageous to maximize overall gain. To create a decision conflict between approach of reward and avoidance of fear-relevant angry faces, advantageous choices were associated with angry facial expressions, whereas disadvantageous choices were associated with happy facial expressions. Results indicated that higher social avoidance predicted less advantageous decisions at the beginning of the task, i.e., when contingencies were still uncertain. Interactions with specific skin conductance responses further clarified that this initial avoidance occurred only in combination with elevated responses before choosing an angry facial expression. In addition, an interaction between high trait anxiety and elevated responses to early losses predicted faster learning of an advantageous strategy. These effects were independent of intelligence, general risky decision making, self-reported state anxiety, and depression. Thus, socially avoidant individuals who respond emotionally to angry facial expressions are more likely to show avoidance of these faces under uncertainty. This novel laboratory paradigm may be an appropriate analog for central features of social anxiety. PMID:25324792

  17. Avoidant decision making in social anxiety: the interaction of angry faces and emotional responses.

    PubMed

    Pittig, Andre; Pawlikowski, Mirko; Craske, Michelle G; Alpers, Georg W

    2014-01-01

    Recent research indicates that angry facial expressions are preferentially processed and may facilitate automatic avoidance responses, especially in socially anxious individuals. However, few studies have examined whether this bias also expresses itself in more complex cognitive processes and behavior, such as decision making. We recently introduced a variation of the Iowa Gambling Task which allowed us to document the influence of task-irrelevant emotional cues on rational decision making. The present study used a modified gambling task to investigate the impact of angry facial expressions on decision making in 38 individuals with a wide range of social anxiety. Participants were to find out which choices were (dis-)advantageous to maximize overall gain. To create a decision conflict between approach of reward and avoidance of fear-relevant angry faces, advantageous choices were associated with angry facial expressions, whereas disadvantageous choices were associated with happy facial expressions. Results indicated that higher social avoidance predicted less advantageous decisions at the beginning of the task, i.e., when contingencies were still uncertain. Interactions with specific skin conductance responses further clarified that this initial avoidance occurred only in combination with elevated responses before choosing an angry facial expression. In addition, an interaction between high trait anxiety and elevated responses to early losses predicted faster learning of an advantageous strategy. These effects were independent of intelligence, general risky decision making, self-reported state anxiety, and depression. Thus, socially avoidant individuals who respond emotionally to angry facial expressions are more likely to show avoidance of these faces under uncertainty. This novel laboratory paradigm may be an appropriate analog for central features of social anxiety.

  18. [Endoscopic treatment of small osteoma of nasal sinuses manifested as nasal and facial pain].

    PubMed

    Li, Yu; Zheng, Tianqi; Li, Zhong; Deng, Hongyuan; Guo, Chaoxian

    2015-12-01

    To discuss the clinical features, diagnosis, and endoscopic surgical intervention for small osteomas of the nasal sinuses causing nasal and facial pain. A retrospective review was performed on 21 patients with nasal and facial pain caused by small osteomas of the nasal sinuses; nasal endoscopic surgery was included in the treatment of all cases. The nasal and facial pain of all patients was relieved. Except for one case exhibiting periorbital bruising after the operation, the patients showed no postoperative complications. Nasal and facial pain caused by small osteomas of the nasal sinuses is clinically rare, mostly reflecting neuropathic pain of the nose and face caused by local compression resulting from the expansion of the osteoma. Early diagnosis and operative treatment can significantly relieve nasal and facial pain.

  19. Recovering faces from memory: the distracting influence of external facial features.

    PubMed

    Frowd, Charlie D; Skelton, Faye; Atherton, Chris; Pitchford, Melanie; Hepton, Gemma; Holden, Laura; McIntyre, Alex H; Hancock, Peter J B

    2012-06-01

    Recognition memory for unfamiliar faces is facilitated when contextual cues (e.g., head pose, background environment, hair, and clothing) are consistent between study and test. By contrast, inconsistencies in external features, especially hair, promote errors in unfamiliar face-matching tasks. For the construction of facial composites, as carried out by witnesses and victims of crime, the role of external features (hair, ears, and neck) is less clear, although research does suggest their involvement. Here, over three experiments, we investigate the impact of external features on recovering facial memories using a modern, recognition-based composite system, EvoFIT. Participant-constructors inspected an unfamiliar target face and, one day later, repeatedly selected items from arrays of whole faces, with "breeding," to "evolve" a composite with EvoFIT; further participants (evaluators) named the resulting composites. In Experiment 1, the important internal features (eyes, brows, nose, and mouth) were constructed more identifiably when the visual presence of external features was decreased by Gaussian blur during construction: higher blur yielded more identifiable internal features. In Experiment 2, increasing the visible extent of external features (to match the target's) in the presented face arrays also improved internal-feature quality, although less so than when external features were masked throughout construction. Experiment 3 demonstrated that masking external features promoted substantially more identifiable images than the previous method of blurring external features. Overall, the research indicates that external features are a distractive rather than a beneficial cue for face construction; the results also provide a much better method for constructing composites, one that should dramatically increase identification of offenders.

  20. Dynamic facial expressions of emotion transmit an evolving hierarchy of signals over time.

    PubMed

    Jack, Rachael E; Garrod, Oliver G B; Schyns, Philippe G

    2014-01-20

    Designed by biological and social evolutionary pressures, facial expressions of emotion comprise specific facial movements that support a near-optimal system of signaling and decoding. Although facial expressions are highly dynamic, little is known about the form and function of their temporal dynamics. Do facial expressions transmit diagnostic signals simultaneously, to optimize categorization of the six classic emotions, or sequentially, to support a more complex communication system of successive categorizations over time? Our data support the latter. Using a combination of perceptual expectation modeling, information theory, and Bayesian classifiers, we show that dynamic facial expressions of emotion transmit an evolving hierarchy of "biologically basic to socially specific" information over time. Early in the signaling dynamics, facial expressions systematically transmit few, biologically rooted face signals supporting the categorization of fewer elementary categories (e.g., approach/avoidance). Later transmissions comprise more complex signals that support categorization of a larger number of socially specific categories (i.e., the six classic emotions). We thus show that dynamic facial expressions of emotion provide a sophisticated signaling system, questioning the widely accepted notion that emotion communication comprises six basic (i.e., psychologically irreducible) categories, and instead suggesting four. Copyright © 2014 Elsevier Ltd. All rights reserved.

  1. Does Facial Amimia Impact the Recognition of Facial Emotions? An EMG Study in Parkinson’s Disease

    PubMed Central

    Argaud, Soizic; Delplanque, Sylvain; Houvenaghel, Jean-François; Auffret, Manon; Duprez, Joan; Vérin, Marc; Grandjean, Didier; Sauleau, Paul

    2016-01-01

    According to embodied simulation theory, understanding other people's emotions is fostered by facial mimicry. However, studies assessing the effect of facial mimicry on the recognition of emotion remain controversial. In Parkinson's disease (PD), one of the most distinctive clinical features is facial amimia, a reduction in facial expressiveness, but patients also show emotional disturbances. The present study used the pathological model of PD to examine the role of facial mimicry in emotion recognition by investigating EMG responses in PD patients during a facial emotion recognition task (anger, joy, neutral). Our results evidenced a significant decrease in facial mimicry for joy in PD, essentially linked to the absence of reaction of the zygomaticus major and orbicularis oculi muscles in response to happy avatars, whereas facial mimicry for expressions of anger was relatively preserved. We also confirmed that PD patients were less accurate in recognizing positive and neutral facial expressions, and we highlighted a beneficial effect of facial mimicry on the recognition of emotion. We thus provide additional support for embodied simulation theory, suggesting that facial mimicry is a potential lever for therapeutic action in PD, even if it does not appear to be strictly required for recognizing emotion as such. PMID:27467393

  2. Joint Patch and Multi-label Learning for Facial Action Unit Detection

    PubMed Central

    Zhao, Kaili; Chu, Wen-Sheng; De la Torre, Fernando; Cohn, Jeffrey F.; Zhang, Honggang

    2016-01-01

    The face is one of the most powerful channels of nonverbal communication. The most commonly used taxonomy to describe facial behaviour is the Facial Action Coding System (FACS). FACS segments the visible effects of facial muscle activation into 30+ action units (AUs). AUs, which may occur alone and in thousands of combinations, can describe nearly all possible facial expressions. Most existing methods for automatic AU detection treat the problem using one-vs-all classifiers and fail to exploit dependencies among AUs and facial features. We introduce joint patch and multi-label learning (JPML) to address these issues. JPML leverages group sparsity by selecting a sparse subset of facial patches while learning a multi-label classifier. In four of five comparisons on three diverse datasets, CK+, GFT, and BP4D, JPML produced the highest average F1 scores in comparison with the state of the art. PMID:27382243
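
    The group-sparsity idea behind JPML, shrinking whole patch blocks of a multi-label classifier so that uninformative patches drop out, can be illustrated with a toy proximal-gradient loop. This Python sketch is not the authors' optimizer or objective; the loss, names, and hyperparameters are illustrative assumptions.

        import numpy as np

        def jpml_sketch(X, Y, groups, lam=0.1, lr=0.01, n_iter=500):
            """Multi-label logistic regression with an l2,1 (group-sparse)
            penalty over patch groups, trained by proximal gradient descent.
            X: (n, d) stacked patch features; Y: (n, n_aus) binary AU labels;
            groups: list of index arrays, one per facial patch."""
            n, d = X.shape
            W = np.zeros((d, Y.shape[1]))
            for _ in range(n_iter):
                P = 1.0 / (1.0 + np.exp(-X @ W))      # per-AU sigmoid
                W -= lr * (X.T @ (P - Y)) / n         # logistic-loss gradient step
                for g in groups:                      # proximal step: shrink each
                    norm = np.linalg.norm(W[g])       # patch block as a whole, so
                    if norm > 1e-12:                  # unhelpful patches go to zero
                        W[g] *= max(0.0, 1.0 - lr * lam / norm)
            return W

        # Patches whose rows of W end up all-zero have been deselected.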

  3. Alagille syndrome in a Vietnamese cohort: mutation analysis and assessment of facial features.

    PubMed

    Lin, Henry C; Le Hoang, Phuc; Hutchinson, Anne; Chao, Grace; Gerfen, Jennifer; Loomes, Kathleen M; Krantz, Ian; Kamath, Binita M; Spinner, Nancy B

    2012-05-01

    Alagille syndrome (ALGS, OMIM #118450) is an autosomal dominant disorder that affects multiple organ systems including the liver, heart, eyes, vertebrae, and face. ALGS is caused by mutations in one of two genes in the Notch Signaling Pathway, Jagged1 (JAG1) or NOTCH2. In this study, analysis of 21 Vietnamese ALGS individuals led to the identification of 19 different mutations (18 JAG1 and 1 NOTCH2), 17 of which are novel, including the third reported NOTCH2 mutation in Alagille Syndrome. The spectrum of JAG1 mutations in the Vietnamese patients is similar to that previously reported, including nine frameshift, three missense, two splice site, one nonsense, two whole gene, and one partial gene deletion. The missense mutations are all likely to be disease causing, as two are loss of cysteines (C22R and C78G) and the third creates a cryptic splice site in exon 9 (G386R). No correlation between genotype and phenotype was observed. Assessment of clinical phenotype revealed that skeletal manifestations occur with a higher frequency than in previously reported Alagille cohorts. Facial features were difficult to assess and a Vietnamese pediatric gastroenterologist was only able to identify the facial phenotype in 61% of the cohort. To assess the agreement among North American dysmorphologists at detecting the presence of ALGS facial features in the Vietnamese patients, 37 clinical dysmorphologists evaluated a photographic panel of 20 Vietnamese children with and without ALGS. The dysmorphologists were unable to identify the individuals with ALGS in the majority of cases, suggesting that evaluation of facial features should not be used in the diagnosis of ALGS in this population. This is the first report of mutations and phenotypic spectrum of ALGS in a Vietnamese population. Copyright © 2012 Wiley Periodicals, Inc.

  4. The morphometrics of "masculinity" in human faces.

    PubMed

    Mitteroecker, Philipp; Windhager, Sonja; Müller, Gerd B; Schaefer, Katrin

    2015-01-01

    In studies of social inference and human mate preference, a wide but inconsistent array of tools for computing facial masculinity has been devised. Several of these approaches implicitly assumed that the individual expression of sexually dimorphic shape features, which we refer to as maleness, resembles facial shape features perceived as masculine. We outline a morphometric strategy for estimating separately the face shape patterns that underlie perceived masculinity and maleness, and for computing individual scores for these shape patterns. We further show how faces with different degrees of masculinity or maleness can be constructed in a geometric morphometric framework. In an application of these methods to a set of human facial photographs, we found that shape features typically perceived as masculine are wide faces with a wide inter-orbital distance, a wide nose, thin lips, and a large and massive lower face. The individual expressions of this combination of shape features--the masculinity shape scores--were the best predictor of rated masculinity among the compared methods (r = 0.5). The shape features perceived as masculine only partly resembled the average face shape difference between males and females (sexual dimorphism). Discriminant functions and Procrustes distances to the female mean shape were poor predictors of perceived masculinity.
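
    The maleness/masculinity distinction can be made concrete as two projections over Procrustes-aligned landmarks. The Python sketch below mirrors that separation only schematically; alignment is assumed to be done already, and this is not the paper's exact morphometric pipeline.

        import numpy as np

        def maleness_scores(shapes, is_male):
            """Project aligned landmark shapes (n, k, 2) onto the
            male-minus-female mean-shape axis ('maleness')."""
            X = shapes.reshape(len(shapes), -1)
            axis = X[is_male].mean(axis=0) - X[~is_male].mean(axis=0)
            axis /= np.linalg.norm(axis)
            return (X - X.mean(axis=0)) @ axis

        def masculinity_scores(shapes, ratings):
            """Regress perceived-masculinity ratings onto shape space and
            project shapes onto the resulting direction ('masculinity')."""
            X = shapes.reshape(len(shapes), -1)
            Xc = X - X.mean(axis=0)
            yc = np.asarray(ratings, float) - np.mean(ratings)
            beta, *_ = np.linalg.lstsq(Xc, yc, rcond=None)
            beta /= np.linalg.norm(beta)
            return Xc @ beta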

  5. Facial expression recognition under partial occlusion based on fusion of global and local features

    NASA Astrophysics Data System (ADS)

    Wang, Xiaohua; Xia, Chen; Hu, Min; Ren, Fuji

    2018-04-01

    Facial expression recognition under partial occlusion is a challenging research problem. This paper proposes a novel framework for facial expression recognition under occlusion by fusing global and local features. For the global aspect, information entropy is first employed to locate the occluded region; Principal Component Analysis (PCA) is then adopted to reconstruct the occluded region of the image. After that, a replacement strategy reconstructs the image by substituting the occluded region with the corresponding region of the best-matched image in the training set, and a Pyramid Weber Local Descriptor (PWLD) feature is extracted. Finally, the outputs of an SVM are fitted to the probabilities of the target class using a sigmoid function. For the local aspect, an overlapping block-based method is adopted to extract WLD features, with each block weighted adaptively by information entropy; Chi-square distance and similar-block summation methods are then applied to obtain the probability of each emotion. Finally, fusion at the decision level, based on Dempster-Shafer theory of evidence, combines the global and local predictions. Experimental results on the Cohn-Kanade and JAFFE databases demonstrate the effectiveness and fault tolerance of this method.
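
    The decision-level fusion step can be illustrated with Dempster's rule of combination over singleton expression hypotheses, treating each channel's class probabilities as mass assignments. A minimal Python sketch follows, with made-up numbers in the usage example; the restriction to singleton hypotheses is a simplifying assumption, not necessarily the paper's full evidence model.

        import numpy as np

        def dempster_combine(m1, m2):
            """Dempster's rule of combination for mass functions restricted to
            singleton hypotheses: the fused mass is the normalised elementwise
            product; all disagreeing mass is treated as conflict."""
            agreement = float(np.sum(m1 * m2))
            if np.isclose(agreement, 0.0):
                raise ValueError("total conflict: combination undefined")
            return (m1 * m2) / agreement

        # Toy usage: global (SVM) and local (block-WLD) posteriors over
        # three emotions; values are made up for illustration.
        p_global = np.array([0.6, 0.3, 0.1])
        p_local = np.array([0.5, 0.2, 0.3])
        print(dempster_combine(p_global, p_local))   # fused posterior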

  6. In-the-wild facial expression recognition in extreme poses

    NASA Astrophysics Data System (ADS)

    Yang, Fei; Zhang, Qian; Zheng, Chi; Qiu, Guoping

    2018-04-01

    Facial expression recognition is an active problem in computer vision research. In recent years, the work has moved from lab environments to in-the-wild conditions, which are challenging, especially under extreme head poses. Current expression recognition systems typically try to factor out pose effects in pursuit of general applicability. In this work, we take the opposite approach: we consider head pose explicitly and detect expressions within specific head poses. Our method has two parts: first, the head pose is detected and grouped into one of several pre-defined pose classes; second, facial expression recognition is performed within each pose class. Our experiments show that recognition with pose-class grouping is much better than direct recognition that ignores pose. We combine hand-crafted features (SIFT, LBP and geometric features) with deep-learning features as the representation of the expressions; the hand-crafted features are fed into the deep learning framework alongside the high-level deep features. For comparison, we implement SVM and random forest classifiers as the prediction models. To train and test our methodology, we labeled a face dataset with the six basic expressions.
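
    The two-stage routing (a pose classifier followed by per-pose expression classifiers) can be sketched with scikit-learn as follows; the features, pose classes, and labels are randomly generated placeholders rather than the paper's SIFT/LBP/deep descriptors.

      import numpy as np
      from sklearn.svm import SVC

      rng = np.random.default_rng(1)

      # Stand-ins for expression features, pose classes, and labels.
      X = rng.normal(size=(600, 128))
      pose = rng.integers(0, 3, size=600)   # 3 pre-defined pose classes
      expr = rng.integers(0, 6, size=600)   # 6 basic expressions

      # Stage 1: group each face into a pose class.
      pose_clf = SVC().fit(X, pose)

      # Stage 2: one expression classifier per pose class, trained only
      # on faces belonging to that pose.
      expr_clfs = {p: SVC().fit(X[pose == p], expr[pose == p])
                   for p in np.unique(pose)}

      def predict_expression(x):
          p = int(pose_clf.predict(x.reshape(1, -1))[0])
          return int(expr_clfs[p].predict(x.reshape(1, -1))[0])

      print(predict_expression(X[0]))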

  7. Parent and child ratings of satisfaction with speech and facial appearance in Flemish pre-pubescent boys and girls with unilateral cleft lip and palate.

    PubMed

    Van Lierde, K M; Dhaeseleer, E; Luyten, A; Van De Woestijne, K; Vermeersch, H; Roche, N

    2012-02-01

    The purpose of this controlled study was to determine satisfaction with speech and facial appearance in Flemish pre-pubescent children with unilateral cleft lip and palate. Forty-three subjects with unilateral cleft lip and palate and 43 age- and gender-matched controls participated. The Cleft Evaluation Profile was used to assess perceived satisfaction with individual features related to cleft care. Both the cleft palate subjects and their parents were satisfied with speech and facial appearance. The Pearson χ² test revealed significant differences between the cleft palate and control groups regarding hearing, nasal aesthetics and function, and the appearance of the lip. An in-depth analysis of well-specified speech characteristics revealed that children with clefts and their parents significantly more often reported the presence of articulation, voice and resonance disorders, and experienced /s/, /r/, /t/ and /d/ as the most difficult consonants. To what extent these patients would be aided by specific motor-oriented oral speech techniques for the realisation of specific consonants, by attention to vocal and ear care, and by secondary velopharyngeal surgery, together with primary correction of the cleft nose deformity simultaneously with primary lip closure, remains a subject for future research. Copyright © 2011 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.

  8. Facial morphology and children's categorization of facial expressions of emotions: a comparison between Asian and Caucasian faces.

    PubMed

    Gosselin, P; Larocque, C

    2000-09-01

    The effects of Asian and Caucasian facial morphology were examined by having Canadian children categorize pictures of facial expressions of basic emotions. The pictures were selected from the Japanese and Caucasian Facial Expressions of Emotion set developed by D. Matsumoto and P. Ekman (1989). Sixty children between the ages of 5 and 10 years were presented with short stories and an array of facial expressions, and were asked to point to the expression that best depicted the specific emotion experienced by the characters. The results indicated that expressions of fear and surprise were better categorized from Asian faces, whereas expressions of disgust were better categorized from Caucasian faces. These differences originated in some specific confusions between expressions.

  9. A View of the Therapy for Bell's Palsy Based on Molecular Biological Analyses of Facial Muscles.

    PubMed

    Moriyama, Hiroshi; Mitsukawa, Nobuyuki; Itoh, Masahiro; Otsuka, Naruhito

    2017-12-01

    Details regarding the molecular biological features of Bell's palsy have not been widely reported. We genetically analyzed facial muscles to clarify these points, performing genetic analysis of facial muscle specimens from Japanese patients with severe (House-Brackmann facial nerve grading system V) and moderate (House-Brackmann facial nerve grading system III) dysfunction due to Bell's palsy. Microarray analysis of gene expression was performed on specimens from the healthy and affected sides, and expression was compared between sides; a change in gene expression was defined as an affected-side/healthy-side ratio of >1.5 or <0.5. We observed that gene expression in Bell's palsy changes with the degree of facial nerve palsy; in particular, genes in the muscle, neuron, and energy categories tended to fluctuate with the degree of palsy. This study is expected to aid the development of new treatments and diagnostic/prognostic markers based on the severity of facial nerve palsy.

  10. Iatrogenic occlusion of the ophthalmic artery after cosmetic facial filler injections: a national survey by the Korean Retina Society.

    PubMed

    Park, Kyu Hyung; Kim, Yong-Kyu; Woo, Se Joon; Kang, Se Woong; Lee, Won Ki; Choi, Kyung Seek; Kwak, Hyung Woo; Yoon, Ill Han; Huh, Kuhl; Kim, Jong Woo

    2014-06-01

    Iatrogenic occlusion of the ophthalmic artery and its branches is a rare but devastating complication of cosmetic facial filler injections. To investigate the clinical and angiographic features of these occlusions, data from 44 patients with occlusion of the ophthalmic artery or its branches after cosmetic facial filler injections were obtained retrospectively from a national survey completed by members of the Korean Retina Society at 27 retinal centers. Clinical features were compared between patients grouped by angiographic findings and by injected filler material; the main outcome measures were visual prognosis and its relationship to angiographic findings and injected filler material. Ophthalmic artery occlusion was classified into six types according to angiographic findings. Twenty-eight patients had diffuse retinal and choroidal artery occlusions (ophthalmic artery occlusion, generalized posterior ciliary artery occlusion, and central retinal artery occlusion); sixteen patients had localized occlusions (localized posterior ciliary artery occlusion, branch retinal artery occlusion, and posterior ischemic optic neuropathy). Patients with diffuse occlusions showed worse initial and final visual acuity and less visual gain than those with localized occlusions. Patients receiving autologous fat injections (n = 22) had diffuse ophthalmic artery occlusions, worse visual prognosis, and a higher incidence of combined brain infarction compared with patients receiving hyaluronic acid injections (n = 13). Clinical features of iatrogenic occlusion of the ophthalmic artery and its branches following cosmetic facial filler injections were thus diverse according to the location and extent of obstruction and the injected filler material. Autologous fat injections were associated with a worse visual prognosis and a higher incidence of combined cerebral infarction. Extreme caution and care should be taken during these injections, and physicians should be aware of the diverse spectrum of complications that can follow them.

  11. Relative preservation of the recognition of positive facial expression "happiness" in Alzheimer disease.

    PubMed

    Maki, Yohko; Yoshida, Hiroshi; Yamaguchi, Tomoharu; Yamaguchi, Haruyasu

    2013-01-01

    A positivity recognition bias has been reported for facial expressions as well as memory and visual stimuli in aged individuals, whereas emotional facial recognition in Alzheimer disease (AD) patients remains controversial, with possible involvement of confounding factors such as deficits in the spatial processing of non-emotional facial features and in the verbal processing needed to label emotions. We therefore examined whether recognition of positive facial expressions is preserved in AD patients, using a new method that eliminated the influence of these confounding factors. Sensitivity to six basic facial expressions (happiness, sadness, surprise, anger, disgust, and fear) was evaluated in 12 outpatients with mild AD, 17 aged normal controls (ANC), and 25 young normal controls (YNC). To eliminate factors related to non-emotional facial features, averaged faces were prepared as stimuli. To eliminate factors related to verbal processing, participants were required to match the stimulus image to an answer image, avoiding the use of verbal labels. In the recognition of happiness, there was no difference in sensitivity between YNC and ANC, or between ANC and AD patients. AD patients were less sensitive than ANC in the recognition of sadness, surprise, and anger; ANC were less sensitive than YNC in the recognition of surprise, anger, and disgust. Within the AD patient group, sensitivity to happiness was significantly higher than sensitivity to the other five expressions. In AD patients, recognition of happiness was thus relatively preserved: it was the most sensitively recognized expression and withstood the influences of age and disease.

  12. Hennekam lymphangiectasia syndrome

    PubMed Central

    Lakshminarayana, G.; Mathew, A.; Rajesh, R.; Kurien, G.; Unni, V. N.

    2011-01-01

    Hennekam lymphangiectasia syndrome is a rare disorder comprising intestinal and renal lymphangiectasia, a dysmorphic facial appearance, and mental retardation. The facial features include hypertelorism with a wide, flat nasal bridge, epicanthic folds, a small mouth and small ears. We describe the case of a multigravida with a poor obstetric history, characteristic facial and dental anomalies, and bilateral renal lymphangiectasia. To our knowledge, this is the first case of Hennekam lymphangiectasia syndrome with anodontia to be reported from India. PMID:22022089

  13. A Diagnosis to Consider in an Adult Patient with Facial Features and Intellectual Disability: Williams Syndrome.

    PubMed

    Doğan, Özlem Akgün; Şimşek Kiper, Pelin Özlem; Utine, Gülen Eda; Alikaşifoğlu, Mehmet; Boduroğlu, Koray

    2017-03-01

    Williams syndrome (OMIM #194050) is a rare, well-recognized, multisystemic genetic condition affecting approximately 1/7,500 individuals. There are no marked regional differences in the incidence of Williams syndrome. The syndrome is caused by a hemizygous deletion of approximately 28 genes, including ELN on chromosome 7q11.2. Prenatal-onset growth retardation, distinct facial appearance, cardiovascular abnormalities, and unique hypersocial behavior are among the most common clinical features. Here, we report the case of a patient referred to us with distinct facial features and intellectual disability, who was diagnosed with Williams syndrome at the age of 37 years. Our aim is to increase awareness regarding the diagnostic features and complications of this recognizable syndrome among adult health care providers. Williams syndrome is usually diagnosed during infancy or childhood, but in the absence of classical findings, such as cardiovascular anomalies, hypercalcemia, and cognitive impairment, the diagnosis could be delayed. Due to the multisystemic and progressive nature of the syndrome, accurate diagnosis is critical for appropriate care and screening for the associated morbidities that may affect the patient's health and well-being.

  14. Four siblings with distal renal tubular acidosis and nephrocalcinosis, neurobehavioral impairment, short stature, and distinctive facial appearance: a possible new autosomal recessive syndrome.

    PubMed

    Faqeih, Eissa; Al-Akash, Samhar I; Sakati, Nadia; Teebi, Prof Ahmad S

    2007-09-01

    We report on four siblings (three males, one female) born to first-cousin Arab parents with the constellation of distal renal tubular acidosis (RTA), small kidneys, nephrocalcinosis, neurobehavioral impairment, short stature, and distinctive facial features. They presented with early developmental delay and subsequent severe mental, behavioral and social impairment with autistic-like features. Their facial features are unique, with prominent cheeks, a well-defined philtrum, a large bulbous nose, a V-shaped upper lip border, a full lower lip, an open mouth with protruding tongue, and pits on the ear lobule. All had proteinuria, hypercalciuria, hypercalcemia, and normal anion-gap metabolic acidosis. Renal ultrasound examinations revealed small kidneys with varying degrees of hyperechogenicity and nephrocalcinosis. Additional findings included dilated ventricles and cerebral demyelination on brain imaging studies. Other than distal RTA, common causes of nephrocalcinosis were excluded. The constellation of features in this family likely represents a new autosomal recessive syndrome, providing further evidence of the heterogeneity of nephrocalcinosis syndromes. Copyright 2007 Wiley-Liss, Inc.

  15. Facial reactions to violent and comedy films: Association with callous-unemotional traits and impulsive aggression.

    PubMed

    Fanti, Kostas A; Kyranides, Melina Nicole; Panayiotou, Georgia

    2017-02-01

    The current study adds to prior research by investigating specific (happiness, sadness, surprise, disgust, anger and fear) and general (corrugator and zygomatic muscle activity) facial reactions to violent and comedy films among individuals with varying levels of callous-unemotional (CU) traits and impulsive aggression (IA). Participants at differential risk of CU traits and IA were selected from a sample of 1225 young adults. In Experiment 1, participants' (N = 82) facial expressions were recorded while they watched violent and comedy films. Video footage of the participants' facial expressions was analysed using FaceReader, facial coding software that classifies facial reactions. Findings suggested that individuals with elevated CU traits showed reduced facial reactions of sadness and disgust to violent films, indicating low empathic concern in response to victims' distress. In contrast, impulsive aggressors produced more angry facial expressions specifically when viewing violent and comedy films. In Experiment 2 (N = 86), facial reactions were measured by monitoring facial electromyography activity. The FaceReader findings were corroborated by reduced facial electromyography at the corrugator, but not the zygomatic, muscle in response to violent films among individuals high in CU traits. Additional analysis suggested that sympathy towards victims explained the association between CU traits and reduced facial reactions to violent films.

  16. Gender classification under extended operating conditions

    NASA Astrophysics Data System (ADS)

    Rude, Howard N.; Rizki, Mateen

    2014-06-01

    Gender classification is a critical component of a robust image security system. Many techniques exist to perform gender classification using facial features. In contrast, this paper explores gender classification using body features extracted from clothed subjects. Several of the most effective types of features for gender classification identified in literature were implemented and applied to the newly developed Seasonal Weather And Gender (SWAG) dataset. SWAG contains video clips of approximately 2000 samples of human subjects captured over a period of several months. The subjects are wearing casual business attire and outer garments appropriate for the specific weather conditions observed in the Midwest. The results from a series of experiments are presented that compare the classification accuracy of systems that incorporate various types and combinations of features applied to multiple looks at subjects at different image resolutions to determine a baseline performance for gender classification.
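
    As a rough illustration of body-feature gender classification of the kind compared here, the Python snippet below extracts histogram-of-oriented-gradients (HOG) features from synthetic stand-ins for clothed-subject crops and trains a linear SVM; HOG is one plausible choice among the feature types reported in the literature, and none of the data or parameters come from the SWAG dataset itself.

      import numpy as np
      from skimage.feature import hog
      from sklearn.svm import LinearSVC

      rng = np.random.default_rng(2)

      # Stand-ins for 100 grayscale body crops (128 x 64 pixels each).
      images = rng.random(size=(100, 128, 64))
      labels = rng.integers(0, 2, size=100)   # 0 = female, 1 = male

      # One HOG descriptor per crop.
      feats = np.array([
          hog(im, orientations=9, pixels_per_cell=(8, 8),
              cells_per_block=(2, 2))
          for im in images
      ])

      clf = LinearSVC().fit(feats, labels)
      print("training accuracy:", clf.score(feats, labels))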

  17. The development of automated behavior analysis software

    NASA Astrophysics Data System (ADS)

    Jaana, Yuki; Prima, Oky Dicky A.; Imabuchi, Takashi; Ito, Hisayoshi; Hosogoe, Kumiko

    2015-03-01

    The measurement of behavior of participants in a conversation scene involves verbal and nonverbal communication. Measurement validity may vary across observers owing to factors such as human error, poorly designed measurement systems, and inadequate observer training. Although some systems have been introduced in previous studies to measure behaviors automatically, these systems prevent participants from talking in a natural way. In this study, we propose a software application that automatically analyzes the behaviors of participants, including utterances, facial expressions (happy or neutral), head nods, and poses, using only a single omnidirectional camera. The camera is small enough to be embedded into a table, allowing participants to have spontaneous conversations. The proposed software utilizes facial feature tracking based on a constrained local model to observe changes in the facial features captured by the camera, and the Japanese Female Facial Expression database to recognize expressions. Our experimental results show significant correlations between measurements made by human observers and by the software.

  18. The Face of Noonan Syndrome: Does Phenotype Predict Genotype

    PubMed Central

    Allanson, Judith E.; Bohring, Axel; Dorr, Helmuth-Guenther; Dufke, Andreas; Gillessen-Kaesbach, Gabrielle; Horn, Denise; König, Rainer; Kratz, Christian P.; Kutsche, Kerstin; Pauli, Silke; Raskin, Salmo; Rauch, Anita; Turner, Anne; Wieczorek, Dagmar; Zenker, Martin

    2011-01-01

    The facial photographs of 81 individuals with Noonan syndrome, from infancy to adulthood, have been evaluated by two dysmorphologists (JA and MZ), each of whom has considerable experience with disorders of the Ras/MAPK pathway. Thirty-two of this cohort have PTPN11 mutations, 21 SOS1 mutations, 11 RAF1 mutations, and 17 KRAS mutations. The facial appearance of each person was judged to be typical of Noonan syndrome or atypical. In each gene category both typical and unusual faces were found. We determined that some individuals with mutations in the most commonly affected gene, PTPN11, which is correlated with the cardinal physical features, may have a quite atypical face. Conversely, some individuals with KRAS mutations, which may be associated with a less characteristic intellectual phenotype and a resemblance to Costello and cardio-facio-cutaneous syndromes, can have a very typical face. Thus, the facial phenotype, alone, is insufficient to predict the genotype, but certain facial features may facilitate an educated guess in some cases. PMID:20602484

  19. Novel dynamic Bayesian networks for facial action element recognition and understanding

    NASA Astrophysics Data System (ADS)

    Zhao, Wei; Park, Jeong-Seon; Choi, Dong-You; Lee, Sang-Woong

    2011-12-01

    In daily life, language is an important tool of communication between people, but facial actions also convey a great amount of information. Facial action recognition has therefore become a popular research topic in the field of human-computer interaction (HCI). It remains a challenging task: there are literally thousands of facial muscular movements, many of which differ only subtly, and muscular movements often occur simultaneously with changes in pose. To address this problem, we first build a fully automatic facial point detection system based on a local Gabor filter bank and principal component analysis. Novel dynamic Bayesian networks are then proposed to perform facial action recognition with the junction tree algorithm over a limited number of feature points. To evaluate the proposed method, we used the Korean face database for model training, and the CUbiC FacePix database, the Facial Expressions and Emotion Database, the Japanese Female Facial Expression database, and our own database for testing. Our experimental results clearly demonstrate the feasibility of the proposed approach.
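
    The front end described here, a local Gabor filter bank followed by principal component analysis, can be sketched briefly. The snippet below builds a small Gabor bank with OpenCV, samples the filter responses at candidate facial points, and compresses them with PCA; the bank parameters, point locations, and input image are illustrative assumptions, and the dynamic-Bayesian-network stage is not shown.

      import cv2
      import numpy as np
      from sklearn.decomposition import PCA

      rng = np.random.default_rng(3)
      image = rng.random((128, 128)).astype(np.float32)  # stand-in face image

      # Small Gabor bank: 2 scales x 2 wavelengths x 6 orientations.
      bank = [cv2.getGaborKernel((21, 21), sigma, theta, lambd, 0.5)
              for sigma in (2, 4)
              for lambd in (8, 16)
              for theta in np.linspace(0, np.pi, 6, endpoint=False)]

      responses = np.stack([cv2.filter2D(image, cv2.CV_32F, k) for k in bank])

      # Feature vector per candidate facial point: the bank's responses at
      # that pixel, compressed with PCA.
      points = rng.integers(16, 112, size=(30, 2))   # hypothetical candidates
      feats = np.array([responses[:, y, x] for x, y in points])
      feats_low = PCA(n_components=10).fit_transform(feats)
      print(feats_low.shape)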

  20. A Neuromonitoring Approach to Facial Nerve Preservation During Image-guided Robotic Cochlear Implantation.

    PubMed

    Ansó, Juan; Dür, Cilgia; Gavaghan, Kate; Rohrbach, Helene; Gerber, Nicolas; Williamson, Tom; Calvo, Enric M; Balmer, Thomas Wyss; Precht, Christina; Ferrario, Damien; Dettmer, Matthias S; Rösler, Kai M; Caversaccio, Marco D; Bell, Brett; Weber, Stefan

    2016-01-01

    A multielectrode probe in combination with an optimized stimulation protocol could provide sufficient sensitivity and specificity to act as an effective safety mechanism for preservation of the facial nerve in case of an unsafe drill distance during image-guided cochlear implantation. Minimally invasive cochlear implantation is enabled by image-guided, robotic-assisted drilling of an access tunnel to the middle ear cavity. The approach requires the drill to pass at distances below 1 mm from the facial nerve, so safety mechanisms for protecting this critical structure are required. Neuromonitoring is currently used to determine facial nerve proximity in mastoidectomy but lacks the sensitivity and specificity necessary to distinguish the close distance ranges experienced in the minimally invasive approach, possibly because of current shunting of uninsulated stimulating drilling tools in the drill tunnel and because of nonoptimized stimulation parameters. To this end, we propose an advanced neuromonitoring approach using varying levels of stimulation parameters together with an integrated bipolar and monopolar stimulating probe. An in vivo study (sheep model) was conducted in which measurements at specifically planned and navigated lateral distances from the facial nerve were performed to determine whether specific sets of stimulation parameters, in combination with the proposed neuromonitoring system, could reliably detect an imminent collision with the facial nerve. For accurate positioning of the neuromonitoring probe, a dedicated robotic system for image-guided cochlear implantation was used, and drilling accuracy was corrected on postoperative micro-computed tomographic images. From 29 trajectories analyzed in five subjects, a correlation between stimulus threshold and drill-to-facial-nerve distance was found in trajectories colliding with the facial nerve (distance <0.1 mm). The shortest pulse duration that provided the highest linear correlation between stimulation intensity and drill-to-facial-nerve distance was 250 μs. Only at low stimulus intensities (≤0.3 mA) and with the bipolar configurations of the probe did the neuromonitoring system achieve sufficient lateral specificity (>95%) at distances to the facial nerve below 0.5 mm. However, reducing the stimulus threshold to 0.3 mA or lower decreased the facial nerve distance detection range to below 0.1 mm (>95% sensitivity). Subsequent histopathology follow-up of three representative cases in which the neuromonitoring system reliably detected a collision with the facial nerve (distance <0.1 mm) revealed either mild or no damage to the nerve fascicles. Our findings suggest that although no general correlation between facial nerve distance and stimulation threshold existed, possibly because of variation in patient-specific anatomy, the correlations at very close distances to the facial nerve and the high levels of specificity would enable a binary warning system to be developed using the proposed probe at low stimulation currents.

  1. Neural processing of fearful and happy facial expressions during emotion-relevant and emotion-irrelevant tasks: A fixation-to-feature approach.

    PubMed

    Neath-Tavares, Karly N; Itier, Roxane J

    2016-09-01

    Research suggests an important role of the eyes and mouth for discriminating facial expressions of emotion. A gaze-contingent procedure was used to test the impact of fixation to facial features on the neural response to fearful, happy and neutral facial expressions in an emotion discrimination (Exp.1) and an oddball detection (Exp.2) task. The N170 was the only eye-sensitive ERP component, and this sensitivity did not vary across facial expressions. In both tasks, compared to neutral faces, responses to happy expressions were seen as early as 100-120ms occipitally, while responses to fearful expressions started around 150ms, on or after the N170, at both occipital and lateral-posterior sites. Analyses of scalp topographies revealed different distributions of these two emotion effects across most of the epoch. Emotion processing interacted with fixation location at different times between tasks. Results suggest a role of both the eyes and mouth in the neural processing of fearful expressions and of the mouth in the processing of happy expressions, before 350ms. Copyright © 2016 Elsevier B.V. All rights reserved.

  2. Tryptophan depletion decreases the recognition of fear in female volunteers.

    PubMed

    Harmer, C J; Rogers, R D; Tunbridge, E; Cowen, P J; Goodwin, G M

    2003-06-01

    Serotonergic processes have been implicated in the modulation of fear conditioning in humans, postulated to occur at the level of the amygdala. The processing of other fear-relevant cues, such as facial expressions, has also been associated with amygdala function, but an effect of serotonin depletion on these processes has not been assessed. The present study investigated the effects of reducing serotonin function, using acute tryptophan depletion, on the recognition of basic facial expressions of emotions in healthy male and female volunteers. A double-blind between-groups design was used, with volunteers being randomly allocated to receive an amino acid drink specifically lacking tryptophan or a control mixture containing a balanced mixture of these amino acids. Participants were given a facial expression recognition task 5 h after drink administration. This task featured examples of six basic emotions (fear, anger, disgust, surprise, sadness and happiness) that had been morphed between each full emotion and neutral in 10% steps. As a control, volunteers were given a famous face classification task matched in terms of response selection and difficulty level. Tryptophan depletion significantly impaired the recognition of fearful facial expressions in female, but not male, volunteers. This was specific since recognition of other basic emotions was comparable in the two groups. There was also no effect of tryptophan depletion on the classification of famous faces or on subjective state ratings of mood or anxiety. These results confirm a role for serotonin in the processing of fear related cues, and in line with previous findings also suggest greater effects of tryptophan depletion in female volunteers. Although acute tryptophan depletion does not typically affect mood in healthy subjects, the present results suggest that subtle changes in the processing of emotional material may occur with this manipulation of serotonin function.

  3. Information processing of motion in facial expression and the geometry of dynamical systems

    NASA Astrophysics Data System (ADS)

    Assadi, Amir H.; Eghbalnia, Hamid; McMenamin, Brenton W.

    2005-01-01

    An interesting problem in the analysis of video data concerns the design of algorithms that detect perceptually significant features in an unsupervised manner, for instance machine learning methods for automatic classification of human expression. A geometric formulation of this genre of problems can be modeled with the help of perceptual psychology. In this article, we outline one approach for a special case where video segments are to be classified according to expression of emotion or other similar facial motions. The encoding of realistic facial motions that convey expression of emotions for a particular person P forms a parameter space XP whose study reveals the "objective geometry" for the problem of unsupervised feature detection from video. The geometric features and discrete representation of the space XP are independent of subjective evaluations by observers. While the "subjective geometry" of XP varies from observer to observer, levels of sensitivity and variation in perception of facial expressions appear to share a certain level of universality among members of similar cultures. Therefore, statistical geometry of invariants of XP for a sample of population could provide effective algorithms for extraction of such features. In cases where the frequency of events is sufficiently large in the sample data, a suitable framework could be provided to facilitate the information-theoretic organization and study of statistical invariants of such features. This article provides a general approach to encode motion in terms of a particular genre of dynamical systems and the geometry of their flow. An example is provided to illustrate the general theory.

  4. Face processing in autism: Reduced integration of cross-feature dynamics.

    PubMed

    Shah, Punit; Bird, Geoffrey; Cook, Richard

    2016-02-01

    Characteristic problems with social interaction have prompted considerable interest in the face processing of individuals with Autism Spectrum Disorder (ASD). Studies suggest that reduced integration of information from disparate facial regions likely contributes to difficulties recognizing static faces in this population. Recent work also indicates that observers with ASD have problems using patterns of facial motion to judge identity and gender, and may be less able to derive global motion percepts. These findings raise the possibility that feature integration deficits also impact the perception of moving faces. To test this hypothesis, we examined whether observers with ASD exhibit susceptibility to a new dynamic face illusion, thought to index integration of moving facial features. When typical observers view eye-opening and -closing in the presence of asynchronous mouth-opening and -closing, the concurrent mouth movements induce a strong illusory slowing of the eye transitions. However, we find that observers with ASD are not susceptible to this illusion, suggestive of weaker integration of cross-feature dynamics. Nevertheless, observers with ASD and typical controls were equally able to detect the physical differences between comparison eye transitions. Importantly, this confirms that observers with ASD were able to fixate the eye-region, indicating that the striking group difference has a perceptual, not attentional, origin. The clarity of the present results contrasts starkly with the modest effect sizes and equivocal findings seen throughout the literature on static face perception in ASD. We speculate that differences in the perception of facial motion may be a more reliable feature of this condition. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  5. The Child Affective Facial Expression (CAFE) set: validity and reliability from untrained adults

    PubMed Central

    LoBue, Vanessa; Thrasher, Cat

    2014-01-01

    Emotional development is one of the largest and most productive areas of psychological research. For decades, researchers have been fascinated by how humans respond to, detect, and interpret emotional facial expressions. Much of the research in this area has relied on controlled stimulus sets of adults posing various facial expressions. Here we introduce a new stimulus set of emotional facial expressions into the domain of research on emotional development—The Child Affective Facial Expression set (CAFE). The CAFE set features photographs of a racially and ethnically diverse group of 2- to 8-year-old children posing for six emotional facial expressions—angry, fearful, sad, happy, surprised, and disgusted—and a neutral face. In the current work, we describe the set and report validity and reliability data on the set from 100 untrained adult participants. PMID:25610415

  7. Greater perceptual sensitivity to happy facial expression.

    PubMed

    Maher, Stephen; Ekstrom, Tor; Chen, Yue

    2014-01-01

    Perception of subtle facial expressions is essential for social functioning; yet it is unclear if human perceptual sensitivities differ in detecting varying types of facial emotions. Evidence diverges as to whether salient negative versus positive emotions (such as sadness versus happiness) are preferentially processed. Here, we measured perceptual thresholds for the detection of four types of emotion in faces--happiness, fear, anger, and sadness--using psychophysical methods. We also evaluated the association of the perceptual performances with facial morphological changes between neutral and respective emotion types. Human observers were highly sensitive to happiness compared with the other emotional expressions. Further, this heightened perceptual sensitivity to happy expressions can be attributed largely to the emotion-induced morphological change of a particular facial feature (end-lip raise).
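
    Detection thresholds of the kind measured here are commonly estimated by fitting a psychometric function to proportion-correct data. The sketch below fits a two-parameter logistic with SciPy; the intensity levels and response proportions are invented for illustration and do not come from the study.

      import numpy as np
      from scipy.optimize import curve_fit

      # Hypothetical detection data: proportion of correct detections at
      # increasing expression intensities (% morph toward the full emotion).
      intensity = np.array([10, 20, 30, 40, 50, 60], float)
      p_correct = np.array([0.08, 0.15, 0.42, 0.71, 0.90, 0.97])

      def logistic(x, alpha, beta):
          # alpha = threshold (50% point), beta = slope parameter.
          return 1.0 / (1.0 + np.exp(-(x - alpha) / beta))

      (alpha, beta), _ = curve_fit(logistic, intensity, p_correct,
                                   p0=(35.0, 5.0))
      print(f"estimated detection threshold: {alpha:.1f}% intensity")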

  8. Diagnostic relevance of transcranial magnetic and electric stimulation of the facial nerve in the management of facial palsy.

    PubMed

    Nowak, Dennis A; Linder, Stefan; Topka, Helge

    2005-09-01

    Earlier investigations have suggested that an isolated conduction block of the facial nerve to transcranial magnetic stimulation early in the disorder represents a very sensitive and potentially specific finding in Bell's palsy, differentiating the disease from other etiologies. Stimulation of the facial nerve was performed electrically at the stylomastoid foramen and magnetically at the labyrinthine segment of the Fallopian channel within 3 days of symptom onset in 65 patients with Bell's palsy, five patients with Zoster oticus, one patient with neuroborreliosis, and one patient with nuclear facial nerve palsy due to multiple sclerosis. Absent or decreased amplitudes of muscle responses to early transcranial magnetic stimulation were not specific for Bell's palsy, being also evident in all cases of Zoster oticus and in the case of neuroborreliosis. Amplitudes of electrically evoked muscle responses were more markedly reduced in Zoster oticus than in Bell's palsy, most likely reflecting a more severe degree of axonal degeneration. The degree of amplitude reduction of the muscle response to electrical stimulation correlated reliably with the severity of facial palsy. Transcranial magnetic stimulation in the early diagnosis of Bell's palsy is thus less specific than previously thought. While not specific with respect to the etiology of facial palsy, transcranial magnetic stimulation seems capable of localizing the site of the lesion within the Fallopian channel. Combined with transcranial magnetic stimulation, early electrical stimulation of the facial nerve at the stylomastoid foramen may help to establish the correct diagnosis and prognosis.

  9. Mastoiditis and facial paralysis as initial manifestations of temporal bone systemic diseases - the significance of the histopathological examination.

    PubMed

    Maniu, Alma Aurelia; Harabagiu, Oana; Damian, Laura Otilia; Ştefănescu, Eugen HoraŢiu; FănuŢă, Bogdan Marius; Cătană, Andreea; Mogoantă, Carmen Aurelia

    2016-01-01

    Several systemic diseases, including granulomatous and infectious processes, tumors, bone disorders, and collagen-vascular and other autoimmune diseases, may involve the middle ear and temporal bone. These diseases are difficult to diagnose when their symptoms mimic acute otomastoiditis. The present report describes our experience with three such cases that were initially misdiagnosed. The predominating symptoms were otological, with mastoiditis, hearing loss, and subsequently facial nerve palsy. The cases were considered emergencies and the patients underwent tympanomastoidectomy, under the suspicion of otitis media with cholesteatoma, in order to remove a possible abscess and to decompress the facial nerve. The common feature was severe granulation tissue filling the mastoid cavity and middle ear at surgery, without cholesteatoma. The definitive diagnoses were made by biopsy of the granulation tissue from the middle ear, revealing granulomatosis with polyangiitis (formerly known as Wegener's granulomatosis) in one case, and middle ear tuberculosis and diffuse large B-cell lymphoma in the other two. After specific therapy, facial nerve function improved and the atypical inflammatory states of the ear resolved. As a group, systemic diseases of the middle ear and temporal bone are uncommon but aggressive lesions. After analyzing these cases and reviewing the literature, we stress the importance of microscopic examination of the affected tissue, which is required for an accurate diagnosis and effective treatment.

  10. Vertical control in the Class III compensatory treatment.

    PubMed

    Sobral, Márcio Costa; Habib, Fernando A L; Nascimento, Ana Carla de Souza

    2013-01-01

    Compensatory orthodontic treatment, or simply orthodontic camouflage, is an important alternative to orthognathic surgery for resolving skeletal discrepancies in adult patients. It is important to point out that, for treatment to be successful, diagnosis must be detailed, evaluating dental and facial features as well as the limitations imposed by the magnitude of the discrepancy. The main complaint, the patient's treatment expectations, periodontal limits, facial pattern, and vertical control are some of the items to be explored in determining the viability of compensatory treatment. Hyperdivergent patients who present with a Class III skeletal discrepancy associated with a vertical facial pattern, with the presence of, or a tendency to, anterior open bite, deserve special attention; in these cases, an efficient strategy of vertical control must be planned and executed. The present article illustrates the evolution of efficient alternatives for vertical control in hyperdivergent patients, from the use, in the recent past, of extraoral appliances on the lower dental arch (J-hook) to the present-day use of skeletal anchorage. For patients with a more balanced facial pattern, conventional mechanics with Class III intermaxillary elastics, associated with an accentuated curve of Spee in the upper arch, a reverse curve of Spee in the lower arch, and vertical elastics in the anterior region, remains an excellent alternative, provided the patient is highly cooperative in wearing the elastics.

  11. Attentional Bias for Emotional Stimuli in Borderline Personality Disorder: A Meta-Analysis.

    PubMed

    Kaiser, Deborah; Jacob, Gitta A; Domes, Gregor; Arntz, Arnoud

    2016-01-01

    In borderline personality disorder (BPD), attentional bias (AB) to emotional stimuli may be a core component of disorder pathogenesis and maintenance. Eleven emotional Stroop task (EST) studies with 244 BPD patients, 255 nonpatients (NPs) and 95 clinical controls, and four visual dot-probe task (VDPT) studies with 151 BPD patients or subjects with BPD features and 62 NPs were included. We conducted two separate meta-analyses of AB in BPD: one focused on the EST for generally negative and BPD-specific/personally relevant negative words; the other concentrated on the VDPT for negative and positive facial stimuli. In the EST studies there is evidence for an AB towards generally negative emotional words compared with NPs (standardized mean difference, SMD = 0.311) and with other psychiatric disorders (SMD = 0.374); for BPD-specific/personally relevant negative words, BPD patients reveal an even stronger AB than NPs (SMD = 0.454). The VDPT studies indicate a tendency towards an AB to positive, but not negative, facial stimuli in BPD patients compared with NPs. The findings reflect an AB in BPD to generally negative and BPD-specific/personally relevant negative words rather than towards facial stimuli, and/or a biased allocation of covert attentional resources to negative emotional stimuli rather than a bias in the focus of visual attention. Further research on the role of childhood traumatization and comorbid anxiety disorders may improve understanding of the underlying processes. © 2016 The Author(s) Published by S. Karger AG, Basel.
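
    The effect sizes above are standardized mean differences. For reference, a minimal Python function for the pooled-SD form of the SMD (Cohen's d; Hedges' g adds a small-sample correction) is shown below, applied to invented group statistics rather than the meta-analysis data.

      import numpy as np

      def standardized_mean_difference(m1, sd1, n1, m2, sd2, n2):
          # SMD with a pooled standard deviation across the two groups.
          sd_pooled = np.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2)
                              / (n1 + n2 - 2))
          return (m1 - m2) / sd_pooled

      # Hypothetical EST interference scores: BPD patients vs nonpatients.
      print(standardized_mean_difference(52.0, 12.0, 40, 48.0, 13.0, 42))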

  12. The Right Place at the Right Time: Priming Facial Expressions with Emotional Face Components in Developmental Visual Agnosia

    PubMed Central

    Aviezer, Hillel; Hassin, Ran. R.; Perry, Anat; Dudarev, Veronica; Bentin, Shlomo

    2012-01-01

    The current study examined the nature of deficits in emotion recognition from facial expressions in case LG, an individual with a rare form of developmental visual agnosia (DVA). LG presents with profoundly impaired recognition of facial expressions, yet the underlying nature of his deficit remains unknown. During typical face processing, normal sighted individuals extract information about expressed emotions from face regions with activity diagnostic for specific emotion categories. Given LG’s impairment, we sought to shed light on his emotion perception by examining if priming facial expressions with diagnostic emotional face components would facilitate his recognition of the emotion expressed by the face. LG and control participants matched isolated face components with components appearing in a subsequently presented full-face and then categorized the face’s emotion. Critically, the matched components were from regions which were diagnostic or non-diagnostic of the emotion portrayed by the full face. In experiment 1, when the full faces were briefly presented (150 ms), LG’s performance was strongly influenced by the diagnosticity of the components: His emotion recognition was boosted within normal limits when diagnostic components were used and was obliterated when non-diagnostic components were used. By contrast, in experiment 2, when the face-exposure duration was extended (2000 ms), the beneficial effect of the diagnostic matching was diminished as was the detrimental effect of the non-diagnostic matching. These data highlight the impact of diagnostic facial features in normal expression recognition and suggest that impaired emotion recognition in DVA results from deficient visual integration across diagnostic face components. PMID:22349446

  13. Hypereosinophilia and acute bilateral facial palsy: an unusual presentation of a common disease.

    PubMed

    Webb, Alastair John Stewart; Conlon, Chris; Briley, Dennis

    2012-10-01

    A 60-year-old man presented with an acute, pruritic, erythematous rash associated with marked hypereosinophilia (2.34×10⁹/l; reference 0.04-0.40). There was eosinophilic infiltration on hepatic, bone marrow and lymph node biopsies, with multiple lung nodules and mild splenomegaly. However, extensive investigation excluded parasitic and bacterial causes, specific allergens, and the FIP1L1 mutation seen in myeloproliferative hypereosinophilia. Six months into the illness, he developed an acute, complete left lower motor neurone facial palsy over hours, and an acute right lower motor neurone facial palsy 2 weeks later, without recovery. Over the subsequent 3 months, he developed complex partial seizures, a transient 72-h non-epileptic encephalopathy, and episodic vertigo with ataxia. Further investigation showed bilateral enhancement of the VII nerves and labyrinthitis on gadolinium-enhanced MR brain scan, cerebrospinal fluid lymphocytosis, and neurophysiological evidence of polyradiculopathy. His eosinophil count fell with corticosteroids, hydroxycarbamide, imatinib and ultimately mepolizumab, but without symptomatic improvement. Repeat lymph node biopsy showed Kaposi's sarcoma, leading to a diagnosis of HIV-1 infection with a modestly reduced CD4 count of 413×10⁶/l (reference 430-1690). Hypereosinophilia and eosinophilic folliculitis are recognised features of advanced HIV infection, and transient bilateral facial palsy occasionally occurs at the time of seroconversion. This is the first report of a chronic bilateral facial palsy likely due to primary HIV infection, not occurring during seroconversion and in association with hypereosinophilia. This case emphasises the protean manifestations of HIV infection and the need for routine testing in atypical clinical presentations.

  14. Face verification system for Android mobile devices using histogram based features

    NASA Astrophysics Data System (ADS)

    Sato, Sho; Kobayashi, Kazuhiro; Chen, Qiu

    2016-07-01

    This paper proposes a face verification system that runs on Android mobile devices. In this system, a facial image is first captured by the device's built-in camera, and face detection is then performed using Haar-like features and the AdaBoost learning algorithm. The proposed system verifies the detected face using histogram-based features: a binary Vector Quantization (VQ) histogram computed from DCT coefficients in the low-frequency domain, and an Improved Local Binary Pattern (Improved LBP) histogram computed in the spatial domain. Verification results for the two types of histogram features are first obtained separately and then combined by weighted averaging. We evaluate the proposed algorithm using the publicly available ORL database and facial images captured with an Android tablet.
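
    The spatial-domain branch of such a pipeline can be sketched compactly. The Python/OpenCV snippet below detects a face with OpenCV's stock Haar cascade, computes a plain LBP histogram (the paper uses an Improved LBP variant), and thresholds a chi-square distance to an enrolled template; the threshold value is a made-up placeholder, and the DCT/VQ branch and the weighted score averaging are omitted.

      import cv2
      import numpy as np

      def lbp_histogram(gray):
          # Standard 8-neighbour local binary pattern histogram.
          g = gray.astype(np.int32)
          c = g[1:-1, 1:-1]
          code = np.zeros_like(c)
          neighbours = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                        (1, 1), (1, 0), (1, -1), (0, -1)]
          for bit, (dy, dx) in enumerate(neighbours):
              nb = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
              code |= (nb >= c).astype(np.int32) << bit
          hist = np.bincount(code.ravel(), minlength=256).astype(float)
          return hist / hist.sum()

      def detect_face(gray):
          # Haar-cascade face detector shipped with OpenCV.
          cascade = cv2.CascadeClassifier(
              cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
          x, y, w, h = cascade.detectMultiScale(gray, 1.1, 5)[0]
          return cv2.resize(gray[y:y + h, x:x + w], (64, 64))

      def verify(probe_gray, enrolled_hist, threshold=0.25):
          # Chi-square distance to the enrolled template; in the full
          # system this score would be averaged with a DCT/VQ-histogram
          # score before thresholding. The threshold here is hypothetical.
          h = lbp_histogram(detect_face(probe_gray))
          chi2 = 0.5 * np.sum((h - enrolled_hist) ** 2
                              / (h + enrolled_hist + 1e-9))
          return chi2 < threshold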

  15. Automatic recognition of emotions from facial expressions

    NASA Astrophysics Data System (ADS)

    Xue, Henry; Gertner, Izidor

    2014-06-01

    In the human-computer interaction (HCI) process it is desirable to have an artificial intelligence (AI) system that can identify and categorize human emotions from facial expressions. Such systems can be used in security and in the entertainment industry, and also to study visual perception, social interactions and disorders (e.g. schizophrenia and autism). In this work we survey and compare the performance of different feature extraction algorithms and classification schemes. We introduce a faster feature extraction method that resizes and applies a set of filters to the images without sacrificing accuracy. In addition, we have extended SVM classification to multiple dimensions while retaining its high accuracy rate. The algorithms were tested using the Japanese Female Facial Expression (JAFFE) Database and the Database of Faces (AT&T Faces).
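
    In the spirit of the speed-oriented extraction step described here, the sketch below downsamples each image and applies a small filter set before training a multi-class SVM; the particular filters, stride-based resizing, and random stand-in data are assumptions for illustration, not the authors' method.

      import numpy as np
      from scipy.ndimage import gaussian_filter, sobel
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      rng = np.random.default_rng(4)

      def fast_features(img, size=32):
          # Cheap resize by striding, then a small filter set.
          small = img[::img.shape[0] // size, ::img.shape[1] // size]
          small = small[:size, :size]
          maps = [gaussian_filter(small, 1.0), sobel(small, 0), sobel(small, 1)]
          return np.concatenate([m.ravel() for m in maps])

      # Stand-ins for JAFFE-style images with 7 expression labels.
      images = rng.random(size=(70, 64, 64))
      labels = rng.integers(0, 7, size=70)

      X = np.array([fast_features(im) for im in images])
      clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X, labels)
      print("training accuracy:", clf.score(X, labels))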

  16. Evolution of middle-late Pleistocene human cranio-facial form: a 3-D approach.

    PubMed

    Harvati, Katerina; Hublin, Jean-Jacques; Gunz, Philipp

    2010-11-01

    The classification and phylogenetic relationships of the middle Pleistocene human fossil record remain one of the most intractable problems in paleoanthropology. Several authors have noted broad resemblances between European and African fossils from this period, suggesting a single taxon ancestral to both modern humans and Neanderthals. Others point out 'incipient' Neanderthal features in the morphology of the European sample and have argued for their inclusion in the Neanderthal lineage exclusively, following a model of accretionary evolution of Neanderthals. We approach these questions using geometric morphometric methods, which allow the intuitive visualization and quantification of features previously described only qualitatively. We apply these techniques to evaluate proposed 'incipient' facial, vault, and basicranial traits in a middle-late Pleistocene European hominin sample compared with a sample of the same time depth from Africa. Some of the features examined followed the predictions of the accretion model and relate the middle Pleistocene European material to the later Neanderthals. However, although our analysis showed a clear separation between Neanderthals and early/recent modern humans and morphological proximity between European specimens from OIS 7 to 3, it also shows that the European hominins from the first half of the middle Pleistocene still shared most of their cranio-facial architecture with their African contemporaries. Copyright © 2010 Elsevier Ltd. All rights reserved.

  17. Coding and quantification of a facial expression for pain in lambs.

    PubMed

    Guesgen, M J; Beausoleil, N J; Leach, M; Minot, E O; Stewart, M; Stafford, K J

    2016-11-01

    Facial expressions are routinely used to assess pain in humans, particularly those who are non-verbal. Recently, there has been interest in developing coding systems for facial grimacing in non-human animals such as rodents, rabbits, horses and sheep. The aims of this preliminary study were to: 1. Qualitatively identify facial feature changes in lambs experiencing pain as a result of tail-docking and compile these changes to create a Lamb Grimace Scale (LGS); 2. Determine whether human observers can use the LGS to differentiate tail-docked lambs from control lambs and to differentiate lambs before and after docking; 3. Determine whether changes in facial action units of the LGS can be objectively quantified in lambs before and after docking; 4. Evaluate the effects of restraint on observers' perceptions of pain using the LGS and on quantitative measures of facial action units. By comparing images of lambs before (no pain) and after (pain) tail-docking, the LGS was devised in consultation with scientists experienced in assessing facial expression in other species. The LGS consists of five facial action units: Orbital Tightening, Mouth Features, Nose Features, Cheek Flattening and Ear Posture. The aims of the study were addressed in two experiments. In Experiment I, still images of the faces of restrained lambs were taken from video footage before and after tail-docking (n=4) or sham tail-docking (n=3). These images were scored by a group of five naïve human observers using the LGS. Because lambs were restrained for the duration of the experiment, Ear Posture was not scored. Scores for the images were averaged to provide one value per feature per period, and scores for the four LGS action units were then averaged to give one LGS score per lamb per period. In Experiment II, still images of the faces of nine lambs were taken before and after tail-docking, with lambs restrained and unrestrained in each period. A different group of five human observers scored these images. Changes in facial action units were also quantified objectively by a researcher using image measurement software. In both experiments LGS scores were analyzed using a linear mixed model to evaluate the effects of tail-docking on observers' perception of facial expression changes; Kendall's coefficient of concordance was used to measure reliability among observers. In Experiment I, human observers were able to use the LGS to differentiate docked lambs from control lambs: LGS scores significantly increased from before to after treatment in docked lambs but not in control lambs. In Experiment II there was a significant increase in LGS scores after docking, coupled with changes in other validated indicators of pain in the form of pain-related behaviour. Only two components, Mouth Features and Orbital Tightening, showed significant quantitative changes after docking; the direction of these changes agrees with the description of these facial action units in the LGS. Restraint affected people's perceptions of pain as well as quantitative measures of LGS components: freely moving lambs were scored lower on the LGS over both periods and had a significantly smaller eye aperture and smaller nose and ear angles than when they were held. Agreement among observers on LGS scores was fair overall (Experiment I: W=0.60; Experiment II: W=0.66). This preliminary study demonstrates changes in lamb facial expression associated with pain. The results should be interpreted with caution due to the low number of lambs. Copyright © 2016 Elsevier B.V. All rights reserved.
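
    The observer-agreement statistic used here, Kendall's coefficient of concordance (W), is simple to compute. Below is a minimal Python implementation, ignoring the tie correction, applied to invented observer-by-image scores rather than the study's data.

      import numpy as np
      from scipy.stats import rankdata

      def kendalls_w(scores):
          # scores: (observers x items) array; each observer's scores are
          # converted to ranks, then W = 12*S / (m^2 * (n^3 - n)).
          ranks = np.apply_along_axis(rankdata, 1, np.asarray(scores, float))
          m, n = ranks.shape
          rank_sums = ranks.sum(axis=0)
          s = ((rank_sums - rank_sums.mean()) ** 2).sum()
          return 12 * s / (m**2 * (n**3 - n))

      # Hypothetical LGS scores from 5 observers on 8 lamb images.
      rng = np.random.default_rng(5)
      scores = rng.integers(0, 3, size=(5, 8)) + rng.random((5, 8))
      print(round(kendalls_w(scores), 2))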

  18. Facial Recognition in a Discus Fish (Cichlidae): Experimental Approach Using Digital Models

    PubMed Central

    Satoh, Shun; Tanaka, Hirokazu; Kohda, Masanori

    2016-01-01

    A number of mammals and birds are known to be capable of visually discriminating between familiar and unfamiliar individuals, in some species on the basis of facial patterns. Many fish also visually recognize other conspecifics individually, and previous studies report that facial color patterns can be an initial signal for individual recognition; a cichlid fish and a damselfish, for example, use individual-specific color patterns that develop only in the facial area. However, it remains to be determined whether the facial area is an especially favorable site for visual signals in fish, and if so why. The monogamous discus fish, Symphysodon aequifasciatus (Cichlidae), is capable of visually distinguishing its pair-partner from other conspecifics. Discus fish have individual-specific coloration patterns over the entire body, including the facial area, frontal head, trunk and vertical fins. If the facial area is an inherently important site for visual cues, this species should use facial patterns for individual recognition; otherwise it should use patterns on other body parts as well. We used modified digital models to examine whether discus fish use only facial coloration for individual recognition. Digital models of four different combinations of familiar and unfamiliar fish faces and bodies were displayed in frontal and lateral views. Focal fish frequently performed partner-specific displays towards partner-face models, and aggressive displays towards models with non-partners' faces. We conclude that, to identify individuals, this fish depends not on frontal color patterns but on lateral facial color patterns, even though it has unique color patterns on other parts of the body. We discuss the significance of facial coloration for individual recognition in fish compared with birds and mammals. PMID:27191162

  20. Cues of fatigue: effects of sleep deprivation on facial appearance.

    PubMed

    Sundelin, Tina; Lekander, Mats; Kecklund, Göran; Van Someren, Eus J W; Olsson, Andreas; Axelsson, John

    2013-09-01

    To investigate the facial cues by which one recognizes that someone is sleep deprived versus not sleep deprived. Experimental laboratory study. Karolinska Institutet, Stockholm, Sweden. Forty observers (20 women, mean age 25 ± 5 y) rated 20 facial photographs with respect to fatigue, 10 facial cues, and sadness. The stimulus material consisted of 10 individuals (five women) photographed at 14:30 after normal sleep and after 31 h of sleep deprivation following a night with 5 h of sleep. Ratings of fatigue, fatigue-related cues, and sadness in facial photographs. The faces of sleep-deprived individuals were perceived as having more hanging eyelids, redder eyes, more swollen eyes, darker circles under the eyes, paler skin, more wrinkles/fine lines, and more droopy corners of the mouth (effects ranging from b = +3 ± 1 to b = +15 ± 1 mm on 100-mm visual analog scales, P < 0.01). The ratings of fatigue were related to glazed eyes and to all the cues affected by sleep deprivation (P < 0.01). Ratings of rash/eczema or tense lips were not significantly affected by sleep deprivation, nor associated with judgements of fatigue. In addition, sleep-deprived individuals looked sadder than after normal sleep, and sadness was related to looking fatigued (P < 0.01). The results show that sleep deprivation affects features relating to the eyes, mouth, and skin, and that these features function as cues of sleep loss to other people. Because these facial regions are important in the communication between humans, facial cues of sleep deprivation and fatigue may carry social consequences for the sleep-deprived individual in everyday life.

  1. Classifying Facial Actions

    PubMed Central

    Donato, Gianluca; Bartlett, Marian Stewart; Hager, Joseph C.; Ekman, Paul; Sejnowski, Terrence J.

    2010-01-01

    The Facial Action Coding System (FACS) [23] is an objective method for quantifying facial movement in terms of component actions. This system is widely used in behavioral investigations of emotion, cognitive processes, and social interaction. The coding is presently performed by highly trained human experts. This paper explores and compares techniques for automatically recognizing facial actions in sequences of images. These techniques include analysis of facial motion through estimation of optical flow; holistic spatial analysis, such as principal component analysis, independent component analysis, local feature analysis, and linear discriminant analysis; and methods based on the outputs of local filters, such as Gabor wavelet representations and local principal components. Performance of these systems is compared to naive and expert human subjects. Best performances were obtained using the Gabor wavelet representation and the independent component representation, both of which achieved 96 percent accuracy for classifying 12 facial actions of the upper and lower face. The results provide converging evidence for the importance of using local filters, high spatial frequencies, and statistical independence for classifying facial actions. PMID:21188284
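
    As a rough illustration of the filter-bank stage this abstract describes, the sketch below extracts Gabor-magnitude features from a grayscale image with scikit-image. The frequencies, orientation count, and stand-in image are illustrative assumptions, not the settings used by Donato et al.

        # Sketch: Gabor-magnitude features for facial-action classification.
        # Filter parameters are illustrative, not those of the paper.
        import numpy as np
        from skimage import data
        from skimage.filters import gabor

        def gabor_features(image, frequencies=(0.1, 0.2, 0.3), n_orient=4):
            """Stack Gabor filter magnitudes over frequencies and orientations."""
            feats = []
            for f in frequencies:
                for k in range(n_orient):
                    theta = k * np.pi / n_orient
                    real, imag = gabor(image, frequency=f, theta=theta)
                    feats.append(np.hypot(real, imag).ravel())
            return np.concatenate(feats)

        face = data.camera().astype(float)  # stand-in; a cropped face in practice
        vec = gabor_features(face)
        print(vec.shape)  # one long feature vector per image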

  2. The facial nerve: anatomy and associated disorders for oral health professionals.

    PubMed

    Takezawa, Kojiro; Townsend, Grant; Ghabriel, Mounir

    2018-04-01

    The facial nerve, the seventh cranial nerve, is of great clinical significance to oral health professionals. Most published literature either addresses the central connections of the nerve or its peripheral distribution but few integrate both of these components and also highlight the main disorders affecting the nerve that have clinical implications in dentistry. The aim of the current study is to provide a comprehensive description of the facial nerve. Multiple aspects of the facial nerve are discussed and integrated, including its neuroanatomy, functional anatomy, gross anatomy, clinical problems that may involve the nerve, and the use of detailed anatomical knowledge in the diagnosis of the site of facial nerve lesion in clinical neurology. Examples are provided of disorders that can affect the facial nerve during its intra-cranial, intra-temporal and extra-cranial pathways, and key aspects of clinical management are discussed. The current study is complemented by original detailed dissections and sketches that highlight key anatomical features and emphasise the extent and nature of anatomical variations displayed by the facial nerve.

  3. Heritability maps of human face morphology through large-scale automated three-dimensional phenotyping

    NASA Astrophysics Data System (ADS)

    Tsagkrasoulis, Dimosthenis; Hysi, Pirro; Spector, Tim; Montana, Giovanni

    2017-04-01

    The human face is a complex trait under strong genetic control, as evidenced by the striking visual similarity between twins. Nevertheless, heritability estimates of facial traits have often been surprisingly low or difficult to replicate. Furthermore, the construction of facial phenotypes that correspond to naturally perceived facial features remains largely a mystery. We present here a large-scale heritability study of face geometry that aims to address these issues. High-resolution, three-dimensional facial models have been acquired on a cohort of 952 twins recruited from the TwinsUK registry, and processed through a novel landmarking workflow, GESSA (Geodesic Ensemble Surface Sampling Algorithm). The algorithm places thousands of landmarks throughout the facial surface and automatically establishes point-wise correspondence across faces. These landmarks enabled us to intuitively characterize facial geometry at a fine level of detail through curvature measurements, yielding accurate heritability maps of the human face (www.heritabilitymaps.info).
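
    For context on how twin cohorts yield heritability estimates, the sketch below applies Falconer's classical formula, h2 = 2(rMZ - rDZ), to synthetic monozygotic and dizygotic twin pairs for a single curvature measurement. This is a textbook estimator applied to stand-in data, not necessarily the model used in the study.

        # Sketch: Falconer's twin-based heritability estimate for one trait.
        # The twin-pair data below are synthetic stand-ins.
        import numpy as np

        rng = np.random.default_rng(0)
        mz = rng.multivariate_normal([0, 0], [[1, 0.8], [0.8, 1]], 300)  # MZ pairs
        dz = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 1]], 300)  # DZ pairs

        r_mz = np.corrcoef(mz[:, 0], mz[:, 1])[0, 1]
        r_dz = np.corrcoef(dz[:, 0], dz[:, 1])[0, 1]
        h2 = 2 * (r_mz - r_dz)   # Falconer: h^2 = 2(r_MZ - r_DZ)
        print(f"h^2 ~= {h2:.2f}")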

  4. Real Time 3D Facial Movement Tracking Using a Monocular Camera

    PubMed Central

    Dong, Yanchao; Wang, Yanming; Yue, Jiguang; Hu, Zhencheng

    2016-01-01

    The paper proposes a robust framework for 3D facial movement tracking in real time using a monocular camera. It is designed to estimate the 3D face pose and local facial animation such as eyelid movement and mouth movement. The framework first utilizes the Discriminative Shape Regression method to locate the facial feature points on the 2D image and fuses the 2D data with a 3D face model using an Extended Kalman Filter to yield 3D facial movement information. An alternating optimization strategy is adopted to adapt to different persons automatically. Experiments show that the proposed framework can track 3D facial movement across various poses and illumination conditions. Given the real face scale, the framework could track the eyelid with an error of 1 mm and the mouth with an error of 2 mm. The tracking result is reliable enough for expression analysis or mental state inference. PMID:27463714

  5. Real Time 3D Facial Movement Tracking Using a Monocular Camera.

    PubMed

    Dong, Yanchao; Wang, Yanming; Yue, Jiguang; Hu, Zhencheng

    2016-07-25

    The paper proposes a robust framework for 3D facial movement tracking in real time using a monocular camera. It is designed to estimate the 3D face pose and local facial animation such as eyelid movement and mouth movement. The framework first utilizes the Discriminative Shape Regression method to locate the facial feature points on the 2D image and fuses the 2D data with a 3D face model using an Extended Kalman Filter to yield 3D facial movement information. An alternating optimization strategy is adopted to adapt to different persons automatically. Experiments show that the proposed framework can track 3D facial movement across various poses and illumination conditions. Given the real face scale, the framework could track the eyelid with an error of 1 mm and the mouth with an error of 2 mm. The tracking result is reliable enough for expression analysis or mental state inference.
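
    The fusion step named in the two preceding abstracts rests on Kalman filtering. As a minimal sketch, the code below runs one predict/update cycle of a linear Kalman filter (the linear special case of the EKF) for a single landmark under a constant-velocity model; the frame rate, noise covariances and measurement are assumptions for illustration.

        # Sketch: one predict/update cycle tracking a single facial landmark.
        # State x = [u, v, du, dv]; all noise levels are illustrative.
        import numpy as np

        dt = 1.0 / 30.0                         # assumed 30 fps camera
        F = np.eye(4); F[0, 2] = F[1, 3] = dt   # constant-velocity transition
        H = np.eye(2, 4)                        # observe pixel position only
        Q = 1e-3 * np.eye(4)                    # process noise (assumed)
        R = 2.0 * np.eye(2)                     # measurement noise, px^2 (assumed)

        x = np.array([100.0, 120.0, 0.0, 0.0])  # initial landmark state
        P = 10.0 * np.eye(4)

        def kf_step(x, P, z):
            # Predict
            x = F @ x
            P = F @ P @ F.T + Q
            # Update with the detected 2D landmark z
            y = z - H @ x                       # innovation
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
            x = x + K @ y
            P = (np.eye(4) - K @ H) @ P
            return x, P

        x, P = kf_step(x, P, z=np.array([101.5, 119.2]))
        print(x[:2])  # filtered landmark position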

  6. A Real-Time Interactive System for Facial Makeup of Peking Opera

    NASA Astrophysics Data System (ADS)

    Cai, Feilong; Yu, Jinhui

    In this paper we present a real-time interactive system for creating the facial makeup of Peking Opera. First, we analyze the process of drawing facial makeup and the characteristics of the patterns used in it, and then construct an SVG pattern bank based on local features such as the eyes, nose and mouth. Next, we select SVG patterns from the pattern bank and compose them into a new facial makeup. We offer a vector-based free-form deformation (FFD) tool to edit patterns and, based on this editing, our system automatically creates texture maps for a template head model. Finally, the facial makeup is rendered on the 3D head model in real time. Our system offers flexibility in designing and synthesizing a variety of 3D facial makeups. Potential applications of the system include decoration design, digital museum exhibition and education about Peking Opera.

  7. Clinical features and management of facial nerve paralysis in children: analysis of 24 cases.

    PubMed

    Cha, H E; Baek, M K; Yoon, J H; Yoon, B K; Kim, M J; Lee, J H

    2010-04-01

    To evaluate the causes, treatment modalities and recovery rate of paediatric facial nerve paralysis. We analysed 24 cases of paediatric facial nerve paralysis diagnosed in the otolaryngology department of Gachon University Gil Medical Center between January 2001 and June 2006. The most common cause was idiopathic palsy (16 cases, 66.7 per cent). The most common degree of facial nerve paralysis on first presentation was House-Brackmann grade IV (15 of 24 cases). All cases were treated with steroids. One of the 24 cases was also treated surgically with facial nerve decompression. Twenty-two cases (91.6 per cent) recovered to House-Brackmann grade I or II over the six-month follow-up period. Facial nerve paralysis in children can generally be successfully treated with conservative measures. However, in cases associated with trauma, radiological investigation is required for further evaluation and treatment.

  8. Real-time speech-driven animation of expressive talking faces

    NASA Astrophysics Data System (ADS)

    Liu, Jia; You, Mingyu; Chen, Chun; Song, Mingli

    2011-05-01

    In this paper, we present a real-time facial animation system in which speech drives mouth movements and facial expressions synchronously. Considering five basic emotions, a hierarchical structure with an upper layer of emotion classification is established. Based on the recognized emotion label, the under-layer classification at the sub-phonemic level is modelled on the relationship between acoustic features of frames and audio labels in phonemes. Using certain constraints, the predicted emotion labels of speech are adjusted to obtain the facial expression labels, which are combined with sub-phonemic labels. The combinations are mapped into facial action units (FAUs), and audio-visual synchronized animation with mouth movements and facial expressions is generated by morphing between FAUs. The experimental results demonstrate that the two-layer structure succeeds in both emotion and sub-phonemic classification, and the synthesized facial sequences reach a comparatively convincing quality.
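
    The synthesis stage described above generates animation by morphing between facial action units. The sketch below illustrates the simplest possible version, linear interpolation between two FAU parameter vectors; the vectors and frame count are hypothetical placeholders, not the paper's data.

        # Sketch: in-between animation frames by linearly morphing between
        # two facial-action-unit (FAU) parameter vectors (placeholders).
        import numpy as np

        def morph(fau_a, fau_b, n_frames=10):
            """Linear interpolation from one FAU configuration to another."""
            ts = np.linspace(0.0, 1.0, n_frames)
            return [(1 - t) * fau_a + t * fau_b for t in ts]

        neutral = np.zeros(5)                        # e.g. 5 FAU intensities
        smile = np.array([0.0, 0.8, 0.2, 0.9, 0.0])  # hypothetical "happy" pose
        frames = morph(neutral, smile)
        print(np.round(frames[5], 2))                # a midway frame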

  9. Autonomous facial recognition system inspired by human visual system based logarithmical image visualization technique

    NASA Astrophysics Data System (ADS)

    Wan, Qianwen; Panetta, Karen; Agaian, Sos

    2017-05-01

    Autonomous facial recognition systems are widely used in real-life applications, such as homeland border security, law enforcement identification and authentication, and video-based surveillance analysis. Issues like low image quality, non-uniform illumination, and variations in poses and facial expressions can impair the performance of recognition systems. To address the non-uniform illumination challenge, we present a novel, robust autonomous facial recognition system inspired by the human visual system and based on a so-called logarithmical image visualization technique. In this paper, the proposed method, for the first time, couples the logarithmical image visualization technique with the local binary pattern to perform discriminative feature extraction for a facial recognition system. The Yale database, the Yale-B database and the ATT database are used for computer simulation accuracy and efficiency testing. The extensive computer simulation demonstrates the method's efficiency, accuracy, and robustness to illumination variation for facial recognition.
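
    A minimal sketch of the described pairing, illumination compression followed by local binary patterns, is given below using scikit-image. The plain log mapping stands in for the authors' logarithmical image visualization technique, and all parameters are assumptions.

        # Sketch: log-domain illumination compression followed by an LBP
        # histogram feature. The simple log mapping is a generic stand-in.
        import numpy as np
        from skimage import data
        from skimage.feature import local_binary_pattern

        def log_lbp_features(image, P=8, R=1):
            log_img = np.log1p(image.astype(np.float64))          # compress range
            log_img = (255 * log_img / log_img.max()).astype(np.uint8)
            codes = local_binary_pattern(log_img, P, R, method="uniform")
            hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2),
                                   density=True)
            return hist

        print(log_lbp_features(data.camera()))  # stand-in image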

  10. A longitudinal study of facial growth of Southern Chinese in Hong Kong: Comprehensive photogrammetric analyses

    PubMed Central

    Wen, Yi Feng; McGrath, Colman Patrick

    2017-01-01

    Introduction Existing studies on facial growth were mostly cross-sectional in nature and only a limited number of facial measurements were investigated. The purposes of this study were to longitudinally investigate facial growth of Chinese in Hong Kong from 12 through 15 to 18 years of age and to compare the magnitude of growth changes between genders. Methods and findings Standardized frontal and lateral facial photographs were taken from 266 (149 females and 117 males) and 265 (145 females and 120 males) participants, respectively, at all three age levels. Linear and angular measurements, profile inclinations, and proportion indices were recorded. Statistical analyses were performed to investigate growth changes of facial features. Comparisons were made between genders in terms of the magnitude of growth changes from ages 12 to 15, 15 to 18, and 12 to 18 years. For the overall face, all linear measurements increased significantly (p < 0.05) except for height of the lower profile in females (p = 0.069) and width of the face in males (p = 0.648). In both genders, the increase in height of eye fissure was around 10% (p < 0.001). There was significant decrease in nasofrontal angle (p < 0.001) and increase in nasofacial angle (p < 0.001) in both genders and these changes were larger in males. Vermilion-total upper lip height index remained stable in females (p = 0.770) but increased in males (p = 0.020). Nasofrontal angle (effect size: 0.55) and lower vermilion contour index (effect size: 0.59) demonstrated large magnitude of gender difference in the amount of growth changes from 12 to 18 years. Conclusions Growth changes of facial features and gender differences in the magnitude of facial growth were determined. The findings may benefit different clinical specialties and other nonclinical fields where facial growth is of interest. PMID:29053713

  11. Oxytocin attenuates neural reactivity to masked threat cues from the eyes.

    PubMed

    Kanat, Manuela; Heinrichs, Markus; Schwarzwald, Ralf; Domes, Gregor

    2015-01-01

    The neuropeptide oxytocin has recently been shown to modulate covert attention shifts to emotional face cues and to improve discrimination of masked facial emotions. These results suggest that oxytocin modulates facial emotion processing at early perceptual stages prior to full evaluation of the emotional expression. Here, we used functional magnetic resonance imaging to examine whether oxytocin alters neural responses to backwardly masked angry and happy faces while controlling for attention to the eye vs the mouth region. Intranasal oxytocin administration reduced amygdala reactivity to masked emotions when attending to salient facial features, i.e., the eyes of angry faces and the mouth of happy faces. In addition, oxytocin decreased neural responses within the fusiform gyrus and brain stem areas, as well as functional coupling between the amygdala and the fusiform gyrus specifically for threat cues from the eyes. Effects of oxytocin on brain activity were not attributable to differences in behavioral performance, as oxytocin had no impact on mere emotion detection. Our results suggest that oxytocin attenuates neural correlates of early arousal by threat signals from the eye region. As reduced threat sensitivity may increase the likelihood of engaging in social interactions, our findings may have important implications for clinical states of social anxiety.

  12. Uncovering gender discrimination cues in a realistic setting.

    PubMed

    Dupuis-Roy, Nicolas; Fortin, Isabelle; Fiset, Daniel; Gosselin, Frédéric

    2009-02-10

    Which face cues do we use for gender discrimination? Few studies have tried to answer this question, and the few that have typically used only a small set of grayscale stimuli, often distorted and presented a large number of times. Here, we reassessed the importance of facial cues for gender discrimination in a more realistic setting. We applied Bubbles, a technique that minimizes bias toward specific facial features and does not necessitate the distortion of stimuli, to a set of 300 color photographs of Caucasian faces, each presented only once to 30 participants. Results show that the region of the eyes and the eyebrows, probably in the light-dark channel, is the most important facial cue for accurate gender discrimination, and that the mouth region drives fast correct responses (but not fast incorrect responses); the gender discrimination information in the mouth region is concentrated in the red-green color channel. Together, these results suggest that, when color is informative in the mouth region, humans use it and respond rapidly, and, when it is not informative, they have to rely on the more robust but more sluggish luminance information in the eye-eyebrow region.
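
    The Bubbles technique reveals a stimulus only through randomly placed Gaussian apertures. The sketch below builds one such trial mask to be multiplied pixel-wise with a face image; the bubble count and aperture width are illustrative, not the study's values.

        # Sketch: one "Bubbles" trial mask -- the stimulus is revealed only
        # through randomly placed Gaussian apertures (parameters assumed).
        import numpy as np

        def bubbles_mask(shape, n_bubbles=15, sigma=12.0, rng=None):
            rng = np.random.default_rng(rng)
            h, w = shape
            yy, xx = np.mgrid[0:h, 0:w]
            mask = np.zeros(shape)
            for cy, cx in zip(rng.integers(0, h, n_bubbles),
                              rng.integers(0, w, n_bubbles)):
                mask = np.maximum(mask, np.exp(-((yy - cy)**2 + (xx - cx)**2)
                                               / (2 * sigma**2)))
            return mask  # multiply pixel-wise with the face image

        mask = bubbles_mask((256, 256), rng=0)
        print(mask.min(), mask.max())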

  13. Facial paralysis for the plastic surgeon.

    PubMed

    Kosins, Aaron M; Hurvitz, Keith A; Evans, Gregory Rd; Wirth, Garrett A

    2007-01-01

    Facial paralysis presents a significant and challenging reconstructive problem for plastic surgeons. An aesthetically pleasing and acceptable outcome requires not only good surgical skills and techniques, but also knowledge of facial nerve anatomy and an understanding of the causes of facial paralysis. The loss of the ability to move the face has both social and functional consequences for the patient. At the Facial Palsy Clinic in Edinburgh, Scotland, 22,954 patients were surveyed, and over 50% were found to have a considerable degree of psychological distress and social withdrawal as a consequence of their facial paralysis. Functionally, patients present with unilateral or bilateral loss of voluntary and nonvoluntary facial muscle movements. Signs and symptoms can include an asymmetric smile, synkinesis, epiphora or dry eye, abnormal blink, problems with speech articulation, drooling, hyperacusis, change in taste and facial pain. With respect to facial paralysis, surgeons tend to focus on the surgical, or 'hands-on', aspect. However, it is believed that an understanding of the disease process is equally (if not more) important to a successful surgical outcome. The purpose of the present review is to describe the anatomy and diagnostic patterns of the facial nerve, and the epidemiology and common causes of facial paralysis, including clinical features and diagnosis. Treatment options for paralysis are vast, and may include nerve decompression, facial reanimation surgery and botulinum toxin injection, but these are beyond the scope of the present paper.

  14. Facial palsy after dental procedures - Is viral reactivation responsible?

    PubMed

    Gaudin, Robert A; Remenschneider, Aaron K; Phillips, Katie; Knipfer, Christian; Smeets, Ralf; Heiland, Max; Hadlock, Tessa A

    2017-01-01

    Herpes labialis viral reactivation has been reported following dental procedures, but the incidence, characteristics and outcomes of delayed peripheral facial nerve palsy following dental work are poorly understood. Herein we describe the unique features of delayed facial paresis following dental procedures. An institutional retrospective review was performed to identify patients diagnosed with delayed facial nerve palsy within 30 days of dental manipulation. Demographics, prodromal signs and symptoms, initial medical treatment and outcomes were assessed. Of 2471 patients with facial palsy, 16 (0.7%) had delayed facial paresis following ipsilateral dental procedures. Average age at presentation was 44 years and 56% (9/16) were female. Clinical evaluation was consistent with Bell's palsy in 14 patients (88%) and Ramsay-Hunt syndrome in 2 patients (12%). Patients developed facial paresis an average of 3.9 days after the dental procedure, with all individuals developing a flaccid paralysis (House-Brackmann (HB) grade VI) during the acute stage. Half of the patients developed persistent facial palsy in the form of non-flaccid facial paralysis (HB grade III-IV). Facial palsy, like herpes labialis, can occur in the days following dental procedures and may also be related to viral reactivation. In this small cohort, long-term facial outcomes appear worse than for spontaneous Bell's palsy. Copyright © 2016 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.

  15. Facial paralysis for the plastic surgeon

    PubMed Central

    Kosins, Aaron M; Hurvitz, Keith A; Evans, Gregory RD; Wirth, Garrett A

    2007-01-01

    Facial paralysis presents a significant and challenging reconstructive problem for plastic surgeons. An aesthetically pleasing and acceptable outcome requires not only good surgical skills and techniques, but also knowledge of facial nerve anatomy and an understanding of the causes of facial paralysis. The loss of the ability to move the face has both social and functional consequences for the patient. At the Facial Palsy Clinic in Edinburgh, Scotland, 22,954 patients were surveyed, and over 50% were found to have a considerable degree of psychological distress and social withdrawal as a consequence of their facial paralysis. Functionally, patients present with unilateral or bilateral loss of voluntary and nonvoluntary facial muscle movements. Signs and symptoms can include an asymmetric smile, synkinesis, epiphora or dry eye, abnormal blink, problems with speech articulation, drooling, hyperacusis, change in taste and facial pain. With respect to facial paralysis, surgeons tend to focus on the surgical, or ‘hands-on’, aspect. However, it is believed that an understanding of the disease process is equally (if not more) important to a successful surgical outcome. The purpose of the present review is to describe the anatomy and diagnostic patterns of the facial nerve, and the epidemiology and common causes of facial paralysis, including clinical features and diagnosis. Treatment options for paralysis are vast, and may include nerve decompression, facial reanimation surgery and botulinum toxin injection, but these are beyond the scope of the present paper. PMID:19554190

  16. Facial color is an efficient mechanism to visually transmit emotion

    PubMed Central

    Benitez-Quiroz, Carlos F.; Srinivasan, Ramprakash

    2018-01-01

    Facial expressions of emotion in humans are believed to be produced by contracting one’s facial muscles, generally called action units. However, the surface of the face is also innervated with a large network of blood vessels. Blood flow variations in these vessels yield visible color changes on the face. Here, we study the hypothesis that these visible facial colors allow observers to successfully transmit and visually interpret emotion even in the absence of facial muscle activation. To study this hypothesis, we address the following two questions. Are observable facial colors consistent within and differential between emotion categories and positive vs. negative valence? And does the human visual system use these facial colors to decode emotion from faces? These questions suggest the existence of an important, unexplored mechanism of the production of facial expressions of emotion by a sender and their visual interpretation by an observer. The results of our studies provide evidence in favor of our hypothesis. We show that people successfully decode emotion using these color features, even in the absence of any facial muscle activation. We also demonstrate that this color signal is independent from that provided by facial muscle movements. These results support a revised model of the production and perception of facial expressions of emotion where facial color is an effective mechanism to visually transmit and decode emotion. PMID:29555780

  17. Facial color is an efficient mechanism to visually transmit emotion.

    PubMed

    Benitez-Quiroz, Carlos F; Srinivasan, Ramprakash; Martinez, Aleix M

    2018-04-03

    Facial expressions of emotion in humans are believed to be produced by contracting one's facial muscles, generally called action units. However, the surface of the face is also innervated with a large network of blood vessels. Blood flow variations in these vessels yield visible color changes on the face. Here, we study the hypothesis that these visible facial colors allow observers to successfully transmit and visually interpret emotion even in the absence of facial muscle activation. To study this hypothesis, we address the following two questions. Are observable facial colors consistent within and differential between emotion categories and positive vs. negative valence? And does the human visual system use these facial colors to decode emotion from faces? These questions suggest the existence of an important, unexplored mechanism of the production of facial expressions of emotion by a sender and their visual interpretation by an observer. The results of our studies provide evidence in favor of our hypothesis. We show that people successfully decode emotion using these color features, even in the absence of any facial muscle activation. We also demonstrate that this color signal is independent from that provided by facial muscle movements. These results support a revised model of the production and perception of facial expressions of emotion where facial color is an effective mechanism to visually transmit and decode emotion. Copyright © 2018 the Author(s). Published by PNAS.

  18. Cancelable biometrics realization with multispace random projections.

    PubMed

    Teoh, Andrew Beng Jin; Yuang, Chong Tze

    2007-10-01

    Biometric characteristics cannot be changed; therefore, the loss of privacy is permanent if they are ever compromised. This paper presents a two-factor cancelable formulation, where the biometric data are distorted in a revocable but non-reversible manner by first transforming the raw biometric data into a fixed-length feature vector and then projecting the feature vector onto a sequence of random subspaces that were derived from a user-specific pseudorandom number (PRN). This process is revocable and makes replacing biometrics as easy as replacing PRNs. The formulation has been verified under a number of scenarios (normal, stolen PRN, and compromised biometrics scenarios) using 2400 Facial Recognition Technology face images. The diversity property is also examined.
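
    The core construction, projecting a fixed-length feature vector onto a subspace derived from a user-specific PRN, can be sketched as follows. A single seeded Gaussian projection with sign binarisation stands in for the paper's sequence of multispace projections; the dimensions are assumptions.

        # Sketch: cancelable template via user-seeded random projection.
        # Revoking a compromised template just means issuing a new seed.
        import numpy as np

        def cancelable_template(feature_vec, user_seed, out_dim=64):
            rng = np.random.default_rng(user_seed)     # user-specific PRN
            proj = rng.standard_normal((out_dim, feature_vec.size))
            proj /= np.linalg.norm(proj, axis=1, keepdims=True)
            return np.sign(proj @ feature_vec)         # binarised, hard to invert

        raw = np.random.default_rng(1).standard_normal(256)  # stand-in features
        t1 = cancelable_template(raw, user_seed=42)
        t2 = cancelable_template(raw, user_seed=43)          # after revocation
        print(np.mean(t1 == t2))  # ~0.5: old and new templates are unlinkable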

  19. Williams-Beuren syndrome associated with single kidney and nephrocalcinosis: a case report.

    PubMed

    Abidi, Kamel; Jellouli, Manel; Ben Rabeh, Rania; Hammi, Yousra; Gargah, Tahar

    2015-01-01

    Williams-Beuren syndrome is a rare neurodevelopmental disorder characterized by congenital heart defects, abnormal facial features, mental retardation with a specific cognitive and behavioral profile, growth hormone deficiency, renal and skeletal anomalies, inguinal hernia, and infantile hypercalcaemia. We report a case of Williams-Beuren syndrome associated with a single kidney and nephrocalcinosis complicated by hypercalcaemia. A male infant aged 20 months presented with growth retardation associated with psychomotor impairment, dysmorphic features and nephrocalcinosis. He also had hypercalciuria and hypercalcemia. Echocardiography was normal. DMSA renal scintigraphy showed a single functioning kidney. FISH showed a single ELN signal in the 20 metaphases read, indicating an ELN deletion compatible with Williams-Beuren syndrome.

  20. When false recognition is out of control: the case of facial conjunctions.

    PubMed

    Jones, Todd C; Bartlett, James C

    2009-03-01

    In three experiments, a dual-process approach to face recognition memory is examined, with a specific focus on the idea that a recollection process can be used to retrieve configural information of a studied face. Subjects could avoid, with confidence, a recognition error to conjunction lure faces (each a reconfiguration of features from separate studied faces) or feature lure faces (each based on a set of old features and a set of new features) by recalling a studied configuration. In Experiment 1, study repetition (one vs. eight presentations) was manipulated, and in Experiments 2 and 3, retention interval over a short number of trials (0-20) was manipulated. Different measures converged on the conclusion that subjects were unable to use a recollection process to retrieve configural information in an effort to temper recognition errors for conjunction or feature lure faces. A single process, familiarity, appears to be the sole process underlying recognition of conjunction and feature faces, and familiarity contributes, perhaps in whole, to discrimination of old from conjunction faces.

  1. Facial anatomy.

    PubMed

    Marur, Tania; Tuna, Yakup; Demirci, Selman

    2014-01-01

    Dermatologic problems of the face affect both function and aesthetics, which are based on complex anatomical features. Treating dermatologic problems while preserving the aesthetics and functions of the face requires knowledge of normal anatomy. To perform invasive procedures of the face successfully, it is essential to understand its underlying topographic anatomy. This chapter presents the anatomy of the facial musculature and neurovascular structures in a systematic way with some clinically important aspects. We describe the attachments of the mimetic and masticatory muscles and emphasize their functions and nerve supply. We highlight clinically relevant facial topographic anatomy by explaining the course and location of the sensory and motor nerves of the face and facial vasculature with their relations. Additionally, this chapter reviews the recent nomenclature of the branching pattern of the facial artery. © 2013 Elsevier Inc. All rights reserved.

  2. Symmetrical and Asymmetrical Interactions between Facial Expressions and Gender Information in Face Perception.

    PubMed

    Liu, Chengwei; Liu, Ying; Iqbal, Zahida; Li, Wenhui; Lv, Bo; Jiang, Zhongqing

    2017-01-01

    To investigate the interaction between facial expressions and facial gender information during face perception, the present study matched the intensities of the two types of information in face images and then adopted the orthogonal condition of the Garner Paradigm to present the images to participants, who were required to judge the gender and expression of the faces; the gender and expression presentations were varied orthogonally. Gender and expression processing displayed a mutual interaction. On the one hand, the judgment of angry expressions occurred faster when presented with male facial images; on the other hand, the classification of the female gender occurred faster when presented with a happy facial expression than with an angry facial expression. According to the event-related potential results, the expression classification was influenced by gender during the face structural processing stage (as indexed by N170), which indicates the promotion or interference of facial gender with the coding of facial expression features. However, gender processing was affected by facial expressions in more stages, including the early (P1) and late (LPC) stages of perceptual processing, reflecting that emotional expression influences gender processing mainly by directing attention.

  3. Occlusal and facial features in Amazon indigenous: An insight into the role of genetics and environment in the etiology of dental malocclusion.

    PubMed

    de Souza, Bento Sousa; Bichara, Livia Monteiro; Guerreiro, João Farias; Quintão, Cátia Cardoso Abdo; Normando, David

    2015-09-01

    Indigenous people of the Xingu river present a similar tooth wear pattern, practise exclusive breast-feeding, use no pacifiers, and have a large intertribal genetic distance. To revisit the etiology of dental malocclusion features considering these population characteristics. Occlusion and facial features of five semi-isolated Amazon indigenous populations (n=351) were evaluated and compared to previously published data from urban Amazon people. Malocclusion prevalence ranged from 33.8% to 66.7%. Overall, this prevalence is lower than that of urban people, mainly regarding posterior crossbite. A high intertribal diversity was found. The Arara-Laranjal village had a population with a normal facial profile (98%) and a high rate of normal occlusion (66.2%), while another group of the same ethnicity presented a high prevalence of malocclusion, the highest occurrence of Class III malocclusion (32.6%) and a long face (34.8%). In Pat-Krô village the population had the highest prevalence of Class II malocclusion (43.9%), convex profile (38.6%), increased overjet (36.8%) and deep bite (15.8%). Another village's population, of the same ethnicity, had a high frequency of anterior open bite (22.6%) and anterior crossbite (12.9%). The highest occurrence of bi-protrusion was found in the group with the lowest prevalence of dental crowding, and vice versa. Supported by previous genetic studies and given their similar environmental conditions, the high intertribal diversity of occlusal and facial features suggests that genetic factors contribute substantially to the morphology of occlusal and facial features in the indigenous groups studied. The low prevalence of posterior crossbite in the remote indigenous populations compared with urban populations may relate to prolonged breastfeeding and an absence of pacifiers in the indigenous groups. Copyright © 2015 Elsevier Ltd. All rights reserved.

  4. Case Report: Congenital Erythroleukemia in a Premature Infant with Dysmorphic Features.

    PubMed

    Helin, Heidi; van der Walt, Jon; Holder, Muriel; George, Simi

    2016-01-01

    We present a case of pure erythroleukemia, diagnosed at autopsy, in a dysmorphic premature infant who died of multiorgan failure within 24 hours of birth. Dysmorphic features included facial and limb abnormalities with a long philtrum, micrognathia, downturned mouth and short neck, as well as abnormal and missing nails, a missing distal phalanx of the second toe, and overlapping toes. Internal findings included gross hepatomegaly with patchy hemorrhages in the liver, splenomegaly, and cardiomegaly, as well as subdural, intracerebral, and intraventricular hemorrhages. Histology revealed infiltration of bone marrow, kidney, heart, liver, adrenal, lung, spleen, pancreas, thyroid, testis, thymus, and placenta by pure erythroleukemia. Only six cases of congenital erythroleukemia have previously been reported with autopsy findings similar to those of this case. The dysmorphic features, although not fitting any specific syndrome, make this case unique. Congenital erythroleukemia and possible syndromes suggested by the dysmorphic features are discussed.

  5. The Morphometrics of “Masculinity” in Human Faces

    PubMed Central

    Mitteroecker, Philipp; Windhager, Sonja; Müller, Gerd B.; Schaefer, Katrin

    2015-01-01

    In studies of social inference and human mate preference, a wide but inconsistent array of tools for computing facial masculinity has been devised. Several of these approaches implicitly assumed that the individual expression of sexually dimorphic shape features, which we refer to as maleness, resembles facial shape features perceived as masculine. We outline a morphometric strategy for estimating separately the face shape patterns that underlie perceived masculinity and maleness, and for computing individual scores for these shape patterns. We further show how faces with different degrees of masculinity or maleness can be constructed in a geometric morphometric framework. In an application of these methods to a set of human facial photographs, we found that shape features typically perceived as masculine are wide faces with a wide inter-orbital distance, a wide nose, thin lips, and a large and massive lower face. The individual expressions of this combination of shape features—the masculinity shape scores—were the best predictor of rated masculinity among the compared methods (r = 0.5). The shape features perceived as masculine only partly resembled the average face shape difference between males and females (sexual dimorphism). Discriminant functions and Procrustes distances to the female mean shape were poor predictors of perceived masculinity. PMID:25671667

  6. Spatio-temporal Event Classification using Time-series Kernel based Structured Sparsity

    PubMed Central

    Jeni, László A.; Lőrincz, András; Szabó, Zoltán; Cohn, Jeffrey F.; Kanade, Takeo

    2016-01-01

    In many behavioral domains, such as facial expression and gesture, sparse structure is prevalent. This sparsity would be well suited for event detection but for one problem. Features typically are confounded by alignment error in space and time. As a consequence, high-dimensional representations such as SIFT and Gabor features have been favored despite their much greater computational cost and potential loss of information. We propose a Kernel Structured Sparsity (KSS) method that can handle both the temporal alignment problem and the structured sparse reconstruction within a common framework, and it can rely on simple features. We characterize spatio-temporal events as time-series of motion patterns and by utilizing time-series kernels we apply standard structured-sparse coding techniques to tackle this important problem. We evaluated the KSS method using both gesture and facial expression datasets that include spontaneous behavior and differ in degree of difficulty and type of ground truth coding. KSS outperformed both sparse and non-sparse methods that utilize complex image features and their temporal extensions. In the case of early facial event classification KSS had 10% higher accuracy as measured by F1 score over kernel SVM methods. PMID:27830214
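
    As a rough illustration of a time-series kernel over motion patterns, the sketch below exponentiates a dynamic time warping (DTW) distance. This generic kernel, which is not guaranteed to be positive definite, is a stand-in; the paper's specific kernels and structured-sparse coding are not reproduced.

        # Sketch: a generic DTW-based time-series kernel for comparing
        # motion-pattern sequences (a stand-in, not the paper's kernels).
        import numpy as np

        def dtw(a, b):
            n, m = len(a), len(b)
            D = np.full((n + 1, m + 1), np.inf)
            D[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    cost = abs(a[i - 1] - b[j - 1])
                    D[i, j] = cost + min(D[i - 1, j], D[i, j - 1],
                                         D[i - 1, j - 1])
            return D[n, m]

        def ts_kernel(a, b, gamma=0.1):
            return np.exp(-gamma * dtw(a, b))

        s1 = np.sin(np.linspace(0, 3, 40))
        s2 = np.sin(np.linspace(0.2, 3.2, 45))  # time-shifted variant
        print(ts_kernel(s1, s2))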

  7. Periapical Cemento-osseous Dysplasia Is Rarely Diagnosed on Orthopantomograms of Patients with Neurofibromatosis Type 1 and Is Not a Gender-specific Feature of the Disease.

    PubMed

    Friedrich, Reinhard E; Reul, Anika

    2018-04-01

    Several skeletal aberrations of the skull have been described for the tumor predisposition syndrome neurofibromatosis type 1 (NF1). Recently, periapical cemental/cemento-osseous dysplasia (COD) has been described in females affected with NF1. This reactive lesion of the hard tissues in tooth-bearing areas of the jaw has been proposed to represent a gender-specific radiological feature of NF1. The aim of this study was to investigate the prevalence of COD in patients with NF1. The orthopantomograms (OPGs) of 179 patients with a confirmed diagnosis of NF1 were analyzed for COD. The results were compared to radiographic findings obtained in OPGs of age- and sex-matched controls. The NF1 patient group was further differentiated according to the evidence of facial plexiform neurofibroma. COD was a very rare finding in both groups. The extension of the diagnostic criteria including radiologically-healthy teeth and a widened periodontal gap in the periapical area only marginally increased the number of considered cases. Although there was a somewhat more common occurrence of such changes in the patient group compared to the control group and the number of affected women was greater than the number of men, none of these differences reached statistical significance. Furthermore, COD or widening of the periradicular periodontal space was not found to be associated with facial tumor type in NF1. The investigation revealed that COD is not a diagnostic feature of NF1. There is no clear association of the rare finding of COD with gender. These studies should be compared with patient groups of other ethnic backgrounds. Copyright © 2018, International Institute of Anticancer Research (Dr. George J. Delinasios), All rights reserved.

  8. Cross-Cultural Agreement in Facial Attractiveness Preferences: The Role of Ethnicity and Gender

    PubMed Central

    Coetzee, Vinet; Greeff, Jaco M.; Stephen, Ian D.; Perrett, David I.

    2014-01-01

    Previous work showed high agreement in facial attractiveness preferences within and across cultures. The aims of the current study were twofold. First, we tested cross-cultural agreement in the attractiveness judgements of White Scottish and Black South African students for own- and other-ethnicity faces. Results showed significant agreement between White Scottish and Black South African observers' attractiveness judgements, providing further evidence of strong cross-cultural agreement in facial attractiveness preferences. Second, we tested whether cross-cultural agreement is influenced by the ethnicity and/or the gender of the target group. White Scottish and Black South African observers showed significantly higher agreement for Scottish than for African faces, presumably because both groups are familiar with White European facial features, but the Scottish group are less familiar with Black African facial features. Further work investigating this discordance in cross-cultural attractiveness preferences for African faces shows that Black South African observers rely more heavily on colour cues when judging African female faces for attractiveness, while White Scottish observers rely more heavily on shape cues. Results also show higher cross-cultural agreement for female than for male faces, albeit not significantly higher. The findings shed new light on the factors that influence cross-cultural agreement in attractiveness preferences. PMID:24988325

  9. Functional connectivity between amygdala and facial regions involved in recognition of facial threat

    PubMed Central

    Harada, Tokiko; Ruffman, Ted; Sadato, Norihiro; Iidaka, Tetsuya

    2013-01-01

    The recognition of threatening faces is important for making social judgments. For example, threatening facial features of defendants could affect the decisions of jurors during a trial. Previous neuroimaging studies using faces of members of the general public have identified a pivotal role of the amygdala in perceiving threat. This functional magnetic resonance imaging study used face photographs of male prisoners who had been convicted of first-degree murder (MUR) as threatening facial stimuli. We compared the subjective ratings of MUR faces with those of control (CON) faces and examined how they were related to brain activation, particularly, the modulation of the functional connectivity between the amygdala and other brain regions. The MUR faces were perceived to be more threatening than the CON faces. The bilateral amygdala was shown to respond to both MUR and CON faces, but subtraction analysis revealed no significant difference between the two. Functional connectivity analysis indicated that the extent of connectivity between the left amygdala and the face-related regions (i.e. the superior temporal sulcus, inferior temporal gyrus and fusiform gyrus) was correlated with the subjective threat rating for the faces. We have demonstrated that the functional connectivity is modulated by vigilance for threatening facial features. PMID:22156740

  10. Cross-cultural agreement in facial attractiveness preferences: the role of ethnicity and gender.

    PubMed

    Coetzee, Vinet; Greeff, Jaco M; Stephen, Ian D; Perrett, David I

    2014-01-01

    Previous work showed high agreement in facial attractiveness preferences within and across cultures. The aims of the current study were twofold. First, we tested cross-cultural agreement in the attractiveness judgements of White Scottish and Black South African students for own- and other-ethnicity faces. Results showed significant agreement between White Scottish and Black South African observers' attractiveness judgements, providing further evidence of strong cross-cultural agreement in facial attractiveness preferences. Second, we tested whether cross-cultural agreement is influenced by the ethnicity and/or the gender of the target group. White Scottish and Black South African observers showed significantly higher agreement for Scottish than for African faces, presumably because both groups are familiar with White European facial features, but the Scottish group are less familiar with Black African facial features. Further work investigating this discordance in cross-cultural attractiveness preferences for African faces shows that Black South African observers rely more heavily on colour cues when judging African female faces for attractiveness, while White Scottish observers rely more heavily on shape cues. Results also show higher cross-cultural agreement for female than for male faces, albeit not significantly higher. The findings shed new light on the factors that influence cross-cultural agreement in attractiveness preferences.

  11. Pilot study to establish a nasal tip prediction method from unknown human skeletal remains for facial reconstruction and skull photo superimposition as applied to a Japanese male populations.

    PubMed

    Utsuno, Hajime; Kageyama, Toru; Uchida, Keiichi; Kibayashi, Kazuhiko; Sakurada, Koichi; Uemura, Koichi

    2016-02-01

    Skull-photo superimposition is a technique used to identify the relationship between a skull and a photograph of a target person; facial reconstruction reproduces antemortem facial features from an unknown human skull, or identifies the facial features of unknown human skeletal remains. These techniques are based on soft tissue thickness and the relationships between soft tissue and the skull, i.e., the position of the ear and external acoustic meatus, pupil and orbit, nose and nasal aperture, and lips and teeth. However, the ear and nose regions are relatively difficult to identify because of their structure, as the soft tissues of these regions are lined with cartilage. We attempted to establish a more accurate method to determine the position of the nasal tip from the skull. We measured the height of the maxilla and mid-lower facial region in 55 Japanese men and generated a regression equation from the collected data. When applied to a validation set consisting of another 12 Japanese men, the predictions were 2.0 ± 0.99 mm (mean ± SD) distant from the true nasal tip. Copyright © 2015 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
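
    In the same spirit as the study's regression equation, the sketch below fits a line from a skeletal height measurement to nasal tip projection with scikit-learn. The synthetic measurements and coefficients are purely illustrative; the published equation is not reproduced here.

        # Sketch: fitting a nasal-tip prediction line from a skeletal
        # measurement. All data below are synthetic stand-ins.
        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(0)
        maxilla_h = rng.normal(70.0, 4.0, size=55)               # mm, hypothetical
        nasal_tip = 0.6 * maxilla_h + 10 + rng.normal(0, 2, 55)  # mm, hypothetical

        model = LinearRegression().fit(maxilla_h.reshape(-1, 1), nasal_tip)
        pred = model.predict(np.array([[72.0]]))
        print(f"predicted nasal tip projection: {pred[0]:.1f} mm")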

  12. Automated Video Based Facial Expression Analysis of Neuropsychiatric Disorders

    PubMed Central

    Wang, Peng; Barrett, Frederick; Martin, Elizabeth; Milanova, Marina; Gur, Raquel E.; Gur, Ruben C.; Kohler, Christian; Verma, Ragini

    2008-01-01

    Deficits in emotional expression are prominent in several neuropsychiatric disorders, including schizophrenia. Available clinical facial expression evaluations provide subjective and qualitative measurements, which are based on static 2D images that do not capture the temporal dynamics and subtleties of expression changes. Therefore, there is a need for automated, objective and quantitative measurements of facial expressions captured using videos. This paper presents a computational framework that creates probabilistic expression profiles for video data and can potentially help to automatically quantify emotional expression differences between patients with neuropsychiatric disorders and healthy controls. Our method automatically detects and tracks facial landmarks in videos, and then extracts geometric features to characterize facial expression changes. To analyze temporal facial expression changes, we employ probabilistic classifiers that analyze facial expressions in individual frames, and then propagate the probabilities throughout the video to capture the temporal characteristics of facial expressions. The applications of our method to healthy controls and case studies of patients with schizophrenia and Asperger’s syndrome demonstrate the capability of the video-based expression analysis method in capturing subtleties of facial expression. Such results can pave the way for a video based method for quantitative analysis of facial expressions in clinical research of disorders that cause affective deficits. PMID:18045693
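
    The temporal propagation step can be approximated by smoothing per-frame classifier probabilities over time, as sketched below. Exponential smoothing is a simple stand-in for the paper's probabilistic propagation; the smoothing factor and inputs are assumptions.

        # Sketch: propagating per-frame expression probabilities through a
        # video by exponential smoothing (a stand-in for the paper's method).
        import numpy as np

        def smooth_probabilities(frame_probs, alpha=0.8):
            """frame_probs: (n_frames, n_classes) per-frame classifier outputs."""
            out = np.empty_like(frame_probs)
            out[0] = frame_probs[0]
            for t in range(1, len(frame_probs)):
                out[t] = alpha * out[t - 1] + (1 - alpha) * frame_probs[t]
                out[t] /= out[t].sum()   # keep it a probability distribution
            return out

        probs = np.random.default_rng(0).dirichlet(np.ones(4), size=100)
        print(smooth_probabilities(probs)[-1])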

  13. Weighted Feature Gaussian Kernel SVM for Emotion Recognition

    PubMed Central

    Jia, Qingxuan

    2016-01-01

    Emotion recognition with weighted features based on facial expression is a challenging research topic and has attracted great attention in the past few years. This paper presents a novel method that utilizes the subregion recognition rate to weight the kernel function. First, we divide the facial expression image into uniform subregions and calculate the corresponding recognition rates and weights. Then, we obtain a weighted-feature Gaussian kernel function and construct a classifier based on a Support Vector Machine (SVM). Finally, the experimental results suggest that the approach based on the weighted-feature Gaussian kernel function achieves a good correct rate in emotion recognition. The experiments on the extended Cohn-Kanade (CK+) dataset show that our method has achieved encouraging recognition results compared to the state-of-the-art methods. PMID:27807443
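
    A minimal version of the weighting idea, a Gaussian kernel whose squared distances are weighted per subregion, can be passed to scikit-learn's SVC as a callable kernel, as sketched below. The region weights, gamma and data are illustrative assumptions.

        # Sketch: subregion-weighted Gaussian kernel for an SVM.
        # Weights, gamma and data are illustrative stand-ins.
        import numpy as np
        from sklearn.svm import SVC

        region_weights = np.repeat([0.9, 0.5, 0.7, 0.4], 16)  # 4 regions x 16 dims

        def weighted_rbf(X, Y, gamma=0.05, w=region_weights):
            # squared distance with per-dimension weights w
            d2 = ((X[:, None, :] - Y[None, :, :])**2 * w).sum(-1)
            return np.exp(-gamma * d2)

        rng = np.random.default_rng(0)
        X = rng.standard_normal((40, 64))
        y = rng.integers(0, 2, 40)
        clf = SVC(kernel=weighted_rbf).fit(X, y)
        print(clf.score(X, y))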

  14. De novo pathogenic variants in CHAMP1 are associated with global developmental delay, intellectual disability, and dysmorphic facial features.

    PubMed

    Tanaka, Akemi J; Cho, Megan T; Retterer, Kyle; Jones, Julie R; Nowak, Catherine; Douglas, Jessica; Jiang, Yong-Hui; McConkie-Rosell, Allyn; Schaefer, G Bradley; Kaylor, Julie; Rahman, Omar A; Telegrafi, Aida; Friedman, Bethany; Douglas, Ganka; Monaghan, Kristin G; Chung, Wendy K

    2016-01-01

    We identified five unrelated individuals with significant global developmental delay and intellectual disability (ID), dysmorphic facial features and frequent microcephaly, and de novo predicted loss-of-function variants in chromosome alignment maintaining phosphoprotein 1 (CHAMP1). Our findings are consistent with recently reported de novo mutations in CHAMP1 in five other individuals with similar features. CHAMP1 is a zinc finger protein involved in kinetochore-microtubule attachment and is required for regulating the proper alignment of chromosomes during metaphase in mitosis. Mutations in CHAMP1 may affect cell division and hence brain development and function, resulting in developmental delay and ID.

  15. Laterality of facial expressions of emotion: Universal and culture-specific influences.

    PubMed

    Mandal, Manas K; Ambady, Nalini

    2004-01-01

    Recent research indicates that (a) the perception and expression of facial emotion are lateralized to a great extent in the right hemisphere, and, (b) whereas facial expressions of emotion embody universal signals, culture-specific learning moderates the expression and interpretation of these emotions. In the present article, we review the literature on laterality and universality, and propose that, although some components of facial expressions of emotion are governed biologically, others are culturally influenced. We suggest that the left side of the face is more expressive of emotions, is more uninhibited, and displays culture-specific emotional norms. The right side of face, on the other hand, is less susceptible to cultural display norms and exhibits more universal emotional signals. Copyright 2004 IOS Press

  16. Relaxed Open Mouth reciprocity favours playful contacts in South American sea lions (Otaria flavescens).

    PubMed

    Llamazares-Martín, Clara; Scopa, Chiara; Guillén-Salazar, Federico; Palagi, Elisabetta

    2017-07-01

    Fine-tuning communication is well documented in mammalian social play, which relies on a large variety of specific and non-specific signals. Facial expressions are among the most frequent patterns in play communication. The reciprocity of facial signals expressed by the players provides information on their reciprocal attentional state and on the correct perception/decoding of the signal itself. Here, for the first time, we explored the Relaxed Open Mouth (ROM), a widespread playful facial expression among mammals, in the South American sea lion (Otaria flavescens). In this species, like many others, ROM appears to be used as a playful signal as distinct from merely being a biting action. ROM was often reciprocated by players. Even though ROM did not vary in frequency of emission as a function of the number of players involved, it was reciprocated more often during dyadic encounters, in which the players had the highest probability of engaging in a face-to-face interaction. Finally, we found that it was the reciprocation of ROMs, more than the frequency of their performance, that was effective in prolonging playful bouts. In conclusion, ROM is widespread in many social mammals, and O. flavescens is no exception. At least in those species for which quantitative data are available, ROM seems to be characterized by similar design features, clearly indicating that the signal underwent similar selective pressures. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. Mutation of Gtf2ird1 from the Williams-Beuren syndrome critical region results in facial dysplasia, motor dysfunction, and altered vocalisations.

    PubMed

    Howard, Monique L; Palmer, Stephen J; Taylor, Kylie M; Arthurson, Geoffrey J; Spitzer, Matthew W; Du, Xin; Pang, Terence Y C; Renoir, Thibault; Hardeman, Edna C; Hannan, Anthony J

    2012-03-01

    Insufficiency of the transcriptional regulator GTF2IRD1 has become a strong potential explanation for some of the major characteristic features of the neurodevelopmental disorder Williams-Beuren syndrome (WBS). Genotype/phenotype correlations in humans indicate that the hemizygous loss of the GTF2IRD1 gene and an adjacent paralogue, GTF2I, play crucial roles in the neurocognitive and craniofacial aspects of the disease. In order to explore this genetic relationship in greater detail, we have generated a targeted Gtf2ird1 mutation in mice that blocks normal GTF2IRD1 protein production. Detailed analyses of homozygous null Gtf2ird1 mice have revealed a series of phenotypes that share some intriguing parallels with WBS. These include reduced body weight, a facial deformity resulting from localised epidermal hyperplasia, a motor coordination deficit, alterations in exploratory activity and, in response to specific stress-inducing stimuli, a novel audible vocalisation and increased serum corticosterone. Analysis of Gtf2ird1 expression patterns in the brain using a knock-in LacZ reporter and c-fos activity mapping illustrates the regions where these neurological abnormalities may originate. These data provide new mechanistic insight into the clinical genetic findings in WBS patients and indicate that insufficiency of GTF2IRD1 protein contributes to abnormalities of facial development, motor function and specific behavioural disorders that accompany this disease. Copyright © 2011 Elsevier Inc. All rights reserved.

  18. Evaluating faces on trustworthiness: an extension of systems for recognition of emotions signaling approach/avoidance behaviors.

    PubMed

    Todorov, Alexander

    2008-03-01

    People routinely make various trait judgments from facial appearance, and such judgments affect important social outcomes. These judgments are highly correlated with each other, reflecting the fact that valence evaluation permeates trait judgments from faces. Trustworthiness judgments best approximate this evaluation, consistent with evidence about the involvement of the amygdala in the implicit evaluation of face trustworthiness. Based on computer modeling and behavioral experiments, I argue that face evaluation is an extension of functionally adaptive systems for understanding the communicative meaning of emotional expressions. Specifically, in the absence of diagnostic emotional cues, trustworthiness judgments are an attempt to infer behavioral intentions signaling approach/avoidance behaviors. Correspondingly, these judgments are derived from facial features that resemble emotional expressions signaling such behaviors: happiness and anger for the positive and negative ends of the trustworthiness continuum, respectively. The emotion overgeneralization hypothesis can explain highly efficient but not necessarily accurate trait judgments from faces, a pattern that appears puzzling from an evolutionary point of view and also generates novel predictions about brain responses to faces. Specifically, this hypothesis predicts a nonlinear response in the amygdala to face trustworthiness, confirmed in functional magnetic resonance imaging (fMRI) studies, and dissociations between processing of facial identity and face evaluation, confirmed in studies with developmental prosopagnosics. I conclude with some methodological implications for the study of face evaluation, focusing on the advantages of formally modeling representation of faces on social dimensions.

  19. Acquisition Level Definitions and Observables for Human Targets, Urban Operations, and the Global War on Terrorism

    DTIC Science & Technology

    2005-04-08

    category, feature identification, has been added to address such worn or carried objects, and facial recognition. The definitions also address commercial... Cell phone or revolver − Uniform worn by French or US or Chinese infantry − Facial recognition/identification (A particular person can be

  20. The facial skeleton of the chimpanzee-human last common ancestor

    PubMed Central

    Cobb, Samuel N

    2008-01-01

    This review uses the current morphological evidence to evaluate the facial morphology of the hypothetical last common ancestor (LCA) of the chimpanzee/bonobo (panin) and human (hominin) lineages. Some of the problems involved in reconstructing ancestral morphologies so close to the formation of a lineage are discussed. These include the prevalence of homoplasy and poor phylogenetic resolution due to a lack of defining derived features. Consequently the list of hypothetical features expected in the face of the LCA is very limited beyond its hypothesized similarity to extant Pan. It is not possible to determine with any confidence whether the facial morphology of any of the current candidate LCA taxa (Ardipithecus kadabba, Ardipithecus ramidus, Orrorin tugenensis and Sahelanthropus tchadensis) is representative of the LCA, or a stem hominin, or a stem panin or, in some cases, a hominid predating the emergence of the hominin lineage. The major evolutionary trends in the hominin lineage subsequent to the LCA are discussed in relation to the dental arcade and dentition, subnasal morphology and the size, position and prognathism of the facial skeleton. PMID:18380866

  1. Gaze control during face exploration in schizophrenia.

    PubMed

    Delerue, Céline; Laprévote, Vincent; Verfaillie, Karl; Boucart, Muriel

    2010-10-04

    Patients with schizophrenia perform worse than controls on various face perception tasks. Studies monitoring eye movements have shown reduced scan paths and a lower number of fixations to relevant facial features (eyes, nose, mouth) than to other parts of the face. We examined whether attentional control, through instructions, modulates visual scanning in schizophrenia. Visual scan paths were monitored in 20 patients with schizophrenia and 20 controls. Participants started with a "free viewing" task followed by tasks in which they were asked to determine the gender, identify the facial expression, estimate the age, or decide whether the face was known or unknown. Temporal and spatial characteristics of scan paths were compared for each group and task. Consistent with the literature, patients with schizophrenia showed reduced attention to salient facial features in the passive viewing task. However, their scan paths did not differ from those of controls when asked to determine the facial expression, the gender, the age or the familiarity of the face. The results are interpreted in terms of attentional control and cognitive flexibility. (c) 2010 Elsevier Ireland Ltd. All rights reserved.

  2. Automated facial recognition of manually generated clay facial approximations: Potential application in unidentified persons data repositories.

    PubMed

    Parks, Connie L; Monson, Keith L

    2018-01-01

    This research examined how accurately 2D images (i.e., photographs) of 3D clay facial approximations were matched to corresponding photographs of the approximated individuals using an objective automated facial recognition system. Irrespective of search filter (i.e., blind, sex, or ancestry) or rank class (R1, R10, R25, and R50) employed, few operationally informative results were observed. In only a single instance of 48 potential match opportunities was a clay approximation matched to a corresponding life photograph within the top 50 images (R50) of a candidate list, even with relatively small gallery sizes created from the application of search filters (e.g., sex or ancestry search restrictions). Increasing the candidate lists to include the top 100 images (R100) resulted in only two additional instances of correct match. Although other untested variables (e.g., approximation method, 2D photographic process, and practitioner skill level) may have impacted the observed results, this study suggests that 2D images of manually generated clay approximations are not readily matched to life photos by automated facial recognition systems. Further investigation is necessary in order to identify the underlying cause(s), if any, of the poor recognition results observed in this study (e.g., potential inferior facial feature detection and extraction). Additional inquiry exploring prospective remedial measures (e.g., stronger feature differentiation) is also warranted, particularly given the prominent use of clay approximations in unidentified persons casework. Copyright © 2017. Published by Elsevier B.V.
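
    The rank classes above (R1 through R100) are cumulative match ranks: a probe counts as a hit at rank k if its true mate appears among the top k gallery candidates returned by the recognition system. A minimal sketch of this bookkeeping follows; the similarity matrix and gallery here are hypothetical stand-ins, not the system used in the study.

        import numpy as np

        def rank_k_hits(similarity, true_idx, ks=(1, 10, 25, 50)):
            """Count probes whose true mate appears in the top-k ranked
            candidates, for each k (a cumulative match characteristic).
            similarity: (n_probes, n_gallery), higher = more similar.
            true_idx:   (n_probes,) gallery index of each probe's mate."""
            # Rank gallery entries by descending similarity for every probe.
            order = np.argsort(-similarity, axis=1)
            # 0-based rank of the true mate in each probe's candidate list.
            ranks = np.array([np.where(order[i] == true_idx[i])[0][0]
                              for i in range(similarity.shape[0])])
            return {k: int(np.sum(ranks < k)) for k in ks}

        # Toy example: 48 probes (as in the study) against a gallery of 200.
        rng = np.random.default_rng(0)
        sim = rng.random((48, 200))
        mates = rng.integers(0, 200, size=48)
        print(rank_k_hits(sim, mates))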

  3. Facial anthropometric differences among gender, ethnicity, and age groups.

    PubMed

    Zhuang, Ziqing; Landsittel, Douglas; Benson, Stacey; Roberge, Raymond; Shaffer, Ronald

    2010-06-01

    The impact of race/ethnicity upon facial anthropometric data in the US workforce, and on the development of personal protective equipment, has not been investigated to any significant degree. The proliferation of minority populations in the US workforce has increased the need to investigate differences in facial dimensions among these workers. The objective of this study was to determine the face shape and size differences among race and age groups from the National Institute for Occupational Safety and Health survey of 3997 US civilian workers. Survey participants were divided into two gender groups, four racial/ethnic groups, and three age groups. Measurements of height, weight, neck circumference, and 18 facial dimensions were collected using traditional anthropometric techniques. A multivariate analysis of the data was performed using Principal Component Analysis. An exploratory analysis then assessed, via a linear model, the effects of different demographic factors on anthropometric features. The 21 anthropometric measurements, body mass index, and the first and second principal component scores were dependent variables, while gender, ethnicity, age, occupation, weight, and height served as independent variables. Gender significantly contributes to size for 19 of 24 dependent variables. African-Americans have statistically shorter, wider, and shallower noses than Caucasians. Hispanic workers have 14 facial features that are significantly larger than those of Caucasians, while their nose protrusion, height, and head length are significantly shorter. The other ethnic group was composed primarily of Asian subjects and has statistically different dimensions from Caucasians for 16 anthropometric values. Nineteen anthropometric values for subjects at least 45 years of age are statistically different from those measured for subjects between 18 and 29 years of age. Workers employed in manufacturing, fire fighting, healthcare, law enforcement, and other occupational groups have facial features that differ significantly from those of workers in construction. Statistically significant differences in facial anthropometric dimensions (P < 0.05) were noted between males and females, all racial/ethnic groups, and the subjects who were at least 45 years old when compared to workers between 18 and 29 years of age. These findings could be important to the design and manufacture of respirators, as well as to employers responsible for supplying respiratory protective equipment to their employees.
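
    As a rough illustration of the analysis pipeline described above, the sketch below runs a principal component analysis over a matrix of facial measurements and then regresses the first component score on demographic predictors. All data and variable choices here are hypothetical placeholders for the survey variables, not the study's data.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.linear_model import LinearRegression

        # Hypothetical stand-in for the survey: n subjects x 18 facial
        # dimensions (height, weight, etc. would enter analogously).
        rng = np.random.default_rng(1)
        X = rng.normal(size=(3997, 18))

        # First two principal component scores summarize face size/shape.
        pcs = PCA(n_components=2).fit_transform(X)

        # Linear model of a dependent variable (here PC1) on demographics:
        # gender as a dummy-coded column, age in years.
        gender = rng.integers(0, 2, size=3997)
        age = rng.uniform(18, 66, size=3997)
        design = np.column_stack([gender, age])
        model = LinearRegression().fit(design, pcs[:, 0])
        print("coefficients (gender, age):", model.coef_)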

  4. Adapting Local Features for Face Detection in Thermal Image.

    PubMed

    Ma, Chao; Trung, Ngo Thanh; Uchiyama, Hideaki; Nagahara, Hajime; Shimada, Atsushi; Taniguchi, Rin-Ichiro

    2017-11-27

    A thermal camera captures the temperature distribution of a scene as a thermal image. In thermal images, the facial appearances of different people under different lighting conditions are similar, because facial temperature distribution is generally constant and not affected by lighting conditions. This similarity in face appearance is advantageous for face detection. To detect faces in thermal images, cascade classifiers with Haar-like features are generally used, but there are few studies exploring local features for face detection in thermal images. In this paper, we introduce two approaches relying on local features for face detection in thermal images. First, we create new feature types by extending Multi-Block LBP, considering a margin around the reference and the generally constant distribution of facial temperature. In this way, we make the features more robust to image noise and more effective for face detection in thermal images. Second, we propose an AdaBoost-based training method to obtain cascade classifiers with multiple types of local features; because these feature types have different advantages, this enhances the descriptive power of the local features. We performed a hold-out validation experiment and a field experiment. In the hold-out validation experiment, we captured a dataset from 20 participants, comprising 14 males and 6 females. For each participant, we captured 420 images with 10 variations in camera distance, 21 poses, and 2 appearances (with and without glasses). We compared the performance of cascade classifiers trained with different sets of features, and the results showed that the proposed approaches effectively improve the performance of face detection in thermal images. In the field experiment, we compared face detection performance in realistic scenes using thermal and RGB images and discuss the results.
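
    The margin idea can be made concrete with a small sketch: a Multi-Block LBP code in which a neighboring block only sets its bit when its mean exceeds the center block's mean by at least a margin, damping the small intensity noise typical of thermal images. The exact encoding and parameter values below are assumptions for illustration, not the paper's implementation.

        import numpy as np

        def mb_lbp_margin(img, x, y, bw, bh, margin=2.0):
            """Multi-Block LBP code for the 3x3 block grid whose top-left
            block starts at (x, y), each block being bw x bh pixels.
            Neighbors within `margin` of the center mean are treated as
            equal to it (the margin idea follows the paper's description;
            this particular encoding is an assumption)."""
            means = np.array([[img[y + r*bh : y + (r+1)*bh,
                                   x + c*bw : x + (c+1)*bw].mean()
                               for c in range(3)] for r in range(3)])
            center = means[1, 1]
            # Clockwise neighbor order starting at the top-left block.
            nbrs = [means[0, 0], means[0, 1], means[0, 2], means[1, 2],
                    means[2, 2], means[2, 1], means[2, 0], means[1, 0]]
            code = 0
            for i, m in enumerate(nbrs):
                if m >= center + margin:   # clearly warmer than the center
                    code |= 1 << i
            return code

        # Toy thermal patch around 30 degrees C with mild noise.
        img = np.random.default_rng(2).normal(30.0, 1.0, size=(64, 64))
        print(mb_lbp_margin(img, 10, 10, 4, 4))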

  5. Current research on pycnodysostosis.

    PubMed

    Turan, Serap

    2014-08-01

    Pycnodysostosis is a rare autosomal recessive disorder caused by an inactivating mutation in cathepsin K (CTSK) and characterized by dysmorphic facial features, short stature, acroosteolysis, osteosclerosis with increased bone fragility, and delayed closure of the cranial sutures. Patients usually present with short stature or dysmorphic features to pediatric endocrinology or genetics clinics, with atypical fractures to orthopedics clinics, or with hematological abnormalities to hematology clinics. However, under-diagnosis or misdiagnosis of this condition is a major issue. Pycnodysostosis is not a life-threatening condition, but craniosynostosis, frequent fractures, respiratory-sleep problems, and dental problems may cause significant morbidity. Although no specific treatment for this disorder has been described, patients should be followed for complications and treated accordingly. A specific treatment for the disorder must be established in the future to prevent complications and improve quality of life for patients in the current era of advanced molecular research.

  6. The right place at the right time: priming facial expressions with emotional face components in developmental visual agnosia.

    PubMed

    Aviezer, Hillel; Hassin, Ran R; Perry, Anat; Dudarev, Veronica; Bentin, Shlomo

    2012-04-01

    The current study examined the nature of deficits in emotion recognition from facial expressions in case LG, an individual with a rare form of developmental visual agnosia (DVA). LG presents with profoundly impaired recognition of facial expressions, yet the underlying nature of his deficit remains unknown. During typical face processing, normal sighted individuals extract information about expressed emotions from face regions with activity diagnostic for specific emotion categories. Given LG's impairment, we sought to shed light on his emotion perception by examining if priming facial expressions with diagnostic emotional face components would facilitate his recognition of the emotion expressed by the face. LG and control participants matched isolated face components with components appearing in a subsequently presented full-face and then categorized the face's emotion. Critically, the matched components were from regions which were diagnostic or non-diagnostic of the emotion portrayed by the full face. In experiment 1, when the full faces were briefly presented (150 ms), LG's performance was strongly influenced by the diagnosticity of the components: his emotion recognition was boosted within normal limits when diagnostic components were used and was obliterated when non-diagnostic components were used. By contrast, in experiment 2, when the face-exposure duration was extended (2000 ms), the beneficial effect of the diagnostic matching was diminished as was the detrimental effect of the non-diagnostic matching. These data highlight the impact of diagnostic facial features in normal expression recognition and suggest that impaired emotion recognition in DVA results from deficient visual integration across diagnostic face components. Copyright © 2012 Elsevier Ltd. All rights reserved.

  7. Behavioral dissociation between emotional and non-emotional facial expressions in congenital prosopagnosia

    PubMed Central

    Daini, Roberta; Comparetti, Chiara M.; Ricciardelli, Paola

    2014-01-01

    Neuropsychological and neuroimaging studies have shown that facial recognition and emotional expressions are dissociable. However, it is unknown if a single system supports the processing of emotional and non-emotional facial expressions. We aimed to understand whether individuals with impairment in face recognition from birth (congenital prosopagnosia, CP) can use non-emotional facial expressions to recognize a face as an already seen one, and thus process this facial dimension independently from features (which are impaired in CP) and basic emotional expressions. To this end, we carried out a behavioral study in which we compared the performance of 6 CP individuals to that of typically developing individuals, using upright and inverted faces. Four avatar faces with a neutral expression were presented in the initial phase. The target faces presented in the recognition phase, in which a recognition task was requested (2AFC paradigm), could be identical (neutral) to those of the initial phase or present biologically plausible changes to features, non-emotional expressions, or emotional expressions. After this task, a second task was performed, in which the participants had to detect whether or not the recognized face exactly matched the study face or showed any difference. The results confirmed the CPs' impairment in the configural processing of the invariant aspects of the face, but also showed spared configural processing of non-emotional facial expressions (task 1). Interestingly, and unlike the non-emotional expressions, the configural processing of emotional expressions was compromised in CPs and did not improve their change detection ability (task 2). These new results have theoretical implications for face perception models since they suggest that, at least in CPs, non-emotional expressions are processed configurally, can be dissociated from other facial dimensions, and may serve as a compensatory strategy to achieve face recognition. PMID:25520643

  9. Hemifacial microsomia in cat-eye syndrome: 22q11.1-q11.21 as candidate loci for facial symmetry.

    PubMed

    Quintero-Rivera, Fabiola; Martinez-Agosto, Julian A

    2013-08-01

    Cat-Eye syndrome (CES) (OMIM 115470), also known as chromosome 22 partial tetrasomy or inverted duplicated 22q11, was first reported by Haab [1879] based on the primary features of eye coloboma and anal atresia. However, >60% of patients lack these primary features. Here, we present a 9-month-old female who at birth was noted to have multiple defects, including facial asymmetry with asymmetric retrognathia, bilateral mandibular hypoplasia, branchial cleft sinus, right-sided muscular torticollis, esotropia, and an atretic right ear canal with low-to-moderate sensorineural hearing loss, bilateral preauricular ear tags/pits, and two skin tags on her left cheek. There were no signs of any colobomas or anal atresia. Hemifacial microsomia (HFM) was suspected clinically. Chromosome studies and FISH identified an extra marker chromosome originating from 22q11, consistent with CES, and this was confirmed by aCGH. This report expands the phenotypic variability of CES and includes partial tetrasomy of 22q11.1-q11.21 in the differential diagnosis of HFM. In addition, our case, as well as the previous association of 22q11.2 deletions and duplications with facial asymmetry and features of HFM, supports the hypothesis that this chromosome region harbors genes important in the regulation of body plan symmetry, and in particular facial harmony. Copyright © 2013 Wiley Periodicals, Inc.

  10. A prospective study of risk for Sturge-Weber syndrome in children with upper facial port-wine stain.

    PubMed

    Dutkiewicz, Anne-Sophie; Ezzedine, Khaled; Mazereeuw-Hautier, Juliette; Lacour, Jean-Philippe; Barbarot, Sébastien; Vabres, Pierre; Miquel, Juliette; Balguerie, Xavier; Martin, Ludovic; Boralevi, Franck; Bessou, Pierre; Chateil, Jean-François; Léauté-Labrèze, Christine

    2015-03-01

    Upper facial port-wine stain (PWS) is a feature of Sturge-Weber syndrome (SWS). Recent studies suggest that the distribution of the PWS corresponds to genetic mosaicism rather than to trigeminal nerve impairment. We sought to refine the cutaneous distribution of upper facial PWS at risk for SWS. This was a prospective multicenter study of consecutive cases of upper facial PWS larger than 1 cm², located in the ophthalmic division of the trigeminal nerve distribution, in infants aged less than 1 year, seen in 8 French pediatric dermatology departments between 2006 and 2012. Clinical data, magnetic resonance imaging, and photographs were systematically collected and studied. PWS were classified into 6 distinct patterns. In all, 66 patients were included. Eleven presented with SWS (magnetic resonance imaging signs and seizures). Four additional infants had suspected SWS without neurologic manifestations. Hemifacial (odds ratio 7.7, P = .003) and median (odds ratio 17.08, P = .008) PWS patterns were found to be at high risk for SWS. A nonmedian linear pattern was not associated with SWS. The small number of patients limited the statistical power of the study. Specific PWS distribution patterns are associated with an increased risk of SWS. These PWS patterns conform to areas of somatic mosaicism. Terminology stipulating ophthalmic division of trigeminal nerve territory involvement in SWS should be abandoned. Copyright © 2014 American Academy of Dermatology, Inc. Published by Elsevier Inc. All rights reserved.

  11. Face aging effect simulation model based on multilayer representation and shearlet transform

    NASA Astrophysics Data System (ADS)

    Li, Yuancheng; Li, Yan

    2017-09-01

    In order to extract detailed facial features, we build a face aging effect simulation model based on multilayer representation and shearlet transform. The face is divided into three layers: the global layer, the local feature layer, and the texture detail layer, and an aging model is established separately for each. First, the training samples are classified according to age group, and we use the active appearance model (AAM) at the global level to obtain facial features. The regression equations relating shape and texture to age are obtained by fitting support vector regression with a radial basis function kernel. We use the AAM to simulate the aging of facial organs. Then, for the texture detail layer, we acquire the significant high-frequency characteristic components of the face using the multiscale shearlet transform. Finally, we obtain the final simulated aging images of the face via a fusion algorithm. Experiments are carried out on the FG-NET dataset, and the results show that the simulated face images differ little from the original images and achieve a good face aging simulation effect.
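
    The shape/texture-versus-age regression step can be sketched with RBF-kernel support vector regression, as described above; the AAM parameter matrix and age labels below are hypothetical placeholders, and the hyperparameters are illustrative assumptions.

        import numpy as np
        from sklearn.svm import SVR

        # Hypothetical stand-in for AAM appearance parameters: one row of
        # coefficients per training face, with its age label.
        rng = np.random.default_rng(3)
        aam_params = rng.normal(size=(500, 30))
        ages = rng.uniform(2, 69, size=500)

        # RBF-kernel support vector regression maps appearance parameters
        # to age; the aging simulation then moves a face's parameters
        # along this fitted relationship (details are assumptions).
        svr = SVR(kernel="rbf", C=10.0, gamma="scale").fit(aam_params, ages)
        print("predicted age:", svr.predict(aam_params[:1]))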

  12. Late revision or correction of facial trauma-related soft-tissue deformities.

    PubMed

    Rieck, Kevin L; Fillmore, W Jonathan; Ettinger, Kyle S

    2013-11-01

    Surgical approaches used in accessing the facial skeleton for fracture repair are often the same as or similar to those used for cosmetic enhancement of the face. Rarely does facial trauma result in injuries that do not in some way affect the facial soft-tissue envelope either directly or as sequelae of the surgical repair. Knowledge of both skeletal and facial soft-tissue anatomy is paramount to successful clinical outcomes. Facial soft-tissue deformities can arise that require specific evaluation and management for correction. This article focuses on revision and correction of these soft-tissue-related injuries secondary to facial trauma. Copyright © 2013. Published by Elsevier Inc.

  13. Validation of the facial dysfunction domain of the Penn Acoustic Neuroma Quality-of-Life (PANQOL) Scale.

    PubMed

    Lodder, Wouter L; Adan, Guleed H; Chean, Chung S; Lesser, Tristram H; Leong, Samuel C

    2017-06-01

    The objective of this study was to evaluate the strength of content validity within the facial dysfunction domain of the Penn Acoustic Neuroma Quality-of-Life (PANQOL) Scale and to compare how it correlates with a facial dysfunction-specific QOL instrument (Facial Clinimetric Evaluation, FaCE). The study design was an online questionnaire survey. Members of the British Acoustic Neuroma Association received both the PANQOL questionnaire and the FaCE scale. 158 respondents with self-identified facial paralysis or dysfunction had completed PANQOL and FaCE data sets for analysis. The mean composite PANQOL score was 53.5 (range 19.2-93.5), whilst the mean total FaCE score was 50.9 (range 10-95). The total scores of the PANQOL and FaCE correlated moderately (r = 0.48). A strong correlation (r = 0.63) was observed between the PANQOL's facial dysfunction domain and the FaCE total score. Of all the FaCE domains, social function was most strongly correlated with the PANQOL facial dysfunction domain (r = 0.66), whilst there was very weak-to-moderate correlation (range 0.01-0.43) with the other FaCE domains. The current study has demonstrated a strong correlation between the facial dysfunction domain of the PANQOL and a facial paralysis-specific QOL instrument.

  14. Heritabilities of Facial Measurements and Their Latent Factors in Korean Families

    PubMed Central

    Kim, Hyun-Jin; Im, Sun-Wha; Jargal, Ganchimeg; Lee, Siwoo; Yi, Jae-Hyuk; Park, Jeong-Yeon; Sung, Joohon; Cho, Sung-Il; Kim, Jong-Yeol; Kim, Jong-Il; Seo, Jeong-Sun

    2013-01-01

    Genetic studies on facial morphology targeting healthy populations are fundamental in understanding the specific genetic influences involved; yet, most studies to date, if not all, have been focused on congenital diseases accompanied by facial anomalies. To study the specific genetic cues determining facial morphology, we estimated familial correlations and heritabilities of 14 facial measurements and 3 latent factors inferred from a factor analysis in a subset of the Korean population. The study included a total of 229 individuals from 38 families. We evaluated a total of 14 facial measurements using 2D digital photographs. We performed factor analysis to infer common latent variables. The heritabilities of 13 facial measurements were statistically significant (p < 0.05) and ranged from 0.25 to 0.61. Of these, the heritability of intercanthal width in the orbital region was found to be the highest (h2 = 0.61, SE = 0.14). Three factors (lower face portion, orbital region, and vertical length) were obtained through factor analysis, where the heritability values ranged from 0.45 to 0.55. The heritability values for each factor were higher than the mean heritability value of individual original measurements. We have confirmed the genetic influence on facial anthropometric traits and suggest a potential way to categorize and analyze the facial portions into different groups. PMID:23843774

  15. When is facial paralysis Bell palsy? Current diagnosis and treatment.

    PubMed

    Ahmed, Anwar

    2005-05-01

    Bell palsy is largely a diagnosis of exclusion, but certain features in the history and physical examination help distinguish it from facial paralysis due to other conditions: e.g., abrupt onset with complete, unilateral facial weakness at 24 to 72 hours, and, on the affected side, numbness or pain around the ear, a reduction in taste, and hypersensitivity to sounds. Corticosteroids and antivirals given within 10 days of onset have been shown to help. But Bell palsy resolves spontaneously without treatment in most patients within 6 months.

  16. Metric and morphological assessment of facial features: a study on three European populations.

    PubMed

    Ritz-Timme, S; Gabriel, P; Tutkuviene, J; Poppa, P; Obertová, Z; Gibelli, D; De Angelis, D; Ratnayake, M; Rizgeliene, R; Barkus, A; Cattaneo, C

    2011-04-15

    Identification from video surveillance systems is becoming more and more frequent in forensic practice. In this field, different techniques have been improved, such as height estimation and gait analysis. However, the most natural approach for identifying a person in everyday life is based on facial characteristics. Scientifically, faces can be described using morphological and metric assessment of facial features. The morphological approach is largely affected by the subjective opinion of the observer, which can be mitigated by the application of descriptive atlases. In addition, this approach requires investigating which facial characteristics are the most common and which are rare in different populations. For the metric approach, further studies are necessary to point out possible metric differences within and between populations. The acquisition of statistically adequate population data may provide useful information for the reconstruction of biological profiles of unidentified individuals, particularly concerning ethnic affiliation, and possibly also for personal identification. This study presents the results of the morphological and metric assessment of the head and face of 900 male subjects between 20 and 31 years of age from Italy, Germany and Lithuania. The evaluation of the morphological traits was performed using the DMV atlas with 43 pre-defined facial characteristics. The frequencies of the types of facial features were calculated for each population in order to establish the rarest characteristics, which may be used for the purpose of a biological profile and consequently for personal identification. Metric analysis performed in vivo included 24 absolute measurements and 24 indices of the head and face, including body height and body weight. The comparison of the frequencies of morphological facial features showed many similarities between the samples from Germany, Italy and Lithuania. However, several characteristics were rare or significantly more or less common in one population compared to the other two. On the other hand, all measurements and indices, except for labial width and the intercanthal-mouth index, showed significant differences between the three populations. As far as comparisons with other samples are concerned, the three European Caucasian samples differed from North American Caucasian, African and Asian groups in the frequency of the morphological traits and the mean values of the metric analysis. The metric and morphological data collected from three European populations may be useful for forensic purposes in the construction of biological profiles and in screening for personal identification. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  17. Intellectual Abilities in a Large Sample of Children with Velo-Cardio-Facial Syndrome: An Update

    ERIC Educational Resources Information Center

    De Smedt, Bert; Devriendt, K.; Fryns, J. -P.; Vogels, A.; Gewillig, M.; Swillen, A.

    2007-01-01

    Background: Learning disabilities are one of the most consistently reported features in Velo-Cardio-Facial Syndrome (VCFS). Earlier reports on IQ in children with VCFS were, however, limited by small sample sizes and ascertainment biases. The aim of the present study was therefore to replicate these earlier findings and to investigate intellectual…

  18. Facial soft tissue thickness in skeletal type I Japanese children.

    PubMed

    Utsuno, Hajime; Kageyama, Toru; Deguchi, Toshio; Umemura, Yasunobu; Yoshino, Mineo; Nakamura, Hiroshi; Miyazawa, Hiroo; Inoue, Katsuhiro

    2007-10-25

    Facial reconstruction techniques used in forensic anthropology require knowledge of the facial soft tissue thickness of each race if facial features are to be reconstructed correctly. If this is inaccurate, so also will be the reconstructed face. Knowledge of differences by age and sex are also required. Therefore, when unknown human skeletal remains are found, the forensic anthropologist investigates for race, sex, and age, and for other variables of relevance. Cephalometric X-ray images of living persons can help to provide this information. They give an approximately 10% enlargement from true size and can demonstrate the relationship between soft and hard tissue. In the present study, facial soft tissue thickness in Japanese children was measured at 12 anthropological points using X-ray cephalometry in order to establish a database for facial soft tissue thickness. This study of both boys and girls, aged from 6 to 18 years, follows a previous study of Japanese female children only, and focuses on facial soft tissue thickness in only one skeletal type. Sex differences in thickness of tissue were found from 12 years of age upwards. The study provides more detailed and accurate measurements than past reports of facial soft tissue thickness, and reveals the uniqueness of the Japanese child's facial profile.

  19. Association between ratings of facial attractiveness and patients' motivation for orthognathic surgery.

    PubMed

    Vargo, J K; Gladwin, M; Ngan, P

    2003-02-01

    To compare the judgments of facial esthetics, defects and treatment needs between laypersons and professionals (orthodontists and oral surgeons) as predictors of patients' motivation for orthognathic surgery. Two panels of expert and naïve raters were asked to evaluate photographs of orthognathic surgery patients for facial esthetics, defects and treatment needs. Results were correlated with patients' motivation for surgery. Fifty-seven patients (37 females and 20 males) with a mean age of 26.0 +/- 6.7 years were interviewed prior to orthognathic surgery treatment. Three color photographs of each patient were evaluated by a panel of 14 experts and a panel of 18 laypersons. Each panel of raters was asked to evaluate facial morphology and facial attractiveness and to recommend surgical treatment (independent variables). The dependent variable was the patient's motivation for orthognathic surgery. Outcome measures: rater reliability was analyzed using an unweighted kappa coefficient and a Cronbach alpha coefficient. Correlations and regression analyses were used to quantify the relationships between variables. Expert raters provided reliable ratings of certain morphological features such as excessive gingival display and classification of mandibular facial form and position. Based on the facial photographs, both expert and naïve raters agreed on the facial attractiveness of patients. The best predictors of patients' motivation for surgery were the naïve profile attractiveness rating and the patients' expected change in self-consciousness. Expert raters provide more reliable ratings on certain morphologic features. However, the laypersons' profile attractiveness rating and the patients' expected change in self-consciousness were the best predictors of patients' motivation for surgery. These data suggest that patients' motives for treatment are not necessarily related to objectively determined need. Patients' decisions to seek treatment were more correlated with laypersons' ratings of attractiveness because they see what other laypersons see, and are directly or indirectly affected by others' reactions to their appearance. These findings may provide useful information for clinicians in counseling patients who seek orthognathic surgery.
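
    Both reliability statistics named above are standard and straightforward to reproduce. The sketch below computes an unweighted Cohen's kappa for two raters and a Cronbach alpha across a panel, on made-up ratings rather than the study's data.

        import numpy as np
        from sklearn.metrics import cohen_kappa_score

        # Unweighted kappa between two raters' categorical judgments
        # (hypothetical labels standing in for the panel ratings).
        r1 = [2, 1, 3, 2, 2, 1, 3, 3]
        r2 = [2, 1, 3, 1, 2, 1, 3, 2]
        print("kappa:", cohen_kappa_score(r1, r2))

        def cronbach_alpha(ratings):
            """ratings: (n_subjects, n_raters) matrix of scores."""
            ratings = np.asarray(ratings, dtype=float)
            k = ratings.shape[1]
            item_vars = ratings.var(axis=0, ddof=1).sum()
            total_var = ratings.sum(axis=1).var(ddof=1)
            return k / (k - 1) * (1 - item_vars / total_var)

        # Hypothetical 1-7 ratings: 57 patients scored by 14 expert raters.
        panel = np.random.default_rng(4).integers(1, 8, size=(57, 14))
        print("alpha:", cronbach_alpha(panel))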

  20. Cues of Fatigue: Effects of Sleep Deprivation on Facial Appearance

    PubMed Central

    Sundelin, Tina; Lekander, Mats; Kecklund, Göran; Van Someren, Eus J. W.; Olsson, Andreas; Axelsson, John

    2013-01-01

    Study Objective: To investigate the facial cues by which one recognizes that someone is sleep deprived versus not sleep deprived. Design: Experimental laboratory study. Setting: Karolinska Institutet, Stockholm, Sweden. Participants: Forty observers (20 women, mean age 25 ± 5 y) rated 20 facial photographs with respect to fatigue, 10 facial cues, and sadness. The stimulus material consisted of 10 individuals (five women) photographed at 14:30 after normal sleep and after 31 h of sleep deprivation following a night with 5 h of sleep. Measurements: Ratings of fatigue, fatigue-related cues, and sadness in facial photographs. Results: The faces of sleep deprived individuals were perceived as having more hanging eyelids, redder eyes, more swollen eyes, darker circles under the eyes, paler skin, more wrinkles/fine lines, and more droopy corners of the mouth (effects ranging from b = +3 ± 1 to b = +15 ± 1 mm on 100-mm visual analog scales, P < 0.01). The ratings of fatigue were related to glazed eyes and to all the cues affected by sleep deprivation (P < 0.01). Ratings of rash/eczema or tense lips were not significantly affected by sleep deprivation, nor associated with judgements of fatigue. In addition, sleep-deprived individuals looked sadder than after normal sleep, and sadness was related to looking fatigued (P < 0.01). Conclusions: The results show that sleep deprivation affects features relating to the eyes, mouth, and skin, and that these features function as cues of sleep loss to other people. Because these facial regions are important in the communication between humans, facial cues of sleep deprivation and fatigue may carry social consequences for the sleep deprived individual in everyday life. Citation: Sundelin T; Lekander M; Kecklund G; Van Someren EJW; Olsson A; Axelsson J. Cues of fatigue: effects of sleep deprivation on facial appearance. SLEEP 2013;36(9):1355-1360. PMID:23997369

  1. Enhanced facial recognition for thermal imagery using polarimetric imaging.

    PubMed

    Gurton, Kristan P; Yuffa, Alex J; Videen, Gorden W

    2014-07-01

    We present a series of long-wave-infrared (LWIR) polarimetric-based thermal images of facial profiles in which polarization-state information of the image-forming radiance is retained and displayed. The resultant polarimetric images show enhanced facial features, additional texture, and details that are not present in corresponding conventional thermal imagery. It has been generally thought that conventional thermal imagery (MidIR or LWIR) could not produce the detailed spatial information required for reliable human identification due to the so-called "ghosting" effect often seen in thermal imagery of human subjects. By using polarimetric information, we are able to extract subtle surface features of the human face, thus improving subject identification. Polarimetric image sets considered include the conventional thermal intensity image, S0, the two Stokes images, S1 and S2, and a Stokes image product called the degree-of-linear-polarization image.
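
    The degree-of-linear-polarization image mentioned above is conventionally defined from the Stokes images as DoLP = sqrt(S1^2 + S2^2) / S0. The sketch below applies that standard formula to toy Stokes images; the paper's exact normalization and calibration are not given in the abstract, so this is an assumption.

        import numpy as np

        def degree_of_linear_polarization(s0, s1, s2, eps=1e-9):
            """Per-pixel DoLP image from the Stokes images S0, S1, S2,
            using the standard definition sqrt(S1^2 + S2^2) / S0."""
            return np.sqrt(s1**2 + s2**2) / (s0 + eps)

        # Toy Stokes images standing in for calibrated LWIR measurements.
        rng = np.random.default_rng(5)
        s0 = rng.uniform(1.0, 2.0, size=(128, 128))  # conventional intensity
        s1 = 0.1 * rng.normal(size=(128, 128))
        s2 = 0.1 * rng.normal(size=(128, 128))
        dolp = degree_of_linear_polarization(s0, s1, s2)
        print(dolp.min(), dolp.max())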

  2. Distinct facial processing in schizophrenia and schizoaffective disorders

    PubMed Central

    Chen, Yue; Cataldo, Andrea; Norton, Daniel J; Ongur, Dost

    2011-01-01

    Although schizophrenia and schizoaffective disorders have both similar and differing clinical features, it is not well understood whether similar or differing pathophysiological processes mediate patients’ cognitive functions. Using psychophysical methods, this study compared the performances of schizophrenia (SZ) patients, patients with schizoaffective disorder (SA), and a healthy control group in two face-related cognitive tasks: emotion discrimination, which tested perception of facial affect, and identity discrimination, which tested perception of non-affective facial features. Compared to healthy controls, SZ patients, but not SA patients, exhibited deficient performance in both fear and happiness discrimination, as well as identity discrimination. SZ patients, but not SA patients, also showed impaired performance in a theory-of-mind task for which emotional expressions are identified based upon the eye regions of face images. This pattern of results suggests distinct processing of face information in schizophrenia and schizoaffective disorders. PMID:21868199

  3. Classification of facial-emotion expression in the application of psychotherapy using Viola-Jones and Edge-Histogram of Oriented Gradient.

    PubMed

    Candra, Henry; Yuwono, Mitchell; Rifai Chai; Nguyen, Hung T; Su, Steven

    2016-08-01

    Psychotherapy requires appropriate recognition of a patient's facial-emotion expression to provide proper treatment in a psychotherapy session. To address this need, this paper proposes a facial emotion recognition system combining the Viola-Jones detector with a feature descriptor we term Edge-Histogram of Oriented Gradients (E-HOG). The performance of the proposed method is compared with various feature sources, including the face, the eyes, the mouth, and both the eyes and the mouth. Seven classes of basic emotions were successfully identified with 96.4% accuracy using a multi-class Support Vector Machine (SVM). The proposed descriptor E-HOG is much cheaper to compute than traditional HOG, as shown by a significant improvement in processing time as high as 1833.33% (p-value = 2.43E-17) with a slight reduction in accuracy of only 1.17% (p-value = 0.0016).
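
    A minimal sketch of such a pipeline follows, assuming OpenCV's stock Viola-Jones cascade and with plain HOG standing in for the authors' E-HOG variant (which, per the abstract, reduces the face to edges before histogramming gradients); patch size and HOG parameters are illustrative assumptions.

        import cv2
        import numpy as np
        from skimage.feature import hog
        from sklearn.svm import SVC

        # Viola-Jones face detector shipped with OpenCV.
        cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

        def face_descriptor(gray):
            """Detect the first face and return a HOG descriptor for it."""
            faces = cascade.detectMultiScale(gray, 1.1, 5)
            if len(faces) == 0:
                return None
            x, y, w, h = faces[0]
            patch = cv2.resize(gray[y:y+h, x:x+w], (64, 64))
            return hog(patch, orientations=9, pixels_per_cell=(8, 8),
                       cells_per_block=(2, 2))

        # A multi-class SVM over such descriptors would then separate the
        # seven basic emotions (training data omitted in this sketch).
        clf = SVC(kernel="linear", decision_function_shape="ovr")
        print(face_descriptor(np.zeros((120, 160), dtype=np.uint8)))  # None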

  4. Perceptions of Emotion from Facial Expressions are Not Culturally Universal: Evidence from a Remote Culture

    PubMed Central

    Gendron, Maria; Roberson, Debi; van der Vyver, Jacoba Marietta; Barrett, Lisa Feldman

    2014-01-01

    It is widely believed that certain emotions are universally recognized in facial expressions. Recent evidence indicates that Western perceptions (e.g., scowls as anger) depend on cues to US emotion concepts embedded in experiments. Since such cues are standard feature in methods used in cross-cultural experiments, we hypothesized that evidence of universality depends on this conceptual context. In our study, participants from the US and the Himba ethnic group sorted images of posed facial expressions into piles by emotion type. Without cues to emotion concepts, Himba participants did not show the presumed “universal” pattern, whereas US participants produced a pattern with presumed universal features. With cues to emotion concepts, participants in both cultures produced sorts that were closer to the presumed “universal” pattern, although substantial cultural variation persisted. Our findings indicate that perceptions of emotion are not universal, but depend on cultural and conceptual contexts. PMID:24708506

  5. Facial width-to-height ratio predicts self-reported dominance and aggression in males and females, but a measure of masculinity does not.

    PubMed

    Lefevre, Carmen E; Etchells, Peter J; Howell, Emma C; Clark, Andrew P; Penton-Voak, Ian S

    2014-10-01

    Recently, associations between facial structure and aggressive behaviour have been reported. Specifically, the facial width-to-height ratio (fWHR) is thought to link to aggression, although it is unclear whether this association is related to a specific dimension of aggression, or to a more generalized concept of dominance behaviour. Similarly, an association has been proposed between facial masculinity and dominant and aggressive behaviour, but, to date, this has not been formally tested. Because masculinity and fWHR are negatively correlated, it is unlikely that both signal similar behaviours. Here, we thus tested these associations and show that: (i) fWHR is related to both self-reported dominance and aggression; (ii) physical aggression, verbal aggression and anger, but not hostility are associated with fWHR; (iii) there is no evidence for a sex difference in associations between fWHR and aggression; and (iv) the facial masculinity index does not predict dominance or aggression. Taken together, these results indicate that fWHR, but not a measure of facial masculinity, cues dominance and specific types of aggression in both sexes. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
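
    fWHR is commonly computed from four facial landmarks as bizygomatic width divided by upper-face height (upper lip to mid-brow). That operationalization and the landmark names below are widespread conventions assumed for illustration; the abstract does not spell out the authors' exact measurement protocol.

        import numpy as np

        def fwhr(landmarks):
            """Facial width-to-height ratio: bizygomatic width divided by
            the vertical distance from the upper lip to the mid-brow."""
            width = np.linalg.norm(landmarks["zygion_left"]
                                   - landmarks["zygion_right"])
            height = np.linalg.norm(landmarks["mid_brow"]
                                    - landmarks["upper_lip"])
            return width / height

        # Hypothetical 2D landmark coordinates (pixels).
        pts = {"zygion_left":  np.array([30.0, 60.0]),
               "zygion_right": np.array([110.0, 60.0]),
               "mid_brow":     np.array([70.0, 45.0]),
               "upper_lip":    np.array([70.0, 95.0])}
        print("fWHR:", fwhr(pts))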

  6. Stability of Facial Affective Expressions in Schizophrenia

    PubMed Central

    Fatouros-Bergman, H.; Spang, J.; Merten, J.; Preisler, G.; Werbart, A.

    2012-01-01

    Thirty-two videorecorded interviews were conducted by two interviewers with eight patients diagnosed with schizophrenia. Each patient was interviewed four times: three weekly interviews by the first interviewer and one additional interview by the second interviewer. 64 selected sequences where the patients were speaking about psychotic experiences were scored for facial affective behaviour with Emotion Facial Action Coding System (EMFACS). In accordance with previous research, the results show that patients diagnosed with schizophrenia express negative facial affectivity. Facial affective behaviour seems not to be dependent on temporality, since within-subjects ANOVA revealed no substantial changes in the amount of affects displayed across the weekly interview occasions. Whereas previous findings found contempt to be the most frequent affect in patients, in the present material disgust was as common, but depended on the interviewer. The results suggest that facial affectivity in these patients is primarily dominated by the negative emotions of disgust and, to a lesser extent, contempt and implies that this seems to be a fairly stable feature. PMID:22966449

  7. Recognizing Facial Expressions Automatically from Video

    NASA Astrophysics Data System (ADS)

    Shan, Caifeng; Braspenning, Ralph

    Facial expressions, resulting from movements of the facial muscles, are the face changes that occur in response to a person's internal emotional states, intentions, or social communications. There is a considerable history associated with the study of facial expressions. Darwin [22], who argued that all mammals show emotions reliably in their faces, was the first to describe in detail the specific facial expressions associated with emotions in animals and humans. Since then, facial expression analysis has been an area of great research interest for behavioral scientists [27]. Psychological studies [48, 3] suggest that facial expressions, as the main mode of nonverbal communication, play a vital role in human face-to-face communication. For illustration, we show some examples of facial expressions in Fig. 1.

  8. Wireless electronic-tattoo for long-term high fidelity facial muscle recordings

    NASA Astrophysics Data System (ADS)

    Inzelberg, Lilah; David Pur, Moshe; Steinberg, Stanislav; Rand, David; Farah, Maroun; Hanein, Yael

    2017-05-01

    Facial surface electromyography (sEMG) is a powerful tool for objective evaluation of human facial expressions and has accordingly been suggested in recent years for a wide range of psychological and neurological assessment applications. Owing to technical challenges, in particular the cumbersome gelled electrodes, the use of facial sEMG has so far been limited. Using innovative temporary tattoo electrodes optimized specifically for facial applications, we demonstrate the use of sEMG as a platform for robust identification of facial muscle activation. In particular, differentiation between diverse facial muscles is demonstrated. We also demonstrate a wireless version of the system. The potential use of the presented technology for user-experience monitoring and objective psychological and neurological evaluations is discussed.

  9. Twin infant with lymphatic dysplasia diagnosed with Noonan syndrome by molecular genetic testing.

    PubMed

    Mathur, Deepan; Somashekar, Santhosh; Navarrete, Cristina; Rodriguez, Maria M

    2014-08-01

    Noonan Syndrome is an autosomal dominant disorder characterized by short stature, congenital heart defects, developmental delay, dysmorphic facial features and occasional lymphatic dysplasias. The features of Noonan Syndrome change with age and have variable expression. The diagnosis has historically been based on clinical grounds. We describe a child who was born with congenital refractory chylothorax and subcutaneous edema suspected to be secondary to pulmonary lymphangiectasis. The infant died of respiratory failure and anasarca at 80 days. The autopsy confirmed lymphatic dysplasia in the lungs and mesentery. The baby had no dysmorphic facial features and was diagnosed postmortem with Noonan syndrome by genomic DNA sequence analysis, which revealed a heterozygous G503R mutation in the PTPN11 gene.

  10. Usefulness of dermoscopy in the diagnosis and monitoring treatment of demodicidosis.

    PubMed

    Friedman, Paula; Sabban, Emilia Cohen; Cabo, Horacio

    2017-01-01

    Demodicidosis is a common infestation and should be considered in the differential diagnosis of recurrent or recalcitrant perioral dermatitis or rosacea-like eruptions of the face. We report on a 34-year-old male who presented with facial erythema and desquamation accompanied by a pruritic sensation. Dermoscopic examination revealed Demodex tails and Demodex follicular openings, both specific features of this entity. Microscopically, the standardized skin surface biopsy showed a pathogenic Demodex density, and the patient responded positively to anti-demodectic drugs. To our knowledge, only a few reports of the dermatoscopic features of demodicidosis have been published in the literature. Dermoscopy offers a potential new option for real-time validation of Demodex infestation and a useful tool for monitoring treatment.

  11. Support vector machine for automatic pain recognition

    NASA Astrophysics Data System (ADS)

    Monwar, Md Maruf; Rezaei, Siamak

    2009-02-01

    Facial expressions are a key index of emotion, and the interpretation of such expressions of emotion is critical to everyday social functioning. In this paper, we present an efficient video analysis technique for recognition of a specific expression, pain, from human faces. We employ an automatic face detector which detects the face in the stored video frame using a skin color modeling technique. For pain recognition, location and shape features of the detected faces are computed. These features are then used as inputs to a support vector machine (SVM) for classification. We compare the results with neural network based and eigenimage based automatic pain recognition systems. The experimental results indicate that using a support vector machine as the classifier can certainly improve the performance of an automatic pain recognition system.
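
    The face detection stage rests on skin color modeling. A hedged sketch using fixed YCrCb thresholds, one common form of such a model, is shown below; the threshold values are illustrative assumptions, not the paper's calibrated ones.

        import cv2
        import numpy as np

        def skin_mask(bgr):
            """Segment likely skin pixels via fixed YCrCb thresholds
            (values are illustrative, not the paper's)."""
            ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
            lower = np.array([0, 133, 77], dtype=np.uint8)
            upper = np.array([255, 173, 127], dtype=np.uint8)
            return cv2.inRange(ycrcb, lower, upper)

        frame = np.zeros((120, 160, 3), dtype=np.uint8)  # stand-in frame
        mask = skin_mask(frame)
        # The largest connected skin region would then be taken as the
        # face, and its location/shape features fed to the SVM classifier.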

  12. A face for all seasons: Searching for context-specific leadership traits and discovering a general preference for perceived health

    PubMed Central

    Spisak, Brian R.; Blaker, Nancy M.; Lefevre, Carmen E.; Moore, Fhionna R.; Krebbers, Kleis F. B.

    2014-01-01

    Previous research indicates that followers tend to contingently match particular leader qualities to evolutionarily consistent situations requiring collective action (i.e., context-specific cognitive leadership prototypes) and information processing undergoes categorization which ranks certain qualities as first-order context-general and others as second-order context-specific. To further investigate this contingent categorization phenomenon we examined the “attractiveness halo”—a first-order facial cue which significantly biases leadership preferences. While controlling for facial attractiveness, we independently manipulated the underlying facial cues of health and intelligence and then primed participants with four distinct organizational dynamics requiring leadership (i.e., competition vs. cooperation between groups and exploratory change vs. stable exploitation). It was expected that the differing requirements of the four dynamics would contingently select for relatively healthier- or intelligent-looking leaders. We found perceived facial intelligence to be a second-order context-specific trait—for instance, in times requiring a leader to address between-group cooperation—whereas perceived health is significantly preferred across all contexts (i.e., a first-order trait). The results also indicate that facial health positively affects perceived masculinity while facial intelligence negatively affects perceived masculinity, which may partially explain leader choice in some of the environmental contexts. The limitations and a number of implications regarding leadership biases are discussed. PMID:25414653

  13. BMI and WHR Are Reflected in Female Facial Shape and Texture: A Geometric Morphometric Image Analysis.

    PubMed

    Mayer, Christine; Windhager, Sonja; Schaefer, Katrin; Mitteroecker, Philipp

    2017-01-01

    Facial markers of body composition are frequently studied in evolutionary psychology and are important in computational and forensic face recognition. We assessed the association of body mass index (BMI) and waist-to-hip ratio (WHR) with facial shape and texture (color pattern) in a sample of young Middle European women by a combination of geometric morphometrics and image analysis. Faces of women with high BMI had a wider and rounder facial outline relative to the size of the eyes and lips, and relatively lower eyebrows. Furthermore, women with high BMI had a brighter and more reddish skin color than women with lower BMI. The same facial features were associated with WHR, even though BMI and WHR were only moderately correlated. Yet BMI was more predictable than WHR from facial attributes. After leave-one-out cross-validation, we were able to predict 25% of variation in BMI and 10% of variation in WHR by facial shape. Facial texture predicted only about 3-10% of variation in BMI and WHR. This indicates that facial shape primarily reflects total fat proportion, rather than the distribution of fat within the body. The association of reddish facial texture with high BMI may be mediated by increased blood pressure and superficial blood flow as well as diet. Our study elucidates how geometric morphometric image analysis serves to quantify the effects of biological factors such as BMI and WHR on facial shape and color, which in turn contribute to social perception.
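
    The leave-one-out cross-validation scheme described above can be sketched briefly; the shape coordinates and BMI values below are random placeholders for the study's morphometric data, and ordinary linear regression stands in for the authors' exact predictive model.

        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.model_selection import LeaveOneOut, cross_val_predict

        # Hypothetical stand-ins: flattened shape coordinates and measured
        # BMI for each woman in the sample.
        rng = np.random.default_rng(6)
        shape = rng.normal(size=(49, 20))
        bmi = rng.normal(22.0, 3.0, size=49)

        # Leave-one-out cross-validated prediction of BMI from facial
        # shape; the squared correlation between predicted and observed
        # values estimates the proportion of variance explained.
        pred = cross_val_predict(LinearRegression(), shape, bmi,
                                 cv=LeaveOneOut())
        r = np.corrcoef(pred, bmi)[0, 1]
        print("variance explained:", r**2)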

  14. Configural and Featural Face Processing Influences on Emotion Recognition in Schizophrenia and Bipolar Disorder.

    PubMed

    Van Rheenen, Tamsyn E; Joshua, Nicole; Castle, David J; Rossell, Susan L

    2017-03-01

    Emotion recognition impairments have been demonstrated in schizophrenia (Sz), but are less consistent and lesser in magnitude in bipolar disorder (BD). This may be related to the extent to which different face processing strategies are engaged during emotion recognition in each of these disorders. We recently showed that Sz patients had impairments in the use of both featural and configural face processing strategies, whereas BD patients were impaired only in the use of the latter. Here we examine the influence that these impairments have on facial emotion recognition in these cohorts. Twenty-eight individuals with Sz, 28 individuals with BD, and 28 healthy controls completed a facial emotion labeling task with two conditions designed to separate the use of featural and configural face processing strategies; part-based and whole-face emotion recognition. Sz patients performed worse than controls on both conditions, and worse than BD patients on the whole-face condition. BD patients performed worse than controls on the whole-face condition only. Configural processing deficits appear to influence the recognition of facial emotions in BD, whereas both configural and featural processing abnormalities impair emotion recognition in Sz. This may explain discrepancies in the profiles of emotion recognition between the disorders. (JINS, 2017, 23, 287-291).

  15. Spectrum of mucocutaneous, ocular and facial features and delineation of novel presentations in 62 classical Ehlers-Danlos syndrome patients.

    PubMed

    Colombi, M; Dordoni, C; Venturini, M; Ciaccio, C; Morlino, S; Chiarelli, N; Zanca, A; Calzavara-Pinton, P; Zoppi, N; Castori, M; Ritelli, M

    2017-12-01

    Classical Ehlers-Danlos syndrome (cEDS) is characterized by marked cutaneous involvement, according to the Villefranche nosology and its 2017 revision. However, the diagnostic flow-chart that prompts molecular testing is still based on experts' opinion rather than systematic published data. Here we report on 62 molecularly characterized cEDS patients with focus on skin, mucosal, facial, and articular manifestations. The major and minor Villefranche criteria, additional 11 mucocutaneous signs and 15 facial dysmorphic traits were ascertained and feature rates compared by sex and age. In our cohort, we did not observe any mandatory clinical sign. Skin hyperextensibility plus atrophic scars was the most frequent combination, whereas generalized joint hypermobility according to the Beighton score decreased with age. Skin was more commonly hyperextensible on elbows, neck, and knees. The sites more frequently affected by abnormal atrophic scarring were knees, face (especially forehead), pretibial area, and elbows. Facial dysmorphism commonly affected midface/orbital areas with epicanthal folds and infraorbital creases more commonly observed in young patients. Our findings suggest that the combination of ≥1 eye dysmorphism and facial/forehead scars may support the diagnosis in children. Minor acquired traits, such as molluscoid pseudotumors, subcutaneous spheroids, and signs of premature skin aging are equally useful in adults. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  16. [Analysis of different health status based on characteristics of the facial spectrum photometric color].

    PubMed

    Xu, Jiatuo; Wu, Hongjin; Lu, Luming; Tu, Liping; Zhang, Zhifeng; Chen, Xiao

    2012-12-01

    This paper aims to observe differences in the facial color of people with different health statuses using the spectrophotometric color measurement technique, according to the theory of facial color diagnosis in the Internal Classic. We gathered facial color information on the health status of subjects in a healthy group (n = 183), a sub-healthy group (n = 287) and a disease group (n = 370). The information included L, a, b and C values and the reflectance at wavelengths from 400 to 700 nm, measured at 8 facial points with a CM-2600D spectrophotometer. The results indicated that the overall complexion color values of the three groups differed significantly: subjects in the disease group appeared dark in complexion, whereas those in the sub-healthy group appeared pale. The L, a, b and C values showed significant differences to varying degrees (P < 0.05) at 6 of the points among the groups, and the central position of the face showed the most significant differences in all groups. Comparing the facial color information at the same point across the three groups, we obtained each group's diagnostically distinctive point. Spectrophotometric color measurement thus has some diagnostic value in distinguishing disease status from various states of health. The present method provides a promising quantitative basis for the Chinese medical inspection of complexion diagnosis.
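
    The L, a, b and C values above are CIELAB quantities: chroma is C* = sqrt(a*^2 + b*^2), and a difference between two measured colors can be summarized by the CIE76 color difference. A small sketch with hypothetical group means (not the study's data) follows.

        import numpy as np

        def chroma(a, b):
            """CIELAB chroma C* = sqrt(a*^2 + b*^2)."""
            return np.hypot(a, b)

        def delta_e76(lab1, lab2):
            """CIE76 color difference between two L*a*b* triples."""
            return float(np.linalg.norm(np.asarray(lab1)
                                        - np.asarray(lab2)))

        # Illustrative group-mean facial colors at one measurement point.
        healthy = (62.0, 14.0, 17.0)
        disease = (55.0, 12.0, 14.0)
        print("C* healthy:", chroma(healthy[1], healthy[2]))
        print("delta E:", delta_e76(healthy, disease))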

  17. Infant Expressions in an Approach/Withdrawal Framework

    PubMed Central

    Sullivan, Margaret Wolan

    2014-01-01

    Since the introduction of empirical methods for studying facial expression, the interpretation of infant facial expressions has generated much debate. The premise of this paper is that action tendencies of approach and withdrawal constitute a core organizational feature of emotion in humans, promoting coherence of behavior, facial signaling, and physiological responses. The approach/withdrawal framework can provide a taxonomy of contexts and the neurobehavioral framework for the systematic, empirical study of individual differences in expression, physiology, and behavior within individuals as well as across contexts over time. By adopting this framework in developmental work on basic emotion processes, it may be possible to better understand the behavioral principles governing facial displays, how individual differences in them are related to physiology and behavior, and how they function in context. PMID:25412273

  18. Self-Relevance Appraisal Influences Facial Reactions to Emotional Body Expressions

    PubMed Central

    Grèzes, Julie; Philip, Léonor; Chadwick, Michèle; Dezecache, Guillaume; Soussignan, Robert; Conty, Laurence

    2013-01-01

    People display facial reactions when exposed to others' emotional expressions, but exactly what mechanism mediates these facial reactions remains a debated issue. In this study, we manipulated two critical perceptual features that contribute to determining the significance of others' emotional expressions: the direction of attention (toward or away from the observer) and the intensity of the emotional display. Electromyographic activity over the corrugator muscle was recorded while participants observed videos of neutral to angry body expressions. Self-directed bodies induced greater corrugator activity than other-directed bodies; additionally, corrugator activity was only influenced by the intensity of anger expressed by self-directed bodies. These data support the hypothesis that rapid facial reactions are the outcome of self-relevant emotional processing. PMID:23405230

  19. Estimation of human emotions using thermal facial information

    NASA Astrophysics Data System (ADS)

    Nguyen, Hung; Kotani, Kazunori; Chen, Fan; Le, Bac

    2014-01-01

    In recent years, research on human emotion estimation using thermal infrared (IR) imagery has appealed to many researchers due to its invariance to visible illumination changes. Although infrared imagery is superior to visible imagery in its invariance to illumination changes and appearance differences, it has difficulty handling eyeglasses, which are not transparent in the thermal infrared spectrum. As a result, when using infrared imagery to analyze facial information, the eyeglass regions appear dark and the thermal information of the eyes is lost. We propose a temperature space method to correct the eyeglass effect using thermal facial information from neighboring facial regions, and then use Principal Component Analysis (PCA), the Eigen-space Method based on class-features (EMC), and a combined PCA-EMC method to classify human emotions from the corrected thermal images. We collected the Kotani Thermal Facial Emotion (KTFE) database and performed experiments, which show an improved accuracy rate in estimating human emotions.
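
    The shape of the classification stage can be pictured with the minimal sketch below: PCA for dimensionality reduction followed by a class-aware discriminant step. Linear discriminant analysis stands in here for the paper's EMC method, and the flattened "thermal images" are random placeholders, so this outlines the pipeline rather than reproducing the authors' implementation.

```python
# Minimal sketch of a PCA-plus-discriminant emotion classifier for thermal
# face images. LDA is used as a stand-in for the EMC step; the data are
# random placeholders, not the KTFE database.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
X = rng.random((200, 64 * 64))   # flattened 64x64 corrected thermal images
y = rng.integers(0, 5, 200)      # five hypothetical emotion labels

clf = make_pipeline(PCA(n_components=30), LinearDiscriminantAnalysis())
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean().round(3))
```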

  20. Overview of pediatric peripheral facial nerve paralysis: analysis of 40 patients.

    PubMed

    Özkale, Yasemin; Erol, İlknur; Saygı, Semra; Yılmaz, İsmail

    2015-02-01

    Peripheral facial nerve paralysis in children might be an alarming sign of serious disease such as malignancy, systemic disease, congenital anomalies, trauma, infection, middle ear surgery, and hypertension. The cases of 40 consecutive children and adolescents who were diagnosed with peripheral facial nerve paralysis at Baskent University Adana Hospital Pediatrics and Pediatric Neurology Unit between January 2010 and January 2013 were retrospectively evaluated. We determined that the most common cause was Bell palsy, followed by infection, tumor lesion, and suspected chemotherapy toxicity. We noted that younger patients had generally poorer outcome than older patients regardless of disease etiology. Peripheral facial nerve paralysis has been reported in many countries in America and Europe; however, knowledge about its clinical features, microbiology, neuroimaging, and treatment in Turkey is incomplete. The present study demonstrated that Bell palsy and infection were the most common etiologies of peripheral facial nerve paralysis. © The Author(s) 2014.

  1. Automatic Detection of Frontal Face Midline by Chain-coded Merlin-Farber Hough Transform

    NASA Astrophysics Data System (ADS)

    Okamoto, Daichi; Ohyama, Wataru; Wakabayashi, Tetsushi; Kimura, Fumitaka

    We propose a novel approach for detecting the facial midline (facial symmetry axis) from a frontal face image. The facial midline has several applications, for instance reducing the computational cost of facial feature extraction (FFE) and postoperative assessment for cosmetic or dental surgery. The proposed method detects the facial midline of a frontal face from an edge image, as the symmetry axis, using the Merlin-Farber Hough transform (MFHT). We also present a new performance-improvement scheme for midline detection by the MFHT. The main concept of the proposed scheme is the suppression of redundant votes in the Hough parameter space by introducing a chain-code representation of the binary edge image. Experimental results on a dataset of 2,409 images from the FERET database indicate that the proposed algorithm improves the accuracy of midline detection from 89.9% to 95.1% for face images with different scales and rotations.
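
    The voting scheme underlying this approach can be sketched as follows: every pair of edge points votes for its perpendicular bisector in (theta, rho) space, and the accumulator peak is taken as the symmetry axis. The chain-code vote suppression that the paper contributes is omitted here, and the accumulator resolution and test data are illustrative assumptions.

```python
# Hedged sketch in the spirit of the Merlin-Farber Hough transform: each pair
# of edge points votes for its perpendicular bisector; the accumulator peak
# approximates the facial midline. O(N^2) pairs, so best for small edge sets.
import numpy as np

def facial_midline(edge_points, n_theta=180, n_rho=200, rho_max=400.0):
    pts = np.asarray(edge_points, dtype=float)
    acc = np.zeros((n_theta, n_rho), dtype=np.int64)
    for i in range(len(pts) - 1):
        d = pts[i + 1:] - pts[i]                     # q - p for later points
        mid = (pts[i + 1:] + pts[i]) / 2.0           # pair midpoints
        norm = np.hypot(d[:, 0], d[:, 1])
        keep = norm > 1e-9
        nx, ny = d[keep, 0] / norm[keep], d[keep, 1] / norm[keep]
        theta = np.arctan2(ny, nx)                   # bisector normal angle
        rho = nx * mid[keep, 0] + ny * mid[keep, 1]  # bisector distance to origin
        flip = theta < 0                             # normalise theta to [0, pi)
        theta[flip] += np.pi
        rho[flip] = -rho[flip]
        ti = np.minimum((theta / np.pi * n_theta).astype(int), n_theta - 1)
        ri = np.clip(((rho / rho_max + 1) / 2 * n_rho).astype(int), 0, n_rho - 1)
        np.add.at(acc, (ti, ri), 1)                  # accumulate votes
    ti, ri = np.unravel_index(acc.argmax(), acc.shape)
    return ti / n_theta * np.pi, (2 * ri / n_rho - 1) * rho_max

# Synthetic test: two point columns mirrored about the vertical line x = 100.
ys = np.random.default_rng(2).uniform(20, 180, 40)
pts = np.concatenate([np.c_[np.full(40, 70.0), ys], np.c_[np.full(40, 130.0), ys]])
theta, rho = facial_midline(pts)
print(f"axis: theta = {np.degrees(theta):.1f} deg, rho = {rho:.1f}")  # ~0 deg, ~100
```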

  2. A novel homozygous HOXB1 mutation in a Turkish family with hereditary congenital facial paresis.

    PubMed

    Sahin, Yavuz; Güngör, Olcay; Ayaz, Akif; Güngör, Gülay; Sahin, Bedia; Yaykasli, Kursad; Ceylaner, Serdar

    2017-02-01

    Hereditary congenital facial paresis (HCFP) is characterized by isolated dysfunction of the facial nerve (CN VII) due to congenital cranial dysinnervation disorders. HCFP is genetically heterogeneous, and HOXB1 was the first gene identified. We report the clinical, radiologic, and molecular investigations of three patients from a large consanguineous Turkish family admitted for HCFP. High-throughput sequencing and Sanger sequencing in all patients revealed a novel homozygous mutation, p.Arg230Trp (c.688C>T), within the HOXB1 gene. This report brings the total number of HOXB1 mutations identified in HCFP to four. The results of this study emphasize that in individuals with congenital facial palsy accompanied by hearing loss and dysmorphic facial features, an HCFP-causing HOXB1 mutation should be kept in mind. Copyright © 2016 The Japanese Society of Child Neurology. Published by Elsevier B.V. All rights reserved.

  3. Discriminant Features and Temporal Structure of Nonmanuals in American Sign Language

    PubMed Central

    Benitez-Quiroz, C. Fabian; Gökgöz, Kadir; Wilbur, Ronnie B.; Martinez, Aleix M.

    2014-01-01

    To fully define the grammar of American Sign Language (ASL), a linguistic model of its nonmanuals needs to be constructed. While significant progress has been made to understand the features defining ASL manuals, after years of research, much still needs to be done to uncover the discriminant nonmanual components. The major barrier to achieving this goal is the difficulty in correlating facial features and linguistic features, especially since these correlations may be temporally defined. For example, a facial feature (e.g., head moves down) occurring at the end of the movement of another facial feature (e.g., brows move up) may specify a Hypothetical conditional, but only if this time relationship is maintained. In other instances, the single occurrence of a movement (e.g., brows move up) can be indicative of the same grammatical construction. In the present paper, we introduce a linguistic–computational approach to efficiently carry out this analysis. First, a linguistic model of the face is used to manually annotate a very large set of 2,347 videos of ASL nonmanuals (including tens of thousands of frames). Second, a computational approach is used to determine which features of the linguistic model are more informative of the grammatical rules under study. We used the proposed approach to study five types of sentences – Hypothetical conditionals, Yes/no questions, Wh-questions, Wh-questions postposed, and Assertions – plus their polarities – positive and negative. Our results verify several components of the standard model of ASL nonmanuals and, most importantly, identify several previously unreported features and their temporal relationship. Notably, our results uncovered a complex interaction between head position and mouth shape. These findings define some temporal structures of ASL nonmanuals not previously detected by other approaches. PMID:24516528
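
    As a schematic of the informativeness step, one simple choice is to rank annotated features by mutual information with the sentence type, as in the sketch below. The feature names and binary annotations are hypothetical stand-ins, and mutual information here is only a stand-in for whatever criterion the authors actually used.

```python
# Illustrative sketch: rank nonmanual features by how informative they are
# about sentence type via mutual information. Feature names and annotations
# are invented; this is not the paper's method.
import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(3)
features = ["brow_raise", "head_down", "head_tilt", "mouth_open", "eye_blink"]
X = rng.integers(0, 2, size=(500, len(features)))  # per-clip binary annotations
y = rng.integers(0, 5, size=500)                   # five sentence types

mi = mutual_info_classif(X, y, discrete_features=True, random_state=0)
for name, score in sorted(zip(features, mi), key=lambda t: -t[1]):
    print(f"{name:12s} MI = {score:.3f}")
```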

  4. 22q11.2 deletion syndrome in diverse populations.

    PubMed

    Kruszka, Paul; Addissie, Yonit A; McGinn, Daniel E; Porras, Antonio R; Biggs, Elijah; Share, Matthew; Crowley, T Blaine; Chung, Brian H Y; Mok, Gary T K; Mak, Christopher C Y; Muthukumarasamy, Premala; Thong, Meow-Keong; Sirisena, Nirmala D; Dissanayake, Vajira H W; Paththinige, C Sampath; Prabodha, L B Lahiru; Mishra, Rupesh; Shotelersuk, Vorasuk; Ekure, Ekanem Nsikak; Sokunbi, Ogochukwu Jidechukwu; Kalu, Nnenna; Ferreira, Carlos R; Duncan, Jordann-Mishael; Patil, Siddaramappa Jagdish; Jones, Kelly L; Kaplan, Julie D; Abdul-Rahman, Omar A; Uwineza, Annette; Mutesa, Leon; Moresco, Angélica; Obregon, María Gabriela; Richieri-Costa, Antonio; Gil-da-Silva-Lopes, Vera L; Adeyemo, Adebowale A; Summar, Marshall; Zackai, Elaine H; McDonald-McGinn, Donna M; Linguraru, Marius George; Muenke, Maximilian

    2017-04-01

    22q11.2 deletion syndrome (22q11.2 DS) is the most common microdeletion syndrome and is underdiagnosed in diverse populations. This syndrome has a variable phenotype and affects multiple systems, making early recognition imperative. In this study, individuals from diverse populations with 22q11.2 DS were evaluated clinically and by facial analysis technology. Clinical information from 106 individuals and images from 101 were collected from individuals with 22q11.2 DS from 11 countries; the average age was 11.7 years and 47% were male. Individuals were grouped into categories of African descent (African), Asian, and Latin American. We found that the phenotype of 22q11.2 DS varied across population groups. Only two findings, congenital heart disease and learning problems, were found in greater than 50% of participants. When comparing the clinical features of 22q11.2 DS in each population, the proportion of individuals within each clinical category was statistically different except for learning problems and ear anomalies (P < 0.05). However, when Africans were removed from the analysis, six additional clinical features were found to be independent of ethnicity (P ≥ 0.05). Using facial analysis technology, we compared 156 Caucasian, African, Asian, and Latin American individuals with 22q11.2 DS with 156 age- and gender-matched controls and found that sensitivity and specificity were greater than 96% for all populations. In summary, we present the varied findings from global populations with 22q11.2 DS and demonstrate how facial analysis technology can assist clinicians in making accurate 22q11.2 DS diagnoses. This work will assist in earlier detection and in increasing recognition of 22q11.2 DS throughout the world. © 2017 Wiley Periodicals, Inc.
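
    For reference, the reported screening figures reduce to simple confusion-matrix ratios; the sketch below computes sensitivity and specificity for a hypothetical 156-case/156-control split with a handful of invented misclassifications.

```python
# Worked example of the screening metrics; the predictions are made up.
import numpy as np

y_true = np.array([1] * 156 + [0] * 156)  # 156 cases, 156 matched controls
y_pred = y_true.copy()
y_pred[:5] ^= 1    # five cases missed (false negatives)
y_pred[-5:] ^= 1   # five controls flagged (false positives)

tp = np.sum((y_true == 1) & (y_pred == 1))
tn = np.sum((y_true == 0) & (y_pred == 0))
fn = np.sum((y_true == 1) & (y_pred == 0))
fp = np.sum((y_true == 0) & (y_pred == 1))
print(f"sensitivity = {tp / (tp + fn):.3f}, specificity = {tn / (tn + fp):.3f}")
```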

  5. Colour influences perception of facial emotions but this effect is impaired in healthy ageing and schizophrenia.

    PubMed

    Silver, Henry; Bilker, Warren B

    2015-01-01

    Social cognition is commonly assessed by identification of emotions in facial expressions. The presence of colour, a salient feature of stimuli, might influence emotional face perception. We administered 2 tests of facial emotion recognition, the Emotion Recognition Test (ER40) using colour pictures and the Penn Emotional Acuity Test using monochromatic pictures, to 37 young healthy, 39 old healthy, and 37 schizophrenic men. Among young healthy individuals, recognition of emotions was more accurate and faster in colour than in monochromatic pictures. Compared to the younger group, older healthy individuals revealed impairment in identification of sad expressions in colour but not monochromatic pictures. Schizophrenia patients showed greater impairment in colour than monochromatic pictures of neutral and sad expressions, and in overall total score, compared to both healthy groups. Patients showed significant correlations between cognitive impairment and perception of emotion in colour but not monochromatic pictures. Colour enhances perception of general emotional cues, and this contextual effect is impaired in healthy ageing and schizophrenia. The effects of colour need to be considered in interpreting and comparing studies of emotion perception. Coloured face stimuli may be more sensitive to emotion processing impairments but less selective for emotion-specific information than monochromatic stimuli. This may impact on their utility in early detection of impairments and investigations of underlying mechanisms.

  6. Content Validity of Patient-Reported Outcome Instruments used with Pediatric Patients with Facial Differences: A Systematic Review.

    PubMed

    Wickert, Natasha M; Wong Riff, Karen W Y; Mansour, Mark; Forrest, Christopher R; Goodacre, Timothy E E; Pusic, Andrea L; Klassen, Anne F

    2018-01-01

    Objective: The aim of this systematic review was to identify patient-reported outcome (PRO) instruments used in research with children/youth with conditions associated with facial differences, and to identify the health concepts measured. Design: MEDLINE, EMBASE, CINAHL, and PsycINFO were searched from 2004 to 2016 to identify PRO instruments used in acne vulgaris, birthmarks, burns, ear anomalies, facial asymmetries, and facial paralysis patients. We performed a content analysis whereby the items were coded to identify concepts and categorized as positive or negative content or phrasing. Results: A total of 7,835 articles were screened; 6 generic and 11 condition-specific PRO instruments were used in 96 publications. Condition-specific instruments were for acne (four), oral health (two), dermatology (one), facial asymmetries (two), microtia (one), and burns (one). The PRO instruments provided 554 items (295 generic; 259 condition specific) that were sorted into 4 domains, 11 subdomains, and 91 health concepts. The most common domain was psychological (n = 224 items). Of the identified items, 76% had negative content or phrasing (e.g., "Because of the way my face looks I wish I had never been born"). Given the small number of items measuring facial appearance (n = 19) and function (n = 22), the PRO instruments reviewed lacked content validity for patients whose condition impacted facial function and/or appearance. Conclusions: Treatments can change facial appearance and function. This review draws attention to a problem with content validity in existing PRO instruments. Our team is now developing a new PRO instrument called FACE-Q Kids to address this problem.

  7. Developmental Changes in the Perception of Adult Facial Age

    ERIC Educational Resources Information Center

    Gross, Thomas F.

    2007-01-01

    The author studied children's (aged 5-16 years) and young adults' (aged 18-22 years) perception and use of facial features to discriminate the age of mature adult faces. In Experiment 1, participants rated the age of unaltered and transformed (eyes, nose, eyes and nose, and whole face blurred) adult faces (aged 20-80 years). In Experiment 2,…

  8. Eruptive Facial Postinflammatory Lentigo: Clinical and Dermatoscopic Features.

    PubMed

    Cabrera, Raul; Puig, Susana; Larrondo, Jorge; Castro, Alex; Valenzuela, Karen; Sabatini, Natalia

    2016-11-01

    The face has not been considered a common site of fixed drug eruption, and dermatoscopic studies of this condition at this site are lacking. The authors sought to characterize the clinical and dermatoscopic features of 8 cases of an eruptive facial postinflammatory lentigo. The authors conducted a retrospective review of 8 cases with similar clinical and dermatoscopic findings seen at 2 medical centers in 2 countries during 2010-2014. A total of 8 patients (2 males and 6 females), with ages ranging from 34 to 62 years (mean: 48), presented with the abrupt onset of a single, generally asymmetrical, brown-pink facial macule with an average size of 1.9 cm after ingestion of a nonsteroidal anti-inflammatory drug; the lesion lasted for several months. Dermatoscopy mainly showed a pseudonetwork or uniform areas of brown pigmentation, brown or blue-gray dots, red dots, and/or telangiectatic vessels. In the epidermis, histopathology showed mild hydropic degeneration and focal melanin hyperpigmentation. Melanin was found free in the dermis or within macrophages, along with a mild perivascular mononuclear infiltrate. The authors describe eruptive facial postinflammatory lentigo as a new variant of fixed drug eruption on the face.

  9. Action Unit Models of Facial Expression of Emotion in the Presence of Speech

    PubMed Central

    Shah, Miraj; Cooper, David G.; Cao, Houwei; Gur, Ruben C.; Nenkova, Ani; Verma, Ragini

    2014-01-01

    Automatic recognition of emotion using facial expressions in the presence of speech poses a unique challenge because talking reveals clues to the affective state of the speaker but distorts the canonical expression of emotion on the face. We introduce a corpus of acted emotion expression where speech is either present (talking) or absent (silent). The corpus is uniquely suited for analysis of the interplay between the two conditions. We use a multimodal decision level fusion classifier to combine models of emotion from talking and silent faces as well as from audio to recognize five basic emotions: anger, disgust, fear, happy and sad. Our results strongly indicate that emotion prediction from action unit facial features is less accurate when the person is talking. Modeling talking and silent expressions separately and fusing the two models greatly improves accuracy of prediction in the talking setting. The advantages are most pronounced when silent and talking face models are fused with predictions from audio features. In this multi-modal prediction, both the combination of modalities and the separate models of talking and silent facial expression of emotion contribute to the improvement. PMID:25525561
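
    Decision-level fusion of this kind can be sketched as a weighted average of per-modality class posteriors, as below. The modality names, weights, and probability vectors are illustrative assumptions, not the paper's configuration.

```python
# Hedged sketch of decision-level fusion: per-modality classifiers emit class
# posteriors, combined here by weighted averaging into a single decision.
import numpy as np

EMOTIONS = ["anger", "disgust", "fear", "happy", "sad"]

def fuse(posteriors, weights):
    """posteriors: dict modality -> length-5 probability vector."""
    total = sum(weights[m] for m in posteriors)
    fused = sum(weights[m] / total * np.asarray(p) for m, p in posteriors.items())
    return EMOTIONS[int(np.argmax(fused))], fused

posteriors = {  # invented example outputs from three per-modality models
    "talking_face": [0.10, 0.05, 0.15, 0.50, 0.20],
    "silent_face":  [0.20, 0.10, 0.10, 0.40, 0.20],
    "audio":        [0.05, 0.05, 0.10, 0.60, 0.20],
}
weights = {"talking_face": 1.0, "silent_face": 1.0, "audio": 1.5}
label, fused = fuse(posteriors, weights)
print(label, np.round(fused, 3))
```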

  10. Utility of optical facial feature and arm movement tracking systems to enable text communication in critically ill patients who cannot otherwise communicate.

    PubMed

    Muthuswamy, M B; Thomas, B N; Williams, D; Dingley, J

    2014-09-01

    Patients recovering from critical illness, especially those with critical illness related neuropathy, myopathy, or burns to the face, arms, and hands, are often unable to communicate by writing, speech (due to tracheostomy), or lip reading. This may frustrate both patient and staff. Two low-cost movement tracking systems, one based on a laptop webcam and one on a laser/optical gaming system sensor, were utilised as control inputs for on-screen text creation software, and both were evaluated as communication tools in volunteers. Two methods were used to control an on-screen cursor to create short sentences via an on-screen keyboard: (i) webcam-based facial feature tracking, and (ii) arm movement tracking by a laser/camera gaming sensor with modified software. Sixteen volunteers with simulated tracheostomy and bandaged arms (to simulate communication via gross movements of a burned limb) communicated 3 standard messages using each system (total 48 per system) in random sequence. Ten and 13 minor typographical errors occurred with the two systems, respectively; however, all messages were comprehensible. The average time for sentence formation was 81 s (range 58-120 s) with the facial feature tracking system and 104 s (range 60-160 s) with the arm movement tracking system (P<0.001, 2-tailed independent-sample t-test). Both devices may be potentially useful communication aids for patients in general and burns critical care units who cannot communicate by conventional means due to the nature of their injuries. Copyright © 2014 Elsevier Ltd and ISBI. All rights reserved.
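
    A toy version of the webcam-based pointer idea can be assembled from stock components, as in the sketch below: an off-the-shelf OpenCV Haar cascade localizes the face, and the face centre is mapped to normalized cursor coordinates that a real system would feed to on-screen keyboard software. The study's actual tracker is not public, so everything here is an illustrative stand-in.

```python
# Toy stand-in for the webcam-based cursor control idea (not the study's
# software). Tracks the face with a stock Haar cascade and prints normalized
# cursor coordinates. Requires a webcam; press Esc to quit.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture(0)  # default webcam

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces[:1]:          # use the first detected face
        cx = (x + w / 2) / frame.shape[1]   # normalized cursor x in 0..1
        cy = (y + h / 2) / frame.shape[0]   # normalized cursor y in 0..1
        print(f"cursor at ({cx:.2f}, {cy:.2f})")  # would drive the keyboard
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("face tracker", frame)
    if cv2.waitKey(1) == 27:                # Esc key
        break
cap.release()
cv2.destroyAllWindows()
```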

  11. The Dynamic Features of Lip Corners in Genuine and Posed Smiles

    PubMed Central

    Guo, Hui; Zhang, Xiao-Hui; Liang, Jun; Yan, Wen-Jing

    2018-01-01

    The smile is a frequently expressed facial expression that typically conveys a positive emotional state and friendly intent. However, human beings have also learned how to fake smiles, typically by controlling the mouth to produce a genuine-looking expression. This is often accompanied by inaccuracies that can allow others to determine that the smile is false. Mouth movement is one of the most striking features of the smile, yet our understanding of its dynamic elements is still limited. The present study analyzes the dynamic features of lip corners, and considers how they differ between genuine and posed smiles. Employing computer vision techniques, we investigated elements such as the duration, intensity, speed, and symmetry of the lip corners, and certain irregularities in genuine and posed smiles obtained from the UvA-NEMO Smile Database. After utilizing the facial analysis tool OpenFace, we further propose a new approach to segmenting the onset, apex, and offset phases of smiles, as well as a means of measuring irregularities and symmetry in facial expressions. We extracted these features according to 2D and 3D coordinates and conducted an analysis. The results reveal that genuine smiles have higher values than posed smiles for onset, offset, apex, and total durations, as well as for offset displacement and a variable we termed Irregularity-b (the SD of the apex phase). Conversely, genuine smiles tended to have lower values for onset and offset speeds, Irregularity-a (the rate of peaks), Symmetry-a (the correlation between left and right facial movements), and Symmetry-d (differences in onset frame numbers between the left and right faces). The findings from the present study are compared to those of previous research, and certain speculations are made. PMID:29515508
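
    The kind of dynamic features described can be illustrated as below: given left and right lip-corner displacement series (such as OpenFace landmark trajectories would provide), the apex is segmented by a threshold relative to peak displacement, durations are read off the segment boundaries, and symmetry is taken as the left-right correlation. The threshold, synthetic series, and phase definitions are simplifying assumptions, not the paper's exact measures.

```python
# Hedged sketch of smile-phase segmentation and symmetry, on synthetic
# lip-corner displacement series; thresholds and data are assumptions.
import numpy as np

def smile_phases(disp, frac=0.9):
    """Return (onset, apex, offset) durations in frames, taking the apex as
    the frames where displacement exceeds `frac` of its peak."""
    above = np.flatnonzero(disp >= frac * disp.max())
    start, end = above[0], above[-1]
    return start, end - start + 1, len(disp) - end - 1

t = np.linspace(0, np.pi, 120)
left = np.sin(t) ** 2           # synthetic smooth smile trajectory
right = np.sin(t + 0.05) ** 2   # slightly lagged right corner

print("onset/apex/offset (frames):", smile_phases(left))
print("left-right symmetry r =", np.corrcoef(left, right)[0, 1].round(3))
```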

  12. Geometric facial comparisons in speed-check photographs.

    PubMed

    Buck, Ursula; Naether, Silvio; Kreutz, Kerstin; Thali, Michael

    2011-11-01

    In many cases, it is not possible to hold motorists to account for considerable speeding offences, because they deny being the driver in the speed-check photograph. An anthropological photo-to-photo comparison of facial features can be very difficult, depending on the quality of the photographs. One difficulty of that analysis method is that the comparison photographs of the presumed driver are taken with a different camera or camera lens and from a different angle than the speed-check photo. Taking a comparison photograph with exactly the same camera setup is almost impossible; therefore, only an imprecise comparison of the individual facial features is possible, and the geometry and position of each facial feature, for example the distance between the eyes or the positions of the ears, cannot be taken into consideration. We applied a new method using 3D laser scanning, optical surface digitalization, and photogrammetric calculation of the speed-check photo, which enables a geometric comparison. The influence of the focal length and the distortion of the objective lens are thus eliminated, and the precise position and viewing direction of the speed-check camera are calculated. Even in cases of low-quality images, or when the face of the driver is partly hidden, this method delivers good results. This new method, geometric comparison, is evaluated and validated in a dedicated study described in this article.
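
    The photogrammetric core of the method, mapping 3D head-scan points into the speed-check camera's image plane once its pose and lens parameters have been recovered, comes down to the standard pinhole projection sketched below; the intrinsics and pose shown are invented example values.

```python
# Standard pinhole projection, x ~ K (R X + t); example values are invented.
import numpy as np

def project(points3d, K, R, t):
    """Project (N, 3) world points to (N, 2) pixel coordinates."""
    cam = R @ points3d.T + t[:, None]  # world -> camera coordinates
    img = K @ cam                      # apply camera intrinsics
    return (img[:2] / img[2]).T        # perspective divide to pixels

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])  # focal length, centre
R, t = np.eye(3), np.array([0.0, 0.0, 4.0])                  # camera 4 m away
face_pts = np.array([[0.03, 0.02, 0.1], [-0.03, 0.02, 0.1]]) # e.g. eye corners (m)
print(project(face_pts, K, R, t))
```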

  13. A Common Neural Code for Perceived and Inferred Emotion

    PubMed Central

    Saxe, Rebecca

    2014-01-01

    Although the emotions of other people can often be perceived from overt reactions (e.g., facial or vocal expressions), they can also be inferred from situational information in the absence of observable expressions. How does the human brain make use of these diverse forms of evidence to generate a common representation of a target's emotional state? In the present research, we identify neural patterns that correspond to emotions inferred from contextual information and find that these patterns generalize across different cues from which an emotion can be attributed. Specifically, we use functional neuroimaging to measure neural responses to dynamic facial expressions with positive and negative valence and to short animations in which the valence of a character's emotion could be identified only from the situation. Using multivoxel pattern analysis, we test for regions that contain information about the target's emotional state, identifying representations specific to a single stimulus type and representations that generalize across stimulus types. In regions of medial prefrontal cortex (MPFC), a classifier trained to discriminate emotional valence for one stimulus (e.g., animated situations) could successfully discriminate valence for the remaining stimulus (e.g., facial expressions), indicating a representation of valence that abstracts away from perceptual features and generalizes across different forms of evidence. Moreover, in a subregion of MPFC, this neural representation generalized to trials involving subjectively experienced emotional events, suggesting partial overlap in neural responses to attributed and experienced emotions. These data provide a step toward understanding how the brain transforms stimulus-bound inputs into abstract representations of emotion. PMID:25429141
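
    The cross-stimulus decoding logic can be illustrated as below: a classifier trained on multivoxel patterns from one stimulus type (animated situations) is tested on patterns from the other (facial expressions), with above-chance transfer indicating a shared valence code. The "patterns" are simulated with a planted shared axis; this is a schematic of the analysis logic, not the study's fMRI pipeline.

```python
# Schematic of cross-stimulus MVPA: train on one stimulus type, test on the
# other. Patterns are simulated with a shared valence axis plus noise.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(4)
axis = rng.normal(0, 1, 50)  # planted shared "valence" direction (50 voxels)

def patterns(n, valence):
    """Simulate n multivoxel patterns for one valence (+1 or -1)."""
    return valence * axis + rng.normal(0, 2.0, (n, 50))

X_anim = np.vstack([patterns(40, +1), patterns(40, -1)])  # animations
X_face = np.vstack([patterns(40, +1), patterns(40, -1)])  # facial expressions
y = np.array([1] * 40 + [0] * 40)

clf = LinearSVC(C=0.1).fit(X_anim, y)  # train on animated situations only
print("cross-stimulus accuracy:", clf.score(X_face, y))
```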

  14. A common neural code for perceived and inferred emotion.

    PubMed

    Skerry, Amy E; Saxe, Rebecca

    2014-11-26

    Although the emotions of other people can often be perceived from overt reactions (e.g., facial or vocal expressions), they can also be inferred from situational information in the absence of observable expressions. How does the human brain make use of these diverse forms of evidence to generate a common representation of a target's emotional state? In the present research, we identify neural patterns that correspond to emotions inferred from contextual information and find that these patterns generalize across different cues from which an emotion can be attributed. Specifically, we use functional neuroimaging to measure neural responses to dynamic facial expressions with positive and negative valence and to short animations in which the valence of a character's emotion could be identified only from the situation. Using multivoxel pattern analysis, we test for regions that contain information about the target's emotional state, identifying representations specific to a single stimulus type and representations that generalize across stimulus types. In regions of medial prefrontal cortex (MPFC), a classifier trained to discriminate emotional valence for one stimulus (e.g., animated situations) could successfully discriminate valence for the remaining stimulus (e.g., facial expressions), indicating a representation of valence that abstracts away from perceptual features and generalizes across different forms of evidence. Moreover, in a subregion of MPFC, this neural representation generalized to trials involving subjectively experienced emotional events, suggesting partial overlap in neural responses to attributed and experienced emotions. These data provide a step toward understanding how the brain transforms stimulus-bound inputs into abstract representations of emotion. Copyright © 2014 the authors 0270-6474/14/3315997-12$15.00/0.

  15. Two Ways to Facial Expression Recognition? Motor and Visual Information Have Different Effects on Facial Expression Recognition.

    PubMed

    de la Rosa, Stephan; Fademrecht, Laura; Bülthoff, Heinrich H; Giese, Martin A; Curio, Cristóbal

    2018-06-01

    Motor-based theories of facial expression recognition propose that the visual perception of facial expression is aided by sensorimotor processes that are also used for the production of the same expression. Accordingly, sensorimotor and visual processes should provide congruent emotional information about a facial expression. Here, we report evidence that challenges this view. Specifically, the repeated execution of facial expressions has the opposite effect on the recognition of a subsequent facial expression from that of the repeated viewing of facial expressions. Moreover, the findings of the motor condition, but not of the visual condition, were correlated with a nonsensory condition in which participants imagined an emotional situation. These results can be well accounted for by the idea that facial expression recognition is not always mediated by motor processes but can also be achieved on the basis of visual information alone.

  16. Caring more and knowing more reduces age-related differences in emotion perception.

    PubMed

    Stanley, Jennifer Tehan; Isaacowitz, Derek M

    2015-06-01

    Traditional emotion perception tasks show that older adults are less accurate than are young adults at recognizing facial expressions of emotion. Recently, we proposed that socioemotional factors might explain why older adults seem impaired in lab tasks but less so in everyday life (Isaacowitz & Stanley, 2011). Thus, in the present research we empirically tested whether socioemotional factors such as motivation and familiarity can alter this pattern of age effects. In 1 task, accountability instructions eliminated age differences in the traditional emotion perception task. Using a novel emotion perception paradigm featuring spontaneous dynamic facial expressions of a familiar romantic partner versus a same-age stranger, we found that age differences in emotion perception accuracy were attenuated in the familiar partner condition, relative to the stranger condition. Taken together, the results suggest that both overall accuracy as well as specific patterns of age effects differ appreciably between traditional emotion perception tasks and emotion perception within a socioemotional context. (c) 2015 APA, all rights reserved.

  17. Caring More and Knowing More Reduces Age-Related Differences in Emotion Perception

    PubMed Central

    Stanley, Jennifer Tehan; Isaacowitz, Derek M.

    2015-01-01

    Traditional emotion perception tasks show that older adults are less accurate than young adults at recognizing facial expressions of emotion. Recently, we proposed that socioemotional factors might explain why older adults seem impaired in lab tasks but less so in everyday life (Isaacowitz & Stanley, 2011). Thus, in the present research we empirically tested whether socioemotional factors such as motivation and familiarity can alter this pattern of age effects. In one task, accountability instructions eliminated age differences in the traditional emotion perception task. Using a novel emotion perception paradigm featuring spontaneous dynamic facial expressions of a familiar romantic partner versus a same-age stranger, we found that age differences in emotion perception accuracy were attenuated in the familiar partner condition, relative to the stranger condition. Taken together, the results suggest that both overall accuracy as well as specific patterns of age effects differ appreciably between traditional emotion perception tasks and emotion perception within a socioemotional context. PMID:26030775

  18. Specific Impairments in the Recognition of Emotional Facial Expressions in Parkinson’s Disease

    PubMed Central

    Clark, Uraina S.; Neargarder, Sandy; Cronin-Golomb, Alice

    2008-01-01

    Studies investigating the ability to recognize emotional facial expressions in non-demented individuals with Parkinson’s disease (PD) have yielded equivocal findings. A possible reason for this variability may lie in the confounding of emotion recognition with cognitive task requirements, a confound arising from the lack of a control condition using non-emotional stimuli. The present study examined emotional facial expression recognition abilities in 20 non-demented patients with PD and 23 control participants relative to their performances on a non-emotional landscape categorization test with comparable task requirements. We found that PD participants were normal on the control task but exhibited selective impairments in the recognition of facial emotion, specifically for anger (driven by those with right hemisphere pathology) and surprise (driven by those with left hemisphere pathology), even when controlling for depression level. Male but not female PD participants further displayed specific deficits in the recognition of fearful expressions. We suggest that the neural substrates that may subserve these impairments include the ventral striatum, amygdala, and prefrontal cortices. Finally, we observed that in PD participants, deficiencies in facial emotion recognition correlated with higher levels of interpersonal distress, which calls attention to the significant psychosocial impact that facial emotion recognition impairments may have on individuals with PD. PMID:18485422

  19. Deficits in recognizing disgust facial expressions and Internet addiction: Perceived stress as a mediator.

    PubMed

    Chen, Zhongting; Poon, Kai-Tak; Cheng, Cecilia

    2017-08-01

    Studies have examined social maladjustment among individuals with Internet addiction, but little is known about their deficits in specific social skills and the underlying psychological mechanisms. The present study filled these gaps by (a) establishing a relationship between deficits in facial expression recognition and Internet addiction, and (b) examining the mediating role of perceived stress that explains this hypothesized relationship. Ninety-seven participants completed validated questionnaires that assessed their levels of Internet addiction and perceived stress, and performed a computer-based task that measured their facial expression recognition. The results revealed a positive relationship between deficits in recognizing disgust facial expression and Internet addiction, and this relationship was mediated by perceived stress. However, the same findings did not apply to other facial expressions. Ad hoc analyses showed that recognizing disgust was more difficult than recognizing other facial expressions, reflecting that the former task assesses a social skill that requires cognitive astuteness. The present findings contribute to the literature by identifying a specific social skill deficit related to Internet addiction and by unveiling a psychological mechanism that explains this relationship, thus providing more concrete guidelines for practitioners to strengthen specific social skills that mitigate both perceived stress and Internet addiction. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.
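
    The mediation claim can be pictured with the classic two-step regression decomposition sketched below (the effect of the deficit on stress, times the effect of stress on addiction controlling for the deficit). The simulated data and coefficients are invented, and the study's own analysis may well have used a different estimator.

```python
# Illustrative mediation sketch on simulated data; not the study's analysis.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
deficit = rng.normal(0, 1, 97)                  # disgust-recognition deficit (X)
stress = 0.5 * deficit + rng.normal(0, 1, 97)   # perceived stress (M)
addiction = 0.6 * stress + rng.normal(0, 1, 97) # Internet addiction (Y)

a = sm.OLS(stress, sm.add_constant(deficit)).fit().params[1]  # X -> M path
b = sm.OLS(addiction,
           sm.add_constant(np.column_stack([deficit, stress]))
           ).fit().params[2]                                  # M -> Y given X
print(f"indirect (mediated) effect a*b = {a * b:.3f}")
```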

  20. [Determination of somatotype of man in cranio-facial personality identification].

    PubMed

    2004-01-01

    On the basis of their independent research and through analysis of published data, the authors propose quantitative criteria for diagnosing a person's somatotype from dimensional features of the face and skull. M. A. Negasheva's method, based on discriminant analysis of 7 measurement features, was used for individual diagnosis of somatotype according to V. V. Bunak's scheme (pectoral, muscular, abdominal, and indefinite somatotypes). The authors propose 2 diagnostic models based on linear and discriminant analysis of 11 and 7 skull measurement features, respectively. The diagnostic accuracy for the main male somatotypes is 87% and 64.4%, respectively, with canonical correlations of 0.574 and 0.292. The designed methods can be used in forensic medicine for craniofacial and portrait expertise.
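
    As a schematic of the discriminant-analysis approach, the sketch below cross-validates linear discriminant analysis on seven measurement features across the four somatotype classes; the features and data are hypothetical stand-ins, not the published models.

```python
# Schematic of somatotype diagnosis by discriminant analysis; data are
# random stand-ins for the seven craniofacial measurement features.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
X = rng.normal(size=(240, 7))   # 7 facial/skull measurements per individual
y = rng.integers(0, 4, 240)     # pectoral / muscular / abdominal / indefinite

lda = LinearDiscriminantAnalysis()
print("CV accuracy:", cross_val_score(lda, X, y, cv=5).mean().round(3))
```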
