Axelrod, Vadim; Yovel, Galit
2010-08-15
Most studies of face identity have excluded external facial features by either removing them or covering them with a hat. However, external facial features may modify the representation of internal facial features. Here we assessed whether the representation of face identity in the fusiform face area (FFA), which has been primarily studied for internal facial features, is modified by differences in external facial features. We presented faces in which external and internal facial features were manipulated independently. Our findings show that the FFA was sensitive to differences in external facial features, but this effect was significantly larger when the external and internal features were aligned than misaligned. We conclude that the FFA generates a holistic representation in which the internal and the external facial features are integrated. These results indicate that to better understand real-life face recognition both external and internal features should be included. Copyright (c) 2010 Elsevier Inc. All rights reserved.
Qualitative and Quantitative Analysis for Facial Complexion in Traditional Chinese Medicine
Zhao, Changbo; Li, Guo-zheng; Li, Fufeng; Wang, Zhi; Liu, Chang
2014-01-01
Facial diagnosis is an important and very intuitive diagnostic method in Traditional Chinese Medicine (TCM). However, because it is qualitative and relies on subjective experience, traditional facial diagnosis has certain limitations in clinical medicine. Computerized inspection methods provide classification models to recognize facial complexion (including color and gloss). However, previous works study only the classification of facial complexion, which we regard as qualitative analysis; the severity or degree of facial complexion, a quantitative question, has not yet been reported. This paper aims to provide both qualitative and quantitative analysis of facial complexion. We propose a novel feature representation of facial complexion computed from the patient's whole face. The features are built from four chromaticity bases split by luminance distribution in CIELAB color space. The chromaticity bases are constructed from the facial dominant colors using two-level clustering; the optimal luminance split is determined by simple experimental comparison. The features prove more discriminative than previous facial complexion representations. Complexion recognition proceeds by training an SVM classifier with optimal model parameters. In addition, further improved features are developed by the weighted fusion of five local regions. Extensive experimental results show that the proposed features achieve the highest facial color recognition performance, with a total accuracy of 86.89%. Furthermore, the proposed recognition framework can analyze both the color and the gloss degree of facial complexion by learning a ranking function. PMID:24967342
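As a rough illustration of the pipeline this abstract describes, the sketch below builds chromaticity bases by clustering (a*, b*) values and then histograms pixels per luminance band. The cluster count, the luminance split point, and the plain k-means used here are illustrative assumptions, not the authors' implementation:

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means on 2-D chromaticity points (a*, b*)."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda j: (p[0] - centers[j][0]) ** 2 + (p[1] - centers[j][1]) ** 2)
            clusters[i].append(p)
        for j, c in enumerate(clusters):
            if c:  # keep the old center if a cluster emptied out
                centers[j] = (sum(p[0] for p in c) / len(c),
                              sum(p[1] for p in c) / len(c))
    return centers

def complexion_features(pixels, bases, lum_edges):
    """Histogram of nearest chromaticity base, computed per luminance band,
    then normalised and concatenated into one feature vector."""
    n_bands = len(lum_edges) + 1
    feat = [[0] * len(bases) for _ in range(n_bands)]
    for L, a, b in pixels:
        band = sum(L > e for e in lum_edges)  # which luminance band this pixel falls in
        i = min(range(len(bases)),
                key=lambda j: (a - bases[j][0]) ** 2 + (b - bases[j][1]) ** 2)
        feat[band][i] += 1
    total = len(pixels)
    return [c / total for row in feat for c in row]
```

The resulting vector would then feed an SVM classifier, as in the abstract; the classifier itself is omitted here.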
Feature selection from a facial image for distinction of sasang constitution.
Koo, Imhoi; Kim, Jong Yeol; Kim, Myoung Geun; Kim, Keun Ho
2009-09-01
Recently, oriental medicine has received attention for providing personalized medicine through consideration of the unique nature and constitution of individual patients. With the eventual goal of globalization, the current trend in oriental medicine research is standardization by adopting western scientific methods, which could represent a scientific revolution. The purpose of this study is to establish methods for finding statistically significant features in a facial image for distinguishing constitution, and to show the meaning of those features. From facial photographs, facial elements are analyzed in terms of distances, angles and distance ratios, yielding 1225, 61,250 and 749,700 features, respectively. Because of this very large number of facial features, it is quite difficult to determine which are truly meaningful. We suggest a process for the efficient analysis of facial features that includes removal of outliers, control of missing data to guarantee data confidence, and calculation of statistical significance by ANOVA. We show the statistical properties of the selected features according to the different constitutions using the nine distance, 10 angle and 10 distance-ratio features that are finally established. Additionally, the Sasang constitutional meaning of the selected features is shown here.
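The reported feature counts are self-consistent with 50 landmark points per face: C(50,2) = 1225 pairwise distances and C(1225,2) = 749,700 ratios of distance pairs (this is an inference from the numbers, not stated in the abstract; the angle count presumably follows a similar ordered construction). The sketch below checks that arithmetic and implements the per-feature one-way ANOVA F statistic the selection step relies on:

```python
from math import comb

# Feature counts from the abstract, assuming 50 facial landmarks (an inference):
assert comb(50, 2) == 1225          # pairwise distances
assert comb(1225, 2) == 749_700     # ratios of pairs of distances

def anova_f(groups):
    """One-way ANOVA F statistic for one facial feature measured across
    constitution groups (a list of per-group value lists)."""
    all_vals = [v for g in groups for v in g]
    grand = sum(all_vals) / len(all_vals)
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum((v - m) ** 2 for g, m in zip(groups, means) for v in g)
    df_b = len(groups) - 1
    df_w = len(all_vals) - len(groups)
    return (ss_between / df_b) / (ss_within / df_w)
```

For example, `anova_f([[1, 2, 3], [4, 5, 6]])` is 13.5; features whose F statistic clears a significance threshold survive the selection.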
Performance of a Working Face Recognition Machine using Cortical Thought Theory
1984-12-04
been considered (2). Recommendations from Bledsoe's study included research on facial-recognition systems that are "completely automatic (remove the... C. L. Location of some facial features. Palo Alto: Panoramic Research, Aug 1966. 2. Bledsoe, W. W. Man-machine facial recognition: Is... image?" It would seem that the location and size of the features left in this contrast-expanded image contain the essential information of facial...
Changing facial phenotype in Cohen syndrome: towards clues for an earlier diagnosis.
El Chehadeh-Djebbar, Salima; Blair, Edward; Holder-Espinasse, Muriel; Moncla, Anne; Frances, Anne-Marie; Rio, Marlène; Debray, François-Guillaume; Rump, Patrick; Masurel-Paulet, Alice; Gigot, Nadège; Callier, Patrick; Duplomb, Laurence; Aral, Bernard; Huet, Frédéric; Thauvin-Robinet, Christel; Faivre, Laurence
2013-07-01
Cohen syndrome (CS) is a rare autosomal recessive condition caused by mutations and/or large rearrangements in the VPS13B gene. The clinical features of CS, including developmental delay, the typical facial gestalt, chorioretinal dystrophy (CRD) and neutropenia, are well described. The diagnosis of CS is generally made after school age, when visual disturbances lead to the diagnosis of CRD and to VPS13B gene testing. This relatively late diagnosis precludes accurate genetic counselling. The aim of this study was to analyse the evolution of CS facial features in the early period of life, particularly before school age (6 years), to find clues for an earlier diagnosis. Photographs of 17 patients with molecularly confirmed CS were analysed, from birth to preschool age. By comparing their facial phenotypes as they grew, we show that there are no distinctive facial characteristics before 1 year of age. Between 2 and 6 years, however, children with CS already share common facial features such as a short neck, a square face with micrognathia and full cheeks, a hypotonic facial appearance, epicanthic folds, long ears with an everted upper part of the auricle and/or a prominent lobe, a relatively short philtrum, a small and open mouth with downturned corners, a thick lower lip and abnormal eye shapes. These early transient facial features evolve into the typical CS facial features with aging. These observations emphasize the importance of ophthalmological tests and neutrophil counts in children of preschool age presenting with developmental delay, hypotonia and the facial features described here, for an earlier diagnosis of CS.
Recognizing Action Units for Facial Expression Analysis
Tian, Ying-li; Kanade, Takeo; Cohn, Jeffrey F.
2010-01-01
Most automatic expression analysis systems attempt to recognize a small set of prototypic expressions, such as happiness, anger, surprise, and fear. Such prototypic expressions, however, occur rather infrequently. Human emotions and intentions are more often communicated by changes in one or a few discrete facial features. In this paper, we develop an Automatic Face Analysis (AFA) system to analyze facial expressions based on both permanent facial features (brows, eyes, mouth) and transient facial features (deepening of facial furrows) in a nearly frontal-view face image sequence. The AFA system recognizes fine-grained changes in facial expression into action units (AUs) of the Facial Action Coding System (FACS), instead of a few prototypic expressions. Multistate face and facial component models are proposed for tracking and modeling the various facial features, including lips, eyes, brows, cheeks, and furrows. During tracking, detailed parametric descriptions of the facial features are extracted. With these parameters as the inputs, a group of action units (neutral expression, six upper face AUs and 10 lower face AUs) are recognized whether they occur alone or in combinations. The system has achieved average recognition rates of 96.4 percent (95.4 percent if neutral expressions are excluded) for upper face AUs and 96.7 percent (95.6 percent with neutral expressions excluded) for lower face AUs. The generalizability of the system has been tested by using independent image databases collected and FACS-coded for ground-truth by different research teams. PMID:25210210
Learning representative features for facial images based on a modified principal component analysis
NASA Astrophysics Data System (ADS)
Averkin, Anton; Potapov, Alexey
2013-05-01
The paper is devoted to facial image analysis and particularly addresses the problem of automatically evaluating the attractiveness of human faces. We propose a new approach for the automatic construction of a feature space based on a modified principal component analysis. The input to the algorithm is a learning set of facial images rated by a single person. The proposed approach allows one to extract features of that individual's subjective perception of facial beauty and to predict attractiveness values for new facial images that were not included in the learning set. The Pearson correlation coefficient between the values predicted by our method for new facial images and the personal attractiveness ratings is 0.89. This means that the proposed approach is promising and can be used for predicting subjective face attractiveness values in real facial image analysis systems.
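The abstract does not specify what the modification to PCA is, so the sketch below uses standard PCA plus a ridge-regularised linear map from PCA scores to one rater's scores, with the Pearson correlation as the evaluation metric, purely as a stand-in for the pipeline described:

```python
import numpy as np

def pca_fit(X, n_components):
    """Standard PCA via SVD; the paper's modification is not reproduced here."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]

def predict_attractiveness(X_train, y_train, X_test, n_components=2, ridge=1e-6):
    """Project onto principal components, then fit a ridge-regularised
    linear map from scores to one rater's attractiveness values."""
    mean, comps = pca_fit(X_train, n_components)
    Z = (X_train - mean) @ comps.T
    Zt = (X_test - mean) @ comps.T
    A = np.hstack([Z, np.ones((len(Z), 1))])      # add an intercept column
    w = np.linalg.solve(A.T @ A + ridge * np.eye(A.shape[1]), A.T @ y_train)
    return np.hstack([Zt, np.ones((len(Zt), 1))]) @ w

def pearson(a, b):
    """Pearson correlation, the evaluation metric quoted in the abstract."""
    return float(np.corrcoef(np.asarray(a, float), np.asarray(b, float))[0, 1])
```

On held-out images generated from a linear attractiveness model, this stand-in recovers a correlation near 1; the paper's 0.89 is on real, noisier human ratings.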
Novel method to predict body weight in children based on age and morphological facial features.
Huang, Ziyin; Barrett, Jeffrey S; Barrett, Kyle; Barrett, Ryan; Ng, Chee M
2015-04-01
A new and novel approach to predicting the body weight of children based on age and morphological facial features, using a three-layer feed-forward artificial neural network (ANN) model, is reported. The model takes four input parameters: the age-based CDC-inferred median body weight and three facial feature distances measured from digital facial images. In this study, thirty-nine volunteer subjects with ages ranging from 6-18 years and body weights ranging from 18.6-96.4 kg were used for model development and validation. The final model has a mean prediction error of 0.48, a mean squared error of 18.43, and a coefficient of correlation of 0.94. The model shows significant improvement in prediction accuracy over several age-based body weight prediction methods. Combined with a facial recognition algorithm that can detect, extract and measure the facial features used in this study, mobile applications incorporating this body weight prediction method may be developed for clinical investigations where access to scales is limited. © 2014, The American College of Clinical Pharmacology.
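The abstract fixes the architecture only as a three-layer feed-forward network with four inputs (the CDC-inferred median weight plus three facial distances). Below is a forward-pass sketch under that description; the hidden-layer size, tanh activation, and all weights are illustrative assumptions, not the fitted model:

```python
import math

def ann_forward(x, W1, b1, W2, b2):
    """Forward pass of a 3-layer (input-hidden-output) feed-forward network.
    x: the 4 inputs -- age-based median body weight plus three facial distances.
    W1/b1: hidden-layer weights and biases; W2/b2: output weights and bias."""
    hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
              for row, b in zip(W1, b1)]
    return sum(w * h for w, h in zip(W2, hidden)) + b2
```

In the actual model, the weights would be learned by backpropagation on the 39-subject data set; here they are free parameters.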
Appearance-based human gesture recognition using multimodal features for human computer interaction
NASA Astrophysics Data System (ADS)
Luo, Dan; Gao, Hua; Ekenel, Hazim Kemal; Ohya, Jun
2011-03-01
The use of gesture as a natural interface plays a crucially important role in achieving intelligent Human Computer Interaction (HCI). Human gestures include different components of visual actions, such as motion of the hands, facial expression, and torso, to convey meaning. So far, most previous work in the field of gesture recognition has focused on the manual component of gestures. In this paper, we present an appearance-based multimodal gesture recognition framework, which combines different groups of features, such as facial expression features and hand motion features, extracted from image frames captured by a single web camera. We consider 12 classes of human gestures with facial expressions conveying neutral, negative and positive meanings, drawn from American Sign Language (ASL). We combine the features at two levels by employing two fusion strategies. At the feature level, an early feature combination is performed by concatenating and weighting the different feature groups, and LDA is used to choose the most discriminative elements by projecting the features onto a discriminative expression space. The second strategy is applied at the decision level: weighted decisions from the single modalities are fused at a later stage. A condensation-based algorithm is adopted for classification. We collected a data set with three to seven recording sessions and conducted experiments with the combination techniques. Experimental results showed that facial analysis improves hand gesture recognition, and that decision-level fusion performs better than feature-level fusion.
Şimşek-Kiper, Pelin Özlem; Bayram, Yavuz; Ütine, Gülen Eda; Alanay, Yasemin; Boduroğlu, Koray
2014-01-01
Distal 11q deletion, previously known as Jacobsen syndrome, is caused by segmental aneusomy for the distal end of the long arm of chromosome 11. Typical clinical features include facial dysmorphism, mild-to-moderate psychomotor retardation, trigonocephaly, cardiac defects, and thrombocytopenia. There is a significant variability in the range of clinical features. We report herein a five-year-old girl with severe ophthalmological findings, facial dysmorphism, and psychomotor retardation with normal platelet function, in whom a de novo 11q23 deletion was detected, suggesting that distal 11q monosomy should be kept in mind in patients presenting with dysmorphic facial features and psychomotor retardation even in the absence of hematological findings.
Heike, Carrie L; Wallace, Erin; Speltz, Matthew L; Siebold, Babette; Werler, Martha M; Hing, Anne V; Birgfeld, Craig B; Collett, Brent R; Leroux, Brian G; Luquetti, Daniela V
2016-11-01
Craniofacial microsomia (CFM) is a congenital condition with wide phenotypic variability, including hypoplasia of the mandible and external ear. We assembled a cohort of children with facial features within the CFM spectrum and children without known craniofacial anomalies. We sought to develop a standardized approach to assess and describe the facial characteristics of the study cohort, using multiple sources of information gathered over the course of this longitudinal study, and to create case subgroups with shared phenotypic features. Participants were enrolled between 1996 and 2002. We classified the facial phenotype from photographs, ratings using a modified version of the Orbital, Ear, Mandible, Nerve, Soft tissue (OMENS) pictorial system, data from medical record abstraction, and health history questionnaires. The participant sample included 142 cases and 290 controls. The average age was 13.5 years (standard deviation, 1.3 years; range, 11.1-17.1 years). Sixty-one percent of cases were male, and 74% were white non-Hispanic. Among cases, the most common features were microtia (66%) and mandibular hypoplasia (50%). Case subgroups with meaningful group definitions included: (1) microtia without other CFM-related features (n = 24), (2) microtia with mandibular hypoplasia (n = 46), (3) other combinations of CFM-related facial features (n = 51), and (4) atypical features (n = 21). We developed a standardized approach for integrating multiple data sources to phenotype individuals with CFM, and created subgroups based on clinically meaningful, shared characteristics. We hope that this system can be used to explore associations between phenotype and clinical outcomes of children with CFM and to identify the etiology of CFM. Birth Defects Research (Part A) 106:915-926, 2016. © 2016 Wiley Periodicals, Inc.
Marečková, Klára; Chakravarty, M Mallar; Huang, Mei; Lawrence, Claire; Leonard, Gabriel; Perron, Michel; Pike, Bruce G; Richer, Louis; Veillette, Suzanne; Pausova, Zdenka; Paus, Tomáš
2013-10-01
In our previous work, we described facial features associated with a successful recognition of the sex of the face (Marečková et al., 2011). These features were based on landmarks placed on the surface of faces reconstructed from magnetic resonance (MR) images; their position was therefore influenced by both soft tissue (fat and muscle) and bone structure of the skull. Here, we ask whether bone structure has dissociable influences on observers' identification of the sex of the face. To answer this question, we used a novel method of studying skull morphology using MR images and explored the relationship between skull features, facial features, and sex recognition in a large sample of adolescents (n=876; including 475 adolescents from our original report). To determine whether skull features mediate the relationship between facial features and identification accuracy, we performed mediation analysis using bootstrapping. In males, skull features mediated fully the relationship between facial features and sex judgments. In females, the skull mediated this relationship only after adjusting facial features for the amount of body fat (estimated with bioimpedance). While body fat had a very slight positive influence on correct sex judgments about male faces, there was a robust negative influence of body fat on the correct sex judgments about female faces. Overall, these results suggest that craniofacial bone structure is essential for correct sex judgments about a male face. In females, body fat influences negatively the accuracy of sex judgments, and craniofacial bone structure alone cannot explain the relationship between facial features and identification of a face as female. Copyright © 2013 Elsevier Inc. All rights reserved.
Multiple Mechanisms in the Perception of Face Gender: Effect of Sex-Irrelevant Features
ERIC Educational Resources Information Center
Komori, Masashi; Kawamura, Satoru; Ishihara, Shigekazu
2011-01-01
Effects of sex-relevant and sex-irrelevant facial features on the evaluation of facial gender were investigated. Participants rated masculinity of 48 male facial photographs and femininity of 48 female facial photographs. Eighty feature points were measured on each of the facial photographs. Using a generalized Procrustes analysis, facial shapes…
Complexion Care; Cosmetology 2: 9207.02.
ERIC Educational Resources Information Center
Dade County Public Schools, Miami, FL.
Requiring 135 hours of classroom-laboratory instruction, the course develops skill in giving facial treatments, including all massage manipulations, along with knowledge of the purpose of facials and of the related anatomy and physiology. The application of make-up for all types of skin and facial features is an integral part of the program. The…
Extraction and representation of common feature from uncertain facial expressions with cloud model.
Wang, Shuliang; Chi, Hehua; Yuan, Hanning; Geng, Jing
2017-12-01
Human facial expressions are a key ingredient in conveying an individual's innate emotion in communication. However, the variation of facial expressions affects the reliable identification of human emotions. In this paper, we present a cloud model to extract facial features for representing human emotion. First, the uncertainties in facial expression are analyzed in the context of the cloud model. The feature extraction and representation algorithm is built on cloud generators. With the forward cloud generator, facial expression images can be re-generated in arbitrary numbers to visually represent the three extracted features, each of which plays a different role. The effectiveness of the computing model is tested on the Japanese Female Facial Expression database. Three common features are extracted from seven facial expression images. Finally, the paper concludes with remarks.
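A cloud model is conventionally characterised by three numbers (Ex, En, He): expectation, entropy, and hyper-entropy. The forward cloud generator re-generates arbitrarily many "drops" from these, which matches the abstract's claim that expression images can be re-generated at will. A minimal sketch assuming the standard normal cloud generator (the mapping from drops back to expression images is omitted):

```python
import math
import random

def forward_cloud(Ex, En, He, n, seed=0):
    """Forward normal cloud generator: draw n cloud drops (x, membership mu)
    from the (Ex, En, He) characterisation."""
    rng = random.Random(seed)
    drops = []
    for _ in range(n):
        En_prime = rng.gauss(En, He)          # second-order randomness on the entropy
        x = rng.gauss(Ex, abs(En_prime))      # the drop itself
        mu = math.exp(-(x - Ex) ** 2 / (2 * En_prime ** 2 + 1e-12))
        drops.append((x, mu))
    return drops
```

Each drop carries a membership degree in (0, 1]; the spread of drops widens with En and becomes more irregular with He, which is how the model expresses the uncertainty in facial expressions.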
Image ratio features for facial expression recognition application.
Song, Mingli; Tao, Dacheng; Liu, Zicheng; Li, Xuelong; Zhou, Mengchu
2010-06-01
Video-based facial expression recognition is a challenging problem in computer vision and human-computer interaction. To target this problem, texture features have been extracted and widely used, because they can capture image intensity changes raised by skin deformation. However, existing texture features encounter problems with albedo and lighting variations. To solve both problems, we propose a new texture feature called image ratio features. Compared with previously proposed texture features, e.g., high gradient component features, image ratio features are more robust to albedo and lighting variations. In addition, to further improve facial expression recognition accuracy based on image ratio features, we combine image ratio features with facial animation parameters (FAPs), which describe the geometric motions of facial feature points. The performance evaluation is based on the Carnegie Mellon University Cohn-Kanade database, our own database, and the Japanese Female Facial Expression database. Experimental results show that the proposed image ratio feature is more robust to albedo and lighting variations, and the combination of image ratio features and FAPs outperforms each feature alone. In addition, we study asymmetric facial expressions based on our own facial expression database and demonstrate the superior performance of our combined expression recognition system.
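The robustness claim rests on a Lambertian image model, intensity = albedo × shading: taking the ratio of an expression frame to a neutral frame cancels the unknown per-pixel albedo. A minimal per-pixel sketch of this idea (not the paper's exact formulation; the epsilon guard against division by zero is an added assumption):

```python
def ratio_feature(expr_pixel, neutral_pixel, eps=1e-6):
    """Ratio of expression-frame to neutral-frame intensity at one pixel.
    Under I = albedo * shading, the albedo term cancels in the ratio,
    which is the source of the robustness claimed for ratio features."""
    return expr_pixel / (neutral_pixel + eps)

# Albedo invariance: scaling both frames by the same skin albedo leaves
# the ratio (essentially) unchanged.
albedo = 0.37
shading_neutral, shading_expr = 0.8, 0.5
r_dark = ratio_feature(albedo * shading_expr, albedo * shading_neutral)
r_light = ratio_feature(1.0 * shading_expr, 1.0 * shading_neutral)
```

The same cancellation argument explains the robustness to lighting changes that rescale shading uniformly; geometric motion cues (the FAPs in the abstract) are complementary and are combined with the ratio features.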
Recent Advances in Face Lift to Achieve Facial Balance.
Ilankovan, Velupillai
2017-03-01
Facial balance is achieved by correction of the facial proportions and the facial contour. Ageing affects this balance, in addition to other factors. We have strived to present all the recent advances in providing this balance. The anatomy of ageing, including the various changes in clinical features, is described. The procedures are explained for the upper, middle and lower face. Different face lift and neck lift procedures with innovative techniques are demonstrated. The aim is to provide an unoperated, balanced facial proportion with zero complications.
Evaluation of facial expression in acute pain in cats.
Holden, E; Calvo, G; Collins, M; Bell, A; Reid, J; Scott, E M; Nolan, A M
2014-12-01
To describe the development of a facial expression tool differentiating pain-free cats from those in acute pain. Observers shown facial images from painful and pain-free cats were asked to identify if they were in pain or not. From facial images, anatomical landmarks were identified and distances between these were mapped. Selected distances underwent statistical analysis to identify features discriminating pain-free and painful cats. Additionally, thumbnail photographs were reviewed by two experts to identify discriminating facial features between the groups. Observers (n = 68) had difficulty in identifying pain-free from painful cats, with only 13% of observers being able to discriminate more than 80% of painful cats. Analysis of 78 facial landmarks and 80 distances identified six significant factors differentiating pain-free and painful faces including ear position and areas around the mouth/muzzle. Standardised mouth and ear distances when combined showed excellent discrimination properties, correctly differentiating pain-free and painful cats in 98% of cases. Expert review supported these findings and a cartoon-type picture scale was developed from thumbnail images. Initial investigation into facial features of painful and pain-free cats suggests potentially good discrimination properties of facial images. Further testing is required for development of a clinical tool. © 2014 British Small Animal Veterinary Association.
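The tool above rests on distances between anatomical landmarks. Below is a toy sketch of the two ingredients: Euclidean distance between landmark points, and a combined ear/mouth score. The normalisation, weighting, and sign convention are illustrative assumptions, not the published tool:

```python
import math

def dist(p, q):
    """Euclidean distance between two facial landmarks given as (x, y)."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def pain_score(ear_tip_dist, mouth_width, norm):
    """Toy combined discriminator. In the study, standardised ear and mouth
    distances jointly separated painful from pain-free cats; the normalising
    distance, weighting, and sign here are illustrative assumptions."""
    ear = ear_tip_dist / norm      # normalise by e.g. an inter-ocular distance
    mouth = mouth_width / norm
    return ear - mouth             # flattened ears and a tense muzzle raise the score
```

A clinical tool would calibrate such a score's threshold against labelled cases, as the 98% discrimination figure in the abstract was obtained.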
Facial Affect Recognition Using Regularized Discriminant Analysis-Based Algorithms
NASA Astrophysics Data System (ADS)
Lee, Chien-Cheng; Huang, Shin-Sheng; Shih, Cheng-Yuan
2010-12-01
This paper presents a novel and effective method for facial expression recognition including happiness, disgust, fear, anger, sadness, surprise, and neutral state. The proposed method utilizes a regularized discriminant analysis-based boosting algorithm (RDAB) with effective Gabor features to recognize the facial expressions. Entropy criterion is applied to select the effective Gabor feature which is a subset of informative and nonredundant Gabor features. The proposed RDAB algorithm uses RDA as a learner in the boosting algorithm. The RDA combines strengths of linear discriminant analysis (LDA) and quadratic discriminant analysis (QDA). It solves the small sample size and ill-posed problems suffered from QDA and LDA through a regularization technique. Additionally, this study uses the particle swarm optimization (PSO) algorithm to estimate optimal parameters in RDA. Experiment results demonstrate that our approach can accurately and robustly recognize facial expressions.
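In the usual Friedman formulation, which matches the LDA/QDA blending this abstract describes, RDA regularises each class covariance matrix with two parameters. A minimal sketch under that assumption (parameter estimation by PSO, as in the paper, is omitted):

```python
import numpy as np

def rda_covariance(class_cov, pooled_cov, lam, gamma):
    """Friedman-style RDA covariance: blend the class covariance with the
    pooled covariance (lambda), then shrink toward a scaled identity (gamma).
    lambda=1, gamma=0 recovers LDA's pooled covariance; lambda=0, gamma=0
    recovers QDA's per-class covariance; gamma > 0 fixes ill-posed cases."""
    sigma = (1 - lam) * class_cov + lam * pooled_cov
    d = sigma.shape[0]
    return (1 - gamma) * sigma + gamma * (np.trace(sigma) / d) * np.eye(d)
```

The shrinkage toward a scaled identity is what resolves the small-sample-size and ill-posed problems the abstract attributes to plain QDA and LDA.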
Li, Qiang; Zhou, Xu; Wang, Yue; Qian, Jin; Zhang, Qingguo
2018-05-15
Although facial paralysis is a fundamental feature of hemifacial microsomia, the frequency and distribution of nerve abnormalities in patients with hemifacial microsomia remain unclear. In this study, the authors classified 1125 cases with microtia (including 339 patients with hemifacial microsomia and 786 with isolated microtia) according to the Orbital Distortion, Mandibular Hypoplasia, Ear Anomaly, Nerve Involvement, Soft Tissue Deficiency (OMENS) scheme. The authors then performed an independent analysis to describe the distribution of nerve abnormalities and to reveal possible relationships between facial paralysis and the other 4 fundamental features in the OMENS system. Results revealed that facial paralysis is present in 23.9% of patients with hemifacial microsomia. The frontal-temporal branch is the most vulnerable branch in the total 1125 cases with microtia. The occurrence of facial paralysis is positively correlated with mandibular hypoplasia and soft tissue deficiency, both in the total 1125 cases and in the hemifacial microsomia patients. Orbital asymmetry is related to facial paralysis only in the total microtia cases, and ear deformity is related to facial paralysis only in hemifacial microsomia patients. No significant association was found between the severity of facial paralysis and any of the other 4 OMENS anomalies. These data suggest that the occurrence of facial paralysis may be associated with other OMENS abnormalities. The presence of serious mandibular hypoplasia or soft tissue deficiency should alert the clinician to a high possibility, but not a high severity, of facial paralysis.
Oliver, Lindsay D; Virani, Karim; Finger, Elizabeth C; Mitchell, Derek G V
2014-07-01
Frontotemporal dementia (FTD) is a debilitating neurodegenerative disorder characterized by severely impaired social and emotional behaviour, including emotion recognition deficits. Though fear recognition impairments seen in particular neurological and developmental disorders can be ameliorated by reallocating attention to critical facial features, the possibility that similar benefits can be conferred to patients with FTD has yet to be explored. In the current study, we examined the impact of presenting distinct regions of the face (whole face, eyes-only, and eyes-removed) on the ability to recognize expressions of anger, fear, disgust, and happiness in 24 patients with FTD and 24 healthy controls. A recognition deficit was demonstrated across emotions by patients with FTD relative to controls. Crucially, removal of diagnostic facial features resulted in an appropriate decline in performance for both groups; furthermore, patients with FTD demonstrated a lack of disproportionate improvement in emotion recognition accuracy as a result of isolating critical facial features relative to controls. Thus, unlike some neurological and developmental disorders featuring amygdala dysfunction, the emotion recognition deficit observed in FTD is not likely driven by selective inattention to critical facial features. Patients with FTD also mislabelled negative facial expressions as happy more often than controls, providing further evidence for abnormalities in the representation of positive affect in FTD. This work suggests that the emotional expression recognition deficit associated with FTD is unlikely to be rectified by adjusting selective attention to diagnostic features, as has proven useful in other select disorders. Copyright © 2014 Elsevier Ltd. All rights reserved.
The extraction and use of facial features in low bit-rate visual communication.
Pearson, D
1992-01-29
A review is given of experimental investigations by the author and his collaborators into methods of extracting binary features from images of the face and hands. The aim of the research has been to enable deaf people to communicate by sign language over the telephone network. Other applications include model-based image coding and facial-recognition systems. The paper deals with the theoretical postulates underlying the successful experimental extraction of facial features. The basic philosophy has been to treat the face as an illuminated three-dimensional object and to identify features from characteristics of their Gaussian maps. It can be shown that in general a composite image operator linked to a directional-illumination estimator is required to accomplish this, although the latter can often be omitted in practice.
Antenatal diagnosis of complete facial duplication--a case report of a rare craniofacial defect.
Rai, V S; Gaffney, G; Manning, N; Pirrone, P G; Chamberlain, P F
1998-06-01
We report a case of the prenatal sonographic detection of facial duplication, the diprosopus abnormality, in a twin pregnancy. The characteristic sonographic features of the condition include duplication of eyes, mouth, nose and both mid- and anterior intracranial structures. A heart-shaped abnormality of the cranial vault should prompt more detailed examination for other supportive features of this rare condition.
Discrimination of emotional facial expressions by tufted capuchin monkeys (Sapajus apella).
Calcutt, Sarah E; Rubin, Taylor L; Pokorny, Jennifer J; de Waal, Frans B M
2017-02-01
Tufted or brown capuchin monkeys (Sapajus apella) have been shown to recognize conspecific faces as well as categorize them according to group membership. Little is known, though, about their capacity to differentiate between emotionally charged facial expressions or whether facial expressions are processed as a collection of features or configurally (i.e., as a whole). In 3 experiments, we examined whether tufted capuchins (a) differentiate photographs of neutral faces from either affiliative or agonistic expressions, (b) use relevant facial features to make such choices or view the expression as a whole, and (c) demonstrate an inversion effect for facial expressions suggestive of configural processing. Using an oddity paradigm presented on a computer touchscreen, we collected data from 9 adult and subadult monkeys. Subjects discriminated between emotional and neutral expressions with an exceptionally high success rate, including differentiating open-mouth threats from neutral expressions even when the latter contained varying degrees of visible teeth and mouth opening. They also showed an inversion effect for facial expressions, results that may indicate that quickly recognizing expressions does not originate solely from feature-based processing but likely a combination of relational processes. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
An audiovisual emotion recognition system
NASA Astrophysics Data System (ADS)
Han, Yi; Wang, Guoyin; Yang, Yong; He, Kun
2007-12-01
Human emotions can be expressed through many biological signals; speech and facial expression are two of them. Both are regarded as emotional information, which plays an important role in human-computer interaction. Building on our previous studies on emotion recognition, an audiovisual emotion recognition system is developed and presented in this paper. The system is designed for real-time operation and is supported by several integrated modules: speech enhancement for eliminating noise, rapid face detection for locating the face in the background image, example-based shape learning for facial feature alignment, and optical-flow-based tracking of facial features. It is known that irrelevant features and high dimensionality of the data can hurt classifier performance, and rough set-based feature selection is a good method for dimension reduction. Accordingly, 13 of 37 speech features and 10 of 33 facial features are selected to represent emotional information, and 52 audiovisual features are obtained when the speech and video streams are fused and synchronized. The experimental results demonstrate that the system performs well in real time and achieves a high recognition rate. Our results also suggest that multimodal fusion will become the trend in emotion recognition.
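The selection-and-fusion pipeline described above can be sketched as follows. This is not the paper's rough-set algorithm; a simple variance filter stands in for it, and the toy data and dimensions are assumptions for illustration, using NumPy:

```python
import numpy as np

def select_features(X, k):
    """Rank features by variance and keep the top k (a stand-in for the
    rough set-based reduction the system actually uses)."""
    order = np.argsort(X.var(axis=0))[::-1]
    return np.sort(order[:k])

def fuse(audio_feats, video_feats):
    """Concatenate per-utterance audio and video feature vectors,
    assuming the two streams are already synchronized."""
    return np.concatenate([audio_feats, video_feats])

# toy data: 20 samples, 37 speech features and 33 facial features
rng = np.random.default_rng(0)
A = rng.normal(size=(20, 37))
V = rng.normal(size=(20, 33))
a_idx = select_features(A, 13)   # 13 of 37 speech features
v_idx = select_features(V, 10)   # 10 of 33 facial features
fused = fuse(A[0, a_idx], V[0, v_idx])
print(fused.shape)  # (23,)
```

The fused vector here has 23 dimensions rather than the paper's 52, since the synchronization step that expands the joint feature set is not modeled in this sketch.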
Alagille syndrome in a Vietnamese cohort: mutation analysis and assessment of facial features.
Lin, Henry C; Le Hoang, Phuc; Hutchinson, Anne; Chao, Grace; Gerfen, Jennifer; Loomes, Kathleen M; Krantz, Ian; Kamath, Binita M; Spinner, Nancy B
2012-05-01
Alagille syndrome (ALGS, OMIM #118450) is an autosomal dominant disorder that affects multiple organ systems including the liver, heart, eyes, vertebrae, and face. ALGS is caused by mutations in one of two genes in the Notch Signaling Pathway, Jagged1 (JAG1) or NOTCH2. In this study, analysis of 21 Vietnamese ALGS individuals led to the identification of 19 different mutations (18 JAG1 and 1 NOTCH2), 17 of which are novel, including the third reported NOTCH2 mutation in Alagille Syndrome. The spectrum of JAG1 mutations in the Vietnamese patients is similar to that previously reported, including nine frameshift, three missense, two splice site, one nonsense, two whole gene, and one partial gene deletion. The missense mutations are all likely to be disease causing, as two are loss of cysteines (C22R and C78G) and the third creates a cryptic splice site in exon 9 (G386R). No correlation between genotype and phenotype was observed. Assessment of clinical phenotype revealed that skeletal manifestations occur with a higher frequency than in previously reported Alagille cohorts. Facial features were difficult to assess and a Vietnamese pediatric gastroenterologist was only able to identify the facial phenotype in 61% of the cohort. To assess the agreement among North American dysmorphologists at detecting the presence of ALGS facial features in the Vietnamese patients, 37 clinical dysmorphologists evaluated a photographic panel of 20 Vietnamese children with and without ALGS. The dysmorphologists were unable to identify the individuals with ALGS in the majority of cases, suggesting that evaluation of facial features should not be used in the diagnosis of ALGS in this population. This is the first report of mutations and phenotypic spectrum of ALGS in a Vietnamese population. Copyright © 2012 Wiley Periodicals, Inc.
AAEM case report #26: seventh cranial neuropathy.
Gilchrist, J M
1993-05-01
A 25-year-old man with acute, bilateral facial palsies is presented. He had a lymphocytic meningitis, history of tick bites, and lived in an area endemic for Lyme disease, which was ultimately confirmed by serology. Electrodiagnostic investigation included facial motor nerve study, blink reflex and electromyography of facial muscles, which were indicative of a neurapraxic lesion on the right and an axonopathic lesion on the left. The clinical course was consistent with these findings as the right side fully recovered and the left remained plegic. The clinical features of Lyme associated facial neuritis are reviewed, as is the electrodiagnostic evaluation of facial palsy.
Is NF-1 gene deletion the molecular mechanism of neurofibromatosis type 1 with distinctive facies?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leppig, K.A.; Stephens, K.G.; Viskochill, D.
We have studied a patient with neurofibromatosis type 1 and unusual facial features using fluorescence in situ hybridization (FISH) and found that the patient had a deletion that minimally encompasses exons 2-11 of the NF-1 gene. The patient was one of two individuals initially described by Kaplan and Rosenblatt, who suggested that another condition aside from neurofibromatosis type 1 may account for the unusual facial features observed in these patients with neurofibromatosis type 1. FISH studies were performed using a P1 clone probe, P1-9, which contains exons 2-11 of the NF-1 gene, on chromosomes prepared from the patient. In all 20 metaphase cells analyzed, one of the chromosome 17 homologues was deleted for the P1-9 probe. Therefore, this patient had neurofibromatosis type 1 and unusual facial features as the result of a deletion which minimally includes exons 2-11 of the NF-1 gene. The extent of the deletion is being mapped by FISH and somatic cell hybrid analysis. The patient studied was a 7-year-old male with mild developmental delays, normal growth parameters, and physical findings consistent with neurofibromatosis type 1, including multiple cafe au lait spots, several cutaneous neurofibromas, and speckling of the irises. In addition, his unusual facial features consisted of telecanthus, antimongoloid slant of the palpebral fissures, a broad nasal base, low-set and mildly posteriorly rotated ears, thick helices, a high arched palate, a short and pointed chin, and a low posterior hairline. We propose that deletions of the NF-1 gene and/or contiguous genes are the etiology of neurofibromatosis type 1 with unusual facial features. This particular facial appearance was inherited from the patient's mother and has been described in other individuals with neurofibromatosis type 1. We are using FISH to rapidly screen patients with this phenotype for large deletions involving the NF-1 gene and flanking DNA sequences.
Optimal Geometrical Set for Automated Marker Placement to Virtualized Real-Time Facial Emotions
Maruthapillai, Vasanthan; Murugappan, Murugappan
2016-01-01
In recent years, real-time face recognition has been a major topic of interest in developing intelligent human-machine interaction systems. Over the past several decades, researchers have proposed different algorithms for facial expression recognition, but there has been little focus on detection in real-time scenarios. The present work proposes a new algorithmic method of automated marker placement used to classify six facial expressions: happiness, sadness, anger, fear, disgust, and surprise. Emotional facial expressions were captured using a webcam, while the proposed algorithm placed a set of eight virtual markers on each subject’s face. Facial feature extraction methods, including marker distance (distance between each marker to the center of the face) and change in marker distance (change in distance between the original and new marker positions), were used to extract three statistical features (mean, variance, and root mean square) from the real-time video sequence. The initial position of each marker was subjected to the optical flow algorithm for marker tracking with each emotional facial expression. Finally, the extracted statistical features were mapped into corresponding emotional facial expressions using two simple non-linear classifiers, K-nearest neighbor and probabilistic neural network. The results indicate that the proposed automated marker placement algorithm effectively placed eight virtual markers on each subject’s face and gave a maximum mean emotion classification rate of 96.94% using the probabilistic neural network. PMID:26859884
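The marker-distance features described above can be sketched as follows; the webcam frames are replaced with toy 2-D marker coordinates (an assumption for illustration), using NumPy. A classifier such as K-nearest neighbor or a probabilistic neural network would then be trained on the resulting 24-dimensional vectors.

```python
import numpy as np

def marker_distances(markers, center):
    """Distance from each virtual marker to the face centre."""
    return np.linalg.norm(markers - center, axis=1)

def stat_features(dist_seq):
    """Mean, variance and RMS of each marker's distance across a video
    sequence (dist_seq has shape frames x markers)."""
    d = np.asarray(dist_seq)
    return np.concatenate([d.mean(0), d.var(0), np.sqrt((d ** 2).mean(0))])

# toy sequence: 30 frames of 8 virtual markers tracked in 2-D
rng = np.random.default_rng(1)
frames = rng.normal(loc=5.0, size=(30, 8, 2))
center = np.zeros(2)
dists = np.stack([marker_distances(f, center) for f in frames])
feats = stat_features(dists)
print(feats.shape)  # (24,): 8 markers x 3 statistics
```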
Pose-variant facial expression recognition using an embedded image system
NASA Astrophysics Data System (ADS)
Song, Kai-Tai; Han, Meng-Ju; Chang, Shuo-Hung
2008-12-01
In recent years, one of the most attractive research areas in human-robot interaction is automated facial expression recognition. By recognizing facial expressions, a pet robot can interact with humans in a more natural manner. In this study, we focus on the facial pose-variant problem. A novel method is proposed in this paper to recognize pose-variant facial expressions. After locating the face position in an image frame, the active appearance model (AAM) is applied to track facial features. Fourteen feature points are extracted to represent the variation of facial expressions. The distances between feature points are defined as the feature values. These feature values are sent to a support vector machine (SVM) for facial expression determination. The pose-variant facial expression is classified as happiness, neutral, sadness, surprise or anger. Furthermore, to evaluate performance for practical applications, this study also built a low-resolution database (160x120 pixels) using a CMOS image sensor. Experimental results show a recognition rate of 84% with the self-built database.
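The distance-features-to-SVM step can be sketched as below, assuming scikit-learn is available. The 14-point shapes, class prototypes, and jitter level are invented for illustration and are not the paper's data:

```python
import numpy as np
from itertools import combinations
from sklearn.svm import SVC

def distance_features(points):
    """Pairwise Euclidean distances among the 14 tracked feature points."""
    return np.array([np.linalg.norm(points[i] - points[j])
                     for i, j in combinations(range(len(points)), 2)])

EMOTIONS = ["happiness", "neutral", "sadness", "surprise", "anger"]

# toy training data: each class is a distinct random 14-point shape
# observed with small jitter (invented, purely for illustration)
rng = np.random.default_rng(2)
X, y, bases = [], [], []
for label in range(5):
    base = rng.normal(size=(14, 2))
    bases.append(base)
    for _ in range(10):
        jitter = rng.normal(scale=0.01, size=(14, 2))
        X.append(distance_features(base + jitter))
        y.append(label)

clf = SVC(kernel="rbf").fit(X, y)
pred = clf.predict([distance_features(bases[3])])
print(EMOTIONS[pred[0]])
```

Note that pairwise distances are translation-invariant by construction, which is one reason distance features are a reasonable choice under varying head position.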
Dynamic facial expression recognition based on geometric and texture features
NASA Astrophysics Data System (ADS)
Li, Ming; Wang, Zengfu
2018-04-01
Recently, dynamic facial expression recognition in videos has attracted growing attention. In this paper, we propose a novel dynamic facial expression recognition method by using geometric and texture features. In our system, the facial landmark movements and texture variations upon pairwise images are used to perform the dynamic facial expression recognition tasks. For one facial expression sequence, pairwise images are created between the first frame and each of its subsequent frames. Integration of both geometric and texture features further enhances the representation of the facial expressions. Finally, Support Vector Machine is used for facial expression recognition. Experiments conducted on the extended Cohn-Kanade database show that our proposed method can achieve a competitive performance with other methods.
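The pairwise-image construction above can be sketched as follows: pair the first frame with each subsequent frame and concatenate geometric (landmark displacement) and texture-difference features. The landmark count and texture descriptor length are toy assumptions, using NumPy:

```python
import numpy as np

def pairwise_features(landmarks, textures):
    """For each frame t > 0, pair it with frame 0: geometric features
    are landmark displacements, texture features are descriptor
    differences; both are concatenated per pair."""
    geo = landmarks[1:] - landmarks[0]      # (T-1, L, 2)
    tex = textures[1:] - textures[0]        # (T-1, D)
    n_pairs = geo.shape[0]
    return np.hstack([geo.reshape(n_pairs, -1), tex])

rng = np.random.default_rng(6)
landmarks = rng.normal(size=(10, 68, 2))    # 10 frames, 68 landmarks
textures = rng.normal(size=(10, 59))        # e.g. an LBP-like descriptor
F = pairwise_features(landmarks, textures)
print(F.shape)  # (9, 195): 68*2 geometric + 59 texture values per pair
```

Each row of F would then be fed to the Support Vector Machine for sequence-level expression classification.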
Facial paralysis for the plastic surgeon.
Kosins, Aaron M; Hurvitz, Keith A; Evans, Gregory Rd; Wirth, Garrett A
2007-01-01
Facial paralysis presents a significant and challenging reconstructive problem for plastic surgeons. An aesthetically pleasing and acceptable outcome requires not only good surgical skills and techniques, but also knowledge of facial nerve anatomy and an understanding of the causes of facial paralysis. The loss of the ability to move the face has both social and functional consequences for the patient. At the Facial Palsy Clinic in Edinburgh, Scotland, 22,954 patients were surveyed, and over 50% were found to have a considerable degree of psychological distress and social withdrawal as a consequence of their facial paralysis. Functionally, patients present with unilateral or bilateral loss of voluntary and nonvoluntary facial muscle movements. Signs and symptoms can include an asymmetric smile, synkinesis, epiphora or dry eye, abnormal blink, problems with speech articulation, drooling, hyperacusis, change in taste and facial pain. With respect to facial paralysis, surgeons tend to focus on the surgical, or 'hands-on', aspect. However, it is believed that an understanding of the disease process is equally (if not more) important to a successful surgical outcome. The purpose of the present review is to describe the anatomy and diagnostic patterns of the facial nerve, and the epidemiology and common causes of facial paralysis, including clinical features and diagnosis. Treatment options for paralysis are vast, and may include nerve decompression, facial reanimation surgery and botulinum toxin injection, but these are beyond the scope of the present paper.
Neural correlates of processing facial identity based on features versus their spacing.
Maurer, D; O'Craven, K M; Le Grand, R; Mondloch, C J; Springer, M V; Lewis, T L; Grady, C L
2007-04-08
Adults' expertise in recognizing facial identity involves encoding subtle differences among faces in the shape of individual facial features (featural processing) and in the spacing among features (a type of configural processing called sensitivity to second-order relations). We used fMRI to investigate the neural mechanisms that differentiate these two types of processing. Participants made same/different judgments about pairs of faces that differed only in the shape of the eyes and mouth, with minimal differences in spacing (featural blocks), or pairs of faces that had identical features but differed in the positions of those features (spacing blocks). From a localizer scan with faces, objects, and houses, we identified regions with comparatively more activity for faces, including the fusiform face area (FFA) in the right fusiform gyrus, other extrastriate regions, and prefrontal cortices. Contrasts between the featural and spacing conditions revealed distributed patterns of activity differentiating the two conditions. A region of the right fusiform gyrus (near but not overlapping the localized FFA) showed greater activity during the spacing task, along with multiple areas of right frontal cortex, whereas left prefrontal activity increased for featural processing. These patterns of activity were not related to differences in performance between the two tasks. The results indicate that the processing of facial features is distinct from the processing of second-order relations in faces, and that these functions are mediated by separate and lateralized networks involving the right fusiform gyrus, although the FFA as defined from a localizer scan is not differentially involved.
NATIONAL PREPAREDNESS: Technologies to Secure Federal Buildings
2002-04-25
Fragments recovered from a comparison table of biometric access-control technologies: "Medium, some resistance based on sensitivity of eye"; "Facial recognition: facial features are captured and compared; dependent on lighting, positioning"; "two primary types of facial recognition technology used to create templates: 1. Local feature analysis: dozens of images from regions of the face are ... an adjacent feature"; "Attachment I, Access Control Technologies: Biometrics, Facial Recognition: How the technology works".
Multiple mechanisms in the perception of face gender: Effect of sex-irrelevant features.
Komori, Masashi; Kawamura, Satoru; Ishihara, Shigekazu
2011-06-01
Effects of sex-relevant and sex-irrelevant facial features on the evaluation of facial gender were investigated. Participants rated the masculinity of 48 male facial photographs and the femininity of 48 female facial photographs. Eighty feature points were measured on each photograph. Using a generalized Procrustes analysis, facial shapes were converted into multidimensional vectors, with the average face as the origin. Each vector was decomposed into a sex-relevant subvector and a sex-irrelevant subvector, which were, respectively, parallel and orthogonal to the main male-female axis. Principal components analysis (PCA) was performed on the sex-irrelevant subvectors. One principal component was negatively correlated with both perceived masculinity and femininity, and another was correlated only with femininity, though both components were orthogonal to the male-female dimension (and thus by definition sex-irrelevant). These results indicate that evaluation of facial gender depends on sex-irrelevant as well as sex-relevant facial features.
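The subvector decomposition described above reduces to projecting each shape's deviation vector onto the male-female axis and keeping the orthogonal remainder. The dimensions and data below are toy assumptions (80 landmarks in 2-D), using NumPy:

```python
import numpy as np

def decompose(v, axis):
    """Split a shape vector v (deviation from the average face) into a
    component parallel to the male-female axis (sex-relevant) and the
    orthogonal remainder (sex-irrelevant)."""
    u = axis / np.linalg.norm(axis)
    parallel = np.dot(v, u) * u
    return parallel, v - parallel

rng = np.random.default_rng(3)
axis = rng.normal(size=160)   # 80 feature points x 2 coordinates
v = rng.normal(size=160)      # one face's deviation from the average
par, orth = decompose(v, axis)
print(np.allclose(par + orth, v))  # True: the two parts reconstruct v
```

PCA on the collected `orth` vectors across faces would then yield the sex-irrelevant components analyzed in the study.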
Demirkus, Meltem; Precup, Doina; Clark, James J; Arbel, Tal
2016-06-01
Recent literature shows that facial attributes, i.e., contextual facial information, can be beneficial for improving the performance of real-world applications, such as face verification, face recognition, and image search. Examples of face attributes include gender, skin color, facial hair, etc. How to robustly obtain these facial attributes (traits) is still an open problem, especially in the presence of the challenges of real-world environments: non-uniform illumination conditions, arbitrary occlusions, motion blur and background clutter. What makes this problem even more difficult is the enormous variability presented by the same subject, due to arbitrary face scales, head poses, and facial expressions. In this paper, we focus on the problem of facial trait classification in real-world face videos. We have developed a fully automatic hierarchical and probabilistic framework that models the collective set of frame class distributions and feature spatial information over a video sequence. The experiments are conducted on a large real-world face video database that we have collected, labelled and made publicly available. The proposed method is flexible enough to be applied to any facial classification problem. Experiments on the large, real-world video database McGillFaces [1] of 18,000 video frames reveal that the proposed framework outperforms alternative approaches by up to 16.96% and 10.13% for the facial attributes of gender and facial hair, respectively.
Attractiveness as a Function of Skin Tone and Facial Features: Evidence from Categorization Studies.
Stepanova, Elena V; Strube, Michael J
2018-01-01
Participants rated the attractiveness and racial typicality of male faces varying in their facial features from Afrocentric to Eurocentric and in skin tone from dark to light in two experiments. Experiment 1 provided evidence that facial features and skin tone have an interactive effect on perceptions of attractiveness and mixed-race faces are perceived as more attractive than single-race faces. Experiment 2 further confirmed that faces with medium levels of skin tone and facial features are perceived as more attractive than faces with extreme levels of these factors. Black phenotypes (combinations of dark skin tone and Afrocentric facial features) were rated as more attractive than White phenotypes (combinations of light skin tone and Eurocentric facial features); ambiguous faces (combinations of Afrocentric and Eurocentric physiognomy) with medium levels of skin tone were rated as the most attractive in Experiment 2. Perceptions of attractiveness were relatively independent of racial categorization in both experiments.
Segmentation of human face using gradient-based approach
NASA Astrophysics Data System (ADS)
Baskan, Selin; Bulut, M. Mete; Atalay, Volkan
2001-04-01
This paper describes a method for automatic segmentation of facial features such as eyebrows, eyes, nose, mouth and ears in color images. This work is an initial step for a wide range of feature-based applications, such as face recognition, lip-reading, gender estimation and facial expression analysis. The human face can be characterized by its skin color and nearly elliptical shape, so face detection is performed using color and shape information. Uniform illumination is assumed; no restrictions on glasses, make-up, beard, etc. are imposed. Facial features are extracted using vertically and horizontally oriented gradient projections. The gradient of a minimum with respect to its neighboring maxima gives the boundaries of a facial feature. Each facial feature has a different horizontal characteristic; these characteristics were derived by extensive experimentation with many face images. Using fuzzy set theory, the similarity between the candidate and the feature characteristic under consideration is calculated. The gradient-based method is supplemented with anthropometrical information for robustness. Ear detection is performed using contour-based shape descriptors. The method detects the facial features and circumscribes each facial feature with the smallest rectangle possible. The AR database is used for testing. The developed method is also suitable for real-time systems.
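The gradient-projection idea can be sketched as follows; the synthetic image, with one dark band standing in for an eye line, is an assumption for illustration, using NumPy:

```python
import numpy as np

def gradient_projections(gray):
    """Vertical and horizontal projections of gradient magnitude; rows
    and columns containing strong edges (eyes, mouth, nostrils) appear
    as peaks in the projections."""
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)
    return mag.sum(axis=1), mag.sum(axis=0)  # per-row, per-column sums

# toy "face": flat image with one dark horizontal band (an "eye line")
img = np.full((100, 80), 200.0)
img[40:44, 20:60] = 50.0
row_proj, col_proj = gradient_projections(img)
print(int(np.argmax(row_proj)))  # the strongest-edge row is near row 40
```

On real faces, the minima between neighboring projection maxima bound each feature, as described above.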
Interpretation of Appearance: The Effect of Facial Features on First Impressions and Personality
Wolffhechel, Karin; Fagertun, Jens; Jacobsen, Ulrik Plesner; Majewski, Wiktor; Hemmingsen, Astrid Sofie; Larsen, Catrine Lohmann; Lorentzen, Sofie Katrine; Jarmer, Hanne
2014-01-01
Appearance is known to influence social interactions, which in turn could potentially influence personality development. In this study we focus on discovering the relationship between self-reported personality traits, first impressions and facial characteristics. The results reveal that several personality traits can be read above chance from a face, and that facial features influence first impressions. Despite the former, our prediction model fails to reliably infer personality traits from either facial features or first impressions. First impressions, however, could be inferred more reliably from facial features. We have generated artificial, extreme faces visualising the characteristics having an effect on first impressions for several traits. Conclusively, we find a relationship between first impressions, some personality traits and facial features and consolidate that people on average assess a given face in a highly similar manner. PMID:25233221
iFER: facial expression recognition using automatically selected geometric eye and eyebrow features
NASA Astrophysics Data System (ADS)
Oztel, Ismail; Yolcu, Gozde; Oz, Cemil; Kazan, Serap; Bunyak, Filiz
2018-03-01
Facial expressions have an important role in interpersonal communication and in the estimation of emotional states or intentions. Automatic recognition of facial expressions has led to many practical applications and has become one of the important topics in computer vision. We present a facial expression recognition system that relies on geometry-based features extracted from the eye and eyebrow regions of the face. The proposed system detects keypoints on frontal face images and forms a feature set using geometric relationships among groups of detected keypoints. The feature set is refined and reduced using the sequential forward selection (SFS) algorithm and fed to a support vector machine classifier to recognize five facial expression classes. The proposed system, iFER (eye-eyebrow only facial expression recognition), is robust to lower-face occlusions that may be caused by beards, mustaches, scarves, etc., and to lower-face motion during speech production. Preliminary experiments on benchmark datasets produced promising results, outperforming previous facial expression recognition studies that use partial face features and approaching studies that use whole-face information, only ~2.5% lower than the best whole-face system while using only ~1/3 of the facial region.
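Sequential forward selection as used above can be sketched with a Fisher-style separability score standing in for the paper's classifier-based criterion; the toy data and the indices of the informative features are assumptions, using NumPy:

```python
import numpy as np

def sfs(X, y, score, k):
    """Sequential forward selection: greedily add the feature whose
    inclusion most improves the score until k features are chosen."""
    selected, remaining = [], list(range(X.shape[1]))
    while len(selected) < k:
        best = max(remaining, key=lambda j: score(X[:, selected + [j]], y))
        selected.append(best)
        remaining.remove(best)
    return selected

def fisher_score(Xs, y):
    """Toy two-class criterion: distance between class means over the
    pooled standard deviation (a stand-in for classifier accuracy)."""
    a, b = Xs[y == 0], Xs[y == 1]
    return np.linalg.norm(a.mean(0) - b.mean(0)) / (Xs.std() + 1e-9)

rng = np.random.default_rng(4)
y = np.repeat([0, 1], 50)
X = rng.normal(size=(100, 10))
X[:, 3] += 3 * y          # features 3 and 7 carry the class signal
X[:, 7] += 2 * y
print(sorted(sfs(X, y, fisher_score, 2)))  # [3, 7]
```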
Hennekam lymphangiectasia syndrome
Lakshminarayana, G.; Mathew, A.; Rajesh, R.; Kurien, G.; Unni, V. N.
2011-01-01
Hennekam lymphangiectasia syndrome is a rare disorder comprising intestinal and renal lymphangiectasia, a dysmorphic facial appearance and mental retardation. The facial features include hypertelorism with a wide, flat nasal bridge, epicanthic folds, a small mouth and small ears. We describe a case of a multigravida with a bad obstetric history, characteristic facial and dental anomalies, and bilateral renal lymphangiectasia. To our knowledge, this is the first case of Hennekam lymphangiectasia syndrome with anodontia to be reported from India. PMID:22022089
FaceWarehouse: a 3D facial expression database for visual computing.
Cao, Chen; Weng, Yanlin; Zhou, Shun; Tong, Yiying; Zhou, Kun
2014-03-01
We present FaceWarehouse, a database of 3D facial expressions for visual computing applications. We use Kinect, an off-the-shelf RGBD camera, to capture 150 individuals aged 7-80 from various ethnic backgrounds. For each person, we captured the RGBD data of her different expressions, including the neutral expression and 19 other expressions such as mouth-opening, smile, kiss, etc. For every RGBD raw data record, a set of facial feature points on the color image such as eye corners, mouth contour, and the nose tip are automatically localized, and manually adjusted if better accuracy is required. We then deform a template facial mesh to fit the depth data as closely as possible while matching the feature points on the color image to their corresponding points on the mesh. Starting from these fitted face meshes, we construct a set of individual-specific expression blendshapes for each person. These meshes with consistent topology are assembled as a rank-3 tensor to build a bilinear face model with two attributes: identity and expression. Compared with previous 3D facial databases, for every person in our database, there is a much richer matching collection of expressions, enabling depiction of most human facial actions. We demonstrate the potential of FaceWarehouse for visual computing with four applications: facial image manipulation, face component transfer, real-time performance-based facial image animation, and facial animation retargeting from video to image.
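The rank-3 tensor and bilinear model described above can be sketched with a truncated per-mode SVD, a simplified stand-in for the paper's exact construction; the toy mesh size, identity and expression counts, and ranks are all assumptions, using NumPy:

```python
import numpy as np

# Toy data tensor: V vertex coordinates x I identities x E expressions,
# mimicking FaceWarehouse's rank-3 layout (real meshes are far larger).
rng = np.random.default_rng(5)
V, I, E = 30, 6, 4
T = rng.normal(size=(V, I, E))

def mode_basis(T, mode, rank):
    """Left singular vectors of the tensor unfolded along one mode."""
    M = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
    U, _, _ = np.linalg.svd(M, full_matrices=False)
    return U[:, :rank]

Uid = mode_basis(T, 1, 3)     # identity basis
Uexp = mode_basis(T, 2, 2)    # expression basis
core = np.einsum('vie,ij,ek->vjk', T, Uid, Uexp)

# A face is a bilinear function of an identity and an expression vector:
w_id, w_exp = Uid[0], Uexp[0]   # coefficients for person 0, expression 0
face = np.einsum('vjk,j,k->v', core, w_id, w_exp)
print(face.shape)  # (30,)
```

Varying `w_id` with `w_exp` fixed changes who the face is; varying `w_exp` with `w_id` fixed changes what the face is doing, which is what enables the retargeting and transfer applications listed above.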
Lin, I-Mei; Fan, Sheng-Yu; Huang, Tiao-Lai; Wu, Wan-Ting; Li, Shi-Ming
2013-12-01
Visual search is an important attention process that precedes the information processing. Visual search also mediates the relationship between cognition function (attention) and social cognition (such as facial expression identification). However, the association between visual attention and social cognition in patients with schizophrenia remains unknown. The purposes of this study were to examine the differences in visual search performance and facial expression identification between patients with schizophrenia and normal controls, and to explore the relationship between visual search performance and facial expression identification in patients with schizophrenia. Fourteen patients with schizophrenia (mean age=46.36±6.74) and 15 normal controls (mean age=40.87±9.33) participated this study. The visual search task, including feature search and conjunction search, and Japanese and Caucasian Facial Expression of Emotion were administered. Patients with schizophrenia had worse visual search performance both in feature search and conjunction search than normal controls, as well as had worse facial expression identification, especially in surprised and sadness. In addition, there were negative associations between visual search performance and facial expression identification in patients with schizophrenia, especially in surprised and sadness. However, this phenomenon was not showed in normal controls. Patients with schizophrenia who had visual search deficits had the impairment on facial expression identification. Increasing ability of visual search and facial expression identification may improve their social function and interpersonal relationship.
Cephalometric features in isolated growth hormone deficiency.
Oliveira-Neto, Luiz Alves; Melo, Maria de Fátima B; Franco, Alexandre A; Oliveira, Alaíde H A; Souza, Anita H O; Valença, Eugênia H O; Britto, Isabela M P A; Salvatori, Roberto; Aguiar-Oliveira, Manuel H
2011-07-01
To analyze cephalometric features in adults with isolated growth hormone (GH) deficiency (IGHD). Nine adult IGHD individuals (7 males and 2 females; mean age, 37.8 ± 13.8 years) underwent a cross-sectional cephalometric study, including 9 linear and 5 angular measurements. Posterior facial height/anterior facial height and lower-anterior facial height/anterior facial height ratios were calculated. To pool cephalometric measurements in both genders, results were normalized by standard deviation scores (SDS), using the population means from an atlas of the normal Brazilian population. All linear measurements were reduced in IGHD subjects. Total maxillary length was the most reduced parameter (-6.5 ± 1.7), followed by a cluster of six measurements: posterior cranial base length (-4.9 ± 1.1), total mandibular length (-4.4 ± 0.7), total posterior facial height (-4.4 ± 1.1), total anterior facial height (-4.3 ± 0.9), mandibular corpus length (-4.2 ± 0.8), and anterior cranial base length (-4.1 ± 1.7). Less affected measurements were lower-anterior facial height (-2.7 ± 0.7) and mandibular ramus height (-2.5 ± 1.5). SDS angular measurements were in the normal range, except for increased gonial angle (+2.5 ± 1.1). Posterior facial height/anterior facial height and lower-anterior facial height/anterior facial height ratios were not different from those of the reference group. Congenital, untreated IGHD causes reduction of all linear measurements of craniofacial growth, particularly total maxillary length. Angular measurements and facial height ratios are less affected, suggesting that IGHD causes proportional blunting of craniofacial growth.
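The SDS normalization used above is simply a z-score against the reference population. The numbers below are hypothetical, chosen only to reproduce the reported magnitude for total maxillary length:

```python
def sds(value, pop_mean, pop_sd):
    """Standard deviation score: how many population standard
    deviations a measurement lies from the population mean; this is
    what allows pooling measurements across genders."""
    return (value - pop_mean) / pop_sd

# hypothetical example: a maxillary length of 87 mm against an assumed
# reference mean of 100 mm with SD 2 mm gives an SDS of -6.5, the
# magnitude reported above for total maxillary length
print(sds(87.0, 100.0, 2.0))  # -6.5
```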
Ding, Liya; Martinez, Aleix M
2010-11-01
The appearance-based approach to face detection has seen great advances in the last several years. In this approach, we learn the image statistics describing the texture pattern (appearance) of the object class we want to detect, e.g., the face. However, this approach has had limited success in providing an accurate and detailed description of the internal facial features, i.e., eyes, brows, nose, and mouth. In general, this is due to the limited information carried by the learned statistical model. While the face template is relatively rich in texture, facial features (e.g., eyes, nose, and mouth) do not carry enough discriminative information to tell them apart from all possible background images. We resolve this problem by adding the context information of each facial feature in the design of the statistical model. In the proposed approach, the context information defines the image statistics most correlated with the surroundings of each facial component. This means that when we search for a face or facial feature, we look for those locations which most resemble the feature yet are most dissimilar to its context. This dissimilarity with the context features forces the detector to gravitate toward an accurate estimate of the position of the facial feature. Learning to discriminate between feature and context templates is difficult, however, because the context and the texture of the facial features vary widely under changing expression, pose, and illumination, and may even resemble one another. We address this problem with the use of subclass divisions. We derive two algorithms to automatically divide the training samples of each facial feature into a set of subclasses, each representing a distinct construction of the same facial component (e.g., closed versus open eyes) or its context (e.g., different hairstyles). The first algorithm is based on a discriminant analysis formulation. The second algorithm is an extension of the AdaBoost approach. 
We provide extensive experimental results using still images and video sequences for a total of 3,930 images. We show that the results are almost as good as those obtained with manual detection.
Enhancing facial features by using clear facial features
NASA Astrophysics Data System (ADS)
Rofoo, Fanar Fareed Hanna
2017-09-01
The similarity of features between individuals of the same ethnicity motivated the idea of this project. The idea is to extract features from a clear facial image and impose them on a blurred facial image of the same ethnic origin as an approach to enhancing the blurred image. A database of clear images was assembled containing 30 individuals, equally divided among five ethnicities: Arab, African, Chinese, European and Indian. Software was built to perform pre-processing on the images in order to align the features of the clear and blurred images. Features were extracted from a clear facial image, or from a template built from several clear facial images, using the wavelet transform, and were imposed on the blurred image using the inverse wavelet transform. The results of this approach were poor because the features did not all align: in most cases the eyes were aligned but the nose or mouth was not. In a second approach we dealt with the features separately, but in some cases a blocky effect appeared on the features because no closely matching features were available. In general, the small available database did not allow the goal results to be achieved because of the limited number of individuals. Colour information and feature similarity could be investigated further to achieve better results, given a larger database and closer matches within each ethnicity.
Evidence of a Shift from Featural to Configural Face Processing in Infancy
ERIC Educational Resources Information Center
Schwarzer, Gudrun; Zauner, Nicola; Jovanovic, Bianca
2007-01-01
Two experiments examined whether 4-, 6-, and 10-month-old infants process natural looking faces by feature, i.e. processing internal facial features independently of the facial context or holistically by processing the features in conjunction with the facial context. Infants were habituated to two faces and looking time was measured. After…
Santana, Sharlene E.; Dobson, Seth D.; Diogo, Rui
2014-01-01
Facial colour patterns and facial expressions are among the most important phenotypic traits that primates use during social interactions. While colour patterns provide information about the sender's identity, expressions can communicate its behavioural intentions. Extrinsic factors, including social group size, have shaped the evolution of facial coloration and mobility, but intrinsic relationships and trade-offs likely operate in their evolution as well. We hypothesize that complex facial colour patterning could reduce how salient facial expressions appear to a receiver, and thus species with highly expressive faces would have evolved uniformly coloured faces. We test this hypothesis through a phylogenetic comparative study, and explore the underlying morphological factors of facial mobility. Supporting our hypothesis, we find that species with highly expressive faces have plain facial colour patterns. The number of facial muscles does not predict facial mobility; instead, species that are larger and have a larger facial nucleus have more expressive faces. This highlights a potential trade-off between facial mobility and colour patterning in primates and reveals complex relationships between facial features during primate evolution. PMID:24850898
Characterization and recognition of mixed emotional expressions in thermal face image
NASA Astrophysics Data System (ADS)
Saha, Priya; Bhattacharjee, Debotosh; De, Barin K.; Nasipuri, Mita
2016-05-01
Facial expressions in infrared imaging have been introduced to solve the problem of illumination, which is an integral constituent of visual imagery. The paper investigates facial skin temperature distribution on mixed thermal facial expressions in our created face database, of which six are basic expressions and the remaining 12 are mixtures of those basic expressions. Temperature analysis has been performed on three facial regions of interest (ROIs): periorbital, supraorbital and mouth. Temperature variability of the ROIs in different expressions has been measured using statistical parameters. The temperature variation measurements in the ROIs of a particular expression form a vector, which is later used in recognition of mixed facial expressions. Investigations show that facial features in mixed facial expressions can be characterized by positive-emotion-induced facial features and negative-emotion-induced facial features. The supraorbital region is a useful facial region that can differentiate basic expressions from mixed expressions. Analysis and interpretation of mixed expressions have been conducted with the help of box-and-whisker plots. A facial region containing a mixture of two expressions generally induces less temperature change than the corresponding facial region in a basic expression.
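The per-ROI statistical vector described above can be sketched as follows (a minimal illustration, assuming mean, standard deviation and range as the statistical parameters; the temperature samples and ROI names mirror the abstract but the exact statistics used in the paper may differ):

```python
from statistics import mean, stdev

def roi_temperature_vector(roi_temps):
    """Build an expression descriptor from per-ROI temperature samples.

    roi_temps: dict mapping ROI name -> list of pixel temperatures (deg C).
    For each ROI we keep simple variability statistics (mean, std, range),
    and concatenate them into one vector per expression.
    """
    vector = []
    for roi in ("periorbital", "supraorbital", "mouth"):
        temps = roi_temps[roi]
        vector += [mean(temps), stdev(temps), max(temps) - min(temps)]
    return vector

# Hypothetical samples for one expression (3 ROIs x 3 statistics = 9-D vector)
sample = {
    "periorbital": [34.1, 34.5, 34.3],
    "supraorbital": [33.8, 34.0, 33.9],
    "mouth": [35.0, 35.4, 35.2],
}
vec = roi_temperature_vector(sample)
```

Vectors like this, one per expression, are what a recognizer would then compare.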
Contextual interference processing during fast categorisations of facial expressions.
Frühholz, Sascha; Trautmann-Lengsfeld, Sina A; Herrmann, Manfred
2011-09-01
We examined interference effects of emotionally associated background colours during fast valence categorisations of negative, neutral and positive expressions. According to implicitly learned colour-emotion associations, facial expressions were presented with colours that either matched the valence of these expressions or not. Experiment 1 included infrequent non-matching trials and Experiment 2 a balanced ratio of matching and non-matching trials. Besides general modulatory effects of contextual features on the processing of facial expressions, we found differential effects depending on the valence of target facial expressions. Whereas performance accuracy was mainly affected for neutral expressions, performance speed was specifically modulated by emotional expressions, indicating some susceptibility of emotional expressions to contextual features. Experiment 3 used two further colour-emotion combinations, but revealed only marginal interference effects, most likely due to missing colour-emotion associations. The results are discussed with respect to inherent processing demands of emotional and neutral expressions and their susceptibility to contextual interference.
What's in a "face file"? Feature binding with facial identity, emotion, and gaze direction.
Fitousi, Daniel
2017-07-01
A series of four experiments investigated the binding of facial (i.e., facial identity, emotion, and gaze direction) and non-facial (i.e., spatial location and response location) attributes. Evidence for the creation and retrieval of temporary memory face structures across perception and action has been adduced. These episodic structures, dubbed herein "face files", consisted of both visuo-visuo and visuo-motor bindings. Feature binding was indicated by partial-repetition costs. That is, repeating a combination of facial features, or altering them altogether, led to faster responses than repeating or altering only one of the features. Taken together, the results indicate that: (a) "face files" affect both action and perception mechanisms, (b) binding can take place with facial dimensions and is not restricted to low-level features (Hommel, Visual Cognition 5:183-216, 1998), and (c) the binding of facial and non-facial attributes is facilitated if the dimensions share common spatial or motor codes. The theoretical contributions of these results to "person construal" theories (Freeman, & Ambady, Psychological Science, 20(10), 1183-1188, 2011), as well as to face recognition models (Haxby, Hoffman, & Gobbini, Biological Psychiatry, 51(1), 59-67, 2000) are discussed.
Perceived Attractiveness, Facial Features, and African Self-Consciousness.
ERIC Educational Resources Information Center
Chambers, John W., Jr.; And Others
1994-01-01
Investigated relationships between perceived attractiveness, facial features, and African self-consciousness (ASC) among 149 African American college students. As predicted, high ASC subjects used more positive adjectives in descriptions of strong African facial features than did medium or low ASC subjects. Results are discussed in the context of…
Novel Noninvasive Brain Disease Detection System Using a Facial Image Sensor
Shu, Ting; Zhang, Bob; Tang, Yuan Yan
2017-01-01
Brain disease, including any condition or disability that affects the brain, is fast becoming a leading cause of death. The traditional diagnostic methods for brain disease are time-consuming, inconvenient and not patient friendly. As more and more individuals undergo examinations to determine whether they suffer from any form of brain disease, developing noninvasive, efficient, and patient-friendly detection systems will be beneficial. Therefore, in this paper, we propose a novel noninvasive brain disease detection system based on the analysis of facial colors. The system consists of four components. A facial image is first captured through a specialized sensor, and four facial key blocks are then located automatically within the various facial regions. Color features are extracted from each block to form a feature vector for classification via the Probabilistic Collaborative based Classifier. To thoroughly test the system and its performance, seven facial key block combinations were evaluated experimentally. The best result was achieved using the second facial key block, where it showed that the Probabilistic Collaborative based Classifier is the most suitable. The overall performance of the proposed system achieves an accuracy of 95%, a sensitivity of 94.33%, a specificity of 95.67%, and an average processing time (for one sample) of <1 min at brain disease detection. PMID:29292716
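The color-feature extraction step can be sketched as below (a minimal illustration assuming mean color per block as the feature; the paper's actual key-block localization and Probabilistic Collaborative based Classifier are not reproduced here, and the block coordinates are hypothetical):

```python
def block_color_features(image, blocks):
    """Extract mean-color features from facial key blocks.

    image: 2-D list of (r, g, b) pixel tuples.
    blocks: list of (row0, row1, col0, col1) rectangles marking key blocks.
    Returns one concatenated feature vector (3 values per block).
    """
    features = []
    for r0, r1, c0, c1 in blocks:
        n = (r1 - r0) * (c1 - c0)
        sums = [0, 0, 0]
        for row in image[r0:r1]:
            for px in row[c0:c1]:
                for ch in range(3):
                    sums[ch] += px[ch]
        features += [s / n for s in sums]
    return features

# Toy 4x4 uniform image and two hypothetical 2x2 key blocks
image = [[(10, 20, 30)] * 4 for _ in range(4)]
feats = block_color_features(image, [(0, 2, 0, 2), (2, 4, 2, 4)])
```

A classifier would then be trained on such vectors, one per facial image.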
[Endoscopic treatment of small osteoma of nasal sinuses manifested as nasal and facial pain].
Li, Yu; Zheng, Tianqi; Li, Zhong; Deng, Hongyuan; Guo, Chaoxian
2015-12-01
To discuss the clinical features, diagnosis and endoscopic surgical intervention for small osteoma of the nasal sinuses causing nasal and facial pain. A retrospective review was performed on 21 patients with nasal and facial pain caused by small osteoma of the nasal sinuses; nasal endoscopic surgery was included in the treatment of all cases. The nasal and facial pain of all the patients was relieved. Except for one case exhibiting periorbital bruising after the operation, the patients showed no postoperative complications. Nasal and facial pain caused by small osteoma of the nasal sinuses is clinically rare, mostly due to neuropathic pain of the nose and face caused by local compression resulting from the expansion of the osteoma. Early diagnosis and operative treatment can significantly relieve nasal and facial pain.
Signs of Facial Aging in Men in a Diverse, Multinational Study: Timing and Preventive Behaviors.
Rossi, Anthony M; Eviatar, Joseph; Green, Jeremy B; Anolik, Robert; Eidelman, Michael; Keaney, Terrence C; Narurkar, Vic; Jones, Derek; Kolodziejczyk, Julia; Drinkwater, Adrienne; Gallagher, Conor J
2017-11-01
Men are a growing patient population in aesthetic medicine and are increasingly seeking minimally invasive cosmetic procedures. To examine differences in the timing of facial aging and in the prevalence of preventive facial aging behaviors in men by race/ethnicity. Men aged 18 to 75 years in the United States, Canada, United Kingdom, and Australia rated their features using photonumeric rating scales for 10 facial aging characteristics. Impact of race/ethnicity (Caucasian, black, Asian, Hispanic) on severity of each feature was assessed. Subjects also reported the frequency of dermatologic facial product use. The study included 819 men. Glabellar lines, crow's feet lines, and nasolabial folds showed the greatest change with age. Caucasian men reported more severe signs of aging and earlier onset, by 10 to 20 years, compared with Asian, Hispanic, and, particularly, black men. In all racial/ethnic groups, most men did not regularly engage in basic, antiaging preventive behaviors, such as use of sunscreen. Findings from this study conducted in a globally diverse sample may guide clinical discussions with men about the prevention and treatment of signs of facial aging, to help men of all races/ethnicities achieve their desired aesthetic outcomes.
Variation of facial features among three African populations: Body height match analyses.
Taura, M G; Adamu, L H; Gudaji, A
2017-01-01
Body height is one of the variables that show a correlation with facial craniometry. Here we seek to discriminate three populations (Nigerians, Ugandans and Kenyans) using facial craniometry based on different categories of body height of adult males. A total of 513 individuals, comprising 234 Nigerians, 169 Ugandans and 110 Kenyans with a mean age of 25.27, s=5.13 (18-40 years), participated. Paired and unpaired facial features were measured using direct craniometry. Multivariate and stepwise discriminant function analyses were used for differentiation of the three populations. The result showed significant overall facial differences among the three populations in all the body height categories. Skull height, total facial height, outer canthal distance, exophthalmometry, right ear width and nasal length were significantly different among the three populations irrespective of body height category. Other variables were sensitive to body height. Stepwise discriminant function analyses included a maximum of six variables for better discrimination between the three populations. The single best discriminator of the groups was total facial height; however, for body height >1.70 m the single best discriminator was nasal length. Most of the variables were better used with function 1, hence giving better discrimination than function 2. In conclusion, adult body height, in addition to other factors such as age, sex, and ethnicity, should be considered in making decisions on facial craniometry. However, not all the facial linear dimensions were sensitive to body height. Copyright © 2016 Elsevier GmbH. All rights reserved.
Facial soft biometric features for forensic face recognition.
Tome, Pedro; Vera-Rodriguez, Ruben; Fierrez, Julian; Ortega-Garcia, Javier
2015-12-01
This paper proposes a functional feature-based approach useful for real forensic caseworks, based on the shape, orientation and size of facial traits, which can be considered as a soft biometric approach. The motivation of this work is to provide a set of facial features, which can be understood by non-experts such as judges and support the work of forensic examiners who, in practice, carry out a thorough manual comparison of face images paying special attention to the similarities and differences in shape and size of various facial traits. This new approach constitutes a tool that automatically converts a set of facial landmarks to a set of features (shape and size) corresponding to facial regions of forensic value. These features are furthermore evaluated in a population to generate statistics to support forensic examiners. The proposed features can also be used as additional information that can improve the performance of traditional face recognition systems. These features follow the forensic methodology and are obtained in a continuous and discrete manner from raw images. A statistical analysis is also carried out to study the stability, discrimination power and correlation of the proposed facial features on two realistic databases: MORPH and ATVS Forensic DB. Finally, the performance of both continuous and discrete features is analyzed using different similarity measures. Experimental results show high discrimination power and good recognition performance, especially for continuous features. A final fusion of the best systems configurations achieves rank 10 match results of 100% for ATVS database and 75% for MORPH database demonstrating the benefits of using this information in practice. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
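The landmark-to-feature conversion described above can be sketched as follows (a minimal illustration: landmark names and trait pairs are hypothetical, and the paper's shape and orientation features are not reproduced, only simple size features as straight-line distances):

```python
from math import dist  # Euclidean distance, Python 3.8+

def trait_size_features(landmarks, trait_pairs):
    """Convert facial landmarks into size features per facial trait.

    landmarks: dict of landmark name -> (x, y) coordinate.
    trait_pairs: dict of trait name -> (landmark_a, landmark_b); each trait
    size is the distance between its two landmarks.
    """
    return {trait: dist(landmarks[a], landmarks[b])
            for trait, (a, b) in trait_pairs.items()}

# Hypothetical landmarks in pixel coordinates
lm = {"eye_l_outer": (30, 50), "eye_l_inner": (60, 50),
      "nose_top": (75, 55), "nose_base": (75, 95)}
sizes = trait_size_features(lm, {
    "left_eye_width": ("eye_l_outer", "eye_l_inner"),
    "nose_length": ("nose_top", "nose_base"),
})
```

Features of this kind are interpretable by non-experts, which is the forensic motivation given in the abstract.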
Comparing Facial 3D Analysis With DNA Testing to Determine Zygosities of Twins.
Vuollo, Ville; Sidlauskas, Mantas; Sidlauskas, Antanas; Harila, Virpi; Salomskiene, Loreta; Zhurov, Alexei; Holmström, Lasse; Pirttiniemi, Pertti; Heikkinen, Tuomo
2015-06-01
The aim of this study was to compare facial 3D analysis to DNA testing in twin zygosity determinations. Facial 3D images of 106 pairs of young adult Lithuanian twins were taken with a stereophotogrammetric device (3dMD, Atlanta, Georgia) and zygosity was determined according to similarity of facial form. Statistical pattern recognition methodology was used for classification. The results showed that in 75% to 90% of the cases, zygosity determinations were similar to DNA-based results. There were 81 different classification scenarios, including 3 groups, 3 features, 3 different scaling methods, and 3 threshold levels. It appeared that coincidence with 0.5 mm tolerance is the most suitable feature for classification. Also, leaving out scaling improves results in most cases. Scaling was expected to equalize the magnitude of differences and therefore lead to better recognition performance. Still, better classification features and a more effective scaling method or classification in different facial areas could further improve the results. In most of the cases, male pair zygosity recognition was at a higher level compared with females. Erroneously classified twin pairs appear to be obvious outliers in the sample. In particular, faces of young dizygotic (DZ) twins may be so similar that it is very hard to define a feature that would help classify the pair as DZ. Correspondingly, monozygotic (MZ) twins may have faces with quite different shapes. Such anomalous twin pairs are interesting exceptions, but they form a considerable portion in both zygosity groups.
Reverse engineering the face space: Discovering the critical features for face identification.
Abudarham, Naphtali; Yovel, Galit
2016-01-01
How do we identify people? What are the critical facial features that define an identity and determine whether two faces belong to the same person or different people? To answer these questions, we applied the face space framework, according to which faces are represented as points in a multidimensional feature space, such that face space distances are correlated with perceptual similarities between faces. In particular, we developed a novel method that allowed us to reveal the critical dimensions (i.e., critical features) of the face space. To that end, we constructed a concrete face space, which included 20 facial features of natural face images, and asked human observers to evaluate feature values (e.g., how thick are the lips). Next, we systematically and quantitatively changed facial features, and measured the perceptual effects of these manipulations. We found that critical features were those for which participants have high perceptual sensitivity (PS) for detecting differences across identities (e.g., which of two faces has thicker lips). Furthermore, these high PS features vary minimally across different views of the same identity, suggesting high PS features support face recognition across different images of the same face. The methods described here set an infrastructure for discovering the critical features of other face categories not studied here (e.g., Asians, familiar) as well as other aspects of face processing, such as attractiveness or trait inferences.
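The face space idea above can be sketched concretely (a minimal illustration under the assumption that faces are points in a feature space and distance tracks perceptual dissimilarity; the feature names and ratings are hypothetical, and the paper's perceptual-sensitivity measurements are not reproduced):

```python
from math import sqrt

def face_space_distance(face_a, face_b, critical=None):
    """Euclidean distance between two faces in a concrete feature space.

    face_a, face_b: dicts of feature name -> rated value
    (e.g. "how thick are the lips" on some scale).
    critical: optional subset of feature names to restrict the distance to
    (e.g. only the high-perceptual-sensitivity features).
    """
    keys = critical if critical is not None else face_a.keys()
    return sqrt(sum((face_a[k] - face_b[k]) ** 2 for k in keys))

# Two hypothetical faces rated on three features
a = {"lip_thickness": 3.0, "eyebrow_thickness": 4.0, "skin_tone": 2.0}
b = {"lip_thickness": 6.0, "eyebrow_thickness": 8.0, "skin_tone": 2.0}
d_all = face_space_distance(a, b)
d_crit = face_space_distance(a, b, ["lip_thickness"])
```

Restricting the distance to critical features models the claim that identity judgments rest on a subset of high-sensitivity dimensions.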
Hemifacial microsomia in cat-eye syndrome: 22q11.1-q11.21 as candidate loci for facial symmetry.
Quintero-Rivera, Fabiola; Martinez-Agosto, Julian A
2013-08-01
Cat-Eye syndrome (CES; OMIM 115470), also known as chromosome 22 partial tetrasomy or inverted duplicated 22q11, was first reported by Haab [1879] based on the primary features of eye coloboma and anal atresia. However, >60% of patients lack these primary features. Here, we present a 9-month-old female who at birth was noted to have multiple defects, including facial asymmetry with asymmetric retrognathia, bilateral mandibular hypoplasia, branchial cleft sinus, right-sided muscular torticollis, esotropia, an atretic right ear canal with low-to-moderate sensorineural hearing loss, bilateral preauricular ear tags/pits, and two skin tags on her left cheek. There were no signs of any colobomas or anal atresia. Hemifacial microsomia (HFM) was suspected clinically. Chromosome studies and FISH identified an extra marker originating from 22q11 consistent with CES, and this was confirmed by aCGH. This report expands the phenotypic variability of CES and includes partial tetrasomy of 22q11.1-q11.21 in the differential diagnosis of HFM. In addition, our case, as well as the previous association of 22q11.2 deletions and duplications with facial asymmetry and features of HFM, supports the hypothesis that this chromosome region harbors genes important in the regulation of body plan symmetry, and in particular facial harmony. Copyright © 2013 Wiley Periodicals, Inc.
Recognition of children on age-different images: Facial morphology and age-stable features.
Caplova, Zuzana; Compassi, Valentina; Giancola, Silvio; Gibelli, Daniele M; Obertová, Zuzana; Poppa, Pasquale; Sala, Remo; Sforza, Chiarella; Cattaneo, Cristina
2017-07-01
The situation of missing children is one of the most emotional social issues worldwide. The search for and identification of missing children is often hampered, among other factors, by the fact that the facial morphology of long-term missing children changes as they grow. Nowadays, wide coverage by surveillance systems potentially provides image material for comparisons with images of missing children that may facilitate identification. The aim of the study was to identify whether facial features are stable in time and can be utilized for facial recognition by comparing facial images of children at different ages, and to test the possible use of moles in recognition. The study was divided into two phases: (1) morphological classification of facial features using an Anthropological Atlas; (2) an algorithm developed in MATLAB® R2014b for assessing the use of moles as age-stable features. The assessment of facial features by Anthropological Atlases showed high mismatch percentages among observers. On average, the mismatch percentages were lower for features describing shape than for those describing size. The nose tip cleft and the chin dimple showed the best agreement between observers regarding both categorization and stability over time. Using the position of moles as a reference point for recognition of the same person in age-different images appears to be a useful and objective method, and it can be concluded that moles represent age-stable facial features that may be considered for preliminary recognition. Copyright © 2017 The Chartered Society of Forensic Sciences. Published by Elsevier B.V. All rights reserved.
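Mole-position comparison across age-different images can be sketched as below (a minimal illustration, not the paper's MATLAB procedure: here mole coordinates are normalized by interocular distance so that overall facial growth cancels out, and a greedy tolerance match is used; all coordinates are hypothetical):

```python
from math import dist

def normalized_moles(moles, eye_l, eye_r):
    """Express mole positions relative to face geometry: translate to the
    midpoint between the eyes and scale by the interocular distance, so
    positions are comparable across images taken years apart."""
    mid = ((eye_l[0] + eye_r[0]) / 2, (eye_l[1] + eye_r[1]) / 2)
    iod = dist(eye_l, eye_r)
    return [((x - mid[0]) / iod, (y - mid[1]) / iod) for x, y in moles]

def moles_match(m1, m2, tol=0.05):
    """Greedy check: every normalized mole in m1 has a counterpart in m2
    within the tolerance."""
    return all(any(dist(p, q) <= tol for q in m2) for p in m1)

# Same hypothetical mole on a child image and a twice-as-large adult image
child = normalized_moles([(70, 80)], (40, 50), (80, 50))
adult = normalized_moles([(140, 160)], (80, 100), (160, 100))
matched = moles_match(child, adult)
```

After normalization the two mole positions coincide, illustrating why moles can serve as age-stable reference points.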
Person-independent facial expression analysis by fusing multiscale cell features
NASA Astrophysics Data System (ADS)
Zhou, Lubing; Wang, Han
2013-03-01
Automatic facial expression recognition is an interesting and challenging task. To achieve satisfactory accuracy, deriving a robust facial representation is especially important. We present a novel appearance-based feature, the multiscale cell local intensity increasing patterns (MC-LIIP), to represent facial images and conduct person-independent facial expression analysis. The LIIP uses a decimal number to encode the texture or intensity distribution around each pixel via pixel-to-pixel intensity comparison. To boost noise resistance, MC-LIIP carries out the comparison computation on the average values of scalable cells instead of individual pixels. The facial descriptor fuses region-based histograms of MC-LIIP features from various scales, so as to encode not only the textural microstructures but also the macrostructures of facial images. Finally, a support vector machine classifier is applied for expression recognition. Experimental results on the CK+ and Karolinska Directed Emotional Faces databases show the superiority of the proposed method.
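The cell-averaging idea behind MC-LIIP can be sketched as follows (a minimal illustration, not the paper's exact encoding: here an LBP-style code is computed on cell averages in a 3x3 grid of s-by-s cells, which captures the noise-resistance principle of comparing averaged cells rather than raw pixels):

```python
def cell_average(img, r0, c0, s):
    """Mean intensity of an s-by-s cell whose top-left corner is (r0, c0)."""
    vals = [img[r][c] for r in range(r0, r0 + s) for c in range(c0, c0 + s)]
    return sum(vals) / len(vals)

def cell_code(img, r0, c0, s):
    """Decimal code for a 3x3 grid of s-by-s cells: compare each of the
    8 surrounding cells with the center cell, one bit per comparison.
    With s=1 this reduces to an ordinary pixelwise LBP-style code."""
    center = cell_average(img, r0 + s, c0 + s, s)
    offsets = [(0, 0), (0, s), (0, 2 * s), (s, 2 * s),
               (2 * s, 2 * s), (2 * s, s), (2 * s, 0), (s, 0)]
    code = 0
    for bit, (dr, dc) in enumerate(offsets):
        if cell_average(img, r0 + dr, c0 + dc, s) >= center:
            code |= 1 << bit
    return code

# Toy 3x3 image, s=1: a dark center surrounded by bright pixels
img = [[9, 9, 9],
       [9, 5, 9],
       [9, 9, 9]]
code = cell_code(img, 0, 0, 1)
```

Histograms of such codes over facial regions, pooled across several cell sizes, form the kind of multiscale descriptor the abstract describes.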
Modeling 3D Facial Shape from DNA
Claes, Peter; Liberton, Denise K.; Daniels, Katleen; Rosana, Kerri Matthes; Quillen, Ellen E.; Pearson, Laurel N.; McEvoy, Brian; Bauchet, Marc; Zaidi, Arslan A.; Yao, Wei; Tang, Hua; Barsh, Gregory S.; Absher, Devin M.; Puts, David A.; Rocha, Jorge; Beleza, Sandra; Pereira, Rinaldo W.; Baynam, Gareth; Suetens, Paul; Vandermeulen, Dirk; Wagner, Jennifer K.; Boster, James S.; Shriver, Mark D.
2014-01-01
Human facial diversity is substantial, complex, and largely scientifically unexplained. We used spatially dense quasi-landmarks to measure face shape in population samples with mixed West African and European ancestry from three locations (United States, Brazil, and Cape Verde). Using bootstrapped response-based imputation modeling (BRIM), we uncover the relationships between facial variation and the effects of sex, genomic ancestry, and a subset of craniofacial candidate genes. The facial effects of these variables are summarized as response-based imputed predictor (RIP) variables, which are validated using self-reported sex, genomic ancestry, and observer-based facial ratings (femininity and proportional ancestry) and judgments (sex and population group). By jointly modeling sex, genomic ancestry, and genotype, the independent effects of particular alleles on facial features can be uncovered. Results on a set of 20 genes showing significant effects on facial features provide support for this approach as a novel means to identify genes affecting normal-range facial features and for approximating the appearance of a face from genetic markers. PMID:24651127
Young Children's Ability to Match Facial Features Typical of Race.
ERIC Educational Resources Information Center
Lacoste, Ronald J.
This study examined (1) the ability of 3- and 4-year-old children to racially classify Negro and Caucasian facial features in the absence of skin color as a racial cue; and (2) the relative value attached to the facial features of Negro and Caucasian races. Subjects were 21 middle income, Caucasian children from a privately owned nursery school in…
NASA Astrophysics Data System (ADS)
Sultana, Maryam; Bhatti, Naeem; Javed, Sajid; Jung, Soon Ki
2017-09-01
Facial expression recognition (FER) is an important task for various computer vision applications. The task becomes challenging when it requires the detection and encoding of macro- and micropatterns of facial expressions. We present a two-stage texture feature extraction framework based on the local binary pattern (LBP) variants and evaluate its significance in recognizing posed and nonposed facial expressions. We focus on the parametric limitations of the LBP variants and investigate their effects for optimal FER. The size of the local neighborhood is an important parameter of the LBP technique for its extraction in images. To make the LBP adaptive, we exploit the granulometric information of the facial images to find the local neighborhood size for the extraction of center-symmetric LBP (CS-LBP) features. Our two-stage texture representations consist of an LBP variant and the adaptive CS-LBP features. Among the presented two-stage texture feature extractions, the binarized statistical image features and adaptive CS-LBP features were found showing high FER rates. Evaluation of the adaptive texture features shows competitive and higher performance than the nonadaptive features and other state-of-the-art approaches, respectively.
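The CS-LBP operator mentioned above can be sketched for a single pixel (the standard center-symmetric LBP definition; the adaptive neighborhood-size selection from granulometric information described in the abstract is not reproduced here):

```python
def cs_lbp(img, r, c, t=0.0):
    """Center-symmetric LBP code for pixel (r, c).

    The 8 neighbors are paired with their diagonally opposite counterparts;
    each of the 4 pairs contributes one bit, so codes range over 0..15.
    t is a small threshold making the comparison robust to flat regions.
    """
    n = [img[r-1][c-1], img[r-1][c], img[r-1][c+1], img[r][c+1],
         img[r+1][c+1], img[r+1][c], img[r+1][c-1], img[r][c-1]]
    code = 0
    for i in range(4):
        if n[i] - n[i + 4] > t:  # compare opposing neighbors
            code |= 1 << i
    return code

# Toy 3x3 patch: bright top corners vs. dark bottom corners
patch = [[9, 2, 9],
         [2, 5, 2],
         [1, 2, 1]]
code = cs_lbp(patch, 1, 1)
```

Because it compares only opposing pairs, CS-LBP yields 16 codes instead of LBP's 256, which keeps the resulting histograms compact.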
FaceTOON: a unified platform for feature-based cartoon expression generation
NASA Astrophysics Data System (ADS)
Zaharia, Titus; Marre, Olivier; Prêteux, Françoise; Monjaux, Perrine
2008-02-01
This paper presents the FaceTOON system, a semi-automatic platform dedicated to the creation of verbal and emotional facial expressions, within the applicative framework of 2D cartoon production. The proposed FaceTOON platform makes it possible to rapidly create 3D facial animations with a minimum amount of user interaction. In contrast with existing commercial 3D modeling software, which usually requires advanced 3D graphics skills and competences from users, the FaceTOON system is based exclusively on 2D interaction mechanisms, the 3D modeling stage being completely transparent to the user. The system takes as input a neutral 3D face model, free of any facial feature, and a set of 2D drawings representing the desired facial features. A 2D/3D virtual mapping procedure makes it possible to obtain a ready-for-animation model which can be directly manipulated and deformed for generating expressions. The platform includes a complete set of dedicated tools for 2D/3D interactive deformation, pose management, key-frame interpolation and MPEG-4 compliant animation and rendering. The proposed FaceTOON system is currently being considered for industrial evaluation and commercialization by the Quadraxis company.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goodship, J.; Lynch, S.; Brown, J.
1994-09-01
DiGeorge syndrome (DGS) is a congenital anomaly consisting of cardiac defects, aplasia or hypoplasia of the thymus and parathyroid glands, and dysmorphic facial features. The majority of DGS cases have a submicroscopic deletion within chromosome 22q11. However, there have been a number of reports of DGS in association with other chromosomal abnormalities, including four cases with chromosome 10p deletions. We describe a further 10p deletion case and suggest that the facial features in children with DGS due to deletions of 10p are different from those associated with chromosome 22 deletions. The propositus was born at 39 weeks gestation to unrelated caucasian parents, birth weight 2580 g (10th centile), and was noted to be dysmorphic and cyanosed shortly after birth. The main dysmorphic facial features were a broad nasal bridge with very short palpebral fissures. Echocardiography revealed a large subaortic VSD and overriding aorta. She had a low ionised calcium and low parathyroid hormone level. T cell subsets and PHA response were normal. Abdominal ultrasound showed duplex kidneys, and on further investigation she was found to have reflux and raised plasma creatinine. She had an anteriorly placed anus. Her karyotype was 46,XX,-10,+der(10)t(3;10)(p23;p13)mat. The dysmorphic facial features in this baby are strikingly similar to those noted by Bridgeman and Butler in a child with DGS resulting from a 10p deletion, and distinct from the face seen in children with DiGeorge syndrome resulting from interstitial chromosome 22 deletions.
The development of automated behavior analysis software
NASA Astrophysics Data System (ADS)
Jaana, Yuki; Prima, Oky Dicky A.; Imabuchi, Takashi; Ito, Hisayoshi; Hosogoe, Kumiko
2015-03-01
The measurement of behavior for participants in a conversation scene involves verbal and nonverbal communication. Measurement validity may vary across observers owing to factors such as human error, poorly designed measurement systems, and inadequate observer training. Although some systems have been introduced in previous studies to measure behaviors automatically, these systems prevent participants from talking in a natural way. In this study, we propose a software application to automatically analyze the behaviors of participants, including utterances, facial expressions (happy or neutral), head nods, and poses, using only a single omnidirectional camera. The camera is small enough to be embedded into a table, allowing participants to have spontaneous conversations. The proposed software utilizes facial feature tracking based on a constrained local model to observe changes in the facial features captured by the camera, and the Japanese Female Facial Expression database to recognize expressions. Our experimental results show significant correlations between measurements made by human observers and those made by the software.
Yovel, Galit
2009-11-01
It is often argued that picture-plane face inversion impairs discrimination of the spacing among face features to a greater extent than the identity of the facial features. However, several recent studies have reported similar inversion effects for both types of face manipulations. In a recent review, Rossion (2008) claimed that similar inversion effects for spacing and features are due to methodological and conceptual shortcomings and that data still support the idea that inversion impairs the discrimination of features less than that of the spacing among them. Here I will claim that when facial features differ primarily in shape, the effect of inversion on features is not smaller than the one on spacing. It is when color/contrast information is added to facial features that the inversion effect on features decreases. This obvious observation accounts for the discrepancy in the literature and suggests that the large inversion effect that was found for features that differ in shape is not a methodological artifact. These findings together with other data that are discussed are consistent with the idea that the shape of facial features and the spacing among them are integrated rather than dissociated in the holistic representation of faces.
Brielmann, Aenne A; Bülthoff, Isabelle; Armann, Regine
2014-07-01
Race categorization of faces is a fast and automatic process and is known to affect further face processing profoundly and at the earliest stages. Whether processing of own- and other-race faces might rely on different facial cues, as indicated by diverging viewing behavior, is much under debate. We therefore aimed to investigate two open questions in our study: (1) Do observers consider information from distinct facial features informative for race categorization, or do they prefer to gain global face information by fixating the geometrical center of the face? (2) Does the fixation pattern, or, if facial features are considered relevant, do these features differ between own- and other-race faces? We used eye tracking to test where European observers look when viewing Asian and Caucasian faces in a race categorization task. Importantly, in order to disentangle centrally located fixations from those towards individual facial features, we presented faces in frontal, half-profile and profile views. We found that observers showed no general bias towards looking at the geometrical center of faces, but rather directed their first fixations towards distinct facial features, regardless of face race. However, participants looked at the eyes more often in Caucasian faces than in Asian faces, and there were significantly more fixations to the nose for Asian compared to Caucasian faces. Thus, observers rely on information from distinct facial features rather than facial information gained by centrally fixating the face. To what extent specific features are looked at is determined by the face's race. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.
Doğan, Özlem Akgün; Şimşek Kiper, Pelin Özlem; Utine, Gülen Eda; Alikaşifoğlu, Mehmet; Boduroğlu, Koray
2017-03-01
Williams syndrome (OMIM #194050) is a rare, well-recognized, multisystemic genetic condition affecting approximately 1/7,500 individuals. There are no marked regional differences in the incidence of Williams syndrome. The syndrome is caused by a hemizygous deletion of approximately 28 genes, including ELN on chromosome 7q11.2. Prenatal-onset growth retardation, distinct facial appearance, cardiovascular abnormalities, and unique hypersocial behavior are among the most common clinical features. Here, we report the case of a patient referred to us with distinct facial features and intellectual disability, who was diagnosed with Williams syndrome at the age of 37 years. Our aim is to increase awareness regarding the diagnostic features and complications of this recognizable syndrome among adult health care providers. Williams syndrome is usually diagnosed during infancy or childhood, but in the absence of classical findings, such as cardiovascular anomalies, hypercalcemia, and cognitive impairment, the diagnosis could be delayed. Due to the multisystemic and progressive nature of the syndrome, accurate diagnosis is critical for appropriate care and screening for the associated morbidities that may affect the patient's health and well-being.
Faqeih, Eissa; Al-Akash, Samhar I; Sakati, Nadia; Teebi, Ahmad S
2007-09-01
We report on four siblings (three males, one female) born to first cousin Arab parents with the constellation of distal renal tubular acidosis (RTA), small kidneys, nephrocalcinosis, neurobehavioral impairment, short stature, and distinctive facial features. They presented with early developmental delay with subsequent severe mental, behavioral and social impairment and autistic-like features. Their facial features are unique, with prominent cheeks, well-defined philtrum, large bulbous nose, V-shaped upper lip border, full lower lip, open mouth with protruded tongue, and pits on the ear lobule. All had proteinuria, hypercalciuria, hypercalcemia, and normal anion-gap metabolic acidosis. Renal ultrasound examinations revealed small kidneys with varying degrees of hyperechogenicity and nephrocalcinosis. Additional findings included dilated ventricles and cerebral demyelination on brain imaging studies. Other than distal RTA, common causes of nephrocalcinosis were excluded. The constellation of features in this family most likely represents a new autosomal recessive syndrome, providing further evidence of the heterogeneity of nephrocalcinosis syndromes. Copyright 2007 Wiley-Liss, Inc.
Automatic facial animation parameters extraction in MPEG-4 visual communication
NASA Astrophysics Data System (ADS)
Yang, Chenggen; Gong, Wanwei; Yu, Lu
2002-01-01
Facial Animation Parameters (FAPs) are defined in MPEG-4 to animate a facial object. The algorithm proposed in this paper to extract these FAPs is applied to very low bit-rate video communication, in which the scene is composed of a head-and-shoulder object with a complex background. This paper addresses an algorithm that automatically extracts all FAPs needed to animate a generic facial model and estimates the 3D head motion from corresponding points. The proposed algorithm extracts the human facial region by color segmentation and intra-frame and inter-frame edge detection. Facial structure and the edge distribution of facial features, such as vertical and horizontal gradient histograms, are used to locate the facial feature regions. Parabola and circle deformable templates are employed to fit facial features and extract a part of the FAPs. A special data structure is proposed to describe the deformable templates, reducing the time needed to compute the energy functions. The remaining FAPs, the 3D rigid head motion vectors, are estimated by the corresponding-points method. A 3D head wire-frame model provides facial semantic information for the selection of proper corresponding points, which helps to increase the accuracy of 3D rigid object motion estimation.
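The vertical and horizontal gradient histograms used for feature localization can be sketched as follows. This is an illustrative reconstruction, not the authors' code: edge energy is projected onto rows and columns, and peaks suggest the rows and columns of high-contrast features such as the eyes and mouth.

```python
import numpy as np

def gradient_histograms(gray):
    """Project gradient magnitude onto rows and columns.

    gray: 2D array of image intensities. Peaks in the row histogram
    suggest the vertical positions of eyes and mouth; the column
    histogram helps bound the lateral extent of features.
    """
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)
    row_hist = mag.sum(axis=1)   # edge energy per row
    col_hist = mag.sum(axis=0)   # edge energy per column
    return row_hist, col_hist
```

On a synthetic image containing a single dark horizontal stripe, the row histogram peaks at the stripe, which is the behavior the localization step relies on.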
ERIC Educational Resources Information Center
Denmark, Tanya; Atkinson, Joanna; Campbell, Ruth; Swettenham, John
2014-01-01
Facial expressions in sign language carry a variety of communicative features. While emotion can modulate a spoken utterance through changes in intonation, duration and intensity, in sign language specific facial expressions presented concurrently with a manual sign perform this function. When deaf adult signers cannot see facial features, their…
Oral-facial-digital syndrome type 1 with hypothalamic hamartoma and Dandy-Walker malformation.
Azukizawa, Takayuki; Yamamoto, Masahito; Narumiya, Seirou; Takano, Tomoyuki
2013-04-01
We report a 1-year-old girl with oral-facial-digital syndrome type 1 with multiple malformations of the oral cavity, face, digits, and central nervous system, including agenesis of the corpus callosum, the presence of intracerebral cysts, and agenesis of the cerebellar vermis, which is associated with the subarachnoid space separating the medial sides of the cerebellar hemispheres. This child also had a hypothalamic hamartoma and a Dandy-Walker malformation, which have not been reported previously. The clinical features, including cerebral malformations, in several types of oral-facial-digital syndrome, overlap with each other. Further accumulation of new case reports and identification of new genetic mutations in oral-facial-digital syndrome may provide novel and important insights into the genetic mechanisms of this syndrome. Copyright © 2013 Elsevier Inc. All rights reserved.
Automated diagnosis of fetal alcohol syndrome using 3D facial image analysis
Fang, Shiaofen; McLaughlin, Jason; Fang, Jiandong; Huang, Jeffrey; Autti-Rämö, Ilona; Fagerlund, Åse; Jacobson, Sandra W.; Robinson, Luther K.; Hoyme, H. Eugene; Mattson, Sarah N.; Riley, Edward; Zhou, Feng; Ward, Richard; Moore, Elizabeth S.; Foroud, Tatiana
2012-01-01
Objectives: Use three-dimensional (3D) facial laser scanned images from children with fetal alcohol syndrome (FAS) and controls to develop an automated diagnosis technique that can reliably and accurately identify individuals prenatally exposed to alcohol. Methods: A detailed dysmorphology evaluation, history of prenatal alcohol exposure, and 3D facial laser scans were obtained from 149 individuals (86 FAS; 63 control) recruited from two study sites (Cape Town, South Africa, and Helsinki, Finland). Computer graphics, machine learning, and pattern recognition techniques were used to automatically identify a set of facial features that best discriminated individuals with FAS from controls in each sample. Results: An automated feature detection and analysis technique was developed and applied to the two study populations. A unique set of facial regions and features was identified for each population that accurately discriminated FAS and control faces without any human intervention. Conclusion: Our results demonstrate that computer algorithms can be used to automatically detect facial features that can discriminate FAS and control faces. PMID:18713153
Three-dimensional analysis of facial morphology.
Liu, Yun; Kau, Chung How; Talbert, Leslie; Pan, Feng
2014-09-01
The objectives of this study were to evaluate sexual dimorphism for facial features within Chinese and African American populations and to compare the facial morphology by sex between these 2 populations. Three-dimensional facial images were acquired by using the portable 3dMDface System, which captured 189 subjects from 2 population groups of Chinese (n = 72) and African American (n = 117). Each population was categorized into male and female groups for evaluation. All subjects in the groups were aged between 18 and 30 years and had no apparent facial anomalies. A total of 23 anthropometric landmarks were identified on the three-dimensional faces of each subject. Twenty-one measurements in 4 regions, including 19 distances and 2 angles, were not only calculated but also compared within and between the Chinese and African American populations. The Student's t-test was used to analyze each data set obtained within each subgroup. Distinct facial differences were presented between the examined subgroups. When comparing the sex differences of facial morphology in the Chinese population, significant differences were noted in 71.43% of the parameters calculated, and the same proportion was found in the African American group. The facial morphologic differences between the Chinese and African American populations were evaluated by sex. The proportion of significant differences in the parameters calculated was 90.48% for females and 95.24% for males between the 2 populations. The African American population had a more convex profile and greater face width than those of the Chinese population. Sexual dimorphism for facial features was presented in both the Chinese and African American populations. In addition, there were significant differences in facial morphology between these 2 populations.
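The Student's t-test comparison of an anthropometric measurement between two groups, as used in the study above, looks like this in outline. The measurement name, sample values, and group sizes are invented for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical intercanthal-width samples (mm) for two groups,
# with group sizes matching the study's n = 72 and n = 117.
group_a = rng.normal(33.0, 2.0, size=72)
group_b = rng.normal(35.5, 2.0, size=117)

# Student's t-test (equal variances assumed, as in the paper).
t_stat, p_value = stats.ttest_ind(group_a, group_b, equal_var=True)
significant = p_value < 0.05
```

Repeating this per measurement and counting the fraction with `significant == True` yields proportions like the 71.43% and 90.48% reported above.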
In-the-wild facial expression recognition in extreme poses
NASA Astrophysics Data System (ADS)
Yang, Fei; Zhang, Qian; Zheng, Chi; Qiu, Guoping
2018-04-01
Facial expression recognition is an active research problem in computer vision. In recent years, research has moved from the lab environment to in-the-wild conditions, which are challenging, especially under extreme poses. Most current expression recognition systems try to factor out pose effects in order to gain general applicability. In this work, we take the opposite approach: we consider the head pose explicitly and detect expressions within specific head poses. Our work includes two parts: detecting the head pose and grouping it into one of several pre-defined pose classes, and recognizing the facial expression within each pose class. Our experiments show that recognition with pose-class grouping performs much better than direct recognition that ignores pose. We combine hand-crafted features (SIFT, LBP, and geometric features) with deep learning features to represent the expressions; the hand-crafted features are fed into the deep learning framework along with the high-level deep features. For comparison, we implement SVM and random forest as the prediction models. To train and test our methodology, we labeled a face dataset with the 6 basic expressions.
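The two-stage pipeline (pose grouping, then per-pose expression recognition) can be sketched as follows. A nearest-centroid classifier stands in for the paper's SVM/random-forest models, and all names are hypothetical.

```python
import numpy as np

class PoseAwareExpressionRecognizer:
    """Group samples by head-pose class, then recognize expressions
    with a separate (nearest-centroid) model per pose class."""

    def __init__(self, pose_of):
        self.pose_of = pose_of    # callable: feature vector -> pose class
        self.centroids = {}       # (pose, expression) -> class centroid

    def fit(self, features, expressions):
        poses = np.array([self.pose_of(f) for f in features])
        for p in np.unique(poses):
            for e in np.unique(expressions[poses == p]):
                mask = (poses == p) & (expressions == e)
                self.centroids[(p, e)] = features[mask].mean(axis=0)
        return self

    def predict_one(self, feature):
        p = self.pose_of(feature)
        # Only the expert for this pose class is consulted.
        candidates = {e: c for (q, e), c in self.centroids.items() if q == p}
        return min(candidates,
                   key=lambda e: np.linalg.norm(feature - candidates[e]))
```

The same structure carries over unchanged if the per-pose expert is an SVM or random forest trained on concatenated hand-crafted and deep features.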
A Real-Time Interactive System for Facial Makeup of Peking Opera
NASA Astrophysics Data System (ADS)
Cai, Feilong; Yu, Jinhui
In this paper we present a real-time interactive system for creating the facial makeup of Peking Opera. First, we analyze the process of drawing facial makeup and the characteristics of the patterns used in it, and then construct an SVG pattern bank based on local features such as the eyes, nose, and mouth. Next, we pick some SVG patterns from the pattern bank and compose them to make a new facial makeup. We offer a vector-based free-form deformation (FFD) tool to edit the patterns and, based on this editing, our system automatically creates texture maps for a template head model. Finally, the facial makeup is rendered on the 3D head model in real time. Our system offers flexibility in designing and synthesizing various 3D facial makeups. Potential applications of the system include decoration design, digital museum exhibition, and education about Peking Opera.
Facial Asymmetry-Based Age Group Estimation: Role in Recognizing Age-Separated Face Images.
Sajid, Muhammad; Taj, Imtiaz Ahmad; Bajwa, Usama Ijaz; Ratyal, Naeem Iqbal
2018-04-23
Face recognition aims to establish the identity of a person based on facial characteristics. On the other hand, age group estimation is the automatic calculation of an individual's age range based on facial features. Recognizing age-separated face images is still a challenging research problem due to complex aging processes involving different types of facial tissue: skin, fat, muscles, and bones. Certain holistic and local facial features are used to recognize age-separated face images. However, most of the existing methods recognize face images without incorporating the knowledge learned from age group estimation. In this paper, we propose an age-assisted face recognition approach to handle aging variations. Inspired by the observation that facial asymmetry is an age-dependent intrinsic facial feature, we first use asymmetric facial dimensions to estimate the age group of a given face image. Deeply learned asymmetric facial features are then extracted for face recognition using a deep convolutional neural network (dCNN). Finally, we integrate the knowledge learned from the age group estimation into the face recognition algorithm using the same dCNN. This integration results in a significant improvement in the overall performance compared to using the face recognition algorithm alone. The experimental results on two large facial aging datasets, the MORPH and FERET sets, show that the proposed age-group-estimation-assisted face recognition approach yields superior performance compared to some existing state-of-the-art methods. © 2018 American Academy of Forensic Sciences.
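A simple landmark-based asymmetry measure of the kind that could feed age-group estimation might look like the sketch below. It is an illustration, not the paper's asymmetric facial dimensions: each right-side landmark is reflected across the facial midline and compared with its left-side counterpart.

```python
import numpy as np

def asymmetry_score(landmarks, pairs, midline_x):
    """Average left-right discrepancy of mirrored landmark pairs.

    landmarks: (n, 2) array of (x, y) points.
    pairs: list of (left_idx, right_idx) mirrored landmark pairs.
    midline_x: x-coordinate of the facial midline.
    A perfectly symmetric face scores 0; the score grows with asymmetry.
    """
    diffs = []
    for li, ri in pairs:
        rx, ry = landmarks[ri]
        # Reflect the right landmark across the midline and compare.
        mirrored = np.array([2 * midline_x - rx, ry])
        diffs.append(np.linalg.norm(np.asarray(landmarks[li]) - mirrored))
    return float(np.mean(diffs))
```

Scores computed over many landmark pairs could then be binned into age groups, since asymmetry tends to increase with age.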
NASA Astrophysics Data System (ADS)
Lee, Juhun; Ku, Brian; Combs, Patrick D.; Da Silveira, Adriana C.; Markey, Mia K.
2017-06-01
Cleft lip with or without cleft palate (CL ± P) is one of the most common congenital facial deformities worldwide. To minimize negative social consequences of CL ± P, reconstructive surgery is conducted to modify the face to a more normal appearance. Each race/ethnic group requires its own facial norm data, yet there are no existing facial norm data for Hispanic/Latino White children. The objective of this paper is to identify measures of facial appearance relevant for planning reconstructive surgery for CL ± P of Hispanic/Latino White children. Quantitative analysis was conducted on 3D facial images of 82 (41 girls, 41 boys) healthy Hispanic/Latino White children whose ages ranged from 7 to 12 years. Twenty-eight facial anthropometric features related to CL ± P (mainly in the nasal and mouth area) were measured from 3D facial images. In addition, facial aesthetic ratings were obtained from 16 non-clinical observers for the same 3D facial images using a 7-point Likert scale. Pearson correlation analysis was conducted to find features that were correlated with the panel ratings of observers. Boys with a longer face and nose, or thicker upper and lower lips are considered more attractive than others while girls with a less curved middle face contour are considered more attractive than others. Associated facial landmarks for these features are primary focus areas for reconstructive surgery for CL ± P. This study identified anthropometric measures of facial features of Hispanic/Latino White children that are pertinent to CL ± P and which correlate with the panel attractiveness ratings.
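The Pearson correlation between a facial measurement and mean panel ratings, as used in the analysis above, can be computed as follows. All values are invented for illustration; `np.corrcoef` is shown here, while `scipy.stats.pearsonr` would additionally return a p-value.

```python
import numpy as np

# Hypothetical data: one anthropometric measure (e.g. nose length, mm)
# and the mean panel attractiveness rating for the same subjects.
nose_length = np.array([38.1, 40.2, 41.5, 39.0, 42.3, 43.1, 40.8, 44.0])
mean_rating = np.array([3.2, 3.8, 4.1, 3.4, 4.5, 4.6, 4.0, 4.9])

# Pearson correlation coefficient between the measure and the ratings.
r = float(np.corrcoef(nose_length, mean_rating)[0, 1])
```

Measures with a strong, significant correlation would then be flagged as focus areas for reconstructive surgery planning.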
Subject-specific and pose-oriented facial features for face recognition across poses.
Lee, Ping-Han; Hsu, Gee-Sern; Wang, Yun-Wen; Hung, Yi-Ping
2012-10-01
Most face recognition scenarios assume that frontal faces or mug shots are available for enrollment in the database, while faces of other poses are collected in the probe set. Given a face from the probe set, one needs to determine whether a match in the database exists. This reflects the assumption that in forensic applications most suspects have their mug shots available in the database, and face recognition aims at recognizing the suspects when their faces are captured in various poses by a surveillance camera. This paper considers a different scenario: given a face with multiple poses available, which may or may not include a mug shot, develop a method to recognize the face in poses different from those captured. That is, given two disjoint sets of poses of a face, one for enrollment and the other for recognition, this paper reports a method best suited for handling such cases. The proposed method includes feature extraction and classification. For feature extraction, we first cluster the poses of each subject's face in the enrollment set into a few pose classes and then decompose the appearance of the face in each pose class using an Embedded Hidden Markov Model, which allows us to define a set of subject-specific and pose-oriented (SSPO) facial components for each subject. For classification, an AdaBoost weighting scheme is used to fuse the component classifiers with SSPO component features. The proposed method is shown to outperform other approaches, including a component-based classifier with local facial features cropped manually, in an extensive performance evaluation study.
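The AdaBoost-style fusion of component classifiers can be sketched as a weighted sum of per-component scores. This is a generic illustration of boosting-weighted fusion, not the paper's exact scheme; the weight formula in the comment is the standard AdaBoost choice.

```python
import numpy as np

def fuse_component_scores(component_scores, alphas):
    """Combine per-component classifier scores with boosting-style weights.

    component_scores: (n_components, n_classes) array, each row holding
    one component classifier's scores for every identity class.
    alphas: per-component weights; in AdaBoost these come from each
    component's training error, alpha = 0.5 * log((1 - err) / err).
    Returns the index of the winning identity class.
    """
    scores = np.asarray(component_scores, dtype=float)
    alphas = np.asarray(alphas, dtype=float)
    fused = (alphas[:, None] * scores).sum(axis=0)
    return int(np.argmax(fused))
```

A reliable component (large alpha) thus dominates the decision, while weak components contribute little.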
Dermatoscopic features of cutaneous non-facial non-acral lentiginous growth pattern melanomas
Keir, Jeff
2014-01-01
Background: The dermatoscopic features of facial lentigo maligna (LM), facial lentigo maligna melanoma (LMM) and acral lentiginous melanoma (ALM) have been well described. This is the first description of the dermatoscopic appearance of a clinical series of cutaneous non-facial non-acral lentiginous growth pattern melanomas. Objective: To describe the dermatoscopic features of a series of cutaneous non-facial non-acral lentiginous growth pattern melanomas in an Australian skin cancer practice. Method: Single observer retrospective analysis of dermatoscopic images of a one-year series of cutaneous non-facial, non-acral melanomas reported as having a lentiginous growth pattern detected in an open access primary care skin cancer clinic in Australia. Lesions were scored for presence of classical criteria for facial LM; modified pattern analysis (“Chaos and Clues”) criteria; and the presence of two novel criteria: a lentigo-like pigment pattern lacking a lentigo-like border, and large polygons. Results: 20 melanomas occurring in 14 female and 6 male patients were included. Average patient age was 64 years (range: 44–83). Lesion distribution was: trunk 35%; upper limb 40%; and lower limb 25%. The incidences of criteria identified were: asymmetry of color or pattern (100%); lentigo-like pigment pattern lacking a lentigo-like border (90%); asymmetrically pigmented follicular openings (APFO’s) (70%); grey blue structures (70%); large polygons (45%); eccentric structureless area (15%); bright white lines (5%). 20% of the lesions had only the novel criteria and/or APFO’s. Limitations: Single observer, single center retrospective study. Conclusions: Cutaneous non-facial non-acral melanomas with a lentiginous growth pattern may have none or very few traditional criteria for the diagnosis of melanoma. 
Criteria that are logically expected in lesions with a lentiginous growth pattern (lentigo-like pigment pattern lacking a lentigo-like border, APFO’s) and the novel criterion of large polygons may be useful in increasing sensitivity and specificity of diagnosis of these lesions. Further study is required to establish the significance of these observations. PMID:24520520
Research on facial expression simulation based on depth image
NASA Astrophysics Data System (ADS)
Ding, Sha-sha; Duan, Jin; Zhao, Yi-wu; Xiao, Bo; Wang, Hao
2017-11-01
Nowadays, facial expression simulation is widely used in film and television special effects, human-computer interaction, and many other fields. Facial expression is captured with a Kinect camera. An AAM algorithm based on statistical information is employed to detect and track faces, and a 2D regression algorithm is applied to align the feature points. Facial feature points are detected automatically, while the 3D cartoon model feature points are marked manually. The aligned feature points are mapped using keyframe techniques. To improve the animation effect, non-feature points are interpolated based on empirical models, and the mapping and interpolation are constrained by Bézier curves. Thus the feature points on the cartoon face model can be driven as the facial expression varies, achieving real-time cartoon facial expression simulation. The experimental results show that the proposed method can accurately simulate facial expressions. Finally, our method is compared with a previous method; the data show that our method greatly improves implementation efficiency.
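The Bézier-curve constraint on the interpolation relies on evaluating curves such as the cubic Bézier below. This is the standard de Casteljau/Bernstein formula, shown independently of the paper's implementation.

```python
import numpy as np

def cubic_bezier(p0, p1, p2, p3, t):
    """Evaluate a cubic Bézier curve at parameter t in [0, 1].

    p0..p3 are 2D control points: p0 and p3 are interpolated endpoints,
    while p1 and p2 shape the curve between them.
    """
    p0, p1, p2, p3 = map(np.asarray, (p0, p1, p2, p3))
    u = 1.0 - t
    # Bernstein-basis form of the cubic Bézier curve.
    return u**3 * p0 + 3 * u**2 * t * p1 + 3 * u * t**2 * p2 + t**3 * p3
```

Constraining interpolated non-feature points to lie on such curves keeps the deforming cartoon contours smooth between keyframes.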
Donato, Gianluca; Bartlett, Marian Stewart; Hager, Joseph C.; Ekman, Paul; Sejnowski, Terrence J.
2010-01-01
The Facial Action Coding System (FACS) [23] is an objective method for quantifying facial movement in terms of component actions. This system is widely used in behavioral investigations of emotion, cognitive processes, and social interaction. The coding is presently performed by highly trained human experts. This paper explores and compares techniques for automatically recognizing facial actions in sequences of images. These techniques include analysis of facial motion through estimation of optical flow; holistic spatial analysis, such as principal component analysis, independent component analysis, local feature analysis, and linear discriminant analysis; and methods based on the outputs of local filters, such as Gabor wavelet representations and local principal components. Performance of these systems is compared to naive and expert human subjects. Best performances were obtained using the Gabor wavelet representation and the independent component representation, both of which achieved 96 percent accuracy for classifying 12 facial actions of the upper and lower face. The results provide converging evidence for the importance of using local filters, high spatial frequencies, and statistical independence for classifying facial actions. PMID:21188284
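A Gabor wavelet representation of the kind that performed best above can be sketched with a real-valued Gabor kernel and its response at a single image location. The parameter values are illustrative; a full representation would use a bank of kernels at several orientations and spatial frequencies.

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    """Real part of a Gabor kernel: a sinusoid windowed by a Gaussian.

    size: odd kernel height/width; wavelength: carrier period in pixels;
    theta: orientation in radians; sigma: Gaussian envelope width.
    """
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate coordinates so the carrier runs along orientation theta.
    x_theta = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    carrier = np.cos(2.0 * np.pi * x_theta / wavelength)
    return envelope * carrier

def gabor_response(image, kernel, cy, cx):
    """Filter response at one location: correlate the kernel with the
    image patch centred on (cy, cx)."""
    half = kernel.shape[0] // 2
    patch = image[cy - half:cy + half + 1, cx - half:cx + half + 1]
    return float((patch * kernel).sum())
```

Collecting such responses over a grid of locations, orientations, and frequencies gives the high-dimensional representation that the classifiers above operate on.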
The facial nerve: anatomy and associated disorders for oral health professionals.
Takezawa, Kojiro; Townsend, Grant; Ghabriel, Mounir
2018-04-01
The facial nerve, the seventh cranial nerve, is of great clinical significance to oral health professionals. Most published literature either addresses the central connections of the nerve or its peripheral distribution but few integrate both of these components and also highlight the main disorders affecting the nerve that have clinical implications in dentistry. The aim of the current study is to provide a comprehensive description of the facial nerve. Multiple aspects of the facial nerve are discussed and integrated, including its neuroanatomy, functional anatomy, gross anatomy, clinical problems that may involve the nerve, and the use of detailed anatomical knowledge in the diagnosis of the site of facial nerve lesion in clinical neurology. Examples are provided of disorders that can affect the facial nerve during its intra-cranial, intra-temporal and extra-cranial pathways, and key aspects of clinical management are discussed. The current study is complemented by original detailed dissections and sketches that highlight key anatomical features and emphasise the extent and nature of anatomical variations displayed by the facial nerve.
NASA Astrophysics Data System (ADS)
Harit, Aditya; Joshi, J. C., Col; Gupta, K. K.
2018-03-01
The paper proposes an automatic facial emotion recognition algorithm that comprises two main components: feature extraction and expression recognition. The algorithm uses a Gabor filter bank on fiducial points to find the facial expression features. The resulting magnitudes of the Gabor transforms, along with 14 chosen FAPs (Facial Animation Parameters), compose the feature space. There are two stages: a training phase and a recognition phase. In the training stage, the system classifies all training expressions of the 6 emotions into 6 classes (one for each emotion). In the recognition phase, it applies the Gabor bank to a face image, finds the fiducial points, and feeds the resulting features to the trained neural architecture to recognize the emotion.
Tensor Rank Preserving Discriminant Analysis for Facial Recognition.
Tao, Dapeng; Guo, Yanan; Li, Yaotang; Gao, Xinbo
2017-10-12
Facial recognition, one of the basic topics in computer vision and pattern recognition, has received substantial attention in recent years. However, traditional facial recognition algorithms reshape facial images into a long vector, thereby losing part of the original spatial constraints of each pixel. In this paper, a new tensor-based feature extraction algorithm termed tensor rank preserving discriminant analysis (TRPDA) for facial image recognition is proposed. The method involves two stages: in the first stage, the low-dimensional tensor subspace of the original input tensor samples is obtained; in the second stage, discriminative locality alignment is utilized to obtain the ultimate vector feature representation for subsequent facial recognition. On the one hand, the proposed TRPDA algorithm fully utilizes the natural structure of the input samples, and it applies an optimization criterion that can directly handle the tensor spectral analysis problem, thereby decreasing the computation cost compared with traditional tensor-based feature selection algorithms. On the other hand, the proposed TRPDA algorithm extracts features by finding a tensor subspace that preserves most of the rank-order information of the intra-class input samples. Experiments on three facial databases are performed to determine the effectiveness of the proposed TRPDA algorithm.
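Tensor-based methods like TRPDA avoid reshaping an image into one long vector; instead they operate on mode-n unfoldings of the input tensor, which keep one spatial mode intact per unfolding. Below is a minimal sketch of the unfolding operation itself, not the TRPDA algorithm.

```python
import numpy as np

def unfold(tensor, mode):
    """Mode-n unfolding: arrange the mode-`mode` fibers as matrix rows.

    A tensor of shape (I1, I2, I3) unfolded along mode 0 becomes an
    I1 x (I2*I3) matrix; along mode 1, an I2 x (I1*I3) matrix; and so on.
    Tensor subspace methods apply matrix analysis to such unfoldings
    rather than to a single flattened vector.
    """
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)
```

For a stack of face images (height x width x samples), unfolding along each mode lets a method learn a separate projection per mode while preserving spatial structure.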
Contributions of feature shapes and surface cues to the recognition of facial expressions.
Sormaz, Mladen; Young, Andrew W; Andrews, Timothy J
2016-10-01
Theoretical accounts of face processing often emphasise feature shapes as the primary visual cue to the recognition of facial expressions. However, changes in facial expression also affect the surface properties of the face. In this study, we investigated whether this surface information can also be used in the recognition of facial expression. First, participants identified facial expressions (fear, anger, disgust, sadness, happiness) from images that were manipulated such that they varied mainly in shape or mainly in surface properties. We found that the categorization of facial expression is possible in either type of image, but that different expressions are relatively dependent on surface or shape properties. Next, we investigated the relative contributions of shape and surface information to the categorization of facial expressions. This employed a complementary method that involved combining the surface properties of one expression with the shape properties from a different expression. Our results showed that the categorization of facial expressions in these hybrid images was equally dependent on the surface and shape properties of the image. Together, these findings provide a direct demonstration that both feature shape and surface information make significant contributions to the recognition of facial expressions. Copyright © 2016 Elsevier Ltd. All rights reserved.
Tsotsi, Stella; Kosmidis, Mary H; Bozikas, Vasilis P
2017-08-01
In schizophrenia, impaired facial affect recognition (FAR) has been associated with patients' overall social functioning. Interventions targeting attention or FAR per se have invariably yielded improved FAR performance in these patients. Here, we compared the effects of two interventions, one targeting FAR and one targeting attention-to-facial-features, with treatment-as-usual on patients' FAR performance. Thirty-nine outpatients with schizophrenia were randomly assigned to one of three groups: FAR intervention (training to recognize emotional information, conveyed by changes in facial features), attention-to-facial-features intervention (training to detect changes in facial features), and treatment-as-usual. Also, 24 healthy controls, matched for age and education, were assigned to one of the two interventions. Two FAR measurements, baseline and post-intervention, were conducted using an original experimental procedure with alternative sets of stimuli. We found improved FAR performance following the intervention targeting FAR in comparison to the other patient groups, which in fact was comparable to the pre-intervention performance of healthy controls in the corresponding intervention group. This improvement was more pronounced in recognizing fear. Our findings suggest that compared to interventions targeting attention, and treatment-as-usual, training programs targeting FAR can be more effective in improving FAR in patients with schizophrenia, particularly assisting them in perceiving threat-related information more accurately. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.
Effects of face feature and contour crowding in facial expression adaptation.
Liu, Pan; Montaser-Kouhsari, Leila; Xu, Hong
2014-12-01
Prolonged exposure to a visual stimulus, such as a happy face, biases the perception of a subsequently presented neutral face toward sadness, a phenomenon known as face adaptation. Face adaptation is affected by the visibility or awareness of the adapting face. However, whether it is affected by the discriminability of the adapting face is largely unknown. In the current study, we used crowding to manipulate the discriminability of the adapting face and tested its effect on face adaptation. Instead of presenting flanking faces near the target face, we shortened the distance between facial features (internal feature crowding) and reduced the size of the face contour (external contour crowding) to introduce crowding. In Experiment 1, we asked whether internal feature crowding or external contour crowding is more effective in inducing a crowding effect. We found that combining internal feature and external contour crowding, but not either of them alone, induced a significant crowding effect. In Experiment 2, we investigated its effect on adaptation. We found that both internal feature crowding and external contour crowding significantly reduced the facial expression aftereffect (FEA). However, we did not find a significant correlation between the discriminability of the adapting face and its FEA. Interestingly, we found a significant correlation between the discriminabilities of the adapting and test faces. Experiment 3 found that the reduced adaptation aftereffect under combined crowding by the external face contour and the internal facial features cannot be decomposed linearly into the effects of the face contour and the facial features. This suggests a nonlinear integration between facial features and face contour in face adaptation.
ERIC Educational Resources Information Center
Rutherford, M. D.; McIntosh, Daniel N.
2007-01-01
When perceiving emotional facial expressions, people with autistic spectrum disorders (ASD) appear to focus on individual facial features rather than configurations. This paper tests whether individuals with ASD use these features in a rule-based strategy of emotional perception, rather than a typical, template-based strategy by considering…
Quantitative analysis of facial paralysis using local binary patterns in biomedical videos.
He, Shu; Soraghan, John J; O'Reilly, Brian F; Xing, Dongshan
2009-07-01
Facial paralysis is the loss of voluntary muscle movement of one side of the face. A quantitative, objective, and reliable assessment system would be an invaluable tool for clinicians treating patients with this condition. This paper presents a novel framework for objective measurement of facial paralysis. The motion information in the horizontal and vertical directions and the appearance features on the apex frames are extracted based on the local binary patterns (LBPs) on the temporal-spatial domain in each facial region. These features are temporally and spatially enhanced by the application of novel block processing schemes. A multiresolution extension of uniform LBP is proposed to efficiently combine the micropatterns and large-scale patterns into a feature vector. The symmetry of facial movements is measured by the resistor-average distance (RAD) between LBP features extracted from the two sides of the face. Support vector machine is applied to provide quantitative evaluation of facial paralysis based on the House-Brackmann (H-B) scale. The proposed method is validated by experiments with 197 subject videos, which demonstrates its accuracy and efficiency.
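The LBP-histogram and resistor-average-distance steps can be sketched as below. This uses a basic single-scale 8-neighbour LBP rather than the paper's multiresolution uniform extension, and the RAD follows the standard definition as a harmonic-mean-style combination of the two Kullback-Leibler divergences; the smoothing constant `eps` is an assumption.

```python
import numpy as np

def lbp_histogram(image, bins=256):
    """Normalized histogram of basic 8-neighbour LBP codes."""
    img = image.astype(float)
    center = img[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros(center.shape, dtype=int)
    for bit, (dr, dc) in enumerate(offsets):
        neigh = img[1 + dr:img.shape[0] - 1 + dr, 1 + dc:img.shape[1] - 1 + dc]
        codes += (neigh >= center).astype(int) << bit
    hist = np.bincount(codes.ravel(), minlength=bins).astype(float)
    return hist / hist.sum()

def kl(p, q, eps=1e-10):
    """Smoothed Kullback-Leibler divergence between two histograms."""
    p, q = p + eps, q + eps
    return float(np.sum(p * np.log(p / q)))

def resistor_average_distance(p, q):
    """RAD: the two KL divergences combined like parallel resistors."""
    return 1.0 / (1.0 / kl(p, q) + 1.0 / kl(q, p))
```

In the paper's framework, a large RAD between the LBP features of the two face halves indicates asymmetric movement, which the SVM then maps onto the H-B scale.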
Adhikari, Kaustubh; Fontanil, Tania; Cal, Santiago; Mendoza-Revilla, Javier; Fuentes-Guajardo, Macarena; Chacón-Duque, Juan-Camilo; Al-Saadi, Farah; Johansson, Jeanette A.; Quinto-Sanchez, Mirsha; Acuña-Alonzo, Victor; Jaramillo, Claudia; Arias, William; Barquera Lozano, Rodrigo; Macín Pérez, Gastón; Gómez-Valdés, Jorge; Villamil-Ramírez, Hugo; Hunemeier, Tábita; Ramallo, Virginia; Silva de Cerqueira, Caio C.; Hurtado, Malena; Villegas, Valeria; Granja, Vanessa; Gallo, Carla; Poletti, Giovanni; Schuler-Faccini, Lavinia; Salzano, Francisco M.; Bortolini, Maria-Cátira; Canizales-Quinteros, Samuel; Rothhammer, Francisco; Bedoya, Gabriel; Gonzalez-José, Rolando; Headon, Denis; López-Otín, Carlos; Tobin, Desmond J.; Balding, David; Ruiz-Linares, Andrés
2016-01-01
We report a genome-wide association scan in over 6,000 Latin Americans for features of scalp hair (shape, colour, greying, balding) and facial hair (beard thickness, monobrow, eyebrow thickness). We found 18 signals of association reaching genome-wide significance (P values 5 × 10−8 to 3 × 10−119), including 10 novel associations. These include novel loci for scalp hair shape and balding, and the first reported loci for hair greying, monobrow, eyebrow and beard thickness. A newly identified locus influencing hair shape includes a Q30R substitution in the Protease Serine S1 family member 53 (PRSS53). We demonstrate that this enzyme is highly expressed in the hair follicle, especially the inner root sheath, and that the Q30R substitution affects enzyme processing and secretion. The genome regions associated with hair features are enriched for signals of selection, consistent with proposals regarding the evolution of human hair. PMID:26926045
Facial expression identification using 3D geometric features from Microsoft Kinect device
NASA Astrophysics Data System (ADS)
Han, Dongxu; Al Jawad, Naseer; Du, Hongbo
2016-05-01
Facial expression identification is an important part of face recognition and is closely related to emotion detection from face images. Various solutions have been proposed in the past using different types of cameras and features. The Microsoft Kinect device has been widely used for multimedia interactions and, more recently, has been increasingly deployed to support scientific investigations. This paper explores the effectiveness of using the device to identify emotional facial expressions such as surprise, smiling, and sadness, and evaluates the usefulness of the 3D data points on a face mesh structure obtained from the Kinect device. We present a distance-based geometric feature component derived from the distances between points on the face mesh and selected reference points in a single frame. The feature components extracted across a sequence of frames starting and ending with a neutral expression represent a whole expression. The feature vector eliminates the need for complex face orientation correction, simplifying the feature extraction process and making it more efficient. We applied a kNN classifier that exploits a feature-component-based similarity measure following the principle of dynamic time warping to determine the closest neighbors. Preliminary tests on a small-scale database of different facial expressions show the promise of the newly developed features and the usefulness of the Kinect device in facial expression identification.
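The DTW-based nearest-neighbour matching over per-frame feature vectors can be sketched as below; the Euclidean frame cost and the toy sequences are illustrative simplifications of the paper's feature-component-based similarity measure.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two sequences (frames x dims)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])  # per-frame Euclidean cost
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def knn_dtw_classify(query, train_seqs, train_labels, k=1):
    """Label the query sequence by majority vote among its k DTW-nearest neighbours."""
    dists = [dtw_distance(query, s) for s in train_seqs]
    order = np.argsort(dists)[:k]
    votes = [train_labels[i] for i in order]
    return max(set(votes), key=votes.count)
```

Because DTW warps the time axis, expressions performed at different speeds can still be matched frame-to-frame, which is the motivation given in the abstract.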
Miyata, Hiromitsu; Nishimura, Ritsuko; Okanoya, Kazuo; Kawai, Nobuyuki
2012-01-01
A Noh mask worn by expert actors when performing in a traditional Japanese Noh drama is suggested to convey countless different facial expressions according to different angles of head/body orientation. The present study addressed the question of how different facial parts of a Noh mask, including the eyebrows, the eyes, and the mouth, may contribute to different emotional expressions. Both experimental situations of active creation and passive recognition of emotional facial expressions were introduced. In Experiment 1, participants either created happy or sad facial expressions, or imitated a face that looked up or down, by actively changing each facial part of a Noh mask image presented on a computer screen. For an upward tilted mask, the eyebrows and the mouth shared common features with sad expressions, whereas the eyes shared features with happy expressions. This contingency tended to be reversed for a downward tilted mask. Experiment 2 further examined which facial parts of a Noh mask are crucial in determining emotional expressions. Participants were exposed to synthesized Noh mask images with different facial parts expressing different emotions. Results clearly revealed that participants primarily used the shape of the mouth in judging emotions. Facial images having the mouth of an upward/downward tilted Noh mask strongly tended to be evaluated as sad/happy, respectively. The results suggest that Noh masks express chimeric emotional patterns, with different facial parts conveying different emotions. This appears consistent with the principles of Noh, which highly appreciate subtle and composite emotional expressions, as well as with the mysterious facial expressions observed in Western art. It was further demonstrated that the mouth serves as a diagnostic feature in characterizing the emotional expressions. This indicates the superiority of biologically driven factors over the traditionally formulated performing styles when evaluating the emotions of the Noh masks.
Sutural growth restriction and modern human facial evolution: an experimental study in a pig model
Holton, Nathan E; Franciscus, Robert G; Nieves, Mary Ann; Marshall, Steven D; Reimer, Steven B; Southard, Thomas E; Keller, John C; Maddux, Scott D
2010-01-01
Facial size reduction and facial retraction are key features that distinguish modern humans from archaic Homo. In order to more fully understand the emergence of modern human craniofacial form, it is necessary to understand the underlying evolutionary basis for these defining characteristics. Although it is well established that the cranial base exerts considerable influence on the evolutionary and ontogenetic development of facial form, less emphasis has been placed on developmental factors intrinsic to the facial skeleton proper. The present analysis was designed to assess anteroposterior facial reduction in a pig model and to examine the potential role that this dynamic has played in the evolution of modern human facial form. Ten female sibship cohorts, each consisting of three individuals, were allocated to one of three groups. In the experimental group (n = 10), microplates were affixed bilaterally across the zygomaticomaxillary and frontonasomaxillary sutures at 2 months of age. The sham group (n = 10) received only screw implantation and the controls (n = 10) underwent no surgery. Following 4 months of post-surgical growth, we assessed variation in facial form using linear measurements and principal components analysis of Procrustes scaled landmarks. There were no differences between the control and sham groups; however, the experimental group exhibited a highly significant reduction in facial projection and overall size. These changes were associated with significant differences in the infraorbital region of the experimental group, including the presence of an infraorbital depression and an inferiorly and coronally oriented infraorbital plane, in contrast to a flat, superiorly and sagittally oriented infraorbital plane in the control and sham groups.
These altered configurations are markedly similar to important additional facial features that differentiate modern humans from archaic Homo, and suggest that facial length restriction via rigid plate fixation is a potentially useful model to assess the developmental factors that underlie changing patterns in craniofacial form associated with the emergence of modern humans. PMID:19929910
Image-Based 3D Face Modeling System
NASA Astrophysics Data System (ADS)
Park, In Kyu; Zhang, Hui; Vezhnevets, Vladimir
2005-12-01
This paper describes an automatic system for 3D face modeling using frontal and profile images taken by an ordinary digital camera. The system consists of four subsystems: frontal feature detection, profile feature detection, shape deformation, and texture generation modules. The frontal and profile feature detection modules automatically extract the facial parts such as the eyes, nose, mouth, and ears. The shape deformation module utilizes the detected features to deform the generic head mesh model so that the deformed model coincides with the detected features. A texture is created by combining the facial textures augmented from the input images with the synthesized texture, and is mapped onto the deformed generic head model. This paper provides a practical system for 3D face modeling that is highly automated by aggregating, customizing, and optimizing a set of individual computer vision algorithms. The experimental results show a highly automated process of modeling that is sufficiently robust to various imaging conditions. The whole model creation, including all the optional manual corrections, takes only 2-3 minutes.
ERIC Educational Resources Information Center
Mondloch, Catherine J.; Dobson, Kate S.; Parsons, Julie; Maurer, Daphne
2004-01-01
Children are nearly as sensitive as adults to some cues to facial identity (e.g., differences in the shape of internal features and the external contour), but children are much less sensitive to small differences in the spacing of facial features. To identify factors that contribute to this pattern, we compared 8-year-olds' sensitivity to spacing…
Liu, Chengwei; Liu, Ying; Iqbal, Zahida; Li, Wenhui; Lv, Bo; Jiang, Zhongqing
2017-01-01
To investigate the interaction between facial expressions and facial gender information during face perception, the present study matched the intensities of the two types of information in face images and then adopted the orthogonal condition of the Garner Paradigm to present the images to participants, who were required to judge the gender and expression of the faces; the gender and expression presentations were varied orthogonally. Gender and expression processing displayed a mutual interaction. On the one hand, the judgment of angry expressions occurred faster when presented with male facial images; on the other hand, the classification of the female gender occurred faster when presented with a happy facial expression than when presented with an angry facial expression. According to the event-related potential results, the expression classification was influenced by gender during the face structural processing stage (as indexed by N170), which indicates the promotion or interference of facial gender with the coding of facial expression features. However, gender processing was affected by facial expressions in more stages, including the early (P1) and late (LPC) stages of perceptual processing, reflecting that emotional expression influences gender processing mainly by directing attention.
The review and results of different methods for facial recognition
NASA Astrophysics Data System (ADS)
Le, Yifan
2017-09-01
In recent years, facial recognition has drawn much attention due to its wide potential applications. As a unique technology in biometric identification, facial recognition represents a significant improvement because it can operate without the cooperation of the people under detection. Hence, facial recognition can be applied to defense systems, medical detection, human behavior understanding, etc. Several theories and methods have been established to make progress in facial recognition: (1) a novel two-stage facial landmark localization method that achieves more accurate localization on a specific database; (2) a statistical face frontalization method that outperforms state-of-the-art methods for face landmark localization; (3) a general facial landmark detection algorithm that handles images with severe occlusion and images with large head poses; (4) three methods for face alignment, including a shape-augmented regression method, a pose-indexed multi-view method, and a learning-based method that regresses local binary features. The aim of this paper is to analyze previous work on different aspects of facial recognition, focusing on concrete methods and their performance on various databases. In addition, some improvement measures and suggestions for potential applications are put forward.
The facial skeleton of the chimpanzee-human last common ancestor
Cobb, Samuel N
2008-01-01
This review uses the current morphological evidence to evaluate the facial morphology of the hypothetical last common ancestor (LCA) of the chimpanzee/bonobo (panin) and human (hominin) lineages. Some of the problems involved in reconstructing ancestral morphologies so close to the formation of a lineage are discussed. These include the prevalence of homoplasy and poor phylogenetic resolution due to a lack of defining derived features. Consequently the list of hypothetical features expected in the face of the LCA is very limited beyond its hypothesized similarity to extant Pan. It is not possible to determine with any confidence whether the facial morphology of any of the current candidate LCA taxa (Ardipithecus kadabba, Ardipithecus ramidus, Orrorin tugenensis and Sahelanthropus tchadensis) is representative of the LCA, or a stem hominin, or a stem panin or, in some cases, a hominid predating the emergence of the hominin lineage. The major evolutionary trends in the hominin lineage subsequent to the LCA are discussed in relation to the dental arcade and dentition, subnasal morphology and the size, position and prognathism of the facial skeleton. PMID:18380866
Enhanced facial recognition for thermal imagery using polarimetric imaging.
Gurton, Kristan P; Yuffa, Alex J; Videen, Gorden W
2014-07-01
We present a series of long-wave-infrared (LWIR) polarimetric-based thermal images of facial profiles in which polarization-state information of the image-forming radiance is retained and displayed. The resultant polarimetric images show enhanced facial features, additional texture, and details that are not present in corresponding conventional thermal imagery. It has been generally thought that conventional thermal imagery (MidIR or LWIR) could not produce the detailed spatial information required for reliable human identification due to the so-called "ghosting" effect often seen in thermal imagery of human subjects. By using polarimetric information, we are able to extract subtle surface features of the human face, thus improving subject identification. Polarimetric image sets considered include the conventional thermal intensity image, S0, the two Stokes images, S1 and S2, and a Stokes image product called the degree-of-linear-polarization image.
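The degree-of-linear-polarization image mentioned above is conventionally derived from the Stokes images as sqrt(S1^2 + S2^2) / S0; a minimal sketch (the `eps` guard against division by zero is an assumption):

```python
import numpy as np

def degree_of_linear_polarization(S0, S1, S2, eps=1e-8):
    """DoLP image from the Stokes parameter images S0 (intensity), S1, S2."""
    return np.sqrt(S1**2 + S2**2) / np.maximum(S0, eps)
```

Pixels with high DoLP correspond to strongly polarized surface reflections, which is what brings out the subtle facial texture absent from the conventional S0 thermal image.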
Candra, Henry; Yuwono, Mitchell; Rifai Chai; Nguyen, Hung T; Su, Steven
2016-08-01
Psychotherapy requires appropriate recognition of a patient's facial emotion expression to provide proper treatment during a session. To address this need, this paper proposes a facial emotion recognition system that combines the Viola-Jones detector with a feature descriptor we term Edge-Histogram of Oriented Gradients (E-HOG). The performance of the proposed method is compared across various feature sources, including the face, the eyes, the mouth, and both the eyes and the mouth together. Seven classes of basic emotions were successfully identified with 96.4% accuracy using a multi-class Support Vector Machine (SVM). The proposed E-HOG descriptor is much cheaper to compute than traditional HOG, as shown by a significant improvement in processing time as high as 1833.33% (p-value = 2.43E-17), with a slight reduction in accuracy of only 1.17% (p-value = 0.0016).
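The abstract does not spell out how E-HOG differs from HOG; one plausible reading, an orientation histogram accumulated only over strong-edge pixels, can be sketched as follows. The gradient operator, edge threshold, and bin count are assumptions, not the paper's parameters.

```python
import numpy as np

def edge_histogram_of_gradients(image, n_bins=9, edge_thresh=0.2):
    """Sketch of an edge-restricted HOG: orientation histogram over strong-edge pixels."""
    img = image.astype(float)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]   # central differences
    gy[1:-1, :] = img[2:, :] - img[:-2, :]
    mag = np.hypot(gx, gy)
    ori = np.mod(np.arctan2(gy, gx), np.pi)  # unsigned orientation in [0, pi)
    edges = mag > edge_thresh * mag.max()    # keep only strong-edge pixels
    bins = np.minimum((ori[edges] / np.pi * n_bins).astype(int), n_bins - 1)
    hist = np.bincount(bins, weights=mag[edges], minlength=n_bins)
    return hist / (hist.sum() + 1e-8)
```

Restricting the histogram to edge pixels shrinks the work per cell, which is consistent with the large processing-time improvement the abstract reports.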
Spatio-temporal Event Classification using Time-series Kernel based Structured Sparsity
Jeni, László A.; Lőrincz, András; Szabó, Zoltán; Cohn, Jeffrey F.; Kanade, Takeo
2016-01-01
In many behavioral domains, such as facial expression and gesture, sparse structure is prevalent. This sparsity would be well suited for event detection but for one problem. Features typically are confounded by alignment error in space and time. As a consequence, high-dimensional representations such as SIFT and Gabor features have been favored despite their much greater computational cost and potential loss of information. We propose a Kernel Structured Sparsity (KSS) method that can handle both the temporal alignment problem and the structured sparse reconstruction within a common framework, and it can rely on simple features. We characterize spatio-temporal events as time-series of motion patterns and by utilizing time-series kernels we apply standard structured-sparse coding techniques to tackle this important problem. We evaluated the KSS method using both gesture and facial expression datasets that include spontaneous behavior and differ in degree of difficulty and type of ground truth coding. KSS outperformed both sparse and non-sparse methods that utilize complex image features and their temporal extensions. In the case of early facial event classification KSS had 10% higher accuracy as measured by F1 score over kernel SVM methods. PMID:27830214
Joint Facial Action Unit Detection and Feature Fusion: A Multi-conditional Learning Approach.
Eleftheriadis, Stefanos; Rudovic, Ognjen; Pantic, Maja
2016-10-05
Automated analysis of facial expressions can benefit many domains, from marketing to clinical diagnosis of neurodevelopmental disorders. Facial expressions are typically encoded as a combination of facial muscle activations, i.e., action units. Depending on context, these action units co-occur in specific patterns, and rarely in isolation. Yet, most existing methods for automatic action unit detection fail to exploit dependencies among them, and the corresponding facial features. To address this, we propose a novel multi-conditional latent variable model for simultaneous fusion of facial features and joint action unit detection. Specifically, the proposed model performs feature fusion in a generative fashion via a low-dimensional shared subspace, while simultaneously performing action unit detection using a discriminative classification approach. We show that by combining the merits of both approaches, the proposed methodology outperforms existing purely discriminative/generative methods for the target task. To reduce the number of parameters, and avoid overfitting, a novel Bayesian learning approach based on Monte Carlo sampling is proposed, to integrate out the shared subspace. We validate the proposed method on posed and spontaneous data from three publicly available datasets (CK+, DISFA and Shoulder-pain), and show that both feature fusion and joint learning of action units lead to improved performance compared to the state-of-the-art methods for the task.
Down syndrome detection from facial photographs using machine learning techniques
NASA Astrophysics Data System (ADS)
Zhao, Qian; Rosenbaum, Kenneth; Sze, Raymond; Zand, Dina; Summar, Marshall; Linguraru, Marius George
2013-02-01
Down syndrome is the most commonly occurring chromosomal condition; one in every 691 babies in the United States is born with it. Patients with Down syndrome have an increased risk for heart defects, respiratory and hearing problems, and early detection of the syndrome is fundamental for managing the disease. Clinically, facial appearance is an important indicator in diagnosing Down syndrome, and it paves the way for computer-aided diagnosis based on facial image analysis. In this study, we propose a novel method to detect Down syndrome using photography for computer-assisted image-based facial dysmorphology. Geometric features based on facial anatomical landmarks, local texture features based on the Contourlet transform and local binary pattern are investigated to represent facial characteristics. Then a support vector machine classifier is used to discriminate normal and abnormal cases; accuracy, precision and recall are used to evaluate the method. The comparison among the geometric, local texture and combined features was performed using leave-one-out validation. Our method achieved 97.92% accuracy with high precision and recall for the combined features; the detection results were higher than using only geometric or texture features. The promising results indicate that our method has the potential for automated assessment for Down syndrome from simple, noninvasive imaging data.
Facial expression recognition based on improved local ternary pattern and stacked auto-encoder
NASA Astrophysics Data System (ADS)
Wu, Yao; Qiu, Weigen
2017-08-01
In order to enhance the robustness of facial expression recognition, we propose a method based on an improved Local Ternary Pattern (LTP) combined with a Stacked Auto-Encoder (SAE). The method extracts features with the improved LTP operator and then uses an improved deep belief network as the detector and classifier of the extracted LTP features. The combination of LTP and the improved deep network is thus realized in facial expression recognition. The recognition rate on the CK+ database is improved significantly.
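The basic (non-improved) local ternary pattern underlying the method thresholds each neighbour against the center pixel with a tolerance t and splits the resulting ternary code into upper and lower binary patterns; a minimal single-scale sketch:

```python
import numpy as np

def ltp_codes(image, t=5):
    """Basic LTP: neighbours > center+t set upper bits, < center-t set lower bits."""
    img = image.astype(int)
    center = img[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    upper = np.zeros(center.shape, dtype=int)
    lower = np.zeros(center.shape, dtype=int)
    for bit, (dr, dc) in enumerate(offsets):
        neigh = img[1 + dr:img.shape[0] - 1 + dr, 1 + dc:img.shape[1] - 1 + dc]
        upper += (neigh > center + t).astype(int) << bit
        lower += (neigh < center - t).astype(int) << bit
    return upper, lower
```

The tolerance band around the center value is what makes LTP less sensitive to noise than LBP, which is the robustness argument motivating the paper.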
Marečková, Klára; Weinbrand, Zohar; Chakravarty, M Mallar; Lawrence, Claire; Aleong, Rosanne; Leonard, Gabriel; Perron, Michel; Pike, G Bruce; Richer, Louis; Veillette, Suzanne; Pausova, Zdenka; Paus, Tomáš
2011-11-01
Sex identification of a face is essential for social cognition. Still, perceptual cues indicating the sex of a face, and mechanisms underlying their development, remain poorly understood. Previously, our group described objective age- and sex-related differences in faces of healthy male and female adolescents (12-18 years of age), as derived from magnetic resonance images (MRIs) of the adolescents' heads. In this study, we presented these adolescent faces to 60 female raters to determine which facial features most reliably predicted subjective sex identification. Identification accuracy correlated highly with specific MRI-derived facial features (e.g. broader forehead, chin, jaw, and nose). Facial features that most reliably cued male identity were associated with plasma levels of testosterone (above and beyond age). Perceptible sex differences in face shape are thus associated with specific facial features whose emergence may be, in part, driven by testosterone. Copyright © 2011 Elsevier Inc. All rights reserved.
Factors contributing to the adaptation aftereffects of facial expression.
Butler, Andrea; Oruc, Ipek; Fox, Christopher J; Barton, Jason J S
2008-01-29
Previous studies have demonstrated the existence of adaptation aftereffects for facial expressions. Here we investigated which aspects of facial stimuli contribute to these aftereffects. In Experiment 1, we examined the role of local adaptation to image elements such as curvature, shape and orientation, independent of expression, by using hybrid faces constructed from either the same or opposing expressions. While hybrid faces made with consistent expressions generated aftereffects as large as those with normal faces, there were no aftereffects from hybrid faces made from different expressions, despite the fact that these contained the same local image elements. In Experiment 2, we examined the role of facial features independent of the normal face configuration by contrasting adaptation with whole faces to adaptation with scrambled faces. We found that scrambled faces also generated significant aftereffects, indicating that expressive features without a normal facial configuration could generate expression aftereffects. In Experiment 3, we examined the role of facial configuration by using schematic faces made from line elements that in isolation do not carry expression-related information (e.g. curved segments and straight lines) but that convey an expression when arranged in a normal facial configuration. We obtained a significant aftereffect for facial configurations but not scrambled configurations of these line elements. We conclude that facial expression aftereffects are not due to local adaptation to image elements but due to high-level adaptation of neural representations that involve both facial features and facial configuration.
Objective grading of facial paralysis using Local Binary Patterns in video processing.
He, Shu; Soraghan, John J; O'Reilly, Brian F
2008-01-01
This paper presents a novel framework for objective measurement of facial paralysis in biomedical videos. The motion information in the horizontal and vertical directions and the appearance features on the apex frames are extracted based on Local Binary Patterns (LBP) in the temporal-spatial domain of each facial region. These features are temporally and spatially enhanced by the application of block schemes. A multi-resolution extension of uniform LBP is proposed to efficiently combine micro-patterns and large-scale patterns into a single feature vector, which increases algorithmic robustness and reduces noise effects while retaining computational simplicity. The symmetry of facial movements is measured by the Resistor-Average Distance (RAD) between LBP features extracted from the two sides of the face. A Support Vector Machine (SVM) is applied to provide a quantitative evaluation of facial paralysis based on the House-Brackmann (H-B) Scale. The proposed method is validated in experiments with 197 subject videos, demonstrating its accuracy and efficiency.
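The symmetry measure described above can be illustrated with a minimal sketch: a basic 3x3 LBP code, a normalised code histogram, and the Resistor-Average Distance between histograms from the two facial halves. This is a simplified, single-scale version of the paper's multi-resolution temporal-spatial descriptor; the function names and details are illustrative assumptions, not the authors' implementation.

```python
import math

def lbp_code(patch):
    """8-bit LBP code for a 3x3 grayscale patch (list of 3 rows).
    Neighbours are compared to the centre, clockwise from the top-left."""
    c = patch[1][1]
    neighbours = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
                  patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    code = 0
    for i, n in enumerate(neighbours):
        if n >= c:
            code |= 1 << i
    return code

def lbp_histogram(image):
    """Normalised 256-bin histogram of LBP codes over all interior pixels."""
    h = [0.0] * 256
    rows, cols = len(image), len(image[0])
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            patch = [row[c - 1:c + 2] for row in image[r - 1:r + 2]]
            h[lbp_code(patch)] += 1.0
    total = sum(h)
    return [v / total for v in h] if total else h

def kl(p, q, eps=1e-9):
    """Smoothed Kullback-Leibler divergence between two histograms."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def resistor_average_distance(p, q):
    """Symmetric RAD: D(p||q) * D(q||p) / (D(p||q) + D(q||p))."""
    a, b = kl(p, q), kl(q, p)
    return (a * b) / (a + b) if (a + b) else 0.0
```

In the full method, a larger RAD between the histograms of the left and right facial halves indicates greater asymmetry and hence a more severe paralysis grade.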
Bora, Emre; Velakoulis, Dennis; Walterfang, Mark
2016-07-01
Behavioral disturbances and lack of empathy are distinctive clinical features of behavioral variant frontotemporal dementia (bvFTD) in comparison to Alzheimer disease (AD). The aim of this meta-analytic review was to compare the facial emotion recognition performance of bvFTD with that of healthy controls and AD. The meta-analysis included a total of 19 studies, involving comparisons of 288 individuals with bvFTD versus 329 healthy controls, and of 162 individuals with bvFTD versus 147 patients with AD. Facial emotion recognition was significantly impaired in bvFTD in comparison to healthy controls (d = 1.81) and AD (d = 1.23). In bvFTD, recognition of negative emotions, especially anger (d = 1.48) and disgust (d = 1.41), was severely impaired. Emotion recognition was significantly impaired in bvFTD in comparison to AD for all emotions other than happiness. Impairment of emotion recognition is a relatively specific feature of bvFTD. Routine assessment of social-cognitive abilities, including emotion recognition, can be helpful in better differentiating between cortical dementias such as bvFTD and AD. © The Author(s) 2016.
Peschard, Virginie; Philippot, Pierre; Joassin, Frédéric; Rossignol, Mandy
2013-04-01
Social anxiety has been characterized by an attentional bias towards threatening faces. Electrophysiological studies have demonstrated modulations of cognitive processing from 100 ms after stimulus presentation. However, the impact of the stimulus features and task instructions on facial processing remains unclear. Event-related potentials were recorded while high and low socially anxious individuals performed an adapted Stroop paradigm that included a colour-naming task with non-emotional stimuli, an emotion-naming task (the explicit task) and a colour-naming task (the implicit task) on happy, angry and neutral faces. Whereas the impact of task factors was examined by contrasting an explicit and an implicit emotional task, the effects of perceptual changes on facial processing were explored by including upright and inverted faces. The findings showed an enhanced P1 in social anxiety during the three tasks, without a moderating effect of the type of task or stimulus. These results suggest a global modulation of attentional processing in performance situations. Copyright © 2013 Elsevier B.V. All rights reserved.
Metric and morphological assessment of facial features: a study on three European populations.
Ritz-Timme, S; Gabriel, P; Tutkuviene, J; Poppa, P; Obertová, Z; Gibelli, D; De Angelis, D; Ratnayake, M; Rizgeliene, R; Barkus, A; Cattaneo, C
2011-04-15
Identification from video surveillance systems is becoming increasingly frequent in forensic practice. In this field, techniques such as height estimation and gait analysis have been refined. However, the most natural approach for identifying a person in everyday life is based on facial characteristics. Scientifically, faces can be described through morphological and metric assessment of facial features. The morphological approach is strongly affected by the subjective opinion of the observer, which can be mitigated by the application of descriptive atlases. In addition, this approach requires investigating which facial characteristics are most common and which are rare in different populations. For the metric approach, further studies are necessary to identify possible metric differences within and between populations. The acquisition of statistically adequate population data may provide useful information for the reconstruction of biological profiles of unidentified individuals, particularly concerning ethnic affiliation, and possibly also for personal identification. This study presents the results of the morphological and metric assessment of the head and face of 900 male subjects between 20 and 31 years of age from Italy, Germany and Lithuania. The evaluation of the morphological traits was performed using the DMV atlas with 43 pre-defined facial characteristics. The frequencies of the types of facial features were calculated for each population in order to establish the rarest characteristics, which may be used for the purpose of a biological profile and consequently for personal identification. Metric analysis performed in vivo included 24 absolute measurements and 24 indices of the head and face, as well as body height and body weight. The comparison of the frequencies of morphological facial features showed many similarities between the samples from Germany, Italy and Lithuania.
However, several characteristics were rare, or significantly more or less common, in one population compared to the other two. On the other hand, all measurements and indices, except for labial width and the intercanthal-mouth index, showed significant differences between the three populations. As far as comparisons with other samples are concerned, the three European Caucasian samples differed from North American Caucasian, African and Asian groups both in the frequencies of the morphological traits and in the mean values of the metric analysis. The metric and morphological data collected from the three European populations may be useful for forensic purposes in the construction of biological profiles and in screening for personal identification. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
Tender, Jennifer A F; Ferreira, Carlos R
2018-04-13
Cerebro-facio-thoracic dysplasia (CFTD) is a rare, autosomal recessive disorder characterized by facial dysmorphism, cognitive impairment and distinct skeletal anomalies and has been linked to the TMCO1 defect syndrome. To describe two siblings with features consistent with CFTD with a novel homozygous p.Arg114* pathogenic variant in the TMCO1 gene. We conducted a literature review and summarized the clinical features and laboratory results of two siblings with a novel pathogenic variant in the TMCO1 gene. Facial recognition analysis was utilized to assess the specificity of facial traits. The novel homozygous p.Arg114* pathogenic variant in the TMCO1 gene is responsible for the clinical features of CFTD in two siblings. Facial recognition analysis allows unambiguous distinction of this syndrome against controls.
Eker, Hatice Koçak; Derinkuyu, Betül Emine; Ünal, Sevim; Masliah-Planchon, Julien; Drunat, Séverine; Verloes, Alain
2014-01-01
Baraitser-Winter syndrome (BRWS) is a rare condition affecting the development of the brain and the face. The most common characteristics are an unusual facial appearance including hypertelorism and ptosis, ocular colobomas, hearing loss, impaired neuronal migration and intellectual disability. BRWS is caused by mutations in the ACTB and ACTG1 genes. Cerebro-fronto-facial syndrome (CFFS) is a clinically heterogeneous condition with distinct facial dysmorphism and brain abnormalities; three subtypes are identified. We report a female infant with striking facial features and brain anomalies (including polymicrogyria) that fit into the spectrum of CFFS type 3 (CFFS3). She also had minor anomalies of her hands and feet, heart and kidney malformations, and recurrent infections. DNA investigations revealed the c.586C>T mutation (p.Arg196Cys) in ACTB. This mutation places the patient in the spectrum of BRWS. The same mutation has been detected in a polymicrogyric patient reported previously in the literature. We expand the malformation spectrum of BRWS/CFFS3 and present preliminary findings for phenotype-genotype correlation in this spectrum. Copyright © 2013 Elsevier Masson SAS. All rights reserved.
Plomp, Raul G; Versnel, Sarah L; van Lieshout, Manouk J S; Poublon, Rene M L; Mathijssen, Irene M J
2013-08-01
This study aimed to determine which facial features and functions need more attention during surgical treatment of Treacher Collins syndrome (TCS) in the long term. A cross-sectional cohort study was conducted to compare 23 TCS patients with 206 controls (all ≥18 years) regarding satisfaction with their face. The adjusted Body Cathexis Scale was used to determine satisfaction with the appearance of the different facial features and functions. Desire for further treatment of these items was questioned. For each patient an overview was made of all facial operations performed, the affected facial features and the objective severity of the facial deformities. Patients were least satisfied with the appearance of the ears, facial profile and eyelids, and with the functions hearing and nasal patency (P<0.001). Residual deformity of the reconstructed facial areas remained a problem mainly in the orbital area. The desire for further treatment and dissatisfaction were high in the operated patients, predominantly for eyelid reconstructions. Another significant wish was for improvement of hearing. In patients with TCS, functional deficits of the face are shown to be as important as facial appearance. Particularly nasal patency and hearing are frequently impaired and require routine screening and treatment from intake onwards. Furthermore, correction of ear deformities and midface hypoplasia should be offered and performed more frequently. Residual deformity and dissatisfaction remain a problem, especially in reconstructed eyelids. Level of evidence: II. Copyright © 2013 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.
Automated Video Based Facial Expression Analysis of Neuropsychiatric Disorders
Wang, Peng; Barrett, Frederick; Martin, Elizabeth; Milanova, Marina; Gur, Raquel E.; Gur, Ruben C.; Kohler, Christian; Verma, Ragini
2008-01-01
Deficits in emotional expression are prominent in several neuropsychiatric disorders, including schizophrenia. Available clinical facial expression evaluations provide subjective and qualitative measurements, which are based on static 2D images that do not capture the temporal dynamics and subtleties of expression changes. Therefore, there is a need for automated, objective and quantitative measurements of facial expressions captured using videos. This paper presents a computational framework that creates probabilistic expression profiles for video data and can potentially help to automatically quantify emotional expression differences between patients with neuropsychiatric disorders and healthy controls. Our method automatically detects and tracks facial landmarks in videos, and then extracts geometric features to characterize facial expression changes. To analyze temporal facial expression changes, we employ probabilistic classifiers that analyze facial expressions in individual frames, and then propagate the probabilities throughout the video to capture the temporal characteristics of facial expressions. The applications of our method to healthy controls and case studies of patients with schizophrenia and Asperger’s syndrome demonstrate the capability of the video-based expression analysis method in capturing subtleties of facial expression. Such results can pave the way for a video based method for quantitative analysis of facial expressions in clinical research of disorders that cause affective deficits. PMID:18045693
Hosoya, Haruo; Hyvärinen, Aapo
2017-07-01
Experimental studies have revealed evidence of both parts-based and holistic representations of objects and faces in the primate visual system. However, it is still a mystery how such seemingly contradictory types of processing can coexist within a single system. Here, we propose a novel theory called mixture of sparse coding models, inspired by the formation of category-specific subregions in the inferotemporal (IT) cortex. We developed a hierarchical network that constructed a mixture of two sparse coding submodels on top of a simple Gabor analysis. The submodels were each trained with face or non-face object images, which resulted in separate representations of facial parts and object parts. Importantly, evoked neural activities were modeled by Bayesian inference, which had a top-down explaining-away effect that enabled recognition of an individual part to depend strongly on the category of the whole input. We show that this explaining-away effect was indeed crucial for the units in the face submodel to exhibit significant selectivity to face images over object images in a similar way to actual face-selective neurons in the macaque IT cortex. Furthermore, the model explained, qualitatively and quantitatively, several tuning properties to facial features found in the middle patch of face processing in IT as documented by Freiwald, Tsao, and Livingstone (2009). These included, in particular, tuning to only a small number of facial features that were often related to geometrically large parts like face outline and hair, preference and anti-preference of extreme facial features (e.g., very large/small inter-eye distance), and reduction of the gain of feature tuning for partial face stimuli compared to whole face stimuli. Thus, we hypothesize that the coding principle of facial features in the middle patch of face processing in the macaque IT cortex may be closely related to mixture of sparse coding models.
Zhao, Xi; Dellandréa, Emmanuel; Chen, Liming; Kakadiaris, Ioannis A
2011-10-01
Three-dimensional face landmarking aims at automatically localizing facial landmarks and has a wide range of applications (e.g., face recognition, face tracking, and facial expression analysis). Existing methods assume neutral facial expressions and unoccluded faces. In this paper, we propose a general learning-based framework for reliable landmark localization on 3-D facial data under challenging conditions (i.e., facial expressions and occlusions). Our approach relies on a statistical model, called 3-D statistical facial feature model, which learns both the global variations in configurational relationships between landmarks and the local variations of texture and geometry around each landmark. Based on this model, we further propose an occlusion classifier and a fitting algorithm. Results from experiments on three publicly available 3-D face databases (FRGC, BU-3-DFE, and Bosphorus) demonstrate the effectiveness of our approach, in terms of landmarking accuracy and robustness, in the presence of expressions and occlusions.
Spoofing detection on facial images recognition using LBP and GLCM combination
NASA Astrophysics Data System (ADS)
Sthevanie, F.; Ramadhani, K. N.
2018-03-01
The challenge for facial-image-based security systems is detecting facial image falsification such as spoofing, which occurs when someone tries to pose as a registered user to obtain illegal access to, and gain advantage from, the protected system. This research implements a facial image spoofing detection method based on image texture analysis. The proposed method combines the Local Binary Pattern (LBP) and Gray Level Co-occurrence Matrix (GLCM) methods. The experimental results show that spoofing detection using the LBP and GLCM combination achieves a higher detection rate than using the LBP or GLCM feature alone.
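The GLCM side of this texture pipeline can be sketched directly (an illustrative sketch; the grey-level quantisation, offset, and chosen statistics are assumptions, not the authors' exact configuration). In the combined method, such statistics would be concatenated with an LBP histogram before classification:

```python
def glcm(image, dx=1, dy=0, levels=8):
    """Normalised grey-level co-occurrence matrix for the offset (dx, dy).
    image: 2-D list of integer grey levels in [0, levels)."""
    m = [[0.0] * levels for _ in range(levels)]
    rows, cols = len(image), len(image[0])
    pairs = 0
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dy, c + dx
            if 0 <= r2 < rows and 0 <= c2 < cols:
                m[image[r][c]][image[r2][c2]] += 1.0
                pairs += 1
    return [[v / pairs for v in row] for row in m]

def glcm_features(m):
    """Haralick-style contrast, homogeneity and energy of a normalised GLCM."""
    idx = range(len(m))
    contrast = sum(m[i][j] * (i - j) ** 2 for i in idx for j in idx)
    homogeneity = sum(m[i][j] / (1 + abs(i - j)) for i in idx for j in idx)
    energy = sum(m[i][j] ** 2 for i in idx for j in idx)
    return [contrast, homogeneity, energy]

def combined_descriptor(lbp_hist, glcm_feats):
    """Simple concatenation of LBP and GLCM features, as fed to a classifier."""
    return list(lbp_hist) + list(glcm_feats)
```

A perfectly flat image yields zero contrast and maximal homogeneity/energy, whereas the fine-grained texture artifacts of printed or replayed spoof faces shift these statistics.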
Faces in-between: evaluations reflect the interplay of facial features and task-dependent fluency.
Winkielman, Piotr; Olszanowski, Michal; Gola, Mateusz
2015-04-01
Facial features influence social evaluations. For example, faces are rated as more attractive and trustworthy when they have more smiling features and also more female features. However, the influence of facial features on evaluations should be qualified by the affective consequences of fluency (cognitive ease) with which such features are processed. Further, fluency (along with its affective consequences) should depend on whether the current task highlights conflict between specific features. Four experiments are presented. In 3 experiments, participants saw faces varying in expressions ranging from pure anger, through mixed expression, to pure happiness. Perceivers first categorized faces either on a control dimension or an emotional dimension (angry/happy). Thus, the emotional categorization task made "pure" expressions fluent and "mixed" expressions disfluent. Next, participants made social evaluations. Results show that after emotional categorization, but not control categorization, targets with mixed expressions are relatively devalued. Further, this effect is mediated by categorization disfluency. Additional data from facial electromyography reveal that on a basic physiological level, affective devaluation of mixed expressions is driven by their objective ambiguity. The fourth experiment shows that the relative devaluation of mixed faces that vary in gender ambiguity requires a gender categorization task. Overall, these studies highlight that the impact of facial features on evaluation is qualified by their fluency, and that the fluency of features is a function of the current task. The discussion highlights the implications of these findings for research on emotional reactions to ambiguity. (c) 2015 APA, all rights reserved.
Facial contrast is a cue for perceiving health from the face.
Russell, Richard; Porcheron, Aurélie; Sweda, Jennifer R; Jones, Alex L; Mauger, Emmanuelle; Morizot, Frederique
2016-09-01
How healthy someone appears has important social consequences. Yet the visual cues that determine perceived health remain poorly understood. Here we report evidence that facial contrast-the luminance and color contrast between internal facial features and the surrounding skin-is a cue for the perception of health from the face. Facial contrast was measured from a large sample of Caucasian female faces, and was found to predict ratings of perceived health. Most aspects of facial contrast were positively related to perceived health, meaning that faces with higher facial contrast appeared healthier. In 2 subsequent experiments, we manipulated facial contrast and found that participants perceived faces with increased facial contrast as appearing healthier than faces with decreased facial contrast. These results support the idea that facial contrast is a cue for perceived health. This finding adds to the growing knowledge about perceived health from the face, and helps to ground our understanding of perceived health in terms of lower-level perceptual features such as contrast. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
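A minimal illustration of the contrast measure follows. This sketch uses a Michelson-style luminance formulation as an assumption for illustration; the study itself measured both luminance and colour-channel contrasts in a perceptual colour space:

```python
def mean(values):
    """Arithmetic mean of a list of pixel values."""
    return sum(values) / len(values)

def facial_contrast(feature_luminance, skin_luminance):
    """Michelson-style contrast between a facial feature region (eyes, brows,
    lips) and the surrounding skin. Positive when the skin is lighter than
    the feature; higher values mean higher facial contrast."""
    lf, ls = mean(feature_luminance), mean(skin_luminance)
    return (ls - lf) / (ls + lf)
```

Under this formulation, darkening the features or lightening the surrounding skin both raise the contrast value, matching the direction of the manipulation used in the perception experiments.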
Fuentes, Christina T; Runa, Catarina; Blanco, Xenxo Alvarez; Orvalho, Verónica; Haggard, Patrick
2013-01-01
Despite extensive research on face perception, few studies have investigated individuals' knowledge about the physical features of their own face. In this study, 50 participants indicated the location of key features of their own face, relative to an anchor point corresponding to the tip of the nose, and the results were compared to the true location of the same individual's features from a standardised photograph. Horizontal and vertical errors were analysed separately. An overall bias to underestimate vertical distances revealed a distorted face representation, with reduced face height. Factor analyses were used to identify separable subconfigurations of facial features with correlated localisation errors. Independent representations of upper and lower facial features emerged from the data pattern. The major source of variation across individuals was in representation of face shape, with a spectrum from tall/thin to short/wide representation. Visual identification of one's own face is excellent, and facial features are routinely used for establishing personal identity. However, our results show that spatial knowledge of one's own face is remarkably poor, suggesting that face representation may not contribute strongly to self-awareness.
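The horizontal/vertical error analysis described here can be sketched in a few lines (illustrative only; the study's processing of the standardised photographs and factor analyses is more involved). Points are taken relative to the nose-tip anchor, so a negative mean vertical error corresponds to the reported underestimation of face height:

```python
def localisation_errors(indicated, true):
    """Mean signed horizontal (dx) and vertical (dy) localisation errors for
    one participant's feature placements, both relative to the nose-tip
    anchor. indicated/true: lists of (x, y) points in the same order.
    Negative dy means features were placed closer to the nose tip than they
    really are, i.e. face height is underestimated."""
    n = len(indicated)
    dx = sum(xi - xt for (xi, _), (xt, _) in zip(indicated, true)) / n
    dy = sum(yi - yt for (_, yi), (_, yt) in zip(indicated, true)) / n
    return dx, dy
```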
Bandini, Andrea; Green, Jordan R; Wang, Jun; Campbell, Thomas F; Zinman, Lorne; Yunusova, Yana
2018-05-17
The goals of this study were to (a) classify speech movements of patients with amyotrophic lateral sclerosis (ALS) in presymptomatic and symptomatic phases of bulbar function decline relying solely on kinematic features of lips and jaw and (b) identify the most important measures that detect the transition between early and late bulbar changes. One hundred ninety-two recordings obtained from 64 patients with ALS were considered for the analysis. Feature selection and classification algorithms were used to analyze lip and jaw movements recorded with Optotrak Certus (Northern Digital Inc.) during a sentence task. A feature set, which included 35 measures of movement range, velocity, acceleration, jerk, and area measures of lips and jaw, was used to classify sessions according to the speaking rate into presymptomatic (> 160 words per minute) and symptomatic (< 160 words per minute) groups. Presymptomatic and symptomatic phases of bulbar decline were distinguished with high accuracy (87%), relying only on lip and jaw movements. The best features that allowed detecting the differences between early and later bulbar stages included cumulative path of lower lip and jaw, peak values of velocity, acceleration, and jerk of lower lip and jaw. The results established a relationship between facial kinematics and bulbar function decline in ALS. Considering that facial movements can be recorded by means of novel inexpensive and easy-to-use, video-based methods, this work supports the development of an automatic system for facial movement analysis to help clinicians in tracking the disease progression in ALS.
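A simplified sketch of the kinematic features named above, computed from a single 2-D marker trajectory by finite differences (illustrative assumptions throughout; the study used 3-D Optotrak marker data and a 35-measure feature set):

```python
import math

def derivative(signal, dt):
    """First finite difference of a uniformly sampled signal."""
    return [(b - a) / dt for a, b in zip(signal, signal[1:])]

def kinematic_features(x, y, dt):
    """Movement range, cumulative path, and peak velocity/acceleration/jerk
    for a marker trajectory (x, y) sampled every dt seconds.
    Needs at least 5 samples so that the jerk series is non-empty."""
    points = list(zip(x, y))
    path = sum(math.hypot(x2 - x1, y2 - y1)
               for (x1, y1), (x2, y2) in zip(points, points[1:]))
    speed = [math.hypot(vx, vy)
             for vx, vy in zip(derivative(x, dt), derivative(y, dt))]
    acc = derivative(speed, dt)
    jerk = derivative(acc, dt)
    return {
        "range_x": max(x) - min(x),
        "cumulative_path": path,
        "peak_velocity": max(speed),
        "peak_acceleration": max(abs(a) for a in acc),
        "peak_jerk": max(abs(j) for j in jerk),
    }

def bulbar_stage(words_per_minute):
    """Stage label using the study's 160-words-per-minute speaking-rate cut-off."""
    return "presymptomatic" if words_per_minute > 160 else "symptomatic"
```

Feature vectors of this kind, one per recording session, are what the feature selection and classification algorithms operate on.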
Birgfeld, Craig B; Heike, Carrie L; Saltzman, Babette S; Leroux, Brian G; Evans, Kelly N; Luquetti, Daniela V
2016-03-31
Craniofacial microsomia is a common congenital condition for which children receive longitudinal, multidisciplinary team care. However, little is known about the etiology of craniofacial microsomia and few outcome studies have been published. In order to facilitate large, multicenter studies in craniofacial microsomia, we assessed the reliability of phenotypic classification based on photographs by comparison with direct physical examination. Thirty-nine children with craniofacial microsomia underwent a physical examination and photographs according to a standardized protocol. Three clinicians completed ratings during the physical examination and, at least a month later, using respective photographs for each participant. We used descriptive statistics for participant characteristics and intraclass correlation coefficients (ICCs) to assess reliability. The agreement between ratings on photographs and physical exam was greater than 80 % for all 15 categories included in the analysis. The ICC estimates were higher than 0.6 for most features. Features with the highest ICC included: presence of epibulbar dermoids, ear abnormalities, and colobomas (ICC 0.85, 0.81, and 0.80, respectively). Orbital size, presence of pits, tongue abnormalities, and strabismus had the lowest ICC values (0.17 or less). There was not a strong tendency for either type of rating, physical exam or photograph, to be more likely to designate a feature as abnormal. The agreement between photographs and physical exam regarding the presence of a prior surgery was greater than 90 % for most features. Our results suggest that categorization of facial phenotype in children with CFM based on photographs is reliable relative to physical examination for most facial features.
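The ICC used in such reliability analyses can be illustrated with a minimal one-way random-effects version, ICC(1,1) (a sketch under that assumption; the study's exact ICC model is not stated in the abstract):

```python
def icc_oneway(ratings):
    """One-way random-effects ICC(1,1) for a table of ratings, where
    ratings[i][j] is the rating of subject i by rater j.
    ICC = (MSB - MSW) / (MSB + (k - 1) * MSW)."""
    n, k = len(ratings), len(ratings[0])
    grand = sum(sum(row) for row in ratings) / (n * k)
    means = [sum(row) / k for row in ratings]
    # Between-subjects and within-subjects mean squares.
    msb = k * sum((m - grand) ** 2 for m in means) / (n - 1)
    msw = sum((x - m) ** 2
              for row, m in zip(ratings, means) for x in row) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)
```

Perfect agreement between raters gives an ICC of 1; disagreement within subjects pulls the value toward (and below) zero.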
Selective attention to a facial feature with and without facial context: an ERP-study.
Wijers, A A; Van Besouw, N J P; Mulder, G
2002-04-01
The present experiment addressed the question whether selectively attending to a facial feature (mouth shape) would benefit from the presence of a correct facial context. Subjects attended selectively to one of two possible mouth shapes belonging to photographs of a face with a happy or sad expression, respectively. These mouths were presented randomly either in isolation, embedded in the original photos, or in an exchanged facial context. The ERP effect of attending mouth shape was a lateral posterior negativity, anterior positivity with an onset latency of 160-200 ms; this effect was completely unaffected by the type of facial context. When the mouth shape and the facial context conflicted, this resulted in a medial parieto-occipital positivity with an onset latency of 180 ms, independent of the relevance of the mouth shape. Finally, there was a late (onset at approx. 400 ms) expression (happy vs. sad) effect, which was strongly lateralized to the right posterior hemisphere and was most prominent for attended stimuli in the correct facial context. For the isolated mouth stimuli, a similarly distributed expression effect was observed at an earlier latency range (180-240 ms). These data suggest the existence of separate, independent and neuroanatomically segregated processors engaged in the selective processing of facial features and the detection of contextual congruence and emotional expression of face stimuli. The data do not support that early selective attention processes benefit from top-down constraints provided by the correct facial context.
Facial expression recognition based on improved deep belief networks
NASA Astrophysics Data System (ADS)
Wu, Yao; Qiu, Weigen
2017-08-01
In order to improve the robustness of facial expression recognition, a method based on Local Binary Patterns (LBP) combined with improved deep belief networks (DBNs) is proposed. The method uses LBP to extract features, and then uses the improved deep belief networks as detector and classifier on the extracted LBP features, realizing the combination of LBP and improved DBNs for facial expression recognition. On the JAFFE (Japanese Female Facial Expression) database, the recognition rate is significantly improved.
Impaired recognition of facial emotions from low-spatial frequencies in Asperger syndrome.
Kätsyri, Jari; Saalasti, Satu; Tiippana, Kaisa; von Wendt, Lennart; Sams, Mikko
2008-01-01
The theory of 'weak central coherence' [Happe, F., & Frith, U. (2006). The weak coherence account: Detail-focused cognitive style in autism spectrum disorders. Journal of Autism and Developmental Disorders, 36(1), 5-25] implies that persons with autism spectrum disorders (ASDs) have a perceptual bias for local but not for global stimulus features. The recognition of emotional facial expressions representing various different levels of detail has not been studied previously in ASDs. We analyzed the recognition of four basic emotional facial expressions (anger, disgust, fear and happiness) from low-spatial frequencies (overall global shapes without local features) in adults with an ASD. A group of 20 participants with Asperger syndrome (AS) was compared to a group of non-autistic age- and sex-matched controls. Emotion recognition was tested from static and dynamic facial expressions whose spatial frequency contents had been manipulated by low-pass filtering at two levels. The two groups recognized emotions similarly from non-filtered faces and from dynamic vs. static facial expressions. In contrast, the participants with AS were less accurate than controls in recognizing facial emotions from very low-spatial frequencies. The results suggest intact recognition of basic facial emotions and dynamic facial information, but impaired visual processing of global features in ASDs.
Tsai, Meng-Yin; Lan, Kuo-Chung; Ou, Chia-Yo; Chen, Jen-Huang; Chang, Shiuh-Young; Hsu, Te-Yao
2004-02-01
Our purpose was to evaluate whether the application of serial three-dimensional (3D) sonography and a mandibular size monogram allows observation of dynamic changes in facial features, as well as chin development, in utero. The mandibular size monogram was established through a cross-sectional study involving 183 fetal images. The serial changes of facial features and chin development were assessed in a cohort study involving 40 patients. The monogram reveals that the biparietal distance (BPD)/mandibular body length (MBL) ratio gradually decreases with advancing gestational age. The cohort study conducted with serial 3D sonography shows the same tendency. Both the images and the results of paired-samples t tests (P<.001) suggest that fetuses develop wider chins and broader facial features in later weeks. Serial 3D sonography and the mandibular size monogram display disproportionate growth of the fetal head and chin that leads to changes in facial features in late gestation. This must be considered when evaluating fetuses at risk for development of micrognathia.
Parks, Connie L; Monson, Keith L
2018-01-01
This research examined how accurately 2D images (i.e., photographs) of 3D clay facial approximations were matched to corresponding photographs of the approximated individuals using an objective automated facial recognition system. Irrespective of search filter (i.e., blind, sex, or ancestry) or rank class (R1, R10, R25, and R50) employed, few operationally informative results were observed. In only a single instance of 48 potential match opportunities was a clay approximation matched to a corresponding life photograph within the top 50 images (R50) of a candidate list, even with relatively small gallery sizes created from the application of search filters (e.g., sex or ancestry search restrictions). Increasing the candidate lists to include the top 100 images (R100) resulted in only two additional instances of correct match. Although other untested variables (e.g., approximation method, 2D photographic process, and practitioner skill level) may have impacted the observed results, this study suggests that 2D images of manually generated clay approximations are not readily matched to life photos by automated facial recognition systems. Further investigation is necessary in order to identify the underlying cause(s), if any, of the poor recognition results observed in this study (e.g., potential inferior facial feature detection and extraction). Additional inquiry exploring prospective remedial measures (e.g., stronger feature differentiation) is also warranted, particularly given the prominent use of clay approximations in unidentified persons casework. Copyright © 2017. Published by Elsevier B.V.
Consensus on Changing Trends, Attitudes, and Concepts of Asian Beauty.
Liew, Steven; Wu, Woffles T L; Chan, Henry H; Ho, Wilson W S; Kim, Hee-Jin; Goodman, Greg J; Peng, Peter H L; Rogers, John D
2016-04-01
Asians increasingly seek non-surgical facial esthetic treatments, especially at younger ages. Published recommendations and clinical evidence mostly reference Western populations, but Asians differ from them in terms of attitudes to beauty, structural facial anatomy, and signs and rates of aging. A thorough knowledge of the key esthetic concerns and requirements for the Asian face is required to strategize appropriate facial esthetic treatments with botulinum toxin and hyaluronic acid (HA) fillers. The Asian Facial Aesthetics Expert Consensus Group met to develop consensus statements on concepts of facial beauty, key esthetic concerns, facial anatomy, and aging in Southeastern and Eastern Asians, as a prelude to developing consensus opinions on the cosmetic facial use of botulinum toxin and HA fillers in these populations. Beautiful and esthetically attractive people of all races share similarities in appearance while retaining distinct ethnic features. Asians between the third and sixth decades age well compared with age-matched Caucasians. Younger Asians' increasing requests for injectable treatments to improve facial shape and three-dimensionality often reflect a desire to correct underlying facial structural deficiencies or weaknesses that detract from ideals of facial beauty. Facial esthetic treatments in Asians are not aimed at Westernization, but rather the optimization of intrinsic Asian ethnic features, or correction of specific underlying structural features that are perceived as deficiencies. Thus, overall facial attractiveness is enhanced while retaining esthetic characteristics of Asian ethnicity. Because Asian patients age differently than Western patients, different management and treatment planning strategies are utilized. This journal requires that authors assign a level of evidence to each article. 
For a full description of these Evidence-Based Medicine ratings, please refer to Table of Contents or the online Instructions to Authors www.springer.com/00266.
Mutual information-based facial expression recognition
NASA Astrophysics Data System (ADS)
Hazar, Mliki; Hammami, Mohamed; Hanêne, Ben-Abdallah
2013-12-01
This paper introduces a novel low-computation discriminative-region representation for the expression analysis task. The proposed approach relies on studies in psychology which show that most of the descriptive regions responsible for facial expression are located around certain face parts. The contribution of this work lies in the proposition of a new approach that supports automatic facial expression recognition based on automatic region selection. The region selection step aims to select the descriptive regions responsible for facial expression and was performed using the Mutual Information (MI) technique. For facial feature extraction, we applied Local Binary Patterns (LBP) on the gradient image to encode salient micro-patterns of facial expressions. Experimental studies have shown that using discriminative regions provides better results than using the whole face region, while reducing the feature vector dimension.
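As an illustration of the region-selection idea in the abstract above, the sketch below ranks candidate face regions by the mutual information between a per-region feature and the expression label. This is a generic reconstruction, not the authors' implementation: the `mutual_information` estimator, the toy "mouth"/"forehead" features, and the binning choices are all assumptions for demonstration.

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """Estimate MI (in bits) between a continuous feature x and discrete labels y."""
    # Discretise x into equal-width bins, then build the joint histogram with y
    x_binned = np.digitize(x, np.histogram_bin_edges(x, bins=bins)[1:-1])
    classes = {c: i for i, c in enumerate(np.unique(y))}
    joint = np.zeros((bins, len(classes)))
    for xi, yi in zip(x_binned, y):
        joint[xi, classes[yi]] += 1
    joint /= joint.sum()
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())

# Rank candidate regions: the "mouth" feature tracks the label, "forehead" is noise
rng = np.random.default_rng(0)
labels = np.array([0, 1] * 50)
regions = {
    "mouth": labels + 0.2 * rng.standard_normal(100),   # informative region
    "forehead": rng.standard_normal(100),               # uninformative region
}
scores = {name: mutual_information(feat, labels) for name, feat in regions.items()}
best = max(scores, key=scores.get)
```

In a real pipeline the per-region feature would be an LBP histogram rather than a scalar, and regions above an MI threshold would be kept for classification.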
Poirier, Frédéric J A M; Faubert, Jocelyn
2012-06-22
Facial expressions are important for human communications. Face perception studies often measure the impact of major degradation (e.g., noise, inversion, short presentations, masking, alterations) on natural expression recognition performance. Here, we introduce a novel face perception technique using rich and undegraded stimuli. Participants modified faces to create optimal representations of given expressions. Using sliders, participants adjusted 53 face components (including 37 dynamic) including head, eye, eyebrows, mouth, and nose shape and position. Data was collected from six participants and 10 conditions (six emotions + pain + gender + neutral). Some expressions had unique features (e.g., frown for anger, upward-curved mouth for happiness), whereas others had shared features (e.g., open eyes and mouth for surprise and fear). Happiness was different from other emotions. Surprise was different from other emotions except fear. Weighted sum morphing provides acceptable stimuli for gender-neutral and dynamic stimuli. Many features were correlated, including (1) head size with internal feature sizes as related to gender, (2) internal feature scaling, and (3) eyebrow height and eye openness as related to surprise and fear. These findings demonstrate the method's validity for measuring the optimal facial expressions, which we argue is a more direct measure of their internal representations.
LBP and SIFT based facial expression recognition
NASA Astrophysics Data System (ADS)
Sumer, Omer; Gunes, Ece O.
2015-02-01
This study compares the performance of local binary patterns (LBP) and the scale invariant feature transform (SIFT) with support vector machines (SVM) in the automatic classification of discrete facial expressions. Facial expression recognition is a multiclass classification problem, and seven classes (happiness, anger, sadness, disgust, surprise, fear and contempt) are classified. Using SIFT feature vectors and a linear SVM, 93.1% mean accuracy is acquired on the CK+ database. On the other hand, the performance of the LBP-based classifier with a linear SVM is reported on SFEW using the strictly person independent (SPI) protocol. Seven-class mean accuracy on SFEW is 59.76%. Experiments on both databases showed that LBP features can be used in a fairly descriptive way if a good localization of facial points and a good partitioning strategy are followed.
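The grid-partitioned LBP descriptor that the abstract above relies on can be sketched as follows: compute a basic 8-neighbour LBP code per pixel, then concatenate per-cell code histograms into one feature vector that an SVM would consume. This is a minimal sketch of the standard technique, not the paper's exact configuration; the 4x4 grid and the plain (non-uniform) LBP variant are assumptions.

```python
import numpy as np

def lbp_image(img):
    """Basic 8-neighbour LBP code for each interior pixel of a 2D array."""
    c = img[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros(c.shape, dtype=np.int32)
    for bit, (dy, dx) in enumerate(shifts):
        # Shifted view of the image aligned with the centre pixels
        neigh = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        codes |= ((neigh >= c).astype(np.int32) << bit)
    return codes

def lbp_grid_histogram(img, grid=(4, 4)):
    """Concatenate per-cell LBP code histograms into one feature vector."""
    codes = lbp_image(img)
    gh, gw = grid
    h, w = codes.shape
    feats = []
    for i in range(gh):
        for j in range(gw):
            cell = codes[i * h // gh:(i + 1) * h // gh, j * w // gw:(j + 1) * w // gw]
            hist, _ = np.histogram(cell, bins=256, range=(0, 256))
            feats.append(hist / max(cell.size, 1))
    return np.concatenate(feats)

face = np.random.default_rng(1).integers(0, 256, size=(66, 66)).astype(np.int32)
vec = lbp_grid_histogram(face)   # 4*4 cells * 256 bins = 4096 dimensions
```

Each cell histogram is normalised to sum to 1, so differing cell sizes do not dominate the vector; the resulting descriptor is what a linear SVM would be trained on.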
Cost-Sensitive Local Binary Feature Learning for Facial Age Estimation.
Lu, Jiwen; Liong, Venice Erin; Zhou, Jie
2015-12-01
In this paper, we propose a cost-sensitive local binary feature learning (CS-LBFL) method for facial age estimation. Unlike the conventional facial age estimation methods that employ hand-crafted descriptors or holistically learned descriptors for feature representation, our CS-LBFL method learns discriminative local features directly from raw pixels for face representation. Motivated by the fact that facial age estimation is a cost-sensitive computer vision problem and local binary features are more robust to illumination and expression variations than holistic features, we learn a series of hashing functions to project raw pixel values extracted from face patches into low-dimensional binary codes, where binary codes with similar chronological ages are projected as close as possible, and those with dissimilar chronological ages are projected as far as possible. Then, we pool and encode these local binary codes within each face image as a real-valued histogram feature for face representation. Moreover, we propose a cost-sensitive local binary multi-feature learning method to jointly learn multiple sets of hashing functions using face patches extracted from different scales to exploit complementary information. Our methods achieve competitive performance on four widely used face aging data sets.
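The patch-to-binary-code pipeline described above (project raw patch pixels to low-dimensional binary codes, then pool codes into a histogram per face) can be sketched as below. Note the hedge: CS-LBFL *learns* the hashing functions with a cost-sensitive objective, whereas this sketch substitutes random projections purely to show the data flow; every function name here is hypothetical.

```python
import numpy as np

def extract_patches(face, size=8):
    """Collect non-overlapping raw-pixel patches, flattened as vectors."""
    h, w = face.shape
    return np.array([face[i:i + size, j:j + size].ravel()
                     for i in range(0, h - size + 1, size)
                     for j in range(0, w - size + 1, size)], dtype=float)

def binary_codes(patches, n_bits=8, seed=0):
    """Project patches and binarise; stands in for the learned hashing functions."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((patches.shape[1], n_bits))
    centred = patches - patches.mean(axis=0)
    return (centred @ W > 0).astype(int)

def pooled_histogram(codes):
    """Encode each patch's bits as an integer, then histogram over the face."""
    ints = codes @ (1 << np.arange(codes.shape[1]))
    hist, _ = np.histogram(ints, bins=2 ** codes.shape[1],
                           range=(0, 2 ** codes.shape[1]))
    return hist / hist.sum()

face = np.random.default_rng(2).random((64, 64))
patches = extract_patches(face)        # 64 patches of 64 raw pixels each
codes = binary_codes(patches)          # 64 x 8 binary codes
feat = pooled_histogram(codes)         # 256-d real-valued face descriptor
```

In the actual method, `W` would be replaced by hashing functions optimised so that patches from faces of similar chronological age receive nearby codes.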
The look of fear and anger: facial maturity modulates recognition of fearful and angry expressions.
Sacco, Donald F; Hugenberg, Kurt
2009-02-01
The current series of studies provide converging evidence that facial expressions of fear and anger may have co-evolved to mimic mature and babyish faces in order to enhance their communicative signal. In Studies 1 and 2, fearful and angry facial expressions were manipulated to have enhanced babyish features (larger eyes) or enhanced mature features (smaller eyes) and in the context of a speeded categorization task in Study 1 and a visual noise paradigm in Study 2, results indicated that larger eyes facilitated the recognition of fearful facial expressions, while smaller eyes facilitated the recognition of angry facial expressions. Study 3 manipulated facial roundness, a stable structure that does not vary systematically with expressions, and found that congruency between maturity and expression (narrow face-anger; round face-fear) facilitated expression recognition accuracy. Results are discussed as representing a broad co-evolutionary relationship between facial maturity and fearful and angry facial expressions. (c) 2009 APA, all rights reserved
NASA Astrophysics Data System (ADS)
Kobayashi, Hiroshi; Suzuki, Seiji; Takahashi, Hisanori; Tange, Akira; Kikuchi, Kohki
This study deals with a method to realize automatic contour extraction of facial features such as eyebrows, eyes and mouth for the time-wise frontal face with various facial expressions. Because Snakes which is one of the most famous methods used to extract contours, has several disadvantages, we propose a new method to overcome these issues. We define the elastic contour model in order to hold the contour shape and then determine the elastic energy acquired by the amount of modification of the elastic contour model. Also we utilize the image energy obtained by brightness differences of the control points on the elastic contour model. Applying the dynamic programming method, we determine the contour position where the total value of the elastic energy and the image energy becomes minimum. Employing 1/30s time-wise facial frontal images changing from neutral to one of six typical facial expressions obtained from 20 subjects, we have estimated our method and find it enables high accuracy automatic contour extraction of facial features.
Genetics Home Reference: ADNP syndrome
... disorder, which is characterized by impaired communication and social interaction. Affected individuals also have distinctive facial features and ... spectrum disorder, including repetitive behaviors and difficulty with social interactions. ADNP syndrome is also associated with mood disorders ...
Facial feature tracking: a psychophysiological measure to assess exercise intensity?
Miles, Kathleen H; Clark, Bradley; Périard, Julien D; Goecke, Roland; Thompson, Kevin G
2018-04-01
The primary aim of this study was to determine whether facial feature tracking reliably measures changes in facial movement across varying exercise intensities. Fifteen cyclists completed three, incremental intensity, cycling trials to exhaustion while their faces were recorded with video cameras. Facial feature tracking was found to be a moderately reliable measure of facial movement during incremental intensity cycling (intra-class correlation coefficient = 0.65-0.68). Facial movement (whole face (WF), upper face (UF), lower face (LF) and head movement (HM)) increased with exercise intensity, from lactate threshold one (LT1) until attainment of maximal aerobic power (MAP) (WF 3464 ± 3364 mm, P < 0.005; UF 1961 ± 1779 mm, P = 0.002; LF 1608 ± 1404 mm, P = 0.002; HM 849 ± 642 mm, P < 0.001). UF movement was greater than LF movement at all exercise intensities (UF minus LF at: LT1, 1048 ± 383 mm; LT2, 1208 ± 611 mm; MAP, 1401 ± 712 mm; P < 0.001). Significant medium to large non-linear relationships were found between facial movement and power output (r2 = 0.24-0.31), HR (r2 = 0.26-0.33), [La-] (r2 = 0.33-0.44) and RPE (r2 = 0.38-0.45). The findings demonstrate the potential utility of facial feature tracking as a non-invasive, psychophysiological measure to potentially assess exercise intensity.
Association of Frontal and Lateral Facial Attractiveness.
Gu, Jeffrey T; Avilla, David; Devcic, Zlatko; Karimi, Koohyar; Wong, Brian J F
2018-01-01
Despite the large number of studies focused on defining frontal or lateral facial attractiveness, no reports have examined whether a significant association between frontal and lateral facial attractiveness exists. To examine the association between frontal and lateral facial attractiveness and to identify anatomical features that may influence discordance between frontal and lateral facial beauty. Paired frontal and lateral facial synthetic images of 240 white women (age range, 18-25 years) were evaluated from September 30, 2004, to September 29, 2008, using an internet-based focus group (n = 600) on an attractiveness Likert scale of 1 to 10, with 1 being least attractive and 10 being most attractive. Data analysis was performed from December 6, 2016, to March 30, 2017. The association between frontal and lateral attractiveness scores was determined using linear regression. Outliers were defined as data outside the 95% individual prediction interval. To identify features that contribute to score discordance between frontal and lateral attractiveness scores, each of these image pairs were scrutinized by an evaluator panel for facial features that were present in the frontal or lateral projections and absent in the other respective facial projections. Attractiveness scores obtained from internet-based focus groups. For the 240 white women studied (mean [SD] age, 21.4 [2.2] years), attractiveness scores ranged from 3.4 to 9.5 for frontal images and 3.3 to 9.4 for lateral images. The mean (SD) frontal attractiveness score was 6.9 (1.4), whereas the mean (SD) lateral attractiveness score was 6.4 (1.3). Simple linear regression of frontal and lateral attractiveness scores resulted in a coefficient of determination of r2 = 0.749. Eight outlier pairs were identified and analyzed by panel evaluation. Panel evaluation revealed no clinically applicable association between frontal and lateral images among outliers; however, contributory facial features were suggested. 
Thin upper lip, convex nose, and blunt cervicomental angle were suggested by evaluators as facial characteristics that contributed to outlier frontal or lateral attractiveness scores. This study identified a strong linear association between frontal and lateral facial attractiveness. Furthermore, specific facial landmarks responsible for the discordance between frontal and lateral facial attractiveness scores were suggested. Additional studies are necessary to determine whether correction of these landmarks may increase facial harmony and attractiveness. NA.
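The outlier definition used above (data outside the 95% individual prediction interval of the frontal-vs-lateral regression) can be sketched as follows. The data here are synthetic stand-ins for the attractiveness scores, and the normal quantile 1.96 is used as an approximation of the t-based interval; neither reflects the study's actual numbers.

```python
import numpy as np

def fit_with_prediction_interval(x, y, z=1.96):
    """OLS fit of y on x plus an approximate 95% individual prediction interval."""
    n = len(x)
    slope, intercept = np.polyfit(x, y, 1)
    y_hat = slope * x + intercept
    resid = y - y_hat
    s = np.sqrt(resid @ resid / (n - 2))          # residual standard error
    sxx = ((x - x.mean()) ** 2).sum()
    # Half-width of the prediction interval at each observed x
    half = z * s * np.sqrt(1 + 1 / n + (x - x.mean()) ** 2 / sxx)
    outliers = (y < y_hat - half) | (y > y_hat + half)
    return slope, intercept, outliers

# Synthetic frontal/lateral score pairs with a strong linear association
rng = np.random.default_rng(3)
frontal = rng.uniform(3.3, 9.5, 240)
lateral = 0.8 * frontal + 0.9 + 0.3 * rng.standard_normal(240)
slope, intercept, outliers = fit_with_prediction_interval(frontal, lateral)
```

Pairs flagged as outliers are exactly the discordant image pairs that the study's panel would then inspect for contributory facial features.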
Tang, Dorothy Y Y; Liu, Amy C Y; Lui, Simon S Y; Lam, Bess Y H; Siu, Bonnie W M; Lee, Tatia M C; Cheung, Eric F C
2016-02-28
Impairment in facial emotion perception is believed to be associated with aggression. Schizophrenia patients with antisocial features are more impaired in facial emotion perception than their counterparts without these features. However, previous studies did not define the comorbidity of antisocial personality disorder (ASPD) using stringent criteria. We recruited 30 participants with dual diagnoses of ASPD and schizophrenia, 30 participants with schizophrenia and 30 controls. We employed the Facial Emotional Recognition paradigm to measure facial emotion perception, and administered a battery of neurocognitive tests. The Life History of Aggression scale was used. ANOVAs and ANCOVAs were conducted to examine group differences in facial emotion perception, and control for the effect of other neurocognitive dysfunctions on facial emotion perception. Correlational analyses were conducted to examine the association between facial emotion perception and aggression. Patients with dual diagnoses performed worst in facial emotion perception among the three groups. The group differences in facial emotion perception remained significant, even after other neurocognitive impairments were controlled for. Severity of aggression was correlated with impairment in perceiving negative-valenced facial emotions in patients with dual diagnoses. Our findings support the presence of facial emotion perception impairment and its association with aggression in schizophrenia patients with comorbid ASPD. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
The face is not an empty canvas: how facial expressions interact with facial appearance.
Hess, Ursula; Adams, Reginald B; Kleck, Robert E
2009-12-12
Faces are not simply blank canvases upon which facial expressions write their emotional messages. In fact, facial appearance and facial movement are both important social signalling systems in their own right. We here provide multiple lines of evidence for the notion that the social signals derived from facial appearance on the one hand and facial movement on the other interact in a complex manner, sometimes reinforcing and sometimes contradicting one another. Faces provide information on who a person is. Sex, age, ethnicity, personality and other characteristics that can define a person and the social group the person belongs to can all be derived from the face alone. The present article argues that faces interact with the perception of emotion expressions because this information informs a decoder's expectations regarding an expresser's probable emotional reactions. Facial appearance also interacts more directly with the interpretation of facial movement because some of the features that are used to derive personality or sex information are also features that closely resemble certain emotional expressions, thereby enhancing or diluting the perceived strength of particular expressions.
Le, T Hoang Ngan; Luu, Khoa; Savvides, Marios
2013-08-01
Robust facial hair detection and segmentation is a highly valued soft biometric attribute for carrying out forensic facial analysis. In this paper, we propose a novel and fully automatic system, called SparCLeS, for beard/moustache detection and segmentation in challenging facial images. SparCLeS uses the multiscale self-quotient (MSQ) algorithm to preprocess facial images and deal with illumination variation. Histogram of oriented gradients (HOG) features are extracted from the preprocessed images and a dynamic sparse classifier is built using these features to classify a facial region as either containing skin or facial hair. A level set based approach, which makes use of the advantages of both global and local information, is then used to segment the regions of a face containing facial hair. Experimental results demonstrate the effectiveness of our proposed system in detecting and segmenting facial hair regions in images drawn from three databases, i.e., the NIST Multiple Biometric Grand Challenge (MBGC) still face database, the NIST Color Facial Recognition Technology FERET database, and the Labeled Faces in the Wild (LFW) database.
Chiropractic management of Bell palsy with low level laser and manipulation: a case report
Rubis, Lisa M.
2013-01-01
Objective The purpose of this case report is to describe chiropractic management including the use of cold laser and chiropractic manipulation in the treatment of a patient with Bell palsy. Clinical features A 40-year-old male patient had a 10-day history of facial paralysis on his left side, including the inability to close his left eye, which also had tearing and a burning sensation. The patient had trouble lifting his left lip and complained of drooling while brushing his teeth. There was no previous history of similar symptoms or a recent infection. Prior treatment had included oral steroids. Intervention and outcome The patient was treated with low-level laser therapy and chiropractic manipulation 2 times in 4 days. The laser was applied along the course of the facial nerve for 30 seconds at each point and for 1 minute at the stylomastoid foramen. The laser used was a GaAs class 4 laser with a wavelength of 910 nm. The patient perceived a 70% to 80% improvement of facial movement after the first treatment. After the second treatment, the patient reported full control of his facial movements. Conclusion A patient with acute facial paralysis appeared to have complete resolution of his symptoms following the application of low-level laser therapy and chiropractic manipulation. PMID:24396332
Facial emotion recognition and borderline personality pathology.
Meehan, Kevin B; De Panfilis, Chiara; Cain, Nicole M; Antonucci, Camilla; Soliani, Antonio; Clarkin, John F; Sambataro, Fabio
2017-09-01
The impact of borderline personality pathology on facial emotion recognition has been in dispute; with impaired, comparable, and enhanced accuracy found in high borderline personality groups. Discrepancies are likely driven by variations in facial emotion recognition tasks across studies (stimuli type/intensity) and heterogeneity in borderline personality pathology. This study evaluates facial emotion recognition for neutral and negative emotions (fear/sadness/disgust/anger) presented at varying intensities. Effortful control was evaluated as a moderator of facial emotion recognition in borderline personality. Non-clinical multicultural undergraduates (n = 132) completed a morphed facial emotion recognition task of neutral and negative emotional expressions across different intensities (100% Neutral; 25%/50%/75% Emotion) and self-reported borderline personality features and effortful control. Greater borderline personality features related to decreased accuracy in detecting neutral faces, but increased accuracy in detecting negative emotion faces, particularly at low-intensity thresholds. This pattern was moderated by effortful control; for individuals with low but not high effortful control, greater borderline personality features related to misattributions of emotion to neutral expressions, and enhanced detection of low-intensity emotional expressions. Individuals with high borderline personality features may therefore exhibit a bias toward detecting negative emotions that are not or barely present; however, good self-regulatory skills may protect against this potential social-cognitive vulnerability. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.
3D facial expression recognition using maximum relevance minimum redundancy geometrical features
NASA Astrophysics Data System (ADS)
Rabiu, Habibu; Saripan, M. Iqbal; Mashohor, Syamsiah; Marhaban, Mohd Hamiruce
2012-12-01
In recent years, facial expression recognition (FER) has become an attractive research area which, besides the fundamental challenges it poses, finds application in areas such as human-computer interaction, clinical psychology, lie detection, pain assessment, and neurology. Generally, approaches to FER consist of three main steps: face detection, feature extraction and expression recognition. The recognition accuracy of FER hinges immensely on the relevance of the selected features in representing the target expressions. In this article, we present a person- and gender-independent 3D facial expression recognition method using maximum relevance minimum redundancy geometrical features. The aim is to detect a compact set of features that sufficiently represents the most discriminative differences between the target classes. A multi-class one-against-one SVM classifier was employed to recognize the seven facial expressions: neutral, happy, sad, angry, fear, disgust, and surprise. An average recognition accuracy of 92.2% was recorded. Furthermore, inter-database homogeneity was investigated between two independent databases, the BU-3DFE and UPM-3DFE; the results showed a strong homogeneity between the two databases.
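The maximum relevance minimum redundancy (mRMR) criterion named above can be sketched as a greedy loop: at each step pick the feature with the highest relevance to the class label minus its average redundancy with the features already selected. As a hedge, this sketch uses absolute Pearson correlation in place of the mutual information estimates normally used by mRMR, and the toy feature matrix is invented for illustration.

```python
import numpy as np

def mrmr_select(X, y, k):
    """Greedy max-relevance min-redundancy feature selection.

    Uses |Pearson correlation| as a stand-in for mutual information:
    relevance = |corr(feature, label)|, redundancy = mean |corr| with
    already-selected features.
    """
    n_feat = X.shape[1]
    rel = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(n_feat)])
    selected = [int(np.argmax(rel))]           # seed with the most relevant feature
    while len(selected) < k:
        best_j, best_score = None, -np.inf
        for j in range(n_feat):
            if j in selected:
                continue
            red = np.mean([abs(np.corrcoef(X[:, j], X[:, s])[0, 1])
                           for s in selected])
            score = rel[j] - red               # relevance minus redundancy
            if score > best_score:
                best_j, best_score = j, score
        selected.append(best_j)
    return selected

rng = np.random.default_rng(4)
y = rng.integers(0, 2, 200).astype(float)
informative = y + 0.3 * rng.standard_normal(200)
duplicate = informative + 0.01 * rng.standard_normal(200)   # redundant copy
noise = rng.standard_normal(200)
X = np.column_stack([informative, duplicate, noise])
picked = mrmr_select(X, y, k=2)
```

The redundancy penalty is what keeps the near-duplicate of an already-selected feature out of the compact set, which is the behaviour the abstract's "compact set of discriminative features" relies on.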
Nagarajan, R; Hariharan, M; Satiyan, M
2012-08-01
Developing tools to assist physically disabled and immobilized people through facial expression is a challenging area of research and has attracted many researchers recently. In this paper, luminance-sticker-based facial expression recognition is proposed. Recognition of facial expression is carried out by employing the Discrete Wavelet Transform (DWT) as a feature extraction method. Different wavelet families with different orders (db1 to db20, Coif1 to Coif5 and Sym2 to Sym8) are utilized to investigate their performance in recognizing facial expression and to evaluate their computational time. The standard deviation is computed for the coefficients of the first level of wavelet decomposition for every order of wavelet family. This standard deviation is used to form a set of feature vectors for classification. In this study, conventional validation and cross-validation are performed to evaluate the efficiency of the suggested feature vectors. Three different classifiers, namely Artificial Neural Network (ANN), k-Nearest Neighbor (kNN) and Linear Discriminant Analysis (LDA), are used to classify a set of eight facial expressions. The experimental results demonstrate that the proposed method gives very promising classification accuracies.
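The feature construction above (one-level DWT, then the standard deviation of the coefficients) can be sketched for the simplest family, db1/Haar. This is a minimal averaging-form Haar transform written for illustration; the paper sweeps many wavelet orders, which would normally be done with a wavelet library rather than by hand.

```python
import numpy as np

def haar_dwt2(img):
    """One level of the 2D Haar (db1) transform: LL, LH, HL, HH sub-bands.

    Uses the averaging normalisation (sum/2, diff/2) for simplicity.
    """
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row pairs: average
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row pairs: difference
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def dwt_std_features(img):
    """Feature vector: standard deviation of each first-level sub-band."""
    return np.array([band.std() for band in haar_dwt2(img)])

face = np.random.default_rng(5).random((64, 64))
features = dwt_std_features(face)   # 4-dimensional descriptor per image
```

Per the abstract, one such small vector is computed per wavelet order and the resulting feature sets are compared across ANN, kNN and LDA classifiers.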
Research on driver fatigue detection
NASA Astrophysics Data System (ADS)
Zhang, Ting; Chen, Zhong; Ouyang, Chao
2018-03-01
Driver fatigue is one of the main causes of traffic accidents, so a driver fatigue detection system is of great significance in avoiding them. This paper presents a real-time method based on the fusion of multiple facial features, including eye closure, yawning and head movement. The eye state is classified as open or closed by a linear SVM classifier trained on HOG features of the detected eye. The mouth state is determined according to the width-height ratio of the mouth. Head movement is detected from the head pitch angle calculated from facial landmarks. The driver's fatigue state can then be inferred by a model trained on the above features. According to the experimental results, driver fatigue detection achieves excellent performance, indicating that the developed method is valuable for avoiding traffic accidents caused by driver fatigue.
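The three cues above (eye closure rate, mouth width-height ratio, head pitch) can be fused with a simple rule, sketched below. The thresholds, the PERCLOS-style closure rate, and the cue-counting fusion are illustrative assumptions; the paper trains a model over the features rather than hand-setting a rule.

```python
import numpy as np

def mouth_aspect_ratio(mouth):
    """Height/width ratio of the mouth from 4 landmark points.

    mouth: dict with 'left', 'right', 'top', 'bottom' -> (x, y) coordinates.
    """
    width = np.linalg.norm(np.subtract(mouth["right"], mouth["left"]))
    height = np.linalg.norm(np.subtract(mouth["bottom"], mouth["top"]))
    return height / width

def fatigue_score(eye_closed_frames, total_frames, mar, pitch_deg,
                  perclos_thr=0.4, yawn_thr=0.6, nod_thr=20.0):
    """Count active cues: eye-closure rate (PERCLOS-style), yawning, head nod."""
    perclos = eye_closed_frames / total_frames
    cues = [perclos > perclos_thr, mar > yawn_thr, abs(pitch_deg) > nod_thr]
    return sum(cues)  # 0 (alert) .. 3 (strongly fatigued)

# A wide-open mouth (yawn), frequent eye closure, and a drooping head
mar = mouth_aspect_ratio({"left": (10, 50), "right": (50, 50),
                          "top": (30, 40), "bottom": (30, 72)})
score = fatigue_score(eye_closed_frames=55, total_frames=100,
                      mar=mar, pitch_deg=25.0)
```

In a deployed system the per-frame eye state would come from the HOG+SVM classifier and the pitch angle from a landmark-based head-pose estimate, with the fused score smoothed over a time window.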
Emotion-independent face recognition
NASA Astrophysics Data System (ADS)
De Silva, Liyanage C.; Esther, Kho G. P.
2000-12-01
Current face recognition techniques tend to work well when recognizing faces under small variations in lighting, facial expression and pose, but deteriorate under more extreme conditions. In this paper, a face recognition system to recognize faces of known individuals, despite variations in facial expression due to different emotions, is developed. The eigenface approach is used for feature extraction. Classification methods include Euclidean distance, back propagation neural network and generalized regression neural network. These methods yield 100% recognition accuracy when the training database is representative, containing one image representing the peak expression for each emotion of each person apart from the neutral expression. The feature vectors used for comparison in the Euclidean distance method and for training the neural network must be all the feature vectors of the training set. These results are obtained for a face database consisting of only four persons.
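The eigenface-plus-Euclidean-distance pipeline described above can be sketched as follows: build a PCA basis from the training faces, project gallery and probe images into it, and assign the identity of the nearest gallery vector. The random 256-pixel "faces", four identities, and eight components are synthetic assumptions standing in for the paper's four-person database.

```python
import numpy as np

def eigenfaces(train, n_components):
    """PCA basis of flattened face images (rows = images)."""
    mean = train.mean(axis=0)
    centred = train - mean
    # SVD of the centred data matrix: rows of vt are the principal axes
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return mean, vt[:n_components]

def project(faces, mean, basis):
    """Project one face (1D) or a stack of faces (2D) onto the eigenface basis."""
    return (faces - mean) @ basis.T

def nearest_identity(probe_feat, gallery_feats, gallery_ids):
    """Euclidean nearest-neighbour classification in eigenface space."""
    dists = np.linalg.norm(gallery_feats - probe_feat, axis=1)
    return gallery_ids[int(np.argmin(dists))]

rng = np.random.default_rng(6)
identities = np.repeat(np.arange(4), 5)               # 4 people, 5 expressions each
prototypes = rng.standard_normal((4, 256)) * 3        # per-person "mean face"
train = prototypes[identities] + rng.standard_normal((20, 256))
mean, basis = eigenfaces(train, n_components=8)
gallery = project(train, mean, basis)
probe = project(prototypes[2] + rng.standard_normal(256), mean, basis)
who = nearest_identity(probe, gallery, identities)
```

The abstract's observation that the training set must contain each person's peak expression per emotion corresponds here to the gallery covering the within-person variation the probe may exhibit.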
The aging African-American face.
Brissett, Anthony E; Naylor, Michelle C
2010-05-01
With the desire to create a more youthful appearance, patients of all races and ethnicities are increasingly seeking nonsurgical and surgical rejuvenation. In particular, facial rejuvenation procedures have grown significantly within the African-American population. This increase has resulted in a paradigm shift in facial plastic surgery as one considers rejuvenation procedures in those of African descent, as the aging process of various racial groups differs from traditional models. The purpose of this article is to draw attention to the facial features unique to those of African descent and the role these features play in the aging process, taking care to highlight the differences from traditional models of facial aging. In addition, this article will briefly describe the nonsurgical and surgical options for facial rejuvenation taking into consideration the previously discussed facial aging differences and postoperative considerations. Thieme Medical Publishers.
Wingenbach, Tanja S. H.; Brosnan, Mark; Pfaltz, Monique C.; Plichta, Michael M.; Ashwin, Chris
2018-01-01
According to embodied cognition accounts, viewing others’ facial emotion can elicit the respective emotion representation in observers which entails simulations of sensory, motor, and contextual experiences. In line with that, published research found viewing others’ facial emotion to elicit automatic matched facial muscle activation, which was further found to facilitate emotion recognition. Perhaps making congruent facial muscle activity explicit produces an even greater recognition advantage. If there is conflicting sensory information, i.e., incongruent facial muscle activity, this might impede recognition. The effects of actively manipulating facial muscle activity on facial emotion recognition from videos were investigated across three experimental conditions: (a) explicit imitation of viewed facial emotional expressions (stimulus-congruent condition), (b) pen-holding with the lips (stimulus-incongruent condition), and (c) passive viewing (control condition). It was hypothesised that (1) experimental condition (a) and (b) result in greater facial muscle activity than (c), (2) experimental condition (a) increases emotion recognition accuracy from others’ faces compared to (c), (3) experimental condition (b) lowers recognition accuracy for expressions with a salient facial feature in the lower, but not the upper face area, compared to (c). Participants (42 males, 42 females) underwent a facial emotion recognition experiment (ADFES-BIV) while electromyography (EMG) was recorded from five facial muscle sites. The experimental conditions’ order was counter-balanced. Pen-holding caused stimulus-incongruent facial muscle activity for expressions with facial feature saliency in the lower face region, which reduced recognition of lower face region emotions. Explicit imitation caused stimulus-congruent facial muscle activity without modulating recognition. Methodological implications are discussed. PMID:29928240
Xu, Jiatuo; Wu, Hongjin; Lu, Luming; Tu, Liping; Zhang, Zhifeng; Chen, Xiao
2012-12-01
This paper aims to observe differences in the facial color of people with different health statuses using spectrophotometric color measurement, according to the theory of facial color diagnosis in the Internal Classic. We gathered facial color information on the health status of subjects in a healthy group (183), a sub-healthy group (287) and a disease group (370). The information included L, a, b, and C values and reflectance at wavelengths of 400-700 nm, measured with a CM-2600D spectrophotometric color measuring instrument at 8 facial points. The results indicated that overall complexion color values differed significantly among the three groups. Subjects in the disease group appeared dark in complexion, whereas those in the sub-healthy group appeared pale. The L, a, b, and C values showed varying degrees of significant difference (P < 0.05) at 6 of the points among the groups, and the central position of the face showed the most significant differences in all groups. By comparing the facial color information at the same point across the three groups, we obtained each group's diagnostically distinctive point. Spectrophotometric color measurement thus has some diagnostic value in distinguishing disease status and various states of health, and provides a promising quantitative basis for the Chinese medical inspection of the complexion.
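The L, a, b and C values above come from the CIELAB colour space, where chroma C* and hue angle h are derived from the a* and b* coordinates. A minimal sketch of that derivation follows; the measurement values are invented for illustration, not taken from the study:

```python
import math

def chroma_hue(a, b):
    """Compute CIELAB chroma C* and hue angle h (degrees) from a*, b*."""
    c = math.hypot(a, b)                      # C* = sqrt(a*^2 + b*^2)
    h = math.degrees(math.atan2(b, a)) % 360.0  # hue angle in [0, 360)
    return c, h

# Example with assumed a*, b* values for a reddish facial measurement
c, h = chroma_hue(12.0, 16.0)
print(round(c, 2), round(h, 1))               # 20.0 53.1
```

Instruments such as the spectrophotometer mentioned above typically report L*, a*, b* directly, so chroma is a simple post-processing step.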
Influence of gravity upon some facial signs.
Flament, F; Bazin, R; Piot, B
2015-06-01
Facial clinical signs and their integration are the basis of the perception that others have of us, notably the age they imagine we are. Objective measurement of facial changes in motion, before and after application of a skin regimen, is essential for further evaluating efficacy in facial dynamics. Quantifying facial changes with respect to gravity allows us to address the 'control' of facial shape in daily activities. Standardized photographs of the faces of 30 Caucasian female subjects of various ages (24-73 years) were successively taken in upright and supine positions within a short time interval. All these pictures were then reframed, so that any bias due to surrounding facial features was avoided when evaluating a single sign, for clinical rating of several facial signs by trained experts using published standardized photographic scales. For all subjects, the supine position increased facial width but not height, giving a fuller appearance to the face. More importantly, the supine position changed the severity of facial ageing features (e.g. wrinkles) compared with the upright position, and whether these features were attenuated or exacerbated depended on their facial location. The supine position mostly modified signs of the lower half of the face, whereas those of the upper half appeared unchanged or slightly accentuated. These changes were much more marked in the older groups, where some deep labial folds almost vanished. These alterations decreased the perceived ages of the subjects by an average of 3.8 years. Although preliminary, this study suggests that a 90° rotation of the face with respect to gravity induces rapid rearrangements, among which changes in tensional forces within and across the face, motility of interstitial free water in underlying skin tissue and/or alterations of facial Langer lines likely play a significant role.
© 2015 Society of Cosmetic Scientists and the Société Française de Cosmétologie.
Music-Elicited Emotion Identification Using Optical Flow Analysis of Human Face
NASA Astrophysics Data System (ADS)
Kniaz, V. V.; Smirnova, Z. N.
2015-05-01
Human emotion identification from image sequences is in high demand nowadays. Possible applications range from the automatic smile-shutter function of consumer-grade digital cameras to Biofied Building technologies, which enable communication between a building space and its residents. The highly perceptual nature of human emotions makes their classification and identification complex. The main question arises from the subjective quality of the emotional classification of events that elicit human emotions. A variety of methods for the formal classification of emotions have been developed in musical psychology. This work focuses on the identification of human emotions evoked by musical pieces using human face tracking and optical flow analysis. A facial feature tracking algorithm used for estimating facial feature speed and position is presented. Facial features were extracted from each image sequence using face tracking with local binary pattern (LBP) features. Accurate relative speeds of facial features were estimated using optical flow analysis. The obtained relative positions and speeds were used as the output facial emotion vector. The algorithm was tested using original software and recorded image sequences. The proposed technique proves to give a robust identification of human emotions elicited by musical pieces. The estimated models could be used for human emotion identification from image sequences in such fields as emotion-based musical background or mood-dependent radio.
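The LBP features used for face tracking above encode each pixel by thresholding its neighbours against the centre value. As a rough sketch of what an LBP code is, here is a minimal radius-1, 8-neighbour implementation in plain NumPy; the study's actual tracker and parameters are not specified here, so this is illustrative only:

```python
import numpy as np

def lbp_image(gray):
    """8-neighbour local binary pattern codes for the interior pixels
    of a 2-D grayscale array (radius 1)."""
    c = gray[1:-1, 1:-1]
    # neighbour offsets clockwise from the top-left corner
    shifts = [(0, 0), (0, 1), (0, 2), (1, 2),
              (2, 2), (2, 1), (2, 0), (1, 0)]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        n = gray[dy:dy + c.shape[0], dx:dx + c.shape[1]]
        codes |= ((n >= c).astype(np.uint8) << bit)  # set bit if neighbour >= centre
    return codes

img = np.array([[1, 2, 3],
                [4, 5, 6],
                [7, 8, 9]], dtype=np.uint8)
print(lbp_image(img))  # [[120]] -- code for the single interior pixel
```

In practice a face region would be divided into cells and the histogram of these codes per cell used as the feature vector.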
Facial expression recognition based on weber local descriptor and sparse representation
NASA Astrophysics Data System (ADS)
Ouyang, Yan
2018-03-01
Automatic facial expression recognition has been one of the research hotspots in computer vision for nearly ten years. During the decade, many state-of-the-art methods have been proposed that achieve very high accuracy on face images free of any interference. Nowadays, many researchers have begun to tackle the task of classifying facial expression images with corruptions and occlusions, and the sparse representation-based classification (SRC) framework has been widely used because it is robust to corruptions and occlusions. This paper therefore proposes a novel facial expression recognition method based on the Weber local descriptor (WLD) and sparse representation. The method has three parts: first, the face images are divided into many local patches; then the WLD histogram of each patch is extracted; finally, all the WLD histogram features are concatenated into a vector and combined with SRC to classify the facial expressions. Experimental results on the Cohn-Kanade database show that the proposed method is robust to occlusions and corruptions.
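The WLD histogram step can be sketched as follows. The differential excitation of the Weber local descriptor compares each pixel with the sum of its neighbours, xi = arctan(alpha * sum(neighbour - centre) / centre). The value of alpha and the bin count below are assumptions, since the paper's parameters are not given here:

```python
import numpy as np

def wld_excitation(gray, alpha=3.0, eps=1e-6):
    """Differential excitation of the Weber local descriptor:
    xi = arctan(alpha * sum(neighbour - centre) / centre),
    computed for the interior pixels of a grayscale array."""
    g = gray.astype(np.float64)
    c = g[1:-1, 1:-1]
    total = np.zeros_like(c)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            if (dy, dx) != (1, 1):           # skip the centre itself
                total += g[dy:dy + c.shape[0], dx:dx + c.shape[1]] - c
    return np.arctan(alpha * total / (c + eps))

def wld_histogram(gray, bins=8):
    """Patch-level WLD feature: normalized histogram of excitation values."""
    xi = wld_excitation(gray)
    hist, _ = np.histogram(xi, bins=bins, range=(-np.pi / 2, np.pi / 2))
    return hist / max(hist.sum(), 1)

img = np.array([[1, 2, 3],
                [4, 5, 6],
                [7, 8, 9]], dtype=np.uint8)
print(wld_excitation(img))  # [[0.]] -- neighbour sum equals 8x centre here
```

Per the abstract, histograms like these would be computed per patch, concatenated, and passed to an SRC classifier.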
Pomahac, Bohdan; Aflaki, Pejman; Nelson, Charles; Balas, Benjamin
2010-05-01
Partial facial allotransplantation is an emerging option in the reconstruction of central facial defects, providing both function and aesthetic appearance. Ethical debate partly stems from uncertainty surrounding the identity aspects of the procedure. There is no objective evidence regarding the effect of donors' transplanted facial structures on the appearance change of the recipients and its influence on facial recognition of donors and recipients. Full-face frontal-view color photographs of 100 volunteers were taken at a distance of 150 cm with a digital camera (Nikon/DX80). Photographs were taken in front of a blue background and with a neutral facial expression. Using image-editing software (Adobe Photoshop CS3), central facial transplantation was performed between participants. Twenty observers performed a familiar-face recognition task, identifying 40 post-transplant composite faces presented individually on a screen at a viewing distance of 60 cm with an exposure time of 5 s. Each composite face comprised a face familiar and a face unfamiliar to the observers. Trials were run with and without external facial features (head contour, hair and ears). Two variables were defined: 'Appearance Transfer' refers to the transfer of the donor's appearance to the recipient; 'Appearance Persistence' deals with the extent of the recipient's appearance change post-transplantation. A t-test was run to determine whether the rates of Appearance Transfer differed from Appearance Persistence. The average Appearance Transfer rate (2.6%) was significantly lower than the Appearance Persistence rate (66%) (P<0.001), indicating that transfer of the donor's appearance to the recipient is negligible, whereas recipients will be identified the majority of the time. External facial features were important in the facial recognition of recipients, evidenced by a significant rise in Appearance Persistence from 19% in the absence of external features to 66% when those features were present (P<0.01).
This study may be helpful in the informed consent process of prospective recipients. It is beneficial for the education of donors' families and is expected to positively affect their decision to consent to facial tissue donation. Copyright (c) 2009 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.
Hepatitis Diagnosis Using Facial Color Image
NASA Astrophysics Data System (ADS)
Liu, Mingjia; Guo, Zhenhua
Facial color diagnosis is an important diagnostic method in traditional Chinese medicine (TCM). However, due to its qualitative, subjective and experience-based nature, traditional facial color diagnosis has a very limited application in clinical medicine. To circumvent the subjective and qualitative problems of the facial color diagnosis of traditional Chinese medicine, in this paper we present a novel computer-aided facial color diagnosis method (CAFCDM). The method has three parts: a face image database, an image preprocessing module and a diagnosis engine. The face image database was built from a group of 116 patients affected by 2 kinds of liver disease and 29 healthy volunteers. Quantitative color features are extracted from the facial images using popular digital image processing techniques. Then, a KNN classifier is employed to model the relationship between the quantitative color features and the diseases. The results show that the method can properly identify three groups: healthy, severe hepatitis with jaundice and severe hepatitis without jaundice, with accuracy higher than 73%.
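The KNN modelling step can be illustrated with a from-scratch nearest-neighbour vote. The colour features and labels below are invented toy values, not the study's data:

```python
import numpy as np

def knn_predict(train_x, train_y, query, k=3):
    """Classify a query feature vector by majority vote among its
    k nearest training samples (Euclidean distance)."""
    d = np.linalg.norm(train_x - query, axis=1)
    nearest = np.argsort(d)[:k]
    labels, counts = np.unique(train_y[nearest], return_counts=True)
    return labels[np.argmax(counts)]

# Toy CIELAB-like colour features (illustrative values only):
# [L*, a*, b*] per subject; labels: 0 = healthy, 1 = jaundiced
X = np.array([[65.0, 10.0, 12.0], [67.0, 11.0, 14.0],
              [55.0, 8.0, 30.0], [53.0, 9.0, 32.0]])
y = np.array([0, 0, 1, 1])
print(knn_predict(X, y, np.array([54.0, 8.5, 31.0])))  # 1
```

The real system would use richer colour features extracted from the face images, but the classification mechanics are the same.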
An Automatic Registration Algorithm for 3D Maxillofacial Model
NASA Astrophysics Data System (ADS)
Qiu, Luwen; Zhou, Zhongwei; Guo, Jixiang; Lv, Jiancheng
2016-09-01
3D image registration aims at aligning two 3D data sets in a common coordinate system and has been widely used in computer vision, pattern recognition and computer-assisted surgery. One challenging problem in 3D registration is that point-wise correspondences between two point sets are often unknown a priori. In this work, we develop an automatic algorithm for the registration of 3D maxillofacial models, including facial surface and skull models. Our proposed registration algorithm can achieve a good alignment between partial and whole maxillofacial models in spite of ambiguous matching, which has a potential application in oral and maxillofacial reparative and reconstructive surgery. The proposed algorithm includes three steps: (1) 3D-SIFT feature extraction and FPFH descriptor construction; (2) feature matching using SAC-IA; (3) coarse rigid alignment and refinement by ICP. Experiments on facial surfaces and mandible skull models demonstrate the efficiency and robustness of our algorithm.
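The core of each ICP refinement iteration in step (3) is a closed-form least-squares rigid transform between matched point sets (the Kabsch/SVD solution). A minimal NumPy sketch, independent of the paper's actual implementation:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src -> dst
    for corresponding 3-D points (the inner step of each ICP iteration)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)          # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t

# Sanity check: recover a known rotation about z plus a translation
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
pts = np.random.default_rng(0).random((10, 3))
R, t = best_rigid_transform(pts, pts @ R_true.T + np.array([1.0, 2.0, 3.0]))
print(np.allclose(R, R_true), np.allclose(t, [1.0, 2.0, 3.0]))  # True True
```

Full ICP alternates this solve with nearest-neighbour re-matching; the SAC-IA stage in step (2) supplies the coarse initial correspondences.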
Towards a new taxonomy of idiopathic orofacial pain.
Woda, Alain; Tubert-Jeannin, Stéphanie; Bouhassira, Didier; Attal, Nadine; Fleiter, Bernard; Goulet, Jean-Paul; Gremeau-Richard, Christelle; Navez, Marie Louise; Picard, Pascale; Pionchon, Paul; Albuisson, Eliane
2005-08-01
There is no current consensus on the taxonomy of the different forms of idiopathic orofacial pain (stomatodynia, atypical odontalgia, atypical facial pain, facial arthromyalgia), which are sometimes considered separate entities and sometimes grouped together. In the present prospective multicentric study, we used a systematic approach to help place these different painful syndromes in the general classification of chronic facial pain. This multicenter study was carried out on 245 consecutive patients presenting with chronic facial pain (>4 months duration). Each patient was seen by two experts who proposed a diagnosis, administered a 111-item questionnaire and filled out a standardized 68-item examination form. Statistical processing included univariate analysis and several forms of multidimensional analysis. Migraines (n=37), tension-type headache (n=26), post-traumatic neuralgia (n=20) and trigeminal neuralgia (n=13) tended to cluster independently. When signs and symptoms describing topographic features were not included in the list of variables, the idiopathic orofacial pain patients tended to cluster in a single group. Inside this large cluster, only stomatodynia (n=42) emerged as a distinct homogeneous subgroup. In contrast, facial arthromyalgia (n=46) and an entity formed by atypical facial pain (n=25) and atypical odontalgia (n=13) could only be individualised by variables reflecting topographical characteristics. These data provide grounds for an evidence-based classification of idiopathic facial pain entities and indicate that the current sub-classification of these syndromes relies primarily on the topography of the symptoms.
High-resolution face verification using pore-scale facial features.
Li, Dong; Zhou, Huiling; Lam, Kin-Man
2015-08-01
Face recognition methods, which usually represent face images using holistic or local facial features, rely heavily on alignment. Their performance also suffers severe degradation under variations in expression or pose, especially when there is only one gallery image per subject. With the easy access to high-resolution (HR) face images nowadays, some HR face databases have recently been developed. However, few studies have tackled the use of HR information for face recognition or verification. In this paper, we propose a pose-invariant face-verification method, robust to alignment errors, that uses HR information based on pore-scale facial features. A new keypoint descriptor, namely pore-Principal Component Analysis (PCA)-Scale Invariant Feature Transform (PPCASIFT), adapted from PCA-SIFT, is devised for the extraction of a compact set of distinctive pore-scale facial features. Having matched the pore-scale features of two face regions, an effective robust-fitting scheme is proposed for the face-verification task. Experiments show that, with only one frontal-view gallery image per subject, our proposed method outperforms a number of standard verification methods, and can achieve excellent accuracy even when the faces are under large variations in expression and pose.
Familiarity effects in the construction of facial-composite images using modern software systems.
Frowd, Charlie D; Skelton, Faye C; Butt, Neelam; Hassan, Amal; Fields, Stephen; Hancock, Peter J B
2011-12-01
We investigate the effect of target familiarity on the construction of facial composites, as used by law enforcement to locate criminal suspects. Two popular software construction methods were investigated. Participants were shown a target face that was either familiar or unfamiliar to them and constructed a composite of it from memory using a typical 'feature' system, involving selection of individual facial features, or one of the newer 'holistic' types, involving repeated selection and breeding from arrays of whole faces. This study found that composites constructed of a familiar face were named more successfully than composites of an unfamiliar face; also, naming of composites of internal and external features was equivalent for construction of unfamiliar targets, but internal features were better named than external features for familiar targets. These findings applied to both systems, although an additional benefit emerged for the holistic type due to more accurate construction of internal features and evidence of a whole-face advantage. STATEMENT OF RELEVANCE: This work is of relevance to practitioners who construct facial composites with witnesses to and victims of crime, as well as to software designers, to help them improve the effectiveness of their composite systems.
Goodspeed, Kimberly; Newsom, Cassandra; Morris, Mary Ann; Powell, Craig; Evans, Patricia; Golla, Sailaja
2018-03-01
Pitt-Hopkins syndrome (PTHS) is a rare, genetic disorder caused by a molecular variant of TCF4 which is involved in embryologic neuronal differentiation. PTHS is characterized by syndromic facies, psychomotor delay, and intellectual disability. Other associated features include early-onset myopia, seizures, constipation, and hyperventilation-apneic spells. Many also meet criteria for autism spectrum disorder. Here the authors present a series of 23 PTHS patients with molecularly confirmed TCF4 variants and describe 3 unique individuals. The first carries a small deletion but does not exhibit the typical facial features nor the typical pattern of developmental delay. The second exhibits typical facial features, but has attained more advanced motor and verbal skills than other reported cases to date. The third displays typical features of PTHS, however inherited a large chromosomal duplication involving TCF4 from his unaffected father with somatic mosaicism. To the authors' knowledge, this is the first chromosomal duplication case reported to date.
Facial Age Synthesis Using Sparse Partial Least Squares (The Case of Ben Needham).
Bukar, Ali M; Ugail, Hassan
2017-09-01
Automatic facial age progression (AFAP) has been an active area of research in recent years. This is due to its numerous applications, which include searching for missing people. This study presents a new method of AFAP. Here, we use an active appearance model (AAM) to extract facial features from available images. An aging function is then modelled using sparse partial least squares regression (sPLS). Thereafter, the aging function is used to render new faces at different ages. To test the accuracy of our algorithm, extensive evaluation is conducted using a database of 500 face images with known ages. Furthermore, the algorithm is used to progress Ben Needham's facial image, taken when he was 21 months old, to the ages of 6, 14, and 22 years. The algorithm presented in this study could potentially be used to enhance the search for missing people worldwide. © 2017 American Academy of Forensic Sciences.
Wynn, Jonathan K.; Lee, Junghee; Horan, William P.; Green, Michael F.
2008-01-01
Schizophrenia patients show impairments in identifying facial affect; however, it is not known at what stage facial affect processing is impaired. We evaluated 3 event-related potentials (ERPs) to explore stages of facial affect processing in schizophrenia patients. Twenty-six schizophrenia patients and 27 normal controls participated. In separate blocks, subjects identified the gender of a face, the emotion of a face, or if a building had 1 or 2 stories. Three ERPs were examined: (1) P100 to examine basic visual processing, (2) N170 to examine facial feature encoding, and (3) N250 to examine affect decoding. Behavioral performance on each task was also measured. Results showed that schizophrenia patients’ P100 was comparable to the controls during all 3 identification tasks. Both patients and controls exhibited a comparable N170 that was largest during processing of faces and smallest during processing of buildings. For both groups, the N250 was largest during the emotion identification task and smallest for the building identification task. However, the patients produced a smaller N250 compared with the controls across the 3 tasks. The groups did not differ in behavioral performance in any of the 3 identification tasks. The pattern of intact P100 and N170 suggest that patients maintain basic visual processing and facial feature encoding abilities. The abnormal N250 suggests that schizophrenia patients are less efficient at decoding facial affect features. Our results imply that abnormalities in the later stage of feature decoding could potentially underlie emotion identification deficits in schizophrenia. PMID:18499704
Discriminant Features and Temporal Structure of Nonmanuals in American Sign Language
Benitez-Quiroz, C. Fabian; Gökgöz, Kadir; Wilbur, Ronnie B.; Martinez, Aleix M.
2014-01-01
To fully define the grammar of American Sign Language (ASL), a linguistic model of its nonmanuals needs to be constructed. While significant progress has been made to understand the features defining ASL manuals, after years of research, much still needs to be done to uncover the discriminant nonmanual components. The major barrier to achieving this goal is the difficulty in correlating facial features and linguistic features, especially since these correlations may be temporally defined. For example, a facial feature (e.g., head moves down) occurring at the end of the movement of another facial feature (e.g., brows moves up), may specify a Hypothetical conditional, but only if this time relationship is maintained. In other instances, the single occurrence of a movement (e.g., brows move up) can be indicative of the same grammatical construction. In the present paper, we introduce a linguistic–computational approach to efficiently carry out this analysis. First, a linguistic model of the face is used to manually annotate a very large set of 2,347 videos of ASL nonmanuals (including tens of thousands of frames). Second, a computational approach is used to determine which features of the linguistic model are more informative of the grammatical rules under study. We used the proposed approach to study five types of sentences – Hypothetical conditionals, Yes/no questions, Wh-questions, Wh-questions postposed, and Assertions – plus their polarities – positive and negative. Our results verify several components of the standard model of ASL nonmanuals and, most importantly, identify several previously unreported features and their temporal relationship. Notably, our results uncovered a complex interaction between head position and mouth shape. These findings define some temporal structures of ASL nonmanuals not previously detected by other approaches. PMID:24516528
Face in profile view reduces perceived facial expression intensity: an eye-tracking study.
Guo, Kun; Shaw, Heather
2015-02-01
Recent studies measuring facial expressions of emotion have focused primarily on the perception of frontal face images. As we frequently encounter expressive faces from different viewing angles, a mechanism allowing invariant expression perception would be advantageous to our social interactions. Although a couple of studies have indicated comparable expression categorization accuracy across viewpoints, it is unknown how perceived expression intensity and associated gaze behaviour change across viewing angles. Differences could arise because the diagnostic cues from local facial features for decoding expressions could vary with viewpoint. Here we manipulated the orientation of faces (frontal, mid-profile, and profile view) displaying six common facial expressions of emotion, and measured participants' expression categorization accuracy, perceived expression intensity and associated gaze patterns. In comparison with frontal faces, profile faces slightly reduced identification rates for disgust and sad expressions, but significantly decreased perceived intensity for all tested expressions. Although viewpoint had a quantitative, expression-specific influence on the proportion of fixations directed at local facial features, the qualitative gaze distribution within facial features (e.g., the eyes tended to attract the highest proportion of fixations, followed by the nose and then the mouth region) was independent of viewpoint and expression type. Our results suggest viewpoint-invariant, categorical processing of facial expressions, which could be linked to a viewpoint-invariant holistic gaze strategy for extracting expressive facial cues. Copyright © 2014 Elsevier B.V. All rights reserved.
Oral-facial-digital syndrome type IX in a patient with Dandy-Walker malformation.
Nagai, K; Nagao, M; Nagao, M; Yanai, S; Minagawa, K; Takahashi, Y; Takekoshi, Y; Ishizaka, A; Matsuzono, Y; Kobayashi, O; Itagaki, T
1998-01-01
We report a girl with oral, facial, and digital anomalies including multiple alveolar frenula, lobulated tongue with nodules, a posterior cleft palate, hypertelorism, a prominent forehead with a large anterior fontanelle, and postaxial polydactyly in both hands and the right foot, features compatible with the oral-facial-digital syndrome (OFDS). In addition, she had bilateral microphthalmia, optic disc coloboma, and retinal degeneration with partial detachment, thus establishing a diagnosis of OFDS type IX. Dandy-Walker malformation and retrobulbar cysts were observed on MRI. These additional malformations have not been reported in OFDS type IX. The frequent apnoeic spells which occurred immediately after birth were relieved after cystoperitoneal shunt implantation for hydrocephalus. Considering our case and previous reports of OFDS type IX, including two male sibs, a boy born to consanguineous parents, and three females, inheritance is probably autosomal recessive. PMID:9598735
Asadollahi, Reza; Strauss, Justin E; Zenker, Martin; Beuing, Oliver; Edvardson, Simon; Elpeleg, Orly; Strom, Tim M; Joset, Pascal; Niedrist, Dunja; Otte, Christine; Oneda, Beatrice; Boonsawat, Paranchai; Azzarello-Burri, Silvia; Bartholdi, Deborah; Papik, Michael; Zweier, Markus; Haas, Cordula; Ekici, Arif B; Baumer, Alessandra; Boltshauser, Eugen; Steindl, Katharina; Nothnagel, Michael; Schinzel, Albert; Stoeckli, Esther T; Rauch, Anita
2018-02-01
Acrocallosal syndrome (ACLS) is an autosomal recessive neurodevelopmental disorder caused by KIF7 defects and belongs to the heterogeneous group of ciliopathies related to Joubert syndrome (JBTS). While ACLS is characterized by macrocephaly, prominent forehead, depressed nasal bridge, and hypertelorism, facial dysmorphism has not been emphasized in JBTS cohorts with molecular diagnosis. To evaluate the specificity and etiology of ACLS craniofacial features, we performed whole exome or targeted Sanger sequencing in patients with the aforementioned overlapping craniofacial appearance but variable additional ciliopathy features, followed by functional studies. We found (likely) pathogenic variants of KIF7 in 5 out of 9 families, including the original ACLS patients, and delineated 1000- to 4000-year-old Swiss founder alleles. Three of the remaining families had (likely) pathogenic variants in the JBTS gene C5orf42, and one patient had a novel de novo frameshift variant in SHH, known to cause autosomal dominant holoprosencephaly. In accordance with the patients' craniofacial anomalies, we showed facial midline widening after silencing of C5orf42 in chicken embryos. We further supported the link between KIF7, SHH, and C5orf42 by demonstrating abnormal primary cilia and a diminished response to an SHH agonist in fibroblasts of C5orf42-mutated patients, as well as axonal pathfinding errors in C5orf42-silenced chicken embryos similar to those observed after perturbation of Shh signaling. Our findings therefore suggest that, besides the neurodevelopmental features, macrocephaly and facial widening are likely more general signs of disturbed SHH signaling. Nevertheless, long-term follow-up revealed that C5orf42-mutated patients showed catch-up development and fading of facial features, contrary to KIF7-mutated patients.
A new atlas for the evaluation of facial features: advantages, limits, and applicability.
Ritz-Timme, Stefanie; Gabriel, Peter; Obertovà, Zuzana; Boguslawski, Melanie; Mayer, F; Drabik, A; Poppa, Pasquale; De Angelis, Danilo; Ciaffi, Romina; Zanotti, Benedetta; Gibelli, Daniele; Cattaneo, Cristina
2011-03-01
Methods for verifying the identity of offenders from video-surveillance images in criminal investigations are currently under scrutiny by forensic experts around the globe. The anthroposcopic, or morphological, approach based on facial features is the one most frequently used by international forensic experts. However, a specific set of applicable features has not yet been agreed on by the experts. Furthermore, population frequencies of such features have not been recorded, and only a few validation tests have been published. To combat and prevent crime in Europe, the European Commission funded an extensive research project dedicated to optimizing methods for the facial identification of persons in photographs. Within this research project, standardized photographs of 900 males between 20 and 31 years of age from Germany, Italy, and Lithuania were acquired. Based on these photographs, 43 facial features were described and evaluated in detail. These efforts led to the development of a new morphologic atlas, called the DMV atlas ("Düsseldorf Milan Vilnius," from the participating cities). This study is the first attempt at verifying the feasibility of this atlas as a preliminary step to personal identification by exploring intra- and interobserver error. The analysis yielded mismatch percentages from 19% to 39%, which reflect the subjectivity of the approach and suggest caution in verifying personal identity only from the classification of facial features. Nonetheless, the use of the atlas leads to a significant improvement in the consistency of evaluation.
Kurosumi, M; Mizukoshi, K
2018-05-01
The types of shape feature that constitute a face have not been comprehensively established, and most previous studies of age-related changes in facial shape have focused on individual characteristics, such as wrinkles and sagging skin. In this study, we quantitatively measured differences in face shape between individuals and investigated how shape features change with age. We analyzed the faces of 280 Japanese women aged 20-69 years three-dimensionally and used principal component analysis to establish the shape features that characterized individual differences. We also evaluated the relationships between each feature and age, clarifying the shape features characteristic of different age groups. Changes in facial shape in middle age were a decreased volume of the upper face and an increased volume of the whole cheeks and around the chin. Changes in older people were an increased volume of the lower cheeks and around the chin, sagging skin, and jaw distortion. Principal component analysis was effective for identifying facial shape features that represent individual and age-related differences. This method allowed straightforward measurements, such as the increase or decrease in cheek volume caused by soft tissue changes or skeletal changes to the forehead or jaw, simply by acquiring three-dimensional facial images. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
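The principal component analysis used in such shape studies can be sketched on flattened landmark coordinates. The array sizes below are arbitrary placeholders, not the study's 280-subject scan data:

```python
import numpy as np

def shape_pca(shapes, n_components=2):
    """Principal component analysis of flattened landmark coordinates.
    shapes: (n_subjects, n_landmarks * 3) array. Returns the mean shape,
    the leading components, and per-subject scores."""
    mean = shapes.mean(axis=0)
    centered = shapes - mean
    # SVD of the centred data gives the principal axes directly
    U, S, Vt = np.linalg.svd(centered, full_matrices=False)
    components = Vt[:n_components]
    scores = centered @ components.T
    return mean, components, scores

rng = np.random.default_rng(1)
faces = rng.normal(size=(20, 30))        # 20 subjects, 10 landmarks x 3 coords
mean, comps, scores = shape_pca(faces)
# each subject is then approximated by mean + scores @ comps
recon = mean + scores @ comps
print(comps.shape, scores.shape)         # (2, 30) (20, 2)
```

Each retained component is a "shape feature" in the abstract's sense: varying one score moves the whole face along an axis such as cheek volume or jaw width.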
Neri, Iria; Raone, Beatrice; Dondi, Arianna; Misciali, Cosimo; Patrizi, Annalisa
2013-01-01
Idiopathic facial aseptic granuloma (IFAG), or pyodermite froide du visage, is a skin disease reported only in children and characterized by painless red nodules usually located on the cheeks. Its etiology is still unclear, but some authors have considered the possibility that IFAG might be included in the spectrum of granulomatous rosacea (GR). The histopathological features of IFAG and GR are quite similar, showing perifolliculitis, granulomas, folliculitis, and lymphocytes and plasma cells around epithelioid histiocytes. In the present article, we discuss three cases in which the association between a facial nodule, compatible with both IFAG and GR, and recurrent chalazia leads us to support the hypothesis that IFAG should be considered a form of GR. © 2012 Wiley Periodicals, Inc.
Gunduz, Mehmet
2016-01-01
Peroxisomal disorders are a group of genetically heterogeneous metabolic diseases related to dysfunction of peroxisomes. Dysmorphic features, neurological abnormalities, and hepatic dysfunction can be presenting signs of peroxisomal disorders. Here we present the dysmorphic facial features and other clinical characteristics of two patients with PEX1 gene mutations. Follow-up periods were 3.5 years and 1 year. Case I was a one-year-old girl who presented with neurodevelopmental delay, hepatomegaly, bilateral hearing loss, and visual problems. Ophthalmologic examination suggested septooptic dysplasia. Cranial magnetic resonance imaging (MRI) showed nonspecific gliosis in subcortical and periventricular deep white matter. Case II was a 2.5-year-old girl referred for investigation of global developmental delay and elevated liver enzymes. Ophthalmologic examination findings were consistent with bilateral nystagmus and retinitis pigmentosa. Cranial MRI was normal. Dysmorphic facial features including a broad nasal root, low-set ears, downward-slanting eyes, downward-slanting eyebrows, and epicanthal folds were common findings in the two patients. Molecular genetic analysis indicated a homozygous novel IVS1-2A>G mutation in Case I and a homozygous p.G843D (c.2528G>A) mutation in Case II in the PEX1 gene. Clinical findings and developmental prognosis vary with PEX1 gene mutations. A Kabuki-like phenotype associated with liver pathology may indicate Zellweger spectrum disorders (ZSD). PMID:27882258
Baek, Chaehwan; Paeng, Jun-Young; Lee, Janice S; Hong, Jongrak
2012-05-01
A systematic classification is needed for the diagnosis and surgical treatment of facial asymmetry. The purposes of this study were to analyze the skeletal structures of patients with facial asymmetry and to objectively classify these patients into groups according to these structural characteristics. Patients with facial asymmetry and recent computed tomographic images from 2005 through 2009 were included in this study, which was approved by the institutional review board. Linear measurements, angles, and reference planes on 3-dimensional computed tomograms were obtained, including maxillary (upper midline deviation, maxilla canting, and arch form discrepancy) and mandibular (menton deviation, gonion to midsagittal plane, ramus height, and frontal ramus inclination) measurements. All measurements were analyzed using paired t tests with Bonferroni correction followed by K-means cluster analysis using SPSS 13.0 to determine an objective classification of facial asymmetry in the enrolled patients. Kruskal-Wallis test was performed to verify differences among clustered groups. P < .05 was considered statistically significant. Forty-three patients (18 male, 25 female) were included in the study. They were classified into 4 groups based on cluster analysis. Their mean age was 24.3 ± 4.4 years. Group 1 included subjects (44% of patients) with asymmetry caused by a shift or lateralization of the mandibular body. Group 2 included subjects (39%) with a significant difference between the left and right ramus height with menton deviation to the short side. Group 3 included subjects (12%) with atypical asymmetry, including deviation of the menton to the short side, prominence of the angle/gonion on the larger side, and reverse maxillary canting. Group 4 included subjects (5%) with severe maxillary canting, ramus height differences, and menton deviation to the short side. 
In this study, patients with asymmetry were classified into 4 statistically distinct groups according to their anatomic features. This diagnostic classification method will assist in treatment planning for patients with facial asymmetry and may be used to explore the etiology of these variants of facial asymmetry. Copyright © 2012 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.
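The grouping step described above relied on K-means cluster analysis of skeletal measurements. As a minimal sketch of how such a classification works (the measurements, values, and variable names below are illustrative assumptions, not the study's data):

```python
# Hypothetical sketch: K-means grouping of patients by asymmetry measurements,
# analogous in spirit to the cluster analysis described in the abstract above.
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]  # init from data points
    for _ in range(iters):
        # assign each patient to the nearest cluster center
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        new = np.array([X[labels == j].mean(0) if np.any(labels == j) else centers[j]
                        for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers

# toy rows: [menton deviation (mm), ramus height difference (mm), canting (deg)]
X = np.array([[8.0, 1.0, 0.5], [7.5, 1.2, 0.4],
              [1.0, 6.0, 0.3], [1.2, 5.5, 0.2]])
labels, _ = kmeans(X, k=2)   # two distinct asymmetry patterns emerge
```

In the study itself the clustering was run in SPSS on many more measurements and yielded four groups rather than two.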
Facial approximation-from facial reconstruction synonym to face prediction paradigm.
Stephan, Carl N
2015-05-01
Facial approximation was first proposed as a synonym for facial reconstruction in 1987 due to dissatisfaction with the connotations the latter label held. Since its debut, facial approximation's identity has morphed as anomalies in face prediction have accumulated. Now underpinned by differences in what problems are thought to count as legitimate, facial approximation can no longer be considered a synonym for, or subclass of, facial reconstruction. Instead, two competing paradigms of face prediction have emerged, namely: facial approximation and facial reconstruction. This paper shines a Kuhnian lens across the discipline of face prediction to comprehensively review these developments and outlines the distinguishing features between the two paradigms. © 2015 American Academy of Forensic Sciences.
Recovering Faces from Memory: The Distracting Influence of External Facial Features
ERIC Educational Resources Information Center
Frowd, Charlie D.; Skelton, Faye; Atherton, Chris; Pitchford, Melanie; Hepton, Gemma; Holden, Laura; McIntyre, Alex H.; Hancock, Peter J. B.
2012-01-01
Recognition memory for unfamiliar faces is facilitated when contextual cues (e.g., head pose, background environment, hair and clothing) are consistent between study and test. By contrast, inconsistencies in external features, especially hair, promote errors in unfamiliar face-matching tasks. For the construction of facial composites, as carried…
Chondromyxoid fibroma of the mastoid facial nerve canal mimicking a facial nerve schwannoma.
Thompson, Andrew L; Bharatha, Aditya; Aviv, Richard I; Nedzelski, Julian; Chen, Joseph; Bilbao, Juan M; Wong, John; Saad, Reda; Symons, Sean P
2009-07-01
Chondromyxoid fibroma of the skull base is a rare entity. Involvement of the temporal bone is particularly rare. We present an unusual case of progressive facial nerve paralysis with imaging and clinical findings most suggestive of a facial nerve schwannoma. The lesion was tubular in appearance, expanded the mastoid facial nerve canal, protruded out of the stylomastoid foramen, and enhanced homogeneously. The only unusual imaging feature was minor calcification within the tumor. Surgery revealed an irregular, cystic lesion. Pathology diagnosed a chondromyxoid fibroma involving the mastoid portion of the facial nerve canal, destroying the facial nerve.
Sato, Wataru; Toichi, Motomi; Uono, Shota; Kochiyama, Takanori
2012-08-13
Impairment of social interaction via facial expressions represents a core clinical feature of autism spectrum disorders (ASD). However, the neural correlates of this dysfunction remain unidentified. Because this dysfunction is manifested in real-life situations, we hypothesized that the observation of dynamic, compared with static, facial expressions would reveal abnormal brain functioning in individuals with ASD. We presented dynamic and static facial expressions of fear and happiness to individuals with high-functioning ASD and to age- and sex-matched typically developing controls and recorded their brain activities using functional magnetic resonance imaging (fMRI). Regional analysis revealed reduced activation of several brain regions in the ASD group compared with controls in response to dynamic versus static facial expressions, including the middle temporal gyrus (MTG), fusiform gyrus, amygdala, medial prefrontal cortex, and inferior frontal gyrus (IFG). Dynamic causal modeling analyses revealed that bi-directional effective connectivity involving the primary visual cortex-MTG-IFG circuit was enhanced in response to dynamic as compared with static facial expressions in the control group. Group comparisons revealed that all these modulatory effects were weaker in the ASD group than in the control group. These results suggest that weak activity and connectivity of the social brain network underlie the impairment in social interaction involving dynamic facial expressions in individuals with ASD.
The Eyes Have It: Young Children's Discrimination of Age in Masked and Unmasked Facial Photographs.
ERIC Educational Resources Information Center
Jones, Gillian; Smith, Peter K.
1984-01-01
Investigates preschool children's ability (n = 30) to discriminate age, and subject's use of different facial areas in ranking facial photographs into age order. Results indicate subjects from 3 to 9 years can successfully rank the photos. Compared with other facial features, the eye region was most important for success in the age ranking task.…
Vascular Leiomyoma and Geniculate Ganglion
Magliulo, Giuseppe; Iannella, Giannicola; Valente, Michele; Greco, Antonio; Appiani, Mario Ciniglio
2013-01-01
Objectives: Discussion of a rare case of angioleiomyoma involving the geniculate ganglion and the intratemporal facial nerve segment and its surgical treatment. Design: Case report. Setting: Presence of an expansive lesion englobing the geniculate ganglion without any lesion of the cerebellopontine angle. Participants: A 45-year-old man with a grade III facial paralysis according to the House-Brackmann scale of evaluation. Main Outcome Measures: Surgical pathology, radiologic appearance, histological features, and postoperative facial function. Results: Removal of the entire lesion was achieved, preserving the anatomic integrity of the nerve; no nerve graft was necessary. Postoperative histology and immunohistochemical studies revealed features indicative of solid vascular leiomyoma. Conclusion: Angioleiomyoma should be considered in the differential diagnosis of geniculate ganglion lesions. Optimal postoperative facial function is possible only by preserving the anatomical and functional integrity of the facial nerve. PMID:23943721
Wirthlin, J; Kau, C H; English, J D; Pan, F; Zhou, H
2013-09-01
The objective of this study was to compare the facial morphologies of an adult Chinese population to a Houstonian white population. Three-dimensional (3D) images were acquired via a commercially available stereophotogrammetric camera system, 3dMDface™. Using the system, 100 subjects from a Houstonian population and 71 subjects from a Chinese population were photographed. A complex mathematical algorithm was performed to generate a composite facial average (one for males and one for females) for each subgroup. The computer-generated facial averages were then superimposed based on a previously validated superimposition method. The facial averages were evaluated for differences. Distinct facial differences were evident between the subgroups evaluated. These areas included the nasal tip, the peri-orbital area, the malar process, the labial region, the forehead, and the chin. Overall, the mean facial difference between the Chinese and Houstonian female averages was 2.73 ± 2.20 mm, while the difference between the Chinese and Houstonian males was 2.83 ± 2.20 mm. The percent similarities for the female and male population pairings were 10.45% and 12.13%, respectively. The average adult Chinese and Houstonian faces possess distinct differences. Different populations and ethnicities have different facial features and averages that should be considered in the planning of treatment. Copyright © 2013 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.
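A mean facial difference such as the 2.73 ± 2.20 mm reported above is typically summarized as the average point-to-point distance between two superimposed facial surfaces. A minimal sketch, using synthetic coordinates rather than the study's 3dMD data:

```python
# Illustrative only: per-vertex distances between two already-registered
# facial surfaces, summarized as mean ± SD (all values synthetic).
import numpy as np

rng = np.random.default_rng(0)
face_a = rng.normal(0, 10, (500, 3))             # registered vertices, mm
face_b = face_a + rng.normal(0, 2.5, (500, 3))   # second composite average

d = np.linalg.norm(face_a - face_b, axis=1)      # per-vertex Euclidean distance
mean_diff, sd_diff = d.mean(), d.std()           # e.g. reported as "mean ± SD mm"
```

In practice the registration (superimposition) step itself is the hard part; the summary statistic afterwards is this simple.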
Effects of facial emotion recognition remediation on visual scanning of novel face stimuli.
Marsh, Pamela J; Luckett, Gemma; Russell, Tamara; Coltheart, Max; Green, Melissa J
2012-11-01
Previous research shows that emotion recognition in schizophrenia can be improved with targeted remediation that draws attention to important facial features (eyes, nose, mouth). Moreover, the effects of training have been shown to last for up to one month after training. The aim of this study was to investigate whether improved emotion recognition of novel faces is associated with concomitant changes in visual scanning of these same novel facial expressions. Thirty-nine participants with schizophrenia received emotion recognition training using Ekman's Micro-Expression Training Tool (METT), with emotion recognition and visual scanpath (VSP) recordings to face stimuli collected simultaneously. Baseline ratings of interpersonal and cognitive functioning were also collected from all participants. Post-METT training, participants showed changes in foveal attention to the features of facial expressions of emotion not used in METT training, which were generally consistent with the information about important features from the METT. In particular, there were changes in how participants looked at the features of the facial expressions of surprise, disgust, fear, happiness, and neutrality, demonstrating that improved emotion recognition is paralleled by changes in the way participants with schizophrenia viewed novel facial expressions of emotion. However, there were overall decreases in foveal attention to sad and neutral faces that indicate more intensive instruction might be needed for these faces during training. Most importantly, the evidence shows that participant gender may affect training outcomes. Copyright © 2012 Elsevier B.V. All rights reserved.
Morphological Integration of Soft-Tissue Facial Morphology in Down Syndrome and Siblings
Starbuck, John; Reeves, Roger H.; Richtsmeier, Joan
2011-01-01
Down syndrome (DS), resulting from trisomy of chromosome 21, is the most common live-born human aneuploidy. The phenotypic expression of trisomy 21 produces variable, though characteristic, facial morphology. Although certain facial features have been documented quantitatively and qualitatively as characteristic of DS (e.g., epicanthic folds, macroglossia, and hypertelorism), all of these traits occur in other craniofacial conditions with an underlying genetic cause. We hypothesize that the typical DS face is integrated differently than the face of non-DS siblings, and that the pattern of morphological integration unique to individuals with DS will yield information about underlying developmental associations between facial regions. We statistically compared morphological integration patterns of immature DS faces (N = 53) with those of non-DS siblings (N = 54), aged 6–12 years using 31 distances estimated from 3D coordinate data representing 17 anthropometric landmarks recorded on 3D digital photographic images. Facial features are affected differentially in DS, as evidenced by statistically significant differences in integration both within and between facial regions. Our results suggest a differential effect of trisomy on facial prominences during craniofacial development. PMID:21996933
Chechko, Natalya; Pagel, Alena; Otte, Ellen; Koch, Iring; Habel, Ute
2016-01-01
Spontaneous emotional expressions (rapid facial mimicry) perform both emotional and social functions. In the current study, we sought to test whether there were deficits in automatic mimic responses to emotional facial expressions in 15 patients with stable schizophrenia compared to 15 controls. In a perception-action interference paradigm (the Simon task; first experiment), and in the context of a dual-task paradigm (second experiment), the task-relevant stimulus feature was the gender of a face, which, however, displayed a smiling or frowning expression (task-irrelevant stimulus feature). We measured the electromyographical activity in the corrugator supercilii and zygomaticus major muscle regions in response to either compatible or incompatible stimuli (i.e., when the required response did or did not correspond to the depicted facial expression). The compatibility effect based on interactions between the implicit processing of a task-irrelevant emotional facial expression and the conscious production of an emotional facial expression did not differ between the groups. In stable patients (in spite of a reduced mimic reaction), we observed an intact capacity to respond spontaneously to facial emotional stimuli. PMID:27303335
Facial Nerve Schwannoma: A Case Report, Radiological Features and Literature Review.
Pilloni, Giulia; Mico, Barbara Massa; Altieri, Roberto; Zenga, Francesco; Ducati, Alessandro; Garbossa, Diego; Tartara, Fulvio
2017-12-22
Facial nerve schwannoma localized in the middle fossa is a rare lesion. We report a case of a facial nerve schwannoma in a 30-year-old male presenting with facial nerve palsy. Magnetic resonance imaging (MRI) showed a 3 cm diameter tumor of the right middle fossa. The tumor was removed using a sub-temporal approach. Intraoperative monitoring allowed for identification of the facial nerve, so it was not damaged during the surgical excision. Neurological clinical examination at discharge demonstrated moderate facial nerve improvement (Grade III House-Brackmann).
Women's attractiveness is linked to expected age at menopause.
Bovet, J; Barkat-Defradas, M; Durand, V; Faurie, C; Raymond, M
2018-02-01
A great number of studies have shown that features linked to immediate fertility explain a large part of the variance in female attractiveness. This is consistent with an evolutionary perspective, as men are expected to prefer females at the age at which fertility peaks (at least for short-term relationships) in order to increase their reproductive success. However, for long-term relationships, a high residual reproductive value (the expected future reproductive output, linked to age at menopause) becomes relevant as well. In that case, young age and late menopause are expected to be preferred by men. However, the extent to which facial features provide cues to the likely age at menopause has never been investigated. Here, we show that expected age at menopause is linked to facial attractiveness of young women. As age at menopause is heritable, we used the mother's age at menopause as a proxy for her daughter's expected age of menopause. We found that men judged faces of women with a later expected age at menopause as more attractive than those of women with an earlier expected age at menopause. This result holds when age, cues of immediate fertility and facial ageing were controlled for. Additionally, we found that the expected age at menopause was not correlated with any of the other variables considered (including immediate fertility cues and facial ageing). Our results show the existence of a new correlate of women's facial attractiveness, expected age at menopause, which is independent of immediate fertility cues and facial ageing. © 2017 European Society For Evolutionary Biology. Journal of Evolutionary Biology © 2017 European Society For Evolutionary Biology.
Sclerosteosis involving the temporal bone: histopathologic aspects.
Nager, G T; Hamersma, H
1986-01-01
Sclerosteosis is a rare, potentially lethal, autosomal recessive, progressive craniotubular sclerosing bone dysplasia with characteristic facial and skeletal features. The temporal bone changes include a marked increase in overall size, extensive sclerosis, narrowing of the external auditory canal, and severe constriction of the internal auditory meatus, fallopian canal, eustachian tube, and middle ear cleft. Attenuation of the bony canals of the 9th, 10th, and 11th cranial nerves, reduction in size of the internal carotid artery, and severe obliteration of the sigmoid sinus and jugular bulb also occur. Loss of hearing, generally bilateral, is a frequent symptom. It often manifests in early childhood and initially is expressed as sound conduction impairment. Later, a sensorineural hearing loss and loss of vestibular nerve function often develop. Impairment of facial nerve function is another feature occasionally present at birth. In the beginning, a unilateral intermittent facial weakness may occur which eventually progresses to a bilateral permanent facial paresis. The histologic examination of the temporal bones from a patient with sclerosteosis explains the mechanisms involved in the progressive impairment of sound conduction and loss of cochlear, vestibular, and facial nerve function. There is a decrease of the arterial blood supply to the brain and an obstruction of the venous drainage from it. The histopathology reveals the obstacles to decompression of the middle ear cleft, ossicular chain, internal auditory and facial canals, and the risks, and in many instances the contraindications, to such procedures. On the other hand, decompression of the sigmoid sinus and jugular bulb should be considered as an additional life-saving procedure in conjunction with the prophylactic craniotomy recommended in all adult patients.
Goodman, Greg J; Armour, Katherine S; Kolodziejczyk, Julia K; Santangelo, Samantha; Gallagher, Conor J
2018-05-01
Australians are more exposed to higher solar UV radiation levels that accelerate signs of facial ageing than individuals who live in temperate northern countries. The severity and course of self-reported facial ageing among fair-skinned Australian women were compared with those living in Canada, the UK and the USA. Women voluntarily recruited into a proprietary opt-in survey panel completed an internet-based questionnaire about their facial ageing. Participants aged 18-75 years compared their features against photonumeric rating scales depicting degrees of severity for forehead, crow's feet and glabellar lines, tear troughs, midface volume loss, nasolabial folds, oral commissures and perioral lines. Data from Caucasian and Asian women with Fitzpatrick skin types I-III were analysed by linear regression for the impact of country (Australia versus Canada, the UK and the USA) on ageing severity for each feature, after controlling for age and race. Among 1472 women, Australians reported higher rates of change and significantly more severe facial lines (P ≤ 0.040) and volume-related features like tear troughs and nasolabial folds (P ≤ 0.03) than women from the other countries. More Australians also reported moderate to severe ageing for all features one to two decades earlier than US women. Australian women reported more severe signs of facial ageing sooner than other women and volume-related changes up to 20 years earlier than those in the USA, which may suggest that environmental factors also impact volume-related ageing. These findings have implications for managing their facial aesthetic concerns. © 2017 The Authors. Australasian Journal of Dermatology published by John Wiley and Sons Australia, Ltd on behalf of The Australasian College of Dermatologists.
NASA Astrophysics Data System (ADS)
Hirose, Misa; Toyota, Saori; Ojima, Nobutoshi; Ogawa-Ochiai, Keiko; Tsumura, Norimichi
2017-08-01
In this paper, principal component analysis is applied to the distribution of pigmentation, surface reflectance, and landmarks in whole facial images to obtain feature values. The relationship between the obtained feature vectors and the age of the face is then estimated by multiple regression analysis so that facial images can be modulated for women aged 10 to 70. In a previous study, we analyzed only the distribution of pigmentation, and the reproduced images appeared to be younger than the apparent age of the initial images. We believe that this happened because we did not modulate the facial structures and detailed surfaces, such as wrinkles. By considering landmarks and surface reflectance over the entire face, we were able to analyze the variation in the distributions of facial structure, fine surface asperity, and pigmentation. As a result, our method is able to appropriately modulate the appearance of a face so that it appears to be the correct age.
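The pipeline above (PCA over facial feature maps, then regression of the component scores against age) can be sketched in a few lines. Everything below is synthetic and illustrative; the real method operates on pigmentation, reflectance, and landmark maps rather than this toy feature matrix:

```python
# Illustrative sketch (not the authors' code): PCA of flattened facial
# features followed by least-squares regression of the scores against age.
import numpy as np

rng = np.random.default_rng(1)
ages = rng.uniform(10, 70, 50)                      # 50 synthetic subjects
# 20-dim synthetic features that drift with age, plus noise
X = ages[:, None] * rng.normal(1.0, 0.1, 20) + rng.normal(0, 2.0, (50, 20))

# PCA via SVD on mean-centred data
Xc = X - X.mean(0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:5].T                              # first 5 PC scores

# multiple regression: age ~ b0 + scores @ b
A = np.hstack([np.ones((50, 1)), scores])
coef, *_ = np.linalg.lstsq(A, ages, rcond=None)
pred = A @ coef                                     # ages explained by the PCs
```

Inverting this fitted relationship (adjusting the PC scores, then reconstructing the image) is what allows a face to be modulated toward a target age.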
Ethnic and Gender Considerations in the Use of Facial Injectables: Asian Patients.
Liew, Steven
2015-11-01
Asians have distinct facial characteristics due to underlying skeletal and morphological features that differ greatly from those of whites. This, together with the higher sun protection factor and the differences in the quality of the skin and soft tissue, creates a profound effect on their aging process. Understanding these differences and their effects on the aging process in Asians is crucial in determining effective utilization and placement of injectable products to ensure optimal aesthetic outcomes. For younger Asian women, the main treatment goal is to address the inherent structural deficits through reshaping and the provision of facial support. Facial injectables are used to provide anterior projection, to reduce facial width, and to lengthen facial height. In the older group, the aim is rejuvenation and also to address the underlying structural issues that have compounded due to age-related volume loss. Asian women requesting cosmetic procedures do not want to be Westernized but rather seek to enhance and optimize their Asian ethnic features.
Costa, Tony Eduardo; Barbosa, Saulo de Matos; Pereira, Rodrigo Alvitos; Chaves Netto, Henrique Duque de Miranda
2018-01-01
Dentofacial deformities (DFD) present mainly as Class III malocclusions, which require orthognathic surgery as part of definitive treatment. Class III patients can have obvious signs such as increased chin projection and chin-throat length, deep nasolabial folds, reverse overjet, and lack of upper lip support. However, Class III patients can present different facial patterns depending on the angulation of the occlusal plane (OP), and bite correction alone does not always improve facial esthetics. We describe two Class III patients with different clinical features and OP inclinations who underwent different treatment planning based on six clinical features: (I) facial type; (II) upper incisor display at rest; (III) dental and gingival display on smile; (IV) soft tissue support; (V) chin projection; and (VI) lower lip projection. These patients were submitted to orthognathic surgery with different treatment plans: a clockwise rotation or a counterclockwise rotation of the OP according to their facial features. The clinical features and OP inclination helped define treatment planning by clockwise and counterclockwise rotations of the maxillomandibular complex, and the two patients who underwent bimaxillary orthognathic surgery showed harmonious outcomes that remained stable after 2 years of follow-up. PMID:29854480
Duncan, Justin; Gosselin, Frédéric; Cobarro, Charlène; Dugas, Gabrielle; Blais, Caroline; Fiset, Daniel
2017-12-01
Horizontal information was recently suggested to be crucial for face identification. In the present paper, we expand on this finding and investigate the role of orientations for all the basic facial expressions and neutrality. To this end, we developed orientation bubbles to quantify utilization of the orientation spectrum by the visual system in a facial expression categorization task. We first validated the procedure in Experiment 1 with a simple plaid-detection task. In Experiment 2, we used orientation bubbles to reveal the diagnostic (i.e., task-relevant) orientations for the basic facial expressions and neutrality. Overall, we found that horizontal information was highly diagnostic for expressions, surprise excepted. We also found that utilization of horizontal information strongly predicted performance level in this task. Despite the recent surge of research on horizontals, the link with local features remains unexplored. We were thus also interested in investigating this link. In Experiment 3, location bubbles were used to reveal the diagnostic features for the basic facial expressions. Crucially, Experiments 2 and 3 were run in parallel on the same participants, in an interleaved fashion. This way, we were able to correlate individual orientation and local diagnostic profiles. Our results indicate that individual differences in horizontal tuning are best predicted by utilization of the eyes.
Rezlescu, Constantin; Duchaine, Brad; Olivola, Christopher Y; Chater, Nick
2012-01-01
Many human interactions are built on trust, so widespread confidence in first impressions generally favors individuals with trustworthy-looking appearances. However, few studies have explicitly examined: 1) the contribution of unfakeable facial features to trust-based decisions, and 2) how these cues are integrated with information about past behavior. Using highly controlled stimuli and an improved experimental procedure, we show that unfakeable facial features associated with the appearance of trustworthiness attract higher investments in trust games. The facial trustworthiness premium is large for decisions based solely on faces, with trustworthy identities attracting 42% more money (Study 1), and remains significant though reduced to 6% when reputational information is also available (Study 2). The face trustworthiness premium persists with real (rather than virtual) currency and when higher payoffs are at stake (Study 3). Our results demonstrate that cooperation may be affected not only by controllable appearance cues (e.g., clothing, facial expressions) as shown previously, but also by features that are impossible to mimic (e.g., individual facial structure). This unfakeable face trustworthiness effect is not limited to the rare situations where people lack any information about their partners, but survives in richer environments where relevant details about partner past behavior are available.
Combining facial dynamics with appearance for age estimation.
Dibeklioglu, Hamdi; Alnajar, Fares; Ali Salah, Albert; Gevers, Theo
2015-06-01
Estimating the age of a human from the captured images of his/her face is a challenging problem. In general, the existing approaches to this problem use appearance features only. In this paper, we show that in addition to appearance information, facial dynamics can be leveraged in age estimation. We propose a method to extract and use dynamic features for age estimation, using a person's smile. Our approach is tested on a large, gender-balanced database with 400 subjects, with an age range between 8 and 76. In addition, we introduce a new database on posed disgust expressions with 324 subjects in the same age range, and evaluate the reliability of the proposed approach when used with another expression. State-of-the-art appearance-based age estimation methods from the literature are implemented as baseline. We demonstrate that for each of these methods, the addition of the proposed dynamic features results in statistically significant improvement. We further propose a novel hierarchical age estimation architecture based on adaptive age grouping. We test our approach extensively, including an exploration of spontaneous versus posed smile dynamics, and gender-specific age estimation. We show that using spontaneity information reduces the mean absolute error by up to 21%, advancing the state of the art for facial age estimation.
Support vector machine-based facial-expression recognition method combining shape and appearance
NASA Astrophysics Data System (ADS)
Han, Eun Jung; Kang, Byung Jun; Park, Kang Ryoung; Lee, Sangyoun
2010-11-01
Facial expression recognition can be widely used for various applications, such as emotion-based human-machine interaction, intelligent robot interfaces, face recognition robust to expression variation, etc. Previous studies have been classified as either shape- or appearance-based recognition. The shape-based method has the disadvantage that individual variance of facial feature points exists irrespective of similar expressions, which can reduce recognition accuracy. The appearance-based method has the limitation that the textural information of the face is very sensitive to variations in illumination. To overcome these problems, a new facial-expression recognition method is proposed, which combines both shape and appearance information, based on the support vector machine (SVM). This research is novel in the following three ways as compared to previous works. First, the facial feature points are automatically detected by using an active appearance model. From these, shape-based recognition is performed by using the ratios between the facial feature points based on the facial-action coding system. Second, an SVM, which is trained to recognize the same and different expression classes, is proposed to combine the two matching scores obtained from the shape- and appearance-based recognitions. Finally, a single SVM is trained to discriminate four different expressions: neutral, smile, anger, and scream. By assigning the input facial image to the expression class whose SVM output is minimal, the accuracy of expression recognition is much enhanced. The experimental results showed that the recognition accuracy of the proposed method was better than that of previous studies and other fusion methods.
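The score-level fusion step described above can be sketched as follows. This is a minimal illustration with hypothetical, synthetically generated (shape score, appearance score) pairs and a from-scratch linear SVM trained by hinge-loss subgradient descent; the paper's actual features, kernel, and data are not reproduced here.

```python
import numpy as np

def train_linear_svm(X, y, lr=0.01, lam=0.01, epochs=200):
    """Train a linear SVM (hinge loss, L2 regularization) by subgradient descent.
    X: (n, d) feature matrix; y: labels in {-1, +1}."""
    n, d = X.shape
    w = np.zeros(d)
    b = 0.0
    for _ in range(epochs):
        for i in range(n):
            margin = y[i] * (X[i] @ w + b)
            if margin < 1:  # inside the margin: hinge-loss subgradient step
                w += lr * (y[i] * X[i] - lam * w)
                b += lr * y[i]
            else:           # correctly classified: regularization step only
                w -= lr * lam * w
    return w, b

def predict(w, b, X):
    return np.where(X @ w + b >= 0, 1, -1)

# Hypothetical data: each sample is a 2-D vector of (shape score, appearance
# score); label +1 = "same expression", -1 = "different expression".
rng = np.random.default_rng(0)
same = rng.normal([0.8, 0.7], 0.1, size=(50, 2))
diff = rng.normal([0.2, 0.3], 0.1, size=(50, 2))
X = np.vstack([same, diff])
y = np.array([1] * 50 + [-1] * 50)
w, b = train_linear_svm(X, y)
acc = (predict(w, b, X) == y).mean()
```

The fused decision uses both matching scores jointly, so a face that scores moderately on each cue can still be classified correctly even when neither score alone would be decisive.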
Wiedemann-Steiner Syndrome With 2 Novel KMT2A Mutations.
Min Ko, Jung; Cho, Jae So; Yoo, Yongjin; Seo, Jieun; Choi, Murim; Chae, Jong-Hee; Lee, Hye-Ran; Cho, Tae-Joon
2017-02-01
Wiedemann-Steiner syndrome is a rare genetic disorder characterized by short stature, hairy elbows, facial dysmorphism, and developmental delay. It can also be accompanied by musculoskeletal anomalies such as muscular hypotonia and small hands and feet. Mutations in the KMT2A gene have only recently been identified as the cause of Wiedemann-Steiner syndrome; therefore, only 16 patients from 15 families have been described, and new phenotypic features continue to be added. In this report, we describe 2 newly identified patients with Wiedemann-Steiner syndrome who presented with variable severity. One girl exhibited developmental dysplasia of the hip and fibromatosis colli accompanied by other clinical features, including facial dysmorphism, hypertrichosis, patent ductus arteriosus, growth retardation, and borderline intellectual disability. The other patient, a boy, showed severe developmental retardation with automatic self-mutilation, facial dysmorphism, and hypertrichosis at a later age. Exome sequencing analysis of these patients and their parents revealed a de novo nonsense mutation, p.Gln1978*, of KMT2A in the former, and a missense mutation, p.Gly1168Asp, in the latter, which molecularly confirmed the diagnosis of Wiedemann-Steiner syndrome.
Cephalometric norms and esthetic profile preference for the Japanese: a systematic review.
Bronfman, Caroline Nemetz; Janson, Guilherme; Pinzan, Arnaldo; Rocha, Thais Lima
2015-01-01
To determine the cephalometric parameters and esthetic preferences of a pleasant face for the Japanese population. For the present study, the following databases were accessed: PubMed, Embase, Scopus and Web of Science. Initial inclusion criteria comprised studies written in English and quoting cephalometric norms and/or facial attractiveness in Japanese adults. No time period of publication was determined. The quality features evaluated were sample description, variables analyzed and how cephalometric standards or facial profile were evaluated. Initially, 60 articles were retrieved. From the selected studies, 13 abstracts met the initial inclusion criteria. They were divided into two groups according to the criteria of evaluation, cephalometric or facial analyses: seven articles were included in Group I and six articles in Group II. Japanese faces are characterized by a less convex skeletal profile, bilabial protrusion, a less prominent nose, a more retruded chin and protruded mandibular incisors. Despite living in a society with homogeneous patterns, Japanese adults seem to have an esthetic preference for white-like features. Therefore, in addition to ethnic normative values, patients' preferences should always be considered when establishing individual treatment plans.
Differential Gaze Patterns on Eyes and Mouth During Audiovisual Speech Segmentation
Lusk, Laina G.; Mitchel, Aaron D.
2016-01-01
Speech is inextricably multisensory: both auditory and visual components provide critical information for all aspects of speech processing, including speech segmentation, the visual components of which have been the target of a growing number of studies. In particular, a recent study (Mitchel and Weiss, 2014) established that adults can utilize facial cues (i.e., visual prosody) to identify word boundaries in fluent speech. The current study expanded upon these results, using an eye tracker to identify highly attended facial features of the audiovisual display used in Mitchel and Weiss (2014). Subjects spent the most time watching the eyes and mouth. A significant trend in gaze durations was found with the longest gaze duration on the mouth, followed by the eyes and then the nose. In addition, eye-gaze patterns changed across familiarization as subjects learned the word boundaries, showing decreased attention to the mouth in later blocks while attention on other facial features remained consistent. These findings highlight the importance of the visual component of speech processing and suggest that the mouth may play a critical role in visual speech segmentation. PMID:26869959
Stevens, Cathy A.; Lachman, Ralph S.
2011-01-01
We report on two sibs with a lethal form of bone dysplasia with distinctive skeletal findings including rhizomelic and mesomelic limb shortening, hooked clavicles, dumbbell femurs, and absence of talus and calcaneus ossification. Other clinical features include Dandy-Walker malformation, congenital heart defects, joint contractures, genital hypoplasia, and distinctive facial features. These sibs appear to have a previously undescribed skeletal dysplasia, which is most likely inherited in an autosomal recessive fashion. PMID:20602491
Seager, Dennis Craig; Kau, Chung How; English, Jeryl D; Tawfik, Wael; Bussa, Harry I; Ahmed, Abou El Yazeed M
2009-09-01
To compare the facial morphologies of an adult Egyptian population with those of a Houstonian white population. The three-dimensional (3D) images were acquired via a commercially available stereophotogrammetric camera capture system. The 3dMDface System photographed 186 subjects from two population groups (Egypt and Houston). All of the participants from both population groups were between 18 and 30 years of age and had no apparent facial anomalies. All facial images were overlaid and superimposed, and a complex mathematical algorithm was performed to generate a composite facial average (one male and one female) for each subgroup (EGY-M: Egyptian male subjects; EGY-F: Egyptian female subjects; HOU-M: Houstonian male subjects; and HOU-F: Houstonian female subjects). The computer-generated facial averages were superimposed based on a previously validated superimposition method, and the facial differences were evaluated and quantified. Distinct facial differences were evident between the subgroups evaluated, involving various regions of the face including the slant of the forehead, and the nasal, malar, and labial regions. Overall, the mean facial differences between the Egyptian and Houstonian female subjects were 1.33 +/- 0.93 mm, while the differences in Egyptian and Houstonian male subjects were 2.32 +/- 2.23 mm. The ranges of differences for the female and male population pairings were 14.34 mm and 13.71 mm, respectively. The average adult Egyptian and white Houstonian faces possess distinct differences. Different populations and ethnicities have different facial features and averages.
Developmental Change in Infant Categorization: The Perception of Correlations among Facial Features.
ERIC Educational Resources Information Center
Younger, Barbara
1992-01-01
Tested 7 and 10 month olds for perception of correlations among facial features. After habituation to faces displaying a pattern of correlation, 10 month olds generalized to a novel face that preserved the pattern of correlation but showed increased attention to a novel face that violated the pattern. (BC)
Facial expression influences face identity recognition during the attentional blink.
Bach, Dominik R; Schmidt-Daffy, Martin; Dolan, Raymond J
2014-12-01
Emotional stimuli (e.g., negative facial expressions) enjoy prioritized memory access when task relevant, consistent with their ability to capture attention. Whether emotional expression also impacts on memory access when task-irrelevant is important for arbitrating between feature-based and object-based attentional capture. Here, the authors address this question in 3 experiments using an attentional blink task with face photographs as first and second target (T1, T2). They demonstrate reduced neutral T2 identity recognition after angry or happy T1 expression, compared to neutral T1, and this supports attentional capture by a task-irrelevant feature. Crucially, after neutral T1, T2 identity recognition was enhanced and not suppressed when T2 was angry, suggesting that attentional capture by this task-irrelevant feature may be object-based and not feature-based. As an unexpected finding, both angry and happy facial expressions suppress memory access for competing objects, but only angry facial expression enjoyed privileged memory access. This could imply that these 2 processes are relatively independent from one another. PMID:25286076
Gilad-Gutnick, Sharon; Harmatz, Elia Samuel; Tsourides, Kleovoulos; Yovel, Galit; Sinha, Pawan
2018-07-01
We report here an unexpectedly robust ability of healthy human individuals (n = 40) to recognize extremely distorted needle-like facial images, challenging the well-entrenched notion that veridical spatial configuration is necessary for extracting facial identity. In face identification tasks of parametrically compressed internal and external features, we found that the sum of performances on each cue falls significantly short of performance on full faces, despite the equal visual information available from both measures (with full faces essentially being a superposition of internal and external features). We hypothesize that this large deficit stems from the use of positional information about how the internal features are positioned relative to the external features. To test this, we systematically changed the relations between internal and external features and found preferential encoding of vertical but not horizontal spatial relationships in facial representations (n = 20). Finally, we employed magnetoencephalography imaging (n = 20) to demonstrate a close mapping between the behavioral psychometric curve and the amplitude of the M250 face-familiarity evoked response field component, but not the M170 face-sensitive component, providing evidence that the M250 can be modulated by faces that are perceptually identifiable, irrespective of extreme distortions to the face's veridical configuration. We theorize that the tolerance to compressive distortions has evolved from the need to recognize faces across varying viewpoints. Our findings help clarify the important, but poorly defined, concept of facial configuration and also enable an association between behavioral performance and previously reported neural correlates of face perception.
Hierarchical ensemble of global and local classifiers for face recognition.
Su, Yu; Shan, Shiguang; Chen, Xilin; Gao, Wen
2009-08-01
In the literature of psychophysics and neurophysiology, many studies have shown that both global and local features are crucial for face representation and recognition. This paper proposes a novel face recognition method which exploits both global and local discriminative features. In this method, global features are extracted from the whole face images by keeping the low-frequency coefficients of Fourier transform, which we believe encodes the holistic facial information, such as facial contour. For local feature extraction, Gabor wavelets are exploited considering their biological relevance. After that, Fisher's linear discriminant (FLD) is separately applied to the global Fourier features and each local patch of Gabor features. Thus, multiple FLD classifiers are obtained, each embodying different facial evidences for face recognition. Finally, all these classifiers are combined to form a hierarchical ensemble classifier. We evaluate the proposed method using two large-scale face databases: FERET and FRGC version 2.0. Experiments show that the results of our method are impressively better than the best known results with the same evaluation protocol.
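The global feature extraction described above, keeping the low-frequency Fourier coefficients of the whole face image, can be sketched as follows. The patch size `k` and the use of coefficient magnitudes are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def global_fourier_features(img, k=8):
    """Global face descriptor: keep only the k x k lowest-frequency
    coefficients of the 2-D Fourier transform (as magnitudes), which
    encode holistic structure such as the facial contour."""
    F = np.fft.fftshift(np.fft.fft2(img))  # low frequencies move to the center
    h, w = F.shape
    cy, cx = h // 2, w // 2
    low = F[cy - k // 2: cy + k // 2, cx - k // 2: cx + k // 2]
    return np.abs(low).ravel()

# Hypothetical 64x64 grayscale "face" image
img = np.random.default_rng(1).random((64, 64))
feat = global_fourier_features(img, k=8)
```

Discarding the high-frequency coefficients deliberately blurs away local detail, which is why the method pairs this global descriptor with Gabor features for the local patches.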
Emotion Estimation Algorithm from Facial Image Analyses of e-Learning Users
NASA Astrophysics Data System (ADS)
Shigeta, Ayuko; Koike, Takeshi; Kurokawa, Tomoya; Nosu, Kiyoshi
This paper proposes an emotion estimation algorithm based on facial images of e-Learning users. The algorithm's characteristics are as follows: the criteria used to relate an e-Learning user's emotion to a representative emotion were obtained from a time-sequential analysis of the user's facial expressions. By examining the emotions of the e-Learning users and the positional changes of their facial expressions in the experimental results, the following procedures are introduced to improve estimation reliability: (1) effective feature points are chosen for emotion estimation; (2) subjects are divided into two groups by the change rates of the facial feature points; (3) eigenvectors of the variance-covariance matrices are selected (cumulative contribution rate >= 95%); (4) emotion is calculated using the Mahalanobis distance.
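The final step (4), classification by minimal Mahalanobis distance, can be sketched as follows. The 2-D feature vectors, class names, and pooled covariance are hypothetical stand-ins for the paper's eigenvector-projected feature-point data.

```python
import numpy as np

def mahalanobis(x, mean, cov_inv):
    """Mahalanobis distance of x from a class mean, given the inverse covariance."""
    d = x - mean
    return float(np.sqrt(d @ cov_inv @ d))

def classify(x, class_means, cov_inv):
    """Assign x to the emotion class with the smallest Mahalanobis distance."""
    dists = {c: mahalanobis(x, m, cov_inv) for c, m in class_means.items()}
    return min(dists, key=dists.get)

# Hypothetical 2-D feature-point displacement vectors per emotion
rng = np.random.default_rng(2)
joy = rng.normal([1.0, 0.0], 0.2, size=(40, 2))
sad = rng.normal([-1.0, 0.0], 0.2, size=(40, 2))
X = np.vstack([joy, sad])
cov_inv = np.linalg.inv(np.cov(X, rowvar=False))  # pooled covariance
means = {"joy": joy.mean(axis=0), "sadness": sad.mean(axis=0)}
label = classify(np.array([0.9, 0.1]), means, cov_inv)
```

Unlike plain Euclidean distance, the Mahalanobis distance rescales each direction by its variance, so feature points that naturally move a lot do not dominate the decision.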
Closed dressings after laser skin resurfacing.
Newman, J P; Koch, R J; Goode, R L
1998-07-01
To evaluate the safety, efficacy, and patient acceptance of closed dressings after full facial resurfacing with the carbon dioxide laser. Prospective cohort of men and women undergoing full facial carbon dioxide laser resurfacing. Ambulatory surgical center at a university hospital. Forty consecutive patients randomized to 1 of 4 dressing groups. All patients underwent full facial resurfacing with a carbon dioxide laser system. One of 5 closed dressings (single- or 3-layer composite foam, plastic mesh, hydrogel, or polymer film) was placed immediately after the procedure. Closed dressings were changed on postoperative day 2 and removed on postoperative day 4. Objective postoperative criteria of erythema, scarring, reepithelialization, and surface irregularities were recorded and photodocumented. Comparisons were made among the closed dressing groups as well as with a group of historical control subjects treated with open dressings. The ease of application, office time for preparation and application, and cost of the individual dressings were collected. Patient characteristics of overall acceptance, comfort, and ease of maintenance were recorded with a visual analog scale. There were no complications of scarring, surface irregularities, or contact dermatitis from the application or maintenance of the closed dressings. There were no significant differences in the number of days of postoperative erythema or in the rate of facial reepithelialization among the groups. Most patients preferred not to continue with the closed dressings past 2 days. Positive features from the use of closed dressings included reduction in crust formation, decreased pruritus, decreased erythema, and decreased postoperative pain, compared with historical controls. Negative features included time in preparation and application of the dressings. Costs ranged from $9.79 to $50 per dressing change. Closed dressings are safe and offer benefits noted during the first 4 postoperative days. 
Patients can be expected to maintain a closed dressing for at least 24 hours but no longer than 4 days. The positive features of closed dressings and patient acceptance outweigh the cost and office time involved with their application and maintenance.
Martins, Luciana Flaquer; Vigorito, Julio Wilson
2013-01-01
To determine the characteristics of facial soft tissues at rest and wide smile, and their possible relation to the facial type. We analyzed a sample of forty-eight young female adults, aged between 19.10 and 40 years old, with a mean age of 30.9 years, who had balanced profile and passive lip seal. Cone beam computed tomographies were performed at rest and wide smile postures on the entire sample which was divided into three groups according to individual facial types. Soft tissue features analysis of the lips, nose, zygoma and chin were done in sagittal, axial and frontal axis tomographic views. No differences were observed in any of the facial type variables for the static analysis of facial structures at both rest and wide smile postures. Dynamic analysis showed that brachifacial types are more sensitive to movement, presenting greater sagittal lip contraction. However, the lip movement produced by this type of face results in a narrow smile, with smaller tooth exposure area when compared with other facial types. Findings pointed out that the position of the upper lip should be ahead of the lower lip, and the latter, ahead of the pogonion. It was also found that the facial type does not impact the positioning of these structures. Additionally, the use of cone beam computed tomography may be a valuable method to study craniofacial features.
Enea-Drapeau, Claire; Carlier, Michèle; Huguet, Pascal
2012-01-01
Stigmatization is one of the greatest obstacles to the successful integration of people with Trisomy 21 (T21 or Down syndrome), the most frequent genetic disorder associated with intellectual disability. Research on attitudes and stereotypes toward these people still focuses on explicit measures subjected to social-desirability biases, and neglects how variability in facial stigmata influences attitudes and stereotyping. The participants were 165 adults including 55 young adult students, 55 non-student adults, and 55 professional caregivers working with intellectually disabled persons. They were faced with implicit association tests (IAT), a well-known technique whereby response latency is used to capture the relative strength with which some groups of people--here photographed faces of typically developing children and children with T21--are automatically (without conscious awareness) associated with positive versus negative attributes in memory. Each participant also rated the same photographed faces (consciously accessible evaluations). We provide the first evidence that the positive bias typically found in explicit judgments of children with T21 is smaller for those whose facial features are highly characteristic of this disorder, compared to their counterparts with less distinctive features and to typically developing children. We also show that this bias can coexist with negative evaluations at the implicit level (with large effect sizes), even among professional caregivers. These findings support recent models of feature-based stereotyping, and more importantly show how crucial it is to go beyond explicit evaluations to estimate the true extent of stigmatization of intellectually disabled people.
An Inner Face Advantage in Children's Recognition of Familiar Peers
ERIC Educational Resources Information Center
Ge, Liezhong; Anzures, Gizelle; Wang, Zhe; Kelly, David J.; Pascalis, Olivier; Quinn, Paul C.; Slater, Alan M.; Yang, Zhiliang; Lee, Kang
2008-01-01
Children's recognition of familiar own-age peers was investigated. Chinese children (4-, 8-, and 14-year-olds) were asked to identify their classmates from photographs showing the entire face, the internal facial features only, the external facial features only, or the eyes, nose, or mouth only. Participants from all age groups were familiar with…
Artistic shaping of key facial features in children and adolescents.
Sullivan, P K; Singer, D P
2001-12-01
Facial aesthetics can be enhanced by otoplasty, rhinoplasty and genioplasty. Excellent outcomes can be obtained given appropriate timing, patient selection, preoperative planning, and artistic sculpting of the region with the appropriate surgical technique. Choosing a patient with mature psychological, developmental, and anatomic features that are amenable to treatment in the pediatric population can be challenging, yet rewarding.
Facial expression reconstruction on the basis of selected vertices of triangle mesh
NASA Astrophysics Data System (ADS)
Peszor, Damian; Wojciechowska, Marzena
2016-06-01
Facial expression reconstruction is an important issue in the field of computer graphics. While it is relatively easy to create an animation based on meshes constructed from video recordings, this kind of high-quality data is often not transferred to another model because of the lack of an intermediary, anthropometry-based way to do so. However, if a high-quality mesh is sampled with sufficient density, it is possible to use the obtained feature points to encode the shape of surrounding vertices in a way that can be easily transferred to another mesh with corresponding feature points. In this paper we present a method for obtaining information for the purpose of reconstructing changes in the facial surface on the basis of selected feature points.
A system for tracking and recognizing pedestrian faces using a network of loosely coupled cameras
NASA Astrophysics Data System (ADS)
Gagnon, L.; Laliberté, F.; Foucher, S.; Branzan Albu, A.; Laurendeau, D.
2006-05-01
A face recognition module has been developed for an intelligent multi-camera video surveillance system. The module can recognize a pedestrian face in terms of six basic emotions and the neutral state. Face and facial features detection (eyes, nasal root, nose and mouth) are first performed using cascades of boosted classifiers. These features are used to normalize the pose and dimension of the face image. Gabor filters are then sampled on a regular grid covering the face image to build a facial feature vector that feeds a nearest neighbor classifier with a cosine distance similarity measure for facial expression interpretation and face model construction. A graphical user interface allows the user to adjust the module parameters.
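The nearest-neighbor classification with a cosine similarity measure mentioned above can be sketched as follows. The short "Gabor feature vectors" and expression labels are hypothetical stand-ins for the module's real grid-sampled features.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors (1.0 = identical direction)."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def nearest_neighbor(query, gallery):
    """Return the label of the gallery vector most similar to the query
    under cosine similarity."""
    best_label, best_sim = None, -2.0
    for label, vec in gallery.items():
        sim = cosine_similarity(query, vec)
        if sim > best_sim:
            best_label, best_sim = label, sim
    return best_label

# Hypothetical Gabor feature vectors for two expression prototypes
gallery = {
    "happiness": np.array([0.9, 0.1, 0.4]),
    "anger": np.array([0.1, 0.8, 0.2]),
}
result = nearest_neighbor(np.array([0.8, 0.2, 0.5]), gallery)
```

Because cosine similarity ignores vector magnitude, the comparison is insensitive to overall contrast changes in the Gabor responses, which suits the pose- and dimension-normalized face images described above.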
An Automatic Diagnosis Method of Facial Acne Vulgaris Based on Convolutional Neural Network.
Shen, Xiaolei; Zhang, Jiachi; Yan, Chenjun; Zhou, Hong
2018-04-11
In this paper, we present a new automatic diagnosis method for facial acne vulgaris based on convolutional neural networks (CNNs), designed to overcome a shortcoming of previous methods: their inability to classify enough types of acne vulgaris. The core of our method is to extract image features with CNNs and achieve classification with classifiers. A binary skin-and-non-skin classifier is used to detect the skin area, and a seven-class classifier is used to distinguish facial acne vulgaris types from healthy skin. In the experiments, we compare the effectiveness of our own CNN with that of the VGG16 network pre-trained on the ImageNet data set. We use a ROC curve to evaluate the performance of the binary classifier and a normalized confusion matrix to evaluate the performance of the seven-class classifier. The results show that the pre-trained VGG16 network is effective in extracting features from facial acne vulgaris images, and that these features are very useful for the subsequent classifiers. Finally, we apply both classifiers, built on the pre-trained VGG16 network, to assist doctors in facial acne vulgaris diagnosis.
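The CNN feature extraction at the core of this method rests on the convolution-activation-pooling building block, which can be sketched in plain NumPy. The patch, filter, and sizes below are illustrative assumptions, not the paper's (or VGG16's) actual architecture.

```python
import numpy as np

def conv2d(img, kernel):
    """Valid-mode 2-D convolution (single channel), the basic CNN operation."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Nonlinearity applied after each convolution."""
    return np.maximum(x, 0)

def max_pool(x, size=2):
    """Downsample by taking the maximum over non-overlapping size x size blocks."""
    h, w = x.shape
    h, w = h - h % size, w - w % size
    return x[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

# Hypothetical 16x16 grayscale skin patch and a 3x3 vertical-edge filter
rng = np.random.default_rng(4)
patch = rng.random((16, 16))
kernel = np.array([[1., 0., -1.], [1., 0., -1.], [1., 0., -1.]])
features = max_pool(relu(conv2d(patch, kernel)))
```

Stacking many such blocks, with learned rather than hand-picked kernels, is what lets a network like VGG16 produce the features the abstract feeds to its classifiers.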
Facial Expression Recognition using Multiclass Ensemble Least-Square Support Vector Machine
NASA Astrophysics Data System (ADS)
Lawi, Armin; Sya'Rani Machrizzandi, M.
2018-03-01
Facial expression is one of the behavioral characteristics of human beings. Using a biometric system based on facial expression characteristics makes it possible to recognize a person's mood or emotion. The basic components of a facial expression analysis system are face detection, face image extraction, facial classification, and facial expression recognition. This paper uses the Principal Component Analysis (PCA) algorithm to extract facial features for six expression parameters: happy, sad, neutral, angry, fear, and disgusted. A Multiclass Ensemble Least-Squares Support Vector Machine (MELS-SVM) is then used for facial expression classification. The MELS-SVM model, evaluated on our 185 different expression images of 10 persons, achieved a high accuracy of 99.998% using an RBF kernel.
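The PCA feature-extraction step can be sketched as follows, computing principal components as the leading eigenvectors of the data covariance matrix. The sample sizes are hypothetical, and the MELS-SVM classification stage is omitted.

```python
import numpy as np

def pca_features(X, n_components):
    """Project data onto the top principal components (eigenvectors of the
    covariance matrix with the largest eigenvalues)."""
    mean = X.mean(axis=0)
    Xc = X - mean                                # center the data
    cov = np.cov(Xc, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)       # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1][:n_components]
    components = eigvecs[:, order]               # (d, n_components)
    return Xc @ components, components

# Hypothetical flattened face images: 20 samples, 50 pixels each
rng = np.random.default_rng(3)
X = rng.random((20, 50))
projected, components = pca_features(X, n_components=5)
```

The projected vectors, far lower-dimensional than the raw pixels, are what a downstream SVM would be trained on.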
Barbosa, Jocelyn; Lee, Kyubum; Lee, Sunwon; Lodhi, Bilal; Cho, Jae-Gu; Seo, Woo-Keun; Kang, Jaewoo
2016-03-12
Facial palsy or paralysis (FP) is a symptom involving the loss of voluntary muscle movement on one side of the human face, which can be very devastating for patients. Traditional assessment methods are solely dependent on the clinician's judgment and are therefore time-consuming and subjective in nature. Hence, a quantitative assessment system becomes invaluable for physicians beginning the rehabilitation process; producing a reliable and robust method is challenging and still underway. We introduce a novel approach to the quantitative assessment of facial paralysis that tackles the classification problem for FP type and degree of severity. Specifically, a novel method of quantitative assessment is presented: an algorithm that extracts the human iris and detects facial landmarks, and a hybrid approach combining rule-based and machine learning algorithms to analyze and prognosticate facial paralysis using the captured images. A method combining the optimized Daugman's algorithm and a Localized Active Contour (LAC) model is proposed to efficiently extract the iris and facial landmarks or key points. To improve the performance of LAC, appropriate parameters of the initial evolving curve for facial feature segmentation are automatically selected. The symmetry score is measured by the ratio between features extracted from the two sides of the face. Hybrid classifiers (i.e., rule-based with regularized logistic regression) were employed for discriminating healthy and unhealthy subjects, for FP type classification, and for facial paralysis grading based on the House-Brackmann (H-B) scale. Quantitative analysis was performed to evaluate the performance of the proposed approach. Experiments show that the proposed method is efficient.
Facial movement feature extraction on facial images based on iris segmentation and LAC-based key point detection along with a hybrid classifier provides a more efficient way of addressing classification problem on facial palsy type and degree of severity. Combining iris segmentation and key point-based method has several merits that are essential for our real application. Aside from the facial key points, iris segmentation provides significant contribution as it describes the changes of the iris exposure while performing some facial expressions. It reveals the significant difference between the healthy side and the severe palsy side when raising eyebrows with both eyes directed upward, and can model the typical changes in the iris region.
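The symmetry score described above, a ratio between features extracted from the two sides of the face, can be sketched as follows. The displacement values and the min/max normalization are illustrative assumptions, not the paper's exact formulation.

```python
def symmetry_score(healthy_side, affected_side):
    """Symmetry score as the ratio between a feature measured on the two
    sides of the face (e.g. eyebrow displacement when raising eyebrows).
    A value near 1.0 indicates symmetric movement; values near 0 suggest
    weak movement on the affected side."""
    larger = max(abs(healthy_side), abs(affected_side))
    if larger == 0:
        return 0.0  # no movement on either side
    return min(abs(healthy_side), abs(affected_side)) / larger

# Hypothetical eyebrow-raise displacements (pixels) on each side of the face
score_healthy_subject = symmetry_score(12.0, 11.5)
score_palsy_subject = symmetry_score(12.0, 3.0)
```

Taking min over max keeps the score in [0, 1] regardless of which side moved more, so the same threshold can flag asymmetry on either side of the face.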
Impaired social brain network for processing dynamic facial expressions in autism spectrum disorders
2012-01-01
Background Impairment of social interaction via facial expressions represents a core clinical feature of autism spectrum disorders (ASD). However, the neural correlates of this dysfunction remain unidentified. Because this dysfunction is manifested in real-life situations, we hypothesized that the observation of dynamic, compared with static, facial expressions would reveal abnormal brain functioning in individuals with ASD. We presented dynamic and static facial expressions of fear and happiness to individuals with high-functioning ASD and to age- and sex-matched typically developing controls and recorded their brain activities using functional magnetic resonance imaging (fMRI). Result Regional analysis revealed reduced activation of several brain regions in the ASD group compared with controls in response to dynamic versus static facial expressions, including the middle temporal gyrus (MTG), fusiform gyrus, amygdala, medial prefrontal cortex, and inferior frontal gyrus (IFG). Dynamic causal modeling analyses revealed that bi-directional effective connectivity involving the primary visual cortex–MTG–IFG circuit was enhanced in response to dynamic as compared with static facial expressions in the control group. Group comparisons revealed that all these modulatory effects were weaker in the ASD group than in the control group. Conclusions These results suggest that weak activity and connectivity of the social brain network underlie the impairment in social interaction involving dynamic facial expressions in individuals with ASD. PMID:22889284
Velo-Cardio-Facial Syndrome: 30 Years of Study
Shprintzen, Robert J.
2009-01-01
Velo-cardio-facial syndrome is one of the names that has been attached to one of the most common multiple anomaly syndromes in humans. The labels DiGeorge sequence, 22q11 deletion syndrome, conotruncal anomalies face syndrome, CATCH 22, and Sedlačková syndrome have all been attached to the same disorder. Velo-cardio-facial syndrome has an expansive phenotype with more than 180 clinical features described that involve essentially every organ and system. The syndrome has drawn considerable attention because a number of common psychiatric illnesses are phenotypic features including attention deficit disorder, schizophrenia, and bipolar disorder. The expression is highly variable with some individuals being essentially normal at the mildest end of the spectrum, and the most severe cases having life-threatening and life-impairing problems. The syndrome is caused by a microdeletion from chromosome 22 at the q11.2 band. Although the large majority of affected individuals have identical 3 megabase deletions, less than 10% of cases have smaller deletions of 1.5 or 2.0 megabases. The 3 megabase deletion encompasses a region containing 40 genes. The syndrome has a population prevalence of approximately 1:2,000 in the U.S., although incidence is higher. Although initially a clinical diagnosis, today velo-cardio-facial syndrome can be diagnosed with extremely high accuracy by fluorescence in situ hybridization (FISH) and several other laboratory techniques. Clinical management is age dependent with acute medical problems such as congenital heart disease, immune disorders, feeding problems, cleft palate, and developmental disorders occupying management in infancy and preschool years. Management shifts to cognitive, behavioral, and learning disorders during school years, and then to the potential for psychiatric disorders including psychosis in late adolescence and adult years. 
Although the majority of people with velo-cardio-facial syndrome do not develop psychosis, the risk for severe psychiatric illness is 25 times higher for people affected with velo-cardio-facial syndrome than the general population. Therefore, interest in understanding the nature of psychiatric illness in the syndrome remains strong. PMID:18636631
Hybrid generative-discriminative approach to age-invariant face recognition
NASA Astrophysics Data System (ADS)
Sajid, Muhammad; Shafique, Tamoor
2018-03-01
Age-invariant face recognition is still a challenging research problem due to the complex aging process, which involves different types of facial tissue: skin, fat, muscle, and bone. Most of the related studies that have addressed the aging problem focus on either generative representation (aging simulation) or discriminative representation (feature-based approaches). Designing an appropriate hybrid approach that takes into account both the generative and discriminative representations for age-invariant face recognition remains an open problem. We perform a hybrid matching to achieve robustness to aging variations. This approach automatically segments the eyes, nose-bridge, and mouth regions, which are relatively less sensitive to aging variations compared with the rest of the facial regions, which are age-sensitive. The aging variations of age-sensitive facial parts are compensated using a demographic-aware generative model based on a bridged denoising autoencoder. The age-insensitive facial parts are represented by pixel average vector-based local binary patterns. Deep convolutional neural networks are used to extract relative features of age-sensitive and age-insensitive facial parts. Finally, the feature vectors of age-sensitive and age-insensitive facial parts are fused to achieve the recognition results. Extensive experimental results on morphological face database II (MORPH II), face and gesture recognition network (FG-NET), and the Verification Subset of the cross-age celebrity dataset (CACD-VS) demonstrate the effectiveness of the proposed method for age-invariant face recognition.
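The local-binary-pattern ingredient mentioned in the abstract above can be illustrated with a basic 3×3 LBP histogram. This is a generic sketch of plain LBP, not the paper's "pixel average vector-based" variant; the helper name and patch sizes are hypothetical:

```python
import numpy as np

def lbp_histogram(patch):
    """8-neighbor LBP codes over a grayscale patch, as a normalized 256-bin histogram."""
    c = patch[1:-1, 1:-1]  # center pixels (exclude the 1-pixel border)
    # 8 neighbor offsets, one bit of the LBP code each
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        nb = patch[1 + dy:patch.shape[0] - 1 + dy, 1 + dx:patch.shape[1] - 1 + dx]
        codes |= (nb >= c).astype(np.uint8) << bit  # set bit if neighbor >= center
    hist = np.bincount(codes.ravel(), minlength=256)
    return hist / hist.sum()
```

Per-part histograms like this could then be concatenated or fused with learned features, in the spirit of the fusion step the abstract describes.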
Survey of methods of facial palsy documentation in use by members of the Sir Charles Bell Society.
Fattah, Adel Y; Gavilan, Javier; Hadlock, Tessa A; Marcus, Jeffrey R; Marres, Henri; Nduka, Charles; Slattery, William H; Snyder-Warwick, Alison K
2014-10-01
Facial palsy manifests a broad array of deficits affecting function, form, and psychological well-being. Assessment scales were introduced to standardize and document the features of facial palsy and to facilitate the exchange of information and comparison of outcomes. The aim of this study was to determine which assessment methodologies are currently employed by those involved in the care of patients with facial palsy as a first step toward the development of consensus on the appropriate assessments for this patient population. Online questionnaire. The Sir Charles Bell Society, a group of professionals dedicated to the care of patients with facial palsy, were surveyed to determine the scales used to document facial nerve function, patient reported outcome measures (PROM), and photographic documentation. Fifty-five percent of the membership responded (n = 83). Grading scales were used by 95%, most commonly the House-Brackmann and Sunnybrook scales. PROMs were used by 58%, typically the Facial Clinimetric Evaluation scale or Facial Disability Index. All used photographic recordings, but variability existed among the facial expressions used. Videography was performed by 82%, and mostly involved the same views as still photography; it was also used to document spontaneous movement and speech. Three-dimensional imaging was employed by 18% of respondents. There exists significant heterogeneity in assessments among clinicians, which impedes straightforward comparisons of outcomes following recovery and intervention. Widespread adoption of structured assessments, including scales, PROMs, photography, and videography, will facilitate communication and comparison among those who study the effects of interventions on this population. © 2014 The American Laryngological, Rhinological and Otological Society, Inc.
A Genome-Wide Association Study Identifies Five Loci Influencing Facial Morphology in Europeans
Liu, Fan; van der Lijn, Fedde; Schurmann, Claudia; Zhu, Gu; Chakravarty, M. Mallar; Hysi, Pirro G.; Wollstein, Andreas; Lao, Oscar; de Bruijne, Marleen; Ikram, M. Arfan; van der Lugt, Aad; Rivadeneira, Fernando; Uitterlinden, André G.; Hofman, Albert; Niessen, Wiro J.; Homuth, Georg; de Zubicaray, Greig; McMahon, Katie L.; Thompson, Paul M.; Daboul, Amro; Puls, Ralf; Hegenscheid, Katrin; Bevan, Liisa; Pausova, Zdenka; Medland, Sarah E.; Montgomery, Grant W.; Wright, Margaret J.; Wicking, Carol; Boehringer, Stefan; Spector, Timothy D.; Paus, Tomáš; Martin, Nicholas G.; Biffar, Reiner; Kayser, Manfred
2012-01-01
Inter-individual variation in facial shape is one of the most noticeable phenotypes in humans, and it is clearly under genetic regulation; however, almost nothing is known about the genetic basis of normal human facial morphology. We therefore conducted a genome-wide association study for facial shape phenotypes in multiple discovery and replication cohorts, considering almost ten thousand individuals of European descent from several countries. Phenotyping of facial shape features was based on landmark data obtained from three-dimensional head magnetic resonance images (MRIs) and two-dimensional portrait images. We identified five independent genetic loci associated with different facial phenotypes, suggesting the involvement of five candidate genes—PRDM16, PAX3, TP63, C5orf50, and COL17A1—in the determination of the human face. Three of them have been implicated previously in vertebrate craniofacial development and disease, and the remaining two genes potentially represent novel players in the molecular networks governing facial development. Our finding at PAX3 influencing the position of the nasion replicates a recent GWAS of facial features. In addition to the reported GWA findings, we established links between common DNA variants previously associated with NSCL/P at 2p21, 8q24, 13q31, and 17q22 and normal facial-shape variations based on a candidate gene approach. Overall our study implies that DNA variants in genes essential for craniofacial development contribute with relatively small effect size to the spectrum of normal variation in human facial morphology. This observation has important consequences for future studies aiming to identify more genes involved in the human facial morphology, as well as for potential applications of DNA prediction of facial shape such as in future forensic applications. PMID:23028347
Ultrasonic measurement of facial tissue depth in a Northern Chinese Han population.
Jia, Linpei; Qi, Baiyu; Yang, Jingyan; Zhang, Weiguang; Lu, Yingqiang; Zhang, Hong-Liang
2016-02-01
In forensic anthropology, facial soft tissue depth measurement is crucial for craniofacial reconstruction, a technique that rebuilds the appearance of a decedent from the morphological features of the human face and helps forensic scientists identify unidentified skeletal remains. We measured the facial tissue depth of 135 young subjects from northern China, thereby revealing the relationships among tissue depth, sex, and BMI, and providing data for craniofacial reconstruction in forensic science. All volunteers were healthy medical students, including 64 males and 71 females. Ultrasound was used to measure 19 points across the face, evenly distributed in 6 regions: the eye, nose, mouth, cheek, jaw, and chin. Our results indicate that tissue thickness at 11 points in females and 11 points in males is related to BMI. A majority of points are thicker in females than in males. Further comparisons with data from American and European populations show an apparent diversity in both genders. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Common cues to emotion in the dynamic facial expressions of speech and song
Livingstone, Steven R.; Thompson, William F.; Wanderley, Marcelo M.; Palmer, Caroline
2015-01-01
Speech and song are universal forms of vocalization that may share aspects of emotional expression. Research has focused on parallels in acoustic features, overlooking facial cues to emotion. In three experiments, we compared moving facial expressions in speech and song. In Experiment 1, vocalists spoke and sang statements each with five emotions. Vocalists exhibited emotion-dependent movements of the eyebrows and lip corners that transcended speech–song differences. Vocalists’ jaw movements were coupled to their acoustic intensity, exhibiting differences across emotion and speech–song. Vocalists’ emotional movements extended beyond vocal sound to include large sustained expressions, suggesting a communicative function. In Experiment 2, viewers judged silent videos of vocalists’ facial expressions prior to, during, and following vocalization. Emotional intentions were identified accurately for movements during and after vocalization, suggesting that these movements support the acoustic message. Experiment 3 compared emotional identification in voice-only, face-only, and face-and-voice recordings. Emotion judgements for voice-only singing were poorly identified, yet were accurate for all other conditions, confirming that facial expressions conveyed emotion more accurately than the voice in song, yet were equivalent in speech. Collectively, these findings highlight broad commonalities in the facial cues to emotion in speech and song, yet highlight differences in perception and acoustic-motor production. PMID:25424388
Shu, Ting; Zhang, Bob
2015-04-01
Blood tests allow doctors to check for certain diseases and conditions. However, extracting blood with a syringe is invasive and slightly painful, and its analysis is time consuming. In this paper, we propose a new non-invasive system to detect the health status (Healthy or Diseased) of an individual based on facial block texture features extracted using the Gabor filter. Our system first uses a non-invasive capture device to collect facial images. Next, four facial blocks are located on these images to represent them. Afterwards, each facial block is convolved with a Gabor filter bank to calculate its texture value. Classification is finally performed using K-Nearest Neighbor and Support Vector Machines via LIBSVM, a Library for Support Vector Machines (with four kernel functions). The system was tested on a dataset consisting of 100 Healthy and 100 Diseased (with 13 forms of illnesses) samples. Experimental results show that the proposed system can detect the health status with an accuracy of 93%, a sensitivity of 94%, and a specificity of 92%, using a combination of the Gabor filters and facial blocks.
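The Gabor texture step described above can be sketched roughly as follows. The kernel parameters, helper names (`gabor_kernel`, `block_texture_features`), and the per-filter mean-energy summary are illustrative assumptions, not the authors' implementation:

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(ksize=15, sigma=3.0, theta=0.0, lam=8.0, gamma=0.5):
    """Real part of a Gabor kernel at orientation `theta`."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + gamma**2 * yr**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * xr / lam)

def block_texture_features(block, n_orientations=4):
    """One texture value per Gabor orientation for a grayscale facial block."""
    feats = []
    for k in range(n_orientations):
        kern = gabor_kernel(theta=k * np.pi / n_orientations)
        resp = convolve2d(block, kern, mode="same", boundary="symm")
        feats.append(np.mean(np.abs(resp)))  # mean response energy of this filter
    return np.array(feats)
```

The four per-block feature vectors would then be concatenated and fed to the KNN or SVM classifier mentioned in the abstract.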
Kocher, Katharina; Kowalski, Piotr; Kolokitha, Olga-Elpis; Katsaros, Christos; Fudalej, Piotr S
2016-05-01
To determine whether judgment of nasolabial esthetics in cleft lip and palate (CLP) is influenced by overall facial attractiveness. Experimental study. University of Bern, Switzerland. Seventy-two fused images (36 of boys, 36 of girls) were constructed. Each image comprised (1) the nasolabial region of a treated child with complete unilateral CLP (UCLP) and (2) the external facial features, i.e., the face with masked nasolabial region, of a noncleft child. Photographs of the nasolabial region of six boys and six girls with UCLP representing a wide range of esthetic outcomes, i.e., from very good to very poor appearance, were randomly chosen from a sample of 60 consecutively treated patients in whom nasolabial esthetics had been rated in a previous study. Photographs of external facial features of six boys and six girls without UCLP with various esthetics were randomly selected from patients' files. Eight lay raters evaluated the fused images using a 100-mm visual analogue scale. Method reliability was assessed by reevaluation of fused images after >1 month. A regression model was used to analyze which elements of facial esthetics influenced the perception of nasolabial appearance. Method reliability was good. A regression analysis demonstrated that only the appearance of the nasolabial area affected the esthetic scores of fused images (coefficient = -11.44; P < .001; R² = 0.464). The appearance of the external facial features did not influence perceptions of fused images. Cropping facial images for assessment of nasolabial appearance in CLP seems unnecessary. Instead, esthetic evaluation can be performed on images of full faces.
ERIC Educational Resources Information Center
Rutherford, M. D.; Walsh, Jennifer A.; Lee, Vivian
2015-01-01
Infants are interested in eyes, but look preferentially at mouths toward the end of the first year, when word learning begins. Language delays are characteristic of children developing with autism spectrum disorder (ASD). We measured how infants at risk for ASD, control infants, and infants who later reached ASD criterion scanned facial features.…
Reading Faces: From Features to Recognition.
Guntupalli, J Swaroop; Gobbini, M Ida
2017-12-01
Chang and Tsao recently reported that the monkey face patch system encodes facial identity in a space of facial features as opposed to exemplars. Here, we discuss how such coding might contribute to face recognition, emphasizing the critical role of learning and interactions with other brain areas for optimizing the recognition of familiar faces. Copyright © 2017 Elsevier Ltd. All rights reserved.
Etcoff, Nancy L.; Stock, Shannon; Haley, Lauren E.; Vickery, Sarah A.; House, David M.
2011-01-01
Research on the perception of faces has focused on the size, shape, and configuration of inherited features or the biological phenotype, and largely ignored the effects of adornment, or the extended phenotype. Research on the evolution of signaling has shown that animals frequently alter visual features, including color cues, to attract, intimidate or protect themselves from conspecifics. Humans engage in conscious manipulation of visual signals using cultural tools in real time rather than genetic changes over evolutionary time. Here, we investigate one tool, the use of color cosmetics. In two studies, we asked viewers to rate the same female faces with or without color cosmetics, and we varied the style of makeup from minimal (natural), to moderate (professional), to dramatic (glamorous). Each look provided increasing luminance contrast between the facial features and surrounding skin. Faces were shown for 250 ms or for unlimited inspection time, and subjects rated them for attractiveness, competence, likeability and trustworthiness. At 250 ms, cosmetics had significant positive effects on all outcomes. Length of inspection time did not change the effect for competence or attractiveness. However, with longer inspection time, the effect of cosmetics on likability and trust varied by specific makeup looks, indicating that cosmetics could impact automatic and deliberative judgments differently. The results suggest that cosmetics can create supernormal facial stimuli, and that one way they may do so is by exaggerating cues to sexual dimorphism. 
Our results provide evidence that judgments of facial trustworthiness and attractiveness are at least partially separable, that beauty has a significant positive effect on judgment of competence, a universal dimension of social cognition, but has a more nuanced effect on the other universal dimension of social warmth, and that the extended phenotype significantly influences perception of biologically important signals at first glance and at longer inspection. PMID:21991328
[Noonan syndrome can be diagnosed clinically and through molecular genetic analyses].
Henningsen, Marie Krab; Jelsig, Anne Marie; Andersen, Helle; Brusgaard, Klaus; Ousager, Lilian Bomme; Hertz, Jens Michael
2015-08-03
Noonan syndrome is part of the group of RASopathies caused by germ line mutations in genes involved in the RAS/MAPK pathway. There is substantial phenotypic overlap among the RASopathies. Diagnosis of Noonan syndrome is often based on clinical features including dysmorphic facial features, short stature and congenital heart disease. Rapid advances in sequencing technology have made molecular genetic analyses a helpful tool in diagnosing and distinguishing Noonan syndrome from other RASopathies.
Mapping the emotional face. How individual face parts contribute to successful emotion recognition
Wegrzyn, Martin; Vogt, Maria; Kireclioglu, Berna; Schneider, Julia; Kissler, Johanna
2017-01-01
Which facial features allow human observers to successfully recognize expressions of emotion? While the eyes and mouth have been frequently shown to be of high importance, research on facial action units has made more precise predictions about the areas involved in displaying each emotion. The present research investigated, on a fine-grained level, which physical features are most relied on when decoding facial expressions. In the experiment, individual faces expressing the basic emotions according to Ekman were hidden behind a mask of 48 tiles, which was sequentially uncovered. Participants were instructed to stop the sequence as soon as they recognized the facial expression and assign it the correct label. For each part of the face, its contribution to successful recognition was computed, allowing us to visualize the importance of different face areas for each expression. Overall, observers relied mostly on the eye and mouth regions when successfully recognizing an emotion. Furthermore, the difference in the importance of eyes and mouth allowed us to group the expressions in a continuous space, ranging from sadness and fear (reliance on the eyes) to disgust and happiness (mouth). The face parts with the highest diagnostic value for expression identification were typically located in areas corresponding to action units from the facial action coding system. A similarity analysis of the usefulness of different face parts for expression recognition demonstrated that faces cluster according to the emotion they express, rather than by low-level physical features. Also, expressions relying more on the eyes or mouth region were in close proximity in the constructed similarity space. These analyses help to better understand how human observers process expressions of emotion, by delineating the mapping from facial features to psychological representation. PMID:28493921
Learning the spherical harmonic features for 3-D face recognition.
Liu, Peijiang; Wang, Yunhong; Huang, Di; Zhang, Zhaoxiang; Chen, Liming
2013-03-01
In this paper, a competitive method for 3-D face recognition (FR) using spherical harmonic features (SHF) is proposed. With this solution, 3-D face models are characterized by the energies contained in spherical harmonics with different frequencies, thereby enabling the capture of both gross shape and fine surface details of a 3-D facial surface. This is in clear contrast to most 3-D FR techniques which are either holistic or feature based, using local features extracted from distinctive points. First, 3-D face models are represented in a canonical representation, namely, spherical depth map, by which SHF can be calculated. Then, considering the predictive contribution of each SHF feature, especially in the presence of facial expression and occlusion, feature selection methods are used to improve the predictive performance and provide faster and more cost-effective predictors. Experiments have been carried out on three public 3-D face datasets, SHREC2007, FRGC v2.0, and Bosphorus, with increasing difficulties in terms of facial expression, pose, and occlusion, and which demonstrate the effectiveness of the proposed method.
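The idea of characterizing a spherical depth map by per-degree harmonic energies might be sketched as below. The grid, the crude quadrature, and the function names are assumptions for illustration rather than the paper's method:

```python
import math
import numpy as np
from scipy.special import lpmv  # associated Legendre functions

def real_Ylm(l, m, theta, phi):
    """Real spherical harmonic of degree l, order m (theta: azimuth, phi: polar)."""
    am = abs(m)
    norm = math.sqrt((2 * l + 1) / (4 * math.pi)
                     * math.factorial(l - am) / math.factorial(l + am))
    P = lpmv(am, l, np.cos(phi))
    if m > 0:
        return math.sqrt(2) * norm * np.cos(m * theta) * P
    if m < 0:
        return math.sqrt(2) * norm * np.sin(am * theta) * P
    return norm * P

def sh_energies(depth_map, l_max=8):
    """Energy per spherical-harmonic degree for an (n_phi, n_theta) depth map."""
    n_phi, n_theta = depth_map.shape
    phi = np.linspace(0, np.pi, n_phi)                     # polar samples
    theta = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)  # azimuth samples
    TH, PH = np.meshgrid(theta, phi)
    w = np.sin(PH) * (np.pi / n_phi) * (2 * np.pi / n_theta)  # crude quadrature weights
    energies = np.zeros(l_max + 1)
    for l in range(l_max + 1):
        for m in range(-l, l + 1):
            c = np.sum(depth_map * real_Ylm(l, m, TH, PH) * w)  # projection coefficient
            energies[l] += c ** 2
    return energies
```

The resulting `energies` vector is the kind of frequency-wise descriptor the abstract describes: low degrees capture gross shape, higher degrees fine surface detail.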
Human Facial Shape and Size Heritability and Genetic Correlations.
Cole, Joanne B; Manyama, Mange; Larson, Jacinda R; Liberton, Denise K; Ferrara, Tracey M; Riccardi, Sheri L; Li, Mao; Mio, Washington; Klein, Ophir D; Santorico, Stephanie A; Hallgrímsson, Benedikt; Spritz, Richard A
2017-02-01
The human face is an array of variable physical features that together make each of us unique and distinguishable. Striking familial facial similarities underscore a genetic component, but little is known of the genes that underlie facial shape differences. Numerous studies have estimated facial shape heritability using various methods. Here, we used advanced three-dimensional imaging technology and quantitative human genetics analysis to estimate narrow-sense heritability, heritability explained by common genetic variation, and pairwise genetic correlations of 38 measures of facial shape and size in normal African Bantu children from Tanzania. Specifically, we fit a linear mixed model of genetic relatedness between close and distant relatives to jointly estimate variance components that correspond to heritability explained by genome-wide common genetic variation and variance explained by uncaptured genetic variation, the sum representing total narrow-sense heritability. Our significant estimates for narrow-sense heritability of specific facial traits range from 28 to 67%, with horizontal measures being slightly more heritable than vertical or depth measures. Furthermore, for over half of facial traits, >90% of narrow-sense heritability can be explained by common genetic variation. We also find high absolute genetic correlation between most traits, indicating large overlap in underlying genetic loci. Not surprisingly, traits measured in the same physical orientation (i.e., both horizontal or both vertical) have high positive genetic correlations, whereas traits in opposite orientations have high negative correlations. The complex genetic architecture of facial shape informs our understanding of the intricate relationships among different facial features as well as overall facial development. Copyright © 2017 by the Genetics Society of America.
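The variance-component bookkeeping described above (common plus uncaptured genetic variance summing to total narrow-sense heritability) reduces to simple arithmetic. The helper names and the numbers in the test below are hypothetical:

```python
def narrow_sense_h2(v_common, v_uncaptured, v_residual):
    """Total narrow-sense heritability: genetic variance over phenotypic variance."""
    v_genetic = v_common + v_uncaptured        # sum of the two genetic components
    v_phenotypic = v_genetic + v_residual
    return v_genetic / v_phenotypic

def share_explained_by_common(v_common, v_uncaptured):
    """Fraction of narrow-sense heritability captured by common variants."""
    return v_common / (v_common + v_uncaptured)
```

With these definitions, the abstract's statement that over 90% of narrow-sense heritability is explained by common variation corresponds to `share_explained_by_common` exceeding 0.9 for those traits.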
Photogrammetric Analysis of Attractiveness in Indian Faces
Duggal, Shveta; Kapoor, DN; Verma, Santosh; Sagar, Mahesh; Lee, Yung-Seop; Moon, Hyoungjin
2016-01-01
Background The objective of this study was to assess the attractive facial features of the Indian population. We tried to evaluate subjective ratings of facial attractiveness and identify which facial aesthetic subunits were important for facial attractiveness. Methods A cross-sectional study was conducted of 150 samples (referred to as candidates). Frontal photographs were analyzed. An orthodontist, a prosthodontist, an oral surgeon, a dentist, an artist, a photographer and two laymen (estimators) subjectively evaluated candidates' faces using visual analog scale (VAS) scores. As an objective method for facial analysis, we used balanced angular proportional analysis (BAPA). Using SAS 10.1 (SAS Institute Inc.), Tukey's studentized range test and Pearson correlation analysis were performed to detect between-group differences in VAS scores (Experiment 1), to identify correlations between VAS scores and BAPA scores (Experiment 2), and to analyze the characteristic features of facial attractiveness and gender differences (Experiment 3); the significance level was set at P=0.05. Results Experiment 1 revealed some differences in VAS scores according to professional characteristics. In Experiment 2, BAPA scores were found to behave similarly to subjective ratings of facial beauty, but showed a relatively weak correlation coefficient with the VAS scores. Experiment 3 found that the decisive factors for facial attractiveness were different for men and women. Composite images of attractive Indian male and female faces were constructed. Conclusions Our photogrammetric study, statistical analysis, and average composite faces of an Indian population provide valuable information about subjective perceptions of facial beauty and attractive facial structures in the Indian population. PMID:27019809
Computer-Aided Recognition of Facial Attributes for Fetal Alcohol Spectrum Disorders.
Valentine, Matthew; Bihm, Dustin C J; Wolf, Lior; Hoyme, H Eugene; May, Philip A; Buckley, David; Kalberg, Wendy; Abdul-Rahman, Omar A
2017-12-01
To compare the detection of facial attributes by computer-based facial recognition software of 2-D images against standard, manual examination in fetal alcohol spectrum disorders (FASD). Participants were gathered from the Fetal Alcohol Syndrome Epidemiology Research database. Standard frontal and oblique photographs of children were obtained during a manual, in-person dysmorphology assessment. Images were submitted for facial analysis conducted by the facial dysmorphology novel analysis technology (an automated system), which assesses ratios of measurements between various facial landmarks to determine the presence of dysmorphic features. Manual blinded dysmorphology assessments were compared with those obtained via the computer-aided system. Areas under the curve values for individual receiver-operating characteristic curves revealed the computer-aided system (0.88 ± 0.02) to be comparable to the manual method (0.86 ± 0.03) in detecting patients with FASD. Interestingly, cases of alcohol-related neurodevelopmental disorder (ARND) were identified more efficiently by the computer-aided system (0.84 ± 0.07) in comparison to the manual method (0.74 ± 0.04). A facial gestalt analysis of patients with ARND also identified more generalized facial findings compared to the cardinal facial features seen in more severe forms of FASD. We found there was an increased diagnostic accuracy for ARND via our computer-aided method. As this category has been historically difficult to diagnose, we believe our experiment demonstrates that facial dysmorphology novel analysis technology can potentially improve ARND diagnosis by introducing a standardized metric for recognizing FASD-associated facial anomalies. Earlier recognition of these patients will lead to earlier intervention with improved patient outcomes. Copyright © 2017 by the American Academy of Pediatrics.
Facial Contrast Is a Cross-Cultural Cue for Perceiving Age
Porcheron, Aurélie; Mauger, Emmanuelle; Soppelsa, Frédérique; Liu, Yuli; Ge, Liezhong; Pascalis, Olivier; Russell, Richard; Morizot, Frédérique
2017-01-01
Age is a fundamental social dimension and a youthful appearance is of importance for many individuals, perhaps because it is a relevant predictor of aspects of health, facial attractiveness and general well-being. We recently showed that facial contrast—the color and luminance difference between facial features and the surrounding skin—is age-related and a cue to age perception of Caucasian women. Specifically, aspects of facial contrast decrease with age in Caucasian women, and Caucasian female faces with higher contrast look younger (Porcheron et al., 2013). Here we investigated faces of other ethnic groups and raters of other cultures to see whether facial contrast is a cross-cultural youth-related attribute. Using large sets of full face color photographs of Chinese, Latin American and black South African women aged 20–80, we measured the luminance and color contrast between the facial features (the eyes, the lips, and the brows) and the surrounding skin. Most aspects of facial contrast that were previously found to decrease with age in Caucasian women were also found to decrease with age in the other ethnic groups. Though the overall pattern of changes with age was common to all women, there were also some differences between the groups. In a separate study, individual faces of the 4 ethnic groups were perceived younger by French and Chinese participants when the aspects of facial contrast that vary with age in the majority of faces were artificially increased, but older when they were artificially decreased. Altogether these findings indicate that facial contrast is a cross-cultural cue to youthfulness. Because cosmetics were shown to enhance facial contrast, this work provides some support for the notion that a universal function of cosmetics is to make female faces look younger. PMID:28790941
Discrimination of gender using facial image with expression change
NASA Astrophysics Data System (ADS)
Kuniyada, Jun; Fukuda, Takahiro; Terada, Kenji
2005-12-01
By carrying out marketing research, managers of large department stores and small convenience stores obtain information such as the ratio of male to female visitors and their age groups, and use it to improve their management plans. However, this work is done manually and is a heavy burden on small stores. In this paper, the authors propose a method of discriminating gender by extracting differences in facial expression change from color facial images. Many methods already exist in the image processing field for automatic recognition of individuals from moving or still facial images. However, it is very difficult to discriminate gender under the influence of hairstyle, clothing, and similar factors. We therefore propose a method that is not affected by individual characteristics such as the size and position of facial parts, by paying attention to changes in expression. The method requires two facial images: one with an expression and one expressionless. First, the facial surface region and the regions of facial parts such as the eyes, nose, and mouth are extracted from the facial image using hue and saturation in the HSV color system together with emphasized edge information. Next, features are extracted by calculating the rate of change of each facial part caused by the expression change. In the last step, the feature values of the input data are compared with the database, and the gender is discriminated. Experiments on laughing and smiling expressions gave good gender-discrimination results.
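The per-part rate-of-change feature described above can be sketched as follows; the part names and pixel measurements are hypothetical stand-ins for the paper's extracted regions:

```python
def change_rates(neutral, expressive):
    """Relative size change of each facial part between a neutral
    and an expressive image. Part names and pixel measurements are
    hypothetical stand-ins for the extracted regions."""
    return {part: (expressive[part] - neutral[part]) / neutral[part]
            for part in neutral}

neutral = {"eye_height": 20.0, "mouth_width": 30.0}
smiling = {"eye_height": 16.0, "mouth_width": 42.0}
# Eyes narrow and the mouth widens when smiling.
print(change_rates(neutral, smiling))
```

Because the features are ratios, they are insensitive to the absolute size and position of facial parts, which is the point of the method.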
Facial Features: What Women Perceive as Attractive and What Men Consider Attractive
Muñoz-Reyes, José Antonio; Iglesias-Julios, Marta; Pita, Miguel; Turiegano, Enrique
2015-01-01
Attractiveness plays an important role in social exchange and in the ability to attract potential mates, especially for women. Several facial traits have been described as reliable indicators of attractiveness in women, but very few studies consider the influence of several measurements simultaneously. In addition, most studies consider just one of two assessments to directly measure attractiveness: either self-evaluation or men's ratings. We explored the relationship between these two estimators of attractiveness and a set of facial traits in a sample of 266 young Spanish women. These traits are: facial fluctuating asymmetry, facial averageness, facial sexual dimorphism, and facial maturity. We made use of the advantage of having recently developed methodologies that enabled us to measure these variables in real faces. We also controlled for three other widely used variables: age, body mass index and waist-to-hip ratio. The inclusion of many different variables allowed us to detect any possible interaction between the features described that could affect attractiveness perception. Our results show that facial fluctuating asymmetry is related both to self-perceived and male-rated attractiveness. Other facial traits are related only to one direct attractiveness measurement: facial averageness and facial maturity only affect men's ratings. Unmodified faces are closer to natural stimuli than are manipulated photographs, and therefore our results support the importance of employing unmodified faces to analyse the factors affecting attractiveness. We also discuss the relatively low equivalence between self-perceived and male-rated attractiveness and how various anthropometric traits are relevant to them in different ways. Finally, we highlight the need to perform integrated-variable studies to fully understand female attractiveness. PMID:26161954
Johal, Ama; Chaggar, Amrit; Zou, Li Fong
2018-03-01
The present study used optical surface laser scanning to compare the facial features of patients aged 8-18 years presenting with Class I and Class III incisor relationships in a case-control design. Subjects with a Class III incisor relationship, aged 8-18 years, were age- and gender-matched with Class I controls and underwent a 3-dimensional (3-D) optical surface scan of the facial soft tissues. Landmark analysis revealed that Class III subjects displayed greater mean dimensions than the control group, most notably between the ages of 8-10 and 17-18 years in both males and females, with respect to antero-posterior (P = 0.01) and vertical (P = 0.006) facial dimensions. Surface-based analysis revealed the greatest difference in the lower facial region, followed by the mid-face, whilst the upper face remained fairly consistent. Significant detectable differences were found in the surface facial features of developing Class III subjects.
Dissociable roles of internal feelings and face recognition ability in facial expression decoding.
Zhang, Lin; Song, Yiying; Liu, Ling; Liu, Jia
2016-05-15
The problem of emotion recognition has been tackled by researchers in both affective computing and cognitive neuroscience. While affective computing relies on analyzing visual features from facial expressions, it has been proposed that humans recognize emotions by internally simulating the emotional states conveyed by others' expressions, in addition to perceptual analysis of facial features. Here we investigated whether and how our internal feelings contributed to the ability to decode facial expressions. In two independent large samples of participants, we observed that individuals who generally experienced richer internal feelings exhibited a higher ability to decode facial expressions, and the contribution of internal feelings was independent of face recognition ability. Further, using voxel-based morphometry, we found that the gray matter volume (GMV) of bilateral superior temporal sulcus (STS) and the right inferior parietal lobule was associated with facial expression decoding through the mediating effect of internal feelings, while the GMV of bilateral STS, precuneus, and the right central opercular cortex contributed to facial expression decoding through the mediating effect of face recognition ability. In addition, the clusters in bilateral STS involved in the two components were neighboring yet separate. Our results may provide clues about the mechanism by which internal feelings, in addition to face recognition ability, serve as an important instrument for humans in facial expression decoding. Copyright © 2016 Elsevier Inc. All rights reserved.
Improving the Quality of Facial Composites Using a Holistic Cognitive Interview
ERIC Educational Resources Information Center
Frowd, Charlie D.; Bruce, Vicki; Smith, Ashley J.; Hancock, Peter J. B.
2008-01-01
Witnesses to and victims of serious crime are normally asked to describe the appearance of a criminal suspect, using a Cognitive Interview (CI), and to construct a facial composite, a visual representation of the face. Research suggests that focusing on the global aspects of a face, as opposed to its facial features, facilitates recognition and…
Feature Selection on Hyperspectral Data for Dismount Skin Analysis
2014-03-27
Melanosome estimation can be calculated as a fundamental method to differentiate between people, and the area of facial recognition has benefited from the rich spectral information that hyperspectral imaging (HSI) provides; traditional facial recognition systems have previously been enhanced by HSI.
Enhanced facial texture illumination normalization for face recognition.
Luo, Yong; Guan, Ye-Peng
2015-08-01
An uncontrolled lighting condition is one of the most critical challenges for practical face recognition applications. An enhanced facial texture illumination normalization method is put forward to resolve this challenge. An adaptive relighting algorithm is developed to improve the brightness uniformity of face images. Facial texture is extracted by using an illumination estimation difference algorithm. An anisotropic histogram-stretching algorithm is proposed to minimize the intraclass distance of facial skin and maximize the dynamic range of facial texture distribution. Compared with the existing methods, the proposed method can more effectively eliminate the redundant information of facial skin and illumination. Extensive experiments show that the proposed method has superior performance in normalizing illumination variation and enhancing facial texture features for illumination-insensitive face recognition.
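A plain (isotropic) histogram stretch conveys the idea behind the paper's anisotropic variant; this sketch is a generic stand-in, not the authors' algorithm:

```python
def stretch(pixels, lo_pct=0.05, hi_pct=0.95):
    """Linear histogram stretch: map the [lo, hi] percentile range of
    grey levels onto [0, 255], clipping outliers. A generic isotropic
    stand-in for the paper's anisotropic variant."""
    ranked = sorted(pixels)
    lo = ranked[int(lo_pct * (len(ranked) - 1))]
    hi = ranked[int(hi_pct * (len(ranked) - 1))]
    span = max(hi - lo, 1)
    return [min(255, max(0, round(255 * (p - lo) / span))) for p in pixels]

print(stretch([50, 60, 70, 80, 90]))  # → [0, 85, 170, 255, 255]
```

Stretching expands the dynamic range of facial texture while clipping the extremes introduced by uneven illumination.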
The 3-M syndrome. A heritable low birthweight dwarfism.
Van Goethem, H; Malvaux, P
1987-10-01
Two male siblings and one girl with the 3-M syndrome are reported. The main clinical features include low birthweight, proportionate dwarfism, a hatchet-shaped craniofacial configuration, abnormalities of the mouth and teeth, a short broad neck with prominent trapezius muscles, pectus deformity, transverse grooves of the anterior chest, and winged scapulae.
NASA Astrophysics Data System (ADS)
Cui, Chen; Asari, Vijayan K.
2014-03-01
Biometric features such as fingerprints, iris patterns, and face features help to identify people and restrict access to secure areas by performing advanced pattern analysis and matching. Face recognition is one of the most promising biometric methodologies for human identification in a non-cooperative security environment. However, the recognition results obtained by face recognition systems are affected by several variations that may happen to the patterns in an unrestricted environment. As a result, several algorithms have been developed for extracting different facial features for face recognition. Due to the various possible challenges of data captured at different lighting conditions, viewing angles, facial expressions, and partial occlusions in natural environmental conditions, automatic facial recognition remains a difficult issue that needs to be resolved. In this paper, we propose a novel approach to tackling some of these issues by analyzing local textural descriptions for facial feature representation. The textural information is extracted by an enhanced local binary pattern (ELBP) description of all the local regions of the face. The relationship of each pixel with respect to its neighborhood is extracted and employed to calculate the new representation. ELBP reconstructs a much better textural feature extraction vector from an original gray-level image under different lighting conditions. The dimensionality of the texture image is reduced by principal component analysis performed on each local face region. Each low-dimensional vector representing a local region is then weighted based on the significance of the sub-region. The weight of each sub-region is determined by employing the local variance estimate of the respective region, which represents the significance of the region. The final facial textural feature vector is obtained by concatenating the reduced-dimensional weighted sets of all the modules (sub-regions) of the face image.
Experiments conducted on various popular face databases show promising performance of the proposed algorithm under varying lighting, expression, and partial-occlusion conditions. Four databases were used to test the performance of the proposed system: the Yale Face database, the Extended Yale Face database B, the Japanese Female Facial Expression database, and the CMU AMP Facial Expression database. The experimental results on all four databases show the effectiveness of the proposed system. The computation cost is also lower because of the simplified calculation steps. Research is ongoing to investigate the effectiveness of the proposed face recognition method under pose-varying conditions as well. It is envisaged that a multi-lane approach of frameworks trained at different pose bins, together with an appropriate voting strategy, would lead to a good recognition rate in such situations.
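The enhanced LBP (ELBP) descriptor builds on the basic local binary pattern, which encodes each pixel's relationship to its 3x3 neighborhood as a byte; a minimal sketch of the basic operator (the paper's enhancement refines this encoding):

```python
def lbp_code(patch):
    """Basic 3x3 local binary pattern: threshold the 8 neighbours at
    the centre value and read them as a byte, clockwise from the
    top-left corner."""
    centre = patch[1][1]
    order = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    code = 0
    for bit, (r, c) in enumerate(order):
        if patch[r][c] >= centre:
            code |= 1 << (7 - bit)
    return code

# A bright top row against a darker centre sets the three high bits.
print(lbp_code([[9, 9, 9], [1, 5, 1], [1, 1, 1]]))  # → 224
```

Histograms of these codes over local face regions form the textural feature vectors that are then reduced by PCA and weighted, as described above.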
Cavoy, R
2013-09-01
Facial palsy is a daily challenge for clinicians. Determining whether facial nerve palsy is peripheral or central is a key step in the diagnosis. Central nervous system lesions can cause facial palsy that is usually easy to differentiate from peripheral palsy. The next question is whether a peripheral facial paralysis is idiopathic or symptomatic, and a good knowledge of the anatomy of the facial nerve is helpful here. A structured approach is given to identify additional features that distinguish symptomatic facial palsy from the idiopathic form. The main cause of peripheral facial palsy is the idiopathic form, or Bell's palsy, which remains a diagnosis of exclusion. The most common cause of symptomatic peripheral facial palsy is Ramsay-Hunt syndrome. Early identification of symptomatic facial palsy is important because of its often worse outcome and different management. The prognosis of Bell's palsy is on the whole favorable and is improved with a prompt tapering course of prednisone. In Ramsay-Hunt syndrome, antiviral therapy is added to prednisone. We also discuss current treatment recommendations and review the short- and long-term complications of peripheral facial palsy.
Ardizzi, Martina; Sestito, Mariateresa; Martini, Francesca; Umiltà, Maria Alessandra; Ravera, Roberto; Gallese, Vittorio
2014-01-01
Age-group membership effects on explicit recognition of emotional facial expressions have been widely demonstrated. In this study we investigated whether age-group membership could also affect implicit physiological responses, such as facial mimicry and autonomic regulation, to the observation of emotional facial expressions. To this aim, facial electromyography (EMG) and respiratory sinus arrhythmia (RSA) were recorded from teenage and adult participants during the observation of facial expressions performed by teenage and adult models. Results highlighted that teenagers exhibited greater facial EMG responses to peers' facial expressions, whereas adults showed higher RSA responses to adult facial expressions. The different physiological modalities through which young people and adults respond to peers' emotional expressions are likely to reflect two different ways of engaging in social interactions with peers. The findings confirm that age is an important and powerful social feature that modulates interpersonal interactions by influencing low-level physiological responses. PMID:25337916
Aspects of Facial Contrast Decrease with Age and Are Cues for Age Perception
Porcheron, Aurélie; Mauger, Emmanuelle; Russell, Richard
2013-01-01
Age is a primary social dimension. We behave differently toward people as a function of how old we perceive them to be. Age perception relies on cues that are correlated with age, such as wrinkles. Here we report that aspects of facial contrast–the contrast between facial features and the surrounding skin–decreased with age in a large sample of adult Caucasian females. These same aspects of facial contrast were also significantly correlated with the perceived age of the faces. Individual faces were perceived as younger when these aspects of facial contrast were artificially increased, but older when these aspects of facial contrast were artificially decreased. These findings show that facial contrast plays a role in age perception, and that faces with greater facial contrast look younger. Because facial contrast is increased by typical cosmetics use, we infer that cosmetics function in part by making the face appear younger. PMID:23483959
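Facial contrast in this literature is typically a Michelson-style ratio between feature and surrounding-skin luminance; a toy sketch with illustrative values (not the paper's exact formula):

```python
def luminance_contrast(feature_mean, skin_mean):
    """Michelson-style contrast between the mean luminance of a facial
    feature (e.g. the lips) and that of the surrounding skin; darker
    features against lighter skin give positive contrast."""
    return (skin_mean - feature_mean) / (skin_mean + feature_mean)

print(round(luminance_contrast(40.0, 60.0), 2))  # → 0.2
```

Lightening the feature or darkening the skin shrinks this ratio, which is the direction of change the study associates with older-looking faces.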
Astley, S J; Clarren, S K
1996-07-01
The purpose of this study was to demonstrate that a quantitative, multivariate case definition of the fetal alcohol syndrome (FAS) facial phenotype could be derived from photographs of individuals with FAS and to demonstrate how this case definition and photographic approach could be used to develop efficient, accurate, and precise screening tools, diagnostic aids, and possibly surveillance tools. Frontal facial photographs of 42 subjects (from birth to 27 years of age) with FAS were matched to 84 subjects without FAS. The study population was randomly divided in half. Group 1 was used to identify the facial features that best differentiated individuals with and without FAS. Group 2 was used for cross validation. In group 1, stepwise discriminant analysis identified three facial features (reduced palpebral fissure length/inner canthal distance ratio, smooth philtrum, and thin upper lip) as the cluster of features that differentiated individuals with and without FAS in groups 1 and 2 with 100% accuracy. Sensitivity and specificity were unaffected by race, gender, and age. The phenotypic case definition derived from photographs accurately distinguished between individuals with and without FAS, demonstrating the potential of this approach for developing screening, diagnostic, and surveillance tools. Further evaluation of the validity and generalizability of this method will be needed.
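A toy composite of the three discriminating features gives the flavor of such a classifier; the weights and comparison are illustrative, not the study's fitted discriminant function:

```python
def fas_facial_score(pfl_icd_ratio, philtrum_smooth, lip_thin):
    """Toy composite of the three discriminating features: a reduced
    palpebral fissure length / inner canthal distance ratio, a smooth
    philtrum (0/1), and a thin upper lip (0/1). Weights are
    illustrative, not the study's fitted discriminant function."""
    return (1.0 - pfl_icd_ratio) + 0.5 * philtrum_smooth + 0.5 * lip_thin

# The FAS-like profile scores higher than the unaffected profile.
assert fas_facial_score(0.6, 1, 1) > fas_facial_score(0.9, 0, 0)
```

In the study itself the weights were estimated by stepwise discriminant analysis on group 1 and cross-validated on group 2.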
Thomas, Jayakar; Ragavi, B Sindhu; Raneesha, PK; Ahmed, N Ashwak; Cynthia, S; Manoharan, D; Manoharan, R
2013-01-01
Hallermann-Streiff syndrome (HSS) is a rare disorder characterized by dyscephalia, with facial and dental abnormalities. We report a 12-year-old female child who presented with abnormal facial features, dental abnormalities and sparse scalp hair. PMID:24082185
Obstructive Sleep Apnea in Women: Study of Speech and Craniofacial Characteristics
Tyan, Marina; Fernández Pozo, Rubén; Toledano, Doroteo; Lopez Gonzalo, Eduardo; Alcazar Ramirez, Jose Daniel; Hernandez Gomez, Luis Alfonso
2017-01-01
Background Obstructive sleep apnea (OSA) is a common sleep disorder characterized by frequent cessations of breathing lasting 10 seconds or longer. The diagnosis of OSA is performed through an expensive procedure that requires an overnight stay at the hospital. This has led to several proposals based on the analysis of patients' facial images and speech recordings as an attempt to develop simpler and cheaper diagnostic methods. Objective The objective of this study was to analyze possible relationships between OSA and speech and facial features in a female population, whether these possible connections may be affected by the specific clinical characteristics of the OSA population and, more specifically, how the connection between OSA and speech and facial features can be affected by gender. Methods All subjects were Spanish patients suspected of suffering from OSA and referred to a sleep disorders unit. Voice recordings and photographs were collected in a supervised but not highly controlled way, approximating a realistic clinical scenario in which OSA is assessed using an app running on a mobile device. Furthermore, clinical variables such as weight, height, age, and cervical perimeter, which are usually reported as predictors of OSA, were also gathered. Acoustic analysis centered on sustained vowels. Facial analysis consisted of a set of local craniofacial features related to OSA, extracted from images after detecting facial landmarks using active appearance models. To study the probable connection of OSA with speech and craniofacial features, correlations among the apnea-hypopnea index (AHI), clinical variables, and acoustic and facial measurements were analyzed. Results The results obtained for the female population indicate mainly weak correlations (r values between .20 and .39).
Correlations between AHI, clinical variables, and speech features show the prevalence of formant frequencies over bandwidths, with F2/i/ being the most appropriate formant frequency for OSA prediction in women. Results obtained for the male population indicate mainly very weak correlations (r values between .01 and .19); in this case, bandwidths prevail over formant frequencies. Correlations between AHI, clinical variables, and craniofacial measurements are very weak. Conclusions In accordance with previous studies, some clinical variables are found to be good predictors of OSA, and correlations are found between AHI and some clinical variables and speech and facial features. Regarding speech features, the results show the prevalence of the formant frequency F2/i/ over the remaining features as an OSA-predictive feature for the female population. Although the correlations reported are weak, this study aims to find traces that could explain the possible connection between OSA and speech in women. In the case of craniofacial measurements, the results show that some features that can be used to predict OSA in male patients are not suitable for testing the female population. PMID:29109068
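The reported r values are Pearson correlation coefficients between AHI and each feature; for reference, a stdlib implementation:

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient, the statistic behind the
    reported AHI-versus-feature r values."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

print(pearson_r([1, 2, 3], [2, 4, 6]))  # perfectly correlated → 1.0
```

Values of |r| between .20 and .39, as reported for the female population, are conventionally read as weak correlations.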
Vinkler, Chana; Leshinsky-Silver, Esther; Michelson, Marina; Haas, Dorothea; Lerman-Sagie, Tally; Lev, Dorit
2014-01-01
Genetic syndromes with proportionate severe short stature are rare. We describe two sisters born to nonconsanguineous parents with severe linear growth retardation, poor weight gain, microcephaly, characteristic facial features, cutaneous syndactyly of the toes, high myopia, and severe intellectual disability. During infancy and early childhood, the girls had transient hepatosplenomegaly and low blood cholesterol levels that normalized later. A thorough evaluation including metabolic studies, radiological, and genetic investigations were all normal. Cholesterol metabolism and transport were studied and no definitive abnormality was found. No clinical deterioration was observed and no metabolic crises were reported. After due consideration of other known hereditary causes of post-natal severe linear growth retardation, microcephaly, and intellectual disability, we propose that this condition represents a newly recognized autosomal recessive multiple congenital anomaly-intellectual disability syndrome. Copyright © 2014 Elsevier Masson SAS. All rights reserved.
Modelling multimodal expression of emotion in a virtual agent.
Pelachaud, Catherine
2009-12-12
Over the past few years we have been developing an expressive embodied conversational agent system. In particular, we have developed a model of multimodal behaviours that includes dynamism and complex facial expressions. The first feature refers to the qualitative execution of behaviours. Our model is based on perceptual studies and encompasses several parameters that modulate multimodal behaviours. The second feature, the model of complex expressions, follows a componential approach where a new expression is obtained by combining facial areas of other expressions. Lately we have been working on adding temporal dynamism to expressions. So far they have been designed statically, typically at their apex. Only full-blown expressions could be modelled. To overcome this limitation, we have defined a representation scheme that describes the temporal evolution of the expression of an emotion. It is no longer represented by a static definition but by a temporally ordered sequence of multimodal signals.
Soft-tissue facial characteristics of attractive Chinese men compared to normal men.
Wu, Feng; Li, Junfang; He, Hong; Huang, Na; Tang, Youchao; Wang, Yuanqing
2015-01-01
To compare the facial characteristics of attractive Chinese men with those of reference men, the three-dimensional coordinates of 50 facial landmarks were collected in 40 healthy reference men and in 40 "attractive" men; soft-tissue facial angles, distances, areas, and volumes were computed and compared using analysis of variance. When compared with reference men, attractive men shared several similar facial characteristics: a relatively large forehead, a reduced mandible, and a rounded face. They had a more acute soft-tissue profile, an increased upper facial width and middle facial depth, a larger mouth, and more voluminous lips than reference men. Attractive men had several facial characteristics suggesting babyness; nonetheless, each group of men was characterized by a different development of these features. Esthetic reference values can be a useful tool for clinicians but should always consider the characteristics of individual faces.
The distinguishing motor features of cataplexy: a study from video-recorded attacks.
Pizza, Fabio; Antelmi, Elena; Vandi, Stefano; Meletti, Stefano; Erro, Roberto; Baumann, Christian R; Bhatia, Kailash P; Dauvilliers, Yves; Edwards, Mark J; Iranzo, Alex; Overeem, Sebastiaan; Tinazzi, Michele; Liguori, Rocco; Plazzi, Giuseppe
2018-05-01
To describe the motor pattern of cataplexy and to determine its phenomenological differences from pseudocataplexy in the differential diagnosis of episodic falls. We selected 30 video-recorded cataplexy and 21 pseudocataplexy attacks in 17 and 10 patients evaluated for suspected narcolepsy and with final diagnoses of narcolepsy type 1 and conversion disorder, respectively, together with self-reported attack features, and asked expert neurologists to blindly evaluate the motor features of the attacks. Video-documented and self-reported attack features of cataplexy and pseudocataplexy were contrasted. Video-recorded cataplexy can be positively differentiated from pseudocataplexy by the occurrence of facial hypotonia (ptosis, mouth opening, tongue protrusion) intermingled with jerks and grimaces abruptly interrupting laughter behavior (i.e., smile, facial expression) and postural control (head drops, trunk fall) under a clear emotional trigger. Facial involvement is present in both partial and generalized cataplexy. Conversely, generalized pseudocataplexy is associated with persistence of deep tendon reflexes during the attack. Self-reported features confirmed the important role of positive emotions (laughter, telling a joke) in triggering the attacks, as well as the more frequent occurrence of partial body involvement in cataplexy compared with pseudocataplexy. Cataplexy is characterized by abrupt facial involvement during laughter behavior. Video recording of suspected cataplexy attacks allows the identification of positive clinical signs useful for diagnosis and, possibly in the future, for severity assessment.
Enea-Drapeau, Claire; Carlier, Michèle; Huguet, Pascal
2012-01-01
Background: Stigmatization is one of the greatest obstacles to the successful integration of people with Trisomy 21 (T21 or Down syndrome), the most frequent genetic disorder associated with intellectual disability. Research on attitudes and stereotypes toward these people still focuses on explicit measures subject to social-desirability biases, and neglects how variability in facial stigmata influences attitudes and stereotyping. Methodology/Principal Findings: The participants were 165 adults, including 55 young adult students, 55 non-student adults, and 55 professional caregivers working with intellectually disabled persons. They completed implicit association tests (IAT), a well-known technique whereby response latency is used to capture the relative strength with which some groups of people—here, photographed faces of typically developing children and children with T21—are automatically (without conscious awareness) associated with positive versus negative attributes in memory. Each participant also rated the same photographed faces (consciously accessible evaluations). We provide the first evidence that the positive bias typically found in explicit judgments of children with T21 is smaller for those whose facial features are highly characteristic of this disorder, compared to their counterparts with less distinctive features and to typically developing children. We also show that this bias can coexist with negative evaluations at the implicit level (with large effect sizes), even among professional caregivers. Conclusion: These findings support recent models of feature-based stereotyping and, more importantly, show how crucial it is to go beyond explicit evaluations to estimate the true extent of stigmatization of intellectually disabled people. PMID:22496796
Real-time face and gesture analysis for human-robot interaction
NASA Astrophysics Data System (ADS)
Wallhoff, Frank; Rehrl, Tobias; Mayer, Christoph; Radig, Bernd
2010-05-01
Human communication relies on a large number of different communication mechanisms such as spoken language, facial expressions, and gestures. Facial expressions and gestures are among the main nonverbal communication mechanisms and pass large amounts of information between human dialog partners. Therefore, to allow for intuitive human-machine interaction, real-time capable processing and recognition of facial expressions and of hand and head gestures are of great importance. We present a system that tackles these challenges. The input features for the dynamic head gestures and facial expressions are obtained from a sophisticated three-dimensional model, which is fitted to the user in a real-time capable manner. Applying this model, different kinds of information are extracted from the image data and afterwards handed over to a real-time capable data-transferring framework, the so-called Real-Time DataBase (RTDB). In addition to the head- and facial-related features, low-level image features of the human hand (optical flow, Hu moments) are also stored in the RTDB for the evaluation of hand gestures. In general, the input of a single camera is sufficient for the parallel evaluation of the different gestures and facial expressions. The real-time capable recognition of the dynamic hand and head gestures is performed via different Hidden Markov Models, which have proven to be a quick and real-time capable classification method. For the facial expressions, classical decision trees or more sophisticated support vector machines are used for classification. The results of the classification processes are again handed over to the RTDB, where other processes (such as a dialog management unit) can easily access them without any blocking effects. In addition, an adjustable amount of history can be stored by the RTDB buffer unit.
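The HMM-based gesture recognition described above can be sketched minimally as follows: each gesture class gets its own discrete HMM, and an observed feature sequence is assigned to the class whose model gives it the highest forward-algorithm likelihood. The gesture names, observation encoding, and all model parameters below are illustrative assumptions, not the system's actual configuration.

```python
def forward_likelihood(obs, pi, A, B):
    """Probability of an observation sequence under a discrete HMM.
    pi: initial state probabilities, A: transition matrix, B: emission matrix."""
    n = len(pi)
    # Initialisation: alpha_1(s) = pi(s) * B(s, o_1)
    alpha = [pi[s] * B[s][obs[0]] for s in range(n)]
    # Recursion: alpha_t(s) = (sum_s' alpha_{t-1}(s') * A(s', s)) * B(s, o_t)
    for o in obs[1:]:
        alpha = [sum(alpha[sp] * A[sp][s] for sp in range(n)) * B[s][o]
                 for s in range(n)]
    # Termination: P(obs) = sum_s alpha_T(s)
    return sum(alpha)

def classify(obs, models):
    """Pick the gesture whose HMM assigns the sequence the highest likelihood."""
    return max(models, key=lambda name: forward_likelihood(obs, *models[name]))

# Two toy 2-state HMMs over a binary observation alphabet
# (0 = vertical head motion, 1 = horizontal head motion; hypothetical encoding).
models = {
    "nod":   ([0.5, 0.5], [[0.9, 0.1], [0.1, 0.9]], [[0.9, 0.1], [0.7, 0.3]]),
    "shake": ([0.5, 0.5], [[0.9, 0.1], [0.1, 0.9]], [[0.1, 0.9], [0.3, 0.7]]),
}
```

A mostly-vertical sequence such as `[0, 0, 1, 0]` is then classified as "nod". Real systems train these parameters from data (e.g. via Baum-Welch) rather than setting them by hand.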
Shy children are less sensitive to some cues to facial recognition.
Brunet, Paul M; Mondloch, Catherine J; Schmidt, Louis A
2010-02-01
Temperamental shyness in children is characterized by avoidance of faces and eye contact, beginning in infancy. We conducted two studies to determine whether temperamental shyness was associated with deficits in sensitivity to some cues to facial identity. In Study 1, 40 typically developing 10-year-old children made same/different judgments about pairs of faces that differed in the appearance of individual features, the shape of the external contour, or the spacing among features; their parents completed the Colorado childhood temperament inventory (CCTI). Children who scored higher on CCTI shyness made more errors than their non-shy counterparts only when discriminating faces based on the spacing of features. Differences in accuracy were not related to other scales of the CCTI. In Study 2, we showed that these differences were face-specific and cannot be attributed to differences in task difficulty. Findings suggest that shy children are less sensitive to some cues to facial recognition, possibly underlying their difficulty distinguishing certain facial emotions in others and leading to a cascade of secondary negative effects in social behaviour.
Gibboni, Robert R; Zimmerman, Prisca E; Gothard, Katalin M
2009-01-01
Scanpaths (the succession of fixations and saccades during spontaneous viewing) contain information about the image but also about the viewer. To determine the viewer-dependent factors in the scanpaths of monkeys, we trained three adult males (Macaca mulatta) to look for 3 s at images of conspecific facial expressions with either direct or averted gaze. The subjects showed significant differences on four basic scanpath parameters (number of fixations, fixation duration, saccade length, and total scanpath length) when viewing the same facial expression/gaze direction combinations. Furthermore, we found differences between monkeys in feature preference and in the temporal order in which features were visited on different facial expressions. Overall, the between-subject variability was larger than the within-subject variability, suggesting that scanpaths reflect individual preferences in allocating visual attention to various features in aggressive, neutral, and appeasing facial expressions. Individual scanpath characteristics were brought into register with the genotype for the serotonin transporter regulatory gene (5-HTTLPR) and with behavioral characteristics such as expression of anticipatory anxiety and impulsiveness/hesitation in approaching food in the presence of a potentially dangerous object.
Borrowed beauty? Understanding identity in Asian facial cosmetic surgery.
Aquino, Yves Saint James; Steinkamp, Norbert
2016-09-01
This review aims to identify (1) sources of knowledge and (2) important themes of the ethical debate related to surgical alteration of facial features in East Asians. This article integrates narrative and systematic review methods. In March 2014, we searched databases including PubMed, Philosopher's Index, Web of Science, Sociological Abstracts, and Communication Abstracts using key terms "cosmetic surgery," "ethnic*," "ethics," "Asia*," and "Western*." The study included all types of papers written in English that discuss the debate on rhinoplasty and blepharoplasty in East Asians. No limit was put on date of publication. Combining both narrative and systematic review methods, a total of 31 articles were critically appraised on their contribution to ethical reflection founded on the debates regarding the surgical alteration of Asian features. Sources of knowledge were drawn from four main disciplines, including the humanities, medicine or surgery, communications, and economics. Focusing on cosmetic surgery perceived as a westernising practice, the key debate themes included authenticity of identity, interpersonal relationships and socio-economic utility in the context of Asian culture. The study shows how cosmetic surgery of ethnic features plays an important role in understanding female identity in the Asian context. Based on the debate themes authenticity of identity, interpersonal relationships, and socio-economic utility, this article argues that identity should be understood as less individualistic and more as relational and transformational in the Asian context. In addition, this article also proposes to consider cosmetic surgery of Asian features as an interplay of cultural imperialism and cultural nationalism, which can both be a source of social pressure to modify one's appearance.
Verma, Shyam; Vasani, Resham; Joshi, Rajiv; Phiske, Meghana; Punjabi, Pritesh; Toprani, Tushar
2016-01-01
The term facial acanthosis nigricans (FAN) lacks a definition of precise clinical and histopathological features. We present a descriptive study of patients with FAN to define pigmentary patterns and estimate the prevalence of obesity and insulin resistance in these cases. This prospective study included all patients with classical AN of the neck and/or other areas together with facial acanthosis nigricans, described as brown-to-black macular pigmentation with blurred, ill-defined margins on the zygomatic and malar areas. The body mass index (BMI) and waist circumference (WC) of the included patients were used as parameters of obesity. The Homeostatic Model of Assessment of Insulin Resistance (HOMA2 IR) was used as a parameter to evaluate insulin resistance. Histopathological features of the 6 skin biopsies that could be obtained were reviewed. Among the 102 included individuals, the patterns of facial pigmentation seen in addition to the classic pattern involving the zygomatic and malar areas were a hyperpigmented band on the forehead in 59.80%, periorbital darkening in 17.64%, perioral darkening in 12.74%, and generalized darkening in 9.8% of cases. 85.29% of the males and 100% of the females were found to be obese. Varying degrees of insulin resistance were noted in 82.34% of the individuals. The six biopsies available for evaluation showed changes such as mild epidermal hyperplasia with prominent basal melanin, however without the typical papillomatosis seen in AN of the flexures. We document an increased prevalence of obesity and insulin resistance in patients presenting with FAN, and its presentations in addition to the classical description. We propose that FAN can be considered a cutaneous marker of insulin resistance and that HOMA2 IR can serve as a parameter of insulin resistance in such cases.
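The anthropometric screening above rests on simple arithmetic: BMI is weight divided by height squared, flagged against a cut-off. A minimal sketch, using the WHO general cut-off of 30 kg/m² for illustration (the cut-off actually applied in this study, conducted in India, is not stated in the abstract and may have been the lower Asian threshold):

```python
def bmi(weight_kg, height_m):
    """Body mass index in kg/m^2."""
    return weight_kg / height_m ** 2

def is_obese(weight_kg, height_m, cutoff=30.0):
    """Flag obesity against a BMI cut-off.
    30 kg/m^2 is the WHO general cut-off; the study's own cut-off is an assumption here."""
    return bmi(weight_kg, height_m) >= cutoff
```

For example, `bmi(80, 1.70)` gives about 27.7, below the 30 kg/m² cut-off but within the WHO "overweight" band.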
Internal representations reveal cultural diversity in expectations of facial expressions of emotion.
Jack, Rachael E; Caldara, Roberto; Schyns, Philippe G
2012-02-01
Facial expressions have long been considered the "universal language of emotion." Yet consistent cultural differences in the recognition of facial expressions contradict such notions (e.g., R. E. Jack, C. Blais, C. Scheepers, P. G. Schyns, & R. Caldara, 2009). Rather, culture--as an intricate system of social concepts and beliefs--could generate different expectations (i.e., internal representations) of facial expression signals. To investigate, we used a powerful psychophysical technique (reverse correlation) to estimate the observer-specific internal representations of the 6 basic facial expressions of emotion (i.e., happy, surprise, fear, disgust, anger, and sad) in two culturally distinct groups (i.e., Western Caucasian [WC] and East Asian [EA]). Using complementary statistical image analyses, cultural specificity was directly revealed in these representations. Specifically, whereas WC internal representations predominantly featured the eyebrows and mouth, EA internal representations showed a preference for expressive information in the eye region. Closer inspection of the EA observer preference revealed a surprising feature: changes of gaze direction, shown primarily among the EA group. These results reveal directly, for the first time, that culture can finely shape the internal representations of common facial expressions of emotion, challenging notions of a biologically hardwired "universal language of emotion."
Sad Facial Expressions Increase Choice Blindness
Wang, Yajie; Zhao, Song; Zhang, Zhijie; Feng, Wenfeng
2018-01-01
Previous studies have discovered a fascinating phenomenon known as choice blindness—individuals fail to detect mismatches between the face they choose and the face replaced by the experimenter. Although previous studies have reported a couple of factors that can modulate the magnitude of choice blindness, the potential effect of facial expression on choice blindness has not yet been explored. Using faces with sad and neutral expressions (Experiment 1) and faces with happy and neutral expressions (Experiment 2) in the classic choice blindness paradigm, the present study investigated the effects of facial expressions on choice blindness. The results showed that the detection rate was significantly lower on sad faces than neutral faces, whereas no significant difference was observed between happy faces and neutral faces. The exploratory analysis of verbal reports found that participants who reported less facial features for sad (as compared to neutral) expressions also tended to show a lower detection rate of sad (as compared to neutral) faces. These findings indicated that sad facial expressions increased choice blindness, which might have resulted from inhibition of further processing of the detailed facial features by the less attractive sad expressions (as compared to neutral expressions). PMID:29358926
NASA Astrophysics Data System (ADS)
Benitez-Garcia, Gibran; Nakamura, Tomoaki; Kaneko, Masahide
2017-01-01
Darwin was the first to assert that facial expressions are innate and universal, recognized across all cultures. However, some recent cross-cultural studies have questioned this assumed universality. Therefore, this paper presents an analysis of the differences between Western and East-Asian faces for the six basic expressions (anger, disgust, fear, happiness, sadness and surprise), focused on three individual facial regions: eyes-eyebrows, nose and mouth. The analysis is conducted by applying PCA to two feature extraction methods: appearance-based, using the pixel intensities of facial parts, and geometric-based, handling 125 feature points from the face. Both methods are evaluated using 4 standard databases for both racial groups, and the results are compared with a cross-cultural human study applied to 20 participants. Our analysis reveals that differences between Westerners and East-Asians exist mainly in the eyes-eyebrows and mouth regions for expressions of fear and disgust, respectively. This work presents important findings for a better design of automatic facial expression recognition systems that accounts for the differences between the two racial groups.
Sanchez-Pages, Santiago; Rodriguez-Ruiz, Claudia; Turiegano, Enrique
2014-01-01
Recent research has explored the relationship between facial masculinity, human male behaviour and males' perceived features (i.e. attractiveness). The methods of measurement of facial masculinity employed in the literature are quite diverse. In the present paper, we use several methods of measuring facial masculinity to study the effect of this feature on risk attitudes and trustworthiness. We employ two strategic interactions to measure these two traits, a first-price auction and a trust game. We find that facial width-to-height ratio is the best predictor of trustworthiness, and that measures of masculinity which use Geometric Morphometrics are the best suited to link masculinity and bidding behaviour. However, we observe that the link between masculinity and bidding in the first-price auction might be driven by competitiveness and not by risk aversion only. Finally, we test the relationship between facial measures of masculinity and perceived masculinity. As a conclusion, we suggest that researchers in the field should measure masculinity using one of these methods in order to obtain comparable results. We also encourage researchers to revise the existing literature on this topic following these measurement methods. PMID:25389770
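The facial width-to-height ratio (fWHR) highlighted above is conventionally computed as bizygomatic width divided by upper-face height (roughly, brow to upper lip). A minimal sketch from 2-D landmark coordinates; the landmark names and coordinates are illustrative, not the paper's measurement protocol:

```python
def fwhr(left_zygion, right_zygion, brow_midpoint, upper_lip):
    """Facial width-to-height ratio: bizygomatic width / upper-face height.
    Each argument is an (x, y) pixel coordinate from a frontal photograph."""
    width = abs(right_zygion[0] - left_zygion[0])    # horizontal extent between cheekbones
    height = abs(upper_lip[1] - brow_midpoint[1])    # vertical extent, brow to upper lip
    return width / height
```

For example, `fwhr((40, 120), (180, 120), (110, 80), (110, 155))` gives 140/75, roughly 1.87; values near or above 2 are typically described as high-fWHR faces in this literature.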
Hybrid Feature Extraction-based Approach for Facial Parts Representation and Recognition
NASA Astrophysics Data System (ADS)
Rouabhia, C.; Tebbikh, H.
2008-06-01
Face recognition is a specialized image processing task which has attracted considerable attention in computer vision. In this article, we develop a new facial recognition system from video sequence images, dedicated to identifying persons whose faces are partly occluded. This system is based on a hybrid image feature extraction technique called ACPDL2D (Rouabhia et al. 2007), which combines two-dimensional principal component analysis and two-dimensional linear discriminant analysis with a neural network. We performed the feature extraction task on the eye and nose images separately, then used a multi-layer perceptron classifier. Compared to the whole face, the simulation results favor the facial parts in terms of memory capacity and recognition rate (99.41% for the eye part, 98.16% for the nose part, and 97.25% for the whole face).
The perceptual saliency of fearful eyes and smiles: A signal detection study
Saban, Muhammet Ikbal; Rotshtein, Pia
2017-01-01
Facial features differ in the amount of expressive information they convey. Specifically, eyes are argued to be essential for fear recognition, while smiles are crucial for recognising happy expressions. In three experiments, we tested whether expression modulates the perceptual saliency of diagnostic facial features and whether a feature’s saliency depends on the face configuration. Participants were presented with masked facial features or noise at the conscious perceptual threshold. The task was to indicate whether eyes (experiments 1-3A) or a mouth (experiment 3B) was present. The expression of the face and its configuration (i.e. spatial arrangement of the features) were manipulated. Experiment 1 compared fearful with neutral expressions; experiments 2 and 3 compared fearful versus happy expressions. The detection accuracy data were analysed using Signal Detection Theory (SDT) to examine the effects of expression and configuration on perceptual precision (d’) and response bias (c), separately. Across all three experiments, fearful eyes were detected better (higher d’) than neutral and happy eyes. Eyes were more precisely detected than mouths, whereas smiles were detected better than fearful mouths. The configuration of the features had no consistent effect across the experiments on the ability to detect expressive features, but it consistently affected the response bias: participants used a more liberal criterion for detecting eyes in a canonical configuration and a fearful expression. Finally, the power in the low spatial frequencies of a feature predicted its discriminability index. The results suggest that expressive features are perceptually more salient, with a higher d’ due to changes in low-level visual properties, while emotion and configuration affect perception through top-down processes, as reflected by the response bias. PMID:28267761
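The SDT quantities used above follow directly from hit and false-alarm rates: d' = z(H) - z(F) and c = -(z(H) + z(F)) / 2, where z is the inverse of the standard normal CDF. A minimal sketch using only the Python standard library; the rates in the examples are made-up values, not the study's data:

```python
from statistics import NormalDist

def sdt_measures(hit_rate, fa_rate):
    """Return (d-prime, criterion c) from hit and false-alarm rates.
    d' = z(H) - z(F);  c = -(z(H) + z(F)) / 2.
    Rates must lie strictly between 0 and 1 (apply a correction for 0 or 1)."""
    z = NormalDist().inv_cdf  # inverse standard normal CDF
    zh, zf = z(hit_rate), z(fa_rate)
    return zh - zf, -(zh + zf) / 2
```

For instance, `sdt_measures(0.8, 0.2)` describes an unbiased observer (c = 0) with d' of about 1.68, while `sdt_measures(0.9, 0.4)` yields a negative c, the "more liberal criterion" pattern reported for fearful eyes in canonical configuration.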
Neural bases of different cognitive strategies for facial affect processing in schizophrenia.
Fakra, Eric; Salgado-Pineda, Pilar; Delaveau, Pauline; Hariri, Ahmad R; Blin, Olivier
2008-03-01
To examine the neural basis and dynamics of facial affect processing in schizophrenic patients as compared to healthy controls. Fourteen schizophrenic patients and fourteen matched controls performed a facial affect identification task during fMRI acquisition. The emotional task included an intuitive emotional condition (matching emotional faces) and a more cognitively demanding condition (labeling emotional faces). Individual analysis for each emotional condition, and second-level t-tests examining both within- and between-group differences, were carried out using a random effects approach. Psychophysiological interactions (PPI) were tested for variations in functional connectivity between the amygdala and other brain regions as a function of changes in experimental conditions (labeling versus matching). During the labeling condition, both groups engaged similar networks. During the matching condition, schizophrenic patients failed to activate regions of the limbic system implicated in the automatic processing of emotions. PPI revealed an inverse functional connectivity between prefrontal regions and the left amygdala in healthy volunteers, but there was no such change in patients. Furthermore, during the matching condition, and compared to controls, patients showed decreased activation of regions involved in holistic face processing (fusiform gyrus) and increased activation of regions associated with feature analysis (inferior parietal cortex, left middle temporal lobe, right precuneus). Our findings suggest that schizophrenic patients invariably adopt a cognitive approach when identifying facial affect. The distributed neocortical network observed during the intuitive condition indicates that patients may resort to feature-based, rather than configuration-based, processing, which may constitute a compensatory strategy for limbic dysfunction.
Facial Recognition in a Group-Living Cichlid Fish.
Kohda, Masanori; Jordan, Lyndon Alexander; Hotta, Takashi; Kosaka, Naoya; Karino, Kenji; Tanaka, Hirokazu; Taniyama, Masami; Takeyama, Tomohiro
2015-01-01
The theoretical underpinnings of the mechanisms of sociality, e.g. territoriality, hierarchy, and reciprocity, are based on assumptions of individual recognition. While behavioural evidence suggests individual recognition is widespread, the cues that animals use to recognise individuals are established in only a handful of systems. Here, we use digital models to demonstrate that facial features are the visual cue used for individual recognition in the social fish Neolamprologus pulcher. Focal fish were exposed to digital images showing four different combinations of familiar and unfamiliar face and body colorations. Focal fish attended to digital models with unfamiliar faces longer, and from a greater distance, than to models with familiar faces. These results strongly suggest that fish can distinguish individuals accurately using facial colour patterns. Our observations also suggest that fish are able to rapidly (≤ 0.5 sec) discriminate between familiar and unfamiliar individuals, a speed of recognition comparable to primates including humans.
[Advances in the research of pressure therapy for pediatric burn patients with facial scar].
Wei, Y T; Fu, J F; Li-Tsang, Z H P
2017-05-20
Facial scarring and deformity caused by burn injury severely affect the physical and psychological well-being of pediatric burn patients, and call for close attention and early rehabilitation treatment from both medical workers and the patients' family members. Pressure therapy is an important rehabilitative strategy for facial scars in pediatric burn patients, mainly involving headgear and transparent pressure facemasks, each with its own features. To achieve better results, pressure therapy should be chosen according to the specific condition of the patient and combined with other assistant therapies. Successful rehabilitation relies on the cooperation of both the patients' family members and society; rehabilitation knowledge should be provided to parents to gain their full support and cooperation, in order to achieve the best therapeutic effects and ultimately rebuild the patients' physical and psychological well-being.
Flack, Tessa R; Andrews, Timothy J; Hymers, Mark; Al-Mosaiwi, Mohammed; Marsden, Samuel P; Strachan, James W A; Trakulpipat, Chayanit; Wang, Liang; Wu, Tian; Young, Andrew W
2015-08-01
The face-selective region of the right posterior superior temporal sulcus (pSTS) plays an important role in analysing facial expressions. However, it is less clear how facial expressions are represented in this region. In this study, we used the face composite effect to explore whether the pSTS contains a holistic or feature-based representation of facial expression. Aligned and misaligned composite images were created from the top and bottom halves of faces posing different expressions. In Experiment 1, participants performed a behavioural matching task in which they judged whether the top half of two images was the same or different. The ability to discriminate the top half of the face was affected by changes in the bottom half of the face when the images were aligned, but not when they were misaligned. This shows a holistic behavioural response to expression. In Experiment 2, we used fMR-adaptation to ask whether the pSTS has a corresponding holistic neural representation of expression. Aligned or misaligned images were presented in blocks that involved repeating the same image or in which the top or bottom half of the images changed. Increased neural responses were found in the right pSTS regardless of whether the change occurred in the top or bottom of the image, showing that changes in expression were detected across all parts of the face. However, in contrast to the behavioural data, the pattern did not differ between aligned and misaligned stimuli. This suggests that the pSTS does not encode facial expressions holistically. In contrast to the pSTS, a holistic pattern of response to facial expression was found in the right inferior frontal gyrus (IFG). Together, these results suggest that pSTS reflects an early stage in the processing of facial expression in which facial features are represented independently. Copyright © 2015 Elsevier Ltd. All rights reserved.
Somppi, Sanni; Törnqvist, Heini; Kujala, Miiamaaria V.; Hänninen, Laura; Krause, Christina M.; Vainio, Outi
2016-01-01
Appropriate response to companions’ emotional signals is important for all social creatures. The emotional expressions of humans and non-human animals have analogies in their form and function, suggesting shared evolutionary roots, but very little is known about how animals other than primates view and process facial expressions. In primates, threat-related facial expressions evoke exceptional viewing patterns compared with neutral or positive stimuli. Here, we explore whether domestic dogs (Canis familiaris) have such an attentional bias toward threatening social stimuli and whether observed emotional expressions affect dogs’ gaze fixation distribution among the facial features (eyes, midface and mouth). We recorded the voluntary eye gaze of 31 domestic dogs during viewing of facial photographs of humans and dogs with three emotional expressions (threatening, pleasant and neutral). We found that dogs’ gaze fixations spread systematically among facial features. The distribution of fixations was altered by the seen expression, but eyes were the most probable targets of the first fixations and gathered longer looking durations than the mouth regardless of the viewed expression. The examination of the inner facial features as a whole revealed more pronounced scanning differences among expressions. This suggests that dogs do not base their perception of facial expressions on the viewing of single structures, but on the interpretation of the composition formed by eyes, midface and mouth. Dogs evaluated social threat rapidly, and this evaluation led to an attentional bias that was dependent on the depicted species: threatening conspecifics’ faces evoked heightened attention, whereas threatening human faces instead evoked an avoidance response. We propose that threatening signals carrying differential biological validity are processed via distinctive neurocognitive pathways. Both of these mechanisms may have an adaptive significance for domestic dogs.
The findings provide a novel perspective on understanding the processing of emotional expressions and sensitivity to social threat in non-primates. PMID:26761433
Facial Structure Predicts Sexual Orientation in Both Men and Women.
Skorska, Malvina N; Geniole, Shawn N; Vrysen, Brandon M; McCormick, Cheryl M; Bogaert, Anthony F
2015-07-01
Biological models have typically framed sexual orientation in terms of effects of variation in fetal androgen signaling on sexual differentiation, although other biological models exist. Despite marked sex differences in facial structure, the relationship between sexual orientation and facial structure is understudied. A total of 52 lesbian women, 134 heterosexual women, 77 gay men, and 127 heterosexual men were recruited at a Canadian campus and various Canadian Pride and sexuality events. We found that facial structure differed depending on sexual orientation; substantial variation in sexual orientation was predicted using facial metrics computed by a facial modelling program from photographs of White faces. At the univariate level, lesbian and heterosexual women differed in 17 facial features (out of 63) and four were unique multivariate predictors in logistic regression. Gay and heterosexual men differed in 11 facial features at the univariate level, of which three were unique multivariate predictors. Some, but not all, of the facial metrics differed between the sexes. Lesbian women had noses that were more turned up (also more turned up in heterosexual men), mouths that were more puckered, smaller foreheads, and marginally more masculine face shapes (also in heterosexual men) than heterosexual women. Gay men had more convex cheeks, shorter noses (also in heterosexual women), and foreheads that were more tilted back relative to heterosexual men. Principal components analysis and discriminant functions analysis generally corroborated these results. The mechanisms underlying variation in craniofacial structure--both related and unrelated to sexual differentiation--may thus be important in understanding the development of sexual orientation.
ERIC Educational Resources Information Center
Robbins, Rachel A.; Shergill, Yaadwinder; Maurer, Daphne; Lewis, Terri L.
2011-01-01
Adults are expert at recognizing faces, in part because of exquisite sensitivity to the spacing of facial features. Children are poorer than adults at recognizing facial identity and less sensitive to spacing differences. Here we examined the specificity of the immaturity by comparing the ability of 8-year-olds, 14-year-olds, and adults to…
KBG syndrome involving a single-nucleotide duplication in ANKRD11
Kleyner, Robert; Malcolmson, Janet; Tegay, David; Ward, Kenneth; Maughan, Annette; Maughan, Glenn; Nelson, Lesa; Wang, Kai; Robison, Reid; Lyon, Gholson J.
2016-01-01
KBG syndrome is a rare autosomal dominant genetic condition characterized by neurological involvement and distinct facial, hand, and skeletal features. More than 70 cases have been reported; however, it is likely that KBG syndrome is underdiagnosed because of lack of comprehensive characterization of the heterogeneous phenotypic features. We describe the clinical manifestations in a male currently 13 years of age, who exhibited symptoms including epilepsy, severe developmental delay, distinct facial features, and hand anomalies, without a positive genetic diagnosis. Subsequent exome sequencing identified a novel de novo heterozygous single base pair duplication (c.6015dupA) in ANKRD11, which was validated by Sanger sequencing. This single-nucleotide duplication is predicted to lead to a premature stop codon and loss of function in ANKRD11, thereby implicating it as contributing to the proband's symptoms and yielding a molecular diagnosis of KBG syndrome. Before molecular diagnosis, this syndrome was not recognized in the proband, as several key features of the disorder were mild and were not recognized by clinicians, further supporting the concept of variable expressivity in many disorders. Although a diagnosis of cerebral folate deficiency has also been given, its significance for the proband's condition remains uncertain. PMID:27900361
Proposal of Self-Learning and Recognition System of Facial Expression
NASA Astrophysics Data System (ADS)
Ogawa, Yukihiro; Kato, Kunihito; Yamamoto, Kazuhiko
We describe the realization of a more complex function using information acquired from several simple, pre-equipped functions. We propose a self-learning and recognition system for human facial expressions, achieved through natural interaction between a human and a robot. A robot equipped with this system can understand human facial expressions and behave according to them after completing the learning process. The system is modelled after the process by which a baby learns its parents’ facial expressions. Equipped with a camera, the robot can capture face images; CdS sensors on the robot’s head provide information about human actions. Using the information from these sensors, the robot extracts the features of each facial expression. After self-learning is complete, when a person changes his or her facial expression in front of the robot, the robot performs the actions associated with that expression.
Soft-tissue facial characteristics of attractive Chinese men compared to normal men
Wu, Feng; Li, Junfang; He, Hong; Huang, Na; Tang, Youchao; Wang, Yuanqing
2015-01-01
Objective: To compare the facial characteristics of attractive Chinese men with those of reference men. Materials and Methods: The three-dimensional coordinates of 50 facial landmarks were collected in 40 healthy reference men and in 40 “attractive” men; soft tissue facial angles, distances, areas, and volumes were computed and compared using analysis of variance. Results: When compared with reference men, attractive men shared several similar facial characteristics: a relatively large forehead, a reduced mandible, and a rounded face. They had a more acute soft tissue profile, an increased upper facial width and middle facial depth, a larger mouth, and more voluminous lips than reference men. Conclusions: Attractive men had several facial characteristics suggesting babyness. Nonetheless, each group of men was characterized by a different development of these features. Esthetic reference values can be a useful tool for clinicians but should always consider the characteristics of individual faces. PMID:26221357
A multiple maximum scatter difference discriminant criterion for facial feature extraction.
Song, Fengxi; Zhang, David; Mei, Dayong; Guo, Zhongwei
2007-12-01
The maximum scatter difference (MSD) discriminant criterion is a recently proposed binary discriminant criterion for pattern classification that uses the generalized scatter difference rather than the generalized Rayleigh quotient as a class separability measure, thereby avoiding the singularity problem when addressing small-sample-size problems. MSD classifiers based on this criterion have been quite effective on face-recognition tasks, but as binary classifiers they are not as efficient on large-scale classification tasks. To address this problem, this paper generalizes the classification-oriented binary criterion to its multiple counterpart: the multiple MSD (MMSD) discriminant criterion for facial feature extraction. The MMSD feature-extraction method, based on this novel discriminant criterion, is a new subspace-based feature-extraction method. Unlike most other subspace-based feature-extraction methods, MMSD computes its discriminant vectors from both the range of the between-class scatter matrix and the null space of the within-class scatter matrix. The MMSD is theoretically elegant and easy to calculate. Extensive experimental studies conducted on the benchmark FERET database show that MMSD outperforms state-of-the-art facial feature-extraction methods such as the null space method, direct linear discriminant analysis (LDA), eigenfaces, Fisherfaces, and complete LDA.
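The MMSD construction described above can be sketched directly: the MSD criterion J(w) = wᵀ(Sb − c·Sw)w is maximized by the leading eigenvectors of the symmetric scatter-difference matrix, so no inversion of the (possibly singular) within-class scatter matrix is needed. The code below is an illustrative reading of that idea, not the authors' exact algorithm; the balance parameter c and the number of discriminant vectors k are assumptions.

```python
import numpy as np

def mmsd_directions(X, y, c=1.0, k=2):
    """Sketch of MMSD feature extraction.

    Discriminant vectors maximize w' (Sb - c*Sw) w, i.e., they are the
    top-k eigenvectors of the scatter-difference matrix. Using the
    difference instead of the ratio Sb/Sw avoids inverting Sw, which is
    singular in small-sample-size settings.
    """
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    mean = X.mean(axis=0)
    d = X.shape[1]
    Sb = np.zeros((d, d))  # between-class scatter
    Sw = np.zeros((d, d))  # within-class scatter
    for label in np.unique(y):
        Xc = X[y == label]
        mc = Xc.mean(axis=0)
        Sb += len(Xc) * np.outer(mc - mean, mc - mean)
        Sw += (Xc - mc).T @ (Xc - mc)
    # Symmetric eigendecomposition of the scatter difference;
    # the top-k eigenvectors span the MMSD feature space.
    vals, vecs = np.linalg.eigh(Sb - c * Sw)
    order = np.argsort(vals)[::-1]
    return vecs[:, order[:k]]  # (d, k) projection matrix
```

Features are then obtained by projecting samples onto the returned directions (X @ W); classification proceeds in that subspace.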
Facial bacterial infections: folliculitis.
Laureano, Ana Cristina; Schwartz, Robert A; Cohen, Philip J
2014-01-01
Facial bacterial infections are most commonly caused by infections of the hair follicles. Folliculitis can occur wherever pilosebaceous units are found, with the most frequent bacterial culprit being Staphylococcus aureus. We review the different origins of facial folliculitis, distinguishing bacterial forms from other infectious and non-infectious mimickers. We distinguish folliculitis from pseudofolliculitis and perifolliculitis. Clinical features, etiology, pathology, and management options are also discussed. Copyright © 2014. Published by Elsevier Inc.
Facial Redness Increases Men's Perceived Healthiness and Attractiveness.
Thorstenson, Christopher A; Pazda, Adam D; Elliot, Andrew J; Perrett, David I
2017-06-01
Past research has shown that peripheral and facial redness influences perceptions of attractiveness for men viewing women. The current research investigated whether a parallel effect is present when women rate men with varying facial redness. In four experiments, women judged the attractiveness of men's faces, which were presented with varying degrees of redness. We also examined perceived healthiness and other candidate variables as mediators of the red-attractiveness effect. The results show that facial redness positively influences ratings of men's attractiveness. Additionally, perceived healthiness was documented as a mediator of this effect, independent of other potential mediator variables. The current research emphasizes facial coloration as an important feature of social judgments.
Stepanova, Elena V; Strube, Michael J
2012-01-01
Participants (N = 106) performed an affective priming task with facial primes that varied in skin tone and facial physiognomy and were presented either in color or in gray-scale. Participants' racial evaluations were more positive for Eurocentric than for Afrocentric physiognomy faces. Light skin tone faces were evaluated more positively than dark skin tone faces, but the magnitude of this effect depended on the mode of color presentation. The results suggest that in affective priming tasks, faces might not be processed holistically; instead, visual features of facial priming stimuli independently affect implicit evaluations.
Yu, Andrea C; Zambrano, Regina M; Cristian, Ingrid; Price, Sue; Bernhard, Birgitta; Zucker, Marc; Venkateswaran, Sunita; McGowan-Jordan, Jean; Armour, Christine M
2017-06-01
Isolated 7p22.3p22.2 deletions are rarely described, with only two reports in the literature. Most other reported cases either involve a much larger region of the 7p arm or have an additional copy number variation. Here, we report five patients with overlapping microdeletions at 7p22.3p22.2. The patients presented with variable developmental delays, exhibiting relative weaknesses in expressive language skills and relative strengths in gross and fine motor skills. The most consistent facial features seen in these patients included a broad nasal root, a prominent forehead, a prominent glabella, and arched eyebrows. Additional variable features amongst the patients included microcephaly, metopic ridging or craniosynostosis, cleft palate, cardiac defects, and mild hypotonia. Although the patients' deletions varied in size, there was a 0.47 Mb region of overlap which contained 7 OMIM genes: EIF3B, CHST12, LFNG, BRAT1, TTYH3, AMZ1, and GNA12. We propose that monosomy of this region represents a novel microdeletion syndrome. We recommend that individuals with 7p22.3p22.2 deletions receive a developmental assessment and a thorough cardiac exam, with consideration of an echocardiogram, as part of their initial evaluation. © 2017 Wiley Periodicals, Inc.
Dissociation between recognition and detection advantage for facial expressions: a meta-analysis.
Nummenmaa, Lauri; Calvo, Manuel G
2015-04-01
Happy facial expressions are recognized faster and more accurately than other expressions in categorization tasks, whereas detection in visual search tasks is widely believed to be faster for angry than happy faces. We used meta-analytic techniques for resolving this categorization versus detection advantage discrepancy for positive versus negative facial expressions. Effect sizes were computed on the basis of the r statistic for a total of 34 recognition studies with 3,561 participants and 37 visual search studies with 2,455 participants, yielding a total of 41 effect sizes for recognition accuracy, 25 for recognition speed, and 125 for visual search speed. Random effects meta-analysis was conducted to estimate effect sizes at the population level. For recognition tasks, an advantage in recognition accuracy and speed for happy expressions was found for all stimulus types. In contrast, for visual search tasks, moderator analysis revealed that a happy face detection advantage was restricted to photographic faces, whereas a clear angry face advantage was found for schematic and "smiley" faces. A robust detection advantage for nonhappy faces was observed even when stimulus emotionality was distorted by inversion or rearrangement of the facial features, suggesting that visual features primarily drive the search. We conclude that the recognition advantage for happy faces is a genuine phenomenon related to processing of facial expression category and affective valence. In contrast, detection advantages for either happy (photographic stimuli) or nonhappy (schematic) faces are contingent on visual stimulus features rather than facial expression, and may not involve categorical or affective processing. (c) 2015 APA, all rights reserved.
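The pooling procedure described above can be illustrated with a standard random-effects model for correlation effect sizes: each r is Fisher-z transformed, studies are weighted by inverse variance plus a between-study variance term, and the summary is back-transformed to r. This is a generic sketch using the DerSimonian-Laird estimator, not necessarily the exact estimator used in the paper.

```python
import numpy as np

def random_effects_meta(r, n):
    """Random-effects meta-analysis of correlation effect sizes (sketch).

    Returns the pooled correlation and its 95% confidence interval.
    """
    r = np.asarray(r, dtype=float)
    n = np.asarray(n, dtype=float)
    z = np.arctanh(r)            # Fisher z transform
    v = 1.0 / (n - 3.0)          # within-study variance of z
    w = 1.0 / v
    # Fixed-effect summary and Q statistic for heterogeneity
    z_fe = np.sum(w * z) / np.sum(w)
    Q = np.sum(w * (z - z_fe) ** 2)
    k = len(z)
    # DerSimonian-Laird between-study variance (tau^2)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - (k - 1)) / c)
    # Random-effects weights and pooled estimate
    w_re = 1.0 / (v + tau2)
    z_re = np.sum(w_re * z) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    # Back-transform the pooled z and its 95% CI to the r metric
    ci = (np.tanh(z_re - 1.96 * se), np.tanh(z_re + 1.96 * se))
    return np.tanh(z_re), ci
```

With homogeneous inputs (identical r across studies) the estimator reduces to the fixed-effect result, since Q is zero and tau² is truncated at zero.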
Neath-Tavares, Karly N.; Itier, Roxane J.
2017-01-01
Research suggests an important role of the eyes and mouth for discriminating facial expressions of emotion. A gaze-contingent procedure was used to test the impact of fixation to facial features on the neural response to fearful, happy and neutral facial expressions in an emotion discrimination (Exp.1) and an oddball detection (Exp.2) task. The N170 was the only eye-sensitive ERP component, and this sensitivity did not vary across facial expressions. In both tasks, compared to neutral faces, responses to happy expressions were seen as early as 100–120ms occipitally, while responses to fearful expressions started around 150ms, on or after the N170, at both occipital and lateral-posterior sites. Analyses of scalp topographies revealed different distributions of these two emotion effects across most of the epoch. Emotion processing interacted with fixation location at different times between tasks. Results suggest a role of both the eyes and mouth in the neural processing of fearful expressions and of the mouth in the processing of happy expressions, before 350ms. PMID:27430934
Lewandowski, Zdzisław
2015-09-01
The project aimed to answer two questions: to what extent does a change in the size, height, or width of selected facial features influence the assessment of likeness between an original female composite portrait and a modified one, and does the sex of the judge affect the perception of likeness of facial features? The first stage of the project consisted of creating an image of an averaged female face. The basic facial features (eyes, nose, and mouth) were then cut out of the averaged face, and each feature was transformed in three ways: its overall size was reduced or enlarged, its height was reduced or enlarged, and its width was widened or narrowed. In each of the six feature-alteration methods, the intensity of modification reached up to 20% of the original size, in steps of 2%. The altered features were then reattached to the original faces and retouched. The third stage consisted of judges of both sexes assessing the extent of likeness between the unmodified averaged composite portrait and the modified portraits. The results indicate significant differences in the assessed likeness of portraits with modified features relative to the originals. Images with changes in the size and height of the nose received the lowest likeness scores, indicating that these changes were perceived as the most important. Photos with changes in lip vermilion height, lip width, and the height and width of the eye slit, in turn, received high likeness scores despite large changes, which indicates that these modifications were perceived as less important than the other features investigated.
Colloff, Melissa F; Flowe, Heather D
2016-06-01
False face recognition rates are sometimes higher when faces are learned while under the influence of alcohol. Alcohol myopia theory (AMT) proposes that acute alcohol intoxication during face learning causes people to attend to only the most salient features of a face, impairing the encoding of less salient facial features. Yet, there is currently no direct evidence to support this claim. Our objective was to test whether acute alcohol intoxication impairs face learning by causing subjects to attend to a salient (i.e., distinctive) facial feature over other facial features, as per AMT. We employed a balanced placebo design (N = 100). Subjects in the alcohol group were dosed to achieve a blood alcohol concentration (BAC) of 0.06 %, whereas the no alcohol group consumed tonic water. Alcohol expectancy was controlled. Subjects studied faces with or without a distinctive feature (e.g., scar, piercing). An old-new recognition test followed. Some of the test faces were "old" (i.e., previously studied), and some were "new" (i.e., not previously studied). We varied whether the new test faces had a previously studied distinctive feature versus other familiar characteristics. Intoxicated and sober recognition accuracy was comparable, but subjects in the alcohol group made more positive identifications overall compared to the no alcohol group. The results are not in keeping with AMT. Rather, a more general cognitive mechanism appears to underlie false face recognition in intoxicated subjects. Specifically, acute alcohol intoxication during face learning results in more liberal choosing, perhaps because of an increased reliance on familiarity.
Shu, Ting; Zhang, Bob; Yan Tang, Yuan
2017-04-01
Researchers have recently discovered that Diabetes Mellitus can be detected through a non-invasive computerized method. However, the focus has been on facial block color features. In this paper, we extensively study the effects of texture features extracted from specific facial regions for detecting Diabetes Mellitus using eight texture extractors. The eight methods are from four texture feature families: (1) statistical texture features: Image Gray-scale Histogram, Gray-level Co-occurrence Matrix, and Local Binary Pattern; (2) structural texture features: Voronoi Tessellation; (3) signal-processing-based texture features: Gaussian, Steerable, and Gabor filters; and (4) model-based texture features: Markov Random Field. To determine the most appropriate extractor with optimal parameter(s), various parameter settings of each extractor were tested. For each extractor, the same dataset (284 Diabetes Mellitus and 231 Healthy samples), classifiers (k-Nearest Neighbors and Support Vector Machines), and validation method (10-fold cross validation) were used. According to the experiments, the first and third families achieved a better outcome at detecting Diabetes Mellitus than the other two. The best texture feature extractor for Diabetes Mellitus detection is the Image Gray-scale Histogram with bin number=256, obtaining an accuracy of 99.02%, a sensitivity of 99.64%, and a specificity of 98.26% using SVM. Copyright © 2017 Elsevier Ltd. All rights reserved.
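The winning pipeline described above (a 256-bin gray-scale histogram fed to a classifier under 10-fold cross validation) can be sketched as follows. The sketch uses a simple k-nearest-neighbor classifier, one of the two classifiers the authors report, and synthetic patches in place of real facial-region images; all data here are illustrative.

```python
import numpy as np

def grayscale_histogram(image, bins=256):
    """Normalized bin-count histogram of a gray-scale image region."""
    hist, _ = np.histogram(image.ravel(), bins=bins, range=(0, 256))
    return hist / hist.sum()

def knn_cv_accuracy(X, y, k=1, folds=10):
    """k-NN accuracy under 10-fold cross validation (Euclidean distance)."""
    rng = np.random.default_rng(1)
    idx = rng.permutation(len(y))
    correct = 0
    for f in range(folds):
        test = idx[f::folds]                  # every folds-th sample
        train = np.setdiff1d(idx, test)
        for i in test:
            d = np.linalg.norm(X[train] - X[i], axis=1)
            nearest = train[np.argsort(d)[:k]]
            pred = np.bincount(y[nearest]).argmax()  # majority vote
            correct += int(pred == y[i])
    return correct / len(y)

# Synthetic stand-ins for facial regions: darker patches for one class,
# lighter patches for the other.
rng = np.random.default_rng(0)
y = np.repeat([0, 1], 40)
X = np.array([grayscale_histogram(rng.normal(90 + 40 * c, 12, (32, 32)).clip(0, 255))
              for c in y])
acc = knn_cv_accuracy(X, y)
```

On these well-separated synthetic classes the cross-validated accuracy is essentially perfect; the 99.02% figure in the abstract refers to the authors' real dataset, not this toy example.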
Facial nerve paralysis associated with temporal bone masses.
Nishijima, Hironobu; Kondo, Kenji; Kagoya, Ryoji; Iwamura, Hitoshi; Yasuhara, Kazuo; Yamasoba, Tatsuya
2017-10-01
To investigate the clinical and electrophysiological features of facial nerve paralysis (FNP) due to benign temporal bone masses (TBMs) and elucidate its differences from Bell's palsy. FNP, assessed using the House-Brackmann (HB) grading system and electroneurography (ENoG), was compared retrospectively. We reviewed 914 patient records and identified 31 patients with FNP due to benign TBMs. Moderate FNP (HB Grades II-IV) was dominant for facial nerve schwannoma (FNS) (n=15), whereas severe FNP (Grades V and VI) was dominant for cholesteatomas (n=8) and hemangiomas (n=3). The average ENoG value was 19.8% for FNS, 15.6% for cholesteatoma, and 0% for hemangioma. Analysis of the correlation between HB grade and ENoG value for FNP due to TBMs and Bell's palsy revealed that, given the same ENoG value, the corresponding HB grade was best for FNS, followed by cholesteatoma, and worst in Bell's palsy. Facial nerve damage caused by benign TBMs could depend on the underlying pathology. Facial movement and ENoG values did not correlate when comparing TBMs and Bell's palsy. When the HB grade is found to be unexpectedly better than the ENoG value would suggest, TBMs should be included in the differential diagnosis. Copyright © 2017 Elsevier B.V. All rights reserved.
The semiology of febrile seizures: Focal features are frequent.
Takasu, Michihiko; Kubota, Tetsuo; Tsuji, Takeshi; Kurahashi, Hirokazu; Numoto, Shingo; Watanabe, Kazuyoshi; Okumura, Akihisa
2017-08-01
To clarify the semiology of febrile seizures (FS) and to determine the frequency of FS with symptoms suggestive of focal onset. FS symptoms in children were reported within 24h of seizure onset by the parents using a structured questionnaire consisting principally of closed-ended questions. We focused on events at seizure commencement, including changes in behavior and facial expression, and ocular and oral symptoms. We also investigated the autonomic and motor symptoms developing during seizures. The presence or absence of focal and limbic features was determined for each patient. The associations of certain focal and limbic features with patient characteristics were assessed. Information was obtained on FS in 106 children. Various events were recorded at seizure commencement. Behavioral changes were observed in 35 children, changes in facial expression in 53, ocular symptoms in 78, and oral symptoms in 90. In terms of events during seizures, autonomic symptoms were recognized in 78, and convulsive motor symptoms were recognized in 68 children. Focal features were evident in 81 children; 38 children had two or more such features. Limbic features were observed in 44 children, 9 of whom had two or more such features. There was no significant relationship between any patient characteristic and the numbers of focal or limbic features. The semiology of FS varied widely among children, and symptoms suggestive of focal onset were frequent. FS of focal onset may be more common than is generally thought. Copyright © 2017 Elsevier Inc. All rights reserved.
Dwarfism with gloomy face: a new syndrome with features of 3-M syndrome.
Le Merrer, M; Brauner, R; Maroteaux, P
1991-01-01
Nine children with primordial dwarfism are described and a new syndrome is delineated. The significant features of this syndrome include facial dysmorphism with gloomy face and very short stature, but no radiological abnormality or hormone deficiency. Mental development is normal. The mode of inheritance seems to be autosomal recessive because of consanguinity in three of the four sibships. Some overlap with the 3-M syndrome is discussed but the autonomy of the gloomy face syndrome seems to be real. PMID:2051454
Obstructive Sleep Apnea in Women: Study of Speech and Craniofacial Characteristics.
Tyan, Marina; Espinoza-Cuadros, Fernando; Fernández Pozo, Rubén; Toledano, Doroteo; Lopez Gonzalo, Eduardo; Alcazar Ramirez, Jose Daniel; Hernandez Gomez, Luis Alfonso
2017-11-06
Obstructive sleep apnea (OSA) is a common sleep disorder characterized by frequent cessation of breathing lasting 10 seconds or longer. The diagnosis of OSA is performed through an expensive procedure, which requires an overnight stay at the hospital. This has led to several proposals based on the analysis of patients' facial images and speech recordings as an attempt to develop simpler and cheaper methods to diagnose OSA. The objective of this study was to analyze possible relationships between OSA and speech and facial features in a female population, whether these possible connections may be affected by the specific clinical characteristics of the OSA population and, more specifically, to explore how the connection between OSA and speech and facial features can be affected by gender. All subjects were Spanish patients suspected of suffering from OSA and referred to a sleep disorders unit. Voice recordings and photographs were collected in a supervised but not highly controlled way, trying to test a scenario close to realistic clinical practice where OSA is assessed using an app running on a mobile device. Furthermore, clinical variables such as weight, height, age, and cervical perimeter, which are usually reported as predictors of OSA, were also gathered. Acoustic analysis is centered on sustained vowels. Facial analysis consists of a set of local craniofacial features related to OSA, which were extracted from images after detecting facial landmarks using active appearance models. To study the probable connection of OSA with speech and craniofacial features, correlations among the apnea-hypopnea index (AHI), clinical variables, and acoustic and facial measurements were analyzed. The results obtained for the female population indicate mainly weak correlations (r values between .20 and .39).
Correlations between AHI, clinical variables, and speech features show the prevalence of formant frequencies over bandwidths, with F2/i/ being the most appropriate formant frequency for OSA prediction in women. Results obtained for the male population indicate mainly very weak correlations (r values between .01 and .19). In this case, bandwidths prevail over formant frequencies. Correlations between AHI, clinical variables, and craniofacial measurements are very weak. In accordance with previous studies, some clinical variables were found to be good predictors of OSA. In addition, strong correlations were found between AHI and some clinical variables with speech and facial features. Regarding speech features, the results show the prevalence of the formant frequency F2/i/ over the rest of the features as an OSA-predictive feature for the female population. Although the correlation reported is weak, this study aims to find traces that could explain the possible connection between OSA and speech in women. In the case of craniofacial measurements, the results show that some features that can be used for predicting OSA in male patients are not suitable for the female population. ©Marina Tyan, Fernando Espinoza-Cuadros, Rubén Fernández Pozo, Doroteo Toledano, Eduardo Lopez Gonzalo, Jose Daniel Alcazar Ramirez, Luis Alfonso Hernandez Gomez. Originally published in JMIR Mhealth and Uhealth (http://mhealth.jmir.org), 06.11.2017.
Aberrant patterns of visual facial information usage in schizophrenia.
Clark, Cameron M; Gosselin, Frédéric; Goghari, Vina M
2013-05-01
Deficits in facial emotion perception have been linked to poorer functional outcome in schizophrenia. However, the relationship between abnormal emotion perception and functional outcome remains poorly understood. To better understand the nature of facial emotion perception deficits in schizophrenia, we used the Bubbles Facial Emotion Perception Task to identify differences in usage of visual facial information in schizophrenia patients (n = 20) and controls (n = 20), when differentiating between angry and neutral facial expressions. As hypothesized, schizophrenia patients required more facial information than controls to accurately differentiate between angry and neutral facial expressions, and they relied on different facial features and spatial frequencies to differentiate these facial expressions. Specifically, schizophrenia patients underutilized the eye regions, overutilized the nose and mouth regions, and virtually ignored information presented at the lowest levels of spatial frequency. In addition, a post hoc one-tailed t test revealed a positive relationship of moderate strength between the degree of divergence from "normal" visual facial information usage in the eye region and lower overall social functioning. These findings provide direct support for aberrant patterns of visual facial information usage in schizophrenia in differentiating between socially salient emotional states. © 2013 American Psychological Association
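The Bubbles procedure referred to above reveals, on each trial, only a few randomly placed Gaussian apertures of the face; which apertures support correct responses indicates the facial information a group relies on. Below is a minimal sketch of how one trial's revealing mask could be generated (parameter names and values are illustrative; the actual task additionally samples across spatial-frequency bands):

```python
import numpy as np

def bubbles_mask(shape, n_bubbles, sigma, rng):
    """One trial's 'Bubbles' mask: the sum of a few Gaussian apertures,
    clipped to [0, 1]. Multiplying the face image by this mask reveals
    only the sampled facial regions."""
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    mask = np.zeros(shape)
    for _ in range(n_bubbles):
        cy, cx = rng.integers(0, h), rng.integers(0, w)  # aperture center
        mask += np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))
    return np.clip(mask, 0.0, 1.0)
```

Estimating the information used then amounts to correlating, pixel by pixel, the masks shown across trials with response accuracy, separately for each group.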
Wen, Yi Feng; Wong, Hai Ming; McGrath, Colman Patrick
2017-01-01
Existing studies on facial growth were mostly cross-sectional in nature and only a limited number of facial measurements were investigated. The purposes of this study were to longitudinally investigate facial growth of Chinese in Hong Kong from 12 through 15 to 18 years of age and to compare the magnitude of growth changes between genders. Standardized frontal and lateral facial photographs were taken from 266 (149 females and 117 males) and 265 (145 females and 120 males) participants, respectively, at all three age levels. Linear and angular measurements, profile inclinations, and proportion indices were recorded. Statistical analyses were performed to investigate growth changes of facial features. Comparisons were made between genders in terms of the magnitude of growth changes from ages 12 to 15, 15 to 18, and 12 to 18 years. For the overall face, all linear measurements increased significantly (p < 0.05) except for height of the lower profile in females (p = 0.069) and width of the face in males (p = 0.648). In both genders, the increase in the height of the eye fissure was around 10% (p < 0.001). There was a significant decrease in the nasofrontal angle (p < 0.001) and an increase in the nasofacial angle (p < 0.001) in both genders, and these changes were larger in males. The vermilion-total upper lip height index remained stable in females (p = 0.770) but increased in males (p = 0.020). The nasofrontal angle (effect size: 0.55) and the lower vermilion contour index (effect size: 0.59) demonstrated a large magnitude of gender difference in the amount of growth changes from 12 to 18 years. Growth changes of facial features and gender differences in the magnitude of facial growth were determined. The findings may benefit different clinical specialties and other nonclinical fields where facial growth is of interest.
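The gender differences in growth change reported above are quantified with effect sizes (e.g., 0.55 for the nasofrontal angle). A pooled-standard-deviation Cohen's d is the usual statistic for such two-group comparisons; the sketch below assumes that convention, which the abstract does not state explicitly.

```python
import math

def cohens_d(a, b):
    """Cohen's d with pooled standard deviation for two independent
    samples, e.g., per-participant growth changes (age 12 to 18) in the
    nasofrontal angle for males vs. females. Illustrative only."""
    na, nb = len(a), len(b)
    ma = sum(a) / na
    mb = sum(b) / nb
    # Unbiased sample variances
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    # Pooled standard deviation
    sp = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / sp
```

By convention, |d| around 0.2 is considered small, 0.5 medium, and 0.8 large, so the reported 0.55 and 0.59 sit in the medium-to-large range.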
Jiang, Xi; Zhang, Yu; Chen, Bo; Lin, Ye
2017-04-01
Extraction socket remodeling and ridge preservation strategies have been extensively explored. To evaluate the efficacy of applying a micro-titanium stent as a pressure-bearing device on extraction socket remodeling of a maxillary anterior tooth. Twenty-four patients with an extraction socket of a maxillary incisor were treated with spontaneous healing (control group) or by applying a micro-titanium stent as a facial pressure-bearing device over the facial bone wall (test group). Two virtual models obtained from cone beam computed tomography data before extraction and 4 months after healing were 3-dimensionally superimposed. Facial bone wall resorption, extraction socket remodeling features, and ridge width preservation rate were determined and compared between the groups. A thin facial bone wall resulted in marked resorption in both groups. The greatest palatal shifting distance of the facial bone was located at the coronal level in the control group, but at the middle level in the test group. Compared with the original extraction socket, 87.61 ± 5.88% of ridge width was preserved in the test group and 55.09 ± 14.46% in the control group. Owing to its facial pressure-bearing property, the rigid micro-titanium stent might preserve ridge width and alter the resorption features of the extraction socket. © 2016 Wiley Periodicals, Inc.
Face processing in chronic alcoholism: a specific deficit for emotional features.
Maurage, P; Campanella, S; Philippot, P; Martin, S; de Timary, P
2008-04-01
It is well established that chronic alcoholism is associated with a deficit in the decoding of emotional facial expression (EFE). Nevertheless, it is still unclear whether this deficit is specific to emotions or due to a more general impairment in visual or facial processing. This study was designed to clarify this issue using multiple control tasks and the subtraction method. Eighteen patients suffering from chronic alcoholism and 18 matched healthy control subjects were asked to perform several tasks evaluating (1) basic visuo-spatial and facial identity processing; (2) simple reaction times; and (3) complex facial feature identification (namely age, emotion, gender, and race). Accuracy and reaction times were recorded. Alcoholic patients showed preserved performance for visuo-spatial and facial identity processing, but their performance was impaired for visuo-motor abilities and for the detection of complex facial aspects. More importantly, the subtraction method showed that alcoholism is associated with a specific EFE decoding deficit, still present when visuo-motor slowing is controlled for. These results offer a post hoc confirmation of earlier data showing an EFE decoding deficit in alcoholism by strongly suggesting that this deficit is specific to emotions. This may have implications for clinical situations, where emotional impairments are frequently observed among alcoholic subjects.
Eruption of the permanent maxillary canines in relation to mandibular second molar maturity.
Perinetti, Giuseppe; Callovi, Marilena; Salgarello, Stefano; Biasotto, Matteo; Contardo, Luca
2013-07-01
To evaluate the timing of spontaneous maxillary canine eruption in relation to stages of mandibular second molar maturation. Potential confounding effects from such factors as age, growth phase, and facial features were also explored. A sample of 106 healthy subjects (48 females and 58 males; age range, 9.4-14.3 years) with both permanent maxillary canines in the final phase of intraoral eruption was included. Mandibular second molar maturation (stages E to H) was assessed according to the method of Demirjian. Skeletal maturity was determined using the cervical vertebral maturation (CVM) method. Facial vertical and sagittal relationships were evaluated by recording the Sella-Nasion/mandibular plane (SN/MP) angle and the ANB angle. An ordered multiple logistic regression was run to evaluate the adjusted correlation of each parameter with the mandibular second molar maturational stage. Overall, the prevalence of the different second molar maturational stages was 36.8%, 37.8%, and 27.4% for stages E, F, and G, respectively. According to the regression model, this relation was not influenced by sex, CVM stage, SN/MP angle, or ANB angle. Irrespective of sex, growth phase, and facial features, the maturational stage of the mandibular second molar may be a reliable indicator for the timing of spontaneous eruption of the maxillary canine.
Recombination of an intrachromosomal paracentric insertion of chromosome 3
DOE Office of Scientific and Technical Information (OSTI.GOV)
Best, R.G.; Burnett, W.J.; Brock, J.K.
1994-09-01
Cytogenetic studies were initiated on a newborn female due to multiple congenital anomalies including microcephaly, clinodactyly, abnormal positioning of hands, left facial palsy, heart defect, sacral dimple, and facial dysmorphic features. Facial features were described as low-set rotated ears, nystagmus, and a small, flattened nose. A structural rearrangement of the long arm of chromosome 3 was observed with a complex banding pattern. Study of parental chromosomes revealed a normal male pattern for the father, and an intrachromosomal insertion on the long arm of chromosome 3 for the mother described as 46,XX,dir ins(3)(q21q23q26.2). Further characterization of the proband's structurally abnormal chromosome 3 revealed a karyotype best described as 46,XX,rec(3),dup q23→q26.2::q21→q23,dir ins(3)(q21q23q26.2), which is a partial duplication of both the inserted segment and the intervening segment between the inserted segment and the insertion site. This would appear to be the result of a three-strand double crossover within the insertion loop. Molecular cytogenetic studies are presently underway to further elucidate the chromosome structure of the proband and her mother.
Choi, Hyoung Ju; Shin, Sung Hee
2016-08-01
The purpose of this study was to examine the effects of a facial muscle exercise program including facial massage on the facial muscle function, subjective symptoms related to paralysis and depression in patients with facial palsy. This study was a quasi-experimental research with a non-equivalent control group non-synchronized design. Participants were 70 patients with facial palsy (experimental group 35, control group 35). For the experimental group, the facial muscular exercise program including facial massage was performed 20 minutes a day, 3 times a week for two weeks. Data were analyzed using descriptive statistics, χ²-test, Fisher's exact test and independent sample t-test with the SPSS 18.0 program. Facial muscular function of the experimental group improved significantly compared to the control group. There was no significant difference in symptoms related to paralysis between the experimental group and control group. The level of depression in the experimental group was significantly lower than the control group. Results suggest that a facial muscle exercise program including facial massage is an effective nursing intervention to improve facial muscle function and decrease depression in patients with facial palsy.
Lee, Cha Gon; Park, Sang-Jin; Yim, Shin-Young; Sohn, Young Bae
2013-08-01
Potocki-Lupski syndrome (PTLS [MIM 610883]) is a recently recognized microduplication syndrome associated with 17p11.2. It is characterized by mild facial dysmorphic features, hypermetropia, infantile hypotonia, failure to thrive, mental retardation, autistic spectrum disorders, behavioral abnormalities, sleep apnea, and cardiovascular anomalies. In several studies, the critical PTLS region was deduced to be 1.3 Mb in length, and included RAI1 and 17 other genes. We report a 3-year-old Korean boy with the smallest duplication in 17p11.2 and a milder phenotype. He had no family history of neurologic disease or developmental delay and no history of seizure, autistic features, or behavior problems. He showed subtle facial dysmorphic features (dolichocephaly and a mildly asymmetric smile) and flat feet. All laboratory tests were normal and he had no evidence of internal organ anomalies. He was found to have mild intellectual disability (full-scale IQ 65 on the K-WPPSI) and language developmental delay (a language age of 2.2 years on PRESS). Array comparative genomic hybridization (CGH) showed an approximately 0.25 Mb microduplication on chromosome 17p11.2 containing four RefSeq (NCBI reference sequence) genes, including RAI1 [arr 17p11.2(17,575,978-17,824,623)×3]. When compared with previously reported cases, the milder phenotype of our patient may be associated with the smallest duplication in 17p11.2, 0.25 Mb in length. Copyright © 2012 The Japanese Society of Child Neurology. Published by Elsevier B.V. All rights reserved.
Batool, Nazre; Chellappa, Rama
2014-09-01
Facial retouching is widely used in the media and entertainment industry. Professional software usually requires a minimum level of user expertise to achieve desirable results. In this paper, we present an algorithm to detect facial wrinkles/imperfections. We believe that any such algorithm would be amenable to facial retouching applications. The detection of wrinkles/imperfections allows these skin features to be processed differently from the surrounding skin without much user interaction. For detection, Gabor filter responses along with a texture orientation field are used as image features. A bimodal Gaussian mixture model (GMM) represents the distributions of Gabor features of normal skin versus skin imperfections. A Markov random field model is then used to incorporate the spatial relationships among neighboring pixels for their GMM distributions and texture orientations. An expectation-maximization algorithm then classifies skin versus skin wrinkles/imperfections. Once detected automatically, wrinkles/imperfections are removed completely instead of being blended or blurred. We propose an exemplar-based constrained texture synthesis algorithm to inpaint the irregularly shaped gaps left by the removal of detected wrinkles/imperfections. We present results of experiments conducted on images downloaded from the Internet to show the efficacy of our algorithms.
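The core of the detection pipeline described in this abstract (Gabor filter responses modeled as a bimodal Gaussian mixture, separated by EM) can be sketched on a synthetic image. This is an illustrative simplification, not the authors' implementation: it omits the orientation field and the Markov random field spatial term, and the kernel parameters and image are hypothetical.

```python
import numpy as np

def gabor_kernel(size=15, sigma=3.0, theta=np.pi / 2, lam=6.0):
    # real part of a Gabor filter; theta = pi/2 orients it to respond
    # to horizontal line-like structures such as a horizontal wrinkle
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def conv2_valid(img, k):
    # direct "valid" 2-D correlation; adequate for a small demo image
    kh, kw = k.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

def em_bimodal(x, iters=60):
    # EM for a two-component 1-D Gaussian mixture (normal skin vs. wrinkle)
    mu = np.array([x.min(), x.max()], dtype=float)
    var = np.full(2, x.var() + 1e-6)
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        dens = pi * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
        resp = dens / (dens.sum(axis=1, keepdims=True) + 1e-300)  # E-step
        n = resp.sum(axis=0)
        mu = (resp * x[:, None]).sum(axis=0) / n                  # M-step
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / n + 1e-6
        pi = n / n.sum()
    return mu, resp

# synthetic "skin" patch with one dark horizontal wrinkle
rng = np.random.default_rng(0)
img = 0.8 + 0.02 * rng.standard_normal((64, 64))
img[30:33, :] -= 0.4

feat = np.abs(conv2_valid(img, gabor_kernel()))   # Gabor magnitude feature
mu, resp = em_bimodal(feat.ravel())
wrinkle = (resp[:, np.argmax(mu)] > 0.5).reshape(feat.shape)
```

Pixels assigned to the high-mean mixture component form the wrinkle mask; in the full method this soft assignment would be refined by the MRF before inpainting.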
Sotos syndrome: An interesting disorder with gigantism.
Nalini, A; Biswas, Arundhati
2008-07-01
We report the case of a 16-year-old boy diagnosed with Sotos syndrome, with the rare association of bilateral primary optic atrophy and epilepsy. He presented with accelerated linear growth, the characteristic facial gestalt with distinctive facial features, seizures, and progressive diminution of vision in both eyes. He had features of gigantism from early childhood. Brain MRI and endocrine function tests were normal. This case is of interest because clinicians should be aware of this not-so-rare disorder; in addition to the classic features, the patient showed two unusual associations with Sotos syndrome.
Patterns of Eye Movements When Observers Judge Female Facial Attractiveness
Zhang, Yan; Wang, Xiaoying; Wang, Juan; Zhang, Lili; Xiang, Yu
2017-01-01
The purpose of the present study is to explore the fixation patterns underlying explicit judgments of attractiveness and to infer which features are important in judging facial attractiveness. Behavioral studies on the perceptual cues for female facial attractiveness have implicated three potentially important features: averageness, symmetry, and sexual dimorphism. However, these studies did not explain which regions of facial images influence judgments of attractiveness. The present research therefore recorded the eye movements of 24 male and 19 female participants as they rated a series of 30 photographs of female faces for attractiveness. Results demonstrated the following: (1) fixation was longer and more frequent on the nose of female faces than on the eyes or mouth (no difference between the eyes and the mouth); (2) the average pupil diameter at the nose region was larger than at the eyes or mouth (no difference between the eyes and the mouth); (3) male participants made significantly more fixations than female participants; (4) observers first fixated on the eyes and mouth (no difference between the eyes and the mouth) before fixating on the nose area. In general, participants attended predominantly to the nose when forming attractiveness judgments. The results of this study add a new dimension to the existing literature on the judgment of facial attractiveness. The major contribution of the present study is the finding that the nose area is vital in the judgment of facial attractiveness. This finding establishes a contribution of partial processing to female facial attractiveness judgments during eye tracking. PMID:29209242
Event-related theta synchronization predicts deficit in facial affect recognition in schizophrenia.
Csukly, Gábor; Stefanics, Gábor; Komlósi, Sarolta; Czigler, István; Czobor, Pál
2014-02-01
Growing evidence suggests that abnormalities in the synchronized oscillatory activity of neurons in schizophrenia may lead to impaired neural activation and temporal coding and thus lead to neurocognitive dysfunctions, such as deficits in facial affect recognition. To gain an insight into the neurobiological processes linked to facial affect recognition, we investigated both induced and evoked oscillatory activity by calculating the Event Related Spectral Perturbation (ERSP) and the Inter Trial Coherence (ITC) during facial affect recognition. Fearful and neutral faces as well as nonface patches were presented to 24 patients with schizophrenia and 24 matched healthy controls while EEG was recorded. The participants' task was to recognize facial expressions. Because previous findings with healthy controls showed that facial feature decoding was associated primarily with oscillatory activity in the theta band, we analyzed ERSP and ITC in this frequency band in the time interval of 140-200 ms, which corresponds to the N170 component. Event-related theta activity and phase-locking to facial expressions, but not to nonface patches, predicted emotion recognition performance in both controls and patients. Event-related changes in theta amplitude and phase-locking were found to be significantly weaker in patients compared with healthy controls, which is in line with previous investigations showing decreased neural synchronization in the low frequency bands in patients with schizophrenia. Neural synchrony is thought to underlie distributed information processing. Our results indicate a less effective functioning in the recognition process of facial features, which may contribute to a less effective social cognition in schizophrenia. PsycINFO Database Record (c) 2014 APA, all rights reserved.
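The phase-locking measure used in this study, inter-trial coherence (ITC), has a compact generic definition: the magnitude of the mean unit phasor across trials. The sketch below uses synthetic phase samples, not the study's theta-band EEG data, to illustrate the quantity.

```python
import numpy as np

def inter_trial_coherence(phases):
    # ITC = length of the mean unit phasor across trials:
    # 1 for perfect phase-locking, near 0 for random phases
    return np.abs(np.mean(np.exp(1j * np.asarray(phases))))

rng = np.random.default_rng(42)
n_trials = 100
locked = 0.3 * rng.standard_normal(n_trials)          # phases cluster near 0
random_phase = rng.uniform(-np.pi, np.pi, n_trials)   # no phase-locking

itc_locked = inter_trial_coherence(locked)
itc_random = inter_trial_coherence(random_phase)
```

Weaker event-related phase-locking in patients, as reported above, would correspond to ITC values closer to the random-phase end of this scale.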
Recovering faces from memory: the distracting influence of external facial features.
Frowd, Charlie D; Skelton, Faye; Atherton, Chris; Pitchford, Melanie; Hepton, Gemma; Holden, Laura; McIntyre, Alex H; Hancock, Peter J B
2012-06-01
Recognition memory for unfamiliar faces is facilitated when contextual cues (e.g., head pose, background environment, hair and clothing) are consistent between study and test. By contrast, inconsistencies in external features, especially hair, promote errors in unfamiliar face-matching tasks. For the construction of facial composites, as carried out by witnesses and victims of crime, the role of external features (hair, ears, and neck) is less clear, although research does suggest their involvement. Here, over three experiments, we investigate the impact of external features for recovering facial memories using a modern, recognition-based composite system, EvoFIT. Participant-constructors inspected an unfamiliar target face and, one day later, repeatedly selected items from arrays of whole faces, with "breeding," to "evolve" a composite with EvoFIT; further participants (evaluators) named the resulting composites. In Experiment 1, the important internal-features (eyes, brows, nose, and mouth) were constructed more identifiably when the visual presence of external features was decreased by Gaussian blur during construction: higher blur yielded more identifiable internal-features. In Experiment 2, increasing the visible extent of external features (to match the target's) in the presented face-arrays also improved internal-features quality, although less so than when external features were masked throughout construction. Experiment 3 demonstrated that masking external-features promoted substantially more identifiable images than using the previous method of blurring external-features. Overall, the research indicates that external features are a distractive rather than a beneficial cue for face construction; the results also provide a much better method to construct composites, one that should dramatically increase identification of offenders.
Does Facial Amimia Impact the Recognition of Facial Emotions? An EMG Study in Parkinson’s Disease
Argaud, Soizic; Delplanque, Sylvain; Houvenaghel, Jean-François; Auffret, Manon; Duprez, Joan; Vérin, Marc; Grandjean, Didier; Sauleau, Paul
2016-01-01
According to embodied simulation theory, understanding other people’s emotions is fostered by facial mimicry. However, studies assessing the effect of facial mimicry on the recognition of emotion are still controversial. In Parkinson’s disease (PD), one of the most distinctive clinical features is facial amimia, a reduction in facial expressiveness, but patients also show emotional disturbances. The present study used the pathological model of PD to examine the role of facial mimicry on emotion recognition by investigating EMG responses in PD patients during a facial emotion recognition task (anger, joy, neutral). Our results evidenced a significant decrease in facial mimicry for joy in PD, essentially linked to the absence of reaction of the zygomaticus major and the orbicularis oculi muscles in response to happy avatars, whereas facial mimicry for expressions of anger was relatively preserved. We also confirmed that PD patients were less accurate in recognizing positive and neutral facial expressions and highlighted a beneficial effect of facial mimicry on the recognition of emotion. We thus provide additional arguments for embodied simulation theory suggesting that facial mimicry is a potential lever for therapeutic actions in PD even if it seems not to be necessarily required in recognizing emotion as such. PMID:27467393
Joint Patch and Multi-label Learning for Facial Action Unit Detection
Zhao, Kaili; Chu, Wen-Sheng; De la Torre, Fernando; Cohn, Jeffrey F.; Zhang, Honggang
2016-01-01
The face is one of the most powerful channels of nonverbal communication. The most commonly used taxonomy to describe facial behaviour is the Facial Action Coding System (FACS). FACS segments the visible effects of facial muscle activation into 30+ action units (AUs). AUs, which may occur alone or in thousands of combinations, can describe nearly all possible facial expressions. Most existing methods for automatic AU detection treat the problem using one-vs-all classifiers and fail to exploit dependencies among AUs and facial features. We introduce joint patch and multi-label learning (JPML) to address these issues. JPML leverages group sparsity by selecting a sparse subset of facial patches while learning a multi-label classifier. In four of five comparisons on three diverse datasets, CK+, GFT, and BP4D, JPML produced the highest average F1 scores in comparison with the state of the art. PMID:27382243
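The group-sparsity idea behind JPML can be illustrated with a toy sketch: multi-label logistic regression with a group penalty over patch-grouped features, fit by proximal gradient descent. This is not the paper's algorithm (JPML's loss, constraints, and AU-relation terms are richer); the data, dimensions, and penalty weight here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_patches, d = 200, 4, 5              # samples, facial patches, features/patch
X = rng.standard_normal((n, n_patches * d))
W_true = np.zeros((n_patches * d, 2))
W_true[0:d, 0] = 1.0                      # AU 1 driven only by patch 0
W_true[d:2 * d, 1] = 1.0                  # AU 2 driven only by patch 1
Y = (X @ W_true + 0.1 * rng.standard_normal((n, 2)) > 0).astype(float)

def prox_group(W, thresh):
    # group soft-thresholding: shrinks each patch's weight block as a whole,
    # zeroing out entire patches that carry no signal
    W = W.copy()
    for g in range(n_patches):
        blk = W[g * d:(g + 1) * d]
        nrm = np.linalg.norm(blk)
        W[g * d:(g + 1) * d] = max(0.0, 1.0 - thresh / (nrm + 1e-12)) * blk
    return W

W = np.zeros((n_patches * d, 2))
lr, lam = 0.2, 0.2
for _ in range(500):
    P = 1.0 / (1.0 + np.exp(-X @ W))      # per-AU sigmoid probabilities
    grad = X.T @ (P - Y) / n              # multi-label logistic gradient
    W = prox_group(W - lr * grad, lr * lam)

patch_norms = [np.linalg.norm(W[g * d:(g + 1) * d]) for g in range(n_patches)]
```

With this penalty, the weight blocks of uninformative patches collapse toward zero while the informative patches are retained, which is the "sparse subset of facial patches" behaviour described above.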
Muecas: A Multi-Sensor Robotic Head for Affective Human Robot Interaction and Imitation
Cid, Felipe; Moreno, Jose; Bustos, Pablo; Núñez, Pedro
2014-01-01
This paper presents a multi-sensor humanoid robotic head for human robot interaction. The design of the robotic head, Muecas, is based on ongoing research on the mechanisms of perception and imitation of human expressions and emotions. These mechanisms allow direct interaction between the robot and its human companion through the different natural language modalities: speech, body language and facial expressions. The robotic head has 12 degrees of freedom, in a human-like configuration, including eyes, eyebrows, mouth and neck, and has been designed and built entirely by IADeX (Engineering, Automation and Design of Extremadura) and RoboLab. A detailed description of its kinematics is provided along with the design of the most complex controllers. Muecas can be directly controlled by FACS (Facial Action Coding System), the de facto standard for facial expression recognition and synthesis. This feature facilitates its use by third party platforms and encourages the development of imitation and of goal-based systems. Imitation systems learn from the user, while goal-based ones use planning techniques to drive the user towards a final desired state. To show the flexibility and reliability of the robotic head, the paper presents a software architecture that is able to detect, recognize, classify and generate facial expressions in real time using FACS. This system has been implemented using the robotics framework, RoboComp, which provides hardware-independent access to the sensors in the head. Finally, the paper presents experimental results showing the real-time functioning of the whole system, including recognition and imitation of human facial expressions. PMID:24787636
Facial emotion recognition in Parkinson's disease: A review and new hypotheses
Vérin, Marc; Sauleau, Paul; Grandjean, Didier
2018-01-01
Abstract Parkinson's disease is a neurodegenerative disorder classically characterized by motor symptoms. Among them, hypomimia affects facial expressiveness and social communication and has a highly negative impact on patients' and relatives' quality of life. Patients also frequently experience nonmotor symptoms, including emotional‐processing impairments, leading to difficulty in recognizing emotions from faces. Aside from its theoretical importance, understanding the disruption of facial emotion recognition in PD is crucial for improving quality of life for both patients and caregivers, as this impairment is associated with heightened interpersonal difficulties. However, studies assessing abilities in recognizing facial emotions in PD still report contradictory outcomes. The origins of this inconsistency are unclear, and several questions (regarding the role of dopamine replacement therapy or the possible consequences of hypomimia) remain unanswered. We therefore undertook a fresh review of relevant articles focusing on facial emotion recognition in PD to deepen current understanding of this nonmotor feature, exploring multiple significant potential confounding factors, both clinical and methodological, and discussing probable pathophysiological mechanisms. This led us to examine recent proposals about the role of basal ganglia‐based circuits in emotion and to consider the involvement of facial mimicry in this deficit from the perspective of embodied simulation theory. We believe our findings will inform clinical practice and increase fundamental knowledge, particularly in relation to potential embodied emotion impairment in PD. © 2018 The Authors. Movement Disorders published by Wiley Periodicals, Inc. on behalf of International Parkinson and Movement Disorder Society. PMID:29473661
Detection of emotional faces: salient physical features guide effective visual search.
Calvo, Manuel G; Nummenmaa, Lauri
2008-08-01
In this study, the authors investigated how salient visual features capture attention and facilitate detection of emotional facial expressions. In a visual search task, a target emotional face (happy, disgusted, fearful, angry, sad, or surprised) was presented in an array of neutral faces. Faster detection of happy and, to a lesser extent, surprised and disgusted faces was found both under upright and inverted display conditions. Inversion slowed down the detection of these faces less than that of others (fearful, angry, and sad). Accordingly, the detection advantage involves processing of featural rather than configural information. The facial features responsible for the detection advantage are located in the mouth rather than the eye region. Computationally modeled visual saliency predicted both attentional orienting and detection. Saliency was greatest for the faces (happy) and regions (mouth) that were fixated earlier and detected faster, and there was close correspondence between the onset of the modeled saliency peak and the time at which observers initially fixated the faces. The authors conclude that visual saliency of specific facial features--especially the smiling mouth--is responsible for facilitated initial orienting, which thus shortens detection. (PsycINFO Database Record (c) 2008 APA, all rights reserved).
The morphometrics of "masculinity" in human faces.
Mitteroecker, Philipp; Windhager, Sonja; Müller, Gerd B; Schaefer, Katrin
2015-01-01
In studies of social inference and human mate preference, a wide but inconsistent array of tools for computing facial masculinity has been devised. Several of these approaches implicitly assumed that the individual expression of sexually dimorphic shape features, which we refer to as maleness, resembles facial shape features perceived as masculine. We outline a morphometric strategy for estimating separately the face shape patterns that underlie perceived masculinity and maleness, and for computing individual scores for these shape patterns. We further show how faces with different degrees of masculinity or maleness can be constructed in a geometric morphometric framework. In an application of these methods to a set of human facial photographs, we found that shape features typically perceived as masculine are wide faces with a wide inter-orbital distance, a wide nose, thin lips, and a large and massive lower face. The individual expressions of this combination of shape features--the masculinity shape scores--were the best predictor of rated masculinity among the compared methods (r = 0.5). The shape features perceived as masculine only partly resembled the average face shape difference between males and females (sexual dimorphism). Discriminant functions and Procrustes distances to the female mean shape were poor predictors of perceived masculinity.
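The shape-score construction described above can be sketched in its simplest form: estimate a "maleness" axis as the normalised difference of group mean shapes and project individuals onto it. This toy uses synthetic vectors rather than Procrustes landmark coordinates, and the sexual-dimorphism shift is invented; a perceived-masculinity score would instead use an axis regressed on human ratings, as the authors do.

```python
import numpy as np

rng = np.random.default_rng(7)
p = 6                                                # toy shape-space dimension
shift = np.array([1.0, 0.0, 0.5, 0.0, 0.0, -0.5])    # hypothetical dimorphism
males = rng.standard_normal((40, p)) + shift
females = rng.standard_normal((40, p))

# "maleness" axis: difference of group mean shapes, unit-normalised;
# an individual's score is the projection of their shape onto this axis
axis = males.mean(axis=0) - females.mean(axis=0)
axis /= np.linalg.norm(axis)
male_scores = males @ axis
female_scores = females @ axis
```

Faces with graded degrees of maleness can then be visualised by adding multiples of `axis` to the mean shape, which is the geometric-morphometric construction the abstract refers to.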
Facial expression recognition under partial occlusion based on fusion of global and local features
NASA Astrophysics Data System (ADS)
Wang, Xiaohua; Xia, Chen; Hu, Min; Ren, Fuji
2018-04-01
Facial expression recognition under partial occlusion is a challenging research problem. This paper proposes a novel framework for facial expression recognition under occlusion by fusing global and local features. For the global features, information entropy is first employed to locate the occluded region. Second, principal component analysis (PCA) is adopted to reconstruct the occluded region of the image. After that, a replacement strategy reconstructs the image by substituting the occluded region with the corresponding region of the best-matched image in the training set; a Pyramid Weber Local Descriptor (PWLD) feature is then extracted. Finally, the outputs of an SVM are fitted to probabilities of the target class using a sigmoid function. For the local features, an overlapping block-based method is adopted to extract WLD features, with each block weighted adaptively by information entropy; chi-square distance and similar-block summation are then applied to obtain the probability of each emotion class. Finally, decision-level fusion of the global and local features is performed based on the Dempster-Shafer theory of evidence. Experimental results on the Cohn-Kanade and JAFFE databases demonstrate the effectiveness and fault tolerance of this method.
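The decision-level fusion step can be sketched for the simplest case, where each branch's output is a probability vector over emotion classes (i.e., all belief mass sits on singleton hypotheses). In that case Dempster's combination rule reduces to an element-wise product renormalised by one minus the conflict. This is a generic illustration with hypothetical probabilities, not the paper's full evidence model.

```python
import numpy as np

def dempster_combine(m1, m2):
    # Dempster's rule for two mass functions concentrated on singleton
    # classes: element-wise product, renormalised by (1 - conflict),
    # where the conflict is the mass assigned to contradictory pairs
    joint = np.asarray(m1, dtype=float) * np.asarray(m2, dtype=float)
    total = joint.sum()                  # equals 1 - conflict
    if total == 0:
        raise ValueError("sources are in total conflict")
    return joint / total

# hypothetical per-class probabilities from the global and local branches
global_probs = np.array([0.6, 0.3, 0.1])   # e.g. happy, sad, neutral
local_probs = np.array([0.5, 0.1, 0.4])
fused = dempster_combine(global_probs, local_probs)
```

When both branches favour the same class, the fused belief in that class exceeds either input (here 0.30/0.37 ≈ 0.81 for the first class), which is what makes the fusion tolerant to one branch being degraded by occlusion.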
A View of the Therapy for Bell's Palsy Based on Molecular Biological Analyses of Facial Muscles.
Moriyama, Hiroshi; Mitsukawa, Nobuyuki; Itoh, Masahiro; Otsuka, Naruhito
2017-12-01
Details regarding the molecular biological features of Bell's palsy have not been widely reported in textbooks. We genetically analyzed facial muscles and clarified these points. We performed genetic analysis of facial muscle specimens from Japanese patients with severe (House-Brackmann facial nerve grading system V) and moderate (House-Brackmann facial nerve grading system III) dysfunction due to Bell's palsy. Microarray analysis of gene expression was performed using specimens from the healthy and affected sides, and gene expression was compared. Changes in gene expression were defined as an affected side/healthy side ratio of >1.5 or <0.5. We observed that the gene expression in Bell's palsy changes with the degree of facial nerve palsy. Especially, muscle, neuron, and energy category genes tended to fluctuate with the degree of facial nerve palsy. It is expected that this study will aid in the development of new treatments and diagnostic/prognostic markers based on the severity of facial nerve palsy.
Park, Kyu Hyung; Kim, Yong-Kyu; Woo, Se Joon; Kang, Se Woong; Lee, Won Ki; Choi, Kyung Seek; Kwak, Hyung Woo; Yoon, Ill Han; Huh, Kuhl; Kim, Jong Woo
2014-06-01
Iatrogenic occlusion of the ophthalmic artery and its branches is a rare but devastating complication of cosmetic facial filler injections. To investigate clinical and angiographic features of iatrogenic occlusion of the ophthalmic artery and its branches caused by cosmetic facial filler injections. Data from 44 patients with occlusion of the ophthalmic artery and its branches after cosmetic facial filler injections were obtained retrospectively from a national survey completed by members of the Korean Retina Society from 27 retinal centers. Clinical features were compared between patients grouped by angiographic findings and injected filler material. Visual prognosis and its relationship to angiographic findings and injected filler material. Ophthalmic artery occlusion was classified into 6 types according to angiographic findings. Twenty-eight patients had diffuse retinal and choroidal artery occlusions (ophthalmic artery occlusion, generalized posterior ciliary artery occlusion, and central retinal artery occlusion). Sixteen patients had localized occlusions (localized posterior ciliary artery occlusion, branch retinal artery occlusion, and posterior ischemic optic neuropathy). Patients with diffuse occlusions showed worse initial and final visual acuity and less visual gain compared with those having localized occlusions. Patients receiving autologous fat injections (n = 22) had diffuse ophthalmic artery occlusions, worse visual prognosis, and a higher incidence of combined brain infarction compared with patients having hyaluronic acid injections (n = 13). Clinical features of iatrogenic occlusion of the ophthalmic artery and its branches following cosmetic facial filler injections were diverse according to the location and extent of obstruction and the injected filler material. Autologous fat injections were associated with a worse visual prognosis and a higher incidence of combined cerebral infarction. 
Extreme caution and care should be taken during these injections, and physicians should be aware of a diverse spectrum of complications following cosmetic facial filler injections.
Tanaka, Hideaki
2016-01-01
Cosmetic makeup significantly influences facial perception. Because faces consist of similar physical structures, cosmetic makeup is typically used to highlight individual features, particularly those of the eyes (i.e., eye shadow) and mouth (i.e., lipstick). Though event-related potentials (ERPs) have been utilized to study various aspects of facial processing, the influence of cosmetics on specific ERP components remains unclear. The present study aimed to investigate the relationship between the application of cosmetic makeup and the amplitudes of the P1 and N170 event-related potential components during facial perception tasks. Moreover, the influence of visual perception on N170 amplitude was evaluated under three makeup conditions: Eye Shadow, Lipstick, and No Makeup. Electroencephalography was used to monitor 17 participants who were exposed to visual stimuli under each of these three makeup conditions. The results of the present study demonstrated that the Lipstick condition elicited a significantly greater N170 amplitude than the No Makeup condition, while P1 amplitude was unaffected by any of the conditions. Such findings indicate that the application of cosmetic makeup alters general facial perception but exerts no influence on the perception of low-level visual features. Collectively, these results support the notion that the application of makeup induces subtle alterations in the processing of facial stimuli, with a particular effect on the processing of specific facial components (i.e., the mouth), as reflected by changes in N170 amplitude. PMID:27656161
Maki, Yohko; Yoshida, Hiroshi; Yamaguchi, Tomoharu; Yamaguchi, Haruyasu
2013-01-01
Positivity recognition bias has been reported for facial expression as well as memory and visual stimuli in aged individuals, whereas emotional facial recognition in Alzheimer disease (AD) patients is controversial, with possible involvement of confounding factors such as deficits in spatial processing of non-emotional facial features and in verbal processing to express emotions. Thus, we examined whether recognition of positive facial expressions was preserved in AD patients, by adapting a new method that eliminated the influences of these confounding factors. Sensitivity to six basic facial expressions (happiness, sadness, surprise, anger, disgust, and fear) was evaluated in 12 outpatients with mild AD, 17 aged normal controls (ANC), and 25 young normal controls (YNC). To eliminate the factors related to non-emotional facial features, averaged faces were prepared as stimuli. To eliminate the factors related to verbal processing, the participants were required to match the stimulus image to an answer image, avoiding the use of verbal labels. In recognition of happiness, there was no difference in sensitivity between YNC and ANC, or between ANC and AD patients. AD patients were less sensitive than ANC in recognition of sadness, surprise, and anger. ANC were less sensitive than YNC in recognition of surprise, anger, and disgust. Within the AD patient group, sensitivity to happiness was significantly higher than sensitivity to the other five expressions. In AD patients, recognition of happiness was relatively preserved: it was the most sensitive of the six expressions and was robust to the influences of age and disease.
Fink, B; Matts, P J; Brauckmann, C; Gundlach, S
2018-04-01
Previous studies investigating the effects of skin surface topography and colouration cues on the perception of female faces reported a differential weighting for the perception of skin topography and colour evenness, where topography was a stronger visual cue for the perception of age, whereas skin colour evenness was a stronger visual cue for the perception of health. We extend these findings in a study of the effect of skin surface topography and colour evenness cues on the perceptions of facial age, health and attractiveness in males. Facial images of six men (aged 40 to 70 years), selected for co-expression of lines/wrinkles and discolouration, were manipulated digitally to create eight stimuli, namely, separate removal of these two features (a) on the forehead, (b) in the periorbital area, (c) on the cheeks and (d) across the entire face. Omnibus (within-face) pairwise combinations, including the original (unmodified) face, were presented to a total of 240 male and female judges, who selected the face they considered younger, healthier and more attractive. Significant effects were detected for facial image choice, in response to skin feature manipulation. The combined removal of skin surface topography resulted in younger age perception compared with that seen with the removal of skin colouration cues, whereas the opposite pattern was found for health preference. No difference was detected for the perception of attractiveness. These perceptual effects were seen particularly on the forehead and cheeks. Removing skin topography cues (but not discolouration) in the periorbital area resulted in higher preferences for all three attributes. Skin surface topography and colouration cues affect the perception of age, health and attractiveness in men's faces. The combined removal of these features on the forehead, cheeks and in the periorbital area results in the most positive assessments. © 2018 Society of Cosmetic Scientists and the Société Française de Cosmétologie.
Facial measurements for frame design.
Tang, C Y; Tang, N; Stewart, M C
1998-04-01
Anthropometric data for the purpose of spectacle frame design are scarce in the literature. Definitions of the facial features to be measured with existing systems of facial measurement are often not specific enough for frame design and manufacturing. Currently, for individual frame design, experienced personnel collect data with facial rules or instruments. A new measuring system is proposed, making use of a template in the form of a spectacle frame. Upon fitting the template onto a subject, most of the measuring references can be defined. Such a system can be administered by less extensively trained personnel and can be used for research covering a larger population.
The Emotional Modulation of Facial Mimicry: A Kinematic Study.
Tramacere, Antonella; Ferrari, Pier F; Gentilucci, Maurizio; Giuffrida, Valeria; De Marco, Doriana
2017-01-01
It is well established that the observation of emotional facial expressions induces facial mimicry responses in the observer. However, how the interaction between the emotional and motor components of facial expressions modulates the motor behavior of the perceiver is still unknown. We developed a kinematic experiment to evaluate the effect of different oro-facial expressions on the perceiver's face movements. Participants were asked to perform two movements, i.e., lip stretching and lip protrusion, in response to the observation of four meaningful (i.e., smile, angry-mouth, kiss, and spit) and two meaningless mouth gestures. All the stimuli were characterized by different motor patterns (mouth aperture or mouth closure). Response times and kinematic parameters of the movements (amplitude, duration, and mean velocity) were recorded and analyzed. Results showed dissociable effects on reaction times and movement kinematics. We found shorter reaction times when a mouth movement was preceded by the observation of a meaningful and motorically congruent oro-facial gesture, in line with the facial mimicry effect. On the contrary, during execution, the perception of a smile was associated with facilitation, in terms of shorter duration and higher velocity, of the incongruent movement, i.e., lip protrusion. The same effect emerged in response to kiss and spit, which significantly facilitated the execution of lip stretching. We call this phenomenon the facial mimicry reversal effect, i.e., the overturning of the effect normally observed during facial mimicry. In general, the findings show that both the motor features and the type of emotional oro-facial gesture (conveying positive or negative valence) affect the kinematics of subsequent mouth movements at different levels: while congruent motor features facilitate a general motor response, motor execution can be speeded by gestures that are motorically incongruent with the observed one.
Moreover, the valence effect depends on the specific movement required. Results are discussed in relation to the Basic Emotion Theory and the embodied cognition framework.
Rigid Facial Motion Influences Featural, But Not Holistic, Face Processing
Xiao, Naiqi; Quinn, Paul C.; Ge, Liezhong; Lee, Kang
2012-01-01
We report three experiments in which we investigated the effect of rigid facial motion on face processing. Specifically, we used the face composite effect to examine whether rigid facial motion influences primarily featural or holistic processing of faces. In Experiments 1, 2, and 3, participants were first familiarized with dynamic displays in which a target face turned from one side to another; then at test, participants judged whether the top half of a composite face (the top half of the target face aligned or misaligned with the bottom half of a foil face) belonged to the target face. We compared performance in the dynamic condition to various static control conditions in Experiments 1, 2, and 3, which differed from each other in terms of the display order of the multiple static images or the interstimulus interval (ISI) between the images. We found that the size of the face composite effect in the dynamic condition was significantly smaller than that in the static conditions. In other words, the dynamic face display led participants to process the target faces in a part-based manner, and consequently their recognition of the upper portion of the composite face at test suffered less interference from the aligned lower part of the foil face. The findings from the present experiments provide the strongest evidence to date that rigid facial motion mainly influences featural, but not holistic, facial processing. PMID:22342561
The Auditory Kuleshov Effect: Multisensory Integration in Movie Editing.
Baranowski, Andreas M; Hecht, H
2017-05-01
Almost a hundred years ago, the Russian filmmaker Lev Kuleshov conducted his now famous editing experiment in which different objects were added to a given film scene featuring a neutral face. It is said that the audience interpreted the unchanged facial expression as a function of the added object (e.g., an added soup made the face express hunger). This interaction effect has been dubbed "Kuleshov effect." In the current study, we explored the role of sound in the evaluation of facial expressions in films. Thirty participants watched different clips of faces that were intercut with neutral scenes, featuring either happy music, sad music, or no music at all. This was crossed with the facial expressions of happy, sad, or neutral. We found that the music significantly influenced participants' emotional judgments of facial expression. Thus, the intersensory effects of music are more specific than previously thought. They alter the evaluation of film scenes and can give meaning to ambiguous situations.
The Face of Noonan Syndrome: Does Phenotype Predict Genotype
Allanson, Judith E.; Bohring, Axel; Dorr, Helmuth-Guenther; Dufke, Andreas; Gillessen-Kaesbach, Gabrielle; Horn, Denise; König, Rainer; Kratz, Christian P.; Kutsche, Kerstin; Pauli, Silke; Raskin, Salmo; Rauch, Anita; Turner, Anne; Wieczorek, Dagmar; Zenker, Martin
2011-01-01
The facial photographs of 81 individuals with Noonan syndrome, from infancy to adulthood, have been evaluated by two dysmorphologists (JA and MZ), each of whom has considerable experience with disorders of the Ras/MAPK pathway. Thirty-two of this cohort have PTPN11 mutations, 21 SOS1 mutations, 11 RAF1 mutations, and 17 KRAS mutations. The facial appearance of each person was judged to be typical of Noonan syndrome or atypical. In each gene category both typical and unusual faces were found. We determined that some individuals with mutations in the most commonly affected gene, PTPN11, which is correlated with the cardinal physical features, may have a quite atypical face. Conversely, some individuals with KRAS mutations, which may be associated with a less characteristic intellectual phenotype and a resemblance to Costello and cardio-facio-cutaneous syndromes, can have a very typical face. Thus, the facial phenotype, alone, is insufficient to predict the genotype, but certain facial features may facilitate an educated guess in some cases. PMID:20602484
Novel dynamic Bayesian networks for facial action element recognition and understanding
NASA Astrophysics Data System (ADS)
Zhao, Wei; Park, Jeong-Seon; Choi, Dong-You; Lee, Sang-Woong
2011-12-01
In daily life, language is an important tool of communication between people. Besides language, facial action can also provide a great amount of information. Therefore, facial action recognition has become a popular research topic in the field of human-computer interaction (HCI). However, facial action recognition is quite a challenging task due to its complexity. In a literal sense, there are thousands of facial muscular movements, many of which have very subtle differences. Moreover, muscular movements always occur simultaneously when the pose is changed. To address this problem, we first build a fully automatic facial points detection system based on a local Gabor filter bank and principal component analysis. Then, novel dynamic Bayesian networks are proposed to perform facial action recognition using the junction tree algorithm over a limited number of feature points. In order to evaluate the proposed method, we have used the Korean face database for model training. For testing, we used the CUbiC FacePix, facial expressions and emotion database, Japanese female facial expression database, and our own database. Our experimental results clearly demonstrate the feasibility of the proposed approach.
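The abstract names two standard building blocks, a local Gabor filter bank and principal component analysis (PCA), without implementation detail. The sketch below is a minimal illustration of that feature-extraction stage only; the kernel parameters, patch size, orientation count, and component count are assumptions for the example, not the authors' settings.

```python
import numpy as np

def gabor_kernel(size, theta, sigma=2.0, lam=4.0, gamma=0.5):
    """Real part of a 2-D Gabor kernel at orientation theta (odd size)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)      # rotated coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + gamma**2 * yr**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * xr / lam)  # Gaussian times carrier

def gabor_features(patch, n_orientations=4):
    """Response of a square image patch to each orientation in the bank."""
    thetas = [np.pi * k / n_orientations for k in range(n_orientations)]
    return np.array([float(np.sum(patch * gabor_kernel(patch.shape[0], t)))
                     for t in thetas])

def pca_reduce(X, k):
    """Project rows of X onto the top-k principal components via SVD."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

# Toy demo: 20 random 9x9 "patches" -> 4 Gabor responses -> 2 components.
rng = np.random.default_rng(0)
patches = rng.standard_normal((20, 9, 9))
feats = np.array([gabor_features(p) for p in patches])   # shape (20, 4)
reduced = pca_reduce(feats, 2)                           # shape (20, 2)
```

In the paper's pipeline, descriptors of this kind would feed the dynamic Bayesian networks and the junction tree inference; here they only illustrate how a Gabor bank plus PCA turns a pixel patch into a compact feature vector.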
Idiopathic ophthalmodynia and idiopathic rhinalgia: two topographic facial pain syndromes.
Pareja, Juan A; Cuadrado, María L; Porta-Etessam, Jesús; Fernández-de-las-Peñas, César; Gili, Pablo; Caminero, Ana B; Cebrián, José L
2010-09-01
To describe 2 topographic facial pain conditions with the pain clearly localized in the eye (idiopathic ophthalmodynia) or in the nose (idiopathic rhinalgia), and to propose their distinction from persistent idiopathic facial pain. Persistent idiopathic facial pain, burning mouth syndrome, atypical odontalgia, and facial arthromyalgia are idiopathic facial pain syndromes that have been separated according to topographical criteria. Still, some other facial pain syndromes might have been veiled under the broad term of persistent idiopathic facial pain. Through a 10-year period we have studied all patients referred to our neurological clinic because of facial pain of unknown etiology that might deviate from all well-characterized facial pain syndromes. In a group of patients we have identified 2 consistent clinical pictures with pain precisely located either in the eye (n=11) or in the nose (n=7). Clinical features resembled those of other localized idiopathic facial syndromes, the key differences relying on the topographic distribution of the pain. Both idiopathic ophthalmodynia and idiopathic rhinalgia seem specific pain syndromes with a distinctive location, and may deserve a nosologic status just as other focal pain syndromes of the face. Whether all such focal syndromes are topographic variants of persistent idiopathic facial pain or independent disorders remains a controversial issue.
Neath-Tavares, Karly N; Itier, Roxane J
2016-09-01
Research suggests an important role of the eyes and mouth for discriminating facial expressions of emotion. A gaze-contingent procedure was used to test the impact of fixation to facial features on the neural response to fearful, happy and neutral facial expressions in an emotion discrimination (Exp. 1) and an oddball detection (Exp. 2) task. The N170 was the only eye-sensitive ERP component, and this sensitivity did not vary across facial expressions. In both tasks, compared to neutral faces, responses to happy expressions were seen as early as 100-120 ms occipitally, while responses to fearful expressions started around 150 ms, on or after the N170, at both occipital and lateral-posterior sites. Analyses of scalp topographies revealed different distributions of these two emotion effects across most of the epoch. Emotion processing interacted with fixation location at different times between tasks. Results suggest a role of both the eyes and mouth in the neural processing of fearful expressions, and of the mouth in the processing of happy expressions, before 350 ms. Copyright © 2016 Elsevier B.V. All rights reserved.
Information processing of motion in facial expression and the geometry of dynamical systems
NASA Astrophysics Data System (ADS)
Assadi, Amir H.; Eghbalnia, Hamid; McMenamin, Brenton W.
2005-01-01
An interesting problem in the analysis of video data concerns the design of algorithms that detect perceptually significant features in an unsupervised manner, for instance, methods of machine learning for the automatic classification of human expression. A geometric formulation of this genre of problems can be modeled with the help of perceptual psychology. In this article, we outline one approach for a special case where video segments are to be classified according to expression of emotion or other similar facial motions. The encoding of realistic facial motions that convey expression of emotions for a particular person P forms a parameter space XP whose study reveals the "objective geometry" for the problem of unsupervised feature detection from video. The geometric features and discrete representation of the space XP are independent of subjective evaluations by observers. While the "subjective geometry" of XP varies from observer to observer, levels of sensitivity and variation in perception of facial expressions appear to share a certain level of universality among members of similar cultures. Therefore, the statistical geometry of invariants of XP for a population sample could provide effective algorithms for extraction of such features. In cases where the frequency of events is sufficiently large in the sample data, a suitable framework could be provided to facilitate the information-theoretic organization and study of statistical invariants of such features. This article provides a general approach to encoding motion in terms of a particular genre of dynamical systems and the geometry of their flow. An example is provided to illustrate the general theory.
Face processing in autism: Reduced integration of cross-feature dynamics.
Shah, Punit; Bird, Geoffrey; Cook, Richard
2016-02-01
Characteristic problems with social interaction have prompted considerable interest in the face processing of individuals with Autism Spectrum Disorder (ASD). Studies suggest that reduced integration of information from disparate facial regions likely contributes to difficulties recognizing static faces in this population. Recent work also indicates that observers with ASD have problems using patterns of facial motion to judge identity and gender, and may be less able to derive global motion percepts. These findings raise the possibility that feature integration deficits also impact the perception of moving faces. To test this hypothesis, we examined whether observers with ASD exhibit susceptibility to a new dynamic face illusion, thought to index integration of moving facial features. When typical observers view eye-opening and -closing in the presence of asynchronous mouth-opening and -closing, the concurrent mouth movements induce a strong illusory slowing of the eye transitions. However, we find that observers with ASD are not susceptible to this illusion, suggestive of weaker integration of cross-feature dynamics. Nevertheless, observers with ASD and typical controls were equally able to detect the physical differences between comparison eye transitions. Importantly, this confirms that observers with ASD were able to fixate the eye region, indicating that the striking group difference has a perceptual, not attentional, origin. The clarity of the present results contrasts starkly with the modest effect sizes and equivocal findings seen throughout the literature on static face perception in ASD. We speculate that differences in the perception of facial motion may be a more reliable feature of this condition. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
[Facial feminization in transgender patients].
Yahalom, R; Blinder, D; Nadel, S
2015-07-01
Transsexualism is a gender identity disorder in which there is a strong desire to live and be accepted as a member of the opposite sex. In male-to-female transsexuals with strong masculine facial features, facial feminization surgery is performed as part of the gender reassignment. A strong association between femininity and attractiveness has been attributed to the upper third of the face and the interplay of the glabellar prominence of the forehead. Studies have shown that a certain lower jaw shape is characteristic of males, with special attention to the strong square mandibular angle and chin, and also suggest that the attractive female jaw is smaller, with more rounded mandibular angles and a pointed chin. Other studies have shown that feminization of the forehead through cranioplasty has the most significant impact in determining the gender of a patient. Facial feminization surgeries are procedures aimed at changing the features of the male face to those of a female face. These include contouring of the forehead, brow lift, mandibular angle reduction, genioplasty, rhinoplasty, and a variety of soft tissue adjustments. In our maxillofacial surgery department at the Sheba Medical Center we perform forehead reshaping combined with a brow lift and, in the same surgery, mandibular and chin reshaping to match the remodeled upper third of the face. The forehead reshaping is done by cranioplasty with additional reduction of the glabella area by burring of the frontal bone. After reducing the frontal bossing around the superior orbital rims we manage the soft tissue to achieve the brow lift. The mandibular reshaping is performed via an intraoral approach and includes contouring of the angles by osteotomy for a more rounded shape (rather than the masculine square-shaped angles), as well as reshaping of the bone in the chin area to make it more pointed, by removing the lateral parts of the chin and, in some cases, also performing reduction genioplasty by AP osteotomy.
The Child Affective Facial Expression (CAFE) set: validity and reliability from untrained adults
LoBue, Vanessa; Thrasher, Cat
2014-01-01
Emotional development is one of the largest and most productive areas of psychological research. For decades, researchers have been fascinated by how humans respond to, detect, and interpret emotional facial expressions. Much of the research in this area has relied on controlled stimulus sets of adults posing various facial expressions. Here we introduce a new stimulus set of emotional facial expressions into the domain of research on emotional development—The Child Affective Facial Expression set (CAFE). The CAFE set features photographs of a racially and ethnically diverse group of 2- to 8-year-old children posing for six emotional facial expressions—angry, fearful, sad, happy, surprised, and disgusted—and a neutral face. In the current work, we describe the set and report validity and reliability data on the set from 100 untrained adult participants. PMID:25610415
Greater perceptual sensitivity to happy facial expression.
Maher, Stephen; Ekstrom, Tor; Chen, Yue
2014-01-01
Perception of subtle facial expressions is essential for social functioning; yet it is unclear if human perceptual sensitivities differ in detecting varying types of facial emotions. Evidence diverges as to whether salient negative versus positive emotions (such as sadness versus happiness) are preferentially processed. Here, we measured perceptual thresholds for the detection of four types of emotion in faces--happiness, fear, anger, and sadness--using psychophysical methods. We also evaluated the association of the perceptual performances with facial morphological changes between neutral and respective emotion types. Human observers were highly sensitive to happiness compared with the other emotional expressions. Further, this heightened perceptual sensitivity to happy expressions can be attributed largely to the emotion-induced morphological change of a particular facial feature (end-lip raise).
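The thresholds reported above come from standard psychophysical fitting, which the abstract does not spell out. As a hedged illustration (not the authors' procedure), a detection threshold can be read off a logistic psychometric function fitted to proportion-detected data; the grid ranges and the simulated observer below are assumptions chosen only for the sketch.

```python
import numpy as np

def psychometric(x, alpha, beta):
    """Logistic psychometric function: P(detect) at stimulus intensity x.
    alpha is the intensity where P = 0.5 (the threshold); beta is the slope."""
    return 1.0 / (1.0 + np.exp(-beta * (x - alpha)))

def fit_threshold(intensities, p_detect):
    """Least-squares grid search over (alpha, beta); returns the best pair."""
    alphas = np.linspace(intensities.min(), intensities.max(), 101)
    betas = np.linspace(0.5, 12.0, 47)
    best, best_err = (alphas[0], betas[0]), np.inf
    for a in alphas:
        for b in betas:
            err = float(np.sum((psychometric(intensities, a, b) - p_detect) ** 2))
            if err < best_err:
                best, best_err = (a, b), err
    return best

# Simulated observer with a true threshold of 0.4 and slope 6.0.
x = np.linspace(0.0, 1.0, 9)
alpha_hat, beta_hat = fit_threshold(x, psychometric(x, 0.4, 6.0))
```

With noiseless simulated data the grid search recovers the generating threshold; with real trial counts one would fit by maximum likelihood and add lapse/guess rates, but the threshold read-out is the same idea.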
De la Torre, Fernando; Chu, Wen-Sheng; Xiong, Xuehan; Vicente, Francisco; Ding, Xiaoyu; Cohn, Jeffrey
2016-01-01
Within the last 20 years, there has been an increasing interest in the computer vision community in automated facial image analysis algorithms. This has been driven by applications in animation, market research, autonomous driving, surveillance, and facial editing, among others. To date, there exist several commercial packages for specific facial image analysis tasks such as facial expression recognition, facial attribute analysis, or face tracking. However, free and easy-to-use software that incorporates all these functionalities is unavailable. This paper presents IntraFace (IF), a publicly available software package for automated facial feature tracking, head pose estimation, facial attribute recognition, and facial expression analysis from video. In addition, IF includes a newly developed technique for unsupervised synchrony detection to discover correlated facial behavior between two or more persons, a relatively unexplored problem in facial image analysis. In tests, IF achieved state-of-the-art results for emotion expression and action unit detection in three databases, FERA, CK+ and RU-FACS; measured audience reaction to a talk given by one of the authors; and discovered synchrony for smiling in videos of parent-infant interaction. IF is free of charge for academic use at http://www.humansensing.cs.cmu.edu/intraface/. PMID:27346987
Schmidt, H; Rudolph, G; Hergersberg, M; Schneider, K; Moradi, S; Meitinger, T
2001-02-01
We report on a consanguineous family with 6 children (out of 7) affected by a spondylo-ocular syndrome. Clinical features include cataract, loss of vision due to retinal detachment, facial dysmorphism, facial hypotonia, normal height with a disproportionately short trunk, and an immobile spine with thoracic kyphosis and reduced lumbar lordosis. On ophthalmological examination of the index patient, a dense cataract and complete retinal detachment were detected in the right eye. In the left eye, an absent lens nucleus was found, but no retinal detachment. On radiological examination, there was generalized moderate osteoporosis; the spine showed marked platyspondyly and the bone age was advanced. On laboratory investigation, normal excretion of amino acids, mucopolysaccharides and oligosaccharides was found. The phenotypic spectrum observed in the 6 affected individuals was rather uniform. The karyotype was normal in all affected children. This hitherto undescribed combination of oculo-skeletal symptoms shows most resemblance to connective tissue disorders, suggesting a range of candidate genes for mutation analysis.
Haberman, Jason; Brady, Timothy F; Alvarez, George A
2015-04-01
Ensemble perception, including the ability to "see the average" from a group of items, operates in numerous feature domains (size, orientation, speed, facial expression, etc.). Although the ubiquity of ensemble representations is well established, the large-scale cognitive architecture of this process remains poorly defined. We address this using an individual differences approach. In a series of experiments, observers saw groups of objects and reported either a single item from the group or the average of the entire group. High-level ensemble representations (e.g., average facial expression) showed complete independence from low-level ensemble representations (e.g., average orientation). In contrast, low-level ensemble representations (e.g., orientation and color) were correlated with each other, but not with high-level ensemble representations (e.g., facial expression and person identity). These results suggest that there is not a single domain-general ensemble mechanism, and that the relationship among various ensemble representations depends on how proximal they are in representational space. (c) 2015 APA, all rights reserved.
Harris, Bryan T; Montero, Daniel; Grant, Gerald T; Morton, Dean; Llop, Daniel R; Lin, Wei-Shao
2017-02-01
This clinical report proposes a digital workflow using 2-dimensional (2D) digital photographs, a 3D extraoral facial scan, and cone beam computed tomography (CBCT) volumetric data to create a 3D virtual patient with craniofacial hard tissue, remaining dentition (including surrounding intraoral soft tissue), and the realistic appearance of facial soft tissue at an exaggerated smile under static conditions. The 3D virtual patient was used to assist the virtual diagnostic tooth arrangement process, providing the patient with a pleasing preoperative virtual smile design that harmonized with the facial features. The 3D virtual patient was also used to gain the patient's pretreatment approval (as a communication tool), design a prosthetically driven surgical plan for computer-guided implant surgery, and fabricate the computer-aided design and computer-aided manufacturing (CAD-CAM) interim prostheses. Copyright © 2016 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
Holistic Facial Composite Creation and Subsequent Video Line-up Eyewitness Identification Paradigm.
Davis, Josh P; Maigut, Andreea C; Jolliffe, Darrick; Gibson, Stuart J; Solomon, Chris J
2015-12-24
The paradigm detailed in this manuscript describes an applied experimental method based on real police investigations during which an eyewitness or victim to a crime may create from memory a holistic facial composite of the culprit with the assistance of a police operator. The aim is that the composite is recognized by someone who believes that they know the culprit. For this paradigm, participants view a culprit actor on video and, following a delay, participant-witnesses construct a holistic system facial composite. Controls do not construct a composite. From a series of arrays of computer-generated, but realistic, faces, the holistic system construction method primarily requires participant-witnesses to select the facial images most closely matching their memory of the culprit. Variation between faces in successive arrays is reduced until ideally the final image possesses a close likeness to the culprit. Participant-witness directed tools can also alter facial features, configurations between features and holistic properties (e.g., age, distinctiveness, skin tone), all within a whole face context. The procedure is designed to closely match the holistic manner by which humans process faces. On completion, based on their memory of the culprit, ratings of composite-culprit similarity are collected from the participant-witnesses. Similar ratings are collected from culprit-acquaintance assessors, as a marker of composite recognition likelihood. Following a further delay, all participants--including the controls--attempt to identify the culprit in either a culprit-present or culprit-absent video line-up, to replicate circumstances in which the police have located the correct culprit, or an innocent suspect. Data of control and participant-witness line-up outcomes are presented, demonstrating the positive influence of holistic composite construction on identification accuracy.
Correlational analyses are conducted to measure the relationship between assessor and participant-witness composite-culprit similarity ratings, delay, identification accuracy, and confidence to examine which factors influence video line-up outcomes.
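The array-selection procedure described above resembles an interactive evolutionary search in a statistical "face space". The sketch below is a toy illustration under that interpretation, not the actual holistic-system implementation; the `select_best` callback, population size, and shrink factor are all assumptions.

```python
import numpy as np

def evolve_composite(select_best, dims=30, pop=9, generations=12, rng=None):
    """Toy sketch of holistic composite construction: candidate faces are
    points in a PCA-style face space; each round the witness picks the
    closest match and the next array is sampled around it with shrinking
    variation. `select_best(faces)` returns the index of the chosen face."""
    if rng is None:
        rng = np.random.default_rng()
    centre = np.zeros(dims)     # start from the average face
    spread = 1.0
    for _ in range(generations):
        faces = centre + rng.normal(0.0, spread, (pop, dims))
        faces[0] = centre       # keep the current best face in the array
        centre = faces[select_best(faces)]
        spread *= 0.7           # reduce variation between successive arrays
    return centre
```

With a `select_best` that ranks faces by similarity to a remembered target, the returned point moves toward that target as the arrays narrow.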
Combat-related facial burns: analysis of strategic pitfalls.
Johnson, Benjamin W; Madson, Andrew Q; Bong-Thakur, Sarah; Tucker, David; Hale, Robert G; Chan, Rodney K
2015-01-01
Burns constitute approximately 10% of all combat-related injuries to the head and neck region. We postulated that the combat environment presents unique challenges not commonly encountered among civilian injuries. The purpose of the present study was to determine the features commonly seen among combat facial burns that result in therapeutic challenges and might contribute to undesired outcomes. The present study was a retrospective study performed using a query of the Burn Registry at the US Army Institute of Surgical Research Burn Center for all active duty facial burn admissions from October 2001 to February 2011. The demographic data, total body surface area of the burn, facial region body surface area involvement, and dates of injury, first operation, and first facial operation were tabulated and compared. A subset analysis of severe facial burns, defined by a greater than 7% facial region body surface area, was performed with a thorough medical record review to determine the presence of associated injuries. Of all the military burn injuries, 67.1% (n = 558) involved the face. Of these, 81.3% (n = 454) were combat related. The combat facial burns had a mean total body surface area of 21.4% and a mean facial region body surface area of 3.2%. The interval from the date of injury to the first operative encounter was 6.6 ± 0.8 days, and to the first facial operation, 19.8 ± 2.0 days. A subset analysis of the severe facial burns revealed that the first facial operation and the definitive coverage operation were performed at 13.45 ± 2.6 days and 31.9 ± 4.1 days after the injury, respectively. The mortality rate for this subset of patients was 32% (n = 10), with a high rate of associated inhalational injuries (61%, n = 19), limb amputations (29%, n = 9), and facial allograft usage (48%, n = 15), and a mean facial autograft thickness of 10.5/1,000th in.
Combat-related facial burns present multiple challenges, which can contribute to suboptimal long-term outcomes. These challenges include prolonged transport to the burn center, delayed initial intervention and definitive coverage, and a lack of available high-quality color-matched donor skin. These gaps all highlight the need for novel anti-inflammatory and skin replacement strategies to more adequately address these unique combat-related obstacles. Copyright © 2015 American Association of Oral and Maxillofacial Surgeons. All rights reserved.
Face verification system for Android mobile devices using histogram based features
NASA Astrophysics Data System (ADS)
Sato, Sho; Kobayashi, Kazuhiro; Chen, Qiu
2016-07-01
This paper proposes a face verification system that runs on Android mobile devices. In this system, a facial image is first captured by the device's built-in camera, and face detection is then performed using Haar-like features and the AdaBoost learning algorithm. The proposed system verifies the detected face using histogram-based features, which are generated by a binary Vector Quantization (VQ) histogram using DCT coefficients in low frequency domains, as well as an Improved Local Binary Pattern (Improved LBP) histogram in the spatial domain. Verification results with the different types of histogram-based features are first obtained separately and then combined by weighted averaging. We evaluate our proposed algorithm using the publicly available ORL database and facial images captured by an Android tablet.
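The spatial-domain half of the matching stage can be illustrated with a plain LBP histogram plus weighted score fusion. This is a generic sketch, assuming a basic 8-neighbour LBP rather than the paper's Improved LBP, and it omits the DCT-based VQ histogram entirely.

```python
import numpy as np

def lbp_histogram(gray):
    """Basic 8-neighbour Local Binary Pattern histogram (256 bins),
    normalized to sum to 1. A simplified stand-in for Improved LBP."""
    g = gray.astype(np.int32)
    c = g[1:-1, 1:-1]
    # offsets of the 8 neighbours, clockwise from top-left
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offs):
        nb = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        codes |= (nb >= c).astype(np.int32) << bit
    hist = np.bincount(codes.ravel(), minlength=256).astype(float)
    return hist / hist.sum()

def fused_score(scores, weights):
    """Combine per-feature similarity scores by weighted averaging."""
    return float(np.dot(scores, weights) / np.sum(weights))
```

Two per-feature similarity scores (e.g., one from each histogram type) would be combined with `fused_score` before thresholding the verification decision.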
Fusco, Carmela; Micale, Lucia; Augello, Bartolomeo; Teresa Pellico, Maria; Menghini, Deny; Alfieri, Paolo; Cristina Digilio, Maria; Mandriani, Barbara; Carella, Massimo; Palumbo, Orazio; Vicari, Stefano; Merla, Giuseppe
2014-01-01
Williams Beuren syndrome (WBS) is a multisystemic disorder caused by a hemizygous deletion of 1.5 Mb on chromosome 7q11.23 spanning 28 genes. A few patients with larger and smaller WBS deletions have been reported. They show clinical features that vary from isolated SVAS to the full spectrum of the WBS phenotype, associated with epilepsy or autism spectrum behavior. Here we describe four patients with atypical WBS 7q11.23 deletions. Two carry a ~3.5 Mb larger deletion extending towards the telomere that includes the Huntingtin-interacting protein 1 (HIP1) and tyrosine 3-monooxygenase/tryptophan 5-monooxygenase activation protein gamma (YWHAG) genes. The other two carry a shorter deletion of ~1.2 Mb on the centromeric side that excludes the distal WBS genes BAZ1B and FZD9. Along with previously reported cases, genotype-phenotype correlation in the patients described here further suggests that haploinsufficiency of HIP1 and YWHAG might cause the severe neurological and neuropsychological deficits, including epilepsy and autistic traits, and that preservation of the BAZ1B and FZD9 genes may be related to mild facial features and moderate neuropsychological deficits. This report highlights the importance of characterizing additional patients with 7q11.23 atypical deletions, comparing neuropsychological and clinical features between these individuals to shed light on the pathogenic role of genes within and flanking the WBS region.
Automatic recognition of emotions from facial expressions
NASA Astrophysics Data System (ADS)
Xue, Henry; Gertner, Izidor
2014-06-01
In the human-computer interaction (HCI) process it is desirable to have an artificially intelligent (AI) system that can identify and categorize human emotions from facial expressions. Such systems can be used in security, in the entertainment industry, and also to study visual perception, social interactions and disorders (e.g. schizophrenia and autism). In this work we survey and compare the performance of different feature extraction algorithms and classification schemes. We introduce a faster feature extraction method that resizes and applies a set of filters to the data images without sacrificing accuracy. In addition, we have enhanced SVM to handle multiple dimensions while retaining its high accuracy rate. The algorithms were tested using the Japanese Female Facial Expression (JAFFE) Database and the Database of Faces (AT&T Faces).
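A minimal version of the classification stage, assuming plain downsampling as the "resize" step and scikit-learn's multiclass SVC in place of the authors' enhanced SVM:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def make_features(images, size=16):
    """Downsample each grayscale image to size x size and flatten: a
    crude stand-in for the resize-plus-filter-bank extraction."""
    feats = []
    for img in images:
        h, w = img.shape
        ys = np.linspace(0, h - 1, size).astype(int)
        xs = np.linspace(0, w - 1, size).astype(int)
        feats.append(img[np.ix_(ys, xs)].ravel())
    return np.asarray(feats, dtype=float)

# multiclass SVM (one-vs-one under the hood) on the resized features
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
```

On a labelled expression set (e.g., JAFFE images with emotion labels), `clf.fit(make_features(train_imgs), train_labels)` followed by `clf.predict` gives the per-image emotion category.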
Bruxism in craniocervical dystonia: a prospective study.
Borie, Laetitia; Langbour, Nicolas; Guehl, Dominique; Burbaud, Pierre; Ella, Bruno
2016-09-01
Bruxism pathophysiology remains unclear, and its occurrence has been poorly investigated in movement disorders. The aim of this study was to compare the frequency of bruxism in patients with craniocervical dystonia vs. normal controls and to determine its associated clinical features. This was a prospective, controlled study. A total of 114 dystonic subjects (45 facial dystonia, 69 cervical dystonia) and 182 controls were included. Bruxism was diagnosed using a hetero-questionnaire and a clinical examination performed by trained dentists. The occurrence of bruxism was compared between the study populations. A binomial logistic regression analysis was used to determine which clinical features influenced bruxism occurrence in each population. The frequency of bruxism was significantly higher in the dystonic group than in normal controls, but there was no difference between facial and cervical dystonia. It was also higher in women than in men. Bruxism features were similar between normal controls and dystonic patients except for a higher temporomandibular joint pain score in the dystonic group. The higher frequency of bruxism in dystonic patients suggests that bruxism is increased in patients with basal ganglia dysfunction but that its nature does not differ from that seen in bruxers from the normal population.
Evolution of middle-late Pleistocene human cranio-facial form: a 3-D approach.
Harvati, Katerina; Hublin, Jean-Jacques; Gunz, Philipp
2010-11-01
The classification and phylogenetic relationships of the middle Pleistocene human fossil record remain one of the most intractable problems in paleoanthropology. Several authors have noted broad resemblances between European and African fossils from this period, suggesting a single taxon ancestral to both modern humans and Neanderthals. Others point out 'incipient' Neanderthal features in the morphology of the European sample and have argued for their inclusion in the Neanderthal lineage exclusively, following a model of accretionary evolution of Neanderthals. We approach these questions using geometric morphometric methods, which allow the intuitive visualization and quantification of features previously described qualitatively. We apply these techniques to evaluate proposed 'incipient' facial, vault, and basicranial traits in a middle-late Pleistocene European hominin sample when compared to a sample of the same time depth from Africa. Some of the features examined followed the predictions of the accretion model and relate the middle Pleistocene European material to the later Neanderthals. However, although our analysis showed a clear separation between Neanderthals and early/recent modern humans and morphological proximity between European specimens from OIS 7 to 3, it also shows that the European hominins from the first half of the middle Pleistocene still shared most of their cranio-facial architecture with their African contemporaries. Copyright © 2010 Elsevier Ltd. All rights reserved.
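Geometric morphometric analyses of this kind start from landmark configurations superimposed by Procrustes alignment, which removes translation, scale, and rotation before shape comparison. A minimal two-configuration sketch, not the authors' actual pipeline:

```python
import numpy as np

def procrustes_align(A, B):
    """Ordinary Procrustes alignment of landmark configuration B onto A.
    A, B: n_landmarks x dims arrays. Returns (B aligned to A, A normalized),
    both centred and scaled to unit centroid size."""
    A = np.asarray(A, dtype=float)
    B = np.asarray(B, dtype=float)
    A0 = A - A.mean(axis=0)          # remove translation
    B0 = B - B.mean(axis=0)
    A0 /= np.linalg.norm(A0)         # remove scale
    B0 /= np.linalg.norm(B0)
    U, _, Vt = np.linalg.svd(B0.T @ A0)
    R = U @ Vt                       # optimal orthogonal rotation
    return B0 @ R, A0
```

Generalized Procrustes analysis iterates this pairwise alignment against a running mean shape; the residual coordinates are the shape variables analyzed downstream.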
Warren, Richard J; Aston, Sherrell J; Mendelson, Bryan C
2011-12-01
After reading this article, the participant should be able to: 1. Identify and describe the anatomy of and changes to the aging face, including changes in bone mass and structure and changes to the skin, tissue, and muscles. 2. Assess each individual's unique anatomy before embarking on face-lift surgery and incorporate various surgical techniques, including fat grafting and other corrective procedures in addition to shifting existing fat to a higher position on the face, into discussions with patients. 3. Identify risk factors and potential complications in prospective patients. 4. Describe the benefits and risks of various techniques. The ability to surgically rejuvenate the aging face has progressed in parallel with plastic surgeons' understanding of facial anatomy. In turn, a clearer explanation now exists for the visible changes seen in the aging face. This article and its associated video content review the current understanding of facial anatomy as it relates to facial aging. The standard face-lift techniques are explained and their various features, both good and bad, are reviewed. The objective is for surgeons to make a better aesthetic diagnosis before embarking on face-lift surgery, and to have the ability to use the appropriate technique depending on the clinical situation.
Task-irrelevant emotion facilitates face discrimination learning.
Lorenzino, Martina; Caudek, Corrado
2015-03-01
We understand poorly how the ability to discriminate faces from one another is shaped by visual experience. The purpose of the present study is to determine whether face discrimination learning can be facilitated by facial emotions. To answer this question, we used a task-irrelevant perceptual learning paradigm because it closely mimics the learning processes that, in daily life, occur without a conscious intention to learn and without an attentional focus on specific facial features. We measured face discrimination thresholds before and after training. During the training phase (4 days), participants performed a contrast discrimination task on face images. They were not informed that we introduced (task-irrelevant) subtle variations in the face images from trial to trial. For the Identity group, the task-irrelevant features were variations along a morphing continuum of facial identity. For the Emotion group, the task-irrelevant features were variations along an emotional expression morphing continuum. The Control group did not undergo contrast discrimination learning and only performed the pre-training and post-training tests, with the same temporal gap between them as the other two groups. Results indicate that face discrimination improved, but only for the Emotion group. Participants in the Emotion group, moreover, showed face discrimination improvements also for stimulus variations along the facial identity dimension, even if these (task-irrelevant) stimulus features had not been presented during training. The present results highlight the importance of emotions for face discrimination learning. Copyright © 2015 Elsevier Ltd. All rights reserved.
Coding and quantification of a facial expression for pain in lambs.
Guesgen, M J; Beausoleil, N J; Leach, M; Minot, E O; Stewart, M; Stafford, K J
2016-11-01
Facial expressions are routinely used to assess pain in humans, particularly those who are non-verbal. Recently, there has been an interest in developing coding systems for facial grimacing in non-human animals, such as rodents, rabbits, horses and sheep. The aims of this preliminary study were to: 1. Qualitatively identify facial feature changes in lambs experiencing pain as a result of tail-docking and compile these changes to create a Lamb Grimace Scale (LGS); 2. Determine whether human observers can use the LGS to differentiate tail-docked lambs from control lambs and differentiate lambs before and after docking; 3. Determine whether changes in facial action units of the LGS can be objectively quantified in lambs before and after docking; 4. Evaluate effects of restraint of lambs on observers' perceptions of pain using the LGS and on quantitative measures of facial action units. By comparing images of lambs before (no pain) and after (pain) tail-docking, the LGS was devised in consultation with scientists experienced in assessing facial expression in other species. The LGS consists of five facial action units: Orbital Tightening, Mouth Features, Nose Features, Cheek Flattening and Ear Posture. The aims of the study were addressed in two experiments. In Experiment I, still images of the faces of restrained lambs were taken from video footage before and after tail-docking (n=4) or sham tail-docking (n=3). These images were scored by a group of five naïve human observers using the LGS. Because lambs were restrained for the duration of the experiment, Ear Posture was not scored. The scores for the images were averaged to provide one value per feature per period, and then scores for the four LGS action units were averaged to give one LGS score per lamb per period. In Experiment II, still images of the faces of nine lambs were taken before and after tail-docking. Stills were taken when lambs were restrained and unrestrained in each period.
A different group of five human observers scored the images from Experiment II. Changes in facial action units were also quantified objectively by a researcher using image measurement software. In both experiments LGS scores were analyzed using a linear mixed model to evaluate the effects of tail docking on observers' perception of facial expression changes. Kendall's Index of Concordance was used to measure reliability among observers. In Experiment I, human observers were able to use the LGS to differentiate docked lambs from control lambs. LGS scores significantly increased from before to after treatment in docked lambs but not control lambs. In Experiment II there was a significant increase in LGS scores after docking. This was coupled with changes in other validated indicators of pain after docking in the form of pain-related behaviour. Only two components, Mouth Features and Orbital Tightening, showed significant quantitative changes after docking. The direction of these changes agrees with the description of these facial action units in the LGS. Restraint affected people's perceptions of pain as well as quantitative measures of LGS components. Freely moving lambs were scored lower using the LGS over both periods and had a significantly smaller eye aperture and smaller nose and ear angles than when they were held. Agreement among observers for LGS scores was fair overall (Experiment I: W=0.60; Experiment II: W=0.66). This preliminary study demonstrates changes in lamb facial expression associated with pain. The results of these experiments should be interpreted with caution due to low lamb numbers. Copyright © 2016 Elsevier B.V. All rights reserved.
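Kendall's Index of Concordance (W) reported above can be computed directly from an observers-by-items score matrix; a minimal sketch without tie correction:

```python
import numpy as np

def kendalls_w(ratings):
    """Kendall's coefficient of concordance W for inter-observer
    agreement. `ratings`: observers x items matrix of scores; each
    observer's scores are converted to ranks (ties not corrected here).
    W = 1 means perfect agreement, W = 0 none."""
    ratings = np.asarray(ratings, dtype=float)
    m, n = ratings.shape
    # rank each observer's scores (1 = lowest)
    ranks = ratings.argsort(axis=1).argsort(axis=1) + 1
    r = ranks.sum(axis=0)                  # rank sums per item
    s = ((r - r.mean()) ** 2).sum()        # deviation of rank sums
    return 12.0 * s / (m ** 2 * (n ** 3 - n))
```

Applied to the five observers' LGS scores per image, this yields W values comparable to the 0.60 and 0.66 reported in the two experiments.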
Facial preservation following extreme mummification: Shrunken heads.
Houlton, Tobias M R; Wilkinson, Caroline
2018-05-01
Shrunken heads are a mummification phenomenon unique to South America. Ceremonial tsantsa are ritually reduced heads from enemy victims of the Shuar, Achuar, Awajún (Aguaruna), Wampís (Huambisa), and Candoshi-Shapra cultures. Commercial shrunken heads are comparatively modern and fraudulently produced for the curio-market, often using stolen bodies from hospital mortuaries and graves. To achieve shrinkage and desiccation, heads undergo skinning, simmering (in water) and drying. Considering the intensive treatments applied, this research aims to identify how the facial structure can alter and impact identification using post-mortem depiction. Sixty-five human shrunken heads were assessed: 6 ceremonial, 36 commercial, and 23 ambiguous. Investigations included manual inspection, multi-detector computerised tomography, infrared reflectography, ultraviolet fluorescence and microscopic hair analysis. The mummification process disfigures the outer face, cheeks, nasal root and bridge form, including brow ridge, eyes, ears, mouth, and nose projection. Melanin depletion, epidermal degeneration, and any applied staining changes the natural skin complexion. Papillary and reticular dermis separation is possible. Normal hair structure (cuticle, cortex, medulla) is retained. Hair appears longer (unless cut) and more profuse following shrinkage. Significant features retained include skin defects, facial creases, hairlines and earlobe form. Hair conditions that only affect living scalps are preserved (e.g. nits, hair casts). Ear and nose cartilage helps to retain some morphological information. Commercial heads appear less distorted than ceremonial tsantsa, often presenting a definable eyebrow shape, vermillion lip shape, lip thickness (if mouth is open), philtrum form, and palpebral slit angle. Facial identification capabilities are considered limited, and only perceived possible for commercial heads. Copyright © 2018 Elsevier B.V. All rights reserved.
Gorlin-Goltz Syndrome: An Uncommon Cause of Facial Pain and Asymmetry.
Pickrell, Brent B; Nguyen, Harrison P; Buchanan, Edward P
2015-10-01
Gorlin-Goltz syndrome is an underdiagnosed autosomal dominant disorder with variable expressivity that is characterized by an increased predisposition to tumorigenesis of multiple types. The major clinical features include multiple basal cell carcinomas (BCCs) appearing in early childhood, palmar and plantar pits, odontogenic keratocysts of the oral cavity, skeletal defects, craniofacial dysmorphism, and ectopic intracranial calcification. The authors present the clinical course of a 12-year-old girl presenting with facial asymmetry and pain because of previously undiagnosed Gorlin-Goltz syndrome. Early diagnosis and attentive management by a multidisciplinary team are paramount to improving outcomes in patients with this disorder, and this report serves as a paradigm for maintaining a high clinical suspicion, which must be accompanied by an appropriate radiologic workup.
NASA Astrophysics Data System (ADS)
Song, Sutao; Huang, Yuxia; Long, Zhiying; Zhang, Jiacai; Chen, Gongxiang; Wang, Shuqing
2016-03-01
Recently, several studies have successfully applied multivariate pattern analysis methods to predict the categories of emotions. These studies are mainly focused on self-experienced emotions, such as the emotional states elicited by music or movies. In fact, most of our social interactions involve perception of emotional information from the expressions of other people, and it is an important basic skill for humans to recognize the emotional facial expressions of other people in a short time. In this study, we aimed to determine the discriminability of perceived emotional facial expressions. In a rapid event-related fMRI design, subjects were instructed to classify four categories of facial expressions (happy, disgust, angry and neutral) by pressing different buttons, and each facial expression stimulus lasted for 2 s. All participants performed 5 fMRI runs. One multivariate pattern analysis method, a support vector machine, was trained to predict the categories of facial expressions. For feature selection, ninety masks defined from the anatomical automatic labeling (AAL) atlas were first generated and each was treated as the input of the classifier; then, the most stable AAL areas were selected according to prediction accuracies and comprised the final feature sets. Results showed that for the 6 pair-wise classification conditions, the accuracy, sensitivity and specificity were all above chance prediction, among which happy vs. neutral and angry vs. disgust achieved the lowest results. These results suggest that specific neural signatures of perceived emotional facial expressions may exist, and that happy vs. neutral and angry vs. disgust might be more similar in their information representation in the brain.
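The mask-wise feature selection described above (score each anatomical mask by classifier accuracy, then keep the best-performing regions) can be sketched as follows; `LinearSVC` and the data layout are assumptions, not the study's exact setup:

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

def rank_masks(X, y, masks, cv=5):
    """Score each anatomical mask by the cross-validated accuracy of an
    SVM trained only on its voxels, then rank masks (best first).
    X: trials x voxels, y: condition labels, masks: boolean voxel masks."""
    accs = []
    for m in masks:
        acc = cross_val_score(LinearSVC(), X[:, m], y, cv=cv).mean()
        accs.append(acc)
    accs = np.asarray(accs)
    order = np.argsort(accs)[::-1]   # indices of masks, highest accuracy first
    return order, accs
```

The top-ranked masks would then be pooled into the final feature set for the pair-wise expression classifiers.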
Cues of fatigue: effects of sleep deprivation on facial appearance.
Sundelin, Tina; Lekander, Mats; Kecklund, Göran; Van Someren, Eus J W; Olsson, Andreas; Axelsson, John
2013-09-01
To investigate the facial cues by which one recognizes that someone is sleep deprived versus not sleep deprived. Experimental laboratory study. Karolinska Institutet, Stockholm, Sweden. Forty observers (20 women, mean age 25 ± 5 y) rated 20 facial photographs with respect to fatigue, 10 facial cues, and sadness. The stimulus material consisted of 10 individuals (five women) photographed at 14:30 after normal sleep and after 31 h of sleep deprivation following a night with 5 h of sleep. Ratings of fatigue, fatigue-related cues, and sadness in facial photographs. The faces of sleep deprived individuals were perceived as having more hanging eyelids, redder eyes, more swollen eyes, darker circles under the eyes, paler skin, more wrinkles/fine lines, and more droopy corners of the mouth (effects ranging from b = +3 ± 1 to b = +15 ± 1 mm on 100-mm visual analog scales, P < 0.01). The ratings of fatigue were related to glazed eyes and to all the cues affected by sleep deprivation (P < 0.01). Ratings of rash/eczema or tense lips were not significantly affected by sleep deprivation, nor associated with judgements of fatigue. In addition, sleep-deprived individuals looked sadder than after normal sleep, and sadness was related to looking fatigued (P < 0.01). The results show that sleep deprivation affects features relating to the eyes, mouth, and skin, and that these features function as cues of sleep loss to other people. Because these facial regions are important in the communication between humans, facial cues of sleep deprivation and fatigue may carry social consequences for the sleep deprived individual in everyday life.
NASA Astrophysics Data System (ADS)
Tsagkrasoulis, Dimosthenis; Hysi, Pirro; Spector, Tim; Montana, Giovanni
2017-04-01
The human face is a complex trait under strong genetic control, as evidenced by the striking visual similarity between twins. Nevertheless, heritability estimates of facial traits have often been surprisingly low or difficult to replicate. Furthermore, the construction of facial phenotypes that correspond to naturally perceived facial features remains largely a mystery. We present here a large-scale heritability study of face geometry that aims to address these issues. High-resolution, three-dimensional facial models have been acquired on a cohort of 952 twins recruited from the TwinsUK registry, and processed through a novel landmarking workflow, GESSA (Geodesic Ensemble Surface Sampling Algorithm). The algorithm places thousands of landmarks throughout the facial surface and automatically establishes point-wise correspondence across faces. These landmarks enabled us to intuitively characterize facial geometry at a fine level of detail through curvature measurements, yielding accurate heritability maps of the human face (www.heritabilitymaps.info).
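Twin-based heritability is classically estimated from the gap between monozygotic and dizygotic co-twin correlations (Falconer's formula). A textbook sketch of that estimator, not the study's actual variance-components model:

```python
import numpy as np

def falconer_h2(mz_pairs, dz_pairs):
    """Falconer's estimate of heritability from twin data:
    h^2 = 2 * (r_MZ - r_DZ), where r is the correlation of a trait
    between co-twins. Each argument is an n_pairs x 2 array of trait
    values (twin 1, twin 2). Pearson correlation is used here in place
    of the intraclass correlation for simplicity."""
    r_mz = np.corrcoef(mz_pairs[:, 0], mz_pairs[:, 1])[0, 1]
    r_dz = np.corrcoef(dz_pairs[:, 0], dz_pairs[:, 1])[0, 1]
    return 2.0 * (r_mz - r_dz)
```

Applied per landmark-level curvature measurement, such estimates are what a heritability map of the face aggregates over the surface.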
Real Time 3D Facial Movement Tracking Using a Monocular Camera
Dong, Yanchao; Wang, Yanming; Yue, Jiguang; Hu, Zhencheng
2016-01-01
The paper proposes a robust framework for 3D facial movement tracking in real time using a monocular camera. It is designed to estimate the 3D face pose and local facial animation such as eyelid movement and mouth movement. The framework first utilizes the Discriminative Shape Regression method to locate facial feature points on the 2D image and fuses the 2D data with a 3D face model using an Extended Kalman Filter to yield 3D facial movement information. An alternating optimizing strategy is adopted to fit to different persons automatically. Experiments show that the proposed framework can track 3D facial movement across various poses and illumination conditions. Given the real face scale, the framework can track the eyelid with an error of 1 mm and the mouth with an error of 2 mm. The tracking result is reliable enough for expression analysis or mental state inference. PMID:27463714
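The fusion step can be illustrated with a plain linear Kalman filter on a single landmark coordinate; the paper's Extended Kalman Filter additionally linearizes the 3D-model projection, which is omitted here. The constant-velocity model and the noise values are assumptions:

```python
import numpy as np

def kalman_track(measurements, dt=1.0, q=1e-3, r=1e-2):
    """Constant-velocity Kalman filter for one landmark coordinate: a
    linear stand-in for fusing noisy 2D feature observations with a
    motion model. Returns the filtered position estimates."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition (pos, vel)
    H = np.array([[1.0, 0.0]])              # observe position only
    Q = q * np.eye(2)                        # process noise covariance
    R = np.array([[r]])                      # measurement noise covariance
    x = np.array([[measurements[0]], [0.0]])
    P = np.eye(2)
    out = []
    for z in measurements:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update with the new measurement
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([[z]]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        out.append(float(x[0, 0]))
    return out
```

In the full EKF, the state would hold the 6-DoF head pose plus animation parameters, and H would be the Jacobian of the camera projection of the 3D face model.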
Clinical features and management of facial nerve paralysis in children: analysis of 24 cases.
Cha, H E; Baek, M K; Yoon, J H; Yoon, B K; Kim, M J; Lee, J H
2010-04-01
To evaluate the causes, treatment modalities and recovery rate of paediatric facial nerve paralysis. We analysed 24 cases of paediatric facial nerve paralysis diagnosed in the otolaryngology department of Gachon University Gil Medical Center between January 2001 and June 2006. The most common cause was idiopathic palsy (16 cases, 66.7 per cent). The most common degree of facial nerve paralysis on first presentation was House-Brackmann grade IV (15 of 24 cases). All cases were treated with steroids. One of the 24 cases was also treated surgically with facial nerve decompression. Twenty-two cases (91.6 per cent) recovered to House-Brackmann grade I or II over the six-month follow-up period. Facial nerve paralysis in children can generally be successfully treated with conservative measures. However, in cases associated with trauma, radiological investigation is required for further evaluation and treatment.
Real-time speech-driven animation of expressive talking faces
NASA Astrophysics Data System (ADS)
Liu, Jia; You, Mingyu; Chen, Chun; Song, Mingli
2011-05-01
In this paper, we present a real-time facial animation system in which speech drives mouth movements and facial expressions synchronously. Considering five basic emotions, a hierarchical structure with an upper layer of emotion classification is established. Based on the recognized emotion label, the lower-layer classification at the sub-phonemic level models the relationship between acoustic features of frames and audio labels in phonemes. Using certain constraints, the predicted emotion labels of speech are adjusted to obtain the facial expression labels, which are combined with sub-phonemic labels. The combinations are mapped into facial action units (FAUs), and audio-visual synchronized animation with mouth movements and facial expressions is generated by morphing between FAUs. The experimental results demonstrate that the two-layer structure succeeds in both emotion and sub-phonemic classification, and the synthesized facial sequences reach a comparatively convincing quality.
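The "morphing between FAUs" mentioned in this abstract is, at its simplest, a per-parameter interpolation between keyframes. A hedged sketch (FAU names and values here are invented for illustration, not taken from the paper) might be:

```python
# Linear morphing between facial action unit (FAU) keyframes: each
# intermediate frame blends the start and end activation values.
# FAU names and values are hypothetical.

def morph(start, end, steps):
    """Interpolate each FAU value linearly from start to end."""
    frames = []
    for i in range(steps + 1):
        t = i / steps
        frames.append({k: (1 - t) * start[k] + t * end[k] for k in start})
    return frames

neutral = {"jaw_open": 0.0, "lip_corner_pull": 0.0}
smile   = {"jaw_open": 0.2, "lip_corner_pull": 1.0}
frames = morph(neutral, smile, 4)
```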
NASA Astrophysics Data System (ADS)
Wan, Qianwen; Panetta, Karen; Agaian, Sos
2017-05-01
Autonomous facial recognition systems are widely used in real-life applications, such as homeland border security, law enforcement identification and authentication, and video-based surveillance analysis. Issues like low image quality, non-uniform illumination, and variations in poses and facial expressions can impair the performance of recognition systems. To address the non-uniform illumination challenge, we present a novel robust autonomous facial recognition system inspired by the human visual system, based on the so-called logarithmical image visualization technique. In this paper, the proposed method, for the first time, couples the logarithmical image visualization technique with the local binary pattern to perform discriminative feature extraction for facial recognition. The Yale database, the Yale-B database, and the ATT database are used to test accuracy and efficiency in computer simulation. The extensive computer simulation demonstrates the method's efficiency, accuracy, and robustness of illumination invariance for facial recognition.
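The local binary pattern (LBP) descriptor named in this abstract encodes each pixel by thresholding its 3x3 neighbourhood against the centre value. A minimal sketch of the basic 8-bit code (the grid values are illustrative; real pipelines histogram these codes over image cells):

```python
# Basic 3x3 local binary pattern (LBP) code: each neighbour that is
# >= the centre pixel contributes one bit. Patch values are toy data.

def lbp_code(patch):
    """8-bit LBP code of the centre pixel of a 3x3 patch."""
    c = patch[1][1]
    # Clockwise neighbour order starting at top-left.
    neighbours = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
                  patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    code = 0
    for bit, n in enumerate(neighbours):
        if n >= c:
            code |= 1 << bit
    return code

code = lbp_code([[6, 5, 2],
                 [7, 6, 1],
                 [9, 8, 7]])  # -> 241
```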
Wen, Yi Feng; McGrath, Colman Patrick
2017-01-01
Introduction: Existing studies on facial growth were mostly cross-sectional in nature, and only a limited number of facial measurements were investigated. The purposes of this study were to longitudinally investigate facial growth of Chinese in Hong Kong from 12 through 15 to 18 years of age and to compare the magnitude of growth changes between genders. Methods and findings: Standardized frontal and lateral facial photographs were taken from 266 (149 females and 117 males) and 265 (145 females and 120 males) participants, respectively, at all three age levels. Linear and angular measurements, profile inclinations, and proportion indices were recorded. Statistical analyses were performed to investigate growth changes of facial features. Comparisons were made between genders in terms of the magnitude of growth changes from ages 12 to 15, 15 to 18, and 12 to 18 years. For the overall face, all linear measurements increased significantly (p < 0.05) except for height of the lower profile in females (p = 0.069) and width of the face in males (p = 0.648). In both genders, the increase in height of eye fissure was around 10% (p < 0.001). There was a significant decrease in nasofrontal angle (p < 0.001) and increase in nasofacial angle (p < 0.001) in both genders, and these changes were larger in males. Vermilion-total upper lip height index remained stable in females (p = 0.770) but increased in males (p = 0.020). Nasofrontal angle (effect size: 0.55) and lower vermilion contour index (effect size: 0.59) demonstrated a large magnitude of gender difference in the amount of growth changes from 12 to 18 years. Conclusions: Growth changes of facial features and gender differences in the magnitude of facial growth were determined. The findings may benefit different clinical specialties and other nonclinical fields where facial growth is of interest. PMID:29053713
Hierarchical Recognition Scheme for Human Facial Expression Recognition Systems
Siddiqi, Muhammad Hameed; Lee, Sungyoung; Lee, Young-Koo; Khan, Adil Mehmood; Truc, Phan Tran Ho
2013-01-01
Over the last decade, human facial expression recognition (FER) has emerged as an important research area. Several factors make FER a challenging research problem. These include varying light conditions in training and test images; the need for automatic and accurate face detection before feature extraction; and high similarity among different expressions that makes it difficult to distinguish them with high accuracy. This work implements a hierarchical linear discriminant analysis-based facial expression recognition (HL-FER) system to tackle these problems. Unlike previous systems, the HL-FER uses a pre-processing step to eliminate lighting effects, incorporates a new automatic face detection scheme, employs methods to extract both global and local features, and uses a hierarchical recognition scheme to overcome the problem of high similarity among different expressions. Unlike most previous works that were evaluated using a single dataset, the performance of the HL-FER is assessed using three publicly available datasets under three different experimental settings: n-fold cross-validation based on subjects for each dataset separately; n-fold cross-validation across datasets; and, finally, a last set of experiments to assess the effectiveness of each module of the HL-FER separately. A weighted average recognition accuracy of 98.7% across three different datasets, using three classifiers, indicates the success of employing the HL-FER for human FER. PMID:24316568
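The n-fold cross-validation protocol this abstract relies on partitions the samples into n disjoint folds, holding out one fold per round. A plain-Python sketch of the index splitting (no ML library, and no claim about how the authors grouped subjects):

```python
# n-fold cross-validation index splitting: every sample appears in
# exactly one test fold, and the remaining folds form the train set.

def n_fold_splits(n_samples, n_folds):
    """Yield (train, test) index lists for n-fold cross-validation."""
    indices = list(range(n_samples))
    fold_size, extra = divmod(n_samples, n_folds)
    folds, start = [], 0
    for f in range(n_folds):
        size = fold_size + (1 if f < extra else 0)
        folds.append(indices[start:start + size])
        start += size
    for f in range(n_folds):
        test = folds[f]
        train = [i for g, fold in enumerate(folds) if g != f for i in fold]
        yield train, test

splits = list(n_fold_splits(10, 5))
```

Subject-based cross-validation, as used in the first experimental setting, would group indices by subject before splitting so that no subject appears in both train and test.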
NASA Astrophysics Data System (ADS)
Petpairote, Chayanut; Madarasmi, Suthep; Chamnongthai, Kosin
2018-01-01
The practical identification of individuals using facial recognition techniques requires the matching of faces with specific expressions to faces from a neutral face database. A method for facial recognition under varied expressions against neutral face samples of individuals via recognition of expression warping and the use of a virtual expression-face database is proposed. In this method, facial expressions are recognized and the input expression faces are classified into facial expression groups. To aid facial recognition, the virtual expression-face database is sorted into average facial-expression shapes and by coarse- and fine-featured facial textures. Wrinkle information is also employed in classification by using a process of masking to adjust input faces to match the expression-face database. We evaluate the performance of the proposed method using the CMU multi-PIE, Cohn-Kanade, and AR expression-face databases, and we find that it provides significantly improved results in terms of face recognition accuracy compared to conventional methods and is acceptable for facial recognition under expression variation.
Automatic 2.5-D Facial Landmarking and Emotion Annotation for Social Interaction Assistance.
Zhao, Xi; Zou, Jianhua; Li, Huibin; Dellandrea, Emmanuel; Kakadiaris, Ioannis A; Chen, Liming
2016-09-01
People with low vision, Alzheimer's disease, and autism spectrum disorder experience difficulties in perceiving or interpreting facial expressions of emotion in their social lives. Though automatic facial expression recognition (FER) methods on 2-D videos have been extensively investigated, their performance is constrained by challenges in head pose and lighting conditions. The shape information in 3-D facial data can reduce or even overcome these challenges. However, the high expense of 3-D cameras prevents their widespread use. Fortunately, 2.5-D facial data from emerging portable RGB-D cameras provide a good balance for this dilemma. In this paper, we propose an automatic emotion annotation solution for 2.5-D facial data collected from RGB-D cameras. The solution consists of a facial landmarking method and a FER method. Specifically, we propose building a deformable partial face model and fitting the model to a 2.5-D face to localize facial landmarks automatically. For FER, a novel action unit (AU) space-based method is proposed. Facial features are extracted using landmarks and further represented as coordinates in the AU space, which are classified into facial expressions. Evaluated on three publicly accessible facial databases, namely the EURECOM, FRGC, and Bosphorus databases, the proposed facial landmarking and expression recognition methods have achieved satisfactory results. Possible real-world applications using our algorithms are also discussed.
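Classifying AU-space coordinates into expressions, as this abstract describes, can be illustrated with the simplest possible classifier: assign each face to the nearest expression centroid. A hedged sketch (the AU vectors and class centroids below are made up; the paper's actual classifier is not specified here):

```python
# Nearest-centroid classification in an "action unit (AU) space":
# a face becomes a vector of AU activations and gets the label of the
# closest expression centroid. All numbers are hypothetical.

def classify(sample, centroids):
    """Return the label of the nearest expression centroid."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist2(sample, centroids[label]))

centroids = {
    "happy":    [0.9, 0.1, 0.8],   # hypothetical mean AU activations
    "sad":      [0.1, 0.8, 0.1],
    "surprise": [0.8, 0.9, 0.2],
}
label = classify([0.85, 0.2, 0.7], centroids)  # -> "happy"
```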
Facial palsy after dental procedures - Is viral reactivation responsible?
Gaudin, Robert A; Remenschneider, Aaron K; Phillips, Katie; Knipfer, Christian; Smeets, Ralf; Heiland, Max; Hadlock, Tessa A
2017-01-01
Herpes labialis viral reactivation has been reported following dental procedures, but the incidence, characteristics, and outcomes of delayed peripheral facial nerve palsy following dental work are poorly understood. Herein we describe the unique features of delayed facial paresis following dental procedures. An institutional retrospective review was performed to identify patients diagnosed with delayed facial nerve palsy within 30 days of dental manipulation. Demographics, prodromal signs and symptoms, initial medical treatment, and outcomes were assessed. Of 2471 patients with facial palsy, 16 (0.7%) had delayed facial paresis following ipsilateral dental procedures. Average age at presentation was 44 years, and 56% (9/16) were female. Clinical evaluation was consistent with Bell's palsy in 14 patients (88%) and Ramsay-Hunt syndrome in 2 patients (12%). Patients developed facial paresis an average of 3.9 days after the dental procedure, with all individuals developing a flaccid paralysis (House-Brackmann (HB) grade VI) during the acute stage. Half of the patients developed persistent facial palsy in the form of non-flaccid facial paralysis (HB grade III-IV). Facial palsy, like herpes labialis, can occur in the days following dental procedures and may also be related to viral reactivation. In this small cohort, long-term facial outcomes appear worse than for spontaneous Bell's palsy. Copyright © 2016 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.
Facial color is an efficient mechanism to visually transmit emotion
Benitez-Quiroz, Carlos F.; Srinivasan, Ramprakash; Martinez, Aleix M.
2018-01-01
Facial expressions of emotion in humans are believed to be produced by contracting one’s facial muscles, generally called action units. However, the surface of the face is also innervated with a large network of blood vessels. Blood flow variations in these vessels yield visible color changes on the face. Here, we study the hypothesis that these visible facial colors allow observers to successfully transmit and visually interpret emotion even in the absence of facial muscle activation. To study this hypothesis, we address the following two questions. Are observable facial colors consistent within and differential between emotion categories and positive vs. negative valence? And does the human visual system use these facial colors to decode emotion from faces? These questions suggest the existence of an important, unexplored mechanism of the production of facial expressions of emotion by a sender and their visual interpretation by an observer. The results of our studies provide evidence in favor of our hypothesis. We show that people successfully decode emotion using these color features, even in the absence of any facial muscle activation. We also demonstrate that this color signal is independent from that provided by facial muscle movements. These results support a revised model of the production and perception of facial expressions of emotion where facial color is an effective mechanism to visually transmit and decode emotion. PMID:29555780
Neaux, Dimitri; Guy, Franck; Gilissen, Emmanuel; Coudyzer, Walter; Vignaud, Patrick; Ducrocq, Stéphane
2013-01-01
The organization of the bony face is complex, its morphology being influenced in part by the rest of the cranium. Characterizing the facial morphological variation and craniofacial covariation patterns in extant hominids is fundamental to the understanding of their evolutionary history. Numerous studies on hominid facial shape have proposed hypotheses concerning the relationship between the anterior facial shape, facial block orientation and basicranial flexion. In this study we test these hypotheses in a sample of adult specimens belonging to three extant hominid genera (Homo, Pan and Gorilla). Intraspecific variation and covariation patterns are analyzed using geometric morphometric methods and multivariate statistics, such as partial least squares on three-dimensional landmark coordinates. Our results indicate significant intraspecific covariation between facial shape, facial block orientation and basicranial flexion. Hominids share similar characteristics in the relationship between anterior facial shape and facial block orientation. Modern humans exhibit a specific pattern in the covariation between anterior facial shape and basicranial flexion. This peculiar feature underscores the role of modern humans' highly-flexed basicranium in the overall integration of the cranium. Furthermore, our results are consistent with the hypothesis of a relationship between the reduction of the value of the cranial base angle and a downward rotation of the facial block in modern humans, and to a lesser extent in chimpanzees. PMID:23441232
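Geometric morphometric analyses like this one start by removing translation, scale, and rotation from landmark configurations (Procrustes superimposition). A hedged 2-D sketch of the resulting shape distance (landmarks are toy data encoded as complex numbers so that rotation is a single complex multiply; the study itself works with 3-D landmarks):

```python
# Ordinary Procrustes distance between two 2-D landmark configurations:
# centre, scale to unit size, find the optimal rotation, then measure
# the residual difference. Shapes here are illustrative.
import cmath

def procrustes_distance(a, b):
    """Distance after removing translation, scale, and rotation."""
    def normalise(pts):
        centre = sum(pts) / len(pts)
        pts = [p - centre for p in pts]
        size = sum(abs(p) ** 2 for p in pts) ** 0.5
        return [p / size for p in pts]
    a, b = normalise(a), normalise(b)
    # Optimal rotation is the phase of sum(a * conj(b)).
    rot = cmath.exp(1j * cmath.phase(sum(x * y.conjugate()
                                         for x, y in zip(a, b))))
    return sum(abs(x - rot * y) ** 2 for x, y in zip(a, b)) ** 0.5

square = [0 + 0j, 1 + 0j, 1 + 1j, 0 + 1j]
# Same shape, but rotated, scaled implicitly by position, and shifted.
moved = [p * cmath.exp(1j * 0.7) + (3 + 2j) for p in square]
dist = procrustes_distance(square, moved)  # -> ~0.0
```

Partial least squares, as used in the study, then looks for maximally covarying axes between two such blocks of superimposed coordinates.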
Marur, Tania; Tuna, Yakup; Demirci, Selman
2014-01-01
Dermatologic problems of the face affect both function and aesthetics, which are based on complex anatomical features. Treating dermatologic problems while preserving the aesthetics and functions of the face requires knowledge of normal anatomy. To perform invasive procedures of the face successfully, it is essential to understand its underlying topographic anatomy. This chapter presents the anatomy of the facial musculature and neurovascular structures in a systematic way, with some clinically important aspects. We describe the attachments of the mimetic and masticatory muscles and emphasize their functions and nerve supply. We highlight clinically relevant facial topographic anatomy by explaining the course and location of the sensory and motor nerves of the face and the facial vasculature with their relations. Additionally, this chapter reviews the recent nomenclature of the branching pattern of the facial artery. © 2013 Elsevier Inc. All rights reserved.
de Souza, Bento Sousa; Bichara, Livia Monteiro; Guerreiro, João Farias; Quintão, Cátia Cardoso Abdo; Normando, David
2015-09-01
Indigenous people of the Xingu river present a similar tooth wear pattern, practise exclusive breast-feeding with no pacifier use, and have a large intertribal genetic distance. This study revisits the etiology of dental malocclusion features considering these population characteristics. Occlusion and facial features of five semi-isolated Amazon indigenous populations (n=351) were evaluated and compared to previously published data from urban Amazon people. Malocclusion prevalence ranged from 33.8% to 66.7%. Overall, this prevalence is lower than that reported for urban people, mainly regarding posterior crossbite. A high intertribal diversity was found. The Arara-Laranjal village had a population with a normal facial profile (98%) and a high rate of normal occlusion (66.2%), while another group from the same ethnicity presented a high prevalence of malocclusion, the highest occurrence of Class III malocclusion (32.6%) and long face (34.8%). In Pat-Krô village the population had the highest prevalence of Class II malocclusion (43.9%), convex profile (38.6%), increased overjet (36.8%) and deep bite (15.8%). Another village's population, from the same ethnicity, had a high frequency of anterior open bite (22.6%) and anterior crossbite (12.9%). The highest occurrence of bi-protrusion was found in the group with the lowest prevalence of dental crowding, and vice versa. Supported by previous genetic studies and given their similar environmental conditions, the high intertribal diversity of occlusal and facial features suggests that genetic factors contribute substantially to the morphology of occlusal and facial features in the indigenous groups studied. The low prevalence of posterior crossbite in the remote indigenous populations compared with urban populations may relate to prolonged breastfeeding and an absence of pacifiers in the indigenous groups. Copyright © 2015 Elsevier Ltd. All rights reserved.
FARRI, A.; ENRICO, A.; FARRI, F.
2012-01-01
SUMMARY In 1988, diagnostic criteria for headaches were drawn up by the International Headache Society (IHS) and divided into headaches, cranial neuralgias and facial pain. The 2nd edition of the International Classification of Headache Disorders (ICHD) was produced in 2004, and still provides a dynamic and useful instrument for clinical practice. We have examined the current ICHD, which comprises 14 groups. The first four cover primary headaches, with "benign paroxysmal vertigo of childhood" being the form of migraine of interest to otolaryngologists; groups 5 to 12 classify "secondary headaches"; group 11 is formed of "headache or facial pain attributed to disorder of cranium, neck, eyes, ears, nose, sinuses, teeth, mouth or other facial or cranial structures"; group 13, consisting of "cranial neuralgias and central causes of facial pain", is also of relevance to otolaryngology. Neither the current classification system nor the original one has a satisfactory collocation for migraine-associated vertigo. Another critical point of the classification concerns cranio-facial pain syndromes such as Sluder's neuralgia, previously included in the 1988 classification among cluster headaches, and now included in the section on "cranial neuralgias and central causes of facial pain", even though Sluder's neuralgia has not been adequately validated. As we have highlighted in our studies, there are considerable similarities between Sluder's syndrome and cluster headaches. The main features distinguishing the two are the tendency to cluster over time, found only in cluster headaches, and the distribution of pain, with greater nasal manifestations in the case of Sluder's syndrome. We believe that it is better and clearer, particularly on the basis of our clinical experience and published studies, to include this nosological entity, which is clearly distinct from an otolaryngological point of view, as a variant of cluster headache.
We agree with experts in the field of headaches, such as Olesen and Nappi, who contributed to previous classifications, on the need for a revised classification, particularly with regard to secondary headaches. According to the current Committee on headaches, the updated version of the classification, presently under study, is due to be published soon; it is our hope that this revised version will take into account some of the above considerations. PMID:22767967
The Morphometrics of “Masculinity” in Human Faces
Mitteroecker, Philipp; Windhager, Sonja; Müller, Gerd B.; Schaefer, Katrin
2015-01-01
In studies of social inference and human mate preference, a wide but inconsistent array of tools for computing facial masculinity has been devised. Several of these approaches implicitly assumed that the individual expression of sexually dimorphic shape features, which we refer to as maleness, resembles facial shape features perceived as masculine. We outline a morphometric strategy for estimating separately the face shape patterns that underlie perceived masculinity and maleness, and for computing individual scores for these shape patterns. We further show how faces with different degrees of masculinity or maleness can be constructed in a geometric morphometric framework. In an application of these methods to a set of human facial photographs, we found that shape features typically perceived as masculine are wide faces with a wide inter-orbital distance, a wide nose, thin lips, and a large and massive lower face. The individual expressions of this combination of shape features—the masculinity shape scores—were the best predictor of rated masculinity among the compared methods (r = 0.5). The shape features perceived as masculine only partly resembled the average face shape difference between males and females (sexual dimorphism). Discriminant functions and Procrustes distances to the female mean shape were poor predictors of perceived masculinity. PMID:25671667
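One of the scoring strategies this abstract compares projects each individual's shape vector onto the male-female mean-difference axis. A hedged sketch with toy two-feature "shape vectors" (not real morphometric data, and not the authors' full landmark-based pipeline):

```python
# Individual "maleness" scores: project every shape vector onto the
# unit-length male-minus-female mean-difference axis. Toy data only.

def mean(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def maleness_scores(male_shapes, female_shapes):
    """Score every individual along the sexual-dimorphism axis."""
    m, f = mean(male_shapes), mean(female_shapes)
    axis = [a - b for a, b in zip(m, f)]
    norm = sum(c * c for c in axis) ** 0.5
    axis = [c / norm for c in axis]
    def score(v):
        return sum(a * b for a, b in zip(axis, v))
    return [score(v) for v in male_shapes + female_shapes]

males   = [[2.0, 1.0], [2.2, 0.8]]
females = [[1.0, 1.2], [0.8, 1.4]]
scores = maleness_scores(males, females)
```

A separate "masculinity" axis, as the paper argues, would instead be estimated by regressing shape on perceived-masculinity ratings, and the two axes need not coincide.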
Case Report: Congenital Erythroleukemia in a Premature Infant with Dysmorphic Features.
Helin, Heidi; van der Walt, Jon; Holder, Muriel; George, Simi
2016-01-01
We present a case of pure erythroleukemia, diagnosed at autopsy, in a dysmorphic premature infant who died of multiorgan failure within 24 hours of birth. Dysmorphic features included facial and limb abnormalities with long philtrum, micrognathia, downturned mouth, and short neck, as well as abnormal and missing nails, a missing distal phalanx of the second toe, and overlapping toes. Internal findings included gross hepatomegaly and patchy hemorrhages in the liver, splenomegaly, and cardiomegaly; and subdural, intracerebral, and intraventricular hemorrhages. Histology revealed infiltration of bone marrow, kidney, heart, liver, adrenal, lung, spleen, pancreas, thyroid, testis, thymus, and placenta by pure erythroleukemia. Only 6 cases of congenital erythroleukemia have been previously reported, with autopsy findings similar to those of this case. The dysmorphic features, although not fitting any specific syndrome, make this case unique. Congenital erythroleukemia and possible syndromes suggested by the dysmorphic features are discussed.
Kraft, S P; Lang, A E
1988-01-01
Blepharospasm, the most frequent feature of cranial dystonia, and hemifacial spasm are two involuntary movement disorders that affect facial muscles. The cause of blepharospasm and other forms of cranial dystonia is not known. Hemifacial spasm is usually due to compression of the seventh cranial nerve at its exit from the brain stem. Cranial dystonia may result in severe disability. Hemifacial spasm tends to be much less disabling but may cause considerable distress and embarrassment. Patients affected with these disorders are often mistakenly considered to have psychiatric problems. Although the two disorders are quite distinct pathophysiologically, therapy with botulinum toxin has proven very effective in both. We review the clinical features, proposed pathophysiologic features, differential diagnosis and treatment, including the use of botulinum toxin, of cranial dystonia and hemifacial spasm. PMID:3052771
Facial correlates of emotional behaviour in the domestic cat (Felis catus).
Bennett, Valerie; Gourkow, Nadine; Mills, Daniel S
2017-08-01
Leyhausen's (1979) work on cat behaviour and facial expressions associated with offensive and defensive behaviour is widely embraced as the standard for interpretation of agonistic behaviour in this species. However, it is a largely anecdotal description that can be easily misunderstood. Recently a facial action coding system has been developed for cats (CatFACS), similar to that used for objectively coding human facial expressions. This study reports on the use of this system to describe the relationship between behaviour and facial expressions of cats in confinement contexts with and without human interaction, in order to generate hypotheses about the relationship between these expressions and underlying emotional state. Video recordings taken of 29 cats resident in a Canadian animal shelter were analysed using 1-0 sampling of 275 4-s video clips. Observations under the two conditions were analysed descriptively using hierarchical cluster analysis for binomial data and indicated that in both situations, about half of the data clustered into three groups. An argument is presented that these largely reflect states based on varying degrees of relaxed engagement, fear and frustration. Facial actions associated with fear included blinking and half-blinking and a left head and gaze bias at lower intensities. Facial actions consistently associated with frustration included hissing, nose-licking, dropping of the jaw, raising of the upper lip, nose wrinkling, lower lip depression, parting of the lips, mouth stretching, vocalisation and showing of the tongue. Relaxed engagement appeared to be associated with a right gaze and head turn bias. The results also indicate potential qualitative changes associated with differences in intensity of emotional expression following human intervention. The results were also compared to the classic description of "offensive and defensive moods" in cats (Leyhausen, 1979) and previous work by Gourkow et al. (2014a) on behavioural styles in cats in order to assess whether these observations had replicable features noted by others. This revealed evidence of convergent validity between the methods. However, the use of CatFACS revealed elements relating to vocalisation and response lateralisation not previously reported in this literature. Copyright © 2017 Elsevier B.V. All rights reserved.
A small-world network model of facial emotion recognition.
Takehara, Takuma; Ochiai, Fumio; Suzuki, Naoto
2016-01-01
Various models have been proposed to increase understanding of the cognitive basis of facial emotions. Despite those efforts, interactions between facial emotions have received minimal attention. If collective behaviours relating to each facial emotion in the comprehensive cognitive system could be assumed, specific facial emotion relationship patterns might emerge. In this study, we demonstrate that the frameworks of complex networks can effectively capture those patterns. We generate 81 facial emotion images (6 prototypes and 75 morphs) and then ask participants to rate degrees of similarity in 3240 facial emotion pairs in a paired comparison task. A facial emotion network constructed on the basis of similarity clearly forms a small-world network, which features an extremely short average network distance and close connectivity. Further, even if two facial emotions have opposing valences, they are connected within only two steps. In addition, we show that intermediary morphs are crucial for maintaining full network integration, whereas prototypes are not at all important. These results suggest the existence of collective behaviours in the cognitive systems of facial emotions and also describe why people can efficiently recognize facial emotions in terms of information transmission and propagation. For comparison, we construct three simulated networks: one based on the categorical model, one based on the dimensional model, and one random network. The results reveal that small-world connectivity in facial emotion networks is clearly different from that of these networks, suggesting that a small-world network is the most suitable model for capturing the cognitive basis of facial emotions.
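A small-world network is characterized by a short average shortest-path length combined with high clustering. Both quantities can be computed with a few lines of plain Python; the toy graph below is illustrative, not the facial-emotion network itself:

```python
# Average shortest-path length (via breadth-first search) and mean
# clustering coefficient: the two quantities defining small-worldness.
from collections import deque

def avg_path_length(adj):
    """Mean BFS distance over all ordered node pairs (connected graph)."""
    nodes = list(adj)
    total, pairs = 0, 0
    for s in nodes:
        dist, q = {s: 0}, deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        for t in nodes:
            if t != s:
                total += dist[t]
                pairs += 1
    return total / pairs

def clustering(adj):
    """Mean fraction of each node's neighbour pairs that are linked."""
    cs = []
    for u, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            cs.append(0.0)
            continue
        links = sum(1 for a in nbrs for b in nbrs if a < b and b in adj[a])
        cs.append(2 * links / (k * (k - 1)))
    return sum(cs) / len(cs)

# Toy graph: a triangle (0-1-2) plus a pendant node 3.
g = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
L, C = avg_path_length(g), clustering(g)
```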
Recognition of clinical characteristics for population-based surveillance of fetal alcohol syndrome.
Andrews, Jennifer G; Galindo, Maureen K; Meaney, F John; Benavides, Argelia; Mayate, Linnette; Fox, Deborah; Pettygrove, Sydney; O'Leary, Leslie; Cunniff, Christopher
2018-06-01
The diagnosis of fetal alcohol syndrome (FAS) rests on identification of characteristic facial, growth, and central nervous system (CNS) features. Public health surveillance of FAS depends on documentation of these characteristics. We evaluated whether reporting of FAS characteristics is associated with the type of provider examining the child. We analyzed cases aged 7-9 years from the Fetal Alcohol Syndrome Surveillance Network II (FASSNetII). We included cases whose surveillance records included the type of provider (qualifying provider: developmental pediatrician, geneticist, neonatologist; other physician; or other provider) who evaluated the child, as well as the FAS diagnostic characteristics (facial dysmorphology, CNS impairment, and/or growth deficiency) reported by the provider. A total of 345 cases were eligible for this analysis. Of these, 188 (54.5%) had adequate information on type of provider. Qualifying physicians averaged more than six reported FAS characteristics, while other providers averaged fewer than five. Qualifying physicians reported on facial characteristics and developmental delay more frequently than other providers. Qualifying physicians also reported on all three domains of characteristics (facial, CNS, and growth) in 97% of cases, while others reported all three in two-thirds of cases. Documentation in medical records during clinical evaluations for FAS is lower than optimal for cross-provider communication and surveillance purposes. Lack of documentation limits the quality and quantity of information in records that serve as a major source of data for public health surveillance systems. © 2018 Wiley Periodicals, Inc.
Kabuki syndrome: expanding the phenotype to include microphthalmia and anophthalmia.
McVeigh, Terri P; Banka, Siddharth; Reardon, William
2015-10-01
Kabuki syndrome is a rare genetic malformation syndrome that is characterized by distinct facies, structural defects and intellectual disability. Kabuki syndrome may be caused by mutations in one of two histone methyltransferase genes: KMT2D and KDM6A. We describe a male child of nonconsanguineous Irish parents presenting with multiple malformations, including bilateral extreme microphthalmia; cleft palate; congenital diaphragmatic hernia; duplex kidney; as well as facial features of Kabuki syndrome, including interrupted eyebrows and lower lid ectropion. A de-novo germline mutation in KMT2D was identified. Whole-exome sequencing failed to reveal mutations in any of the known microphthalmia/anophthalmia genes. We also identified four other patients with Kabuki syndrome and microphthalmia. We postulate that Kabuki syndrome may produce this type of ocular phenotype as a result of extensive interaction between KMT2D, WAR complex proteins and PAXIP1. Children presenting with microphthalmia/anophthalmia should be examined closely for other signs of Kabuki syndrome, especially at an age where the facial gestalt might be less readily appreciable.
Cross-Cultural Agreement in Facial Attractiveness Preferences: The Role of Ethnicity and Gender
Coetzee, Vinet; Greeff, Jaco M.; Stephen, Ian D.; Perrett, David I.
2014-01-01
Previous work showed high agreement in facial attractiveness preferences within and across cultures. The aims of the current study were twofold. First, we tested cross-cultural agreement in the attractiveness judgements of White Scottish and Black South African students for own- and other-ethnicity faces. Results showed significant agreement between White Scottish and Black South African observers' attractiveness judgements, providing further evidence of strong cross-cultural agreement in facial attractiveness preferences. Second, we tested whether cross-cultural agreement is influenced by the ethnicity and/or the gender of the target group. White Scottish and Black South African observers showed significantly higher agreement for Scottish than for African faces, presumably because both groups are familiar with White European facial features, but the Scottish group are less familiar with Black African facial features. Further work investigating this discordance in cross-cultural attractiveness preferences for African faces shows that Black South African observers rely more heavily on colour cues when judging African female faces for attractiveness, while White Scottish observers rely more heavily on shape cues. Results also show higher cross-cultural agreement for female, compared to male faces, albeit not significantly higher. The findings shed new light on the factors that influence cross-cultural agreement in attractiveness preferences. PMID:24988325
Functional connectivity between amygdala and facial regions involved in recognition of facial threat
Harada, Tokiko; Ruffman, Ted; Sadato, Norihiro; Iidaka, Tetsuya
2013-01-01
The recognition of threatening faces is important for making social judgments. For example, threatening facial features of defendants could affect the decisions of jurors during a trial. Previous neuroimaging studies using faces of members of the general public have identified a pivotal role of the amygdala in perceiving threat. This functional magnetic resonance imaging study used face photographs of male prisoners who had been convicted of first-degree murder (MUR) as threatening facial stimuli. We compared the subjective ratings of MUR faces with those of control (CON) faces and examined how they were related to brain activation, particularly, the modulation of the functional connectivity between the amygdala and other brain regions. The MUR faces were perceived to be more threatening than the CON faces. The bilateral amygdala was shown to respond to both MUR and CON faces, but subtraction analysis revealed no significant difference between the two. Functional connectivity analysis indicated that the extent of connectivity between the left amygdala and the face-related regions (i.e. the superior temporal sulcus, inferior temporal gyrus and fusiform gyrus) was correlated with the subjective threat rating for the faces. We have demonstrated that the functional connectivity is modulated by vigilance for threatening facial features. PMID:22156740
Utsuno, Hajime; Kageyama, Toru; Uchida, Keiichi; Kibayashi, Kazuhiko; Sakurada, Koichi; Uemura, Koichi
2016-02-01
Skull-photo superimposition is a technique used to identify the relationship between the skull and a photograph of a target person; facial reconstruction reproduces antemortem facial features from an unknown human skull, or identifies the facial features of unknown human skeletal remains. These techniques are based on soft tissue thickness and the relationships between soft tissue and the skull, i.e., the position of the ear and external acoustic meatus, pupil and orbit, nose and nasal aperture, and lips and teeth. However, the ear and nose regions are relatively difficult to identify because of their structure, as the soft tissues of these regions are lined with cartilage. We attempted to establish a more accurate method to determine the position of the nasal tip from the skull. We measured the height of the maxilla and mid-lower facial region in 55 Japanese men and generated a regression equation from the collected data. We obtained a result that was 2.0 ± 0.99 mm (mean ± SD) distant from the true nasal tip, when applied to a validation set consisting of another 12 Japanese men. Copyright © 2015 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
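The regression approach described can be sketched as an ordinary least-squares fit followed by the same kind of held-out validation (55 training skulls, 12 validation skulls, error reported as mean ± SD). All measurement values and coefficients below are synthetic stand-ins, not the published equation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the 55 training skulls: maxilla height and
# mid-lower facial height (mm) predicting nasal tip projection (mm).
n = 55
maxilla = rng.normal(70.0, 4.0, n)
midlower = rng.normal(65.0, 3.5, n)
# Hypothetical ground truth: tip = 0.4*maxilla + 0.3*midlower + 5 + noise.
tip = 0.4 * maxilla + 0.3 * midlower + 5.0 + rng.normal(0.0, 1.0, n)

# Fit the regression equation by ordinary least squares.
X = np.column_stack([maxilla, midlower, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, tip, rcond=None)

# Validate on 12 held-out skulls, reporting mean +/- SD distance from
# the true tip, as the abstract does.
m_val = rng.normal(70.0, 4.0, 12)
l_val = rng.normal(65.0, 3.5, 12)
t_val = 0.4 * m_val + 0.3 * l_val + 5.0 + rng.normal(0.0, 1.0, 12)
pred = np.column_stack([m_val, l_val, np.ones(12)]) @ coef
err = np.abs(pred - t_val)
print(f"validation error: {err.mean():.2f} +/- {err.std():.2f} mm")
```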
Weighted Feature Gaussian Kernel SVM for Emotion Recognition
Jia, Qingxuan
2016-01-01
Emotion recognition with weighted features based on facial expression is a challenging research topic and has attracted great attention in the past few years. This paper presents a novel method, utilizing the subregion recognition rate to weight the kernel function. First, we divide the facial expression image into uniform subregions and calculate the corresponding recognition rate and weight for each. Then, we get a weighted feature Gaussian kernel function and construct a classifier based on Support Vector Machine (SVM). Finally, the experimental results suggest that the approach based on the weighted feature Gaussian kernel function achieves a high correct-recognition rate in emotion recognition. The experiments on the extended Cohn-Kanade (CK+) dataset show that our method has achieved encouraging recognition results compared to the state-of-the-art methods. PMID:27807443
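A minimal sketch of the weighted feature Gaussian kernel idea, assuming per-subregion weights have already been derived (the paper obtains them from each subregion's standalone recognition rate; here they are fixed by hand, and the data are random toy vectors rather than expression descriptors):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)

# Toy setup: each face is described by 4 subregion feature vectors of
# length 3 (a stand-in for per-subregion expression descriptors).
n_regions, dim = 4, 3
X = rng.normal(size=(80, n_regions * dim))
y = rng.integers(0, 2, 80)
# Make regions 0 and 1 informative so weighting them highly pays off.
X[y == 1, : 2 * dim] += 1.5

# Hypothetical per-subregion weights, expanded to per-feature weights.
weights = np.array([0.4, 0.3, 0.2, 0.1]).repeat(dim)

def weighted_rbf(A, B, gamma=0.5):
    """Gaussian kernel with subregion weights folded into the distance."""
    Aw = A * np.sqrt(weights)
    Bw = B * np.sqrt(weights)
    d2 = ((Aw[:, None, :] - Bw[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# SVC accepts a callable kernel returning the Gram matrix.
clf = SVC(kernel=weighted_rbf).fit(X[:60], y[:60])
print("held-out accuracy:", clf.score(X[60:], y[60:]))
```

Note that scaling each feature by the square root of its weight before the ordinary squared-distance computation is exactly equivalent to weighting the per-feature squared differences inside the exponent.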
Tanaka, Akemi J; Cho, Megan T; Retterer, Kyle; Jones, Julie R; Nowak, Catherine; Douglas, Jessica; Jiang, Yong-Hui; McConkie-Rosell, Allyn; Schaefer, G Bradley; Kaylor, Julie; Rahman, Omar A; Telegrafi, Aida; Friedman, Bethany; Douglas, Ganka; Monaghan, Kristin G; Chung, Wendy K
2016-01-01
We identified five unrelated individuals with significant global developmental delay and intellectual disability (ID), dysmorphic facial features and frequent microcephaly, and de novo predicted loss-of-function variants in chromosome alignment maintaining phosphoprotein 1 (CHAMP1). Our findings are consistent with recently reported de novo mutations in CHAMP1 in five other individuals with similar features. CHAMP1 is a zinc finger protein involved in kinetochore-microtubule attachment and is required for regulating the proper alignment of chromosomes during metaphase in mitosis. Mutations in CHAMP1 may affect cell division and hence brain development and function, resulting in developmental delay and ID.
Gallucci, Marcello; Ricciardelli, Paola
2018-01-01
Social exclusion is a painful experience that is felt as a threat to the human need to belong and can lead to increased aggressive and anti-social behaviours, and results in emotional and cognitive numbness. Excluded individuals also seem to show an automatic tuning to positivity: they tend to increase their selective attention towards social acceptance signals. Although these effects are known in the literature, the consequences of social exclusion on social information processing still need to be explored in depth. The aim of this study was to investigate the effects of social exclusion on processing two features that are strictly bound in the appraisal of the meaning of facial expressions: gaze direction and emotional expression. In two experiments (N = 60, N = 45), participants were asked to identify gaze direction or emotional expressions from facial stimuli, in which both these features were manipulated. They performed these tasks in a four-block crossed design after being socially included or excluded using the Cyberball game. Participants’ empathy and self-reported emotions were recorded using the Empathy Quotient (EQ) and PANAS questionnaires. The Need Threat Scale and three additional questions were also used as manipulation checks in the second experiment. In both experiments, excluded participants were less accurate than included participants in gaze direction discrimination. Modulatory effects of direct gaze (Experiment 1) and sad expression (Experiment 2) on the effects of social exclusion were found on response times (RTs) in the emotion recognition task. Specific differences in the reaction to social exclusion between males and females were also found in Experiment 2: excluded male participants tended to be less accurate and faster than included male participants, while excluded females showed a more accurate and slower performance than included female participants. No influence of social exclusion on PANAS or EQ scores was found.
Results are discussed in the context of the importance of identifying gaze direction in appraisal theories. PMID:29617410
2005-04-08
A category, feature identification, has been added to address such worn or carried objects, and facial recognition. The definitions also address commercial... Cell phone or revolver − Uniform worn by French or US or Chinese infantry − Facial recognition/identification (a particular person can be
Reduced Reliance on Optimal Facial Information for Identity Recognition in Autism Spectrum Disorder
ERIC Educational Resources Information Center
Leonard, Hayley C.; Annaz, Dagmara; Karmiloff-Smith, Annette; Johnson, Mark H.
2013-01-01
Previous research into face processing in autism spectrum disorder (ASD) has revealed atypical biases toward particular facial information during identity recognition. Specifically, a focus on features (or high spatial frequencies [HSFs]) has been reported for both face and nonface processing in ASD. The current study investigated the development…
Gaze control during face exploration in schizophrenia.
Delerue, Céline; Laprévote, Vincent; Verfaillie, Karl; Boucart, Muriel
2010-10-04
Patients with schizophrenia perform worse than controls on various face perception tasks. Studies monitoring eye movements have shown reduced scan paths and a lower number of fixations to relevant facial features (eyes, nose, mouth) than to other parts. We examine whether attentional control, through instructions, modulates visual scanning in schizophrenia. Visual scan paths were monitored in 20 patients with schizophrenia and 20 controls. Participants started with a "free viewing" task followed by tasks in which they were asked to determine the gender, identify the facial expression, estimate the age, or decide whether the face was known or unknown. Temporal and spatial characteristics of scan paths were compared for each group and task. Consistent with the literature, patients with schizophrenia showed reduced attention to salient facial features in the passive viewing. However, their scan paths did not differ from those of controls when asked to determine the facial expression, the gender, the age or the familiarity of the face. The results are interpreted in terms of attentional control and cognitive flexibility. (c) 2010 Elsevier Ireland Ltd. All rights reserved.
Ontogenetic and static allometry in the human face: contrasting Khoisan and Inuit.
Freidline, Sarah E; Gunz, Philipp; Hublin, Jean-Jacques
2015-09-01
Regional differences in modern human facial features are present at birth, and ontogenetic allometry contributes to variation in adults. However, details regarding differential rates of growth and timing among regional groups are lacking. We explore ontogenetic and static allometry in a cross-sectional sample spanning Africa, Europe and North America, and evaluate tempo and mode in two regional groups with very different adult facial morphology, the Khoisan and Inuit. Semilandmark geometric morphometric methods, multivariate statistics and growth simulations were used to quantify and compare patterns of facial growth and development. Region-specific facial morphology develops early in ontogeny. The Inuit group has the most distinct morphology and exhibits heterochronic differences in development compared to other regional groups. Allometric patterns differ during early postnatal development, when significant increases in size are coupled with large amounts of shape changes. All regional groups share a common adult static allometric trajectory, which can be attributed to sexual dimorphism, and the corresponding allometric shape changes resemble developmental patterns during later ontogeny. The amount and pattern of growth and development may not be shared between regional groups, indicating that a certain degree of flexibility is allowed for in order to achieve adult size. In early postnatal development the face is less constrained compared to other parts of the cranium, allowing for greater evolvability. The early development of region-specific facial features combined with heterochronic differences in timing or rate of growth, reflected in differences in facial size, suggests different patterns of postnatal growth. © 2015 Wiley Periodicals, Inc.
Tsang, Erica; Rupps, Rosemarie; McGillivray, Barbara; Eydoux, Patrice; Marra, Marco; Arbour, Laura; Langlois, Sylvie; Friedman, Jan M; Zahir, Farah R
2012-10-01
[Bonnet et al. (2010); J Med Genet 47: 377-384] recently suggested a 4q21 microdeletion syndrome with several common features, including severe intellectual disability, lack of speech, hypotonia, significant growth restriction, and distinctive facial features. Overlap of the deleted regions of 13 patients, including a patient we previously reported, delineates a critical region, with PRKG2 and RASGEF1B emerging as candidate genes. Here we provide a detailed clinical report and photographic life history of our previously reported patient. Previous case reports of this new syndrome have not described the prognosis or natural history of these patients. Copyright © 2012 Wiley Periodicals, Inc.
Facial anthropometric differences among gender, ethnicity, and age groups.
Zhuang, Ziqing; Landsittel, Douglas; Benson, Stacey; Roberge, Raymond; Shaffer, Ronald
2010-06-01
The impact of race/ethnicity upon facial anthropometric data in the US workforce, and its implications for the development of personal protective equipment, has not been investigated to any significant degree. The proliferation of minority populations in the US workforce has increased the need to investigate differences in facial dimensions among these workers. The objective of this study was to determine the face shape and size differences among race and age groups from the National Institute for Occupational Safety and Health survey of 3997 US civilian workers. Survey participants were divided into two gender groups, four racial/ethnic groups, and three age groups. Measurements of height, weight, neck circumference, and 18 facial dimensions were collected using traditional anthropometric techniques. A multivariate analysis of the data was performed using Principal Component Analysis. An exploratory analysis to determine the effect that different demographic factors had on anthropometric features was performed via a linear model. The 21 anthropometric measurements, body mass index, and the first and second principal component scores were dependent variables, while gender, ethnicity, age, occupation, weight, and height served as independent variables. Gender significantly contributes to size for 19 of 24 dependent variables. African-Americans have statistically shorter, wider, and shallower noses than Caucasians. Hispanic workers have 14 facial features that are significantly larger than Caucasians, while their nose protrusion, height, and head length are significantly shorter. The other ethnic group was composed primarily of Asian subjects and has statistically different dimensions from Caucasians for 16 anthropometric values. Nineteen anthropometric values for subjects at least 45 years of age are statistically different from those measured for subjects between 18 and 29 years of age.
Workers employed in manufacturing, fire fighting, healthcare, law enforcement, and other occupational groups have facial features that differ significantly from those in construction. Statistically significant differences in facial anthropometric dimensions (P < 0.05) were noted between males and females, all racial/ethnic groups, and the subjects who were at least 45 years old when compared to workers between 18 and 29 years of age. These findings could be important to the design and manufacture of respirators, as well as employers responsible for supplying respiratory protective equipment to their employees.
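The analysis pipeline (Principal Component Analysis of the measurements, then a linear model relating component scores to demographic factors) can be sketched as follows; the sample size, effect sizes, and variable coding are illustrative inventions, not the NIOSH survey data:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)

# Synthetic stand-in for the survey: 200 workers x 21 facial dimensions,
# with gender shifting overall face size (illustrative values only).
n = 200
gender = rng.integers(0, 2, n)        # 0 = female, 1 = male
age_group = rng.integers(0, 3, n)     # three age bands
base = rng.normal(size=(n, 21))
measurements = base + gender[:, None] * 0.8   # males larger on average

# The first principal component summarises overall face size/shape.
pc_scores = PCA(n_components=2).fit_transform(measurements)

# Linear model: how much of the first PC do demographics explain?
X = np.column_stack([gender, age_group])
model = LinearRegression().fit(X, pc_scores[:, 0])
print("R^2 for PC1 ~ gender + age group:", model.score(X, pc_scores[:, 0]))
```

With this construction gender loads heavily on the first component while the age-group variable carries no signal, echoing the survey's finding that gender contributes to size for most dependent variables.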
Adapting Local Features for Face Detection in Thermal Image.
Ma, Chao; Trung, Ngo Thanh; Uchiyama, Hideaki; Nagahara, Hajime; Shimada, Atsushi; Taniguchi, Rin-Ichiro
2017-11-27
A thermal camera captures the temperature distribution of a scene as a thermal image. In thermal images, facial appearances of different people under different lighting conditions are similar. This is because facial temperature distribution is generally constant and not affected by lighting condition. This similarity in face appearances is advantageous for face detection. To detect faces in thermal images, cascade classifiers with Haar-like features are generally used. However, there are few studies exploring the local features for face detection in thermal images. In this paper, we introduce two approaches relying on local features for face detection in thermal images. First, we create new feature types by extending Multi-Block LBP. We consider a margin around the reference and the generally constant distribution of facial temperature. In this way, we make the features more robust to image noise and more effective for face detection in thermal images. Second, we propose an AdaBoost-based training method to get cascade classifiers with multiple types of local features. These feature types have different advantages. In this way we enhance the description power of local features. We did a hold-out validation experiment and a field experiment. In the hold-out validation experiment, we captured a dataset from 20 participants, comprising 14 males and 6 females. For each participant, we captured 420 images with 10 variations in camera distance, 21 poses, and 2 appearances (participant with/without glasses). We compared the performance of cascade classifiers trained by different sets of the features. The experiment results showed that the proposed approaches effectively improve the performance of face detection in thermal images. In the field experiment, we compared the face detection performance in realistic scenes using thermal and RGB images, and discussed the results.
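The margin idea for Multi-Block LBP can be illustrated with a toy descriptor: the mean of each of the 8 surrounding blocks is compared to the centre block's mean, and the corresponding bit is set only when the difference exceeds a margin, which suppresses noise-driven bit flips. The 3x3 block layout, neighbour ordering, and margin value below are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def mb_lbp_margin(patch, margin=2.0):
    """Multi-Block LBP with a noise margin: flip a bit only when a
    surrounding block's mean exceeds the centre block's mean by more
    than `margin` (the margin value is illustrative)."""
    h, w = patch.shape
    bh, bw = h // 3, w // 3
    # Block means over a 3x3 grid of (bh x bw) blocks.
    means = patch[: 3 * bh, : 3 * bw].reshape(3, bh, 3, bw).mean(axis=(1, 3))
    centre = means[1, 1]
    # Neighbour order: clockwise starting from the top-left block.
    order = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    code = 0
    for bit, (i, j) in enumerate(order):
        if means[i, j] - centre > margin:
            code |= 1 << bit
    return code

# Toy 9x9 thermal patch: uniform background with a warmer top row of blocks.
patch = np.full((9, 9), 30.0)
patch[:3, :] = 36.0   # top three blocks are 6 degrees warmer
print(mb_lbp_margin(patch))   # bits 0-2 set -> 7
```

A fluctuation smaller than the margin (e.g. half a degree of sensor noise) leaves the code unchanged, which is the robustness property the abstract attributes to the extended features.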
Fakra, Eric; Jouve, Elisabeth; Guillaume, Fabrice; Azorin, Jean-Michel; Blin, Olivier
2015-03-01
Deficit in facial affect recognition is a well-documented impairment in schizophrenia, closely connected to social outcome. This deficit could be related to psychopathology, but also to a broader dysfunction in processing facial information. In addition, patients with schizophrenia inadequately use configural information-a type of processing that relies on spatial relationships between facial features. To date, no study has specifically examined the link between symptoms and misuse of configural information in the deficit in facial affect recognition. Unmedicated schizophrenia patients (n = 30) and matched healthy controls (n = 30) performed a facial affect recognition task and a face inversion task, which tests aptitude to rely on configural information. In patients, regressions were carried out between facial affect recognition, symptom dimensions and inversion effect. Patients, compared with controls, showed a deficit in facial affect recognition and a lower inversion effect. Negative symptoms and lower inversion effect could account for 41.2% of the variance in facial affect recognition. This study confirms the presence of a deficit in facial affect recognition, and also of dysfunctional use of configural information, in antipsychotic-free patients. Negative symptoms and poor processing of configural information explained a substantial part of the deficient recognition of facial affect. We speculate that this deficit may be caused by several factors, among which psychopathology and impaired use of configural information stand independently. PsycINFO Database Record (c) 2015 APA, all rights reserved.
Kuru, Kaya; Niranjan, Mahesan; Tunca, Yusuf; Osvank, Erhan; Azim, Tayyaba
2014-10-01
In general, medical geneticists aim to pre-diagnose underlying syndromes based on facial features before performing cytological or molecular analyses where a genotype-phenotype interrelation is possible. However, determining correct genotype-phenotype interrelationships among many syndromes is tedious and labor-intensive, especially for extremely rare syndromes. Thus, a computer-aided system for pre-diagnosis can facilitate effective and efficient decision support, particularly when few similar cases are available, or in remote rural districts where diagnostic knowledge of syndromes is not readily available. The proposed methodology, visual diagnostic decision support system (visual diagnostic DSS), employs machine learning (ML) algorithms and digital image processing techniques in a hybrid approach for automated diagnosis in medical genetics. This approach uses facial features in reference images of disorders to identify visual genotype-phenotype interrelationships. Our statistical method describes facial image data as principal component features and diagnoses syndromes using these features. The proposed system was trained using a real dataset of previously published face images of subjects with syndromes, which provided accurate diagnostic information. The method was tested using a leave-one-out cross-validation scheme with 15 different syndromes, each comprising 5-9 cases, i.e., 92 cases in total. An accuracy rate of 83% was achieved using this automated diagnosis technique, which was statistically significant (p<0.01). Furthermore, the sensitivity and specificity values were 0.857 and 0.870, respectively. Our results show that the accurate classification of syndromes is feasible using ML techniques. 
Thus, a large number of syndromes with characteristic facial anomaly patterns could be diagnosed with similar diagnostic DSSs to that described in the present study, i.e., visual diagnostic DSS, thereby demonstrating the benefits of using hybrid image processing and ML-based computer-aided diagnostics for identifying facial phenotypes. Copyright © 2014. Published by Elsevier B.V.
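The evaluation scheme (principal-component features of face images scored under leave-one-out cross-validation) can be sketched with scikit-learn; the data below are synthetic clusters standing in for the 92 face images, and the nearest-neighbour classifier is our own stand-in, since the abstract does not name the final classifier:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(3)

# Synthetic stand-in for the study's cohort: 3 "syndromes" x 8 cases,
# each face summarised by a 50-dim appearance vector.
centres = rng.normal(scale=3.0, size=(3, 50))
X = np.vstack([c + rng.normal(size=(8, 50)) for c in centres])
y = np.repeat(np.arange(3), 8)

# Principal-component features feeding a simple classifier, scored with
# the same leave-one-out scheme the study used. PCA is refit inside each
# fold so the held-out case never leaks into the feature basis.
clf = make_pipeline(PCA(n_components=5), KNeighborsClassifier(n_neighbors=3))
acc = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
print(f"leave-one-out accuracy: {acc:.2f}")
```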
19q13.32 microdeletion syndrome: three new cases.
Castillo, Angela; Kramer, Nancy; Schwartz, Charles E; Miles, Judith H; DuPont, Barbara R; Rosenfeld, Jill A; Graham, John M
2014-01-01
A previous report described a unique phenotype associated with an apparently de novo 732 kb 19q13.32 microdeletion, consisting of intellectual disability, facial asymmetry, ptosis, oculomotor abnormalities, orofacial clefts, cardiac defects, scoliosis and chronic constipation. We report three unrelated patients with developmental delay and dysmorphic features, who were all found to have interstitial 19q13.32 microdeletions of varying sizes. Both the previously reported patient and our Patient 1 with a larger, 1.3-Mb deletion have distinctive dysmorphic features and medical problems, allowing us to define a recognizable 19q13.32 microdeletion syndrome. Patient 1 was hypotonic and dysmorphic at birth, with aplasia of the posterior corpus callosum, bilateral ptosis, oculomotor paralysis, down-slanting palpebral fissures, facial asymmetry, submucosal cleft palate, micrognathia, wide-spaced nipples, right-sided aortic arch, hypospadias, bilateral inguinal hernias, double toenail of the left second toe, partial 2-3 toe syndactyly, kyphoscoliosis and colonic atony. Therefore, the common features of the 19q13.32 microdeletion syndrome include facial asymmetry, ptosis, oculomotor paralysis, orofacial clefting, micrognathia, kyphoscoliosis, aortic defects and colonic atony. These findings are probably related to a deletion of some combination of the 20-23 genes in common between these two patients, especially NPAS1, NAPA, ARHGAP35, SLC8A2, DHX34, MEIS3, and ZNF541. These candidate genes are expressed in the brain parenchyma, glia, heart, gastrointestinal tract and musculoskeletal system and likely play a fundamental role in the expression of this phenotype. This report delineates the phenotypic spectrum associated with the haploinsufficiency of genes found in 19q13.32. Copyright © 2014 Elsevier Masson SAS. All rights reserved.
Daini, Roberta; Comparetti, Chiara M.; Ricciardelli, Paola
2014-01-01
Neuropsychological and neuroimaging studies have shown that facial recognition and emotional expressions are dissociable. However, it is unknown if a single system supports the processing of emotional and non-emotional facial expressions. We aimed to understand if individuals with impairment in face recognition from birth (congenital prosopagnosia, CP) can use non-emotional facial expressions to recognize a face as an already seen one, and thus, process this facial dimension independently from features (which are impaired in CP), and basic emotional expressions. To this end, we carried out a behavioral study in which we compared the performance of 6 CP individuals to that of typical development individuals, using upright and inverted faces. Four avatar faces with a neutral expression were presented in the initial phase. The target faces presented in the recognition phase, in which a recognition task was requested (2AFC paradigm), could be identical (neutral) to those of the initial phase or present biologically plausible changes to features, non-emotional expressions, or emotional expressions. After this task, a second task was performed, in which the participants had to detect whether or not the recognized face exactly matched the study face or showed any difference. The results confirmed the CPs' impairment in the configural processing of the invariant aspects of the face, but also showed a spared configural processing of non-emotional facial expression (task 1). Interestingly and unlike the non-emotional expressions, the configural processing of emotional expressions was compromised in CPs and did not improve their change detection ability (task 2). These new results have theoretical implications for face perception models since they suggest that, at least in CPs, non-emotional expressions are processed configurally, can be dissociated from other facial dimensions, and may serve as a compensatory strategy to achieve face recognition. PMID:25520643
Face aging effect simulation model based on multilayer representation and shearlet transform
NASA Astrophysics Data System (ADS)
Li, Yuancheng; Li, Yan
2017-09-01
In order to extract detailed facial features, we build a face aging effect simulation model based on multilayer representation and shearlet transform. The face is divided into three layers: the global layer of the face, the local feature layer, and the texture layer, with a separate aging model established for each. First, the training samples are classified according to different age groups, and we use an active appearance model (AAM) at the global level to obtain facial features. The regression equations of shape and texture with age are obtained by fitting support vector machine regression based on the radial basis function. We use AAM to simulate the aging of facial organs. Then, for the texture detail layer, we acquire the significant high-frequency characteristic components of the face by using the multiscale shearlet transform. Finally, we obtain the final simulated aging images of the human face via the fusion algorithm. Experiments are carried out on the FG-NET dataset, and the experimental results show that the simulated face images differ only slightly from the original images and demonstrate a good face aging simulation effect.
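The global-layer step (regressing AAM parameters on age with RBF-kernel support vector regression) can be sketched as below; the single scalar "shape parameter" and its age curve are invented for illustration, not FG-NET values.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(4)

# Stand-in for one AAM shape/texture parameter that drifts nonlinearly
# with age (hypothetical curve, not measured data).
age = rng.uniform(0, 60, 150)
shape_param = 0.05 * age - 0.0004 * age**2 + rng.normal(0, 0.1, 150)

# RBF-kernel support vector regression of the parameter on age, as the
# abstract describes for the global layer.
svr = SVR(kernel="rbf", C=10.0, gamma=0.01).fit(age[:, None], shape_param)

# Simulating aging then amounts to evaluating the fitted curve at a
# target age and synthesising the face from the predicted parameters.
for target in (10, 30, 50):
    print(target, float(svr.predict([[target]])[0]))
```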
When is facial paralysis Bell palsy? Current diagnosis and treatment.
Ahmed, Anwar
2005-05-01
Bell palsy is largely a diagnosis of exclusion, but certain features in the history and physical examination help distinguish it from facial paralysis due to other conditions: eg, abrupt onset with complete, unilateral facial weakness at 24 to 72 hours, and, on the affected side, numbness or pain around the ear, a reduction in taste, and hypersensitivity to sounds. Corticosteroids and antivirals given within 10 days of onset have been shown to help. But Bell palsy resolves spontaneously without treatment in most patients within 6 months.
Facial movements strategically camouflage involuntary social signals of face morphology.
Gill, Daniel; Garrod, Oliver G B; Jack, Rachael E; Schyns, Philippe G
2014-05-01
Animals use social camouflage as a tool of deceit to increase the likelihood of survival and reproduction. We tested whether humans can also strategically deploy transient facial movements to camouflage the default social traits conveyed by the phenotypic morphology of their faces. We used the responses of 12 observers to create models of the dynamic facial signals of dominance, trustworthiness, and attractiveness. We applied these dynamic models to facial morphologies differing on perceived dominance, trustworthiness, and attractiveness to create a set of dynamic faces; new observers rated each dynamic face according to the three social traits. We found that specific facial movements camouflage the social appearance of a face by modulating the features of phenotypic morphology. A comparison of these facial expressions with those similarly derived for facial emotions showed that social-trait expressions, rather than being simple one-to-one overgeneralizations of emotional expressions, are a distinct set of signals composed of movements from different emotions. Our generative face models represent novel psychophysical laws for social sciences; these laws predict the perception of social traits on the basis of dynamic face identities.
11p15 duplication and 13q34 deletion with Beckwith-Wiedemann syndrome and factor VII deficiency.
Jurkiewicz, Dorota; Kugaudo, Monika; Tańska, Anna; Wawrzkiewicz-Witkowska, Angelika; Tomaszewska, Agnieszka; Kucharczyk, Marzena; Cieślikowska, Agata; Ciara, Elżbieta; Krajewska-Walasek, Małgorzata
2015-06-01
Here we report a patient with 11p15.4p15.5 duplication and 13q34 deletion presenting with Beckwith-Wiedemann syndrome (BWS) and moderate deficiency of factor VII (FVII). The duplication was initially diagnosed on methylation-sensitive multiplex ligation-dependent probe amplification. Array comparative genome hybridization confirmed its presence and indicated a 13q34 distal deletion. The patient's clinical symptoms, including developmental delay and facial dysmorphism, were typical of BWS with paternal 11p15 trisomy. Partial 13q monosomy in this patient is associated with moderate deficiency of FVII and may also overlap with a few symptoms of paternal 11p15 trisomy such as developmental delay and some facial features. To our knowledge this is the first report of 11p15.4p15.5 duplication associated with deletion of 13q34 and FVII deficiency. Moreover, this report emphasizes the importance of detailed clinical as well as molecular examinations in patients with BWS features and developmental delay. © 2015 Japan Pediatric Society.
ERIC Educational Resources Information Center
Weathers, Monica D.; Frank, Elaine M.; Spell, Leigh Ann
2002-01-01
Examined African Americans' and Whites' ability to recognize facial expressions and vocal prosody of predominantly white stimuli at three age groups (children, young adults, and adults). Race was a significant factor in interpreting facial expressions and prosodic features. Individuals from specific ethnic groups were most accurate in decoding…
Intellectual Abilities in a Large Sample of Children with Velo-Cardio-Facial Syndrome: An Update
ERIC Educational Resources Information Center
De Smedt, Bert; Devriendt, K.; Fryns, J. -P.; Vogels, A.; Gewillig, M.; Swillen, A.
2007-01-01
Background: Learning disabilities are one of the most consistently reported features in Velo-Cardio-Facial Syndrome (VCFS). Earlier reports on IQ in children with VCFS were, however, limited by small sample sizes and ascertainment biases. The aim of the present study was therefore to replicate these earlier findings and to investigate intellectual…
Facial soft tissue thickness in skeletal type I Japanese children.
Utsuno, Hajime; Kageyama, Toru; Deguchi, Toshio; Umemura, Yasunobu; Yoshino, Mineo; Nakamura, Hiroshi; Miyazawa, Hiroo; Inoue, Katsuhiro
2007-10-25
Facial reconstruction techniques used in forensic anthropology require knowledge of the facial soft tissue thickness of each race if facial features are to be reconstructed correctly. If this is inaccurate, so also will be the reconstructed face. Knowledge of differences by age and sex are also required. Therefore, when unknown human skeletal remains are found, the forensic anthropologist investigates for race, sex, and age, and for other variables of relevance. Cephalometric X-ray images of living persons can help to provide this information. They give an approximately 10% enlargement from true size and can demonstrate the relationship between soft and hard tissue. In the present study, facial soft tissue thickness in Japanese children was measured at 12 anthropological points using X-ray cephalometry in order to establish a database for facial soft tissue thickness. This study of both boys and girls, aged from 6 to 18 years, follows a previous study of Japanese female children only, and focuses on facial soft tissue thickness in only one skeletal type. Sex differences in thickness of tissue were found from 12 years of age upwards. The study provides more detailed and accurate measurements than past reports of facial soft tissue thickness, and reveals the uniqueness of the Japanese child's facial profile.
Facial Attractiveness Assessment Using Illustrated Questionnaires
MESAROS, ANCA; CORNEA, DANIELA; CIOARA, LIVIU; DUDEA, DIANA; MESAROS, MICHAELA; BADEA, MINDRA
2015-01-01
Introduction. An attractive facial appearance is considered nowadays to be a decisive factor in establishing successful interactions between humans. On this topic, the scientific literature states that some facial features have more impact than others, and important authors have revealed that certain proportions between different anthropometrical landmarks are mandatory for an attractive facial appearance. Aim. Our study aims to assess whether certain facial features weigh differently in people's judgments of facial attractiveness, in correlation with factors such as age, gender, specific training, and culture. Material and methods. A 5-item multiple-choice illustrated questionnaire was presented to 236 dental students. The Photoshop CS3 software was used to obtain the sets of images for the illustrated questions. The original image was handpicked from the internet by a panel of young dentists from a series of 15 pictures of people considered to have attractive faces. For each question, the images presented simulated deviations from the ideally symmetric and proportionate face. The sets of images consisted of multiple variants of these deviations mixed with the original photo. Junior and sophomore students from our dental medical school, of different nationalities, were asked to participate in the questionnaire. Simple descriptive statistics were used to interpret the data. Results. The questionnaire results showed that a majority of students considered overdevelopment of the lower facial third unattractive, while the original image with perfect symmetry and proportion was considered the most attractive by only 38.9% of the subjects. Likewise, regarding symmetry, 36.86% considered canting of the inter-commissural line unattractive. The interviewed subjects considered that, for a face to be attractive, it needs harmonious proportions between the different facial elements.
Conclusions. Considering an evaluation of facial attractiveness it is important to keep in mind that such assessment is subjective and influenced by multiple factors, among which the most important are cultural background and specific training. PMID:26528052
Vargo, J K; Gladwin, M; Ngan, P
2003-02-01
To compare the judgments of facial esthetics, defects, and treatment needs between laypersons and professionals (orthodontists and oral surgeons) as predictors of patients' motivation for orthognathic surgery. Two panels of expert and naïve raters were asked to evaluate photographs of orthognathic surgery patients for facial esthetics, defects, and treatment needs. Results were correlated with patients' motivation for surgery. Fifty-seven patients (37 females and 20 males) with a mean age of 26.0 +/- 6.7 years were interviewed prior to orthognathic surgery treatment. Three color photographs of each patient were evaluated by a panel of 14 experts and a panel of 18 laypersons. Each panel of raters was asked to evaluate facial morphology and facial attractiveness and to recommend surgical treatment (independent variables). The dependent variable was the patient's motivation for orthognathic surgery. Outcome measure: reliability of raters was analyzed using an unweighted kappa coefficient and a Cronbach alpha coefficient. Correlations and regression analyses were used to quantify the relationships between variables. Expert raters provided reliable ratings of certain morphological features such as excessive gingival display and classification of mandibular facial form and position. Based on the facial photographs, both expert and naïve raters agreed on the facial attractiveness of patients. The best predictors of patients' motivation for surgery were the naïve profile attractiveness rating and the patients' expected change in self-consciousness. Expert raters provided more reliable ratings on certain morphologic features. However, the laypersons' profile attractiveness rating and the patients' expected change in self-consciousness were the best predictors of patients' motivation for surgery. These data suggest that patients' motives for treatment are not necessarily related to objectively determined need.
Patients' decision to seek treatment was more strongly correlated with laypersons' ratings of attractiveness, presumably because patients see what other laypersons see and are directly or indirectly affected by others' reactions to their appearance. These findings may provide useful information for clinicians in counseling patients who seek orthognathic surgery.
Cues of Fatigue: Effects of Sleep Deprivation on Facial Appearance
Sundelin, Tina; Lekander, Mats; Kecklund, Göran; Van Someren, Eus J. W.; Olsson, Andreas; Axelsson, John
2013-01-01
Study Objective: To investigate the facial cues by which one recognizes that someone is sleep deprived versus not sleep deprived. Design: Experimental laboratory study. Setting: Karolinska Institutet, Stockholm, Sweden. Participants: Forty observers (20 women, mean age 25 ± 5 y) rated 20 facial photographs with respect to fatigue, 10 facial cues, and sadness. The stimulus material consisted of 10 individuals (five women) photographed at 14:30 after normal sleep and after 31 h of sleep deprivation following a night with 5 h of sleep. Measurements: Ratings of fatigue, fatigue-related cues, and sadness in facial photographs. Results: The faces of sleep deprived individuals were perceived as having more hanging eyelids, redder eyes, more swollen eyes, darker circles under the eyes, paler skin, more wrinkles/fine lines, and more droopy corners of the mouth (effects ranging from b = +3 ± 1 to b = +15 ± 1 mm on 100-mm visual analog scales, P < 0.01). The ratings of fatigue were related to glazed eyes and to all the cues affected by sleep deprivation (P < 0.01). Ratings of rash/eczema or tense lips were not significantly affected by sleep deprivation, nor associated with judgements of fatigue. In addition, sleep-deprived individuals looked sadder than after normal sleep, and sadness was related to looking fatigued (P < 0.01). Conclusions: The results show that sleep deprivation affects features relating to the eyes, mouth, and skin, and that these features function as cues of sleep loss to other people. Because these facial regions are important in the communication between humans, facial cues of sleep deprivation and fatigue may carry social consequences for the sleep deprived individual in everyday life. Citation: Sundelin T; Lekander M; Kecklund G; Van Someren EJW; Olsson A; Axelsson J. Cues of fatigue: effects of sleep deprivation on facial appearance. SLEEP 2013;36(9):1355-1360. PMID:23997369
Distinct facial processing in schizophrenia and schizoaffective disorders
Chen, Yue; Cataldo, Andrea; Norton, Daniel J; Ongur, Dost
2011-01-01
Although schizophrenia and schizoaffective disorders have both similar and differing clinical features, it is not well understood whether similar or differing pathophysiological processes mediate patients’ cognitive functions. Using psychophysical methods, this study compared the performances of schizophrenia (SZ) patients, patients with schizoaffective disorder (SA), and a healthy control group in two face-related cognitive tasks: emotion discrimination, which tested perception of facial affect, and identity discrimination, which tested perception of non-affective facial features. Compared to healthy controls, SZ patients, but not SA patients, exhibited deficient performance in both fear and happiness discrimination, as well as identity discrimination. SZ patients, but not SA patients, also showed impaired performance in a theory-of-mind task for which emotional expressions are identified based upon the eye regions of face images. This pattern of results suggests distinct processing of face information in schizophrenia and schizoaffective disorders. PMID:21868199
Gendron, Maria; Roberson, Debi; van der Vyver, Jacoba Marietta; Barrett, Lisa Feldman
2014-01-01
It is widely believed that certain emotions are universally recognized in facial expressions. Recent evidence indicates that Western perceptions (e.g., scowls as anger) depend on cues to US emotion concepts embedded in experiments. Since such cues are a standard feature of the methods used in cross-cultural experiments, we hypothesized that evidence of universality depends on this conceptual context. In our study, participants from the US and the Himba ethnic group sorted images of posed facial expressions into piles by emotion type. Without cues to emotion concepts, Himba participants did not show the presumed “universal” pattern, whereas US participants produced a pattern with presumed universal features. With cues to emotion concepts, participants in both cultures produced sorts that were closer to the presumed “universal” pattern, although substantial cultural variation persisted. Our findings indicate that perceptions of emotion are not universal, but depend on cultural and conceptual contexts. PMID:24708506
Intelligent Facial Recognition Systems: Technology advancements for security applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beer, C.L.
1993-07-01
Insider problems such as theft and sabotage can occur within the security and surveillance realm of operations when unauthorized people obtain access to sensitive areas. A possible solution to these problems is a means to identify individuals (not just credentials or badges) in a given sensitive area and provide full time personnel accountability. One approach desirable at Department of Energy facilities for access control and/or personnel identification is an Intelligent Facial Recognition System (IFRS) that is non-invasive to personnel. Automatic facial recognition does not require the active participation of the enrolled subjects, unlike most other biological measurement (biometric) systems (e.g., fingerprint, hand geometry, or eye retinal scan systems). It is this feature that makes an IFRS attractive for applications other than access control such as emergency evacuation verification, screening, and personnel tracking. This paper discusses current technology that shows promising results for DOE and other security applications. A survey of research and development in facial recognition identified several companies and universities that were interested and/or involved in the area. A few advanced prototype systems were also identified. Sandia National Laboratories is currently evaluating facial recognition systems that are in the advanced prototype stage. The initial application for the evaluation is access control in a controlled environment with a constant background and with cooperative subjects. Further evaluations will be conducted in a less controlled environment, which may include a cluttered background and subjects that are not looking towards the camera. The outcome of the evaluations will help identify areas of facial recognition systems that need further development and will help to determine the effectiveness of the current systems for security applications.
Wang, Shan; Eccleston, Christopher; Keogh, Edmund
2017-11-01
Spatial frequency (SF) information contributes to the recognition of facial expressions, including pain. Low-SF information encodes facial configuration and structure and often dominates over high-SF information, which encodes fine details in facial features. This low-SF preference has not been investigated within the context of pain. In this study, we investigated whether perceptual preference differences exist for low-SF and high-SF pain information. A novel hybrid expression paradigm was used in which 2 different expressions, one containing low-SF information and the other high-SF information, were combined in a facial hybrid. Participants were instructed to identify the core expression contained within the hybrid, allowing for the measurement of SF information preference. Three experiments were conducted (46 participants in each) that varied the expressions within the hybrid faces: respectively pain-neutral, pain-fear, and pain-happiness. To measure the temporal aspects of image processing, each hybrid image was presented for 33, 67, 150, and 300 ms. As expected, identification of pain and other expressions was dominated by low-SF information across the 3 experiments. The low-SF preference was largest when the presentation of hybrid faces was brief and decreased as the presentation duration increased. A sex difference was also found in experiment 1: for women, the low-SF preference was dampened by high-SF pain information when viewing low-SF neutral expressions. These results not only confirm the role that SF information plays in the recognition of pain in facial expressions but also suggest that, in some situations, there may be sex differences in how pain is communicated.
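The hybrid construction described above follows a standard recipe: the low spatial frequencies of one image are combined with the high spatial frequencies of another. A minimal sketch, assuming Gaussian filtering as the frequency split (the study's exact filter parameters and stimuli are not reproduced; the arrays here are random stand-ins for face photographs):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(1)
face_a = rng.random((128, 128))   # stand-in for, e.g., a pain expression
face_b = rng.random((128, 128))   # stand-in for, e.g., a neutral expression

sigma = 4.0  # assumed cutoff: larger sigma keeps only coarser structure

# Low-SF content of face A: a Gaussian low-pass of the image
low_sf = gaussian_filter(face_a, sigma)

# High-SF content of face B: the residual after removing its low-pass
high_sf = face_b - gaussian_filter(face_b, sigma)

# The hybrid carries face A's configuration and face B's fine detail
hybrid = low_sf + high_sf
```

Viewed briefly or at a distance, such a hybrid is dominated by the low-SF component, which is the preference the experiments measure.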
Stability of Facial Affective Expressions in Schizophrenia
Fatouros-Bergman, H.; Spang, J.; Merten, J.; Preisler, G.; Werbart, A.
2012-01-01
Thirty-two videorecorded interviews were conducted by two interviewers with eight patients diagnosed with schizophrenia. Each patient was interviewed four times: three weekly interviews by the first interviewer and one additional interview by the second interviewer. Sixty-four selected sequences in which the patients were speaking about psychotic experiences were scored for facial affective behaviour with the Emotion Facial Action Coding System (EMFACS). In accordance with previous research, the results show that patients diagnosed with schizophrenia express negative facial affectivity. Facial affective behaviour seems not to be dependent on temporality, since within-subjects ANOVA revealed no substantial changes in the amount of affect displayed across the weekly interview occasions. Whereas previous studies found contempt to be the most frequent affect in patients, in the present material disgust was as common, but depended on the interviewer. The results suggest that facial affectivity in these patients is dominated primarily by the negative emotions of disgust and, to a lesser extent, contempt, and that this is a fairly stable feature. PMID:22966449
Twin infant with lymphatic dysplasia diagnosed with Noonan syndrome by molecular genetic testing.
Mathur, Deepan; Somashekar, Santhosh; Navarrete, Cristina; Rodriguez, Maria M
2014-08-01
Noonan Syndrome is an autosomal dominant disorder characterized by short stature, congenital heart defects, developmental delay, dysmorphic facial features and occasional lymphatic dysplasias. The features of Noonan Syndrome change with age and have variable expression. The diagnosis has historically been based on clinical grounds. We describe a child that was born with congenital refractory chylothorax and subcutaneous edema suspected to be secondary to pulmonary lymphangiectasis. The infant died of respiratory failure and anasarca at 80 days. The autopsy confirmed lymphatic dysplasia in lungs and mesentery. The baby had no dysmorphic facial features and was diagnosed postmortem with Noonan syndrome by genomic DNA sequence analysis as he had a heterozygous mutation for G503R in the PTPN11 gene.
Dentomaxillofacial characteristics of ectodermal dysplasia.
Nakayama, Yumiko; Baba, Yoshiyuki; Tsuji, Michiko; Fukuoka, Hiroki; Ogawa, Takuya; Ohkuma, Mizue; Moriyama, Keiji
2015-02-01
The aim of this retrospective hospital-based study was to elucidate the dentomaxillofacial characteristics of ectodermal dysplasia. Six Japanese individuals (one male and five female; age range, 12.7-27.2 years) underwent comprehensive examinations, including history recording, cephalometric analysis, panoramic radiography, and analysis of dental models. All the subjects had two or more major manifestations for clinical diagnosis of ectodermal dysplasia (e.g., defects of hair, teeth, nails, and sweat glands). They presented hypodontia (mean number of missing teeth, 9.5; range, 5-14), especially in the premolar region, and enamel dysplasia. Five subjects had bilateral molar occlusion, whereas one subject had unilateral molar occlusion. The common skeletal features were small facial height, maxillary hypoplasia, counterclockwise rotation of the mandible, and mandibular protrusion. Interestingly, the maxillary first molars were located in higher positions and the upper anterior facial height was smaller than the Japanese norm. The results suggest that vertical and anteroposterior maxillary growth retardation, rather than lack of occlusal support due to hypodontia, leads to reduced anterior facial height in individuals with ectodermal dysplasia. © 2014 Japanese Teratology Society.
Planning Ahead: Influence of Figure Orientation on Size of Head in Children's Drawings of a Man.
ERIC Educational Resources Information Center
Willatts, Peter; Dougal, Shonagh
In an investigation of causes of the disproportionate relation between head and body in children's drawings of the human figure, 160 children of 3-10 years of age produced drawings of a man viewed from the front and from the back. It was expected that if planning to include facial features increased the size of the head children drew, then heads…
Newbury-Ecob, R
1998-01-01
Atelosteogenesis type 2 (AO2) (MIM 256050) is a neonatally lethal chondrodysplasia characterised by severe limb shortening and deficient ossification of parts of the skeleton. Other features include facial dysmorphism, cleft palate, talipes, and abducted thumbs and toes. Phenotypic overlap with non-lethal diastrophic dysplasia (DTD) suggested a common aetiology, and it has recently been confirmed that both syndromes result from mutations in the DTDST (diastrophic dysplasia sulphate transporter) gene. PMID:9475095
Humor drawings evoked temporal and spectral EEG processes
Kuo, Hsien-Chu; Chuang, Shang-Wen
2017-01-01
The study aimed to explore the humor processing elicited through the manipulation of artistic drawings. Using the Comprehension–Elaboration Theory of humor as the main research background, the experiment manipulated the head portraits of celebrities based on the independent variables of facial deformation (large/small) and addition of affective features (positive/negative). A 64-channel electroencephalography was recorded in 30 participants while viewing the incongruous drawings of celebrities. The electroencephalography temporal and spectral responses were measured during the three stages of humor which included incongruity detection, incongruity comprehension and elaboration of humor. Analysis of event-related potentials indicated that for humorous vs non-humorous drawings, facial deformation and the addition of affective features significantly affected the degree of humor elicited, specifically: large > small deformation; negative > positive affective features. The N170, N270, N400, N600-800 and N900-1200 components showed significant differences, particularly in the right prefrontal and frontal regions. Analysis of event-related spectral perturbation showed significant differences in the theta band evoked in the anterior cingulate cortex, parietal region and posterior cingulate cortex; and in the alpha and beta bands in the motor areas. These regions are involved in emotional processing, memory retrieval, and laughter and feelings of amusement induced by elaboration of the situation. PMID:28402573
Four-Channel Biosignal Analysis and Feature Extraction for Automatic Emotion Recognition
NASA Astrophysics Data System (ADS)
Kim, Jonghwa; André, Elisabeth
This paper investigates the potential of physiological signals as a reliable channel for automatic recognition of the user's emotional state. For emotion recognition, little attention has so far been paid to physiological signals compared with audio-visual emotion channels such as facial expression or speech. All essential stages of an automatic recognition system using biosignals are discussed, from recording a physiological dataset up to feature-based multiclass classification. Four-channel biosensors are used to measure electromyogram, electrocardiogram, skin conductivity, and respiration changes. A wide range of physiological features from various analysis domains, including time/frequency, entropy, geometric analysis, subband spectra, and multiscale entropy, is proposed in order to identify the best emotion-relevant features and to correlate them with emotional states. The best extracted features are specified in detail and their effectiveness is demonstrated by emotion recognition results.
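Feature-based processing of a physiological channel of the kind surveyed above can be illustrated with a small sketch. This is not the authors' feature set: the sampling rate, the subband limits, and the "respiration-like" test signal are assumptions chosen for illustration; only the general pattern (time-domain statistics plus a normalized subband spectral power) follows the abstract.

```python
import numpy as np

fs = 32.0                              # assumed sampling rate, Hz
t = np.arange(0, 30, 1 / fs)
# Synthetic respiration-like signal: a 0.3 Hz oscillation plus noise
signal = np.sin(2 * np.pi * 0.3 * t) \
    + 0.1 * np.random.default_rng(3).normal(size=t.size)

def extract_features(x, fs):
    """A few simple time- and frequency-domain features of one channel."""
    spectrum = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(x.size, 1 / fs)
    band = (freqs >= 0.1) & (freqs < 0.5)  # assumed low-frequency subband
    return {
        "mean": x.mean(),
        "std": x.std(),
        "rms": np.sqrt(np.mean(x ** 2)),
        "subband_power": spectrum[band].sum() / spectrum.sum(),
    }

features = extract_features(signal, fs)
```

Feature vectors of this form, computed per channel and per analysis window, would then feed the multiclass classifier described in the paper.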
Kariminejad, Ariana; Ajeawung, Norbert Fonya; Bozorgmehr, Bita; Dionne-Laporte, Alexandre; Molidperee, Sirinart; Najafi, Kimia; Gibbs, Richard A; Lee, Brendan H; Hennekam, Raoul C; Campeau, Philippe M
2017-04-01
Kaufman oculo-cerebro-facial syndrome (KOS) is caused by recessive UBE3B mutations and presents with microcephaly, ocular abnormalities, distinctive facial morphology, low cholesterol levels and intellectual disability. We describe a child with microcephaly, brachycephaly, hearing loss, ptosis, blepharophimosis, hypertelorism, cleft palate, multiple renal cysts, absent nails, small or absent terminal phalanges, absent speech and intellectual disability. Syndromes that were initially considered include DOORS syndrome, Coffin-Siris syndrome and Dubowitz syndrome. Clinical investigations coupled with karyotype analysis, array-comparative genomic hybridization, exome and Sanger sequencing were performed to characterize the condition in this child. Sanger sequencing was negative for the DOORS syndrome gene TBC1D24 but exome sequencing identified a homozygous deletion in UBE3B (NM_183415:c.3139_3141del, p.1047_1047del) located within the terminal portion of the HECT domain. This finding coupled with the presence of characteristic features such as brachycephaly, ptosis, blepharophimosis, hypertelorism, short palpebral fissures, cleft palate and developmental delay allowed us to make a diagnosis of KOS. In conclusion, our findings highlight the importance of considering KOS as a differential diagnosis for patients under evaluation for DOORS syndrome and expand the phenotype of KOS to include small or absent terminal phalanges, nails, and the presence of hallux varus and multicystic dysplastic kidneys.
Reconstruction of facial nerve injuries in children.
Fattah, Adel; Borschel, Gregory H; Zuker, Ron M
2011-05-01
Facial nerve trauma is uncommon in children, and many spontaneously recover some function; nonetheless, loss of facial nerve activity leads to functional impairment of ocular and oral sphincters and nasal orifice. In many cases, the impediment posed by facial asymmetry and reduced mimetic function more significantly affects the child's psychosocial interactions. As such, reconstruction of the facial nerve affords great benefits in quality of life. The therapeutic strategy is dependent on numerous factors, including the cause of facial nerve injury, the deficit, the prognosis for recovery, and the time elapsed since the injury. The options for treatment include a diverse range of surgical techniques including static lifts and slings, nerve repairs, nerve grafts and nerve transfers, regional, and microvascular free muscle transfer. We review our strategies for addressing facial nerve injuries in children.
Mayer, Christine; Windhager, Sonja; Schaefer, Katrin; Mitteroecker, Philipp
2017-01-01
Facial markers of body composition are frequently studied in evolutionary psychology and are important in computational and forensic face recognition. We assessed the association of body mass index (BMI) and waist-to-hip ratio (WHR) with facial shape and texture (color pattern) in a sample of young Middle European women by a combination of geometric morphometrics and image analysis. Faces of women with high BMI had a wider and rounder facial outline relative to the size of the eyes and lips, and relatively lower eyebrows. Furthermore, women with high BMI had a brighter and more reddish skin color than women with lower BMI. The same facial features were associated with WHR, even though BMI and WHR were only moderately correlated. Yet BMI was better predicted than WHR from facial attributes. After leave-one-out cross-validation, we were able to predict 25% of the variation in BMI and 10% of the variation in WHR from facial shape. Facial texture predicted only about 3-10% of the variation in BMI and WHR. This indicates that facial shape primarily reflects total fat proportion rather than the distribution of fat within the body. The association of reddish facial texture with high BMI may be mediated by increased blood pressure and superficial blood flow as well as diet. Our study elucidates how geometric morphometric image analysis serves to quantify the effects of biological factors such as BMI and WHR on facial shape and color, which in turn contribute to social perception.
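The leave-one-out validation reported above can be sketched in a few lines. This is a hedged illustration with synthetic data, not the study's analysis: the study regressed BMI on geometric morphometric shape coordinates, whereas here random features stand in for shape scores, and the cross-validated R² is computed from the out-of-sample predictions.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(2)
n = 60
shape_scores = rng.normal(size=(n, 3))      # stand-in facial shape scores
# Synthetic BMI: driven by the first shape score plus noise (assumed)
bmi = 22 + 2.0 * shape_scores[:, 0] + rng.normal(0, 1.5, n)

# Each observation is predicted by a model fitted on the other n-1 cases
pred = cross_val_predict(LinearRegression(), shape_scores, bmi,
                         cv=LeaveOneOut())

# Cross-validated proportion of BMI variance explained
r2 = 1 - np.sum((bmi - pred) ** 2) / np.sum((bmi - bmi.mean()) ** 2)
```

Because every prediction comes from a model that never saw that case, this R² estimates out-of-sample predictability, which is the "25% of variation in BMI" figure the abstract reports for its own data.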
Van Rheenen, Tamsyn E; Joshua, Nicole; Castle, David J; Rossell, Susan L
2017-03-01
Emotion recognition impairments have been demonstrated in schizophrenia (Sz), but are less consistent and lesser in magnitude in bipolar disorder (BD). This may be related to the extent to which different face processing strategies are engaged during emotion recognition in each of these disorders. We recently showed that Sz patients had impairments in the use of both featural and configural face processing strategies, whereas BD patients were impaired only in the use of the latter. Here we examine the influence that these impairments have on facial emotion recognition in these cohorts. Twenty-eight individuals with Sz, 28 individuals with BD, and 28 healthy controls completed a facial emotion labeling task with two conditions designed to separate the use of featural and configural face processing strategies; part-based and whole-face emotion recognition. Sz patients performed worse than controls on both conditions, and worse than BD patients on the whole-face condition. BD patients performed worse than controls on the whole-face condition only. Configural processing deficits appear to influence the recognition of facial emotions in BD, whereas both configural and featural processing abnormalities impair emotion recognition in Sz. This may explain discrepancies in the profiles of emotion recognition between the disorders. (JINS, 2017, 23, 287-291).
Colombi, M; Dordoni, C; Venturini, M; Ciaccio, C; Morlino, S; Chiarelli, N; Zanca, A; Calzavara-Pinton, P; Zoppi, N; Castori, M; Ritelli, M
2017-12-01
Classical Ehlers-Danlos syndrome (cEDS) is characterized by marked cutaneous involvement, according to the Villefranche nosology and its 2017 revision. However, the diagnostic flow-chart that prompts molecular testing is still based on experts' opinion rather than systematic published data. Here we report on 62 molecularly characterized cEDS patients with focus on skin, mucosal, facial, and articular manifestations. The major and minor Villefranche criteria, additional 11 mucocutaneous signs and 15 facial dysmorphic traits were ascertained and feature rates compared by sex and age. In our cohort, we did not observe any mandatory clinical sign. Skin hyperextensibility plus atrophic scars was the most frequent combination, whereas generalized joint hypermobility according to the Beighton score decreased with age. Skin was more commonly hyperextensible on elbows, neck, and knees. The sites more frequently affected by abnormal atrophic scarring were knees, face (especially forehead), pretibial area, and elbows. Facial dysmorphism commonly affected midface/orbital areas with epicanthal folds and infraorbital creases more commonly observed in young patients. Our findings suggest that the combination of ≥1 eye dysmorphism and facial/forehead scars may support the diagnosis in children. Minor acquired traits, such as molluscoid pseudotumors, subcutaneous spheroids, and signs of premature skin aging are equally useful in adults. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Weaver syndrome and EZH2 mutations: Clarifying the clinical phenotype.
Tatton-Brown, Katrina; Murray, Anne; Hanks, Sandra; Douglas, Jenny; Armstrong, Ruth; Banka, Siddharth; Bird, Lynne M; Clericuzio, Carol L; Cormier-Daire, Valerie; Cushing, Tom; Flinter, Frances; Jacquemont, Marie-Line; Joss, Shelagh; Kinning, Esther; Lynch, Sally Ann; Magee, Alex; McConnell, Vivienne; Medeira, Ana; Ozono, Keiichi; Patton, Michael; Rankin, Julia; Shears, Debbie; Simon, Marleen; Splitt, Miranda; Strenger, Volker; Stuurman, Kyra; Taylor, Clare; Titheradge, Hannah; Van Maldergem, Lionel; Temple, I Karen; Cole, Trevor; Seal, Sheila; Rahman, Nazneen
2013-12-01
Weaver syndrome, first described in 1974, is characterized by tall stature, a typical facial appearance, and variable intellectual disability. In 2011, mutations in the histone methyltransferase, EZH2, were shown to cause Weaver syndrome. To date, we have identified 48 individuals with EZH2 mutations. The mutations were primarily missense mutations occurring throughout the gene, with some clustering in the SET domain (12/48). Truncating mutations were uncommon (4/48) and only identified in the final exon, after the SET domain. Through analyses of clinical data and facial photographs of EZH2 mutation-positive individuals, we have shown that the facial features can be subtle and the clinical diagnosis of Weaver syndrome is thus challenging, especially in older individuals. However, tall stature is very common, reported in >90% of affected individuals. Intellectual disability is also common, present in ~80%, but is highly variable and frequently mild. Additional clinical features which may help in stratifying individuals to EZH2 mutation testing include camptodactyly, soft, doughy skin, umbilical hernia, and a low, hoarse cry. Considerable phenotypic overlap between Sotos and Weaver syndromes is also evident. The identification of an EZH2 mutation can therefore provide an objective means of confirming a subtle presentation of Weaver syndrome and/or distinguishing Weaver and Sotos syndromes. As mutation testing becomes increasingly accessible and larger numbers of EZH2 mutation-positive individuals are identified, knowledge of the clinical spectrum and prognostic implications of EZH2 mutations should improve. © 2013 Wiley Periodicals, Inc.
Infant Expressions in an Approach/Withdrawal Framework
Sullivan, Margaret Wolan
2014-01-01
Since the introduction of empirical methods for studying facial expression, the interpretation of infant facial expressions has generated much debate. The premise of this paper is that action tendencies of approach and withdrawal constitute a core organizational feature of emotion in humans, promoting coherence of behavior, facial signaling and physiological responses. The approach/withdrawal framework can provide a taxonomy of contexts and the neurobehavioral framework for the systematic, empirical study of individual differences in expression, physiology, and behavior within individuals as well as across contexts over time. By adopting this framework in developmental work on basic emotion processes, it may be possible to better understand the behavioral principles governing facial displays, and how individual differences in them are related to physiology, behavior, and function in context. PMID:25412273
Self-Relevance Appraisal Influences Facial Reactions to Emotional Body Expressions
Grèzes, Julie; Philip, Léonor; Chadwick, Michèle; Dezecache, Guillaume; Soussignan, Robert; Conty, Laurence
2013-01-01
People display facial reactions when exposed to others' emotional expressions, but exactly what mechanism mediates these facial reactions remains a debated issue. In this study, we manipulated two critical perceptual features that contribute to determining the significance of others' emotional expressions: the direction of attention (toward or away from the observer) and the intensity of the emotional display. Electromyographic activity over the corrugator muscle was recorded while participants observed videos of neutral to angry body expressions. Self-directed bodies induced greater corrugator activity than other-directed bodies; additionally, corrugator activity was only influenced by the intensity of anger expressed by self-directed bodies. These data support the hypothesis that rapid facial reactions are the outcome of self-relevant emotional processing. PMID:23405230
Gernhardt, Ariane; Rübeling, Hartmut; Keller, Heidi
2015-01-01
This study investigated tadpole self-drawings from 183 three- to six-year-old children living in seven cultural groups, representing three ecosocial contexts. Based on assumed general production principles, the influence of cultural norms and values upon specific characteristics of the tadpole drawings was examined. The results demonstrated that children from all cultural groups realized the body-proportion effect in the self-drawings, indicating universal production principles. However, children differed in single drawing characteristics, depending on the specific ecosocial context. Children from Western and non-Western urban educated contexts drew themselves rather tall, with many facial features, and preferred smiling facial expressions, while children from rural traditional contexts depicted themselves significantly smaller, with fewer facial details, and neutral facial expressions.
Estimation of human emotions using thermal facial information
NASA Astrophysics Data System (ADS)
Nguyen, Hung; Kotani, Kazunori; Chen, Fan; Le, Bac
2014-01-01
In recent years, research on human emotion estimation using thermal infrared (IR) imagery has appealed to many researchers due to its invariance to visible illumination changes. Although infrared imagery is superior to visible imagery in its invariance to illumination changes and appearance differences, it has difficulties in handling glasses, which are opaque in the thermal infrared spectrum. As a result, when infrared imagery is used for the analysis of human facial information, the eyeglass regions appear dark and no thermal information is available for the eyes. We propose a temperature space method to correct the effect of eyeglasses using the thermal facial information in the neighboring facial regions, and then use Principal Component Analysis (PCA), the Eigen-space Method based on class-features (EMC), and the combined PCA-EMC method to classify human emotions from the corrected thermal images. We collected the Kotani Thermal Facial Emotion (KTFE) database and performed experiments, which show an improved accuracy rate in estimating human emotions.
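The eigenspace classification step can be illustrated with a minimal PCA plus nearest-centroid sketch (a toy example of ours, not the authors' pipeline; the EMC and PCA-EMC variants add class-feature weighting not shown here, and the "thermal images" below are stand-in 4-pixel vectors):

```python
import numpy as np

def pca_fit(X, k):
    """Fit PCA on X (n_samples x n_features): return the mean and top-k axes."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]  # W has shape (k, n_features)

def classify(x, mu, W, class_means):
    """Assign x to the nearest class centroid in the k-dim eigenspace."""
    z = W @ (x - mu)
    return min(class_means, key=lambda c: np.linalg.norm(z - class_means[c]))

# Toy "thermal images": 4-pixel vectors for two emotion classes.
X = np.array([[0., 0, 0, 0], [1, 0, 0, 0], [10, 0, 0, 0], [11, 0, 0, 0]])
y = ["neutral", "neutral", "happy", "happy"]
mu, W = pca_fit(X, k=1)
class_means = {c: np.mean([W @ (xi - mu) for xi, yi in zip(X, y) if yi == c], axis=0)
               for c in set(y)}
pred = classify(np.array([9.5, 0.2, 0, 0]), mu, W, class_means)
```

Because the query vector lies close to the "happy" cluster along the first principal axis, the nearest-centroid rule labels it accordingly regardless of the sign ambiguity in the PCA axes.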
Overview of pediatric peripheral facial nerve paralysis: analysis of 40 patients.
Özkale, Yasemin; Erol, İlknur; Saygı, Semra; Yılmaz, İsmail
2015-02-01
Peripheral facial nerve paralysis in children might be an alarming sign of serious disease such as malignancy, systemic disease, congenital anomalies, trauma, infection, middle ear surgery, and hypertension. The cases of 40 consecutive children and adolescents who were diagnosed with peripheral facial nerve paralysis at Baskent University Adana Hospital Pediatrics and Pediatric Neurology Unit between January 2010 and January 2013 were retrospectively evaluated. We determined that the most common cause was Bell palsy, followed by infection, tumor lesion, and suspected chemotherapy toxicity. We noted that younger patients had generally poorer outcome than older patients regardless of disease etiology. Peripheral facial nerve paralysis has been reported in many countries in America and Europe; however, knowledge about its clinical features, microbiology, neuroimaging, and treatment in Turkey is incomplete. The present study demonstrated that Bell palsy and infection were the most common etiologies of peripheral facial nerve paralysis. © The Author(s) 2014.
Automatic Detection of Frontal Face Midline by Chain-coded Merlin-Farber Hough Transform
NASA Astrophysics Data System (ADS)
Okamoto, Daichi; Ohyama, Wataru; Wakabayashi, Tetsushi; Kimura, Fumitaka
We propose a novel approach for detection of the facial midline (facial symmetry axis) from a frontal face image. The facial midline has several applications, for instance reducing the computational cost of facial feature extraction (FFE) and postoperative assessment for cosmetic or dental surgery. The proposed method detects the facial midline of a frontal face from an edge image as the symmetry axis using the Merlin-Farber Hough transform (MFHT). A new performance-improvement scheme for midline detection by MFHT is also presented. The main concept of the proposed scheme is suppression of redundant votes in the Hough parameter space by introducing a chain code representation for the binary edge image. Experimental results on an image dataset containing 2409 images from the FERET database indicate that the proposed algorithm can improve the accuracy of midline detection from 89.9% to 95.1% for face images with different scales and rotations.
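The symmetry-axis voting at the heart of the Merlin-Farber approach can be sketched as follows (a simplified illustration, not the authors' chain-coded implementation; the function and parameter names are ours). Each pair of edge points nominates the perpendicular bisector of its connecting segment as a candidate axis, parameterised in normal form x·cosθ + y·sinθ = ρ, and the most-voted (ρ, θ) cell wins:

```python
import itertools, math
from collections import Counter

def detect_symmetry_axis(edge_points, rho_step=1.0, theta_step=math.radians(2)):
    """Vote for the symmetry axis of a point set: every pair of edge points
    contributes the perpendicular bisector of its connecting segment,
    quantised into a (rho, theta) accumulator; the fullest cell is returned."""
    votes = Counter()
    for (x1, y1), (x2, y2) in itertools.combinations(edge_points, 2):
        mx, my = (x1 + x2) / 2.0, (y1 + y2) / 2.0
        # The bisector's normal is parallel to the segment joining the pair.
        theta = math.atan2(y2 - y1, x2 - x1)
        rho = mx * math.cos(theta) + my * math.sin(theta)
        votes[(round(rho / rho_step), round(theta / theta_step))] += 1
    (r_bin, t_bin), _ = votes.most_common(1)[0]
    return r_bin * rho_step, t_bin * theta_step

# Points mirror-symmetric about the vertical line x = 5.
rho, theta = detect_symmetry_axis([(3, 0), (7, 0), (2, 1), (8, 1), (4, 2), (6, 2)])
```

Mirror-image pairs all vote for the same cell (here ρ = 5, θ = 0, i.e. the line x = 5), while non-matching pairs scatter their votes; the chain-code scheme in the paper suppresses exactly those redundant scattered votes.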
A novel homozygous HOXB1 mutation in a Turkish family with hereditary congenital facial paresis.
Sahin, Yavuz; Güngör, Olcay; Ayaz, Akif; Güngör, Gülay; Sahin, Bedia; Yaykasli, Kursad; Ceylaner, Serdar
2017-02-01
Hereditary congenital facial paresis (HCFP) is characterized by isolated dysfunction of the facial nerve (CN VII) due to congenital cranial dysinnervation disorders. HCFP has genetic heterogeneity and HOXB1 is the first identified gene. We report the clinical, radiologic and molecular investigations of three patients admitted for HCFP in a large consanguineous Turkish family. High-throughput sequencing and Sanger sequencing of all patients revealed a novel homozygous mutation p.Arg230Trp (c.688C>T) within the HOXB1 gene. The report of the mutation brings the total number of HOXB1 mutations identified in HCFP to four. The results of this study emphasize that in individuals with congenital facial palsy accompanied by hearing loss and dysmorphic facial features, HOXB1 mutation causing HCFP should be kept in mind. Copyright © 2016 The Japanese Society of Child Neurology. Published by Elsevier B.V. All rights reserved.
Regional early development and eruption of permanent teeth: case report.
Al Mullahi, A M; Bakathir, A; Al Jahdhami, S
2017-02-01
Early development and eruption of permanent teeth are rarely reported in the scientific literature. Early eruption of permanent teeth has been reported to occur due to local factors such as trauma or dental abscesses in primary teeth, and in systemic conditions. Congenital diffuse infiltrating facial lipomatosis (CDIFL) is a rare condition that belongs to a group of lipomatosis tumours. In this disorder, mature adipocytes invade adjacent soft and hard tissues in the facial region. Accelerated tooth eruption is one of the dental anomalies associated with CDIFL. A 3-year-old boy presented with a swelling of the lower lip, localised early development and eruption of permanent teeth, and dental caries involving many primary teeth. The planned treatment included biopsy of the swollen lower lip to confirm the diagnosis, surgical reduction and reconstruction of lip aesthetics. The management of the carious primary teeth included preventative and comprehensive dental care and extractions. These procedures were completed under general anaesthesia due to the child's young age and poor cooperation. The lip biopsy showed features of CDIFL such as the presence of infiltrating adipose tissue, a prominent number of nerve bundles and thickened vessels. The high recurrence rate of CDIFL mandates long-term monitoring during the facial growth period of the child. Follow-up care by the paediatric dentist and maxillofacial surgeon has been required to manage all aspects of this congenital malformation. This rare disorder has many implications affecting the child's facial aesthetics, psychological well-being, developing occlusion and risk of dental caries. A multi-disciplinary approach is needed for management of this condition.
Human body as a set of biometric features identified by means of optoelectronics
NASA Astrophysics Data System (ADS)
Podbielska, Halina; Bauer, Joanna
2005-09-01
The human body possesses many unique, singular features that are impossible to copy or forge. Nowadays, establishing and ensuring public security requires specially designed devices and systems. Biometrics is a field of science and technology that exploits human body characteristics for people recognition. It identifies the most characteristic and unique features in order to design and construct systems capable of recognizing people. This paper gives an overview of achievements in biometrics. The verification and identification processes are explained, along with the evaluation of biometric recognition systems. The human biometrics most frequently used in practice are briefly presented, including fingerprints, facial imaging (including thermal characteristics), hand geometry and iris patterns.
Developmental Changes in the Perception of Adult Facial Age
ERIC Educational Resources Information Center
Gross, Thomas F.
2007-01-01
The author studied children's (aged 5-16 years) and young adults' (aged 18-22 years) perception and use of facial features to discriminate the age of mature adult faces. In Experiment 1, participants rated the age of unaltered and transformed (eyes, nose, eyes and nose, and whole face blurred) adult faces (aged 20-80 years). In Experiment 2,…
Eruptive Facial Postinflammatory Lentigo: Clinical and Dermatoscopic Features.
Cabrera, Raul; Puig, Susana; Larrondo, Jorge; Castro, Alex; Valenzuela, Karen; Sabatini, Natalia
2016-11-01
The face has not been considered a common site of fixed drug eruption, and dermatoscopic studies of this condition on the face are lacking. The authors sought to characterize the clinical and dermatoscopic features of 8 cases of an eruptive facial postinflammatory lentigo. The authors conducted a retrospective review of 8 cases with similar clinical and dermatoscopic findings seen at 2 medical centers in 2 countries during 2010-2014. A total of 8 patients (2 males and 6 females), with ages ranging from 34 to 62 years (mean: 48), presented with the abrupt onset of a single facial brown-pink macule, generally asymmetrical, with an average size of 1.9 cm, appearing after ingestion of a nonsteroidal anti-inflammatory drug and lasting for several months. Dermatoscopy mainly showed a pseudonetwork or uniform areas of brown pigmentation, brown or blue-gray dots, red dots and/or telangiectatic vessels. In the epidermis, histopathology showed mild hydropic degeneration and focal melanin hyperpigmentation. Melanin can be found freely in the dermis or laden in macrophages along with a mild perivascular mononuclear infiltrate. The authors describe eruptive facial postinflammatory lentigo as a new variant of a fixed drug eruption on the face.
Denmark, Tanya; Atkinson, Joanna; Campbell, Ruth; Swettenham, John
2014-10-01
Facial expressions in sign language carry a variety of communicative features. While emotion can modulate a spoken utterance through changes in intonation, duration and intensity, in sign language specific facial expressions presented concurrently with a manual sign perform this function. When deaf adult signers cannot see facial features, their ability to judge emotion in a signed utterance is impaired (Reilly et al. in Sign Lang Stud 75:113-118, 1992). We examined the role of the face in the comprehension of emotion in sign language in a group of typically developing (TD) deaf children and in a group of deaf children with autism spectrum disorder (ASD). We replicated Reilly et al.'s (Sign Lang Stud 75:113-118, 1992) adult results in the TD deaf signing children, confirming the importance of the face in understanding emotion in sign language. The ASD group performed more poorly on the emotion recognition task than the TD children. The deaf children with ASD showed a deficit in emotion recognition during sign language processing analogous to the deficit in vocal emotion recognition that has been observed in hearing children with ASD.
Action Unit Models of Facial Expression of Emotion in the Presence of Speech
Shah, Miraj; Cooper, David G.; Cao, Houwei; Gur, Ruben C.; Nenkova, Ani; Verma, Ragini
2014-01-01
Automatic recognition of emotion using facial expressions in the presence of speech poses a unique challenge because talking reveals clues for the affective state of the speaker but distorts the canonical expression of emotion on the face. We introduce a corpus of acted emotion expression where speech is either present (talking) or absent (silent). The corpus is uniquely suited for analysis of the interplay between the two conditions. We use a multimodal decision level fusion classifier to combine models of emotion from talking and silent faces as well as from audio to recognize five basic emotions: anger, disgust, fear, happy and sad. Our results strongly indicate that emotion prediction in the presence of speech from action unit facial features is less accurate when the person is talking. Modeling talking and silent expressions separately and fusing the two models greatly improves accuracy of prediction in the talking setting. The advantages are most pronounced when silent and talking face models are fused with predictions from audio features. In this multi-modal prediction both the combination of modalities and the separate models of talking and silent facial expression of emotion contribute to the improvement. PMID:25525561
Muthuswamy, M B; Thomas, B N; Williams, D; Dingley, J
2014-09-01
Patients recovering from critical illness especially those with critical illness related neuropathy, myopathy, or burns to face, arms and hands are often unable to communicate by writing, speech (due to tracheostomy) or lip reading. This may frustrate both patient and staff. Two low cost movement tracking systems based around a laptop webcam and a laser/optical gaming system sensor were utilised as control inputs for on-screen text creation software and both were evaluated as communication tools in volunteers. Two methods were used to control an on-screen cursor to create short sentences via an on-screen keyboard: (i) webcam-based facial feature tracking, (ii) arm movement tracking by laser/camera gaming sensor and modified software. 16 volunteers with simulated tracheostomy and bandaged arms to simulate communication via gross movements of a burned limb, communicated 3 standard messages using each system (total 48 per system) in random sequence. Ten and 13 minor typographical errors occurred with each system respectively, however all messages were comprehensible. Speed of sentence formation ranged from 58 to 120s with the facial feature tracking system, and 60-160s with the arm movement tracking system. The average speed of sentence formation was 81s (range 58-120) and 104s (range 60-160) for facial feature and arm tracking systems respectively, (P<0.001, 2-tailed independent sample t-test). Both devices may be potentially useful communication aids in patients in general and burns critical care units who cannot communicate by conventional means, due to the nature of their injuries. Copyright © 2014 Elsevier Ltd and ISBI. All rights reserved.
The Dynamic Features of Lip Corners in Genuine and Posed Smiles
Guo, Hui; Zhang, Xiao-Hui; Liang, Jun; Yan, Wen-Jing
2018-01-01
The smile is a frequently expressed facial expression that typically conveys a positive emotional state and friendly intent. However, human beings have also learned how to fake smiles, typically by controlling the mouth to provide a genuine-looking expression. This is often accompanied by inaccuracies that can allow others to determine that the smile is false. Mouth movement is one of the most striking features of the smile, yet our understanding of its dynamic elements is still limited. The present study analyzes the dynamic features of lip corners, and considers how they differ between genuine and posed smiles. Employing computer vision techniques, we investigated elements such as the duration, intensity, speed, symmetry of the lip corners, and certain irregularities in genuine and posed smiles obtained from the UvA-NEMO Smile Database. After utilizing the facial analysis tool OpenFace, we further propose a new approach to segmenting the onset, apex, and offset phases of smiles, as well as a means of measuring irregularities and symmetry in facial expressions. We extracted these features according to 2D and 3D coordinates, and conducted an analysis. The results reveal that genuine smiles have higher values for onset, offset, apex, and total durations, as well as offset displacement, and a variable we termed Irregularity-b (the SD of the apex phase) than do posed smiles. Conversely, values tended to be lower for onset and offset Speeds, and Irregularity-a (the rate of peaks), Symmetry-a (the correlation between left and right facial movements), and Symmetry-d (differences in onset frame numbers between the left and right faces). The findings from the present study have been compared to those of previous research, and certain speculations are made. PMID:29515508
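The proposed onset/apex/offset segmentation can be sketched in miniature as follows (a simplification for illustration; the `apex_frac` threshold and function name are our own assumptions, not parameters taken from the paper):

```python
def segment_smile(intensity, apex_frac=0.9):
    """Split a lip-corner displacement time series into onset/apex/offset.
    Frames reaching at least apex_frac of the peak displacement form the
    apex phase; everything before is onset, everything after is offset."""
    peak = max(intensity)
    hits = [i for i, v in enumerate(intensity) if v >= apex_frac * peak]
    a0, a1 = hits[0], hits[-1]
    return intensity[:a0], intensity[a0:a1 + 1], intensity[a1 + 1:]

# A rise-hold-fall displacement trace for one lip corner.
onset, apex, offset = segment_smile([0, 1, 2, 5, 9, 10, 9.5, 5, 2, 0])
```

From such a segmentation, phase durations, speeds (displacement change per frame within onset/offset), and the apex-phase standard deviation ("Irregularity-b") follow directly.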
Geometric facial comparisons in speed-check photographs.
Buck, Ursula; Naether, Silvio; Kreutz, Kerstin; Thali, Michael
2011-11-01
In many cases, it is not possible to hold motorists to account for considerable speeding violations, because they deny being the driver in the speed-check photograph. An anthropological comparison of facial features using a photo-to-photo comparison can be very difficult depending on the quality of the photographs. One difficulty of that analysis method is that the comparison photographs of the presumed driver are taken with a different camera or camera lens and from a different angle than the speed-check photo. Taking a comparison photograph with exactly the same camera setup is almost impossible; therefore, only an imprecise comparison of the individual facial features is possible. The geometry and position of each facial feature, for example the distance between the eyes or the position of the ears, etc., cannot be taken into consideration. We applied a new method using 3D laser scanning, optical surface digitalization, and photogrammetric calculation of the speed-check photo, which enables a geometric comparison. Thus, the influence of the focal length and the distortion of the objective lens are eliminated and the precise position and viewing direction of the speed-check camera are calculated. Even in cases of low-quality images or when the face of the driver is partly hidden, this method delivers good results. This new method, Geometric Comparison, is evaluated and validated in a study described in this article.
Orientation-sensitivity to facial features explains the Thatcher illusion.
Psalta, Lilia; Young, Andrew W; Thompson, Peter; Andrews, Timothy J
2014-10-09
The Thatcher illusion provides a compelling example of the perceptual cost of face inversion. The Thatcher illusion is often thought to result from a disruption to the processing of spatial relations between face features. Here, we show the limitations of this account and instead demonstrate that the effect of inversion in the Thatcher illusion is better explained by a disruption to the processing of purely local facial features. Using a matching task, we found that participants were able to discriminate normal and Thatcherized versions of the same face when they were presented in an upright orientation, but not when the images were inverted. Next, we showed that the effect of inversion was also apparent when only the eye region or only the mouth region was visible. These results demonstrate that a key component of the Thatcher illusion is to be found in orientation-specific encoding of the expressive features (eyes and mouth) of the face. © 2014 ARVO.
Low, Karen J; Ansari, Morad; Abou Jamra, Rami; Clarke, Angus; El Chehadeh, Salima; FitzPatrick, David R; Greenslade, Mark; Henderson, Alex; Hurst, Jane; Keller, Kory; Kuentz, Paul; Prescott, Trine; Roessler, Franziska; Selmer, Kaja K; Schneider, Michael C; Stewart, Fiona; Tatton-Brown, Katrina; Thevenon, Julien; Vigeland, Magnus D; Vogt, Julie; Willems, Marjolaine; Zonana, Jonathan; Study, D D D; Smithson, Sarah F
2017-01-01
PUF60 encodes a nucleic acid-binding protein, a component of multimeric complexes regulating RNA splicing and transcription. In 2013, patients with microdeletions of chromosome 8q24.3 including PUF60 were found to have developmental delay, microcephaly, craniofacial, renal and cardiac defects. Very similar phenotypes have been described in six patients with variants in PUF60, suggesting that it underlies the syndrome. We report 12 additional patients with PUF60 variants who were ascertained using exome sequencing: six through the Deciphering Developmental Disorders Study and six through similar projects. Detailed phenotypic analysis of all patients was undertaken. All 12 patients had de novo heterozygous PUF60 variants on exome analysis, each confirmed by Sanger sequencing: four frameshift variants resulting in premature stop codons, three missense variants that clustered within the RNA recognition motif of PUF60 and five essential splice-site (ESS) variants. Analysis of cDNA from a fibroblast cell line derived from one of the patients with an ESS variant revealed aberrant splicing. The consistent feature was developmental delay and most patients had short stature. The phenotypic variability was striking; however, we observed similarities including spinal segmentation anomalies, congenital heart disease, ocular colobomata, hand anomalies and (in two patients) unilateral renal agenesis/horseshoe kidney. Characteristic facial features included micrognathia, a thin upper lip and long philtrum, narrow almond-shaped palpebral fissures, synophrys, flared eyebrows and facial hypertrichosis. Heterozygous loss-of-function variants in PUF60 cause a phenotype comprising growth/developmental delay and craniofacial, cardiac, renal, ocular and spinal anomalies, adding to disorders of human development resulting from aberrant RNA processing/spliceosomal function. PMID:28327570
The male beard hair and facial skin - challenges for shaving.
Maurer, M; Rietzler, M; Burghardt, R; Siebenhaar, F
2016-06-01
The challenge of shaving is to cut the beard hair as closely as possible to the skin without unwanted effects on the skin. Achieving this requires an understanding of beard hair and male facial skin biology, as both the beard hair and the male facial skin contribute to the difficulties in obtaining an effective shave without shaving-induced skin irritation. Little information is available on the biology of beard hairs and beard hair follicles. We know that, in beard hairs, the density, thickness and stiffness, as well as the rates of elliptical shape and low emerging angle, are high and highly heterogeneous. All of this makes the hair challenging to cut, and shaving techniques commonly employed to overcome these challenges include shaving with increased pressure and multiple-stroke shaving, which increase the probability and extent of shaving-induced skin irritation. Several features of male facial skin pose problems for a perfect shave. The male facial skin is heterogeneous in morphology and roughness, and male skin has a tendency to heal more slowly and to develop hyperinflammatory pigmentation. In addition, many males exhibit sensitive skin, with the face most often affected. Finally, the hair follicle is a sensory organ, and the perifollicular skin is highly responsive to external signals including mechanical and thermal stimulation. Perifollicular skin is rich in vasculature, innervation and cells of the innate and adaptive immune system. This makes perifollicular skin a highly responsive and inflammatory system, especially in individuals with sensitive skin. Activation of this system by shaving can result in shaving-induced skin irritation. Techniques commonly employed to avoid shaving-induced skin irritation include shaving with less pressure, pre- and post-shave skin treatment, and stopping shaving altogether. Recent advances in shaving technology have addressed some but not all of these issues.
A better understanding of beard hairs, beard hair follicles and male facial skin is needed to develop novel and better approaches to overcome the challenge of shaving. This article covers what is known about the physical properties of beard hairs and skin and why those present a challenge for blade and electric shaving, respectively. © 2016 Society of Cosmetic Scientists and the Société Française de Cosmétologie.
The Ardipithecus ramidus skull and its implications for hominid origins.
Suwa, Gen; Asfaw, Berhane; Kono, Reiko T; Kubo, Daisuke; Lovejoy, C Owen; White, Tim D
2009-10-02
The highly fragmented and distorted skull of the adult skeleton ARA-VP-6/500 includes most of the dentition and preserves substantial parts of the face, vault, and base. Anatomical comparisons and micro-computed tomography-based analysis of this and other remains reveal pre-Australopithecus hominid craniofacial morphology and structure. The Ardipithecus ramidus skull exhibits a small endocranial capacity (300 to 350 cubic centimeters), small cranial size relative to body size, considerable midfacial projection, and a lack of modern African ape-like extreme lower facial prognathism. Its short posterior cranial base differs from that of both Pan troglodytes and P. paniscus. Ar. ramidus lacks the broad, anteriorly situated zygomaxillary facial skeleton developed in later Australopithecus. This combination of features is apparently shared by Sahelanthropus, showing that the Mio-Pliocene hominid cranium differed substantially from those of both extant apes and Australopithecus.
Castori, Marco; Pascolini, Giulia; Parisi, Valentina; Sana, Maria Elena; Novelli, Antonio; Nürnberg, Peter; Iascone, Maria; Grammatico, Paola
2015-04-01
In 1980, a novel multiple malformation syndrome was described in a 17-year-old woman with micro- and turricephaly, intellectual disability, distinctive facial appearance, congenital atrichia, and multiple skeletal anomalies mainly affecting the limbs. Four further sporadic patients and a pair of affected sibs have also been reported, with broad clinical variability. Here, we describe a 4-year-old girl strikingly resembling the original report. Phenotype comparison identified a recurrent pattern of multisystem features involving the central nervous system, skin and bones in five sporadic patients (including ours), while the two sibs and a further sporadic case show significant phenotypic divergence. Marked clinical variability within the same entity versus syndrome splitting is discussed and the term "cerebro-dermato-osseous dysplasia" is introduced to define this condition. © 2015 Wiley Periodicals, Inc.
[Determination of somatotype of man in cranio-facial personality identification].
2004-01-01
On the basis of their own research and an analysis of published data, the authors propose quantitative criteria for diagnosing a person's somatotype from dimensional features of the face and skull. The method of M. A. Negasheva, based on discriminant analysis of 7 measurement features, was used for individual diagnosis of somatotype according to V. V. Bunak's scheme (pectoral, muscular, abdominal, and indefinite somatotypes). The authors propose 2 diagnostic models for the skull, based on linear and discriminant analysis of 11 and 7 measurement features, respectively. Diagnostic accuracy for the main male somatotypes is 87% and 64.4%, respectively, with canonical correlations of 0.574 and 0.292. The proposed methods can be used in forensic medicine for cranio-facial and portrait expertise.
Adult preferences for infantile facial features: an ethological approach.
Sternglanz, S H; Gray, J L; Murakami, M
1977-02-01
In 1943 Konrad Lorenz postulated that certain infantile cues served as releasers for caretaking behaviour in human adults. This study is an attempt to confirm this hypothesis and to identify relevant cues. The stimuli studied were variations in facial features, and the responses were ratings of the attractiveness of the resultant infant faces. Parametric variations of eye height, eye width, eye height and width, iris size, and vertical variations in feature position (all presented in full-face drawings) were tested for their effect on the ratings, and highly significant preferences for particular stimuli were found. In general these preferences are consistent across a wide variety of environmental factors such as social class and experience with children. These findings are consistent with an ethological interpretation of the data.
Distinct growth of the nasomaxillary complex in Au. sediba.
Lacruz, Rodrigo S; Bromage, Timothy G; O'Higgins, Paul; Toro-Ibacache, Viviana; Warshaw, Johanna; Berger, Lee R
2015-10-15
Studies of facial ontogeny in immature hominins have contributed significantly to understanding the evolution of human growth and development. The recently discovered hominin species Australopithecus sediba is represented by a well-preserved and nearly complete facial skeleton of a juvenile (MH1), which shows a derived facial anatomy. We examined MH1 using high-resolution synchrotron radiation imaging to interpret features of the oronasal complex pertinent to facial growth. We also analyzed bone surface microanatomy to identify and map fields of bone deposition and bone resorption, which affect the development of the facial skeleton. The oronasal anatomy (premaxilla-palate-vomer architecture) is similar to that of other Australopithecus species. However, surface growth remodeling of the midface (nasomaxillary complex) differs markedly from Australopithecus, Paranthropus, early Homo, and KNM-WT 15000 (H. erectus/ergaster), showing a distinct distribution of vertically disposed alternating depository and resorptive fields in relation to the anterior dental roots and the subnasal region. The ontogeny of the MH1 midface superficially resembles that of some H. sapiens in the distribution of remodeling fields. The facial growth of MH1 appears unique among early hominins, representing either an evolutionary modification in facial ontogeny at 1.9 Ma or a response to changes in masticatory system loading associated with diet.
Lee, W J; Won, K H; Won, C H; Chang, S E; Choi, J H; Moon, K C; Lee, M W
2014-05-01
Although more than 300 cases of eosinophilic pustular folliculitis (EPF) have been reported to date, differences in clinicohistopathological findings among affected sites have not yet been evaluated. To evaluate differences in the clinical and histopathological features of facial and extrafacial EPF. Forty-six patients diagnosed with EPF were classified into those with facial and extrafacial disease according to the affected site. Clinical and histopathological characteristics were retrospectively compared, using all data available in the patient medical records. There were no significant between-group differences in subject ages at presentation, but a male predominance was observed in the extrafacial group. In addition, immunosuppression-associated type EPF was more common in the extrafacial group. Eruptions of plaques with an annular appearance were more common in the facial group. Histologically, perifollicular infiltration of eosinophils occurred more frequently in the facial group, whereas perivascular patterns occurred more frequently in the extrafacial group. Follicular mucinosis and exocytosis of inflammatory cells in the hair follicles were strongly associated with facial EPF. The clinical and histopathological characteristics of patients with facial and extrafacial EPF differ, suggesting the involvement of different pathogenic processes in the development of EPF at different sites. © 2013 British Association of Dermatologists.
Human facial neural activities and gesture recognition for machine-interfacing applications.
Hamedi, M; Salleh, Sh-Hussain; Tan, T S; Ismail, K; Ali, J; Dee-Uam, C; Pavaganun, C; Yupapin, P P
2011-01-01
The authors present a new method of recognizing different human facial gestures through their neural activities and muscle movements, which can be used in machine-interfacing applications. Human-machine interface (HMI) technology utilizes human neural activities as input controllers for the machine. Recently, much work has been done on the specific application of facial electromyography (EMG)-based HMI, which has used limited, fixed numbers of facial gestures. In this work, a multipurpose interface is suggested that can support 2-11 control commands applicable to various HMI systems. The significance of this work is finding the most accurate facial gestures for any application with a maximum of eleven control commands. Eleven facial gesture EMGs are recorded from ten volunteers. Detected EMGs are passed through a band-pass filter, and root mean square (RMS) features are extracted. Various combinations of gestures, with a different number of gestures in each group, are made from the recorded facial gestures. Finally, all combinations are trained and classified by a fuzzy c-means classifier, and the combinations with the highest recognition accuracy in each group are chosen. An average accuracy above 90% for the chosen combinations demonstrated their suitability as command controllers.
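The processing chain described in this abstract (band-pass filtering of the raw EMG, windowed RMS feature extraction, fuzzy c-means clustering) can be sketched in Python. This is a minimal illustration with synthetic data, not the authors' implementation; the filter order, band limits, window length, and function names are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def rms_features(emg, fs=1000.0, band=(20.0, 450.0), win=200):
    """Band-pass filter a (channels x samples) EMG array and return
    per-channel RMS features over non-overlapping windows."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, emg, axis=1)
    n_win = filtered.shape[1] // win
    trimmed = filtered[:, : n_win * win].reshape(filtered.shape[0], n_win, win)
    return np.sqrt((trimmed ** 2).mean(axis=2)).T  # shape: (windows, channels)

def fuzzy_cmeans(X, c, m=2.0, n_iter=100, seed=0):
    """Minimal fuzzy c-means: returns cluster centers and the
    membership matrix U (rows sum to 1 across the c clusters)."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2.0 / (m - 1.0)))
        U /= U.sum(axis=1, keepdims=True)
    return centers, U
```

In a gesture-recognition setting, each window's RMS vector would be one row of `X`, and a new sample would be assigned to the cluster with the highest membership.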
Walker, Mirella; Wänke, Michaela
2017-01-01
In two studies we disentangled and systematically investigated the impact of subtle facial cues to masculinity/femininity and gender category information on first impressions. Participants judged the same unambiguously male and female target persons-either with masculine or feminine facial features slightly enhanced-regarding stereotypically masculine (i.e., competence) and feminine (i.e., warmth) personality traits. Results of both studies showed a strong effect of facial masculinity/femininity: Masculine-looking persons were seen as colder and more competent than feminine-looking persons. This effect of facial masculinity/femininity was not only found for typical (i.e., masculine-looking men and feminine-looking women) and atypical (i.e., masculine-looking women and feminine-looking men) category members; it was even found to be more pronounced for atypical than for typical category members. This finding reveals that comparing atypical members to the group prototype results in pronounced effects of facial masculinity/femininity. These contrast effects for atypical members predominate assimilation effects for typical members. Intriguingly, very subtle facial cues to masculinity/femininity strongly guide first impressions and may have more impact than the gender category.
Differences between Caucasian and Asian attractive faces.
Rhee, S C
2018-02-01
There are discrepancies between the public's current beauty ideals and conventional theories and historical rules regarding facial beauty. This photogrammetric study describes in detail the mathematical differences in facial configuration between attractive Caucasian and attractive Asian faces. To analyse the structural differences, frontal and lateral views of attractive faces of each race were morphed; facial landmarks were defined, and relative photographic pixel distances and angles were measured. Absolute values were obtained by arithmetic conversion for comparison. The data indicate that some conventional beliefs about facial attractiveness still apply, while others are no longer valid for explaining perspectives on beauty among Caucasians and Asians. Racial differences in the perception of attractive faces were evident, yet common features, reflecting a global fusion of perspectives on facial beauty, were also revealed. Beauty standards differ with race and ethnicity, and some conventional rules for ideal facial attractiveness were found to be inappropriate. We must reexamine old principles of facial beauty and continue to question them fundamentally in their racial, cultural, and neuropsychological aspects. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Mayer, Christine; Windhager, Sonja; Schaefer, Katrin; Mitteroecker, Philipp
2017-01-01
Facial markers of body composition are frequently studied in evolutionary psychology and are important in computational and forensic face recognition. We assessed the association of body mass index (BMI) and waist-to-hip ratio (WHR) with facial shape and texture (color pattern) in a sample of young Middle European women by a combination of geometric morphometrics and image analysis. Faces of women with high BMI had a wider and rounder facial outline relative to the size of the eyes and lips, and relatively lower eyebrows. Furthermore, women with high BMI had a brighter and more reddish skin color than women with lower BMI. The same facial features were associated with WHR, even though BMI and WHR were only moderately correlated. Yet BMI was better predictable than WHR from facial attributes. After leave-one-out cross-validation, we were able to predict 25% of variation in BMI and 10% of variation in WHR from facial shape. Facial texture predicted only about 3–10% of variation in BMI and WHR. This indicates that facial shape primarily reflects total fat proportion rather than the distribution of fat within the body. The association of reddish facial texture with high BMI may be mediated by increased blood pressure and superficial blood flow as well as diet. Our study elucidates how geometric morphometric image analysis serves to quantify the effects of biological factors such as BMI and WHR on facial shape and color, which in turn contribute to social perception. PMID:28052103
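The leave-one-out cross-validated prediction reported above (e.g., 25% of BMI variation explained by facial shape) amounts to refitting a linear model with each subject held out in turn and scoring the held-out predictions. A minimal sketch with synthetic data; the function name is illustrative, not from the paper:

```python
import numpy as np

def loo_r2(X, y):
    """Leave-one-out cross-validated proportion of variance explained
    by an ordinary least-squares model (intercept included).
    X: (n_subjects, n_features) predictor matrix, y: (n_subjects,) target."""
    n = len(y)
    preds = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i
        A = np.column_stack([np.ones(mask.sum()), X[mask]])
        coef, *_ = np.linalg.lstsq(A, y[mask], rcond=None)
        preds[i] = np.concatenate([[1.0], X[i]]) @ coef  # predict held-out subject
    ss_res = ((y - preds) ** 2).sum()
    ss_tot = ((y - y.mean()) ** 2).sum()
    return 1.0 - ss_res / ss_tot
```

Unlike in-sample R², this value can be negative when the model generalizes worse than the mean, which makes it a more honest measure of predictability.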
Contemporary solutions for the treatment of facial nerve paralysis.
Garcia, Ryan M; Hadlock, Tessa A; Klebuc, Michael J; Simpson, Roger L; Zenn, Michael R; Marcus, Jeffrey R
2015-06-01
After reviewing this article, the participant should be able to: 1. Understand the most modern indications and technique for neurotization, including masseter-to-facial nerve transfer (fifth-to-seventh cranial nerve transfer). 2. Contrast the advantages and limitations associated with contiguous muscle transfers and free-muscle transfers for facial reanimation. 3. Understand the indications for two-stage and one-stage free gracilis muscle transfer for facial reanimation. 4. Apply nonsurgical adjuvant treatments for acute facial nerve paralysis. Facial expression is a complex neuromotor and psychomotor process that is disrupted in patients with facial paralysis, breaking the link between emotion and physical expression. Contemporary reconstructive options are being implemented in patients with facial paralysis. While static procedures provide facial symmetry at rest, true 'facial reanimation' requires restoration of facial movement. Contemporary treatment options include neurotization procedures (a new motor nerve is used to restore innervation to a viable muscle), contiguous regional muscle transfer (most commonly temporalis muscle transfer), microsurgical free muscle transfer, and nonsurgical adjuvants used to balance facial symmetry. Each approach has advantages and disadvantages, along with ongoing controversies, and should be individualized for each patient. Treatments for patients with facial paralysis continue to evolve in order to restore the complex psychomotor process of facial expression.
Impaired holistic processing of unfamiliar individual faces in acquired prosopagnosia.
Ramon, Meike; Busigny, Thomas; Rossion, Bruno
2010-03-01
Prosopagnosia is an impairment at individualizing faces that classically follows brain damage. Several studies have reported observations supporting an impairment of holistic/configural face processing in acquired prosopagnosia. However, this issue may require more compelling evidence as the cases reported were generally patients suffering from integrative visual agnosia, and the sensitivity of the paradigms used to measure holistic/configural face processing in normal individuals remains unclear. Here we tested a well-characterized case of acquired prosopagnosia (PS) with no object recognition impairment, in five behavioral experiments (whole/part and composite face paradigms with unfamiliar faces). In all experiments, for normal observers we found that processing of a given facial feature was affected by the location and identity of the other features in a whole face configuration. In contrast, the patient's results over these experiments indicate that she encodes local facial information independently of the other features embedded in the whole facial context. These observations and a survey of the literature indicate that abnormal holistic processing of the individual face may be a characteristic hallmark of prosopagnosia following brain damage, perhaps with various degrees of severity. Copyright (c) 2009 Elsevier Ltd. All rights reserved.
Modeling first impressions from highly variable facial images.
Vernon, Richard J W; Sutherland, Clare A M; Young, Andrew W; Hartley, Tom
2014-08-12
First impressions of social traits, such as trustworthiness or dominance, are reliably perceived in faces, and despite their questionable validity they can have considerable real-world consequences. We sought to uncover the information driving such judgments, using an attribute-based approach. Attributes (physical facial features) were objectively measured from feature positions and colors in a database of highly variable "ambient" face photographs, and then used as input for a neural network to model factor dimensions (approachability, youthful-attractiveness, and dominance) thought to underlie social attributions. A linear model based on this approach was able to account for 58% of the variance in raters' impressions of previously unseen faces, and factor-attribute correlations could be used to rank attributes by their importance to each factor. Reversing this process, neural networks were then used to predict facial attributes and corresponding image properties from specific combinations of factor scores. In this way, the factors driving social trait impressions could be visualized as a series of computer-generated cartoon face-like images, depicting how attributes change along each dimension. This study shows that despite enormous variation in ambient images of faces, a substantial proportion of the variance in first impressions can be accounted for through linear changes in objectively defined features.
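The attribute-ranking step described above (using factor-attribute correlations to order attributes by their importance to each factor) can be sketched as follows. The attribute names and data here are hypothetical, and this standalone correlation ranking stands in for the paper's full neural-network pipeline:

```python
import numpy as np

def rank_attributes(attrs, factor, names):
    """Rank facial attributes by absolute Pearson correlation with a
    social-trait factor score (e.g., approachability).
    attrs: (n_faces, n_attrs) measured attributes, factor: (n_faces,) scores."""
    a = (attrs - attrs.mean(axis=0)) / attrs.std(axis=0)  # z-score columns
    f = (factor - factor.mean()) / factor.std()
    r = a.T @ f / len(f)                                  # Pearson r per attribute
    order = np.argsort(-np.abs(r))                        # strongest first
    return [(names[i], float(r[i])) for i in order]
```

Each returned pair is an attribute name with its signed correlation, so both the strength and the direction of the association are preserved.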
Fixation to features and neural processing of facial expressions in a gender discrimination task
Neath, Karly N.; Itier, Roxane J.
2017-01-01
Early face encoding, as reflected by the N170 ERP component, is sensitive to fixation to the eyes. We investigated whether this sensitivity varies with facial expressions of emotion and whether it can also be seen on other ERP components such as the P1 and EPN. Using eye-tracking to manipulate fixation on facial features, we found the N170 to be the only eye-sensitive component, and this was true for fearful, happy, and neutral faces. A different effect of fixation to features was seen for the earlier P1, which likely reflected a general sensitivity to face position. An early effect of emotion (~120 ms) for happy faces was seen at occipital sites and was sustained until ~350 ms post-stimulus. For fearful faces, an early effect was seen around 80 ms, followed by a later effect from ~150 ms until ~300 ms at lateral posterior sites. Results suggest that in this emotion-irrelevant gender discrimination task, processing of fearful and happy expressions occurred early and largely independently of the eye sensitivity indexed by the N170. Processing of the two emotions involved different underlying brain networks active at different times. PMID:26277653
Maniu, Alma Aurelia; Harabagiu, Oana; Damian, Laura Otilia; Ştefănescu, Eugen HoraŢiu; FănuŢă, Bogdan Marius; Cătană, Andreea; Mogoantă, Carmen Aurelia
2016-01-01
Several systemic diseases, including granulomatous and infectious processes, tumors, bone disorders, and collagen-vascular and other autoimmune diseases, may involve the middle ear and temporal bone. These diseases are difficult to diagnose when symptoms mimic acute otomastoiditis. The present report describes our experience with three such cases that were initially misdiagnosed. Their predominating symptoms were otological, with mastoiditis, hearing loss, and subsequently facial nerve palsy. The cases were considered emergencies, and the patients underwent tympanomastoidectomy under the suspicion of otitis media with cholesteatoma, in order to remove a possible abscess and to decompress the facial nerve. The common finding at surgery was severe granulation tissue filling the mastoid cavity and middle ear, without cholesteatoma. The definitive diagnoses were made by biopsy of the granulation tissue from the middle ear, revealing granulomatosis with polyangiitis (formerly known as Wegener's granulomatosis) in one case, and middle ear tuberculosis and diffuse large B-cell lymphoma in the other two. After specific therapy, facial nerve function improved and the atypical inflammatory states of the ear resolved. As a group, systemic diseases of the middle ear and temporal bone are uncommon but aggressive lesions. After analyzing these cases and reviewing the literature, we stress the importance of microscopic examination of the affected tissue, which is required for an accurate diagnosis and effective treatment.
Facial dysmorphism in Leigh syndrome with SURF-1 mutation and COX deficiency.
Yüksel, Adnan; Seven, Mehmet; Cetincelik, Umran; Yeşil, Gözde; Köksal, Vedat
2006-06-01
Leigh syndrome is an inherited, progressive neurodegenerative disorder of infancy and childhood. Mutations in the nuclear SURF-1 gene are specifically associated with cytochrome C oxidase-deficient Leigh syndrome. This report describes two patients with similar facial features. One was a 2½-year-old male and the other a 3-year-old male, with a mutation in the SURF-1 gene and facial dysmorphism including frontal bossing, brachycephaly, hypertrichosis, lateral displacement of the inner canthi, esotropia, maxillary hypoplasia, hypertrophic gums, irregularly placed teeth, upturned nostrils, low-set large ears, and retrognathia. The first patient's magnetic resonance imaging at 15 months of age indicated mild symmetric T2 prolongation involving the subthalamic nuclei; a second scan at 2 years of age revealed symmetric T2 prolongation involving the subthalamic nuclei, substantia nigra, and medulla. In the second child, the first magnetic resonance imaging, at the age of 2, documented heavy brainstem and subthalamic nuclei involvement; a second scan, performed when he was 3 years old, revealed diffuse involvement of the substantia nigra and hyperintense lesions of the central tegmental tract in addition to the previous lesions. The facial dysmorphism and magnetic resonance imaging findings observed in these cases may be specific findings in Leigh syndrome patients with cytochrome C oxidase deficiency. SURF-1 gene mutations should be particularly considered in such patients.
Pervasive influence of idiosyncratic associative biases during facial emotion recognition.
El Zein, Marwa; Wyart, Valentin; Grèzes, Julie
2018-06-11
Facial morphology has been shown to influence perceptual judgments of emotion in a way that is shared across human observers. Here we demonstrate that these shared associations between facial morphology and emotion coexist with strong variations unique to each human observer. Interestingly, a large part of these idiosyncratic associations does not vary on short time scales, emerging from stable inter-individual differences in the way facial morphological features influence emotion recognition. Computational modelling of decision-making and neural recordings of electrical brain activity revealed that both shared and idiosyncratic face-emotion associations operate through a common biasing mechanism rather than an increased sensitivity to face-associated emotions. Together, these findings emphasize the underestimated influence of idiosyncrasies on core social judgments and identify their neuro-computational signatures.
Gernhardt, Ariane; Rübeling, Hartmut; Keller, Heidi
2015-01-01
This study investigated tadpole self-drawings from 183 three- to six-year-old children living in seven cultural groups representing three ecosocial contexts. Based on assumed general production principles, the influence of cultural norms and values on specific characteristics of the tadpole drawings was examined. The results demonstrated that children from all cultural groups realized the body-proportion effect in their self-drawings, indicating universal production principles. However, children differed in single drawing characteristics depending on the specific ecosocial context. Children from Western and non-Western urban educated contexts drew themselves rather tall, with many facial features, and preferred smiling facial expressions, while children from rural traditional contexts depicted themselves significantly smaller, with fewer facial details and neutral facial expressions. PMID:26136707
Pseudoacromegaly induced by the long-term use of minoxidil.
Nguyen, Kari H; Marks, James G
2003-06-01
Acromegaly is an endocrine disorder caused by chronic excessive growth hormone secretion from the anterior pituitary gland. Significant disfiguring changes occur as a result of bone, cartilage, and soft tissue hypertrophy, including the thickening of the skin, coarsening of facial features, and cutis verticis gyrata. Pseudoacromegaly, on the other hand, is the presence of similar acromegaloid features in the absence of elevated growth hormone or insulin-like growth factor levels. We present a patient with pseudoacromegaly that resulted from the long-term use of minoxidil at an unusually high dose. This is the first case report of pseudoacromegaly as a side effect of minoxidil use.
Isolated facial myokymia as a presenting feature of pontine neurocysticercosis.
Bhatia, Rohit; Desai, Soaham; Garg, Ajay; Padma, Madakasira V; Prasad, Kameshwar; Tripathi, Manjari
2008-01-01
A 45-year-old healthy man presented with a 2-week history of continuous rippling and quivering movements of the right side of his face and neck, suggestive of myokymia. MRI of the head revealed a neurocysticercus in the pons. Treatment with steroids and carbamazepine produced significant benefit. This is the first report of pontine neurocysticercosis presenting as isolated facial myokymia. © 2007 Movement Disorder Society
Allanson, Judith; Smith, Amanda; Hare, Heather; Albrecht, Beate; Bijlsma, Emilia; Dallapiccola, Bruno; Donti, Emilio; Fitzpatrick, David; Isidor, Bertrand; Lachlan, Katherine; Le Caignec, Cedric; Prontera, Paolo; Raas-Rothschild, Annick; Rogaia, Daniela; van Bon, Bregje; Aradhya, Swaroop; Crocker, Susan F; Jarinova, Olga; McGowan-Jordan, Jean; Boycott, Kym; Bulman, Dennis; Fagerberg, Christina Ringmann
2012-09-01
Nablus mask-like facial syndrome (NMLFS) has many distinctive phenotypic features, particularly tight glistening skin with reduced facial expression, blepharophimosis, telecanthus, bulky nasal tip, abnormal external ear architecture, upswept frontal hairline, and sparse eyebrows. Over the last few years, several individuals with NMLFS have been reported to have a microdeletion of 8q21.3q22.1, demonstrated by microarray analysis. The minimal overlapping region is 93.98-96.22 Mb (hg19). Here we present clinical and microarray data from five singletons and two mother-child pairs who have heterozygous deletions significantly overlapping the region associated with NMLFS. Notably, while one mother and child were said to have mild tightening of facial skin, none of these individuals exhibited reduced facial expression or the classical facial phenotype of NMLFS. These findings indicate that deletion of the 8q21.3q22.1 region is necessary but not sufficient for development of the NMLFS. We discuss possible genetic mechanisms underlying the complex pattern of inheritance for this condition. Copyright © 2012 Wiley Periodicals, Inc.
Automatic detection of confusion in elderly users of a web-based health instruction video.
Postma-Nilsenová, Marie; Postma, Eric; Tates, Kiek
2015-06-01
Because of cognitive limitations and lower health literacy, many elderly patients have difficulty understanding verbal medical instructions. Automatic detection of facial movements provides a nonintrusive basis for building technological tools supporting confusion detection in healthcare delivery applications on the Internet. Twenty-four elderly participants (70-90 years old) were recorded while watching Web-based health instruction videos involving easy and complex medical terminology. Relevant fragments of the participants' facial expressions were rated by 40 medical students for perceived level of confusion and analyzed with automatic software for facial movement recognition. A computer classification of the automatically detected facial features performed more accurately and with a higher sensitivity than the human observers (automatic detection and classification, 64% accuracy, 0.64 sensitivity; human observers, 41% accuracy, 0.43 sensitivity). A drill-down analysis of cues to confusion indicated the importance of the eye and eyebrow region. Confusion caused by misunderstanding of medical terminology is signaled by facial cues that can be automatically detected with currently available facial expression detection technology. The findings are relevant for the development of Web-based services for healthcare consumers.
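The reported comparison above (automatic detection: 64% accuracy, 0.64 sensitivity; human observers: 41% accuracy, 0.43 sensitivity) uses standard binary classification metrics, which can be computed from confusion-matrix counts. The counts in the usage below are illustrative, not from the study:

```python
def accuracy_sensitivity(tp, fp, tn, fn):
    """Accuracy and sensitivity (true-positive rate) from binary
    confusion counts: true/false positives and true/false negatives."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)
    return accuracy, sensitivity
```

For example, `accuracy_sensitivity(8, 2, 8, 2)` yields an accuracy of 0.8 and a sensitivity of 0.8 on 20 hypothetical rated fragments.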
Selective Audiovisual Semantic Integration Enabled by Feature-Selective Attention.
Li, Yuanqing; Long, Jinyi; Huang, Biao; Yu, Tianyou; Wu, Wei; Li, Peijun; Fang, Fang; Sun, Pei
2016-01-13
An audiovisual object may contain multiple semantic features, such as the gender and emotional features of the speaker. Feature-selective attention and audiovisual semantic integration are two brain functions involved in the recognition of audiovisual objects. Humans often selectively attend to one or several features while ignoring the other features of an audiovisual object. Meanwhile, the human brain integrates semantic information from the visual and auditory modalities. However, how these two brain functions correlate with each other remains to be elucidated. In this functional magnetic resonance imaging (fMRI) study, we explored the neural mechanism by which feature-selective attention modulates audiovisual semantic integration. During the fMRI experiment, the subjects were presented with visual-only, auditory-only, or audiovisual dynamical facial stimuli and performed several feature-selective attention tasks. Our results revealed that a distribution of areas, including heteromodal areas and brain areas encoding attended features, may be involved in audiovisual semantic integration. Through feature-selective attention, the human brain may selectively integrate audiovisual semantic information from attended features by enhancing functional connectivity and thus regulating information flows from heteromodal areas to brain areas encoding the attended features.
[Surgical treatment in otogenic facial nerve palsy].
Feng, Guo-Dong; Gao, Zhi-Qiang; Zhai, Meng-Yao; Lü, Wei; Qi, Fang; Jiang, Hong; Zha, Yang; Shen, Peng
2008-06-01
To study the characteristics of facial nerve palsy due to four different otologic diseases (chronic otitis media, Hunt syndrome, tumor, and physical or chemical factors), and to discuss principles of the surgical management of otogenic facial nerve palsy. The clinical characteristics of 24 patients with otogenic facial nerve palsy due to these diseases were retrospectively analyzed; all underwent surgical management between October 1991 and March 2007. Facial nerve function was evaluated with the House-Brackmann (HB) grading system. The 24 patients (10 males and 14 females) were analyzed: 12 cases were due to cholesteatoma, 3 to chronic otitis media, 3 to Hunt syndrome, 2 to acute otitis media, 2 to physical or chemical factors, and 2 to tumor. All cases were treated with operations, including facial nerve decompression, lesion resection with facial nerve decompression, and lesion resection without facial nerve decompression; one patient's facial nerve was resected because of the tumor. According to the HB grading system, grade I recovery was attained in 4 cases, grade II in 10, grade III in 6, grade IV in 2, grade V in 2, and grade VI in 1. Complete removal of the lesions was fundamental to the surgery of otogenic facial palsy; moreover, facial nerve decompression soon after lesion removal was important.
Three-Dimensional Anthropometric Evaluation of Facial Morphology.
Celebi, Ahmet Arif; Kau, Chung How; Ozaydin, Bunyamin
2017-07-01
The objectives of this study were to evaluate sexual dimorphism for facial features within Colombian and Mexican-American populations and to compare the facial morphology by sex between these 2 populations. Three-dimensional facial images were acquired by using the portable 3dMDface system, which captured 223 subjects from 2 population groups of Colombians (n = 131) and Mexican-Americans (n = 92). Each population was categorized into male and female groups for evaluation. All subjects in the groups were aged between 18 and 30 years and had no apparent facial anomalies. A total of 21 anthropometric landmarks were identified on the 3-dimensional faces of each subject. The independent t test was used to analyze each data set obtained within each subgroup. The Colombian males showed significantly greater width of the outercanthal width, eye fissure length, and orbitale than the Colombian females. The Colombian females had significantly smaller lip and mouth measurements for all distances except upper vermillion height than Colombian males. The Mexican-American females had significantly smaller measurements with regard to the nose than Mexican-American males. Meanwhile, the heights of the face, the upper face, the lower face, and the mandible were all significantly less in the Mexican-American females. The intercanthal and outercanthal widths were significantly greater in the Mexican-American males and females. Meanwhile, the orbitale distance of Mexican-American sexes was significantly smaller than those of the Colombian males and females. The Mexican-American group had significantly larger nose width and length of alare than the Colombian group regarding both sexes. With respect to the nasal tip protrusion and nose height, they were significantly smaller in the Colombian females than in the Mexican-American females. The face width was significantly greater in the Colombian males and females. 
Sexual dimorphism for facial features was presented in both the Colombian and Mexican-American populations. In addition, there were significant differences in facial morphology between these 2 populations.
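The group comparisons in the study above rest on the independent two-sample t test. A minimal pooled-variance sketch follows; the measurement values are synthetic and purely illustrative, not the study's data:

```python
# Illustrative independent two-sample t test with pooled variance, as used
# to compare anthropometric distances between groups. Values are synthetic.
import math

def t_statistic(a, b):
    """Independent two-sample t statistic (pooled variance)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)   # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)   # pooled variance
    return (ma - mb) / math.sqrt(sp2 * (1 / na + 1 / nb))

# Hypothetical outercanthal widths (mm) for two groups.
males = [91.2, 92.8, 90.5, 93.1, 91.9]
females = [88.4, 89.1, 87.9, 88.7, 89.5]
t = t_statistic(males, females)   # positive: first group's mean is larger
```

The statistic would then be compared against a t distribution with na + nb - 2 degrees of freedom to decide significance.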
Lin, A-S; Chang, S-S; Lin, S-H; Peng, Y-C; Hwu, H-G; Chen, W J
2015-07-01
Schizophrenia patients have higher rates of minor physical anomalies (MPAs) than controls, particularly in the craniofacial region; this difference lends support to the neurodevelopmental model of schizophrenia. Whether MPAs are associated with treatment response in schizophrenia remains unknown. The aim of this case-control study was to investigate whether more MPAs and specific quantitative craniofacial features in patients with schizophrenia are associated with operationally defined treatment resistance. A comprehensive scale, consisting of both qualitatively measured MPAs and quantitative measurements of the head and face, was applied in 108 patients with treatment-resistant schizophrenia (TRS) and in 104 non-TRS patients. Treatment resistance was determined according to the criteria proposed by Conley & Kelly (2001; Biological Psychiatry 50, 898-911). Our results revealed that patients with TRS had higher MPA scores in the mouth region than non-TRS patients, and the two groups also differed in four quantitative measurements (facial width, lower facial height, facial height, and length of the philtrum), after controlling for multiple comparisons using the false discovery rate. Among these dysmorphological measurements, three MPA item types (mouth MPA score, facial width, and lower facial height) and earlier disease onset were further demonstrated to have good discriminant validity in distinguishing TRS from non-TRS patients in a multivariable logistic regression analysis, with an area under the curve of 0.84 and a generalized R² of 0.32. These findings suggest that certain MPAs and craniofacial features may serve as useful markers for identifying TRS at early stages of the illness.
Bouquot, J E; LaMarche, M G
1999-02-01
Previous studies have identified focal areas of alveolar tenderness, elevated mucosal temperature, radiographic abnormality, and increased radioisotope uptake or "hot spots" within the quadrant of pain in most patients with chronic, idiopathic facial pain (phantom pain, atypical facial neuralgia, and atypical facial pain). This retrospective investigation radiographically and microscopically evaluated intramedullary bone in a subset of patients with histories of endodontics, extraction, and fixed partial denture placement in an area of "idiopathic" pain. Patients from 12 US states were identified through tissue samples, histories, and radiographs submitted to a national biopsy service. Imaging tests, coagulation tests, and microscopic features were reviewed. Of 38 consecutive idiopathic facial pain patients, 32 were women. Approximately 90% of subpontic bone demonstrated either ischemic osteonecrosis (68%), chronic osteomyelitis (21%), or a combination (11%). More than 84% of the patients had abnormal radiographic changes in subpontic bone, and radioisotope bone scans revealed hot spots in the region in 5 of 9 (56%) patients who underwent scanning. Of the 14 patients who had laboratory testing for coagulation disorders, 71% were positive for thrombophilia, hypofibrinolysis, or both (normal: 2% to 7%). Ten pain-free patients with abnormal subpontic bone on radiographs were also reviewed. Intraosseous ischemia and chronic inflammation were suggested as a pathoetiologic mechanism for at least some patients with atypical facial pain. These conditions were also offered as an explanation for poor healing of extraction sockets and positive radioisotope scans.
Super-resolution method for face recognition using nonlinear mappings on coherent features.
Huang, Hua; He, Huiting
2011-01-01
Low resolution (LR) of face images significantly decreases the performance of face recognition. To address this problem, we present a super-resolution method that uses nonlinear mappings to infer coherent features that favor higher recognition rates with nearest neighbor (NN) classifiers for recognition of a single LR face image. Canonical correlation analysis is applied to establish coherent subspaces between the principal component analysis (PCA) based features of high-resolution (HR) and LR face images. Then, a nonlinear mapping between HR/LR features can be built by radial basis functions (RBFs) with lower regression errors in the coherent feature space than in the PCA feature space. Thus, we can compute super-resolved coherent features corresponding to an input LR image efficiently and accurately according to the trained RBF model. Face identity can then be obtained by feeding these super-resolved features to a simple NN classifier. Extensive experiments on the Facial Recognition Technology, University of Manchester Institute of Science and Technology, and Olivetti Research Laboratory databases show that the proposed method outperforms state-of-the-art face recognition algorithms for single LR images in terms of both recognition rate and robustness to facial variations of pose and expression.
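The pipeline this abstract describes (PCA features for HR and LR images, a CCA coherent subspace, a kernel regression from LR to HR coherent features, and NN matching) can be sketched as follows. This is a minimal sketch on synthetic feature vectors: the data, dimensions, RBF kernel width, and ridge penalty are illustrative assumptions, not the paper's configuration.

```python
# Sketch of coherent-feature super-resolution for recognition:
# PCA -> CCA coherent subspace -> RBF (kernel ridge) LR-to-HR mapping -> NN.
# All sizes and the synthetic "face features" are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def pca_basis(X, k):
    """Mean and top-k principal axes of the rows of X."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k].T                                  # (d, k) projection

def cca(X, Y, k, eps=1e-6):
    """Top-k canonical projections for row-paired, centered X and Y."""
    n = X.shape[0]
    Cxx = X.T @ X / n + eps * np.eye(X.shape[1])
    Cyy = Y.T @ Y / n + eps * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n
    Wx = np.linalg.inv(np.linalg.cholesky(Cxx)).T        # whitening maps
    Wy = np.linalg.inv(np.linalg.cholesky(Cyy)).T
    U, _, Vt = np.linalg.svd(Wx.T @ Cxy @ Wy)            # SVD of whitened Cxy
    return Wx @ U[:, :k], Wy @ Vt.T[:, :k]

def rbf(A, B, gamma=0.5):
    """Gaussian (RBF) kernel matrix between rows of A and rows of B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Synthetic "faces": 5 identities, 10 HR samples each, paired LR versions.
means = rng.normal(0, 5, (5, 20))                        # HR class centers
labels = np.repeat(np.arange(5), 10)
hr = means[labels] + rng.normal(0, 0.3, (50, 20))
P = rng.normal(0, 1, (20, 8)) / np.sqrt(20)              # fixed "downsampling"
lr = hr @ P + rng.normal(0, 0.1, (50, 8))

mu_h, Bh = pca_basis(hr, 10)                             # PCA features
mu_l, Bl = pca_basis(lr, 6)
Xh, Xl = (hr - mu_h) @ Bh, (lr - mu_l) @ Bl
Ah, Al = cca(Xh - Xh.mean(0), Xl - Xl.mean(0), 5)
Ch = (Xh - Xh.mean(0)) @ Ah                              # coherent features
Cl = (Xl - Xl.mean(0)) @ Al

K = rbf(Cl, Cl)
alpha = np.linalg.solve(K + 1e-3 * np.eye(len(K)), Ch)   # kernel ridge fit

# Probes: one new LR image per identity, matched by NN against the gallery.
probe_lr = means @ P + rng.normal(0, 0.1, (5, 8))
Cp = ((probe_lr - mu_l) @ Bl - Xl.mean(0)) @ Al
sr = rbf(Cp, Cl) @ alpha                                 # super-resolved features
gallery = np.stack([Ch[labels == c].mean(0) for c in range(5)])
pred = ((sr[:, None, :] - gallery[None]) ** 2).sum(-1).argmin(1)
accuracy = (pred == np.arange(5)).mean()
```

On this easy synthetic data the NN classifier recovers the identities; the point of the coherent subspace is that the LR-to-HR regression is performed where the two feature sets are maximally correlated.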
Modeling Face Identification Processing in Children and Adults.
ERIC Educational Resources Information Center
Schwarzer, Gudrun; Massaro, Dominic W.
2001-01-01
Two experiments studied whether and how 5-year-olds integrate single facial features to identify faces. Results indicated that children could evaluate and integrate information from eye and mouth features to identify a face when salience of features was varied. A weighted Fuzzy Logical Model of Perception fit better than a Single Channel Model,…
Facial motion parameter estimation and error criteria in model-based image coding
NASA Astrophysics Data System (ADS)
Liu, Yunhai; Yu, Lu; Yao, Qingdong
2000-04-01
Model-based image coding has received extensive attention due to its high subjective image quality and low bit rates. However, the estimation of object motion parameters is still a difficult problem, and there are no proper error criteria for quality assessment that are consistent with visual properties. This paper presents an algorithm for facial motion parameter estimation based on feature point correspondence and gives motion parameter error criteria. The facial motion model comprises three parts. The first part is the global 3-D rigid motion of the head, the second part is non-rigid translation motion in the jaw area, and the third part consists of local non-rigid expression motion in the eye and mouth areas. The feature points are automatically selected by a function of edges, brightness, and end-nodes outside the blocks of the eyes and mouth. The number of feature points is adjusted adaptively. The jaw translation motion is tracked by the changes in the positions of the jaw feature points. The areas of non-rigid expression motion can be rebuilt by using a block-pasting method. An approach to estimating motion parameter error based on the quality of the reconstructed image is suggested, and an area error function and the error function of the contour transition-turn rate are used as quality criteria. The criteria properly reflect the image geometric distortion caused by errors in the estimated motion parameters.
Hemizygosity at the elastin locus and clinical features of Williams syndrome
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morimoto, Y; Kuwano, A.; Kuwajima, K.
1994-09-01
Williams syndrome is a recognizable syndrome characterized by distinctive facial appearance, gregarious personality, mental retardation, congenital heart defect, particularly supravalvular aortic stenosis (SVAS), and joint limitation. SVAS is an autosomal vascular disorder, and the elastin gene is disrupted in patients with SVAS. Ewart et al. reported that hemizygosity at the elastin locus was detected in four familial and five sporadic cases of Williams syndrome. However, three patients did not have SVAS. We reconfirmed hemizygosity at the elastin locus in five patients with typical clinical features of Williams syndrome. Hemizygosity was detected in four cases with SVAS. However, one patient with distinctive facial appearance and typical Williams syndrome personality had two alleles of the elastin gene, but he did not have the congenital heart anomaly. Williams syndrome is thought to be a contiguous gene disorder. Thus, our data suggest that the elastin gene is responsible for the vascular defect in patients with Williams syndrome, and that flanking genes are responsible for the characteristic facial appearance and personality.
Wang, Shu-Fan; Lai, Shang-Hong
2011-10-01
Facial expression modeling is central to facial expression recognition and expression synthesis for facial animation. In this work, we propose a manifold-based 3D face reconstruction approach to estimating the 3D face model and the associated expression deformation from a single face image. With the proposed robust weighted feature map (RWF), we can obtain the dense correspondences between 3D face models and build a nonlinear 3D expression manifold from a large set of 3D facial expression models. Then a Gaussian mixture model in this manifold is learned to represent the distribution of expression deformation. By combining the merits of morphable neutral face model and the low-dimensional expression manifold, a novel algorithm is developed to reconstruct the 3D face geometry as well as the facial deformation from a single face image in an energy minimization framework. Experimental results on simulated and real images are shown to validate the effectiveness and accuracy of the proposed algorithm.
Morphological evaluation of clefts of the lip, palate, or both in dogs.
Peralta, Santiago; Fiani, Nadine; Kan-Rohrer, Kimi H; Verstraete, Frank J M
2017-08-01
OBJECTIVE To systematically characterize the morphology of cleft lip, cleft palate, and cleft lip and palate in dogs. ANIMALS 32 client-owned dogs with clefts of the lip (n = 5), palate (23), or both (4) that had undergone a CT or cone-beam CT scan of the head prior to any surgical procedures involving the oral cavity or face. PROCEDURES Dog signalment and skull type were recorded. The anatomic form of each defect was characterized by use of a widely used human oral-cleft classification system on the basis of CT findings and clinical images. Other defect morphological features, including shape, relative size, facial symmetry, and vomer involvement, were also recorded. RESULTS 9 anatomic forms of cleft were identified. Two anatomic forms were identified in the 23 dogs with cleft palate, in which differences in defect shape and size as well as vomer abnormalities were also evident. Seven anatomic forms were observed in 9 dogs with cleft lip or cleft lip and palate, and most of these dogs had incisive bone abnormalities and facial asymmetry. CONCLUSIONS AND CLINICAL RELEVANCE The morphological features of congenital cleft lip, cleft palate, and cleft lip and palate were complex and varied among dogs. The features identified here may be useful for surgical planning, development of clinical coding schemes, or informing genetic, embryological, or clinical research into birth defects in dogs and other species.
Chen, Peng-Chieh; Wakimoto, Hiroko; Conner, David; Araki, Toshiyuki; Yuan, Tao; Roberts, Amy; Seidman, Christine E.; Bronson, Roderick; Neel, Benjamin G.; Seidman, Jonathan G.; Kucherlapati, Raju
2010-01-01
Noonan syndrome (NS) is an autosomal dominant genetic disorder characterized by short stature, unique facial features, and congenital heart disease. About 10%–15% of individuals with NS have mutations in son of sevenless 1 (SOS1), which encodes a RAS and RAC guanine nucleotide exchange factor (GEF). To understand the role of SOS1 in the pathogenesis of NS, we generated mice with the NS-associated Sos1E846K gain-of-function mutation. Both heterozygous and homozygous mutant mice showed many NS-associated phenotypes, including growth delay, distinctive facial dysmorphia, hematologic abnormalities, and cardiac defects. We found that the Ras/MAPK pathway as well as Rac and Stat3 were activated in the mutant hearts. These data provide in vivo molecular and cellular evidence that Sos1 is a GEF for Rac under physiological conditions and suggest that Rac and Stat3 activation might contribute to NS phenotypes. Furthermore, prenatal administration of a MEK inhibitor ameliorated the embryonic lethality, cardiac defects, and NS features of the homozygous mutant mice, demonstrating that this signaling pathway might represent a promising therapeutic target for NS. PMID:21041952
Ethnic and Gender Considerations in the Use of Facial Injectables: African-American Patients.
Burgess, Cheryl; Awosika, Olabola
2015-11-01
The United States is becoming increasingly diverse as the nonwhite population continues to rise faster than ever. By 2044, the US Census Bureau projects that more than 50% of the US population will be of nonwhite descent. Ethnic patients are the fastest-growing segment of the cosmetic procedures market, with African-Americans comprising 7.1% of the 22% of ethnic minorities who received cosmetic procedures in the United States in 2014. The cosmetic concerns and natural features of this ethnic population are unique and guided by structural and aging processes that differ from those of their white counterparts. As people of color increasingly seek nonsurgical cosmetic procedures, dermatologists and cosmetic surgeons must become aware that the Westernized look does not necessarily constitute beauty in these diverse people. The use of specialized aesthetic approaches and an understanding of cultural and ethnic-specific features are warranted in the treatment of these patients. This article will review the key principles to consider when treating African-American patients, including the average facial structure of African-Americans, the impact of their ethnicity on the aging and structure of the face, and soft-tissue augmentation strategies specific to African-American skin.
Schweitzer, Daniela N; Yano, Shoji; Earl, Dawn L; Graham, John M
2003-07-30
In 1983, Johnson et al. described 16 related individuals with alopecia, anosmia or hyposmia, conductive hearing loss, microtia and/or atresia of the external auditory canal, and hypogonadotrophic hypogonadism inherited in an autosomal dominant pattern. Other less constant manifestations included facial asymmetry, mental retardation, congenital heart defect, cleft palate, and choanal stenosis. An isolated case was reported later (Johnston et al. [1987: Am J Med Genet 26: 925-927]) and thereafter an affected mother and son (Hennekam and Holtus [1993: Am J Med Genet 47: 714-716]). We describe an additional unrelated female patient with features resembling those of the previously reported cases. She presented with intrauterine growth deficiency, microcephaly, alopecia, bilateral microtia with canal atresia, conductive hearing loss, partial left facial palsy, posterior cleft palate, left choanal stenosis, tetralogy of Fallot, developmental delay, and right thumb polydactyly. Because the phenotypic abnormalities in this syndrome affect the brain, facial structures, ectoderm and its derivatives, outflow tract of the heart, and Rathke's pouch derivatives, this has suggested to previous authors etiologic involvement of the ectoderm and neuroectoderm of the first and second branchial arches, Rathke's pouch, and the diencephalon. Microtia with conductive hearing loss differentiates the condition from other ectodermal dysplasias. In the initial report, females appeared somewhat less affected than males, and there was male-to-male transmission. The mother of our patient manifests subtle features, which suggest she may be a mildly affected female. Additionally, there is a family history of early-onset alopecia in the maternal grandfather's relatives. Copyright 2003 Wiley-Liss, Inc.
Viðarsdóttir, Una Strand; O'Higgins, Paul; Stringer, Chris
2002-09-01
This study examines interpopulation variation in the facial skeleton of 10 modern human populations and places it in an ontogenetic perspective. It aims to establish the extent to which the distinctive features of adult representatives of these populations are present in the early postnatal period and to what extent population differences in ontogenetic scaling and allometric trajectories contribute to distinct facial forms. The analyses utilize configurations of facial landmarks and are carried out using geometric morphometric methods. The results of this study show that modern human populations can be distinguished on the basis of facial shape alone, irrespective of age or sex, indicating the early presence of differences. Additionally, some populations have statistically distinct facial ontogenetic trajectories that lead to the development of further differences later in ontogeny. We conclude that population-specific facial morphologies develop principally through distinctions in facial shape probably already present at birth, further accentuated and modified to variable degrees during growth. These findings raise interesting questions regarding the plasticity of facial growth patterns in modern humans. Further, they have important implications for the study of facial growth in fossil hominins and for the possibility of developing effective discriminant functions for identifying the population affinities of immature facial skeletal material. Such tools would be of value in archaeological, forensic and anthropological applications. The findings of this study underline the need to examine more deeply, and in more detail, the ontogenetic basis of other sources of craniometric variation, such as sexual dimorphism and hominin species differentiation.
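Geometric morphometric analyses of landmark configurations, as in the study above, typically begin with Procrustes superimposition, which removes translation, scale, and rotation so only shape differences remain. A minimal sketch with hypothetical 2D landmarks (the reflection check of full generalized Procrustes analysis is omitted for brevity):

```python
# Ordinary Procrustes alignment of two landmark configurations: center,
# scale to unit centroid size, then find the optimal rotation via SVD.
# Landmarks below are synthetic 2D points, purely for illustration.
import numpy as np

def procrustes_align(X, Y):
    """Align Y to X (both k x d landmark arrays); return aligned Y and the
    Procrustes distance. Reflection handling is omitted in this sketch."""
    Xc = X - X.mean(axis=0)               # remove translation
    Yc = Y - Y.mean(axis=0)
    Xc = Xc / np.linalg.norm(Xc)          # remove scale (unit centroid size)
    Yc = Yc / np.linalg.norm(Yc)
    U, _, Vt = np.linalg.svd(Xc.T @ Yc)   # optimal rotation (orthogonal
    R = (U @ Vt).T                        # Procrustes problem)
    Y_aligned = Yc @ R
    return Y_aligned, np.linalg.norm(Xc - Y_aligned)

# A "face" of 5 landmarks and a rotated, scaled, translated copy of it.
X = np.array([[0.0, 0.0], [2.0, 0.0], [1.0, 1.0], [0.5, 2.0], [1.5, 2.0]])
theta = 0.7
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
Y = 3.0 * X @ rot.T + np.array([5.0, -2.0])
_, d = procrustes_align(X, Y)             # same shape -> distance near zero
```

After superimposition, the aligned coordinates can be fed to ordinary multivariate statistics (principal components, discriminant functions) to compare populations or growth trajectories.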
Foolad, Negar; Shi, Vivian Y; Prakash, Neha; Kamangar, Faranak; Sivamani, Raja K
2015-06-16
Rosacea and melasma are two common skin conditions in dermatology. Both conditions have a predilection for the centrofacial region, where sebaceous gland density is highest. However, it is not known whether sebaceous function is associated with these conditions. We aimed to assess the relationship between facial glabellar wrinkle severity and facial sebum excretion rate in individuals with rosacea, melasma, both conditions, or rhytides. Second, the purpose of this study was to utilize high-resolution 3D facial modeling and measurement technology to obtain information on glabellar rhytid count and severity. A total of 21 subjects participated in the study. Subjects were divided into four groups based on facial features: rosacea-only, melasma-only, rosacea and melasma, and rhytides-only. A high-resolution facial photograph was taken, followed by measurement of the facial sebum excretion rate (SER). The SER was found to decline with age and with the presence of melasma. The SER negatively correlated with increasing Wrinkle Severity Rating Scale score. Through the use of 3D facial modeling and skin analysis technology, we found a positive correlation between clinically based grading scores and computer-generated glabellar rhytid count and severity. Continuing research with facial modeling and measurement systems will allow for the development of more objective facial assessments. Future studies need to assess the role of technology in stratifying the severity and subtypes of rosacea and melasma. Furthermore, the role of sebaceous regulation may have important implications in photoaging.
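Correlations like the negative SER-versus-wrinkle-severity relationship reported above are commonly quantified with a Pearson coefficient. A minimal sketch; the subject values are synthetic and illustrative, not the study's measurements:

```python
# Pearson correlation between a synthetic sebum excretion rate (SER) and a
# synthetic wrinkle severity score; values are illustrative, not study data.
import math

def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical subjects: SER falls as wrinkle severity rises.
ser      = [2.1, 1.8, 1.6, 1.2, 0.9, 0.7]
wrinkles = [1,   2,   3,   5,   6,   8]
r = pearson_r(ser, wrinkles)   # strongly negative for this pattern
```

For ordinal severity scales such as the Wrinkle Severity Rating Scale, a rank-based (Spearman) coefficient, computed the same way on the ranks of the data, is often the more appropriate choice.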