Sample records for total face facial

  1. Validation of the facial dysfunction domain of the Penn Acoustic Neuroma Quality-of-Life (PANQOL) Scale.

    PubMed

    Lodder, Wouter L; Adan, Guleed H; Chean, Chung S; Lesser, Tristram H; Leong, Samuel C

    2017-06-01

    The objective of this study was to evaluate the strength of content validity of the facial dysfunction domain of the Penn Acoustic Neuroma Quality-of-Life (PANQOL) Scale and to assess how it correlates with a facial dysfunction-specific QOL instrument (Facial Clinimetric Evaluation, FaCE). The study design was an online questionnaire survey. Members of the British Acoustic Neuroma Association received both the PANQOL questionnaire and the FaCE scale. 158 respondents with self-identified facial paralysis or dysfunction provided completed PANQOL and FaCE data sets for analysis. The mean composite PANQOL score was 53.5 (range 19.2-93.5), whilst the mean total FaCE score was 50.9 (range 10-95). The total scores of the PANQOL and FaCE correlated moderately (r = 0.48). A strong correlation (r = 0.63) was observed between the PANQOL's facial dysfunction domain and the FaCE total score. Of all the FaCE domains, social function correlated most strongly with the PANQOL facial dysfunction domain (r = 0.66), whilst correlations with the other FaCE domains were very weak to moderate (range 0.01-0.43). The current study has demonstrated a strong correlation between the PANQOL facial dysfunction domain and a facial paralysis-specific QOL instrument.
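
    The domain-level correlations reported above are Pearson coefficients. A minimal sketch of the computation, using invented PANQOL and FaCE scores (the real respondent data are not in the abstract):

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical PANQOL facial-domain and FaCE total scores for five respondents
panqol_facial = [20.0, 35.0, 50.0, 65.0, 80.0]
face_total    = [25.0, 30.0, 55.0, 60.0, 90.0]
print(round(pearson_r(panqol_facial, face_total), 2))
```

    With real data, r near 0.63 would reproduce the facial-domain result quoted in the abstract.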

  2. Ideal proportions in full face front view, contemporary versus antique.

    PubMed

    Mommaerts, M Y; Moerenhout, B A M M L

    2011-03-01

    To compare the facial proportions of contemporary harmonious faces with those of antiquity, to validate classical canons, and to determine new ones useful in orthofacial surgery planning. Contemporary beautiful faces were retrieved from yearly polls of People Magazine and FHM. Selected B/W frontal facial photographs of 31 men and 74 women were ranked by 20 patients who were due to undergo orthofacial surgery. The top-15 female faces and the top-10 male faces were analyzed with Scion Image software. The classical facial index, the Bruges facial index, the ratio of lower facial height to total facial height, and the vertical tripartite division of the lower face were calculated. The same analysis was done on pictures of classical sculptures representing seven goddesses and 12 gods. Harmonious contemporary female faces have a significantly lower classical facial index, indicating that facial height is smaller, or facial width larger, than in male faces and even than in antique female faces. The Bruges index indicates a similar difference between ideal contemporary female and male faces. The contemporary male has a taller lower face relative to total facial height (48%) than the contemporary female (45%), although this difference is not statistically significant (P=0.08). The lower facial thirds index has remained quite stable for 2500 years, without gender difference. A good canon for both sexes today is stomion-gnathion being 70% of subnasale-gnathion. The average ideal contemporary female face is shorter than the male face, given that interpupillary distance is similar. The Vitruvian thirds in the lower face have to be adjusted to a 30% upper lip, 70% lower lip-chin proportion. The contemporary ideal ratios are suitable for implementation in an orthofacial planning concept. Copyright © 2010 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.
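
    The 30%/70% lower-face canon is a simple ratio over three midline landmarks. A sketch with invented landmark heights (not taken from the paper):

```python
# Hypothetical vertical positions (mm, measured downward from subnasale)
subnasale, stomion, gnathion = 0.0, 21.0, 70.0

lower_face_height = gnathion - subnasale                    # subnasale-gnathion
upper_lip = (stomion - subnasale) / lower_face_height       # subnasale-stomion share
lower_lip_chin = (gnathion - stomion) / lower_face_height   # stomion-gnathion share
print(upper_lip, lower_lip_chin)  # 0.3 and 0.7: the contemporary 30%/70% canon
```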

  3. Total Face, Eyelids, Ears, Scalp, and Skeletal Subunit Transplant Research Procurement: A Translational Simulation Model.

    PubMed

    Sosin, Michael; Ceradini, Daniel J; Hazen, Alexes; Sweeney, Nicole G; Brecht, Lawrence E; Levine, Jamie P; Staffenberg, David A; Saadeh, Pierre B; Bernstein, G Leslie; Rodriguez, Eduardo D

    2016-05-01

    Cadaveric face transplant models are routinely used for technical allograft design, perfusion assessment, and transplant simulation but are associated with substantial limitations. The purpose of this study was to describe the experience of implementing a translational donor research facial procurement and solid organ allograft recovery model. Institutional review board approval was obtained, and a 49-year-old, brain-dead donor was identified for facial vascularized composite allograft research procurement. The family generously consented to donation of solid organs and the total face, eyelids, ears, scalp, and skeletal subunit allograft. The successful sequence of computed tomographic scanning, fabrication and postprocessing of patient-specific cutting guides, tracheostomy placement, preoperative fluorescent angiography, silicone mask facial impression, donor facial allograft recovery, postprocurement fluorescent angiography, and successful recovery of kidneys and liver occurred without any donor instability. Preservation of the bilateral external carotid arteries, facial arteries, occipital arteries, and bilateral thyrolinguofacial and internal jugular veins provided reliable and robust perfusion to the entirety of the allograft. Total time of facial procurement was 10 hours 57 minutes. Essential to clinical face transplant outcomes is the preparedness of the institution, multidisciplinary face transplant team, organ procurement organization, and solid organ transplant colleagues. A translational facial research procurement and solid organ recovery model serves as an educational experience to modify processes and address procedural, anatomical, and logistical concerns for institutions developing a clinical face transplantation program. This methodical approach best simulates the stressors and challenges that can be expected during clinical face transplantation. Therapeutic, V.

  4. Rescue therapy by switching to total face mask after failure of face mask-delivered noninvasive ventilation in do-not-intubate patients in acute respiratory failure.

    PubMed

    Lemyze, Malcolm; Mallat, Jihad; Nigeon, Olivier; Barrailler, Stéphanie; Pepy, Florent; Gasan, Gaëlle; Vangrunderbeeck, Nicolas; Grosset, Philippe; Tronchon, Laurent; Thevenin, Didier

    2013-02-01

    To evaluate the impact of switching to total face mask in cases where face mask-delivered noninvasive mechanical ventilation has already failed in do-not-intubate patients in acute respiratory failure. Prospective observational study in an ICU and a respiratory stepdown unit over a 12-month study period. Switching to total face mask, which covers the entire face, when noninvasive mechanical ventilation using facial mask (oronasal mask) failed to reverse acute respiratory failure. Seventy-four patients with a do-not-intubate order and treated by noninvasive mechanical ventilation for acute respiratory failure. Failure of face mask-delivered noninvasive mechanical ventilation was associated with a three-fold increase in in-hospital mortality (36% vs. 10.5%; p = 0.009). Nevertheless, 23 out of 36 patients (64%) in whom face mask-delivered noninvasive mechanical ventilation failed to reverse acute respiratory failure and who were, therefore, switched to total face mask survived to hospital discharge. Reasons for switching from facial mask to total face mask included refractory hypercapnic acute respiratory failure (n = 24, 66.7%), painful skin breakdown or facial mask intolerance (n = 11, 30%), and refractory hypoxemia (n = 1, 2.7%). In the 24 patients switched from facial mask to total face mask because of refractory hypercapnia, encephalopathy score (3 [3-4] vs. 2 [2-3]; p < 0.0001), PaCO2 (87 ± 25 mm Hg vs. 70 ± 17 mm Hg; p < 0.0001), and pH (7.24 ± 0.1 vs. 7.32 ± 0.09; p < 0.0001) significantly improved after 2 hrs of total face mask-delivered noninvasive ventilation. Patients switched early to total face mask (in the first 12 hrs) developed fewer pressure sores (n = 5, 24% vs. n = 13, 87%; p = 0.0002), despite greater length of noninvasive mechanical ventilation within the first 48 hrs (44 hrs vs. 34 hrs; p = 0.05) and fewer protective dressings (n = 2, 9.5% vs. n = 8, 53.3%; p = 0.007). 
The optimal cutoff value for face mask-delivered noninvasive mechanical ventilation duration in predicting facial pressure sores was 11 hrs (area under the receiver operating characteristic curve, 0.86 ± 0.04; 95% confidence interval 0.76-0.93; p < 0.0001; sensitivity, 84%; specificity, 71%). In patients in hypercapnic acute respiratory failure, for whom escalation to intubation is deemed inappropriate, switching to total face mask can be proposed as a last resort therapy when face mask-delivered noninvasive mechanical ventilation has already failed to reverse acute respiratory failure. This strategy is particularly adapted to provide prolonged periods of continuous noninvasive mechanical ventilation while preventing facial pressure sores.
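
    One common way to derive an optimal ROC cutoff like the 11-hour threshold above is Youden's J (sensitivity + specificity - 1); the abstract does not state the criterion used, so this is an assumption, and all data below are invented:

```python
def youden_cutoff(durations, sores):
    """Pick the duration threshold maximizing Youden's J = sens + spec - 1,
    assuming longer mask ventilation predicts facial pressure sores."""
    best_j, best_cut = -1.0, None
    for cut in sorted(set(durations)):
        tp = sum(1 for d, s in zip(durations, sores) if s and d >= cut)
        fn = sum(1 for d, s in zip(durations, sores) if s and d < cut)
        tn = sum(1 for d, s in zip(durations, sores) if not s and d < cut)
        fp = sum(1 for d, s in zip(durations, sores) if not s and d >= cut)
        j = tp / (tp + fn) + tn / (tn + fp) - 1
        if j > best_j:
            best_j, best_cut = j, cut
    return best_cut, best_j

# Hypothetical hours of face-mask NIV and whether a facial pressure sore developed
hours = [4, 6, 8, 10, 11, 12, 14, 16, 20, 24]
sore  = [0, 0, 0, 0,  1,  1,  0,  1,  1,  1]
print(youden_cutoff(hours, sore))  # here the best threshold happens to be 11 hrs
```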

  5. Total Face, Eyelids, Ears, Scalp, and Skeletal Subunit Transplant: A Reconstructive Solution for the Full Face and Total Scalp Burn.

    PubMed

    Sosin, Michael; Ceradini, Daniel J; Levine, Jamie P; Hazen, Alexes; Staffenberg, David A; Saadeh, Pierre B; Flores, Roberto L; Sweeney, Nicole G; Bernstein, G Leslie; Rodriguez, Eduardo D

    2016-07-01

    Reconstruction of extensive facial and scalp burns can be increasingly challenging, especially in patients that have undergone multiple procedures with less than ideal outcomes resulting in restricting neck and oral contractures, eyelid dysfunction, and suboptimal aesthetic appearance. To establish a reconstructive solution for this challenging deformity, a multidisciplinary team was assembled to develop the foundation to a facial vascularized composite allotransplantation program. The strategy of developing and executing a clinical transplant was derived on the basis of fostering a cohesive and supportive institutional clinical environment, implementing computer software and advanced technology, establishing a cadaveric transplant model, performing a research facial procurement, and selecting an optimal candidate with the aforementioned burn defect who was well informed and had the desire to undergo face transplantation. Approval from the institutional review board and organ procurement organization enabled our face transplant team to successfully perform a total face, eyelids, ears, scalp, and skeletal subunit transplant in a 41-year-old man with a full face and total scalp burn. The culmination of knowledge attained from previous experiences continues to influence the progression of facial vascularized composite allotransplantation. This surgical endeavor methodically and effectively synchronized the fundamental principles of aesthetic, craniofacial, and microvascular surgery to restore appearance and function to a patient suffering from failed conventional surgery for full face and total scalp burns. This procedure represents the most extensive soft-tissue clinical face transplant performed to date. Therapeutic, V.

  6. Correlations between impairment, psychological distress, disability, and quality of life in peripheral facial palsy.

    PubMed

    Díaz-Aristizabal, U; Valdés-Vilches, M; Fernández-Ferreras, T R; Calero-Muñoz, E; Bienzobas-Allué, E; Moracén-Naranjo, T

    2017-05-23

    This paper analyses the correlations between scores on scales assessing impairment, psychological distress, disability, and quality of life in patients with peripheral facial palsy (PFP). We conducted a retrospective cross-sectional study including 30 patients in whom PFP had not resolved completely. We used tools for assessing impairment (Sunnybrook Facial Grading System [FGS]), psychological distress (Hospital Anxiety and Depression Scale [HADS]), disability (Facial Disability Index [FDI]), and quality of life (Facial Clinimetric Evaluation [FaCE] scale). We found no correlations between FGS and HADS scores, or between FGS and FDI social function scores. However, we did find a correlation between FGS and FDI physical function scores (r=0.54; P<.01), FDI total score (r=0.4; P<.05), FaCE total scores (ρ=0.66; P<.01), and FaCE social function scores (ρ=0.5; P<.01). We also observed a correlation between HADS Anxiety scores and FDI physical function (r=-0.47; P<.01), FDI social function (r=-0.47; P<.01), FDI total (r=-0.55; P<.01), FaCE total (ρ=-0.49; P<.01), and FaCE social scores (ρ=-0.46; P<.05). Significant correlations were also found between HADS Depression scores and FDI physical function (r=-0.61; P<.01), FDI social function (r=-0.53; P<.01), FDI total (r=-0.66; P<.01), FaCE total (ρ=-0.67; P<.01), and FaCE social scores (ρ=-0.68; P<.01), between FDI physical function scores and FaCE total scores (ρ=0.87; P<.01) and FaCE social function (ρ=0.74; P<.01), between FDI social function and FaCE total (ρ=0.66; P<.01) and FaCE social function scores (ρ=0.72; P<.01), and between FDI total scores and FaCE total (ρ=0.87; P<.01) and FaCE social function scores (ρ=0.84; P<.01). In our sample, patients with more severe impairment displayed greater physical and global disability and poorer quality of life without significantly higher levels of social disability and psychological distress. 
Patients with more disability experienced greater psychological distress and had a poorer quality of life. Lastly, patients with more psychological distress also had a poorer quality of life. Copyright © 2017 Sociedad Española de Neurología. Published by Elsevier España, S.L.U. All rights reserved.
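
    The ρ values above are Spearman rank correlations. A minimal sketch of the rank-based computation (ties ignored for simplicity), with invented FGS and FaCE scores:

```python
def spearman_rho(x, y):
    """Spearman rank correlation via the classic formula 1 - 6*sum(d^2)/(n(n^2-1)).
    Assumes no tied values, which keeps the ranking step trivial."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r

    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical Sunnybrook (FGS) impairment scores and FaCE total scores
fgs  = [30, 45, 55, 70, 85]
face = [20, 50, 40, 75, 90]
print(spearman_rho(fgs, face))
```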

  7. Dissociation between recognition and detection advantage for facial expressions: a meta-analysis.

    PubMed

    Nummenmaa, Lauri; Calvo, Manuel G

    2015-04-01

    Happy facial expressions are recognized faster and more accurately than other expressions in categorization tasks, whereas detection in visual search tasks is widely believed to be faster for angry than happy faces. We used meta-analytic techniques for resolving this categorization versus detection advantage discrepancy for positive versus negative facial expressions. Effect sizes were computed on the basis of the r statistic for a total of 34 recognition studies with 3,561 participants and 37 visual search studies with 2,455 participants, yielding a total of 41 effect sizes for recognition accuracy, 25 for recognition speed, and 125 for visual search speed. Random effects meta-analysis was conducted to estimate effect sizes at population level. For recognition tasks, an advantage in recognition accuracy and speed for happy expressions was found for all stimulus types. In contrast, for visual search tasks, moderator analysis revealed that a happy face detection advantage was restricted to photographic faces, whereas a clear angry face advantage was found for schematic and "smiley" faces. A robust detection advantage for nonhappy faces was observed even when stimulus emotionality was distorted by inversion or rearrangement of the facial features, suggesting that visual features primarily drive the search. We conclude that the recognition advantage for happy faces is a genuine phenomenon related to processing of facial expression category and affective valence. In contrast, detection advantages toward either happy (photographic stimuli) or nonhappy (schematic) faces are contingent on visual stimulus features rather than facial expression, and may not involve categorical or affective processing. (c) 2015 APA, all rights reserved.
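
    Random-effects pooling of r-based effect sizes is typically done on the Fisher-z scale with DerSimonian-Laird estimation of between-study variance; the abstract does not name its estimator, so that choice (and all data below) is an assumption:

```python
import math

def fisher_z(r):
    """Fisher z-transform of a correlation coefficient."""
    return 0.5 * math.log((1 + r) / (1 - r))

def random_effects_mean_r(rs, ns):
    """DerSimonian-Laird random-effects pooling of correlations on the
    Fisher-z scale, back-transformed to r."""
    zs = [fisher_z(r) for r in rs]
    vs = [1 / (n - 3) for n in ns]            # within-study variance of z
    w = [1 / v for v in vs]
    zbar = sum(wi * zi for wi, zi in zip(w, zs)) / sum(w)
    q = sum(wi * (zi - zbar) ** 2 for wi, zi in zip(w, zs))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(rs) - 1)) / c)  # between-study variance
    wstar = [1 / (v + tau2) for v in vs]
    zre = sum(wi * zi for wi, zi in zip(wstar, zs)) / sum(wstar)
    return math.tanh(zre)                     # back-transform to r

# Hypothetical per-study effect sizes (r) and sample sizes
print(round(random_effects_mean_r([0.30, 0.45, 0.25, 0.50], [80, 120, 60, 100]), 2))
```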

  8. Influence of using a single facial vein as outflow in full-face transplantation: A three-dimensional computed tomographic study.

    PubMed

    Rodriguez-Lorenzo, Andres; Audolfsson, Thorir; Wong, Corrine; Cheng, Angela; Arbique, Gary; Nowinski, Daniel; Rozen, Shai

    2015-10-01

    The aim of this study was to evaluate the contribution of a single unilateral facial vein in the venous outflow of total-face allograft using three-dimensional computed tomographic imaging techniques to further elucidate the mechanisms of venous complications following total-face transplant. Full-face soft-tissue flaps were harvested from fresh adult human cadavers. A single facial vein was identified and injected distally to the submandibular gland with a radiopaque contrast (barium sulfate/gelatin mixture) in every specimen. Following vascular injections, three-dimensional computed tomographic venographies of the faces were performed. Images were viewed using TeraRecon Software (Teracon, Inc., San Mateo, CA, USA) allowing analysis of the venous anatomy and perfusion in different facial subunits by observing radiopaque filling venous patterns. Three-dimensional computed tomographic venographies demonstrated a venous network with different degrees of perfusion in subunits of the face in relation to the facial vein injection side: 100% of ipsilateral and contralateral forehead units, 100% of ipsilateral and 75% of contralateral periorbital units, 100% of ipsilateral and 25% of contralateral cheek units, 100% of ipsilateral and 75% of contralateral nose units, 100% of ipsilateral and 75% of contralateral upper lip units, 100% of ipsilateral and 25% of contralateral lower lip units, and 50% of ipsilateral and 25% of contralateral chin units. Venographies of the full-face grafts revealed better perfusion in the ipsilateral hemifaces from the facial vein in comparison with the contralateral hemifaces. Reduced perfusion was observed mostly in the contralateral cheek unit and contralateral lower face including the lower lip and chin units. Copyright © 2015 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.

  9. From partial to full-face transplantation: total ablation and restoration, a change in the reconstructive paradigm.

    PubMed

    Barret, Juan P

    2014-01-01

    The innovation of composite vascularized allotransplantation has provided plastic and reconstructive surgeons with the ultimate tool for those patients that present with facial deformities that cannot be reconstructed with classical or more traditional techniques. Transplanting normal tissues allows for a true restorative surgery. Initial experiences included the substitution of missing anatomy, whereas after the first world's full-face transplant performed in Barcelona in March 2010, a true ablative surgery with a total restoration proved to be effective. We review the world's experience and the performance of our restorative protocol to depict this change in the reconstructive paradigm of facial transplantation. Facial transplants should be performed after a careful analysis of the defect, with a comprehensive ablation plan following esthetic units with sacrifice of all required tissues with a focus of global restoration of anatomy, aesthetics and function, respecting normal functioning muscles. Nowadays, facial transplants following strict esthetic units should restore disfigurement extending to small central areas, whereas major defects may require a total ablation and restoration with full-face transplants. Copyright © 2013 Surgical Associates Ltd. Published by Elsevier Ltd. All rights reserved.

  10. Face inversion decreased information about facial identity and expression in face-responsive neurons in macaque area TE.

    PubMed

    Sugase-Miyamoto, Yasuko; Matsumoto, Narihisa; Ohyama, Kaoru; Kawano, Kenji

    2014-09-10

    To investigate the effect of face inversion and thatcherization (eye inversion) on temporal processing stages of facial information, single neuron activities in the temporal cortex (area TE) of two rhesus monkeys were recorded. Test stimuli were colored pictures of monkey faces (four with four different expressions), human faces (three with four different expressions), and geometric shapes. Modifications were made in each face-picture, and its four variations were used as stimuli: upright original, inverted original, upright thatcherized, and inverted thatcherized faces. A total of 119 neurons responded to at least one of the upright original facial stimuli. A majority of the neurons (71%) showed activity modulations depending on upright and inverted presentations, and fewer neurons (13%) showed activity modulations depending on original and thatcherized face conditions. In the case of face inversion, information about the fine category (facial identity and expression) decreased, whereas information about the global category (monkey vs human vs shape) was retained for both the original and thatcherized faces. Principal component analysis on the neuronal population responses revealed that the global categorization occurred regardless of the face inversion and that the inverted faces were represented near the upright faces in the principal component analysis space. By contrast, the face inversion decreased the ability to represent human facial identity and monkey facial expression. Thus, the neuronal population represented inverted faces as faces but failed to represent the identity and expression of the inverted faces, indicating that the neuronal representation in area TE causes the perceptual effect of face inversion. Copyright © 2014 the authors.
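
    The population analysis above projects neuronal response vectors onto principal components. A minimal stand-in for that step, extracting the first principal component by power iteration on invented response data (not the study's recordings):

```python
import math

def first_pc(data):
    """First principal component of mean-centered rows, via power iteration
    on the sample covariance matrix."""
    n, d = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(d)]
    x = [[row[j] - means[j] for j in range(d)] for row in data]
    cov = [[sum(x[i][a] * x[i][b] for i in range(n)) / (n - 1)
            for b in range(d)] for a in range(d)]
    v = [1.0] * d
    for _ in range(200):                      # power iteration
        w = [sum(cov[a][b] * v[b] for b in range(d)) for a in range(d)]
        norm = math.sqrt(sum(c * c for c in w))
        v = [c / norm for c in w]
    return v

# Hypothetical response vectors whose variance is dominated by the first axis
data = [[2.0, 0.1], [4.0, -0.1], [6.0, 0.2], [8.0, 0.0]]
pc1 = first_pc(data)
print(pc1)  # close to the first axis, i.e. roughly [1.0, 0.0] up to sign
```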

  11. Treatment outcome of bimaxillary surgery for asymmetric skeletal class II deformity.

    PubMed

    Chen, Yun-Fang; Liao, Yu-Fang; Chen, Yin-An; Chen, Yu-Ray

    2018-05-04

    Facial asymmetry is one of the main concerns in patients with a dentofacial deformity. The aims of the study were to (1) evaluate the changes in facial asymmetry after bimaxillary surgery for asymmetric skeletal class II deformity and (2) compare preoperative and postoperative facial asymmetry of class II patients with normal controls. Facial asymmetry was assessed for 30 adults (21 women and 9 men, mean age: 29.3 years) who consecutively underwent bimaxillary surgery for asymmetric skeletal class II deformity using cone-beam computed tomography before and at least 6 months after surgery. Thirty soft tissue and two dental landmarks were identified on each three-dimensional facial image, and the asymmetry index of each landmark was calculated. Results were compared with those of 30 normal control subjects (21 women and 9 men, mean age: 26.2 years) with skeletal class I structure. Six months after surgery, the asymmetry index of the lower face and total face decreased significantly (17.8 ± 29.4 and 16.6 ± 29.5 mm, respectively, both p < 0.01), whereas the asymmetry index of the middle face increased significantly (1.2 ± 2.2 mm, p < 0.01). Postoperatively, 53% of the class II patients had residual chin asymmetry. The postoperative total face asymmetry index was positively correlated with the preoperative asymmetry index (r = 0.37, p < 0.05). Bimaxillary surgery for patients with asymmetric class II deformity resulted in a significant improvement in lower face asymmetry. However, approximately 50% of the patients still had residual chin asymmetry. The total face postoperative asymmetry was moderately related to the initial severity of asymmetry. These findings could help clinicians better understand orthognathic outcomes on different facial regions for patients with asymmetric class II deformity.
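
    One common CBCT definition of a paired landmark's asymmetry index is the distance between the right-side landmark and the left-side landmark mirrored across the midsagittal plane; the paper's exact formula may differ, and the coordinates below are invented:

```python
import math

def asymmetry_index(right_lm, left_lm):
    """Asymmetry index of a paired landmark: distance (mm) between the
    right-side landmark and the left-side landmark mirrored across the
    midsagittal plane, taken here as x = 0."""
    mirrored = (-left_lm[0], left_lm[1], left_lm[2])
    return math.dist(right_lm, mirrored)

# Hypothetical 3-D coordinates (mm) of paired cheek landmarks
right = (42.0, 10.0, -5.0)
left  = (-40.0, 12.0, -4.0)
print(asymmetry_index(right, left))
```

    For an unpaired midline landmark such as the chin point, the analogous measure is simply its absolute distance from the midsagittal plane.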

  12. Quality of life differences in patients with right- versus left-sided facial paralysis: Universal preference of right-sided human face recognition.

    PubMed

    Ryu, Nam Gyu; Lim, Byung Woo; Cho, Jae Keun; Kim, Jin

    2016-09-01

    We investigated in a preliminary study whether experiencing right- or left-sided facial paralysis would affect an individual's ability to recognize one side of the human face, using hybrid hemi-facial photographs. Further investigation looked at the relationship between facial recognition ability, stress, and quality of life. To investigate the predominance of one side of the human face in face recognition, 100 normal participants (right-handed: n = 97, left-handed: n = 3, right brain dominance: n = 56, left brain dominance: n = 44) answered a questionnaire that included hybrid hemi-facial photographs developed to determine which side dominates human face recognition. To determine differences in stress level and quality of life between individuals experiencing right- and left-sided facial paralysis, 100 patients (right side: 50, left side: 50; traumatic facial nerve paralysis excluded) completed the facial disability index test and a quality-of-life questionnaire (SF-36, Korean version). Regardless of handedness or hemispheric dominance, the proportion favoring the right side in human face recognition was larger than that favoring the left side (71% versus 12%; neutral: 17%). The facial disability index of patients with right-sided facial paralysis was lower than that of left-sided patients (68.8 ± 9.42 versus 76.4 ± 8.28), and the SF-36 scores of right-sided patients were lower than those of left-sided patients (119.07 ± 15.24 versus 123.25 ± 16.48; total score: 166). Consistent with the universal preference for the right side in human face recognition, patients with right-sided facial paralysis showed worse psychological mood and social interaction than those with left-sided paralysis. This information is helpful to clinicians in that psychological and social factors should be considered when treating patients with facial paralysis. Copyright © 2016 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.

  13. Analysis of facial motion patterns during speech using a matrix factorization algorithm

    PubMed Central

    Lucero, Jorge C.; Munhall, Kevin G.

    2008-01-01

    This paper presents an analysis of facial motion during speech to identify linearly independent kinematic regions. The data consists of three-dimensional displacement records of a set of markers located on a subject’s face while producing speech. A QR factorization with column pivoting algorithm selects a subset of markers with independent motion patterns. The subset is used as a basis to fit the motion of the other facial markers, which determines facial regions of influence of each of the linearly independent markers. Those regions constitute kinematic “eigenregions” whose combined motion produces the total motion of the face. Facial animations may be generated by driving the independent markers with collected displacement records. PMID:19062866
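
    QR factorization with column pivoting selects, at each step, the column with the largest residual after projecting out the columns already chosen. The marker-selection step can be sketched as that greedy procedure (modified Gram-Schmidt) on invented displacement records:

```python
import math

def select_independent_markers(cols, k):
    """Greedy column selection equivalent to QR with column pivoting:
    repeatedly pick the column with the largest residual norm, then
    deflate all columns by the chosen direction (modified Gram-Schmidt)."""
    resid = [list(c) for c in cols]
    chosen = []
    for _ in range(k):
        j = max(range(len(cols)), key=lambda i: sum(v * v for v in resid[i]))
        chosen.append(j)
        norm = math.sqrt(sum(v * v for v in resid[j]))
        q = [v / norm for v in resid[j]]
        for i in range(len(cols)):            # remove the chosen direction
            dot = sum(a * b for a, b in zip(resid[i], q))
            resid[i] = [a - dot * b for a, b in zip(resid[i], q)]
    return chosen

# Hypothetical displacement records: each list is one marker's motion over time
markers = [
    [1.0, 2.0, 3.0, 4.0],    # marker 0
    [2.0, 4.0, 6.0, 8.0],    # marker 1: same pattern as marker 0 (dependent)
    [1.0, 0.0, -1.0, 0.0],   # marker 2: independent pattern
]
print(select_independent_markers(markers, 2))  # picks markers 1 and 2
```

    The remaining markers' motion can then be fitted as linear combinations of the chosen columns, which is what defines the "eigenregions" described above.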

  14. Rating Nasolabial Aesthetics in Unilateral Cleft Lip and Palate Patients: Cropped Versus Full-Face Images.

    PubMed

    Schwirtz, Roderic M F; Mulder, Frans J; Mosmuller, David G M; Tan, Robin A; Maal, Thomas J; Prahl, Charlotte; de Vet, Henrica C W; Don Griot, J Peter W

    2018-05-01

    To determine if cropping facial images affects nasolabial aesthetics assessments in unilateral cleft lip patients and to evaluate the effect of facial attractiveness on nasolabial evaluation. Two cleft surgeons and one cleft orthodontist assessed standardized frontal photographs 4 times; nasolabial aesthetics were rated on cropped and full-face images using the Cleft Aesthetic Rating Scale (CARS), and total facial attractiveness was rated on full-face images with and without the nasolabial area blurred using a 5-point Likert scale. Setting: Cleft Palate Craniofacial Unit of a University Medical Center. Inclusion criteria: nonsyndromic unilateral cleft lip and an available frontal view photograph around 10 years of age. Exclusion criteria: a history of facial trauma and an incomplete cleft. Eighty-one photographs were available for assessment. Differences in mean CARS scores between cropped versus full-face photographs and attractive versus unattractive rated patients were evaluated by paired t test. Nasolabial aesthetics were scored more negatively on full-face photographs than on cropped photographs, regardless of facial attractiveness (mean CARS score, nose: cropped = 2.8, full-face = 3.0, P < .001; lip: cropped = 2.4, full-face = 2.7, P < .001; nose and lip: cropped = 2.6, full-face = 2.8, P < .001). Aesthetic outcomes of the nasolabial area are assessed significantly more positively on cropped images than on full-face images. For this reason, cropping images to reveal the nasolabial area only is recommended for aesthetic assessments.
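
    The cropped-versus-full-face comparison is a paired t test on per-patient score differences. A minimal sketch with invented CARS scores (the study's ratings are not in the abstract):

```python
import math

def paired_t(x, y):
    """Paired t statistic for matched score lists: mean difference divided
    by the standard error of the differences."""
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    mean = sum(d) / n
    var = sum((v - mean) ** 2 for v in d) / (n - 1)   # sample variance
    return mean / math.sqrt(var / n)

# Hypothetical CARS nose scores (lower = better) for eight patients
cropped   = [2, 3, 2, 3, 2, 3, 3, 2]
full_face = [3, 3, 3, 3, 2, 4, 3, 3]
print(round(paired_t(cropped, full_face), 2))  # negative: cropped rated better
```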

  15. First U.S. near-total human face transplantation: a paradigm shift for massive complex injuries.

    PubMed

    Siemionow, Maria Z; Papay, Frank; Djohan, Risal; Bernard, Steven; Gordon, Chad R; Alam, Daniel; Hendrickson, Mark; Lohman, Robert; Eghtesad, Bijan; Fung, John

    2010-01-01

    Severe complex facial injuries are difficult to reconstruct and require multiple surgical procedures. The potential of performing complex craniofacial reconstruction in one surgical procedure is appealing, and composite face allograft transplantation may be considered an alternative option. The authors describe establishment of the Cleveland Clinic face transplantation program that led them to perform the first U.S. near-total face transplantation. In November of 2004, the authors received the world's first institutional review board approval to perform a face transplant in humans. In December of 2008, after a 22-hour operation, the authors performed the first near-total face transplantation in the United States, replacing 80 percent of the patient's traumatic facial deficit with a composite allograft from a brain-dead donor. This largest, and most complex, face allograft in the world included over 535 cm2 of facial skin; functional units of full nose with nasal lining and bony skeleton; lower eyelids and upper lip; underlying muscles and bones, including orbital floor, zygoma, maxilla, alveolus with teeth, hard palate, and parotid glands; and pertinent nerves, arteries, and veins. Immunosuppressive treatment consisted of thymoglobulin, tacrolimus, mycophenolate mofetil, and prednisone. The patient tolerated the procedure and immunosuppression well. At day 47 after transplantation, routine biopsy showed rejection of the graft mucosa without clinical evidence of skin or graft rejection. The patient's physical and psychological recovery went well. The functional outcome has been excellent, including optimal return of breathing through the nose, smelling, tasting, speaking, drinking from a cup, and eating solid foods. The functional outcome thus far at 8 months is rewarding and confirms the feasibility of performing complex reconstruction of severely disfigured patients in a single surgical procedure of facial allotransplantation.

  16. Self-Esteem and Facial Attractiveness in Learning Disabled Children.

    ERIC Educational Resources Information Center

    Cooper, Patricia S.

    1993-01-01

    A total of 55 learning-disabled children ages 8 to 13 years completed a self-esteem measure, and photographs of their faces were rated for attractiveness by adults and peers. Found relationships between children's facial attractiveness and self-esteem and between adult and peer ratings of facial attractiveness. Found no gender differences in…

  17. The Development of Perceptual Sensitivity to Second-Order Facial Relations in Children

    ERIC Educational Resources Information Center

    Baudouin, Jean-Yves; Gallay, Mathieu; Durand, Karine; Robichon, Fabrice

    2010-01-01

    This study investigated children's perceptual ability to process second-order facial relations. In total, 78 children in three age groups (7, 9, and 11 years) and 28 adults were asked to say whether the eyes were the same distance apart in two side-by-side faces. The two faces were similar on all points except the space between the eyes, which was…

  18. Toward DNA-based facial composites: preliminary results and validation.

    PubMed

    Claes, Peter; Hill, Harold; Shriver, Mark D

    2014-11-01

    The potential of constructing useful DNA-based facial composites is forensically of great interest. Given the significant identity information coded in the human face, these predictions could help investigations out of an impasse. Although there is substantial evidence that much of the total variation in facial features is genetically mediated, the discovery of which genes and gene variants underlie normal facial variation has been hampered primarily by the multipartite nature of facial variation. Traditionally, such physical complexity is simplified by simple scalar measurements defined a priori, such as nose or mouth width, or alternatively using dimensionality reduction techniques such as principal component analysis, where each principal coordinate is then treated as a scalar trait. However, as shown in previous and related work, a more impartial and systematic approach to modeling facial morphology is available and can facilitate both the gene discovery steps, as we recently showed, and DNA-based facial composite construction, as we show here. We first use genomic ancestry and sex to create a base-face, which is simply an average sex and ancestry matched face. Subsequently, the effects of 24 individual SNPs that have been shown to have significant effects on facial variation are overlaid on the base-face forming the predicted-face in a process akin to a photomontage or image blending. We next evaluate the accuracy of predicted faces using cross-validation. Physical accuracy of the facial predictions either locally in particular parts of the face or in terms of overall similarity is mainly determined by sex and genomic ancestry. The SNP-effects maintain the physical accuracy while significantly increasing the distinctiveness of the facial predictions, which would be expected to reduce false positives in perceptual identification tasks. 
To the best of our knowledge, this is the first effort at generating facial composites from DNA, and the results are preliminary but certainly promising, especially considering the limited amount of genetic information about the face contained in these 24 SNPs. This approach can incorporate additional SNPs as they are discovered and their effects documented. In this context, we discuss three main avenues of research: expanding our knowledge of the genetic architecture of facial morphology, improving the predictive modeling of facial morphology by exploring and incorporating alternative prediction models, and increasing the value of the results through the weighted encoding of physical measurements in terms of human perception of faces. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
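
The additive base-face-plus-SNP-effects scheme described above can be sketched in a few lines. This is an illustrative toy model, not the authors' implementation: faces are reduced to flat landmark-coordinate vectors, and the function names, effect vectors, and dosages are hypothetical.

```python
# Toy sketch of the prediction scheme: a base-face is the mean landmark
# vector of sex- and ancestry-matched faces, and per-SNP effect vectors
# are overlaid weighted by the individual's allele dosage (0, 1, or 2).
# All values here are hypothetical.

def base_face(matched_faces):
    """Average each landmark coordinate over the matched sample."""
    n = len(matched_faces)
    dim = len(matched_faces[0])
    return [sum(f[i] for f in matched_faces) / n for i in range(dim)]

def predicted_face(base, snp_effects, genotypes):
    """Overlay SNP effect vectors on the base-face.

    snp_effects: per-SNP displacement vectors (same length as base)
    genotypes:   allele dosages (0, 1 or 2) for the corresponding SNPs
    """
    face = list(base)
    for effect, dosage in zip(snp_effects, genotypes):
        for i, d in enumerate(effect):
            face[i] += dosage * d
    return face

# Two faces of three coordinates; one SNP shifting coordinate 0.
faces = [[1.0, 2.0, 3.0], [3.0, 2.0, 1.0]]
base = base_face(faces)
pred = predicted_face(base, [[0.5, 0.0, 0.0]], [2])
print(base, pred)   # [2.0, 2.0, 2.0] [3.0, 2.0, 2.0]
```

In the actual study the face is a dense 3D representation and the SNP effects are fitted model coefficients, but the overlay step is additive in the same way.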

  19. Marquardt’s Facial Golden Decagon Mask and Its Fitness with South Indian Facial Traits

    PubMed Central

    Gandikota, Chandra Sekhar; Yadagiri, Poornima K; Manne, Ranjit; Juvvadi, Shubhaker Rao; Farah, Tamkeen; Vattipelli, Shilpa; Gumbelli, Sangeetha

    2016-01-01

    Introduction The mathematical ratio of 1:1.618, famously known as the golden ratio, seems to appear recurrently in beautiful things in nature as well as in other things that are seen as beautiful. Dr. Marquardt developed a facial golden mask that incorporates all of the one-dimensional and two-dimensional geometric golden elements formed from the golden ratio, and he claimed that beauty is universal and that beautiful faces conform to the facial golden mask regardless of sex and race. Aim The purpose of this study was to evaluate the goodness of fit of the golden facial mask with South Indian facial traits. Materials and Methods A total of 150 subjects (75 males and 75 females) with attractive faces were selected according to cephalometric orthodontic standards of a skeletal class I relation. Facial aesthetics was confirmed by evaluation of frontal photographs of the subjects by a panel of ten evaluators comprising five orthodontists and five maxillofacial surgeons. The well-proportioned photographs were superimposed with the golden mask along the reference lines to evaluate the goodness of fit. Results South Indian males and females invariably showed a wider inter-zygomatic and inter-gonial width than the golden mask. Most South Indian females and males showed decreased mid-facial height compared with the golden mask, while the total facial height was more or less equal to that of the golden mask. Conclusion Ethnic or individual discrepancies cannot be totally ignored: in our study the mask did not fit the South Indian facial traits exactly, but the beauty ratios came close to those of the mask. To overcome this difficulty, there is a need to develop variants of the golden facial mask for different ethnic groups. PMID:27190951
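
As a small illustration of the ratio underlying the mask, the deviation of a measured facial proportion from the golden ratio can be computed directly; the measurements below are hypothetical, not values from the study.

```python
# Minimal sketch: how far a measured facial proportion deviates from the
# golden ratio of approximately 1.618 that underlies Marquardt's mask.
GOLDEN = (1 + 5 ** 0.5) / 2   # ≈ 1.6180339887

def ratio_deviation(longer, shorter):
    """Percentage deviation of a measured proportion from the golden ratio."""
    return abs(longer / shorter - GOLDEN) / GOLDEN * 100

# e.g. total facial height vs. inter-zygomatic width (hypothetical, mm)
dev = ratio_deviation(185.0, 118.0)
print(round(dev, 1))   # 3.1
```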

  20. Use of 3-dimensional surface acquisition to study facial morphology in 5 populations.

    PubMed

    Kau, Chung How; Richmond, Stephen; Zhurov, Alexei; Ovsenik, Maja; Tawfik, Wael; Borbely, Peter; English, Jeryl D

    2010-04-01

    The aim of this study was to assess the use of 3-dimensional facial averages for determining morphologic differences among various population groups. We recruited 473 subjects from 5 populations. Three-dimensional images of the subjects were obtained in a reproducible and controlled environment with a commercially available stereo-photogrammetric camera capture system. Minolta VI-900 (Konica Minolta, Tokyo, Japan) and 3dMDface (3dMD LLC, Atlanta, Ga) systems were used. Each image was obtained as a facial mesh and orientated along a triangulated axis. All faces were overlaid, one on top of the other, and a complex mathematical algorithm was performed until average composite faces of 1 man and 1 woman were achieved for each subgroup. These average facial composites were superimposed based on a previously validated superimposition method, and the facial differences were quantified. Distinct facial differences were observed among the groups. The linear differences between surface shells ranged from 0.37 to 1.00 mm for the male groups. The linear differences ranged from 0.28 to 0.87 mm for the women. The color histograms showed that the similarities in facial shells between the subgroups by sex ranged from 26.70% to 70.39% for men and 36.09% to 79.83% for women. The average linear distance from the signed color histograms for the male subgroups ranged from -6.30 to 4.44 mm. The female subgroups ranged from -6.32 to 4.25 mm. Average faces can be efficiently and effectively created from a sample of 3-dimensional faces. Average faces can be used to compare differences in facial morphologies for various populations and sexes. Facial morphologic differences were greatest when distinctly different ethnic groups were compared. Facial morphologic similarities were present in comparable groups, but there were large variations in concentrated areas of the face. Copyright 2010 American Association of Orthodontists. Published by Mosby, Inc. All rights reserved.
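
The averaging and comparison steps can be sketched as follows, assuming meshes are already registered so that vertices correspond. The coordinates are toy values; the real pipeline operates on dense facial meshes with a validated superimposition method.

```python
# Rough sketch of the averaging idea: once facial meshes share a common
# orientation and vertex correspondence, an average face is the
# per-vertex mean, and two averages are compared by per-vertex distance.
import math

def average_mesh(meshes):
    """Per-vertex mean of registered meshes (lists of (x, y, z) tuples)."""
    n = len(meshes)
    return [tuple(sum(m[k][i] for m in meshes) / n for i in range(3))
            for k in range(len(meshes[0]))]

def vertex_distances(mesh_a, mesh_b):
    """Per-vertex linear difference between two superimposed meshes."""
    return [math.dist(a, b) for a, b in zip(mesh_a, mesh_b)]

group1 = [[(0, 0, 0), (1, 0, 0)], [(0, 0, 2), (1, 0, 2)]]
group2 = [[(0, 1, 1), (1, 1, 1)]]
avg1, avg2 = average_mesh(group1), average_mesh(group2)
print(vertex_distances(avg1, avg2))   # [1.0, 1.0]
```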

  1. Photo-anthropometric study on face among Garo adult females of Bangladesh.

    PubMed

    Akhter, Z; Banu, M L A; Alam, M M; Hossain, S; Nazneen, M

    2013-08-01

    Facial anthropometry has well-known implications in health-related fields. Measurement of the human face is used in personal identification in forensic medicine, plastic surgery, orthodontics, archaeology, hair-style design and the examination of differences between races and ethnicities. Facial anthropometry provides an indication of the variation in facial shape in a specified population. Bangladesh harbours many cultures and people of different races because of the colonial rule of past regimes. Standards based on ethnic or racial data are desirable because these standards reflect the potentially different patterns of craniofacial growth resulting from racial, ethnic and sexual differences. In this context, the present study attempted to establish ethnicity-specific anthropometric data for the Christian Garo adult females of Bangladesh. The study was observational, cross-sectional and primarily descriptive in nature, with some analytical components, and was carried out with a total of 100 Christian Garo adult females aged 25-45 years. Three vertical facial dimensions, namely facial height from 'trichion' to 'gnathion', nasal length and total vermilion height, were measured by the photographic method. Though these measurements were taken photographically, they were converted into actual size using a physically measured variable, the distance between the two angles of the mouth (chilion to chilion). The data were then statistically analyzed to determine normative values. The study also examined the correlation of facial height from 'trichion' to 'gnathion' with nasal length and total vermilion height. Multiplication factors were derived for estimating facial height from nasal length and total vermilion height. Comparisons were made between 'estimated' and 'measured' values using the 't' test. 
The mean (+/- SD) nasal length and total vermilion height were 4.53 +/- 0.36 cm and 1.63 +/- 0.23 cm respectively, and the mean (+/- SD) facial height from 'trichion' to 'gnathion' was 16.88 +/- 1.11 cm. Nasal length and total vermilion height also showed a significant positive correlation with facial height from 'trichion' to 'gnathion'. No significant difference was found between the 'measured' and 'estimated' facial height from 'trichion' to 'gnathion' for either nasal length or total vermilion height.
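
The multiplication-factor estimation described above amounts to scaling a predictor by the ratio of sample means. A minimal sketch, using the reported sample means and an otherwise hypothetical nasal length:

```python
# Sketch of the "multiplication factor" method: the factor is the mean
# facial height divided by the mean predictor (nasal length or vermilion
# height), and an individual's facial height is estimated as
# factor × predictor.

def multiplication_factor(facial_heights, predictor_values):
    return (sum(facial_heights) / len(facial_heights)) / \
           (sum(predictor_values) / len(predictor_values))

# Reported sample means (cm): facial height 16.88, nasal length 4.53
factor = 16.88 / 4.53
estimated_height = factor * 4.8   # hypothetical nasal length of 4.8 cm
print(round(factor, 2), round(estimated_height, 2))   # 3.73 17.89
```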

  2. Three-dimensional evaluation of the relationship between jaw divergence and facial soft tissue dimensions.

    PubMed

    Rongo, Roberto; Antoun, Joseph Saswat; Lim, Yi Xin; Dias, George; Valletta, Rosa; Farella, Mauro

    2014-09-01

    To evaluate the relationship between mandibular divergence and the vertical and transverse dimensions of the face. A sample was recruited from the orthodontic clinic of the University of Otago, New Zealand. The recruited participants (N = 60) were assigned to three groups based on the mandibular plane angle (hyperdivergent, n = 20; normodivergent, n = 20; and hypodivergent, n = 20). The sample consisted of 31 females and 29 males, with a mean age of 21.1 years (SD 5.0). Facial scans were recorded for each participant using a three-dimensional (3D) white-light scanner and then merged to form a single 3D image of the face. Vertical and transverse measurements of the face were assessed from the 3D facial image. The hyperdivergent group had a significantly larger total and lower anterior facial height than the other two groups (P < .05), although no difference was found for the middle facial height (P > .05). Similarly, there were no significant differences in the transverse measurements of the three study groups (P > .05). Both gender and body mass index (BMI) had a greater influence on the transverse dimensions. Hyperdivergent facial types are associated with a long face but not necessarily a narrow face. Variations in facial soft tissue vertical and transverse dimensions are more likely to be due to gender. Body mass index has a role in mandibular width (GoGo) assessment.

  3. The Oval Female Facial Shape--A Study in Beauty.

    PubMed

    Goodman, Greg J

    2015-12-01

    Our understanding of who is beautiful seems to be innate but has been argued to conform to mathematical principles and proportions. One aspect of beauty is facial shape, which is gender specific. In women, an oval facial shape is considered attractive. To study the facial shape of beautiful actors, pageant title winners, and performers across ethnicities and different time periods, and to construct an ideal oval shape based on the average of their facial shape dimensions. Twenty-one full-face photographs of purportedly beautiful female actors, performers, and pageant winners were analyzed and an oval constructed from their facial parameters. Only 3 of the 21 faces were totally symmetrical; most were larger in the left upper and lower face. The average oval was subsequently constructed from an average bizygomatic distance (horizontal parameter) of 4.3 times the intercanthal distance (ICD) and a vertical dimension that averaged 6.3 times the ICD. This average oval could be fitted to many of the individual subjects, showing a smooth flow from the forehead through the temples, cheeks, jaw angle, jawline, and chin, with all these facial aspects abutting the oval. Where they did not abut, treatment might have improved these faces.
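
The reported construction, an oval whose width is 4.3 × ICD and height 6.3 × ICD, can be sketched as an ellipse; the ICD value and point count below are hypothetical.

```python
# Sketch of constructing the average oval reported above: an ellipse
# whose horizontal axis is 4.3 × ICD and vertical axis 6.3 × ICD.
import math

def ideal_oval(icd, n_points=8):
    """Points on the ellipse with width 4.3*ICD and height 6.3*ICD."""
    a, b = 4.3 * icd / 2, 6.3 * icd / 2   # semi-axes
    return [(a * math.cos(2 * math.pi * k / n_points),
             b * math.sin(2 * math.pi * k / n_points))
            for k in range(n_points)]

pts = ideal_oval(32.0)   # hypothetical ICD of 32 mm
print(pts[0])            # rightmost point: (68.8, 0.0)
```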

  4. The assessment of facial variation in 4747 British school children.

    PubMed

    Toma, Arshed M; Zhurov, Alexei I; Playle, Rebecca; Marshall, David; Rosin, Paul L; Richmond, Stephen

    2012-12-01

    The aim of this study is to identify key components contributing to facial variation in a large population-based sample of 15.5-year-old children (2514 females and 2233 males). The subjects were recruited from the Avon Longitudinal Study of Parents and Children. Three-dimensional facial images were obtained for each subject using two high-resolution Konica Minolta laser scanners. Twenty-one reproducible facial landmarks were identified and their coordinates were recorded. The facial images were registered using Procrustes analysis. Principal component analysis was then employed to identify independent groups of correlated coordinates. For the total data set, 14 principal components (PCs) were identified which explained 82 per cent of the total variance, with the first three components accounting for 46 per cent of the variance. Similar results were obtained for males and females separately with only subtle gender differences in some PCs. Facial features may be treated as a multidimensional statistical continuum with respect to the PCs. The first three PCs characterize the face in terms of height, width, and prominence of the nose. The derived PCs may be useful to identify and classify faces according to a scale of normality.
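
The variance-decomposition idea behind the PCA step can be illustrated on two variables, where the 2 × 2 covariance matrix has a closed-form eigendecomposition; real landmark analyses involve dozens of coordinates and many PCs. The data below are hypothetical.

```python
# Toy sketch of the PCA step: after registration, landmark coordinates
# become variables, and PCA yields orthogonal components ordered by the
# share of total variance they explain. Shown here for two variables.
import math

def pca_2d_variance_explained(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs) / (n - 1)
    syy = sum((y - my) ** 2 for y in ys) / (n - 1)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)
    # eigenvalues of the 2x2 covariance matrix [[sxx, sxy], [sxy, syy]]
    tr, det = sxx + syy, sxx * syy - sxy ** 2
    root = math.sqrt(tr ** 2 / 4 - det)
    l1, l2 = tr / 2 + root, tr / 2 - root
    return l1 / (l1 + l2), l2 / (l1 + l2)   # variance share of PC1, PC2

pc1, pc2 = pca_2d_variance_explained([1.0, 2.0, 3.0, 4.0],
                                     [1.1, 1.9, 3.2, 3.8])
print(round(pc1, 3))   # PC1 share ≈ 0.995 for near-collinear data
```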

  5. Interference among the Processing of Facial Emotion, Face Race, and Face Gender.

    PubMed

    Li, Yongna; Tse, Chi-Shing

    2016-01-01

    People can process multiple dimensions of facial properties simultaneously. Facial processing models are based on the processing of facial properties. The current study examined the processing of facial emotion, face race, and face gender using categorization tasks. The same set of Chinese, White and Black faces, each posing a neutral, happy or angry expression, was used in three experiments. Facial emotion interacted with face race in all the tasks. The interaction of face race and face gender was found in the race and gender categorization tasks, whereas the interaction of facial emotion and face gender was significant in the emotion and gender categorization tasks. These results provided evidence for a symmetric interaction between variant facial properties (emotion) and invariant facial properties (race and gender).

  6. Interference among the Processing of Facial Emotion, Face Race, and Face Gender

    PubMed Central

    Li, Yongna; Tse, Chi-Shing

    2016-01-01

    People can process multiple dimensions of facial properties simultaneously. Facial processing models are based on the processing of facial properties. The current study examined the processing of facial emotion, face race, and face gender using categorization tasks. The same set of Chinese, White and Black faces, each posing a neutral, happy or angry expression, was used in three experiments. Facial emotion interacted with face race in all the tasks. The interaction of face race and face gender was found in the race and gender categorization tasks, whereas the interaction of facial emotion and face gender was significant in the emotion and gender categorization tasks. These results provided evidence for a symmetric interaction between variant facial properties (emotion) and invariant facial properties (race and gender). PMID:27840621

  7. Dermabrasion and staged excision of facial lesions in a neurofibromatosis case for improvement of facial appearance.

    PubMed

    Karabekmez, Furkan Erol; Duymaz, Ahmet; Karacor, Zeynep

    2013-01-01

    Neurofibromatosis may present with different skin lesions. Disfiguring lesions on the face can be challenging for the surgeon or clinician to correct and may have adverse effects on patients' social lives, especially in young women. To present the dermabrasion technique combined with serial excisions of a deeper accompanying lesion to treat superficial facial lesions in a young neurofibromatosis patient. Dermabrasion was applied to superficial lesions on the face, and staged excision was applied to the deeper lesion located on the forehead. We obtained high patient satisfaction with the result. The deep lesion was excised totally, and the superficial lesions were reduced by dermabrasion. Dermabrasion may be a good alternative in cases of neurofibromatosis with superficial facial lesions.

  8. Vertical facial height and its correlation with facial width and depth: Three dimensional cone beam computed tomography evaluation based on dry skulls.

    PubMed

    Wang, Ming Feng; Otsuka, Takero; Akimoto, Susumu; Sato, Sadao

    2013-01-01

    The aim of the present study was to evaluate how vertical facial height correlates with mandibular plane angle, facial width and facial depth from a three-dimensional (3D) viewing angle. In this study, 3D cephalometric landmarks were identified and measured on 43 randomly selected cone beam computed tomography (CBCT) images of dry skulls from the Weisbach collection of the Vienna Natural History Museum. Pearson correlation coefficients were calculated between facial height measurements and mandibular plane angle, and between height and width and height and depth, respectively. The mandibular plane angle (MP-SN) correlated significantly with ramus height (Co-Go) and posterior facial height (PFH) but not with anterior lower face height (ALFH) or anterior total face height (ATFH). The ALFH and ATFH showed significant correlation with anterior cranial base length (S-N), whereas PFH showed significant correlation with the anteroposterior position of the mandible (S-B) and maxilla (S-A). A high or low mandibular plane angle might not necessarily be accompanied by a long or short anterior face height, respectively. The PFH, rather than the AFH, is assumed to play a key role in the vertical facial type, whereas the AFH seems to undergo relatively intrinsic growth.
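
A minimal sketch of the Pearson correlation computation used throughout the study; the measurement pairs below are hypothetical, not data from the skull sample.

```python
# Pearson's r between two paired measurement series, e.g. posterior
# facial height vs. mandibular plane angle (hypothetical values).
import math

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

pfh = [78.0, 82.0, 85.0, 90.0]      # posterior facial height (mm), hypothetical
mp  = [38.0, 36.0, 30.0, 29.0]      # mandibular plane angle (deg), hypothetical
print(round(pearson_r(pfh, mp), 2)) # -0.93
```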

  9. Effects of facial attractiveness on personality stimuli in an implicit priming task: an ERP study.

    PubMed

    Zhang, Yan; Zheng, Minxiao; Wang, Xiaoying

    2016-08-01

    Using event-related potentials (ERPs) in a priming paradigm, this study examined implicit priming in the association of personality words with facial attractiveness. A total of 16 participants (8 males and 8 females; age range, 19-24 years; mean age, 21.30 years) were asked to judge the color (red or green) of positive or negative personality words after exposure to priming stimuli (attractive and unattractive facial images). Positive personality words primed by attractive faces or negative personality words primed by unattractive faces were defined as congruent trials, whereas positive personality words primed by unattractive faces or negative personality words primed by attractive faces were defined as incongruent trials. Behavioral results showed that, compared with trials primed by unattractive faces, trials with attractive faces as the priming stimuli had longer reaction times and higher accuracy rates. Moreover, a more negative ERP deflection (N2 component) was observed in the incongruent condition than in the congruent condition. In addition, personality words presented after attractive faces elicited larger amplitudes from the frontal region to the central region (P2 and P350-550 ms) than personality words presented after unattractive faces. The study provides evidence for the facial attractiveness stereotype ('What is beautiful is good') through an implicit priming task.

  10. Facial Resurfacing With Coblation Technology

    PubMed Central

    Weber, Stephen M.; Downs, Brian W.; Ferraz, Mario B.J.; Wang, Tom D.; Cook, Ted A.

    2008-01-01

    Objective To describe our experience with coblation technology for facial resurfacing. Methods Retrospective chart review of all patients treated with coblation at our institution. Results Twenty-four patients (22 female) underwent a total of 29 coblation procedures for aging face (n = 21) or acne scarring (n = 3). The perioral region was the most frequently treated aesthetic subunit (n = 14), followed by the lower eyelid (n = 7). Five patients underwent full-face coblation. Three patients underwent a second coblation procedure for aging face while a single patient with severe acne scarring underwent 3 procedures. Repeat coblation was delayed at least 5 months (mean, 9 months). Seventeen coblation procedures (59%) were performed concurrently with procedures including, but not limited to, injection treatment, rhinoplasty, blepharoplasty, or combined face/necklift; no adverse events occurred. Seven procedures, including a full-face coblation, were performed in the office under local anesthesia and oral sedation without any adverse events. Mean follow-up was 6 months (range, 1 week to 24 months). No complications were observed. All patients were satisfied with the results after their final coblation treatment. Conclusions Facial coblation is a safe and effective treatment modality for facial resurfacing. PMID:18769690

  11. [How five-year-old children distribute rewards: effects of the amount of reward and a crying face].

    PubMed

    Tsutsu, Kiyomi

    2013-10-01

    Five-year-old children were presented with two scenes in which one character made three stars and the other made nine stars. In one of the scenes, both characters' facial expressions were neutral (neutral face scene), and in the other scene the character who produced three stars had a crying face (crying face scene). Children distributed different numbers of rewards to the two characters: equal to (Middle-N), less than (Small-N), or more than (Large-N) the total number of stars in each scene. Then the children were asked for their reason after they distributed the rewards. It was found that (a) the participants' distributions depended on the total number of rewards but (b) not on the characters' facial expressions, and (c) the justifications of their distributions in the Middle-N condition were different between the scenes. These results suggest that the total number of rewards triggers an automatic distribution process, and that an ex post facto justification takes place when needed.

  12. Visual attention during the evaluation of facial attractiveness is influenced by facial angles and smile.

    PubMed

    Kim, Seol Hee; Hwang, Soonshin; Hong, Yeon-Ju; Kim, Jae-Jin; Kim, Kyung-Ho; Chung, Chooryung J

    2018-05-01

    To examine the changes in visual attention influenced by facial angles and smile during the evaluation of facial attractiveness. Thirty-three young adults were asked to rate the overall facial attractiveness (tasks 1 and 3) or to select the most attractive face (task 2) by looking at multiple panel stimuli consisting of 0°, 15°, 30°, 45°, 60°, and 90° rotated facial photos with or without a smile for three model face photos and a self-photo (self-face). Eye gaze and fixation time (FT) were monitored by the eye-tracking device during the performance. Participants were asked to fill out a subjective questionnaire asking, "Which face was primarily looked at when evaluating facial attractiveness?" When rating the overall facial attractiveness (task 1) for model faces, FT was highest for the 0° face and lowest for the 90° face regardless of the smile (P < .01). However, when the most attractive face was to be selected (task 2), the FT of the 0° face decreased, while it significantly increased for the 45° face (P < .001). When facial attractiveness was evaluated with the simplified panels combined with facial angles and smile (task 3), the FT of the 0° smiling face was the highest (P < .01). While most participants reported that they looked mainly at the 0° smiling face when rating facial attractiveness, visual attention was broadly distributed within facial angles. Laterally rotated faces and the presence of a smile highly influence visual attention during the evaluation of facial esthetics.

  13. The Facial Platysma and Its Underappreciated Role in Lower Face Dynamics and Contour.

    PubMed

    de Almeida, Ada R T; Romiti, Alessandra; Carruthers, Jean D A

    2017-08-01

    The platysma is a superficial muscle involved in important features of the aging neck. Vertical bands, horizontal lines, and loss of lower face contour are effectively treated with botulinum toxin A (BoNT-A). However, its pars facialis, mandibularis, and modiolaris have been underappreciated. To demonstrate the role of BoNT-A treatment of the upper platysma and its impact on lower face dynamics and contour. Retrospective analysis of cases treated by an injection pattern encompassing the facial platysma components, aiming to block the lower face as a whole complex. It consisted of 2 intramuscular injections into the mentalis muscle and 2 horizontal lines of BoNT-A injections superficially performed above and below the mandible (total dose, 16 onabotulinumtoxinA U/side). Photographs were taken at rest and during motion (frontal and oblique views), before and after treatment. A total of 161 patients have been treated in the last 2 years with the following results: frontal and lateral enhancement of lower facial contour, relaxation of high horizontal lines located just below the lateral mandibular border, and lower deep vertical smile lines present lateral to the oral commissures and melomental folds. The upper platysma muscle plays a relevant role in the functional anatomy of the lower face that can be modulated safely with neuromodulators.

  14. Head-and-face anthropometric survey of Chinese workers.

    PubMed

    Du, Lili; Zhuang, Ziqing; Guan, Hongyu; Xing, Jingcai; Tang, Xianzhi; Wang, Limin; Wang, Zhenglun; Wang, Haijiao; Liu, Yuewei; Su, Wenjin; Benson, Stacey; Gallagher, Sean; Viscusi, Dennis; Chen, Weihong

    2008-11-01

    Millions of workers in China rely on respirators and other personal protective equipment to reduce the risk of injury and occupational diseases. However, it has been >25 years since the first survey of facial dimensions for Chinese adults was published, and it has never been completely updated. Thus, an anthropometric survey of Chinese civilian workers was conducted in 2006. A total of 3000 subjects (2026 males and 974 females) between the ages of 18 and 66 years were measured using traditional techniques. Nineteen facial dimensions, height, weight, neck circumference, waist circumference and hip circumference were measured. A stratified sampling plan of three age strata and two gender strata was implemented. Linear regression analysis was used to evaluate the possible effects of gender, age, occupation and body size on facial dimensions. The regression coefficients for gender indicated that for all anthropometric dimensions, males had significantly larger measurements than females. As body mass index increased, the dimensions measured increased significantly. Construction workers and miners had significantly smaller measurements than individuals employed in healthcare or manufacturing for a majority of dimensions. Five representative indexes of facial dimension (face length, face width, nose protrusion, bigonial breadth and nasal root breadth) were selected based on correlation and cluster analysis of all dimensions. Through comparison with the facial dimensions of American subjects, this study indicated that Chinese civilian workers have shorter face length, smaller nose protrusion, larger face width and longer lip length.
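
The regression step can be sketched as ordinary least squares of a facial dimension on a single predictor such as BMI (the survey fitted multiple predictors); the data points below are hypothetical and chosen to lie exactly on a line.

```python
# Sketch of the regression idea: a facial dimension modelled as a linear
# function of one predictor, fit by ordinary least squares.

def ols(xs, ys):
    """Slope and intercept of the least-squares line y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    return a, my - a * mx

bmi        = [19.0, 22.0, 25.0, 28.0]       # hypothetical predictor
face_width = [135.0, 138.0, 141.0, 144.0]   # mm, hypothetical response
slope, intercept = ols(bmi, face_width)
print(slope, intercept)   # 1.0 116.0
```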

  15. Prevalence of long face pattern in Brazilian individuals of different ethnic backgrounds

    PubMed Central

    CARDOSO, Mauricio de Almeida; de CASTRO, Renata Cristina Faria Ribeiro; LI AN, Tien; NORMANDO, David; GARIB, Daniela Gamba; CAPELOZZA FILHO, Leopoldino

    2013-01-01

    Objective: The long face pattern is a facial deformity with increased anterior total facial height due to vertical excess of the lower facial third. Individuals with long face may present different degrees of severity in vertical excess, as well as malocclusions that are difficult to manage. The categorization of vertical excess is useful to determine the treatment prognosis. This survey assessed the distribution of ethnically different individuals with vertical excess according to three levels of severity and determined the prevalence of long face pattern. Material and Methods: The survey comprised 5,020 Brazilian individuals (2,480 females and 2,540 males) enrolled in middle schools in Bauru-SP, Brazil. The criterion for inclusion of individuals with vertically impaired facial relationships was based on lip incompetence, evaluated under natural light, in standing natural head position with the lips at rest. Once identified, the individuals were classified into three subtypes according to the severity: mild, moderate, and severe. Then the pooled sample was distributed according to ethnic background as White (Caucasoid), Black (African descent), Brown (mixed descent), Yellow (Asian descent) and Brazilian Indian (Brazilian native descent). The Chi-square (χ²) test was used (p<0.05) to compare the frequency ratios of individuals with vertically impaired facial relationships in the total sample and among different ethnicities, according to the three levels of severity. Results: The severe subtype was rare, except in Black individuals (7.32%), who also presented the highest relative frequency (45.53%) of the moderate subtype, followed by Brown individuals (43.40%). In the mild subtype, Yellow (68.08%) and White individuals (62.21%) showed similar and higher relative frequency values. Conclusions: Black individuals had greater prevalence of long face pattern, followed by Brown, White and Yellow individuals. 
The prevalence of long face pattern was 14.06% in which 13.39% and 0.68% belonged to moderate and severe subtypes, respectively. PMID:23739865

  16. System for face recognition under expression variations of neutral-sampled individuals using recognized expression warping and a virtual expression-face database

    NASA Astrophysics Data System (ADS)

    Petpairote, Chayanut; Madarasmi, Suthep; Chamnongthai, Kosin

    2018-01-01

    The practical identification of individuals using facial recognition techniques requires the matching of faces with specific expressions to faces from a neutral-face database. A method is proposed for facial recognition under varied expressions against neutral face samples of individuals, via recognized-expression warping and the use of a virtual expression-face database. In this method, facial expressions are recognized and the input expression faces are classified into facial expression groups. To aid facial recognition, the virtual expression-face database is sorted into average facial-expression shapes and by coarse- and fine-featured facial textures. Wrinkle information is also employed in classification, using a masking process to adjust input faces to match the expression-face database. We evaluate the performance of the proposed method using the CMU Multi-PIE, Cohn-Kanade, and AR expression-face databases, and find that it provides significantly improved face recognition accuracy compared to conventional methods and is acceptable for facial recognition under expression variation.

  17. Deficits in Facial Emotion Recognition in Schizophrenia: A Replication Study with Korean Subjects

    PubMed Central

    Lee, Seung Jae; Lee, Hae-Kook; Kweon, Yong-Sil; Lee, Chung Tai

    2010-01-01

    Objective We investigated the deficit in the recognition of facial emotions in a sample of medicated, stable Korean patients with schizophrenia using Korean facial emotion pictures and examined whether the possible impairments would corroborate previous findings. Methods Fifty-five patients with schizophrenia and 62 healthy control subjects completed the Facial Affect Identification Test with a new set of 44 colored photographs of Korean faces including the six universal emotions as well as neutral faces. Results Korean patients with schizophrenia showed impairments in the recognition of sad, fearful, and angry faces [F(1,114)=6.26, p=0.014; F(1,114)=6.18, p=0.014; F(1,114)=9.28, p=0.003, respectively], but their accuracy was no different from that of controls in the recognition of happy emotions. Higher total and three subscale scores of the Positive and Negative Syndrome Scale (PANSS) correlated with worse performance on both angry and neutral faces. Correct responses on happy stimuli were negatively correlated with negative symptom scores of the PANSS. Patients with schizophrenia also exhibited different patterns of misidentification relative to normal controls. Conclusion These findings were consistent with previous studies carried out with different ethnic groups, suggesting cross-cultural similarities in facial recognition impairment in schizophrenia. PMID:21253414

  18. The effect of the buccal corridor and tooth display on smile attractiveness.

    PubMed

    Niaki, Esfandiar Akhavan; Arab, Sepideh; Shamshiri, Ahmadreza; Imani, Mohammad Moslem

    2015-11-01

    The aim of the present study was to evaluate the lay perception of the effect of the buccal corridor and the amount of tooth-gingival display on the attractiveness of a smile in different facial types. Using Adobe Photoshop CS3 software, frontal facial images of two smiling Iranian female subjects (one short-faced and one long-faced) were altered to create different magnitudes of buccal corridor display (5, 10, 15, 20 and 25%) and tooth-gingival display (2 mm central incisor show, 6 mm central incisor show, total central incisor show, total tooth show with 2 mm gingival show and total tooth show with 4 mm gingival show). Sixty Iranians (30 males and 30 females) rated the attractiveness of the pictures on a 1-5 point scale. Narrower smiles were preferred in long-faced subjects compared with short-faced subjects. Minimal tooth show was more attractive than excessive gingival display in short-faced subjects. No statistically significant gender differences were found in the ratings given by the lay assessors. Harmonious geometry of the smile and face in both the vertical and transverse dimensions influences smile attractiveness, and this should be considered in orthodontic treatment planning.

  19. Comparison of Facial Proportions Between Beauty Pageant Contestants and Ordinary Young Women of Korean Ethnicity: A Three-Dimensional Photogrammetric Analysis.

    PubMed

    Kim, Sung-Chan; Kim, Hyung Bae; Jeong, Woo Shik; Koh, Kyung S; Huh, Chang Hun; Kim, Hee Jin; Lee, Woo Shun; Choi, Jong Woo

    2018-06-01

    Although the harmony of facial proportions is traditionally perceived as an important element of facial attractiveness, there have been few objective studies that have investigated this esthetic balance using three-dimensional photogrammetric analysis. To better understand why some women appear more beautiful, we investigated differences in facial proportions between beauty pageant contestants and ordinary young women of Korean ethnicity using three-dimensional (3D) photogrammetric analyses. A total of 43 prize-winning beauty pageant contestants (group I) and 48 ordinary young women (group II) of Korean ethnicity were photographed using 3D photography. Numerous soft tissue landmarks were identified, and 3D photogrammetric analyses were performed to evaluate 13 absolute lengths, 5 angles, 3 volumetric proportions, and 12 length proportions between soft tissue landmarks. Group I had a greater absolute length of the middle face, nose height, and eye height and width; a smaller absolute length of the lower face, intercanthal width, and nasal width; a larger nasolabial angle; a greater proportion of the upper and middle facial volume, nasal height, and eye height and width; and a lower proportion of the lower facial volume, lower face height, intercanthal width, nasal width, and mouth width. All these differences were statistically significant. These results indicate that there are significant differences between the faces of beauty pageant contestants and ordinary young women, and help elucidate which factors contribute to facial beauty. The group I mean values could be used as reference values for attractive facial profiles. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266 .

  20. Features versus context: An approach for precise and detailed detection and delineation of faces and facial features.

    PubMed

    Ding, Liya; Martinez, Aleix M

    2010-11-01

    The appearance-based approach to face detection has seen great advances in the last several years. In this approach, we learn the image statistics describing the texture pattern (appearance) of the object class we want to detect, e.g., the face. However, this approach has had limited success in providing an accurate and detailed description of the internal facial features, i.e., eyes, brows, nose, and mouth. In general, this is due to the limited information carried by the learned statistical model. While the face template is relatively rich in texture, facial features (e.g., eyes, nose, and mouth) do not carry enough discriminative information to tell them apart from all possible background images. We resolve this problem by adding the context information of each facial feature in the design of the statistical model. In the proposed approach, the context information defines the image statistics most correlated with the surroundings of each facial component. This means that when we search for a face or facial feature, we look for those locations which most resemble the feature yet are most dissimilar to its context. This dissimilarity with the context features forces the detector to gravitate toward an accurate estimate of the position of the facial feature. Learning to discriminate between feature and context templates is difficult, however, because the context and the texture of the facial features vary widely under changing expression, pose, and illumination, and may even resemble one another. We address this problem with the use of subclass divisions. We derive two algorithms to automatically divide the training samples of each facial feature into a set of subclasses, each representing a distinct construction of the same facial component (e.g., closed versus open eyes) or its context (e.g., different hairstyles). The first algorithm is based on a discriminant analysis formulation. The second algorithm is an extension of the AdaBoost approach. We provide extensive experimental results using still images and video sequences for a total of 3,930 images. We show that the results are almost as good as those obtained with manual detection.
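
    The feature-versus-context idea in this record (prefer locations that resemble the feature template while being dissimilar to its context template) can be sketched as follows. This is a toy illustration with hypothetical cosine-similarity templates, not the authors' learned statistical models or their subclass-division algorithms.

```python
def cosine(a, b):
    # Cosine similarity between two flattened image patches (toy vectors here).
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb)

def feature_context_score(patch, feature_template, context_template):
    # High when the patch resembles the feature but is dissimilar to its context.
    return cosine(patch, feature_template) - cosine(patch, context_template)

def detect(candidates, feature_template, context_template):
    # Return the (label, patch) candidate with the best feature-minus-context score.
    return max(candidates, key=lambda c: feature_context_score(
        c[1], feature_template, context_template))
```

    In the paper, the templates are learned statistical models divided into subclasses; here they are fixed vectors purely for illustration.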

  1. Facial contrast is a cue for perceiving health from the face.

    PubMed

    Russell, Richard; Porcheron, Aurélie; Sweda, Jennifer R; Jones, Alex L; Mauger, Emmanuelle; Morizot, Frederique

    2016-09-01

    How healthy someone appears has important social consequences. Yet the visual cues that determine perceived health remain poorly understood. Here we report evidence that facial contrast-the luminance and color contrast between internal facial features and the surrounding skin-is a cue for the perception of health from the face. Facial contrast was measured from a large sample of Caucasian female faces, and was found to predict ratings of perceived health. Most aspects of facial contrast were positively related to perceived health, meaning that faces with higher facial contrast appeared healthier. In 2 subsequent experiments, we manipulated facial contrast and found that participants perceived faces with increased facial contrast as appearing healthier than faces with decreased facial contrast. These results support the idea that facial contrast is a cue for perceived health. This finding adds to the growing knowledge about perceived health from the face, and helps to ground our understanding of perceived health in terms of lower-level perceptual features such as contrast. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
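
    The kind of measure involved can be illustrated with a simple Michelson-style luminance contrast between a facial feature and the surrounding skin. This is a hedged sketch with made-up values; the study itself measured both luminance and color contrast from face photographs.

```python
def michelson_contrast(feature_lum, skin_lum):
    # Michelson-style contrast between a facial feature's mean luminance and
    # the surrounding skin's mean luminance; larger values mean the feature
    # stands out more against the skin.
    return abs(skin_lum - feature_lum) / (skin_lum + feature_lum)
```

    Under the study's finding, a face whose brows or lips yield a higher contrast value against the skin would tend to be rated as healthier looking.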

  2. External facial features modify the representation of internal facial features in the fusiform face area.

    PubMed

    Axelrod, Vadim; Yovel, Galit

    2010-08-15

    Most studies of face identity have excluded external facial features by either removing them or covering them with a hat. However, external facial features may modify the representation of internal facial features. Here we assessed whether the representation of face identity in the fusiform face area (FFA), which has been primarily studied for internal facial features, is modified by differences in external facial features. We presented faces in which external and internal facial features were manipulated independently. Our findings show that the FFA was sensitive to differences in external facial features, but this effect was significantly larger when the external and internal features were aligned than misaligned. We conclude that the FFA generates a holistic representation in which the internal and the external facial features are integrated. These results indicate that to better understand real-life face recognition both external and internal features should be included. Copyright (c) 2010 Elsevier Inc. All rights reserved.

  3. Expanded Transposition Flap Technique for Total and Subtotal Resurfacing of the Face and Neck

    PubMed Central

    Spence, Robert J.

    2007-01-01

    Background: The reconstruction of major burn and other deformities resulting from significant soft tissue deficits of the face and neck is a continuing challenge for surgeons who wish to reliably restore facial function and aesthetic appearance. A primary problem is deficiency of well-matched donor skin. Other problems include the unique characteristics of facial skin, the fine anatomic nuances, and the unique functional demands placed on the face. This article describes an expanded shoulder transposition flap that can provide a large amount of both flap and full-thickness skin graft for total and subtotal reconstruction of the face. Methods: An expanded shoulder transposition flap has been used since 1986 for head and neck resurfacing 58 times in 41 patients ranging in age from 2 to 62 years. The details of the technique and the results of the flap including complications are described. Results: The flap proved remarkably reliable and reproducible in resurfacing the peripheral facial aesthetic units. The pedicle skin is often used for grafting of the central face with its finer features. The donor site of the flap is closed primarily. Conclusions: Twenty years' experience with the expanded transposition flap has shown it to be reliable and versatile in the reconstruction of major soft tissue deficits of the face and neck. It is a technique that provides economy of tissue and versatility, and is well within the skill, patience, and courage of most reconstructive surgeons. PMID:17534420

  4. Facial approximation-from facial reconstruction synonym to face prediction paradigm.

    PubMed

    Stephan, Carl N

    2015-05-01

    Facial approximation was first proposed as a synonym for facial reconstruction in 1987 due to dissatisfaction with the connotations the latter label held. Since its debut, facial approximation's identity has morphed as anomalies in face prediction have accumulated. Now underpinned by differences in what problems are thought to count as legitimate, facial approximation can no longer be considered a synonym for, or subclass of, facial reconstruction. Instead, two competing paradigms of face prediction have emerged, namely: facial approximation and facial reconstruction. This paper shines a Kuhnian lens across the discipline of face prediction to comprehensively review these developments and outlines the distinguishing features between the two paradigms. © 2015 American Academy of Forensic Sciences.

  5. Facial Scar Revision: Understanding Facial Scar Treatment

    MedlinePlus

    Trust your face to a facial plastic surgeon. A facial plastic surgeon has many options for treating and improving facial scars, including scars near prominent features of the face such as the eyes or lips.

  6. Facial Asymmetry-Based Age Group Estimation: Role in Recognizing Age-Separated Face Images.

    PubMed

    Sajid, Muhammad; Taj, Imtiaz Ahmad; Bajwa, Usama Ijaz; Ratyal, Naeem Iqbal

    2018-04-23

    Face recognition aims to establish the identity of a person based on facial characteristics. On the other hand, age group estimation is the automatic calculation of an individual's age range based on facial features. Recognizing age-separated face images is still a challenging research problem due to complex aging processes involving different types of facial tissues: skin, fat, muscles, and bones. Certain holistic and local facial features are used to recognize age-separated face images. However, most of the existing methods recognize face images without incorporating the knowledge learned from age group estimation. In this paper, we propose an age-assisted face recognition approach to handle aging variations. Inspired by the observation that facial asymmetry is an age-dependent intrinsic facial feature, we first use asymmetric facial dimensions to estimate the age group of a given face image. Deeply learned asymmetric facial features are then extracted for face recognition using a deep convolutional neural network (dCNN). Finally, we integrate the knowledge learned from the age group estimation into the face recognition algorithm using the same dCNN. This integration results in a significant improvement in the overall performance compared to using the face recognition algorithm alone. The experimental results on two large facial aging datasets, the MORPH and FERET sets, show that the proposed age group estimation-based face recognition approach yields superior performance compared to some existing state-of-the-art methods. © 2018 American Academy of Forensic Sciences.
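
    The use of asymmetric facial dimensions can be illustrated with a minimal sketch: reflect right-side landmarks across the vertical facial midline and measure how far each lands from its left-side counterpart. The landmark pairs, coordinates, and the index itself are hypothetical simplifications, not the authors' actual features.

```python
def asymmetry_index(left_pts, right_pts, midline_x):
    # Hypothetical facial-asymmetry measure: reflect each right-side landmark
    # across the vertical facial midline and average its distance to the
    # paired left-side landmark. A perfectly symmetric face scores 0.
    total = 0.0
    for (lx, ly), (rx, ry) in zip(left_pts, right_pts):
        mirrored_x = 2 * midline_x - rx  # mirror the right landmark
        total += ((lx - mirrored_x) ** 2 + (ly - ry) ** 2) ** 0.5
    return total / len(left_pts)
```

    A scalar like this (or a vector of per-landmark asymmetries) could then serve as an age-dependent input to an age group estimator, in the spirit of the approach described.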

  7. The Role of Gestures and Facial Cues in Second Language Listening Comprehension

    ERIC Educational Resources Information Center

    Sueyoshi, Ayano; Hardison, Debra M.

    2005-01-01

    This study investigated the contribution of gestures and facial cues to second-language learners' listening comprehension of a videotaped lecture by a native speaker of English. A total of 42 low-intermediate and advanced learners of English as a second language were randomly assigned to 3 stimulus conditions: AV-gesture-face audiovisual including…

  8. Plain faces are more expressive: comparative study of facial colour, mobility and musculature in primates

    PubMed Central

    Santana, Sharlene E.; Dobson, Seth D.; Diogo, Rui

    2014-01-01

    Facial colour patterns and facial expressions are among the most important phenotypic traits that primates use during social interactions. While colour patterns provide information about the sender's identity, expressions can communicate its behavioural intentions. Extrinsic factors, including social group size, have shaped the evolution of facial coloration and mobility, but intrinsic relationships and trade-offs likely operate in their evolution as well. We hypothesize that complex facial colour patterning could reduce how salient facial expressions appear to a receiver, and thus species with highly expressive faces would have evolved uniformly coloured faces. We test this hypothesis through a phylogenetic comparative study, and explore the underlying morphological factors of facial mobility. Supporting our hypothesis, we find that species with highly expressive faces have plain facial colour patterns. The number of facial muscles does not predict facial mobility; instead, species that are larger and have a larger facial nucleus have more expressive faces. This highlights a potential trade-off between facial mobility and colour patterning in primates and reveals complex relationships between facial features during primate evolution. PMID:24850898

  9. Effects of facial color on the subliminal processing of fearful faces.

    PubMed

    Nakajima, K; Minami, T; Nakauchi, S

    2015-12-03

    Recent studies have suggested that both configural information, such as face shape, and surface information is important for face perception. In particular, facial color is sufficiently suggestive of emotional states, as in the phrases: "flushed with anger" and "pale with fear." However, few studies have examined the relationship between facial color and emotional expression. On the other hand, event-related potential (ERP) studies have shown that emotional expressions, such as fear, are processed unconsciously. In this study, we examined how facial color modulated the supraliminal and subliminal processing of fearful faces. We recorded electroencephalograms while participants performed a facial emotion identification task involving masked target faces exhibiting facial expressions (fearful or neutral) and colors (natural or bluish). The results indicated that there was a significant interaction between facial expression and color for the latency of the N170 component. Subsequent analyses revealed that the bluish-colored faces increased the latency effect of facial expressions compared to the natural-colored faces, indicating that the bluish color modulated the processing of fearful expressions. We conclude that the unconscious processing of fearful faces is affected by facial color. Copyright © 2015 IBRO. Published by Elsevier Ltd. All rights reserved.

  10. Factors contributing to the adaptation aftereffects of facial expression.

    PubMed

    Butler, Andrea; Oruc, Ipek; Fox, Christopher J; Barton, Jason J S

    2008-01-29

    Previous studies have demonstrated the existence of adaptation aftereffects for facial expressions. Here we investigated which aspects of facial stimuli contribute to these aftereffects. In Experiment 1, we examined the role of local adaptation to image elements such as curvature, shape and orientation, independent of expression, by using hybrid faces constructed from either the same or opposing expressions. While hybrid faces made with consistent expressions generated aftereffects as large as those with normal faces, there were no aftereffects from hybrid faces made from different expressions, despite the fact that these contained the same local image elements. In Experiment 2, we examined the role of facial features independent of the normal face configuration by contrasting adaptation with whole faces to adaptation with scrambled faces. We found that scrambled faces also generated significant aftereffects, indicating that expressive features without a normal facial configuration could generate expression aftereffects. In Experiment 3, we examined the role of facial configuration by using schematic faces made from line elements that in isolation do not carry expression-related information (e.g. curved segments and straight lines) but that convey an expression when arranged in a normal facial configuration. We obtained a significant aftereffect for facial configurations but not scrambled configurations of these line elements. We conclude that facial expression aftereffects are not due to local adaptation to image elements but due to high-level adaptation of neural representations that involve both facial features and facial configuration.

  11. Does Facial Resemblance Enhance Cooperation?

    PubMed Central

    Giang, Trang; Bell, Raoul; Buchner, Axel

    2012-01-01

    Facial self-resemblance has been proposed to serve as a kinship cue that facilitates cooperation between kin. In the present study, facial resemblance was manipulated by morphing stimulus faces with the participants' own faces or control faces (resulting in self-resemblant or other-resemblant composite faces). A norming study showed that the perceived degree of kinship was higher for the participants and the self-resemblant composite faces than for actual first-degree relatives. Effects of facial self-resemblance on trust and cooperation were tested in a paradigm that has proven to be sensitive to facial trustworthiness, facial likability, and facial expression. First, participants played a cooperation game in which the composite faces were shown. Then, likability ratings were assessed. In a source memory test, participants were required to identify old and new faces, and were asked to remember whether the faces belonged to cooperators or cheaters in the cooperation game. Old-new recognition was enhanced for self-resemblant faces in comparison to other-resemblant faces. However, facial self-resemblance had no effects on the degree of cooperation in the cooperation game, on the emotional evaluation of the faces as reflected in the likability judgments, and on the expectation that a face belonged to a cooperator rather than to a cheater. Therefore, the present results are clearly inconsistent with the assumption of an evolved kin recognition module built into the human face recognition system. PMID:23094095

  12. On the facilitative effects of face motion on face recognition and its development

    PubMed Central

    Xiao, Naiqi G.; Perrotta, Steve; Quinn, Paul C.; Wang, Zhe; Sun, Yu-Hao P.; Lee, Kang

    2014-01-01

    For the past century, researchers have extensively studied human face processing and its development. These studies have advanced our understanding of not only face processing, but also visual processing in general. However, most of what we know about face processing was investigated using static face images as stimuli. Therefore, an important question arises: to what extent does our understanding of static face processing generalize to face processing in real-life contexts in which faces are mostly moving? The present article addresses this question by examining recent studies on moving face processing to uncover the influence of facial movements on face processing and its development. First, we describe evidence on the facilitative effects of facial movements on face recognition and two related theoretical hypotheses: the supplementary information hypothesis and the representation enhancement hypothesis. We then highlight several recent studies suggesting that facial movements optimize face processing by activating specific face processing strategies that accommodate to task requirements. Lastly, we review the influence of facial movements on the development of face processing in the first year of life. We focus on infants' sensitivity to facial movements and explore the facilitative effects of facial movements on infants' face recognition performance. We conclude by outlining several future directions to investigate moving face processing and emphasize the importance of including dynamic aspects of facial information to further understand face processing in real-life contexts. PMID:25009517

  13. Facial resurfacing with a monoblock full-thickness skin graft after multiple malignant melanomas excision in xeroderma pigmentosum.

    PubMed

    Ozmen, Selahattin; Uygur, Safak; Eryilmaz, Tolga; Ak, Betul

    2012-09-01

    Xeroderma pigmentosum is an autosomal recessive disease characterized by vulnerability of the skin to solar radiation. The increase in sunlight-induced cancer is a direct consequence of an increase in mutated cells in the skin of patients with xeroderma pigmentosum. There is no specific technique for facial resurfacing in patients with xeroderma pigmentosum. In this article, we present a patient with xeroderma pigmentosum who had multiple malignant melanomas on her face and underwent radical excision of the total facial skin followed by facial resurfacing with a monoblock full-thickness skin graft from the abdomen.

  14. Neurophysiology of spontaneous facial expressions: I. Motor control of the upper and lower face is behaviorally independent in adults.

    PubMed

    Ross, Elliott D; Gupta, Smita S; Adnan, Asif M; Holden, Thomas L; Havlicek, Joseph; Radhakrishnan, Sridhar

    2016-03-01

    Facial expressions are described traditionally as monolithic entities. However, humans have the capacity to produce facial blends, in which the upper and lower face simultaneously display different emotional expressions. This, in turn, has led to the Component Theory of facial expressions. Recent neuroanatomical studies in monkeys have demonstrated that there are separate cortical motor areas for controlling the upper and lower face that, presumably, also occur in humans. The lower face is represented on the posterior ventrolateral surface of the frontal lobes in the primary motor and premotor cortices and the upper face is represented on the medial surface of the posterior frontal lobes in the supplementary motor and anterior cingulate cortices. Our laboratory has been engaged in a series of studies exploring the perception and production of facial blends. Using high-speed videography, we began measuring the temporal aspects of facial expressions to develop a more complete understanding of the neurophysiology underlying facial expressions and facial blends. The goal of the research presented here was to determine if spontaneous facial expressions in adults are predominantly monolithic or exhibit independent motor control of the upper and lower face. We found that spontaneous facial expressions are very complex and that the motor control of the upper and lower face is overwhelmingly independent, thus robustly supporting the Component Theory of facial expressions. Seemingly monolithic expressions, be they full facial or facial blends, are most likely the result of a timing coincidence rather than a synchronous coordination between the ventrolateral and medial cortical motor areas responsible for controlling the lower and upper face, respectively. In addition, we found evidence that the right and left face may also exhibit independent motor control, thus supporting the concept that spontaneous facial expressions are organized predominantly across the horizontal facial axis and secondarily across the vertical axis. Published by Elsevier Ltd.

  15. Holistic processing of static and moving faces.

    PubMed

    Zhao, Mintao; Bülthoff, Isabelle

    2017-07-01

    Humans' face ability develops and matures with extensive experience in perceiving, recognizing, and interacting with faces that move most of the time. However, how facial movements affect 1 core aspect of face ability-holistic face processing-remains unclear. Here we investigated the influence of rigid facial motion on holistic and part-based face processing by manipulating the presence of facial motion during study and at test in a composite face task. The results showed that rigidly moving faces were processed as holistically as static faces (Experiment 1). Holistic processing of moving faces persisted whether facial motion was presented during study, at test, or both (Experiment 2). Moreover, when faces were inverted to eliminate the contributions of both an upright face template and observers' expertise with upright faces, rigid facial motion facilitated holistic face processing (Experiment 3). Thus, holistic processing represents a general principle of face perception that applies to both static and dynamic faces, rather than being limited to static faces. These results support an emerging view that both perceiver-based and face-based factors contribute to holistic face processing, and they offer new insights on what underlies holistic face processing, how the different sources of information supporting holistic face processing interact with each other, and why facial motion may affect face recognition and holistic face processing differently. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  16. Clinical outcomes of facial transplantation: a review.

    PubMed

    Shanmugarajah, Kumaran; Hettiaratchy, Shehan; Clarke, Alex; Butler, Peter E M

    2011-01-01

    A total of 18 composite tissue allotransplants of the face have currently been reported. Prior to the start of the face transplant programme, there had been intense debate over the risks and benefits of performing this experimental surgery. This review examines the surgical, functional and aesthetic, immunological and psychological outcomes of facial transplantation thus far, based on the predicted risks outlined in early publications from teams around the world. The initial experience has demonstrated that facial transplantation is surgically feasible. Functional and aesthetic outcomes have been very encouraging with good motor and sensory recovery and improvements to important facial functions observed. Episodes of acute rejection have been common, as predicted, but easily controlled with increases in systemic immunosuppression. Psychological improvements have been remarkable and have resulted in the reintegration of patients into the outside world, social networks and even the workplace. Complications of immunosuppression and patient mortality have been observed in the initial series. These have highlighted rigorous patient selection as the key predictor of success. The overall early outcomes of the face transplant programme have been generally more positive than many predicted. This initial success is testament to the robust approach of teams. Dissemination of outcomes and ongoing refinement of the process may allow facial transplantation to eventually become a first-line reconstructive option for those with extensive facial disfigurements. Copyright © 2011 Surgical Associates Ltd. Published by Elsevier Ltd. All rights reserved.

  17. Three-Dimensional Anthropometric Evaluation of Facial Morphology.

    PubMed

    Celebi, Ahmet Arif; Kau, Chung How; Ozaydin, Bunyamin

    2017-07-01

    The objectives of this study were to evaluate sexual dimorphism for facial features within Colombian and Mexican-American populations and to compare the facial morphology by sex between these 2 populations. Three-dimensional facial images were acquired by using the portable 3dMDface system, which captured 223 subjects from 2 population groups of Colombians (n = 131) and Mexican-Americans (n = 92). Each population was categorized into male and female groups for evaluation. All subjects in the groups were aged between 18 and 30 years and had no apparent facial anomalies. A total of 21 anthropometric landmarks were identified on the 3-dimensional faces of each subject. The independent t test was used to analyze each data set obtained within each subgroup. The Colombian males showed significantly greater outercanthal width, eye fissure length, and orbitale distance than the Colombian females. The Colombian females had significantly smaller lip and mouth measurements for all distances except upper vermillion height than the Colombian males. The Mexican-American females had significantly smaller measurements with regard to the nose than the Mexican-American males. Meanwhile, the heights of the face, the upper face, the lower face, and the mandible were all significantly less in the Mexican-American females. The intercanthal and outercanthal widths were significantly greater in the Mexican-American males and females. Meanwhile, the orbitale distance of both Mexican-American sexes was significantly smaller than that of the Colombian males and females. The Mexican-American group had significantly larger nose width and length of alare than the Colombian group for both sexes. Nasal tip protrusion and nose height were significantly smaller in the Colombian females than in the Mexican-American females. The face width was significantly greater in the Colombian males and females. Sexual dimorphism for facial features was present in both the Colombian and Mexican-American populations. In addition, there were significant differences in facial morphology between these 2 populations.

  18. Hierarchical Spatio-Temporal Probabilistic Graphical Model with Multiple Feature Fusion for Binary Facial Attribute Classification in Real-World Face Videos.

    PubMed

    Demirkus, Meltem; Precup, Doina; Clark, James J; Arbel, Tal

    2016-06-01

    Recent literature shows that facial attributes, i.e., contextual facial information, can be beneficial for improving the performance of real-world applications, such as face verification, face recognition, and image search. Examples of face attributes include gender, skin color, facial hair, etc. How to robustly obtain these facial attributes (traits) is still an open problem, especially in the presence of the challenges of real-world environments: non-uniform illumination conditions, arbitrary occlusions, motion blur and background clutter. What makes this problem even more difficult is the enormous variability presented by the same subject, due to arbitrary face scales, head poses, and facial expressions. In this paper, we focus on the problem of facial trait classification in real-world face videos. We have developed a fully automatic hierarchical and probabilistic framework that models the collective set of frame class distributions and feature spatial information over a video sequence. The experiments are conducted on a large real-world face video database that we have collected, labelled and made publicly available. The proposed method is flexible enough to be applied to any facial classification problem. Experiments on a large, real-world video database McGillFaces [1] of 18,000 video frames reveal that the proposed framework outperforms alternative approaches by up to 16.96% and 10.13% for the facial attributes of gender and facial hair, respectively.
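
    One simple way to combine per-frame class distributions into a video-level decision, loosely in the spirit of the framework described (the hierarchical graphical model and spatial feature fusion are omitted), is to sum log-probabilities across frames:

```python
import math

def fuse_frame_distributions(frame_probs):
    # Toy video-level fusion: sum log-probabilities over frames (equivalent
    # to a product of independent per-frame likelihoods) and return the
    # winning class label. Frames with confident, consistent distributions
    # dominate the decision; this is a sketch, not the paper's model.
    classes = frame_probs[0].keys()
    scores = {c: sum(math.log(p[c]) for p in frame_probs) for c in classes}
    return max(scores, key=scores.get)
```

    A single blurred or occluded frame then cannot flip the video-level label on its own, which is one motivation for modeling the collective set of frame distributions rather than classifying frames in isolation.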

  19. Changing perception: facial reanimation surgery improves attractiveness and decreases negative facial perception.

    PubMed

    Dey, Jacob K; Ishii, Masaru; Boahene, Kofi D O; Byrne, Patrick J; Ishii, Lisa E

    2014-01-01

    Determine the effect of facial reanimation surgery on observer-graded attractiveness and negative facial perception of patients with facial paralysis. Randomized controlled experiment. Ninety observers viewed images of paralyzed faces, smiling and in repose, before and after reanimation surgery, as well as normal comparison faces. Observers rated the attractiveness of each face and characterized the paralyzed faces by rating severity, disfigured/bothersome, and importance to repair. Iterated factor analysis indicated these highly correlated variables measure a common domain, so they were combined to create the disfigured, important to repair, bothersome, severity (DIBS) factor score. Mixed effects linear regression determined the effect of facial reanimation surgery on attractiveness and DIBS score. Facial paralysis induces an attractiveness penalty of 2.51 on a 10-point scale for faces in repose and 3.38 for smiling faces. Mixed effects linear regression showed that reanimation surgery improved attractiveness for faces both in repose and smiling by 0.84 (95% confidence interval [CI]: 0.67, 1.01) and 1.24 (95% CI: 1.07, 1.42) respectively. Planned hypothesis tests confirmed statistically significant differences in attractiveness ratings between postoperative and normal faces, indicating attractiveness was not completely normalized. Regression analysis also showed that reanimation surgery decreased DIBS by 0.807 (95% CI: 0.704, 0.911) for faces in repose and 0.989 (95% CI: 0.886, 1.093), an entire standard deviation, for smiling faces. Facial reanimation surgery increases attractiveness and decreases negative facial perception of patients with facial paralysis. These data emphasize the need to optimize reanimation surgery to restore not only function, but also symmetry and cosmesis to improve facial perception and patient quality of life. © 2013 The American Laryngological, Rhinological and Otological Society, Inc.
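
    The attractiveness penalty reported here is the drop in mean observer rating for paralyzed faces relative to normal faces. A minimal sketch with made-up ratings on the study's 10-point scale:

```python
def attractiveness_penalty(normal_ratings, paralyzed_ratings):
    # Illustrative "attractiveness penalty": the difference between the mean
    # observer rating of normal faces and of paralyzed faces. The rating
    # values below are hypothetical, not the study's data.
    mean = lambda xs: sum(xs) / len(xs)
    return mean(normal_ratings) - mean(paralyzed_ratings)
```

    The study estimated such differences with mixed effects linear regression (accounting for repeated ratings by the same observers) rather than a raw difference of means as shown here.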

  20. Regional Brain Responses Are Biased Toward Infant Facial Expressions Compared to Adult Facial Expressions in Nulliparous Women.

    PubMed

    Li, Bingbing; Cheng, Gang; Zhang, Dajun; Wei, Dongtao; Qiao, Lei; Wang, Xiangpeng; Che, Xianwei

    2016-01-01

    Recent neuroimaging studies suggest that neutral infant faces compared to neutral adult faces elicit greater activity in brain areas associated with face processing, attention, empathic response, reward, and movement. However, whether infant facial expressions evoke larger brain responses than adult facial expressions remains unclear. Here, we performed event-related functional magnetic resonance imaging in nulliparous women while they were presented with images of matched unfamiliar infant and adult facial expressions (happy, neutral, and uncomfortable/sad) in a pseudo-randomized order. We found that the bilateral fusiform and right lingual gyrus were overall more activated during the presentation of infant facial expressions compared to adult facial expressions. Uncomfortable infant faces compared to sad adult faces evoked greater activation in the bilateral fusiform gyrus, precentral gyrus, postcentral gyrus, posterior cingulate cortex-thalamus, and precuneus. Neutral infant faces activated larger brain responses in the left fusiform gyrus compared to neutral adult faces. Happy infant faces compared to happy adult faces elicited larger responses in areas of the brain associated with emotion and reward processing using a more liberal threshold of p < 0.005 uncorrected. Furthermore, the level of the test subjects' Interest-In-Infants was positively associated with the intensity of right fusiform gyrus response to infant faces and uncomfortable infant faces compared to sad adult faces. In addition, the Perspective Taking subscale score on the Interpersonal Reactivity Index-Chinese was significantly correlated with precuneus activity during uncomfortable infant faces compared to sad adult faces. Our findings suggest that regional brain areas may bias cognitive and emotional responses to infant facial expressions compared to adult facial expressions among nulliparous women, and this bias may be modulated by individual differences in Interest-In-Infants and perspective taking ability.

  2. iFER: facial expression recognition using automatically selected geometric eye and eyebrow features

    NASA Astrophysics Data System (ADS)

    Oztel, Ismail; Yolcu, Gozde; Oz, Cemil; Kazan, Serap; Bunyak, Filiz

    2018-03-01

    Facial expressions play an important role in interpersonal communication and in estimating emotional states or intentions. Automatic recognition of facial expressions has led to many practical applications and has become one of the important topics in computer vision. We present a facial expression recognition system that relies on geometry-based features extracted from the eye and eyebrow regions of the face. The proposed system detects keypoints on frontal face images and forms a feature set using geometric relationships among groups of detected keypoints. The obtained feature set is refined and reduced using the sequential forward selection (SFS) algorithm and fed to a support vector machine classifier to recognize five facial expression classes. The proposed system, iFER (eye-eyebrow only facial expression recognition), is robust to lower-face occlusions that may be caused by beards, mustaches, scarves, etc., and to lower-face motion during speech production. Preliminary experiments on benchmark datasets produced promising results: they outperform previous facial expression recognition studies that use partial face features, and are comparable to studies that use whole-face information, only slightly lower (by ~2.5%) than the best whole-face facial expression recognition system while using only ~1/3 of the facial region.
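    The greedy selection step described above can be illustrated with a minimal, self-contained sketch. Note this is not the authors' implementation: `toy_score` and its set of "informative" features are invented stand-ins for the cross-validated SVM accuracy a real system would use.

    ```python
    def sfs(n_features, score, k):
        """Greedy sequential forward selection: grow the subset one
        feature at a time, keeping the addition that maximizes score."""
        selected = []
        remaining = list(range(n_features))
        while len(selected) < k:
            best = max(remaining, key=lambda f: score(selected + [f]))
            selected.append(best)
            remaining.remove(best)
        return selected

    # Toy stand-in for cross-validated classifier accuracy: in this
    # made-up example, features 0, 2, and 5 are the informative ones,
    # with a small penalty for subset size.
    def toy_score(subset):
        return len(set(subset) & {0, 2, 5}) - 0.01 * len(subset)

    print(sfs(8, toy_score, 3))  # → [0, 2, 5]
    ```

    With a real feature matrix, the scoring function would wrap classifier training and validation, which is why SFS is expensive but effective at pruning redundant geometric features.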

  3. BMI and WHR Are Reflected in Female Facial Shape and Texture: A Geometric Morphometric Image Analysis.

    PubMed

    Mayer, Christine; Windhager, Sonja; Schaefer, Katrin; Mitteroecker, Philipp

    2017-01-01

    Facial markers of body composition are frequently studied in evolutionary psychology and are important in computational and forensic face recognition. We assessed the association of body mass index (BMI) and waist-to-hip ratio (WHR) with facial shape and texture (color pattern) in a sample of young Middle European women by a combination of geometric morphometrics and image analysis. Faces of women with high BMI had a wider and rounder facial outline relative to the size of the eyes and lips, and relatively lower eyebrows. Furthermore, women with high BMI had a brighter and more reddish skin color than women with lower BMI. The same facial features were associated with WHR, even though BMI and WHR were only moderately correlated. Yet BMI was better predictable than WHR from facial attributes. After leave-one-out cross-validation, we were able to predict 25% of variation in BMI and 10% of variation in WHR by facial shape. Facial texture predicted only about 3-10% of variation in BMI and WHR. This indicates that facial shape primarily reflects total fat proportion, rather than the distribution of fat within the body. The more reddish facial texture of high-BMI women may be mediated by increased blood pressure and superficial blood flow as well as diet. Our study elucidates how geometric morphometric image analysis serves to quantify the effect of biological factors such as BMI and WHR on facial shape and color, which in turn contributes to social perception.
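    Leave-one-out prediction of a body measure from facial data can be sketched in a simplified form: refit a model with each observation held out, predict that observation, and report the out-of-sample variance explained. The shape scores and BMI values below are made up, and a single predictor stands in for the study's multivariate shape variables.

    ```python
    def fit_line(xs, ys):
        # Ordinary least squares for y = a + b*x.
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
            / sum((x - mx) ** 2 for x in xs)
        return my - b * mx, b

    def loocv_r2(xs, ys):
        """Out-of-sample variance explained: refit the line with each
        observation held out, predict it, then compute 1 - PRESS/SST."""
        n = len(xs)
        preds = []
        for i in range(n):
            a, b = fit_line(xs[:i] + xs[i+1:], ys[:i] + ys[i+1:])
            preds.append(a + b * xs[i])
        my = sum(ys) / n
        press = sum((p - y) ** 2 for p, y in zip(preds, ys))
        sst = sum((y - my) ** 2 for y in ys)
        return 1 - press / sst

    # Hypothetical facial-shape scores and BMI values for 8 subjects.
    shape = [0.1, 0.4, 0.35, 0.8, 0.55, 0.9, 0.2, 0.6]
    bmi = [19.0, 22.5, 21.0, 27.0, 24.0, 28.5, 20.0, 24.5]
    print(round(loocv_r2(shape, bmi), 2))
    ```

    Because each prediction comes from a model that never saw the predicted case, the resulting figure is an honest estimate of predictability, which is how a claim like "25% of variation in BMI" should be read.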

  4. Dynamics of processing invisible faces in the brain: automatic neural encoding of facial expression information.

    PubMed

    Jiang, Yi; Shannon, Robert W; Vizueta, Nathalie; Bernat, Edward M; Patrick, Christopher J; He, Sheng

    2009-02-01

    The fusiform face area (FFA) and the superior temporal sulcus (STS) are suggested to process facial identity and facial expression information respectively. We recently demonstrated a functional dissociation between the FFA and the STS as well as correlated sensitivity of the STS and the amygdala to facial expressions using an interocular suppression paradigm [Jiang, Y., He, S., 2006. Cortical responses to invisible faces: dissociating subsystems for facial-information processing. Curr. Biol. 16, 2023-2029.]. In the current event-related brain potential (ERP) study, we investigated the temporal dynamics of facial information processing. Observers viewed neutral, fearful, and scrambled face stimuli, either visibly or rendered invisible through interocular suppression. Relative to scrambled face stimuli, intact visible faces elicited larger positive P1 (110-130 ms) and larger negative N1 or N170 (160-180 ms) potentials at posterior occipital and bilateral occipito-temporal regions respectively, with the N170 amplitude significantly greater for fearful than neutral faces. Invisible intact faces generated a stronger signal than scrambled faces at 140-200 ms over posterior occipital areas whereas invisible fearful faces (compared to neutral and scrambled faces) elicited a significantly larger negative deflection starting at 220 ms along the STS. These results provide further evidence for cortical processing of facial information without awareness and elucidate the temporal sequence of automatic facial expression information extraction.

  5. Recognizing Facial Expressions Automatically from Video

    NASA Astrophysics Data System (ADS)

    Shan, Caifeng; Braspenning, Ralph

    Facial expressions, resulting from movements of the facial muscles, are the face changes in response to a person's internal emotional states, intentions, or social communications. There is a considerable history associated with the study of facial expressions. Darwin [22] was the first to describe in detail the specific facial expressions associated with emotions in animals and humans, arguing that all mammals show emotions reliably in their faces. Since then, facial expression analysis has been an area of great research interest for behavioral scientists [27]. Psychological studies [48, 3] suggest that facial expressions, as the main mode of nonverbal communication, play a vital role in human face-to-face communication. For illustration, we show some examples of facial expressions in Fig. 1.

  6. Premaxilla: an independent bone that can base therapeutics for middle third growth!

    PubMed

    Trevizan, Mariana; Consolaro, Alberto

    2017-01-01

    Goethe contributed to the early descriptions of the premaxilla. During a certain period of facial growth and development, the premaxilla is an independent and then a semi-independent bone before becoming totally integrated into the maxilla. The premaxilla acts as a stabilizing element within the facial skeleton, comparable to the keystone of a Roman arch, and is closely related to the development of the human face and to abnormal growth with characteristic malformations. Until when does the premaxillary-maxillary suture remain open and offer the opportunity to orthopedically influence facial growth, and thereby facial esthetics and function? Preliminary findings in 1183 skulls from the anatomic museums at USP, Unicamp and Unifesp led us to examine these therapeutic perspectives and their clinical applicability.

  7. Contrasting Specializations for Facial Motion Within the Macaque Face-Processing System

    PubMed Central

    Fisher, Clark; Freiwald, Winrich A.

    2014-01-01

    Facial motion transmits rich and ethologically vital information [1, 2], but how the brain interprets this complex signal is poorly understood. Facial form is analyzed by anatomically distinct face patches in the macaque brain [3, 4], and facial motion activates these patches and surrounding areas [5, 6]. Yet it is not known whether facial motion is processed by its own distinct and specialized neural machinery, and if so, what that machinery's organization might be. To address these questions, we used functional magnetic resonance imaging (fMRI) to monitor the brain activity of macaque monkeys while they viewed low- and high-level motion and form stimuli. We found that, beyond classical motion areas and the known face patch system, moving faces recruited a heretofore-unrecognized face patch. Although all face patches displayed distinctive selectivity for face motion over object motion, only two face patches preferred naturally moving faces, while three others preferred randomized, rapidly varying sequences of facial form. This functional divide was anatomically specific, segregating dorsal from ventral face patches, thereby revealing a new organizational principle of the macaque face-processing system. PMID:25578903

  8. Repeated short presentations of morphed facial expressions change recognition and evaluation of facial expressions.

    PubMed

    Moriya, Jun; Tanno, Yoshihiko; Sugiura, Yoshinori

    2013-11-01

    This study investigated whether sensitivity to and evaluation of facial expressions varied with repeated exposure to non-prototypical facial expressions for a short presentation time. A morphed facial expression was presented for 500 ms repeatedly, and participants were required to indicate whether each facial expression was happy or angry. We manipulated the distribution of presentations of the morphed facial expressions for each facial stimulus. Some of the individuals depicted in the facial stimuli expressed anger frequently (i.e., anger-prone individuals), while the others expressed happiness frequently (i.e., happiness-prone individuals). After being exposed to the faces of anger-prone individuals, the participants became less sensitive to those individuals' angry faces. Further, after being exposed to the faces of happiness-prone individuals, the participants became less sensitive to those individuals' happy faces. We also found a relative increase in the social desirability of happiness-prone individuals after exposure to the facial stimuli.

  9. Accurate landmarking of three-dimensional facial data in the presence of facial expressions and occlusions using a three-dimensional statistical facial feature model.

    PubMed

    Zhao, Xi; Dellandréa, Emmanuel; Chen, Liming; Kakadiaris, Ioannis A

    2011-10-01

    Three-dimensional face landmarking aims at automatically localizing facial landmarks and has a wide range of applications (e.g., face recognition, face tracking, and facial expression analysis). Existing methods assume neutral facial expressions and unoccluded faces. In this paper, we propose a general learning-based framework for reliable landmark localization on 3-D facial data under challenging conditions (i.e., facial expressions and occlusions). Our approach relies on a statistical model, called 3-D statistical facial feature model, which learns both the global variations in configurational relationships between landmarks and the local variations of texture and geometry around each landmark. Based on this model, we further propose an occlusion classifier and a fitting algorithm. Results from experiments on three publicly available 3-D face databases (FRGC, BU-3-DFE, and Bosphorus) demonstrate the effectiveness of our approach, in terms of landmarking accuracy and robustness, in the presence of expressions and occlusions.

  10. The Effect of Target Sex, Sexual Dimorphism, and Facial Attractiveness on Perceptions of Target Attractiveness and Trustworthiness

    PubMed Central

    Hu, Yuanyan; Abbasi, Najam ul Hasan; Zhang, Yang; Chen, Hong

    2018-01-01

    Facial sexual dimorphism has been widely demonstrated to influence facial attractiveness and social interactions. However, earlier studies show inconsistent results on the effect of sexual dimorphism on facial attractiveness judgments. Previous studies suggest that the level of attractiveness might act as a moderating variable in the relationship between sexual dimorphism and facial preference, and have often focused on the effect of sexual dimorphism on general attractiveness ratings rather than on trustworthiness perception. Male and female participants viewed target male and female faces that varied in attractiveness (more attractive or less attractive) and sexual dimorphism (masculine or feminine). Participants rated the attractiveness of the faces and reported how much money they would give to the target person as a measure of trust. For the facial attractiveness ratings, (a) both male and female participants preferred masculine male faces to feminine male ones under the more attractive condition, whereas they preferred feminine male faces to masculine ones under the less attractive condition; (b) all participants preferred feminine female faces to masculine female ones under the less attractive condition, while there was no difference between feminine and masculine female faces under the more attractive condition. For trustworthiness perception, (a) participants showed no preference between masculine and feminine male faces under either the more attractive or the less attractive condition; (b) however, all participants preferred masculine female faces over feminine female faces under the more attractive condition, exhibiting no preference between feminine and masculine female faces under the less attractive condition. These findings suggest that the attractiveness of the facial stimulus may help explain the inconsistent results of previous studies on the effect of facial sexual dimorphism on facial attractiveness. Furthermore, implications of target facial sexual dimorphism for participants' trustworthiness perception are discussed.

  11. Two-dimensional morphometric analysis of young Asian females to determine attractiveness.

    PubMed

    Pothanikat, Joseph John K; Balakrishna, Ramdas; Mahendra, P; Neeta, J

    2015-01-01

    Attractive people do not seem to consistently possess such ideal characteristics or share common features. There is no general consensus about the linear and angular characteristics that discriminate between attractive and normal persons. This study determines how young Asian women considered to be attractive differ in their two-dimensional facial characteristics from normal women of the same age and race. Frontal and lateral photographs of 70 young Asian females were taken under standardized settings and were given to 15 judges, who did not know the subjects in the study, to rate the attractiveness of each photograph. All 70 photographs were arranged in descending order of their total score across all the judges and were classified into three groups. Three angular measurements, 8 linear measurements, and 3 ratios were compared between these groups. This study showed that the most attractive group had the least convex faces, larger foreheads, and wider faces. Conversely, the middle facial height was larger in the least attractive group. The ratio of the middle third to the total face was higher in the most attractive group than in the averagely attractive group, whereas the ratio of the lower third to the total face was lower.

  12. Frontal facial proportions of 12-year-old southern Chinese: a photogrammetric study.

    PubMed

    Yeung, Charles Yat Cheong; McGrath, Colman Patrick; Wong, Ricky Wing Kit; Hägg, Erik Urban Oskar; Lo, John; Yang, Yanqi

    2015-08-14

    This study aimed to establish norm values for facial proportion indices among 12-year-old southern Chinese children, to determine lower facial proportion, and to identify gender differences in facial proportions. A random population sample of 514 children was recruited. Fifteen facial landmarks were plotted with ImageJ (V1.45) on standardized photos and 22 facial proportion index values were obtained. Gender differences were analyzed by 2-sample t-test with 95% confidence interval. Repeated measurements were conducted on approximately 10% of the cases. The rate of adopted subjects was 52.5% (270/514). Intraclass correlation coefficient (ICC) values for intra-examiner reliability were >0.87. Population facial proportion index values were derived. Gender differences in 11 of the facial proportion indices were evident (P < 0.05). Upper face-face height (N-Sto/N-Gn), vermilion height (Ls-Sto/Sto-Li), upper face height-biocular width (N-Sto/ExR-ExL) and nose-face height (N-Sn/N-Gn) indices were found to be larger among girls (P < 0.01). Males had larger lower face-face height (Sn-Gn/N-Gn), mandibulo-face height (Sto-Gn/N-Gn), mandibulo-upper face height (Sto-Gn/N-Sto), nasal (AlR-AlL/N-Sn), upper lip height-mouth width (Sn-Sto/ChR-ChL), upper lip-upper face height (Sn-Sto/N-Sto) and upper lip-nose height (Sn-Sto/N-Sn) indices (P < 0.05). Population norms of facial proportion indices for 12-year-old southern Chinese children were derived and the mean lower facial proportion was obtained. Sexual dimorphism is apparent.
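    A proportion index of this kind is simply the ratio of two inter-landmark distances, scaled by 100. A minimal sketch (the landmark coordinates below are hypothetical, not from the study):

    ```python
    from math import dist  # Euclidean distance, Python 3.8+

    # Hypothetical 2-D landmark coordinates (pixels) on a standardized
    # frontal photo: N = nasion, Sn = subnasale, Gn = gnathion.
    landmarks = {"N": (250, 180), "Sn": (250, 300), "Gn": (250, 400)}

    def index(a, b, c, d):
        """Proportion index: distance(a, b) / distance(c, d) * 100."""
        return dist(landmarks[a], landmarks[b]) / dist(landmarks[c], landmarks[d]) * 100

    # Lower face-face height index, Sn-Gn / N-Gn.
    print(round(index("Sn", "Gn", "N", "Gn"), 1))  # → 45.5
    ```

    Because indices are ratios, they are invariant to photographic scale, which is what makes them comparable across subjects and studies.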

  13. A shape-based account for holistic face processing.

    PubMed

    Zhao, Mintao; Bülthoff, Heinrich H; Bülthoff, Isabelle

    2016-04-01

    Faces are processed holistically, so selective attention to 1 face part without any influence of the others often fails. In this study, 3 experiments investigated what type of facial information (shape or surface) underlies holistic face processing and whether generalization of holistic processing to nonexperienced faces requires extensive discrimination experience. Results show that facial shape information alone is sufficient to elicit the composite face effect (CFE), 1 of the most convincing demonstrations of holistic processing, whereas facial surface information is unnecessary (Experiment 1). The CFE is eliminated when faces differ only in surface but not shape information, suggesting that variation of facial shape information is necessary to observe holistic face processing (Experiment 2). Removing 3-dimensional (3D) facial shape information also eliminates the CFE, indicating the necessity of 3D shape information for holistic face processing (Experiment 3). Moreover, participants show similar holistic processing for faces with and without extensive discrimination experience (i.e., own- and other-race faces), suggesting that generalization of holistic processing to nonexperienced faces requires facial shape information, but does not necessarily require further individuation experience. These results provide compelling evidence that facial shape information underlies holistic face processing. This shape-based account not only offers a consistent explanation for previous studies of holistic face processing, but also suggests a new ground, in addition to expertise, for the generalization of holistic processing to different types of faces and to nonface objects. (c) 2016 APA, all rights reserved.

  14. Identity recognition and happy and sad facial expression recall: influence of depressive symptoms.

    PubMed

    Jermann, Françoise; van der Linden, Martial; D'Argembeau, Arnaud

    2008-05-01

    Relatively few studies have examined memory bias for social stimuli in depression or dysphoria. The aim of this study was to investigate the influence of depressive symptoms on memory for facial information. A total of 234 participants completed the Beck Depression Inventory II and a task examining memory for facial identity and expression of happy and sad faces. For both facial identity and expression, the recollective experience was measured with the Remember/Know/Guess procedure (Gardiner & Richardson-Klavehn, 2000). The results show no major association between depressive symptoms and memory for identities. However, dysphoric individuals consciously recalled (Remember responses) more sad facial expressions than non-dysphoric individuals. These findings suggest that sad facial expressions led to more elaborate encoding, and thereby better recollection, in dysphoric individuals.

  15. Recognizing Action Units for Facial Expression Analysis

    PubMed Central

    Tian, Ying-li; Kanade, Takeo; Cohn, Jeffrey F.

    2010-01-01

    Most automatic expression analysis systems attempt to recognize a small set of prototypic expressions, such as happiness, anger, surprise, and fear. Such prototypic expressions, however, occur rather infrequently. Human emotions and intentions are more often communicated by changes in one or a few discrete facial features. In this paper, we develop an Automatic Face Analysis (AFA) system to analyze facial expressions based on both permanent facial features (brows, eyes, mouth) and transient facial features (deepening of facial furrows) in a nearly frontal-view face image sequence. The AFA system recognizes fine-grained changes in facial expression into action units (AUs) of the Facial Action Coding System (FACS), instead of a few prototypic expressions. Multistate face and facial component models are proposed for tracking and modeling the various facial features, including lips, eyes, brows, cheeks, and furrows. During tracking, detailed parametric descriptions of the facial features are extracted. With these parameters as the inputs, a group of action units (neutral expression, six upper face AUs and 10 lower face AUs) are recognized whether they occur alone or in combinations. The system has achieved average recognition rates of 96.4 percent (95.4 percent if neutral expressions are excluded) for upper face AUs and 96.7 percent (95.6 percent with neutral expressions excluded) for lower face AUs. The generalizability of the system has been tested by using independent image databases collected and FACS-coded for ground-truth by different research teams. PMID:25210210

  16. Exposure to the self-face facilitates identification of dynamic facial expressions: influences on individual differences.

    PubMed

    Li, Yuan Hang; Tottenham, Nim

    2013-04-01

    A growing literature suggests that the self-face is involved in processing the facial expressions of others. The authors experimentally activated self-face representations to assess its effects on the recognition of dynamically emerging facial expressions of others. They exposed participants to videos of either their own faces (self-face prime) or faces of others (nonself-face prime) prior to a facial expression judgment task. Their results show that experimentally activating self-face representations results in earlier recognition of dynamically emerging facial expression. As a group, participants in the self-face prime condition recognized expressions earlier (when less affective perceptual information was available) compared to participants in the nonself-face prime condition. There were individual differences in performance, such that poorer expression identification was associated with higher autism traits (in this neurocognitively healthy sample). However, when randomized into the self-face prime condition, participants with high autism traits performed as well as those with low autism traits. Taken together, these data suggest that the ability to recognize facial expressions in others is linked with the internal representations of our own faces. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  17. Facial recognition using simulated prosthetic pixelized vision.

    PubMed

    Thompson, Robert W; Barnett, G David; Humayun, Mark S; Dagnelie, Gislin

    2003-11-01

    To evaluate a model of simulated pixelized prosthetic vision using noncontiguous circular phosphenes, to test the effects of phosphene and grid parameters on facial recognition. A video headset was used to view a reference set of four faces, followed by a partially averted image of one of those faces viewed through a square pixelizing grid that contained 10x10 to 32x32 dots separated by gaps. The grid size, dot size, gap width, dot dropout rate, and gray-scale resolution were varied separately about a standard test condition, for a total of 16 conditions. All tests were first performed at 99% contrast and then repeated at 12.5% contrast. Discrimination speed and performance were influenced by all stimulus parameters. The subjects achieved highly significant facial recognition accuracy for all high-contrast tests except for grids with 70% random dot dropout and two gray levels. In low-contrast tests, significant facial recognition accuracy was achieved for all but the most adverse grid parameters: total grid area less than 17% of the target image, 70% dropout, four or fewer gray levels, and a gap of 40.5 arcmin. For difficult test conditions, a pronounced learning effect was noticed during high-contrast trials, and a more subtle practice effect on timing was evident during subsequent low-contrast trials. These findings suggest that reliable face recognition with crude pixelized grids can be learned and may be possible, even with a crude visual prosthesis.
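    The pixelizing-grid manipulation can be approximated with a short sketch: block-average the image into a coarse grid of "phosphenes", quantize gray levels, and randomly drop dots. The parameter names and toy image are assumptions for illustration, not the study's code, and gap width is omitted for brevity.

    ```python
    import random

    def pixelize(image, grid, gray_levels, dropout, seed=0):
        """Reduce a grayscale image (list of rows, values 0-255) to a
        grid x grid array of phosphenes: block-average each cell,
        quantize to the given number of gray levels, and randomly
        drop a fraction of dots (set to 0, i.e. off)."""
        rng = random.Random(seed)
        h, w = len(image), len(image[0])
        out = []
        for gy in range(grid):
            row = []
            for gx in range(grid):
                ys = range(gy * h // grid, (gy + 1) * h // grid)
                xs = range(gx * w // grid, (gx + 1) * w // grid)
                block = [image[y][x] for y in ys for x in xs]
                mean = sum(block) / len(block)
                step = 255 / (gray_levels - 1)
                q = round(mean / step) * step  # quantize gray level
                row.append(0 if rng.random() < dropout else q)
            out.append(row)
        return out

    # 4x4 toy "image" (left half black, right half white), reduced to
    # a 2x2 grid with 2 gray levels and no dropout.
    img = [[0, 0, 255, 255]] * 4
    print(pixelize(img, 2, 2, 0.0))
    ```

    Varying `grid`, `gray_levels`, and `dropout` reproduces the kind of parameter sweep the study used to probe recognition limits.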

  18. Patient satisfaction after zygoma and mandible reduction surgery: an outcome assessment.

    PubMed

    Choi, Bong-Kyoon; Goh, Raymond C W; Moaveni, Zachary; Lo, Lun-Jou

    2010-08-01

    An ovoid and slender face is considered attractive in Oriental culture, and facial bony contouring is frequently performed in Asian countries to achieve this desired facial profile. Despite the popularity of these procedures, critical analyses of patient satisfaction after facial-bone contouring surgery are lacking in the current literature. Questionnaires were sent to 90 patients who had undergone zygoma and/or mandibular contouring by a single surgeon at the Craniofacial Center, Chang Gung Memorial Hospital, Taiwan. The numbers of patients who had mandibular angle reduction and zygoma reduction were 78 and 36, respectively. The questionnaire contained 20 questions concerning aesthetic and surgical results, psychosocial benefits and general outcome. Medical records were also reviewed for correlation with the questionnaire findings. The survey response rate was 52.2% (47 patients). A total of 95.7% were satisfied with the symmetry of their face after surgery, and 97.9% felt that there was improvement in their final facial appearance. As many as 61.7% could not feel an objectionable new jaw line or bony step and 66.0% could not detect any visible deformity. A total of 87.2% could not detect bony regrowth after surgery. Complications after surgery were experienced by 17.0% of patients, but all of these recovered without long-term consequences. All patients noted a positive psychosocial influence, and 97.9% of patients said that they would undergo the same surgery again under similar circumstances and would recommend the same surgery to friends. The majority of patients with square face seeking facial bone contouring surgery are satisfied with their final appearance. Of equal importance is the ability of this type of surgery to have a positive influence on the patient's psychosocial environment. Copyright 2009 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.

  19. Looking at faces from different angles: Europeans fixate different features in Asian and Caucasian faces.

    PubMed

    Brielmann, Aenne A; Bülthoff, Isabelle; Armann, Regine

    2014-07-01

    Race categorization of faces is a fast and automatic process and is known to affect further face processing profoundly and at the earliest stages. Whether processing of own- and other-race faces might rely on different facial cues, as indicated by diverging viewing behavior, is much under debate. We therefore aimed to investigate two open questions in our study: (1) Do observers consider information from distinct facial features informative for race categorization, or do they prefer to gain global face information by fixating the geometrical center of the face? (2) Does the fixation pattern, or, if facial features are considered relevant, do these features differ between own- and other-race faces? We used eye tracking to test where European observers look when viewing Asian and Caucasian faces in a race categorization task. Importantly, in order to disentangle centrally located fixations from those towards individual facial features, we presented faces in frontal, half-profile and profile views. We found that observers showed no general bias towards looking at the geometrical center of faces, but rather directed their first fixations towards distinct facial features, regardless of face race. However, participants looked at the eyes more often in Caucasian faces than in Asian faces, and there were significantly more fixations to the nose for Asian compared to Caucasian faces. Thus, observers rely on information from distinct facial features rather than facial information gained by centrally fixating the face. To what extent specific features are looked at is determined by the face's race. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.

  20. Aspects of Facial Contrast Decrease with Age and Are Cues for Age Perception

    PubMed Central

    Porcheron, Aurélie; Mauger, Emmanuelle; Russell, Richard

    2013-01-01

    Age is a primary social dimension. We behave differently toward people as a function of how old we perceive them to be. Age perception relies on cues that are correlated with age, such as wrinkles. Here we report that aspects of facial contrast (the contrast between facial features and the surrounding skin) decreased with age in a large sample of adult Caucasian females. These same aspects of facial contrast were also significantly correlated with the perceived age of the faces. Individual faces were perceived as younger when these aspects of facial contrast were artificially increased, but older when they were artificially decreased. These findings show that facial contrast plays a role in age perception, and that faces with greater facial contrast look younger. Because facial contrast is increased by typical cosmetics use, we infer that cosmetics function in part by making the face appear younger. PMID:23483959

  1. What's in a face? The role of skin tone, facial physiognomy, and color presentation mode of facial primes in affective priming effects.

    PubMed

    Stepanova, Elena V; Strube, Michael J

    2012-01-01

    Participants (N = 106) performed an affective priming task with facial primes that varied in skin tone and facial physiognomy and that were presented either in color or in grayscale. Participants' racial evaluations were more positive for Eurocentric than for Afrocentric physiognomy faces. Light skin tone faces were evaluated more positively than dark skin tone faces, but the magnitude of this effect depended on the mode of color presentation. The results suggest that in affective priming tasks, faces might not be processed holistically; instead, visual features of facial priming stimuli independently affect implicit evaluations.

  2. Enhanced facial texture illumination normalization for face recognition.

    PubMed

    Luo, Yong; Guan, Ye-Peng

    2015-08-01

    An uncontrolled lighting condition is one of the most critical challenges for practical face recognition applications. An enhanced facial texture illumination normalization method is put forward to resolve this challenge. An adaptive relighting algorithm is developed to improve the brightness uniformity of face images. Facial texture is extracted by using an illumination estimation difference algorithm. An anisotropic histogram-stretching algorithm is proposed to minimize the intraclass distance of facial skin and maximize the dynamic range of facial texture distribution. Compared with the existing methods, the proposed method can more effectively eliminate the redundant information of facial skin and illumination. Extensive experiments show that the proposed method has superior performance in normalizing illumination variation and enhancing facial texture features for illumination-insensitive face recognition.
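    The abstract names its processing stages without implementation detail. As a rough sketch of the histogram-stretching idea alone (the paper's adaptive, anisotropic variant is not specified in the abstract, and the percentile bounds below are assumed), a stretch that maximizes the dynamic range of a texture map might look like:

```python
import numpy as np

def stretch_histogram(texture, low_pct=1.0, high_pct=99.0):
    """Stretch a facial-texture map to the full [0, 255] range.

    A generic percentile-based stretch for illustration only; the
    paper's anisotropic algorithm additionally adapts the stretch
    per facial region.
    """
    lo, hi = np.percentile(texture, [low_pct, high_pct])
    # Map [lo, hi] onto [0, 1], clipping the outlying tails.
    stretched = np.clip((texture - lo) / max(hi - lo, 1e-8), 0.0, 1.0)
    return (stretched * 255).astype(np.uint8)

# A dim, low-contrast synthetic "face" patch: values crowded in 80-119.
face = np.random.default_rng(0).integers(80, 120, (64, 64)).astype(np.float64)
out = stretch_histogram(face)
print(out.min(), out.max())  # prints: 0 255
```

    Clipping at percentiles rather than at the raw minimum and maximum keeps a few outlier pixels (specular highlights, deep shadows) from compressing the rest of the range.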

  3. Facial movements strategically camouflage involuntary social signals of face morphology.

    PubMed

    Gill, Daniel; Garrod, Oliver G B; Jack, Rachael E; Schyns, Philippe G

    2014-05-01

    Animals use social camouflage as a tool of deceit to increase the likelihood of survival and reproduction. We tested whether humans can also strategically deploy transient facial movements to camouflage the default social traits conveyed by the phenotypic morphology of their faces. We used the responses of 12 observers to create models of the dynamic facial signals of dominance, trustworthiness, and attractiveness. We applied these dynamic models to facial morphologies differing on perceived dominance, trustworthiness, and attractiveness to create a set of dynamic faces; new observers rated each dynamic face according to the three social traits. We found that specific facial movements camouflage the social appearance of a face by modulating the features of phenotypic morphology. A comparison of these facial expressions with those similarly derived for facial emotions showed that social-trait expressions, rather than being simple one-to-one overgeneralizations of emotional expressions, are a distinct set of signals composed of movements from different emotions. Our generative face models represent novel psychophysical laws for social sciences; these laws predict the perception of social traits on the basis of dynamic face identities.

  4. The face is not an empty canvas: how facial expressions interact with facial appearance.

    PubMed

    Hess, Ursula; Adams, Reginald B; Kleck, Robert E

    2009-12-12

    Faces are not simply blank canvases upon which facial expressions write their emotional messages. In fact, facial appearance and facial movement are both important social signalling systems in their own right. Here we provide multiple lines of evidence for the notion that the social signals derived from facial appearance on the one hand and facial movement on the other interact in a complex manner, sometimes reinforcing and sometimes contradicting one another. Faces provide information on who a person is. Sex, age, ethnicity, personality and other characteristics that can define a person and the social group the person belongs to can all be derived from the face alone. The present article argues that faces interact with the perception of emotion expressions because this information informs a decoder's expectations regarding an expresser's probable emotional reactions. Facial appearance also interacts more directly with the interpretation of facial movement because some of the features that are used to derive personality or sex information are also features that closely resemble certain emotional expressions, thereby enhancing or diluting the perceived strength of particular expressions.

  5. BMI and WHR Are Reflected in Female Facial Shape and Texture: A Geometric Morphometric Image Analysis

    PubMed Central

    Mayer, Christine; Windhager, Sonja; Schaefer, Katrin; Mitteroecker, Philipp

    2017-01-01

    Facial markers of body composition are frequently studied in evolutionary psychology and are important in computational and forensic face recognition. We assessed the association of body mass index (BMI) and waist-to-hip ratio (WHR) with facial shape and texture (color pattern) in a sample of young Middle European women by a combination of geometric morphometrics and image analysis. Faces of women with high BMI had a wider and rounder facial outline relative to the size of the eyes and lips, and relatively lower eyebrows. Furthermore, women with high BMI had a brighter and more reddish skin color than women with lower BMI. The same facial features were associated with WHR, even though BMI and WHR were only moderately correlated. Yet BMI was better predictable than WHR from facial attributes. After leave-one-out cross-validation, we were able to predict 25% of variation in BMI and 10% of variation in WHR by facial shape. Facial texture predicted only about 3–10% of variation in BMI and WHR. This indicates that facial shape primarily reflects total fat proportion, rather than the distribution of fat within the body. The association of reddish facial texture in high-BMI women may be mediated by increased blood pressure and superficial blood flow as well as diet. Our study elucidates how geometric morphometric image analysis serves to quantify the effect of biological factors such as BMI and WHR on facial shape and color, which in turn contributes to social perception. PMID:28052103
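    The 25% and 10% figures are leave-one-out cross-validated predictions. A minimal sketch of that validation scheme for an ordinary-least-squares predictor, on toy data rather than the study's morphometric shape variables:

```python
import numpy as np

def loo_r2(X, y):
    """Leave-one-out cross-validated R^2 for ordinary least squares.

    Illustrates the validation scheme only: each observation is
    predicted from a model fit on all remaining observations.
    """
    X1 = np.column_stack([np.ones(len(y)), X])  # add intercept
    preds = np.empty_like(y, dtype=float)
    for i in range(len(y)):
        mask = np.arange(len(y)) != i            # hold out sample i
        beta, *_ = np.linalg.lstsq(X1[mask], y[mask], rcond=None)
        preds[i] = X1[i] @ beta
    ss_res = np.sum((y - preds) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1 - ss_res / ss_tot

rng = np.random.default_rng(1)
X = rng.normal(size=(40, 3))                     # toy "shape" predictors
y = X @ np.array([1.0, 0.5, 0.0]) + rng.normal(scale=0.5, size=40)
print(f"LOO R^2: {loo_r2(X, y):.2f}")
```

    Because each prediction comes from a model that never saw the held-out case, the cross-validated R² is a more honest estimate of predictive power than the in-sample fit.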

  6. Attractiveness as a Function of Skin Tone and Facial Features: Evidence from Categorization Studies.

    PubMed

    Stepanova, Elena V; Strube, Michael J

    2018-01-01

    Participants rated the attractiveness and racial typicality of male faces varying in their facial features from Afrocentric to Eurocentric and in skin tone from dark to light in two experiments. Experiment 1 provided evidence that facial features and skin tone have an interactive effect on perceptions of attractiveness and mixed-race faces are perceived as more attractive than single-race faces. Experiment 2 further confirmed that faces with medium levels of skin tone and facial features are perceived as more attractive than faces with extreme levels of these factors. Black phenotypes (combinations of dark skin tone and Afrocentric facial features) were rated as more attractive than White phenotypes (combinations of light skin tone and Eurocentric facial features); ambiguous faces (combinations of Afrocentric and Eurocentric physiognomy) with medium levels of skin tone were rated as the most attractive in Experiment 2. Perceptions of attractiveness were relatively independent of racial categorization in both experiments.

  7. Heritabilities of Facial Measurements and Their Latent Factors in Korean Families

    PubMed Central

    Kim, Hyun-Jin; Im, Sun-Wha; Jargal, Ganchimeg; Lee, Siwoo; Yi, Jae-Hyuk; Park, Jeong-Yeon; Sung, Joohon; Cho, Sung-Il; Kim, Jong-Yeol; Kim, Jong-Il; Seo, Jeong-Sun

    2013-01-01

    Genetic studies on facial morphology targeting healthy populations are fundamental in understanding the specific genetic influences involved; yet, most studies to date, if not all, have been focused on congenital diseases accompanied by facial anomalies. To study the specific genetic cues determining facial morphology, we estimated familial correlations and heritabilities of 14 facial measurements and 3 latent factors inferred from a factor analysis in a subset of the Korean population. The study included a total of 229 individuals from 38 families. We evaluated a total of 14 facial measurements using 2D digital photographs. We performed factor analysis to infer common latent variables. The heritabilities of 13 facial measurements were statistically significant (p < 0.05) and ranged from 0.25 to 0.61. Of these, the heritability of intercanthal width in the orbital region was found to be the highest (h2 = 0.61, SE = 0.14). Three factors (lower face portion, orbital region, and vertical length) were obtained through factor analysis, where the heritability values ranged from 0.45 to 0.55. The heritability values for each factor were higher than the mean heritability value of individual original measurements. We have confirmed the genetic influence on facial anthropometric traits and suggest a potential way to categorize and analyze the facial portions into different groups. PMID:23843774

  8. The face-selective N170 component is modulated by facial color.

    PubMed

    Nakajima, Kae; Minami, Tetsuto; Nakauchi, Shigeki

    2012-08-01

    Faces play an important role in social interaction by conveying information and emotion. Of the various components of the face, color particularly provides important clues with regard to perception of age, sex, health status, and attractiveness. In event-related potential (ERP) studies, the N170 component has been identified as face-selective. To determine the effect of color on face processing, we investigated the modulation of N170 by facial color. We recorded ERPs while subjects viewed facial color stimuli at 8 hue angles, which were generated by rotating the original facial color distribution around the white point by 45° for each human face. Responses to facial color were localized to the left, but not to the right hemisphere. N170 amplitudes gradually increased in proportion to the increase in hue angle from the natural-colored face. This suggests that N170 amplitude in the left hemisphere reflects processing of facial color information. Copyright © 2012 Elsevier Ltd. All rights reserved.
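    The stimuli were generated by rotating the facial color distribution around the white point. Assuming an opponent-color plane such as CIELAB's a*/b* axes with the white point at the origin (the abstract does not name the color space), the manipulation reduces to a plain 2D rotation that changes hue while preserving chroma and lightness:

```python
import numpy as np

def rotate_hue(ab, angle_deg):
    """Rotate chromatic coordinates around the neutral (white) point.

    `ab` is an (N, 2) array of a*/b*-like values; with white at
    (0, 0), rotation changes hue but preserves chroma.
    """
    t = np.radians(angle_deg)
    rot = np.array([[np.cos(t), -np.sin(t)],
                    [np.sin(t),  np.cos(t)]])
    return ab @ rot.T

skin = np.array([[15.0, 20.0]])   # a reddish-yellow skin chroma, illustrative
print(rotate_hue(skin, 45))       # same distance from white, shifted hue
```

    Applying the same rotation to every pixel of a face image yields the 8 hue-angle conditions (45° steps) described in the abstract.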

  9. Face inversion increases attractiveness.

    PubMed

    Leder, Helmut; Goller, Juergen; Forster, Michael; Schlageter, Lena; Paul, Matthew A

    2017-07-01

    Assessing facial attractiveness is a ubiquitous, inherent, and hard-wired phenomenon in everyday interactions. As such, it is highly adapted to the default way that faces are typically processed: viewing faces in upright orientation. By inverting faces, we can disrupt this default mode and study how facial attractiveness is assessed. Faces rotated by 90° (tilted to either side) and by 180° were rated on attractiveness and distinctiveness scales. For both rotations, we found that the transformed faces were rated as more attractive and less distinctive than upright faces. Importantly, these effects were more pronounced for faces rated low in upright orientation, and smaller for highly attractive faces. In other words, the less attractive a face was, the more it gained in attractiveness by rotation or inversion. Based on these findings, we argue that facial attractiveness assessments might rely not on the presence of attractive facial characteristics, but on the absence of distinctive, unattractive ones. These unattractive characteristics are potentially weighed against an individual, attractive prototype in assessing facial attractiveness. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  10. Body size and allometric variation in facial shape in children.

    PubMed

    Larson, Jacinda R; Manyama, Mange F; Cole, Joanne B; Gonzalez, Paula N; Percival, Christopher J; Liberton, Denise K; Ferrara, Tracey M; Riccardi, Sheri L; Kimwaga, Emmanuel A; Mathayo, Joshua; Spitzmacher, Jared A; Rolian, Campbell; Jamniczky, Heather A; Weinberg, Seth M; Roseman, Charles C; Klein, Ophir; Lukowiak, Ken; Spritz, Richard A; Hallgrimsson, Benedikt

    2018-02-01

    Morphological integration, or the tendency for covariation, is commonly seen in complex traits such as the human face. The effects of growth on shape, or allometry, represent a ubiquitous but poorly understood axis of integration. We address the question of the extent to which age and measures of size converge on a single pattern of allometry for human facial shape. Our study is based on two large cross-sectional cohorts of children, one from Tanzania and the other from the United States (N = 7,173). We employ 3D facial imaging and geometric morphometrics to relate facial shape to age and anthropometric measures. The two populations differ significantly in facial shape, but the magnitude of this difference is small relative to the variation within each group. Allometric variation for facial shape is similar in both populations, representing a small but significant proportion of total variation in facial shape. Different measures of size are associated with overlapping but statistically distinct aspects of shape variation. Only half of the size-related variation in facial shape can be explained by the first principal component of four size measures and age, while the remainder associates distinctly with individual measures. Allometric variation in the human face is complex and should not be regarded as a singular effect. This finding has important implications for how size is treated in studies of human facial shape and for the developmental basis for allometric variation more generally. © 2017 Wiley Periodicals, Inc.

  11. Facial recognition in education system

    NASA Astrophysics Data System (ADS)

    Krithika, L. B.; Venkatesh, K.; Rathore, S.; Kumar, M. Harish

    2017-11-01

    Human beings exploit emotions comprehensively for conveying messages and resolving them. Emotion detection and face recognition can provide an interface between individuals and technologies, and the most successful application of recognition analysis is the recognition of faces. Many different techniques have been used to recognize facial expressions and to detect emotion under varying poses. In this paper, we present an efficient method for recognizing facial expressions by tracking face points and the distances between them. The method automatically identifies an observer's face movements and facial expression in an image, capturing different aspects of emotion and facial expression.

  12. Mapping spontaneous facial expression in people with Parkinson's disease: A multiple case study design.

    PubMed

    Gunnery, Sarah D; Naumova, Elena N; Saint-Hilaire, Marie; Tickle-Degnen, Linda

    2017-01-01

    People with Parkinson's disease (PD) often experience a decrease in their facial expressivity, but little is known about how the coordinated movements across regions of the face are impaired in PD. The face has neurologically independent regions that coordinate to articulate distinct social meanings that others perceive as gestalt expressions, and so understanding how different regions of the face are affected is important. Using the Facial Action Coding System, this study comprehensively measured spontaneous facial expression across 600 frames for a multiple case study of people with PD who were rated as having varying degrees of facial expression deficits, and created correlation matrices for frequency and intensity of produced muscle activations across different areas of the face. Data visualization techniques were used to create temporal and correlational mappings of muscle action in the face at different degrees of facial expressivity. Results showed that as severity of facial expression deficit increased, there was a decrease in number, duration, intensity, and coactivation of facial muscle action. This understanding of how regions of the parkinsonian face move independently and in conjunction with other regions will provide a new focus for future research aiming to model how facial expression in PD relates to disease progression, stigma, and quality of life.

  13. Can We Distinguish Emotions from Faces? Investigation of Implicit and Explicit Processes of Peak Facial Expressions.

    PubMed

    Xiao, Ruiqi; Li, Xianchun; Li, Lin; Wang, Yanmei

    2016-01-01

    Most previous studies on facial expression recognition have focused on moderate emotions; to date, few studies have investigated the explicit and implicit processing of peak emotions. In the current study, we used images of the transient, peak-intensity expressions of athletes at the moment of winning or losing a competition as materials, and investigated the diagnosability of peak facial expressions at both the implicit and explicit levels. In Experiment 1, participants were instructed to evaluate isolated faces, isolated bodies, and face-body compounds while their eye movements were recorded. The results revealed that isolated bodies and face-body congruent images were better recognized than isolated faces and face-body incongruent images, indicating that the emotional information conveyed by facial cues was ambiguous and that body cues influenced facial emotion recognition. Furthermore, the eye-movement records showed that participants displayed distinct gaze patterns for the congruent and incongruent compounds. In Experiment 2A, a subliminal affective priming task was used, with faces as primes and bodies as targets, to investigate unconscious perception of peak facial expressions. The results showed that a winning face prime facilitated reaction to a winning body target, whereas a losing face prime inhibited it, suggesting that peak facial expressions could be perceived at the implicit level. In Experiment 2B, a revised subliminal affective priming task and a strict awareness test were used to examine the validity of the unconscious perception found in Experiment 2A. Results of Experiment 2B showed that reaction times to both winning and losing body targets were influenced by the invisible peak facial expression primes, again indicating unconscious perception of peak facial expressions. In general, the results indicate that peak facial expressions cannot be consciously recognized but can be perceived at the unconscious level.

  14. Can We Distinguish Emotions from Faces? Investigation of Implicit and Explicit Processes of Peak Facial Expressions

    PubMed Central

    Xiao, Ruiqi; Li, Xianchun; Li, Lin; Wang, Yanmei

    2016-01-01

    Most previous studies on facial expression recognition have focused on moderate emotions; to date, few studies have investigated the explicit and implicit processing of peak emotions. In the current study, we used images of the transient, peak-intensity expressions of athletes at the moment of winning or losing a competition as materials, and investigated the diagnosability of peak facial expressions at both the implicit and explicit levels. In Experiment 1, participants were instructed to evaluate isolated faces, isolated bodies, and face-body compounds while their eye movements were recorded. The results revealed that isolated bodies and face-body congruent images were better recognized than isolated faces and face-body incongruent images, indicating that the emotional information conveyed by facial cues was ambiguous and that body cues influenced facial emotion recognition. Furthermore, the eye-movement records showed that participants displayed distinct gaze patterns for the congruent and incongruent compounds. In Experiment 2A, a subliminal affective priming task was used, with faces as primes and bodies as targets, to investigate unconscious perception of peak facial expressions. The results showed that a winning face prime facilitated reaction to a winning body target, whereas a losing face prime inhibited it, suggesting that peak facial expressions could be perceived at the implicit level. In Experiment 2B, a revised subliminal affective priming task and a strict awareness test were used to examine the validity of the unconscious perception found in Experiment 2A. Results of Experiment 2B showed that reaction times to both winning and losing body targets were influenced by the invisible peak facial expression primes, again indicating unconscious perception of peak facial expressions. In general, the results indicate that peak facial expressions cannot be consciously recognized but can be perceived at the unconscious level. PMID:27630604

  15. Identification of facial shape by applying golden ratio to the facial measurements: an interracial study in malaysian population.

    PubMed

    Packiriswamy, Vasanthakumar; Kumar, Pramod; Rao, Mohandas

    2012-12-01

    The "golden ratio" is considered a universal standard of facial aesthetics. Researchers have opined that deviation from the golden ratio can result in the development of facial abnormalities. This study was designed to examine facial morphology and to identify individuals with normal, short, and long faces. We studied 300 subjects of Malaysian nationality, aged 18-28 years, of Chinese, Indian, and Malay extraction. The parameters measured were physiognomical facial height and facial width, from which the physiognomical facial index was calculated. Face shape was classified based on the golden ratio. An independent t test was used to assess differences between the sexes and among the races. The mean values of the measurements and index showed significant sexual and interracial differences. Of the 300 subjects, face shape was normal in 60 subjects, short in 224 subjects, and long in 16 subjects. As anticipated, the measurements varied with gender and race. Only 60 subjects had a regular face shape; the remaining 240 subjects had an irregular (short or long) face shape. Since individuals with short or long faces may be at risk of developing various disorders, knowledge of the facial shapes in a given population is important for early diagnosis and treatment.
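    The classification step can be made concrete. A minimal sketch, assuming the index is facial height over facial width and that "normal" means falling within a tolerance band around the golden ratio (the paper's exact cutoffs are not given in the abstract, so the ±5% band is an assumed parameter):

```python
# Hypothetical classifier illustrating the golden-ratio scheme.
GOLDEN_RATIO = 1.618
TOLERANCE = 0.05  # assumed band; not specified in the abstract

def classify_face(height_cm, width_cm, tol=TOLERANCE):
    """Label a face short, normal, or long by its height/width ratio."""
    ratio = height_cm / width_cm
    if ratio < GOLDEN_RATIO * (1 - tol):
        return "short"
    if ratio > GOLDEN_RATIO * (1 + tol):
        return "long"
    return "normal"

print(classify_face(18.5, 11.4))  # ratio ~1.62 -> prints: normal
```

    With such a band, most deviation lands in the "short" or "long" bins, mirroring the study's finding that only 60 of 300 subjects fell in the normal range.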

  16. Total Face, Eyelids, Ears, Scalp, and Skeletal Subunit Transplant Cadaver Simulation: The Culmination of Aesthetic, Craniofacial, and Microsurgery Principles.

    PubMed

    Sosin, Michael; Ceradini, Daniel J; Hazen, Alexes; Levine, Jamie P; Staffenberg, David A; Saadeh, Pierre B; Flores, Roberto L; Brecht, Lawrence E; Bernstein, G Leslie; Rodriguez, Eduardo D

    2016-05-01

    The application of aesthetic, craniofacial, and microsurgical principles in the execution of face transplantation may improve outcomes. Optimal soft-tissue face transplantation can be achieved by incorporating subunit facial skeletal replacement and subsequent tissue resuspension. The purpose of this study was to establish a reconstructive solution for a full face and scalp burn and to evaluate outcome precision and consistency. Seven mock face transplants (14 cadavers) were completed in the span of 1 year. Components of the vascularized composite allograft included the eyelids, nose, lips, facial muscles, oral mucosa, total scalp, and ears, together with skeletal subunits of the zygoma, nasal bone, and genial segment. Virtual surgical planning was used for osteotomy selection and to evaluate postoperative precision of hard- and soft-tissue elements. Experience from each transplant decreased the surgical time of subsequent transplants. Prefabricated cutting guides facilitated a faster dissection of both donor and recipient tissue, requiring minimal alteration to the allograft for proper fixation of bony segments during inset. Regardless of donor-to-recipient size discrepancy, ample soft tissue was available to achieve tension-free allograft inset. Differences between virtual transplant simulation and posttransplant measurements were minimal or insignificant, supporting replicable and precise outcomes. This facial transplant model was designed to optimize reconstruction of extensive soft-tissue defects of the craniofacial region representative of electrical, thermal, and chemical burns, by incorporating skeletal subunits within the allograft. The implementation of aesthetic, craniofacial, and microsurgical principles and computer-assisted technology improves surgical precision, decreases operative time, and may optimize function.

  17. A Quantitative Approach to Determining the Ideal Female Lip Aesthetic and Its Effect on Facial Attractiveness.

    PubMed

    Popenko, Natalie A; Tripathi, Prem B; Devcic, Zlatko; Karimi, Koohyar; Osann, Kathryn; Wong, Brian J F

    2017-07-01

    Aesthetic proportions of the lips and their effect on facial attractiveness are poorly defined. Established guidelines would aid practitioners in achieving optimal aesthetic outcomes during cosmetic augmentation. The objective was to assess the most attractive lip dimensions of white women based on attractiveness rankings of surface area, ratio of upper to lower lip, and dimensions of the lip surface area relative to the lower third of the face. In phase 1 of this study, synthetically morphed frontal digital images of the faces of 20 white women aged 18 to 25 years were used to generate 5 varied lip surface areas for each face. These 100 faces were cardinally ranked by attractiveness by 150 participants in conventional and internet-based focus groups. A summed ranking score of each face was plotted to quantify the most attractive surface area. In phase 2, 4 variants of each of the 15 most attractive images were created by manipulating upper-to-lower lip ratios while maintaining the most attractive surface area from phase 1. A total of 60 faces were created, and each ratio was ranked by attractiveness by 428 participants (internet-based focus groups). In phase 3, the surface area from the most attractive faces was used to determine the total lip surface area relative to the lower facial third. Data were collected from March 1 to November 31, 2010, and analyzed from June 1 to October 31, 2016. The main outcome measures were the most attractive lip surface area, ratio of upper to lower lip, and dimension of the lips relative to the lower facial third. In phase 1, all 100 faces were cardinally ranked by 150 individuals (internet-based focus groups [n = 130] and raters from conventional focus groups [n = 20]). In phase 2, all 60 faces were cardinally ranked by 428 participants (internet-based focus groups [n = 408] and conventional raters [n = 20]).
The surface area that corresponded to the range of 2.0 to 2.5 × 10⁴ pixels had the highest summed rank, generating a pool of 14 images. This surface area was determined to be the most attractive and corresponded to a 53.5% increase in surface area from the original image. With the highest mean and the highest proportion of most-attractive rankings, the 1:2 ratio was deemed most attractive. Conversely, the 2:1 ratio was deemed least attractive, having the lowest mean (1.61) and the highest proportion of rank-1 placements (310 votes [72.3%]). Using a robust sample size, this study found that the most attractive lips represent a 53.5% increase in surface area from baseline, an upper-to-lower lip ratio of 1:2, and a surface area equal to 9.6% of the lower third of the face. Lip dimensions and ratios derived in this study may provide guidelines for improving overall facial aesthetics and have clinical relevance to the field of facial plastic surgery.
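    The reported ideals combine into simple arithmetic: total lip surface area equal to 9.6% of the lower facial third, split 1:2 between upper and lower lip. A worked example with a hypothetical lower-third area:

```python
# Worked arithmetic from the reported ideals; the pixel figure for
# the lower facial third is an illustrative, made-up example value.
lower_third_area = 2.4e5               # pixels, hypothetical image
lip_area = 0.096 * lower_third_area    # lips = 9.6% of lower third
upper_lip = lip_area * (1 / 3)         # 1 part of the 1:2 ratio
lower_lip = lip_area * (2 / 3)         # 2 parts of the 1:2 ratio
print(round(lip_area), round(upper_lip), round(lower_lip))
# prints: 23040 7680 15360
```

    In practice the measured lower-third area of the patient's face would replace the example value, giving target areas for augmentation planning.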

  18. Reconstructing 3D Face Model with Associated Expression Deformation from a Single Face Image via Constructing a Low-Dimensional Expression Deformation Manifold.

    PubMed

    Wang, Shu-Fan; Lai, Shang-Hong

    2011-10-01

    Facial expression modeling is central to facial expression recognition and expression synthesis for facial animation. In this work, we propose a manifold-based 3D face reconstruction approach to estimating the 3D face model and the associated expression deformation from a single face image. With the proposed robust weighted feature map (RWF), we can obtain the dense correspondences between 3D face models and build a nonlinear 3D expression manifold from a large set of 3D facial expression models. Then a Gaussian mixture model in this manifold is learned to represent the distribution of expression deformation. By combining the merits of morphable neutral face model and the low-dimensional expression manifold, a novel algorithm is developed to reconstruct the 3D face geometry as well as the facial deformation from a single face image in an energy minimization framework. Experimental results on simulated and real images are shown to validate the effectiveness and accuracy of the proposed algorithm.

  19. Sad Facial Expressions Increase Choice Blindness

    PubMed Central

    Wang, Yajie; Zhao, Song; Zhang, Zhijie; Feng, Wenfeng

    2018-01-01

    Previous studies have discovered a fascinating phenomenon known as choice blindness—individuals fail to detect mismatches between the face they choose and the face replaced by the experimenter. Although previous studies have reported a couple of factors that can modulate the magnitude of choice blindness, the potential effect of facial expression on choice blindness has not yet been explored. Using faces with sad and neutral expressions (Experiment 1) and faces with happy and neutral expressions (Experiment 2) in the classic choice blindness paradigm, the present study investigated the effects of facial expressions on choice blindness. The results showed that the detection rate was significantly lower on sad faces than neutral faces, whereas no significant difference was observed between happy faces and neutral faces. The exploratory analysis of verbal reports found that participants who reported less facial features for sad (as compared to neutral) expressions also tended to show a lower detection rate of sad (as compared to neutral) faces. These findings indicated that sad facial expressions increased choice blindness, which might have resulted from inhibition of further processing of the detailed facial features by the less attractive sad expressions (as compared to neutral expressions). PMID:29358926

  20. Sad Facial Expressions Increase Choice Blindness.

    PubMed

    Wang, Yajie; Zhao, Song; Zhang, Zhijie; Feng, Wenfeng

    2017-01-01

    Previous studies have discovered a fascinating phenomenon known as choice blindness: individuals fail to detect mismatches between the face they choose and the face replaced by the experimenter. Although previous studies have reported a couple of factors that can modulate the magnitude of choice blindness, the potential effect of facial expression on choice blindness has not yet been explored. Using faces with sad and neutral expressions (Experiment 1) and faces with happy and neutral expressions (Experiment 2) in the classic choice blindness paradigm, the present study investigated the effects of facial expressions on choice blindness. The results showed that the detection rate was significantly lower on sad faces than neutral faces, whereas no significant difference was observed between happy faces and neutral faces. The exploratory analysis of verbal reports found that participants who reported less facial features for sad (as compared to neutral) expressions also tended to show a lower detection rate of sad (as compared to neutral) faces. These findings indicated that sad facial expressions increased choice blindness, which might have resulted from inhibition of further processing of the detailed facial features by the less attractive sad expressions (as compared to neutral expressions).

  1. Effect of positive emotion on consolidation of memory for faces: the modulation of facial valence and facial gender.

    PubMed

    Wang, Bo

    2013-01-01

    Studies have shown that emotion elicited after learning enhances memory consolidation. However, no prior studies have used facial photos as stimuli. This study examined the effect of post-learning positive emotion on the consolidation of memory for faces. During learning, participants viewed neutral, positive, or negative faces. They were then assigned to a condition in which they watched either a 9-minute positive video clip or a 9-minute neutral video. Thirty minutes after learning, participants took a surprise memory test, in which they made "remember", "know", and "new" judgements. The findings are: (1) positive emotion enhanced consolidation of recognition for negative male faces, but impaired consolidation of recognition for negative female faces; (2) for male faces, recognition of negative faces was equivalent to that of positive faces; for female faces, recognition of negative faces was better than that of positive faces. Our study provides important evidence that the effect of post-learning emotion on memory consolidation can extend to facial stimuli and that such an effect can be modulated by facial valence and facial gender. The findings may shed light on establishing models concerning the influence of emotion on memory consolidation.

  2. The relationship between facial emotion recognition and executive functions in first-episode patients with schizophrenia and their siblings.

    PubMed

    Yang, Chengqing; Zhang, Tianhong; Li, Zezhi; Heeramun-Aubeeluck, Anisha; Liu, Na; Huang, Nan; Zhang, Jie; He, Leiying; Li, Hui; Tang, Yingying; Chen, Fazhan; Liu, Fei; Wang, Jijun; Lu, Zheng

    2015-10-08

    Although many studies have examined executive functions and facial emotion recognition in people with schizophrenia, few have focused on the correlation between the two. Furthermore, their relationship in the siblings of patients also remains unclear. The aim of the present study was to examine the correlation between executive functions and facial emotion recognition in patients with first-episode schizophrenia and their siblings. Thirty patients with first-episode schizophrenia, twenty-six of their siblings, and thirty healthy controls were enrolled. They completed facial emotion recognition tasks using the Ekman Standard Faces Database, and executive functioning was measured by the Wisconsin Card Sorting Test (WCST). Hierarchical regression analysis was applied to assess the correlation between executive functions and facial emotion recognition. Our study found that in siblings, the accuracy in recognizing low degree 'disgust' emotion was negatively correlated with the total correct rate in the WCST (r = -0.614, p = 0.023), but was positively correlated with the total error in the WCST (r = 0.623, p = 0.020); the accuracy in recognizing 'neutral' emotion was positively correlated with the total error rate in the WCST (r = 0.683, p = 0.014) while negatively correlated with the total correct rate in the WCST (r = -0.677, p = 0.017). People with schizophrenia showed an impairment in facial emotion recognition when identifying moderate 'happy' facial emotion, the accuracy of which was significantly correlated with the number of completed categories of the WCST (R(2) = 0.432, P < .05). There were no correlations between executive functions and facial emotion recognition in the healthy control group. Our study demonstrated that facial emotion recognition impairment correlated with executive function impairment in people with schizophrenia and their unaffected siblings but not in healthy controls.

  3. Recent Advances in Face Lift to Achieve Facial Balance.

    PubMed

    Ilankovan, Velupillai

    2017-03-01

    Facial balance is achieved by correction of the facial proportions and the facial contour. Ageing, in addition to other factors, affects this balance. We have strived to describe all the recent advances in restoring this balance. The anatomy of ageing, including the various changes in clinical features, is described. The procedures are explained on the basis of the upper, middle and lower face. Different face-lift and neck-lift procedures with innovative techniques are demonstrated. The aim is to provide an unoperated, balanced facial proportion with zero complications.

  4. Symmetrical and Asymmetrical Interactions between Facial Expressions and Gender Information in Face Perception.

    PubMed

    Liu, Chengwei; Liu, Ying; Iqbal, Zahida; Li, Wenhui; Lv, Bo; Jiang, Zhongqing

    2017-01-01

    To investigate the interaction between facial expressions and facial gender information during face perception, the present study matched the intensities of the two types of information in face images and then adopted the orthogonal condition of the Garner Paradigm to present the images to participants who were required to judge the gender and expression of the faces; the gender and expression presentations were varied orthogonally. Gender and expression processing displayed a mutual interaction. On the one hand, the judgment of angry expressions occurred faster when presented with male facial images; on the other hand, the classification of the female gender occurred faster when presented with a happy facial expression than when presented with an angry facial expression. According to the event-related potential results, the expression classification was influenced by gender during the face structural processing stage (as indexed by N170), which indicates the promotion or interference of facial gender with the coding of facial expression features. However, gender processing was affected by facial expressions in more stages, including the early (P1) and late (LPC) stages of perceptual processing, reflecting that emotional expression influences gender processing mainly by directing attention.

  5. Altered saccadic targets when processing facial expressions under different attentional and stimulus conditions.

    PubMed

    Boutsen, Frank A; Dvorak, Justin D; Pulusu, Vinay K; Ross, Elliott D

    2017-04-01

    Depending on a subject's attentional bias, robust changes in emotional perception occur when facial blends (different emotions expressed on upper/lower face) are presented tachistoscopically. If no instructions are given, subjects overwhelmingly identify the lower facial expression when blends are presented to either visual field. If asked to attend to the upper face, subjects overwhelmingly identify the upper facial expression in the left visual field but remain slightly biased to the lower facial expression in the right visual field. The current investigation sought to determine whether differences in initial saccadic targets could help explain the perceptual biases described above. Ten subjects were presented with full and blend facial expressions under different attentional conditions. No saccadic differences were found for left versus right visual field presentations or for full facial versus blend stimuli. When asked to identify the presented emotion, saccades were directed to the lower face. When asked to attend to the upper face, saccades were directed to the upper face. When asked to attend to the upper face and try to identify the emotion, saccades were directed to the upper face but to a lesser degree. Thus, saccadic behavior supports the concept that there are cognitive-attentional pre-attunements when subjects visually process facial expressions. However, these pre-attunements do not fully explain the perceptual superiority of the left visual field for identifying the upper facial expression when facial blends are presented tachistoscopically. Hence other perceptual factors must be in play, such as the phenomenon of virtual scanning. Published by Elsevier Ltd.

  6. Quasi-Facial Communication for Online Learning Using 3D Modeling Techniques

    ERIC Educational Resources Information Center

    Wang, Yushun; Zhuang, Yueting

    2008-01-01

    Online interaction with 3D facial animation is an alternative way of face-to-face communication for distance education. 3D facial modeling is essential for virtual educational environments establishment. This article presents a novel 3D facial modeling solution that facilitates quasi-facial communication for online learning. Our algorithm builds…

  7. Neural evidence for the subliminal processing of facial trustworthiness in infancy.

    PubMed

    Jessen, Sarah; Grossmann, Tobias

    2017-04-22

    Face evaluation is thought to play a vital role in human social interactions. One prominent aspect is the evaluation of facial signs of trustworthiness, which has been shown to occur reliably, rapidly, and without conscious awareness in adults. Recent developmental work indicates that the sensitivity to facial trustworthiness has early ontogenetic origins, as it can already be observed in infancy. However, it is unclear whether infants' sensitivity to facial signs of trustworthiness relies upon conscious processing of a face or, similar to adults, occurs also in response to subliminal faces. To investigate this question, we conducted an event-related brain potential (ERP) study, in which we presented 7-month-old infants with faces varying in trustworthiness. Facial stimuli were presented subliminally (below infants' face visibility threshold) for only 50 ms and then masked by presenting a scrambled face image. Our data revealed that infants' ERP responses to subliminally presented faces differed as a function of trustworthiness. Specifically, untrustworthy faces elicited an enhanced negative slow wave (800-1000 ms) at frontal and central electrodes. The current findings critically extend prior work by showing that, similar to adults, infants' neural detection of facial signs of trustworthiness occurs also in response to subliminal faces. This supports the view that detecting facial trustworthiness is an early developing and automatic process in humans. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. Centralization or decentralization of facial structures in Korean young adults.

    PubMed

    Yoo, Ja-Young; Kim, Jeong-Nam; Shin, Kang-Jae; Kim, Soon-Heum; Choi, Hyun-Gon; Jeon, Hyun-Soo; Koh, Ki-Seok; Song, Wu-Chul

    2013-05-01

    It is well known that facial beauty is dictated by facial type, and harmony between the eyes, nose, and mouth. Furthermore, facial impression is judged according to the overall facial contour and the relationship between the facial structures. The aims of the present study were to determine the optimal criteria for the assessment of gathering or separation of the facial structures and to define standardized ratios for centralization or decentralization of the facial structures. Four different lengths were measured, and 2 indexes were calculated from standardized photographs of 551 volunteers. Centralization and decentralization were assessed using the width index (interpupillary distance / facial width) and height index (eyes-mouth distance / facial height). The mean ranges of the width index and height index were 42.0 to 45.0 and 36.0 to 39.0, respectively. The width index did not differ with sex, but males had more decentralized faces, and females had more centralized faces, vertically. The incidence rate of decentralized faces among the men was 30.3%, and that of centralized faces among the women was 25.2%. The mean ranges in width and height indexes have been determined in a Korean population. Faces with width and height index scores under and over the median ranges are determined to be "centralized" and "decentralized," respectively.
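    The two indexes described above are simple ratios; the following is a minimal sketch, with invented helper names and sample measurements (the abstract reports only the index ranges, not individual data):

```python
# Hypothetical illustration of the width and height indexes described above.
# The formulas follow the abstract; the sample values are invented.

def width_index(interpupillary_mm: float, facial_width_mm: float) -> float:
    """Width index = interpupillary distance / facial width, as a percentage."""
    return 100.0 * interpupillary_mm / facial_width_mm

def height_index(eyes_mouth_mm: float, facial_height_mm: float) -> float:
    """Height index = eyes-mouth distance / facial height, as a percentage."""
    return 100.0 * eyes_mouth_mm / facial_height_mm

def classify(index: float, low: float, high: float) -> str:
    """Under the reported mean range -> centralized; over -> decentralized."""
    if index < low:
        return "centralized"
    if index > high:
        return "decentralized"
    return "average"

# Mean ranges reported in the abstract: width 42.0-45.0, height 36.0-39.0.
w = width_index(63.0, 140.0)   # invented measurements in millimetres
h = height_index(52.0, 190.0)
print(round(w, 1), classify(w, 42.0, 45.0))
print(round(h, 1), classify(h, 36.0, 39.0))
```

    With the reported mean ranges as the "average" band, a width index of 45.0 falls at the upper edge of average, while a height index of roughly 27 would count as vertically centralized (features gathered relative to facial height).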

  9. Social Use of Facial Expressions in Hylobatids

    PubMed Central

    Scheider, Linda; Waller, Bridget M.; Oña, Leonardo; Burrows, Anne M.; Liebal, Katja

    2016-01-01

    Non-human primates use various communicative means in interactions with others. While primate gestures are commonly considered to be intentionally and flexibly used signals, facial expressions are often referred to as inflexible, automatic expressions of affective internal states. To explore whether and how non-human primates use facial expressions in specific communicative interactions, we studied five species of small apes (gibbons) by employing a newly established Facial Action Coding System for hylobatid species (GibbonFACS). We found that, despite individuals often being in close proximity to each other, in social (as opposed to non-social contexts) the duration of facial expressions was significantly longer when gibbons were facing another individual compared to non-facing situations. Social contexts included grooming, agonistic interactions and play, whereas non-social contexts included resting and self-grooming. Additionally, gibbons used facial expressions while facing another individual more often in social contexts than non-social contexts where facial expressions were produced regardless of the attentional state of the partner. Also, facial expressions were more likely ‘responded to’ by the partner’s facial expressions when facing another individual than non-facing. Taken together, our results indicate that gibbons use their facial expressions differentially depending on the social context and are able to use them in a directed way in communicative interactions with other conspecifics. PMID:26978660

  10. The Cambridge Face Memory Test: results for neurologically intact individuals and an investigation of its validity using inverted face stimuli and prosopagnosic participants.

    PubMed

    Duchaine, Brad; Nakayama, Ken

    2006-01-01

    The two standardized tests of face recognition that are widely used suffer from serious shortcomings [Duchaine, B. & Weidenfeld, A. (2003). An evaluation of two commonly used tests of unfamiliar face recognition. Neuropsychologia, 41, 713-720; Duchaine, B. & Nakayama, K. (2004). Developmental prosopagnosia and the Benton Facial Recognition Test. Neurology, 62, 1219-1220]. Images in the Warrington Recognition Memory for Faces test include substantial non-facial information, and the simultaneous presentation of faces in the Benton Facial Recognition Test allows feature matching. Here, we present results from a new test, the Cambridge Face Memory Test, which builds on the strengths of the previous tests. In the test, participants are introduced to six target faces, and then they are tested with forced choice items consisting of three faces, one of which is a target. For each target face, three test items contain views identical to those studied in the introduction, five present novel views, and four present novel views with noise. There are a total of 72 items, and 50 controls averaged 58. To determine whether the test requires the special mechanisms used to recognize upright faces, we conducted two experiments. We predicted that controls would perform much more poorly when the face images are inverted, and as predicted, inverted performance was much worse with a mean of 42. Next we assessed whether eight prosopagnosics would perform poorly on the upright version. The prosopagnosic mean was 37, and six prosopagnosics scored outside the normal range. In contrast, the Warrington test and the Benton test failed to classify a majority of the prosopagnosics as impaired. These results indicate that the new test effectively assesses face recognition across a wide range of abilities.

  11. Face memory and face recognition in children and adolescents with attention deficit hyperactivity disorder: A systematic review.

    PubMed

    Romani, Maria; Vigliante, Miriam; Faedda, Noemi; Rossetti, Serena; Pezzuti, Lina; Guidetti, Vincenzo; Cardona, Francesco

    2018-06-01

    This review focuses on facial recognition abilities in children and adolescents with attention deficit hyperactivity disorder (ADHD). A systematic review, using PRISMA guidelines, was conducted to identify original articles published prior to May 2017 pertaining to memory, face recognition, affect recognition, facial expression recognition and recall of faces in children and adolescents with ADHD. The qualitative synthesis based on different studies shows a particular focus of the research on facial affect recognition without paying similar attention to the structural encoding of facial recognition. In this review, we further investigate facial recognition abilities in children and adolescents with ADHD, providing synthesis of the results observed in the literature, while detecting face recognition tasks used on face processing abilities in ADHD and identifying aspects not yet explored. Copyright © 2018 Elsevier Ltd. All rights reserved.

  12. Effects of face feature and contour crowding in facial expression adaptation.

    PubMed

    Liu, Pan; Montaser-Kouhsari, Leila; Xu, Hong

    2014-12-01

    Prolonged exposure to a visual stimulus, such as a happy face, biases the perception of a subsequently presented neutral face toward a sad percept, a phenomenon known as face adaptation. Face adaptation is affected by the visibility or awareness of the adapting face. However, whether it is affected by the discriminability of the adapting face is largely unknown. In the current study, we used crowding to manipulate the discriminability of the adapting face and tested its effect on face adaptation. Instead of presenting flanking faces near the target face, we shortened the distance between facial features (internal feature crowding) and reduced the size of the face contour (external contour crowding) to introduce crowding. In our first experiment, we asked whether internal feature crowding or external contour crowding is more effective in inducing a crowding effect. We found that combining internal feature and external contour crowding, but not either of them alone, induced a significant crowding effect. In Experiment 2, we went on to investigate its effect on adaptation. We found that both internal feature crowding and external contour crowding significantly reduced the facial expression aftereffect (FEA). However, we did not find a significant correlation between the discriminability of the adapting face and its FEA. Interestingly, we found a significant correlation between the discriminabilities of the adapting and test faces. Experiment 3 found that the reduced adaptation aftereffect under combined crowding by the external face contour and the internal facial features cannot be decomposed linearly into the effects of the face contour and the facial features. This suggests a nonlinear integration between facial features and face contour in face adaptation.

  13. Can an anger face also be scared? Malleability of facial expressions.

    PubMed

    Widen, Sherri C; Naab, Pamela

    2012-10-01

    Do people always interpret a facial expression as communicating a single emotion (e.g., the anger face as only angry) or is that interpretation malleable? The current study investigated preschoolers' (N = 60; 3-4 years) and adults' (N = 20) categorization of facial expressions. On each of five trials, participants selected from an array of 10 facial expressions (an open-mouthed, high arousal expression and a closed-mouthed, low arousal expression each for happiness, sadness, anger, fear, and disgust) all those that displayed the target emotion. Children's interpretation of facial expressions was malleable: 48% of children who selected the fear, anger, sadness, and disgust faces for the "correct" category also selected these same faces for another emotion category; 47% of adults did so for the sadness and disgust faces. The emotion children and adults attribute to facial expressions is influenced by the emotion category for which they are looking.

  14. Heritability maps of human face morphology through large-scale automated three-dimensional phenotyping

    NASA Astrophysics Data System (ADS)

    Tsagkrasoulis, Dimosthenis; Hysi, Pirro; Spector, Tim; Montana, Giovanni

    2017-04-01

    The human face is a complex trait under strong genetic control, as evidenced by the striking visual similarity between twins. Nevertheless, heritability estimates of facial traits have often been surprisingly low or difficult to replicate. Furthermore, the construction of facial phenotypes that correspond to naturally perceived facial features remains largely a mystery. We present here a large-scale heritability study of face geometry that aims to address these issues. High-resolution, three-dimensional facial models have been acquired on a cohort of 952 twins recruited from the TwinsUK registry, and processed through a novel landmarking workflow, GESSA (Geodesic Ensemble Surface Sampling Algorithm). The algorithm places thousands of landmarks throughout the facial surface and automatically establishes point-wise correspondence across faces. These landmarks enabled us to intuitively characterize facial geometry at a fine level of detail through curvature measurements, yielding accurate heritability maps of the human face (www.heritabilitymaps.info).

  15. Three-dimensional printing for restoration of the donor face: A new digital technique tested and used in the first facial allotransplantation patient in Finland.

    PubMed

    Mäkitie, A A; Salmi, M; Lindford, A; Tuomi, J; Lassus, P

    2016-12-01

    Prosthetic mask restoration of the donor face is essential in current facial transplant protocols. The aim was to develop a new three-dimensional (3D) printing (additive manufacturing; AM) process for the production of a donor face mask that fulfilled the requirements for facial restoration after facial harvest. A digital image of a single test person's face was obtained in a standardized setting and subjected to three different image processing techniques. These data were used for the 3D modeling and printing of a donor face mask. The process was also tested in a cadaver setting and ultimately used clinically in a donor patient after facial allograft harvest. Results and Conclusions: All three developed and tested techniques enabled the 3D printing of a custom-made face mask in a timely manner that is almost an exact replica of the donor patient's face. This technique was successfully used in a facial allotransplantation donor patient. Copyright © 2016 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.

  16. Automatic Contour Extraction of Facial Organs for Frontal Facial Images with Various Facial Expressions

    NASA Astrophysics Data System (ADS)

    Kobayashi, Hiroshi; Suzuki, Seiji; Takahashi, Hisanori; Tange, Akira; Kikuchi, Kohki

    This study deals with a method to realize automatic contour extraction of facial features such as the eyebrows, eyes and mouth for time-wise frontal faces with various facial expressions. Because Snakes, one of the most famous methods used to extract contours, has several disadvantages, we propose a new method to overcome these issues. We define an elastic contour model in order to hold the contour shape, and then determine the elastic energy acquired from the amount of deformation of the elastic contour model. We also utilize the image energy obtained from brightness differences at the control points on the elastic contour model. Applying dynamic programming, we determine the contour position where the total of the elastic energy and the image energy becomes minimum. Employing 1/30 s time-wise frontal facial images changing from neutral to one of six typical facial expressions, obtained from 20 subjects, we have evaluated our method and found that it enables highly accurate automatic contour extraction of facial features.
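    The energy-minimization step described above can be sketched as a small dynamic program over candidate positions for each control point. Everything here (the function name, the quadratic elastic term, the toy candidate sets) is an assumption for illustration, not the authors' implementation:

```python
# Sketch: each control point of the contour model may move to one of k
# candidate positions; dynamic programming picks the configuration that
# minimizes elastic energy (deformation from the model shape between
# neighbouring points) plus image energy at each chosen position.

import numpy as np

def fit_contour(model_pts, candidates, image_energy, alpha=1.0):
    """model_pts: (n, 2) model shape; candidates: per-point list of k (2,)
    offsets; image_energy: (n, k) cost of the j-th candidate for point i."""
    n, k = image_energy.shape
    cost = np.full((n, k), np.inf)
    back = np.zeros((n, k), dtype=int)
    cost[0] = image_energy[0]
    for i in range(1, n):
        model_seg = model_pts[i] - model_pts[i - 1]           # undeformed segment
        for j in range(k):
            for jp in range(k):
                seg = (model_pts[i] + candidates[i][j]) - (
                    model_pts[i - 1] + candidates[i - 1][jp])
                elastic = alpha * np.sum((seg - model_seg) ** 2)  # deformation penalty
                c = cost[i - 1, jp] + elastic + image_energy[i, j]
                if c < cost[i, j]:
                    cost[i, j] = c
                    back[i, j] = jp
    # Trace back the minimum-energy configuration.
    j = int(np.argmin(cost[-1]))
    path = [j]
    for i in range(n - 1, 0, -1):
        j = int(back[i, j])
        path.append(j)
    return path[::-1]
```

    The chain structure of the contour is what makes dynamic programming applicable: the total energy decomposes into per-segment terms, so the minimum over all k^n configurations is found in O(n k^2) time.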

  17. FaceWarehouse: a 3D facial expression database for visual computing.

    PubMed

    Cao, Chen; Weng, Yanlin; Zhou, Shun; Tong, Yiying; Zhou, Kun

    2014-03-01

    We present FaceWarehouse, a database of 3D facial expressions for visual computing applications. We use Kinect, an off-the-shelf RGBD camera, to capture 150 individuals aged 7-80 from various ethnic backgrounds. For each person, we captured the RGBD data of her different expressions, including the neutral expression and 19 other expressions such as mouth-opening, smile, kiss, etc. For every RGBD raw data record, a set of facial feature points on the color image such as eye corners, mouth contour, and the nose tip are automatically localized, and manually adjusted if better accuracy is required. We then deform a template facial mesh to fit the depth data as closely as possible while matching the feature points on the color image to their corresponding points on the mesh. Starting from these fitted face meshes, we construct a set of individual-specific expression blendshapes for each person. These meshes with consistent topology are assembled as a rank-3 tensor to build a bilinear face model with two attributes: identity and expression. Compared with previous 3D facial databases, for every person in our database, there is a much richer matching collection of expressions, enabling depiction of most human facial actions. We demonstrate the potential of FaceWarehouse for visual computing with four applications: facial image manipulation, face component transfer, real-time performance-based facial image animation, and facial animation retargeting from video to image.
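    The rank-3 tensor construction described above can be illustrated with a toy bilinear model; the dimensions, names, and random stand-in data are assumptions for illustration, not the FaceWarehouse data or code:

```python
# Toy bilinear face model: a rank-3 core tensor (vertices x identities x
# expressions) is contracted with an identity-weight vector and an
# expression-weight vector to synthesize a face.

import numpy as np

rng = np.random.default_rng(0)
n_verts, n_ids, n_exprs = 9, 4, 3          # tiny stand-in dimensions
core = rng.standard_normal((n_verts, n_ids, n_exprs))

def synthesize(core, w_id, w_expr):
    """Contract the core tensor along the identity and expression modes."""
    return np.einsum("vie,i,e->v", core, w_id, w_expr)

# One-hot weights reproduce a stored face (identity 2, expression 1)...
face = synthesize(core, np.eye(n_ids)[2], np.eye(n_exprs)[1])
assert np.allclose(face, core[:, 2, 1])

# ...while interpolated weights blend identities and expressions into a
# novel face, which is what makes the two attributes independently editable.
novel = synthesize(core, np.full(n_ids, 1 / n_ids), np.full(n_exprs, 1 / n_exprs))
```

    Separating identity and expression into two weight vectors is what enables applications like face component transfer and animation retargeting: one attribute can be changed while the other is held fixed.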

  18. Parallel Processing in Face Perception

    ERIC Educational Resources Information Center

    Martens, Ulla; Leuthold, Hartmut; Schweinberger, Stefan R.

    2010-01-01

    The authors examined face perception models with regard to the functional and temporal organization of facial identity and expression analysis. Participants performed a manual 2-choice go/no-go task to classify faces, where response hand depended on facial familiarity (famous vs. unfamiliar) and response execution depended on facial expression…

  19. Measuring facial cooling in outdoor windy winter conditions: an exploratory study.

    PubMed

    Briggs, Andrew G S; Gillespie, Terry J; Brown, Robert D

    2017-10-01

    Winter clothing provides insulation for almost all of a person's body, but in most situations a person's face remains uncovered even in cold, windy weather. This exploratory study used thermal imagery to record the rate of cooling of the faces of volunteers in a range of winter air temperatures and wind speeds. Different areas of the faces cooled at different rates, with the areas around the eyes and neck cooling at the slowest rate, and the nose and cheeks cooling at the fastest rate. In all cases, the faces cooled following an approximately logarithmic decay for the first few minutes. This was followed by a small rise in the temperature of the face for a few minutes, which was then followed by an uninterrupted logarithmic decay. Volunteers were told to indicate when their face was so cold that they wanted to end the test. The total amount of time and the facial temperature at the end of each trial were recorded. The results provide insight into the way faces cool in uncontrolled, outdoor winter conditions.

  20. Automatically Log Off Upon Disappearance of Facial Image

    DTIC Science & Technology

    2005-03-01

    log off a PC when the user’s face disappears for an adjustable time interval. Among the fundamental technologies of biometrics, facial recognition is... facial recognition products. In this report, a brief overview of face detection technologies is provided. The particular neural network-based face...ensure that the user logging onto the system is the same person. Among the fundamental technologies of biometrics, facial recognition is the only

  1. [Neural mechanisms of facial recognition].

    PubMed

    Nagai, Chiyoko

    2007-01-01

    We review recent research on the neural mechanisms of facial recognition in the light of three aspects: facial discrimination and identification, recognition of facial expressions, and face perception in itself. First, it has been demonstrated that the fusiform gyrus plays a main role in facial discrimination and identification. However, whether the FFA (fusiform face area) is really a special area for facial processing is controversial; some researchers insist that the FFA is related to 'becoming an expert' for certain kinds of visual objects, including faces. The neural mechanisms of prosopagnosia are deeply relevant to this issue. Second, the amygdala appears to be closely involved in the recognition of facial expressions, especially fear. The amygdala, connected with the superior temporal sulcus and the orbitofrontal cortex, appears to modulate these cortical functions. The amygdala and the superior temporal sulcus are related to gaze recognition, which explains why a patient with bilateral amygdala damage failed to recognize only the fear expression; information from the eyes is necessary for fear recognition. Finally, even a newborn infant can recognize a face as a face, which is congruent with the innate hypothesis of facial recognition. Some researchers speculate that the neural basis of such face perception is a subcortical network comprising the amygdala, the superior colliculus, and the pulvinar. This network would relate to the covert recognition that prosopagnosic patients retain.

  2. Facial identity and facial expression are initially integrated at visual perceptual stages of face processing.

    PubMed

    Fisher, Katie; Towler, John; Eimer, Martin

    2016-01-08

    It is frequently assumed that facial identity and facial expression are analysed in functionally and anatomically distinct streams within the core visual face processing system. To investigate whether expression and identity interact during the visual processing of faces, we employed a sequential matching procedure where participants compared either the identity or the expression of two successively presented faces, and ignored the other irrelevant dimension. Repetitions versus changes of facial identity and expression were varied independently across trials, and event-related potentials (ERPs) were recorded during task performance. Irrelevant facial identity and irrelevant expression both interfered with performance in the expression and identity matching tasks. These symmetrical interference effects show that neither identity nor expression can be selectively ignored during face matching, and suggest that they are not processed independently. N250r components to identity repetitions that reflect identity matching mechanisms in face-selective visual cortex were delayed and attenuated when there was an expression change, demonstrating that facial expression interferes with visual identity matching. These findings provide new evidence for interactions between facial identity and expression within the core visual processing system, and question the hypothesis that these two attributes are processed independently. Copyright © 2015 Elsevier Ltd. All rights reserved.

  3. Identification of Facial Shape by Applying Golden Ratio to the Facial Measurements: An Interracial Study in Malaysian Population

    PubMed Central

    Packiriswamy, Vasanthakumar; Kumar, Pramod; Rao, Mohandas

    2012-01-01

    Background: The “golden ratio” is considered a universal facial aesthetic standard. Researchers opine that deviation from the golden ratio can result in the development of facial abnormalities. Aims: This study was designed to study facial morphology and to identify individuals with normal, short, and long faces. Materials and Methods: We studied 300 subjects of Malaysian nationality aged 18-28 years of Chinese, Indian, and Malay extraction. The parameters measured were the physiognomical facial height and the width of the face, and the physiognomical facial index was calculated. Face shape was classified based on the golden ratio. An independent t test was done to test the differences between sexes and among the races. Results: The mean values of the measurements and the index showed significant sexual and interracial differences. Out of 300 subjects, the face shape was normal in 60 subjects, short in 224 subjects, and long in 16 subjects. Conclusion: As anticipated, the measurements showed variations according to gender and race. Only 60 subjects had a regular face shape; the remaining 240 subjects had an irregular face shape (short or long). Since individuals with short or long face shapes may be at risk of developing various disorders, knowledge of the facial shapes in a given population is important for early diagnostic and treatment procedures. PMID:23272303
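    The classification described above can be sketched as follows. The tolerance band around the golden ratio is an assumption for illustration, since the paper's exact cut-offs are not reproduced in the abstract:

```python
# Hypothetical sketch: the physiognomical facial index is taken here as
# facial height / facial width, with a face called "normal" near the
# golden ratio, "short" below it, and "long" above it.

GOLDEN_RATIO = (1 + 5 ** 0.5) / 2  # ~1.618

def facial_index(height_mm: float, width_mm: float) -> float:
    return height_mm / width_mm

def face_shape(index: float, tolerance: float = 0.05) -> str:
    if abs(index - GOLDEN_RATIO) <= tolerance:
        return "normal"
    return "short" if index < GOLDEN_RATIO else "long"

print(face_shape(facial_index(180.0, 112.0)))  # index ~1.607 -> "normal"
print(face_shape(facial_index(165.0, 120.0)))  # index  1.375 -> "short"
```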

  4. Face Processing in Children with Autism Spectrum Disorder: Independent or Interactive Processing of Facial Identity and Facial Expression?

    ERIC Educational Resources Information Center

    Krebs, Julia F.; Biswas, Ajanta; Pascalis, Olivier; Kamp-Becker, Inge; Remschmidt, Helmuth; Schwarzer, Gudrun

    2011-01-01

    The current study investigated if deficits in processing emotional expression affect facial identity processing and vice versa in children with autism spectrum disorder. Children with autism and IQ and age matched typically developing children classified faces either by emotional expression, thereby ignoring facial identity or by facial identity…

  5. Seeing a haptically explored face: visual facial-expression aftereffect from haptic adaptation to a face.

    PubMed

    Matsumiya, Kazumichi

    2013-10-01

    Current views on face perception assume that the visual system receives only visual facial signals. However, I show that the visual perception of faces is systematically biased by adaptation to a haptically explored face. Recently, face aftereffects (FAEs; the altered perception of faces after adaptation to a face) have been demonstrated not only in visual perception but also in haptic perception; therefore, I combined the two FAEs to examine whether the visual system receives face-related signals from the haptic modality. I found that adaptation to a haptically explored facial expression on a face mask produced a visual FAE for facial expression. This cross-modal FAE was not due to explicitly imaging a face, response bias, or adaptation to local features. Furthermore, FAEs transferred from vision to haptics. These results indicate that visual face processing depends on substrates adapted by haptic faces, which suggests that face processing relies on shared representation underlying cross-modal interactions.

  6. Looking Like a Leader–Facial Shape Predicts Perceived Height and Leadership Ability

    PubMed Central

    Re, Daniel E.; Hunter, David W.; Coetzee, Vinet; Tiddeman, Bernard P.; Xiao, Dengke; DeBruine, Lisa M.; Jones, Benedict C.; Perrett, David I.

    2013-01-01

    Judgments of leadership ability from face images predict the outcomes of actual political elections and are correlated with leadership success in the corporate world. The specific facial cues that people use to judge leadership remain unclear, however. Physical height is also associated with political and organizational success, raising the possibility that facial cues of height contribute to leadership perceptions. Consequently, we assessed whether cues to height exist in the face and, if so, whether they are associated with perception of leadership ability. We found that facial cues to perceived height had a strong relationship with perceived leadership ability. Furthermore, when allowed to manually manipulate faces, participants increased facial cues associated with perceived height in order to maximize leadership perception. A morphometric analysis of face shape revealed that structural facial masculinity was not responsible for the relationship between perceived height and perceived leadership ability. Given the prominence of facial appearance in making social judgments, facial cues to perceived height may have a significant influence on leadership selection. PMID:24324651

  7. Exploring the nature of facial affect processing deficits in schizophrenia.

    PubMed

    van 't Wout, Mascha; Aleman, André; Kessels, Roy P C; Cahn, Wiepke; de Haan, Edward H F; Kahn, René S

    2007-04-15

    Schizophrenia has been associated with deficits in facial affect processing, especially negative emotions. However, the exact nature of the deficit remains unclear. The aim of the present study was to investigate whether schizophrenia patients have problems in automatic allocation of attention as well as in controlled evaluation of facial affect. Thirty-seven patients with schizophrenia were compared with 41 control subjects on incidental facial affect processing (gender decision of faces with a fearful, angry, happy, disgusted, and neutral expression) and degraded facial affect labeling (labeling of fearful, angry, happy, and neutral faces). The groups were matched on estimates of verbal and performance intelligence (National Adult Reading Test; Raven's Matrices), general face recognition ability (Benton Face Recognition), and other demographic variables. The results showed that patients with schizophrenia as well as control subjects demonstrate the normal threat-related interference during incidental facial affect processing. Conversely, on controlled evaluation patients were specifically worse in the labeling of fearful faces. In particular, patients with high levels of negative symptoms may be characterized by deficits in labeling fear. We suggest that patients with schizophrenia show no evidence of deficits in the automatic allocation of attention resources to fearful (threat-indicating) faces, but have a deficit in the controlled processing of facial emotions that may be specific for fearful faces.

  8. Three-Dimensional Accuracy of Facial Scan for Facial Deformities in Clinics: A New Evaluation Method for Facial Scanner Accuracy.

    PubMed

    Zhao, Yi-Jiao; Xiong, Yu-Xue; Wang, Yong

    2017-01-01

    In this study, the practical accuracy (PA) of optical facial scanners for facial deformity patients in the oral clinic was evaluated. Ten patients with a variety of facial deformities from the oral clinic were included in the study. For each patient, a three-dimensional (3D) face model was acquired via a high-accuracy industrial "line-laser" scanner (Faro) as the reference model, and two test models were obtained via a "stereophotography" system (3dMD) and a "structured light" facial scanner (FaceScan), respectively. Registration based on the iterative closest point (ICP) algorithm was performed to align the test models to the reference models, and a "3D error", a new measurement indicator calculated by reverse engineering software (Geomagic Studio), was used to evaluate the global and partial (upper, middle, and lower face) 3D PA of each facial scanner. The 3D accuracy obtained for facial deformities was 0.58±0.11 mm for the stereophotography scanner and 0.57±0.07 mm for the structured light scanner. The 3D accuracy of the different facial partitions was inconsistent; the middle face had the best performance. Although the PA of the two facial scanners was lower than their nominal accuracy (NA), both met the requirements for oral clinic use.
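
    The evaluation pipeline described above (ICP alignment followed by a deviation metric) can be sketched on point clouds. This is a minimal illustration, not the Geomagic Studio computation: it assumes point-to-point ICP with nearest-neighbour correspondences and defines the "3D error" as the mean nearest-neighbour distance after alignment.

```python
# Minimal point-to-point ICP and a "3D error" metric (mean nearest-
# neighbour distance after alignment). Illustrative sketch only; the
# study's software computes a point-to-surface deviation.
import numpy as np
from scipy.spatial import cKDTree

def best_fit_transform(A, B):
    """Kabsch algorithm: rigid (R, t) mapping corresponding points A onto B."""
    cA, cB = A.mean(axis=0), B.mean(axis=0)
    H = (A - cA).T @ (B - cB)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cB - R @ cA
    return R, t

def icp_3d_error(test, ref, iters=20):
    """Align `test` to `ref` by ICP, then return the mean NN distance."""
    src = test.copy()
    tree = cKDTree(ref)
    for _ in range(iters):
        _, idx = tree.query(src)            # nearest-neighbour correspondences
        R, t = best_fit_transform(src, ref[idx])
        src = src @ R.T + t
    dists, _ = tree.query(src)
    return dists.mean()
```

    With a test cloud that is a small rigid perturbation of the reference, the error should converge toward zero; for real scans it converges to the residual scanner deviation instead.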

  9. Facial Structure Predicts Sexual Orientation in Both Men and Women.

    PubMed

    Skorska, Malvina N; Geniole, Shawn N; Vrysen, Brandon M; McCormick, Cheryl M; Bogaert, Anthony F

    2015-07-01

    Biological models have typically framed sexual orientation in terms of effects of variation in fetal androgen signaling on sexual differentiation, although other biological models exist. Despite marked sex differences in facial structure, the relationship between sexual orientation and facial structure is understudied. A total of 52 lesbian women, 134 heterosexual women, 77 gay men, and 127 heterosexual men were recruited at a Canadian campus and various Canadian Pride and sexuality events. We found that facial structure differed depending on sexual orientation; substantial variation in sexual orientation was predicted using facial metrics computed by a facial modelling program from photographs of White faces. At the univariate level, lesbian and heterosexual women differed in 17 facial features (out of 63) and four were unique multivariate predictors in logistic regression. Gay and heterosexual men differed in 11 facial features at the univariate level, of which three were unique multivariate predictors. Some, but not all, of the facial metrics differed between the sexes. Lesbian women had noses that were more turned up (also more turned up in heterosexual men), mouths that were more puckered, smaller foreheads, and marginally more masculine face shapes (also in heterosexual men) than heterosexual women. Gay men had more convex cheeks, shorter noses (also in heterosexual women), and foreheads that were more tilted back relative to heterosexual men. Principal components analysis and discriminant functions analysis generally corroborated these results. The mechanisms underlying variation in craniofacial structure--both related and unrelated to sexual differentiation--may thus be important in understanding the development of sexual orientation.

  10. Assessment of facial golden proportions among young Japanese women.

    PubMed

    Mizumoto, Yasushi; Deguchi, Toshio; Fong, Kelvin W C

    2009-08-01

    Facial proportions are of interest in orthodontics. The null hypothesis is that there is no difference in golden proportions of the soft-tissue facial balance between Japanese and white women. Facial proportions were assessed by examining photographs of 3 groups of Asian women: group 1, 30 young adult patients with a skeletal Class 1 occlusion; group 2, 30 models; and group 3, 14 popular actresses. Photographic prints or slides were digitized for image analysis. Group 1 subjects had standardized photos taken as part of their treatment. Photos of the subjects in groups 2 and 3 were collected from magazines and other sources and were of varying sizes; therefore, the output image size was not considered. The range of measurement errors was 0.17% to 1.16%. ANOVA was selected because the data set was normally distributed with homogeneous variances. The subjects in the 3 groups showed good total facial proportions. The proportions of the face-height components in group 1 were similar to the golden proportion, which indicated a longer lower facial height and shorter nose. Group 2 differed from the golden proportion, with a short lower facial height. Group 3 had golden proportions in all 7 measurements. The proportion of the face width deviated from the golden proportion, indicating a small mouth or wide-set eyes in groups 1 and 2. The null hypothesis was supported for the facial height components in the group 3 actresses. Some measurements in groups 1 and 2 showed facial proportions that deviated from the golden proportion (ratio).

  11. Cultural similarities and differences in perceiving and recognizing facial expressions of basic emotions.

    PubMed

    Yan, Xiaoqian; Andrews, Timothy J; Young, Andrew W

    2016-03-01

    The ability to recognize facial expressions of basic emotions is often considered a universal human ability. However, recent studies have suggested that this commonality has been overestimated and that people from different cultures use different facial signals to represent expressions (Jack, Blais, Scheepers, Schyns, & Caldara, 2009; Jack, Caldara, & Schyns, 2012). We investigated this possibility by examining similarities and differences in the perception and categorization of facial expressions between Chinese and white British participants using whole-face and partial-face images. Our results showed no cultural difference in the patterns of perceptual similarity of expressions from whole-face images. When categorizing the same expressions, however, both British and Chinese participants were slightly more accurate with whole-face images of their own ethnic group. To further investigate potential strategy differences, we repeated the perceptual similarity and categorization tasks with presentation of only the upper or lower half of each face. Again, the perceptual similarity of facial expressions was similar between Chinese and British participants for both the upper and lower face regions. However, participants were slightly better at categorizing facial expressions of their own ethnic group for the lower face regions, indicating that the way in which culture shapes the categorization of facial expressions is largely driven by differences in information decoding from this part of the face. (c) 2016 APA, all rights reserved.

  12. Face-masks for facial atopic eczema: consider a hydrocolloid dressing.

    PubMed

    Rademaker, Marius

    2013-08-01

    Facial involvement of atopic eczema in young children can be difficult to manage. Chronic scratching and rubbing, combined with parental reluctance to use topical corticosteroids on the face, often results in recalcitrant facial eczema. While wet wraps are a useful management option for moderate/severe atopic eczema involving the trunk and limbs they are difficult to use on the face. We describe the use of a face-mask using a widely available adhesive hydrocolloid dressing (DuoDerm extra thin) in three children with recalcitrant facial atopic eczema. Symptomatic control of itch or soreness was obtained within hours and the facial atopic eczema was markedly improved by 7 days. The face-masks were easy to apply, each lasting 1-4 days. One patient had a single adjuvant application of a potent topical corticosteroid under the hydrocolloid dressing. All three patients had long remissions (greater than 3 months) of their facial eczema, although all continued to have significant eczema involving their trunk and limbs. Face-masks made from hydrocolloid dressings, with or without topical corticosteroids, are worth considering in children with recalcitrant facial eczema. © 2012 The Author. Australasian Journal of Dermatology © 2012 The Australasian College of Dermatologists.

  13. Differences between Caucasian and Asian attractive faces.

    PubMed

    Rhee, S C

    2018-02-01

    There are discrepancies between the public's current beauty desires and conventional theories and historical rules regarding facial beauty. This photogrammetric study aims to describe in detail the mathematical differences in facial configuration between attractive Caucasian and attractive Asian faces. To analyse the structural differences between attractive Caucasian and attractive Asian faces, frontal and lateral face views for each race were morphed; facial landmarks were defined, and the relative photographic pixel distances and angles were measured. Absolute values were acquired by arithmetic conversion for comparison. The data indicate that some conventional beliefs about facial attractiveness can still be applied, but others are no longer valid in explaining perspectives on beauty between Caucasians and Asians. Racial differences in the perception of attractive faces were evident, and common features, a phenomenon of global fusion in perspectives on facial beauty, were also revealed. Beauty standards differ with race and ethnicity, and some conventional rules for ideal facial attractiveness were found to be inappropriate. We must reexamine old principles of facial beauty and continue to fundamentally question them according to their racial, cultural, and neuropsychological aspects. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  14. Vertical and horizontal facial proportions of Indian American men

    PubMed Central

    2016-01-01

    Understanding gender differences in facial features is critical to providing a successful cosmetic outcome. Men are a growing segment of the cosmetic industry, and an understanding of the male face and its appropriate treatment with minimally invasive cosmetic procedures is essential. The aim was to investigate various facial ratios in Indian American men and to compare them with Indian and Caucasian norms. Additionally, we wanted to evaluate whether these values satisfy the golden and silver ratios. Direct facial anthropometric measurements were made using a digital caliper in 100 Indian American male students (18–30 years) at the American University of Antigua (AUA), Antigua. A set of facial ratios was calculated and compared using coefficients of variation (CV). Most of the facial ratios had small CVs, making them highly reliable owing to reduced intra-sample variability. The upper face to face height and mandibulo-upper face height indices were close to the golden ratio, whereas the mandibulo-lower face height, upper face height-biocular width, and nasal indices were close to the silver ratio. There were significant differences in most of the values when compared with previous studies. The present facial ratio data can be used as reference values for Indian American men. PMID:27382514

  15. Vertical and horizontal facial proportions of Indian American men.

    PubMed

    Sadacharan, Chakravarthy Marx

    2016-06-01

    Understanding gender differences in facial features is critical to providing a successful cosmetic outcome. Men are a growing segment of the cosmetic industry, and an understanding of the male face and its appropriate treatment with minimally invasive cosmetic procedures is essential. The aim was to investigate various facial ratios in Indian American men and to compare them with Indian and Caucasian norms. Additionally, we wanted to evaluate whether these values satisfy the golden and silver ratios. Direct facial anthropometric measurements were made using a digital caliper in 100 Indian American male students (18-30 years) at the American University of Antigua (AUA), Antigua. A set of facial ratios was calculated and compared using coefficients of variation (CV). Most of the facial ratios had small CVs, making them highly reliable owing to reduced intra-sample variability. The upper face to face height and mandibulo-upper face height indices were close to the golden ratio, whereas the mandibulo-lower face height, upper face height-biocular width, and nasal indices were close to the silver ratio. There were significant differences in most of the values when compared with previous studies. The present facial ratio data can be used as reference values for Indian American men.
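
    The reliability screening and ratio comparison in these two records can be sketched as follows. A minimal illustration, assuming the coefficient of variation is the sample standard deviation divided by the mean (×100); the ≈1.414 value used here for the silver ratio (√2, as in Japanese aesthetics) and the tolerance are assumptions, since the abstracts do not state them.

```python
# Compare a measured facial ratio to the golden and silver canons, and
# compute a coefficient of variation (CV) as a reliability indicator.
# The silver-ratio value (sqrt(2)) and the tolerance are assumptions.
import statistics

GOLDEN = (1 + 5 ** 0.5) / 2   # ~1.618
SILVER = 2 ** 0.5             # ~1.414

def coefficient_of_variation(values):
    """CV in percent: sample standard deviation relative to the mean."""
    return statistics.stdev(values) / statistics.mean(values) * 100

def closest_canon(ratio, tol=0.1):
    """Label a measured facial ratio as near-golden, near-silver, or neither."""
    if abs(ratio - GOLDEN) <= tol:
        return "golden"
    if abs(ratio - SILVER) <= tol:
        return "silver"
    return "neither"
```

    A small CV across subjects (low intra-sample variability) is what the study treats as making a ratio reliable enough to serve as a population reference value.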

  16. Emotion elicitor or emotion messenger? Subliminal priming reveals two faces of facial expressions.

    PubMed

    Ruys, Kirsten I; Stapel, Diederik A

    2008-06-01

    Facial emotional expressions can serve both as emotional stimuli and as communicative signals. The research reported here was conducted to illustrate how responses to both roles of facial emotional expressions unfold over time. As an emotion elicitor, a facial emotional expression (e.g., a disgusted face) activates a response that is similar to responses to other emotional stimuli of the same valence (e.g., a dirty, nonflushed toilet). As an emotion messenger, the same facial expression (e.g., a disgusted face) serves as a communicative signal by also activating the knowledge that the sender is experiencing a specific emotion (e.g., the sender feels disgusted). By varying the duration of exposure to disgusted, fearful, angry, and neutral faces in two subliminal-priming studies, we demonstrated that responses to faces as emotion elicitors occur prior to responses to faces as emotion messengers, and that both types of responses may unfold unconsciously.

  17. Social perception and aging: The relationship between aging and the perception of subtle changes in facial happiness and identity.

    PubMed

    Yang, Tao; Penton, Tegan; Köybaşı, Şerife Leman; Banissy, Michael J

    2017-09-01

    Previous findings suggest that older adults show impairments in the social perception of faces, including the perception of emotion and facial identity. The majority of this work has tended to examine performance on tasks involving young adult faces and prototypical emotions. While useful, this can influence performance differences between groups due to perceptual biases and limitations on task performance. Here we sought to examine how typical aging is associated with the perception of subtle changes in facial happiness and facial identity in older adult faces. We developed novel tasks that permitted the ability to assess facial happiness, facial identity, and non-social perception (object perception) across similar task parameters. We observe that aging is linked with declines in the ability to make fine-grained judgements in the perception of facial happiness and facial identity (from older adult faces), but not for non-social (object) perception. This pattern of results is discussed in relation to mechanisms that may contribute to declines in facial perceptual processing in older adulthood. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  18. A study on facial expressions recognition

    NASA Astrophysics Data System (ADS)

    Xu, Jingjing

    2017-09-01

    In communication, postures and facial expressions of feelings such as happiness, anger, and sadness play important roles in conveying information. With the development of the technology, a number of algorithms dealing with face alignment, facial landmark detection and localization, classification, and pose estimation have recently been put forward. However, many challenges and problems remain to be addressed. In this paper, several of these technologies are summarized and analyzed, all of which relate to facial expression recognition and pose handling: a pose-indexed multi-view method for face alignment, robust facial landmark detection under significant head pose and occlusion, partitioning of the input domain for classification, and robust-statistics face frontalization.

  19. Orienting to face expression during encoding improves men's recognition of own gender faces.

    PubMed

    Fulton, Erika K; Bulluck, Megan; Hertzog, Christopher

    2015-10-01

    It is unclear why women have superior episodic memory of faces, but the benefit may be partially the result of women engaging in superior processing of facial expressions. Therefore, we hypothesized that orienting instructions to attend to facial expression at encoding would significantly improve men's memory of faces and possibly reduce gender differences. We directed 203 college students (122 women) to study 120 faces under instructions to orient to either the person's gender or their emotional expression. They later took a recognition test of these faces by either judging whether they had previously studied the same person or that person with the exact same expression; the latter test evaluated recollection of specific facial details. Orienting to facial expressions during encoding significantly improved men's recognition of own-gender faces and eliminated the advantage that women had for male faces under gender orienting instructions. Although gender differences in spontaneous strategy use when orienting to faces cannot fully account for gender differences in face recognition, orienting men to facial expression during encoding is one way to significantly improve their episodic memory for male faces. Copyright © 2015 Elsevier B.V. All rights reserved.

  20. What's in a "face file"? Feature binding with facial identity, emotion, and gaze direction.

    PubMed

    Fitousi, Daniel

    2017-07-01

    A series of four experiments investigated the binding of facial (i.e., facial identity, emotion, and gaze direction) and non-facial (i.e., spatial location and response location) attributes. Evidence for the creation and retrieval of temporary memory face structures across perception and action was adduced. These episodic structures, dubbed herein "face files", consisted of both visuo-visuo and visuo-motor bindings. Feature binding was indicated by partial-repetition costs: repeating a combination of facial features or altering them altogether led to faster responses than repeating or altering only one of the features. Taken together, the results indicate that: (a) "face files" affect both action and perception mechanisms, (b) binding can take place with facial dimensions and is not restricted to low-level features (Hommel, Visual Cognition 5:183-216, 1998), and (c) the binding of facial and non-facial attributes is facilitated if the dimensions share common spatial or motor codes. The theoretical contributions of these results to "person construal" theories (Freeman, & Ambady, Psychological Science, 20(10), 1183-1188, 2011), as well as to face recognition models (Haxby, Hoffman, & Gobbini, Biological Psychiatry, 51(1), 59-67, 2000), are discussed.

  1. Facial color processing in the face-selective regions: an fMRI study.

    PubMed

    Nakajima, Kae; Minami, Tetsuto; Tanabe, Hiroki C; Sadato, Norihiro; Nakauchi, Shigeki

    2014-09-01

    Facial color is important information for social communication, as it provides important clues for recognizing a person's emotion and health condition. Our previous EEG study suggested that the N170 at the left occipito-temporal site is related to facial color processing (Nakajima et al., [2012]: Neuropsychologia 50:2499-2505). However, because of the low spatial resolution of EEG, the brain region involved in facial color processing remains controversial. In the present study, we examined the neural substrates of facial color processing using functional magnetic resonance imaging (fMRI). We measured brain activity from 25 subjects during the presentation of natural- and bluish-colored faces and their scrambled images. The bilateral fusiform face area (FFA) and occipital face area (OFA) were localized by the contrast of natural-colored faces versus natural-colored scrambled images. Moreover, region of interest (ROI) analysis showed that the left FFA was sensitive to facial color, whereas the right FFA and the right and left OFA were insensitive to it. In combination with our previous EEG results, these data suggest that the left FFA may play an important role in facial color processing. Copyright © 2014 Wiley Periodicals, Inc.

  2. Association Between Facial Rejuvenation and Observer Ratings of Youth, Attractiveness, Success, and Health

    PubMed Central

    Bater, Kristin L.; Papel, Ira D.; Kontis, Theda C.; Byrne, Patrick J.; Boahene, Kofi D. O.; Nellis, Jason C.; Ishii, Masaru

    2017-01-01

    Importance Surgical procedures for the aging face—including face-lift, blepharoplasty, and brow-lift—consistently rank among the most popular cosmetic services sought by patients. Although these surgical procedures are broadly classified as procedures that restore a youthful appearance, they may improve societal perceptions of attractiveness, success, and health, conferring an even larger social benefit than just restoring a youthful appearance to the face. Objectives To determine if face-lift and upper facial rejuvenation surgery improve observer ratings of age, attractiveness, success, and health and to quantify the effect of facial rejuvenation surgery on each individual domain. Design, Setting, and Participants A randomized clinical experiment was performed from August 30 to September 18, 2016, using web-based surveys featuring photographs of patients before and after facial rejuvenation surgery. Observers were randomly shown independent images of the 12 patients; within a given survey, observers saw either the preoperative or postoperative photograph of each patient to reduce the possibility of priming. Observers evaluated patient age using a slider bar ranging from 30 to 80 years that could be moved up or down in 1-year increments, and they ranked perceived attractiveness, success, and health using a 100-point visual analog scale. The bar on the 100-point scale began at 50; moving the bar to the right corresponded to a more positive rating in these measures and moving the bar to the left, a more negative rating. Main Outcomes and Measures A multivariate mixed-effects regression model was used to understand the effect of face-lift and upper facial rejuvenation surgery on observer perceptions while accounting for individual biases of the participants. Ordinal rank change was calculated to understand the clinical effect size of changes across the various domains after surgery. Results A total of 504 participants (333 women, 165 men, and 6 unspecified; mean age, 29 [range, 18-70] years) successfully completed the survey. A multivariate mixed-effects regression model revealed a statistically significant change in age (–4.61 years; 95% CI, –4.97 to –4.25) and attractiveness (6.72; 95% CI, 5.96-7.47) following facial rejuvenation surgery. Observer-perceived success (3.85; 95% CI, 3.12-4.57) and health (7.65; 95% CI, 6.87-8.42) also increased significantly as a result of facial rejuvenation surgery. Conclusions and Relevance The data presented in this study demonstrate that patients are perceived as younger and more attractive by the casual observer after undergoing face-lift and upper facial rejuvenation surgery. These procedures also improved ratings of perceived success and health in our patient population. These findings suggest that facial rejuvenation surgery conveys an even larger societal benefit than merely restoring a youthful appearance to the face. Level of Evidence NA. PMID:28448667
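
    The kind of analysis this record describes, a mixed-effects regression with a per-observer random intercept to absorb individual rating bias, can be sketched as below. The data are synthetic and the variable names are illustrative, not the study's; only the modelling structure (rating ~ pre/post, grouped by observer) follows the abstract.

```python
# Random-intercept mixed model on synthetic observer ratings.
# 'post' is 0 (pre-operative photo) / 1 (post-operative photo);
# each observer gets an idiosyncratic leniency bias.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
observers, patients = 60, 12
rows = []
for o in range(observers):
    bias = rng.normal(0, 5)            # observer-specific leniency
    for p in range(patients):
        for post in (0, 1):
            rating = 50 + 7 * post + bias + rng.normal(0, 3)
            rows.append({"observer": o, "patient": p,
                         "post": post, "rating": rating})
df = pd.DataFrame(rows)

# rating ~ post with a random intercept per observer
model = smf.mixedlm("rating ~ post", df, groups=df["observer"]).fit()
print(model.params["post"])  # estimated post-operative shift, close to the simulated 7
```

    The random intercept is what "accounting for individual biases of the participants" amounts to here: each observer's overall generosity is estimated separately, so the fixed effect of `post` isolates the surgery-related shift.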

  3. The light-makeup advantage in facial processing: Evidence from event-related potentials.

    PubMed

    Tagai, Keiko; Shimakura, Hitomi; Isobe, Hiroko; Nittono, Hiroshi

    2017-01-01

    The effects of makeup on attractiveness have been evaluated using mainly subjective measures. In this study, event-related brain potentials (ERPs) were recorded from a total of 45 Japanese women (n = 23 and n = 22 for Experiment 1 and 2, respectively) to examine the neural processing of faces with no makeup, light makeup, and heavy makeup. To have the participants look at each face carefully, an identity judgement task was used: they were asked to judge whether the two faces presented in succession were of the same person or not. The ERP waveforms in response to the first faces were analyzed. In two experiments with different stimulus probabilities, the amplitudes of N170 and vertex positive potential (VPP) were smaller for faces with light makeup than for faces with heavy makeup or no makeup. The P1 amplitude did not differ between facial types. In a subsequent rating phase, faces with light makeup were rated as more attractive than faces with heavy makeup and no makeup. The results suggest that the processing fluency of faces with light makeup is one of the reasons why light makeup is preferred to heavy makeup and no makeup in daily life.

  4. The light-makeup advantage in facial processing: Evidence from event-related potentials

    PubMed Central

    Tagai, Keiko; Shimakura, Hitomi; Isobe, Hiroko; Nittono, Hiroshi

    2017-01-01

    The effects of makeup on attractiveness have been evaluated using mainly subjective measures. In this study, event-related brain potentials (ERPs) were recorded from a total of 45 Japanese women (n = 23 and n = 22 for Experiment 1 and 2, respectively) to examine the neural processing of faces with no makeup, light makeup, and heavy makeup. To have the participants look at each face carefully, an identity judgement task was used: they were asked to judge whether the two faces presented in succession were of the same person or not. The ERP waveforms in response to the first faces were analyzed. In two experiments with different stimulus probabilities, the amplitudes of N170 and vertex positive potential (VPP) were smaller for faces with light makeup than for faces with heavy makeup or no makeup. The P1 amplitude did not differ between facial types. In a subsequent rating phase, faces with light makeup were rated as more attractive than faces with heavy makeup and no makeup. The results suggest that the processing fluency of faces with light makeup is one of the reasons why light makeup is preferred to heavy makeup and no makeup in daily life. PMID:28234959

  5. Incongruence Between Observers’ and Observed Facial Muscle Activation Reduces Recognition of Emotional Facial Expressions From Video Stimuli

    PubMed Central

    Wingenbach, Tanja S. H.; Brosnan, Mark; Pfaltz, Monique C.; Plichta, Michael M.; Ashwin, Chris

    2018-01-01

    According to embodied cognition accounts, viewing others’ facial emotion can elicit the respective emotion representation in observers which entails simulations of sensory, motor, and contextual experiences. In line with that, published research found viewing others’ facial emotion to elicit automatic matched facial muscle activation, which was further found to facilitate emotion recognition. Perhaps making congruent facial muscle activity explicit produces an even greater recognition advantage. If there is conflicting sensory information, i.e., incongruent facial muscle activity, this might impede recognition. The effects of actively manipulating facial muscle activity on facial emotion recognition from videos were investigated across three experimental conditions: (a) explicit imitation of viewed facial emotional expressions (stimulus-congruent condition), (b) pen-holding with the lips (stimulus-incongruent condition), and (c) passive viewing (control condition). It was hypothesised that (1) experimental condition (a) and (b) result in greater facial muscle activity than (c), (2) experimental condition (a) increases emotion recognition accuracy from others’ faces compared to (c), (3) experimental condition (b) lowers recognition accuracy for expressions with a salient facial feature in the lower, but not the upper face area, compared to (c). Participants (42 males, 42 females) underwent a facial emotion recognition experiment (ADFES-BIV) while electromyography (EMG) was recorded from five facial muscle sites. The experimental conditions’ order was counter-balanced. Pen-holding caused stimulus-incongruent facial muscle activity for expressions with facial feature saliency in the lower face region, which reduced recognition of lower face region emotions. Explicit imitation caused stimulus-congruent facial muscle activity without modulating recognition. Methodological implications are discussed. PMID:29928240

  6. Incongruence Between Observers' and Observed Facial Muscle Activation Reduces Recognition of Emotional Facial Expressions From Video Stimuli.

    PubMed

    Wingenbach, Tanja S H; Brosnan, Mark; Pfaltz, Monique C; Plichta, Michael M; Ashwin, Chris

    2018-01-01

    According to embodied cognition accounts, viewing others' facial emotion can elicit the respective emotion representation in observers which entails simulations of sensory, motor, and contextual experiences. In line with that, published research found viewing others' facial emotion to elicit automatic matched facial muscle activation, which was further found to facilitate emotion recognition. Perhaps making congruent facial muscle activity explicit produces an even greater recognition advantage. If there is conflicting sensory information, i.e., incongruent facial muscle activity, this might impede recognition. The effects of actively manipulating facial muscle activity on facial emotion recognition from videos were investigated across three experimental conditions: (a) explicit imitation of viewed facial emotional expressions (stimulus-congruent condition), (b) pen-holding with the lips (stimulus-incongruent condition), and (c) passive viewing (control condition). It was hypothesised that (1) experimental condition (a) and (b) result in greater facial muscle activity than (c), (2) experimental condition (a) increases emotion recognition accuracy from others' faces compared to (c), (3) experimental condition (b) lowers recognition accuracy for expressions with a salient facial feature in the lower, but not the upper face area, compared to (c). Participants (42 males, 42 females) underwent a facial emotion recognition experiment (ADFES-BIV) while electromyography (EMG) was recorded from five facial muscle sites. The experimental conditions' order was counter-balanced. Pen-holding caused stimulus-incongruent facial muscle activity for expressions with facial feature saliency in the lower face region, which reduced recognition of lower face region emotions. Explicit imitation caused stimulus-congruent facial muscle activity without modulating recognition. Methodological implications are discussed.

  7. Facial emotion recognition, face scan paths, and face perception in children with neurofibromatosis type 1.

    PubMed

    Lewis, Amelia K; Porter, Melanie A; Williams, Tracey A; Bzishvili, Samantha; North, Kathryn N; Payne, Jonathan M

    2017-05-01

    This study aimed to investigate face scan paths and face perception abilities in children with Neurofibromatosis Type 1 (NF1) and how these might relate to emotion recognition abilities in this population. The authors investigated facial emotion recognition, face scan paths, and face perception in 29 children with NF1 compared to 29 chronological age-matched typically developing controls. Correlations between facial emotion recognition, face scan paths, and face perception in children with NF1 were examined. Children with NF1 displayed significantly poorer recognition of fearful expressions compared to controls, as well as a nonsignificant trend toward poorer recognition of anger. Although there was no significant difference between groups in time spent viewing individual core facial features (eyes, nose, mouth, and nonfeature regions), children with NF1 spent significantly less time than controls viewing the face as a whole. Children with NF1 also displayed significantly poorer face perception abilities than typically developing controls. Facial emotion recognition deficits were not significantly associated with aberrant face scan paths or face perception abilities in the NF1 group. These results suggest that impairments in the perception, identification, and interpretation of information from faces are important aspects of the social-cognitive phenotype of NF1. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  8. Facial Contrast Is a Cross-Cultural Cue for Perceiving Age

    PubMed Central

    Porcheron, Aurélie; Mauger, Emmanuelle; Soppelsa, Frédérique; Liu, Yuli; Ge, Liezhong; Pascalis, Olivier; Russell, Richard; Morizot, Frédérique

    2017-01-01

    Age is a fundamental social dimension and a youthful appearance is of importance for many individuals, perhaps because it is a relevant predictor of aspects of health, facial attractiveness and general well-being. We recently showed that facial contrast—the color and luminance difference between facial features and the surrounding skin—is age-related and a cue to age perception of Caucasian women. Specifically, aspects of facial contrast decrease with age in Caucasian women, and Caucasian female faces with higher contrast look younger (Porcheron et al., 2013). Here we investigated faces of other ethnic groups and raters of other cultures to see whether facial contrast is a cross-cultural youth-related attribute. Using large sets of full face color photographs of Chinese, Latin American and black South African women aged 20–80, we measured the luminance and color contrast between the facial features (the eyes, the lips, and the brows) and the surrounding skin. Most aspects of facial contrast that were previously found to decrease with age in Caucasian women were also found to decrease with age in the other ethnic groups. Though the overall pattern of changes with age was common to all women, there were also some differences between the groups. In a separate study, individual faces of the 4 ethnic groups were perceived younger by French and Chinese participants when the aspects of facial contrast that vary with age in the majority of faces were artificially increased, but older when they were artificially decreased. Altogether these findings indicate that facial contrast is a cross-cultural cue to youthfulness. Because cosmetics were shown to enhance facial contrast, this work provides some support for the notion that a universal function of cosmetics is to make female faces look younger. PMID:28790941

  9. Facial Contrast Is a Cross-Cultural Cue for Perceiving Age.

    PubMed

    Porcheron, Aurélie; Mauger, Emmanuelle; Soppelsa, Frédérique; Liu, Yuli; Ge, Liezhong; Pascalis, Olivier; Russell, Richard; Morizot, Frédérique

    2017-01-01

    Age is a fundamental social dimension and a youthful appearance is of importance for many individuals, perhaps because it is a relevant predictor of aspects of health, facial attractiveness and general well-being. We recently showed that facial contrast-the color and luminance difference between facial features and the surrounding skin-is age-related and a cue to age perception of Caucasian women. Specifically, aspects of facial contrast decrease with age in Caucasian women, and Caucasian female faces with higher contrast look younger (Porcheron et al., 2013). Here we investigated faces of other ethnic groups and raters of other cultures to see whether facial contrast is a cross-cultural youth-related attribute. Using large sets of full face color photographs of Chinese, Latin American and black South African women aged 20-80, we measured the luminance and color contrast between the facial features (the eyes, the lips, and the brows) and the surrounding skin. Most aspects of facial contrast that were previously found to decrease with age in Caucasian women were also found to decrease with age in the other ethnic groups. Though the overall pattern of changes with age was common to all women, there were also some differences between the groups. In a separate study, individual faces of the 4 ethnic groups were perceived younger by French and Chinese participants when the aspects of facial contrast that vary with age in the majority of faces were artificially increased, but older when they were artificially decreased. Altogether these findings indicate that facial contrast is a cross-cultural cue to youthfulness. Because cosmetics were shown to enhance facial contrast, this work provides some support for the notion that a universal function of cosmetics is to make female faces look younger.
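The facial-contrast measure described in these two records can be illustrated with a short sketch. It assumes a Michelson-style normalized difference between the mean value of a facial feature and the mean value of the surrounding skin in a single channel (e.g., CIELab L*); the function name and sample values are hypothetical, not taken from the study.

```python
import numpy as np

def facial_contrast(feature_pixels, skin_pixels):
    """Michelson-style contrast between a facial feature and surrounding skin.

    feature_pixels, skin_pixels: samples of luminance (or one CIELab channel)
    from the feature region and from a ring of surrounding skin.
    Returns a value in [-1, 1]; positive when the skin is lighter than the
    feature, as is typical for lips and brows against facial skin.
    """
    f = float(np.mean(feature_pixels))
    s = float(np.mean(skin_pixels))
    return (s - f) / (s + f)

# Toy example: darker lips against lighter skin yield positive contrast.
lips = np.array([40.0, 42.0, 38.0])   # hypothetical L* samples from the lips
skin = np.array([65.0, 63.0, 67.0])   # hypothetical L* samples around the lips
print(round(facial_contrast(lips, skin), 3))  # → 0.238
```

Under this convention, the age-related decline the studies report would appear as this value shrinking across the older photographs.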

  10. In Your Face: Startle to Emotional Facial Expressions Depends on Face Direction.

    PubMed

    Åsli, Ole; Michalsen, Henriette; Øvervoll, Morten

    2017-01-01

    Although faces are often included in the broad category of emotional visual stimuli, the affective impact of different facial expressions is not well documented. The present experiment investigated startle electromyographic responses to pictures of neutral, happy, angry, and fearful facial expressions, with a frontal face direction (directed) and at a 45° angle to the left (averted). Results showed that emotional facial expressions interact with face direction to produce startle potentiation: Greater responses were found for angry expressions, compared with fear and neutrality, with directed faces. When faces were averted, fear and neutrality produced larger responses compared with anger and happiness. These results are in line with the notion that startle is potentiated to stimuli signaling threat. That is, a forward directed angry face may signal a threat toward the observer, and a fearful face directed to the side may signal a possible threat in the environment.

  11. Attractive faces temporally modulate visual attention

    PubMed Central

    Nakamura, Koyo; Kawabata, Hideaki

    2014-01-01

    Facial attractiveness is an important biological and social signal on social interaction. Recent research has demonstrated that an attractive face captures greater spatial attention than an unattractive face does. Little is known, however, about the temporal characteristics of visual attention for facial attractiveness. In this study, we investigated the temporal modulation of visual attention induced by facial attractiveness by using a rapid serial visual presentation. Fourteen male faces and two female faces were successively presented for 160 ms, respectively, and participants were asked to identify two female faces embedded among a series of multiple male distractor faces. Identification of a second female target (T2) was impaired when a first target (T1) was attractive compared to neutral or unattractive faces, at 320 ms stimulus onset asynchrony (SOA); identification was improved when T1 was attractive compared to unattractive faces at 640 ms SOA. These findings suggest that the spontaneous appraisal of facial attractiveness modulates temporal attention. PMID:24994994

  12. Three-dimensional analysis of facial morphology.

    PubMed

    Liu, Yun; Kau, Chung How; Talbert, Leslie; Pan, Feng

    2014-09-01

The objectives of this study were to evaluate sexual dimorphism in facial features within Chinese and African American populations and to compare facial morphology by sex between these 2 populations. Three-dimensional facial images were acquired using the portable 3dMDface System, capturing 189 subjects from 2 population groups: Chinese (n = 72) and African American (n = 117). Each population was divided into male and female groups for evaluation. All subjects were aged between 18 and 30 years and had no apparent facial anomalies. A total of 23 anthropometric landmarks were identified on the three-dimensional face of each subject. Twenty-one measurements in 4 regions, including 19 distances and 2 angles, were calculated and compared within and between the Chinese and African American populations. Student's t-test was used to analyze each data set obtained within each subgroup. Distinct facial differences were found between the examined subgroups. When comparing sex differences in facial morphology in the Chinese population, significant differences were noted in 71.43% of the calculated parameters, and the same proportion was found in the African American group. The facial morphologic differences between the Chinese and African American populations were evaluated by sex. The proportion of significant differences in the calculated parameters was 90.48% for females and 95.24% for males between the 2 populations. The African American population had a more convex profile and greater face width than the Chinese population. Sexual dimorphism in facial features was present in both the Chinese and African American populations. In addition, there were significant differences in facial morphology between these 2 populations.
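The landmark-based analysis above can be sketched as Euclidean distances between 3-D landmark coordinates, compared between groups with a pooled two-sample Student's t statistic. This is a minimal sketch; the function names and toy values are illustrative, not the study's actual landmark set or data.

```python
import numpy as np

def landmark_distance(p, q):
    """Euclidean distance between two 3-D anthropometric landmarks."""
    return float(np.linalg.norm(np.asarray(p, float) - np.asarray(q, float)))

def students_t(a, b):
    """Two-sample Student's t statistic (equal-variance, pooled form),
    the kind of test used for the within- and between-group comparisons."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return float((a.mean() - b.mean()) / np.sqrt(sp2 * (1 / na + 1 / nb)))

# Toy usage: one distance measure collected in two hypothetical subgroups.
print(landmark_distance((0, 0, 0), (3, 4, 0)))   # → 5.0
print(round(students_t([1, 2, 3], [4, 5, 6]), 3))
```

In practice one such distance (or angle) would be computed per subject and the t statistic evaluated against the appropriate degrees of freedom for significance.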

  13. The Facial Expression Coding System (FACES): Development, Validation, and Utility

    ERIC Educational Resources Information Center

    Kring, Ann M.; Sloan, Denise M.

    2007-01-01

    This article presents information on the development and validation of the Facial Expression Coding System (FACES; A. M. Kring & D. Sloan, 1991). Grounded in a dimensional model of emotion, FACES provides information on the valence (positive, negative) of facial expressive behavior. In 5 studies, reliability and validity data from 13 diverse…

  14. [Emotional facial expression recognition impairment in Parkinson disease].

    PubMed

    Lachenal-Chevallet, Karine; Bediou, Benoit; Bouvard, Martine; Thobois, Stéphane; Broussolle, Emmanuel; Vighetto, Alain; Krolak-Salmon, Pierre

    2006-03-01

Some behavioral disturbances observed in Parkinson's disease (PD) could be related to impaired recognition of various social messages, particularly emotional facial expressions. Facial expression recognition was assessed using morphed faces (five emotions: happiness, fear, anger, disgust, neutral) and compared with gender recognition and general cognitive assessment in 12 patients with Parkinson's disease and 14 control subjects. Facial expression recognition was impaired among patients, whereas gender recognition, visuo-perceptive capacities, and total efficiency were preserved. Post hoc analyses disclosed a deficit in fear and disgust recognition compared with control subjects. The impairment of emotional facial expression recognition in PD appears independent of other cognitive deficits. It may be related to dopaminergic depletion in the basal ganglia and limbic brain regions, and could play a part in the psycho-behavioral disorders, particularly the communication disorders, observed in patients with Parkinson's disease.

  15. Modulation of Alpha Oscillations in the Human EEG with Facial Preference

    PubMed Central

    Kang, Jae-Hwan; Kim, Su Jin; Cho, Yang Seok; Kim, Sung-Phil

    2015-01-01

Facial preference that results from the processing of facial information plays an important role in social interactions as well as the selection of a mate, friend, candidate, or favorite actor. However, it remains unclear which brain regions are implicated in the neural mechanisms underlying facial preference, and how neural activities in these regions are modulated during the formation of facial preference. In the present study, we investigated the modulation of electroencephalography (EEG) oscillatory power with facial preference. For reliable assessment of facial preference, we designed a series of passive viewing and active choice tasks. In the former task, twenty-four face stimuli were passively viewed by participants multiple times in random order. In the latter task, the same stimuli were evaluated by participants for facial preference judgments. In both tasks, significant differences between the preferred and non-preferred face groups were found in alpha band power (8–13 Hz) but not in other frequency bands. The preferred faces generated greater decreases in alpha power. During the passive viewing task, significant differences in alpha power between the preferred and non-preferred face groups were observed at the left frontal regions in the early (0.15–0.4 s) period of the 1-s presentation. By contrast, during the active choice task, when participants consecutively watched the first and second face for 1 s each and then selected the preferred one, an alpha power difference was found in the late (0.65–0.8 s) period over the whole brain during the first face presentation and over the posterior regions during the second face presentation. These results demonstrate that the modulation of alpha activity by facial preference is a top-down process, which requires additional cognitive resources to facilitate information processing of the preferred faces, which capture more visual attention than the non-preferred faces. PMID:26394328
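The alpha-band (8–13 Hz) power measure central to this record can be approximated with a plain FFT periodogram. This is a minimal sketch assuming a single clean epoch; EEG work more often uses Welch's method or time-frequency decompositions, and all names here are illustrative.

```python
import numpy as np

def band_power(signal, fs, lo=8.0, hi=13.0):
    """Mean spectral power of one EEG epoch in a frequency band,
    defaulting to the alpha band (8-13 Hz).

    Uses a plain FFT periodogram for simplicity; treat as illustrative.
    """
    signal = np.asarray(signal, float)
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / (fs * signal.size)
    band = (freqs >= lo) & (freqs <= hi)
    return float(psd[band].mean())

# Toy check: a 10 Hz oscillation carries far more alpha power
# than a 30 Hz oscillation of the same amplitude.
fs = 250.0
t = np.arange(0, 2, 1 / fs)
print(band_power(np.sin(2 * np.pi * 10 * t), fs) >
      band_power(np.sin(2 * np.pi * 30 * t), fs))  # → True
```

The study's "alpha power decrease" for preferred faces would correspond to this value dropping in the post-stimulus window relative to baseline.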

  16. Unconscious processing of facial attractiveness: invisible attractive faces orient visual attention

    PubMed Central

    Hung, Shao-Min; Nieh, Chih-Hsuan; Hsieh, Po-Jang

    2016-01-01

Past research has demonstrated humans' extraordinary ability to extract information from a face in the blink of an eye, including its emotion, gaze direction, and attractiveness. However, it remains elusive whether facial attractiveness can be processed, and can influence our behavior, in the complete absence of conscious awareness. Here we demonstrate unconscious processing of facial attractiveness with three distinct approaches. In Experiment 1, the time taken for faces to break interocular suppression was measured. The results showed that attractive faces enjoyed the privilege of breaking suppression and reaching consciousness earlier. In Experiment 2, we further showed that attractive faces had lower visibility thresholds, again suggesting that facial attractiveness could be processed more easily to reach consciousness. Crucially, in Experiment 3, a significant decrease in accuracy on an orientation discrimination task subsequent to an invisible attractive face showed that attractive faces, albeit suppressed and invisible, still exerted an effect by orienting attention. Taken together, for the first time, we show that facial attractiveness can be processed in the complete absence of consciousness, and an unconscious attractive face is still capable of directing our attention. PMID:27848992

  17. Unconscious processing of facial attractiveness: invisible attractive faces orient visual attention.

    PubMed

    Hung, Shao-Min; Nieh, Chih-Hsuan; Hsieh, Po-Jang

    2016-11-16

Past research has demonstrated humans' extraordinary ability to extract information from a face in the blink of an eye, including its emotion, gaze direction, and attractiveness. However, it remains elusive whether facial attractiveness can be processed, and can influence our behavior, in the complete absence of conscious awareness. Here we demonstrate unconscious processing of facial attractiveness with three distinct approaches. In Experiment 1, the time taken for faces to break interocular suppression was measured. The results showed that attractive faces enjoyed the privilege of breaking suppression and reaching consciousness earlier. In Experiment 2, we further showed that attractive faces had lower visibility thresholds, again suggesting that facial attractiveness could be processed more easily to reach consciousness. Crucially, in Experiment 3, a significant decrease in accuracy on an orientation discrimination task subsequent to an invisible attractive face showed that attractive faces, albeit suppressed and invisible, still exerted an effect by orienting attention. Taken together, for the first time, we show that facial attractiveness can be processed in the complete absence of consciousness, and an unconscious attractive face is still capable of directing our attention.

  18. Facial transplantation: A concise update

    PubMed Central

    Barrera-Pulido, Fernando; Gomez-Cia, Tomas; Sicilia-Castro, Domingo; Garcia-Perla-Garcia, Alberto; Gacto-Sanchez, Purificacion; Hernandez-Guisado, Jose-Maria; Lagares-Borrego, Araceli; Narros-Gimenez, Rocio; Gonzalez-Padilla, Juan D.

    2013-01-01

Objectives: An update on the clinical results obtained by the first facial transplantation teams worldwide, together with a review of the literature concerning the main surgical, immunological, ethical, and follow-up aspects described in facial transplant patients. Study design: MEDLINE search of articles published on "face transplantation" until March 2012. Results: Eighteen clinical cases were studied. The mean patient age was 37.5 years, with a higher prevalence of men. The main surgical indication was gunshot injury (6 patients). All patients had previously undergone multiple conventional surgical reconstructive procedures, which had failed. Altogether, 8 transplant teams from 4 countries participated. Thirteen partial and 5 full face transplantations have been performed. Allografts varied according to the facial anatomical components and the amount of skin, muscle, bone, and other tissues included, though all were grafted successfully and remained viable without significant postoperative surgical complications. The longest patient follow-up was 5 years. Two patients died 2 and 27 months after transplantation. Conclusions: Clinical experience has demonstrated the feasibility of facial transplantation as a valuable reconstructive option, but it is still considered an experimental procedure with unresolved issues. The results show that, from a clinical, technical, and immunological standpoint, facial transplantation has achieved functional, aesthetic, and social rehabilitation in severely disfigured patients. Key words: Face transplantation, composite tissue transplantation, face allograft, facial reconstruction, outcomes and complications of face transplantation. PMID:23229268

  19. Face-selective regions differ in their ability to classify facial expressions

    PubMed Central

    Zhang, Hui; Japee, Shruti; Nolan, Rachel; Chu, Carlton; Liu, Ning; Ungerleider, Leslie G

    2016-01-01

    Recognition of facial expressions is crucial for effective social interactions. Yet, the extent to which the various face-selective regions in the human brain classify different facial expressions remains unclear. We used functional magnetic resonance imaging (fMRI) and support vector machine pattern classification analysis to determine how well face-selective brain regions are able to decode different categories of facial expression. Subjects participated in a slow event-related fMRI experiment in which they were shown 32 face pictures, portraying four different expressions: neutral, fearful, angry, and happy and belonging to eight different identities. Our results showed that only the amygdala and the posterior superior temporal sulcus (STS) were able to accurately discriminate between these expressions, albeit in different ways: The amygdala discriminated fearful faces from non-fearful faces, whereas STS discriminated neutral from emotional (fearful, angry and happy) faces. In contrast to these findings on the classification of emotional expression, only the fusiform face area (FFA) and anterior inferior temporal cortex (aIT) could discriminate among the various facial identities. Further, the amygdala and STS were better than FFA and aIT at classifying expression, while FFA and aIT were better than the amygdala and STS at classifying identity. Taken together, our findings indicate that the decoding of facial emotion and facial identity occurs in different neural substrates: the amygdala and STS for the former and FFA and aIT for the latter. PMID:26826513

  20. Face-selective regions differ in their ability to classify facial expressions.

    PubMed

    Zhang, Hui; Japee, Shruti; Nolan, Rachel; Chu, Carlton; Liu, Ning; Ungerleider, Leslie G

    2016-04-15

    Recognition of facial expressions is crucial for effective social interactions. Yet, the extent to which the various face-selective regions in the human brain classify different facial expressions remains unclear. We used functional magnetic resonance imaging (fMRI) and support vector machine pattern classification analysis to determine how well face-selective brain regions are able to decode different categories of facial expression. Subjects participated in a slow event-related fMRI experiment in which they were shown 32 face pictures, portraying four different expressions: neutral, fearful, angry, and happy and belonging to eight different identities. Our results showed that only the amygdala and the posterior superior temporal sulcus (STS) were able to accurately discriminate between these expressions, albeit in different ways: the amygdala discriminated fearful faces from non-fearful faces, whereas STS discriminated neutral from emotional (fearful, angry and happy) faces. In contrast to these findings on the classification of emotional expression, only the fusiform face area (FFA) and anterior inferior temporal cortex (aIT) could discriminate among the various facial identities. Further, the amygdala and STS were better than FFA and aIT at classifying expression, while FFA and aIT were better than the amygdala and STS at classifying identity. Taken together, our findings indicate that the decoding of facial emotion and facial identity occurs in different neural substrates: the amygdala and STS for the former and FFA and aIT for the latter. Published by Elsevier Inc.
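The decoding analysis in these two records used a linear support vector machine on fMRI voxel patterns. As a minimal NumPy-only stand-in, a leave-one-out nearest-centroid classifier illustrates the same pattern-classification logic; the names and toy data below are hypothetical, and the actual study used an SVM, not this classifier.

```python
import numpy as np

def loo_decode(patterns, labels):
    """Leave-one-out decoding of expression category from voxel patterns.

    The study used a linear SVM; a nearest-centroid classifier stands in
    here so the sketch needs only NumPy. Returns accuracy in [0, 1].
    """
    patterns = np.asarray(patterns, float)
    labels = np.asarray(labels)
    correct = 0
    for i in range(len(labels)):
        train = np.ones(len(labels), bool)
        train[i] = False                      # hold out one pattern
        classes = np.unique(labels[train])
        # Centroid of each expression category in the training folds.
        cents = np.array([patterns[train & (labels == c)].mean(axis=0)
                          for c in classes])
        pred = classes[np.argmin(np.linalg.norm(cents - patterns[i], axis=1))]
        correct += int(pred == labels[i])
    return correct / len(labels)

# Toy usage: two well-separated "voxel patterns" per category.
X = np.array([[0.0, 0.0], [0.2, 0.1], [5.0, 5.0], [5.2, 5.1]])
y = np.array(["fearful", "fearful", "happy", "happy"])
print(loo_decode(X, y))  # → 1.0
```

Above-chance accuracy from such a scheme in a region (e.g., amygdala or STS) is what licenses the claim that the region carries expression information.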

  1. Automatic integration of social information in emotion recognition.

    PubMed

    Mumenthaler, Christian; Sander, David

    2015-04-01

This study investigated the automaticity of the influence of social inference on emotion recognition. Participants were asked to recognize dynamic facial expressions of emotion (fear or anger in Experiment 1 and blends of fear and surprise or of anger and disgust in Experiment 2) in a target face presented at the center of a screen while a subliminal contextual face appearing in the periphery expressed an emotion (fear or anger) or not (neutral) and either looked at the target face or not. Results of Experiment 1 revealed that recognition of the target emotion of fear was improved when a subliminal angry contextual face gazed toward, rather than away from, the fearful face. We replicated this effect in Experiment 2, in which facial expression blends of fear and surprise were more often and more rapidly categorized as expressing fear when the subliminal contextual face expressed anger and gazed toward, rather than away from, the target face. With the contextual face appearing for 30 ms in total, including only 10 ms of emotion expression, and being immediately masked, our data provide the first evidence that social influence on emotion recognition can occur automatically. (PsycINFO Database Record (c) 2015 APA, all rights reserved).

  2. Mapping Teacher-Faces

    ERIC Educational Resources Information Center

    Thompson, Greg; Cook, Ian

    2013-01-01

    This paper uses Deleuze and Guattari's concept of faciality to analyse the teacher's face. According to Deleuze and Guattari, the teacher-face is a special type of face because it is an "overcoded" face produced in specific landscapes. This paper suggests four limit-faces for teacher faciality that actualise different mixes of significance and…

  3. Are Attractive Men's Faces Masculine or Feminine? The Importance of Type of Facial Stimuli

    ERIC Educational Resources Information Center

    Rennels, Jennifer L.; Bronstad, P. Matthew; Langlois, Judith H.

    2008-01-01

    The authors investigated whether differences in facial stimuli could explain the inconsistencies in the facial attractiveness literature regarding whether adults prefer more masculine- or more feminine-looking male faces. Their results demonstrated that use of a female average to dimorphically transform a male facial average produced stimuli that…

  4. Brain potentials indicate the effect of other observers' emotions on perceptions of facial attractiveness.

    PubMed

    Huang, Yujing; Pan, Xuwei; Mo, Yan; Ma, Qingguo

    2016-03-23

Perceptions of facial attractiveness are sensitive to the emotional expression of the perceived face. However, little is known about whether the emotional expression on the face of another observer of the perceived face affects perceptions of its attractiveness. The present study used the event-related potential technique to examine this social influence. The experiment consisted of two phases. In the first phase, a neutral target face was paired with two images of individuals gazing at the target face with smiling, fearful, or neutral expressions. In the second phase, participants were asked to judge the attractiveness of the target face. We found that a target face was judged more attractive when the other observers gazed at it with positive expressions than when they gazed at it with negative expressions. Additionally, the brain potential results showed that the visual positive component P3, with peak latency from 270 to 330 ms, was larger after participants observed the target face paired with smiling individuals than after the target face paired with neutral individuals. These findings suggest that the facial attractiveness of an individual may be influenced by the emotional expression on the face of another observer of that face. Copyright © 2016. Published by Elsevier Ireland Ltd.
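The P3 effect reported here (peak latency 270–330 ms) is conventionally quantified as the mean amplitude within that latency window, compared across conditions. A minimal sketch under that assumption; function and variable names are illustrative, not the study's pipeline.

```python
import numpy as np

def mean_amplitude(erp, times, window=(0.270, 0.330)):
    """Mean ERP amplitude (e.g., in microvolts) within a latency window,
    here the 270-330 ms span in which the P3 peaked. The epoch is
    assumed to be already baseline-corrected."""
    erp = np.asarray(erp, float)
    times = np.asarray(times, float)
    mask = (times >= window[0]) & (times <= window[1])
    return float(erp[mask].mean())

# Toy epoch at 1 ms sampling: a flat 4 uV deflection inside the window.
t_axis = np.linspace(-0.1, 0.6, 701)
epoch = np.where((t_axis >= 0.27) & (t_axis <= 0.33), 4.0, 0.0)
print(mean_amplitude(epoch, t_axis))  # → 4.0
```

A condition effect like the one reported would then be a larger value for smiling-observer trials than for neutral-observer trials, tested across subjects.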

  5. The Relationship between Processing Facial Identity and Emotional Expression in 8-Month-Old Infants

    ERIC Educational Resources Information Center

    Schwarzer, Gudrun; Jovanovic, Bianca

    2010-01-01

    In Experiment 1, it was investigated whether infants process facial identity and emotional expression independently or in conjunction with one another. Eight-month-old infants were habituated to two upright or two inverted faces varying in facial identity and emotional expression. Infants were tested with a habituation face, a switch face, and a…

  6. Evidence of a Shift from Featural to Configural Face Processing in Infancy

    ERIC Educational Resources Information Center

    Schwarzer, Gudrun; Zauner, Nicola; Jovanovic, Bianca

    2007-01-01

    Two experiments examined whether 4-, 6-, and 10-month-old infants process natural looking faces by feature, i.e. processing internal facial features independently of the facial context or holistically by processing the features in conjunction with the facial context. Infants were habituated to two faces and looking time was measured. After…

  7. The Relation of Facial Affect Recognition and Empathy to Delinquency in Youth Offenders

    ERIC Educational Resources Information Center

    Carr, Mary B.; Lutjemeier, John A.

    2005-01-01

    Associations among facial affect recognition, empathy, and self-reported delinquency were studied in a sample of 29 male youth offenders at a probation placement facility. Youth offenders were asked to recognize facial expressions of emotions from adult faces, child faces, and cartoon faces. Youth offenders also responded to a series of statements…

  8. Sex differences in social cognition: The case of face processing.

    PubMed

    Proverbio, Alice Mado

    2017-01-02

Several studies have demonstrated that women show a greater interest for social information and empathic attitude than men. This article reviews studies on sex differences in the brain, with particular reference to how males and females process faces and facial expressions, social interactions, pain of others, infant faces, faces in things (pareidolia phenomenon), opposite-sex faces, humans vs. landscapes, incongruent behavior, motor actions, biological motion, erotic pictures, and emotional information. Sex differences in oxytocin-based attachment response and emotional memory are also mentioned. In addition, we investigated how 400 different human faces were evaluated for arousal and valence dimensions by a group of healthy male and female University students. Stimuli were carefully balanced for sensory and perceptual characteristics, age, facial expression, and sex. As a whole, women judged all human faces as more positive and more arousing than men. Furthermore, they showed a preference for the faces of children and the elderly in the arousal evaluation. Regardless of face aesthetics, age, or facial expression, women rated human faces higher than men. The preference for opposite- vs. same-sex faces strongly interacted with facial age. Overall, both women and men exhibited differences in facial processing that could be interpreted in the light of evolutionary psychobiology. © 2016 Wiley Periodicals, Inc.

  9. Validating Facial Aesthetic Surgery Results with the FACE-Q.

    PubMed

    Kappos, Elisabeth A; Temp, Mathias; Schaefer, Dirk J; Haug, Martin; Kalbermatten, Daniel F; Toth, Bryant A

    2017-04-01

    In aesthetic clinical practice, surgical outcome is best measured by patient satisfaction and quality of life. For many years, there has been a lack of validated questionnaires. Recently, the FACE-Q was introduced, and the authors present the largest series of face-lift patients evaluated by the FACE-Q with the longest follow-up to date. Two hundred consecutive patients were identified who underwent high-superficial musculoaponeurotic system face lifts, with or without additional facial rejuvenation procedures, between January of 2005 and January of 2015. Patients were sent eight FACE-Q scales and were asked to answer questions with regard to their satisfaction. Rank analysis of covariance was used to compare different subgroups. The response rate was 38 percent. Combination of face lift with other procedures resulted in higher satisfaction than face lift alone (p < 0.05). Patients who underwent lipofilling as part of their face lift showed higher satisfaction than patients without lipofilling in three subscales (p < 0.05). Facial rejuvenation surgery, combining a high-superficial musculoaponeurotic system face lift with lipofilling and/or other facial rejuvenation procedures, resulted in a high level of patient satisfaction. The authors recommend the implementation of the FACE-Q by physicians involved in aesthetic facial surgery, to validate their clinical outcomes from a patient's perspective.

  10. Content Validity of Patient-Reported Outcome Instruments used with Pediatric Patients with Facial Differences: A Systematic Review.

    PubMed

    Wickert, Natasha M; Wong Riff, Karen W Y; Mansour, Mark; Forrest, Christopher R; Goodacre, Timothy E E; Pusic, Andrea L; Klassen, Anne F

    2018-01-01

    Objective The aim of this systematic review was to identify patient-reported outcome (PRO) instruments used in research with children/youth with conditions associated with facial differences and to determine the health concepts they measure. Design MEDLINE, EMBASE, CINAHL, and PsycINFO were searched from 2004 to 2016 to identify PRO instruments used with patients with acne vulgaris, birthmarks, burns, ear anomalies, facial asymmetries, and facial paralysis. We performed a content analysis whereby the items were coded to identify concepts and categorized as positive or negative content or phrasing. Results A total of 7,835 articles were screened; 6 generic and 11 condition-specific PRO instruments were used in 96 publications. Condition-specific instruments were for acne (four), oral health (two), dermatology (one), facial asymmetries (two), microtia (one), and burns (one). The PRO instruments provided 554 items (295 generic; 259 condition specific) that were sorted into 4 domains, 11 subdomains, and 91 health concepts. The most common domain was psychological (n = 224 items). Of the identified items, 76% had negative content or phrasing (e.g., "Because of the way my face looks I wish I had never been born"). Given the small number of items measuring facial appearance (n = 19) and function (n = 22), the PRO instruments reviewed lacked content validity for patients whose condition impacted facial function and/or appearance. Conclusions Treatments can change facial appearance and function. This review draws attention to a problem with content validity in existing PRO instruments. Our team is now developing a new PRO instrument called FACE-Q Kids to address this problem.

  11. The surgical management of facial trauma in British soldiers during combat operations in Afghanistan.

    PubMed

    Wordsworth, Matthew; Thomas, Rachael; Breeze, John; Evriviades, Demetrius; Baden, James; Hettiaratchy, Shehan

    2017-01-01

    The recent Afghanistan conflict caused a higher proportion of casualties with facial injuries due to both the increasing effectiveness of combat body armour and the insurgent use of the improvised explosive device (IED). The aim of this study was to describe all injuries to the face sustained by UK service personnel from blast or gunshot wounds during the highest intensity period of combat operations in Afghanistan. Hospital records and Joint Theatre Trauma Registry data were collected for all UK service personnel killed or wounded by blast and gunshot wounds in Afghanistan between 01 April 2006 and 01 March 2013. 566 casualties were identified, 504 from blast and 52 from gunshot injuries. 75% of blast injury casualties survived; the IED was the most common mechanism of injury, and the mid-face was the most commonly affected facial region. In blast injuries a facial fracture was a significant marker for an increased total injury severity score. A facial gunshot wound was fatal in 53% of cases. The majority of survivors required a single surgical procedure for the facial injury, but further reconstruction was required in 156 of the 375 survivors aeromedically evacuated to the UK. The presence and pattern of facial fractures was significantly different in survivors and fatalities, which may reflect the power of the blast that these cohorts were exposed to. The Anatomical Injury Scoring of the Injury Severity Scale was inadequate for determining the extent of soft tissue facial injuries and did not predict morbidity of the injury. Copyright © 2016. Published by Elsevier Ltd.

  12. Enhanced Facial Symmetry Assessment in Orthodontists

    PubMed Central

    Jackson, Tate H.; Clark, Kait; Mitroff, Stephen R.

    2013-01-01

    Assessing facial symmetry is an evolutionarily important process, which suggests that individual differences in this ability should exist. As existing data are inconclusive, the current study explored whether a group trained in facial symmetry assessment, orthodontists, possessed enhanced abilities. Symmetry assessment was measured using face and non-face stimuli among orthodontic residents and two control groups: university participants with no symmetry training and airport security luggage screeners, a group previously shown to possess expert visual search skills unrelated to facial symmetry. Orthodontic residents were more accurate at assessing symmetry in both upright and inverted faces compared to both control groups, but not for non-face stimuli. These differences are not likely due to motivational biases or a speed-accuracy tradeoff—orthodontic residents were slower than the university participants but not the security screeners. Understanding such individual differences in facial symmetry assessment may inform the perception of facial attractiveness. PMID:24319342
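    A crude way to operationalize the bilateral symmetry judgments studied here is to compare one half of an image with the mirrored other half. The scoring below is an illustrative sketch on synthetic arrays, not the stimuli or measure used in the study:

    ```python
    import numpy as np

    def symmetry_score(gray):
        """Crude bilateral symmetry score: correlation between the left half of a
        grayscale image and the mirrored right half (1.0 = perfectly symmetric)."""
        h, w = gray.shape
        half = w // 2
        left = gray[:, :half].astype(float).ravel()
        right = np.fliplr(gray[:, w - half:]).astype(float).ravel()
        return float(np.corrcoef(left, right)[0, 1])

    rng = np.random.default_rng(0)
    half = rng.random((64, 32))
    symmetric = np.hstack([half, np.fliplr(half)])   # built to be mirror-symmetric
    asymmetric = rng.random((64, 64))                # two unrelated halves
    print(symmetry_score(symmetric), symmetry_score(asymmetric))
    ```

    A perfectly mirror-built image scores 1.0, while unrelated halves score near 0; real face stimuli would fall between the two.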

  13. Combined approach for facial contour restoration: treatment of malar and cheek areas during rhytidectomy.

    PubMed

    Tapia, Antonio; Ruiz-de-Erenchun, Richard; Rengifo, Miguel

    2006-08-01

    One of the main objectives in facial lifting is to achieve an adequate facial contour, to enhance facial characteristics. Sometimes, facial areas are more or less accentuated, resulting in an unbalanced or inharmonious facial contour; this can be resolved in the context of a face lift. In the middle third of the face, two anatomical regions define the facial silhouette: the malar contour, with its bone support and superficial structures and, at the cheek level, intimately associated with the mastication system and the facial nerve, the buccal fat pad or Bichat fat pad. The authors describe their experience since 1998 using the double approach to malar atrophy and buccal fat pad hypertrophy in 194 patients with facial aging signs undergoing a face lift. All patients were offered a face lift with partial resection of the fat pad through facial incisions and a stronger malar projection using an inverse superficial musculoaponeurotic system flap. The main complications observed with this surgical technique, in order of appearance, were mild asymmetry, caused by a persistent hematoma or swelling; paresthesia of the buccal and zygomatic branches, which resolved spontaneously; and slight sinking of the cheek caused by excessive resection. One patient underwent correction with a fat injection. The superior superficial musculoaponeurotic system flap and buccal fat pad resection provided excellent aesthetic results for a more harmonic and proportioned facial contour during rhytidectomy. Particularly in patients with round faces, the authors were able to obtain permanent malar symmetry and projection in addition to diminishing the cheek fullness.

  14. Decoding facial expressions based on face-selective and motion-sensitive areas.

    PubMed

    Liang, Yin; Liu, Baolin; Xu, Junhai; Zhang, Gaoyan; Li, Xianglin; Wang, Peiyuan; Wang, Bin

    2017-06-01

    Humans can easily recognize others' facial expressions. Among the brain substrates that enable this ability, considerable attention has been paid to face-selective areas; in contrast, it has remained unclear whether motion-sensitive areas, which clearly exhibit sensitivity to facial movements, are involved in facial expression recognition. The present functional magnetic resonance imaging (fMRI) study used multi-voxel pattern analysis (MVPA) to explore facial expression decoding in both face-selective and motion-sensitive areas. In a block design experiment, participants viewed facial expressions of six basic emotions (anger, disgust, fear, joy, sadness, and surprise) in images, videos, and eyes-obscured videos. Due to the use of multiple stimulus types, the impacts of facial motion and eye-related information on facial expression decoding were also examined. It was found that motion-sensitive areas showed significant responses to emotional expressions and that dynamic expressions could be successfully decoded in both face-selective and motion-sensitive areas. Compared with static stimuli, dynamic expressions elicited consistently higher neural responses and decoding performance in all regions. A significant decrease in both activation and decoding accuracy due to the absence of eye-related information was also observed. Overall, the findings showed that emotional expressions are represented in motion-sensitive areas in addition to conventional face-selective areas, suggesting that motion-sensitive regions may also effectively contribute to facial expression recognition. The results also suggested that facial motion and eye-related information played important roles by carrying considerable expression information that could facilitate facial expression recognition. Hum Brain Mapp 38:3113-3125, 2017. © 2017 Wiley Periodicals, Inc.
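    The MVPA recipe described above (a classifier trained on multi-voxel response patterns and scored by cross-validation) can be sketched with scikit-learn. The "voxel" data below are synthetic stand-ins, not the study's fMRI responses:

    ```python
    import numpy as np
    from sklearn.svm import LinearSVC
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    n_trials_per_class, n_voxels, n_classes = 20, 50, 6  # six basic emotions

    # Synthetic ROI patterns: each "emotion" gets a small mean offset over noise.
    X = np.vstack([rng.normal(loc=0.5 * c, scale=1.0,
                              size=(n_trials_per_class, n_voxels))
                   for c in range(n_classes)])
    y = np.repeat(np.arange(n_classes), n_trials_per_class)

    # Standard MVPA pipeline: z-score features, linear classifier, k-fold CV.
    clf = make_pipeline(StandardScaler(), LinearSVC())
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"mean decoding accuracy: {acc:.2f} (chance = {1/n_classes:.2f})")
    ```

    Decoding "succeeds" when cross-validated accuracy reliably exceeds chance (1/6 for six classes), which is the criterion such studies typically test against.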

  15. Behavioral dissociation between emotional and non-emotional facial expressions in congenital prosopagnosia

    PubMed Central

    Daini, Roberta; Comparetti, Chiara M.; Ricciardelli, Paola

    2014-01-01

    Neuropsychological and neuroimaging studies have shown that facial recognition and emotional expressions are dissociable. However, it is unknown if a single system supports the processing of emotional and non-emotional facial expressions. We aimed to understand if individuals with impairment in face recognition from birth (congenital prosopagnosia, CP) can use non-emotional facial expressions to recognize a face as an already seen one, and thus, process this facial dimension independently from features (which are impaired in CP), and basic emotional expressions. To this end, we carried out a behavioral study in which we compared the performance of 6 CP individuals to that of typical development individuals, using upright and inverted faces. Four avatar faces with a neutral expression were presented in the initial phase. The target faces presented in the recognition phase, in which a recognition task was requested (2AFC paradigm), could be identical (neutral) to those of the initial phase or present biologically plausible changes to features, non-emotional expressions, or emotional expressions. After this task, a second task was performed, in which the participants had to detect whether or not the recognized face exactly matched the study face or showed any difference. The results confirmed the CPs' impairment in the configural processing of the invariant aspects of the face, but also showed a spared configural processing of non-emotional facial expression (task 1). Interestingly and unlike the non-emotional expressions, the configural processing of emotional expressions was compromised in CPs and did not improve their change detection ability (task 2). These new results have theoretical implications for face perception models since they suggest that, at least in CPs, non-emotional expressions are processed configurally, can be dissociated from other facial dimensions, and may serve as a compensatory strategy to achieve face recognition. PMID:25520643

  17. Recovery of facial expressions using functional electrical stimulation after full-face transplantation.

    PubMed

    Topçu, Çağdaş; Uysal, Hilmi; Özkan, Ömer; Özkan, Özlenen; Polat, Övünç; Bedeloğlu, Merve; Akgül, Arzu; Döğer, Ela Naz; Sever, Refik; Çolak, Ömer Halil

    2018-03-06

    We assessed the recovery of 2 face transplantation patients with measures of complexity during neuromuscular rehabilitation. Cognitive rehabilitation methods and functional electrical stimulation were used to improve the facial emotional expressions of full-face transplantation patients for 5 months. Rehabilitation and analyses were conducted approximately 3 years after full facial transplantation in the patient group. We report complexity analysis of surface electromyography signals of these two patients in comparison to the results of 10 healthy individuals. Facial surface electromyography data were collected during 6 basic emotional expressions and 4 primary facial movements from 2 full-face transplantation patients and 10 healthy individuals to determine a strategy of functional electrical stimulation and understand the mechanisms of rehabilitation. A new personalized rehabilitation technique was developed using the wavelet packet method. Rehabilitation sessions were applied twice a month for 5 months. Subsequently, motor and functional progress was assessed by comparing the fuzzy entropy of surface electromyography data against the results obtained from patients before rehabilitation and the mean results obtained from 10 healthy subjects. At the end of personalized rehabilitation, the patient group showed improvements in their facial symmetry and their ability to perform basic facial expressions and primary facial movements. Similarity in the pattern of fuzzy entropy for facial expressions between the patient group and healthy individuals increased. Synkinesis was detected during primary facial movements in the patient group, and one patient showed synkinesis during the happiness expression. Synkinesis in the lower face region of one of the patients was eliminated for the lid tightening movement. The recovery of emotional expressions after personalized rehabilitation was satisfactory to the patients. The assessment with complexity analysis of sEMG data can be used for developing new neurorehabilitation techniques and detecting synkinesis after full-face transplantation.
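    Fuzzy entropy, the complexity measure applied to the sEMG signals above, can be sketched as follows. The parameter choices (m = 2, r = 0.2 × SD, n = 2) are common defaults in the fuzzy-entropy literature, not necessarily those used in this study:

    ```python
    import numpy as np

    def fuzzy_entropy(x, m=2, r=0.2, n=2):
        """Fuzzy entropy of a 1-D signal (illustrative implementation).

        m: embedding dimension, r: tolerance (relative to signal SD),
        n: fuzzy membership exponent.
        """
        x = np.asarray(x, dtype=float)
        r = r * x.std()

        def phi(m):
            # Overlapping embedding vectors with each vector's own mean removed.
            N = len(x) - m
            vecs = np.array([x[i:i + m] for i in range(N)])
            vecs -= vecs.mean(axis=1, keepdims=True)
            # Chebyshev distance between all pairs of vectors.
            d = np.max(np.abs(vecs[:, None, :] - vecs[None, :, :]), axis=2)
            # Fuzzy membership: similarity decays smoothly with distance.
            sim = np.exp(-(d ** n) / r)
            # Average similarity, excluding self-matches on the diagonal.
            return (sim.sum() - N) / (N * (N - 1))

        return np.log(phi(m)) - np.log(phi(m + 1))

    rng = np.random.default_rng(1)
    regular = np.sin(np.linspace(0, 20 * np.pi, 500))  # predictable signal
    noisy = rng.standard_normal(500)                   # irregular signal
    print(fuzzy_entropy(regular), fuzzy_entropy(noisy))
    ```

    A regular signal yields a lower value than an irregular one, which is why the measure can track how a patient's muscle activation patterns converge toward those of healthy controls.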

  18. Facial Emotion Recognition in Bipolar Disorder and Healthy Aging.

    PubMed

    Altamura, Mario; Padalino, Flavia A; Stella, Eleonora; Balzotti, Angela; Bellomo, Antonello; Palumbo, Rocco; Di Domenico, Alberto; Mammarella, Nicola; Fairfield, Beth

    2016-03-01

    Emotional face recognition is impaired in bipolar disorder, but it is not clear whether this is specific for the illness. Here, we investigated how aging and bipolar disorder influence dynamic emotional face recognition. Twenty older adults, 16 bipolar patients, and 20 control subjects performed a dynamic affective facial recognition task and a subsequent rating task. Participants pressed a key as soon as they were able to discriminate whether the neutral face was assuming a happy or angry facial expression and then rated the intensity of each facial expression. Results showed that older adults recognized happy expressions faster, whereas bipolar patients recognized angry expressions faster. Furthermore, both groups rated emotional faces more intensely than did the control subjects. This study is one of the first to compare how aging and clinical conditions influence emotional facial recognition and underlines the need to consider the role of specific and common factors in emotional face recognition.

  19. Facial Diversity and Infant Preferences for Attractive Faces.

    ERIC Educational Resources Information Center

    Langlois, Judith H.; And Others

    1991-01-01

    Three studies examined infant preferences for attractive faces of White males, White females, Black females, and infants. Infants viewed pairs of faces rated for attractiveness by adults. Preferences for attractive faces were found for all facial types. (BC)

  20. The superficial temporal fat pad and its ramifications for temporalis muscle construction in facial approximation.

    PubMed

    Stephan, Carl N; Devine, Matthew

    2009-10-30

    The construction of the facial muscles (particularly those of mastication) is generally thought to enhance the accuracy of facial approximation methods because they increase attention paid to face anatomy. However, the lack of consideration for non-muscular structures of the face when using these "anatomical" methods ironically forces one of the two large masticatory muscles to be exaggerated beyond reality. To demonstrate and resolve this issue the temporal region of nineteen caucasoid human cadavers (10 females, 9 males; mean age=84 years, s=9 years, range=58-97 years) were investigated. Soft tissue depths were measured at regular intervals across the temporal fossa in 10 cadavers, and the thickness of the muscle and fat components quantified in nine other cadavers. The measurements indicated that the temporalis muscle generally accounts for <50% of the total soft tissue depth, and does not fill the entirety of the fossa (as generally known in the anatomical literature, but not as followed in facial approximation practice). In addition, a soft tissue bulge was consistently observed in the anteroinferior portion of the temporal fossa (as also evident in younger individuals), and during dissection, this bulge was found to closely correspond to the superficial temporal fat pad (STFP). Thus, the facial surface does not follow a simple undulating curve of the temporalis muscle as currently undertaken in facial approximation methods. New metric-based facial approximation guidelines are presented to facilitate accurate construction of the STFP and the temporalis muscle for future facial approximation casework. This study warrants further investigations of the temporalis muscle and the STFP in younger age groups and demonstrates that untested facial approximation guidelines, including those propounded to be anatomical, should be cautiously regarded.

  1. A longitudinal study of facial growth of Southern Chinese in Hong Kong: Comprehensive photogrammetric analyses.

    PubMed

    Wen, Yi Feng; Wong, Hai Ming; McGrath, Colman Patrick

    2017-01-01

    Existing studies on facial growth were mostly cross-sectional in nature and only a limited number of facial measurements were investigated. The purposes of this study were to longitudinally investigate facial growth of Chinese in Hong Kong from 12 through 15 to 18 years of age and to compare the magnitude of growth changes between genders. Standardized frontal and lateral facial photographs were taken from 266 (149 females and 117 males) and 265 (145 females and 120 males) participants, respectively, at all three age levels. Linear and angular measurements, profile inclinations, and proportion indices were recorded. Statistical analyses were performed to investigate growth changes of facial features. Comparisons were made between genders in terms of the magnitude of growth changes from ages 12 to 15, 15 to 18, and 12 to 18 years. For the overall face, all linear measurements increased significantly (p < 0.05) except for height of the lower profile in females (p = 0.069) and width of the face in males (p = 0.648). In both genders, the increase in height of eye fissure was around 10% (p < 0.001). There was a significant decrease in nasofrontal angle (p < 0.001) and an increase in nasofacial angle (p < 0.001) in both genders, and these changes were larger in males. Vermilion-total upper lip height index remained stable in females (p = 0.770) but increased in males (p = 0.020). Nasofrontal angle (effect size: 0.55) and lower vermilion contour index (effect size: 0.59) demonstrated a large gender difference in the amount of growth change from 12 to 18 years. Growth changes of facial features and gender differences in the magnitude of facial growth were determined. The findings may benefit different clinical specialties and other nonclinical fields where facial growth is of interest.
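    Photogrammetric angular measurements such as the nasofrontal angle reduce to angles between landmark vectors on a calibrated photograph. The landmark coordinates below are hypothetical, purely to illustrate the computation:

    ```python
    import numpy as np

    def angle_deg(a, vertex, b):
        """Angle (degrees) at `vertex` formed by points a and b, from 2-D landmarks."""
        u = np.asarray(a, float) - np.asarray(vertex, float)
        v = np.asarray(b, float) - np.asarray(vertex, float)
        cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

    # Hypothetical profile landmarks (x, y in pixels): glabella, nasion, pronasale.
    glabella, nasion, pronasale = (100, 60), (95, 90), (120, 130)
    # Nasofrontal angle: measured at the nasion, between forehead and nasal dorsum.
    print(f"nasofrontal angle: {angle_deg(glabella, nasion, pronasale):.1f} deg")
    ```

    The same helper covers any of the study's angular measurements once the relevant three landmarks are digitized.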

  2. Segmentation of human face using gradient-based approach

    NASA Astrophysics Data System (ADS)

    Baskan, Selin; Bulut, M. Mete; Atalay, Volkan

    2001-04-01

    This paper describes a method for automatic segmentation of facial features such as the eyebrows, eyes, nose, mouth, and ears in color images. This work is an initial step toward a wide range of feature-based applications, such as face recognition, lip-reading, gender estimation, facial expression analysis, etc. The human face can be characterized by its skin color and nearly elliptical shape. For this purpose, face detection is performed using color and shape information. Uniform illumination is assumed. No restrictions on glasses, make-up, beard, etc. are imposed. Facial features are extracted using the vertically and horizontally oriented gradient projections. The gradient of a minimum with respect to its neighbor maxima gives the boundaries of a facial feature. Each facial feature has a different horizontal characteristic. These characteristics are derived by extensive experimentation with many face images. Using fuzzy set theory, the similarity between the candidate and the feature characteristic under consideration is calculated. The gradient-based method is supplemented with anthropometric information for robustness. Ear detection is performed using contour-based shape descriptors. This method detects the facial features and circumscribes each facial feature with the smallest rectangle possible. The AR database is used for testing. The developed method is also suitable for real-time systems.
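    The gradient-projection idea can be sketched on a toy image. The fuzzy-matching and anthropometric steps of the paper are omitted here, and the image values are hypothetical:

    ```python
    import numpy as np

    def gradient_projections(gray):
        """Row/column projections of gradient magnitude for a grayscale face crop.

        Peaks in the row projection suggest the vertical positions of high-contrast
        features (eyebrows, eyes, mouth); the column projection within such a band
        can then bound a feature horizontally.
        """
        gy, gx = np.gradient(gray.astype(float))
        mag = np.hypot(gx, gy)
        row_proj = mag.sum(axis=1)  # one value per image row
        col_proj = mag.sum(axis=0)  # one value per image column
        return row_proj, col_proj

    # Toy "face": a flat image with two dark horizontal bars (eye and mouth lines).
    img = np.full((100, 80), 200.0)
    img[30:34, 15:65] = 40.0  # "eyes" band
    img[70:74, 25:55] = 60.0  # "mouth" band
    rows, cols = gradient_projections(img)
    print("strongest rows:", np.argsort(rows)[-4:])
    ```

    On a real photograph the projections are noisier, which is why the paper combines them with per-feature characteristics and anthropometric constraints.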

  3. A Single-Center Review of Facial Fractures as the Result of High-Speed Projectile Injuries

    PubMed Central

    Liu, Farrah C.; Halsey, Jordan N.; Hoppe, Ian C.; Ciminello, Frank S.; Lee, Edward S.; Granick, Mark S.

    2018-01-01

    Purpose: Gunshot injuries to the face that result in fractures of the underlying skeleton present a challenge in management. The goal of this study was to evaluate patterns of facial fractures as a result of gunshot injuries and strategies for management. Methods: A retrospective review of facial fractures resulting from gunshot injuries in a level 1 trauma center was performed for the years 2000 to 2012. Data were collected for patient demographics, fracture distribution, concomitant injuries, and surgical management strategies. Results: A total of 190 patients sustained facial fractures from a gunshot injury. The average age was 29.9 years, and 90% were male. Sixteen injuries were self-inflicted. The most common fractures were of the mandible and the orbit. Uncontrolled hemorrhage was noted on presentation in 68 patients; 100 patients were intubated on arrival. The average Glasgow Coma Scale score on arrival was 11.9. Concomitant injuries included skull fracture, intracranial hemorrhage, and intrathoracic injury. Surgical management was required in 89 patients. Nine patients required soft-tissue coverage. Thirty patients expired. Conclusion: Gunshot injuries to the face resulting in fractures of the underlying skeleton have high instances of morbidity and mortality. Life-threatening concomitant injuries can complicate management of facial fractures in this population. PMID:29713397

  4. Comparison of alpha- and beta-hydroxy acid chemical peels in the treatment of mild to moderately severe facial acne vulgaris.

    PubMed

    Kessler, Edward; Flanagan, Katherine; Chia, Christina; Rogers, Cynthia; Glaser, Dee Anna

    2008-01-01

    Chemical peels are used as adjuvants for treatment of facial acne. No well-controlled studies have compared alpha- and beta-hydroxy acid peels in the treatment of mild to moderately severe facial acne. To compare the efficacy of alpha- and beta-hydroxy acid chemical peels in the treatment of mild to moderately severe facial acne vulgaris. Twenty patients were recruited in this split-face, double-blind, randomized, controlled study. An alpha-hydroxy acid (30% glycolic acid) was applied to one-half of the face and a beta-hydroxy acid peel (30% salicylic acid) was applied contralaterally every 2 weeks for a total of six treatments. A blinded evaluator performed quantitative assessment of papules and pustules. Both chemical peels were significantly effective by the second treatment (p<.05) and there were no significant differences in effectiveness between the two peels. At 2 months posttreatment, the salicylic acid peel had sustained effectiveness. More adverse events were reported with the glycolic acid peel after the initial treatment. The glycolic acid and salicylic acid peels were similarly effective. The salicylic acid peel had sustained effectiveness and fewer side effects. Alpha- and beta-hydroxy acid peels both offer successful adjunctive treatment of facial acne vulgaris.

  5. Face or body? Oxytocin improves perception of emotions from facial expressions in incongruent emotional body context.

    PubMed

    Perry, Anat; Aviezer, Hillel; Goldstein, Pavel; Palgi, Sharon; Klein, Ehud; Shamay-Tsoory, Simone G

    2013-11-01

    The neuropeptide oxytocin (OT) has been repeatedly reported to play an essential role in the regulation of social cognition in humans in general, and specifically in enhancing the recognition of emotions from facial expressions. The latter was assessed in different paradigms that rely primarily on isolated and decontextualized emotional faces. However, recent evidence has indicated that the perception of basic facial expressions is not context invariant and can be categorically altered by context, especially body context, at early perceptual levels. Body context has a strong effect on our perception of emotional expressions, especially when the actual target face and the contextually expected face are perceptually similar. To examine whether and how OT affects emotion recognition, we investigated the role of OT in categorizing facial expressions in incongruent body contexts. Our results show that in the combined process of deciphering emotions from facial expressions and from context, OT gives an advantage to the face. This advantage is most evident when the target face and the contextually expected face are perceptually similar. Copyright © 2013 Elsevier Ltd. All rights reserved.

  6. Personality and facial morphology: Links to assertiveness and neuroticism in capuchins (Sapajus [Cebus] apella)

    PubMed Central

    Wilson, V.; Lefevre, C. E.; Morton, F. B.; Brosnan, S. F.; Paukner, A.; Bates, T. C.

    2013-01-01

    Personality has important links to health, social status, and life history outcomes (e.g. longevity and reproductive success). Human facial morphology appears to signal aspects of one’s personality to others, raising questions about the evolutionary origins of such associations (e.g. signals of mate quality). Studies in non-human primates may help to achieve this goal: for instance, facial width-to-height ratio (fWHR) in the male face has been associated with dominance not only in humans but also in capuchin monkeys. Here we test the association of personality (assertiveness, openness, attentiveness, neuroticism, and sociability) with fWHR, face width/lower-face height, and lower face/face height ratio in 64 capuchins (Sapajus apella). In a structural model of personality and facial metrics, fWHR was associated with assertiveness, while lower face/face height ratio was associated with neuroticism (erratic vs. stable behaviour) and attentiveness (helpfulness vs. distractibility). Facial morphology thus appears to associate with three personality domains, which may act as a signal of status in capuchins. PMID:24347756
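    The facial metrics above (fWHR and the lower face/face height ratio) are simple ratios of landmark distances. The landmark names and coordinates below are hypothetical, with fWHR taken as bizygomatic width over brow-to-upper-lip height, a common operationalization:

    ```python
    import numpy as np

    # Hypothetical 2-D landmark coordinates (x, y) in pixels; y grows downward.
    landmarks = {
        "zygion_left":  np.array([ 40.0, 120.0]),  # left cheekbone
        "zygion_right": np.array([200.0, 120.0]),  # right cheekbone
        "brow":         np.array([120.0,  80.0]),  # midpoint of the brows
        "upper_lip":    np.array([120.0, 190.0]),  # top of the upper lip
        "chin":         np.array([120.0, 250.0]),  # gnathion
    }

    def fwhr(lm):
        """Facial width-to-height ratio: bizygomatic width / brow-to-upper-lip height."""
        width = np.linalg.norm(lm["zygion_right"] - lm["zygion_left"])
        height = np.linalg.norm(lm["upper_lip"] - lm["brow"])
        return width / height

    def lower_face_to_face_height(lm):
        """Lower face (upper lip to chin) relative to total face height (brow to chin)."""
        lower = np.linalg.norm(lm["chin"] - lm["upper_lip"])
        total = np.linalg.norm(lm["chin"] - lm["brow"])
        return lower / total

    print(fwhr(landmarks), lower_face_to_face_height(landmarks))
    ```

    Given digitized landmarks for each animal or person, these ratios can then be entered into a structural model alongside personality scores, as in the study above.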

  7. Decoding facial blends of emotion: visual field, attentional and hemispheric biases.

    PubMed

    Ross, Elliott D; Shayya, Luay; Champlain, Amanda; Monnot, Marilee; Prodan, Calin I

    2013-12-01

    Most clinical research assumes that modulation of facial expressions is lateralized predominantly across the right-left hemiface. However, social psychological research suggests that facial expressions are organized predominantly across the upper-lower face. Because humans learn to cognitively control facial expression for social purposes, the lower face may display a false emotion, typically a smile, to enable approach behavior. In contrast, the upper face may leak a person's true feeling state by producing a brief facial blend of emotion, i.e. a different emotion on the upper versus lower face. Previous studies from our laboratory have shown that upper facial emotions are processed preferentially by the right hemisphere under conditions of directed attention if facial blends of emotion are presented tachistoscopically to the mid left and right visual fields. This paper explores how facial blends are processed within the four visual quadrants. The results, combined with our previous research, demonstrate that lower more so than upper facial emotions are perceived best when presented to the viewer's left and right visual fields just above the horizontal axis. Upper facial emotions are perceived best when presented to the viewer's left visual field just above the horizontal axis under conditions of directed attention. Thus, by gazing at a person's left ear, which also avoids the social stigma of eye-to-eye contact, one's ability to decode facial expressions should be enhanced. Published by Elsevier Inc.

  8. Reconstructing dynamic mental models of facial expressions in prosopagnosia reveals distinct representations for identity and expression.

    PubMed

    Richoz, Anne-Raphaëlle; Jack, Rachael E; Garrod, Oliver G B; Schyns, Philippe G; Caldara, Roberto

    2015-04-01

    The human face transmits a wealth of signals that readily provide crucial information for social interactions, such as facial identity and emotional expression. Yet, a fundamental question remains unresolved: does the face information for identity and emotional expression categorization tap into common or distinct representational systems? To address this question we tested PS, a pure case of acquired prosopagnosia with bilateral occipitotemporal lesions anatomically sparing the regions that are assumed to contribute to facial expression (de)coding (i.e., the amygdala, the insula and the posterior superior temporal sulcus--pSTS). We previously demonstrated that PS does not use information from the eye region to identify faces, but relies on the suboptimal mouth region. PS's abnormal information use for identity, coupled with her neural dissociation, provides a unique opportunity to probe the existence of a dichotomy in the face representational system. To reconstruct the mental models of the six basic facial expressions of emotion in PS and age-matched healthy observers, we used a novel reverse correlation technique tracking information use on dynamic faces. PS was comparable to controls, using all facial features to (de)code facial expressions with the exception of fear. PS's normal (de)coding of dynamic facial expressions suggests that the face system relies either on distinct representational systems for identity and expression, or dissociable cortical pathways to access them. Interestingly, PS showed a selective impairment for categorizing many static facial expressions, which could be accounted for by her lesion in the right inferior occipital gyrus. PS's advantage for dynamic facial expressions might instead relate to a functionally distinct and sufficient cortical pathway directly connecting the early visual cortex to the spared pSTS. 
Altogether, our data provide critical insights on the healthy and impaired face systems, question evidence of deficits obtained from patients by using static images of facial expressions, and offer novel routes for patient rehabilitation. Copyright © 2014 Elsevier Ltd. All rights reserved.

  9. Facial morphology and children's categorization of facial expressions of emotions: a comparison between Asian and Caucasian faces.

    PubMed

    Gosselin, P; Larocque, C

    2000-09-01

    The effects of Asian and Caucasian facial morphology were examined by having Canadian children categorize pictures of facial expressions of basic emotions. The pictures were selected from the Japanese and Caucasian Facial Expressions of Emotion set developed by D. Matsumoto and P. Ekman (1989). Sixty children between the ages of 5 and 10 years were presented with short stories and an array of facial expressions, and were asked to point to the expression that best depicted the specific emotion experienced by the characters. The results indicated that expressions of fear and surprise were better categorized from Asian faces, whereas expressions of disgust were better categorized from Caucasian faces. These differences originated in some specific confusions between expressions.

  10. Facial anatomy.

    PubMed

    Marur, Tania; Tuna, Yakup; Demirci, Selman

    2014-01-01

Dermatologic problems of the face affect both function and aesthetics, which are based on complex anatomical features. Treating dermatologic problems while preserving the aesthetics and functions of the face requires knowledge of normal anatomy. To perform invasive facial procedures successfully, it is essential to understand the underlying topographic anatomy. This chapter presents the anatomy of the facial musculature and neurovascular structures in a systematic way with some clinically important aspects. We describe the attachments of the mimetic and masticatory muscles and emphasize their functions and nerve supply. We highlight clinically relevant facial topographic anatomy by explaining the course and location of the sensory and motor nerves of the face and facial vasculature with their relations. Additionally, this chapter reviews the recent nomenclature of the branching pattern of the facial artery. © 2013 Elsevier Inc. All rights reserved.

  11. Functional connectivity between amygdala and facial regions involved in recognition of facial threat

    PubMed Central

    Harada, Tokiko; Ruffman, Ted; Sadato, Norihiro; Iidaka, Tetsuya

    2013-01-01

    The recognition of threatening faces is important for making social judgments. For example, threatening facial features of defendants could affect the decisions of jurors during a trial. Previous neuroimaging studies using faces of members of the general public have identified a pivotal role of the amygdala in perceiving threat. This functional magnetic resonance imaging study used face photographs of male prisoners who had been convicted of first-degree murder (MUR) as threatening facial stimuli. We compared the subjective ratings of MUR faces with those of control (CON) faces and examined how they were related to brain activation, particularly, the modulation of the functional connectivity between the amygdala and other brain regions. The MUR faces were perceived to be more threatening than the CON faces. The bilateral amygdala was shown to respond to both MUR and CON faces, but subtraction analysis revealed no significant difference between the two. Functional connectivity analysis indicated that the extent of connectivity between the left amygdala and the face-related regions (i.e. the superior temporal sulcus, inferior temporal gyrus and fusiform gyrus) was correlated with the subjective threat rating for the faces. We have demonstrated that the functional connectivity is modulated by vigilance for threatening facial features. PMID:22156740

  12. Face in profile view reduces perceived facial expression intensity: an eye-tracking study.

    PubMed

    Guo, Kun; Shaw, Heather

    2015-02-01

Recent studies measuring the facial expressions of emotion have focused primarily on the perception of frontal face images. As we frequently encounter expressive faces from different viewing angles, having a mechanism which allows invariant expression perception would be advantageous to our social interactions. Although a couple of studies have indicated comparable expression categorization accuracy across viewpoints, it is unknown how perceived expression intensity and associated gaze behaviour change across viewing angles. Differences could arise because diagnostic cues from local facial features for decoding expressions could vary with viewpoints. Here we manipulated orientation of faces (frontal, mid-profile, and profile view) displaying six common facial expressions of emotion, and measured participants' expression categorization accuracy, perceived expression intensity and associated gaze patterns. In comparison with frontal faces, profile faces slightly reduced identification rates for disgust and sad expressions, but significantly decreased perceived intensity for all tested expressions. Although viewpoint had a quantitative, expression-specific influence on the proportion of fixations directed at local facial features, the qualitative gaze distribution within facial features (e.g., the eyes tended to attract the highest proportion of fixations, followed by the nose and then the mouth region) was independent of viewpoint and expression type. Our results suggest that viewpoint-invariant facial expression processing reflects categorical perception, which could be linked to a viewpoint-invariant holistic gaze strategy for extracting expressive facial cues. Copyright © 2014 Elsevier B.V. All rights reserved.

  13. Anatomically accurate individual face modeling.

    PubMed

    Zhang, Yu; Prakash, Edmond C; Sung, Eric

    2003-01-01

This paper presents a new 3D face model of a specific person constructed from the anatomical perspective. By exploiting the laser range data, a 3D facial mesh precisely representing the skin geometry is reconstructed. Based on the geometric facial mesh, we develop a deformable multi-layer skin model. It takes into account the nonlinear stress-strain relationship and dynamically simulates the non-homogeneous behavior of the real skin. The face model also incorporates a set of anatomically-motivated facial muscle actuators and underlying skull structure. Lagrangian mechanics governs the facial motion dynamics, dictating the dynamic deformation of facial skin in response to the muscle contraction.
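Dynamic skin deformation of the kind this record describes is commonly simulated as a damped mass-spring system integrated over time. A deliberately minimal one-dimensional sketch of a single skin node pulled by a constant "muscle" force; all parameter values are illustrative, not from the paper:

```python
def step(x, v, rest, k, damping, mass, force, dt):
    """One semi-implicit Euler step of a damped spring node:
    mass * a = -k * (x - rest) - damping * v + external force."""
    a = (-k * (x - rest) - damping * v + force) / mass
    v_new = v + a * dt          # update velocity first (symplectic Euler)
    x_new = x + v_new * dt      # then position, using the new velocity
    return x_new, v_new

# A node under a constant muscle force settles at rest + force/k:
x, v = 0.0, 0.0
for _ in range(5000):  # simulate 5 s at dt = 1 ms
    x, v = step(x, v, rest=0.0, k=40.0, damping=2.0,
                mass=0.01, force=2.0, dt=0.001)
print(round(x, 3))  # 0.05  (equilibrium displacement force/k)
```

A full face model applies this per vertex in 3D, with layered springs for the skin strata and nonlinear spring laws for the stress-strain behavior the abstract mentions; the integration loop has the same shape.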

  14. The Dartmouth Database of Children’s Faces: Acquisition and Validation of a New Face Stimulus Set

    PubMed Central

    Dalrymple, Kirsten A.; Gomez, Jesse; Duchaine, Brad

    2013-01-01

Facial identity and expression play critical roles in our social lives. Faces are therefore frequently used as stimuli in a variety of areas of scientific research. Although several extensive and well-controlled databases of adult faces exist, few databases include children’s faces. Here we present the Dartmouth Database of Children’s Faces, a set of photographs of 40 male and 40 female Caucasian children between 6 and 16 years of age. Models posed eight facial expressions and were photographed from five camera angles under two lighting conditions. Models wore black hats and black gowns to minimize extra-facial variables. To validate the images, independent raters identified facial expressions, rated their intensity, and provided an age estimate for each model. The Dartmouth Database of Children’s Faces is freely available for research purposes and can be downloaded by contacting the corresponding author by email. PMID:24244434

  15. [Slowing down the flow of facial information enhances facial scanning in children with autism spectrum disorders: A pilot eye tracking study].

    PubMed

    Charrier, A; Tardif, C; Gepner, B

    2017-02-01

Face and gaze avoidance are among the most characteristic and salient symptoms of autism spectrum disorders (ASD). Studies using eye tracking highlighted early and lifelong ASD-specific abnormalities in attention to face such as decreased attention to internal facial features. These specificities could be partly explained by disorders in the perception and integration of rapid and complex information such as that conveyed by facial movements and more broadly by biological and physical environment. Therefore, we wish to test whether slowing down facial dynamics may improve the way children with ASD attend to a face. We used an eye tracking method to examine gaze patterns of children with ASD aged 3 to 8 (n=23) and TD controls (n=29) while viewing the face of a speaker telling a story. The story was divided into 6 sequences that were randomly displayed at 3 different speeds, i.e. a real-time speed (RT), a slow speed (S70=70% of RT speed), a very slow speed (S50=50% of RT speed). S70 and S50 were displayed using software called Logiral™, aimed at slowing down visual and auditory stimuli simultaneously and without tone distortion. The visual scene was divided into four regions of interest (ROI): eyes region; mouth region; whole face region; outside the face region. The total time, number and mean duration of visual fixations on the whole visual scene and the four ROI were measured between and within the two groups. Compared to TD children, children with ASD spent significantly less time attending to the visual scenes and, when they looked at the scene, they spent less time scanning the speaker's face in general and her mouth in particular, and more time looking outside facial area. Within the ASD group, mean duration of fixation increased on the whole scene and particularly on the mouth area, in S50 compared to RT. 
Children with mild autism spent more time looking at the face than the two other groups of ASD children, and spent more time attending to the face and mouth, as well as showing longer mean durations of visual fixation on the mouth and eyes, at slow speeds (S50 and/or S70) than at RT. Slowing down facial dynamics enhances looking time on the face, and particularly on the mouth and/or eyes, in a group of 23 children with ASD and particularly in a small subgroup with mild autism. Given the crucial role of reading the eyes for emotional processing and that of lip-reading for language processing, our present result and other converging ones could pave the way for novel socio-emotional and verbal rehabilitation methods for the autistic population. Further studies should investigate whether increased attention to the face, and particularly the eyes and mouth, is correlated with emotional/social and/or verbal/language improvements. Copyright © 2016 L'Encéphale, Paris. Published by Elsevier Masson SAS. All rights reserved.
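The fixation measures reported in this record (total time, number, and mean duration of fixations per ROI) reduce to a simple aggregation over labeled fixation events. A minimal sketch, assuming fixations have already been assigned to ROIs; the data and helper names are hypothetical:

```python
from collections import defaultdict

def roi_stats(fixations):
    """Aggregate (roi, duration_ms) fixation events into total looking
    time, fixation count, and mean fixation duration per ROI."""
    totals, counts = defaultdict(float), defaultdict(int)
    for roi, duration_ms in fixations:
        totals[roi] += duration_ms
        counts[roi] += 1
    return {roi: {"total_ms": totals[roi],
                  "count": counts[roi],
                  "mean_ms": totals[roi] / counts[roi]}
            for roi in totals}

# Illustrative fixation log for one trial (ROI label, duration in ms):
fix = [("eyes", 220), ("mouth", 180), ("eyes", 260), ("outside", 90)]
stats = roi_stats(fix)
print(stats["eyes"])  # {'total_ms': 480.0, 'count': 2, 'mean_ms': 240.0}
```

Group comparisons like those in the study would then run statistics over these per-trial, per-ROI summaries across participants and speed conditions.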

  16. Holistic Processing of Static and Moving Faces

    ERIC Educational Resources Information Center

    Zhao, Mintao; Bülthoff, Isabelle

    2017-01-01

    Humans' face ability develops and matures with extensive experience in perceiving, recognizing, and interacting with faces that move most of the time. However, how facial movements affect 1 core aspect of face ability--holistic face processing--remains unclear. Here we investigated the influence of rigid facial motion on holistic and part-based…

  17. The Impact of Face Skin Tone vs. Face Symmetry on Perceived Facial Attractiveness.

    PubMed

    Vera Cruz, Germano

    2018-01-01

    The purpose of this study was to assess and compare the relative contribution of skin tone and symmetry on judgment of attractiveness regarding female faces. Two hundred and fifteen Mozambican adults were presented with a set of faces, and instructed to rate their degree of attractiveness along a continuous scale. Chi-square, factorial weight analyses and ANOVA were used to analyze the data. Face skin tone had a significant impact on the participants' attractiveness judgment of target faces. However, the target face skin tone contribution to the participants' attractiveness judgment (5% of the total variance) was much weaker than the contribution of the target face symmetry (85% of the total variance). These results imply that skin bleaching, common among Black people across sub-Saharan African countries, is not only dangerous to the health of those who practice it, but it is unlikely to make them appear much more attractive.

  18. Top-down guidance in visual search for facial expressions.

    PubMed

    Hahn, Sowon; Gronlund, Scott D

    2007-02-01

    Using a visual search paradigm, we investigated how a top-down goal modified attentional bias for threatening facial expressions. In two experiments, participants searched for a facial expression either based on stimulus characteristics or a top-down goal. In Experiment 1 participants searched for a discrepant facial expression in a homogenous crowd of faces. Consistent with previous research, we obtained a shallower response time (RT) slope when the target face was angry than when it was happy. In Experiment 2, participants searched for a specific type of facial expression (allowing a top-down goal). When the display included a target, we found a shallower RT slope for the angry than for the happy face search. However, when an angry or happy face was present in the display in opposition to the task goal, we obtained equivalent RT slopes, suggesting that the mere presence of an angry face in opposition to the task goal did not support the well-known angry face superiority effect. Furthermore, RT distribution analyses supported the special status of an angry face only when it was combined with the top-down goal. On the basis of these results, we suggest that a threatening facial expression may guide attention as a high-priority stimulus in the absence of a specific goal; however, in the presence of a specific goal, the efficiency of facial expression search is dependent on the combined influence of a top-down goal and the stimulus characteristics.

  19. Anthropometric Analysis of the Face.

    PubMed

    Zacharopoulos, Georgios V; Manios, Andreas; Kau, Chung H; Velagrakis, George; Tzanakakis, George N; de Bree, Eelco

    2016-01-01

Facial anthropometric analysis is essential for planning cosmetic and reconstructive facial surgery, but has not been available in detail for modern Greeks. In this study, multiple measurements of the face were performed on young Greek males and females to provide a complete facial anthropometric profile of this population and to compare its facial morphology with that of North American Caucasians. Thirty-one direct facial anthropometric measurements were obtained from 152 Greek students. Moreover, the prevalence of the various face types was determined. The resulting data were compared with those published regarding North American Caucasians. A complete set of average anthropometric data was obtained for each sex. Greek males, when compared to Greek females, were found to have statistically significantly longer foreheads as well as greater values in morphologic face height, mandible width, maxillary surface arc distance, and mandibular surface arc distance. In both sexes, the most common face types were mesoprosopic, leptoprosopic, and hyperleptoprosopic. Greek males had significantly wider faces and mandibles than the North American Caucasian males, whereas Greek females had only significantly wider mandibles than their North American counterparts. Statistically significant differences were noted in the head and face regions between sexes, as well as between Greeks and North American Caucasians. By establishing facial norms for Greek adults, this study contributes to the preoperative planning and postoperative evaluation of Greek patients undergoing facial reconstructive and aesthetic surgery.

  20. Misinterpretation of Facial Expressions of Emotion in Verbal Adults with Autism Spectrum Disorder

    PubMed Central

    Eack, Shaun M.; MAZEFSKY, CARLA A.; Minshew, Nancy J.

    2014-01-01

    Facial emotion perception is significantly affected in autism spectrum disorder (ASD), yet little is known about how individuals with ASD misinterpret facial expressions that result in their difficulty in accurately recognizing emotion in faces. This study examined facial emotion perception in 45 verbal adults with ASD and 30 age- and gender-matched volunteers without ASD to identify patterns of emotion misinterpretation during face processing that contribute to emotion recognition impairments in autism. Results revealed that difficulty distinguishing emotional from neutral facial expressions characterized much of the emotion perception impairments exhibited by participants with ASD. In particular, adults with ASD uniquely misinterpreted happy faces as neutral, and were significantly more likely than typical volunteers to attribute negative valence to non-emotional faces. The over-attribution of emotions to neutral faces was significantly related to greater communication and emotional intelligence impairments in individuals with ASD. These findings suggest a potential negative bias toward the interpretation of facial expressions and may have implications for interventions designed to remediate emotion perception in ASD. PMID:24535689

  1. More than mere mimicry? The influence of emotion on rapid facial reactions to faces.

    PubMed

    Moody, Eric J; McIntosh, Daniel N; Mann, Laura J; Weisser, Kimberly R

    2007-05-01

    Within a second of seeing an emotional facial expression, people typically match that expression. These rapid facial reactions (RFRs), often termed mimicry, are implicated in emotional contagion, social perception, and embodied affect, yet ambiguity remains regarding the mechanism(s) involved. Two studies evaluated whether RFRs to faces are solely nonaffective motor responses or whether emotional processes are involved. Brow (corrugator, related to anger) and forehead (frontalis, related to fear) activity were recorded using facial electromyography (EMG) while undergraduates in two conditions (fear induction vs. neutral) viewed fear, anger, and neutral facial expressions. As predicted, fear induction increased fear expressions to angry faces within 1000 ms of exposure, demonstrating an emotional component of RFRs. This did not merely reflect increased fear from the induction, because responses to neutral faces were unaffected. Considering RFRs to be merely nonaffective automatic reactions is inaccurate. RFRs are not purely motor mimicry; emotion influences early facial responses to faces. The relevance of these data to emotional contagion, autism, and the mirror system-based perspectives on imitation is discussed.

  2. Misinterpretation of facial expressions of emotion in verbal adults with autism spectrum disorder.

    PubMed

    Eack, Shaun M; Mazefsky, Carla A; Minshew, Nancy J

    2015-04-01

    Facial emotion perception is significantly affected in autism spectrum disorder, yet little is known about how individuals with autism spectrum disorder misinterpret facial expressions that result in their difficulty in accurately recognizing emotion in faces. This study examined facial emotion perception in 45 verbal adults with autism spectrum disorder and 30 age- and gender-matched volunteers without autism spectrum disorder to identify patterns of emotion misinterpretation during face processing that contribute to emotion recognition impairments in autism. Results revealed that difficulty distinguishing emotional from neutral facial expressions characterized much of the emotion perception impairments exhibited by participants with autism spectrum disorder. In particular, adults with autism spectrum disorder uniquely misinterpreted happy faces as neutral, and were significantly more likely than typical volunteers to attribute negative valence to nonemotional faces. The over-attribution of emotions to neutral faces was significantly related to greater communication and emotional intelligence impairments in individuals with autism spectrum disorder. These findings suggest a potential negative bias toward the interpretation of facial expressions and may have implications for interventions designed to remediate emotion perception in autism spectrum disorder. © The Author(s) 2014.

  3. Does my face FIT?: a face image task reveals structure and distortions of facial feature representation.

    PubMed

    Fuentes, Christina T; Runa, Catarina; Blanco, Xenxo Alvarez; Orvalho, Verónica; Haggard, Patrick

    2013-01-01

    Despite extensive research on face perception, few studies have investigated individuals' knowledge about the physical features of their own face. In this study, 50 participants indicated the location of key features of their own face, relative to an anchor point corresponding to the tip of the nose, and the results were compared to the true location of the same individual's features from a standardised photograph. Horizontal and vertical errors were analysed separately. An overall bias to underestimate vertical distances revealed a distorted face representation, with reduced face height. Factor analyses were used to identify separable subconfigurations of facial features with correlated localisation errors. Independent representations of upper and lower facial features emerged from the data pattern. The major source of variation across individuals was in representation of face shape, with a spectrum from tall/thin to short/wide representation. Visual identification of one's own face is excellent, and facial features are routinely used for establishing personal identity. However, our results show that spatial knowledge of one's own face is remarkably poor, suggesting that face representation may not contribute strongly to self-awareness.

  4. Combination of Face Regions in Forensic Scenarios.

    PubMed

    Tome, Pedro; Fierrez, Julian; Vera-Rodriguez, Ruben; Ortega-Garcia, Javier

    2015-07-01

This article presents an experimental analysis of the combination of different regions of the human face in various forensic scenarios, to generate scientific knowledge useful for forensic experts. Three scenarios of interest at different distances are considered, comparing mugshot and CCTV face images using the MORPH and SCface databases. One of the main findings is that inner facial regions combine better in mugshot and close CCTV scenarios and outer facial regions combine better in far CCTV scenarios. This means that, depending on the acquisition distance, the discriminative power of the facial regions changes, in some cases outperforming the full face. This effect can be exploited by fusing facial regions, which results in a very significant improvement of the discriminative performance compared to just using the full face. © 2015 American Academy of Forensic Sciences.
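Combining facial regions in biometric systems is typically done at the score level: each region's matcher produces a similarity score, the scores are normalized to a common range, and a weighted sum yields the fused score. A minimal sketch of that general idea; the weights and scores below are illustrative, not values from the study:

```python
def min_max_normalize(score, lo, hi):
    """Map a raw matcher score into [0, 1] given its observed range."""
    return (score - lo) / (hi - lo)

def fuse_scores(region_scores, weights):
    """Weighted-sum fusion of per-region similarity scores in [0, 1].
    A higher fused score means stronger support for a match."""
    assert len(region_scores) == len(weights)
    total_w = sum(weights)
    return sum(s * w for s, w in zip(region_scores, weights)) / total_w

# Illustrative normalized scores for three facial-region matchers
# (e.g. eyes, nose, mouth) with hypothetical region weights:
scores = [0.8, 0.6, 0.9]
weights = [0.5, 0.2, 0.3]
print(round(fuse_scores(scores, weights), 2))  # 0.79
```

The paper's finding maps naturally onto the weights: in close-range scenarios inner regions would receive more weight, and in far CCTV scenarios outer regions would.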

  5. Rigid Facial Motion Influences Featural, But Not Holistic, Face Processing

    PubMed Central

    Xiao, Naiqi; Quinn, Paul C.; Ge, Liezhong; Lee, Kang

    2012-01-01

We report three experiments in which we investigated the effect of rigid facial motion on face processing. Specifically, we used the face composite effect to examine whether rigid facial motion influences primarily featural or holistic processing of faces. In Experiments 1, 2, and 3, participants were first familiarized with dynamic displays in which a target face turned from one side to another; then at test, participants judged whether the top half of a composite face (the top half of the target face aligned or misaligned with the bottom half of a foil face) belonged to the target face. We compared performance in the dynamic condition to various static control conditions in Experiments 1, 2, and 3, which differed from each other in terms of the display order of the multiple static images or the interstimulus interval (ISI) between the images. We found that the size of the face composite effect in the dynamic condition was significantly smaller than that in the static conditions. In other words, the dynamic face display influenced participants to process the target faces in a part-based manner, and their recognition of the upper portion of the composite face at test was consequently less affected by the aligned lower part of the foil face. The findings from the present experiments provide the strongest evidence to date to suggest that rigid facial motion mainly influences facial featural, but not holistic, processing. PMID:22342561

  6. Facial dynamics and emotional expressions in facial aging treatments.

    PubMed

    Michaud, Thierry; Gassia, Véronique; Belhaouari, Lakhdar

    2015-03-01

Facial expressions convey emotions that form the foundation of interpersonal relationships, and many of these emotions promote and regulate our social linkages. Hence, the analysis of facial aging symptoms and the treatment plan must include knowledge of the facial dynamics and the emotional expressions of the face. This approach aims to more closely meet patients' expectations of natural-looking results, by correcting age-related negative expressions while observing the emotional language of the face. This article will successively describe patients' expectations, the role of facial expressions in relational dynamics, the relationship between facial structures and facial expressions, and the way facial aging mimics negative expressions. Finally, therapeutic implications for facial aging treatment will be addressed. © 2015 Wiley Periodicals, Inc.

  7. Chimpanzees (Pan troglodytes) Produce the Same Types of 'Laugh Faces' when They Emit Laughter and when They Are Silent.

    PubMed

    Davila-Ross, Marina; Jesus, Goncalo; Osborne, Jade; Bard, Kim A

    2015-01-01

The ability to flexibly produce facial expressions and vocalizations has a strong impact on the way humans communicate, as it promotes more explicit and versatile forms of communication. Whereas facial expressions and vocalizations are unarguably closely linked in primates, the extent to which these expressions can be produced independently in nonhuman primates is unknown. The present work, thus, examined if chimpanzees produce the same types of facial expressions with and without accompanying vocalizations, as do humans. Forty-six chimpanzees (Pan troglodytes) were video-recorded during spontaneous play with conspecifics at the Chimfunshi Wildlife Orphanage. ChimpFACS, a standardized coding system for measuring chimpanzee facial movements based on the FACS developed for humans, was applied. Data showed that the chimpanzees produced the same 14 configurations of open-mouth faces when laugh sounds were present and when they were absent. Chimpanzees, thus, produce these facial expressions flexibly without being morphologically constrained by the accompanying vocalizations. Furthermore, the data indicated that the facial expression plus vocalization and the facial expression alone were used differently in social play, i.e., when in physical contact with the playmates and when matching the playmates' open-mouth faces. These findings provide empirical evidence that chimpanzees produce distinctive facial expressions independently from a vocalization, and that their multimodal use affects communicative meaning, important traits for a more explicit and versatile way of communication. As it is still uncertain how human laugh faces evolved, the ChimpFACS data were also used to empirically examine the evolutionary relation between open-mouth faces with laugh sounds of chimpanzees and laugh faces of humans. The ChimpFACS results revealed that laugh faces of humans must have gradually emerged from laughing open-mouth faces of ancestral apes. 
This work examines the main evolutionary changes of laugh faces since the last common ancestor of chimpanzees and humans.

  8. Importance of the brow in facial expressiveness during human communication.

    PubMed

    Neely, John Gail; Lisker, Paul; Drapekin, Jesse

    2014-03-01

The objective of this study was to evaluate laterality and upper/lower face dominance of expressiveness during prescribed speech using a unique validated image subtraction system capable of sensitive and reliable measurement of facial surface deformation. Observations and experiments of central control of facial expressions during speech and social utterances in humans and animals suggest that the right mouth moves more than the left during nonemotional speech. However, proficient lip readers seem to attend to the whole face to interpret meaning from expressed facial cues, also implicating a horizontal (upper face-lower face) axis. Prospective experimental design; experimental maneuver: recited speech; outcome measure: image-subtraction strength-duration curve amplitude. Thirty normal human adults were evaluated during memorized nonemotional recitation of 2 short sentences. Facial movements were assessed using a video-image subtraction system capable of simultaneously measuring upper and lower specific areas of each hemiface. The results demonstrate that both axes influence facial expressiveness in human communication; however, the horizontal axis (upper versus lower face) would appear dominant, especially during what would appear to be spontaneous breakthrough unplanned expressiveness. These data are congruent with the concept that the left cerebral hemisphere has control over nonemotionally stimulated speech; however, the multisynaptic brainstem extrapyramidal pathways may override hemiface laterality and preferentially take control of the upper face. Additionally, these data demonstrate the importance of the often-ignored brow in facial expressiveness. Experimental study; EBM levels not applicable.

  9. Beauty hinders attention switch in change detection: the role of facial attractiveness and distinctiveness.

    PubMed

    Chen, Wenfeng; Liu, Chang Hong; Nakabayashi, Kazuyo

    2012-01-01

Recent research has shown that the presence of a task-irrelevant attractive face can induce a transient diversion of attention from a perceptual task that requires covert deployment of attention to one of two locations. However, it is not known whether this spontaneous appraisal of facial beauty also modulates attention in change detection among multiple locations, where a slower and more controlled search process is simultaneously affected by the magnitude of a change and by facial distinctiveness. Using the flicker paradigm, this study examines how spontaneous appraisal of facial beauty affects the detection of identity change among multiple faces. Participants viewed a display consisting of two alternating frames of four faces separated by a blank frame. In half of the trials, one of the faces (the target face) changed to a different person. The task of the participant was to indicate whether a change of face identity had occurred. The results showed that (1) observers were less efficient at detecting identity change among multiple attractive faces relative to unattractive faces when the target and distractor faces were not highly distinctive from one another; and (2) it was difficult to detect a change when the new face was similar to the old. The findings suggest that attractive faces may interfere with the attention-switch process in change detection. The results also show that attention in change detection was strongly modulated by physical similarity between the alternating faces. Although facial beauty is a powerful stimulus with well-demonstrated priority, its influence on change detection is easily superseded by low-level image similarity. The visual system appears to take a different approach to facial beauty when a task requires resource-demanding feature comparisons.
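The alternating-display procedure of the flicker paradigm can be sketched as a simple trial generator. The cycle count and the 500/100 ms timings below are illustrative assumptions, not the timings used in this study:

```python
def flicker_trial(frame_a, frame_b, n_cycles=10,
                  display_ms=500, blank_ms=100):
    """Sketch of one flicker-paradigm trial: two displays
    alternate, separated by blank frames, until the observer
    responds or the cycles run out (timings are assumptions).
    Yields (stimulus, duration_ms) tuples in presentation order.
    """
    for _ in range(n_cycles):
        yield frame_a, display_ms
        yield "blank", blank_ms
        yield frame_b, display_ms
        yield "blank", blank_ms
```

On change trials, `frame_b` would differ from `frame_a` in one face identity; on no-change trials the two frames would be identical.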

  10. Blend Shape Interpolation and FACS for Realistic Avatar

    NASA Astrophysics Data System (ADS)

    Alkawaz, Mohammed Hazim; Mohamad, Dzulkifli; Basori, Ahmad Hoirul; Saba, Tanzila

    2015-03-01

The quest to develop realistic facial animation is ever-growing. The emergence of sophisticated algorithms, new graphical user interfaces, laser scans and advanced 3D tools has given further impetus to the rapid advancement of complex virtual human facial models. Since face-to-face communication is the most natural form of human interaction, facial animation systems have become attractive for sundry applications in the information technology era. The production of computer-animated movies using synthetic actors remains a challenging problem. A facial expression carries the signature of happiness, sadness, anger, cheerfulness, etc.; the mood of a particular person in the midst of a large group can be identified immediately via very subtle changes in facial expression. Facial expressions, a complex and important nonverbal communication channel, are tricky to synthesize realistically using computer graphics: computer synthesis of practical facial expressions must deal with both the geometric representation of the human face and the control of the facial animation. We developed a new approach that integrates blend shape interpolation (BSI) and the facial action coding system (FACS) to create a realistic and expressive computer facial animation design. BSI is used to generate the natural face, while FACS is employed to reflect the exact facial muscle movements, with high fidelity, for four basic emotional expressions: anger, happiness, sadness and fear. The results in perceiving realistic facial expressions for virtual human emotions, based on facial skin color and texture, may contribute to the development of virtual reality and game environments in computer-aided graphics animation systems.
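The BSI component can be illustrated with a minimal linear blend-shape sketch. The array shapes and the per-action-unit displacement encoding are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def blend_shapes(neutral, deltas, weights):
    """Minimal linear blend-shape interpolation sketch.

    neutral: (V, 3) array of resting-face vertex positions.
    deltas:  (K, V, 3) per-expression vertex displacements
             (e.g. one target per FACS action unit).
    weights: (K,) activation levels, typically in [0, 1].
    """
    neutral = np.asarray(neutral, dtype=float)
    deltas = np.asarray(deltas, dtype=float)
    w = np.asarray(weights, dtype=float)
    # Each vertex moves by the weighted sum of its displacements.
    return neutral + np.tensordot(w, deltas, axes=1)
```

Driving the weights from FACS action-unit activations for an emotion (e.g. raising brow and mouth-corner units for "happy") yields the corresponding expression mesh.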

  11. Is moral beauty different from facial beauty? Evidence from an fMRI study

    PubMed Central

    Wang, Tingting; Mo, Ce; Tan, Li Hai; Cant, Jonathan S.; Zhong, Luojin; Cupchik, Gerald

    2015-01-01

    Is moral beauty different from facial beauty? Two functional magnetic resonance imaging experiments were performed to answer this question. Experiment 1 investigated the network of moral aesthetic judgments and facial aesthetic judgments. Participants performed aesthetic judgments and gender judgments on both faces and scenes containing moral acts. The conjunction analysis of the contrasts ‘facial aesthetic judgment > facial gender judgment’ and ‘scene moral aesthetic judgment > scene gender judgment’ identified the common involvement of the orbitofrontal cortex (OFC), inferior temporal gyrus and medial superior frontal gyrus, suggesting that both types of aesthetic judgments are based on the orchestration of perceptual, emotional and cognitive components. Experiment 2 examined the network of facial beauty and moral beauty during implicit perception. Participants performed a non-aesthetic judgment task on both faces (beautiful vs common) and scenes (containing morally beautiful vs neutral information). We observed that facial beauty (beautiful faces > common faces) involved both the cortical reward region OFC and the subcortical reward region putamen, whereas moral beauty (moral beauty scenes > moral neutral scenes) only involved the OFC. Moreover, compared with facial beauty, moral beauty spanned a larger-scale cortical network, indicating more advanced and complex cerebral representations characterizing moral beauty. PMID:25298010
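The conjunction analysis of the two contrasts can be sketched with the minimum-statistic approach, i.e. keeping only voxels that survive threshold in both contrast maps. The toy t-maps and threshold value here are assumptions, not the study's actual statistics:

```python
import numpy as np

def conjunction_mask(tmap1, tmap2, threshold):
    """Sketch of a conjunction analysis: a voxel is in the
    conjunction if it exceeds threshold in BOTH contrast maps,
    equivalently if the voxelwise minimum statistic does.
    """
    t1 = np.asarray(tmap1, dtype=float)
    t2 = np.asarray(tmap2, dtype=float)
    return np.minimum(t1, t2) > threshold
```

Applied to the 'facial aesthetic judgment > facial gender judgment' and 'scene moral aesthetic judgment > scene gender judgment' maps, such a mask would pick out regions like the OFC that are common to both judgments.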

  12. Subliminal Face Emotion Processing: A Comparison of Fearful and Disgusted Faces

    PubMed Central

    Khalid, Shah; Ansorge, Ulrich

    2017-01-01

    Prior research has provided evidence for (1) subcortical processing of subliminal facial expressions of emotion and (2) for the emotion-specificity of these processes. Here, we investigated if this is also true for the processing of the subliminal facial display of disgust. In Experiment 1, we used differently filtered masked prime faces portraying emotionally neutral or disgusted expressions presented prior to clearly visible target faces to test if the masked primes exerted an influence on target processing nonetheless. Whereas we found evidence for subliminal face congruence or priming effects, in particular, reverse priming by low spatial frequencies disgusted face primes, we did not find any support for a subcortical origin of the effect. In Experiment 2, we compared the influence of subliminal disgusted faces with that of subliminal fearful faces and demonstrated a behavioral performance difference between the two, pointing to an emotion-specific processing of the disgusted facial expressions. In both experiments, we also tested for the dependence of the subliminal emotional face processing on spatial attention – with mixed results, suggesting an attention-independence in Experiment 1 but not in Experiment 2 –, and we found perfect masking of the face primes – that is, proof of the subliminality of the prime faces. Based on our findings, we speculate that subliminal facial expressions of disgust could afford easy avoidance of these faces. This could be a unique effect of disgusted faces as compared to other emotional facial displays, at least under the conditions studied here. PMID:28680413

  13. Subliminal Face Emotion Processing: A Comparison of Fearful and Disgusted Faces.

    PubMed

    Khalid, Shah; Ansorge, Ulrich

    2017-01-01

    Prior research has provided evidence for (1) subcortical processing of subliminal facial expressions of emotion and (2) for the emotion-specificity of these processes. Here, we investigated if this is also true for the processing of the subliminal facial display of disgust. In Experiment 1, we used differently filtered masked prime faces portraying emotionally neutral or disgusted expressions presented prior to clearly visible target faces to test if the masked primes exerted an influence on target processing nonetheless. Whereas we found evidence for subliminal face congruence or priming effects, in particular, reverse priming by low spatial frequencies disgusted face primes, we did not find any support for a subcortical origin of the effect. In Experiment 2, we compared the influence of subliminal disgusted faces with that of subliminal fearful faces and demonstrated a behavioral performance difference between the two, pointing to an emotion-specific processing of the disgusted facial expressions. In both experiments, we also tested for the dependence of the subliminal emotional face processing on spatial attention - with mixed results, suggesting an attention-independence in Experiment 1 but not in Experiment 2 -, and we found perfect masking of the face primes - that is, proof of the subliminality of the prime faces. Based on our findings, we speculate that subliminal facial expressions of disgust could afford easy avoidance of these faces. This could be a unique effect of disgusted faces as compared to other emotional facial displays, at least under the conditions studied here.

  14. Children's Facial Trustworthiness Judgments: Agreement and Relationship with Facial Attractiveness.

    PubMed

    Ma, Fengling; Xu, Fen; Luo, Xianming

    2016-01-01

This study examined developmental changes in children's abilities to make trustworthiness judgments based on faces and the relationship between a child's perception of trustworthiness and facial attractiveness. One hundred and one 8-, 10-, and 12-year-olds, along with 37 undergraduates, were asked to judge the trustworthiness of 200 faces. Next, they made facial attractiveness judgments. The results indicated that children made consistent trustworthiness and attractiveness judgments based on facial appearance, but agreement with adults and within-age agreement on these judgments increased with age. Additionally, agreement levels were higher for judgments made by girls than by boys. Furthermore, the correlation between trustworthiness and attractiveness judgments increased with age and was closer for girls than for boys. These findings suggest that face-based trait judgment ability develops throughout childhood and that, like adults, children may use facial attractiveness as a heuristic cue that signals a stranger's trustworthiness.

  15. Eigen-disfigurement model for simulating plausible facial disfigurement after reconstructive surgery.

    PubMed

    Lee, Juhun; Fingeret, Michelle C; Bovik, Alan C; Reece, Gregory P; Skoracki, Roman J; Hanasono, Matthew M; Markey, Mia K

    2015-03-27

Patients with facial cancers can experience disfigurement as they may undergo considerable appearance changes from their illness and its treatment. Individuals with difficulties adjusting to facial cancer are concerned about how others perceive and evaluate their appearance. Therefore, it is important to understand how humans perceive disfigured faces. We describe a new strategy for simulating surgically plausible facial disfigurement on a novel face in order to elucidate human perception of facial disfigurement. Longitudinal 3D facial images of patients (N = 17) with facial disfigurement due to cancer treatment were replicated on a facial mannequin model by applying Thin-Plate Spline (TPS) warping and linear interpolation to the mannequin model in polar coordinates. Principal Component Analysis (PCA) was used to capture the longitudinal structural and textural variations found within each patient arising from the treatment; we treated such variations as the disfigurement. Each disfigurement was smoothly stitched onto a healthy face by seeking a Poisson solution to guided interpolation, using the gradient of the learned disfigurement as the guidance vector field. The modeling technique was quantitatively evaluated, and panel ratings by experienced medical professionals on the plausibility of the simulations were used to evaluate the proposed disfigurement model. The algorithm reproduced the given face effectively on the facial mannequin model, with less than 4.4 mm maximum error for validation fiducial points that were not used in the processing. The panel ratings showed that the disfigurement model (especially for peripheral disfigurement) yielded predictions comparable to the real disfigurements. The modeling technique is thus able to capture facial disfigurements, and its simulations represent plausible outcomes of reconstructive surgery for facial cancers, so it can be used to study human perception of facial disfigurement.
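The PCA step that learns per-patient "eigen-disfigurement" modes can be sketched as follows. The flattened shape/texture encoding and the component count are assumptions for illustration; the TPS warping and Poisson stitching steps are omitted:

```python
import numpy as np

def eigen_disfigurements(visits, n_components=2):
    """Sketch of the PCA step: learn principal modes of
    longitudinal variation from one patient's visits.

    visits: (T, D) array; each row is a flattened shape/texture
            vector from one time point (hypothetical encoding).
    Returns (mean, components), components shaped (n_components, D).
    """
    X = np.asarray(visits, dtype=float)
    mean = X.mean(axis=0)
    # SVD of the centered data yields the principal components
    # as the rows of Vt, ordered by explained variance.
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]
```

The learned components would then serve as the disfigurement patterns whose gradients guide the Poisson blending onto a healthy face.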

  16. Head and face anthropometry of adult U.S. civilians.

    DOT National Transportation Integrated Search

    1993-07-01

    This report presents a total of 17 traditional and 5 new head and facial dimensions from a random composite U.S. female and male civilian population measured over a period of 30 years. The 5 new measurements, identified to describe specific anatomica...

  17. Eruptive Facial Postinflammatory Lentigo: Clinical and Dermatoscopic Features.

    PubMed

    Cabrera, Raul; Puig, Susana; Larrondo, Jorge; Castro, Alex; Valenzuela, Karen; Sabatini, Natalia

    2016-11-01

The face has not been considered a common site of fixed drug eruption, and dermatoscopic studies of this condition at that site are lacking. The authors sought to characterize the clinical and dermatoscopic features of 8 cases of eruptive facial postinflammatory lentigo. They conducted a retrospective review of 8 cases with similar clinical and dermatoscopic findings seen at 2 medical centers in 2 countries during 2010-2014. A total of 8 patients (2 males and 6 females), aged 34 to 62 years (mean: 48), presented with the abrupt onset of a single facial brown-pink macule, generally asymmetrical, with an average size of 1.9 cm, after ingestion of a nonsteroidal anti-inflammatory drug; the lesion lasted for several months. Dermatoscopy mainly showed a pseudonetwork or uniform areas of brown pigmentation, brown or blue-gray dots, red dots and/or telangiectatic vessels. In the epidermis, histopathology showed mild hydropic degeneration and focal melanin hyperpigmentation. Melanin can be found freely in the dermis or laden in macrophages, along with a mild perivascular mononuclear infiltrate. The authors describe eruptive facial postinflammatory lentigo as a new variant of fixed drug eruption on the face.

  18. A Proposal of a Communication Medium between Patients with Facial Disorder and the Doctors

    NASA Astrophysics Data System (ADS)

    Ito, Kyoko; Kurose, Hiroyuki; Takami, Ai; Shirai, Masayuki; Shimizu, Ryosuke; Nishida, Shogo

Although the human face is an important body site with a social role, some diseases cause facial disorders. This study focuses on the patient's facial expression as a medium supporting communication between patients with facial disorders and their doctors, with the aim of improving patient satisfaction with treatment. Two kinds of information conveyed through the patient's expression were selected: the expression the patient intends to produce, and the difference between that intended expression and the expression actually produced. An interface with expression-setting and expression-confirmation functions was designed and developed to give patients an environment for conveying this information. Fourteen dentists with experience treating facial disorders evaluated the utility of the proposed interface and its potential as a communication tool at the clinical site. The experimental results suggested that the interface can help patients express their expectations for treatment, and concrete challenges and methods for its use in the clinical field were proposed.

  19. The role of spatial frequency information for ERP components sensitive to faces and emotional facial expression.

    PubMed

    Holmes, Amanda; Winston, Joel S; Eimer, Martin

    2005-10-01

    To investigate the impact of spatial frequency on emotional facial expression analysis, ERPs were recorded in response to low spatial frequency (LSF), high spatial frequency (HSF), and unfiltered broad spatial frequency (BSF) faces with fearful or neutral expressions, houses, and chairs. In line with previous findings, BSF fearful facial expressions elicited a greater frontal positivity than BSF neutral facial expressions, starting at about 150 ms after stimulus onset. In contrast, this emotional expression effect was absent for HSF and LSF faces. Given that some brain regions involved in emotion processing, such as amygdala and connected structures, are selectively tuned to LSF visual inputs, these data suggest that ERP effects of emotional facial expression do not directly reflect activity in these regions. It is argued that higher order neocortical brain systems are involved in the generation of emotion-specific waveform modulations. The face-sensitive N170 component was neither affected by emotional facial expression nor by spatial frequency information.

  20. Posed versus spontaneous facial expressions are modulated by opposite cerebral hemispheres.

    PubMed

    Ross, Elliott D; Pulusu, Vinay K

    2013-05-01

    Clinical research has indicated that the left face is more expressive than the right face, suggesting that modulation of facial expressions is lateralized to the right hemisphere. The findings, however, are controversial because the results explain, on average, approximately 4% of the data variance. Using high-speed videography, we sought to determine if movement-onset asymmetry was a more powerful research paradigm than terminal movement asymmetry. The results were very robust, explaining up to 70% of the data variance. Posed expressions began overwhelmingly on the right face whereas spontaneous expressions began overwhelmingly on the left face. This dichotomy was most robust for upper facial expressions. In addition, movement-onset asymmetries did not predict terminal movement asymmetries, which were not significantly lateralized. The results support recent neuroanatomic observations that upper versus lower facial movements have different forebrain motor representations and recent behavioral constructs that posed versus spontaneous facial expressions are modulated preferentially by opposite cerebral hemispheres and that spontaneous facial expressions are graded rather than non-graded movements. Published by Elsevier Ltd.

  1. Computational Simulation on Facial Expressions and Experimental Tensile Strength for Silicone Rubber as Artificial Skin

    NASA Astrophysics Data System (ADS)

    Amijoyo Mochtar, Andi

    2018-02-01

Applications of robotics have become important for human life in recent years. Many robot designs have been improved and enriched by advances in technology; among them are humanoid robots whose facial expressions closely approximate natural human expressions. The purpose of this research is to compute facial expressions and to conduct tensile-strength testing of silicone rubber as an artificial skin. Facial expressions were calculated by specifying the dimensions, material properties, number of node elements, boundary conditions, force conditions, and analysis type. A robot's facial expression is determined by the direction and magnitude of the external force at the driven point. The robot's expression resembles human facial expression because the muscle structure of the face follows human facial anatomy. To develop facial expression robots, the facial action coding system (FACS) is applied to reproduce human expressions. Tensile testing was conducted to determine the appropriate forces for an artificial skin to be used in future robot facial expressions. Combining computational and experimental results can yield reliable and sustainable robot facial expressions using silicone rubber as artificial skin.

  2. Facial recognition performance of female inmates as a result of sexual assault history.

    PubMed

    Islam-Zwart, Kayleen A; Heath, Nicole M; Vik, Peter W

    2005-06-01

This study examined the effect of sexual assault history on facial recognition performance. Gender of facial stimuli and posttraumatic stress disorder (PTSD) symptoms also were expected to influence performance. Fifty-six female inmates completed an interview and the Wechsler Memory Scale-Third Edition Faces I and Faces II subtests (Wechsler, 1997). Women with a sexual assault history exhibited better immediate and delayed facial recognition skills than those with no assault history. There were no differences in performance based on the gender of faces or PTSD diagnosis. Immediate facial recognition was correlated with report of PTSD symptoms. Findings provide greater insight into women's reactions to, and the uniqueness of, the trauma of sexual victimization.

  3. Mapping the impairment in decoding static facial expressions of emotion in prosopagnosia.

    PubMed

    Fiset, Daniel; Blais, Caroline; Royer, Jessica; Richoz, Anne-Raphaëlle; Dugas, Gabrielle; Caldara, Roberto

    2017-08-01

    Acquired prosopagnosia is characterized by a deficit in face recognition due to diverse brain lesions, but interestingly most prosopagnosic patients suffering from posterior lesions use the mouth instead of the eyes for face identification. Whether this bias is present for the recognition of facial expressions of emotion has not yet been addressed. We tested PS, a pure case of acquired prosopagnosia with bilateral occipitotemporal lesions anatomically sparing the regions dedicated for facial expression recognition. PS used mostly the mouth to recognize facial expressions even when the eye area was the most diagnostic. Moreover, PS directed most of her fixations towards the mouth. Her impairment was still largely present when she was instructed to look at the eyes, or when she was forced to look at them. Control participants showed a performance comparable to PS when only the lower part of the face was available. These observations suggest that the deficits observed in PS with static images are not solely attentional, but are rooted at the level of facial information use. This study corroborates neuroimaging findings suggesting that the Occipital Face Area might play a critical role in extracting facial features that are integrated for both face identification and facial expression recognition in static images. © The Author (2017). Published by Oxford University Press.

  4. Correlated preferences for facial masculinity and ideal or actual partner's masculinity

    PubMed Central

    DeBruine, Lisa M; Jones, Benedict C; Little, Anthony C; Boothroyd, Lynda G; Perrett, David I; Penton-Voak, Ian S; Cooper, Philip A; Penke, Lars; Feinberg, David R; Tiddeman, Bernard P

    2006-01-01

    Studies of women's preferences for male faces have variously reported preferences for masculine faces, preferences for feminine faces and no effect of masculinity–femininity on male facial attractiveness. It has been suggested that these apparently inconsistent findings are, at least partly, due to differences in the methods used to manipulate the masculinity of face images or individual differences in attraction to facial cues associated with youth. Here, however, we show that women's preferences for masculinity manipulated in male faces using techniques similar to the three most widely used methods are positively inter-related. We also show that women's preferences for masculine male faces are positively related to ratings of the masculinity of their actual partner and their ideal partner. Correlations with partner masculinity were independent of real and ideal partner age, which were not associated with facial masculinity preference. Collectively, these findings suggest that variability among studies in their findings for women's masculinity preferences reflects individual differences in attraction to masculinity rather than differences in the methods used to manufacture stimuli, and are important for the interpretation of previous and future studies of facial masculinity. PMID:16777723

  5. Atypical face shape and genomic structural variants in epilepsy

    PubMed Central

    Chinthapalli, Krishna; Bartolini, Emanuele; Novy, Jan; Suttie, Michael; Marini, Carla; Falchi, Melania; Fox, Zoe; Clayton, Lisa M. S.; Sander, Josemir W.; Guerrini, Renzo; Depondt, Chantal; Hennekam, Raoul; Hammond, Peter

    2012-01-01

    Many pathogenic structural variants of the human genome are known to cause facial dysmorphism. During the past decade, pathogenic structural variants have also been found to be an important class of genetic risk factor for epilepsy. In other fields, face shape has been assessed objectively using 3D stereophotogrammetry and dense surface models. We hypothesized that computer-based analysis of 3D face images would detect subtle facial abnormality in people with epilepsy who carry pathogenic structural variants as determined by chromosome microarray. In 118 children and adults attending three European epilepsy clinics, we used an objective measure called Face Shape Difference to show that those with pathogenic structural variants have a significantly more atypical face shape than those without such variants. This is true when analysing the whole face, or the periorbital region or the perinasal region alone. We then tested the predictive accuracy of our measure in a second group of 63 patients. Using a minimum threshold to detect face shape abnormalities with pathogenic structural variants, we found high sensitivity (4/5, 80% for whole face; 3/5, 60% for periorbital and perinasal regions) and specificity (45/58, 78% for whole face and perinasal regions; 40/58, 69% for periorbital region). We show that the results do not seem to be affected by facial injury, facial expression, intellectual disability, drug history or demographic differences. Finally, we use bioinformatics tools to explore relationships between facial shape and gene expression within the developing forebrain. Stereophotogrammetry and dense surface models are powerful, objective, non-contact methods of detecting relevant face shape abnormalities. We demonstrate that they are useful in identifying atypical face shape in adults or children with structural variants, and they may give insights into the molecular genetics of facial development. PMID:22975390
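The Face Shape Difference measure can be sketched as a normalized distance of a face's dense-surface-model coefficients from a control distribution, with a threshold on that distance flagging atypical shape. This scoring rule is an assumption for illustration, not the authors' exact metric:

```python
import numpy as np

def face_shape_difference(face_coeffs, control_coeffs):
    """Hedged sketch of an atypicality score: how far a face's
    dense-surface-model coefficients lie from the control group,
    in units of the controls' per-dimension spread.
    """
    C = np.asarray(control_coeffs, dtype=float)
    mu, sd = C.mean(axis=0), C.std(axis=0)
    # z-score each coefficient against the controls, then take
    # the root-mean-square as a single atypicality value.
    z = (np.asarray(face_coeffs, dtype=float) - mu) / sd
    return float(np.sqrt((z ** 2).mean()))
```

Choosing a minimum threshold on this score trades sensitivity against specificity, as in the 80%/78% whole-face figures reported above.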

  6. The look of fear and anger: facial maturity modulates recognition of fearful and angry expressions.

    PubMed

    Sacco, Donald F; Hugenberg, Kurt

    2009-02-01

    The current series of studies provide converging evidence that facial expressions of fear and anger may have co-evolved to mimic mature and babyish faces in order to enhance their communicative signal. In Studies 1 and 2, fearful and angry facial expressions were manipulated to have enhanced babyish features (larger eyes) or enhanced mature features (smaller eyes) and in the context of a speeded categorization task in Study 1 and a visual noise paradigm in Study 2, results indicated that larger eyes facilitated the recognition of fearful facial expressions, while smaller eyes facilitated the recognition of angry facial expressions. Study 3 manipulated facial roundness, a stable structure that does not vary systematically with expressions, and found that congruency between maturity and expression (narrow face-anger; round face-fear) facilitated expression recognition accuracy. Results are discussed as representing a broad co-evolutionary relationship between facial maturity and fearful and angry facial expressions. (c) 2009 APA, all rights reserved

  7. Human Empathy, Personality and Experience Affect the Emotion Ratings of Dog and Human Facial Expressions.

    PubMed

    Kujala, Miiamaaria V; Somppi, Sanni; Jokela, Markus; Vainio, Outi; Parkkonen, Lauri

    2017-01-01

    Facial expressions are important for humans in communicating emotions to the conspecifics and enhancing interpersonal understanding. Many muscles producing facial expressions in humans are also found in domestic dogs, but little is known about how humans perceive dog facial expressions, and which psychological factors influence people's perceptions. Here, we asked 34 observers to rate the valence, arousal, and the six basic emotions (happiness, sadness, surprise, disgust, fear, and anger/aggressiveness) from images of human and dog faces with Pleasant, Neutral and Threatening expressions. We investigated how the subjects' personality (the Big Five Inventory), empathy (Interpersonal Reactivity Index) and experience of dog behavior affect the ratings of dog and human faces. Ratings of both species followed similar general patterns: human subjects classified dog facial expressions from pleasant to threatening very similarly to human facial expressions. Subjects with higher emotional empathy evaluated Threatening faces of both species as more negative in valence and higher in anger/aggressiveness. More empathetic subjects also rated the happiness of Pleasant humans but not dogs higher, and they were quicker in their valence judgments of Pleasant human, Threatening human and Threatening dog faces. Experience with dogs correlated positively with ratings of Pleasant and Neutral dog faces. Personality also had a minor effect on the ratings of Pleasant and Neutral faces in both species. The results imply that humans perceive human and dog facial expression in a similar manner, and the perception of both species is influenced by psychological factors of the evaluators. Especially empathy affects both the speed and intensity of rating dogs' emotional facial expressions.

  8. Human Empathy, Personality and Experience Affect the Emotion Ratings of Dog and Human Facial Expressions

    PubMed Central

    Kujala, Miiamaaria V.; Somppi, Sanni; Jokela, Markus; Vainio, Outi; Parkkonen, Lauri

    2017-01-01

    Facial expressions are important for humans in communicating emotions to the conspecifics and enhancing interpersonal understanding. Many muscles producing facial expressions in humans are also found in domestic dogs, but little is known about how humans perceive dog facial expressions, and which psychological factors influence people’s perceptions. Here, we asked 34 observers to rate the valence, arousal, and the six basic emotions (happiness, sadness, surprise, disgust, fear, and anger/aggressiveness) from images of human and dog faces with Pleasant, Neutral and Threatening expressions. We investigated how the subjects’ personality (the Big Five Inventory), empathy (Interpersonal Reactivity Index) and experience of dog behavior affect the ratings of dog and human faces. Ratings of both species followed similar general patterns: human subjects classified dog facial expressions from pleasant to threatening very similarly to human facial expressions. Subjects with higher emotional empathy evaluated Threatening faces of both species as more negative in valence and higher in anger/aggressiveness. More empathetic subjects also rated the happiness of Pleasant humans but not dogs higher, and they were quicker in their valence judgments of Pleasant human, Threatening human and Threatening dog faces. Experience with dogs correlated positively with ratings of Pleasant and Neutral dog faces. Personality also had a minor effect on the ratings of Pleasant and Neutral faces in both species. The results imply that humans perceive human and dog facial expression in a similar manner, and the perception of both species is influenced by psychological factors of the evaluators. Especially empathy affects both the speed and intensity of rating dogs’ emotional facial expressions. PMID:28114335

  9. Rapid facial reactions to emotional facial expressions in typically developing children and children with autism spectrum disorder.

    PubMed

    Beall, Paula M; Moody, Eric J; McIntosh, Daniel N; Hepburn, Susan L; Reed, Catherine L

    2008-11-01

    Typical adults mimic facial expressions within 1000 ms, but adults with autism spectrum disorder (ASD) do not. These rapid facial reactions (RFRs) are associated with the development of social-emotional abilities. Such interpersonal matching may be caused by motor mirroring or emotional responses. Using facial electromyography (EMG), this study evaluated mechanisms underlying RFRs during childhood and examined possible impairment in children with ASD. Experiment 1 found RFRs to happy and angry faces (not fear faces) in 15 typically developing children from 7 to 12 years of age. RFRs of fear (not anger) in response to angry faces indicated an emotional mechanism. In 11 children (8-13 years of age) with ASD, Experiment 2 found undifferentiated RFRs to fear expressions and no consistent RFRs to happy or angry faces. However, as children with ASD aged, matching RFRs to happy faces increased significantly, suggesting the development of processes underlying matching RFRs during this period in ASD.

  10. Mere social categorization modulates identification of facial expressions of emotion.

    PubMed

    Young, Steven G; Hugenberg, Kurt

    2010-12-01

    The ability of the human face to communicate emotional states via facial expressions is well known, and past research has established the importance and universality of emotional facial expressions. However, recent evidence has revealed that facial expressions of emotion are most accurately recognized when the perceiver and expresser are from the same cultural ingroup. The current research builds on this literature and extends this work. Specifically, we find that mere social categorization, using a minimal-group paradigm, can create an ingroup emotion-identification advantage even when the culture of the target and perceiver is held constant. Follow-up experiments show that this effect is supported by differential motivation to process ingroup versus outgroup faces and that this motivational disparity leads to more configural processing of ingroup faces than of outgroup faces. Overall, the results point to distinct processing modes for ingroup and outgroup faces, resulting in differential identification accuracy for facial expressions of emotion. PsycINFO Database Record (c) 2010 APA, all rights reserved.

  11. Morphological quantitative criteria and aesthetic evaluation of eight female Han face types.

    PubMed

    Zhao, Qiming; Zhou, Rongrong; Zhang, XuDong; Sun, Huafeng; Lu, Xin; Xia, Dongsheng; Song, Mingli; Liang, Yang

    2013-04-01

    Human facial aesthetics relies on the classification of facial features and standards of attractiveness. However, there are no widely accepted quantitative criteria for facial attractiveness, particularly for Chinese Han faces. Establishing quantitative standards of attractiveness for facial landmarks within facial types is important for planning outcomes in cosmetic plastic surgery. The aim of this study was to determine quantitatively the criteria for attractiveness of eight female Chinese Han facial types. A photographic database of young Chinese Han women's faces was created. Photographed faces (450) were classified based on eight established types and scored for attractiveness. Measurements taken at seven standard facial landmarks and their relative proportions were analyzed for correlations to attractiveness scores. Attractive faces of each type were averaged via an image-morphing algorithm to generate synthetic facial types. Results were compared with the neoclassical ideal and data for Caucasians. Morphological proportions corresponding to the highest attractiveness scores for Chinese Han women differed from the neoclassical ideal. In our population of young, normal, healthy Han women, high attractiveness ratings were given to those with greater temporal width and pogonion-gonion distance, and smaller bizygomatic and bigonial widths. As attractiveness scores increased, the ratio of the temporal to bizygomatic widths increased, and the ratio of the distance between the pogonion and gonion to the bizygomatic width also increased slightly. Among the facial types, the oval and inverted triangular were the most attractive. The neoclassical ideal of attractiveness does not apply to Han faces. However, the proportion of faces considered attractive in this population was similar to that of Caucasian populations.
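
    The landmark-proportion analysis described above can be sketched in a few lines. The widths, scores, and sample size below are invented for illustration; only the ratio (temporal to bizygomatic width) and the use of a correlation coefficient follow the abstract:

    ```python
    import numpy as np

    def pearson_r(x, y):
        """Pearson correlation coefficient between two 1-D arrays."""
        x, y = np.asarray(x, float), np.asarray(y, float)
        xm, ym = x - x.mean(), y - y.mean()
        return float((xm @ ym) / np.sqrt((xm @ xm) * (ym @ ym)))

    # Hypothetical landmark widths (mm) for a handful of faces, plus
    # made-up mean attractiveness ratings:
    temporal_width    = np.array([128.0, 131.0, 125.0, 134.0, 129.0])
    bizygomatic_width = np.array([132.0, 130.0, 133.0, 129.0, 131.0])
    attractiveness    = np.array([6.1, 7.4, 5.2, 8.0, 6.6])

    # Ratio reported in the study to increase with attractiveness:
    ratio = temporal_width / bizygomatic_width
    r = pearson_r(ratio, attractiveness)
    print(f"temporal/bizygomatic ratio vs. attractiveness: r = {r:.2f}")
    ```

    In the study itself, proportions were derived from seven standard landmarks over 450 photographs; this toy version only shows the shape of the computation.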

  12. The importance of skin color and facial structure in perceiving and remembering others: an electrophysiological study.

    PubMed

    Brebner, Joanne L; Krigolson, Olav; Handy, Todd C; Quadflieg, Susanne; Turk, David J

    2011-05-04

    The own-race bias (ORB) is a well-documented recognition advantage for own-race (OR) over cross-race (CR) faces, the origin of which remains unclear. In the current study, event-related potentials (ERPs) were recorded while Caucasian participants age-categorized Black and White faces which were digitally altered to display either a race congruent or incongruent facial structure. The results of a subsequent surprise memory test indicated that regardless of facial structure participants recognized White faces better than Black faces. Additional analyses revealed that temporally-early ERP components associated with face-specific perceptual processing (N170) and the individuation of facial exemplars (N250) were selectively sensitive to skin color. In addition, the N200 (a component that has been linked to increased attention and depth of encoding afforded to in-group and OR faces) was modulated by color and structure, and correlated with subsequent memory performance. However, the LPP component associated with the cognitive evaluation of perceptual input was influenced by racial differences in facial structure alone. These findings suggest that racial differences in skin color and facial structure are detected during the encoding of unfamiliar faces, and that the categorization of conspecifics as members of our social in-group on the basis of their skin color may be a determining factor in our ability to subsequently remember them. Copyright © 2011 Elsevier B.V. All rights reserved.

  13. Testosterone-mediated sex differences in the face shape during adolescence: subjective impressions and objective features.

    PubMed

    Marečková, Klára; Weinbrand, Zohar; Chakravarty, M Mallar; Lawrence, Claire; Aleong, Rosanne; Leonard, Gabriel; Perron, Michel; Pike, G Bruce; Richer, Louis; Veillette, Suzanne; Pausova, Zdenka; Paus, Tomáš

    2011-11-01

    Sex identification of a face is essential for social cognition. Still, perceptual cues indicating the sex of a face, and mechanisms underlying their development, remain poorly understood. Previously, our group described objective age- and sex-related differences in faces of healthy male and female adolescents (12-18 years of age), as derived from magnetic resonance images (MRIs) of the adolescents' heads. In this study, we presented these adolescent faces to 60 female raters to determine which facial features most reliably predicted subjective sex identification. Identification accuracy correlated highly with specific MRI-derived facial features (e.g. broader forehead, chin, jaw, and nose). Facial features that most reliably cued male identity were associated with plasma levels of testosterone (above and beyond age). Perceptible sex differences in face shape are thus associated with specific facial features whose emergence may be, in part, driven by testosterone. Copyright © 2011 Elsevier Inc. All rights reserved.

  14. Urinary oxytocin positively correlates with performance in facial visual search in unmarried males, without specific reaction to infant face.

    PubMed

    Saito, Atsuko; Hamada, Hiroki; Kikusui, Takefumi; Mogi, Kazutaka; Nagasawa, Miho; Mitsui, Shohei; Higuchi, Takashi; Hasegawa, Toshikazu; Hiraki, Kazuo

    2014-01-01

    The neuropeptide oxytocin plays a central role in prosocial and parental behavior in non-human mammals as well as humans. It has been suggested that oxytocin may affect visual processing of infant faces and emotional reaction to infants. Healthy male volunteers (N = 13) were tested for their ability to detect infant or adult faces among adult or infant faces (facial visual search task). Urine samples were collected from all participants before the study to measure the concentration of oxytocin. Urinary oxytocin positively correlated with performance in the facial visual search task. However, task performance and its correlation with oxytocin concentration did not differ between infant faces and adult faces. Our data suggests that endogenous oxytocin is related to facial visual cognition, but does not promote infant-specific responses in unmarried men who are not fathers.

  15. Neural responses to facial expression and face identity in the monkey amygdala.

    PubMed

    Gothard, K M; Battaglia, F P; Erickson, C A; Spitler, K M; Amaral, D G

    2007-02-01

    The amygdala is purported to play an important role in face processing, yet the specificity of its activation to face stimuli and the relative contribution of identity and expression to its activation are unknown. In the current study, neural activity in the amygdala was recorded as monkeys passively viewed images of monkey faces, human faces, and objects on a computer monitor. Comparable proportions of neurons responded selectively to images from each category. Neural responses to monkey faces were further examined to determine whether face identity or facial expression drove the face-selective responses. The majority of these neurons (64%) responded both to identity and facial expression, suggesting that these parameters are processed jointly in the amygdala. Large fractions of neurons, however, showed pure identity-selective or expression-selective responses. Neurons were selective for a particular facial expression by either increasing or decreasing their firing rate compared with the firing rates elicited by the other expressions. Responses to appeasing faces were often marked by significant decreases of firing rates, whereas responses to threatening faces were strongly associated with increased firing rate. Thus global activation in the amygdala might be larger to threatening faces than to neutral or appeasing faces.

  16. Nine-year-old children use norm-based coding to visually represent facial expression.

    PubMed

    Burton, Nichola; Jeffery, Linda; Skinner, Andrew L; Benton, Christopher P; Rhodes, Gillian

    2013-10-01

    Children are less skilled than adults at making judgments about facial expression. This could be because they have not yet developed adult-like mechanisms for visually representing faces. Adults are thought to represent faces in a multidimensional face-space, and have been shown to code the expression of a face relative to the norm or average face in face-space. Norm-based coding is economical and adaptive, and may be what makes adults more sensitive to facial expression than children. This study investigated the coding system that children use to represent facial expression. An adaptation aftereffect paradigm was used to test 24 adults and 18 children (9 years 2 months to 9 years 11 months old). Participants adapted to weak and strong antiexpressions. They then judged the expression of an average face. Adaptation created aftereffects that made the test face look like the expression opposite that of the adaptor. Consistent with the predictions of norm-based but not exemplar-based coding, aftereffects were larger for strong than weak adaptors for both age groups. Results indicate that children's coding of facial expressions, like that of adults, is norm-based. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  17. What the Human Brain Likes About Facial Motion

    PubMed Central

    Schultz, Johannes; Brockhaus, Matthias; Bülthoff, Heinrich H.; Pilz, Karin S.

    2013-01-01

    Facial motion carries essential information about other people's emotions and intentions. Most previous studies have suggested that facial motion is mainly processed in the superior temporal sulcus (STS), but several recent studies have also shown involvement of ventral temporal face-sensitive regions. Up to now, it is not known whether the increased response to facial motion is due to an increased amount of static information in the stimulus, to the deformation of the face over time, or to increased attentional demands. We presented nonrigidly moving faces and control stimuli to participants performing a demanding task unrelated to the face stimuli. We manipulated the amount of static information by using movies with different frame rates. The fluidity of the motion was manipulated by presenting movies with frames either in the order in which they were recorded or in scrambled order. Results confirm higher activation for moving compared with static faces in STS and under certain conditions in ventral temporal face-sensitive regions. Activation was maximal at a frame rate of 12.5 Hz and smaller for scrambled movies. These results indicate that both the amount of static information and the fluid facial motion per se are important factors for the processing of dynamic faces. PMID:22535907

  18. Neural mechanisms underlying the effects of face-based affective signals on memory for faces: a tentative model

    PubMed Central

    Tsukiura, Takashi

    2012-01-01

    In our daily lives, we form some impressions of other people. Although those impressions are affected by many factors, face-based affective signals such as facial expression, facial attractiveness, or trustworthiness are important. Previous psychological studies have demonstrated the impact of facial impressions on remembering other people, but little is known about the neural mechanisms underlying this psychological process. The purpose of this article is to review recent functional MRI (fMRI) studies to investigate the effects of face-based affective signals including facial expression, facial attractiveness, and trustworthiness on memory for faces, and to propose a tentative concept for understanding this affective-cognitive interaction. On the basis of the aforementioned research, three brain regions are potentially involved in the processing of face-based affective signals. The first candidate is the amygdala, where activity is generally modulated by both affectively positive and negative signals from faces. Activity in the orbitofrontal cortex (OFC), as the second candidate, increases as a function of perceived positive signals from faces; whereas activity in the insular cortex, as the third candidate, reflects a function of face-based negative signals. In addition, neuroscientific studies have reported that the three regions are functionally connected to the memory-related hippocampal regions. These findings suggest that the effects of face-based affective signals on memory for faces could be modulated by interactions between the regions associated with the processing of face-based affective signals and the hippocampus as a memory-related region. PMID:22837740

  19. The effects of facial color and inversion on the N170 event-related potential (ERP) component.

    PubMed

    Minami, T; Nakajima, K; Changvisommid, L; Nakauchi, S

    2015-12-17

    Faces are important for social interaction because much can be perceived from facial details, including a person's race, age, and mood. Recent studies have shown that both configural (e.g. face shape and inversion) and surface information (e.g. surface color and reflectance properties) are important for face perception. Therefore, the present study examined the effects of facial color and face inversion on event-related potential (ERP) responses, particularly the N170 component. Stimuli consisted of natural and bluish-colored faces, presented in both upright and inverted orientations. An ANOVA was used to analyze N170 amplitudes and verify the effects of the main independent variables. This analysis revealed a significant interaction between stimulus orientation and color: N170 was larger for bluish-colored faces than natural-colored faces, and N170 to natural-colored faces was larger in response to inverted than to upright stimuli. Additionally, a multivariate pattern analysis (MVPA) investigated face-processing dynamics without any prior assumptions. Both facial color and orientation could be decoded above chance from single-trial electroencephalogram (EEG) signals. Decoding performance for color classification of inverted faces was significantly diminished as compared to an upright orientation, suggesting that orientation processing predominates over color processing. Taken together, the present findings elucidate the temporal and spatial distribution of orientation and color processing during face processing. Copyright © 2015 IBRO. Published by Elsevier Ltd. All rights reserved.
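
    The single-trial decoding idea (MVPA) mentioned above can be sketched minimally: train a classifier on a subset of trials and test whether the stimulus condition can be predicted above chance on held-out trials. The data below are synthetic Gaussian "trials", and a nearest-centroid decoder stands in for whatever classifier the authors used:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic "single-trial EEG" data: n trials x n features (e.g. channels x
    # time points, flattened). Class 0 = natural-colored, class 1 = bluish-colored.
    n_per_class, n_feat = 100, 32
    offset = 0.8                      # assumed class separation, for illustration
    X0 = rng.normal(0.0,    1.0, (n_per_class, n_feat))
    X1 = rng.normal(offset, 1.0, (n_per_class, n_feat))
    X = np.vstack([X0, X1])
    y = np.array([0] * n_per_class + [1] * n_per_class)

    # Shuffle, then split into training and test trials.
    idx = rng.permutation(len(y))
    X, y = X[idx], y[idx]
    split = len(y) // 2
    Xtr, ytr, Xte, yte = X[:split], y[:split], X[split:], y[split:]

    # Nearest-centroid decoder: assign each test trial to the closer class mean.
    c0, c1 = Xtr[ytr == 0].mean(axis=0), Xtr[ytr == 1].mean(axis=0)
    pred = (np.linalg.norm(Xte - c1, axis=1)
            < np.linalg.norm(Xte - c0, axis=1)).astype(int)
    accuracy = float((pred == yte).mean())
    print(f"decoding accuracy: {accuracy:.2f} (chance = 0.50)")
    ```

    The diminished color decoding for inverted faces reported above corresponds, in this framing, to lower held-out accuracy for that subset of trials.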

  20. The effect of skin surface topography and skin colouration cues on perception of male facial age, health and attractiveness.

    PubMed

    Fink, B; Matts, P J; Brauckmann, C; Gundlach, S

    2018-04-01

    Previous studies investigating the effects of skin surface topography and colouration cues on the perception of female faces reported a differential weighting for the perception of skin topography and colour evenness, where topography was a stronger visual cue for the perception of age, whereas skin colour evenness was a stronger visual cue for the perception of health. We extend these findings in a study of the effect of skin surface topography and colour evenness cues on the perceptions of facial age, health and attractiveness in males. Facial images of six men (aged 40 to 70 years), selected for co-expression of lines/wrinkles and discolouration, were manipulated digitally to create eight stimuli, namely, separate removal of these two features (a) on the forehead, (b) in the periorbital area, (c) on the cheeks and (d) across the entire face. Omnibus (within-face) pairwise combinations, including the original (unmodified) face, were presented to a total of 240 male and female judges, who selected the face they considered younger, healthier and more attractive. Significant effects were detected for facial image choice, in response to skin feature manipulation. The combined removal of skin surface topography resulted in younger age perception compared with that seen with the removal of skin colouration cues, whereas the opposite pattern was found for health preference. No difference was detected for the perception of attractiveness. These perceptual effects were seen particularly on the forehead and cheeks. Removing skin topography cues (but not discolouration) in the periorbital area resulted in higher preferences for all three attributes. Skin surface topography and colouration cues affect the perception of age, health and attractiveness in men's faces. The combined removal of these features on the forehead, cheeks and in the periorbital area results in the most positive assessments. 
© 2018 Society of Cosmetic Scientists and the Société Française de Cosmétologie.

  1. A comparison of facial color pattern and gazing behavior in canid species suggests gaze communication in gray wolves (Canis lupus).

    PubMed

    Ueda, Sayoko; Kumagai, Gaku; Otaki, Yusuke; Yamaguchi, Shinya; Kohshima, Shiro

    2014-01-01

    As facial color pattern around the eyes has been suggested to serve various adaptive functions related to the gaze signal, we compared the patterns among 25 canid species, focusing on the gaze signal, to estimate the function of facial color pattern in these species. The facial color patterns of the studied species could be categorized into the following three types based on contrast indices relating to the gaze signal: A-type (both pupil position in the eye outline and eye position in the face are clear), B-type (only the eye position is clear), and C-type (both the pupil and eye position are unclear). A-type faces with light-colored irises were observed in most studied species of the wolf-like clade and some of the red fox-like clade. A-type faces tended to be observed in species living in family groups all year-round, whereas B-type faces tended to be seen in solo/pair-living species. The duration of gazing behavior during which the facial gaze-signal is displayed to the other individual was longest in gray wolves with typical A-type faces, of intermediate length in fennec foxes with typical B-type faces, and shortest in bush dogs with typical C-type faces. These results suggest that the facial color pattern of canid species is related to their gaze communication and that canids with A-type faces, especially gray wolves, use the gaze signal in conspecific communication.

  2. Spatially generalizable representations of facial expressions: Decoding across partial face samples.

    PubMed

    Greening, Steven G; Mitchell, Derek G V; Smith, Fraser W

    2018-04-01

    A network of cortical and sub-cortical regions is known to be important in the processing of facial expression. However, to date no study has investigated whether representations of facial expressions present in this network permit generalization across independent samples of face information (e.g., eye region vs mouth region). We presented participants with partial face samples of five expression categories in a rapid event-related fMRI experiment. We reveal a network of face-sensitive regions that contain information about facial expression categories regardless of which part of the face is presented. We further reveal that the neural information present in a subset of these regions: dorsal prefrontal cortex (dPFC), superior temporal sulcus (STS), lateral occipital and ventral temporal cortex, and even early visual cortex, enables reliable generalization across independent visual inputs (faces depicting the 'eyes only' vs 'eyes removed'). Furthermore, classification performance was correlated with behavioral performance in STS and dPFC. Our results demonstrate that both higher-level (e.g., STS, dPFC) and lower-level cortical regions contain information useful for facial expression decoding that goes beyond the visual information presented, and implicate a key role for contextual mechanisms such as cortical feedback in facial expression perception under challenging conditions of visual occlusion. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Facial Performance Transfer via Deformable Models and Parametric Correspondence.

    PubMed

    Asthana, Akshay; de la Hunty, Miles; Dhall, Abhinav; Goecke, Roland

    2012-09-01

    The issue of transferring facial performance from one person's face to another's has been an area of interest for the movie industry and the computer graphics community for quite some time. In recent years, deformable face models, such as the Active Appearance Model (AAM), have made it possible to track and synthesize faces in real time. Not surprisingly, deformable face model-based approaches for facial performance transfer have gained tremendous interest in the computer vision and graphics community. In this paper, we focus on the problem of real-time facial performance transfer using the AAM framework. We propose a novel approach of learning the mapping between the parameters of two completely independent AAMs, using them to facilitate the facial performance transfer in a more realistic manner than previous approaches. The main advantage of modeling this parametric correspondence is that it allows a "meaningful" transfer of both the nonrigid shape and texture across faces irrespective of the speakers' gender, shape, and size of the faces, and illumination conditions. We explore linear and nonlinear methods for modeling the parametric correspondence between the AAMs and show that the sparse linear regression method performs the best. Moreover, we show the utility of the proposed framework for a cross-language facial performance transfer that is an area of interest for the movie dubbing industry.
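
    The core of the approach, learning a mapping between the parameter vectors of two independently trained AAMs from paired frames, can be sketched with ordinary least squares on synthetic data. All dimensions, the "true" correspondence, and the noise level below are assumptions; the paper itself favors a sparse linear regression:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Suppose each face is tracked by its own Active Appearance Model, giving a
    # parameter vector per frame (shape + texture coefficients). We learn a
    # linear map from source-AAM parameters to target-AAM parameters using
    # paired training frames, then use it to transfer a new performance.
    d_src, d_tgt, n_frames = 12, 10, 200
    W_true = rng.normal(size=(d_tgt, d_src))      # unknown "true" correspondence
    P_src = rng.normal(size=(n_frames, d_src))    # source parameters per frame
    P_tgt = P_src @ W_true.T + 0.01 * rng.normal(size=(n_frames, d_tgt))

    # Least-squares estimate of the parametric correspondence.
    W_hat, *_ = np.linalg.lstsq(P_src, P_tgt, rcond=None)  # shape (d_src, d_tgt)

    # Transfer: map a new source frame into the target AAM's parameter space,
    # from which the target face can be synthesized.
    new_frame = rng.normal(size=d_src)
    transferred = new_frame @ W_hat
    err = float(np.linalg.norm(transferred - new_frame @ W_true.T))
    print(f"transfer error on a held-out frame: {err:.3f}")
    ```

    A sparse regression (as the paper reports works best) would replace the `lstsq` step with an L1-penalized fit; the transfer step is unchanged.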

  4. Eye tracking reveals a crucial role for facial motion in recognition of faces by infants

    PubMed Central

    Xiao, Naiqi G.; Quinn, Paul C.; Liu, Shaoying; Ge, Liezhong; Pascalis, Olivier; Lee, Kang

    2015-01-01

    Current knowledge about face processing in infancy comes largely from studies using static face stimuli, but faces that infants see in the real world are mostly moving ones. To bridge this gap, 3-, 6-, and 9-month-old Asian infants (N = 118) were familiarized with either moving or static Asian female faces and then their face recognition was tested with static face images. Eye tracking methodology was used to record eye movements during familiarization and test phases. The results showed a developmental change in eye movement patterns, but only for the moving faces. In addition, the more infants shifted their fixations across facial regions, the better was their face recognition, but only for the moving faces. The results suggest that facial movement influences the way faces are encoded from early in development. PMID:26010387

  5. [A text-book case of tropical facial elephantiasis].

    PubMed

    Dilu, N-J; Sokolo, R

    2007-02-01

    Tropical facial elephantiasis is a nosological entity that can arise from various underlying causes: von Recklinghausen neurofibromatosis, lymphatic and cutaneodermal filarioses, and deep mycoses. We report an exceptional case of tropical facial elephantiasis caused by onchocercosis and entomophthoromycosis (rhinophycomycosis). The patient's facial morphology was described as "hippopotamus face" or "dog face". Onchocercosis and entomophthoromycosis are both known to cause facial elephantiasis; however, we have been unable to find any report of this co-morbidity in the literature, nor any information on factors predicting their concomitant occurrence.

  6. Full-face motorcycle helmet protection from facial impacts: an investigation using THOR dummy impacts and SIMon finite element head model.

    PubMed

    Whyte, Thomas; Gibson, Tom; Eager, David; Milthorpe, Bruce

    2017-06-01

    Facial impacts are both common and injurious for helmeted motorcyclists who crash; however, there is no facial impact requirement in major motorcycle helmet standards. This study examined the effect of full-face motorcycle helmet protection on brain injury risk in facial impacts using a test device with biofidelic head and neck motion. A preliminary investigation of energy absorbing foam in the helmet chin bar was carried out. Flat-faced rigid pendulum impacts were performed on a THOR dummy in an unprotected (no helmet) and protected mode (two full-face helmet conditions). The head responses of the dummy were input into the simulated injury monitor finite element head model to analyse the risk of brain injury in these impacts. Full-face helmet protection provides a significant reduction in brain injury risk in facial impacts at increasing impact speeds compared with an unprotected rider (p<0.05). The effect of low-density crushable foam added to the chin bar could not be distinguished from an unpadded chin bar impact. Despite the lack of an impact attenuation requirement for the face, full-face helmets do provide a reduction in head injury risk to the wearer in facial impacts. The specific helmet design factors that influence head injury risk in facial impacts need further investigation if improved protection for helmeted motorcyclists is to be achieved. Published by the BMJ Publishing Group Limited.

  7. SparCLeS: dynamic l₁ sparse classifiers with level sets for robust beard/moustache detection and segmentation.

    PubMed

    Le, T Hoang Ngan; Luu, Khoa; Savvides, Marios

    2013-08-01

    Robust facial hair detection and segmentation is a highly valued soft biometric attribute for carrying out forensic facial analysis. In this paper, we propose a novel and fully automatic system, called SparCLeS, for beard/moustache detection and segmentation in challenging facial images. SparCLeS uses the multiscale self-quotient (MSQ) algorithm to preprocess facial images and deal with illumination variation. Histogram of oriented gradients (HOG) features are extracted from the preprocessed images and a dynamic sparse classifier is built using these features to classify a facial region as either containing skin or facial hair. A level set based approach, which makes use of the advantages of both global and local information, is then used to segment the regions of a face containing facial hair. Experimental results demonstrate the effectiveness of our proposed system in detecting and segmenting facial hair regions in images drawn from three databases, i.e., the NIST Multiple Biometric Grand Challenge (MBGC) still face database, the NIST Color Facial Recognition Technology FERET database, and the Labeled Faces in the Wild (LFW) database.
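
    The feature-extraction-plus-classification stage of such a pipeline can be sketched as follows. This is not the SparCLeS system: the HOG variant below omits block normalization, the "skin" and "hair" patches are synthetic noise textures, and a nearest-centroid rule stands in for the dynamic L1 sparse classifier:

    ```python
    import numpy as np

    def hog_features(img, cell=8, n_bins=9):
        """Simplified HOG: per-cell histograms of gradient orientation,
        weighted by gradient magnitude (block normalization omitted so that
        magnitude differences between textures remain visible)."""
        img = np.asarray(img, float)
        gy, gx = np.gradient(img)
        mag = np.hypot(gx, gy)
        ang = np.mod(np.arctan2(gy, gx), np.pi)   # unsigned orientation in [0, pi)
        h, w = img.shape
        feats = []
        for i in range(0, h - cell + 1, cell):
            for j in range(0, w - cell + 1, cell):
                a = ang[i:i + cell, j:j + cell].ravel()
                m = mag[i:i + cell, j:j + cell].ravel()
                hist, _ = np.histogram(a, bins=n_bins, range=(0, np.pi), weights=m)
                feats.append(hist)
        return np.concatenate(feats)

    # Toy patches: "skin" (smooth) vs "facial hair" (high-frequency texture).
    rng = np.random.default_rng(2)
    skin = [rng.normal(0.5, 0.02, (16, 16)) for _ in range(20)]
    hair = [rng.normal(0.5, 0.25, (16, 16)) for _ in range(20)]
    X = np.array([hog_features(p) for p in skin + hair])
    y = np.array([0] * 20 + [1] * 20)

    # Classify each patch by its nearer class centroid in HOG-feature space.
    c0, c1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    pred = (np.linalg.norm(X - c1, axis=1)
            < np.linalg.norm(X - c0, axis=1)).astype(int)
    acc = float((pred == y).mean())
    print(f"training accuracy on toy patches: {acc:.2f}")
    ```

    The full system would additionally apply MSQ preprocessing before feature extraction and a level-set refinement after classification, per the abstract.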

  8. What Facial Appearance Reveals Over Time: When Perceived Expressions in Neutral Faces Reveal Stable Emotion Dispositions

    PubMed Central

    Adams, Reginald B.; Garrido, Carlos O.; Albohn, Daniel N.; Hess, Ursula; Kleck, Robert E.

    2016-01-01

    It might seem a reasonable assumption that when we are not actively using our faces to express ourselves (i.e., when we display nonexpressive, or neutral faces), those around us will not be able to read our emotions. Herein, using a variety of expression-related ratings, we examined whether age-related changes in the face can accurately reveal one’s innermost affective dispositions. In each study, we found that expressive ratings of neutral facial displays predicted self-reported positive/negative dispositional affect, but only for elderly women, and only for positive affect. These findings meaningfully replicate and extend earlier work examining age-related emotion cues in the face of elderly women (Malatesta et al., 1987a). We discuss these findings in light of evidence that women are expected to, and do, smile more than men, and that the quality of their smiles predicts their life satisfaction. Although ratings of old male faces did not significantly predict self-reported affective dispositions, the trend was similar to that found for old female faces. A plausible explanation for this gender difference is that in the process of attenuating emotional expressions over their lifetimes, old men reveal less evidence of their total emotional experiences in their faces than do old women. PMID:27445944

  9. Chimpanzees (Pan troglodytes) Produce the Same Types of ‘Laugh Faces’ when They Emit Laughter and when They Are Silent

    PubMed Central

    Davila-Ross, Marina; Jesus, Goncalo; Osborne, Jade; Bard, Kim A.

    2015-01-01

    The ability to flexibly produce facial expressions and vocalizations has a strong impact on the way humans communicate, as it promotes more explicit and versatile forms of communication. Whereas facial expressions and vocalizations are unarguably closely linked in primates, the extent to which these expressions can be produced independently in nonhuman primates is unknown. The present work, thus, examined if chimpanzees produce the same types of facial expressions with and without accompanying vocalizations, as do humans. Forty-six chimpanzees (Pan troglodytes) were video-recorded during spontaneous play with conspecifics at the Chimfunshi Wildlife Orphanage. ChimpFACS was applied, a standardized coding system to measure chimpanzee facial movements, based on FACS developed for humans. Data showed that the chimpanzees produced the same 14 configurations of open-mouth faces when laugh sounds were present and when they were absent. Chimpanzees, thus, produce these facial expressions flexibly without being morphologically constrained by the accompanying vocalizations. Furthermore, the data indicated that the facial expression plus vocalization and the facial expression alone were used differently in social play, i.e., when in physical contact with the playmates and when matching the playmates’ open-mouth faces. These findings provide empirical evidence that chimpanzees produce distinctive facial expressions independently from a vocalization, and that their multimodal use affects communicative meaning, important traits for a more explicit and versatile way of communication. As it is still uncertain how human laugh faces evolved, the ChimpFACS data were also used to empirically examine the evolutionary relation between open-mouth faces with laugh sounds of chimpanzees and laugh faces of humans. The ChimpFACS results revealed that laugh faces of humans must have gradually emerged from laughing open-mouth faces of ancestral apes. 
This work examines the main evolutionary changes of laugh faces since the last common ancestor of chimpanzees and humans. PMID:26061420

  10. Emotional face processing and flat affect in schizophrenia: functional and structural neural correlates.

    PubMed

    Lepage, M; Sergerie, K; Benoit, A; Czechowska, Y; Dickie, E; Armony, J L

    2011-09-01

    There is a general consensus in the literature that schizophrenia is associated with deficits in facial emotion perception and discrimination. Functional brain imaging studies have observed reduced limbic activity during facial emotion perception, but few studies have examined the relation to flat affect severity. A total of 26 people with schizophrenia and 26 healthy controls took part in this event-related functional magnetic resonance imaging study. Sad, happy and neutral faces were presented in a pseudo-random order and participants indicated the gender of the face presented. Manual segmentation of the amygdala was performed on a structural T1 image. Both the schizophrenia group and the healthy control group rated the emotional valence of facial expressions similarly. Both groups exhibited increased brain activity during the perception of emotional faces relative to neutral ones in multiple brain regions, including multiple prefrontal regions bilaterally, the right amygdala, right cingulate cortex and cuneus. Group comparisons, however, revealed increased activity in the healthy group in the anterior cingulate, right parahippocampal gyrus and multiple visual areas. In schizophrenia, the severity of flat affect correlated significantly with neural activity in several brain areas, including the amygdala and parahippocampal region bilaterally. These results suggest that many of the brain regions involved in emotional face perception, including the amygdala, are equally recruited in both schizophrenia and controls, but that flat affect can also moderate activity in some other brain regions, notably the left amygdala and parahippocampal gyrus bilaterally. There were no significant group differences in the volume of the amygdala.

  11. Automatic prediction of facial trait judgments: appearance vs. structural models.

    PubMed

    Rojas, Mario; Masip, David; Todorov, Alexander; Vitria, Jordi

    2011-01-01

    Evaluating other individuals with respect to personality characteristics plays a crucial role in human relations, and it is the focus of attention for research in diverse fields such as psychology and interactive computer systems. In psychology, face perception has been recognized as a key component of this evaluation system. Multiple studies suggest that observers use face information to infer personality characteristics. Interactive computer systems are trying to take advantage of these findings and apply them to increase the natural aspect of interaction and to improve their performance. Here, we experimentally test whether the automatic prediction of facial trait judgments (e.g., dominance) can be made using the full appearance information of the face, or whether a reduced representation of its structure is sufficient. We evaluate two separate approaches: a holistic representation model using the facial appearance information, and a structural model constructed from the relations among facial salient points. State-of-the-art machine learning methods are applied to (a) derive a facial trait judgment model from training data and (b) predict a facial trait value for any face. Furthermore, we address the issue of whether there are specific structural relations among facial points that predict perception of facial traits. Experimental results over a set of labeled data (9 different trait evaluations) and classification rules (4 rules) suggest that (a) prediction of perception of facial traits is learnable by both holistic and structural approaches; (b) the most reliable prediction of facial trait judgments is obtained by a certain type of holistic description of the face appearance; and (c) for some traits, such as attractiveness and extroversion, there are relationships between specific structural features and social perceptions.
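
    The structural model described in this record builds features from relations among facial salient points. A minimal sketch of that idea, with hypothetical landmark coordinates and a toy one-feature linear fit (the study itself used more sophisticated machine learning methods and many more features):

```python
import math

def pairwise_distances(landmarks):
    """Structural feature vector: all pairwise distances between
    facial salient points given as (x, y) tuples."""
    feats = []
    n = len(landmarks)
    for i in range(n):
        for j in range(i + 1, n):
            dx = landmarks[i][0] - landmarks[j][0]
            dy = landmarks[i][1] - landmarks[j][1]
            feats.append(math.hypot(dx, dy))
    return feats

# Hypothetical training data: landmark sets paired with trait ratings.
faces = [
    ([(0, 0), (4, 0), (2, 3)], 2.0),
    ([(0, 0), (2, 0), (1, 3)], 4.0),
]
# Toy model: fit a line on the first structural feature only.
xs = [pairwise_distances(lm)[0] for lm, _ in faces]
ys = [r for _, r in faces]
slope = (ys[1] - ys[0]) / (xs[1] - xs[0])
intercept = ys[0] - slope * xs[0]

def predict(landmarks):
    """Predict a trait rating from the first pairwise distance."""
    return intercept + slope * pairwise_distances(landmarks)[0]
```

    In practice such structural features would feed a regularized regressor or classifier trained on many rated faces; the sketch only illustrates the feature construction.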

  12. Seeing Emotion with Your Ears: Emotional Prosody Implicitly Guides Visual Attention to Faces

    PubMed Central

    Rigoulot, Simon; Pell, Marc D.

    2012-01-01

    Interpersonal communication involves the processing of multimodal emotional cues, particularly facial expressions (visual modality) and emotional speech prosody (auditory modality) which can interact during information processing. Here, we investigated whether the implicit processing of emotional prosody systematically influences gaze behavior to facial expressions of emotion. We analyzed the eye movements of 31 participants as they scanned a visual array of four emotional faces portraying fear, anger, happiness, and neutrality, while listening to an emotionally-inflected pseudo-utterance (Someone migged the pazing) uttered in a congruent or incongruent tone. Participants heard the emotional utterance during the first 1250 milliseconds of a five-second visual array and then performed an immediate recall decision about the face they had just seen. The frequency and duration of first saccades and of total looks in three temporal windows ([0–1250 ms], [1250–2500 ms], [2500–5000 ms]) were analyzed according to the emotional content of faces and voices. Results showed that participants looked longer and more frequently at faces that matched the prosody in all three time windows (emotion congruency effect), although this effect was often emotion-specific (with greatest effects for fear). Effects of prosody on visual attention to faces persisted over time and could be detected long after the auditory information was no longer present. These data imply that emotional prosody is processed automatically during communication and that these cues play a critical role in how humans respond to related visual cues in the environment, such as facial expressions. PMID:22303454
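
    The windowed gaze analysis this record describes (total look time per face within fixed temporal windows) can be sketched as follows; the fixation records and field layout here are illustrative, not the authors' data format:

```python
# Temporal windows from the study design, in milliseconds.
WINDOWS = [(0, 1250), (1250, 2500), (2500, 5000)]

def look_time_per_window(fixations):
    """Total looking time at each face within each temporal window.
    Fixations are (start_ms, end_ms, face_id); a fixation straddling
    a window boundary contributes its overlap to each window."""
    totals = {w: {} for w in WINDOWS}
    for start, end, face in fixations:
        for (w0, w1) in WINDOWS:
            overlap = min(end, w1) - max(start, w0)
            if overlap > 0:
                bucket = totals[(w0, w1)]
                bucket[face] = bucket.get(face, 0) + overlap
    return totals

# Hypothetical fixation log for one trial.
fix = [(100, 600, "fear"), (1200, 1400, "happy"), (3000, 4000, "fear")]
t = look_time_per_window(fix)
```

    Aggregating these per-window durations across trials, split by prosody-face congruence, would yield the kind of congruency comparison the study reports.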

  13. Brain Responses to Dynamic Facial Expressions: A Normative Meta-Analysis.

    PubMed

    Zinchenko, Oksana; Yaple, Zachary A; Arsalidou, Marie

    2018-01-01

    Identifying facial expressions is crucial for social interactions. Functional neuroimaging studies show that a set of brain areas, such as the fusiform gyrus and amygdala, become active when viewing emotional facial expressions. The majority of functional magnetic resonance imaging (fMRI) studies investigating face perception employ static images of faces. However, studies that use dynamic facial expressions (e.g., videos) are accumulating and suggest that a dynamic presentation may be more sensitive and ecologically valid for investigating faces. Using quantitative fMRI meta-analysis, the present study examined the concordance of brain regions associated with viewing dynamic facial expressions. We analyzed data from 216 participants across 14 studies, which reported coordinates for 28 experiments. Our analysis revealed concordant activation in the bilateral fusiform and middle temporal gyri, the left amygdala, the left declive of the cerebellum and the right inferior frontal gyrus. These regions are discussed in terms of their relation to models of face processing.

  14. Automatic Detection of Frontal Face Midline by Chain-coded Merlin-Farber Hough Transform

    NASA Astrophysics Data System (ADS)

    Okamoto, Daichi; Ohyama, Wataru; Wakabayashi, Tetsushi; Kimura, Fumitaka

    We propose a novel approach for detecting the facial midline (facial symmetry axis) in a frontal face image. The facial midline has several applications, for instance reducing the computational cost of facial feature extraction (FFE) and supporting postoperative assessment for cosmetic or dental surgery. The proposed method detects the facial midline of a frontal face from an edge image, as the symmetry axis, using the Merlin-Farber Hough transform (MFHT). We also present a performance improvement scheme for midline detection with the MFHT. The main idea of the proposed scheme is to suppress redundant votes in the Hough parameter space by introducing a chain-code representation of the binary edge image. Experimental results on a dataset of 2409 images from the FERET database indicate that the proposed algorithm improves the accuracy of midline detection from 89.9% to 95.1% for face images with different scales and rotations.
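
    The underlying symmetry-axis voting idea can be illustrated with a minimal sketch: each pair of edge points votes for its perpendicular bisector in (theta, rho) space, and the axis of mirror symmetry accumulates the most votes. This is the basic principle only, not the chain-coded MFHT of the paper, and the point set is hypothetical:

```python
import math
from collections import Counter

def symmetry_axis_votes(edge_points, angle_step=1, rho_step=1):
    """Each pair of edge points votes for its perpendicular bisector,
    parameterized as rho = x*cos(theta) + y*sin(theta); the normal of
    the bisector points along the pair direction. Returns a Counter
    over quantized (theta_degrees, rho) bins."""
    acc = Counter()
    pts = list(edge_points)
    for i in range(len(pts)):
        for j in range(i + 1, len(pts)):
            (x1, y1), (x2, y2) = pts[i], pts[j]
            mx, my = (x1 + x2) / 2, (y1 + y2) / 2  # midpoint lies on the bisector
            theta = math.atan2(y2 - y1, x2 - x1)
            rho = mx * math.cos(theta) + my * math.sin(theta)
            acc[(round(math.degrees(theta) / angle_step) * angle_step,
                 round(rho / rho_step) * rho_step)] += 1
    return acc

# A point set mirror-symmetric about the vertical line x = 5:
points = [(2, 1), (8, 1), (3, 4), (7, 4), (4, 6), (6, 6)]
axis = symmetry_axis_votes(points).most_common(1)[0][0]
```

    On this toy input the winning bin is theta = 0 degrees, rho = 5, i.e., the vertical line x = 5. The paper's contribution is precisely about taming the O(n^2) vote cost and redundancy of this scheme via chain coding.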

  15. Men's facial masculinity: when (body) size matters.

    PubMed

    Holzleitner, Iris J; Hunter, David W; Tiddeman, Bernard P; Seck, Alassane; Re, Daniel E; Perrett, David I

    2014-01-01

    Recent studies suggest that judgments of facial masculinity reflect more than sexually dimorphic shape. Here, we investigated whether the perception of masculinity is influenced by facial cues to body height and weight. We used the average differences in three-dimensional face shape of forty men and forty women to compute a morphological masculinity score, and derived analogous measures for facial correlates of height and weight based on the average face shape of short and tall, and light and heavy men. We found that facial cues to body height and weight had substantial and independent effects on the perception of masculinity. Our findings suggest that men are perceived as more masculine if they appear taller and heavier, independent of how much their face shape differs from women's. We describe a simple method to quantify how body traits are reflected in the face and to define the physical basis of psychological attributions.
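
    A morphological score of the kind described in this record amounts to projecting a face's shape vector onto the axis defined by the difference between group-average shapes. A minimal sketch with toy 2-D shape vectors (the study used dense three-dimensional face shapes; the numbers here are hypothetical):

```python
def shape_score(face, group_a_avg, group_b_avg):
    """Project a face shape vector onto the (group_a - group_b)
    average-difference axis. With male vs female averages this gives
    a morphological masculinity score; with tall vs short or heavy vs
    light averages it gives analogous height/weight cue scores."""
    axis = [a - b for a, b in zip(group_a_avg, group_b_avg)]
    norm = sum(c * c for c in axis) ** 0.5
    return sum(x * c for x, c in zip(face, axis)) / norm

# Toy average shapes (2-D stand-ins for full 3-D shape vectors).
male_avg = [2.0, 0.0]
female_avg = [0.0, 2.0]
score = shape_score([3.0, 1.0], male_avg, female_avg)
```

    Because each trait axis (sex, height, weight) is computed from its own pair of group averages, the resulting scores can be entered as separate predictors, which is what lets the study separate their independent effects on perceived masculinity.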

  16. Influence of make-up on facial recognition.

    PubMed

    Ueda, Sayako; Koyama, Takamasa

    2010-01-01

    Make-up may enhance or disguise facial characteristics. The influence of wearing make-up on facial recognition could be of two kinds: (i) when women do not wear make-up and then are seen with make-up, and (ii) when women wear make-up and then are seen without make-up. A study is reported which shows that light make-up makes it easier to recognise a face, and heavy make-up makes it more difficult. Seeing initially a made-up face makes any subsequent facial recognition more difficult than initially seeing that face without make-up.

  17. How Beauty Determines Gaze! Facial Attractiveness and Gaze Duration in Images of Real World Scenes

    PubMed Central

    Mitrovic, Aleksandra; Goller, Jürgen

    2016-01-01

    We showed that the looking time spent on faces is a valid covariate of beauty by testing the relation between facial attractiveness and gaze behavior. We presented natural scenes which always pictured two people, encompassing a wide range of facial attractiveness. Employing measurements of eye movements in a free viewing paradigm, we found a linear relation between facial attractiveness and gaze behavior: the more attractive the face, the longer and the more often it was looked at. In line with evolutionary approaches, the positive relation was particularly pronounced when participants viewed other-sex faces. PMID:27698984

  18. Identity modulates short-term memory for facial emotion.

    PubMed

    Galster, Murray; Kahana, Michael J; Wilson, Hugh R; Sekuler, Robert

    2009-12-01

    For some time, the relationship between processing of facial expression and facial identity has been in dispute. Using realistic synthetic faces, we reexamined this relationship for both perception and short-term memory. In Experiment 1, subjects tried to identify whether the emotional expression on a probe stimulus face matched the emotional expression on either of two remembered faces that they had just seen. The results showed that identity strongly influenced recognition short-term memory for emotional expression. In Experiment 2, subjects' similarity/dissimilarity judgments were transformed by multidimensional scaling (MDS) into a 2-D description of the faces' perceptual representations. Distances among stimuli in the MDS representation, which showed a strong linkage of emotional expression and facial identity, were good predictors of correct and false recognitions obtained previously in Experiment 1. The convergence of the results from Experiments 1 and 2 suggests that the overall structure and configuration of faces' perceptual representations may parallel their representation in short-term memory and that facial identity modulates the representation of facial emotion, both in perception and in memory. The stimuli from this study may be downloaded from http://cabn.psychonomic-journals.org/content/supplemental.

  19. Appearance-Based Inferences Bias Source Memory

    PubMed Central

    Cassidy, Brittany S.; Zebrowitz, Leslie A.; Gutchess, Angela H.

    2012-01-01

    Previous research varying the trustworthiness of appearance has demonstrated that facial characteristics contribute to source memory. Two studies extended this work by investigating the contribution to source memory of babyfaceness, a facial quality known to elicit strong spontaneous trait inferences. Young adult participants viewed younger and older babyfaced and mature-faced individuals paired with sentences that were either congruent or incongruent with the target's facial characteristics. Identifying a source as dominant or submissive was least accurate when participants chose between a target whose behavior was incongruent with facial characteristics and a lure whose face mismatched the target in appearance, but matched the source memory question. In Study 1, this effect held true when identifying older sources, but not own-age, younger sources. When task difficulty was increased in Study 2, the relationship between face-behavior congruence and lure facial characteristics persisted, but it was not moderated by target age even though participants continued to correctly identify fewer older than younger sources. Taken together, these results indicate that trait expectations associated with variations in facial maturity can bias source memory for both own- and other-age faces, although own-age faces are less vulnerable to this bias, as shown in the moderation by task difficulty. PMID:22806429

  20. Association of impaired facial affect recognition with basic facial and visual processing deficits in schizophrenia.

    PubMed

    Norton, Daniel; McBain, Ryan; Holt, Daphne J; Ongur, Dost; Chen, Yue

    2009-06-15

    Impaired emotion recognition has been reported in schizophrenia, yet the nature of this impairment is not completely understood. Recognition of facial emotion depends on processing affective and nonaffective facial signals, as well as basic visual attributes. We examined whether and how poor facial emotion recognition in schizophrenia is related to basic visual processing and nonaffective face recognition. Schizophrenia patients (n = 32) and healthy control subjects (n = 29) performed emotion discrimination, identity discrimination, and visual contrast detection tasks, where the emotionality, distinctiveness of identity, or visual contrast was systematically manipulated. Subjects determined which of two presentations in a trial contained the target: the emotional face for emotion discrimination, a specific individual for identity discrimination, and a sinusoidal grating for contrast detection. Patients had significantly higher thresholds (worse performance) than control subjects for discriminating both fearful and happy faces. Furthermore, patients' poor performance in fear discrimination was predicted by performance in visual detection and face identity discrimination. Schizophrenia patients require greater emotional signal strength to discriminate fearful or happy face images from neutral ones. Deficient emotion recognition in schizophrenia does not appear to be determined solely by affective processing but is also linked to the processing of basic visual and facial information.

  1. The Role of Facial Attractiveness and Facial Masculinity/Femininity in Sex Classification of Faces

    PubMed Central

    Hoss, Rebecca A.; Ramsey, Jennifer L.; Griffin, Angela M.; Langlois, Judith H.

    2005-01-01

    We tested whether adults (Experiment 1) and 4–5-year-old children (Experiment 2) identify the sex of high attractive faces faster and more accurately than low attractive faces in a reaction time task. We also assessed whether facial masculinity/femininity facilitated identification of sex. Results showed that attractiveness facilitated adults’ sex classification of both female and male faces and children’s sex classification of female, but not male, faces. Moreover, attractiveness affected the speed and accuracy of sex classification independent of masculinity/femininity. High masculinity in male faces, but not high femininity in female faces, also facilitated sex classification for both adults and children. These findings provide important new data on how the facial cues of attractiveness and masculinity/femininity contribute to the task of sex classification and provide evidence for developmental differences in how adults and children use these cues. Additionally, these findings provide support for Langlois and Roggman’s (1990) averageness theory of attractiveness. PMID:16457167

  2. Interpretation of Appearance: The Effect of Facial Features on First Impressions and Personality

    PubMed Central

    Wolffhechel, Karin; Fagertun, Jens; Jacobsen, Ulrik Plesner; Majewski, Wiktor; Hemmingsen, Astrid Sofie; Larsen, Catrine Lohmann; Lorentzen, Sofie Katrine; Jarmer, Hanne

    2014-01-01

    Appearance is known to influence social interactions, which in turn could potentially influence personality development. In this study we focus on discovering the relationship between self-reported personality traits, first impressions and facial characteristics. The results reveal that several personality traits can be read above chance from a face, and that facial features influence first impressions. Despite the former, our prediction model fails to reliably infer personality traits from either facial features or first impressions. First impressions, however, could be inferred more reliably from facial features. We have generated artificial, extreme faces visualising the characteristics having an effect on first impressions for several traits. Conclusively, we find a relationship between first impressions, some personality traits and facial features and consolidate that people on average assess a given face in a highly similar manner. PMID:25233221

  3. Interpretation of appearance: the effect of facial features on first impressions and personality.

    PubMed

    Wolffhechel, Karin; Fagertun, Jens; Jacobsen, Ulrik Plesner; Majewski, Wiktor; Hemmingsen, Astrid Sofie; Larsen, Catrine Lohmann; Lorentzen, Sofie Katrine; Jarmer, Hanne

    2014-01-01

    Appearance is known to influence social interactions, which in turn could potentially influence personality development. In this study we focus on discovering the relationship between self-reported personality traits, first impressions and facial characteristics. The results reveal that several personality traits can be read above chance from a face, and that facial features influence first impressions. Despite the former, our prediction model fails to reliably infer personality traits from either facial features or first impressions. First impressions, however, could be inferred more reliably from facial features. We have generated artificial, extreme faces visualising the characteristics having an effect on first impressions for several traits. Conclusively, we find a relationship between first impressions, some personality traits and facial features and consolidate that people on average assess a given face in a highly similar manner.

  4. Perception of face and body expressions using electromyography, pupillometry and gaze measures.

    PubMed

    Kret, Mariska E; Stekelenburg, Jeroen J; Roelofs, Karin; de Gelder, Beatrice

    2013-01-01

    Traditional emotion theories stress the importance of the face in the expression of emotions but bodily expressions are becoming increasingly important as well. In these experiments we tested the hypothesis that similar physiological responses can be evoked by observing emotional face and body signals and that the reaction to angry signals is amplified in anxious individuals. We designed three experiments in which participants categorized emotional expressions from isolated facial and bodily expressions and emotionally congruent and incongruent face-body compounds. Participants' fixations were measured and their pupil size recorded with eye-tracking equipment and their facial reactions measured with electromyography. The results support our prediction that the recognition of a facial expression is improved in the context of a matching posture and importantly, vice versa as well. From their facial expressions, it appeared that observers acted with signs of negative emotionality (increased corrugator activity) to angry and fearful facial expressions and with positive emotionality (increased zygomaticus) to happy facial expressions. What we predicted and found, was that angry and fearful cues from the face or the body, attracted more attention than happy cues. We further observed that responses evoked by angry cues were amplified in individuals with high anxiety scores. In sum, we show that people process bodily expressions of emotion in a similar fashion as facial expressions and that the congruency between the emotional signals from the face and body facilitates the recognition of the emotion.

  5. Judgments of facial attractiveness as a combination of facial parts information over time: Social and aesthetic factors.

    PubMed

    Saegusa, Chihiro; Watanabe, Katsumi

    2016-02-01

    Facial attractiveness can be judged on the basis of visual information acquired in a very short duration, but the absolute level of attractiveness changes depending on the duration of the observation. However, how information from individual facial parts contributes to the judgment of whole-face attractiveness is unknown. In the current study, we examined how contributions of facial parts to the judgment of whole-face attractiveness would change over time. In separate sessions, participants evaluated the attractiveness of whole faces, as well as of the eyes, nose, and mouth after observing them for 20, 100, and 1,000 ms. Correlation and multiple regression analyses indicated that the eyes made a consistently high contribution to whole-face attractiveness, even with an observation duration of 20 ms, whereas the contribution of other facial parts increased as the observation duration grew longer. When the eyes were averted, the attractiveness ratings for the whole face were decreased marginally. In addition, the contribution advantage of the eyes at the 20-ms observation duration was diminished. We interpret these results to indicate that (a) eye gaze signals social attractiveness at the early stage (perhaps in combination with emotional expression), (b) other facial parts start contributing to the judgment of whole-face attractiveness by forming aesthetic attractiveness, and (c) there is a dynamic interplay between social and aesthetic attractiveness. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  6. Recognizing Age-Separated Face Images: Humans and Machines

    PubMed Central

    Yadav, Daksha; Singh, Richa; Vatsa, Mayank; Noore, Afzel

    2014-01-01

    Humans utilize facial appearance, gender, expression, aging pattern, and other ancillary information to recognize individuals. It is interesting to observe how humans perceive facial age. Analyzing these properties can help in understanding the phenomenon of facial aging and incorporating the findings can help in designing effective algorithms. Such a study has two components - facial age estimation and age-separated face recognition. Age estimation involves predicting the age of an individual given his/her facial image. On the other hand, age-separated face recognition consists of recognizing an individual given his/her age-separated images. In this research, we investigate which facial cues are utilized by humans for estimating the age of people belonging to various age groups along with analyzing the effect of one's gender, age, and ethnicity on age estimation skills. We also analyze how various facial regions such as binocular and mouth regions influence age estimation and recognition capabilities. Finally, we propose an age-invariant face recognition algorithm that incorporates the knowledge learned from these observations. Key observations of our research are: (1) the age group of newborns and toddlers is easiest to estimate, (2) gender and ethnicity do not affect the judgment of age group estimation, (3) the face, as a global feature, is essential to achieve good performance in age-separated face recognition, and (4) the proposed algorithm yields improved recognition performance compared to existing algorithms and also outperforms a commercial system in the young-image-as-probe scenario. PMID:25474200

  7. Recognizing age-separated face images: humans and machines.

    PubMed

    Yadav, Daksha; Singh, Richa; Vatsa, Mayank; Noore, Afzel

    2014-01-01

    Humans utilize facial appearance, gender, expression, aging pattern, and other ancillary information to recognize individuals. It is interesting to observe how humans perceive facial age. Analyzing these properties can help in understanding the phenomenon of facial aging and incorporating the findings can help in designing effective algorithms. Such a study has two components--facial age estimation and age-separated face recognition. Age estimation involves predicting the age of an individual given his/her facial image. On the other hand, age-separated face recognition consists of recognizing an individual given his/her age-separated images. In this research, we investigate which facial cues are utilized by humans for estimating the age of people belonging to various age groups along with analyzing the effect of one's gender, age, and ethnicity on age estimation skills. We also analyze how various facial regions such as binocular and mouth regions influence age estimation and recognition capabilities. Finally, we propose an age-invariant face recognition algorithm that incorporates the knowledge learned from these observations. Key observations of our research are: (1) the age group of newborns and toddlers is easiest to estimate, (2) gender and ethnicity do not affect the judgment of age group estimation, (3) the face, as a global feature, is essential to achieve good performance in age-separated face recognition, and (4) the proposed algorithm yields improved recognition performance compared to existing algorithms and also outperforms a commercial system in the young-image-as-probe scenario.

  8. Perception of Face and Body Expressions Using Electromyography, Pupillometry and Gaze Measures

    PubMed Central

    Kret, Mariska E.; Stekelenburg, Jeroen J.; Roelofs, Karin; de Gelder, Beatrice

    2013-01-01

    Traditional emotion theories stress the importance of the face in the expression of emotions but bodily expressions are becoming increasingly important as well. In these experiments we tested the hypothesis that similar physiological responses can be evoked by observing emotional face and body signals and that the reaction to angry signals is amplified in anxious individuals. We designed three experiments in which participants categorized emotional expressions from isolated facial and bodily expressions and emotionally congruent and incongruent face-body compounds. Participants’ fixations were measured and their pupil size recorded with eye-tracking equipment and their facial reactions measured with electromyography. The results support our prediction that the recognition of a facial expression is improved in the context of a matching posture and importantly, vice versa as well. From their facial expressions, it appeared that observers acted with signs of negative emotionality (increased corrugator activity) to angry and fearful facial expressions and with positive emotionality (increased zygomaticus) to happy facial expressions. What we predicted and found, was that angry and fearful cues from the face or the body, attracted more attention than happy cues. We further observed that responses evoked by angry cues were amplified in individuals with high anxiety scores. In sum, we show that people process bodily expressions of emotion in a similar fashion as facial expressions and that the congruency between the emotional signals from the face and body facilitates the recognition of the emotion. PMID:23403886

  9. Not on the Face Alone: Perception of Contextualized Face Expressions in Huntington's Disease

    ERIC Educational Resources Information Center

    Aviezer, Hillel; Bentin, Shlomo; Hassin, Ran R.; Meschino, Wendy S.; Kennedy, Jeanne; Grewal, Sonya; Esmail, Sherali; Cohen, Sharon; Moscovitch, Morris

    2009-01-01

    Numerous studies have demonstrated that Huntington's disease mutation-carriers have deficient explicit recognition of isolated facial expressions. There are no studies, however, which have investigated the recognition of facial expressions embedded within an emotional body and scene context. Real life facial expressions are typically embedded in…

  10. IntraFace

    PubMed Central

    De la Torre, Fernando; Chu, Wen-Sheng; Xiong, Xuehan; Vicente, Francisco; Ding, Xiaoyu; Cohn, Jeffrey

    2016-01-01

    Within the last 20 years, there has been an increasing interest in the computer vision community in automated facial image analysis algorithms. This has been driven by applications in animation, market research, autonomous driving, surveillance, and facial editing, among others. To date, there exist several commercial packages for specific facial image analysis tasks such as facial expression recognition, facial attribute analysis or face tracking. However, free and easy-to-use software that incorporates all these functionalities is unavailable. This paper presents IntraFace (IF), a publicly-available software package for automated facial feature tracking, head pose estimation, facial attribute recognition, and facial expression analysis from video. In addition, IF includes a newly developed technique for unsupervised synchrony detection to discover correlated facial behavior between two or more persons, a relatively unexplored problem in facial image analysis. In tests, IF achieved state-of-the-art results for emotion expression and action unit detection in three databases, FERA, CK+ and RU-FACS; measured audience reaction to a talk given by one of the authors; and discovered synchrony for smiling in videos of parent-infant interaction. IF is free of charge for academic use at http://www.humansensing.cs.cmu.edu/intraface/. PMID:27346987

  11. IntraFace.

    PubMed

    De la Torre, Fernando; Chu, Wen-Sheng; Xiong, Xuehan; Vicente, Francisco; Ding, Xiaoyu; Cohn, Jeffrey

    2015-05-01

    Within the last 20 years, there has been an increasing interest in the computer vision community in automated facial image analysis algorithms. This has been driven by applications in animation, market research, autonomous driving, surveillance, and facial editing, among others. To date, there exist several commercial packages for specific facial image analysis tasks such as facial expression recognition, facial attribute analysis or face tracking. However, free and easy-to-use software that incorporates all these functionalities is unavailable. This paper presents IntraFace (IF), a publicly-available software package for automated facial feature tracking, head pose estimation, facial attribute recognition, and facial expression analysis from video. In addition, IF includes a newly developed technique for unsupervised synchrony detection to discover correlated facial behavior between two or more persons, a relatively unexplored problem in facial image analysis. In tests, IF achieved state-of-the-art results for emotion expression and action unit detection in three databases, FERA, CK+ and RU-FACS; measured audience reaction to a talk given by one of the authors; and discovered synchrony for smiling in videos of parent-infant interaction. IF is free of charge for academic use at http://www.humansensing.cs.cmu.edu/intraface/.

  12. Is moral beauty different from facial beauty? Evidence from an fMRI study.

    PubMed

    Wang, Tingting; Mo, Lei; Mo, Ce; Tan, Li Hai; Cant, Jonathan S; Zhong, Luojin; Cupchik, Gerald

    2015-06-01

    Is moral beauty different from facial beauty? Two functional magnetic resonance imaging experiments were performed to answer this question. Experiment 1 investigated the network of moral aesthetic judgments and facial aesthetic judgments. Participants performed aesthetic judgments and gender judgments on both faces and scenes containing moral acts. The conjunction analysis of the contrasts 'facial aesthetic judgment > facial gender judgment' and 'scene moral aesthetic judgment > scene gender judgment' identified the common involvement of the orbitofrontal cortex (OFC), inferior temporal gyrus and medial superior frontal gyrus, suggesting that both types of aesthetic judgments are based on the orchestration of perceptual, emotional and cognitive components. Experiment 2 examined the network of facial beauty and moral beauty during implicit perception. Participants performed a non-aesthetic judgment task on both faces (beautiful vs common) and scenes (containing morally beautiful vs neutral information). We observed that facial beauty (beautiful faces > common faces) involved both the cortical reward region OFC and the subcortical reward region putamen, whereas moral beauty (moral beauty scenes > moral neutral scenes) only involved the OFC. Moreover, compared with facial beauty, moral beauty spanned a larger-scale cortical network, indicating more advanced and complex cerebral representations characterizing moral beauty. © The Author (2014). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  13. Similarities and differences in Chinese and Caucasian adults' use of facial cues for trustworthiness judgments.

    PubMed

    Xu, Fen; Wu, Dingcheng; Toriyama, Rie; Ma, Fengling; Itakura, Shoji; Lee, Kang

    2012-01-01

    All cultural groups in the world place paramount value on interpersonal trust. Existing research suggests that although accurate judgments of another's trustworthiness require extensive interactions with the person, we often make trustworthiness judgments based on facial cues on the first encounter. However, little is known about what facial cues are used for such judgments and what the bases are on which individuals make their trustworthiness judgments. In the present study, we tested the hypothesis that individuals may use facial attractiveness cues as a "shortcut" for judging another's trustworthiness due to the lack of other more informative and in-depth information about trustworthiness. Using data-driven statistical models of 3D Caucasian faces, we compared facial cues used for judging the trustworthiness of Caucasian faces by Caucasian participants who were highly experienced with Caucasian faces, and the facial cues used by Chinese participants who were unfamiliar with Caucasian faces. We found that Chinese and Caucasian participants used similar facial cues to judge trustworthiness. Also, both Chinese and Caucasian participants used almost identical facial cues for judging trustworthiness and attractiveness. The results suggest that without opportunities to interact with another person extensively, we use the less racially specific and more universal attractiveness cues as a "shortcut" for trustworthiness judgments.

  14. Orthogonal-blendshape-based editing system for facial motion capture data.

    PubMed

    Li, Qing; Deng, Zhigang

    2008-01-01

    The authors present a novel data-driven 3D facial motion capture data editing system using automated construction of an orthogonal blendshape face model and constrained weight propagation, aiming to bridge the popular facial motion capture technique and blendshape approach. In this work, a 3D facial-motion-capture-editing problem is transformed to a blendshape-animation-editing problem. Given a collected facial motion capture data set, we construct a truncated PCA space spanned by the greatest retained eigenvectors and a corresponding blendshape face model for each anatomical region of the human face. As such, modifying blendshape weights (PCA coefficients) is equivalent to editing their corresponding motion capture sequence. In addition, a constrained weight propagation technique allows animators to balance automation and flexible controls.
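    The region-wise construction described above (a truncated PCA space whose coefficients act as blendshape weights, so modifying a weight edits the corresponding motion-capture frame) can be sketched as follows. This is a minimal NumPy illustration on toy data, not the authors' system; the array sizes, function names, and the 5-component truncation are hypothetical choices:

```python
import numpy as np

def build_blendshape_model(frames, n_components):
    """PCA over motion-capture frames for one facial region.

    frames: (n_frames, n_markers * 3) flattened marker positions.
    Returns the mean shape, the top principal directions (the
    'orthogonal blendshapes'), and per-frame blendshape weights.
    """
    mean = frames.mean(axis=0)
    centered = frames - mean
    # SVD yields orthonormal principal directions; keep the greatest ones.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:n_components]        # (n_components, dim)
    weights = centered @ basis.T     # (n_frames, n_components)
    return mean, basis, weights

def edit_frame(mean, basis, weights, frame_idx, component, delta):
    """Editing a blendshape weight (PCA coefficient) edits the frame."""
    w = weights[frame_idx].copy()
    w[component] += delta
    return mean + w @ basis

# Toy data: 50 frames of 10 markers (30 coordinates each).
rng = np.random.default_rng(0)
frames = rng.normal(size=(50, 30))
mean, basis, weights = build_blendshape_model(frames, n_components=5)
edited = edit_frame(mean, basis, weights, frame_idx=0, component=2, delta=0.5)
```

In this sketch the blendshape basis is orthonormal by construction, which is what makes weight edits independent of one another.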

  15. A system for tracking and recognizing pedestrian faces using a network of loosely coupled cameras

    NASA Astrophysics Data System (ADS)

    Gagnon, L.; Laliberté, F.; Foucher, S.; Branzan Albu, A.; Laurendeau, D.

    2006-05-01

    A face recognition module has been developed for an intelligent multi-camera video surveillance system. The module can recognize a pedestrian face in terms of six basic emotions and the neutral state. Face and facial features detection (eyes, nasal root, nose and mouth) are first performed using cascades of boosted classifiers. These features are used to normalize the pose and dimension of the face image. Gabor filters are then sampled on a regular grid covering the face image to build a facial feature vector that feeds a nearest neighbor classifier with a cosine distance similarity measure for facial expression interpretation and face model construction. A graphical user interface allows the user to adjust the module parameters.
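    The classification step described above (a nearest-neighbor classifier with a cosine similarity measure over Gabor feature vectors) can be sketched as follows. This is a minimal pure-Python illustration with made-up gallery vectors, not the module's actual code:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def classify_expression(feature_vec, gallery):
    """Nearest neighbour over labelled feature vectors: return the
    label of the gallery entry with the highest cosine similarity."""
    best_label, best_sim = None, -2.0
    for label, ref in gallery:
        sim = cosine_similarity(feature_vec, ref)
        if sim > best_sim:
            best_label, best_sim = label, sim
    return best_label

# Hypothetical 3-D feature vectors standing in for sampled Gabor responses.
gallery = [("happy", [0.9, 0.1, 0.2]), ("anger", [0.1, 0.8, 0.3])]
print(classify_expression([0.85, 0.2, 0.15], gallery))  # → happy
```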

  16. The effects of postnatal maternal depression and anxiety on the processing of infant faces

    PubMed Central

    Arteche, Adriane; Joormann, Jutta; Harvey, Allison; Craske, Michelle; Gotlib, Ian H.; Lehtonen, Annukka; Counsell, Nicholas; Stein, Alan

    2011-01-01

    Background Postnatally depressed mothers have difficulties responding appropriately to their infants. The quality of the mother–child relationship depends on a mother's ability to respond to her infant's cues, which are largely non-verbal. Therefore, it is likely that difficulties in a mother's appraisal of her infants' facial expressions will affect the quality of mother–infant interaction. This study aimed to investigate the effects of postnatal depression and anxiety on the processing of infants' facial expressions. Method A total of 89 mothers, 34 with Generalised Anxiety Disorder, 21 with Major Depressive Disorder, and 34 controls, completed a ‘morphed infants’ faces task when their children were between 10 and 18 months. Results Overall, mothers were more likely to identify happy faces accurately and at lower intensity than sad faces. Depressed compared to control participants, however, were less likely to accurately identify happy infant faces. Interestingly, mothers with GAD tended to identify happy faces at a lower intensity than controls. There were no differences between the groups in relation to sad faces. Limitations Our sample was relatively small and further research is needed to investigate the links between mothers' perceptions of infant expressions and both maternal responsiveness and later measures of child development. Conclusion Our findings have potential clinical implications as the difficulties in the processing of positive facial expressions in depression may lead to less maternal responsiveness to positive affect in the offspring and may diminish the quality of the mother–child interactions. Results for participants with GAD are consistent with the literature demonstrating that persons with GAD are intolerant of uncertainty and seek reassurance due to their worries. PMID:21641652

  17. The effects of postnatal maternal depression and anxiety on the processing of infant faces.

    PubMed

    Arteche, Adriane; Joormann, Jutta; Harvey, Allison; Craske, Michelle; Gotlib, Ian H; Lehtonen, Annukka; Counsell, Nicholas; Stein, Alan

    2011-09-01

    Postnatally depressed mothers have difficulties responding appropriately to their infants. The quality of the mother-child relationship depends on a mother's ability to respond to her infant's cues, which are largely non-verbal. Therefore, it is likely that difficulties in a mother's appraisal of her infants' facial expressions will affect the quality of mother-infant interaction. This study aimed to investigate the effects of postnatal depression and anxiety on the processing of infants' facial expressions. A total of 89 mothers, 34 with Generalised Anxiety Disorder, 21 with Major Depressive Disorder, and 34 controls, completed a 'morphed infants' faces task when their children were between 10 and 18 months. Overall, mothers were more likely to identify happy faces accurately and at lower intensity than sad faces. Depressed compared to control participants, however, were less likely to accurately identify happy infant faces. Interestingly, mothers with GAD tended to identify happy faces at a lower intensity than controls. There were no differences between the groups in relation to sad faces. Our sample was relatively small and further research is needed to investigate the links between mothers' perceptions of infant expressions and both maternal responsiveness and later measures of child development. Our findings have potential clinical implications as the difficulties in the processing of positive facial expressions in depression may lead to less maternal responsiveness to positive affect in the offspring and may diminish the quality of the mother-child interactions. Results for participants with GAD are consistent with the literature demonstrating that persons with GAD are intolerant of uncertainty and seek reassurance due to their worries. Copyright © 2011 Elsevier B.V. All rights reserved.

  18. Concealing of facial expressions by a wild Barbary macaque (Macaca sylvanus).

    PubMed

    Thunström, Maria; Kuchenbuch, Paul; Young, Christopher

    2014-07-01

    Behavioural research on non-vocal communication among non-human primates and its possible links to the origin of human language is a long-standing research topic. Because human language is under voluntary control, it is of interest whether this is also true for any communicative signals of other species. It has been argued that the behaviour of hiding a facial expression with one's hand supports the idea that gestures might be under more voluntary control than facial expressions among non-human primates, and it has also been interpreted as a sign of intentionality. So far, the behaviour has only been reported twice, for single gorilla and chimpanzee individuals, both in captivity. Here, we report the first observation of concealing of facial expressions by a monkey, a Barbary macaque (Macaca sylvanus), living in the wild. On eight separate occasions between 2009 and 2011 an adult male was filmed concealing two different facial expressions associated with play and aggression ("play face" and "scream face"), 22 times in total. The videos were analysed in detail, including gaze direction, hand usage, duration, and individuals present. This male was the only individual in his group to manifest this behaviour, which always occurred in the presence of a dominant male. Several possible interpretations of the function of the behaviour are discussed. The observations in this study indicate that the gestural communication and cognitive abilities of monkeys warrant more research attention.

  19. Facial Expression Recognition using Multiclass Ensemble Least-Square Support Vector Machine

    NASA Astrophysics Data System (ADS)

    Lawi, Armin; Sya'Rani Machrizzandi, M.

    2018-03-01

    Facial expression is one of the behavioral characteristics of human beings. The use of a biometrics technology system with facial expression characteristics makes it possible to recognize a person’s mood or emotion. The basic components of a facial expression analysis system are face detection, face image extraction, facial classification, and facial expression recognition. This paper uses the Principal Component Analysis (PCA) algorithm to extract facial features with expression parameters, i.e., happy, sad, neutral, angry, fear, and disgusted. Then Multiclass Ensemble Least-Squares Support Vector Machine (MELS-SVM) is used for the classification process of facial expression. The MELS-SVM model, evaluated on our 185 expression images of 10 persons, achieved a high accuracy of 99.998% using the RBF kernel.
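    A linear one-vs-rest least-squares classifier captures the spirit of the least-squares SVM component in such a pipeline. The sketch below is an assumption-laden illustration only: the paper's MELS-SVM is an ensemble with an RBF kernel, and the toy "expression" clusters and class names here are invented:

```python
import numpy as np

def fit_one_vs_rest_ls(X, y, classes, lam=1e-3):
    """One ridge-regularized linear least-squares scorer per class
    (targets +1 for the class, -1 for the rest)."""
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append bias term
    W = {}
    for c in classes:
        t = np.where(y == c, 1.0, -1.0)
        A = Xb.T @ Xb + lam * np.eye(Xb.shape[1])  # normal equations
        W[c] = np.linalg.solve(A, Xb.T @ t)
    return W

def predict(W, X):
    """Assign each sample to the class whose scorer responds highest."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    classes = list(W)
    scores = np.stack([Xb @ W[c] for c in classes], axis=1)
    return [classes[i] for i in scores.argmax(axis=1)]

# Toy, well-separated data for three 'expressions'.
rng = np.random.default_rng(1)
centers = {"happy": [2, 0], "sad": [-2, 0], "neutral": [0, 2]}
X = np.vstack([rng.normal(c, 0.3, size=(20, 2)) for c in centers.values()])
y = np.array(sum([[k] * 20 for k in centers], []))
W = fit_one_vs_rest_ls(X, y, list(centers))
pred = predict(W, X)
```

On clearly separated clusters like these, the linear one-vs-rest scheme recovers nearly all training labels; the real system would feed PCA-reduced image features in place of the 2-D toy points.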

  20. Realistic facial animation generation based on facial expression mapping

    NASA Astrophysics Data System (ADS)

    Yu, Hui; Garrod, Oliver; Jack, Rachael; Schyns, Philippe

    2014-01-01

    Facial expressions reflect a character's internal emotional states or responses to social communication. Though much effort has been made to generate realistic facial expressions, this remains a challenging topic due to human sensitivity to subtle facial movements. In this paper, we present a method for facial animation generation which reflects true facial muscle movements with high fidelity. An intermediate model space is introduced to transfer captured static AU peak frames based on FACS to the conformed target face. Dynamic parameters derived using a psychophysics method are then integrated to generate facial animation, which is assumed to represent the natural correlation of multiple AUs. Finally, the animation sequence in the intermediate model space is mapped to the target face to produce the final animation.

  1. Eye tracking reveals a crucial role for facial motion in recognition of faces by infants.

    PubMed

    Xiao, Naiqi G; Quinn, Paul C; Liu, Shaoying; Ge, Liezhong; Pascalis, Olivier; Lee, Kang

    2015-06-01

    Current knowledge about face processing in infancy comes largely from studies using static face stimuli, but faces that infants see in the real world are mostly moving ones. To bridge this gap, 3-, 6-, and 9-month-old Asian infants (N = 118) were familiarized with either moving or static Asian female faces, and then their face recognition was tested with static face images. Eye-tracking methodology was used to record eye movements during the familiarization and test phases. The results showed a developmental change in eye movement patterns, but only for the moving faces. In addition, the more infants shifted their fixations across facial regions, the better their face recognition was, but only for the moving faces. The results suggest that facial movement influences the way faces are encoded from early in development. (c) 2015 APA, all rights reserved.

  2. Facial Movements Facilitate Part-Based, Not Holistic, Processing in Children, Adolescents, and Adults

    ERIC Educational Resources Information Center

    Xiao, Naiqi G.; Quinn, Paul C.; Ge, Liezhong; Lee, Kang

    2017-01-01

    Although most of the faces we encounter daily are moving ones, much of what we know about face processing and its development is based on studies using static faces that emphasize holistic processing as the hallmark of mature face processing. Here the authors examined the effects of facial movements on face processing developmentally in children…

  3. Influence of Emotional Facial Expressions on 3-5-Year-Olds' Face Recognition

    ERIC Educational Resources Information Center

    Freitag, Claudia; Schwarzer, Gudrun

    2011-01-01

    Three experiments examined 3- and 5-year-olds' recognition of faces in constant and varied emotional expressions. Children were asked to identify repeatedly presented target faces, distinguishing them from distractor faces, during an immediate recognition test and during delayed assessments after 10 min and one week. Emotional facial expression…

  4. High-resolution face verification using pore-scale facial features.

    PubMed

    Li, Dong; Zhou, Huiling; Lam, Kin-Man

    2015-08-01

    Face recognition methods, which usually represent face images using holistic or local facial features, rely heavily on alignment. Their performance also suffers severe degradation under variations in expressions or poses, especially when there is only one gallery image per subject. With the easy access to high-resolution (HR) face images nowadays, some HR face databases have recently been developed. However, few studies have tackled the use of HR information for face recognition or verification. In this paper, we propose a pose-invariant face-verification method, which is robust to alignment errors, using HR information based on pore-scale facial features. A new keypoint descriptor, namely, pore-Principal Component Analysis (PCA)-Scale Invariant Feature Transform (PPCASIFT), adapted from PCA-SIFT, is devised for the extraction of a compact set of distinctive pore-scale facial features. Having matched the pore-scale features of two face regions, an effective robust-fitting scheme is proposed for the face-verification task. Experiments show that, with only one frontal-view gallery image per subject, our proposed method outperforms a number of standard verification methods, and can achieve excellent accuracy even when the faces are under large variations in expression and pose.
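    The matching stage that precedes such a robust-fitting scheme can be illustrated with a Lowe-style ratio test over keypoint descriptors: a candidate match is kept only when its nearest gallery descriptor is clearly closer than the second nearest. This is a generic sketch with toy 2-D descriptors, not the paper's PPCASIFT pipeline, and the 0.8 threshold is an illustrative choice:

```python
import math

def euclid(a, b):
    """Euclidean distance between two descriptor vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def ratio_test_matches(desc_a, desc_b, ratio=0.8):
    """Return (i, j) index pairs that pass the nearest/second-nearest
    distance ratio test (ambiguous matches are discarded)."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = sorted((euclid(d, e), j) for j, e in enumerate(desc_b))
        if len(dists) >= 2 and dists[0][0] < ratio * dists[1][0]:
            matches.append((i, dists[0][1]))
    return matches

# Toy descriptors: each of a's keypoints has one clear counterpart in b.
a = [[0.0, 0.0], [5.0, 5.0]]
b = [[0.1, 0.0], [5.0, 5.1], [9.0, 9.0]]
print(ratio_test_matches(a, b))  # → [(0, 0), (1, 1)]
```

The surviving matches would then feed the robust-fitting step, which discards the remaining outlier correspondences before the verification decision.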

  5. Visual search for facial expressions of emotions: a comparison of dynamic and static faces.

    PubMed

    Horstmann, Gernot; Ansorge, Ulrich

    2009-02-01

    A number of past studies have used the visual search paradigm to examine whether certain aspects of emotional faces are processed preattentively and can thus be used to guide attention. All these studies presented static depictions of facial prototypes. Emotional expressions conveyed by the movement patterns of the face have never been examined for their preattentive effect. The present study presented for the first time dynamic facial expressions in a visual search paradigm. Experiment 1 revealed efficient search for a dynamic angry face among dynamic friendly faces, but inefficient search in a control condition with static faces. Experiments 2 to 4 suggested that this pattern of results is due to a stronger movement signal in the angry than in the friendly face: No (strong) advantage of dynamic over static faces is revealed when the degree of movement is controlled. These results show that dynamic information can be efficiently utilized in visual search for facial expressions. However, these results do not generally support the hypothesis that emotion-specific movement patterns are always preattentively discriminated. (c) 2009 APA, all rights reserved.

  6. Women's hormone levels modulate the motivational salience of facial attractiveness and sexual dimorphism.

    PubMed

    Wang, Hongyi; Hahn, Amanda C; Fisher, Claire I; DeBruine, Lisa M; Jones, Benedict C

    2014-12-01

    The physical attractiveness of faces is positively correlated with both behavioral and neural measures of their motivational salience. Although previous work suggests that hormone levels modulate women's perceptions of others' facial attractiveness, studies have not yet investigated whether hormone levels also modulate the motivational salience of facial characteristics. To address this issue, we investigated the relationships between within-subject changes in women's salivary hormone levels (estradiol, progesterone, testosterone, and estradiol-to-progesterone ratio) and within-subject changes in the motivational salience of attractiveness and sexual dimorphism in male and female faces. The motivational salience of physically attractive faces in general and feminine female faces, but not masculine male faces, was greater in test sessions where women had high testosterone levels. Additionally, the reward value of sexually dimorphic faces in general and attractive female faces, but not attractive male faces, was greater in test sessions where women had high estradiol-to-progesterone ratios. These results provide the first evidence that the motivational salience of facial attractiveness and sexual dimorphism is modulated by within-woman changes in hormone levels. Copyright © 2014 Elsevier Ltd. All rights reserved.

  7. An optimized ERP brain-computer interface based on facial expression changes.

    PubMed

    Jin, Jing; Daly, Ian; Zhang, Yu; Wang, Xingyu; Cichocki, Andrzej

    2014-06-01

    Interferences from spatially adjacent non-target stimuli are known to evoke event-related potentials (ERPs) during non-target flashes and, therefore, lead to false positives. This phenomenon was commonly seen in visual attention-based brain-computer interfaces (BCIs) using conspicuous stimuli and is known to adversely affect the performance of BCI systems. Although users try to focus on the target stimulus, they cannot help but be affected by conspicuous changes of the stimuli (such as flashes or presenting images) which were adjacent to the target stimulus. Furthermore, subjects have reported that conspicuous stimuli made them tired and annoyed. In view of this, the aim of this study was to reduce adjacent interference, annoyance and fatigue using a new stimulus presentation pattern based upon facial expression changes. Our goal was not to design a new pattern which could evoke larger ERPs than the face pattern, but to design a new pattern which could reduce adjacent interference, annoyance and fatigue, and evoke ERPs as good as those observed during the face pattern. Positive facial expressions could be changed to negative facial expressions by minor changes to the original facial image. Although the changes are minor, the contrast is big enough to evoke strong ERPs. In this paper, a facial expression change pattern between positive and negative facial expressions was used to attempt to minimize interference effects. This was compared against two different conditions, a shuffled pattern containing the same shapes and colours as the facial expression change pattern, but without the semantic content associated with a change in expression, and a face versus no face pattern. Comparisons were made in terms of classification accuracy and information transfer rate as well as user supplied subjective measures. The results showed that interferences from adjacent stimuli, annoyance and the fatigue experienced by the subjects could be reduced significantly (p < 0.05) by using the facial expression change patterns in comparison with the face pattern. The offline results show that the classification accuracy of the facial expression change pattern was significantly better than that of the shuffled pattern (p < 0.05) and the face pattern (p < 0.05). The facial expression change pattern presented in this paper reduced interference from adjacent stimuli and decreased the fatigue and annoyance experienced by BCI users significantly (p < 0.05) compared to the face pattern.

  8. An optimized ERP brain-computer interface based on facial expression changes

    NASA Astrophysics Data System (ADS)

    Jin, Jing; Daly, Ian; Zhang, Yu; Wang, Xingyu; Cichocki, Andrzej

    2014-06-01

    Objective. Interferences from spatially adjacent non-target stimuli are known to evoke event-related potentials (ERPs) during non-target flashes and, therefore, lead to false positives. This phenomenon was commonly seen in visual attention-based brain-computer interfaces (BCIs) using conspicuous stimuli and is known to adversely affect the performance of BCI systems. Although users try to focus on the target stimulus, they cannot help but be affected by conspicuous changes of the stimuli (such as flashes or presenting images) which were adjacent to the target stimulus. Furthermore, subjects have reported that conspicuous stimuli made them tired and annoyed. In view of this, the aim of this study was to reduce adjacent interference, annoyance and fatigue using a new stimulus presentation pattern based upon facial expression changes. Our goal was not to design a new pattern which could evoke larger ERPs than the face pattern, but to design a new pattern which could reduce adjacent interference, annoyance and fatigue, and evoke ERPs as good as those observed during the face pattern. Approach. Positive facial expressions could be changed to negative facial expressions by minor changes to the original facial image. Although the changes are minor, the contrast is big enough to evoke strong ERPs. In this paper, a facial expression change pattern between positive and negative facial expressions was used to attempt to minimize interference effects. This was compared against two different conditions, a shuffled pattern containing the same shapes and colours as the facial expression change pattern, but without the semantic content associated with a change in expression, and a face versus no face pattern. Comparisons were made in terms of classification accuracy and information transfer rate as well as user supplied subjective measures. Main results. The results showed that interferences from adjacent stimuli, annoyance and the fatigue experienced by the subjects could be reduced significantly (p < 0.05) by using the facial expression change patterns in comparison with the face pattern. The offline results show that the classification accuracy of the facial expression change pattern was significantly better than that of the shuffled pattern (p < 0.05) and the face pattern (p < 0.05). Significance. The facial expression change pattern presented in this paper reduced interference from adjacent stimuli and decreased the fatigue and annoyance experienced by BCI users significantly (p < 0.05) compared to the face pattern.

  9. The Facial Appearance of CEOs: Faces Signal Selection but Not Performance.

    PubMed

    Stoker, Janka I; Garretsen, Harry; Spreeuwers, Luuk J

    2016-01-01

    Research overwhelmingly shows that facial appearance predicts leader selection. However, the evidence on the relevance of faces for actual leader ability and consequently performance is inconclusive. By using a state-of-the-art, objective measure for face recognition, we test the predictive value of CEOs' faces for firm performance in a large sample of faces. We first compare the faces of Fortune500 CEOs with those of US citizens and professors. We find clear confirmation that CEOs do look different when compared to citizens or professors, replicating the finding that faces matter for selection. More importantly, we also find that faces of CEOs of top performing firms do not differ from other CEOs. Based on our advanced face recognition method, our results suggest that facial appearance matters for leader selection but that it does not do so for leader performance.

  10. Continuous noninvasive ventilation delivered by a novel total face mask: a case series report.

    PubMed

    Belchior, Inês; Gonçalves, Miguel R; Winck, João Carlos

    2012-03-01

    Noninvasive ventilation (NIV) has been widely used to decrease the complications associated with tracheal intubation in mechanically ventilated patients. However, nasal ulcerations may occur when conventional masks are used for continuous ventilation. A total face mask, which has no contact with the more sensitive areas of the face, is a possible option. We describe 3 patients with acute respiratory failure due to amyotrophic lateral sclerosis, who developed nasal bridge skin necrosis during continuous NIV, and one patient with post-extubation respiratory failure due to a high spinal cord injury, who had facial trauma with contraindication for conventional mask use. The total face mask was very well tolerated by all the patients, and permitted safe and efficient continuous NIV for several days until the acute respiratory failure episode resolved. None of the patients required endotracheal intubation during the acute episode.

  11. Anxiety disorders in adolescence are associated with impaired facial expression recognition to negative valence.

    PubMed

    Jarros, Rafaela Behs; Salum, Giovanni Abrahão; Belem da Silva, Cristiano Tschiedel; Toazza, Rudineia; de Abreu Costa, Marianna; Fumagalli de Salles, Jerusa; Manfro, Gisele Gus

    2012-02-01

    The aim of the present study was to test the ability of adolescents with a current anxiety diagnosis to recognize facial affective expressions, compared to those without an anxiety disorder. Forty cases and 27 controls were selected from a larger cross-sectional community sample of adolescents, aged from 10 to 17 years old. Adolescents' facial recognition of six human emotions (sadness, anger, disgust, happiness, surprise and fear) and neutral faces was assessed through a facial labeling test using Ekman's Pictures of Facial Affect (POFA). Adolescents with anxiety disorders had a higher mean number of errors in angry faces as compared to controls: 3.1 (SD=1.13) vs. 2.5 (SD=2.5), OR=1.72 (CI95% 1.02 to 2.89; p=0.040). However, they named neutral faces more accurately than adolescents without an anxiety diagnosis: 15% of cases vs. 37.1% of controls presented at least one error in neutral faces, OR=3.46 (CI95% 1.02 to 11.7; p=0.047). No differences were found for the other emotions or in the distribution of errors in each emotional face between the groups. Our findings support an anxiety-mediated influence on the recognition of facial expressions in adolescence. This difficulty in recognizing angry faces, together with greater accuracy in naming neutral faces, may lead to misinterpretation of social cues and can explain some aspects of the impairment in social interactions in adolescents with anxiety disorders. Copyright © 2011 Elsevier Ltd. All rights reserved.

  12. Face recognition accuracy of forensic examiners, superrecognizers, and face recognition algorithms.

    PubMed

    Phillips, P Jonathon; Yates, Amy N; Hu, Ying; Hahn, Carina A; Noyes, Eilidh; Jackson, Kelsey; Cavazos, Jacqueline G; Jeckeln, Géraldine; Ranjan, Rajeev; Sankaranarayanan, Swami; Chen, Jun-Cheng; Castillo, Carlos D; Chellappa, Rama; White, David; O'Toole, Alice J

    2018-06-12

    Achieving the upper limits of face identification accuracy in forensic applications can minimize errors that have profound social and personal consequences. Although forensic examiners identify faces in these applications, systematic tests of their accuracy are rare. How can we achieve the most accurate face identification: using people and/or machines working alone or in collaboration? In a comprehensive comparison of face identification by humans and computers, we found that forensic facial examiners, facial reviewers, and superrecognizers were more accurate than fingerprint examiners and students on a challenging face identification test. Individual performance on the test varied widely. On the same test, four deep convolutional neural networks (DCNNs), developed between 2015 and 2017, identified faces within the range of human accuracy. Accuracy of the algorithms increased steadily over time, with the most recent DCNN scoring above the median of the forensic facial examiners. Using crowd-sourcing methods, we fused the judgments of multiple forensic facial examiners by averaging their rating-based identity judgments. Accuracy was substantially better for fused judgments than for individuals working alone. Fusion also served to stabilize performance, boosting the scores of lower-performing individuals and decreasing variability. Single forensic facial examiners fused with the best algorithm were more accurate than the combination of two examiners. Therefore, collaboration among humans and between humans and machines offers tangible benefits to face identification accuracy in important applications. These results offer an evidence-based roadmap for achieving the most accurate face identification possible. Copyright © 2018 the Author(s). Published by PNAS.
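    The fusion procedure described above (averaging examiners' rating-based identity judgments per face pair) can be sketched as follows. The rating scale and pair identifiers below are illustrative assumptions, not the study's protocol:

```python
def fuse_judgments(ratings_by_examiner):
    """Average rating-based identity judgments across examiners.

    ratings_by_examiner: list of dicts mapping a pair id to a rating
    on, e.g., a -3..+3 scale (negative = different identities,
    positive = same identity).  Returns the fused score per pair.
    """
    pairs = ratings_by_examiner[0].keys()
    n = len(ratings_by_examiner)
    return {p: sum(r[p] for r in ratings_by_examiner) / n for p in pairs}

# Three hypothetical examiners rating two face pairs.
examiners = [
    {"pair1": 3, "pair2": -2},
    {"pair1": 2, "pair2": -3},
    {"pair1": 1, "pair2": -1},
]
fused = fuse_judgments(examiners)
print(fused)  # → {'pair1': 2.0, 'pair2': -2.0}
```

Averaging dampens the effect of any single low-performing rater, which is the stabilization effect the abstract reports.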

  13. A Comparison of Facial Color Pattern and Gazing Behavior in Canid Species Suggests Gaze Communication in Gray Wolves (Canis lupus)

    PubMed Central

    Ueda, Sayoko; Kumagai, Gaku; Otaki, Yusuke; Yamaguchi, Shinya; Kohshima, Shiro

    2014-01-01

    As facial color pattern around the eyes has been suggested to serve various adaptive functions related to the gaze signal, we compared the patterns among 25 canid species, focusing on the gaze signal, to estimate the function of facial color pattern in these species. The facial color patterns of the studied species could be categorized into the following three types based on contrast indices relating to the gaze signal: A-type (both pupil position in the eye outline and eye position in the face are clear), B-type (only the eye position is clear), and C-type (both the pupil and eye position are unclear). A-type faces with light-colored irises were observed in most studied species of the wolf-like clade and some of the red fox-like clade. A-type faces tended to be observed in species living in family groups all year-round, whereas B-type faces tended to be seen in solo/pair-living species. The duration of gazing behavior during which the facial gaze-signal is displayed to the other individual was longest in gray wolves with typical A-type faces, of intermediate length in fennec foxes with typical B-type faces, and shortest in bush dogs with typical C-type faces. These results suggest that the facial color pattern of canid species is related to their gaze communication and that canids with A-type faces, especially gray wolves, use the gaze signal in conspecific communication. PMID:24918751

  14. Quantitative analysis of fetal facial morphology using 3D ultrasound and statistical shape modeling: a feasibility study.

    PubMed

    Dall'Asta, Andrea; Schievano, Silvia; Bruse, Jan L; Paramasivam, Gowrishankar; Kaihura, Christine Tita; Dunaway, David; Lees, Christoph C

    2017-07-01

The antenatal detection of facial dysmorphism using 3-dimensional ultrasound may raise the suspicion of an underlying genetic condition but infrequently leads to a definitive antenatal diagnosis. Despite advances in array and noninvasive prenatal testing, not all genetic conditions can be ascertained from such testing. The aim of this study was to investigate the feasibility of quantitative assessment of fetal face features using prenatal 3-dimensional ultrasound volumes and statistical shape modeling. STUDY DESIGN: Thirteen normal and 7 abnormal stored 3-dimensional ultrasound fetal face volumes were analyzed, at a median gestational age of 29+4 weeks (range, 25+0 to 36+1). The 20 3-dimensional surface meshes generated were aligned and served as input for a statistical shape model, which computed the mean 3-dimensional face shape and 3-dimensional shape variations using principal component analysis. Ten shape modes explained more than 90% of the total shape variability in the population. While the first mode accounted for overall size differences, the second highlighted shape feature changes from an overall proportionate toward a more asymmetric face shape with a wide prominent forehead and an undersized, posteriorly positioned chin. Analysis of the Mahalanobis distance in principal component analysis shape space suggested differences between normal and abnormal fetuses (median ± interquartile range distance values, 7.31 ± 5.54 for the normal group vs 13.27 ± 9.82 for the abnormal group; P = .056). This feasibility study demonstrates that objective characterization and quantification of fetal facial morphology is possible from 3-dimensional ultrasound. This technique has the potential to assist in utero diagnosis, particularly of rare conditions in which facial dysmorphology is a feature. Copyright © 2017 Elsevier Inc. All rights reserved.
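The shape-modeling pipeline above (align meshes, compute the mean shape, extract shape modes by principal component analysis) can be sketched on stand-in data. The random "meshes" and the 90% threshold choice here are illustrative, not the study's fetal face volumes:

```python
import numpy as np

# Minimal statistical shape model in the spirit of the study: each face
# "mesh" is a flattened vector of 3D vertex coordinates; PCA on the
# mean-centred stack yields the mean shape and the principal shape modes.
# The data below are random stand-ins, not fetal face meshes.

rng = np.random.default_rng(0)
n_shapes, n_vertices = 20, 50
shapes = rng.normal(size=(n_shapes, n_vertices * 3))  # rows = meshes

mean_shape = shapes.mean(axis=0)
centred = shapes - mean_shape

# SVD of the centred data gives the shape modes (rows of Vt) and the
# singular values, whose squares give the variance explained per mode.
U, s, Vt = np.linalg.svd(centred, full_matrices=False)
explained = (s ** 2) / np.sum(s ** 2)

# Number of modes needed to cover >90% of total shape variability
n_modes = int(np.searchsorted(np.cumsum(explained), 0.9)) + 1
print(n_modes)
```

A new shape is then described by its coordinates along the retained modes, which is the space in which the study's Mahalanobis distances were computed.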

  15. Pilot study of facial soft tissue thickness differences among three skeletal classes in Japanese females.

    PubMed

    Utsuno, Hajime; Kageyama, Toru; Uchida, Keiichi; Yoshino, Mineo; Oohigashi, Shina; Miyazawa, Hiroo; Inoue, Katsuhiro

    2010-02-25

Facial reconstruction is a technique used in forensic anthropology to estimate the appearance of the antemortem face from unknown human skeletal remains. This requires accurate skull assessment (for variables such as age, sex, and race) and soft tissue thickness data. However, the skull can provide only limited information, and further data are needed to reconstruct the face. The authors herein obtained further information from the skull in order to reconstruct the face more accurately. Skulls can be classified into three facial types on the basis of orthodontic skeletal classes (type I, straight facial profile; type II, convex facial profile; type III, concave facial profile). This concept was applied to facial tissue measurement, and soft tissue depth was compared across the skeletal classes in a Japanese female population. Differences in soft tissue depth between skeletal classes were observed, and this information may enable more accurate reconstruction than sex-specific depth data alone. Copyright © 2009 Elsevier Ireland Ltd. All rights reserved.

  16. Children's Facial Trustworthiness Judgments: Agreement and Relationship with Facial Attractiveness

    PubMed Central

    Ma, Fengling; Xu, Fen; Luo, Xianming

    2016-01-01

This study examined developmental changes in children's abilities to make trustworthiness judgments based on faces and the relationship between a child's perception of trustworthiness and facial attractiveness. One hundred and one 8-, 10-, and 12-year-olds, along with 37 undergraduates, were asked to judge the trustworthiness of 200 faces. Next, they issued facial attractiveness judgments. The results indicated that children made consistent trustworthiness and attractiveness judgments based on facial appearance, but agreement with adults and within-age agreement on facial judgments increased with age. Additionally, agreement levels were higher for judgments made by girls than for those made by boys. Furthermore, the relationship between trustworthiness and attractiveness judgments strengthened with age, and this relationship was closer for girls than for boys. These findings suggest that face-based trait judgment ability develops throughout childhood and that, like adults, children may use facial attractiveness as a heuristic cue that signals a stranger's trustworthiness. PMID:27148111

  17. Putting the face in context: Body expressions impact facial emotion processing in human infants.

    PubMed

    Rajhans, Purva; Jessen, Sarah; Missana, Manuela; Grossmann, Tobias

    2016-06-01

    Body expressions exert strong contextual effects on facial emotion perception in adults. Specifically, conflicting body cues hamper the recognition of emotion from faces, as evident on both the behavioral and neural level. We examined the developmental origins of the neural processes involved in emotion perception across body and face in 8-month-old infants by measuring event-related brain potentials (ERPs). We primed infants with body postures (fearful, happy) that were followed by either congruent or incongruent facial expressions. Our results revealed that body expressions impact facial emotion processing and that incongruent body cues impair the neural discrimination of emotional facial expressions. Priming effects were associated with attentional and recognition memory processes, as reflected in a modulation of the Nc and Pc evoked at anterior electrodes. These findings demonstrate that 8-month-old infants possess neural mechanisms that allow for the integration of emotion across body and face, providing evidence for the early developmental emergence of context-sensitive facial emotion perception. Copyright © 2016 The Authors. Published by Elsevier Ltd.. All rights reserved.

  18. Emotional Faces in Context: Age Differences in Recognition Accuracy and Scanning Patterns

    PubMed Central

    Noh, Soo Rim; Isaacowitz, Derek M.

    2014-01-01

    While age-related declines in facial expression recognition are well documented, previous research relied mostly on isolated faces devoid of context. We investigated the effects of context on age differences in recognition of facial emotions and in visual scanning patterns of emotional faces. While their eye movements were monitored, younger and older participants viewed facial expressions (i.e., anger, disgust) in contexts that were emotionally congruent, incongruent, or neutral to the facial expression to be identified. Both age groups had highest recognition rates of facial expressions in the congruent context, followed by the neutral context, and recognition rates in the incongruent context were worst. These context effects were more pronounced for older adults. Compared to younger adults, older adults exhibited a greater benefit from congruent contextual information, regardless of facial expression. Context also influenced the pattern of visual scanning characteristics of emotional faces in a similar manner across age groups. In addition, older adults initially attended more to context overall. Our data highlight the importance of considering the role of context in understanding emotion recognition in adulthood. PMID:23163713

  19. Real Time 3D Facial Movement Tracking Using a Monocular Camera

    PubMed Central

    Dong, Yanchao; Wang, Yanming; Yue, Jiguang; Hu, Zhencheng

    2016-01-01

    The paper proposes a robust framework for 3D facial movement tracking in real time using a monocular camera. It is designed to estimate the 3D face pose and local facial animation such as eyelid movement and mouth movement. The framework firstly utilizes the Discriminative Shape Regression method to locate the facial feature points on the 2D image and fuses the 2D data with a 3D face model using Extended Kalman Filter to yield 3D facial movement information. An alternating optimizing strategy is adopted to fit to different persons automatically. Experiments show that the proposed framework could track the 3D facial movement across various poses and illumination conditions. Given the real face scale the framework could track the eyelid with an error of 1 mm and mouth with an error of 2 mm. The tracking result is reliable for expression analysis or mental state inference. PMID:27463714

  20. Real Time 3D Facial Movement Tracking Using a Monocular Camera.

    PubMed

    Dong, Yanchao; Wang, Yanming; Yue, Jiguang; Hu, Zhencheng

    2016-07-25

    The paper proposes a robust framework for 3D facial movement tracking in real time using a monocular camera. It is designed to estimate the 3D face pose and local facial animation such as eyelid movement and mouth movement. The framework firstly utilizes the Discriminative Shape Regression method to locate the facial feature points on the 2D image and fuses the 2D data with a 3D face model using Extended Kalman Filter to yield 3D facial movement information. An alternating optimizing strategy is adopted to fit to different persons automatically. Experiments show that the proposed framework could track the 3D facial movement across various poses and illumination conditions. Given the real face scale the framework could track the eyelid with an error of 1 mm and mouth with an error of 2 mm. The tracking result is reliable for expression analysis or mental state inference.
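The framework above fuses 2D landmark measurements with a 3D face model using an Extended Kalman Filter. The paper's actual EKF operates on the full 3D pose and animation state; as a hedged illustration of the underlying fuse step only, here is a scalar Kalman update combining a model prediction with a noisy measurement:

```python
# Scalar Kalman update: fuse a predicted value (with variance p_pred) and a
# noisy measurement z (with variance r). This illustrates the principle of
# the 2D/3D fusion step; it is NOT the paper's Extended Kalman Filter, which
# linearizes a nonlinear projection model over a multidimensional state.

def kalman_update(x_pred, p_pred, z, r):
    """Return the corrected estimate and its reduced variance."""
    k = p_pred / (p_pred + r)          # Kalman gain: weight of the measurement
    x_new = x_pred + k * (z - x_pred)  # corrected state estimate
    p_new = (1 - k) * p_pred           # posterior variance (always <= p_pred)
    return x_new, p_new

# Hypothetical landmark x-coordinate: predicted 100 px (var 4), measured
# 104 px (var 4); equal variances give an even split.
x, p = kalman_update(100.0, 4.0, 104.0, 4.0)
print(x, p)  # -> 102.0 2.0
```

When the measurement is noisier (larger r), the gain shrinks and the estimate stays closer to the model prediction, which is what stabilizes tracking across poor illumination or extreme poses.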

  1. Learning representative features for facial images based on a modified principal component analysis

    NASA Astrophysics Data System (ADS)

    Averkin, Anton; Potapov, Alexey

    2013-05-01

The paper is devoted to facial image analysis and particularly deals with the problem of automatic evaluation of the attractiveness of human faces. We propose a new approach for automatic construction of a feature space based on a modified principal component analysis. Input data for the algorithm are learning sets of facial images rated by one person. The proposed approach allows one to extract features of an individual's subjective perception of facial beauty and to predict attractiveness values for new facial images that were not included in the learning set. The Pearson correlation coefficient between the values predicted by our method for new facial images and the personal attractiveness ratings is 0.89. This indicates that the proposed approach is promising and can be used for predicting subjective face attractiveness values in real facial image analysis systems.
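The reported r = 0.89 is a Pearson correlation between predicted and human-rated attractiveness. A self-contained sketch of that evaluation metric, with made-up scores (the study's own predictions and ratings are not reproduced here):

```python
import math

# Pearson correlation between predicted attractiveness scores and a human
# rater's scores. The two example lists are illustrative values only.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

predicted = [0.2, 0.5, 0.9, 0.4, 0.7]   # hypothetical model outputs
rated     = [0.1, 0.6, 0.8, 0.5, 0.6]   # hypothetical human ratings
r = pearson(predicted, rated)
print(round(r, 2))
```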

  2. Subject independent facial expression recognition with robust face detection using a convolutional neural network.

    PubMed

    Matsugu, Masakazu; Mori, Katsuhiko; Mitari, Yusuke; Kaneda, Yuji

    2003-01-01

Reliable detection of ordinary facial expressions (e.g. smile) despite variability among individuals as well as in face appearance is an important step toward the realization of a perceptual user interface with autonomous perception of persons. We describe a rule-based algorithm for robust facial expression recognition combined with robust face detection using a convolutional neural network. In this study, we address the problem of subject independence as well as translation, rotation, and scale invariance in the recognition of facial expression. The result shows reliable detection of smiles with a recognition rate of 97.6% for 5600 still images of more than 10 subjects. The proposed algorithm demonstrated the ability to discriminate smiling from talking based on the saliency score obtained from voting visual cues. To the best of our knowledge, it is the first facial expression recognition model with the property of subject independence combined with robustness to variability in facial appearance.

  3. Exploring the Role of Spatial Frequency Information during Neural Emotion Processing in Human Infants.

    PubMed

    Jessen, Sarah; Grossmann, Tobias

    2017-01-01

Enhanced attention to fear expressions in adults is primarily driven by information from low as opposed to high spatial frequencies contained in faces. However, little is known about the role of spatial frequency information in emotion processing during infancy. In the present study, we examined the role of low compared to high spatial frequencies in the processing of happy and fearful facial expressions by using filtered face stimuli and measuring event-related brain potentials (ERPs) in 7-month-old infants (N = 26). Our results revealed that infants' brains discriminated between emotional facial expressions containing high but not between expressions containing low spatial frequencies. Specifically, happy faces containing high spatial frequencies elicited a smaller Nc amplitude than fearful faces containing high spatial frequencies and happy and fearful faces containing low spatial frequencies. Our results demonstrate that already in infancy spatial frequency content influences the processing of facial emotions. Furthermore, we observed that fearful facial expressions elicited a comparable Nc response for high and low spatial frequencies, suggesting a robust detection of fearful faces irrespective of spatial frequency content, whereas the detection of happy facial expressions was contingent upon frequency content. In summary, these data provide new insights into the neural processing of facial emotions in early development by highlighting the differential role played by spatial frequencies in the detection of fear and happiness.

  4. Hybrid generative-discriminative approach to age-invariant face recognition

    NASA Astrophysics Data System (ADS)

    Sajid, Muhammad; Shafique, Tamoor

    2018-03-01

Age-invariant face recognition is still a challenging research problem due to the complex aging process involving types of facial tissues, skin, fat, muscles, and bones. Most of the related studies that have addressed the aging problem are focused on generative representation (aging simulation) or discriminative representation (feature-based approaches). Designing an appropriate hybrid approach taking into account both the generative and discriminative representations for age-invariant face recognition remains an open problem. We perform a hybrid matching to achieve robustness to aging variations. This approach automatically segments the eyes, nose-bridge, and mouth regions, which are relatively less sensitive to aging variations compared with the rest of the facial regions that are age-sensitive. The aging variations of age-sensitive facial parts are compensated using a demographic-aware generative model based on a bridged denoising autoencoder. The age-insensitive facial parts are represented by pixel average vector-based local binary patterns. Deep convolutional neural networks are used to extract relative features of age-sensitive and age-insensitive facial parts. Finally, the feature vectors of age-sensitive and age-insensitive facial parts are fused to achieve the recognition results. Extensive experimental results on morphological face database II (MORPH II), face and gesture recognition network (FG-NET), and Verification Subset of cross-age celebrity dataset (CACD-VS) demonstrate the effectiveness of the proposed method for age-invariant face recognition.
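The age-insensitive parts above are encoded with a local-binary-pattern variant. As a sketch of the core LBP idea only (the paper's "pixel average vector-based" variant differs in detail), here is the plain 8-neighbour LBP code for a single pixel:

```python
# Plain 8-neighbour local binary pattern (LBP): each neighbour is thresholded
# against the centre pixel and contributes one bit to an 8-bit code. A face
# region is then typically described by the histogram of these codes.
# The 3x3 patch values below are illustrative.

def lbp_code(img, r, c):
    """8-neighbour LBP code for pixel (r, c), clockwise from top-left."""
    centre = img[r][c]
    neighbours = [img[r-1][c-1], img[r-1][c], img[r-1][c+1],
                  img[r][c+1],   img[r+1][c+1], img[r+1][c],
                  img[r+1][c-1], img[r][c-1]]
    code = 0
    for bit, v in enumerate(neighbours):
        if v >= centre:       # neighbour at least as bright as centre -> 1
            code |= 1 << bit
    return code

patch = [[10, 20, 30],
         [40, 25, 50],
         [60, 70, 80]]
print(lbp_code(patch, 1, 1))  # -> 252
```

Because the code depends only on intensity ordering relative to the centre pixel, it is robust to monotonic illumination changes, which is one reason LBP-style descriptors suit the comparatively stable facial regions.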

  5. Holistic face processing can inhibit recognition of forensic facial composites.

    PubMed

    McIntyre, Alex H; Hancock, Peter J B; Frowd, Charlie D; Langton, Stephen R H

    2016-04-01

Facial composite systems help eyewitnesses to show the appearance of criminals. However, likenesses created by unfamiliar witnesses will not be completely accurate, and people familiar with the target can find them difficult to identify. Faces are processed holistically; we explore whether this impairs identification of inaccurate composite images and whether recognition can be improved. In Experiment 1 (n = 64) an imaging technique was used to make composites of celebrity faces more accurate and identification was contrasted with the original composite images. Corrected composites were better recognized, confirming that errors in production of the likenesses impair identification. The influence of holistic face processing was explored by misaligning the top and bottom parts of the composites (cf. Young, Hellawell, & Hay, 1987). Misalignment impaired recognition of corrected composites but identification of the original, inaccurate composites significantly improved. This effect was replicated with facial composites of noncelebrities in Experiment 2 (n = 57). We conclude that, like real faces, facial composites are processed holistically: recognition is impaired because unlike real faces, composites contain inaccuracies and holistic face processing makes it difficult to perceive identifiable features. This effect was consistent across composites of celebrities and composites of people who are personally familiar. Our findings suggest that identification of forensic facial composites can be enhanced by presenting composites in a misaligned format. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
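The misalignment manipulation (cf. Young et al., 1987) offsets the bottom half of a face image horizontally relative to the top half, disrupting holistic processing. A toy version on an image represented as a list of pixel rows; the offset amount and zero padding are illustrative choices:

```python
# Misalign a face image by shifting its bottom half right by `offset`
# pixels relative to the top half. The image is a list of rows; rows are
# padded with `pad` so the result stays rectangular.

def misalign(image, offset, pad=0):
    """Return a copy of `image` with the bottom half shifted right."""
    half = len(image) // 2
    top = [row + [pad] * offset for row in image[:half]]      # pad right
    bottom = [[pad] * offset + row for row in image[half:]]   # pad left
    return top + bottom

face = [[1, 2, 3],
        [4, 5, 6],
        [7, 8, 9],
        [0, 1, 2]]
print(misalign(face, 2))
# -> [[1, 2, 3, 0, 0], [4, 5, 6, 0, 0], [0, 0, 7, 8, 9], [0, 0, 0, 1, 2]]
```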

  6. Modelling the perceptual similarity of facial expressions from image statistics and neural responses.

    PubMed

    Sormaz, Mladen; Watson, David M; Smith, William A P; Young, Andrew W; Andrews, Timothy J

    2016-04-01

    The ability to perceive facial expressions of emotion is essential for effective social communication. We investigated how the perception of facial expression emerges from the image properties that convey this important social signal, and how neural responses in face-selective brain regions might track these properties. To do this, we measured the perceptual similarity between expressions of basic emotions, and investigated how this is reflected in image measures and in the neural response of different face-selective regions. We show that the perceptual similarity of different facial expressions (fear, anger, disgust, sadness, happiness) can be predicted by both surface and feature shape information in the image. Using block design fMRI, we found that the perceptual similarity of expressions could also be predicted from the patterns of neural response in the face-selective posterior superior temporal sulcus (STS), but not in the fusiform face area (FFA). These results show that the perception of facial expression is dependent on the shape and surface properties of the image and on the activity of specific face-selective regions. Copyright © 2016 Elsevier Inc. All rights reserved.

  7. Effects of dynamic information in recognising facial expressions on dimensional and categorical judgments.

    PubMed

    Fujimura, Tomomi; Suzuki, Naoto

    2010-01-01

We investigated the effects of dynamic information on decoding facial expressions. A dynamic face entailed a change from a neutral to a full-blown expression, whereas a static face included only the full-blown expression. Sixty-eight participants were divided into two groups, the dynamic condition and the static condition. The facial stimuli expressed eight kinds of emotions (excited, happy, calm, sleepy, sad, angry, fearful, and surprised) according to a dimensional perspective. Participants evaluated each facial stimulus using two methods, the Affect Grid (Russell et al., 1989, Journal of Personality and Social Psychology 57, 493-502) and the forced-choice task, allowing for dimensional and categorical judgment interpretations. For activation ratings in dimensional judgments, the results indicated that dynamic calm faces, which are low-activation expressions, were rated as less activated than static faces. For categorical judgments, dynamic excited, happy, and fearful faces, which are high- and middle-activation expressions, had higher ratings than did those under the static condition. These results suggest that the beneficial effect of dynamic information depends on the emotional properties of facial expressions.

  8. Facial color is an efficient mechanism to visually transmit emotion

    PubMed Central

    Benitez-Quiroz, Carlos F.; Srinivasan, Ramprakash

    2018-01-01

    Facial expressions of emotion in humans are believed to be produced by contracting one’s facial muscles, generally called action units. However, the surface of the face is also innervated with a large network of blood vessels. Blood flow variations in these vessels yield visible color changes on the face. Here, we study the hypothesis that these visible facial colors allow observers to successfully transmit and visually interpret emotion even in the absence of facial muscle activation. To study this hypothesis, we address the following two questions. Are observable facial colors consistent within and differential between emotion categories and positive vs. negative valence? And does the human visual system use these facial colors to decode emotion from faces? These questions suggest the existence of an important, unexplored mechanism of the production of facial expressions of emotion by a sender and their visual interpretation by an observer. The results of our studies provide evidence in favor of our hypothesis. We show that people successfully decode emotion using these color features, even in the absence of any facial muscle activation. We also demonstrate that this color signal is independent from that provided by facial muscle movements. These results support a revised model of the production and perception of facial expressions of emotion where facial color is an effective mechanism to visually transmit and decode emotion. PMID:29555780

  9. Facial color is an efficient mechanism to visually transmit emotion.

    PubMed

    Benitez-Quiroz, Carlos F; Srinivasan, Ramprakash; Martinez, Aleix M

    2018-04-03

    Facial expressions of emotion in humans are believed to be produced by contracting one's facial muscles, generally called action units. However, the surface of the face is also innervated with a large network of blood vessels. Blood flow variations in these vessels yield visible color changes on the face. Here, we study the hypothesis that these visible facial colors allow observers to successfully transmit and visually interpret emotion even in the absence of facial muscle activation. To study this hypothesis, we address the following two questions. Are observable facial colors consistent within and differential between emotion categories and positive vs. negative valence? And does the human visual system use these facial colors to decode emotion from faces? These questions suggest the existence of an important, unexplored mechanism of the production of facial expressions of emotion by a sender and their visual interpretation by an observer. The results of our studies provide evidence in favor of our hypothesis. We show that people successfully decode emotion using these color features, even in the absence of any facial muscle activation. We also demonstrate that this color signal is independent from that provided by facial muscle movements. These results support a revised model of the production and perception of facial expressions of emotion where facial color is an effective mechanism to visually transmit and decode emotion. Copyright © 2018 the Author(s). Published by PNAS.

  10. The review and results of different methods for facial recognition

    NASA Astrophysics Data System (ADS)

    Le, Yifan

    2017-09-01

In recent years, facial recognition has drawn much attention due to its wide potential applications. As a unique technology in biometric identification, facial recognition represents a significant improvement since it can be operated without the cooperation of the people under detection. Hence, facial recognition can be applied to defense systems, medical detection, human behavior understanding, etc. Several theories and methods have been established to make progress in facial recognition: (1) A novel two-stage facial landmark localization method is proposed which achieves more accurate localization on specific databases; (2) A statistical face frontalization method is proposed which outperforms state-of-the-art methods for face landmark localization; (3) A general facial landmark detection algorithm is proposed to handle images with severe occlusion and images with large head poses; (4) Three methods are proposed for face alignment, including a shape-augmented regression method, a pose-indexed multi-view method, and a learning-based method that regresses local binary features. The aim of this paper is to analyze previous work on different aspects of facial recognition, focusing on concrete methods and performance on various databases. In addition, some improvement measures and suggestions for potential applications are put forward.

  11. Personality judgments from everyday images of faces

    PubMed Central

    Sutherland, Clare A. M.; Rowley, Lauren E.; Amoaku, Unity T.; Daguzan, Ella; Kidd-Rossiter, Kate A.; Maceviciute, Ugne; Young, Andrew W.

    2015-01-01

    People readily make personality attributions to images of strangers' faces. Here we investigated the basis of these personality attributions as made to everyday, naturalistic face images. In a first study, we used 1000 highly varying “ambient image” face photographs to test the correspondence between personality judgments of the Big Five and dimensions known to underlie a range of facial first impressions: approachability, dominance, and youthful-attractiveness. Interestingly, the facial Big Five judgments were found to separate to some extent: judgments of openness, extraversion, emotional stability, and agreeableness were mainly linked to facial first impressions of approachability, whereas conscientiousness judgments involved a combination of approachability and dominance. In a second study we used average face images to investigate which main cues are used by perceivers to make impressions of the Big Five, by extracting consistent cues to impressions from the large variation in the original images. When forming impressions of strangers from highly varying, naturalistic face photographs, perceivers mainly seem to rely on broad facial cues to approachability, such as smiling. PMID:26579008

  12. Emotion unfolded by motion: a role for parietal lobe in decoding dynamic facial expressions.

    PubMed

    Sarkheil, Pegah; Goebel, Rainer; Schneider, Frank; Mathiak, Klaus

    2013-12-01

Facial expressions convey important emotional and social information and are frequently applied in investigations of human affective processing. Dynamic faces may provide higher ecological validity to examine perceptual and cognitive processing of facial expressions. Higher order processing of emotional faces was addressed by varying the task and virtual face models systematically. Blood oxygenation level-dependent activation was assessed using functional magnetic resonance imaging in 20 healthy volunteers while viewing and evaluating either emotion or gender intensity of dynamic face stimuli. A general linear model analysis revealed that high valence activated a network of motion-responsive areas, indicating that visual motion areas support perceptual coding for the motion-based intensity of facial expressions. Comparing the emotion discrimination task with the gender discrimination task revealed increased activation of the inferior parietal lobule, which highlights the involvement of parietal areas in processing high-level features of faces. Dynamic emotional stimuli may help to emphasize functions of the hypothesized 'extended' over the 'core' system for face processing.

  13. Why do fearful facial expressions elicit behavioral approach? Evidence from a combined approach-avoidance implicit association test.

    PubMed

    Hammer, Jennifer L; Marsh, Abigail A

    2015-04-01

    Despite communicating a "negative" emotion, fearful facial expressions predominantly elicit behavioral approach from perceivers. It has been hypothesized that this seemingly paradoxical effect may occur due to fearful expressions' resemblance to vulnerable, infantile faces. However, this hypothesis has not yet been tested. We used a combined approach-avoidance/implicit association test (IAT) to test this hypothesis. Participants completed an approach-avoidance lever task during which they responded to fearful and angry facial expressions as well as neutral infant and adult faces presented in an IAT format. Results demonstrated an implicit association between fearful facial expressions and infant faces and showed that both fearful expressions and infant faces primarily elicit behavioral approach. The dominance of approach responses to both fearful expressions and infant faces decreased as a function of psychopathic personality traits. Results suggest that the prosocial responses to fearful expressions observed in most individuals may stem from their associations with infantile faces. (PsycINFO Database Record (c) 2015 APA, all rights reserved).

  14. Gender identity rather than sexual orientation impacts on facial preferences.

    PubMed

    Ciocca, Giacomo; Limoncin, Erika; Cellerino, Alessandro; Fisher, Alessandra D; Gravina, Giovanni Luca; Carosa, Eleonora; Mollaioli, Daniele; Valenzano, Dario R; Mennucci, Andrea; Bandini, Elisa; Di Stasi, Savino M; Maggi, Mario; Lenzi, Andrea; Jannini, Emmanuele A

    2014-10-01

Differences in facial preferences between heterosexual men and women are well documented. It is still a matter of debate, however, how variations in sexual identity/sexual orientation may modify facial preferences. This study aims to investigate the facial preferences of male-to-female (MtF) individuals with gender dysphoria (GD) and the influence of short-term/long-term relationships on facial preference, in comparison with healthy subjects. Eighteen untreated MtF subjects, 30 heterosexual males, 64 heterosexual females, and 42 homosexual males, recruited from university students/staff, at gay events, and in gender clinics, were shown a composite male or female face. The sexual dimorphism of these pictures was stressed or reduced in a continuous fashion with a sequence of 21 pictures of the same face warped from a feminized to a masculinized shape. The morphing was performed with gtkmorph, an open-source program based on the X-Morph algorithm. MtF GD subjects and heterosexual females showed the same pattern of preferences: a clear preference for less dimorphic (more feminized) faces for both short- and long-term relationships. Conversely, both heterosexual and homosexual men selected significantly more dimorphic faces, showing a preference for hyperfeminized and hypermasculinized faces, respectively. These data show that the facial preferences of MtF GD individuals mirror those of the sex congruent with their gender identity. Conversely, heterosexual males trace the facial preferences of homosexual men, indicating that changes in sexual orientation do not substantially affect preference for the most attractive faces. © 2014 International Society for Sexual Medicine.

  15. Fixation Patterns of Chinese Participants while Identifying Facial Expressions on Chinese Faces

    PubMed Central

    Xia, Mu; Li, Xueliu; Zhong, Haiqing; Li, Hong

    2017-01-01

Two experiments in this study were designed to explore the fixation patterns of Chinese participants viewing four types of native facial expressions (happy, peaceful, sad, and angry). In both experiments, participants performed an emotion recognition task while their behaviors and eye movements were recorded. Experiment 1 (24 participants, 12 men) demonstrated that both eye fixations and durations were lower for the upper part of the face than for the lower part of the face for all four types of facial expression. Experiment 2 (20 participants, 6 men) replicated this finding and ruled out interference from the initial fixation point. These results indicate that Chinese participants demonstrated a superiority effect for the lower part of the face while interpreting facial expressions, possibly due to the influence of eastern etiquette culture. PMID:28446896

  16. The Right Place at the Right Time: Priming Facial Expressions with Emotional Face Components in Developmental Visual Agnosia

    PubMed Central

    Aviezer, Hillel; Hassin, Ran R.; Perry, Anat; Dudarev, Veronica; Bentin, Shlomo

    2012-01-01

    The current study examined the nature of deficits in emotion recognition from facial expressions in case LG, an individual with a rare form of developmental visual agnosia (DVA). LG presents with profoundly impaired recognition of facial expressions, yet the underlying nature of his deficit remains unknown. During typical face processing, normal sighted individuals extract information about expressed emotions from face regions with activity diagnostic for specific emotion categories. Given LG’s impairment, we sought to shed light on his emotion perception by examining if priming facial expressions with diagnostic emotional face components would facilitate his recognition of the emotion expressed by the face. LG and control participants matched isolated face components with components appearing in a subsequently presented full-face and then categorized the face’s emotion. Critically, the matched components were from regions which were diagnostic or non-diagnostic of the emotion portrayed by the full face. In experiment 1, when the full faces were briefly presented (150 ms), LG’s performance was strongly influenced by the diagnosticity of the components: His emotion recognition was boosted within normal limits when diagnostic components were used and was obliterated when non-diagnostic components were used. By contrast, in experiment 2, when the face-exposure duration was extended (2000 ms), the beneficial effect of the diagnostic matching was diminished as was the detrimental effect of the non-diagnostic matching. These data highlight the impact of diagnostic facial features in normal expression recognition and suggest that impaired emotion recognition in DVA results from deficient visual integration across diagnostic face components. PMID:22349446

  17. Electrophysiological correlates of facial decision: insights from upright and upside-down Mooney-face perception.

    PubMed

    George, Nathalie; Jemel, Boutheina; Fiori, Nicole; Chaby, Laurence; Renault, Bernard

    2005-08-01

    We investigated the ERP correlates of the subjective perception of upright and upside-down ambiguous pictures as faces, using two-tone Mooney stimuli in an explicit facial decision task (deciding whether or not a face is perceived in the display). The difficulty of perceiving upside-down Mooneys as faces was reflected by both lower rates of "Face" responses and delayed "Face" reaction times for upside-down relative to upright stimuli. The N170 was larger for the stimuli reported as "faces". It was also larger for upright than for upside-down stimuli, but only when they were reported as faces. Furthermore, both facial decision and stimulus orientation effects spread from 140-190 ms to 390-440 ms. The behavioural delay in "Face" responses to upside-down stimuli was reflected in ERPs by a later effect of facial decision for upside-down relative to upright Mooneys over occipito-temporal electrodes. Moreover, an orientation effect was observed only for the stimuli reported as faces; it yielded a marked hemispheric asymmetry, lasting from 140-190 ms to 390-440 ms post-stimulus onset in the left hemisphere but only from 340-390 to 390-440 ms in the right hemisphere. Taken together, the results support a preferential involvement of the right hemisphere in the detection of faces, whatever their orientation. By contrast, the early orientation effect in the left hemisphere suggests that upside-down Mooney stimuli were processed as non-face objects until a facial decision was reached in this hemisphere. The present data show that face perception involves not only spatially but also temporally distributed activities in occipito-temporal regions.

  18. Non-rigid, but not rigid, motion interferes with the processing of structural face information in developmental prosopagnosia.

    PubMed

    Maguinness, Corrina; Newell, Fiona N

    2015-04-01

    There is growing evidence to suggest that facial motion is an important cue for face recognition. However, it is poorly understood whether motion is integrated with facial form information or whether it provides an independent cue to identity. To provide further insight into this issue, we compared the effect of motion on face perception in two developmental prosopagnosics and age-matched controls. Participants first learned faces presented dynamically (video), or in a sequence of static images, in which rigid (viewpoint) or non-rigid (expression) changes occurred. Immediately following learning, participants were required to match a static face image to the learned face. Test face images varied by viewpoint (Experiment 1) or expression (Experiment 2) and were learned or novel face images. We found similar performance across prosopagnosics and controls in matching facial identity across changes in viewpoint when the learned face was shown moving in a rigid manner. However, non-rigid motion interfered with face matching across changes in expression in both individuals with prosopagnosia relative to control participants. In contrast, non-rigid motion did not differentially affect the matching of facial expressions across changes in identity for either prosopagnosics or controls (Experiment 3). Our results suggest that whilst the processing of rigid motion information from a face may be preserved in developmental prosopagnosia, non-rigid motion can specifically interfere with the representation of structural face information. Taken together, these results suggest that both form and motion cues are important in face perception and that these cues are likely integrated in the representation of facial identity. Copyright © 2015 Elsevier Ltd. All rights reserved.

  19. Residual fMRI sensitivity for identity changes in acquired prosopagnosia.

    PubMed

    Fox, Christopher J; Iaria, Giuseppe; Duchaine, Bradley C; Barton, Jason J S

    2013-01-01

    While a network of cortical regions contribute to face processing, the lesions in acquired prosopagnosia are highly variable, and likely result in different combinations of spared and affected regions of this network. To assess the residual functional sensitivities of spared regions in prosopagnosia, we designed a rapid event-related functional magnetic resonance imaging (fMRI) experiment that included pairs of faces with same or different identities and same or different expressions. By measuring the release from adaptation to these facial changes we determined the residual sensitivity of face-selective regions-of-interest. We tested three patients with acquired prosopagnosia, and all three of these patients demonstrated residual sensitivity for facial identity changes in surviving fusiform and occipital face areas of either the right or left hemisphere, but not in the right posterior superior temporal sulcus. The patients also showed some residual capabilities for facial discrimination with normal performance on the Benton Facial Recognition Test, but impaired performance on more complex tasks of facial discrimination. We conclude that fMRI can demonstrate residual processing of facial identity in acquired prosopagnosia, that this adaptation can occur in the same structures that show similar processing in healthy subjects, and further, that this adaptation may be related to behavioral indices of face perception.

  1. A longitudinal study of facial growth of Southern Chinese in Hong Kong: Comprehensive photogrammetric analyses

    PubMed Central

    Wen, Yi Feng; McGrath, Colman Patrick

    2017-01-01

    Introduction: Existing studies on facial growth were mostly cross-sectional in nature, and only a limited number of facial measurements have been investigated. The purposes of this study were to longitudinally investigate the facial growth of Chinese in Hong Kong from 12 through 15 to 18 years of age and to compare the magnitude of growth changes between genders. Methods and findings: Standardized frontal and lateral facial photographs were taken from 266 (149 females and 117 males) and 265 (145 females and 120 males) participants, respectively, at all three age levels. Linear and angular measurements, profile inclinations, and proportion indices were recorded. Statistical analyses were performed to investigate growth changes of facial features. Comparisons were made between genders in terms of the magnitude of growth changes from ages 12 to 15, 15 to 18, and 12 to 18 years. For the overall face, all linear measurements increased significantly (p < 0.05) except for height of the lower profile in females (p = 0.069) and width of the face in males (p = 0.648). In both genders, the increase in height of the eye fissure was around 10% (p < 0.001). There was a significant decrease in nasofrontal angle (p < 0.001) and increase in nasofacial angle (p < 0.001) in both genders, and these changes were larger in males. The vermilion-total upper lip height index remained stable in females (p = 0.770) but increased in males (p = 0.020). The nasofrontal angle (effect size: 0.55) and lower vermilion contour index (effect size: 0.59) demonstrated a large gender difference in the amount of growth change from 12 to 18 years. Conclusions: Growth changes of facial features and gender differences in the magnitude of facial growth were determined. The findings may benefit different clinical specialties and other nonclinical fields where facial growth is of interest. PMID:29053713
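    The effect sizes reported above for gender differences can be expressed as Cohen's d with a pooled standard deviation; the study does not specify its exact formula, so the variant and data below are illustrative assumptions:

    ```python
    import math

    def cohens_d(group_a, group_b):
        """Cohen's d with a pooled standard deviation, one common way to
        express the magnitude of a between-group difference."""
        n_a, n_b = len(group_a), len(group_b)
        mean_a = sum(group_a) / n_a
        mean_b = sum(group_b) / n_b
        var_a = sum((x - mean_a) ** 2 for x in group_a) / (n_a - 1)
        var_b = sum((x - mean_b) ** 2 for x in group_b) / (n_b - 1)
        pooled_sd = math.sqrt(((n_a - 1) * var_a + (n_b - 1) * var_b)
                              / (n_a + n_b - 2))
        return (mean_a - mean_b) / pooled_sd

    # Hypothetical growth changes (mm) from 12 to 18 years in two groups.
    males = [4.0, 5.0, 6.0, 5.5, 4.5]
    females = [3.0, 3.5, 4.0, 2.5, 3.0]
    d = cohens_d(males, females)
    ```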

  2. Functional integration of the posterior superior temporal sulcus correlates with facial expression recognition.

    PubMed

    Wang, Xu; Song, Yiying; Zhen, Zonglei; Liu, Jia

    2016-05-01

    Face perception is essential for daily and social activities. Neuroimaging studies have revealed a distributed face network (FN) consisting of multiple regions that exhibit preferential responses to invariant or changeable facial information. However, our understanding about how these regions work collaboratively to facilitate facial information processing is limited. Here, we focused on changeable facial information processing, and investigated how the functional integration of the FN is related to the performance of facial expression recognition. To do so, we first defined the FN as voxels that responded more strongly to faces than objects, and then used a voxel-based global brain connectivity method based on resting-state fMRI to characterize the within-network connectivity (WNC) of each voxel in the FN. By relating the WNC and performance in the "Reading the Mind in the Eyes" Test across participants, we found that individuals with stronger WNC in the right posterior superior temporal sulcus (rpSTS) were better at recognizing facial expressions. Further, the resting-state functional connectivity (FC) between the rpSTS and right occipital face area (rOFA), early visual cortex (EVC), and bilateral STS were positively correlated with the ability of facial expression recognition, and the FCs of EVC-pSTS and OFA-pSTS contributed independently to facial expression recognition. In short, our study highlights the behavioral significance of intrinsic functional integration of the FN in facial expression processing, and provides evidence for the hub-like role of the rpSTS for facial expression recognition. Hum Brain Mapp 37:1930-1940, 2016. © 2016 Wiley Periodicals, Inc.
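    The within-network connectivity (WNC) measure described above assigns each voxel the average strength of its connections to all other voxels in the face network. A minimal sketch using Pearson correlation on simulated time series (preprocessing details are omitted; all data here are invented):

    ```python
    import numpy as np

    def within_network_connectivity(timeseries):
        """Mean Pearson correlation of each voxel with every other voxel.

        timeseries: (n_voxels, n_timepoints) array.
        Returns an (n_voxels,) array of WNC values.
        """
        corr = np.corrcoef(timeseries)   # full voxel-by-voxel matrix
        np.fill_diagonal(corr, np.nan)   # exclude self-correlation
        return np.nanmean(corr, axis=1)

    # Two voxels share a common signal (a tightly coupled "hub" pair);
    # a third voxel is independent noise.
    rng = np.random.default_rng(0)
    shared = rng.standard_normal(100)
    ts = np.vstack([
        shared + 0.1 * rng.standard_normal(100),
        shared + 0.1 * rng.standard_normal(100),
        rng.standard_normal(100),
    ])
    wnc = within_network_connectivity(ts)
    ```

    Voxels that carry the shared signal end up with higher WNC than the independent voxel, which is the hub-like property the study relates to behaviour.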

  3. Photogrammetric Analysis of Attractiveness in Indian Faces

    PubMed Central

    Duggal, Shveta; Kapoor, DN; Verma, Santosh; Sagar, Mahesh; Lee, Yung-Seop; Moon, Hyoungjin

    2016-01-01

    Background: The objective of this study was to assess the attractive facial features of the Indian population. We evaluated subjective ratings of facial attractiveness and identified which facial aesthetic subunits were important for facial attractiveness. Methods: A cross-sectional study was conducted of 150 samples (referred to as candidates). Frontal photographs were analyzed. An orthodontist, a prosthodontist, an oral surgeon, a dentist, an artist, a photographer, and two laymen (estimators) subjectively evaluated candidates' faces using visual analog scale (VAS) scores. As an objective method for facial analysis, we used balanced angular proportional analysis (BAPA). Using SAS 10.1 (SAS Institute Inc.), Tukey's studentized range test and Pearson correlation analysis were performed to detect between-group differences in VAS scores (Experiment 1), to identify correlations between VAS scores and BAPA scores (Experiment 2), and to analyze the characteristic features of facial attractiveness and gender differences (Experiment 3); the significance level was set at P = 0.05. Results: Experiment 1 revealed some differences in VAS scores according to professional characteristics. In Experiment 2, BAPA scores were found to behave similarly to subjective ratings of facial beauty, but showed a relatively weak correlation coefficient with the VAS scores. Experiment 3 found that the decisive factors for facial attractiveness differed for men and women. Composite images of attractive Indian male and female faces were constructed. Conclusions: Our photogrammetric study, statistical analysis, and average composite faces of an Indian population provide valuable information about subjective perceptions of facial beauty and attractive facial structures in the Indian population. PMID:27019809
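    The VAS-BAPA comparison above rests on the Pearson correlation coefficient. A plain implementation with invented ratings (the real study's scores are not reproduced here):

    ```python
    import math

    def pearson_r(xs, ys):
        """Pearson correlation coefficient between two equal-length samples."""
        n = len(xs)
        mean_x = sum(xs) / n
        mean_y = sum(ys) / n
        cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
        sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
        sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
        return cov / (sd_x * sd_y)

    # Hypothetical per-candidate scores: VAS (subjective) vs BAPA (objective).
    vas = [3.1, 4.2, 2.8, 3.9, 4.6, 2.5]
    bapa = [55, 68, 52, 60, 66, 50]
    r = pearson_r(vas, bapa)
    ```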

  4. Is empathy necessary to comprehend the emotional faces? The empathic effect on attentional mechanisms (eye movements), cortical correlates (N200 event-related potentials) and facial behaviour (electromyography) in face processing.

    PubMed

    Balconi, Michela; Canavesio, Ylenia

    2016-01-01

    The present research explored the effect of social empathy on the processing of emotional facial expressions. Previous evidence suggested a close relationship between emotional empathy and both the ability to detect facial emotions and the attentional mechanisms involved. A multi-measure approach was adopted: we investigated the association between trait empathy (Balanced Emotional Empathy Scale) and individuals' performance (response times; RTs), attentional mechanisms (eye movements; number and duration of fixations), correlates of cortical activation (event-related potential (ERP) N200 component), and facial responsiveness (facial zygomatic and corrugator activity). Trait empathy was found to affect face detection performance (reduced RTs), attentional processes (more scanning eye movements in specific areas of interest), the ERP salience effect (increased N200 amplitude), and electromyographic activity (more facial responses). A second important result was the demonstration of strong, direct correlations among these measures. We suggest that empathy may function as a social facilitator of the processes underlying the detection of facial emotion, and a general "facial response effect" is proposed to explain these results. We assume that empathy influences both cognitive processing and facial responsiveness, such that empathic individuals are more skilful in processing facial emotion.

  5. Recognizing Facial Slivers.

    PubMed

    Gilad-Gutnick, Sharon; Harmatz, Elia Samuel; Tsourides, Kleovoulos; Yovel, Galit; Sinha, Pawan

    2018-07-01

    We report here an unexpectedly robust ability of healthy human individuals (n = 40) to recognize extremely distorted needle-like facial images, challenging the well-entrenched notion that veridical spatial configuration is necessary for extracting facial identity. In face identification tasks with parametrically compressed internal and external features, we found that the sum of performances on each cue falls significantly short of performance on full faces, despite the equal visual information available from both measures (with full faces essentially being a superposition of internal and external features). We hypothesize that this large deficit stems from the use of positional information about how the internal features are positioned relative to the external features. To test this, we systematically changed the relations between internal and external features and found preferential encoding of vertical but not horizontal spatial relationships in facial representations (n = 20). Finally, we employed magnetoencephalography imaging (n = 20) to demonstrate a close mapping between the behavioral psychometric curve and the amplitude of the M250 face-familiarity evoked response field component, but not the M170 face-sensitive component, providing evidence that the M250 can be modulated by faces that are perceptually identifiable, irrespective of extreme distortions to the face's veridical configuration. We theorize that the tolerance to compressive distortions has evolved from the need to recognize faces across varying viewpoints. Our findings help clarify the important, but poorly defined, concept of facial configuration and also enable an association between behavioral performance and previously reported neural correlates of face perception.

  6. The Facial Appearance of CEOs: Faces Signal Selection but Not Performance

    PubMed Central

    Garretsen, Harry; Spreeuwers, Luuk J.

    2016-01-01

    Research overwhelmingly shows that facial appearance predicts leader selection. However, the evidence on the relevance of faces for actual leader ability and consequently performance is inconclusive. By using a state-of-the-art, objective measure for face recognition, we test the predictive value of CEOs’ faces for firm performance in a large sample of faces. We first compare the faces of Fortune500 CEOs with those of US citizens and professors. We find clear confirmation that CEOs do look different when compared to citizens or professors, replicating the finding that faces matter for selection. More importantly, we also find that faces of CEOs of top performing firms do not differ from other CEOs. Based on our advanced face recognition method, our results suggest that facial appearance matters for leader selection but that it does not do so for leader performance. PMID:27462986

  7. Dissociable roles of internal feelings and face recognition ability in facial expression decoding.

    PubMed

    Zhang, Lin; Song, Yiying; Liu, Ling; Liu, Jia

    2016-05-15

    The problem of emotion recognition has been tackled by researchers in both affective computing and cognitive neuroscience. While affective computing relies on analyzing visual features from facial expressions, it has been proposed that humans recognize emotions by internally simulating the emotional states conveyed by others' expressions, in addition to perceptual analysis of facial features. Here we investigated whether and how our internal feelings contributed to the ability to decode facial expressions. In two independent large samples of participants, we observed that individuals who generally experienced richer internal feelings exhibited a higher ability to decode facial expressions, and the contribution of internal feelings was independent of face recognition ability. Further, using voxel-based morphometry, we found that the gray matter volume (GMV) of bilateral superior temporal sulcus (STS) and the right inferior parietal lobule was associated with facial expression decoding through the mediating effect of internal feelings, while the GMV of bilateral STS, precuneus, and the right central opercular cortex contributed to facial expression decoding through the mediating effect of face recognition ability. In addition, the clusters in bilateral STS involved in the two components were neighboring yet separate. Our results may provide clues about the mechanism by which internal feelings, in addition to face recognition ability, serve as an important instrument for humans in facial expression decoding. Copyright © 2016 Elsevier Inc. All rights reserved.

  8. Simple solution for difficult face mask ventilation in children with orofacial clefts.

    PubMed

    Veerabathula, Prardhana; Patil, Manajeet; Upputuri, Omkar; Durga, Padmaja

    2014-10-01

    Significant air leak from the facial cleft predisposes to difficult mask ventilation. The reported techniques of use of sterile gauze, larger face mask and laryngeal mask airway after intravenous induction have limited application in uncooperative children. We describe the use of dental impression material molded to the facial contour to cover the facial defect and aid ventilation with an appropriate size face mask in a child with a bilateral Tessier 3 anomaly. © 2014 John Wiley & Sons Ltd.

  9. Beauty is in the ease of the beholding: A neurophysiological test of the averageness theory of facial attractiveness

    PubMed Central

    Trujillo, Logan T.; Jankowitsch, Jessica M.; Langlois, Judith H.

    2014-01-01

    Multiple studies show that people prefer attractive over unattractive faces. But what is an attractive face and why is it preferred? Averageness theory claims that faces are perceived as attractive when their facial configuration approximates the mathematical average facial configuration of the population. Conversely, faces that deviate from this average configuration are perceived as unattractive. The theory predicts that both attractive and mathematically averaged faces should be processed more fluently than unattractive faces, whereas the averaged faces should be processed marginally more fluently than the attractive faces. We compared neurocognitive and behavioral responses to attractive, unattractive, and averaged human faces to test these predictions. We recorded event-related potentials (ERPs) and reaction times (RTs) from 48 adults while they discriminated between human and chimpanzee faces. Participants categorized averaged and high attractive faces as “human” faster than low attractive faces. The posterior N170 (150 – 225 ms) face-evoked ERP component was smaller in response to high attractive and averaged faces versus low attractive faces. Single-trial EEG analysis indicated that this reduced ERP response arose from the engagement of fewer neural resources and not from a change in the temporal consistency of how those resources were engaged. These findings provide novel evidence that faces are perceived as attractive when they approximate a facial configuration close to the population average and suggest that processing fluency underlies preferences for attractive faces. PMID:24326966
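    The N170 measurements above are mean amplitudes within a fixed latency window (150-225 ms in this study). A minimal sketch of that computation on a simulated epoch (sampling rate and waveform are hypothetical):

    ```python
    import numpy as np

    def mean_amplitude(erp, times, t_start, t_end):
        """Mean ERP amplitude within a latency window, e.g. the
        150-225 ms window used for the posterior N170."""
        mask = (times >= t_start) & (times <= t_end)
        return float(erp[mask].mean())

    # Simulated epoch: 1 ms sampling from -100 to 499 ms post-stimulus,
    # with a flat -6 µV deflection placed inside the N170 window.
    times = np.arange(-100, 500)
    erp = np.zeros(times.size)
    erp[(times >= 150) & (times <= 225)] = -6.0
    n170 = mean_amplitude(erp, times, 150, 225)
    ```

    Comparing such window means across conditions (high attractive, low attractive, averaged) is what yields the "smaller N170" result reported above.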

  10. Improving the Quality of Facial Composites Using a Holistic Cognitive Interview

    ERIC Educational Resources Information Center

    Frowd, Charlie D.; Bruce, Vicki; Smith, Ashley J.; Hancock, Peter J. B.

    2008-01-01

    Witnesses to and victims of serious crime are normally asked to describe the appearance of a criminal suspect, using a Cognitive Interview (CI), and to construct a facial composite, a visual representation of the face. Research suggests that focusing on the global aspects of a face, as opposed to its facial features, facilitates recognition and…

  11. Do Dynamic Facial Expressions Convey Emotions to Children Better than Do Static Ones?

    ERIC Educational Resources Information Center

    Widen, Sherri C.; Russell, James A.

    2015-01-01

    Past research has shown that children recognize emotions from facial expressions poorly and improve only gradually with age, but the stimuli in such studies have been static faces. Because dynamic faces include more information, it may well be that children more readily recognize emotions from dynamic facial expressions. The current study of…

  12. Static and Dynamic Facial Cues Differentially Affect the Consistency of Social Evaluations.

    PubMed

    Hehman, Eric; Flake, Jessica K; Freeman, Jonathan B

    2015-08-01

    Individuals are quite sensitive to others' appearance cues when forming social evaluations. Cues such as facial emotional resemblance are based on facial musculature and thus dynamic. Cues such as a face's structure are based on the underlying bone and are thus relatively static. The current research examines the distinction between these types of facial cues by investigating the consistency in social evaluations arising from dynamic versus static cues. Specifically, across four studies using real faces, digitally generated faces, and downstream behavioral decisions, we demonstrate that social evaluations based on dynamic cues, such as intentions, have greater variability across multiple presentations of the same identity than do social evaluations based on static cues, such as ability. Thus, although evaluations of intentions vary considerably across different instances of a target's face, evaluations of ability are relatively fixed. The findings highlight the role of facial cues' consistency in the stability of social evaluations. © 2015 by the Society for Personality and Social Psychology, Inc.

  13. A Comparison of the Local Flap and Skin Graft by Location of Face in Reconstruction after Resection of Facial Skin Cancer.

    PubMed

    Lee, Kyung Suk; Kim, Jun Oh; Kim, Nam Gyun; Lee, Yoon Jung; Park, Young Ji; Kim, Jun Sik

    2017-12-01

    Reconstruction of defects after resection of facial skin cancer should be planned selectively, and many points must be considered. The authors conducted this study to compare the local flap and skin graft by facial location in reconstruction after resection of facial skin cancer. The study included patients treated in the Department of Plastic Surgery, Gyeongsang National University. The cases were analyzed according to the reconstruction method for the defect after surgery, sex, age, tumor site, and tumor size. Additionally, the authors compared differences in aesthetic satisfaction (out of 5 points) between the local flap and skin graft by facial location after resection of facial skin cancer, dividing the face into eight areas. A total of 153 cases were confirmed. The most common facial skin cancer was basal cell carcinoma (56.8%, 87 cases), followed by squamous cell carcinoma (37.2%, 57 cases) and Bowen's disease (5.8%, 9 cases). The most common reconstruction method was the local flap (119 cases, 77.7%), followed by skin graft (34 cases, 22.3%). Eighty-six patients answered the questionnaire; mean satisfaction with the local flap and skin graft was 4.3 and 3.5, respectively (p = 0.04), indicating that satisfaction with the local flap was significantly higher. These results suggest that the local flap provides excellent functional and cosmetic outcomes, whereas a skin graft can show poorer texture and tone compared with the surrounding normal skin.

  14. Automated facial acne assessment from smartphone images

    NASA Astrophysics Data System (ADS)

    Amini, Mohammad; Vasefi, Fartash; Valdebran, Manuel; Huang, Kevin; Zhang, Haomiao; Kemp, William; MacKinnon, Nicholas

    2018-02-01

    A smartphone mobile medical application is presented that analyzes the health of facial skin using a smartphone image and cloud-based image processing techniques. The mobile application uses the camera to capture a front face image of a subject, after which the captured image is spatially calibrated based on fiducial points such as the position of the iris of the eye. A facial recognition algorithm is used to identify features of the human face image, to normalize the image, and to define facial regions of interest (ROI) for acne assessment. Acne lesions are identified and classified into two categories: papules and pustules. Automated facial acne assessment was validated by performing tests on images of 60 digital human models and 10 real human face images. The application was able to identify 92% of acne lesions within five facial ROIs. The classification accuracy for separating papules from pustules was 98%. Combined with in-app documentation of treatment, lifestyle factors, and automated facial acne assessment, the app can be used in both cosmetic and clinical dermatology. It allows users to quantitatively self-measure acne severity and treatment efficacy on an ongoing basis to help them manage their chronic facial acne.
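    The abstract does not describe the papule/pustule classifier itself, but the distinction can be illustrated with a toy color-based rule (entirely hypothetical, not the authors' algorithm): pustules contain a bright, pus-filled centre, whereas papules are uniformly reddish.

    ```python
    def classify_lesion(pixels):
        """Toy papule/pustule rule on RGB samples from a lesion ROI.

        A lesion is called a pustule if enough of its pixels are bright
        (suggesting a whitish centre); otherwise it is called a papule.
        Thresholds here are arbitrary illustrations.
        """
        def brightness(p):
            return sum(p) / 3
        bright_fraction = sum(1 for p in pixels if brightness(p) > 200) / len(pixels)
        return "pustule" if bright_fraction > 0.2 else "papule"

    # Invented pixel samples: one uniformly red lesion, one with a pale centre.
    reddish = [(180, 60, 60)] * 10
    with_pus = [(180, 60, 60)] * 7 + [(230, 220, 200)] * 3
    ```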

  16. Influence of gravity upon some facial signs.

    PubMed

    Flament, F; Bazin, R; Piot, B

    2015-06-01

Facial clinical signs and their integration form the basis of the perception that others have of us, notably the age they imagine we are. Objective measurement of facial changes in motion, before and after application of a skin regimen, is essential for evaluating efficacy in facial dynamics. Quantifying facial changes with respect to gravity allows us to assess the 'control' of facial shape during daily activities. Standardized photographs of the faces of 30 Caucasian female subjects of various ages (24-73 years) were taken successively in upright and supine positions within a short time interval. All pictures were then reframed, so that any bias from surrounding facial features was avoided when rating a single sign, and trained experts rated several facial signs against published standardized photographic scales. For all subjects, the supine position increased facial width but not height, giving a fuller appearance to the face. More importantly, the supine position changed the severity of facial ageing features (e.g. wrinkles) compared with the upright position, and whether these features were attenuated or exacerbated depended on their facial location. The supine position mostly modified signs of the lower half of the face, whereas those of the upper half appeared unchanged or slightly accentuated. These changes were much more marked in the older groups, where some deep labial folds almost vanished. The alterations decreased the perceived ages of the subjects by an average of 3.8 years. Although preliminary, this study suggests that a 90° rotation of the facial skin with respect to gravity induces rapid rearrangements, among which changes in tensional forces within and across the face, motility of interstitial free water in underlying skin tissue, and/or alterations of facial Langer lines likely play a significant role.
© 2015 Society of Cosmetic Scientists and the Société Française de Cosmétologie.

  17. Does skull shape mediate the relationship between objective features and subjective impressions about the face?

    PubMed

    Marečková, Klára; Chakravarty, M Mallar; Huang, Mei; Lawrence, Claire; Leonard, Gabriel; Perron, Michel; Pike, Bruce G; Richer, Louis; Veillette, Suzanne; Pausova, Zdenka; Paus, Tomáš

    2013-10-01

In our previous work, we described facial features associated with successful recognition of the sex of a face (Marečková et al., 2011). These features were based on landmarks placed on the surface of faces reconstructed from magnetic resonance (MR) images; their position was therefore influenced by both the soft tissue (fat and muscle) and the bone structure of the skull. Here, we ask whether bone structure has a dissociable influence on observers' identification of the sex of the face. To answer this question, we used a novel method of studying skull morphology from MR images and explored the relationship between skull features, facial features, and sex recognition in a large sample of adolescents (n=876, including 475 adolescents from our original report). To determine whether skull features mediate the relationship between facial features and identification accuracy, we performed mediation analysis using bootstrapping. In males, skull features fully mediated the relationship between facial features and sex judgments. In females, the skull mediated this relationship only after adjusting facial features for the amount of body fat (estimated with bioimpedance). While body fat had a very slight positive influence on correct sex judgments about male faces, it had a robust negative influence on correct sex judgments about female faces. Overall, these results suggest that craniofacial bone structure is essential for correct sex judgments about a male face. In females, body fat negatively influences the accuracy of sex judgments, and craniofacial bone structure alone cannot explain the relationship between facial features and identification of a face as female. Copyright © 2013 Elsevier Inc. All rights reserved.

  18. Mapping the emotional face. How individual face parts contribute to successful emotion recognition.

    PubMed

    Wegrzyn, Martin; Vogt, Maria; Kireclioglu, Berna; Schneider, Julia; Kissler, Johanna

    2017-01-01

Which facial features allow human observers to successfully recognize expressions of emotion? While the eyes and mouth have frequently been shown to be of high importance, research on facial action units has made more precise predictions about the areas involved in displaying each emotion. The present research investigated, at a fine-grained level, which physical features observers rely on most when decoding facial expressions. In the experiment, individual faces expressing the basic emotions according to Ekman were hidden behind a mask of 48 tiles, which was sequentially uncovered. Participants were instructed to stop the sequence as soon as they recognized the facial expression and to assign it the correct label. For each part of the face, its contribution to successful recognition was computed, allowing us to visualize the importance of different face areas for each expression. Overall, observers relied mostly on the eye and mouth regions when successfully recognizing an emotion. Furthermore, the difference in the importance of eyes and mouth allowed the expressions to be grouped in a continuous space, ranging from sadness and fear (reliance on the eyes) to disgust and happiness (reliance on the mouth). The face parts with the highest diagnostic value for expression identification were typically located in areas corresponding to action units from the facial action coding system. A similarity analysis of the usefulness of different face parts for expression recognition demonstrated that faces cluster according to the emotion they express, rather than by low-level physical features. Also, expressions relying more on the eye or mouth region were in close proximity in the constructed similarity space. These analyses help to better understand how human observers process expressions of emotion, by delineating the mapping from facial features to psychological representation.

  20. Factors Influencing Perception of Facial Attractiveness: Gender and Dental Education.

    PubMed

    Jung, Ga-Hee; Jung, Seunggon; Park, Hong-Ju; Oh, Hee-Kyun; Kook, Min-Suk

    2018-03-01

This study was conducted to investigate gender- and dental education-specific differences in the perception of facial attractiveness for varying ratios of the lower face contour. Two hundred eleven students (110 male respondents and 110 female respondents; aged 20 to 38 years) were asked to rate facial figures with alterations to the bigonial width and the vertical length of the lower face. We produced a standard figure based on the "golden ratio" and 4 additional series of figures with either horizontal or vertical alterations to the contour of the lower face. The preference for each figure was evaluated using a Visual Analog Scale. The Kruskal-Wallis test was used to test differences in the preferences for each figure, and the Mann-Whitney U test was used to evaluate gender-specific differences and differences by dental education. In general, the highest preference score was given to the standard figure, whereas the facial figure with a large bigonial width and chin length had the lowest score. Male respondents showed a significantly higher preference score for a facial contour with a 0.1 proportional increase in the facial height-bigonial width ratio over that of the standard figure. For horizontal alterations to the facial profiles, there were no significant differences in preferences by level of dental education. For vertically altered images, the average Visual Analog Scale score was significantly lower among the dentally educated for facial images with proportional increases of 0.22 and 0.42 in the ratio between the vertical length of the chin and the lip. Generally, the standard image based on the golden ratio was the most preferred. A slender face appealed more to male than to female respondents, and facial images with an increased lower facial height were perceived as much less attractive by the dentally educated respondents, suggesting that dental education may increase sensitivity to vertical changes in the lower face.

  1. High precision automated face localization in thermal images: oral cancer dataset as test case

    NASA Astrophysics Data System (ADS)

    Chakraborty, M.; Raman, S. K.; Mukhopadhyay, S.; Patsa, S.; Anjum, N.; Ray, J. G.

    2017-02-01

Automated face detection is the pivotal step in computer vision aided facial medical diagnosis and biometrics. This paper presents an automatic, subject-adaptive framework for accurate face detection in the long infrared spectrum on our oral cancer detection database, which consists of malignant, precancerous, and normal subjects of varied age groups. Previous works on oral cancer detection using Digital Infrared Thermal Imaging (DITI) reveal that patients and normal subjects differ significantly in their facial thermal distribution. It is therefore a challenging task to formulate a completely adaptive framework to accurately localize the face in such a subject-specific modality. Our model first extracts the most probable facial regions by minimum-error thresholding, then applies adaptive methods that leverage the horizontal and vertical projections of the segmented thermal image. Additionally, the model incorporates our domain knowledge of the temperature difference between strategic locations of the face. To the best of our knowledge, this is the first work on detecting faces in thermal facial images comprising both patients and normal subjects. Previous works on face detection have not specifically targeted automated medical diagnosis; the face bounding boxes returned by those algorithms are therefore loose and not apt for further medical automation. Our algorithm significantly outperforms contemporary face detection algorithms in terms of commonly used metrics for evaluating face detection accuracy. Since our method has been tested on a challenging dataset consisting of both patients and normal subjects of diverse age groups, it can be seamlessly adopted in any DITI-guided facial healthcare or biometric application.
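The localization pipeline described above (thresholding of the warmest pixels followed by projection analysis) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: a simple mean-based threshold stands in for minimum-error thresholding, and the bounding box is taken from the non-zero rows and columns of the binary mask.

```python
# Hypothetical sketch of the pipeline: segment the warmest (facial) pixels
# with a global threshold, then use the vertical (per-row) and horizontal
# (per-column) projections of the binary mask to localize a face bounding box.

def localize_face(image):
    """image: list of rows of pixel intensities. Returns (top, bottom, left, right)."""
    flat = [p for row in image for p in row]
    threshold = sum(flat) / len(flat)  # stand-in for minimum-error thresholding
    mask = [[1 if p > threshold else 0 for p in row] for row in image]

    # Projections: foreground count per row and per column.
    row_proj = [sum(row) for row in mask]
    col_proj = [sum(col) for col in zip(*mask)]

    rows = [i for i, s in enumerate(row_proj) if s > 0]
    cols = [j for j, s in enumerate(col_proj) if s > 0]
    return rows[0], rows[-1], cols[0], cols[-1]

# Synthetic "thermal" frame: a warm 3x4 patch on a cool background.
frame = [[20.0] * 8 for _ in range(6)]
for r in range(2, 5):
    for c in range(3, 7):
        frame[r][c] = 35.0

print(localize_face(frame))  # -> (2, 4, 3, 6)
```

A real implementation would also use the temperature differences between strategic facial locations mentioned in the abstract to reject non-face warm regions.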

  2. Diprosopia revisited in light of the recognized role of neural crest cells in facial development.

    PubMed

    Carles, D; Weichhold, W; Alberti, E M; Léger, F; Pigeau, F; Horovitz, J

    1995-01-01

The aim of this study is to compare the theory of embryogenesis of the face with human diprosopia. This peculiar form of conjoined twinning is of great interest because 1) only the facial structures are duplicated and 2) almost all cases have a rather monomorphic pattern. The hypothesis is that an initial duplication of the notochord leads to two neural plates and subsequently to duplicated neural crests. Under those conditions, derivatives of the neural crests will be partially or totally duplicated; therefore, in diprosopia, the duplicated facial structures would be considered neural crest derivatives. If these structures are identical to those experimentally demonstrated to be neural crest derivatives in animals, this finding is an argument for applying this theory of facial embryogenesis to man. Serial horizontal sections of the faces of two diprosopic fetuses (11 and 21 weeks gestation) were studied macro- and microscopically to determine which external and internal structures are duplicated. A complete postmortem examination was performed in search of additional malformations. The faces of both fetuses showed a very similar morphologic pattern, with duplication of ocular, nasal, and buccal structures. The nasal fossae and the anterior part of the tongue were also duplicated, whereas the posterior part and the pharyngolaryngeal structures were single. Additional facial clefts were present in both fetuses. Extrafacial anomalies consisted of craniorachischisis, two fused vertebral columns and, in the older fetus, a complex cardiac malformation morphologically identical to malformations induced by removal or grafting of additional cardiac neural crest cells in animals. These pathological findings may identify the facial structures that are neural crest derivatives in man; they are similar to those experimentally demonstrated to be neural crest derivatives in animals. In this respect, diprosopia could be considered one end of a spectrum whose other end is the agnathia-holoprosencephaly complex. This assumption remains open to discussion, but we want to draw attention to the fact that diprosopia should not be considered a mere curiosity of conjoined twinning, but a major means of improving our knowledge of facial embryogenesis in man.

  3. Recovering Faces from Memory: The Distracting Influence of External Facial Features

    ERIC Educational Resources Information Center

    Frowd, Charlie D.; Skelton, Faye; Atherton, Chris; Pitchford, Melanie; Hepton, Gemma; Holden, Laura; McIntyre, Alex H.; Hancock, Peter J. B.

    2012-01-01

    Recognition memory for unfamiliar faces is facilitated when contextual cues (e.g., head pose, background environment, hair and clothing) are consistent between study and test. By contrast, inconsistencies in external features, especially hair, promote errors in unfamiliar face-matching tasks. For the construction of facial composites, as carried…

  4. Shades of Emotion: What the Addition of Sunglasses or Masks to Faces Reveals about the Development of Facial Expression Processing

    ERIC Educational Resources Information Center

    Roberson, Debi; Kikutani, Mariko; Doge, Paula; Whitaker, Lydia; Majid, Asifa

    2012-01-01

    Three studies investigated developmental changes in facial expression processing, between 3 years-of-age and adulthood. For adults and older children, the addition of sunglasses to upright faces caused an equivalent decrement in performance to face inversion. However, younger children showed "better" classification of expressions of faces wearing…

  5. Reading Faces: From Features to Recognition.

    PubMed

    Guntupalli, J Swaroop; Gobbini, M Ida

    2017-12-01

    Chang and Tsao recently reported that the monkey face patch system encodes facial identity in a space of facial features as opposed to exemplars. Here, we discuss how such coding might contribute to face recognition, emphasizing the critical role of learning and interactions with other brain areas for optimizing the recognition of familiar faces. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. Current Trends in Facial Rejuvenation: An Assessment of ASPS Members' Use of Fat Grafting during Face Lifting.

    PubMed

    Sinno, Sammy; Mehta, Karan; Reavey, Patrick L; Simmons, Christopher; Stuzin, James M

    2015-07-01

    Fat grafting can be used to improve the results of face lifting. The extent to which plastic surgeons use fat grafting in their face-lift practices is unknown. The goals of this study were to understand the current use of fat grafting during facial rejuvenation surgery and identify the most common techniques used. A 28-item questionnaire was formulated for distribution to a randomized cohort of American Society of Plastic Surgeons members. Data were collected and statistically analyzed using Pearson chi-square and Fisher's exact tests. A total of 309 questionnaires were collected. The questionnaire revealed that 85.2 percent of respondents use fat grafting during face lifts. Currently, the most common techniques used include abdominal harvest, centrifuge processing, blunt cannula injection without pretunneling, and placing less than 0.1 cc per pass. The deep central malar, lower lid cheek junction, and nasolabial folds are the most commonly injected areas. Combining surgical repositioning of fat with fat grafting offers surgeons a greater degree of aesthetic control for correcting contour in the aging face. Although there is controversy regarding the best method to surgically reposition fat, there is a growing consensus that volume augmentation is preferred by most face-lift surgeons.

  7. Are Happy Faces Attractive? The Roles of Early vs. Late Processing

    PubMed Central

    Sun, Delin; Chan, Chetwyn C. H.; Fan, Jintu; Wu, Yi; Lee, Tatia M. C.

    2015-01-01

Facial attractiveness is closely related to romantic love. To understand whether the neural underpinnings of perceived facial attractiveness and facial expression are similar, we recorded neural signals using an event-related potential (ERP) methodology from 20 participants who viewed faces with varied attractiveness and expressions. We found that attractiveness and expression were reflected by two early components, P2-lateral (P2l) and P2-medial (P2m), respectively; their interaction effect was reflected by the LPP, a late component. The findings suggest that facial attractiveness and expression are first processed in parallel for discrimination between stimuli. After the initial processing, more attentional resources are allocated to the faces with the most positive or most negative valence in both the attractiveness and expression dimensions. The findings contribute to the theoretical model of face perception. PMID:26648885

  8. A Method of Face Detection with Bayesian Probability

    NASA Astrophysics Data System (ADS)

    Sarker, Goutam

    2010-10-01

The objective of face detection is to identify all images that contain a face, irrespective of its orientation, illumination conditions, etc. This is a hard problem, because faces are highly variable in size, shape, lighting conditions, and so on. Many methods have been designed and developed to detect faces in a single image. The present paper is based on an 'Appearance Based Method', which relies on learning facial and non-facial features from image examples. This in turn is based on a statistical analysis of examples and counter-examples of facial images and employs the Bayesian conditional classification rule to estimate the probability that an image region contains a face (or non-face). The detection rate of the present system is very high, and the numbers of false positive and false negative detections are accordingly low.
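The Bayesian conditional classification rule underlying such appearance-based methods can be illustrated with a toy sketch: pick the class maximizing P(class | x) ∝ P(x | class)·P(class), with class-conditional likelihoods estimated from labeled examples. The feature values and data below are hypothetical, not the paper's trained appearance model.

```python
# Illustrative Bayesian classification on a single discrete appearance feature.
from collections import Counter

def train(examples):
    """examples: list of (feature_value, label). Returns priors and likelihoods."""
    labels = [lab for _, lab in examples]
    priors = {lab: n / len(examples) for lab, n in Counter(labels).items()}
    likelihoods = {}
    for lab in priors:
        feats = [f for f, l in examples if l == lab]
        likelihoods[lab] = {f: c / len(feats) for f, c in Counter(feats).items()}
    return priors, likelihoods

def classify(x, priors, likelihoods, eps=1e-6):
    # Posterior score per class: prior times class-conditional likelihood.
    scores = {lab: priors[lab] * likelihoods[lab].get(x, eps) for lab in priors}
    return max(scores, key=scores.get)

# Toy training set: "dark_oval" patches are mostly faces, "flat" ones mostly not.
data = [("dark_oval", "face")] * 8 + [("flat", "face")] * 2 \
     + [("dark_oval", "nonface")] * 1 + [("flat", "nonface")] * 9
priors, lik = train(data)
print(classify("dark_oval", priors, lik))  # -> face
print(classify("flat", priors, lik))      # -> nonface
```

A practical detector would replace the single discrete feature with a learned statistical model over pixel-level appearance features.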

  9. The processing of facial identity and expression is interactive, but dependent on task and experience

    PubMed Central

    Yankouskaya, Alla; Humphreys, Glyn W.; Rotshtein, Pia

    2014-01-01

Facial identity and emotional expression are two important sources of information for daily social interaction. However, the link between these two aspects of face processing has been the focus of an unresolved debate for the past three decades. Three views have been advocated: (1) separate and parallel processing of identity and emotional expression signals derived from faces; (2) asymmetric processing, with the computation of emotion in faces depending on facial identity coding but not vice versa; and (3) integrated processing of facial identity and emotion. We present studies with healthy participants that primarily apply methods from mathematical psychology, formally testing the relations between the processing of facial identity and emotion. Specifically, we focused on the “Garner” paradigm, the composite face effect, and divided attention tasks. We further ask whether the architecture of face-related processes is fixed or flexible and whether (and how) it can be shaped by experience. We conclude that formal methods of testing the relations between processes show that the processing of facial identity and expressions interact, and hence are not fully independent. We further demonstrate that the architecture of the relations depends on experience, with experience leading to a higher degree of inter-dependence in the processing of identity and expressions. We propose that this change occurs because integrative processing is more efficient than parallel processing. Finally, we argue that the dynamic aspects of face processing need to be incorporated into theories in this field. PMID:25452722

  10. Soft-tissue facial characteristics of attractive Chinese men compared to normal men.

    PubMed

    Wu, Feng; Li, Junfang; He, Hong; Huang, Na; Tang, Youchao; Wang, Yuanqing

    2015-01-01

To compare the facial characteristics of attractive Chinese men with those of reference men. The three-dimensional coordinates of 50 facial landmarks were collected in 40 healthy reference men and in 40 "attractive" men; soft-tissue facial angles, distances, areas, and volumes were computed and compared using analysis of variance. When compared with reference men, attractive men shared several similar facial characteristics: a relatively large forehead, a reduced mandible, and a rounded face. They had a more acute soft-tissue profile, an increased upper facial width and middle facial depth, a larger mouth, and more voluminous lips than reference men. Attractive men had several facial characteristics suggesting babyness; nonetheless, each group of men was characterized by a different development of these features. Esthetic reference values can be a useful tool for clinicians, but should always take the characteristics of individual faces into account.

  11. Facial animation on an anatomy-based hierarchical face model

    NASA Astrophysics Data System (ADS)

    Zhang, Yu; Prakash, Edmond C.; Sung, Eric

    2003-04-01

In this paper we propose a new hierarchical 3D facial model based on anatomical knowledge that provides high fidelity for realistic facial expression animation. Like a real human face, the facial model has a hierarchical biomechanical structure, incorporating a physically based approximation to facial skin tissue, a set of anatomically motivated facial muscle actuators, and an underlying skull structure. The deformable skin model has a multi-layer structure to approximate different types of soft tissue. It takes into account the nonlinear stress-strain relationship of the skin and the fact that soft tissue is almost incompressible. Different types of muscle models have been developed to simulate the distribution of muscle force on the skin due to muscle contraction. Thanks to the skull model, our facial model achieves both more accurate facial deformation and consideration of facial anatomy during the interactive definition of facial muscles. Under muscular force, the deformation of the facial skin is evaluated by numerical integration of the governing dynamic equations. The dynamic facial animation algorithm runs at an interactive rate and generates flexible, realistic facial expressions.
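The final step above, evaluating skin deformation by numerically integrating the governing dynamic equations, can be illustrated for a single skin node. This is a minimal sketch assuming a linear damped mass-spring element driven by a constant muscle force and integrated with explicit Euler steps; the paper's model is nonlinear and multi-layered, and all parameter values here are made up.

```python
# One skin node as a damped mass-spring system under a constant muscle force,
# integrated with explicit Euler as a stand-in for the paper's dynamics.

def simulate(mass=1.0, stiffness=4.0, damping=2.0, force=2.0, dt=0.01, steps=2000):
    x, v = 0.0, 0.0  # displacement and velocity of the skin node
    for _ in range(steps):
        # m*a = F_muscle - k*x - c*v  (linearized; real skin is nonlinear)
        a = (force - stiffness * x - damping * v) / mass
        v += a * dt
        x += v * dt
    return x

# The node settles near the static equilibrium F/k = 2.0/4.0 = 0.5.
print(round(simulate(), 3))  # -> 0.5
```

In a full facial model each node would couple to its neighbors through the multi-layer spring mesh, and the muscle force would vary with the interactively defined muscle actuators.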

  12. Qualitative and Quantitative Analysis for Facial Complexion in Traditional Chinese Medicine

    PubMed Central

    Zhao, Changbo; Li, Guo-zheng; Li, Fufeng; Wang, Zhi; Liu, Chang

    2014-01-01

Facial diagnosis is an important and very intuitive diagnostic method in Traditional Chinese Medicine (TCM). However, owing to its qualitative and experience-based subjective character, traditional facial diagnosis has certain limitations in clinical medicine. Computerized inspection methods provide classification models to recognize facial complexion (including color and gloss). However, previous works studied only the classification of facial complexion, which we regard as qualitative analysis; the severity or degree of facial complexion, needed for quantitative analysis, has not been reported. This paper aims to provide both qualitative and quantitative analysis of facial complexion. We propose a novel feature representation of facial complexion computed over the patient's whole face. The features are built from four chromaticity bases, split by luminance distribution in CIELAB color space. The chromaticity bases are constructed from the facial dominant color using two-level clustering; the optimal luminance distribution is determined simply through experimental comparison. The features prove more distinctive than previous facial complexion feature representations. Complexion recognition proceeds by training an SVM classifier with the optimal model parameters. In addition, improved features are developed by the weighted fusion of five local regions. Extensive experimental results show that the proposed features achieve the highest facial color recognition performance, with a total accuracy of 86.89%. Furthermore, the proposed recognition framework can analyze both the color and the gloss degree of facial complexion by learning a ranking function. PMID:24967342
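The idea of describing a face by chromaticity bases can be sketched in miniature: cluster the face's pixel chromaticity values into k bases, then represent the face by the fraction of pixels falling into each base. This hypothetical 1-D version uses plain k-means on toy chromaticity values; the paper's actual features use two-level clustering on CIELAB with luminance splitting.

```python
# Toy "chromaticity base" feature: 1-D k-means, then per-base pixel fractions.

def kmeans_1d(values, k, iters=20):
    # Initialize centers by sampling the sorted values at regular intervals.
    centers = sorted(values)[:: max(1, len(values) // k)][:k]
    for _ in range(iters):
        groups = [[] for _ in centers]
        for v in values:
            idx = min(range(len(centers)), key=lambda i: abs(v - centers[i]))
            groups[idx].append(v)
        centers = [sum(g) / len(g) if g else c for g, c in zip(groups, centers)]
    return centers

def complexion_feature(pixels, k=2):
    centers = kmeans_1d(pixels, k)
    counts = [0] * k
    for v in pixels:
        counts[min(range(k), key=lambda i: abs(v - centers[i]))] += 1
    return [c / len(pixels) for c in counts]

# Toy chromaticity values for a face with "reddish" and "pale" pixel groups.
pixels = [10, 11, 12, 30, 31, 29, 30, 12]
print(complexion_feature(pixels))  # -> [0.5, 0.5]
```

The resulting fraction vector (here one value per chromaticity base) could then feed a classifier such as the SVM the paper trains.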

  13. Aging disrupts the neural transformations that link facial identity across views.

    PubMed

    Habak, Claudine; Wilkinson, Frances; Wilson, Hugh R

    2008-01-01

    Healthy human aging can have adverse effects on cortical function and on the brain's ability to integrate visual information to form complex representations. Facial identification is crucial to successful social discourse, and yet, it remains unclear whether the neuronal mechanisms underlying face perception per se, and the speed with which they process information, change with age. We present face images whose discrimination relies strictly on the shape and geometry of a face at various stimulus durations. Interestingly, we demonstrate that facial identity matching is maintained with age when faces are shown in the same view (e.g., front-front or side-side), regardless of exposure duration, but degrades when faces are shown in different views (e.g., front and turned 20 degrees to the side) and does not improve at longer durations. Our results indicate that perceptual processing speed for complex representations and the mechanisms underlying same-view facial identity discrimination are maintained with age. In contrast, information is degraded in the neural transformations that represent facial identity across views. We suggest that the accumulation of useful information over time to refine a representation within a population of neurons saturates earlier in the aging visual system than it does in the younger system and contributes to the age-related deterioration of face discrimination across views.

  14. Facial first impressions and partner preference models: Comparable or distinct underlying structures?

    PubMed

    South Palomares, Jennifer K; Sutherland, Clare A M; Young, Andrew W

    2017-12-17

Given how frequently relationships are now initiated online, where impressions from face photographs may influence relationship initiation, it is important to understand how facial first impressions might be used in such contexts. We therefore examined the applicability of a leading model of verbally expressed partner preferences to impressions derived from real face images and investigated how the factor structure of first impressions based on potential partner preference-related traits might relate to a more general model of facial first impressions. Participants rated 1,000 everyday face photographs on 12 traits selected to represent Fletcher et al.'s (1999, Journal of Personality and Social Psychology, 76, 72) verbal model of partner preferences. Facial trait judgements showed an underlying structure that largely paralleled the tripartite structure of Fletcher et al.'s verbal preference model, regardless of face gender or participant gender. Furthermore, there was close correspondence between the verbal partner preference model and a more general tripartite model of facial first impressions derived from a different literature (Sutherland et al., 2013, Cognition, 127, 105), suggesting an underlying correspondence between verbal conceptual models of romantic preferences and more general models of facial first impressions. © 2017 The British Psychological Society.

  15. Quality-of-life improvement after free gracilis muscle transfer for smile restoration in patients with facial paralysis.

    PubMed

    Lindsay, Robin W; Bhama, Prabhat; Hadlock, Tessa A

    2014-01-01

Facial paralysis can contribute to disfigurement, psychological difficulties, and an inability to convey emotion via facial expression. In patients unable to perform a meaningful smile, free gracilis muscle transfer (FGMT) can often restore smile function. However, little is known about the impact on disease-specific quality of life. To determine quantitatively whether FGMT improves quality of life in patients with facial paralysis. Prospective evaluation of 154 FGMTs performed at a facial nerve center on 148 patients with facial paralysis. The Facial Clinimetric Evaluation (FaCE) survey and Facial Assessment by Computer Evaluation software (FACE-gram) were used to quantify quality-of-life improvement, oral commissure excursion, and symmetry with smile. Free gracilis muscle transfer. Change in FaCE score, oral commissure excursion, and symmetry with smile. There were 127 successful FGMTs on 124 patients and 14 failed procedures on 13 patients. Mean (SD) FaCE score increased significantly after successful FGMT (42.30 [15.9] vs 58.5 [17.60]; paired 2-tailed t test, P < .001). Mean (SD) FaCE scores improved significantly in all subgroups (nonflaccid cohort, 37.8 [19.9] vs 52.9 [19.3]; P = .02; flaccid cohort, 43.1 [15.1] vs 59.6 [17.2]; P < .001; trigeminal innervation cohort, 38.9 [14.6] vs 55.2 [18.2]; P < .001; cross-face nerve graft cohort, 47.3 [16.6] vs 61.7 [16.9]; P < .001) except the failure cohort (36.5 [20.8] vs 33.5 [17.9]; Wilcoxon signed-rank test, P = .15). Analysis of 40 patients' photographs revealed a mean (SD) preoperative and postoperative excursion on the affected side of -0.88 (3.79) and 7.68 (3.38), respectively (P < .001); symmetry with smile improved from a mean (SD) of 13.8 (7.46) to 4.88 (3.47) (P < .001). Free gracilis muscle transfer has become a mainstay in the management armamentarium for patients with severe reduction in oral commissure movement after facial nerve insult and recovery.
We found a quantitative improvement in quality of life after FGMT in patients who could not recover a meaningful smile after facial nerve insult. Quality-of-life improvement was not statistically different between donor nerve groups or facial paralysis types.
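
    The paired comparisons reported above (a paired two-tailed t test for successful transfers, a Wilcoxon signed-rank test for the small failure cohort) can be sketched in Python with SciPy; the scores below are invented for illustration and are not the study's data.

```python
# Hypothetical pre-/post-operative quality-of-life scores for paired tests.
from scipy import stats

pre = [40, 35, 50, 45, 38, 42, 48, 44]
post = [55, 52, 66, 58, 49, 60, 63, 57]

t_stat, p_paired = stats.ttest_rel(pre, post)    # paired two-tailed t test
w_stat, p_wilcoxon = stats.wilcoxon(pre, post)   # nonparametric alternative

print(f"paired t: t={t_stat:.2f}, p={p_paired:.5f}")
print(f"Wilcoxon: W={w_stat:.1f}, p={p_wilcoxon:.5f}")
```

    The Wilcoxon test is the safer choice for a very small group (such as the 13-patient failure cohort) because it does not assume normally distributed differences.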

  16. Effects of damping head movement and facial expression in dyadic conversation using real-time facial expression tracking and synthesized avatars

    PubMed Central

    Boker, Steven M.; Cohn, Jeffrey F.; Theobald, Barry-John; Matthews, Iain; Brick, Timothy R.; Spies, Jeffrey R.

    2009-01-01

    When people speak with one another, they tend to adapt their head movements and facial expressions in response to each other's head movements and facial expressions. We present an experiment in which confederates' head movements and facial expressions were motion tracked during videoconference conversations, an avatar face was reconstructed in real time, and naive participants spoke with the avatar face. No naive participant guessed that the computer-generated face was not video. Confederates' facial expressions, vocal inflections and head movements were attenuated at 1 min intervals in a fully crossed experimental design. Attenuated head movements led to increased head nods and lateral head turns, and attenuated facial expressions led to increased head nodding in both naive participants and confederates. Together, these results are consistent with a hypothesis that the dynamics of head movements in dyadic conversation include a shared equilibrium. Although both conversational partners were blind to the manipulation, when apparent head movement of one conversant was attenuated, both partners responded by increasing the velocity of their head movements. PMID:19884143

  17. Morphological Integration of Soft-Tissue Facial Morphology in Down Syndrome and Siblings

    PubMed Central

    Starbuck, John; Reeves, Roger H.; Richtsmeier, Joan

    2011-01-01

    Down syndrome (DS), resulting from trisomy of chromosome 21, is the most common live-born human aneuploidy. The phenotypic expression of trisomy 21 produces variable, though characteristic, facial morphology. Although certain facial features have been documented quantitatively and qualitatively as characteristic of DS (e.g., epicanthic folds, macroglossia, and hypertelorism), all of these traits occur in other craniofacial conditions with an underlying genetic cause. We hypothesize that the typical DS face is integrated differently than the face of non-DS siblings, and that the pattern of morphological integration unique to individuals with DS will yield information about underlying developmental associations between facial regions. We statistically compared morphological integration patterns of immature DS faces (N = 53) with those of non-DS siblings (N = 54), aged 6–12 years using 31 distances estimated from 3D coordinate data representing 17 anthropometric landmarks recorded on 3D digital photographic images. Facial features are affected differentially in DS, as evidenced by statistically significant differences in integration both within and between facial regions. Our results suggest a differential effect of trisomy on facial prominences during craniofacial development. PMID:21996933
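
    The distance-based approach described above can be sketched as follows: interlandmark Euclidean distances are computed from 3D coordinate data. The coordinates and anatomical labels below are hypothetical stand-ins; the study used 17 landmarks and 31 distances.

```python
# Illustrative computation of interlandmark distances from 3D landmarks.
import numpy as np

# Toy array: 4 landmarks x 3 coordinates (the study used 17 landmarks).
landmarks = np.array([
    [0.0, 0.0, 0.0],   # hypothetical landmark A
    [3.0, 0.0, 0.0],   # hypothetical landmark B
    [0.0, 4.0, 0.0],   # hypothetical landmark C
    [1.0, 1.0, 2.0],   # hypothetical landmark D
])

def interlandmark_distance(a, b):
    """Euclidean distance between two 3D landmarks."""
    return float(np.linalg.norm(a - b))

d = interlandmark_distance(landmarks[0], landmarks[1])
print(d)  # 3.0
```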

  18. Morphological integration of soft-tissue facial morphology in Down Syndrome and siblings.

    PubMed

    Starbuck, John; Reeves, Roger H; Richtsmeier, Joan

    2011-12-01

    Down syndrome (DS), resulting from trisomy of chromosome 21, is the most common live-born human aneuploidy. The phenotypic expression of trisomy 21 produces variable, though characteristic, facial morphology. Although certain facial features have been documented quantitatively and qualitatively as characteristic of DS (e.g., epicanthic folds, macroglossia, and hypertelorism), all of these traits occur in other craniofacial conditions with an underlying genetic cause. We hypothesize that the typical DS face is integrated differently than the face of non-DS siblings, and that the pattern of morphological integration unique to individuals with DS will yield information about underlying developmental associations between facial regions. We statistically compared morphological integration patterns of immature DS faces (N = 53) with those of non-DS siblings (N = 54), aged 6-12 years using 31 distances estimated from 3D coordinate data representing 17 anthropometric landmarks recorded on 3D digital photographic images. Facial features are affected differentially in DS, as evidenced by statistically significant differences in integration both within and between facial regions. Our results suggest a differential effect of trisomy on facial prominences during craniofacial development. 2011 Wiley Periodicals, Inc.

  19. Viewing distance matter to perceived intensity of facial expressions

    PubMed Central

    Gerhardsson, Andreas; Högman, Lennart; Fischer, Håkan

    2015-01-01

    In our daily perception of facial expressions, we depend on an ability to generalize across the varied distances at which they may appear. This is important to how we interpret the quality and the intensity of the expression. Previous research has not investigated whether this so called perceptual constancy also applies to the experienced intensity of facial expressions. Using a psychophysical measure (Borg CR100 scale) the present study aimed to further investigate perceptual constancy of happy and angry facial expressions at varied sizes, which is a proxy for varying viewing distances. Seventy-one (42 females) participants rated the intensity and valence of facial expressions varying in distance and intensity. The results demonstrated that the perceived intensity (PI) of the emotional facial expression was dependent on the distance of the face and the person perceiving it. An interaction effect was noted, indicating that close-up faces are perceived as more intense than faces at a distance and that this effect is stronger the more intense the facial expression truly is. The present study raises considerations regarding constancy of the PI of happy and angry facial expressions at varied distances. PMID:26191035

  20. Gaze Behavior of Children with ASD toward Pictures of Facial Expressions.

    PubMed

    Matsuda, Soichiro; Minagawa, Yasuyo; Yamamoto, Junichi

    2015-01-01

    Atypical gaze behavior in response to a face has been well documented in individuals with autism spectrum disorders (ASDs). Children with ASD appear to differ from typically developing (TD) children in gaze behavior for spoken and dynamic face stimuli but not for nonspeaking, static face stimuli. Furthermore, children with ASD and TD children show a difference in their gaze behavior for certain expressions. However, few studies have examined the relationship between autism severity and gaze behavior toward certain facial expressions. The present study replicated and extended previous studies by examining gaze behavior towards pictures of facial expressions. We presented ASD and TD children with pictures of surprised, happy, neutral, angry, and sad facial expressions. Autism severity was assessed using the Childhood Autism Rating Scale (CARS). The results showed that there was no group difference in gaze behavior when looking at pictures of facial expressions. Conversely, the children with ASD who had more severe autistic symptomatology had a tendency to gaze at angry facial expressions for a shorter duration in comparison to other facial expressions. These findings suggest that autism severity should be considered when examining atypical responses to certain facial expressions.

  1. Gaze Behavior of Children with ASD toward Pictures of Facial Expressions

    PubMed Central

    Matsuda, Soichiro; Minagawa, Yasuyo; Yamamoto, Junichi

    2015-01-01

    Atypical gaze behavior in response to a face has been well documented in individuals with autism spectrum disorders (ASDs). Children with ASD appear to differ from typically developing (TD) children in gaze behavior for spoken and dynamic face stimuli but not for nonspeaking, static face stimuli. Furthermore, children with ASD and TD children show a difference in their gaze behavior for certain expressions. However, few studies have examined the relationship between autism severity and gaze behavior toward certain facial expressions. The present study replicated and extended previous studies by examining gaze behavior towards pictures of facial expressions. We presented ASD and TD children with pictures of surprised, happy, neutral, angry, and sad facial expressions. Autism severity was assessed using the Childhood Autism Rating Scale (CARS). The results showed that there was no group difference in gaze behavior when looking at pictures of facial expressions. Conversely, the children with ASD who had more severe autistic symptomatology had a tendency to gaze at angry facial expressions for a shorter duration in comparison to other facial expressions. These findings suggest that autism severity should be considered when examining atypical responses to certain facial expressions. PMID:26090223

  2. Bidirectional Gender Face Aftereffects: Evidence Against Normative Facial Coding.

    PubMed

    Cronin, Sophie L; Spence, Morgan L; Miller, Paul A; Arnold, Derek H

    2017-02-01

    Facial appearance can be altered, not just by restyling but also by sensory processes. Exposure to a female face can, for instance, make subsequent faces look more masculine than they would otherwise. Two explanations exist. According to one, exposure to a female face renormalizes face perception, making that female and all other faces look more masculine as a consequence: a unidirectional effect. According to that explanation, exposure to a male face would have the opposite unidirectional effect. Another suggestion is that face gender is subject to contrastive aftereffects. These should make some faces look more masculine than the adaptor and other faces more feminine: a bidirectional effect. Here, we show that face gender aftereffects are bidirectional, as predicted by the latter hypothesis. Images of real faces rated as more and less masculine than adaptors at baseline tended to look even more and less masculine than adaptors after adaptation. This suggests that, rather than mental representations of all faces being recalibrated to better reflect the prevailing statistics of the environment, mental operations exaggerate differences between successive faces, and this can impact facial gender perception.

  3. Are event-related potentials to dynamic facial expressions of emotion related to individual differences in the accuracy of processing facial expressions and identity?

    PubMed

    Recio, Guillermo; Wilhelm, Oliver; Sommer, Werner; Hildebrandt, Andrea

    2017-04-01

    Despite a wealth of knowledge about the neural mechanisms behind emotional facial expression processing, little is known about how they relate to individual differences in social cognition abilities. We studied individual differences in the event-related potentials (ERPs) elicited by dynamic facial expressions. First, we assessed the latent structure of the ERPs, reflecting structural face processing in the N170, and the allocation of processing resources and reflexive attention to emotionally salient stimuli, in the early posterior negativity (EPN) and the late positive complex (LPC). Then we estimated brain-behavior relationships between the ERP factors and behavioral indicators of facial identity and emotion-processing abilities. Structural models revealed that the participants who formed faster structural representations of neutral faces (i.e., shorter N170 latencies) performed better at face perception (r = -.51) and memory (r = -.42). The N170 amplitude was not related to individual differences in face cognition or emotion processing. The latent EPN factor correlated with emotion perception (r = .47) and memory (r = .32), and also with face perception abilities (r = .41). Interestingly, the latent factor representing the difference in EPN amplitudes between the two neutral control conditions (chewing and blinking movements) also correlated with emotion perception (r = .51), highlighting the importance of tracking facial changes in the perception of emotional facial expressions. The LPC factor for negative expressions correlated with the memory for emotional facial expressions. The links revealed between the latency and strength of activations of brain systems and individual differences in processing socio-emotional information provide new insights into the brain mechanisms involved in social communication.

  4. The Helsinki Face Transplantation: Surgical aspects and 1-year outcome.

    PubMed

    Lassus, Patrik; Lindford, Andrew; Vuola, Jyrki; Bäck, Leif; Suominen, Sinikka; Mesimäki, Karri; Wilkman, Tommy; Ylä-Kotola, Tuija; Tukiainen, Erkki; Kuokkanen, Hannu; Törnwall, Jyrki

    2018-02-01

    Since 2005, at least 38 facial transplantations have been performed worldwide. We herein describe the surgical technique and 1-year clinical outcome in Finland's first face transplant case. A 34-year-old male who had a severe facial deformity following ballistic trauma in 1999 underwent facial transplantation at the Helsinki University Hospital on 8th February 2016. Three-dimensional (3D) technology was used to manufacture donor and recipient patient-specific osteotomy guides and a donor face mask. The facial transplant consisted of a Le Fort II maxilla, central mandible, lower ⅔ of the midface muscles, facial and neck skin, oral mucosa, anterior tongue and floor of mouth muscles, facial nerve (three bilateral branches), and bilateral hypoglossal and buccal nerves. At 1-year follow-up, there have thus far been no clinical or histological signs of rejection. The patient has a good aesthetic outcome with symmetrical restoration of the mobile central part of the face, with recovery of pain and light touch sensation to almost the entire facial skin and intraoral mucosa. Electromyography at 1 year has confirmed symmetrical muscle activity in the floor of the mouth and facial musculature, and the patient is able to produce a spontaneous smile. A successful social and psychological outcome has also been observed. Postoperative complications requiring intervention included early (nasopalatinal fistula, submandibular sialocele, temporomandibular joint pain and transient type 2 diabetes) and late (intraoral wound and fungal infection, renal impairment and hypertension) complications. At 1 year, we report an overall good functional outcome in Finland's first face transplant. Copyright © 2017 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.

  5. Impact of facial defect reconstruction on attractiveness and negative facial perception.

    PubMed

    Dey, Jacob K; Ishii, Masaru; Boahene, Kofi D O; Byrne, Patrick; Ishii, Lisa E

    2015-06-01

    Measure the impact of facial defect reconstruction on observer-graded attractiveness and negative facial perception. Prospective, randomized, controlled experiment. One hundred twenty casual observers viewed images of faces with defects of varying sizes and locations before and after reconstruction as well as normal comparison faces. Observers rated attractiveness, defect severity, and how disfiguring, bothersome, and important to repair they considered each face. Facial defects decreased attractiveness -2.26 (95% confidence interval [CI]: -2.45, -2.08) on a 10-point scale. Mixed effects linear regression showed this attractiveness penalty varied with defect size and location, with large and central defects generating the greatest penalty. Reconstructive surgery increased attractiveness 1.33 (95% CI: 1.18, 1.47), an improvement dependent upon size and location, restoring some defect categories to near normal ranges of attractiveness. Iterated principal factor analysis indicated the disfiguring, important to repair, bothersome, and severity variables were highly correlated and measured a common domain; thus, they were combined to create the disfigured, important to repair, bothersome, severity (DIBS) factor score, representing negative facial perception. The DIBS regression showed defect faces have a 1.5 standard deviation increase in negative perception (DIBS: 1.69, 95% CI: 1.61, 1.77) compared to normal faces, which decreased by a similar magnitude after surgery (DIBS: -1.44, 95% CI: -1.49, -1.38). These findings varied with defect size and location. Surgical reconstruction of facial defects increased attractiveness and decreased negative social facial perception, an impact that varied with defect size and location. These new social perception data add to the evidence base demonstrating the value of high-quality reconstructive surgery. © 2015 The American Laryngological, Rhinological and Otological Society, Inc.
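
    A mixed-effects linear regression of the kind mentioned above can be sketched with statsmodels, using observers as the grouping factor (random intercept). The data, variable names, and effect size below are invented for illustration and do not reproduce the study's analysis.

```python
# Hypothetical observer-rating data: each observer rates faces in two
# conditions ("defect" vs "repaired"); ratings share an observer-specific bias.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
rows = []
for observer in range(20):                 # 20 hypothetical observers
    bias = rng.normal(scale=0.5)           # observer-specific rating tendency
    for face in range(10):                 # 10 hypothetical faces
        base = 5.0 + bias + rng.normal(scale=0.3)
        rows.append({"observer": observer, "status": "defect",
                     "rating": base - 2.3})
        rows.append({"observer": observer, "status": "repaired",
                     "rating": base - 0.9})
df = pd.DataFrame(rows)

# Random intercept per observer; fixed effect of repair status.
model = smf.mixedlm("rating ~ status", df, groups=df["observer"]).fit()
print(model.params["status[T.repaired]"])  # built-in gain of 1.4 points
```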

  6. The many faces of a face: Comparing stills and videos of facial expressions in eight dimensions (SAVE database).

    PubMed

    Garrido, Margarida V; Lopes, Diniz; Prada, Marília; Rodrigues, David; Jerónimo, Rita; Mourão, Rui P

    2017-08-01

    This article presents subjective rating norms for a new set of Stills And Videos of facial Expressions-the SAVE database. Twenty nonprofessional models were filmed while posing in three different facial expressions (smile, neutral, and frown). After each pose, the models completed the PANAS questionnaire, and reported more positive affect after smiling and more negative affect after frowning. From the shooting material, stills and 5 s and 10 s videos were edited (total stimulus set = 180). A different sample of 120 participants evaluated the stimuli for attractiveness, arousal, clarity, genuineness, familiarity, intensity, valence, and similarity. Overall, facial expression had a main effect in all of the evaluated dimensions, with smiling models obtaining the highest ratings. Frowning expressions were perceived as being more arousing, clearer, and more intense, but also as more negative than neutral expressions. Stimulus presentation format only influenced the ratings of attractiveness, familiarity, genuineness, and intensity. The attractiveness and familiarity ratings increased with longer exposure times, whereas genuineness decreased. The ratings in the several dimensions were correlated. The subjective norms of facial stimuli presented in this article have potential applications to the work of researchers in several research domains. From our database, researchers may choose the most adequate stimulus presentation format for a particular experiment, select and manipulate the dimensions of interest, and control for the remaining dimensions. The full stimulus set and descriptive results (means, standard deviations, and confidence intervals) for each stimulus per dimension are provided as supplementary material.

  7. Matching novel face and voice identity using static and dynamic facial images.

    PubMed

    Smith, Harriet M J; Dunn, Andrew K; Baguley, Thom; Stacey, Paula C

    2016-04-01

    Research investigating whether faces and voices share common source identity information has offered contradictory results. Accurate face-voice matching is consistently above chance when the facial stimuli are dynamic, but not when the facial stimuli are static. We tested whether procedural differences might help to account for the previous inconsistencies. In Experiment 1, participants completed a sequential two-alternative forced choice matching task. They either heard a voice and then saw two faces or saw a face and then heard two voices. Face-voice matching was above chance when the facial stimuli were dynamic and articulating, but not when they were static. In Experiment 2, we tested whether matching was more accurate when faces and voices were presented simultaneously. The participants saw two face-voice combinations, presented one after the other. They had to decide which combination was the same identity. As in Experiment 1, only dynamic face-voice matching was above chance. In Experiment 3, participants heard a voice and then saw two static faces presented simultaneously. With this procedure, static face-voice matching was above chance. The overall results, analyzed using multilevel modeling, showed that voices and dynamic articulating faces, as well as voices and static faces, share concordant source identity information. It seems, therefore, that above-chance static face-voice matching is sensitive to the experimental procedure employed. In addition, the inconsistencies in previous research might depend on the specific stimulus sets used; our multilevel modeling analyses show that some people look and sound more similar than others.

  8. Neural responses to facial expressions support the role of the amygdala in processing threat

    PubMed Central

    Sormaz, Mladen; Flack, Tessa; Asghar, Aziz U. R.; Fan, Siyan; Frey, Julia; Manssuer, Luis; Usten, Deniz; Young, Andrew W.; Andrews, Timothy J.

    2014-01-01

    The amygdala is known to play an important role in the response to facial expressions that convey fear. However, it remains unclear whether the amygdala’s response to fear reflects its role in the interpretation of danger and threat, or whether it is to some extent activated by all facial expressions of emotion. Previous attempts to address this issue using neuroimaging have been confounded by differences in the use of control stimuli across studies. Here, we address this issue using a block design functional magnetic resonance imaging paradigm, in which we compared the response to face images posing expressions of fear, anger, happiness, disgust and sadness with a range of control conditions. The responses in the amygdala to different facial expressions were compared with the responses to a non-face condition (buildings), to mildly happy faces and to neutral faces. Results showed that only fear and anger elicited significantly greater responses compared with the control conditions involving faces. Overall, these findings are consistent with the role of the amygdala in processing threat, rather than in the processing of all facial expressions of emotion, and demonstrate the critical importance of the choice of comparison condition to the pattern of results. PMID:24097376

  9. Gender differences in memory processing of female facial attractiveness: evidence from event-related potentials.

    PubMed

    Zhang, Yan; Wei, Bin; Zhao, Peiqiong; Zheng, Minxiao; Zhang, Lili

    2016-06-01

    High rates of agreement in the judgment of facial attractiveness suggest universal principles of beauty. This study investigated gender differences in recognition memory processing of female facial attractiveness. Thirty-four Chinese heterosexual participants (17 females, 17 males) aged 18-24 years (mean age 21.63 ± 1.51 years) participated in the experiment which used event-related potentials (ERPs) based on a study-test paradigm. The behavioral data results showed that both men and women had significantly higher accuracy rates for attractive faces than for unattractive faces, but men reacted faster to unattractive faces. Gender differences on ERPs showed that attractive faces elicited larger early components such as P1, N170, and P2 in men than in women. The results indicated that the effects of recognition bias during memory processing modulated by female facial attractiveness are greater for men than women. Behavioral and ERP evidence indicates that men and women differ in their attentional adhesion to attractive female faces; different mating-related motives may guide the selective processing of attractive men and women. These findings document gender differences in the memory processing of female facial attractiveness from an evolutionary perspective.

  10. Variation in the cranial base orientation and facial skeleton in dry skulls sampled from three major populations.

    PubMed

    Kuroe, Kazuto; Rosas, Antonio; Molleson, Theya

    2004-04-01

    The aim of this study was to analyse the effects of cranial base orientation on the morphology of the craniofacial system in human populations. Three geographically distant populations from Europe (72), Africa (48) and Asia (24) were chosen. Five angular and two linear variables from the cranial base component and six angular and six linear variables from the facial component based on two reference lines of the vertical posterior maxillary and Frankfort horizontal planes were measured. The European sample presented dolichofacial individuals with a larger face height and a smaller face depth derived from a raised cranial base and facial cranium orientation which tended to be similar to the Asian sample. The African sample presented brachyfacial individuals with a reduced face height and a larger face depth as a result of a lowered cranial base and facial cranium orientation. The Asian sample presented dolichofacial individuals with a larger face height and depth due to a raised cranial base and facial cranium orientation. The findings of this study suggest that cranial base orientation and posterior cranial base length appear to be valid discriminating factors between different human populations.

  11. Ability of Children with Learning Disabilities and Children with Autism Spectrum Disorder to Recognize Feelings from Facial Expressions and Body Language

    ERIC Educational Resources Information Center

    Girli, Alev; Dogmaz, Sila

    2018-01-01

    In this study, children with learning disability (LD) were compared with children with autism spectrum disorder (ASD) in terms of identifying emotions from photographs with certain face and body expressions. The sample consisted of a total of 82 children aged 7-19 years living in Izmir in Turkey. A total of 6 separate sets of slides, consisting of…

  12. Principal component analysis for surface reflection components and structure in facial images and synthesis of facial images for various ages

    NASA Astrophysics Data System (ADS)

    Hirose, Misa; Toyota, Saori; Ojima, Nobutoshi; Ogawa-Ochiai, Keiko; Tsumura, Norimichi

    2017-08-01

    In this paper, principal component analysis is applied to the distribution of pigmentation, surface reflectance, and landmarks in whole facial images to obtain feature values. The relationship between the obtained feature vectors and the age of the face is then estimated by multiple regression analysis so that facial images can be modulated for women aged 10-70. In a previous study, we analyzed only the distribution of pigmentation, and the reproduced images appeared to be younger than the apparent age of the initial images. We believe that this happened because we did not modulate the facial structures and detailed surfaces, such as wrinkles. By considering landmarks and surface reflectance over the entire face, we were able to analyze the variation in the distributions of facial structures, fine asperity, and pigmentation. As a result, our method is able to appropriately modulate the appearance of a face so that it appears to be the correct age.
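
    The pipeline described above (PCA for feature extraction, then regression against age) can be sketched with scikit-learn on synthetic data; the dimensions, data, and variable names below are assumptions for illustration only.

```python
# Synthetic sketch: PCA extracts feature vectors from flattened facial
# measurements, and a regression relates the component scores to age.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_faces, n_features = 60, 200          # e.g. flattened pigmentation/landmark data
ages = rng.uniform(10, 70, n_faces)

# Synthetic "faces": one latent direction varies with age, plus noise.
direction = rng.normal(size=n_features)
faces = np.outer(ages, direction) + rng.normal(scale=5.0, size=(n_faces, n_features))

pca = PCA(n_components=10)
scores = pca.fit_transform(faces)      # feature values per face

reg = LinearRegression().fit(scores, ages)   # relate components to age
print("R^2 on training data:", reg.score(scores, ages))
```

    Inverting the pipeline (predicting component scores for a target age, then reconstructing an image via `pca.inverse_transform`) is what allows a face to be modulated toward a chosen age.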

  13. Facial First Impressions Across Culture: Data-Driven Modeling of Chinese and British Perceivers' Unconstrained Facial Impressions.

    PubMed

    Sutherland, Clare A M; Liu, Xizi; Zhang, Lingshan; Chu, Yingtung; Oldmeadow, Julian A; Young, Andrew W

    2018-04-01

    People form first impressions from facial appearance rapidly, and these impressions can have considerable social and economic consequences. Three dimensions can explain Western perceivers' impressions of Caucasian faces: approachability, youthful-attractiveness, and dominance. Impressions along these dimensions are theorized to be based on adaptive cues to threat detection or sexual selection, making it likely that they are universal. We tested whether the same dimensions of facial impressions emerge across culture by building data-driven models of first impressions of Asian and Caucasian faces derived from Chinese and British perceivers' unconstrained judgments. We then cross-validated the dimensions with computer-generated average images. We found strong evidence for common approachability and youthful-attractiveness dimensions across perceiver and face race, with some evidence of a third dimension akin to capability. The models explained ~75% of the variance in facial impressions. In general, the findings demonstrate substantial cross-cultural agreement in facial impressions, especially on the most salient dimensions.

  14. Cognitive behavioural therapy attenuates the enhanced early facial stimuli processing in social anxiety disorders: an ERP investigation.

    PubMed

    Cao, Jianqin; Liu, Quanying; Li, Yang; Yang, Jun; Gu, Ruolei; Liang, Jin; Qi, Yanyan; Wu, Haiyan; Liu, Xun

    2017-07-28

    Previous studies of patients with social anxiety have demonstrated abnormal early processing of facial stimuli in social contexts. In other words, patients with social anxiety disorder (SAD) tend to exhibit enhanced early facial processing when compared to healthy controls. Few studies have examined the temporal electrophysiological event-related potential (ERP)-indexed profiles when individuals with SAD compare faces to objects. Systematic comparisons of ERPs to facial/object stimuli before and after therapy are also lacking. We used a passive visual detection paradigm with upright and inverted faces/objects, which are known to elicit early P1 and N170 components, to study abnormal early face processing and subsequent improvements in this measure in patients with SAD. Seventeen patients with SAD and 17 matched control participants performed a passive visual detection paradigm task while undergoing EEG. The healthy controls were compared to patients with SAD pre-therapy to test the hypothesis that patients with SAD have early hypervigilance to facial cues. We compared patients with SAD before and after therapy to test the hypothesis that the early hypervigilance to facial cues in patients with SAD can be alleviated. Compared to healthy control (HC) participants, patients with SAD had a more robust P1-N170 slope, but no amplitude effects, in response to both upright and inverted faces and objects. Interestingly, we found that patients with SAD had reduced P1 responses to all objects and faces after therapy, but had selectively reduced N170 responses to faces, and especially inverted faces. Moreover, the slope from P1 to N170 in patients with SAD was flatter post-therapy than pre-therapy. Furthermore, the amplitude of N170 evoked by the facial stimuli was correlated with scores on the interaction anxiousness scale (IAS) after therapy.
Our results did not provide electrophysiological support for the early hypervigilance hypothesis in SAD to faces, but confirm that cognitive-behavioural therapy can reduce the early visual processing of faces. These findings have potentially important therapeutic implications in the assessment and treatment of social anxiety. Trial registration HEBDQ2014021.

  15. Interactive effects between gaze direction and facial expression on attentional resources deployment: the task instruction and context matter

    PubMed Central

    Ricciardelli, Paola; Lugli, Luisa; Pellicano, Antonello; Iani, Cristina; Nicoletti, Roberto

    2016-01-01

    In three experiments, we tested whether the amount of attentional resources needed to process a face displaying neutral/angry/fearful facial expressions with direct or averted gaze depends on task instructions, and face presentation. To this end, we used a Rapid Serial Visual Presentation paradigm in which participants in Experiment 1 were first explicitly asked to discriminate whether the expression of a target face (T1) with direct or averted gaze was angry or neutral, and then to judge the orientation of a landscape (T2). Experiment 2 was identical to Experiment 1 except that participants had to discriminate the gender of the face of T1 and fearful faces were also presented randomly intermixed within each block of trials. Experiment 3 differed from Experiment 2 only because angry and fearful faces were never presented within the same block. The findings indicated that the presence of the attentional blink (AB) for face stimuli depends on specific combinations of gaze direction and emotional facial expressions and crucially revealed that contextual factors (e.g., explicit instruction to process the facial expression and the presence of other emotional faces) can modify and even reverse the AB, suggesting a flexible and more contextualized deployment of attentional resources in face processing. PMID:26898473

  16. Social and emotional relevance in face processing: happy faces of future interaction partners enhance the late positive potential

    PubMed Central

    Bublatzky, Florian; Gerdes, Antje B. M.; White, Andrew J.; Riemer, Martin; Alpers, Georg W.

    2014-01-01

    Human face perception is modulated by both emotional valence and social relevance, but their interaction has rarely been examined. Event-related brain potentials (ERP) to happy, neutral, and angry facial expressions with different degrees of social relevance were recorded. To implement a social anticipation task, relevance was manipulated by presenting faces of two specific actors as future interaction partners (socially relevant), whereas two other face actors remained non-relevant. In a further control task all stimuli were presented without specific relevance instructions (passive viewing). Face stimuli of four actors (2 women, from the KDEF) were randomly presented for 1s to 26 participants (16 female). Results showed an augmented N170, early posterior negativity (EPN), and late positive potential (LPP) for emotional in contrast to neutral facial expressions. Of particular interest, face processing varied as a function of experimental tasks. Whereas task effects were observed for P1 and EPN regardless of instructed relevance, LPP amplitudes were modulated by emotional facial expression and relevance manipulation. The LPP was specifically enhanced for happy facial expressions of the anticipated future interaction partners. This underscores that social relevance can impact face processing already at an early stage of visual processing. These findings are discussed within the framework of motivated attention and face processing theories. PMID:25076881

  17. Three-dimensional analysis of facial shape and symmetry in twins using laser surface scanning.

    PubMed

    Djordjevic, J; Jadallah, M; Zhurov, A I; Toma, A M; Richmond, S

    2013-08-01

    Three-dimensional analysis of facial shape and symmetry in twins. Faces of 37 twin pairs [19 monozygotic (MZ) and 18 dizygotic (DZ)] were laser scanned at the age of 15 during a follow-up of the Avon Longitudinal Study of Parents and Children (ALSPAC), South West of England. Facial shape was analysed using two methods: 1) Procrustes analysis of landmark configurations (63 x, y and z coordinates of 21 facial landmarks) and 2) three-dimensional comparisons of facial surfaces within each twin pair. Monozygotic and DZ twins were compared using ellipsoids representing 95% of the variation in landmark configurations and surface-based average faces. Facial symmetry was analysed by superimposing the original and mirrored facial images. Both analyses showed greater similarity of facial shape in MZ twins, with the lower third being the least similar. Procrustes analysis did not reveal any significant difference in the facial landmark configurations of MZ and DZ twins. The average faces of MZ and DZ males were coincident in the forehead, supraorbital and infraorbital ridges, the bridge of the nose and the lower lip. In MZ and DZ females, the eyes, supraorbital and infraorbital ridges, philtrum and lower part of the cheeks were coincident. Zygosity did not seem to influence the amount of facial symmetry. The lower facial third was the most asymmetrical. Three-dimensional analyses revealed differences in the facial shapes of MZ and DZ twins. The relative contribution of genetic and environmental factors differs for the upper, middle and lower facial thirds. © 2012 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
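The Procrustes analysis of landmark configurations mentioned above can be sketched as follows. The landmark coordinates here are synthetic stand-ins, and the function is a generic ordinary-Procrustes implementation (removing translation, scale, and rotation before comparing shapes), not the authors' exact pipeline.

```python
import numpy as np

def procrustes_disparity(a, b):
    """Sum of squared differences between two landmark configurations
    after removing translation, scale, and rotation (ordinary Procrustes)."""
    a = a - a.mean(axis=0)           # remove translation
    b = b - b.mean(axis=0)
    a = a / np.linalg.norm(a)        # remove scale (unit centroid size)
    b = b / np.linalg.norm(b)
    # Optimal rotation of b onto a (orthogonal Procrustes via SVD).
    u, _, vt = np.linalg.svd(b.T @ a)
    r = u @ vt
    return float(np.sum((a - b @ r) ** 2))

# Two hypothetical 21-landmark (x, y, z) configurations, e.g. a twin pair;
# twin_b is twin_a plus small noise, standing in for a very similar face.
rng = np.random.default_rng(42)
twin_a = rng.normal(size=(21, 3))
twin_b = twin_a + rng.normal(scale=0.05, size=(21, 3))
d = procrustes_disparity(twin_a, twin_b)
```

A configuration that differs from another only by rigid motion and scale yields a disparity of (numerically) zero, so the measure isolates genuine shape differences.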

  18. Responses in the right posterior superior temporal sulcus show a feature-based response to facial expression.

    PubMed

    Flack, Tessa R; Andrews, Timothy J; Hymers, Mark; Al-Mosaiwi, Mohammed; Marsden, Samuel P; Strachan, James W A; Trakulpipat, Chayanit; Wang, Liang; Wu, Tian; Young, Andrew W

    2015-08-01

    The face-selective region of the right posterior superior temporal sulcus (pSTS) plays an important role in analysing facial expressions. However, it is less clear how facial expressions are represented in this region. In this study, we used the face composite effect to explore whether the pSTS contains a holistic or feature-based representation of facial expression. Aligned and misaligned composite images were created from the top and bottom halves of faces posing different expressions. In Experiment 1, participants performed a behavioural matching task in which they judged whether the top half of two images was the same or different. The ability to discriminate the top half of the face was affected by changes in the bottom half of the face when the images were aligned, but not when they were misaligned. This shows a holistic behavioural response to expression. In Experiment 2, we used fMR-adaptation to ask whether the pSTS has a corresponding holistic neural representation of expression. Aligned or misaligned images were presented in blocks that involved repeating the same image or in which the top or bottom half of the images changed. Increased neural responses were found in the right pSTS regardless of whether the change occurred in the top or bottom of the image, showing that changes in expression were detected across all parts of the face. However, in contrast to the behavioural data, the pattern did not differ between aligned and misaligned stimuli. This suggests that the pSTS does not encode facial expressions holistically. In contrast to the pSTS, a holistic pattern of response to facial expression was found in the right inferior frontal gyrus (IFG). Together, these results suggest that pSTS reflects an early stage in the processing of facial expression in which facial features are represented independently. Copyright © 2015 Elsevier Ltd. All rights reserved.

  19. Attention to emotion modulates fMRI activity in human right superior temporal sulcus.

    PubMed

    Narumoto, J; Okada, T; Sadato, N; Fukui, K; Yonekura, Y

    2001-10-01

    A parallel neural network has been proposed for processing the various types of information conveyed by faces, including emotion. Using functional magnetic resonance imaging (fMRI), we tested the effect of explicit attention to the emotional expression of faces on the neuronal activity of face-responsive regions. A delayed match-to-sample procedure was adopted. Subjects were required to match visually presented pictures with regard to the contour of the face pictures, facial identity, and emotional expression by valence (happy and fearful expressions) and arousal (fearful and sad expressions). Contour matching of non-face scrambled pictures was used as a control condition. The face-responsive regions that responded more to faces than to non-face stimuli were the bilateral lateral fusiform gyrus (LFG), the right superior temporal sulcus (STS), and the bilateral intraparietal sulcus (IPS). In these regions, general attention to the face enhanced the activities of the bilateral LFG, the right STS, and the left IPS compared with attention to the contour of the facial image. Selective attention to facial emotion specifically enhanced the activity of the right STS compared with attention to the face per se. The results suggest that the right STS region plays a special role in facial emotion recognition within distributed face-processing systems. This finding may support the notion that the STS is involved in social perception.

  20. Evaluation of appearance transfer and persistence in central face transplantation: a computer simulation analysis.

    PubMed

    Pomahac, Bohdan; Aflaki, Pejman; Nelson, Charles; Balas, Benjamin

    2010-05-01

    Partial facial allotransplantation is an emerging option in the reconstruction of central facial defects, restoring both function and aesthetic appearance. Ethical debate partly stems from uncertainty surrounding identity aspects of the procedure. There is no objective evidence regarding the effect of donors' transplanted facial structures on the appearance change of recipients and its influence on facial recognition of donors and recipients. Full-face frontal-view color photographs of 100 volunteers were taken at a distance of 150 cm with a digital camera (Nikon/DX80), in front of a blue background and with a neutral facial expression. Using image-editing software (Adobe-Photoshop-CS3), central facial transplantation was simulated between participants. Twenty observers performed a familiar 'facial recognition task' to identify 40 post-transplant composite faces presented individually on the screen at a viewing distance of 60 cm, with an exposure time of 5 s. Each composite face comprised a face familiar and a face unfamiliar to the observers. Trials were done with and without external facial features (head contour, hair and ears). Two variables were defined: 'Appearance Transfer' refers to the transfer of the donor's appearance to the recipient; 'Appearance Persistence' deals with the extent of the recipient's appearance change post-transplantation. A t-test was run to determine whether the rates of Appearance Transfer differed from Appearance Persistence. The average Appearance Transfer rate (2.6%) was significantly lower than the Appearance Persistence rate (66%) (P<0.001), indicating that transfer of the donor's appearance to the recipient is negligible, whereas recipients will be identified the majority of the time. External facial features were important in the facial recognition of recipients, evidenced by a significant rise in Appearance Persistence from 19% in the absence of external features to 66% when those features were present (P<0.01).
    This study may be helpful in the informed consent process of prospective recipients. It is also beneficial for the education of donors' families and is expected to positively affect their decision to consent to facial tissue donation. Copyright (c) 2009 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.
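The comparison of Appearance Transfer and Appearance Persistence rates above rests on a two-sample t-test. A minimal sketch with invented per-observer rates (loosely echoing the reported 2.6% and 66% averages, not the study's actual data) might look like:

```python
import statistics as st

def welch_t(x, y):
    """Welch's two-sample t statistic (unequal variances assumed)."""
    vx, vy = st.variance(x), st.variance(y)
    return (st.mean(x) - st.mean(y)) / (vx / len(x) + vy / len(y)) ** 0.5

# Hypothetical per-observer recognition proportions for each condition.
transfer = [0.02, 0.03, 0.01, 0.04, 0.03]       # donor identified in composite
persistence = [0.60, 0.70, 0.65, 0.68, 0.67]    # recipient identified in composite

# A strongly negative t statistic: transfer rates are far below persistence.
t_stat = welch_t(transfer, persistence)
```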

  1. Preferential responses in amygdala and insula during presentation of facial contempt and disgust.

    PubMed

    Sambataro, Fabio; Dimalta, Savino; Di Giorgio, Annabella; Taurisano, Paolo; Blasi, Giuseppe; Scarabino, Tommaso; Giannatempo, Giuseppe; Nardini, Marcello; Bertolino, Alessandro

    2006-10-01

    Some authors consider contempt to be a basic emotion, while others consider it a variant of disgust. The neural correlates of contempt have not so far been specifically contrasted with those of disgust. Using functional magnetic resonance imaging (fMRI), we investigated the neural networks involved in the processing of facial contempt and disgust in 24 healthy subjects. Facial recognition of contempt was lower than that of disgust and of neutral faces. The imaging data indicated significant activity in the amygdala and in the globus pallidus and putamen during processing of contemptuous faces. The bilateral insula and caudate nuclei, as well as the left and right inferior frontal gyri, were engaged during processing of disgusted faces. Moreover, direct comparisons of contempt vs. disgust yielded significantly different activations in the amygdala. On the other hand, disgusted faces elicited greater activation than contemptuous faces in the right insula and caudate. Our findings suggest preferential involvement of different neural substrates in the processing of facial emotional expressions of contempt and disgust.

  2. Do you remember your sad face? The roles of negative cognitive style and sad mood.

    PubMed

    Caudek, Corrado; Monni, Alessandra

    2013-01-01

    We studied the effects of negative cognitive style, sad mood, and facial affect on the self-face advantage in a sample of 66 healthy individuals (mean age 26.5 years, range 19-47 years). The sample was subdivided into four groups according to inferential style and responsivity to sad mood induction. Following a sad mood induction, we examined the effect on working memory of an incidental association between facial affect, facial identity, and head-pose orientation. Overall, head-pose recognition was more accurate for the self-face than for nonself face (self-face advantage, SFA). However, participants high in negative cognitive style who experienced higher levels of sadness displayed a stronger SFA for sad expressions than happy expressions. The remaining participants displayed an opposite bias (a stronger SFA for happy expressions than sad expressions), or no bias. These findings highlight the importance of trait-vulnerability status in the working memory biases related to emotional facial expressions.

  3. Face Processing and Facial Emotion Recognition in Adults with Down Syndrome

    ERIC Educational Resources Information Center

    Barisnikov, Koviljka; Hippolyte, Loyse; Van der Linden, Martial

    2008-01-01

    Face processing and facial expression recognition was investigated in 17 adults with Down syndrome, and results were compared with those of a child control group matched for receptive vocabulary. On the tasks involving faces without emotional content, the adults with Down syndrome performed significantly worse than did the controls. However, their…

  4. Developmental Change in Infant Categorization: The Perception of Correlations among Facial Features.

    ERIC Educational Resources Information Center

    Younger, Barbara

    1992-01-01

    Tested 7 and 10 month olds for perception of correlations among facial features. After habituation to faces displaying a pattern of correlation, 10 month olds generalized to a novel face that preserved the pattern of correlation but showed increased attention to a novel face that violated the pattern. (BC)

  5. Sex Differences in Facial Scanning: Similarities and Dissimilarities between Infants and Adults

    ERIC Educational Resources Information Center

    Rennels, Jennifer L.; Cummings, Andrew J.

    2013-01-01

    When face processing studies find sex differences, male infants appear better at face recognition than female infants, whereas female adults appear better at face recognition than male adults. Both female infants and adults, however, discriminate emotional expressions better than males. To investigate if sex and age differences in facial scanning…

  6. Surface facial modelling and allometry in relation to sexual dimorphism.

    PubMed

    Velemínská, J; Bigoni, L; Krajíček, V; Borský, J; Šmahelová, D; Cagáňová, V; Peterka, M

    2012-04-01

    Sexual dimorphism is responsible for a substantial part of human facial variability, the study of which is essential for many scientific fields ranging from evolution to specialised biomedical topics. Our aim was to analyse the relationship between the size variability and shape variability of sexually dimorphic facial traits in a young adult Central European population and to construct average surface models of adult males and females. The method of geometric morphometrics allowed not only the identification of dimorphic traits, but also the evaluation of static allometry and the visualisation of sexual facial differences. Facial variability in the studied sample was characterised by a strong relationship between facial size and the shape of sexually dimorphic traits. A large facial size was associated with facial elongation, and vice versa. Regarding dimorphic shape traits, a wide, vaulted and high forehead in combination with a narrow and gracile lower face was typical for females. Variability in dimorphic shape traits was smaller in females than in males. For female classification, dimorphic shape traits are more important, while for males the stronger association is with facial size. Males generally had a closer inter-orbital distance, a deeper position of the eyes in relation to the facial plane, a larger and wider straight nose and nostrils, and a more massive lower face. Using pseudo-colour maps to provide a detailed schematic representation of the geometrical differences between the sexes, we attempted to clarify the reasons underlying the development of such differences. Copyright © 2012 Elsevier GmbH. All rights reserved.

  7. The face of fear and anger: Facial width-to-height ratio biases recognition of angry and fearful expressions.

    PubMed

    Deska, Jason C; Lloyd, E Paige; Hugenberg, Kurt

    2018-04-01

    The ability to rapidly and accurately decode facial expressions is adaptive for human sociality. Although judgments of emotion are primarily determined by musculature, static face structure can also impact emotion judgments. The current work investigates how facial width-to-height ratio (fWHR), a stable feature of all faces, influences perceivers' judgments of expressive displays of anger and fear (Studies 1a, 1b, & 2), and anger and happiness (Study 3). Across 4 studies, we provide evidence consistent with the hypothesis that perceivers more readily see anger on faces with high fWHR compared with those with low fWHR, which instead facilitates the recognition of fear and happiness. This bias emerges when participants are led to believe that targets displaying otherwise neutral faces are attempting to mask an emotion (Studies 1a & 1b), and is evident when faces display an emotion (Studies 2 & 3). Together, these studies suggest that target facial width-to-height ratio biases ascriptions of emotion with consequences for emotion recognition speed and accuracy. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  8. Multiple Mechanisms in the Perception of Face Gender: Effect of Sex-Irrelevant Features

    ERIC Educational Resources Information Center

    Komori, Masashi; Kawamura, Satoru; Ishihara, Shigekazu

    2011-01-01

    Effects of sex-relevant and sex-irrelevant facial features on the evaluation of facial gender were investigated. Participants rated masculinity of 48 male facial photographs and femininity of 48 female facial photographs. Eighty feature points were measured on each of the facial photographs. Using a generalized Procrustes analysis, facial shapes…

  9. Characterization and recognition of mixed emotional expressions in thermal face image

    NASA Astrophysics Data System (ADS)

    Saha, Priya; Bhattacharjee, Debotosh; De, Barin K.; Nasipuri, Mita

    2016-05-01

    Facial expressions in infrared imaging have been introduced to solve the problem of illumination, which is an integral constituent of visual imagery. The paper investigates facial skin temperature distribution across the mixed thermal facial expressions of our created face database, in which six are basic expressions and the remaining 12 are mixtures of those basic expressions. Temperature analysis has been performed on three facial regions of interest (ROIs): periorbital, supraorbital and mouth. Temperature variability of the ROIs in different expressions has been measured using statistical parameters. The temperature variation measurements in the ROIs of a particular expression form a vector, which is later used in the recognition of mixed facial expressions. Investigations show that facial features in mixed facial expressions can be characterized by positive-emotion-induced facial features and negative-emotion-induced facial features. The supraorbital region is useful for differentiating basic expressions from mixed expressions. Analysis and interpretation of mixed expressions have been conducted with the help of box-and-whisker plots. A facial region expressing a mixture of two expressions generally shows less temperature induction than the corresponding region during the constituent basic expressions.
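Turning each ROI's temperature samples into a per-expression feature vector, as described above, can be sketched like this. The temperature values and the choice of descriptors (mean and standard deviation per ROI) are illustrative assumptions, not the authors' exact parameters.

```python
import statistics as st

# Hypothetical skin-temperature samples (degrees C) for the three ROIs
# of a single expression frame.
rois = {
    "periorbital": [34.1, 34.4, 34.3, 34.6],
    "supraorbital": [33.2, 33.5, 33.1, 33.4],
    "mouth": [35.0, 35.3, 35.2, 35.1],
}

# Concatenate one (mean, stdev) descriptor pair per ROI into the vector
# that a classifier would use to recognize the expression.
feature_vector = []
for name in ("periorbital", "supraorbital", "mouth"):
    samples = rois[name]
    feature_vector += [st.mean(samples), st.stdev(samples)]
```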

  10. Memory for faces: the effect of facial appearance and the context in which the face is encountered.

    PubMed

    Mattarozzi, Katia; Todorov, Alexander; Codispoti, Maurizio

    2015-03-01

    We investigated the effects of appearance of emotionally neutral faces and the context in which the faces are encountered on incidental face memory. To approximate real-life situations as closely as possible, faces were embedded in a newspaper article, with a headline that specified an action performed by the person pictured. We found that facial appearance affected memory so that faces perceived as trustworthy or untrustworthy were remembered better than neutral ones. Furthermore, the memory of untrustworthy faces was slightly better than that of trustworthy faces. The emotional context of encoding affected the details of face memory. Faces encountered in a neutral context were more likely to be recognized as only familiar. In contrast, emotionally relevant contexts of encoding, whether pleasant or unpleasant, increased the likelihood of remembering semantic and even episodic details associated with faces. These findings suggest that facial appearance (i.e., perceived trustworthiness) affects face memory. Moreover, the findings support prior evidence that the engagement of emotion processing during memory encoding increases the likelihood that events are not only recognized but also remembered.

  11. The association between the psychological status and the severity of facial deformity in orthognathic patients.

    PubMed

    Kovalenko, Aleksandra; Slabkovskaya, Anna; Drobysheva, Nailya; Persin, Leonid; Drobyshev, Alexey; Maddalone, Marcello

    2012-05-01

    To evaluate the psychological status and correlate it with the severity of facial deformities of patients with skeletal malocclusions before orthognathic treatment. A total of 96 patients aged 15 to 47 with skeletal malocclusions were examined before orthognathic treatment was provided. A photographic analysis was carried out to determine the severity of facial deformity according to the Facial Aesthetic Index (FAI). All patients were divided into three groups according to the FAI score: light (0 to 9), moderate (10 to 19), and severe (>19) facial deformities. Thirty subjects aged 17 to 39 with normal occlusion and attractive, harmonious faces without previous orthodontic and/or surgical history were taken as controls. Psychological testing of controls and patients in the study group was performed before orthognathic treatment was provided. Psychological testing showed no statistically significant differences between the groups with light and moderate facial deformity and subjects in the control group. Significant differences were encountered in patients with severe facial deformities compared with controls in a series of personality traits, including introversion, neuroticism, trait anxiety, dependency, unsociability, and leadership. Orthognathic patients with different degrees of facial deformity have different psychological profiles. Patients with light and moderate facial deformity have no significant psychological problems. Patients with severe facial deformity show a significantly higher prevalence of emotional instability, introversion, anxiety, and unsociability. Such psychological profiles make orthognathic patients with severe facial deformity prone to psychological distress, depression, and adverse psychological reactions.
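The FAI grouping described above is a simple thresholding step; a sketch assuming the reported boundaries (0-9 light, 10-19 moderate, >19 severe):

```python
def fai_group(score):
    """Classify facial-deformity severity from a Facial Aesthetic Index score,
    using the cut-offs reported in the study: 0-9 light, 10-19 moderate,
    >19 severe."""
    if score <= 9:
        return "light"
    if score <= 19:
        return "moderate"
    return "severe"
```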

  12. Attention and memory bias to facial emotions underlying negative symptoms of schizophrenia.

    PubMed

    Jang, Seon-Kyeong; Park, Seon-Cheol; Lee, Seung-Hwan; Cho, Yang Seok; Choi, Kee-Hong

    2016-01-01

    This study assessed bias in selective attention to facial emotions in the negative symptoms of schizophrenia and its influence on subsequent memory for facial emotions. Thirty people with schizophrenia who had high or low levels of negative symptoms (n = 15 each) and 21 healthy controls completed a visual probe detection task investigating selective attention bias (happy, sad, and angry faces randomly presented for 50, 500, or 1000 ms). A yes/no incidental facial memory task was then completed. Attention bias scores and recognition errors were calculated. Those with high negative symptoms exhibited reduced attention to emotional faces relative to neutral faces; those with low negative symptoms showed the opposite pattern when faces were presented for 500 ms, regardless of valence. Compared to healthy controls, those with high negative symptoms made more errors for happy faces in the memory task. Reduced attention to emotional faces in the probe detection task was significantly associated with less pleasure and motivation and more recognition errors for happy faces in the schizophrenia group only. Attention bias away from emotional information relatively early in the attentional process, and the associated diminished positive memory, may relate to pathological mechanisms underlying negative symptoms.

  13. Facial Mimicry and Emotion Consistency: Influences of Memory and Context.

    PubMed

    Kirkham, Alexander J; Hayes, Amy E; Pawling, Ralph; Tipper, Steven P

    2015-01-01

    This study investigates whether mimicry of facial emotions is a stable response or can instead be modulated and influenced by memory of the context in which the emotion was initially observed, and therefore the meaning of the expression. The study manipulated emotion consistency implicitly, where a face expressing smiles or frowns was irrelevant and to be ignored while participants categorised target scenes. Some face identities always expressed emotions consistent with the scene (e.g., smiling with a positive scene), whilst others were always inconsistent (e.g., frowning with a positive scene). During this implicit learning of face identity and emotion consistency there was evidence for encoding of face-scene emotion consistency, with slower RTs, a reduction in trust, and inhibited facial EMG for faces expressing incompatible emotions. However, in a later task where the faces were subsequently viewed expressing emotions with no additional context, there was no evidence for retrieval of prior emotion consistency, as mimicry of emotion was similar for consistent and inconsistent individuals. We conclude that facial mimicry can be influenced by current emotion context, but there is little evidence of learning, as subsequent mimicry of emotionally consistent and inconsistent faces is similar.

  14. Recognizing Dynamic Faces in Malaysian Chinese Participants.

    PubMed

    Tan, Chrystalle B Y; Sheppard, Elizabeth; Stephen, Ian D

    2016-03-01

    The high performance levels seen in face recognition studies do not seem to be replicable in real-life situations, possibly because of the artificial nature of laboratory studies. Recognizing faces in natural social situations may be a more challenging task, as it involves constant examination of dynamic facial motions that may alter the facial structure vital to the recognition of unfamiliar faces. Because of these incongruences in recognition performance, the current study developed stimuli that closely represent natural social situations, to yield results that more accurately reflect observers' performance in real-life settings. Naturalistic stimuli of African, East Asian, and Western Caucasian actors introducing themselves were presented to investigate Malaysian Chinese participants' recognition sensitivity and looking strategies when performing a face recognition task. When perceiving dynamic facial stimuli, participants fixated most on the nose, followed by the mouth and then the eyes. Focusing on the nose may have enabled participants to gain a more holistic view of actors' facial and head movements, which proved to be beneficial in recognizing identities. Participants recognized all three races of faces equally well. The current results, which differed from those of a previous static face recognition study, may be a more accurate reflection of observers' recognition abilities and looking strategies. © The Author(s) 2015.

  15. Apparent height and body mass index influence perceived leadership ability in three-dimensional faces.

    PubMed

    Re, Daniel E; Dzhelyova, Milena; Holzleitner, Iris J; Tigue, Cara C; Feinberg, David R; Perrett, David I

    2012-01-01

    Facial appearance has a well-documented effect on perceived leadership ability. Face judgments of leadership ability predict political election outcomes across the world, and similar judgments of business CEOs predict company profits. Body height is also associated with leadership ability, with taller people attaining positions of leadership more than their shorter counterparts in both politics and in the corporate world. Previous studies have found some face characteristics that are associated with leadership judgments, however there have been no studies with three-dimensional faces. We assessed which facial characteristics drive leadership judgments in three-dimensional faces. We found a perceptual relationship between height and leadership ability. We also found that facial maturity correlated with leadership judgments, and that faces of people with an unhealthily high body mass index received lower leadership ratings. We conclude that face attributes associated with body size and maturity alter leadership perception, and may influence real-world democratic leadership selection.

  16. Spontaneous Gender Categorization in Masking and Priming Studies: Key for Distinguishing Jane from John Doe but Not Madonna from Sinatra

    PubMed Central

    Habibi, Ruth; Khurana, Beena

    2012-01-01

    Facial recognition is key to social interaction; with unfamiliar faces, however, only generic information, in the form of facial stereotypes such as gender and age, is available. Is generic information therefore more prominent in unfamiliar versus familiar face processing? To address this question we tapped into two relatively disparate stages of face processing. At the early stages of encoding, we employed perceptual masking to reveal that only the perception of unfamiliar face targets is affected by the gender of the facial masks. At the semantic end, using a priming paradigm, we found that while to-be-ignored unfamiliar faces prime lexical decisions to gender-congruent stereotypic words, familiar faces do not. Our findings indicate that gender is a more salient dimension in unfamiliar relative to familiar face processing, both at early perceptual stages and at later semantic stages of person construal. PMID:22389697

  17. Interactions between facial emotion and identity in face processing: evidence based on redundancy gains.

    PubMed

    Yankouskaya, Alla; Booth, David A; Humphreys, Glyn

    2012-11-01

    Interactions between the processing of emotion expression and form-based information from faces (facial identity) were investigated using the redundant-target paradigm, in which we specifically tested whether identity and emotional expression are integrated in a superadditive manner (Miller, Cognitive Psychology 14:247-279, 1982). In Experiments 1 and 2, participants performed emotion and face identity judgments on faces with sad or angry emotional expressions. Responses to redundant targets were faster than responses to either single target when a universal emotion was conveyed, and performance violated the predictions from a model assuming independent processing of emotion and face identity. Experiment 4 showed that these effects were not modulated by varying interstimulus and nontarget contingencies, and Experiment 5 demonstrated that the redundancy gains were eliminated when faces were inverted. Taken together, these results suggest that the identification of emotion and facial identity interact in face processing.

  18. Enhancing facial aesthetics with muscle retraining exercises-a review.

    PubMed

    D'souza, Raina; Kini, Ashwini; D'souza, Henston; Shetty, Nitin; Shetty, Omkar

    2014-08-01

    Facial attractiveness plays a key role in social interaction. A smile is not merely a single category of facial behaviour but also the expression of frank joy, produced on the face by the combined contraction of the muscles involved. When a patient visits the dental clinic for aesthetic reasons, the dentist considers not only the chief complaint but also the overall harmony of the face. This article describes muscle retraining exercises to achieve control over facial movements and improve facial appearance, which may be considered following any type of dental rehabilitation. Muscle conditioning, training and strengthening through daily exercises will help to counterbalance the effects of aging.

  19. Minimal Nasolabial Incision Technique for Nasolabial Fold Modification in Patients With Facial Paralysis.

    PubMed

    Faris, Callum; Heiser, Alyssa; Jowett, Nate; Hadlock, Tessa

    2018-03-01

    Creation of symmetrical nasolabial folds (NLFs) is important in the management of the paralyzed face. Established techniques use a linear incision in the NLF, and technical refinements now allow the linear incision to be omitted. This retrospective case series was conducted in a tertiary care setting from February 2, 2017, to June 7, 2017. Participants were all patients (N = 21) with peripheral facial paralysis who underwent NLF modification that used the minimal nasolabial incision technique at the Massachusetts Eye and Ear Infirmary Facial Nerve Center from February 1, 2015, through August 31, 2016. Patient-reported outcome measures using the validated, quality-of-life Facial Clinimetric Evaluation (FaCE) Scale; clinician-reported facial function outcomes using a validated electronic clinician-graded facial paralysis assessment (eFACE); layperson assessment of the overall aesthetic outcome of the NLF; and expert-clinician scar assessment of the NLF. Of the 21 patients who underwent NLF modification that used the minimal nasolabial incision technique, 9 patients (43%) were female and 12 (57%) were male. The mean age was 41 (range, 9-90) years; 17 patients (81%) were adults (≥18 years) and 4 (19%) were children (<18 years). Overall, significant improvements were observed after NLF modification in all outcome measures as graded by both clinicians and patients. The mean (SD) scores for total eFACE were 60.7 (14.9) before the operation and 77.2 (8.9) after the operation (mean difference, 16.5 [95% CI, 8.5-24.2]; P < .001). The mean (SD) static eFACE scores were 61.4 (20.6) before the operation and 82.7 (12.4) after the operation (mean difference, 21.3 [95% CI, 10.7-31.9]; P < .001). The mean (SD) FaCE quality-of-life scores were 51.3 (20.1) before the operation and 70.3 (12.6) after the operation (mean difference, 19.0 [95% CI, 6.5-31.6]; P  =  .001). 
The layperson assessment of the overall aesthetic outcome of the NLF modification was higher for the group who had the minimal nasolabial incision than it was for the group who had a historical nasolabial incision (mean [SD], 68.17 [13.59] vs 56.28 [13.60]; mean difference, 11.89 [95% CI, 3.81-19.97]; P < .001). Similarly, the expert-clinician scar assessment of the NLF modification was higher for the group who had the minimal nasolabial incision than it was for the group who had a historical nasolabial incision (3.78 [0.91] vs 2.98 [0.81]; mean difference, 0.80 [95% CI, 0.29-1.32]; P = .007). The minimal nasolabial incision technique for NLF modification is effective in rehabilitating the NLF in facial paralysis without adding a long linear scar to the central midface. Level of evidence: 4.

  20. The influence of context on distinct facial expressions of disgust.

    PubMed

    Reschke, Peter J; Walle, Eric A; Knothe, Jennifer M; Lopez, Lukas D

    2018-06-11

    Face perception is susceptible to contextual influence and perceived physical similarities between emotion cues. However, studies often use structurally homogeneous facial expressions, making it difficult to explore how within-emotion variability in facial configuration affects emotion perception. This study examined the influence of context on the emotional perception of categorically identical, yet physically distinct, facial expressions of disgust. Participants categorized two perceptually distinct disgust facial expressions, "closed" (i.e., scrunched nose, closed mouth) and "open" (i.e., scrunched nose, open mouth, protruding tongue), that were embedded in contexts comprising emotion postures and scenes. Results demonstrated that the effect of nonfacial elements was significantly stronger for "open" disgust facial expressions than "closed" disgust facial expressions. These findings provide support that physical similarity within discrete categories of facial expressions is mutable and plays an important role in affective face perception. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  1. Multiple mechanisms in the perception of face gender: Effect of sex-irrelevant features.

    PubMed

    Komori, Masashi; Kawamura, Satoru; Ishihara, Shigekazu

    2011-06-01

    Effects of sex-relevant and sex-irrelevant facial features on the evaluation of facial gender were investigated. Participants rated masculinity of 48 male facial photographs and femininity of 48 female facial photographs. Eighty feature points were measured on each of the facial photographs. Using a generalized Procrustes analysis, facial shapes were converted into multidimensional vectors, with the average face as a starting point. Each vector was decomposed into a sex-relevant subvector and a sex-irrelevant subvector which were, respectively, parallel and orthogonal to the main male-female axis. Principal components analysis (PCA) was performed on the sex-irrelevant subvectors. One principal component was negatively correlated with both perceived masculinity and femininity, and another was correlated only with femininity, though both components were orthogonal to the male-female dimension (and thus by definition sex-irrelevant). These results indicate that evaluation of facial gender depends on sex-irrelevant as well as sex-relevant facial features.
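
    The vector decomposition described above can be sketched directly. This is an illustrative reconstruction, not the authors' code: a shape vector (face minus average face, as produced by a generalized Procrustes analysis) is split into a component parallel to an assumed male-female axis (sex-relevant) and its orthogonal remainder (sex-irrelevant), the part the study then submitted to PCA.

```python
import numpy as np

def decompose_shape(v, male_female_axis):
    """Split a face-shape vector v (face minus average face) into a
    sex-relevant subvector parallel to the male-female axis and a
    sex-irrelevant subvector orthogonal to it."""
    d = np.asarray(male_female_axis, dtype=float)
    d = d / np.linalg.norm(d)            # unit vector along the axis
    v = np.asarray(v, dtype=float)
    sex_relevant = np.dot(v, d) * d      # projection onto the axis
    sex_irrelevant = v - sex_relevant    # orthogonal remainder (fed to PCA)
    return sex_relevant, sex_irrelevant
```

    By construction the sex-irrelevant part has zero projection on the male-female axis, which is what makes the study's finding notable: components of that orthogonal subspace still predicted perceived masculinity and femininity.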

  2. Soft-tissue facial characteristics of attractive Chinese men compared to normal men

    PubMed Central

    Wu, Feng; Li, Junfang; He, Hong; Huang, Na; Tang, Youchao; Wang, Yuanqing

    2015-01-01

    Objective: To compare the facial characteristics of attractive Chinese men with those of reference men. Materials and Methods: The three-dimensional coordinates of 50 facial landmarks were collected in 40 healthy reference men and in 40 “attractive” men, soft tissue facial angles, distances, areas, and volumes were computed and compared using analysis of variance. Results: When compared with reference men, attractive men shared several similar facial characteristics: relatively large forehead, reduced mandible, and rounded face. They had a more acute soft tissue profile, an increased upper facial width and middle facial depth, larger mouth, and more voluminous lips than reference men. Conclusions: Attractive men had several facial characteristics suggesting babyness. Nonetheless, each group of men was characterized by a different development of these features. Esthetic reference values can be a useful tool for clinicians, but should always consider the characteristics of individual faces. PMID:26221357

  3. Emotion Unchained: Facial Expression Modulates Gaze Cueing under Cognitive Load.

    PubMed

    Pecchinenda, Anna; Petrucci, Manuel

    2016-01-01

    Direction of eye gaze cues spatial attention, and typically this cueing effect is not modulated by the expression of a face unless top-down processes are explicitly or implicitly involved. To investigate the role of cognitive control on gaze cueing by emotional faces, participants performed a gaze cueing task with happy, angry, or neutral faces under high (i.e., counting backward by 7) or low cognitive load (i.e., counting forward by 2). Results show that high cognitive load enhances gaze cueing effects for angry facial expressions. In addition, cognitive load reduces gaze cueing for neutral faces, whereas happy facial expressions and gaze affected object preferences regardless of load. This evidence clearly indicates a differential role of cognitive control in processing gaze direction and facial expression, suggesting that under typical conditions, when we shift attention based on social cues from another person, cognitive control processes are used to reduce interference from emotional information.

  4. Emotion Unchained: Facial Expression Modulates Gaze Cueing under Cognitive Load

    PubMed Central

    Pecchinenda, Anna; Petrucci, Manuel

    2016-01-01

    Direction of eye gaze cues spatial attention, and typically this cueing effect is not modulated by the expression of a face unless top-down processes are explicitly or implicitly involved. To investigate the role of cognitive control on gaze cueing by emotional faces, participants performed a gaze cueing task with happy, angry, or neutral faces under high (i.e., counting backward by 7) or low cognitive load (i.e., counting forward by 2). Results show that high cognitive load enhances gaze cueing effects for angry facial expressions. In addition, cognitive load reduces gaze cueing for neutral faces, whereas happy facial expressions and gaze affected object preferences regardless of load. This evidence clearly indicates a differential role of cognitive control in processing gaze direction and facial expression, suggesting that under typical conditions, when we shift attention based on social cues from another person, cognitive control processes are used to reduce interference from emotional information. PMID:27959925

  5. A Face Attention Technique for a Robot Able to Interpret Facial Expressions

    NASA Astrophysics Data System (ADS)

    Simplício, Carlos; Prado, José; Dias, Jorge

    Automatic recognition of facial expressions using vision is an important subject in human-robot interaction. This work proposes a human-face focus-of-attention technique and a facial-expression classifier (a Dynamic Bayesian Network) to be incorporated in an autonomous mobile agent whose hardware is composed of a robotic platform and a robotic head. The focus-of-attention technique is based on the symmetry presented by human faces. Using the output of this module, the autonomous agent always keeps the human face targeted frontally. To accomplish this, the robotic platform performs an arc centered at the human, and the robotic head, when necessary, moves in synchrony. In the proposed probabilistic classifier, information is propagated from the previous instant, in a lower level of the network, to the current instant. Moreover, both positive and negative evidence are used to recognize facial expressions.

  6. Faces in Context: A Review and Systematization of Contextual Influences on Affective Face Processing

    PubMed Central

    Wieser, Matthias J.; Brosch, Tobias

    2012-01-01

    Facial expressions are of eminent importance for social interaction as they convey information about other individuals’ emotions and social intentions. According to the predominant “basic emotion” approach, the perception of emotion in faces is based on the rapid, automatic categorization of prototypical, universal expressions. Consequently, the perception of facial expressions has typically been investigated using isolated, de-contextualized, static pictures of facial expressions that maximize the distinction between categories. However, in everyday life, an individual’s face is not perceived in isolation, but almost always appears within a situational context, which may arise from other people, the physical environment surrounding the face, as well as multichannel information from the sender. Furthermore, situational context may be provided by the perceiver, including already present social information gained from affective learning and implicit processing biases such as race bias. Thus, the perception of facial expressions is presumably always influenced by contextual variables. In this comprehensive review, we aim at (1) systematizing the contextual variables that may influence the perception of facial expressions and (2) summarizing experimental paradigms and findings that have been used to investigate these influences. The studies reviewed here demonstrate that perception and neural processing of facial expressions are substantially modified by contextual information, including verbal, visual, and auditory information presented together with the face as well as knowledge or processing biases already present in the observer. These findings further challenge the assumption of automatic, hardwired categorical emotion extraction mechanisms predicted by basic emotion theories. 
Taking into account a recent model on face processing, we discuss where and when these different contextual influences may take place, thus outlining potential avenues in future research. PMID:23130011

  7. Faces in context: a review and systematization of contextual influences on affective face processing.

    PubMed

    Wieser, Matthias J; Brosch, Tobias

    2012-01-01

    Facial expressions are of eminent importance for social interaction as they convey information about other individuals' emotions and social intentions. According to the predominant "basic emotion" approach, the perception of emotion in faces is based on the rapid, automatic categorization of prototypical, universal expressions. Consequently, the perception of facial expressions has typically been investigated using isolated, de-contextualized, static pictures of facial expressions that maximize the distinction between categories. However, in everyday life, an individual's face is not perceived in isolation, but almost always appears within a situational context, which may arise from other people, the physical environment surrounding the face, as well as multichannel information from the sender. Furthermore, situational context may be provided by the perceiver, including already present social information gained from affective learning and implicit processing biases such as race bias. Thus, the perception of facial expressions is presumably always influenced by contextual variables. In this comprehensive review, we aim at (1) systematizing the contextual variables that may influence the perception of facial expressions and (2) summarizing experimental paradigms and findings that have been used to investigate these influences. The studies reviewed here demonstrate that perception and neural processing of facial expressions are substantially modified by contextual information, including verbal, visual, and auditory information presented together with the face as well as knowledge or processing biases already present in the observer. These findings further challenge the assumption of automatic, hardwired categorical emotion extraction mechanisms predicted by basic emotion theories. Taking into account a recent model on face processing, we discuss where and when these different contextual influences may take place, thus outlining potential avenues in future research.

  8. Characterization of small-to-medium head-and-face dimensions for developing respirator fit test panels and evaluating fit of filtering facepiece respirators with different faceseal design

    PubMed Central

    Lin, Yi-Chun

    2017-01-01

    A respirator fit test panel (RFTP) with a facial size distribution representative of intended users is essential to the evaluation of respirator fit for new models of respirators. In this study an anthropometric survey was conducted among youths representing respirator users in mid-Taiwan to characterize head-and-face dimensions key to RFTPs for application to small-to-medium facial features. The participants were fit-tested for three N95 masks of different facepiece design, and the results were compared to the facial size distributions specified in the RFTPs of bivariate and principal component analysis (PCA) design developed in this study, to assess the influence of facial characteristics on respirator fit in relation to facepiece design. Nineteen dimensions were measured for 206 participants. In fit testing, the qualitative fit test (QLFT) procedures prescribed by the U.S. Occupational Safety and Health Administration were adopted. As the results show, the bizygomatic breadths of the male and female participants were 90.1 and 90.8% of their counterparts reported for U.S. youths (P < 0.001), respectively. Compared to the bivariate distribution, the PCA design better accommodated variation in facial contours among different respirator user groups or populations, with the RFTPs reported in this study and from the literature consistently covering over 92% of the participants. Overall, the facial fit of filtering facepieces increased with increasing facial dimensions. The total percentages of tests in which the final completed maneuver was “Moving head up-and-down”, “Talking” or “Bending over” in bivariate and PCA RFTPs were 13.3–61.9% and 22.9–52.8%, respectively. The respirators with a three-panel flat fold structured into the facepiece provided greater fit, particularly when the users moved their heads. 
When the facial size distribution in a bivariate RFTP did not sufficiently represent petite facial size, the fit testing was inclined to overestimate the general fit, thus for small-to-medium facial dimensions a distinct RFTP should be considered. PMID:29176833
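
    As a rough sketch of the PCA-style panel design discussed above (illustrative only; the study's actual dimension set and cell boundaries are not reproduced here), one might project standardized head-and-face dimensions onto the first two principal components and bin the scores into panel cells.

```python
import numpy as np

def pca_panel_cells(dims, n_bins=3):
    """Assign each subject to a fit-test-panel cell from the first two
    principal components of standardized head-and-face dimensions.
    Illustrative only: real panels fix cell boundaries from reference
    population data rather than from the tested sample itself.
    dims: array of shape (n_subjects, n_dimensions)."""
    X = np.asarray(dims, dtype=float)
    X = (X - X.mean(axis=0)) / X.std(axis=0)          # standardize dimensions
    _, _, Vt = np.linalg.svd(X, full_matrices=False)  # PCA via SVD
    scores = X @ Vt[:2].T                             # PC1 and PC2 scores
    # Split each PC axis into n_bins equal-probability bins.
    edges = [np.quantile(scores[:, k], np.linspace(0, 1, n_bins + 1)[1:-1])
             for k in range(2)]
    return [(int(np.searchsorted(edges[0], s1)),
             int(np.searchsorted(edges[1], s2)))
            for s1, s2 in scores]
```

    A PCA grid of this kind captures overall face size and shape jointly, which is why it can accommodate contour variation that a bivariate (face length by face width) panel misses.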

  9. Characterization of small-to-medium head-and-face dimensions for developing respirator fit test panels and evaluating fit of filtering facepiece respirators with different faceseal design.

    PubMed

    Lin, Yi-Chun; Chen, Chen-Peng

    2017-01-01

    A respirator fit test panel (RFTP) with a facial size distribution representative of intended users is essential to the evaluation of respirator fit for new models of respirators. In this study an anthropometric survey was conducted among youths representing respirator users in mid-Taiwan to characterize head-and-face dimensions key to RFTPs for application to small-to-medium facial features. The participants were fit-tested for three N95 masks of different facepiece design, and the results were compared to the facial size distributions specified in the RFTPs of bivariate and principal component analysis (PCA) design developed in this study, to assess the influence of facial characteristics on respirator fit in relation to facepiece design. Nineteen dimensions were measured for 206 participants. In fit testing, the qualitative fit test (QLFT) procedures prescribed by the U.S. Occupational Safety and Health Administration were adopted. As the results show, the bizygomatic breadths of the male and female participants were 90.1 and 90.8% of their counterparts reported for U.S. youths (P < 0.001), respectively. Compared to the bivariate distribution, the PCA design better accommodated variation in facial contours among different respirator user groups or populations, with the RFTPs reported in this study and from the literature consistently covering over 92% of the participants. Overall, the facial fit of filtering facepieces increased with increasing facial dimensions. The total percentages of tests in which the final completed maneuver was "Moving head up-and-down", "Talking" or "Bending over" in bivariate and PCA RFTPs were 13.3-61.9% and 22.9-52.8%, respectively. The respirators with a three-panel flat fold structured into the facepiece provided greater fit, particularly when the users moved their heads. 
When the facial size distribution in a bivariate RFTP did not sufficiently represent petite facial size, the fit testing was inclined to overestimate the general fit, thus for small-to-medium facial dimensions a distinct RFTP should be considered.

  10. Optimal Geometrical Set for Automated Marker Placement to Virtualized Real-Time Facial Emotions

    PubMed Central

    Maruthapillai, Vasanthan; Murugappan, Murugappan

    2016-01-01

    In recent years, real-time face recognition has been a major topic of interest in developing intelligent human-machine interaction systems. Over the past several decades, researchers have proposed different algorithms for facial expression recognition, but there has been little focus on detection in real-time scenarios. The present work proposes a new algorithmic method of automated marker placement used to classify six facial expressions: happiness, sadness, anger, fear, disgust, and surprise. Emotional facial expressions were captured using a webcam, while the proposed algorithm placed a set of eight virtual markers on each subject’s face. Facial feature extraction methods, including marker distance (distance from each marker to the center of the face) and change in marker distance (change in distance between the original and new marker positions), were used to extract three statistical features (mean, variance, and root mean square) from the real-time video sequence. The initial position of each marker was subjected to the optical flow algorithm for marker tracking with each emotional facial expression. Finally, the extracted statistical features were mapped into corresponding emotional facial expressions using two simple non-linear classifiers, K-nearest neighbor and probabilistic neural network. The results indicate that the proposed automated marker placement algorithm effectively placed eight virtual markers on each subject’s face and gave a maximum mean emotion classification rate of 96.94% using the probabilistic neural network. PMID:26859884
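
    The feature-extraction step described above can be sketched as follows. This is an illustrative reconstruction, not the published implementation: marker-to-center distances are summarized by mean, variance, and root mean square, and a K-nearest-neighbor lookup (here K = 1, for brevity) maps the feature vector to an emotion label. All names are hypothetical.

```python
import numpy as np

def marker_features(marker_xy, face_center):
    """Summarize marker-to-center distances over a video sequence by
    mean, variance, and root mean square (RMS).
    marker_xy: array of shape (frames, n_markers, 2)."""
    d = np.linalg.norm(np.asarray(marker_xy, dtype=float)
                       - np.asarray(face_center, dtype=float), axis=-1)
    return np.array([d.mean(), d.var(), np.sqrt((d ** 2).mean())])

def nearest_emotion(features, train_features, train_labels):
    """K-nearest-neighbor classification with K = 1: return the label
    of the closest training feature vector (Euclidean distance)."""
    dists = np.linalg.norm(np.asarray(train_features, dtype=float)
                           - np.asarray(features, dtype=float), axis=1)
    return train_labels[int(np.argmin(dists))]
```

    In the study, the marker positions feeding such features would come from optical-flow tracking of the eight virtual markers frame by frame.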

  11. The role of the posed smile in overall facial esthetics.

    PubMed

    Havens, David C; McNamara, James A; Sigler, Lauren M; Baccetti, Tiziano

    2010-03-01

    To evaluate the role of the posed smile in overall facial esthetics, as determined by laypersons and orthodontists. Twenty orthodontists and 20 lay evaluators were asked to perform six Q-sorts on different photographs of 48 white female subjects. The six Q-sorts consisted of three different photographs for each of two time points (pre- and posttreatment), as follows: (1) smile-only, (2) face without the smile, and (3) face with the smile. The evaluators determined a split-line for attractive and unattractive images at the end of each Q-sort. The proportions of attractive patients were compared across Q-sorts using a Wilcoxon signed-rank test for paired data. The evaluators also ranked nine facial/dental characteristics at the completion of the six Q-sorts. Evaluators found the pretreatment face without the smile to be significantly more attractive than the face with the smile or the smile-only photographs. Dissimilar results were seen posttreatment; there was not a significant difference between the three posttreatment photographs. The two panels agreed on the proportion of "attractive" subjects but differed on the attractiveness level of each individual subject. The presence of a malocclusion has a negative impact on facial attractiveness. Orthodontic correction of a malocclusion affects overall facial esthetics positively. Laypeople and orthodontists agree on what is attractive. Overall facial harmony is the most important characteristic used in deciding facial attractiveness.

  12. [Motor nerves of the face. Surgical and radiologic anatomy of facial paralysis and their surgical repair].

    PubMed

    Vacher, C; Cyna-Gorse, F

    2015-10-01

    Motor innervation of the face depends on the facial nerve for facial mobility, on the mandibular nerve (the third branch of the trigeminal nerve), which supplies the masticatory muscles, and on the hypoglossal nerve for the tongue. In case of facial paralysis, the most common palliative surgical techniques are the lengthening temporalis myoplasty (the temporalis is innervated by the mandibular nerve) and hypoglossal-facial anastomosis. The aim of this work is to describe the surgical anatomy of these three nerves and the radiologic anatomy of the facial nerve inside the temporal bone. After exiting the temporal bone, the facial nerve penetrates the parotid gland, where it forms a plexus. Four branches of the facial nerve leave the parotid gland, called the temporal, zygomatic, buccal and marginal branches, which innervate the cutaneous muscles of the face. The mandibular nerve gives three branches to the temporalis muscle: the anterior, intermediate and posterior deep temporal nerves, which penetrate the deep aspect of the temporalis muscle in front of the infratemporal line. The hypoglossal nerve is purely the motor nerve of the tongue. The ansa cervicalis, which arises from the superficial cervical plexus and joins the hypoglossal nerve in the submandibular area, supplies motor innervation to the infrahyoid muscles and the geniohyoid muscle. Copyright © 2015 Elsevier Masson SAS. All rights reserved.

  13. Discrimination and categorization of emotional facial expressions and faces in Parkinson's disease.

    PubMed

    Alonso-Recio, Laura; Martín, Pilar; Rubio, Sandra; Serrano, Juan M

    2014-09-01

    Our objective was to compare the ability to discriminate and categorize emotional facial expressions (EFEs) and facial identity characteristics (age and/or gender) in a group of 53 individuals with Parkinson's disease (PD) and another group of 53 healthy subjects. On the one hand, by means of discrimination and identification tasks, we compared two stages in the visual recognition process that could be selectively affected in individuals with PD. On the other hand, facial expression versus gender and age comparison permits us to contrast whether the emotional or non-emotional content influences the configural perception of faces. In Experiment I, we did not find differences between groups, either with facial expression or age, in discrimination tasks. Conversely, in Experiment II, we found differences between the groups, but only in the EFE identification task. Taken together, our results indicate that configural perception of faces does not seem to be globally impaired in PD. However, this ability is selectively altered when the categorization of emotional faces is required. A deeper assessment of the PD group indicated that decline in facial expression categorization is more evident in a subgroup of patients with higher global impairment (motor and cognitive). Taken together, these results suggest that the problems found in facial expression recognition may be associated with the progressive neuronal loss in frontostriatal and mesolimbic circuits, which characterizes PD. © 2013 The British Psychological Society.

  14. Curvilinear relationship between phonological working memory load and social-emotional modulation

    PubMed Central

    Mano, Quintino R.; Brown, Gregory G.; Bolden, Khalima; Aupperle, Robin; Sullivan, Sarah; Paulus, Martin P.; Stein, Murray B.

    2015-01-01

    Accumulating evidence suggests that working memory load is an important factor for the interplay between cognitive and facial-affective processing. However, it is unclear how distraction caused by perception of faces interacts with load-related performance. We developed a modified version of the delayed match-to-sample task wherein task-irrelevant facial distracters were presented early in the rehearsal of pseudoword memoranda that varied incrementally in load size (1-syllable, 2-syllables, or 3-syllables). Facial distracters displayed happy, sad, or neutral expressions in Experiment 1 (N=60) and happy, fearful, or neutral expressions in Experiment 2 (N=29). Facial distracters significantly disrupted task performance in the intermediate load condition (2-syllable) but not in the low or high load conditions (1- and 3-syllables, respectively), an interaction replicated and generalised in Experiment 2. All facial distracters disrupted working memory in the intermediate load condition irrespective of valence, suggesting a primary and general effect of distraction caused by faces. However, sad and fearful faces tended to be less disruptive than happy faces, suggesting a secondary and specific valence effect. Working memory appears to be most vulnerable to social-emotional information at intermediate loads. At low loads, spare capacity is capable of accommodating the combinatorial load (1-syllable plus facial distracter), whereas high loads maximised capacity and deprived facial stimuli from occupying working memory slots to cause disruption. PMID:22928750

  15. Optimal Geometrical Set for Automated Marker Placement to Virtualized Real-Time Facial Emotions.

    PubMed

    Maruthapillai, Vasanthan; Murugappan, Murugappan

    2016-01-01

    In recent years, real-time face recognition has been a major topic of interest in developing intelligent human-machine interaction systems. Over the past several decades, researchers have proposed different algorithms for facial expression recognition, but there has been little focus on detection in real-time scenarios. The present work proposes a new algorithmic method of automated marker placement used to classify six facial expressions: happiness, sadness, anger, fear, disgust, and surprise. Emotional facial expressions were captured using a webcam, while the proposed algorithm placed a set of eight virtual markers on each subject's face. Facial feature extraction methods, including marker distance (distance from each marker to the center of the face) and change in marker distance (change in distance between the original and new marker positions), were used to extract three statistical features (mean, variance, and root mean square) from the real-time video sequence. The initial position of each marker was subjected to the optical flow algorithm for marker tracking with each emotional facial expression. Finally, the extracted statistical features were mapped into corresponding emotional facial expressions using two simple non-linear classifiers, K-nearest neighbor and probabilistic neural network. The results indicate that the proposed automated marker placement algorithm effectively placed eight virtual markers on each subject's face and gave a maximum mean emotion classification rate of 96.94% using the probabilistic neural network.

  16. Parametric modulation of neural activity by emotion in youth with bipolar disorder, youth with severe mood dysregulation, and healthy volunteers.

    PubMed

    Thomas, Laura A; Brotman, Melissa A; Muhrer, Eli J; Rosen, Brooke H; Bones, Brian L; Reynolds, Richard C; Deveney, Christen M; Pine, Daniel S; Leibenluft, Ellen

    2012-12-01

    CONTEXT Youth with bipolar disorder (BD) and those with severe, nonepisodic irritability (severe mood dysregulation [SMD]) exhibit amygdala dysfunction during facial emotion processing. However, studies have not compared such patients with each other and with comparison individuals in neural responsiveness to subtle changes in facial emotion; the ability to process such changes is important for social cognition. To evaluate this, we used a novel, parametrically designed faces paradigm. OBJECTIVE To compare activation in the amygdala and across the brain in BD patients, SMD patients, and healthy volunteers (HVs). DESIGN Case-control study. SETTING Government research institute. PARTICIPANTS Fifty-seven youths (19 BD, 15 SMD, and 23 HVs). MAIN OUTCOME MEASURE Blood oxygenation level-dependent data. Neutral faces were morphed with angry and happy faces in 25% intervals; static facial stimuli appeared for 3000 milliseconds. Participants performed hostility or nonemotional facial feature (ie, nose width) ratings. The slope of blood oxygenation level-dependent activity was calculated across neutral-to-angry and neutral-to-happy facial stimuli. RESULTS In HVs, but not BD or SMD participants, there was a positive association between left amygdala activity and anger on the face. In the neutral-to-happy whole-brain analysis, BD and SMD participants modulated parietal, temporal, and medial-frontal areas differently from each other and from that in HVs; with increasing facial happiness, SMD patients demonstrated increased, and BD patients decreased, activity in the parietal, temporal, and frontal regions. CONCLUSIONS Youth with BD or SMD differ from HVs in modulation of amygdala activity in response to small changes in facial anger displays. In contrast, individuals with BD or SMD show distinct perturbations in regions mediating attention and face processing in association with changes in the emotional intensity of facial happiness displays. 
These findings demonstrate similarities and differences in the neural correlates of facial emotion processing in BD and SMD, suggesting that these distinct clinical presentations may reflect differing dysfunctions along a mood disorders spectrum.

  17. Italian normative data and validation of two neuropsychological tests of face recognition: Benton Facial Recognition Test and Cambridge Face Memory Test.

    PubMed

    Albonico, Andrea; Malaspina, Manuela; Daini, Roberta

    2017-09-01

    The Benton Facial Recognition Test (BFRT) and Cambridge Face Memory Test (CFMT) are two of the most common tests used to assess face discrimination and recognition abilities and to identify individuals with prosopagnosia. However, recent studies have highlighted that participant-stimulus ethnicity match, as well as gender, has to be taken into account in interpreting results from these tests. Here, in order to obtain more appropriate normative data for an Italian sample, the CFMT and BFRT were administered to a large cohort of young adults. We found that scores from the BFRT are not affected by participants' gender and are only slightly affected by participant-stimulus ethnicity match, whereas both these factors seem to influence the scores of the CFMT. Moreover, the inclusion of a sample of individuals with suspected face recognition impairment allowed us to show that the use of more appropriate normative data can increase the BFRT's efficacy in identifying individuals with face discrimination impairments; by contrast, the efficacy of the CFMT in classifying individuals with a face recognition deficit was confirmed. Finally, our data show that the lack of an inversion effect (the difference between the total scores of the upright and inverted versions of the CFMT) could be used as a further index to assess congenital prosopagnosia. Overall, our results confirm the importance of having norms derived from controls with a similar experience of faces as the "potential" prosopagnosic individuals when assessing face recognition abilities.

  18. Institutional review board-based recommendations for medical institutions pursuing protocol approval for facial transplantation.

    PubMed

    Siemionow, Maria Z; Gordon, Chad R

    2010-10-01

    Preliminary outcomes from the nine face transplants performed since 2005 have been encouraging and have therefore led to a rise in the number of medical centers interested in establishing face transplant programs worldwide. However, until now, very little literature has been published providing surgeons with the necessary insight on how to (1) prepare a protocol for institutional review board approval and (2) establish a face transplant program. The authors' face transplant team's experience with the institutional review board at the Cleveland Clinic, beginning in 2002, was critically reviewed in a detailed, retrospective manner. The purpose was to identify and define the criteria necessary for both the institutional review board approval process and face transplant program establishment. Unprecedented efforts within the authors' plastic surgery department, begun in 2002, led to the world's first institutional review board approval for face transplantation in 2004. As a result, 4 years later, the authors' face transplant team performed the nation's first successful near-total face and maxilla transplant. Every surgical department hoping to establish a face transplant program must realize that this endeavor requires tremendous financial and long-term commitments from its medical institution. These transplants should be performed only within university-based medical centers capable of orchestrating a specialized, talented, multidisciplinary team. More importantly, facial composite tissue allotransplantation possesses an unmatched level of complexity and therefore requires most centers to prepare a carefully detailed protocol using these institutional review board-based guidelines.

  19. Face identity recognition in autism spectrum disorders: a review of behavioral studies.

    PubMed

    Weigelt, Sarah; Koldewyn, Kami; Kanwisher, Nancy

    2012-03-01

    Face recognition--the ability to recognize a person from their facial appearance--is essential for normal social interaction. Face recognition deficits have been implicated in the most common disorder of social interaction: autism. Here we ask: is face identity recognition in fact impaired in people with autism? Reviewing behavioral studies we find no strong evidence for a qualitative difference in how facial identity is processed between those with and without autism: markers of typical face identity recognition, such as the face inversion effect, seem to be present in people with autism. However, quantitatively--i.e., how well facial identity is remembered or discriminated--people with autism perform worse than typical individuals. This impairment is particularly clear in face memory and in face perception tasks in which a delay intervenes between sample and test, and less so in tasks with no memory demand. Although some evidence suggests that this deficit may be specific to faces, further evidence on this question is necessary. Copyright © 2011 Elsevier Ltd. All rights reserved.

  20. Face recognition using facial expression: a novel approach

    NASA Astrophysics Data System (ADS)

    Singh, Deepak Kumar; Gupta, Priya; Tiwary, U. S.

    2008-04-01

    Facial expressions are undoubtedly the most effective form of nonverbal communication. The face has always been the equation of a person's identity; it draws the demarcation line between identity and extinction, and each line on the face adds an attribute to the identity. These lines become prominent when we experience an emotion, and they do not change completely with age. In this paper we propose a new technique for face recognition which focuses on the facial expressions of the subject to identify his face. This is an area that has so far received little attention. According to earlier research, it is difficult to alter one's natural expression, so our technique will be beneficial for identifying occluded or intentionally disguised faces. The results of the experiments conducted show that this technique can give a new direction to the field of face recognition. This technique will provide a strong base for the area of face recognition and can be used as a core method for critical defense and security related issues.

  1. Research on facial expression simulation based on depth image

    NASA Astrophysics Data System (ADS)

    Ding, Sha-sha; Duan, Jin; Zhao, Yi-wu; Xiao, Bo; Wang, Hao

    2017-11-01

    Nowadays, facial expression simulation is widely used in film and television special effects, human-computer interaction, and many other fields. Facial expressions are captured with a Kinect camera. An active appearance model (AAM) based on statistical information is employed to detect and track faces, and a 2D regression algorithm is applied to align the feature points. Facial feature points are detected automatically, while the feature points of the 3D cartoon model are marked manually. The aligned feature points are mapped using keyframe techniques. To improve the animation, non-feature points are interpolated based on empirical models, and the mapping and interpolation are constrained by Bézier curves. In this way, the feature points on the cartoon face model can be driven as the facial expression varies, achieving real-time cartoon facial expression simulation. The experimental results show that the proposed method accurately simulates facial expressions. Finally, our method is compared with a previous method; actual data show that our method greatly improves implementation efficiency.
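
The record mentions that the mapping and interpolation of non-feature points are constrained by Bézier curves. As an illustration of that step alone, the sketch below evaluates a cubic Bézier curve to smooth a 2-D feature-point trajectory; the control points and the curve degree are illustrative assumptions, not taken from the paper:

```python
def cubic_bezier(p0, p1, p2, p3, t):
    """Point on a cubic Bézier curve at parameter t in [0, 1].

    Each control point is a tuple of equal dimension (here 2-D image coordinates).
    """
    s = 1.0 - t
    return tuple(s**3 * a + 3 * s**2 * t * b + 3 * s * t**2 * c + t**3 * d
                 for a, b, c, d in zip(p0, p1, p2, p3))

# Smooth an interpolated point's path between two keyframes
# (hypothetical control points in pixel coordinates):
trajectory = [cubic_bezier((0, 0), (10, 25), (30, 25), (40, 0), i / 10)
              for i in range(11)]
```

The curve starts at p0 and ends at p3, with p1 and p2 shaping the in-between path, which is what makes it a convenient constraint for in-betweening animation frames.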

  2. Face-ism and Objectification in Mainstream and LGBT Magazines.

    PubMed

    Cheek, Nathan N

    2016-01-01

    In visual media, men are often shown with more facial prominence than women, a manifestation of sexism that has been labeled face-ism. The present research extended the study of facial prominence and gender representation in media to include magazines aimed at lesbian, gay, bisexual, and transgender (LGBT) audiences for the first time, and also examined whether overall gender differences in facial prominence can still be found in mainstream magazines. Face-ism emerged in Newsweek, but not in Time, The Advocate, or Out. Although there were no overall differences in facial prominence between mainstream and LGBT magazines, there were differences in the facial prominence of men and women among the four magazines included in the present study. These results suggest that face-ism is still a problem, but that it may be restricted to certain magazines. Furthermore, future research may benefit from considering individual magazine titles rather than broader categories of magazines, given that the present study found few similarities between different magazines in the same media category--indeed, Out and Time were more similar to each other than they were to the other magazine in their respective categories.

  3. Intranasal oxytocin increases facial expressivity, but not ratings of trustworthiness, in patients with schizophrenia and healthy controls.

    PubMed

    Woolley, J D; Chuang, B; Fussell, C; Scherer, S; Biagianti, B; Fulford, D; Mathalon, D H; Vinogradov, S

    2017-05-01

    Blunted facial affect is a common negative symptom of schizophrenia. Additionally, assessing the trustworthiness of faces is a social cognitive ability that is impaired in schizophrenia. Currently available pharmacological agents are ineffective at improving either of these symptoms, despite their clinical significance. The hypothalamic neuropeptide oxytocin has multiple prosocial effects when administered intranasally to healthy individuals and shows promise in decreasing negative symptoms and enhancing social cognition in schizophrenia. Although two small studies have investigated oxytocin's effects on ratings of facial trustworthiness in schizophrenia, its effects on facial expressivity have not been investigated in any population. We investigated the effects of oxytocin on facial emotional expressivity while participants performed a facial trustworthiness rating task in 33 individuals with schizophrenia and 35 age-matched healthy controls using a double-blind, placebo-controlled, cross-over design. Participants rated the trustworthiness of presented faces interspersed with emotionally evocative photographs while being video-recorded. Participants' facial expressivity in these videos was quantified by blind raters using a well-validated manualized approach (i.e. the Facial Expression Coding System; FACES). While oxytocin administration did not affect ratings of facial trustworthiness, it significantly increased facial expressivity in individuals with schizophrenia (Z = -2.33, p = 0.02) and at trend level in healthy controls (Z = -1.87, p = 0.06). These results demonstrate that oxytocin administration can increase facial expressivity in response to emotional stimuli and suggest that oxytocin may have the potential to serve as a treatment for blunted facial affect in schizophrenia.

  4. Individual differences and the effect of face configuration information in the McGurk effect.

    PubMed

    Ujiie, Yuta; Asai, Tomohisa; Wakabayashi, Akio

    2018-04-01

    The McGurk effect, which denotes the influence of visual information on audiovisual speech perception, is less frequently observed in individuals with autism spectrum disorder (ASD) compared to those without it; the reason for this remains unclear. Several studies have suggested that facial configuration context might play a role in this difference. More specifically, people with ASD show a local processing bias for faces-that is, they process global face information to a lesser extent. This study examined the role of facial configuration context in the McGurk effect in 46 healthy students. Adopting an analogue approach using the Autism-Spectrum Quotient (AQ), we sought to determine whether this facial configuration context is crucial to previously observed reductions in the McGurk effect in people with ASD. Lip-reading and audiovisual syllable identification tasks were assessed via presentation of upright normal, inverted normal, upright Thatcher-type, and inverted Thatcher-type faces. When the Thatcher-type face was presented, perceivers were found to be sensitive to the misoriented facial characteristics, causing them to perceive a weaker McGurk effect than when the normal face was presented (this is known as the McThatcher effect). Additionally, the McGurk effect was weaker in individuals with high AQ scores than in those with low AQ scores in the incongruent audiovisual condition, regardless of their ability to read lips or process facial configuration contexts. Our findings, therefore, do not support the assumption that individuals with ASD show a weaker McGurk effect due to a difficulty in processing facial configuration context.

  5. Hierarchical Encoding of Social Cues in Primate Inferior Temporal Cortex

    PubMed Central

    Morin, Elyse L.; Hadj-Bouziane, Fadila; Stokes, Mark; Ungerleider, Leslie G.; Bell, Andrew H.

    2015-01-01

    Faces convey information about identity and emotional state, both of which are important for our social interactions. Models of face processing propose that changeable versus invariant aspects of a face, specifically facial expression/gaze direction versus facial identity, are coded by distinct neural pathways, and yet neurophysiological data supporting this separation are incomplete. We recorded activity from neurons along the inferior bank of the superior temporal sulcus (STS), while monkeys viewed images of conspecific faces and non-face control stimuli. Eight monkey identities were used, each presented with 3 different facial expressions (neutral, fear grin, and threat). All facial expressions were displayed with both a direct and averted gaze. In the posterior STS, we found that about one-quarter of face-responsive neurons are sensitive to social cues, the majority of which are sensitive to only one of these cues. In contrast, in anterior STS, not only did the proportion of neurons sensitive to social cues increase, but so too did the proportion of neurons sensitive to conjunctions of identity with either gaze direction or expression. These data support a convergence of signals related to faces as one moves anteriorly along the inferior bank of the STS, which forms a fundamental part of the face-processing network. PMID:24836688

  6. [Treatment goals in FACE philosophy].

    PubMed

    Martin, Domingo; Maté, Amaia; Zabalegui, Paula; Valenzuela, Jaime

    2017-03-01

    The FACE philosophy is characterized by clearly defined treatment goals: facial esthetics, dental esthetics, periodontal health, functional occlusion, neuromuscular mechanism and joint function. The purpose is to establish ideal occlusion with good facial esthetics and an orthopedically stable joint position. The authors present all the concepts of the FACE philosophy and illustrate them through one case report. Taking into account all the FACE philosophy concepts increases diagnostic ability and improves the quality and stability of treatment outcomes. The goal of this philosophy is to harmonize the facial profile, tooth alignment, periodontium, functional occlusion, neuromuscular mechanism and joint function. The evaluation and treatment approach to vertical problems is unique to the philosophy. © EDP Sciences, SFODF, 2017.

  7. Colour detection thresholds in faces and colour patches.

    PubMed

    Tan, Kok Wei; Stephen, Ian D

    2013-01-01

    Human facial skin colour reflects individuals' underlying health (Stephen et al 2011 Evolution & Human Behavior 32 216-227); and enhanced facial skin CIELab b* (yellowness), a* (redness), and L* (lightness) are perceived as healthy (also Stephen et al 2009a International Journal of Primatology 30 845-857). Here, we examine Malaysian Chinese participants' detection thresholds for CIELab L* (lightness), a* (redness), and b* (yellowness) colour changes in Asian, African, and Caucasian faces and skin-coloured patches. Twelve face photos and three skin-coloured patches were transformed to produce four pairs of images of each individual face and colour patch with different amounts of red, yellow, or lightness, from very subtle (deltaE = 1.2) to quite large differences (deltaE = 9.6). Participants were asked to decide which of sequentially displayed, paired same-face images or colour patches were lighter, redder, or yellower. Changes in facial redness, followed by changes in yellowness, were more easily discriminated than changes in luminance. However, visual sensitivity was not greater for redness and yellowness in nonface stimuli, suggesting that red facial skin colour has special salience. Participants were also significantly better at recognizing colour differences in own-race (Asian) and Caucasian faces than in African faces, suggesting the existence of a cross-race effect in discriminating facial colours. Humans' colour vision may have been selected for skin colour signalling (Changizi et al 2006 Biology Letters 2 217-221), enabling individuals to perceive subtle changes in skin colour, reflecting health and emotional status.
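
The colour-difference magnitudes quoted above (deltaE = 1.2 to 9.6) are distances in CIELab space. The simplest such metric, CIE76, is the Euclidean distance over the (L*, a*, b*) triple; whether the study used CIE76 or a later variant such as CIEDE2000 is not stated in the abstract:

```python
import math

def delta_e_cie76(lab1, lab2):
    """CIE76 colour difference: Euclidean distance between two CIELab triples."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

# A shift of +3 in a* (redness) and +4 in b* (yellowness) at equal lightness
# (illustrative skin-tone values):
print(delta_e_cie76((65.0, 18.0, 20.0), (65.0, 21.0, 24.0)))  # 5.0
```

A deltaE around 1 is near the conventional just-noticeable difference for colour patches, which is why the study's smallest step (1.2) counts as "very subtle".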

  8. Children's Perceptions of and Beliefs about Facial Maturity

    ERIC Educational Resources Information Center

    Thomas, Gross F.

    2004-01-01

    The author studied children's and young adults' perceptions of facial age and beliefs about the sociability, cognitive ability, and physical fitness of adult faces. From pairs of photographs of adult faces, participants (4-6 years old, 8-10 years old, 13-16 years old, and 19-23 years old) selected the one face that appeared younger, older, better…

  9. Correlated Preferences for Male Facial Masculinity and Partner Traits in Gay and Bisexual Men in China.

    PubMed

    Zheng, Lijun; Zheng, Yong

    2015-07-01

    Previous studies have documented the correlation between preferences for male facial masculinity and perceived masculinity: women who rate their male partner as more masculine tend to prefer more masculine faces. Men's self-rated masculinity predicts their female partner's preference for masculinity. This study examined the association between other trait preferences and preference for male facial masculinity among 556 gay and bisexual men across multiple cities in China. Participants were asked to choose the three most important traits in a romantic partner from a list of 23 traits. Each participant was then asked to choose a preferred face in each of 10 pairs of male faces presented sequentially, with each pair consisting of a masculinized and feminized version of the same base face. The results indicated that preferences for health and status-related traits were correlated with preferences for male facial masculinity in gay and bisexual men in China; individuals who were more health- or status-oriented in their preferences for a romantic partner preferred more masculine male faces than individuals with lower levels of these orientations. The findings have implications for the correlated preferences for facial masculinity and health- and status-related traits and may be related to perceived health and dominance/aggression of masculine faces based on a sample of non-Western gay and bisexual men.

  10. Appearance is a function of the face.

    PubMed

    Borah, Gregory L; Rankin, Marlene K

    2010-03-01

    Increasingly, third-party insurers deny coverage to patients with posttraumatic and congenital facial deformities because these are not seen as "functional." Recent facial transplants have demonstrated that severely deformed patients are willing to undergo potentially life-threatening surgery in search of a normal physiognomy. Scant quantitative research exists that objectively documents appearance as a primary "function" of the face. This study was designed to establish a population-based definition of the functions of the human face, rank importance of the face among various anatomical areas, and determine the risk value the average person places on a normal appearance. Voluntary adult subjects (n = 210) in three states aged 18 to 75 years were recruited using a quota sampling technique. Subjects completed study questionnaires of demography and bias using the Gamble Chance of Death Questionnaire and the Rosenberg Self-Esteem Scale. The face ranked as the most important anatomical area for functional reconstruction. Appearance was the fifth most important function of the face, after breathing, sight, speech, and eating. Normal facial appearance was rated as very important for one to be a functioning member of American society (p = 0.01) by 49 percent. One in seven subjects (13 percent) would accept a 30 to 45 percent risk of death to obtain a "normal" face. Normal appearance is a primary function of the face, based on a large, culturally diverse population sample across the lifespan. Normal appearance ranks above smell and expression as a function. Restoration of facial appearance is ranked the most important anatomical area for repair. Normal facial appearance is very important for one to be a functional member of American society.

  11. Pick on someone your own size: the detection of threatening facial expressions posed by both child and adult models.

    PubMed

    LoBue, Vanessa; Matthews, Kaleigh; Harvey, Teresa; Thrasher, Cat

    2014-02-01

    For decades, researchers have documented a bias for the rapid detection of angry faces in adult, child, and even infant participants. However, despite the age of the participant, the facial stimuli used in all of these experiments were schematic drawings or photographs of adult faces. The current research is the first to examine the detection of both child and adult emotional facial expressions. In our study, 3- to 5-year-old children and adults detected angry, sad, and happy faces among neutral distracters. The depicted faces were of adults or of other children. As in previous work, children detected angry faces more quickly than happy and neutral faces overall, and they tended to detect the faces of other children more quickly than the faces of adults. Adults also detected angry faces more quickly than happy and sad faces even when the faces depicted child models. The results are discussed in terms of theoretical implications for the development of a bias for threat in detection. Copyright © 2013 Elsevier Inc. All rights reserved.

  12. Adaptation effects to attractiveness of face photographs and art portraits are domain-specific

    PubMed Central

    Hayn-Leichsenring, Gregor U.; Kloth, Nadine; Schweinberger, Stefan R.; Redies, Christoph

    2013-01-01

    We studied the neural coding of facial attractiveness by investigating effects of adaptation to attractive and unattractive human faces on the perceived attractiveness of veridical human face pictures (Experiment 1) and art portraits (Experiment 2). Experiment 1 revealed a clear pattern of contrastive aftereffects. Relative to a pre-adaptation baseline, the perceived attractiveness of faces was increased after adaptation to unattractive faces, and was decreased after adaptation to attractive faces. Experiment 2 revealed similar aftereffects when art portraits rather than face photographs were used as adaptors and test stimuli, suggesting that effects of adaptation to attractiveness are not restricted to facial photographs. Additionally, we found similar aftereffects in art portraits for beauty, another aesthetic feature that, unlike attractiveness, relates to the properties of the image (rather than to the face displayed). Importantly, Experiment 3 showed that aftereffects were abolished when adaptors were art portraits and face photographs were test stimuli. These results suggest that adaptation to facial attractiveness elicits aftereffects in the perception of subsequently presented faces, for both face photographs and art portraits, and that these effects do not cross image domains. PMID:24349690

  13. Training facial expression production in children on the autism spectrum.

    PubMed

    Gordon, Iris; Pierce, Matthew D; Bartlett, Marian S; Tanaka, James W

    2014-10-01

    Children with autism spectrum disorder (ASD) show deficits in their ability to produce facial expressions. In this study, a group of children with ASD and IQ-matched, typically developing (TD) children were trained to produce "happy" and "angry" expressions with the FaceMaze computer game. FaceMaze uses an automated computer recognition system that analyzes the child's facial expression in real time. Before and after playing the Angry and Happy versions of FaceMaze, children posed "happy" and "angry" expressions. Naïve raters judged the post-FaceMaze "happy" and "angry" expressions of the ASD group as higher in quality than their pre-FaceMaze productions. Moreover, the post-game expressions of the ASD group were rated as equal in quality as the expressions of the TD group.

  14. Forming impressions: effects of facial expression and gender stereotypes.

    PubMed

    Hack, Tay

    2014-04-01

    The present study of 138 participants explored how facial expressions and gender stereotypes influence impressions. It was predicted that images of smiling women would be evaluated more favorably on traits reflecting warmth, and that images of non-smiling men would be evaluated more favorably on traits reflecting competence. As predicted, smiling female faces were rated as warmer; however, contrary to prediction, perceived competence of male faces was not affected by facial expression. Participants' endorsement of female stereotypes was a significant predictor of evaluations of female faces; those who endorsed traditional female stereotypes more strongly reported the most positive impressions of female faces displaying a smiling expression. However, a similar effect was not found for images of men; endorsement of traditional male stereotypes did not predict participants' impressions of male faces.

  15. A 3-dimensional anthropometric evaluation of facial morphology among Chinese and Greek population.

    PubMed

    Liu, Yun; Kau, Chung How; Pan, Feng; Zhou, Hong; Zhang, Qiang; Zacharopoulos, Georgios Vasileiou

    2013-07-01

    The use of 3-dimensional (3D) facial imaging has taken on greater importance as orthodontists use the soft tissue paradigm in the evaluation of skeletal disproportion. Studies have shown that facial morphology differs across populations. To date, no anthropometric evaluations have compared Chinese and Greek faces. The aim of this study was to compare the facial morphologies of Greeks and Chinese using 3D facial anthropometric landmarks. Three-dimensional facial images were acquired via a commercially available stereophotogrammetric camera capture system. The 3dMD face system captured 245 subjects from 2 population groups (Chinese [n = 72] and Greek [n = 173]), and each population was categorized into male and female groups for evaluation. All subjects in the group were between 18 and 30 years old and had no apparent facial anomalies. Twenty-five anthropometric landmarks were identified on the 3D face of each subject. Soft tissue nasion was set as the "zeroed" reference landmark. Twenty landmark distances were constructed and evaluated within 3 dimensions of space. Six angles, 4 proportions, and 1 construct were also calculated. Student t test was used to analyze each data set obtained within each subgroup. Distinct facial differences were noted between the subgroups evaluated. When comparing the sexes across the 2 populations (eg, male Greeks and male Chinese), significant differences were noted in more than 80% of the landmark distances calculated. One hundred percent of the angular measurements were significantly different, and the Chinese were broader in width-to-height facial proportions. In evaluating the lips relative to the esthetic line, the Chinese population had more protrusive lips. There are differences in the facial morphologies of subjects obtained from a Chinese population versus those of a Greek population.

  16. Adaptive weighted local textural features for illumination, expression, and occlusion invariant face recognition

    NASA Astrophysics Data System (ADS)

    Cui, Chen; Asari, Vijayan K.

    2014-03-01

    Biometric features such as fingerprints, iris patterns, and face features help to identify people and restrict access to secure areas by performing advanced pattern analysis and matching. Face recognition is one of the most promising biometric methodologies for human identification in a non-cooperative security environment. However, the recognition results obtained by face recognition systems are affected by several variations that may happen to the patterns in an unrestricted environment. As a result, several algorithms have been developed for extracting different facial features for face recognition. Due to the various possible challenges of data captured at different lighting conditions, viewing angles, facial expressions, and partial occlusions in natural environmental conditions, automatic facial recognition still remains a difficult issue that needs to be resolved. In this paper, we propose a novel approach to tackling some of these issues by analyzing the local textural descriptions for facial feature representation. The textural information is extracted by an enhanced local binary pattern (ELBP) description of all the local regions of the face. The relationship of each pixel with respect to its neighborhood is extracted and employed to calculate the new representation. ELBP reconstructs a much better textural feature extraction vector from an original gray level image in different lighting conditions. The dimensionality of the texture image is reduced by principal component analysis performed on each local face region. Each low-dimensional vector representing a local region is then weighted based on the significance of the sub-region. The weight of each sub-region is determined by the local variance estimate of the respective region, which represents the significance of the region. The final facial textural feature vector is obtained by concatenating the reduced-dimensional weighted sets of all the modules (sub-regions) of the face image.
Experiments conducted on various popular face databases show promising performance of the proposed algorithm under varying lighting, expression, and partial occlusion conditions. Four databases were used for testing the performance of the proposed system: Yale Face database, Extended Yale Face database B, Japanese Female Facial Expression database, and CMU AMP Facial Expression database. The experimental results on all four databases show the effectiveness of the proposed system. Also, the computation cost is lower because of the simplified calculation steps. Research work is progressing to investigate the effectiveness of the proposed face recognition method under pose-varying conditions as well. It is envisaged that a multilane approach of trained frameworks at different pose bins and an appropriate voting strategy would lead to a good recognition rate in such situations.
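
The pipeline described in this record (per-region local binary patterns, dimensionality reduction, variance-based weighting, concatenation) can be sketched with the basic 8-neighbour LBP operator. The paper's enhanced operator (ELBP) and the PCA step are not reproduced here, and the variance weight below is a simplified stand-in for the sub-region significance weight:

```python
def lbp_code(img, r, c):
    """Basic 8-neighbour local binary pattern code for interior pixel (r, c).

    A bit is set where the neighbour's intensity is >= the centre's,
    reading the neighbourhood clockwise from the top-left corner.
    """
    centre = img[r][c]
    neighbours = [img[r-1][c-1], img[r-1][c], img[r-1][c+1],
                  img[r][c+1], img[r+1][c+1], img[r+1][c],
                  img[r+1][c-1], img[r][c-1]]
    code = 0
    for bit, value in enumerate(neighbours):
        if value >= centre:
            code |= 1 << bit
    return code

def region_descriptor(region):
    """256-bin LBP histogram of one face sub-region, scaled by the region's
    intensity variance (a simplified stand-in for the paper's significance
    weight; the ELBP operator and the PCA reduction are omitted)."""
    rows, cols = len(region), len(region[0])
    hist = [0] * 256
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            hist[lbp_code(region, r, c)] += 1
    n = rows * cols
    mean = sum(sum(row) for row in region) / n
    var = sum((v - mean) ** 2 for row in region for v in row) / n
    return [var * h for h in hist]

# The final feature vector would concatenate the descriptors of all sub-regions:
# feature = sum((region_descriptor(sub) for sub in subregions), [])
```

Flat, featureless sub-regions get near-zero variance and therefore contribute little to the concatenated vector, which mirrors the significance weighting the paper describes.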

  17. Inferior alveolar nerve cutting; legal liability versus desired patient outcomes.

    PubMed

    Kim, Soung Min; Lee, Jong Ho

    2017-10-01

    Mandibular angle reduction or reduction genioplasty is a routine, well-known facial contouring surgery that reduces the width of the lower face, resulting in an oval-shaped face. During the intraoral resection of the mandibular angle or chin using an oscillating saw, unexpected peripheral nerve damage, including inferior alveolar nerve (IAN) damage, can occur. This study analyzed cases of IAN damage during facial contouring surgery and asked what the basic standard of care in these litigation-involved cases should be. We retrospectively reviewed a total of 28 patients with IAN damage after mandibular contouring from August 2008 to July 2015. Most of the patients did not have an antipathy toward the medical staff, because they wanted their faces to be ovoid shaped. We summarized three representative cases according to each patient's perceptions and the different operation procedures, with the approval of the Institutional Review Board of Seoul National University. Most of the patients did not want to receive any further operations, not because of fear of an operation but because of the changes in their facial appearance; their reluctance may reflect a desire for a better outcome and a wish to avoid litigation related to unsolicited patient complaints. This article analyzed representative cases of IAN cutting that occurred during mandibular contouring esthetic surgery and evaluated a questionnaire on the standard of care for the desired patient outcomes and the specialized surgeon's position with respect to legal liability.

  18. Botulinum toxin and the facial feedback hypothesis: can looking better make you feel happier?

    PubMed

    Alam, Murad; Barrett, Karen C; Hodapp, Robert M; Arndt, Kenneth A

    2008-06-01

    The facial feedback hypothesis suggests that muscular manipulations which result in more positive facial expressions may lead to more positive emotional states in affected individuals. In this essay, we hypothesize that the injection of botulinum toxin for upper face dynamic creases might induce positive emotional states by reducing the ability to frown and create other negative facial expressions. The use of botulinum toxin to pharmacologically alter upper face muscular expressiveness may curtail the appearance of negative emotions, most notably anger, but also fear and sadness. This occurs via the relaxation of the corrugator supercilii and the procerus, which are responsible for brow furrowing, and to a lesser extent, because of the relaxation of the frontalis. Concurrently, botulinum toxin may dampen some positive expressions like the true smile, which requires activity of the orbicularis oculi, a muscle also relaxed after toxin injections. On balance, the evidence suggests that botulinum toxin injections for upper face dynamic creases may reduce negative facial expressions more than they reduce positive facial expressions. Based on the facial feedback hypothesis, this net change in facial expression may potentially have the secondary effect of reducing the internal experience of negative emotions, thus making patients feel less angry, sad, and fearful.

  19. Sad or Fearful? The Influence of Body Posture on Adults' and Children's Perception of Facial Displays of Emotion

    ERIC Educational Resources Information Center

    Mondloch, Catherine J.

    2012-01-01

    The current research investigated the influence of body posture on adults' and children's perception of facial displays of emotion. In each of two experiments, participants categorized facial expressions that were presented on a body posture that was congruent (e.g., a sad face on a body posing sadness) or incongruent (e.g., a sad face on a body…

  20. Development of facial aging simulation system combined with three-dimensional shape prediction from facial photographs

    NASA Astrophysics Data System (ADS)

    Nagata, Takeshi; Matsuzaki, Kazutoshi; Taniguchi, Kei; Ogawa, Yoshinori; Imaizumi, Kazuhiko

    2017-03-01

3D facial aging changes over periods of more than 10 years in the same individuals are being measured at the National Research Institute of Police Science. We performed machine learning using these measured data as training data and developed a system that converts an input 2D face image into a 3D face model and simulates aging. Here, we report on the processing and accuracy of our system.

  1. Outcome-dependent coactivation of lip and tongue primary somatosensory representation following hypoglossal-facial transfer after peripheral facial palsy.

    PubMed

    Rottler, Philipp; Schroeder, Henry W S; Lotze, Martin

    2014-02-01

A hypoglossal-facial transfer is a common surgical strategy for reanimating the face after persistent total hemifacial palsy. We were interested in how motor recovery is associated with cortical reorganization of lip and tongue representation in the primary sensorimotor cortex after the transfer. Therefore, we used functional magnetic resonance imaging (fMRI) in 13 patients who underwent a hypoglossal-facial transfer after unilateral peripheral facial palsy. To identify primary motor and somatosensory tongue and lip representation sites, we measured repetitive tongue and lip movements during fMRI. Electromyography (EMG) of the perioral muscles during tongue and lip movements and standardized evaluation of lip elevation served as outcome parameters. We found an association of cortical representation sites in the pre- and postcentral gyrus (decreased distance of lip and tongue representation) with symmetry of recovered lip movements (lip elevation) and coactivation of the lip during voluntary tongue movements (EMG activity of the lip during tongue movements). Overall, our study shows that hypoglossal-facial transfer resulted in an outcome-dependent cortical reorganization, with activation of the cortical tongue area for the restored movement of the lip. Copyright © 2012 Wiley Periodicals, Inc.

  2. Allergenic Ingredients in Facial Wet Wipes.

    PubMed

    Aschenbeck, Kelly A; Warshaw, Erin M

    Allergic contact dermatitis commonly occurs on the face. Facial cleansing wipes may be an underrecognized source of allergens. The aim of this study was to determine the frequency of potentially allergenic ingredients in facial wet wipes. Ingredient lists from name brand and generic facial wipes from 4 large retailers were analyzed. In the 178 facial wipes examined, a total of 485 ingredients were identified (average, 16.7 ingredients per wipe). Excluding botanicals, the top 15 potentially allergenic ingredients were glycerin (64.0%), fragrance (63.5%), phenoxyethanol (53.9%), citric acid (51.1%), disodium EDTA (44.4%), sorbic acid derivatives (39.3%), tocopherol derivatives (38.8%), polyethylene glycol derivatives (32.6%), glyceryl stearate (31.5%), sodium citrate (29.8%), glucosides (27.5%), cetearyl alcohol (25.8%), propylene glycol (25.3%), sodium benzoate (24.2%), and ceteareth-20 (23.6%)/parabens (23.6%). Of note, methylisothiazolinone (2.2%) and methylchloroisothiazolinone (1.1%) were uncommon. The top potential allergens of botanical origin included Aloe barbadensis (41.0%), chamomile extracts (27.0%), tea extracts (21.3%), Cucumis sativus (20.2%), and Hamamelis virginiana (10.7%). Many potential allergens are present in facial wet wipes, including fragrances, preservatives, botanicals, glucosides, and propylene glycol.
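The per-ingredient frequencies reported above are simple proportions over the 178 products; the tallying can be sketched in Python with hypothetical ingredient lists (the study's actual data are not reproduced here):

```python
from collections import Counter

# Hypothetical ingredient lists for three wipes; the study tallied
# 178 products in the same way.
wipes = [
    ["glycerin", "fragrance", "phenoxyethanol", "citric acid"],
    ["glycerin", "aloe barbadensis", "disodium EDTA"],
    ["fragrance", "glycerin", "sodium benzoate"],
]

# Count each ingredient at most once per product (hence the set()).
counts = Counter(ing for wipe in wipes for ing in set(wipe))
for ingredient, n in counts.most_common(2):
    print(f"{ingredient}: {n / len(wipes):.1%}")
# glycerin: 100.0%
# fragrance: 66.7%
```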

  3. Facial anthropometry of Hong Kong Chinese babies.

    PubMed

    Fok, T F; Hon, K L; So, H K; Wong, E; Ng, P C; Lee, A K Y; Chang, A

    2003-08-01

To provide a database of the craniofacial measurements of Chinese infants born in Hong Kong. Prospective cross-sectional study. A total of 2371 healthy singletons, born consecutively at the Prince of Wales Hospital and the Union Hospital from June 1998 to June 2000, were included in the study. The range of gestation was 33-42 weeks. Measurements included facial width (FW), facial height (FH), nasal length (NL), nasal width (NW), and length of the philtrum (PhilL). The facial, nasal, nasofacial and nasozygomatic indices were derived. The data show generally higher values for males in the parameters measured. The various indices remained remarkably constant and did not vary significantly between the two genders or with gestation. When compared with previously published data for white term babies, Chinese babies have a similar NW but a shorter philtrum. The human face appears to grow in a remarkably constant fashion as defined by the various indices of facial proportions. This study establishes the first set of gestational age-specific standards for these craniofacial parameters in Chinese newborns, potentially enabling early syndromal diagnosis. There are significant inter-racial differences in these craniofacial parameters.
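The abstract does not give the formulas behind the derived indices; in conventional anthropometry the total facial index is facial height over facial width (×100) and the nasal index is nasal width over nasal length (×100). A sketch under those assumed definitions, with made-up measurements:

```python
def facial_index(face_height_mm, face_width_mm):
    # Assumed conventional definition: 100 * facial height / facial width.
    return 100.0 * face_height_mm / face_width_mm

def nasal_index(nose_width_mm, nose_length_mm):
    # Assumed conventional definition: 100 * nasal width / nasal length.
    return 100.0 * nose_width_mm / nose_length_mm

# Hypothetical newborn measurements (mm).
print(round(facial_index(42.0, 70.0), 1))  # 60.0
print(round(nasal_index(22.0, 26.0), 1))   # 84.6
```

If the indices are indeed such ratios, their reported constancy across gestation follows directly from proportional growth of the underlying dimensions.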

  4. Trustworthy-Looking Face Meets Brown Eyes

    PubMed Central

    Kleisner, Karel; Priplatova, Lenka; Frost, Peter; Flegr, Jaroslav

    2013-01-01

    We tested whether eye color influences perception of trustworthiness. Facial photographs of 40 female and 40 male students were rated for perceived trustworthiness. Eye color had a significant effect, the brown-eyed faces being perceived as more trustworthy than the blue-eyed ones. Geometric morphometrics, however, revealed significant correlations between eye color and face shape. Thus, face shape likewise had a significant effect on perceived trustworthiness but only for male faces, the effect for female faces not being significant. To determine whether perception of trustworthiness was being influenced primarily by eye color or by face shape, we recolored the eyes on the same male facial photos and repeated the test procedure. Eye color now had no effect on perceived trustworthiness. We concluded that although the brown-eyed faces were perceived as more trustworthy than the blue-eyed ones, it was not brown eye color per se that caused the stronger perception of trustworthiness but rather the facial features associated with brown eyes. PMID:23326406

  5. A mixture of sparse coding models explaining properties of face neurons related to holistic and parts-based processing

    PubMed Central

    2017-01-01

    Experimental studies have revealed evidence of both parts-based and holistic representations of objects and faces in the primate visual system. However, it is still a mystery how such seemingly contradictory types of processing can coexist within a single system. Here, we propose a novel theory called mixture of sparse coding models, inspired by the formation of category-specific subregions in the inferotemporal (IT) cortex. We developed a hierarchical network that constructed a mixture of two sparse coding submodels on top of a simple Gabor analysis. The submodels were each trained with face or non-face object images, which resulted in separate representations of facial parts and object parts. Importantly, evoked neural activities were modeled by Bayesian inference, which had a top-down explaining-away effect that enabled recognition of an individual part to depend strongly on the category of the whole input. We show that this explaining-away effect was indeed crucial for the units in the face submodel to exhibit significant selectivity to face images over object images in a similar way to actual face-selective neurons in the macaque IT cortex. Furthermore, the model explained, qualitatively and quantitatively, several tuning properties to facial features found in the middle patch of face processing in IT as documented by Freiwald, Tsao, and Livingstone (2009). These included, in particular, tuning to only a small number of facial features that were often related to geometrically large parts like face outline and hair, preference and anti-preference of extreme facial features (e.g., very large/small inter-eye distance), and reduction of the gain of feature tuning for partial face stimuli compared to whole face stimuli. Thus, we hypothesize that the coding principle of facial features in the middle patch of face processing in the macaque IT cortex may be closely related to mixture of sparse coding models. PMID:28742816

  6. A mixture of sparse coding models explaining properties of face neurons related to holistic and parts-based processing.

    PubMed

    Hosoya, Haruo; Hyvärinen, Aapo

    2017-07-01

    Experimental studies have revealed evidence of both parts-based and holistic representations of objects and faces in the primate visual system. However, it is still a mystery how such seemingly contradictory types of processing can coexist within a single system. Here, we propose a novel theory called mixture of sparse coding models, inspired by the formation of category-specific subregions in the inferotemporal (IT) cortex. We developed a hierarchical network that constructed a mixture of two sparse coding submodels on top of a simple Gabor analysis. The submodels were each trained with face or non-face object images, which resulted in separate representations of facial parts and object parts. Importantly, evoked neural activities were modeled by Bayesian inference, which had a top-down explaining-away effect that enabled recognition of an individual part to depend strongly on the category of the whole input. We show that this explaining-away effect was indeed crucial for the units in the face submodel to exhibit significant selectivity to face images over object images in a similar way to actual face-selective neurons in the macaque IT cortex. Furthermore, the model explained, qualitatively and quantitatively, several tuning properties to facial features found in the middle patch of face processing in IT as documented by Freiwald, Tsao, and Livingstone (2009). These included, in particular, tuning to only a small number of facial features that were often related to geometrically large parts like face outline and hair, preference and anti-preference of extreme facial features (e.g., very large/small inter-eye distance), and reduction of the gain of feature tuning for partial face stimuli compared to whole face stimuli. Thus, we hypothesize that the coding principle of facial features in the middle patch of face processing in the macaque IT cortex may be closely related to mixture of sparse coding models.

  7. Face transplant: a paradigm change in facial reconstruction.

    PubMed

    Lantieri, Laurent

    2012-01-01

Face transplantation is a new surgical technique that can be considered a paradigm change in facial reconstruction. In recent years, 17 cases have been performed around the world. The author reviews these cases in the light of his personal experience of 7 cases.

  8. Facial feature tracking: a psychophysiological measure to assess exercise intensity?

    PubMed

    Miles, Kathleen H; Clark, Bradley; Périard, Julien D; Goecke, Roland; Thompson, Kevin G

    2018-04-01

The primary aim of this study was to determine whether facial feature tracking reliably measures changes in facial movement across varying exercise intensities. Fifteen cyclists completed three, incremental intensity, cycling trials to exhaustion while their faces were recorded with video cameras. Facial feature tracking was found to be a moderately reliable measure of facial movement during incremental intensity cycling (intra-class correlation coefficient = 0.65-0.68). Facial movement (whole face (WF), upper face (UF), lower face (LF) and head movement (HM)) increased with exercise intensity, from the first lactate threshold (LT1) until attainment of maximal aerobic power (MAP) (WF 3464 ± 3364 mm, P < 0.005; UF 1961 ± 1779 mm, P = 0.002; LF 1608 ± 1404 mm, P = 0.002; HM 849 ± 642 mm, P < 0.001). UF movement was greater than LF movement at all exercise intensities (UF minus LF at: LT1, 1048 ± 383 mm; LT2, 1208 ± 611 mm; MAP, 1401 ± 712 mm; P < 0.001). Significant medium-to-large non-linear relationships were found between facial movement and power output (r² = 0.24-0.31), HR (r² = 0.26-0.33), [La⁻] (r² = 0.33-0.44) and RPE (r² = 0.38-0.45). The findings demonstrate the potential utility of facial feature tracking as a non-invasive psychophysiological measure for assessing exercise intensity.
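The abstract reports non-linear relationships only as r² values, without stating the fitted model; one common way to obtain such an r² is a quadratic least-squares fit, sketched here with NumPy on hypothetical data:

```python
import numpy as np

def quadratic_r2(x, y):
    # Fit y = a*x^2 + b*x + c by least squares and return the
    # coefficient of determination (r^2) of that fit.
    coeffs = np.polyfit(x, y, deg=2)
    y_hat = np.polyval(coeffs, x)
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot

# Hypothetical power outputs (W) and facial-movement totals (mm).
power = np.array([150.0, 200.0, 250.0, 300.0, 350.0])
movement = np.array([900.0, 1100.0, 1600.0, 2400.0, 3500.0])
print(round(quadratic_r2(power, movement), 2))
```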

  9. Norm-based coding of facial identity in adults with autism spectrum disorder.

    PubMed

    Walsh, Jennifer A; Maurer, Daphne; Vida, Mark D; Rhodes, Gillian; Jeffery, Linda; Rutherford, M D

    2015-03-01

It is unclear whether reported deficits in face processing in individuals with autism spectrum disorders (ASD) can be explained by deficits in perceptual face coding mechanisms. In the current study, we examined whether adults with ASD showed evidence of norm-based opponent coding of facial identity, a perceptual process underlying the recognition of facial identity in typical adults. We began with an original face and an averaged face and then created an anti-face that differed from the averaged face in the opposite direction from the original face by a small amount (near adaptor) or a large amount (far adaptor). To test for norm-based coding, we adapted participants on different trials to the near versus far adaptor, then asked them to judge the identity of the averaged face. We varied the size of the test and adapting faces in order to reduce any contribution of low-level adaptation. Consistent with the predictions of norm-based coding, high-functioning adults with ASD (n = 27) and matched typical participants (n = 28) showed identity aftereffects that were larger for the far than the near adaptor. Unlike results with children with ASD, the strength of the aftereffects was similar in the two groups. This is the first study to demonstrate norm-based coding of facial identity in adults with ASD. Copyright © 2015 Elsevier Ltd. All rights reserved.

  10. Expression-dependent susceptibility to face distortions in processing of facial expressions of emotion.

    PubMed

    Guo, Kun; Soornack, Yoshi; Settle, Rebecca

    2018-03-05

    Our capability of recognizing facial expressions of emotion under different viewing conditions implies the existence of an invariant expression representation. As natural visual signals are often distorted and our perceptual strategy changes with external noise level, it is essential to understand how expression perception is susceptible to face distortion and whether the same facial cues are used to process high- and low-quality face images. We systematically manipulated face image resolution (experiment 1) and blur (experiment 2), and measured participants' expression categorization accuracy, perceived expression intensity and associated gaze patterns. Our analysis revealed a reasonable tolerance to face distortion in expression perception. Reducing image resolution up to 48 × 64 pixels or increasing image blur up to 15 cycles/image had little impact on expression assessment and associated gaze behaviour. Further distortion led to decreased expression categorization accuracy and intensity rating, increased reaction time and fixation duration, and stronger central fixation bias which was not driven by distortion-induced changes in local image saliency. Interestingly, the observed distortion effects were expression-dependent with less deterioration impact on happy and surprise expressions, suggesting this distortion-invariant facial expression perception might be achieved through the categorical model involving a non-linear configural combination of local facial features. Copyright © 2018 Elsevier Ltd. All rights reserved.

  11. Cross-Cultural Agreement in Facial Attractiveness Preferences: The Role of Ethnicity and Gender

    PubMed Central

    Coetzee, Vinet; Greeff, Jaco M.; Stephen, Ian D.; Perrett, David I.

    2014-01-01

Previous work showed high agreement in facial attractiveness preferences within and across cultures. The aims of the current study were twofold. First, we tested cross-cultural agreement in the attractiveness judgements of White Scottish and Black South African students for own- and other-ethnicity faces. Results showed significant agreement between White Scottish and Black South African observers' attractiveness judgements, providing further evidence of strong cross-cultural agreement in facial attractiveness preferences. Second, we tested whether cross-cultural agreement is influenced by the ethnicity and/or the gender of the target group. White Scottish and Black South African observers showed significantly higher agreement for Scottish than for African faces, presumably because both groups are familiar with White European facial features, but the Scottish group are less familiar with Black African facial features. Further work investigating this discordance in cross-cultural attractiveness preferences for African faces showed that Black South African observers rely more heavily on colour cues when judging African female faces for attractiveness, while White Scottish observers rely more heavily on shape cues. Results also showed higher cross-cultural agreement for female than for male faces, albeit not significantly higher. The findings shed new light on the factors that influence cross-cultural agreement in attractiveness preferences. PMID:24988325

  12. Cross-cultural agreement in facial attractiveness preferences: the role of ethnicity and gender.

    PubMed

    Coetzee, Vinet; Greeff, Jaco M; Stephen, Ian D; Perrett, David I

    2014-01-01

Previous work showed high agreement in facial attractiveness preferences within and across cultures. The aims of the current study were twofold. First, we tested cross-cultural agreement in the attractiveness judgements of White Scottish and Black South African students for own- and other-ethnicity faces. Results showed significant agreement between White Scottish and Black South African observers' attractiveness judgements, providing further evidence of strong cross-cultural agreement in facial attractiveness preferences. Second, we tested whether cross-cultural agreement is influenced by the ethnicity and/or the gender of the target group. White Scottish and Black South African observers showed significantly higher agreement for Scottish than for African faces, presumably because both groups are familiar with White European facial features, but the Scottish group are less familiar with Black African facial features. Further work investigating this discordance in cross-cultural attractiveness preferences for African faces showed that Black South African observers rely more heavily on colour cues when judging African female faces for attractiveness, while White Scottish observers rely more heavily on shape cues. Results also showed higher cross-cultural agreement for female than for male faces, albeit not significantly higher. The findings shed new light on the factors that influence cross-cultural agreement in attractiveness preferences.

  13. Facial expression identification using 3D geometric features from Microsoft Kinect device

    NASA Astrophysics Data System (ADS)

    Han, Dongxu; Al Jawad, Naseer; Du, Hongbo

    2016-05-01

Facial expression identification is an important part of face recognition and closely related to emotion detection from face images. Various solutions have been proposed in the past using different types of cameras and features. The Microsoft Kinect device has been widely used for multimedia interactions and, more recently, has been increasingly deployed to support scientific investigations. This paper explores the effectiveness of using the device to identify emotional facial expressions such as surprise, smiling, and sadness, and evaluates the usefulness of 3D data points on a face mesh structure obtained from the Kinect device. We present a distance-based geometric feature component that is derived from the distances between points on the face mesh and selected reference points in a single frame. The feature components extracted across a sequence of frames starting and ending with a neutral expression represent a whole expression. The feature vector eliminates the need for complex face orientation correction, simplifying the feature extraction process and making it more efficient. We applied a kNN classifier that exploits a feature-component-based similarity measure following the principle of dynamic time warping to determine the closest neighbors. Preliminary tests on a small-scale database of different facial expressions show the promise of the newly developed features and the usefulness of the Kinect device in facial expression identification.
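The pipeline described above — per-frame distances to reference points, a DTW-based similarity between frame sequences, and a kNN vote — can be sketched as follows; the reference point, labels, and toy sequences are hypothetical, not taken from the paper:

```python
import math

def frame_features(mesh_points, reference):
    # One frame's feature component: distances from each tracked
    # mesh point to a chosen reference point (e.g., the nose tip).
    return [math.dist(p, reference) for p in mesh_points]

def dtw(seq_a, seq_b):
    # Dynamic time warping between two sequences of per-frame
    # feature components, aligning frames non-linearly.
    n, m = len(seq_a), len(seq_b)
    cost = [[math.inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = math.dist(seq_a[i - 1], seq_b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j], cost[i][j - 1], cost[i - 1][j - 1])
    return cost[n][m]

def knn_classify(query, training, k=3):
    # training: (expression label, feature sequence) pairs.
    neighbours = sorted((dtw(query, seq), label) for label, seq in training)
    top = [label for _, label in neighbours[:k]]
    return max(set(top), key=top.count)

# Toy 1-D feature sequences standing in for real expression recordings.
training = [("smile", [[0.0], [1.0], [2.0], [1.0], [0.0]]),
            ("surprise", [[0.0], [3.0], [6.0], [3.0], [0.0]])]
print(knn_classify([[0.0], [1.1], [2.1], [0.9], [0.0]], training, k=1))  # smile
```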

  14. The Clinical Efficacy of Autologous Platelet-Rich Plasma Combined with Ultra-Pulsed Fractional CO2 Laser Therapy for Facial Rejuvenation

    PubMed Central

    Hui, Qiang; Chang, Peng; Guo, Bingyu; Zhang, Yu

    2017-01-01

Ultra-pulsed fractional CO2 laser is an efficient, precise, and safe therapeutic intervention for skin resurfacing, although it is accompanied by prolonged edema and erythema. In recent years, autologous platelet-rich plasma (PRP) has been proven to promote wound and soft tissue healing and collagen regeneration. The aim was to investigate whether the combination of PRP and ultra-pulsed fractional CO2 laser has a synergistic effect in facial rejuvenation therapy. A total of 13 females with facial aging were treated with ultra-pulsed fractional CO2 laser. One side of the face was randomly selected as the experimental side and injected with PRP; the other side served as the control and was injected with physiological saline at the same dose. Comprehensive assessment of clinical efficacy was performed using satisfaction scores, dermatologists' double-blind evaluations, and the VISIA skin analysis system. Three months after treatment, subjective scores for facial wrinkles, skin texture, and skin elasticity were higher than in the control group. Similarly, improvement of skin wrinkles, texture, and tightness was better in the experimental group than in the control group. Additionally, the total duration of erythema, edema, and crusting was shorter in the experimental group than in the control group. PRP combined with ultra-pulsed fractional CO2 laser had a synergistic effect on facial rejuvenation, shortening the duration of side effects and promoting a better therapeutic effect. PMID:27222038

  15. Face-body integration of intense emotional expressions of victory and defeat.

    PubMed

    Wang, Lili; Xia, Lisheng; Zhang, Dandan

    2017-01-01

Human facial expressions can be recognized rapidly and effortlessly. However, for intense emotions from real life, positive and negative facial expressions are difficult to discriminate and the judgment of facial expressions is biased towards simultaneously perceived body expressions. This study employed event-related potentials (ERPs) to investigate the neural dynamics involved in the integration of emotional signals from facial and body expressions of victory and defeat. Emotional expressions of professional players were used to create pictures of face-body compounds, with either matched or mismatched emotional expressions in faces and bodies. Behavioral results showed that congruent emotional information of face and body facilitated the recognition of facial expressions. ERP data revealed larger P1 amplitudes for incongruent compared to congruent stimuli. Also, a main effect of body valence on the P1 was observed, with enhanced amplitudes for the stimuli with losing compared to winning bodies. The main effect of body expression was also observed in N170 and N2, with winning bodies producing larger N170/N2 amplitudes. In the later stage, a significant interaction of congruence by body valence was found on the P3 component. Winning bodies elicited larger P3 amplitudes than losing bodies did when face and body conveyed congruent emotional signals. Beyond the knowledge based on prototypical facial and body expressions, the results of this study help us understand the complexity of emotion evaluation and categorization outside the laboratory.

  16. Face-body integration of intense emotional expressions of victory and defeat

    PubMed Central

    Wang, Lili; Xia, Lisheng; Zhang, Dandan

    2017-01-01

Human facial expressions can be recognized rapidly and effortlessly. However, for intense emotions from real life, positive and negative facial expressions are difficult to discriminate and the judgment of facial expressions is biased towards simultaneously perceived body expressions. This study employed event-related potentials (ERPs) to investigate the neural dynamics involved in the integration of emotional signals from facial and body expressions of victory and defeat. Emotional expressions of professional players were used to create pictures of face-body compounds, with either matched or mismatched emotional expressions in faces and bodies. Behavioral results showed that congruent emotional information of face and body facilitated the recognition of facial expressions. ERP data revealed larger P1 amplitudes for incongruent compared to congruent stimuli. Also, a main effect of body valence on the P1 was observed, with enhanced amplitudes for the stimuli with losing compared to winning bodies. The main effect of body expression was also observed in N170 and N2, with winning bodies producing larger N170/N2 amplitudes. In the later stage, a significant interaction of congruence by body valence was found on the P3 component. Winning bodies elicited larger P3 amplitudes than losing bodies did when face and body conveyed congruent emotional signals. Beyond the knowledge based on prototypical facial and body expressions, the results of this study help us understand the complexity of emotion evaluation and categorization outside the laboratory. PMID:28245245

  17. Modulation of α power and functional connectivity during facial affect recognition.

    PubMed

    Popov, Tzvetan; Miller, Gregory A; Rockstroh, Brigitte; Weisz, Nathan

    2013-04-03

Research has linked oscillatory activity in the α frequency range, particularly in sensorimotor cortex, to processing of social actions. Results further suggest involvement of sensorimotor α in the processing of facial expressions, including affect. The sensorimotor face area may be critical for perception of emotional face expression, but the role it plays is unclear. The present study sought to clarify how oscillatory brain activity contributes to or reflects processing of facial affect during changes in facial expression. Neuromagnetic oscillatory brain activity was monitored while 30 volunteers viewed videos of human faces that changed their expression from neutral to fearful, neutral, or happy expressions. Induced changes in α power during the different morphs, source analysis, and graph-theoretic metrics served to identify the role of α power modulation and cross-regional coupling by means of phase synchrony during facial affect recognition. Changes from neutral to emotional faces were associated with a 10-15 Hz power increase localized in bilateral sensorimotor areas, together with an occipital power decrease, preceding reported recognition of the emotional expression. Graph-theoretic analysis revealed that, in the course of a trial, the balance between sensorimotor power increase and decrease was associated with decreased and increased transregional connectedness as measured by node degree. Results suggest that modulations in α power facilitate early registration, with sensorimotor cortex, including the sensorimotor face area, largely functionally decoupled and thereby protected from additional, disruptive input, and that the subsequent α power decrease, together with increased connectedness of sensorimotor areas, facilitates successful facial affect recognition.
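The graph-theoretic node degree mentioned above can be computed by thresholding a sensor-by-sensor phase-synchrony matrix; the threshold and the 3-sensor matrix below are illustrative assumptions, not values from the study:

```python
import numpy as np

def node_degree(plv, threshold=0.5):
    # plv: symmetric matrix of phase-locking values between sensors.
    # A node's degree is the number of other sensors with which its
    # synchrony exceeds the threshold (self-connections excluded).
    adj = plv > threshold
    np.fill_diagonal(adj, False)
    return adj.sum(axis=1)

# Illustrative 3-sensor phase-locking matrix.
plv = np.array([[1.0, 0.8, 0.2],
                [0.8, 1.0, 0.6],
                [0.2, 0.6, 1.0]])
print(node_degree(plv))  # [1 2 1]
```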

  18. Effects of facial emotion recognition remediation on visual scanning of novel face stimuli.

    PubMed

    Marsh, Pamela J; Luckett, Gemma; Russell, Tamara; Coltheart, Max; Green, Melissa J

    2012-11-01

Previous research shows that emotion recognition in schizophrenia can be improved with targeted remediation that draws attention to important facial features (eyes, nose, mouth). Moreover, the effects of training have been shown to last for up to one month after training. The aim of this study was to investigate whether improved emotion recognition of novel faces is associated with concomitant changes in visual scanning of these same novel facial expressions. Thirty-nine participants with schizophrenia received emotion recognition training using Ekman's Micro-Expression Training Tool (METT), with emotion recognition and visual scanpath (VSP) recordings to face stimuli collected simultaneously. Baseline ratings of interpersonal and cognitive functioning were also collected from all participants. Post-METT training, participants showed changes in foveal attention to the features of facial expressions of emotion not used in METT training, which were generally consistent with the information about important features from the METT. In particular, there were changes in how participants looked at the features of the facial expressions of surprise, disgust, fear, happiness, and neutral faces, demonstrating that improved emotion recognition is paralleled by changes in the way participants with schizophrenia view novel facial expressions of emotion. However, there were overall decreases in foveal attention to sad and neutral faces, indicating that more intensive instruction might be needed for these faces during training. Most importantly, the evidence shows that participant gender may affect training outcomes. Copyright © 2012 Elsevier B.V. All rights reserved.

  19. Objective estimation of patient age through a new composite scale for facial aging assessment: The face - Objective assessment scale.

    PubMed

    La Padula, Simone; Hersant, Barbara; SidAhmed, Mounia; Niddam, Jeremy; Meningaud, Jean Paul

    2016-07-01

Most patients requesting aesthetic rejuvenation treatment expect to look healthier and younger. Some scales for ageing assessment have been proposed, but none is focused on patient age prediction. The aim of this study was to develop and validate a new facial rating scale assessing the severity of facial ageing signs. One thousand Caucasian patients were included and assessed. The Rasch model was used as part of the validation process. A score was attributed to each patient, based on the scales we developed. The correlation between real age and the scores obtained, the inter-rater reliability and the test-retest reliability were analysed. The objective was to develop a tool enabling the assignment of a patient to a specific age range based on the calculated score. All scales exceeded criteria for acceptability, reliability and validity. Real age strongly correlated with the total facial score in both sex groups. The test-retest reliability confirmed this strong correlation. We developed a facial ageing scale which could be a useful tool to assess patients before and after rejuvenation treatment and an important new metric for use in facial rejuvenation and regenerative clinical research. Copyright © 2016 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.

  20. Mutual information-based facial expression recognition

    NASA Astrophysics Data System (ADS)

    Hazar, Mliki; Hammami, Mohamed; Hanêne, Ben-Abdallah

    2013-12-01

This paper introduces a novel low-computation discriminative-regions representation for the expression analysis task. The proposed approach relies on studies in psychology which show that most of the regions descriptive of, and responsible for, facial expression are located around a few face parts. The contribution of this work lies in a new approach that supports automatic facial expression recognition based on automatic region selection. The region selection step aims to select the descriptive regions responsible for facial expression and was performed using a Mutual Information (MI) technique. For facial feature extraction, we applied the Local Binary Pattern (LBP) operator on the gradient image to encode salient micro-patterns of facial expressions. Experimental studies have shown that using discriminative regions provides better results than using the whole face region, whilst reducing the feature vector dimension.
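The LBP micro-pattern encoding this abstract relies on can be sketched for a single pixel. This is a minimal, assumed form of the basic 3x3 operator (threshold the eight neighbours against the centre and pack the bits); the paper applies it to gradient images, and the sample patch values here are invented.

```python
def lbp_code(patch):
    """8-bit LBP code of a 3x3 patch (list of 3 rows of pixel values).
    Each neighbour is thresholded against the centre pixel; the resulting
    bits are packed clockwise starting from the top-left neighbour."""
    center = patch[1][1]
    neighbors = [patch[0][0], patch[0][1], patch[0][2],
                 patch[1][2], patch[2][2], patch[2][1],
                 patch[2][0], patch[1][0]]
    code = 0
    for bit, value in enumerate(neighbors):
        if value >= center:
            code |= 1 << bit
    return code

patch = [[6, 5, 2],
         [7, 6, 1],
         [9, 8, 7]]
code = lbp_code(patch)
```

Sliding this operator over a region and histogramming the codes yields the region's texture descriptor; MI-based selection then keeps only the most expression-informative regions.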

  1. Enhancing Facial Aesthetics with Muscle Retraining Exercises-A Review

    PubMed Central

    D’souza, Raina; Kini, Ashwini; D’souza, Henston; Shetty, Omkar

    2014-01-01

Facial attractiveness plays a key role in social interaction. ‘Smile’ is not only a single category of facial behaviour but also the expression of frank joy, produced on the face by the combined contraction of the muscles involved. When a patient visits the dental clinic for aesthetic reasons, the dentist considers not only the chief complaint but also the overall harmony of the face. This article describes muscle retraining exercises to achieve control over facial movements and improve facial appearance, which may be considered following any type of dental rehabilitation. Muscle conditioning, training and strengthening through daily exercises will help to counterbalance the effects of aging. PMID:25302289

  2. Predictive codes of familiarity and context during the perceptual learning of facial identities

    NASA Astrophysics Data System (ADS)

    Apps, Matthew A. J.; Tsakiris, Manos

    2013-11-01

    Face recognition is a key component of successful social behaviour. However, the computational processes that underpin perceptual learning and recognition as faces transition from unfamiliar to familiar are poorly understood. In predictive coding, learning occurs through prediction errors that update stimulus familiarity, but recognition is a function of both stimulus and contextual familiarity. Here we show that behavioural responses on a two-option face recognition task can be predicted by the level of contextual and facial familiarity in a computational model derived from predictive-coding principles. Using fMRI, we show that activity in the superior temporal sulcus varies with the contextual familiarity in the model, whereas activity in the fusiform face area covaries with the prediction error parameter that updated facial familiarity. Our results characterize the key computations underpinning the perceptual learning of faces, highlighting that the functional properties of face-processing areas conform to the principles of predictive coding.

  3. Putative golden proportions as predictors of facial esthetics in adolescents.

    PubMed

    Kiekens, Rosemie M A; Kuijpers-Jagtman, Anne Marie; van 't Hof, Martin A; van 't Hof, Bep E; Maltha, Jaap C

    2008-10-01

    In orthodontics, facial esthetics is assumed to be related to golden proportions apparent in the ideal human face. The aim of the study was to analyze the putative relationship between facial esthetics and golden proportions in white adolescents. Seventy-six adult laypeople evaluated sets of photographs of 64 adolescents on a visual analog scale (VAS) from 0 to 100. The facial esthetic value of each subject was calculated as a mean VAS score. Three observers recorded the position of 13 facial landmarks included in 19 putative golden proportions, based on the golden proportions as defined by Ricketts. The proportions and each proportion's deviation from the golden target (1.618) were calculated. This deviation was then related to the VAS scores. Only 4 of the 19 proportions had a significant negative correlation with the VAS scores, indicating that beautiful faces showed less deviation from the golden standard than less beautiful faces. Together, these variables explained only 16% of the variance. Few golden proportions have a significant relationship with facial esthetics in adolescents. The explained variance of these variables is too small to be of clinical importance.
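The deviation measure implied by this abstract can be sketched directly: each facial proportion is compared against the golden target (1.618) and the absolute deviation is what gets correlated with the VAS scores. The landmark distances below are invented for illustration; the study used 13 landmarks and 19 proportions after Ricketts.

```python
GOLDEN = 1.618  # the golden target used in the study

def golden_deviation(longer, shorter):
    """Absolute deviation of a facial proportion from the golden ratio.
    Inputs are two landmark-to-landmark distances (same units)."""
    return abs(longer / shorter - GOLDEN)

# Hypothetical pair of landmark distances in mm
dev = golden_deviation(52.0, 32.0)
```

A negative correlation between such deviations and VAS attractiveness scores, as found for 4 of the 19 proportions, means smaller deviation tends to accompany higher rated beauty.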

  4. A facial expression of pax: Assessing children's "recognition" of emotion from faces.

    PubMed

    Nelson, Nicole L; Russell, James A

    2016-01-01

In a classic study, children were shown an array of facial expressions and asked to choose the person who expressed a specific emotion. Children were later asked to name the emotion in the face with any label they wanted. Subsequent research often relied on the same two tasks--choice from array and free labeling--to support the conclusion that children recognize basic emotions from facial expressions. Here five studies (N=120, 2- to 10-year-olds) showed that these two tasks produce illusory recognition: a novel nonsense facial expression was included in the array. Children "recognized" a nonsense emotion (pax or tolen) and two familiar emotions (fear and jealousy) from the same nonsense face. Children likely used a process of elimination; they paired the unknown facial expression with a label given in the choice-from-array task and, after just two trials, freely labeled the new facial expression with the new label. These data indicate that past studies using this method may have overestimated children's expression knowledge. Copyright © 2015 Elsevier Inc. All rights reserved.

  5. Face recognition system and method using face pattern words and face pattern bytes

    DOEpatents

    Zheng, Yufeng

    2014-12-23

The present invention provides a novel system and method for identifying individuals and for face recognition utilizing facial features for face identification. The system and method of the invention comprise creating facial features or face patterns, called face pattern words and face pattern bytes, for face identification. The invention also provides for pattern recognition for identification tasks other than face recognition. The invention further provides a means for identifying individuals based on visible and/or thermal images of those individuals by utilizing computer software implemented by instructions on a computer or computer system, and a computer readable medium containing instructions on a computer system for face recognition and identification.

  6. Developmental Changes in the Perception of Adult Facial Age

    ERIC Educational Resources Information Center

    Gross, Thomas F.

    2007-01-01

    The author studied children's (aged 5-16 years) and young adults' (aged 18-22 years) perception and use of facial features to discriminate the age of mature adult faces. In Experiment 1, participants rated the age of unaltered and transformed (eyes, nose, eyes and nose, and whole face blurred) adult faces (aged 20-80 years). In Experiment 2,…

  7. Extensive actinomycosis of the face requiring radical resection and facial nerve reconstruction.

    PubMed

    Iida, Takuya; Takushima, Akihiko; Asato, Hirotaka; Harii, Kiyonori

    2006-01-01

    We present a case of extensive actinomycosis of the face, which appeared after dental surgery. Since antibiotic therapy was ineffective, the lesion was radically resected, and the skin, soft tissue and facial nerve were reconstructed using a free rectus abdominis musculocutaneous flap and simultaneously harvested intercostal nerves. Successful reanimation of the face was achieved 14 months postoperatively.

  8. Brief Report: Sensitivity of Children with Autism Spectrum Disorders to Face Appearance in Selective Trust

    ERIC Educational Resources Information Center

    Li, Pengli; Zhang, Chunhua; Yi, Li

    2016-01-01

    The current study examined how children with Autism Spectrum Disorders (ASD) could selectively trust others based on three facial cues: the face race, attractiveness, and trustworthiness. In a computer-based hide-and-seek game, two face images, which differed significantly in one of the three facial cues, were presented as two cues for selective…

  9. FaceTOON: a unified platform for feature-based cartoon expression generation

    NASA Astrophysics Data System (ADS)

    Zaharia, Titus; Marre, Olivier; Prêteux, Françoise; Monjaux, Perrine

    2008-02-01

This paper presents the FaceTOON system, a semi-automatic platform dedicated to the creation of verbal and emotional facial expressions, within the applicative framework of 2D cartoon production. The proposed FaceTOON platform makes it possible to rapidly create 3D facial animations with a minimum amount of user interaction. In contrast with existing commercial 3D modeling software, which usually requires advanced 3D graphics skills and competences from its users, the FaceTOON system is based exclusively on 2D interaction mechanisms, the 3D modeling stage being completely transparent to the user. The system takes as input a neutral 3D face model, free of any facial feature, and a set of 2D drawings representing the desired facial features. A 2D/3D virtual mapping procedure makes it possible to obtain a ready-for-animation model which can be directly manipulated and deformed for generating expressions. The platform includes a complete set of dedicated tools for 2D/3D interactive deformation, pose management, key-frame interpolation and MPEG-4 compliant animation and rendering. The proposed FaceTOON system is currently considered for industrial evaluation and commercialization by the Quadraxis company.

  10. Cost-Sensitive Local Binary Feature Learning for Facial Age Estimation.

    PubMed

    Lu, Jiwen; Liong, Venice Erin; Zhou, Jie

    2015-12-01

    In this paper, we propose a cost-sensitive local binary feature learning (CS-LBFL) method for facial age estimation. Unlike the conventional facial age estimation methods that employ hand-crafted descriptors or holistically learned descriptors for feature representation, our CS-LBFL method learns discriminative local features directly from raw pixels for face representation. Motivated by the fact that facial age estimation is a cost-sensitive computer vision problem and local binary features are more robust to illumination and expression variations than holistic features, we learn a series of hashing functions to project raw pixel values extracted from face patches into low-dimensional binary codes, where binary codes with similar chronological ages are projected as close as possible, and those with dissimilar chronological ages are projected as far as possible. Then, we pool and encode these local binary codes within each face image as a real-valued histogram feature for face representation. Moreover, we propose a cost-sensitive local binary multi-feature learning method to jointly learn multiple sets of hashing functions using face patches extracted from different scales to exploit complementary information. Our methods achieve competitive performance on four widely used face aging data sets.
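The pooling step this abstract describes, turning local binary codes into a real-valued histogram feature, can be sketched simply. This is an assumed simplification: real CS-LBFL first learns the hashing functions that produce the codes, which is omitted here, and the code values below are invented.

```python
from collections import Counter

def code_histogram(codes, n_bins):
    """Pool a list of local binary codes (integers in [0, n_bins)) into a
    normalized histogram, the face-patch feature described in the abstract."""
    counts = Counter(codes)
    total = len(codes)
    return [counts.get(b, 0) / total for b in range(n_bins)]

# Toy 2-bit codes from one face patch
hist = code_histogram([0, 1, 1, 3, 3, 3, 2, 0], n_bins=4)
```

Concatenating such histograms over patches at several scales gives the multi-feature representation the paper's joint learning variant exploits.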

  11. Facial Expressions and Ability to Recognize Emotions From Eyes or Mouth in Children

    PubMed Central

    Guarnera, Maria; Hichy, Zira; Cascio, Maura I.; Carrubba, Stefano

    2015-01-01

This research aims to contribute to the literature on the ability to recognize anger, happiness, fear, surprise, sadness, disgust and neutral emotions from facial information. By investigating children’s performance in detecting these emotions from a specific face region, we were interested to know whether children would show differences in recognizing these expressions from the upper or lower face, and if any difference between specific facial regions depended on the emotion in question. For this purpose, a group of 6- to 7-year-old children was selected. Participants were asked to recognize emotions by using a labeling task with three stimulus types (region of the eyes, of the mouth, and full face). The findings seem to indicate that children correctly recognize basic facial expressions when pictures represent the whole face, except for a neutral expression, which was recognized from the mouth, and sadness, which was recognized from the eyes. Children are also able to identify anger from the eyes as well as from the whole face. With respect to gender differences, there is no female advantage in emotional recognition. The results indicate a significant interaction ‘gender x face region’ only for anger and neutral emotions. PMID:27247651

  12. Agency and facial emotion judgment in context.

    PubMed

    Ito, Kenichi; Masuda, Takahiko; Li, Liman Man Wai

    2013-06-01

    Past research showed that East Asians' belief in holism was expressed as their tendencies to include background facial emotions into the evaluation of target faces more than North Americans. However, this pattern can be interpreted as North Americans' tendency to downplay background facial emotions due to their conceptualization of facial emotion as volitional expression of internal states. Examining this alternative explanation, we investigated whether different types of contextual information produce varying degrees of effect on one's face evaluation across cultures. In three studies, European Canadians and East Asians rated the intensity of target facial emotions surrounded with either affectively salient landscape sceneries or background facial emotions. The results showed that, although affectively salient landscapes influenced the judgment of both cultural groups, only European Canadians downplayed the background facial emotions. The role of agency as differently conceptualized across cultures and multilayered systems of cultural meanings are discussed.

  13. Face to face: blocking facial mimicry can selectively impair recognition of emotional expressions.

    PubMed

    Oberman, Lindsay M; Winkielman, Piotr; Ramachandran, Vilayanur S

    2007-01-01

People spontaneously mimic a variety of behaviors, including emotional facial expressions. Embodied cognition theories suggest that mimicry reflects internal simulation of perceived emotion in order to facilitate its understanding. If so, blocking facial mimicry should impair recognition of expressions, especially of emotions that are simulated using facial musculature. The current research tested this hypothesis using four expressions (happy, disgust, fear, and sad) and two mimicry-interfering manipulations: (1) biting on a pen and (2) chewing gum, as well as two control conditions. Experiment 1 used electromyography over cheek, mouth, and nose regions. The bite manipulation consistently activated the assessed muscles, whereas the chew manipulation activated muscles only intermittently. Further, expressing happiness generated the most facial action. Experiment 2 found that the bite manipulation interfered most with recognition of happiness. These findings suggest that facial mimicry differentially contributes to recognition of specific facial expressions, thus allowing for more refined predictions from embodied cognition theories.

  14. Non-lambertian reflectance modeling and shape recovery of faces using tensor splines.

    PubMed

    Kumar, Ritwik; Barmpoutis, Angelos; Banerjee, Arunava; Vemuri, Baba C

    2011-03-01

    Modeling illumination effects and pose variations of a face is of fundamental importance in the field of facial image analysis. Most of the conventional techniques that simultaneously address both of these problems work with the Lambertian assumption and thus fall short of accurately capturing the complex intensity variation that the facial images exhibit or recovering their 3D shape in the presence of specularities and cast shadows. In this paper, we present a novel Tensor-Spline-based framework for facial image analysis. We show that, using this framework, the facial apparent BRDF field can be accurately estimated while seamlessly accounting for cast shadows and specularities. Further, using local neighborhood information, the same framework can be exploited to recover the 3D shape of the face (to handle pose variation). We quantitatively validate the accuracy of the Tensor Spline model using a more general model based on the mixture of single-lobed spherical functions. We demonstrate the effectiveness of our technique by presenting extensive experimental results for face relighting, 3D shape recovery, and face recognition using the Extended Yale B and CMU PIE benchmark data sets.
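The Lambertian assumption this abstract argues falls short can itself be stated in a few lines: image intensity at a surface point is the albedo times the clamped dot product of the unit surface normal and unit light direction. The sketch below is illustrative only; the paper's contribution is precisely the non-Lambertian tensor-spline model that this simple formula cannot capture.

```python
def lambertian_intensity(albedo, normal, light):
    """Lambertian shading: albedo * max(0, n . l).
    `normal` and `light` are assumed to be unit 3-vectors; the max(0, .)
    clamp models attached shadow (light arriving from behind the surface)."""
    dot = sum(n * l for n, l in zip(normal, light))
    return albedo * max(0.0, dot)

# Frontal light on a frontal surface patch (hypothetical values)
frontal = lambertian_intensity(0.5, (0.0, 0.0, 1.0), (0.0, 0.0, 1.0))
# Light from behind: clamped to zero
backlit = lambertian_intensity(0.5, (0.0, 0.0, 1.0), (0.0, 0.0, -1.0))
```

Specularities and cast shadows violate this model, which is why the paper estimates a full apparent BRDF field instead.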

  15. Facial paralysis

    MedlinePlus

... a physical, speech, or occupational therapist. If facial paralysis from Bell palsy lasts for more than 6 to 12 months, plastic surgery may be recommended to help the eye close and improve the appearance of the face.

  16. We look like our names: The manifestation of name stereotypes in facial appearance.

    PubMed

    Zwebner, Yonat; Sellier, Anne-Laure; Rosenfeld, Nir; Goldenberg, Jacob; Mayo, Ruth

    2017-04-01

Research demonstrates that facial appearance affects social perceptions. The current research investigates the reverse possibility: Can social perceptions influence facial appearance? We examine a social tag that is associated with us early in life-our given name. The hypothesis is that name stereotypes can be manifested in facial appearance, producing a face-name matching effect, whereby both a social perceiver and a computer are able to accurately match a person's name to his or her face. In 8 studies we demonstrate the existence of this effect, as participants examining an unfamiliar face accurately select the person's true name from a list of several names, significantly above chance level. We replicate the effect in 2 countries and find that it extends beyond the limits of socioeconomic cues. We also find the effect using a computer-based paradigm and 94,000 faces. In our exploration of the underlying mechanism, we show that existing name stereotypes produce the effect, as its occurrence is culture-dependent. A self-fulfilling prophecy seems to be at work, as initial evidence shows that facial appearance regions that are controlled by the individual (e.g., hairstyle) are sufficient to produce the effect, and socially using one's given name is necessary to generate the effect. Together, these studies suggest that facial appearance represents social expectations of how a person with a specific name should look. In this way a social tag may influence one's facial appearance. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  17. Cost analysis of conventional facial reconstruction procedures followed by face transplantation.

    PubMed

    Siemionow, M; Gatherwright, J; Djohan, R; Papay, F

    2011-02-01

    For the first time, this study analyzes the cost of multiple conventional reconstructions and face transplantation in a single patient. This patient is a 46-year-old female victim of a shotgun blast resulting in loss of multiple functional and aesthetic subunits. For over 5 years, she underwent multiple conventional reconstructions with suboptimal results. In December 2008, she became the recipient of the first U.S. face transplant. This has provided the unique opportunity to present the cost of 23 separate conventional reconstructive procedures and the first face transplant in the United States. The combined cost of conventional reconstructive procedures and the first U.S. face transplant was calculated to be $353 480 and $349 959, respectively. The combined cost posttransplant totaled $115 463. The direct cost pretransplant was $206 646, $232 893 peritransplant and $74 236 posttransplant. The two largest areas of cost utilization were surgical ($79 625; 38.5%) and nursing ($55 860; 27%), followed by anesthesia ($24 808; 12%) and pharmacy ($16 581; 8%). This study demonstrates that the cost of the first U.S. face transplant is similar to multiple conventional reconstructions. Although the cost of facial transplantation is considerable, the alleviation of psychological and physiological suffering, exceptional functional recovery and fulfillment of long-lasting hope for social reintegration may be priceless. ©2011 The Authors Journal compilation©2011 The American Society of Transplantation and the American Society of Transplant Surgeons.

  18. Low-level image properties in facial expressions.

    PubMed

    Menzel, Claudia; Redies, Christoph; Hayn-Leichsenring, Gregor U

    2018-06-04

    We studied low-level image properties of face photographs and analyzed whether they change with different emotional expressions displayed by an individual. Differences in image properties were measured in three databases that depicted a total of 167 individuals. Face images were used either in their original form, cut to a standard format or superimposed with a mask. Image properties analyzed were: brightness, redness, yellowness, contrast, spectral slope, overall power and relative power in low, medium and high spatial frequencies. Results showed that image properties differed significantly between expressions within each individual image set. Further, specific facial expressions corresponded to patterns of image properties that were consistent across all three databases. In order to experimentally validate our findings, we equalized the luminance histograms and spectral slopes of three images from a given individual who showed two expressions. Participants were significantly slower in matching the expression in an equalized compared to an original image triad. Thus, existing differences in these image properties (i.e., spectral slope, brightness or contrast) facilitate emotion detection in particular sets of face images. Copyright © 2018. Published by Elsevier B.V.
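Two of the low-level properties analyzed in this study, mean brightness and RMS contrast, can be computed directly from pixel values. This is an illustrative sketch only (the study also measured color channels, spectral slope and band-limited power, which need a Fourier transform); the tiny image below is invented.

```python
def brightness_and_contrast(image):
    """Return (mean brightness, RMS contrast) for a 2D list of grayscale
    pixel values. RMS contrast is the standard deviation of the pixels."""
    pixels = [p for row in image for p in row]
    mean = sum(pixels) / len(pixels)
    variance = sum((p - mean) ** 2 for p in pixels) / len(pixels)
    return mean, variance ** 0.5

# Hypothetical 2x2 grayscale image
b, c = brightness_and_contrast([[10, 20], [30, 40]])
```

Equalizing such statistics across images, as the authors did for luminance histograms and spectral slopes, removes the cue and measurably slows expression matching.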

  19. Functional MRI of facial emotion processing in left temporal lobe epilepsy.

    PubMed

    Szaflarski, Jerzy P; Allendorfer, Jane B; Heyse, Heidi; Mendoza, Lucy; Szaflarski, Basia A; Cohen, Nancy

    2014-03-01

    Temporal lobe epilepsy (TLE) may negatively affect the ability to recognize emotions. This study aimed to determine the cortical correlates of facial emotion processing (happy, sad, fearful, and neutral) in patients with well-characterized left TLE (LTLE) and to examine the effect of seizure control on emotion processing. We enrolled 34 consecutive patients with LTLE and 30 matched healthy control (HC) subjects. Participants underwent functional MRI (fMRI) with an event-related facial emotion recognition task. The seizures of seventeen patients were controlled (no seizure in at least 3months; LTLE-sz), and 17 continued to experience frequent seizures (LTLE+sz). Mood was assessed with the Beck Depression Inventory (BDI) and the Profile of Mood States (POMS). There were no differences in demographic characteristics and measures of mood between HC subjects and patients with LTLE. In patients with LTLE, fMRI showed decreased blood oxygenation level dependent (BOLD) signal in the hippocampus/parahippocampus and cerebellum in processing of happy faces and increased BOLD signal in occipital regions in response to fearful faces. Comparison of groups with LTLE+sz and LTLE-sz showed worse BDI and POMS scores in LTLE+sz (all p<0.05) except for POMS tension/anxiety (p=0.067). Functional MRI revealed increased BOLD signal in patients with LTLE+sz in the left precuneus and left parahippocampus for "fearful" faces and in the left periarcheocortex for "neutral" faces. There was a correlation between the fMRI and Total Mood Disturbance in the left precuneus in LTLE-sz (p=0.019) and in LTLE+sz (p=0.018). Overall, LTLE appears to have a relatively minor effect on the cortical underpinnings of facial emotion processing, while the effect of seizure state (controlled vs. not controlled) is more pronounced, indicating a significant relationship between seizure control and emotion processing. Copyright © 2014 Elsevier Inc. All rights reserved.

  20. Regional facial asymmetries and attractiveness of the face.

    PubMed

    Kaipainen, Anu E; Sieber, Kevin R; Nada, Rania M; Maal, Thomas J; Katsaros, Christos; Fudalej, Piotr S

    2016-12-01

Facial attractiveness is an important factor in our social interactions. It is still not entirely clear which factors influence the attractiveness of a face, and facial asymmetry appears to play a certain role. The aim of the present study was to assess the association between facial attractiveness and regional facial asymmetries evaluated on three-dimensional (3D) images. 3D facial images of 59 (23 male, 36 female) young adult patients (age 16-25 years) before orthodontic treatment were evaluated for asymmetry. The same 3D images were presented to 12 lay judges who rated the attractiveness of each subject on a 100 mm visual analogue scale. Reliability of the method was assessed with Bland-Altman plots and Cronbach's alpha coefficient. All subjects showed a certain amount of asymmetry in all regions of the face; most asymmetry was found in the chin and cheek areas and less in the lip, nose and forehead areas. No statistically significant differences in regional facial asymmetries were found between male and female subjects (P > 0.05). Regression analyses demonstrated that the judgement of facial attractiveness was not influenced by absolute regional facial asymmetries when gender, facial width-to-height ratio and type of malocclusion were controlled (P > 0.05). A potential limitation of the study could be that other biologic and cultural factors influencing the perception of facial attractiveness were not controlled for. A small amount of asymmetry was present in all subjects assessed in this study, and asymmetry of this magnitude may not influence the assessment of facial attractiveness. © The Author 2015. Published by Oxford University Press on behalf of the European Orthodontic Society. All rights reserved. For permissions, please email: journals.permissions@oup.com.
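One common way to quantify regional asymmetry on 3D images, offered here as an assumed illustration rather than the authors' exact pipeline, is to mirror a landmark across the midsagittal plane (x -> -x) and measure its distance to the contralateral landmark. The coordinates below are invented.

```python
def asymmetry_mm(left, right):
    """Distance (in the landmarks' units, e.g. mm) between a right-side
    landmark and the mirror image of the left-side landmark across the
    x = 0 midsagittal plane. Zero means perfect bilateral symmetry."""
    mirrored = (-left[0], left[1], left[2])
    return sum((a - b) ** 2 for a, b in zip(mirrored, right)) ** 0.5

# Hypothetical cheek landmarks (x, y, z) in mm
d = asymmetry_mm((30.0, 10.0, 5.0), (-29.0, 12.0, 5.0))
```

Averaging such distances within a region (chin, cheek, lip, nose, forehead) yields the regional asymmetry scores that can then be regressed against attractiveness ratings.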

  1. Quantitative Anthropometric Measures of Facial Appearance of Healthy Hispanic/Latino White Children: Establishing Reference Data for Care of Cleft Lip With or Without Cleft Palate

    NASA Astrophysics Data System (ADS)

    Lee, Juhun; Ku, Brian; Combs, Patrick D.; Da Silveira, Adriana C.; Markey, Mia K.

    2017-06-01

    Cleft lip with or without cleft palate (CL ± P) is one of the most common congenital facial deformities worldwide. To minimize negative social consequences of CL ± P, reconstructive surgery is conducted to modify the face to a more normal appearance. Each race/ethnic group requires its own facial norm data, yet there are no existing facial norm data for Hispanic/Latino White children. The objective of this paper is to identify measures of facial appearance relevant for planning reconstructive surgery for CL ± P of Hispanic/Latino White children. Quantitative analysis was conducted on 3D facial images of 82 (41 girls, 41 boys) healthy Hispanic/Latino White children whose ages ranged from 7 to 12 years. Twenty-eight facial anthropometric features related to CL ± P (mainly in the nasal and mouth area) were measured from 3D facial images. In addition, facial aesthetic ratings were obtained from 16 non-clinical observers for the same 3D facial images using a 7-point Likert scale. Pearson correlation analysis was conducted to find features that were correlated with the panel ratings of observers. Boys with a longer face and nose, or thicker upper and lower lips are considered more attractive than others while girls with a less curved middle face contour are considered more attractive than others. Associated facial landmarks for these features are primary focus areas for reconstructive surgery for CL ± P. This study identified anthropometric measures of facial features of Hispanic/Latino White children that are pertinent to CL ± P and which correlate with the panel attractiveness ratings.
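The Pearson correlation that links each anthropometric feature to the panel ratings in this study is a standard statistic; a minimal self-contained version is sketched below. The sample data are invented for illustration.

```python
def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical feature values vs. mean attractiveness ratings
r = pearson_r([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])  # perfectly linear data
```

Features with significant positive r (e.g. nose length in boys, per the abstract) are the ones flagged as focus areas for reconstructive planning.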

  2. Identification and Classification of Facial Familiarity in Directed Lying: An ERP Study

    PubMed Central

    Sun, Delin; Chan, Chetwyn C. H.; Lee, Tatia M. C.

    2012-01-01

    Recognizing familiar faces is essential to social functioning, but little is known about how people identify human faces and classify them in terms of familiarity. Face identification involves discriminating familiar faces from unfamiliar faces, whereas face classification involves making an intentional decision to classify faces as “familiar” or “unfamiliar.” This study used a directed-lying task to explore the differentiation between identification and classification processes involved in the recognition of familiar faces. To explore this issue, the participants in this study were shown familiar and unfamiliar faces. They responded to these faces (i.e., as familiar or unfamiliar) in accordance with the instructions they were given (i.e., to lie or to tell the truth) while their EEG activity was recorded. Familiar faces (regardless of lying vs. truth) elicited significantly less negative-going N400f in the middle and right parietal and temporal regions than unfamiliar faces. Regardless of their actual familiarity, the faces that the participants classified as “familiar” elicited more negative-going N400f in the central and right temporal regions than those classified as “unfamiliar.” The P600 was related primarily with the facial identification process. Familiar faces (regardless of lying vs. truth) elicited more positive-going P600f in the middle parietal and middle occipital regions. The results suggest that N400f and P600f play different roles in the processes involved in facial recognition. The N400f appears to be associated with both the identification (judgment of familiarity) and classification of faces, while it is likely that the P600f is only associated with the identification process (recollection of facial information). Future studies should use different experimental paradigms to validate the generalizability of the results of this study. PMID:22363597

  3. Aggression differentially modulates brain responses to fearful and angry faces: an exploratory study.

    PubMed

    Lu, Hui; Wang, Yu; Xu, Shuang; Wang, Yifeng; Zhang, Ruiping; Li, Tsingan

    2015-08-19

    Aggression is reported to modulate neural responses to threatening information. However, whether aggression modulates neural responses to different kinds of threatening facial expressions (angry and fearful) has remained unknown. Event-related potentials were therefore measured in individuals (13 highly aggressive, 12 low in aggression) exposed to neutral, angry, and fearful facial expressions while performing a frame-distinguishing task, irrespective of the emotional valence of the expressions. Highly aggressive participants showed no distinct neural responses across the three facial expressions. In addition, compared with individuals low in aggression, highly aggressive individuals showed a decreased frontocentral response to fearful faces within 250-300 ms and to angry faces within 400-500 ms of exposure. These results indicate that fearful faces represent a more threatening signal requiring a quick cognitive response during the early stage of facial processing, whereas angry faces elicit a stronger response during the later processing stage because of their greater emotional significance. The present results represent the first known evidence that aggression is associated with different neural responses to fearful and angry faces. By exploring the distinct temporal responses to fearful and angry faces modulated by aggression, this study more precisely characterizes the cognitive profile of aggressive individuals. Copyright © 2015 Wolters Kluwer Health, Inc. All rights reserved.

  4. The individual, environmental, and organizational factors that influence nurses' use of facial protection to prevent occupational transmission of communicable respiratory illness in acute care hospitals.

    PubMed

    Nichol, Kathryn; Bigelow, Philip; O'Brien-Pallas, Linda; McGeer, Allison; Manno, Mike; Holness, D Linn

    2008-09-01

    Communicable respiratory illness is an important cause of morbidity among nurses. One of the key reasons for occupational transmission of this illness is failure to implement appropriate barrier precautions, particularly facial protection. The objectives of this study were to describe the factors that influence nurses' decisions to use facial protection and to determine their relative importance in predicting compliance. This cross-sectional survey was conducted in 9 units of 2 urban hospitals in which nursing staff regularly use facial protection. A total of 400 self-administered questionnaires were provided to nurses, and 177 were returned (44% response rate). Fewer than half of respondents reported compliance with the recommended use of facial protection (eye/face protection, respirators, and surgical masks) to prevent occupational transmission of communicable respiratory disease. Multivariate analysis identified 5 key predictors of nurses' compliance with the recommended use of facial protection: full-time work status, more than 5 years' tenure as a nurse, at least monthly use of facial protection, a belief that media coverage of infectious diseases affects risk perception and work practices, and organizational support for health and safety. Strategies and interventions based on these findings should result in enhanced compliance with facial protection and, ultimately, a reduction in occupational transmission of communicable respiratory illness.

  5. Injuries and absenteeism among motorcycle taxi drivers who are victims of traffic accidents.

    PubMed

    Barbosa, Kevan G N; Lucas-Neto, Alfredo; Gama, Bruno D; Lima-Neto, Jose C; Lucas, Rilva Suely C C; d'Ávila, Sérgio

    2014-08-01

    Facial injuries frequently occur in traffic accidents involving motorcycles. The purpose of this cross-sectional study was to determine the prevalence of facial injuries among drivers who perform motorcycle taxi service. A total of 210 participants who served as motorcycle taxi drivers in a city in northeastern Brazil completed a survey concerning their experience of accidents involving facial injuries and consequent hospitalization and absenteeism from work. The drivers included in the study were randomly selected from a list provided by the city. Of the respondents, 165 (78.6%) had been involved in a traffic accident in the previous 12 months, and 15 of these (9.1%) reported facial injuries. The types of facial injury most frequently reported involved soft tissues (n = 8; 53.3%), followed by simple fracture (n = 4; 26.7%) and dentoalveolar fracture (n = 3; 20%). We found an association between facial injuries and absenteeism, as well as an association between the presence of facial injury and the need for hospitalization for a period of 2 days or more. Although many respondents reported accidents, the number of facial injuries was low, which they attributed to the use of full-face motorcycle helmets. For most of the injured, absenteeism lasted one month or more. Copyright © 2014 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.

  6. Interest and attention in facial recognition.

    PubMed

    Burgess, Melinda C R; Weaver, George E

    2003-04-01

    When applied to facial recognition, the levels of processing paradigm has yielded consistent results: faces processed in deep conditions are recognized better than faces processed under shallow conditions. However, there are multiple explanations for this occurrence. The own-race advantage in facial recognition, the tendency to recognize faces from one's own race better than faces from another race, is also consistently shown but not clearly explained. This study was designed to test the hypothesis that the levels of processing findings in facial recognition are a result of interest and attention, not differences in processing. This hypothesis was tested for both own- and other-race faces with 105 Caucasian general psychology students. Level of processing was manipulated as a between-subjects variable; students were asked to answer one of four types of study questions, e.g., "deep" or "shallow" processing questions, while viewing the study faces. Recognition was then tested using a set that combined previously presented Caucasian and African-American faces with an equal number of distractor faces, after which students indicated their interest in and attention to the task. The typical levels of processing effect was observed, with better recognition performance in the deep conditions than in the shallow conditions for both own- and other-race faces. The typical own-race advantage was also observed regardless of level of processing condition. For both own- and other-race faces, level of processing explained a significant portion of the recognition variance above and beyond what was explained by interest in and attention to the task.

  7. Self-organized Evaluation of Dynamic Hand Gestures for Sign Language Recognition

    NASA Astrophysics Data System (ADS)

    Buciu, Ioan; Pitas, Ioannis

    Two main theories exist with respect to face encoding and representation in the human visual system (HVS). The first refers to a dense (holistic) representation of the face, where faces have "holon"-like appearance. The second claims that a more appropriate face representation is given by a sparse code, where only a small fraction of the neural cells corresponding to face encoding is activated. Theoretical and experimental evidence suggests that the HVS performs face analysis (encoding, storing, face recognition, facial expression recognition) in a structured and hierarchical way, where both representations have their own contribution and goal. According to neuropsychological experiments, it seems that encoding for face recognition relies on a holistic image representation, while a sparse image representation is used for facial expression analysis and classification. From the computer vision perspective, the techniques developed for automatic face and facial expression recognition fall into the same two representation types. As in neuroscience, the techniques that perform best for face recognition rely on a holistic image representation, while those best suited to facial expression recognition use a sparse or local image representation. The proposed mathematical models of image formation and encoding try to simulate the efficient storage, organization, and coding of data in the human cortex. This is equivalent to embedding constraints in the model design regarding dimensionality reduction, redundant information minimization, mutual information minimization, non-negativity constraints, class information, etc. The presented techniques are applied as a feature extraction step followed by a classification method, which also heavily influences the recognition results.

  8. Soft Tissue Management in Facial Trauma

    PubMed Central

    Braun, Tara L.; Maricevich, Renata S.

    2017-01-01

    The management of soft tissue injury after facial trauma poses unique challenges to the plastic surgeon, given the specialized nature of facial tissue and the aesthetic importance of the face. The general principles of trauma management and wound care are applied in all cases. The management of severe injuries to the face is discussed in relation to the location and the mechanism of injury. Facial transplants have arisen in the past decade for the management of catastrophic soft tissue defects, although high morbidity and mortality after these non-life-saving operations must be considered in patient selection. PMID:28496386

  9. Impressions of dominance are made relative to others in the visual environment.

    PubMed

    Re, Daniel E; Lefevre, Carmen E; DeBruine, Lisa M; Jones, Benedict C; Perrett, David I

    2014-03-27

    Face judgments of dominance play an important role in human social interaction. Perceived facial dominance is thought to indicate physical formidability, as well as resource acquisition and holding potential. Dominance cues in the face affect perceptions of attractiveness, emotional state, and physical strength. Most experimental paradigms test perceptions of facial dominance in individual faces, or they use manipulated versions of the same face in a forced-choice task but in the absence of other faces. Here, we extend this work by assessing whether dominance ratings are absolute or are judged relative to other faces. We presented participants with faces to be rated for dominance (target faces) while also presenting a second face (a non-target face) that was not to be rated. We found that both the masculinity and sex of the non-target face affected dominance ratings of the target face. Masculinized non-target faces decreased the perceived dominance of a target face relative to feminized non-target faces, and male non-target faces decreased the perceived dominance of a target face more than female non-target faces did. Perceived dominance of male target faces was affected more by masculinization of male non-target faces than of female non-target faces. These results indicate that dominance perceptions can be altered by surrounding faces, demonstrating that facial dominance is judged at least partly relative to other faces.

  10. Role of temporal processing stages by inferior temporal neurons in facial recognition.

    PubMed

    Sugase-Miyamoto, Yasuko; Matsumoto, Narihisa; Kawano, Kenji

    2011-01-01

    In this review, we focus on the role of temporal stages of encoded facial information in the visual system, which might enable the efficient determination of species, identity, and expression. Facial recognition is an important function of our brain and is known to be processed in the ventral visual pathway, where visual signals are processed through areas V1, V2, V4, and the inferior temporal (IT) cortex. In the IT cortex, neurons show selective responses to complex visual images such as faces, and at each stage along the pathway the stimulus selectivity of the neural responses becomes sharper, particularly in the later portion of the responses. In the IT cortex of the monkey, facial information is represented by different temporal stages of neural responses, as shown in our previous study: the initial transient response of face-responsive neurons represents information about global categories, i.e., human vs. monkey vs. simple shapes, whilst the later portion of these responses represents information about detailed facial categories, i.e., expression and/or identity. This suggests that the temporal stages of the neuronal firing pattern play an important role in the coding of visual stimuli, including faces. This type of coding may be a plausible mechanism underlying the temporal dynamics of recognition, including the process of detection/categorization followed by the identification of objects. Recent single-unit studies in monkeys have also provided evidence consistent with the important role of the temporal stages of encoded facial information. For example, view-invariant facial identity information is represented in the response at a later period within a region of face-selective neurons. Consistent with these findings, temporally modulated neural activity has also been observed in human studies. These results suggest a close correlation between the temporal processing stages of facial information by IT neurons and the temporal dynamics of face recognition.

  12. Modeling 3D Facial Shape from DNA

    PubMed Central

    Claes, Peter; Liberton, Denise K.; Daniels, Katleen; Rosana, Kerri Matthes; Quillen, Ellen E.; Pearson, Laurel N.; McEvoy, Brian; Bauchet, Marc; Zaidi, Arslan A.; Yao, Wei; Tang, Hua; Barsh, Gregory S.; Absher, Devin M.; Puts, David A.; Rocha, Jorge; Beleza, Sandra; Pereira, Rinaldo W.; Baynam, Gareth; Suetens, Paul; Vandermeulen, Dirk; Wagner, Jennifer K.; Boster, James S.; Shriver, Mark D.

    2014-01-01

    Human facial diversity is substantial, complex, and largely scientifically unexplained. We used spatially dense quasi-landmarks to measure face shape in population samples with mixed West African and European ancestry from three locations (United States, Brazil, and Cape Verde). Using bootstrapped response-based imputation modeling (BRIM), we uncover the relationships between facial variation and the effects of sex, genomic ancestry, and a subset of craniofacial candidate genes. The facial effects of these variables are summarized as response-based imputed predictor (RIP) variables, which are validated using self-reported sex, genomic ancestry, and observer-based facial ratings (femininity and proportional ancestry) and judgments (sex and population group). By jointly modeling sex, genomic ancestry, and genotype, the independent effects of particular alleles on facial features can be uncovered. Results on a set of 20 genes showing significant effects on facial features provide support for this approach as a novel means to identify genes affecting normal-range facial features and for approximating the appearance of a face from genetic markers. PMID:24651127

  13. Facial soft tissue thickness differences among three skeletal classes in Japanese population.

    PubMed

    Utsuno, Hajime; Kageyama, Toru; Uchida, Keiichi; Kibayashi, Kazuhiko

    2014-03-01

    Facial reconstruction is used in forensic anthropology to recreate the face from unknown human skeletal remains and to elucidate the antemortem facial appearance. This requires accurate assessment of the skull (age, sex, ancestry, etc.) and of soft tissue thickness data. However, additional information is required to reconstruct the face, as the information obtained from the skull is limited. Here, we aimed to examine what information from the skull is required for accurate facial reconstruction. The human facial profile is classified into 3 shapes: straight, convex, and concave. These facial profiles facilitate recognition of individuals, and the skeletal classes used in orthodontics are classified according to these 3 facial types. We have previously reported these differences among Japanese females. In the present study, we applied this classification to facial tissue measurement, compared the differences in tissue depth of each skeletal class for both sexes in the Japanese population, and elucidated the differences between the skeletal classes. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  14. A Quantitative Assessment of Lip Movements in Different Facial Expressions Through 3-Dimensional on 3-Dimensional Superimposition: A Cross-Sectional Study.

    PubMed

    Gibelli, Daniele; Codari, Marina; Pucciarelli, Valentina; Dolci, Claudia; Sforza, Chiarella

    2017-11-23

    The quantitative assessment of facial modifications produced by mimicry is of relevant interest for the rehabilitation of patients who can no longer produce facial expressions. This cross-sectional study investigated a novel application of 3-dimensional on 3-dimensional superimposition to facial mimicry, based on 10 men aged 30 to 40 years who underwent stereophotogrammetry with neutral, happy, sad, and angry expressions. Each facial expression was registered on the neutral expression. The root mean square (RMS) point-to-point distance in the labial area was calculated between each facial expression and the neutral one and was considered the main parameter for assessing facial modifications. In addition, effect size (Cohen's d) was calculated to assess labial movements relative to whole-face modifications. All participants were free from facial deformities, pathologies, or trauma that could affect facial mimicry. RMS values differed significantly among facial expressions (P = .0004 by Friedman test). The widest modifications of the lips were observed in happy expressions (RMS, 4.06 mm; standard deviation [SD], 1.14 mm), a statistically significant difference compared with the sad (RMS, 1.42 mm; SD, 1.15 mm) and angry (RMS, 0.76 mm; SD, 0.45 mm) expressions. The effect size of labial versus total face movements was limited for happy and sad expressions and large for the angry expression. This study found that a happy expression produces wider modifications of the lips than the other facial expressions and suggests a novel procedure for assessing regional changes from mimicry. Copyright © 2017 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.
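    The two statistics at the heart of this record, the RMS point-to-point distance between registered landmark sets and Cohen's d for effect size, can be sketched as follows. This is a minimal illustration with hypothetical landmark coordinates, not the study's stereophotogrammetry pipeline:

```python
import math
from statistics import mean, variance

def rms_point_to_point(expr_pts, neutral_pts):
    # RMS of Euclidean distances between corresponding landmarks,
    # assuming both point sets are already registered to the neutral scan.
    sq = [sum((e - n) ** 2 for e, n in zip(p, q))
          for p, q in zip(expr_pts, neutral_pts)]
    return math.sqrt(sum(sq) / len(sq))

def cohens_d(a, b):
    # Cohen's d for two independent samples, using the pooled SD.
    pooled = math.sqrt(((len(a) - 1) * variance(a) + (len(b) - 1) * variance(b))
                       / (len(a) + len(b) - 2))
    return (mean(a) - mean(b)) / pooled

# Hypothetical labial landmark displacements (mm) for one expression:
happy = [(3.0, 0.0, 0.0), (0.0, 4.0, 0.0)]
neutral = [(0.0, 0.0, 0.0), (0.0, 0.0, 0.0)]
print(round(rms_point_to_point(happy, neutral), 3))  # → 3.536
```

    In practice, the registration step (aligning each expression scan to the neutral scan before measuring distances) is where most of the methodological effort lies; the distance summary itself is the simple computation above.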

  15. Concurrent development of facial identity and expression discrimination.

    PubMed

    Dalrymple, Kirsten A; Visconti di Oleggio Castello, Matteo; Elison, Jed T; Gobbini, M Ida

    2017-01-01

    Facial identity and facial expression processing both appear to follow a protracted developmental trajectory, yet these trajectories have been studied independently and have not been directly compared. Here we investigated whether these processes develop at the same or different rates using matched identity and expression discrimination tasks. The Identity task begins with a target face that is a morph between two identities (Identity A/Identity B). After a brief delay, the target face is replaced by two choice faces: 100% Identity A and 100% Identity B. Children 5 to 12 years old were asked to pick the choice face most similar to the target identity. The Expression task is matched in format and difficulty to the Identity task, except the targets are morphs between two expressions (Angry/Happy, or Disgust/Surprise). The same children were asked to pick the choice face with the expression most similar to the target expression. There were significant effects of age, with performance on both tasks becoming more accurate and faster with increasing age. Accuracy and reaction times were not significantly different across tasks, and there was no significant Age x Task interaction. Thus, facial identity and facial expression discrimination appear to develop at a similar rate, with comparable improvement on both tasks from age five to twelve. Because our tasks are so closely matched in format and difficulty, they may prove useful for testing face identity and face expression processing in special populations, such as autism or prosopagnosia, where one of these abilities might be impaired.

  16. Emotional facial expressions evoke faster orienting responses, but weaker emotional responses at neural and behavioural levels compared to scenes: A simultaneous EEG and facial EMG study.

    PubMed

    Mavratzakis, Aimee; Herbert, Cornelia; Walla, Peter

    2016-01-01

    In the current study, electroencephalography (EEG) was recorded simultaneously with facial electromyography (fEMG) to determine whether emotional faces and emotional scenes are processed differently at the neural level. In addition, it was investigated whether these differences can be observed at the behavioural level via spontaneous facial muscle activity. Emotional content of the stimuli did not affect early P1 activity. Emotional faces elicited enhanced amplitudes of the face-sensitive N170 component, while its counterpart, the scene-related N100, was not sensitive to the emotional content of scenes. At 220-280 ms, the early posterior negativity (EPN) was enhanced only slightly for fearful as compared to neutral or happy faces. However, its amplitudes were significantly enhanced during processing of scenes with positive content, particularly over the right hemisphere. Scenes of positive content also elicited enhanced spontaneous zygomatic activity from 500-750 ms onwards, while happy faces elicited no such changes. By contrast, both fearful faces and negative scenes elicited enhanced spontaneous corrugator activity at 500-750 ms after stimulus onset. However, relative to baseline, EMG changes occurred earlier for faces (250 ms) than for scenes (500 ms), whereas for scenes the activity changes were more pronounced over the whole viewing period. Taking all effects into account, the data suggest that emotional facial expressions evoke faster attentional orienting, but weaker affective neural activity and emotional behavioural responses, compared to emotional scenes. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.

  17. A study of patient facial expressivity in relation to orthodontic/surgical treatment.

    PubMed

    Nafziger, Y J

    1994-09-01

    A dynamic analysis of the faces of patients seeking aesthetic correction of facial aberrations through orthognathic treatment requires, beyond the routine static records (study models, photographs, and cephalometric tracings), the study of their facial expressions. To classify the units of expressive facial behavior, the mobility of the face was studied with the aid of the facial action coding system (FACS) created by Ekman and Friesen. Using video recordings of faces and photographic images taken from them, the authors modified a technique of facial analysis structured on visual observation of the anatomic basis of movement. The technique is based on defining individual facial expressions and then codifying them through minimal, anatomic action units, which combine to form facial expressions. With the help of FACS, the facial expressions of 18 patients before and after orthognathic surgery and of six control subjects without dentofacial deformation were studied. A total of 6278 facial expressions were registered, from which 18,844 action units were defined. Classifying the facial expressions by subject group over quantified time frames allowed "rules" or "norms" of expression to be established, enabling comparisons of facial expressiveness between patients and control subjects. This study indicates that the facial expressions of the patients were more similar to those of the controls after orthognathic surgery. It was possible to distinguish changes in facial expressivity in patients after dentofacial surgery; the type and degree of change depended on the facial structure before surgery. The changes tended toward functioning identical to that of subjects without dysmorphosis and toward greater lip competence, particularly in the function of the orbicular muscle of the lips, with reduced compensatory activity of the lower lip and the chin. These results are supported by clinical observations and suggest that the FACS technique can provide a coding scheme for the study of facial expression.
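    Tallying coded action units into per-group frequency tables, as this record describes, can be sketched as follows. The AU codes and counts below are purely illustrative, not data from the study:

```python
from collections import Counter

def au_frequencies(codings):
    # Relative frequency of each FACS action unit across all coded
    # expressions; each inner list is one expression's action units.
    counts = Counter(au for expression in codings for au in expression)
    total = sum(counts.values())
    return {au: n / total for au, n in sorted(counts.items())}

# Hypothetical codings for one subject group:
patients_post = [[1, 2, 12], [12, 25], [6, 12]]
print(au_frequencies(patients_post))  # AU 12 dominates in this toy sample
```

    Comparing such frequency tables between patient and control groups, and across time frames, is the kind of quantified comparison the study's "rules" of expression rest on.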

  18. Dynamic facial expressions evoke distinct activation in the face perception network: a connectivity analysis study.

    PubMed

    Foley, Elaine; Rippon, Gina; Thai, Ngoc Jade; Longe, Olivia; Senior, Carl

    2012-02-01

    Very little is known about the neural structures involved in the perception of realistic dynamic facial expressions. In the present study, a unique set of naturalistic dynamic facial emotional expressions was created. Through fMRI and connectivity analysis, a dynamic face perception network was identified, which is demonstrated to extend Haxby et al.'s [Haxby, J. V., Hoffman, E. A., & Gobbini, M. I. The distributed human neural system for face perception. Trends in Cognitive Science, 4, 223-233, 2000] distributed neural system for face perception. This network includes early visual regions, such as the inferior occipital gyrus, which is identified as insensitive to motion or affect but sensitive to the visual stimulus, the STS, identified as specifically sensitive to motion, and the amygdala, recruited to process affect. Measures of effective connectivity between these regions revealed that dynamic facial stimuli were associated with specific increases in connectivity between early visual regions, such as the inferior occipital gyrus and the STS, along with coupling between the STS and the amygdala, as well as the inferior frontal gyrus. These findings support the presence of a distributed network of cortical regions that mediate the perception of different dynamic facial expressions.

  19. Virtual faces expressing emotions: an initial concomitant and construct validity study.

    PubMed

    Joyal, Christian C; Jacob, Laurence; Cigna, Marie-Hélène; Guay, Jean-Pierre; Renaud, Patrice

    2014-01-01

    Facial expressions of emotions are classic stimuli for the study of social cognition. Developing virtual dynamic facial expressions of emotions, however, would open up possibilities for both fundamental and clinical research. For instance, virtual faces allow real-time human-computer feedback loops between physiological measures and the virtual agent. The goal of this study was to provide an initial assessment of the concomitant and construct validity of a newly developed set of virtual faces expressing six fundamental emotions (happiness, surprise, anger, sadness, fear, and disgust). Recognition rates, facial electromyography (zygomatic major and corrugator supercilii muscles), and regional gaze fixation latencies (eye and mouth regions) were compared in 41 adult volunteers (20 ♂, 21 ♀) during the presentation of video clips depicting real vs. virtual adults expressing emotions. Emotions expressed by each set of stimuli were recognized similarly well by men and women. Accordingly, both sets of stimuli elicited similar activation of facial muscles and similar ocular fixation times on eye regions in male and female participants. Further validation studies can be performed with these virtual faces among clinical populations known to present social cognition difficulties. Brain-computer interface studies with feedback-feedforward interactions based on facial emotion expressions can also be conducted with these stimuli.

  20. Automatic Detection of Acromegaly From Facial Photographs Using Machine Learning Methods.

    PubMed

    Kong, Xiangyi; Gong, Shun; Su, Lijuan; Howard, Newton; Kong, Yanguo

    2018-01-01

    Automatic early detection of acromegaly from facial photographs is theoretically possible and could reduce disease burden and increase the probability of cure. In this study, several popular machine learning algorithms were trained on a retrospective development dataset consisting of 527 acromegaly patients and 596 normal subjects. We first used OpenCV to detect the face bounding box, then cropped and resized each face to the same pixel dimensions. From the detected faces, the locations of facial landmarks, which are potential clinical indicators, were extracted. Frontalization was then applied to synthesize frontal-facing views to improve performance. Several popular machine learning methods, including LM, KNN, SVM, RT, CNN, and EM, were used to automatically identify acromegaly from the detected facial photographs, extracted facial landmarks, and synthesized frontal faces. The trained models were evaluated on a separate dataset, half of which had been diagnosed as acromegaly by growth hormone suppression test. The best of the proposed methods showed a PPV of 96%, an NPV of 95%, a sensitivity of 96%, and a specificity of 96%. Artificial intelligence can thus detect acromegaly early and automatically, with high sensitivity and specificity. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
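    The four evaluation metrics reported in this record follow directly from confusion-matrix counts. A minimal sketch, where the counts are hypothetical and chosen only to produce round numbers like those reported:

```python
def screening_metrics(tp, fp, tn, fn):
    # Standard screening-test metrics from confusion-matrix counts.
    return {
        "sensitivity": tp / (tp + fn),  # true-positive rate
        "specificity": tn / (tn + fp),  # true-negative rate
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

# Hypothetical evaluation set: 100 acromegaly cases, 100 controls.
m = screening_metrics(tp=96, fp=4, tn=96, fn=4)
print(m)  # each metric equals 0.96 in this balanced toy example
```

    Note that PPV and NPV, unlike sensitivity and specificity, depend on the case/control ratio of the evaluation set, which is why the record's balanced test set yields similar values for all four.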
