Sample records for facial component classification

  1. Facial Expression Recognition using Multiclass Ensemble Least-Square Support Vector Machine

    NASA Astrophysics Data System (ADS)

    Lawi, Armin; Sya'Rani Machrizzandi, M.

    2018-03-01

    Facial expression is one of the behavioral characteristics of human beings. The use of a biometrics technology system with facial expression characteristics makes it possible to recognize a person’s mood or emotion. The basic components of a facial expression analysis system are face detection, face image extraction, facial classification, and facial expression recognition. This paper uses the Principal Component Analysis (PCA) algorithm to extract facial features with expression parameters, i.e., happy, sad, neutral, angry, fearful, and disgusted. Then a Multiclass Ensemble Least-Squares Support Vector Machine (MELS-SVM) is used for the classification of facial expressions. The MELS-SVM model, evaluated on our 185 different expression images of 10 persons, achieved a high accuracy of 99.998% using the RBF kernel.
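
    As a rough sketch of the pipeline this record describes, the snippet below chains PCA feature extraction with an RBF-kernel SVM in scikit-learn. The image size, label set, and the plain SVC standing in for the authors' ensemble least-squares SVM are assumptions for illustration, not the paper's implementation.

    ```python
    # Sketch: PCA features + RBF-kernel SVM for expression classification.
    # SVC stands in for MELS-SVM, which has no off-the-shelf implementation.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.random((185, 64 * 64))   # 185 flattened face crops (assumed 64x64 grayscale)
    y = rng.integers(0, 6, 185)      # happy, sad, neutral, angry, fearful, disgusted

    model = make_pipeline(
        PCA(n_components=50, whiten=True),          # eigenface-style features
        SVC(kernel="rbf", C=10.0, gamma="scale"),   # RBF kernel, as in the paper
    )
    print(cross_val_score(model, X, y, cv=5).mean())  # ~chance on synthetic data
    ```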

  2. Targeting specific facial variation for different identification tasks.

    PubMed

    Aeria, Gillian; Claes, Peter; Vandermeulen, Dirk; Clement, John Gerald

    2010-09-10

    A conceptual framework that allows faces to be studied and compared objectively with biological validity is presented. The framework is a logical extension of modern morphometrics and statistical shape analysis techniques. Three-dimensional (3D) facial scans were collected from 255 healthy young adults. One scan depicted a smiling facial expression and another scan depicted a neutral expression. These facial scans were modelled in a Principal Component Analysis (PCA) space where Euclidean (ED) and Mahalanobis (MD) distances were used to form similarity measures. Within this PCA space, property pathways were calculated that expressed the direction of change in facial expression. Decomposition of distances into property-independent (D1) and property-dependent (D2) components along these pathways enabled the comparison of two faces in terms of the extent of a smiling expression. The performance of all distances was tested and compared in two types of experiments: classification tasks and a recognition task. In the classification tasks, individual facial scans were assigned to one or more population groups of smiling or neutral scans. The property-dependent (D2) component of both Euclidean and Mahalanobis distances performed best in the classification task, correctly assigning 99.8% of scans to the right population group. The recognition task tested whether a scan of an individual depicting a smiling/neutral expression could be positively identified when shown a scan of the same person depicting a neutral/smiling expression. ED1 and MD1 performed best, correctly identifying 97.8% and 94.8% of individual scans respectively as belonging to the same person despite differences in facial expression. It was concluded that decomposed components are superior to straightforward distances in achieving positive identifications, and that the decomposition presents a novel method for quantifying facial similarity. Additionally, although the undecomposed Mahalanobis distance often used in practice outperformed the Euclidean distance, the opposite held for the decomposed distances. Crown Copyright 2010. Published by Elsevier Ireland Ltd. All rights reserved.
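
    The decomposition the abstract describes reduces to simple vector geometry in PCA space: project the difference between two faces onto the property pathway to get the property-dependent component, and take the orthogonal residual as the property-independent one. A minimal sketch, assuming the pathway is the direction from the mean neutral shape to the mean smiling shape:

    ```python
    # Split the distance between two faces in PCA space into a component along
    # a property pathway (D2, expression change) and an orthogonal residual (D1).
    import numpy as np

    def decompose_distance(a, b, pathway):
        """a, b: PCA coordinates of two faces; pathway: direction of expression change."""
        d = b - a
        u = pathway / np.linalg.norm(pathway)
        d2 = abs(d @ u)                    # property-dependent component
        d1 = np.sqrt(d @ d - d2 ** 2)      # property-independent component
        return d1, d2

    rng = np.random.default_rng(1)
    smiling_mean, neutral_mean = rng.random(20), rng.random(20)  # hypothetical group means
    face_a, face_b = rng.random(20), rng.random(20)              # two faces, 20 PC scores each
    print(decompose_distance(face_a, face_b, smiling_mean - neutral_mean))
    ```

    The Mahalanobis variants follow the same pattern after first whitening the PC scores by their standard deviations.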

  3. The fallopian canal: a comprehensive review and proposal of a new classification.

    PubMed

    Mortazavi, M M; Latif, B; Verma, K; Adeeb, N; Deep, A; Griessenauer, C J; Tubbs, R S; Fukushima, T

    2014-03-01

    The facial nerve follows a complex course through the skull base. Understanding its anatomy is crucial during standard skull base approaches and resection of certain skull base tumors closely related to the nerve, especially, tumors at the cerebellopontine angle. Herein, we review the fallopian canal and its implications in surgical approaches to the skull base. Furthermore, we suggest a new classification. Based on the anatomy and literature, we propose that the meatal segment of the facial nerve be included as a component of the fallopian canal. A comprehensive knowledge of the course of the facial nerve is important to those who treat patients with pathology of or near this cranial nerve.

  4. Processing of Fear and Anger Facial Expressions: The Role of Spatial Frequency

    PubMed Central

    Comfort, William E.; Wang, Meng; Benton, Christopher P.; Zana, Yossi

    2013-01-01

    Spatial frequency (SF) components encode a portion of the affective value expressed in face images. The aim of this study was to estimate the relative weight of specific frequency spectrum bandwidths on the discrimination of anger and fear facial expressions. The general paradigm was a classification of the expression of faces morphed at varying proportions between anger and fear images, in which SF adaptation and SF subtraction are expected to shift the classification of facial emotion. A series of three experiments was conducted. In Experiment 1 subjects classified morphed face images that were unfiltered or filtered to remove either low (<8 cycles/face), middle (12–28 cycles/face), or high (>32 cycles/face) SF components. In Experiment 2 subjects were adapted to unfiltered or filtered prototypical (non-morphed) fear face images and subsequently classified morphed face images. In Experiment 3 subjects were adapted to unfiltered or filtered prototypical fear face images with the phase component randomized before classifying morphed face images. Removing mid-frequency components from the target images shifted classification toward fear. The same shift was observed under adaptation to unfiltered and low- and middle-range filtered fear images. However, when the phase spectrum of the same adaptation stimuli was randomized, no adaptation effect was observed. These results suggest that medium SF components support the perception of fear more than anger at both low and high levels of processing. They also suggest that the effect at the high-level processing stage is related more to high-level featural and/or configural information than to the low-level frequency spectrum. PMID:23637687
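
    Band-limiting stimuli in cycles/face, as in Experiment 1, amounts to masking an annulus of the image's frequency spectrum. A sketch under the assumption of a square image spanning exactly one face width:

    ```python
    # Remove a spatial-frequency band from a face image via FFT masking.
    # Frequencies are in cycles/face, assuming a square image = one face width.
    import numpy as np

    def filter_band(img, keep):
        """keep(r) -> boolean mask over radial frequency r in cycles/face."""
        f = np.fft.fftshift(np.fft.fft2(img))
        n = img.shape[0]
        freqs = np.fft.fftshift(np.fft.fftfreq(n, d=1.0 / n))  # integer cycles/image
        fy, fx = np.meshgrid(freqs, freqs, indexing="ij")
        r = np.hypot(fx, fy)
        f[~keep(r)] = 0                       # zero out the rejected band
        return np.real(np.fft.ifft2(np.fft.ifftshift(f)))

    img = np.random.default_rng(2).random((128, 128))
    no_mid = filter_band(img, lambda r: ~((r >= 12) & (r <= 28)))  # drop 12-28 c/f
    ```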

  5. Face recognition using an enhanced independent component analysis approach.

    PubMed

    Kwak, Keun-Chang; Pedrycz, Witold

    2007-03-01

    This paper is concerned with an enhanced independent component analysis (ICA) and its application to face recognition. Typically, face representations obtained by ICA involve unsupervised learning and high-order statistics. In this paper, we develop an enhancement of the generic ICA by augmenting this method with Fisher linear discriminant analysis (LDA); hence its abbreviation, FICA. The FICA is systematically developed and presented along with its underlying architecture. A comparative analysis explores four distance metrics, as well as classification with support vector machines (SVMs). We demonstrate that the FICA approach leads to the formation of well-separated classes in a low-dimensional subspace and is endowed with a great deal of insensitivity to large variations in illumination and facial expression. Comprehensive experiments are completed on the facial-recognition technology (FERET) face database; a comparative analysis demonstrates that FICA comes with improved classification rates when compared with some other conventional approaches such as eigenface, fisherface, and ICA itself.
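
    A minimal sketch of the ICA-then-LDA idea (unsupervised face codes made class-discriminative by a supervised projection); the component counts, image size, and the nearest-neighbour classifier standing in for the paper's distance metrics and SVMs are assumptions:

    ```python
    # Sketch: FastICA face codes refined by Fisher LDA (a FICA-like pipeline).
    import numpy as np
    from sklearn.decomposition import FastICA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(3)
    X = rng.random((200, 32 * 32))   # flattened face images (assumed 32x32)
    y = rng.integers(0, 10, 200)     # 10 identities (assumed)

    model = make_pipeline(
        FastICA(n_components=40, max_iter=1000, random_state=0),  # unsupervised codes
        LinearDiscriminantAnalysis(n_components=9),               # <= n_classes - 1
        KNeighborsClassifier(n_neighbors=1),                      # simple distance classifier
    )
    model.fit(X, y)
    ```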

  6. Computer Recognition of Facial Profiles

    DTIC Science & Technology

    1974-08-01

    A system for the recognition of human faces from facial profiles … provide a fair test of the classification system. The work of Goldstein, Harmon, and Lesk [8] indicates, however, that for facial recognition, a ten class …

  7. Subject-specific and pose-oriented facial features for face recognition across poses.

    PubMed

    Lee, Ping-Han; Hsu, Gee-Sern; Wang, Yun-Wen; Hung, Yi-Ping

    2012-10-01

    Most face recognition scenarios assume that frontal faces or mug shots are available for enrollment in the database, while faces of other poses are collected in the probe set. Given a face from the probe set, one needs to determine whether a match in the database exists. This is under the assumption that in forensic applications, most suspects have their mug shots available in the database, and face recognition aims at recognizing the suspects when their faces of various poses are captured by a surveillance camera. This paper considers a different scenario: given a face with multiple poses available, which may or may not include a mug shot, develop a method to recognize the face with poses different from those captured. That is, given two disjoint sets of poses of a face, one for enrollment and the other for recognition, this paper reports a method best suited for handling such cases. The proposed method includes feature extraction and classification. For feature extraction, we first cluster the poses of each subject's face in the enrollment set into a few pose classes and then decompose the appearance of the face in each pose class using an Embedded Hidden Markov Model, which allows us to define a set of subject-specific and pose-oriented (SSPO) facial components for each subject. For classification, an Adaboost weighting scheme is used to fuse the component classifiers with SSPO component features. The proposed method is proven to outperform other approaches, including a component-based classifier with local facial features cropped manually, in an extensive performance evaluation study.

  8. Grouping patients for masseter muscle genotype-phenotype studies.

    PubMed

    Moawad, Hadwah Abdelmatloub; Sinanan, Andrea C M; Lewis, Mark P; Hunt, Nigel P

    2012-03-01

    To use various facial classifications, including either/both vertical and horizontal facial criteria, to assess their effects on the interpretation of masseter muscle (MM) gene expression. Fresh MM biopsies were obtained from 29 patients (age, 16-36 years) with various facial phenotypes. Based on clinical and cephalometric analysis, patients were grouped using three different classifications: (1) basic vertical, (2) basic horizontal, and (3) combined vertical and horizontal. Gene expression levels of the myosin heavy chain genes MYH1, MYH2, MYH3, MYH6, MYH7, and MYH8 were recorded using quantitative reverse transcriptase polymerase chain reaction (RT-PCR) and were related to the various classifications. The significance level for statistical analysis was set at P ≤ .05. Using classification 1, none of the MYH genes were found to be significantly different between long face (LF) patients and the average vertical group. Using classification 2, MYH3, MYH6, and MYH7 genes were found to be significantly upregulated in retrognathic patients compared with prognathic and average horizontal groups. Using classification 3, only the MYH7 gene was found to be significantly upregulated in retrognathic LF compared with prognathic LF, prognathic average vertical faces, and average vertical and horizontal groups. The use of basic vertical or basic horizontal facial classifications may not be sufficient for genetics-based studies of facial phenotypes. Prognathic and retrognathic facial phenotypes have different MM gene expressions; therefore, it is not recommended to combine them into one single group, even though they may have a similar vertical facial phenotype.

  9. The Role of Facial Attractiveness and Facial Masculinity/Femininity in Sex Classification of Faces

    PubMed Central

    Hoss, Rebecca A.; Ramsey, Jennifer L.; Griffin, Angela M.; Langlois, Judith H.

    2005-01-01

    We tested whether adults (Experiment 1) and 4–5-year-old children (Experiment 2) identify the sex of high attractive faces faster and more accurately than low attractive faces in a reaction time task. We also assessed whether facial masculinity/femininity facilitated identification of sex. Results showed that attractiveness facilitated adults’ sex classification of both female and male faces and children’s sex classification of female, but not male, faces. Moreover, attractiveness affected the speed and accuracy of sex classification independent of masculinity/femininity. High masculinity in male faces, but not high femininity in female faces, also facilitated sex classification for both adults and children. These findings provide important new data on how the facial cues of attractiveness and masculinity/femininity contribute to the task of sex classification and provide evidence for developmental differences in how adults and children use these cues. Additionally, these findings provide support for Langlois and Roggman’s (1990) averageness theory of attractiveness. PMID:16457167

  10. Gender classification under extended operating conditions

    NASA Astrophysics Data System (ADS)

    Rude, Howard N.; Rizki, Mateen

    2014-06-01

    Gender classification is a critical component of a robust image security system. Many techniques exist to perform gender classification using facial features. In contrast, this paper explores gender classification using body features extracted from clothed subjects. Several of the most effective types of features for gender classification identified in the literature were implemented and applied to the newly developed Seasonal Weather And Gender (SWAG) dataset. SWAG contains video clips of approximately 2000 samples of human subjects captured over a period of several months. The subjects are wearing casual business attire and outer garments appropriate for the specific weather conditions observed in the Midwest. Results are presented from a series of experiments that compare the classification accuracy of systems incorporating various types and combinations of features, applied to multiple looks at subjects at different image resolutions, to establish a baseline performance for gender classification.

  11. Real-time speech-driven animation of expressive talking faces

    NASA Astrophysics Data System (ADS)

    Liu, Jia; You, Mingyu; Chen, Chun; Song, Mingli

    2011-05-01

    In this paper, we present a real-time facial animation system in which speech drives mouth movements and facial expressions synchronously. Considering five basic emotions, a hierarchical structure with an upper layer of emotion classification is established. Based on the recognized emotion label, the lower-layer classification at the sub-phonemic level is modelled on the relationship between acoustic features of frames and audio labels in phonemes. Using certain constraints, the predicted emotion labels of speech are adjusted to obtain the facial expression labels, which are combined with sub-phonemic labels. The combinations are mapped into facial action units (FAUs), and audio-visual synchronized animation with mouth movements and facial expressions is generated by morphing between FAUs. The experimental results demonstrate that the two-layer structure succeeds in both emotion and sub-phonemic classification, and the synthesized facial sequences reach a comparatively convincing quality.

  12. Wavelet decomposition based principal component analysis for face recognition using MATLAB

    NASA Astrophysics Data System (ADS)

    Sharma, Mahesh Kumar; Sharma, Shashikant; Leeprechanon, Nopbhorn; Ranjan, Aashish

    2016-03-01

    For the realization of face recognition systems in the static as well as the real-time frame, algorithms such as principal component analysis, independent component analysis, linear discriminant analysis, neural networks and genetic algorithms have been used for decades. This paper discusses a wavelet decomposition based principal component analysis approach for face recognition. Principal component analysis is chosen over other algorithms due to its relative simplicity, efficiency, and robustness. The term face recognition stands for identifying a person from his facial gestures; it resembles factor analysis in some sense, i.e., the extraction of the principal components of an image. Principal component analysis suffers from some drawbacks, mainly poor discriminatory power and, in particular, a large computational load in finding eigenvectors. These drawbacks can be greatly reduced by combining wavelet transform decomposition for feature extraction with principal component analysis for pattern representation and classification, analyzing the facial gestures in the space and time domains. From the experimental results, it is envisaged that this face recognition method yields a significant percentage improvement in recognition rate as well as better computational efficiency.
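
    The computational saving comes from shrinking each image to its low-frequency approximation subband before the eigen-decomposition. A minimal sketch with PyWavelets (the wavelet choice and image size are assumptions):

    ```python
    # Wavelet-then-PCA face representation: one 2-D DWT reduces each image to
    # its LL subband, roughly quartering the cost of the eigen-decomposition.
    import numpy as np
    import pywt
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(4)
    faces = rng.random((100, 64, 64))                       # assumed 64x64 face crops

    ll = np.stack([pywt.dwt2(f, "db2")[0] for f in faces])  # LL approximation subbands
    X = ll.reshape(len(ll), -1)
    pca = PCA(n_components=30).fit(X)
    codes = pca.transform(X)                                # features for classification
    ```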

  13. Hierarchical Spatio-Temporal Probabilistic Graphical Model with Multiple Feature Fusion for Binary Facial Attribute Classification in Real-World Face Videos.

    PubMed

    Demirkus, Meltem; Precup, Doina; Clark, James J; Arbel, Tal

    2016-06-01

    Recent literature shows that facial attributes, i.e., contextual facial information, can be beneficial for improving the performance of real-world applications, such as face verification, face recognition, and image search. Examples of face attributes include gender, skin color, facial hair, etc. How to robustly obtain these facial attributes (traits) is still an open problem, especially in the presence of the challenges of real-world environments: non-uniform illumination conditions, arbitrary occlusions, motion blur and background clutter. What makes this problem even more difficult is the enormous variability presented by the same subject, due to arbitrary face scales, head poses, and facial expressions. In this paper, we focus on the problem of facial trait classification in real-world face videos. We have developed a fully automatic hierarchical and probabilistic framework that models the collective set of frame class distributions and feature spatial information over a video sequence. The experiments are conducted on a large real-world face video database that we have collected, labelled and made publicly available. The proposed method is flexible enough to be applied to any facial classification problem. Experiments on this large, real-world video database, McGillFaces [1], of 18,000 video frames reveal that the proposed framework outperforms alternative approaches by up to 16.96% and 10.13% for the facial attributes of gender and facial hair, respectively.

  14. Novel Noninvasive Brain Disease Detection System Using a Facial Image Sensor

    PubMed Central

    Shu, Ting; Zhang, Bob; Tang, Yuan Yan

    2017-01-01

    Brain disease, including any conditions or disabilities that affect the brain, is fast becoming a leading cause of death. The traditional diagnostic methods of brain disease are time-consuming, inconvenient and non-patient friendly. As more and more individuals undergo examinations to determine if they suffer from any form of brain disease, developing noninvasive, efficient, and patient friendly detection systems will be beneficial. Therefore, in this paper, we propose a novel noninvasive brain disease detection system based on the analysis of facial colors. The system consists of four components. A facial image is first captured through a specialized sensor, and four facial key blocks are then located automatically from the various facial regions. Color features are extracted from each block to form a feature vector for classification via the Probabilistic Collaborative based Classifier. To thoroughly test the system and its performance, seven facial key block combinations were experimented with. The best result was achieved using the second facial key block, where it showed that the Probabilistic Collaborative based Classifier is the most suitable. The overall performance of the proposed system achieves an accuracy of 95%, a sensitivity of 94.33%, a specificity of 95.67%, and an average processing time (for one sample) of <1 min for brain disease detection. PMID:29292716

  15. A study on facial expressions recognition

    NASA Astrophysics Data System (ADS)

    Xu, Jingjing

    2017-09-01

    In terms of communication, postures and facial expressions of feelings such as happiness, anger and sadness play important roles in conveying information. With the development of the technology, a number of algorithms dealing with face alignment, face landmark detection, classification, facial landmark localization and pose estimation have recently been put forward. However, there are many challenges and problems that need to be addressed. In this paper, several techniques for handling facial expression recognition and poses are summarized and analyzed: a pose-indexed multi-view method for face alignment, robust facial landmark detection under significant head pose and occlusion, partitioning of the input domain for classification, and robust statistical face frontalization.

  16. Neural Processing of Facial Identity and Emotion in Infants at High-Risk for Autism Spectrum Disorders

    PubMed Central

    Fox, Sharon E.; Wagner, Jennifer B.; Shrock, Christine L.; Tager-Flusberg, Helen; Nelson, Charles A.

    2013-01-01

    Deficits in face processing and social impairment are core characteristics of autism spectrum disorder. The present work examined 7-month-old infants at high-risk for developing autism and typically developing controls at low-risk, using a face perception task designed to differentiate between the effects of face identity and facial emotions on neural response using functional Near-Infrared Spectroscopy. In addition, we employed independent component analysis, as well as a novel method of condition-related component selection and classification to identify group differences in hemodynamic waveforms and response distributions associated with face and emotion processing. The results indicate similarities of waveforms, but differences in the magnitude, spatial distribution, and timing of responses between groups. These early differences in local cortical regions and the hemodynamic response may, in turn, contribute to differences in patterns of functional connectivity. PMID:23576966

  17. The effects of facial color and inversion on the N170 event-related potential (ERP) component.

    PubMed

    Minami, T; Nakajima, K; Changvisommid, L; Nakauchi, S

    2015-12-17

    Faces are important for social interaction because much can be perceived from facial details, including a person's race, age, and mood. Recent studies have shown that both configural (e.g., face shape and inversion) and surface information (e.g., surface color and reflectance properties) are important for face perception. Therefore, the present study examined the effects of facial color and face inversion on event-related potential (ERP) responses, particularly the N170 component. Stimuli consisted of natural and bluish-colored faces. Faces were presented in both upright and upside-down orientations. An ANOVA was used to analyze N170 amplitudes and verify the effects of the main independent variables. Analysis of N170 amplitude revealed a significant interaction between stimulus orientation and color. Subsequent analysis indicated that N170 was larger for bluish-colored faces than natural-colored faces, and that N170 to natural-colored faces was larger in response to inverted stimuli than to upright stimuli. Additionally, a multivariate pattern analysis (MVPA) investigated face-processing dynamics without any prior assumptions. The analysis distinguished, above chance, both facial color and orientation from single-trial electroencephalogram (EEG) signals. Decoding performance for color classification of inverted faces was significantly diminished compared with the upright orientation. This suggests that orientation processing is predominant over facial color processing. Taken together, the present findings elucidate the temporal and spatial distribution of orientation and color processing during face processing. Copyright © 2015 IBRO. Published by Elsevier Ltd. All rights reserved.
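
    Single-trial decoding of this kind is typically a regularized linear classifier run over flattened channel × time epochs with cross-validation. A sketch of such an MVPA (the epoch dimensions, montage, and classifier choice are assumptions, not the authors' exact analysis):

    ```python
    # Sketch: decode stimulus colour (natural vs. bluish) from single-trial EEG.
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(5)
    trials = rng.standard_normal((200, 64, 50))   # 200 trials x 64 channels x 50 samples
    labels = rng.integers(0, 2, 200)              # 0 = natural colour, 1 = bluish

    X = trials.reshape(len(trials), -1)           # flatten channel x time epochs
    clf = make_pipeline(StandardScaler(), LinearSVC(dual=False))
    print(cross_val_score(clf, X, labels, cv=5).mean())   # ~0.5 on random data
    ```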

  18. A PCA-Based method for determining craniofacial relationship and sexual dimorphism of facial shapes.

    PubMed

    Shui, Wuyang; Zhou, Mingquan; Maddock, Steve; He, Taiping; Wang, Xingce; Deng, Qingqiong

    2017-11-01

    Previous studies have used principal component analysis (PCA) to investigate the craniofacial relationship, as well as sex determination using facial factors. However, few studies have investigated the extent to which the choice of principal components (PCs) affects the analysis of craniofacial relationship and sexual dimorphism. In this paper, we propose a PCA-based method for visual and quantitative analysis, using 140 samples of 3D heads (70 male and 70 female), produced from computed tomography (CT) images. There are two parts to the method. First, skull and facial landmarks are manually marked to guide the model's registration so that dense corresponding vertices occupy the same relative position in every sample. Statistical shape spaces of the skull and face in dense corresponding vertices are constructed using PCA. Variations in these vertices, captured in every principal component (PC), are visualized to observe shape variability. The correlations of skull- and face-based PC scores are analysed, and linear regression is used to fit the craniofacial relationship. We compute the PC coefficients of a face based on this craniofacial relationship and the PC scores of a skull, and apply the coefficients to estimate a 3D face for the skull. To evaluate the accuracy of the computed craniofacial relationship, the mean and standard deviation of every vertex between the two models are computed, where these models are reconstructed using real PC scores and coefficients. Second, each PC in facial space is analysed for sex determination, for which support vector machines (SVMs) are used. We examined the correlation between PCs and sex, and explored the extent to which the choice of PCs affects the expression of sexual dimorphism. Our results suggest that skull- and face-based PCs can be used to describe the craniofacial relationship and that the accuracy of the method can be improved by using an increased number of face-based PCs. The results show that the accuracy of the sex classification is related to the choice of PCs. The highest sex classification rate is 91.43% using our method. Copyright © 2017 Elsevier Ltd. All rights reserved.
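
    The craniofacial relationship described here boils down to a linear map fitted from skull PC scores to face PC scores; a predicted face is then reconstructed from the estimated face PCs. A minimal sketch with synthetic scores (the sample count matches the paper's 140 heads; the PC dimensions are assumptions):

    ```python
    # Fit a linear skull-PCs -> face-PCs map, then predict face PCs for a skull.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(6)
    skull_pcs = rng.standard_normal((140, 20))    # 20 skull PC scores per sample (assumed)
    true_map = rng.standard_normal((20, 30))      # hidden linear relationship
    face_pcs = skull_pcs @ true_map + 0.1 * rng.standard_normal((140, 30))

    reg = LinearRegression().fit(skull_pcs, face_pcs)
    est = reg.predict(skull_pcs[:1])              # face PC scores for one skull;
    # back-projecting est through the face PCA basis would yield a 3-D face estimate.
    ```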

  19. Cysts of the oro-facial region: A Nigerian experience

    PubMed Central

    Lawal, AO; Adisa, AO; Sigbeku, OF

    2012-01-01

    Aim: Though many studies have examined cysts of the jaws, most of them have focused on a group of cysts and only a few have examined cysts based on a particular classification. The aim of this study is to review cysts of the oro-facial region seen at a tertiary health centre in Ibadan and to categorize these cases based on the Lucas, Killey and Kay, and WHO classifications. Materials and Methods: All histologically diagnosed oro-facial cysts were retrieved from the oral pathology archives. Information concerning cyst type, topography, age at time of diagnosis and gender of patients was gathered. The data obtained were analyzed with SPSS version 18.0.1 software. Results: A total of 92 histologically diagnosed oro-facial cysts, from 60 (65.2%) males and 32 (34.8%) females, were seen. The age range was 4 to 73 years, with a mean age of 27.99 ± 15.26 years. The peak incidence was in the third decade. The mandible/maxilla ratio was 1.5:1. The apical periodontal cyst was the most common type, accounting for 50% (n = 46) of the total cysts observed. Using the WHO classification, cysts of the soft tissues of the head, face and neck were overwhelmingly more common in males than females, with a ratio of 14:3, while non-epithelial cysts occurred at a 3:1 male/female ratio. Conclusion: This study showed similar findings with regard to type, site and age incidence of oro-facial cysts compared with previous studies, and also showed that the WHO classification protocol was the most comprehensive classification method for oro-facial cysts. PMID:22923885

  20. Appearance-based human gesture recognition using multimodal features for human computer interaction

    NASA Astrophysics Data System (ADS)

    Luo, Dan; Gao, Hua; Ekenel, Hazim Kemal; Ohya, Jun

    2011-03-01

    The use of gesture as a natural interface plays a vitally important role in achieving intelligent Human Computer Interaction (HCI). Human gestures include different components of visual actions, such as motion of the hands, facial expression, and torso, to convey meaning. So far, in the field of gesture recognition, most previous works have focused on the manual component of gestures. In this paper, we present an appearance-based multimodal gesture recognition framework, which combines different groups of features, such as facial expression features and hand motion features, extracted from image frames captured by a single web camera. We consider 12 classes of human gestures with facial expressions, covering neutral, negative and positive meanings, drawn from American Sign Language (ASL). We combine the features at two levels by employing two fusion strategies. At the feature level, an early feature combination is performed by concatenating and weighting different feature groups, and LDA is used to choose the most discriminative elements by projecting the features onto a discriminative expression space. The second strategy is applied at the decision level: weighted decisions from single modalities are fused in a later stage. A condensation-based algorithm is adopted for classification. We collected a data set with three to seven recording sessions and conducted experiments with the combination techniques. Experimental results showed that facial analysis improves hand gesture recognition and that decision-level fusion performs better than feature-level fusion.

  1. Comparing Facial 3D Analysis With DNA Testing to Determine Zygosities of Twins.

    PubMed

    Vuollo, Ville; Sidlauskas, Mantas; Sidlauskas, Antanas; Harila, Virpi; Salomskiene, Loreta; Zhurov, Alexei; Holmström, Lasse; Pirttiniemi, Pertti; Heikkinen, Tuomo

    2015-06-01

    The aim of this study was to compare facial 3D analysis to DNA testing in twin zygosity determinations. Facial 3D images of 106 pairs of young adult Lithuanian twins were taken with a stereophotogrammetric device (3dMD, Atlanta, Georgia) and zygosity was determined according to similarity of facial form. Statistical pattern recognition methodology was used for classification. The results showed that in 75% to 90% of the cases, zygosity determinations were similar to DNA-based results. There were 81 different classification scenarios, combining 3 groups, 3 features, 3 different scaling methods, and 3 threshold levels. It appeared that coincidence with a 0.5 mm tolerance is the most suitable feature for classification. Also, leaving out scaling improves results in most cases. Scaling was expected to equalize the magnitude of differences and therefore lead to better recognition performance. Still, better classification features and a more effective scaling method, or classification in different facial areas, could further improve the results. In most of the cases, male pair zygosity recognition was at a higher level than for female pairs. Erroneously classified twin pairs appear to be obvious outliers in the sample. In particular, faces of young dizygotic (DZ) twins may be so similar that it is very hard to define a feature that would help classify the pair as DZ. Correspondingly, monozygotic (MZ) twins may have faces with quite different shapes. Such anomalous twin pairs are interesting exceptions, but they form a considerable portion of both zygosity groups.
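
    The coincidence feature is straightforward to compute on registered scans: the fraction of corresponding surface points lying within a distance tolerance. A sketch assuming the two meshes are already in dense point correspondence:

    ```python
    # "Coincidence" feature for twin zygosity: the fraction of corresponding
    # surface points on two registered facial scans within a tolerance (mm).
    import numpy as np

    def coincidence(scan_a, scan_b, tol=0.5):
        """scan_a, scan_b: (n_points, 3) corresponding vertices in mm."""
        dist = np.linalg.norm(scan_a - scan_b, axis=1)
        return np.mean(dist <= tol)

    rng = np.random.default_rng(7)
    twin1 = rng.random((5000, 3)) * 100
    twin2 = twin1 + rng.normal(scale=0.3, size=twin1.shape)  # simulated near-identical face
    print(coincidence(twin1, twin2))   # a high value suggests monozygotic
    ```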

  2. Physical therapy for facial paralysis: a tailored treatment approach.

    PubMed

    Brach, J S; VanSwearingen, J M

    1999-04-01

    Bell palsy is an acute facial paralysis of unknown etiology. Although recovery from Bell palsy is expected without intervention, clinical experience suggests that recovery is often incomplete. This case report describes a classification system used to guide treatment and to monitor recovery of an individual with facial paralysis. The patient was a 71-year-old woman with complete left facial paralysis secondary to Bell palsy. Signs and symptoms were assessed using a standardized measure of facial impairment (Facial Grading System [FGS]) and questions regarding functional limitations. A treatment-based category was assigned based on signs and symptoms. Rehabilitation involved muscle re-education exercises tailored to the treatment-based category. In 14 physical therapy sessions over 13 months, the patient had improved facial impairments (initial FGS score= 17/100, final FGS score= 68/100) and no reported functional limitations. Recovery from Bell palsy can be a complicated and lengthy process. The use of a classification system may help simplify the rehabilitation process.

  3. Automatic prediction of facial trait judgments: appearance vs. structural models.

    PubMed

    Rojas, Mario; Masip, David; Todorov, Alexander; Vitria, Jordi

    2011-01-01

    Evaluating other individuals with respect to personality characteristics plays a crucial role in human relations and it is the focus of attention for research in diverse fields such as psychology and interactive computer systems. In psychology, face perception has been recognized as a key component of this evaluation system. Multiple studies suggest that observers use face information to infer personality characteristics. Interactive computer systems are trying to take advantage of these findings and apply them to increase the natural aspect of interaction and to improve the performance of interactive computer systems. Here, we experimentally test whether the automatic prediction of facial trait judgments (e.g. dominance) can be made by using the full appearance information of the face and whether a reduced representation of its structure is sufficient. We evaluate two separate approaches: a holistic representation model using the facial appearance information and a structural model constructed from the relations among facial salient points. State of the art machine learning methods are applied to a) derive a facial trait judgment model from training data and b) predict a facial trait value for any face. Furthermore, we address the issue of whether there are specific structural relations among facial points that predict perception of facial traits. Experimental results over a set of labeled data (9 different trait evaluations) and classification rules (4 rules) suggest that a) prediction of perception of facial traits is learnable by both holistic and structural approaches; b) the most reliable prediction of facial trait judgments is obtained by certain type of holistic descriptions of the face appearance; and c) for some traits such as attractiveness and extroversion, there are relationships between specific structural features and social perceptions.

  4. Headaches of otolaryngological interest: current status while awaiting revision of classification. Practical considerations and expectations.

    PubMed Central

    FARRI, A.; ENRICO, A.; FARRI, F.

    2012-01-01

    In 1988, diagnostic criteria for headaches were drawn up by the International Headache Society (IHS) and divided into headaches, cranial neuralgias and facial pain. The 2nd edition of the International Classification of Headache Disorders (ICHD) was produced in 2004, and still provides a dynamic and useful instrument for clinical practice. We have examined the current ICHD, which comprises 14 groups. The first four cover primary headaches, with "benign paroxysmal vertigo of childhood" being the form of migraine of interest to otolaryngologists; groups 5 to 12 classify "secondary headaches"; group 11 is formed of "headache or facial pain attributed to disorder of cranium, neck, eyes, ears, nose, sinuses, teeth, mouth or other facial or cranial structures"; group 13, consisting of "cranial neuralgias and central causes of facial pain", is also of relevance to otolaryngology. Neither the current classification system nor the original one has a satisfactory place for migraine-associated vertigo. Another critical point of the classification concerns cranio-facial pain syndromes such as Sluder's neuralgia, previously included in the 1988 classification among cluster headaches, and now included in the section on "cranial neuralgias and central causes of facial pain", even though Sluder's neuralgia has not been adequately validated. As we have highlighted in our studies, there are considerable similarities between Sluder's syndrome and cluster headaches. The main features distinguishing the two are the tendency to cluster over time, found only in cluster headaches, and the distribution of pain, with greater nasal manifestations in the case of Sluder's syndrome. We believe that it is better and clearer, particularly on the basis of our clinical experience and published studies, to include this nosological entity, which is clearly distinct from an otolaryngological point of view, as a variant of cluster headache. We agree with experts in the field of headaches, such as Olesen and Nappi, who contributed to previous classifications, on the need for a revised classification, particularly with regard to secondary headaches. According to the current Committee on headaches, the updated version of the classification, presently under study, is due to be published soon; it is our hope that this revised version will take into account some of the above considerations. PMID:22767967

  5. Symmetrical and Asymmetrical Interactions between Facial Expressions and Gender Information in Face Perception.

    PubMed

    Liu, Chengwei; Liu, Ying; Iqbal, Zahida; Li, Wenhui; Lv, Bo; Jiang, Zhongqing

    2017-01-01

    To investigate the interaction between facial expressions and facial gender information during face perception, the present study matched the intensities of the two types of information in face images and then adopted the orthogonal condition of the Garner Paradigm to present the images to participants, who were required to judge the gender and expression of the faces; the gender and expression presentations were varied orthogonally. Gender and expression processing displayed a mutual interaction. On the one hand, the judgment of angry expressions occurred faster when presented with male facial images; on the other hand, the classification of the female gender occurred faster when presented with a happy facial expression than with an angry facial expression. According to the event-related potential results, the expression classification was influenced by gender during the face structural processing stage (as indexed by N170), which indicates the promotion or interference of facial gender with the coding of facial expression features. However, gender processing was affected by facial expressions at more stages, including the early (P1) and late (LPC) stages of perceptual processing, reflecting that emotional expression influences gender processing mainly by directing attention.

  6. LBP and SIFT based facial expression recognition

    NASA Astrophysics Data System (ADS)

    Sumer, Omer; Gunes, Ece O.

    2015-02-01

    This study compares the performance of local binary patterns (LBP) and the scale invariant feature transform (SIFT) with support vector machines (SVM) in automatic classification of discrete facial expressions. Facial expression recognition is a multiclass classification problem, and seven classes are classified: happiness, anger, sadness, disgust, surprise, fear and contempt. Using SIFT feature vectors and a linear SVM, 93.1% mean accuracy is acquired on the CK+ database. On the other hand, the performance of the LBP-based classifier with a linear SVM is reported on SFEW using the strictly person independent (SPI) protocol. Seven-class mean accuracy on SFEW is 59.76%. Experiments on both databases showed that LBP features can be used in a fairly descriptive way if a good localization of facial points and a good partitioning strategy are followed.
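
    A common form of the LBP descriptor mentioned here is a grid of uniform-pattern histograms fed to a linear SVM; the sketch below follows that recipe with scikit-image and scikit-learn (the grid size, radius, and image size are assumptions):

    ```python
    # Grid-wise uniform-LBP histograms + linear SVM for expression classification.
    import numpy as np
    from skimage.feature import local_binary_pattern
    from sklearn.svm import LinearSVC

    def lbp_features(img, grid=4, P=8, R=1):
        lbp = local_binary_pattern(img, P, R, method="uniform")  # P + 2 = 10 codes
        h, w = lbp.shape
        feats = []
        for i in range(grid):
            for j in range(grid):
                cell = lbp[i * h // grid:(i + 1) * h // grid,
                           j * w // grid:(j + 1) * w // grid]
                hist, _ = np.histogram(cell, bins=P + 2, range=(0, P + 2), density=True)
                feats.append(hist)
        return np.concatenate(feats)     # one histogram per grid cell

    rng = np.random.default_rng(8)
    X = np.stack([lbp_features((rng.random((64, 64)) * 255).astype(np.uint8))
                  for _ in range(70)])
    y = rng.integers(0, 7, 70)           # seven expression classes
    LinearSVC(dual=False).fit(X, y)
    ```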

  7. 15 years of research on Oral-Facial-Digital syndromes: from 1 to 16 causal genes

    PubMed Central

    Bruel, Ange-Line; Franco, Brunella; Duffourd, Yannis; Thevenon, Julien; Jego, Laurence; Lopez, Estelle; Deleuze, Jean-François; Doummar, Diane; Giles, Rachel H.; Johnson, Colin A.; Huynen, Martijn A.; Chevrier, Véronique; Burglen, Lydie; Morleo, Manuela; Desguerres, Isabelle; Pierquin, Geneviève; Doray, Bérénice; Gilbert-Dussardier, Brigitte; Reversade, Bruno; Steichen-Gersdorf, Elisabeth; Baumann, Clarisse; Panigrahi, Inusha; Fargeot-Espaliat, Anne; Dieux, Anne; David, Albert; Goldenberg, Alice; Bongers, Ernie; Gaillard, Dominique; Argente, Jesús; Aral, Bernard; Gigot, Nadège; St-Onge, Judith; Birnbaum, Daniel; Phadke, Shubha R.; Cormier-Daire, Valérie; Eguether, Thibaut; Pazour, Gregory J.; Herranz-Pérez, Vicente; Lee, Jaclyn S.; Pasquier, Laurent; Loget, Philippe; Saunier, Sophie; Mégarbané, André; Rosnet, Olivier; Leroux, Michel R.; Wallingford, John B.; Blacque, Oliver E.; Nachury, Maxence V.; Attie-Bitach, Tania; Rivière, Jean-Baptiste; Faivre, Laurence; Thauvin-Robinet, Christel

    2017-01-01

    Oral-facial-digital syndromes (OFDS) gather rare genetic disorders characterized by facial, oral and digital abnormalities associated with a wide range of additional features (polycystic kidney disease, cerebral malformations and several others) to delineate a growing list of OFD subtypes. The most frequent, OFD type I, is caused by a heterozygous mutation in the OFD1 gene encoding a centrosomal protein. The wide clinical heterogeneity of OFDS suggests the involvement of other ciliary genes. For 15 years, we have aimed to identify the molecular bases of OFDS. This effort has been greatly helped by the recent development of whole exome sequencing (WES). Here, we present all our published and unpublished results for WES in 24 OFDS cases. We identified causal variants in five new genes (C2CD3, TMEM107, INTU, KIAA0753, IFT57) and related the clinical spectrum of four genes in other ciliopathies (C5orf42, TMEM138, TMEM231, WDPCP) to OFDS. Mutations were also detected in two genes previously implicated in OFDS. Functional studies revealed the involvement of centriole elongation, transition zone and intraflagellar transport defects in OFDS, thus characterizing three ciliary protein modules: the complex KIAA0753-FOPNL-OFD1, a regulator of centriole elongation; the MKS module, a major component of the transition zone; and the CPLANE complex necessary for IFT-A assembly. OFDS now appear to be a distinct subgroup of ciliopathies with wide heterogeneity, which makes the initial classification obsolete. A clinical classification restricted to the three frequent/well-delineated subtypes could be proposed, and for patients who do not fit one of these 3 main subtypes, a further classification could be based on the genotype. PMID:28289185

  8. An introductory analysis of digital infrared thermal imaging guided oral cancer detection using multiresolution rotation invariant texture features

    NASA Astrophysics Data System (ADS)

    Chakraborty, M.; Das Gupta, R.; Mukhopadhyay, S.; Anjum, N.; Patsa, S.; Ray, J. G.

    2017-03-01

    This manuscript presents an analytical treatment of the feasibility of multi-scale Gabor filter bank responses for non-invasive oral cancer pre-screening and detection in the long infrared spectrum. The incapability of present healthcare technology to detect oral cancer at a budding stage manifests in a high mortality rate. The paper contributes a step towards automation in non-invasive computer-aided oral cancer detection using an amalgamation of image processing and machine intelligence paradigms. Previous works have shown a discriminative difference in facial temperature distribution between normal subjects and patients. The proposed work, for the first time, exploits this difference further by representing the facial Region of Interest (ROI) using multiscale rotation invariant Gabor filter bank responses, followed by classification using a Radial Basis Function (RBF) kernelized Support Vector Machine (SVM). The proposed study reveals an initial increase in classification accuracy with incrementing image scales followed by a degradation of performance; an indication that adding ever finer scales tends to embed noisy information instead of discriminative texture patterns. Moreover, the performance is consistently better for filter responses from profile faces compared to frontal faces. This is primarily attributed to the inability of Gabor kernels to analyze low spatial frequency components over a small facial surface area. On our dataset comprising 81 malignant, 59 pre-cancerous, and 63 normal subjects, we achieve state-of-the-art accuracy of 85.16% for normal vs. precancerous and 84.72% for normal vs. malignant classification. This sets a benchmark for further investigation of multiscale feature extraction paradigms in the IR spectrum for oral cancer detection.
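
    A Gabor filter bank of this kind can be assembled directly in OpenCV; the sketch below pools each filter response into simple statistics before an RBF SVM. The bank parameters and the mean/std pooling (one plausible way to approximate rotation invariance) are assumptions, not the authors' exact design:

    ```python
    # Multi-scale, multi-orientation Gabor filter-bank energies for a thermal
    # face ROI, classified with an RBF SVM.
    import cv2
    import numpy as np
    from sklearn.svm import SVC

    def gabor_features(roi, scales=(5, 9, 17), n_orient=6):
        feats = []
        for ksize in scales:
            for k in range(n_orient):
                kern = cv2.getGaborKernel((ksize, ksize), sigma=ksize / 3.0,
                                          theta=np.pi * k / n_orient,
                                          lambd=ksize / 2.0, gamma=0.5)
                resp = cv2.filter2D(roi, cv2.CV_32F, kern)
                feats += [resp.mean(), resp.std()]   # pool responses into statistics
        return np.array(feats, dtype=np.float32)

    rng = np.random.default_rng(9)
    X = np.stack([gabor_features(rng.random((64, 64)).astype(np.float32))
                  for _ in range(40)])
    y = rng.integers(0, 2, 40)                        # normal vs. pre-cancerous (assumed)
    SVC(kernel="rbf").fit(X, y)
    ```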

  9. Towards a new taxonomy of idiopathic orofacial pain.

    PubMed

    Woda, Alain; Tubert-Jeannin, Stéphanie; Bouhassira, Didier; Attal, Nadine; Fleiter, Bernard; Goulet, Jean-Paul; Gremeau-Richard, Christelle; Navez, Marie Louise; Picard, Pascale; Pionchon, Paul; Albuisson, Eliane

    2005-08-01

    There is no current consensus on the taxonomy of the different forms of idiopathic orofacial pain (stomatodynia, atypical odontalgia, atypical facial pain, facial arthromyalgia), which are sometimes considered as separate entities and sometimes grouped together. In the present prospective multicentric study, we used a systematic approach to help to place these different painful syndromes in the general classification of chronic facial pain. This multicenter study was carried out on 245 consecutive patients presenting with chronic facial pain (>4 months duration). Each patient was seen by two experts who proposed a diagnosis, administered a 111-item questionnaire and filled out a standardized 68-item examination form. Statistical processing included univariate analysis and several forms of multidimensional analysis. Migraines (n=37), tension-type headache (n=26), post-traumatic neuralgia (n=20) and trigeminal neuralgia (n=13) tended to cluster independently. When signs and symptoms describing topographic features were not included in the list of variables, the idiopathic orofacial pain patients tended to cluster in a single group. Inside this large cluster, only stomatodynia (n=42) emerged as a distinct homogenous subgroup. In contrast, facial arthromyalgia (n=46) and an entity formed with atypical facial pain (n=25) and atypical odontalgia (n=13) could only be individualised by variables reflecting topographical characteristics. These data provide grounds for an evidence-based classification of idiopathic facial pain entities and indicate that the current sub-classification of these syndromes relies primarily on the topography of the symptoms.

  10. Ethnicity identification from face images

    NASA Astrophysics Data System (ADS)

    Lu, Xiaoguang; Jain, Anil K.

    2004-08-01

    Human facial images provide demographic information, such as ethnicity and gender. Conversely, ethnicity and gender also play an important role in face-related applications. The image-based ethnicity identification problem is addressed in a machine learning framework. A Linear Discriminant Analysis (LDA) based scheme is presented for the two-class (Asian vs. non-Asian) ethnicity classification task. Multiscale analysis is applied to the input facial images. An ensemble framework, which integrates the LDA analysis of the input face images at different scales, is proposed to further improve the classification performance. The product rule is used as the combination strategy in the ensemble. Experimental results based on a face database containing 263 subjects (2,630 face images, with an equal balance between the two classes) are promising, indicating that LDA and the proposed ensemble framework have sufficient discriminative power for the ethnicity classification problem. The normalized ethnicity classification scores can be helpful in facial identity recognition: used as a "soft" biometric, face matching scores can be updated based on the output of the ethnicity classification module. In other words, the ethnicity classifier does not have to be perfect to be useful in practice.
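
    The ensemble amounts to training one LDA per image scale and multiplying the per-scale posteriors (the product rule). A sketch with naive subsampling standing in for the paper's multiscale analysis; the scales and image size are assumptions:

    ```python
    # Two-class ethnicity classification: LDA per scale, product-rule fusion.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    rng = np.random.default_rng(10)
    scales = [64, 32, 16]
    faces = rng.random((263, 64, 64))     # assumed aligned grayscale faces
    y = rng.integers(0, 2, 263)           # Asian vs. non-Asian (labels assumed)

    def downsample(imgs, s):
        step = imgs.shape[1] // s         # crude subsampling to an s x s grid
        return imgs[:, ::step, ::step].reshape(len(imgs), -1)

    models = [LinearDiscriminantAnalysis().fit(downsample(faces, s), y) for s in scales]
    probs = np.prod([m.predict_proba(downsample(faces, s))
                     for m, s in zip(models, scales)], axis=0)
    pred = probs.argmax(axis=1)           # product-rule ensemble decision
    ```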

  11. Multivariate Pattern Classification of Facial Expressions Based on Large-Scale Functional Connectivity.

    PubMed

    Liang, Yin; Liu, Baolin; Li, Xianglin; Wang, Peiyuan

    2018-01-01

    How human beings achieve efficient recognition of others' facial expressions is an important question in cognitive neuroscience, and previous studies have identified specific cortical regions that show preferential activation to facial expressions. However, the potential contribution of connectivity patterns to the processing of facial expressions remained unclear. The present functional magnetic resonance imaging (fMRI) study explored whether facial expressions could be decoded from functional connectivity (FC) patterns using multivariate pattern analysis combined with machine learning algorithms (fcMVPA). We employed a block design experiment and collected neural activity while participants viewed facial expressions of six basic emotions (anger, disgust, fear, joy, sadness, and surprise). Both static and dynamic expression stimuli were included in our study. A behavioral experiment after scanning confirmed the validity of the facial stimuli presented during the fMRI experiment in terms of classification accuracies and emotional intensities. We obtained whole-brain FC patterns for each facial expression and found that both static and dynamic facial expressions could be successfully decoded from the FC patterns. Moreover, we identified the expression-discriminative networks for the static and dynamic facial expressions, which span beyond the conventional face-selective areas. Overall, these results reveal that large-scale FC patterns may also contain rich expression information with which to accurately decode facial expressions, suggesting a novel mechanism, involving general interactions between distributed brain regions, that contributes to human facial expression recognition.
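
    In practice, an fcMVPA analysis of this kind vectorizes a per-block ROI correlation matrix and hands it to a cross-validated linear classifier. A sketch with synthetic time series (the atlas size, block count, and classifier are assumptions):

    ```python
    # Decode expression labels from functional-connectivity patterns:
    # per-block ROI correlation matrix -> upper triangle -> linear SVM.
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(11)
    n_blocks, n_rois, n_vols = 60, 90, 40          # assumed atlas of 90 ROIs

    def fc_vector(ts):
        """ts: (n_vols, n_rois) ROI time series -> upper-triangle FC vector."""
        fc = np.corrcoef(ts.T)
        iu = np.triu_indices_from(fc, k=1)
        return fc[iu]

    X = np.stack([fc_vector(rng.standard_normal((n_vols, n_rois)))
                  for _ in range(n_blocks)])
    y = rng.integers(0, 6, n_blocks)               # six basic emotions
    print(cross_val_score(LinearSVC(dual=False), X, y, cv=3).mean())
    ```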

  12. Qualitative and Quantitative Analysis for Facial Complexion in Traditional Chinese Medicine

    PubMed Central

    Zhao, Changbo; Li, Guo-zheng; Li, Fufeng; Wang, Zhi; Liu, Chang

    2014-01-01

    Facial diagnosis is an important and very intuitive diagnostic method in Traditional Chinese Medicine (TCM). However, due to its qualitative and experience-based subjective character, traditional facial diagnosis has certain limitations in clinical medicine. Computerized inspection methods provide classification models to recognize facial complexion (including color and gloss). However, previous works only studied the classification of facial complexion, which we regard as qualitative analysis; the severity or degree of facial complexion, i.e., quantitative analysis, has not yet been reported. This paper aims to provide both qualitative and quantitative analysis of facial complexion. We propose a novel feature representation of facial complexion from the whole face of patients. The features are established with four chromaticity bases split up by luminance distribution in the CIELAB color space. The chromaticity bases are constructed from the facial dominant color using two-level clustering; the optimal luminance distribution is simply implemented with experimental comparisons. The features prove to be more distinctive than previous facial complexion feature representations. Complexion recognition proceeds by training an SVM classifier with the optimal model parameters. In addition, the features are further improved by the weighted fusion of five local regions. Extensive experimental results show that the proposed features achieve the highest facial color recognition performance, with a total accuracy of 86.89%. Furthermore, the proposed recognition framework can analyze both the color and gloss degrees of facial complexion by learning a ranking function. PMID:24967342
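
    The chromaticity-basis construction reduces to clustering skin pixels in a perceptual color space. A sketch that converts pixels to CIELAB and clusters the (a*, b*) plane with k-means, standing in for the paper's two-level clustering (the pixel source and cluster count are assumptions):

    ```python
    # Build chromaticity bases for facial complexion: skin pixels -> CIELAB,
    # then cluster the (a*, b*) chroma plane.
    import numpy as np
    from skimage.color import rgb2lab
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(12)
    skin_rgb = rng.random((2000, 1, 3))            # assumed skin-pixel samples in [0, 1]
    lab = rgb2lab(skin_rgb).reshape(-1, 3)

    chroma = lab[:, 1:]                            # drop L*, keep a* and b*
    bases = KMeans(n_clusters=4, n_init=10, random_state=0).fit(chroma)
    print(bases.cluster_centers_)                  # four chromaticity bases
    ```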

  13. [Study on clinical effectiveness of acupuncture and moxibustion on acute Bell's facial paralysis: randomized controlled clinical observation].

    PubMed

    Wu, Bin; Li, Ning; Liu, Yi; Huang, Chang-qiong; Zhang, Yong-ling

    2006-03-01

    To investigate the adverse effects of acupuncture on prognosis, and the effectiveness of acupuncture combined with far infrared ray in patients with acute Bell's facial paralysis within 48 h of onset. A clinically randomized controlled trial was used, and the patients were divided into 3 groups: group A (early acupuncture), group B (acupuncture combined with far infrared ray) and group C (acupuncture after 7 days). The facial nerve functional classification at the attack, 7 days after the attack and after treatment, the clinically cured rate at a follow-up of 6 months, the average cure time, and the cure time of complete facial paralysis were observed in the 3 groups. There were no significant differences among the 3 groups in the facial nerve functional classification 7 days after the attack, the clinically cured rate at the 6-month follow-up, or the average cure time (P > 0.05), but the cure time of complete facial paralysis in group A and group B was shorter than that in group C (P < 0.05). Patients with acute Bell's facial paralysis can be treated with acupuncture and moxibustion, and traditional moxibustion can be replaced by far infrared ray.

  14. Music-Elicited Emotion Identification Using Optical Flow Analysis of Human Face

    NASA Astrophysics Data System (ADS)

    Kniaz, V. V.; Smirnova, Z. N.

    2015-05-01

    Human emotion identification from image sequences is in high demand nowadays. The range of possible applications varies from the automatic smile shutter function of consumer grade digital cameras to Biofied Building technologies, which enable communication between a building space and its residents. The highly perceptual nature of human emotions leads to complexity in their classification and identification. The main question arises from the subjective quality of the emotional classification of events that elicit human emotions. A variety of methods for the formal classification of emotions were developed in musical psychology. This work is focused on the identification of human emotions evoked by musical pieces using human face tracking and optical flow analysis. A facial feature tracking algorithm used for facial feature speed and position estimation is presented. Facial features were extracted from each image sequence using human face tracking with local binary pattern (LBP) features. Accurate relative speeds of facial features were estimated using optical flow analysis. The obtained relative positions and speeds were used as the output facial emotion vector. The algorithm was tested using original software and recorded image sequences. The proposed technique proved to give robust identification of human emotions elicited by musical pieces. The estimated models could be used for human emotion identification from image sequences in such fields as emotion-based musical backgrounds or mood-dependent radio.
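
    Per-feature speed estimation of this sort is typically done with pyramidal Lucas-Kanade optical flow; a sketch with OpenCV on synthetic frames (the tracked points, window size, and simulated motion are assumptions):

    ```python
    # Track feature points across two frames with pyramidal Lucas-Kanade flow
    # and report their speeds, the quantities fed into an emotion vector.
    import cv2
    import numpy as np

    rng = np.random.default_rng(13)
    prev = (rng.random((240, 320)) * 255).astype(np.uint8)
    curr = np.roll(prev, 2, axis=1)                 # simulated 2-px horizontal motion
    pts = rng.uniform([20, 20], [300, 220], (30, 2)).astype(np.float32).reshape(-1, 1, 2)

    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev, curr, pts, None,
                                                  winSize=(21, 21), maxLevel=3)
    speeds = np.linalg.norm((new_pts - pts).reshape(-1, 2), axis=1)[status.ravel() == 1]
    print(speeds.mean())                            # ~2 px/frame for tracked points
    ```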

  16. Wait, are you sad or angry? Large exposure time differences required for the categorization of facial expressions of emotion

    PubMed Central

    Du, Shichuan; Martinez, Aleix M.

    2013-01-01

    Facial expressions of emotion are essential components of human behavior, yet little is known about the hierarchical organization of their cognitive analysis. We study the minimum exposure time needed to successfully classify the six classical facial expressions of emotion (joy, surprise, sadness, anger, disgust, fear) plus neutral as seen at different image resolutions (240 × 160 to 15 × 10 pixels). Our results suggest a consistent hierarchical analysis of these facial expressions regardless of the resolution of the stimuli. Happiness and surprise can be recognized after very short exposure times (10–20 ms), even at low resolutions. Fear and anger are recognized the slowest (100–250 ms), even in high-resolution images, suggesting a later computation. Sadness and disgust are recognized in between (70–200 ms). The minimum exposure time required for successful classification of each facial expression correlates with the ability of a human subject to identify it correctly at low resolutions. These results suggest a fast, early computation of expressions represented mostly by low spatial frequencies or global configural cues and a later, slower process for those categories requiring a more fine-grained analysis of the image. We also demonstrate that those expressions that are mostly visible in higher-resolution images are not recognized as accurately. We summarize implications for current computational models. PMID:23509409

  17. Luminance sticker based facial expression recognition using discrete wavelet transform for physically disabled persons.

    PubMed

    Nagarajan, R; Hariharan, M; Satiyan, M

    2012-08-01

    Developing tools to assist physically disabled and immobilized people through facial expression is a challenging area of research that has attracted many researchers recently. In this paper, a luminance-sticker-based facial expression recognition method is proposed. Recognition of facial expression is carried out by employing the Discrete Wavelet Transform (DWT) as a feature extraction method. Different wavelet families at different orders (db1 to db20, Coif1 to Coif5 and Sym2 to Sym8) are utilized to investigate their performance in recognizing facial expression and to evaluate their computational time. The standard deviation is computed for the coefficients of the first level of wavelet decomposition for every order of each wavelet family. This standard deviation is used to form a set of feature vectors for classification. In this study, conventional validation and cross-validation are performed to evaluate the efficiency of the suggested feature vectors. Three different classifiers, namely Artificial Neural Network (ANN), k-Nearest Neighbor (kNN) and Linear Discriminant Analysis (LDA), are used to classify a set of eight facial expressions. The experimental results demonstrate that the proposed method gives very promising classification accuracies.
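
    The described pipeline (first-level DWT, standard deviation of the coefficients, then a classifier) can be sketched with PyWavelets. The wavelet order shown and the data variables are placeholders, not the authors' exact setup.

    ```python
    # First-level 2-D wavelet decomposition; std of each subband as features.
    import numpy as np
    import pywt
    from sklearn.neighbors import KNeighborsClassifier

    def dwt_std_features(gray_face, wavelet="db4"):
        cA, (cH, cV, cD) = pywt.dwt2(gray_face, wavelet)  # level-1 decomposition
        return np.array([c.std() for c in (cA, cH, cV, cD)])

    # X = np.stack([dwt_std_features(img) for img in sticker_marked_faces])
    # knn = KNeighborsClassifier(n_neighbors=3).fit(X, expression_labels)
    ```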

  18. Automatic detection of confusion in elderly users of a web-based health instruction video.

    PubMed

    Postma-Nilsenová, Marie; Postma, Eric; Tates, Kiek

    2015-06-01

    Because of cognitive limitations and lower health literacy, many elderly patients have difficulty understanding verbal medical instructions. Automatic detection of facial movements provides a nonintrusive basis for building technological tools supporting confusion detection in healthcare delivery applications on the Internet. Twenty-four elderly participants (70-90 years old) were recorded while watching Web-based health instruction videos involving easy and complex medical terminology. Relevant fragments of the participants' facial expressions were rated by 40 medical students for perceived level of confusion and analyzed with automatic software for facial movement recognition. A computer classification of the automatically detected facial features performed more accurately and with a higher sensitivity than the human observers (automatic detection and classification, 64% accuracy, 0.64 sensitivity; human observers, 41% accuracy, 0.43 sensitivity). A drill-down analysis of cues to confusion indicated the importance of the eye and eyebrow region. Confusion caused by misunderstanding of medical terminology is signaled by facial cues that can be automatically detected with currently available facial expression detection technology. The findings are relevant for the development of Web-based services for healthcare consumers.

  19. Spontaneous Facial Actions Map onto Emotional Experiences in a Non-social Context: Toward a Component-Based Approach

    PubMed Central

    Namba, Shushi; Kabir, Russell S.; Miyatani, Makoto; Nakao, Takashi

    2017-01-01

    While numerous studies have examined the relationships between facial actions and emotions, they have yet to account for the ways that specific spontaneous facial expressions map onto emotional experiences induced without expressive intent. Moreover, previous studies emphasized that a fine-grained investigation of facial components could establish the coherence of facial actions with actual internal states. Therefore, this study aimed to accumulate evidence for the correspondence between spontaneous facial components and emotional experiences. We reanalyzed data from previous research that covertly recorded the spontaneous facial expressions of Japanese participants as they watched film clips designed to evoke four different target emotions: surprise, amusement, disgust, and sadness. The participants rated their emotional experiences via a self-report questionnaire of 16 emotions. The spontaneous facial expressions were coded using the Facial Action Coding System, the gold standard for classifying visible facial movements. We then related each observed facial action to the reported emotional experiences by applying stepwise regression models. The results showed that spontaneous facial components occurred in ways that cohere with their evolutionary functions, based on the rating values of emotional experiences (e.g., the inner brow raiser might be involved in the evaluation of novelty). This study provides new empirical evidence for the correspondence between each spontaneous facial component and first-person internal states of emotion as reported by the expresser. PMID:28522979
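
    A hedged sketch of a stepwise regression of the kind described, here implemented as forward feature selection relating the 16 emotion ratings to the occurrence of one action unit. The stand-in data and the three-predictor cap are our assumptions, not the paper's model specification.

    ```python
    # Forward-selection regression: which emotion scales predict one AU?
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.feature_selection import SequentialFeatureSelector

    ratings = np.random.rand(40, 16)   # stand-in: (participants, 16 emotion ratings)
    au_present = np.random.rand(40)    # stand-in: occurrence of one FACS action unit

    selector = SequentialFeatureSelector(
        LinearRegression(), n_features_to_select=3, direction="forward"
    ).fit(ratings, au_present)
    model = LinearRegression().fit(ratings[:, selector.get_support()], au_present)
    print("selected emotion scales:", np.where(selector.get_support())[0])
    ```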

  20. Efficient quantitative assessment of facial paralysis using iris segmentation and active contour-based key points detection with hybrid classifier.

    PubMed

    Barbosa, Jocelyn; Lee, Kyubum; Lee, Sunwon; Lodhi, Bilal; Cho, Jae-Gu; Seo, Woo-Keun; Kang, Jaewoo

    2016-03-12

    Facial palsy or paralysis (FP) is the loss of voluntary muscle movement on one side of the face, which can be devastating for patients. Traditional assessment methods depend solely on the clinician's judgment and are therefore time-consuming and subjective. Hence, a quantitative assessment system is invaluable for physicians planning the rehabilitation process, yet producing a reliable and robust method is challenging and still underway. We introduce a novel approach to the quantitative assessment of facial paralysis that tackles the classification problem for FP type and degree of severity. Specifically, a novel method of quantitative assessment is presented: an algorithm that extracts the human iris and detects facial landmarks, and a hybrid approach combining rule-based and machine learning algorithms to analyze and prognosticate facial paralysis using the captured images. A method combining an optimized Daugman's algorithm and a Localized Active Contour (LAC) model is proposed to efficiently extract the iris and the facial landmarks or key points. To improve the performance of LAC, appropriate parameters of the initial evolving curve for facial feature segmentation are automatically selected. The symmetry score is measured by the ratio between features extracted from the two sides of the face. Hybrid classifiers (i.e., rule-based with regularized logistic regression) were employed for discriminating healthy and unhealthy subjects, for FP type classification, and for facial paralysis grading based on the House-Brackmann (H-B) scale. Quantitative analysis was performed to evaluate the performance of the proposed approach, and experiments demonstrate its efficiency. Facial movement feature extraction based on iris segmentation and LAC-based key point detection, along with a hybrid classifier, provides a more efficient way of addressing the classification problem of facial palsy type and degree of severity. Combining iris segmentation with a key point-based method has several merits that are essential for this real application. Aside from the facial key points, iris segmentation contributes significantly by describing the changes in iris exposure while facial expressions are performed. It reveals the significant difference between the healthy side and the severe palsy side when raising the eyebrows with both eyes directed upward, and can model the typical changes in the iris region.
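
    The symmetry-score and hybrid rule-based/regularized-logistic-regression design can be illustrated as below. The thresholds, feature names, and the healthy-shortcut rule are assumptions made for exposition, not the paper's calibrated values.

    ```python
    # Symmetry ratio of a feature measured on each half of the face, a rule
    # for obvious cases, and a learned model for the rest.
    from sklearn.linear_model import LogisticRegression

    def symmetry_score(left_val, right_val):
        # Ratio in (0, 1]; 1.0 means perfect left/right symmetry.
        hi, lo = max(left_val, right_val), min(left_val, right_val)
        return lo / hi if hi > 0 else 1.0

    def grade(iris_ratio, eyebrow_ratio, clf):
        # Rule-based shortcut for clearly symmetric subjects (threshold assumed).
        if min(iris_ratio, eyebrow_ratio) > 0.95:
            return "healthy"
        # Otherwise fall back to a learned model for H-B grading.
        return clf.predict([[iris_ratio, eyebrow_ratio]])[0]

    # clf = LogisticRegression(C=1.0).fit(train_ratios, hb_grades)  # regularized LR
    ```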

  1. Real-time face and gesture analysis for human-robot interaction

    NASA Astrophysics Data System (ADS)

    Wallhoff, Frank; Rehrl, Tobias; Mayer, Christoph; Radig, Bernd

    2010-05-01

    Human communication relies on a large number of different mechanisms, such as spoken language, facial expressions, and gestures. Facial expressions and gestures are among the main nonverbal communication mechanisms and convey large amounts of information between human dialog partners. Therefore, to allow for intuitive human-machine interaction, real-time capable processing and recognition of facial expressions and of hand and head gestures are of great importance. We present a system that tackles these challenges. The input features for the dynamic head gestures and facial expressions are obtained from a sophisticated three-dimensional model, which is fitted to the user in a real-time capable manner. From this model, different kinds of information are extracted from the image data and then handed over to a real-time capable data-transferring framework, the so-called Real-Time DataBase (RTDB). In addition to the head- and face-related features, low-level image features of the human hand (optical flow, Hu moments) are stored in the RTDB for the evaluation of hand gestures. In general, the input of a single camera is sufficient for the parallel evaluation of the different gestures and facial expressions. The real-time capable recognition of the dynamic hand and head gestures is performed via different hidden Markov models, which have proven to be a fast and real-time capable classification method. For the facial expressions, classical decision trees or more sophisticated support vector machines are used for classification. The results of the classification processes are again handed over to the RTDB, where other processes (such as a Dialog Management Unit) can easily access them without any blocking effects. In addition, an adjustable amount of history can be stored by the RTDB buffer unit.
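
    A minimal sketch of the HMM-based gesture classification step, assuming one Gaussian HMM per gesture class and maximum-likelihood selection; the hmmlearn API is real, but the data layout is a placeholder.

    ```python
    # One HMM per gesture; a new sequence goes to the highest-likelihood model.
    import numpy as np
    from hmmlearn import hmm

    def train_gesture_models(sequences_by_class, n_states=5):
        models = {}
        for name, seqs in sequences_by_class.items():  # seqs: list of (T_i, D) arrays
            X = np.vstack(seqs)
            lengths = [len(s) for s in seqs]
            m = hmm.GaussianHMM(n_components=n_states, covariance_type="diag")
            models[name] = m.fit(X, lengths)           # hmmlearn fit(X, lengths)
        return models

    def classify(models, seq):
        # seq: (T, D) feature sequence; score() returns the log-likelihood.
        return max(models, key=lambda name: models[name].score(seq))
    ```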

  2. Morphologic evaluation and classification of facial asymmetry using 3-dimensional computed tomography.

    PubMed

    Baek, Chaehwan; Paeng, Jun-Young; Lee, Janice S; Hong, Jongrak

    2012-05-01

    A systematic classification is needed for the diagnosis and surgical treatment of facial asymmetry. The purposes of this study were to analyze the skeletal structures of patients with facial asymmetry and to objectively classify these patients into groups according to these structural characteristics. Patients with facial asymmetry and recent computed tomographic images from 2005 through 2009 were included in this study, which was approved by the institutional review board. Linear measurements, angles, and reference planes on 3-dimensional computed tomograms were obtained, including maxillary (upper midline deviation, maxilla canting, and arch form discrepancy) and mandibular (menton deviation, gonion to midsagittal plane, ramus height, and frontal ramus inclination) measurements. All measurements were analyzed using paired t tests with Bonferroni correction followed by K-means cluster analysis using SPSS 13.0 to determine an objective classification of facial asymmetry in the enrolled patients. Kruskal-Wallis test was performed to verify differences among clustered groups. P < .05 was considered statistically significant. Forty-three patients (18 male, 25 female) were included in the study. They were classified into 4 groups based on cluster analysis. Their mean age was 24.3 ± 4.4 years. Group 1 included subjects (44% of patients) with asymmetry caused by a shift or lateralization of the mandibular body. Group 2 included subjects (39%) with a significant difference between the left and right ramus height with menton deviation to the short side. Group 3 included subjects (12%) with atypical asymmetry, including deviation of the menton to the short side, prominence of the angle/gonion on the larger side, and reverse maxillary canting. Group 4 included subjects (5%) with severe maxillary canting, ramus height differences, and menton deviation to the short side. In this study, patients with asymmetry were classified into 4 statistically distinct groups according to their anatomic features. This diagnostic classification method will assist in treatment planning for patients with facial asymmetry and may be used to explore the etiology of these variants of facial asymmetry. Copyright © 2012 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.
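
    The clustering step translates directly into code; the sketch below is an equivalent-in-spirit stand-in for the SPSS K-means analysis, with placeholder measurement data.

    ```python
    # Standardize craniofacial measurements and partition patients with K-means.
    import numpy as np
    from sklearn.preprocessing import StandardScaler
    from sklearn.cluster import KMeans

    # rows: patients; columns: menton deviation, ramus height difference,
    # maxillary canting, ... (placeholder data, 43 patients x 7 measures)
    measurements = np.random.rand(43, 7)

    X = StandardScaler().fit_transform(measurements)
    groups = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
    print(np.bincount(groups))  # size of each asymmetry group
    ```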

  3. Classifying Facial Actions

    PubMed Central

    Donato, Gianluca; Bartlett, Marian Stewart; Hager, Joseph C.; Ekman, Paul; Sejnowski, Terrence J.

    2010-01-01

    The Facial Action Coding System (FACS) [23] is an objective method for quantifying facial movement in terms of component actions. This system is widely used in behavioral investigations of emotion, cognitive processes, and social interaction. The coding is presently performed by highly trained human experts. This paper explores and compares techniques for automatically recognizing facial actions in sequences of images. These techniques include analysis of facial motion through estimation of optical flow; holistic spatial analysis, such as principal component analysis, independent component analysis, local feature analysis, and linear discriminant analysis; and methods based on the outputs of local filters, such as Gabor wavelet representations and local principal components. Performance of these systems is compared to naive and expert human subjects. Best performances were obtained using the Gabor wavelet representation and the independent component representation, both of which achieved 96 percent accuracy for classifying 12 facial actions of the upper and lower face. The results provide converging evidence for the importance of using local filters, high spatial frequencies, and statistical independence for classifying facial actions. PMID:21188284
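
    The best-performing Gabor wavelet representation can be approximated with a simple filter bank; the parameters below are illustrative choices rather than the paper's settings.

    ```python
    # Bank of Gabor filters over scales and orientations; pooled magnitudes
    # serve as a feature vector for classifying facial actions.
    import cv2
    import numpy as np

    def gabor_features(gray_face, ksize=21, scales=(4, 8, 16), n_orient=8):
        feats = []
        for lam in scales:
            for k in range(n_orient):
                theta = k * np.pi / n_orient
                kern = cv2.getGaborKernel((ksize, ksize), sigma=lam / 2.0,
                                          theta=theta, lambd=lam, gamma=0.5)
                resp = cv2.filter2D(gray_face.astype(np.float32), cv2.CV_32F, kern)
                feats.append(np.abs(resp).mean())  # pooled filter magnitude
        return np.asarray(feats)
    ```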

  4. Face biometrics with renewable templates

    NASA Astrophysics Data System (ADS)

    van der Veen, Michiel; Kevenaar, Tom; Schrijen, Geert-Jan; Akkermans, Ton H.; Zuo, Fei

    2006-02-01

    In recent literature, privacy protection technologies for biometric templates were proposed. Among these is the so-called helper-data system (HDS) based on reliable component selection. In this paper we integrate this approach with face biometrics such that we achieve a system in which the templates are privacy protected, and multiple templates can be derived from the same facial image for the purpose of template renewability. Extracting binary feature vectors forms an essential step in this process. Using the FERET and Caltech databases, we show that this quantization step does not significantly degrade the classification performance compared to, for example, traditional correlation-based classifiers. The binary feature vectors are integrated in the HDS leading to a privacy protected facial recognition algorithm with acceptable FAR and FRR, provided that the intra-class variation is sufficiently small. This suggests that a controlled enrollment procedure with a sufficient number of enrollment measurements is required.

  5. A study of patient facial expressivity in relation to orthodontic/surgical treatment.

    PubMed

    Nafziger, Y J

    1994-09-01

    A dynamic analysis of the faces of patients seeking an aesthetic restoration of facial aberrations with orthognathic treatment requires, besides the routine static study (records, study models, photographs, and cephalometric tracings), the study of their facial expressions. To determine a classification method for the units of expressive facial behavior, the mobility of the face was studied with the aid of the facial action coding system (FACS) created by Ekman and Friesen. Using video recordings of faces and photographic images taken from those recordings, a technique of facial analysis structured on the visual observation of the anatomic basis of movement was adapted. The technique is based on defining individual facial expressions and then codifying them through minimal, anatomic action units, which combine to form facial expressions. With the help of FACS, the facial expressions of 18 patients before and after orthognathic surgery, and of six control subjects without dentofacial deformation, were studied. A total of 6278 facial expressions were registered, from which 18,844 action units were defined. A classification of the facial expressions made by subject groups and repeated in quantified time frames allowed the establishment of "rules" or "norms" of expression, enabling comparisons of facial expressiveness between patients and control subjects. This study indicates that the facial expressions of the patients were more similar to those of the controls after orthognathic surgery. It was possible to distinguish changes in facial expressivity in patients after dentofacial surgery; the type and degree of change depended on the facial structure before surgery. The changes noted tended toward a functioning identical to that of subjects who do not suffer from dysmorphosis and toward greater lip competence, particularly in the function of the orbicular muscle of the lips, with reduced compensatory activity of the lower lip and chin. The results of this study are supported by clinical observations and suggest that the FACS technique can provide a coding system for the study of facial expression.

  6. Fifteen years of research on oral-facial-digital syndromes: from 1 to 16 causal genes.

    PubMed

    Bruel, Ange-Line; Franco, Brunella; Duffourd, Yannis; Thevenon, Julien; Jego, Laurence; Lopez, Estelle; Deleuze, Jean-François; Doummar, Diane; Giles, Rachel H; Johnson, Colin A; Huynen, Martijn A; Chevrier, Véronique; Burglen, Lydie; Morleo, Manuela; Desguerres, Isabelle; Pierquin, Geneviève; Doray, Bérénice; Gilbert-Dussardier, Brigitte; Reversade, Bruno; Steichen-Gersdorf, Elisabeth; Baumann, Clarisse; Panigrahi, Inusha; Fargeot-Espaliat, Anne; Dieux, Anne; David, Albert; Goldenberg, Alice; Bongers, Ernie; Gaillard, Dominique; Argente, Jesús; Aral, Bernard; Gigot, Nadège; St-Onge, Judith; Birnbaum, Daniel; Phadke, Shubha R; Cormier-Daire, Valérie; Eguether, Thibaut; Pazour, Gregory J; Herranz-Pérez, Vicente; Goldstein, Jaclyn S; Pasquier, Laurent; Loget, Philippe; Saunier, Sophie; Mégarbané, André; Rosnet, Olivier; Leroux, Michel R; Wallingford, John B; Blacque, Oliver E; Nachury, Maxence V; Attie-Bitach, Tania; Rivière, Jean-Baptiste; Faivre, Laurence; Thauvin-Robinet, Christel

    2017-06-01

    Oral-facial-digital syndromes (OFDS) gather rare genetic disorders characterised by facial, oral and digital abnormalities associated with a wide range of additional features (polycystic kidney disease, cerebral malformations and several others) to delineate a growing list of OFDS subtypes. The most frequent, OFD type I, is caused by a heterozygous mutation in the OFD1 gene encoding a centrosomal protein. The wide clinical heterogeneity of OFDS suggests the involvement of other ciliary genes. For 15 years, we have aimed to identify the molecular bases of OFDS. This effort has been greatly helped by the recent development of whole-exome sequencing (WES). Here, we present all our published and unpublished results for WES in 24 cases with OFDS. We identified causal variants in five new genes (C2CD3, TMEM107, INTU, KIAA0753 and IFT57) and related the clinical spectrum of four genes in other ciliopathies (C5orf42, TMEM138, TMEM231 and WDPCP) to OFDS. Mutations were also detected in two genes previously implicated in OFDS. Functional studies revealed the involvement of centriole elongation, transition zone and intraflagellar transport defects in OFDS, thus characterising three ciliary protein modules: the complex KIAA0753-FOPNL-OFD1, a regulator of centriole elongation; the Meckel-Gruber syndrome module, a major component of the transition zone; and the CPLANE complex necessary for IFT-A assembly. OFDS now appear to be a distinct subgroup of ciliopathies with wide heterogeneity, which makes the initial classification obsolete. A clinical classification restricted to the three frequent/well-delineated subtypes could be proposed, and for patients who do not fit one of these three main subtypes, a further classification could be based on the genotype. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  7. Comprehensive evaluation of functional and anatomical disorders of the patients with distal occlusion and accompanying obstructive sleep apnea syndrome

    NASA Astrophysics Data System (ADS)

    Nabiev, F. H.; Dobrodeev, A. S.; Libin, P. V.; Kotov, I. I.; Ovsyannikov, A. G.

    2015-11-01

    The paper defines a therapeutic and rehabilitation approach for patients with Angle Class II dento-facial anomalies accompanied by obstructive sleep apnea (OSA). The proposed comprehensive approach to the diagnostics and treatment of patients with posterior occlusion accompanied by OSA allows for an objective evaluation of the severity of the dento-facial anomaly and of the accompanying respiratory disorders in the nasal and oral pharynx, so that the pathophysiological mechanisms of OSA can be identified and an optimal plan of surgical procedures developed. The proposed comprehensive approach to the diagnostics and treatment of patients with Angle Class II dento-facial anomalies provides high functional and aesthetic results.

  8. What does magnetic resonance imaging add to the prenatal ultrasound diagnosis of facial clefts?

    PubMed

    Mailáth-Pokorny, M; Worda, C; Krampl-Bettelheim, E; Watzinger, F; Brugger, P C; Prayer, D

    2010-10-01

    Ultrasound is the modality of choice for prenatal detection of cleft lip and palate. Because its accuracy in detecting facial clefts, especially isolated clefts of the secondary palate, can be limited, magnetic resonance imaging (MRI) is used as an additional method for assessing the fetus. The aim of this study was to investigate the role of fetal MRI in the prenatal diagnosis of facial clefts. Thirty-four pregnant women with a mean gestational age of 26 (range, 19-34) weeks underwent in utero MRI, after ultrasound examination had identified either a facial cleft (n = 29) or another suspected malformation (micrognathia (n = 1), cardiac defect (n = 1), brain anomaly (n = 2) or diaphragmatic hernia (n = 1)). The facial cleft was classified postnatally and the diagnoses were compared with the previous ultrasound findings. There were 11 (32.4%) cases with cleft of the primary palate alone, 20 (58.8%) clefts of the primary and secondary palate and three (8.8%) isolated clefts of the secondary palate. In all cases the primary and secondary palate were visualized successfully with MRI. Ultrasound imaging could not detect five (14.7%) facial clefts and misclassified 15 (44.1%) facial clefts. The MRI classification correlated with the postnatal/postmortem diagnosis. In our hands MRI allows detailed prenatal evaluation of the primary and secondary palate. By demonstrating involvement of the palate, MRI provides better detection and classification of facial clefts than does ultrasound alone. Copyright © 2010 ISUOG. Published by John Wiley & Sons, Ltd.

  9. Image Classification for Web Genre Identification

    DTIC Science & Technology

    2012-01-01

    … recognition and landscape detection using the computer vision toolkit OpenCV. For facial recognition, we researched the possibilities of using the … method for connecting these names with a face/personal photo and logo, respectively. [2] METHODOLOGY: For this project, we focused primarily on facial …

  10. Multiple mechanisms in the perception of face gender: Effect of sex-irrelevant features.

    PubMed

    Komori, Masashi; Kawamura, Satoru; Ishihara, Shigekazu

    2011-06-01

    Effects of sex-relevant and sex-irrelevant facial features on the evaluation of facial gender were investigated. Participants rated masculinity of 48 male facial photographs and femininity of 48 female facial photographs. Eighty feature points were measured on each of the facial photographs. Using a generalized Procrustes analysis, facial shapes were converted into multidimensional vectors, with the average face as a starting point. Each vector was decomposed into a sex-relevant subvector and a sex-irrelevant subvector which were, respectively, parallel and orthogonal to the main male-female axis. Principal components analysis (PCA) was performed on the sex-irrelevant subvectors. One principal component was negatively correlated with both perceived masculinity and femininity, and another was correlated only with femininity, though both components were orthogonal to the male-female dimension (and thus by definition sex-irrelevant). These results indicate that evaluation of facial gender depends on sex-irrelevant as well as sex-relevant facial features.
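
    The decomposition of each shape vector into sex-relevant and sex-irrelevant subvectors is a plain orthogonal projection, sketched here with NumPy; the construction of the male-female axis in the comment is our assumption of the usual convention.

    ```python
    # Split a shape vector (relative to the average face) into a component
    # parallel to the male-female axis and an orthogonal residual.
    import numpy as np

    def decompose(shape_vec, mf_axis):
        axis = mf_axis / np.linalg.norm(mf_axis)
        sex_relevant = shape_vec.dot(axis) * axis   # projection on the axis
        sex_irrelevant = shape_vec - sex_relevant   # orthogonal complement
        return sex_relevant, sex_irrelevant

    # mf_axis: mean(male shapes) - mean(female shapes), in Procrustes coordinates.
    # PCA would then be run on the stacked sex_irrelevant vectors.
    ```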

  11. Oxytocin improves facial emotion recognition in young adults with antisocial personality disorder.

    PubMed

    Timmermann, Marion; Jeung, Haang; Schmitt, Ruth; Boll, Sabrina; Freitag, Christine M; Bertsch, Katja; Herpertz, Sabine C

    2017-11-01

    Deficient facial emotion recognition has been suggested to underlie aggression in individuals with antisocial personality disorder (ASPD). As the neuropeptide oxytocin (OT) has been shown to improve facial emotion recognition, it might also exert beneficial effects in these individuals, whose behavior causes so much harm to society. In a double-blind, randomized, placebo-controlled crossover trial, 22 individuals with ASPD and 29 healthy control (HC) subjects (matched for age, sex, intelligence, and education) were intranasally administered either OT (24 IU) or a placebo 45 min before participating in an emotion classification paradigm with fearful, angry, and happy faces. We assessed the number of correct classifications and reaction times as indicators of emotion recognition ability. Significant group × substance × emotion interactions were found in correct classifications and reaction times. Compared to HC, individuals with ASPD showed deficits in recognizing fearful and happy faces; these group differences were no longer observable under OT. Additionally, reaction times for angry faces differed significantly between the ASPD and HC groups in the placebo condition. This effect was mainly driven by longer reaction times in HC subjects after placebo administration compared to OT administration, whereas individuals with ASPD showed, descriptively, the opposite response pattern. Our data indicate an improvement in the recognition of fearful and happy facial expressions with OT in young adults with ASPD. The increased recognition of facial fear is of particular importance, since the correct perception of distress signals in others is thought to inhibit aggression. Beneficial effects of OT might be further mediated by improved recognition of facial happiness, probably reflecting increased social reward responsiveness. Copyright © 2017. Published by Elsevier Ltd.

  12. An integrated telemedicine platform for the assessment of affective physiological states

    PubMed Central

    Katsis, Christos D; Ganiatsas, George; Fotiadis, Dimitrios I

    2006-01-01

    AUBADE is an integrated platform built for the affective assessment of individuals. The system evaluates emotional state by classifying vectors of features extracted from the facial electromyogram, respiration, electrodermal activity and electrocardiogram. The AUBADE system consists of: (a) a multisensorial wearable; (b) a data acquisition and wireless communication module; (c) a feature extraction module; (d) a 3D facial animation module used to project the obtained data through a generic 3D face model, so that the end user can view the facial expression of the subject in real time; (e) an intelligent emotion recognition module; and (f) the AUBADE databases, where the acquired signals along with the subject's animation videos are stored. The system is designed to be applied to human subjects operating under extreme stress conditions, in particular car racing drivers, and also to patients suffering from neurological and psychological disorders. AUBADE's classification accuracy into five predefined emotional classes (high stress, low stress, disappointment, euphoria and neutral face) is 86.0%. The pilot system applications and components are being tested and evaluated on Maserati's car racing drivers. PMID:16879757

  13. Robust representation and recognition of facial emotions using extreme sparse learning.

    PubMed

    Shojaeilangari, Seyedehsamaneh; Yau, Wei-Yun; Nandakumar, Karthik; Li, Jun; Teoh, Eam Khwang

    2015-07-01

    Recognition of natural emotions from human faces is an interesting topic with a wide range of potential applications, such as human-computer interaction, automated tutoring systems, image and video retrieval, smart environments, and driver warning systems. Traditionally, facial emotion recognition systems have been evaluated on laboratory controlled data, which is not representative of the environment faced in real-world applications. To robustly recognize the facial emotions in real-world natural situations, this paper proposes an approach called extreme sparse learning, which has the ability to jointly learn a dictionary (set of basis) and a nonlinear classification model. The proposed approach combines the discriminative power of extreme learning machine with the reconstruction property of sparse representation to enable accurate classification when presented with noisy signals and imperfect data recorded in natural settings. In addition, this paper presents a new local spatio-temporal descriptor that is distinctive and pose-invariant. The proposed framework is able to achieve the state-of-the-art recognition accuracy on both acted and spontaneous facial emotion databases.

  14. An Automatic Diagnosis Method of Facial Acne Vulgaris Based on Convolutional Neural Network.

    PubMed

    Shen, Xiaolei; Zhang, Jiachi; Yan, Chenjun; Zhou, Hong

    2018-04-11

    In this paper, we present a new automatic diagnosis method for facial acne vulgaris based on convolutional neural networks (CNNs), designed to overcome a shortcoming of previous methods: their inability to classify enough types of acne vulgaris. The core of our method is to extract image features with CNNs and achieve classification with dedicated classifiers. A binary classifier for skin versus non-skin is used to detect the skin area, and a seven-class classifier is used to distinguish facial acne vulgaris types and healthy skin. In the experiments, we compare the effectiveness of our CNN with the VGG16 neural network pre-trained on the ImageNet data set. We use a ROC curve to evaluate the performance of the binary classifier and a normalized confusion matrix to evaluate the performance of the seven-class classifier. The results of our experiments show that the pre-trained VGG16 neural network is effective in extracting features from facial acne vulgaris images and that these features are very useful for the downstream classifiers. Finally, we apply both classifiers, built on the pre-trained VGG16 network, to assist doctors in facial acne vulgaris diagnosis.
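
    A minimal sketch of the transfer-learning setup named in the abstract: an ImageNet-pretrained VGG16 as a frozen feature extractor feeding a small classification head (here seven outputs for the acne/healthy-skin classes). The head architecture and layer sizes are assumptions, not the authors' network.

    ```python
    # Frozen VGG16 backbone plus a small seven-class head in Keras.
    from tensorflow.keras.applications import VGG16
    from tensorflow.keras import layers, models

    base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
    base.trainable = False  # keep the pretrained convolutional features fixed

    model = models.Sequential([
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dense(256, activation="relu"),
        layers.Dense(7, activation="softmax"),  # seven-class acne/healthy head
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    ```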

  15. A New Method of Facial Expression Recognition Based on SPE Plus SVM

    NASA Astrophysics Data System (ADS)

    Ying, Zilu; Huang, Mingwei; Wang, Zhen; Wang, Zhewei

    A novel method of facial expression recognition (FER) is presented, which uses stochastic proximity embedding (SPE) for data dimension reduction and a support vector machine (SVM) for expression classification. The proposed algorithm is applied to the Japanese Female Facial Expression (JAFFE) database, where it obtains better performance than traditional algorithms such as PCA and LDA. The results further prove the effectiveness of the proposed algorithm.

  16. Classification of Computer-Aided Design-Computer-Aided Manufacturing Applications for the Reconstruction of Cranio-Maxillo-Facial Defects.

    PubMed

    Wauters, Lauri D J; Miguel-Moragas, Joan San; Mommaerts, Maurice Y

    2015-11-01

    To gain insight into the methodology of different computer-aided design-computer-aided manufacturing (CAD-CAM) applications for the reconstruction of cranio-maxillo-facial (CMF) defects. We reviewed and analyzed the available literature pertaining to CAD-CAM for use in CMF reconstruction. We proposed a classification system of the techniques of implant and cutting, drilling, and/or guiding template design and manufacturing. The system consisted of 4 classes (I-IV). These classes combine techniques used for both the implant and template to most accurately describe the methodology used. Our classification system can be widely applied. It should facilitate communication and immediate understanding of the methodology of CAD-CAM applications for the reconstruction of CMF defects.

  17. Hereditary family signature of facial expression

    PubMed Central

    Peleg, Gili; Katzir, Gadi; Peleg, Ofer; Kamara, Michal; Brodsky, Leonid; Hel-Or, Hagit; Keren, Daniel; Nevo, Eviatar

    2006-01-01

    Although facial expressions of emotion are universal, individual differences create a facial expression “signature” for each person; but is there a unique family facial expression signature? Only a few family studies on the heredity of facial expressions have been performed, none of which compared the gestalt of movements in various emotional states; they compared only a few movements in one or two emotional states. No studies, to our knowledge, have compared the movements of congenitally blind subjects with those of their relatives. Using two types of analyses, we show a correlation between the movements of congenitally blind subjects and those of their relatives in think-concentrate, sadness, anger, disgust, joy, and surprise, and provide evidence for a unique family facial expression signature. In the “in-out family test” analysis, a particular movement was compared each time across subjects. Results show that the frequency of occurrence of a movement of a congenitally blind subject within his family is significantly higher than outside of his family in think-concentrate, sadness, and anger. In the “classification test” analysis, in which congenitally blind subjects were classified to their families according to the gestalt of movements, results show 80% correct classification over the entire interview and 75% in anger. Analysis of movement frequencies in anger revealed a correlation between the movement frequencies of congenitally blind individuals and those of their relatives. This study anticipates discovering genes that influence facial expressions, understanding their evolutionary significance, and elucidating repair mechanisms for syndromes lacking facial expression, such as autism. PMID:17043232

  18. Neutral face classification using personalized appearance models for fast and robust emotion detection.

    PubMed

    Chiranjeevi, Pojala; Gopalakrishnan, Viswanath; Moogi, Pratibha

    2015-09-01

    Facial expression recognition is one of the open problems in computer vision. Robust neutral face recognition in real time is a major challenge for various supervised learning-based facial expression recognition methods, because supervised methods cannot accommodate all the appearance variability across faces with respect to race, pose, lighting, facial biases, and so on, in a limited amount of training data. Moreover, processing each and every frame to classify emotions is not required, as the user stays neutral for the majority of the time in usual applications like video chat or photo album/web browsing. Detecting the neutral state at an early stage and bypassing those frames in emotion classification saves computational power. In this paper, we propose a light-weight neutral versus emotion classification engine, which acts as a pre-processor to traditional supervised emotion classification approaches. It dynamically learns the neutral appearance at key emotion (KE) points using a statistical texture model constructed from a set of reference neutral frames for each user. The proposed method is made robust to various types of user head motion by accounting for affine distortions based on the statistical texture model. Robustness to dynamic shifts of the KE points is achieved by evaluating similarities on a subset of neighborhood patches around each KE point, using prior information about the directionality of the specific facial action units acting on the respective KE point. As a result, the proposed method improves emotion recognition (ER) accuracy and simultaneously reduces the computational complexity of the ER system, as validated on multiple databases.

  19. Face-selective regions differ in their ability to classify facial expressions

    PubMed Central

    Zhang, Hui; Japee, Shruti; Nolan, Rachel; Chu, Carlton; Liu, Ning; Ungerleider, Leslie G

    2016-01-01

    Recognition of facial expressions is crucial for effective social interactions. Yet, the extent to which the various face-selective regions in the human brain classify different facial expressions remains unclear. We used functional magnetic resonance imaging (fMRI) and support vector machine pattern classification analysis to determine how well face-selective brain regions are able to decode different categories of facial expression. Subjects participated in a slow event-related fMRI experiment in which they were shown 32 face pictures, portraying four different expressions: neutral, fearful, angry, and happy and belonging to eight different identities. Our results showed that only the amygdala and the posterior superior temporal sulcus (STS) were able to accurately discriminate between these expressions, albeit in different ways: The amygdala discriminated fearful faces from non-fearful faces, whereas STS discriminated neutral from emotional (fearful, angry and happy) faces. In contrast to these findings on the classification of emotional expression, only the fusiform face area (FFA) and anterior inferior temporal cortex (aIT) could discriminate among the various facial identities. Further, the amygdala and STS were better than FFA and aIT at classifying expression, while FFA and aIT were better than the amygdala and STS at classifying identity. Taken together, our findings indicate that the decoding of facial emotion and facial identity occurs in different neural substrates: the amygdala and STS for the former and FFA and aIT for the latter. PMID:26826513

  1. Individual differences in the recognition of facial expressions: an event-related potentials study.

    PubMed

    Tamamiya, Yoshiyuki; Hiraki, Kazuo

    2013-01-01

    Previous studies have shown that early posterior components of event-related potentials (ERPs) are modulated by facial expressions. The goal of the current study was to investigate individual differences in the recognition of facial expressions by examining the relationship between ERP components and the discrimination of facial expressions. Pictures of 3 facial expressions (angry, happy, and neutral) were presented to 36 young adults during ERP recording. Participants were asked to respond with a button press as soon as they recognized the expression depicted. A multiple regression analysis, with ERP components as predictor variables and hits and reaction times in response to the facial expressions as dependent variables, was conducted. The N170 amplitudes significantly predicted accuracy for angry and happy expressions, and the N170 latencies predicted accuracy for neutral expressions. The P2 amplitudes significantly predicted reaction time; the P2 latencies predicted reaction times only for neutral faces. These results suggest that individual differences in the recognition of facial expressions emerge from early components in visual processing.

  2. Static facial expression recognition with convolution neural networks

    NASA Astrophysics Data System (ADS)

    Zhang, Feng; Chen, Zhong; Ouyang, Chao; Zhang, Yifei

    2018-03-01

    Facial expression recognition is a currently active research topic in the fields of computer vision, pattern recognition and artificial intelligence. In this paper, we develop a convolutional neural network (CNN) for classifying human emotions from static facial expressions into one of seven facial emotion categories. We pre-train our CNN model on the combined FER2013 dataset, formed from its training, validation and test sets, and fine-tune it on the extended Cohn-Kanade database. In order to reduce overfitting, we utilize different techniques including dropout and batch normalization in addition to data augmentation. According to the experimental results, our CNN model has excellent classification performance and robustness for facial expression recognition.
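
    A toy CNN of the kind described, with batch normalization, dropout, and seven emotion outputs for 48 × 48 FER2013-style inputs, might look like the following Keras sketch; the exact architecture is an assumption, not the authors' model.

    ```python
    # Small CNN with batch normalization and dropout for seven emotions.
    from tensorflow.keras import layers, models

    model = models.Sequential([
        layers.Input(shape=(48, 48, 1)),
        layers.Conv2D(32, 3, padding="same", activation="relu"),
        layers.BatchNormalization(),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, padding="same", activation="relu"),
        layers.BatchNormalization(),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(7, activation="softmax"),  # seven emotion categories
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    ```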

  3. Art critic: Multisignal vision and speech interaction system in a gaming context.

    PubMed

    Reale, Michael J; Liu, Peng; Yin, Lijun; Canavan, Shaun

    2013-12-01

    True immersion of a player within a game can only occur when the simulated world looks and behaves as close to reality as possible. This implies that the game must correctly read and understand, among other things, the player's focus, attitude toward the objects/persons in focus, gestures, and speech. In this paper, we propose a novel system that integrates eye gaze estimation, head pose estimation, facial expression recognition, speech recognition, and text-to-speech components for use in real-time games. Both the eye gaze and head pose components utilize underlying 3-D models, and our novel head pose estimation algorithm uniquely combines scene flow with a generic head model. The facial expression recognition module uses the local binary patterns with three orthogonal planes approach on the 2-D shape index domain rather than the pixel domain, resulting in improved classification. Our system has also been extended to use a pan-tilt-zoom camera driven by the Kinect, allowing us to track a moving player. A test game, Art Critic, is also presented, which not only demonstrates the utility of our system but also provides a template for player/non-player character (NPC) interaction in a gaming context. The player alters his/her view of the 3-D world using head pose, looks at paintings/NPCs using eye gaze, and makes evaluations through facial expression and speech. The NPC artist responds with facial expression and synthetic speech based on its personality. Both qualitative and quantitative evaluations of the system are performed to illustrate its effectiveness.

  4. Identification and Classification of Facial Familiarity in Directed Lying: An ERP Study

    PubMed Central

    Sun, Delin; Chan, Chetwyn C. H.; Lee, Tatia M. C.

    2012-01-01

    Recognizing familiar faces is essential to social functioning, but little is known about how people identify human faces and classify them in terms of familiarity. Face identification involves discriminating familiar faces from unfamiliar faces, whereas face classification involves making an intentional decision to classify faces as “familiar” or “unfamiliar.” This study used a directed-lying task to explore the differentiation between identification and classification processes involved in the recognition of familiar faces. To explore this issue, the participants in this study were shown familiar and unfamiliar faces. They responded to these faces (i.e., as familiar or unfamiliar) in accordance with the instructions they were given (i.e., to lie or to tell the truth) while their EEG activity was recorded. Familiar faces (regardless of lying vs. truth) elicited significantly less negative-going N400f in the middle and right parietal and temporal regions than unfamiliar faces. Regardless of their actual familiarity, the faces that the participants classified as “familiar” elicited more negative-going N400f in the central and right temporal regions than those classified as “unfamiliar.” The P600 was related primarily with the facial identification process. Familiar faces (regardless of lying vs. truth) elicited more positive-going P600f in the middle parietal and middle occipital regions. The results suggest that N400f and P600f play different roles in the processes involved in facial recognition. The N400f appears to be associated with both the identification (judgment of familiarity) and classification of faces, while it is likely that the P600f is only associated with the identification process (recollection of facial information). Future studies should use different experimental paradigms to validate the generalizability of the results of this study. PMID:22363597

  5. Branches of the Facial Artery.

    PubMed

    Hwang, Kun; Lee, Geun In; Park, Hye Jin

    2015-06-01

    The aim of this study is to systematically review the names of the branches of the facial artery, review the classification of its branching patterns, and clarify the presence percentage of each branch. In a PubMed search, the search terms "facial" AND "artery" AND "classification OR variant OR pattern" were used. The IBM SPSS Statistics 20 system was used for statistical analysis. Among the 500 titles, 18 articles were selected and reviewed systematically. Most of the articles focused on "classification" according to the "terminal branch." Several authors classified the facial artery according to its terminal branches; most of them, however, did not define "terminal branch," and there was confusion within the classifications. When the inferior labial artery was absent, 3 different types were used. The "alar branch" or "nasal branch" was used instead of the "lateral nasal branch." The angular branch was used to refer to several different branches. The presence percentage of each branch named in Gray's Anatomy (premasseteric, inferior labial, superior labial, lateral nasal, and angular) varied, and no branch was present with 100% consistency. The superior labial branch was most frequently cited (95.7%, 382 arteries in 399 hemifaces). The angular branch (53.9%, 219 arteries in 406 hemifaces) and the premasseteric branch (53.8%, 43 arteries in 80 hemifaces) were least frequently cited. There were significant differences among the 5 branches (P < 0.05), except between the angular branch and the premasseteric branch and between the superior labial branch and the inferior labial branch. The authors believe that identifying the presence percentage of each branch will be helpful for surgical procedures.

  6. Single trial classification for the categories of perceived emotional facial expressions: an event-related fMRI study

    NASA Astrophysics Data System (ADS)

    Song, Sutao; Huang, Yuxia; Long, Zhiying; Zhang, Jiacai; Chen, Gongxiang; Wang, Shuqing

    2016-03-01

    Recently, several studies have successfully applied multivariate pattern analysis methods to predict categories of emotions. These studies have mainly focused on self-experienced emotions, such as the emotional states elicited by music or movies. In fact, most of our social interactions involve the perception of emotional information from the expressions of other people, and recognizing the emotional facial expressions of other people quickly is an important basic human skill. In this study, we aimed to determine the discriminability of perceived emotional facial expressions. In a rapid event-related fMRI design, subjects were instructed to classify four categories of facial expressions (happy, disgust, angry and neutral) by pressing different buttons, with each facial expression stimulus lasting 2 s. All participants performed 5 fMRI runs. One multivariate pattern analysis method, the support vector machine, was trained to predict the categories of facial expressions. For feature selection, ninety masks defined from the anatomical automatic labeling (AAL) atlas were first generated and each was treated as input to the classifier; then, the most stable AAL areas were selected according to prediction accuracies and comprised the final feature sets. Results showed that for the 6 pairwise classification conditions, accuracy, sensitivity and specificity were all above chance, with happy vs. neutral and angry vs. disgust achieving the lowest results. These results suggest that specific neural signatures of perceived emotional facial expressions may exist, and that happy vs. neutral and angry vs. disgust may be represented more similarly in the brain.
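
    The mask-wise feature-selection-and-classification loop can be sketched as below, assuming pre-extracted voxel patterns; the function and variable names are placeholders, not the authors' code.

    ```python
    # For each anatomical mask, cross-validate a linear SVM on the ROI's voxel
    # pattern for one pair of expressions, then rank ROIs by stability.
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    def roi_pairwise_accuracy(patterns, labels, roi_masks):
        # patterns: (n_trials, n_voxels) array; labels: binary, e.g. happy vs neutral
        scores = {}
        for name, mask in roi_masks.items():   # mask: boolean voxel selector
            X = patterns[:, mask]
            clf = SVC(kernel="linear")
            scores[name] = cross_val_score(clf, X, labels, cv=5).mean()
        return dict(sorted(scores.items(), key=lambda kv: -kv[1]))
    ```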

  7. Facial clefts and facial dysplasia: revisiting the classification.

    PubMed

    Mazzola, Riccardo F; Mazzola, Isabella C

    2014-01-01

    Most craniofacial malformations are identified by their appearance. The majority of classification systems are mainly clinical or anatomical, are not related to the different levels of development of the malformation, and usually do not take the underlying pathology into consideration. In 1976, Tessier first emphasized the relationship between the soft tissues and the underlying bone, stating that "a fissure of the soft tissue corresponds, as a general rule, with a cleft of the bony structure." He introduced a cleft numbering system around the orbit from 0 to 14, depending on the relationship to the zero line (i.e., the vertical midline cleft of the face). The classification, easy to understand, became widely accepted because the recording of malformations was simple and communication between observers was facilitated. It represented a great breakthrough in identifying craniofacial malformations, which he named clefts. In the present paper, the embryologically based classification of craniofacial malformations that we proposed in 1983 and 1990 is revisited. Its aim was to clarify some unanswered questions regarding apparently atypical or bizarre anomalies and to establish, as far as possible, the moment when the causative event occurred. In our opinion, this classification system may well complement the one proposed by Tessier while seeking a correlation between clinical observation and morphogenesis. Terminology is important. The overused term cleft should be reserved for true clefts only, which develop from disturbances in the union of the embryonic facial processes: between the lateronasal and maxillary processes (the oro-naso-ocular cleft); between the medionasal and maxillary processes (cleft of the lip); between the maxillary processes (cleft of the palate); and between the maxillary and mandibular processes (macrostomia). For the other types of defects, derived from alterations of bone production centers, the word dysplasia should be used instead. Facial dysplasias have been arranged in a helix form and named after the site of the developmental arrest; thus internasal, nasal, nasomaxillary, maxillary, and malar dysplasias have been identified, depending on the area involved. The classification may provide a useful guide to better understanding the morphogenesis of rare craniofacial malformations.

  8. Facial soft tissue thickness differences among three skeletal classes in Japanese population.

    PubMed

    Utsuno, Hajime; Kageyama, Toru; Uchida, Keiichi; Kibayashi, Kazuhiko

    2014-03-01

    Facial reconstruction is used in forensic anthropology to recreate the face from unknown human skeletal remains and to elucidate the antemortem facial appearance. This requires accurate assessment of the skull (age, sex, ancestry, etc.) and tissue thickness data; however, additional information is needed to reconstruct the face, as the information obtained from the skull is limited. Here, we aimed to examine the information from the skull that is required for accurate facial reconstruction. The human facial profile is classified into 3 shapes: straight, convex, and concave. These facial profiles facilitate the recognition of individuals. The skeletal classes used in orthodontics correspond to these 3 facial types. We have previously reported the differences among Japanese females. In the present study, we applied this classification to facial tissue measurement, compared the differences in tissue depth of each skeletal class for both sexes in the Japanese population, and elucidated the differences between the skeletal classes. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  9. Emotional facial activation induced by unconsciously perceived dynamic facial expressions.

    PubMed

    Kaiser, Jakob; Davey, Graham C L; Parkhouse, Thomas; Meeres, Jennifer; Scott, Ryan B

    2016-12-01

    Do facial expressions of emotion influence us when not consciously perceived? Methods to investigate this question have typically relied on brief presentation of static images. In contrast, real facial expressions are dynamic and unfold over several seconds. Recent studies demonstrate that gaze contingent crowding (GCC) can block awareness of dynamic expressions while still inducing behavioural priming effects. The current experiment tested for the first time whether dynamic facial expressions presented using this method can induce unconscious facial activation. Videos of dynamic happy and angry expressions were presented outside participants' conscious awareness while EMG measurements captured activation of the zygomaticus major (active when smiling) and the corrugator supercilii (active when frowning). Forced-choice classification of expressions confirmed they were not consciously perceived, while EMG revealed significant differential activation of facial muscles consistent with the expressions presented. This successful demonstration opens new avenues for research examining the unconscious emotional influences of facial expressions. Copyright © 2016 Elsevier B.V. All rights reserved.

  10. Brief Report: Is Impaired Classification of Subtle Facial Expressions in Children with Autism Spectrum Disorders Related to Atypical Emotion Category Boundaries?

    ERIC Educational Resources Information Center

    Whitaker, Lydia R.; Simpson, Andrew; Roberson, Debi

    2017-01-01

    Impairments in recognizing subtle facial expressions, in individuals with autism spectrum disorder (ASD), may relate to difficulties in constructing prototypes of these expressions. Eighteen children with ASD and predominantly low intellectual functioning (LFA, IQ <80) and two control groups (mental- and chronological-age matched) were assessed for…

  11. Extreme Facial Expressions Classification Based on Reality Parameters

    NASA Astrophysics Data System (ADS)

    Rahim, Mohd Shafry Mohd; Rad, Abdolvahab Ehsani; Rehman, Amjad; Altameem, Ayman

    2014-09-01

    Extreme expressions are emotional expressions stimulated by strong emotion; an example of such an extreme expression is satisfaction expressed through tears. To render these types of features, additional elements such as a fluid mechanism (particle system) and physics techniques such as smoothed particle hydrodynamics (SPH) are introduced. The fusion of facial animation with SPH exhibits promising results. Accordingly, the proposed fluid technique for facial animation is the core of this research: it produces complex expressions, such as laughing, smiling, and crying (the emergence of tears) through to intense sobbing, as a classification of the extreme expressions that appear on the human face in such cases.

  12. A multiple maximum scatter difference discriminant criterion for facial feature extraction.

    PubMed

    Song, Fengxi; Zhang, David; Mei, Dayong; Guo, Zhongwei

    2007-12-01

    The maximum scatter difference (MSD) discriminant criterion is a recently presented binary discriminant criterion for pattern classification that utilizes the generalized scatter difference rather than the generalized Rayleigh quotient as a class separability measure, thereby avoiding the singularity problem when addressing small-sample-size problems. MSD classifiers based on this criterion have been quite effective on face-recognition tasks, but as they are binary classifiers, they are not as efficient on large-scale classification tasks. To address this problem, this paper generalizes the classification-oriented binary criterion to its multiple counterpart--the multiple MSD (MMSD) discriminant criterion for facial feature extraction. The MMSD feature-extraction method, which is based on this novel discriminant criterion, is a new subspace-based feature-extraction method. Unlike most other subspace-based feature-extraction methods, the MMSD computes its discriminant vectors from both the range of the between-class scatter matrix and the null space of the within-class scatter matrix. The MMSD is theoretically elegant and easy to calculate. Extensive experimental studies conducted on the benchmark database FERET show that the MMSD outperforms state-of-the-art facial feature-extraction methods such as the null space method, direct linear discriminant analysis (LDA), eigenfaces, Fisherfaces, and complete LDA.
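
    The scatter-difference idea is easy to prototype. Below is a minimal sketch assuming the plain scatter-difference eigenproblem with a balance constant c (the published MMSD additionally works in the range of the between-class scatter and the null space of the within-class scatter); the function names and toy data are illustrative only.

    ```python
    import numpy as np

    def mmsd_features(X, y, n_components=3, c=1.0):
        """Sketch of an MMSD-style extractor: discriminant vectors are the
        leading eigenvectors of the scatter *difference* S_b - c*S_w,
        which stays well-defined even when S_w is singular."""
        classes = np.unique(y)
        mean_all = X.mean(axis=0)
        d = X.shape[1]
        S_b = np.zeros((d, d))
        S_w = np.zeros((d, d))
        for k in classes:
            Xk = X[y == k]
            mk = Xk.mean(axis=0)
            diff = (mk - mean_all)[:, None]
            S_b += len(Xk) * diff @ diff.T
            S_w += (Xk - mk).T @ (Xk - mk)
        # Symmetric eigenproblem on the scatter difference; no inversion of S_w.
        evals, evecs = np.linalg.eigh(S_b - c * S_w)
        W = evecs[:, np.argsort(evals)[::-1][:n_components]]
        return X @ W, W

    # Tiny usage example with random stand-in data
    X = np.random.randn(60, 20)
    y = np.repeat([0, 1, 2], 20)
    Z, W = mmsd_features(X, y)
    ```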

  13. Classification and Weakly Supervised Pain Localization using Multiple Segment Representation.

    PubMed

    Sikka, Karan; Dhall, Abhinav; Bartlett, Marian Stewart

    2014-10-01

    Automatic pain recognition from videos is a vital clinical application and, owing to its spontaneous nature, poses interesting challenges to automatic facial expression recognition (AFER) research. Previous pain vs no-pain systems have highlighted two major challenges: (1) ground truth is provided for the sequence, but the presence or absence of the target expression for a given frame is unknown, and (2) the time point and the duration of the pain expression event(s) in each video are unknown. To address these issues we propose a novel framework (referred to as MS-MIL) where each sequence is represented as a bag containing multiple segments, and multiple instance learning (MIL) is employed to handle this weakly labeled data in the form of sequence level ground-truth. These segments are generated via multiple clustering of a sequence or running a multi-scale temporal scanning window, and are represented using a state-of-the-art Bag of Words (BoW) representation. This work extends the idea of detecting facial expressions through 'concept frames' to 'concept segments' and argues through extensive experiments that algorithms such as MIL are needed to reap the benefits of such representation. The key advantages of our approach are: (1) joint detection and localization of painful frames using only sequence-level ground-truth, (2) incorporation of temporal dynamics by representing the data not as individual frames but as segments, and (3) extraction of multiple segments, which is well suited to signals with uncertain temporal location and duration in the video. Extensive experiments on UNBC-McMaster Shoulder Pain dataset highlight the effectiveness of the approach by achieving competitive results on both tasks of pain classification and localization in videos. We also empirically evaluate the contributions of different components of MS-MIL. The paper also includes the visualization of discriminative facial patches, important for pain detection, as discovered by our algorithm and relates them to Action Units that have been associated with pain expression. We conclude the paper by demonstrating that MS-MIL yields a significant improvement on another spontaneous facial expression dataset, the FEEDTUM dataset.
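
    As a rough illustration of the multiple-instance idea, the sketch below scores a video as the maximum over its segment scores, which yields both a sequence-level decision and a localization. The linear segment classifier and all data here are hypothetical stand-ins, not the authors' trained MS-MIL model.

    ```python
    import numpy as np

    def bag_score(segment_features, w, b):
        """MIL-style bag scoring: a video is a bag of segment descriptors
        (e.g., BoW histograms); the bag score is the max over segment scores,
        which also localizes the most pain-indicative segment."""
        scores = segment_features @ w + b          # one score per segment
        idx = int(np.argmax(scores))               # localization: best segment
        return scores[idx], idx

    # Hypothetical bag: 8 segments, each a 50-dim BoW histogram
    rng = np.random.default_rng(0)
    bag = rng.random((8, 50))
    w, b = rng.standard_normal(50), 0.0
    score, where = bag_score(bag, w, b)
    print(f"bag score={score:.2f}, most indicative segment={where}")
    ```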

  14. Evidence of emotion-antecedent appraisal checks in electroencephalography and facial electromyography

    PubMed Central

    Scherer, Klaus R.; Schuller, Björn W.

    2018-01-01

    In the present study, we applied Machine Learning (ML) methods to identify psychobiological markers of cognitive processes involved in the process of emotion elicitation as postulated by the Component Process Model (CPM). In particular, we focused on the automatic detection of five appraisal checks—novelty, intrinsic pleasantness, goal conduciveness, control, and power—in electroencephalography (EEG) and facial electromyography (EMG) signals. We also evaluated the effects on classification accuracy of averaging the raw physiological signals over different numbers of trials, and whether the use of minimal sets of EEG channels localized over specific scalp regions of interest are sufficient to discriminate between appraisal checks. We demonstrated the effectiveness of our approach on two data sets obtained from previous studies. Our results show that novelty and power appraisal checks can be consistently detected in EEG signals above chance level (binary tasks). For novelty, the best classification performance in terms of accuracy was achieved using features extracted from the whole scalp, and by averaging across 20 individual trials in the same experimental condition (UAR = 83.5 ± 4.2; N = 25). For power, the best performance was obtained by using the signals from four pre-selected EEG channels averaged across all trials available for each participant (UAR = 70.6 ± 5.3; N = 24). Together, our results indicate that accurate classification can be achieved with a relatively small number of trials and channels, but that averaging across a larger number of individual trials is beneficial for the classification for both appraisal checks. We were not able to detect any evidence of the appraisal checks under study in the EMG data. The proposed methodology is a promising tool for the study of the psychophysiological mechanisms underlying emotional episodes, and their application to the development of computerized tools (e.g., Brain-Computer Interface) for the study of cognitive processes involved in emotions. PMID:29293572

  15. Evidence of emotion-antecedent appraisal checks in electroencephalography and facial electromyography.

    PubMed

    Coutinho, Eduardo; Gentsch, Kornelia; van Peer, Jacobien; Scherer, Klaus R; Schuller, Björn W

    2018-01-01

    In the present study, we applied Machine Learning (ML) methods to identify psychobiological markers of cognitive processes involved in the process of emotion elicitation as postulated by the Component Process Model (CPM). In particular, we focused on the automatic detection of five appraisal checks-novelty, intrinsic pleasantness, goal conduciveness, control, and power-in electroencephalography (EEG) and facial electromyography (EMG) signals. We also evaluated the effects on classification accuracy of averaging the raw physiological signals over different numbers of trials, and whether the use of minimal sets of EEG channels localized over specific scalp regions of interest are sufficient to discriminate between appraisal checks. We demonstrated the effectiveness of our approach on two data sets obtained from previous studies. Our results show that novelty and power appraisal checks can be consistently detected in EEG signals above chance level (binary tasks). For novelty, the best classification performance in terms of accuracy was achieved using features extracted from the whole scalp, and by averaging across 20 individual trials in the same experimental condition (UAR = 83.5 ± 4.2; N = 25). For power, the best performance was obtained by using the signals from four pre-selected EEG channels averaged across all trials available for each participant (UAR = 70.6 ± 5.3; N = 24). Together, our results indicate that accurate classification can be achieved with a relatively small number of trials and channels, but that averaging across a larger number of individual trials is beneficial for the classification for both appraisal checks. We were not able to detect any evidence of the appraisal checks under study in the EMG data. The proposed methodology is a promising tool for the study of the psychophysiological mechanisms underlying emotional episodes, and their application to the development of computerized tools (e.g., Brain-Computer Interface) for the study of cognitive processes involved in emotions.
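
    The UAR metric reported above is simple to reproduce. A minimal sketch, assuming integer class labels:

    ```python
    import numpy as np

    def uar(y_true, y_pred):
        """Unweighted average recall (UAR): the mean of per-class recalls,
        the metric reported for the appraisal-check classifiers."""
        classes = np.unique(y_true)
        recalls = [np.mean(y_pred[y_true == k] == k) for k in classes]
        return float(np.mean(recalls)) * 100  # in percent, as in the abstract

    y_true = np.array([0, 0, 0, 1, 1, 1])
    y_pred = np.array([0, 0, 1, 1, 1, 1])
    print(uar(y_true, y_pred))  # (2/3 + 3/3) / 2 * 100 ≈ 83.3
    ```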

  16. Complications in Pediatric Facial Fractures

    PubMed Central

    Chao, Mimi T.; Losee, Joseph E.

    2009-01-01

    Despite recent advances in the diagnosis, treatment, and prevention of pediatric facial fractures, little has been published on the complications of these fractures. The existing literature is highly variable regarding both the definition and the reporting of adverse events. Although the incidence of pediatric facial fractures is relatively low, they are strongly associated with other serious injuries. Both the fractures and their treatment may have long-term consequences for the growth and development of the immature face. This article is a selective review of the literature on facial fracture complications with special emphasis on the complications unique to pediatric patients. We also present our classification system to evaluate adverse outcomes associated with pediatric facial fractures. Prospective, long-term studies are needed to fully understand and appreciate the complexity of treating children with facial fractures and determining the true incidence, subsequent growth, and nature of their complications. PMID:22110803

  17. Principal component analysis for surface reflection components and structure in facial images and synthesis of facial images for various ages

    NASA Astrophysics Data System (ADS)

    Hirose, Misa; Toyota, Saori; Ojima, Nobutoshi; Ogawa-Ochiai, Keiko; Tsumura, Norimichi

    2017-08-01

    In this paper, principal component analysis is applied to the distributions of pigmentation, surface reflectance, and landmarks in whole facial images to obtain feature values. The relationship between the obtained feature vectors and the age of the face is then estimated by multiple regression analysis so that facial images can be modulated for women aged 10-70. In a previous study, we analyzed only the distribution of pigmentation, and the reproduced images appeared younger than the apparent age of the initial images. We believe that this happened because we did not modulate the facial structures and detailed surfaces, such as wrinkles. By considering landmarks and surface reflectance over the entire face, we were able to analyze the variation in the distributions of facial structure, fine asperity, and pigmentation. As a result, our method is able to appropriately modulate the appearance of a face so that it appears to be the correct age.
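
    A minimal sketch of a PCA-plus-multiple-regression pipeline of this kind, with random stand-ins for the pigmentation/reflectance/landmark features; the feature dimensions, component count, and the way the age shift is applied are all illustrative assumptions, not the authors' exact procedure.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression

    # Hypothetical stand-in data: per-face feature vectors concatenating
    # pigmentation maps, surface-reflectance maps, and landmark coordinates.
    rng = np.random.default_rng(1)
    faces = rng.random((100, 500))
    ages = rng.uniform(10, 70, size=100)

    pca = PCA(n_components=20).fit(faces)
    scores = pca.transform(faces)                  # feature values per face
    reg = LinearRegression().fit(scores, ages)     # age ~ PC scores

    # To "age" a face, move its PC scores along the regression direction
    # and reconstruct; this shifts the predicted age by the desired amount.
    beta = reg.coef_
    target_shift = 55 - reg.predict(scores[:1])    # years to add
    aged_scores = scores[:1] + target_shift * beta / (beta @ beta)
    aged_face = pca.inverse_transform(aged_scores)
    ```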

  18. The face-selective N170 component is modulated by facial color.

    PubMed

    Nakajima, Kae; Minami, Tetsuto; Nakauchi, Shigeki

    2012-08-01

    Faces play an important role in social interaction by conveying information and emotion. Of the various components of the face, color particularly provides important clues with regard to perception of age, sex, health status, and attractiveness. In event-related potential (ERP) studies, the N170 component has been identified as face-selective. To determine the effect of color on face processing, we investigated the modulation of N170 by facial color. We recorded ERPs while subjects viewed facial color stimuli at 8 hue angles, which were generated by rotating the original facial color distribution around the white point by 45° for each human face. Responses to facial color were localized to the left, but not to the right hemisphere. N170 amplitudes gradually increased in proportion to the increase in hue angle from the natural-colored face. This suggests that N170 amplitude in the left hemisphere reflects processing of facial color information. Copyright © 2012 Elsevier Ltd. All rights reserved.
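
    The stimulus manipulation can be approximated in a few lines. Below is a sketch under the assumption that "rotating the facial color distribution around the white point" corresponds to a rotation in the CIELAB a*-b* plane; the 64x64 random image is a stand-in for a facial photograph.

    ```python
    import numpy as np
    from skimage import color

    def rotate_hue(rgb_img, angle_deg):
        """Rotate an image's color distribution around the white point,
        approximated here as a rotation in the CIELAB a*-b* plane."""
        lab = color.rgb2lab(rgb_img)
        t = np.deg2rad(angle_deg)
        a, b = lab[..., 1].copy(), lab[..., 2].copy()
        lab[..., 1] = a * np.cos(t) - b * np.sin(t)
        lab[..., 2] = a * np.sin(t) + b * np.cos(t)
        return np.clip(color.lab2rgb(lab), 0, 1)

    # Eight hue conditions, 45 degrees apart, as in the study design
    face = np.random.rand(64, 64, 3)  # stand-in for a facial image
    stimuli = [rotate_hue(face, k * 45) for k in range(8)]
    ```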

  19. On the Suitability of Mobile Cloud Computing at the Tactical Edge

    DTIC Science & Technology

    2014-04-23

    geolocation; Facial recognition (photo identification/classification); Intelligence, Surveillance, and Reconnaissance (ISR); and Fusion of Electronic... could benefit most from MCC are those with large processing overhead, low bandwidth requirements, and a need for large database support (e.g., facial... recognition, language translation). The effect—specifically on the communication links—of supporting these applications at the tactical edge

  20. Shades of Emotion: What the Addition of Sunglasses or Masks to Faces Reveals about the Development of Facial Expression Processing

    ERIC Educational Resources Information Center

    Roberson, Debi; Kikutani, Mariko; Doge, Paula; Whitaker, Lydia; Majid, Asifa

    2012-01-01

    Three studies investigated developmental changes in facial expression processing, between 3 years-of-age and adulthood. For adults and older children, the addition of sunglasses to upright faces caused an equivalent decrement in performance to face inversion. However, younger children showed "better" classification of expressions of faces wearing…

  1. The assessment of facial variation in 4747 British school children.

    PubMed

    Toma, Arshed M; Zhurov, Alexei I; Playle, Rebecca; Marshall, David; Rosin, Paul L; Richmond, Stephen

    2012-12-01

    The aim of this study is to identify key components contributing to facial variation in a large population-based sample of 15.5-year-old children (2514 females and 2233 males). The subjects were recruited from the Avon Longitudinal Study of Parents and Children. Three-dimensional facial images were obtained for each subject using two high-resolution Konica Minolta laser scanners. Twenty-one reproducible facial landmarks were identified and their coordinates were recorded. The facial images were registered using Procrustes analysis. Principal component analysis was then employed to identify independent groups of correlated coordinates. For the total data set, 14 principal components (PCs) were identified which explained 82 per cent of the total variance, with the first three components accounting for 46 per cent of the variance. Similar results were obtained for males and females separately with only subtle gender differences in some PCs. Facial features may be treated as a multidimensional statistical continuum with respect to the PCs. The first three PCs characterize the face in terms of height, width, and prominence of the nose. The derived PCs may be useful to identify and classify faces according to a scale of normality.
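
    The Procrustes-then-PCA pipeline used here is straightforward to sketch. Below is a minimal generalized Procrustes alignment followed by PCA on the flattened landmark coordinates; the data are random stand-ins and the iteration count is an arbitrary choice.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    def generalized_procrustes(shapes, iters=5):
        """Minimal generalized Procrustes alignment for an (n, k, d) array of
        landmark sets: centre, scale, and rotate each shape onto the mean."""
        shapes = shapes - shapes.mean(axis=1, keepdims=True)          # centre
        shapes /= np.linalg.norm(shapes, axis=(1, 2), keepdims=True)  # scale
        mean = shapes[0]
        for _ in range(iters):
            for i, s in enumerate(shapes):
                # Orthogonal Procrustes rotation of s onto the current mean
                u, _, vt = np.linalg.svd(s.T @ mean)
                shapes[i] = s @ u @ vt
            mean = shapes.mean(axis=0)
            mean /= np.linalg.norm(mean)
        return shapes

    rng = np.random.default_rng(0)
    landmarks = rng.random((200, 21, 3))     # stand-in: 21 3D landmarks per face
    aligned = generalized_procrustes(landmarks)

    pca = PCA(n_components=14).fit(aligned.reshape(len(aligned), -1))
    print(pca.explained_variance_ratio_.sum())  # cf. 82 per cent in the study
    ```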

  2. How components of facial width to height ratio differently contribute to the perception of social traits

    PubMed Central

    Lio, Guillaume; Gomez, Alice; Sirigu, Angela

    2017-01-01

    Facial width to height ratio (fWHR) is a morphological cue that correlates with sexual dimorphism and social traits. Currently, it is unclear how the vertical and horizontal components of fWHR distinctly capture faces' social information. Using a new methodology, we orthogonally manipulated the upper facial height and the bizygomatic width to test their selective effects on the formation of impressions. Subjects (n = 90) saw pairs of faces and had to select the face that better expressed a given social trait (trustworthiness, aggressiveness, or femininity). We further investigated how sex and fWHR components interact in the formation of these judgements. Across experiments, changes along the vertical component predicted participants' ratings better than changes along the horizontal component. Faces with smaller height were perceived as less trustworthy, less feminine, and more aggressive. By dissociating fWHR and testing the contributions of its components independently, we obtained a powerful and discriminative measure of how facial morphology guides social judgements. PMID:28235081

  3. Convolutional neural networks with balanced batches for facial expressions recognition

    NASA Astrophysics Data System (ADS)

    Battini Sönmez, Elena; Cangelosi, Angelo

    2017-03-01

    This paper considers the issue of fully automatic emotion classification on 2D faces. In spite of the great effort made in recent years, traditional machine learning approaches based on hand-crafted feature extraction followed by a classification stage have failed to produce a real-time automatic facial expression recognition system. The proposed architecture uses Convolutional Neural Networks (CNNs), which are built as collections of interconnected processing elements loosely modelled on the human brain. The basic idea of CNNs is to learn a hierarchical representation of the input data, which results in better classification performance. In this work we present a block-based CNN algorithm, which uses noise as a data augmentation technique and builds batches with a balanced number of samples per class. The proposed architecture is a very simple yet powerful CNN, which can yield state-of-the-art accuracy on the very competitive benchmark of the Extended Cohn-Kanade database.
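
    The balanced-batch-with-noise recipe is easy to sketch independently of any particular CNN. Below is one plausible reading of it; the batch size, noise level, and class count are illustrative assumptions.

    ```python
    import numpy as np

    def balanced_noisy_batches(X, y, per_class=8, sigma=0.05, rng=None):
        """Yield batches with an equal number of samples per expression class,
        augmented with additive Gaussian noise."""
        rng = rng or np.random.default_rng()
        classes = np.unique(y)
        while True:
            idx = np.concatenate([
                rng.choice(np.flatnonzero(y == k), per_class, replace=True)
                for k in classes
            ])
            rng.shuffle(idx)
            yield X[idx] + rng.normal(0, sigma, X[idx].shape), y[idx]

    # Usage: feed batches to any CNN training loop
    X = np.random.rand(500, 48, 48, 1).astype("float32")
    y = np.random.randint(0, 7, 500)             # 7 expression classes
    batch_x, batch_y = next(balanced_noisy_batches(X, y))
    ```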

  4. System for face recognition under expression variations of neutral-sampled individuals using recognized expression warping and a virtual expression-face database

    NASA Astrophysics Data System (ADS)

    Petpairote, Chayanut; Madarasmi, Suthep; Chamnongthai, Kosin

    2018-01-01

    The practical identification of individuals using facial recognition techniques requires matching faces with specific expressions to faces from a neutral-face database. We propose a method for facial recognition under varied expressions against neutral-face samples of individuals, via recognition of expression warping and the use of a virtual expression-face database. In this method, facial expressions are recognized and the input expression faces are classified into facial expression groups. To aid facial recognition, the virtual expression-face database is sorted by average facial-expression shape and by coarse- and fine-featured facial textures. Wrinkle information is also employed in classification, using a masking process to adjust input faces to match the expression-face database. We evaluate the performance of the proposed method using the CMU Multi-PIE, Cohn-Kanade, and AR expression-face databases, and we find that it provides significantly improved face recognition accuracy compared to conventional methods and is acceptable for facial recognition under expression variation.

  5. Folliculotropism in pigmented facial macules: Differential diagnosis with reflectance confocal microscopy.

    PubMed

    Persechino, Flavia; De Carvalho, Nathalie; Ciardo, Silvana; De Pace, Barbara; Casari, Alice; Chester, Johanna; Kaleci, Shaniko; Stanganelli, Ignazio; Longo, Caterina; Farnetani, Francesca; Pellacani, Giovanni

    2018-03-01

    Pigmented facial macules are common on sun-damaged skin. The diagnosis of early-stage lentigo maligna (LM) and lentigo maligna melanoma (LMM) is challenging. Reflectance confocal microscopy (RCM) has been proven to increase the diagnostic accuracy for facial lesions. A total of 154 pigmented facial macules, retrospectively collected, were evaluated for the presence of previously described RCM features and new parameters depicting aspects of the follicle. Melanocytic nests, roundish pagetoid cells, follicular infiltration, bulging from the follicles, and many bright dendrites infiltrating the hair follicle (i.e., folliculotropism) were found to be indicative of LM/LMM compared to non-melanocytic skin neoplasms (NMSNs), with an overall sensitivity of 96% and specificity of 83%. Among NMSNs, solar lentigo and lichen planus-like keratosis proved easier to distinguish from LM/LMM because they usually lack malignant features and present characteristic diagnostic parameters, such as an epidermal cobblestone pattern and polycyclic papillary contours. On the other hand, distinguishing pigmented actinic keratosis (PAK) proved more difficult and required evaluation of hair follicle infiltration and bulging structures, owing to the frequent observation of a few bright dendrites in the epidermis that predominantly do not infiltrate the hair follicle (estimated specificity for PAK 53%). A detailed evaluation of the components of folliculotropism may help to improve diagnostic accuracy. The classification of the type, distribution, and amount of cells, and the presence of bulging around the follicles, seem to represent important tools for differentiating PAK from LM/LMM in RCM analysis. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  6. PCANet: A Simple Deep Learning Baseline for Image Classification?

    PubMed

    Chan, Tsung-Han; Jia, Kui; Gao, Shenghua; Lu, Jiwen; Zeng, Zinan; Ma, Yi

    2015-12-01

    In this paper, we propose a very simple deep learning network for image classification that is based on very basic data processing components: 1) cascaded principal component analysis (PCA); 2) binary hashing; and 3) blockwise histograms. In the proposed architecture, the PCA is employed to learn multistage filter banks. This is followed by simple binary hashing and block histograms for indexing and pooling. This architecture is thus called the PCA network (PCANet) and can be extremely easily and efficiently designed and learned. For comparison and to provide a better understanding, we also introduce and study two simple variations of PCANet: 1) RandNet and 2) LDANet. They share the same topology as PCANet, but their cascaded filters are either randomly selected or learned from linear discriminant analysis. We have extensively tested these basic networks on many benchmark visual data sets for different tasks, including Labeled Faces in the Wild (LFW) for face verification; the MultiPIE, Extended Yale B, AR, Facial Recognition Technology (FERET) data sets for face recognition; and MNIST for hand-written digit recognition. Surprisingly, for all tasks, such a seemingly naive PCANet model is on par with the state-of-the-art features either prefixed, highly hand-crafted, or carefully learned [by deep neural networks (DNNs)]. Even more surprisingly, the model sets new records for many classification tasks on the Extended Yale B, AR, and FERET data sets and on MNIST variations. Additional experiments on other public data sets also demonstrate the potential of PCANet to serve as a simple but highly competitive baseline for texture classification and object recognition.
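
    The first PCANet stage can be sketched in a few lines: learn a filter bank as the leading principal components of mean-removed image patches (the later hashing and histogram stages are omitted here). Patch size, filter count, and the toy images are illustrative choices.

    ```python
    import numpy as np

    def pca_filter_bank(images, patch=7, n_filters=8):
        """First PCANet stage: the filter bank consists of the leading
        principal components of mean-removed patches from the images."""
        ps = []
        for img in images:
            for i in range(0, img.shape[0] - patch + 1, patch):
                for j in range(0, img.shape[1] - patch + 1, patch):
                    p = img[i:i + patch, j:j + patch].ravel()
                    ps.append(p - p.mean())              # remove patch mean
        P = np.array(ps)
        # Leading right singular vectors span the principal patch directions.
        _, _, vt = np.linalg.svd(P, full_matrices=False)
        return vt[:n_filters].reshape(n_filters, patch, patch)

    imgs = np.random.rand(20, 28, 28)   # stand-in training images
    filters = pca_filter_bank(imgs)     # convolve these, then hash + histogram
    ```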

  7. Facial Expression Recognition: Can Preschoolers with Cochlear Implants and Hearing Aids Catch It?

    ERIC Educational Resources Information Center

    Wang, Yifang; Su, Yanjie; Fang, Ping; Zhou, Qingxia

    2011-01-01

    Tager-Flusberg and Sullivan (2000) presented a cognitive model of theory of mind (ToM), in which they thought ToM included two components--a social-perceptual component and a social-cognitive component. Facial expression recognition (FER) is an ability tapping the social-perceptual component. Previous findings suggested that normal hearing…

  8. Development and validation of a facial expression database based on the dimensional and categorical model of emotions.

    PubMed

    Fujimura, Tomomi; Umemura, Hiroyuki

    2018-01-15

    The present study describes the development and validation of a facial expression database comprising five different horizontal face angles in dynamic and static presentations. The database includes twelve expression types portrayed by eight Japanese models. This database was inspired by the dimensional and categorical model of emotions: surprise, fear, sadness, anger with open mouth, anger with closed mouth, disgust with open mouth, disgust with closed mouth, excitement, happiness, relaxation, sleepiness, and neutral (static only). The expressions were validated using emotion classification and Affect Grid rating tasks [Russell, Weiss, & Mendelsohn, 1989. Affect Grid: A single-item scale of pleasure and arousal. Journal of Personality and Social Psychology, 57(3), 493-502]. The results indicate that most of the expressions were recognised as the intended emotions and could systematically represent affective valence and arousal. Furthermore, face angle and facial motion information influenced emotion classification and valence and arousal ratings. Our database will be available online at the following URL: https://www.dh.aist.go.jp/database/face2017/.

  9. Signatures of personality on dense 3D facial images.

    PubMed

    Hu, Sile; Xiong, Jieyi; Fu, Pengcheng; Qiao, Lu; Tan, Jingze; Jin, Li; Tang, Kun

    2017-03-06

    It has long been speculated that cues exist on the human face that allow observers to make reliable judgments of others' personality traits. However, direct evidence of an association between facial shape and personality is missing from the current literature. This study assessed the personality attributes of 834 Han Chinese volunteers (405 males and 429 females), utilising the five-factor personality model ('Big Five'), and collected their neutral 3D facial images. Dense anatomical correspondence was established across the 3D facial images in order to allow high-dimensional quantitative analyses of the facial phenotypes. In this paper, we developed a Partial Least Squares (PLS)-based method. We used the composite partial least squares component (CPSLC) to test the association between the self-tested personality scores and the dense 3D facial image data, then used principal component analysis (PCA) for further validation. Among the five personality factors, agreeableness and conscientiousness in males and extraversion in females were significantly associated with specific facial patterns. The personality-related facial patterns were extracted and their effects were extrapolated on simulated 3D facial models.
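
    A minimal PLS sketch of the kind of face-trait association test described above, using scikit-learn's PLSRegression on random stand-in data; the dimensions and the correlation test on the first component pair are illustrative, not the authors' exact CPSLC procedure.

    ```python
    import numpy as np
    from scipy.stats import pearsonr
    from sklearn.cross_decomposition import PLSRegression

    # Hypothetical stand-ins: dense 3D face coordinates (flattened) and
    # Big Five scores; the dimensions are illustrative only.
    rng = np.random.default_rng(2)
    faces = rng.random((400, 3000))          # n subjects x (vertices * 3)
    big5 = rng.random((400, 5))              # O, C, E, A, N scores

    pls = PLSRegression(n_components=2).fit(faces, big5)
    face_scores, trait_scores = pls.transform(faces, big5)

    # Association of the first pair of latent components
    r, p = pearsonr(face_scores[:, 0], trait_scores[:, 0])
    print(f"r={r:.2f}, p={p:.3f}")
    ```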

  10. An optimized ERP brain-computer interface based on facial expression changes.

    PubMed

    Jin, Jing; Daly, Ian; Zhang, Yu; Wang, Xingyu; Cichocki, Andrzej

    2014-06-01

    Interferences from spatially adjacent non-target stimuli are known to evoke event-related potentials (ERPs) during non-target flashes and, therefore, lead to false positives. This phenomenon was commonly seen in visual attention-based brain-computer interfaces (BCIs) using conspicuous stimuli and is known to adversely affect the performance of BCI systems. Although users try to focus on the target stimulus, they cannot help but be affected by conspicuous changes of the stimuli (such as flashes or presenting images) which were adjacent to the target stimulus. Furthermore, subjects have reported that conspicuous stimuli made them tired and annoyed. In view of this, the aim of this study was to reduce adjacent interference, annoyance and fatigue using a new stimulus presentation pattern based upon facial expression changes. Our goal was not to design a new pattern which could evoke larger ERPs than the face pattern, but to design a new pattern which could reduce adjacent interference, annoyance and fatigue, and evoke ERPs as good as those observed during the face pattern. Positive facial expressions could be changed to negative facial expressions by minor changes to the original facial image. Although the changes are minor, the contrast is big enough to evoke strong ERPs. In this paper, a facial expression change pattern between positive and negative facial expressions was used to attempt to minimize interference effects. This was compared against two different conditions, a shuffled pattern containing the same shapes and colours as the facial expression change pattern, but without the semantic content associated with a change in expression, and a face versus no face pattern. Comparisons were made in terms of classification accuracy and information transfer rate as well as user supplied subjective measures. The results showed that interferences from adjacent stimuli, annoyance and the fatigue experienced by the subjects could be reduced significantly (p < 0.05) by using the facial expression change patterns in comparison with the face pattern. The offline results show that the classification accuracy of the facial expression change pattern was significantly better than that of the shuffled pattern (p < 0.05) and the face pattern (p < 0.05). The facial expression change pattern presented in this paper reduced interference from adjacent stimuli and decreased the fatigue and annoyance experienced by BCI users significantly (p < 0.05) compared to the face pattern.

  11. An optimized ERP brain-computer interface based on facial expression changes

    NASA Astrophysics Data System (ADS)

    Jin, Jing; Daly, Ian; Zhang, Yu; Wang, Xingyu; Cichocki, Andrzej

    2014-06-01

    Objective. Interferences from spatially adjacent non-target stimuli are known to evoke event-related potentials (ERPs) during non-target flashes and, therefore, lead to false positives. This phenomenon was commonly seen in visual attention-based brain-computer interfaces (BCIs) using conspicuous stimuli and is known to adversely affect the performance of BCI systems. Although users try to focus on the target stimulus, they cannot help but be affected by conspicuous changes of the stimuli (such as flashes or presenting images) which were adjacent to the target stimulus. Furthermore, subjects have reported that conspicuous stimuli made them tired and annoyed. In view of this, the aim of this study was to reduce adjacent interference, annoyance and fatigue using a new stimulus presentation pattern based upon facial expression changes. Our goal was not to design a new pattern which could evoke larger ERPs than the face pattern, but to design a new pattern which could reduce adjacent interference, annoyance and fatigue, and evoke ERPs as good as those observed during the face pattern. Approach. Positive facial expressions could be changed to negative facial expressions by minor changes to the original facial image. Although the changes are minor, the contrast is big enough to evoke strong ERPs. In this paper, a facial expression change pattern between positive and negative facial expressions was used to attempt to minimize interference effects. This was compared against two different conditions, a shuffled pattern containing the same shapes and colours as the facial expression change pattern, but without the semantic content associated with a change in expression, and a face versus no face pattern. Comparisons were made in terms of classification accuracy and information transfer rate as well as user supplied subjective measures. Main results. The results showed that interferences from adjacent stimuli, annoyance and the fatigue experienced by the subjects could be reduced significantly (p < 0.05) by using the facial expression change patterns in comparison with the face pattern. The offline results show that the classification accuracy of the facial expression change pattern was significantly better than that of the shuffled pattern (p < 0.05) and the face pattern (p < 0.05). Significance. The facial expression change pattern presented in this paper reduced interference from adjacent stimuli and decreased the fatigue and annoyance experienced by BCI users significantly (p < 0.05) compared to the face pattern.
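
    The information transfer rate reported alongside classification accuracy in studies like this is conventionally the Wolpaw measure. A minimal sketch, where the target count and selection time are illustrative values, not those of this study:

    ```python
    import numpy as np

    def itr_bits_per_min(n_classes, accuracy, seconds_per_selection):
        """Wolpaw information transfer rate, the standard BCI throughput
        metric combining accuracy, target count, and selection speed."""
        p, n = accuracy, n_classes
        bits = np.log2(n)
        if 0 < p < 1:
            bits += p * np.log2(p) + (1 - p) * np.log2((1 - p) / (n - 1))
        return bits * 60.0 / seconds_per_selection

    # e.g. a 36-target speller at 85% accuracy, 10 s per selection
    print(f"{itr_bits_per_min(36, 0.85, 10.0):.1f} bits/min")
    ```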

  12. Tessier 3 Cleft in a Pre-Hispanic Anthropomorphic Figurine in El Salvador, Central America.

    PubMed

    Aleman, Ramon Manuel; Martinez, Maria Guadalupe

    2017-03-01

    In 1976, Paul Tessier provided a numerical classification system for rare facial clefts, numbered from 0 to 14. The Tessier 3 cleft is a rare facial cleft extending from the philtrum of the upper lip through the wing of the nostril, and reaches the medial canthus of the eye. The aim of this document was to describe a pre-Hispanic anthropomorphic figurine dating from the classic period (200 A.D.-900 A.D.), which has a Tessier 3 cleft. We also discuss the documented pre-Hispanic beliefs about facial clefts.

  13. Coherence explored between emotion components: evidence from event-related potentials and facial electromyography.

    PubMed

    Gentsch, Kornelia; Grandjean, Didier; Scherer, Klaus R

    2014-04-01

    Componential theories assume that emotion episodes consist of emergent and dynamic response changes to relevant events in different components, such as appraisal, physiology, motivation, expression, and subjective feeling. In particular, Scherer's Component Process Model hypothesizes that subjective feeling emerges when the synchronization (or coherence) of appraisal-driven changes between emotion components has reached a critical threshold. We examined the prerequisite of this synchronization hypothesis for appraisal-driven response changes in facial expression. The appraisal process was manipulated by using feedback stimuli, presented in a gambling task. Participants' responses to the feedback were investigated in concurrently recorded brain activity related to appraisal (event-related potentials, ERP) and facial muscle activity (electromyography, EMG). Using principal component analysis, the prediction of appraisal-driven response changes in facial EMG was examined. Results support this prediction: early cognitive processes (related to the feedback-related negativity) seem to primarily affect the upper face, whereas processes that modulate P300 amplitudes tend to predominantly drive cheek region responses. Copyright © 2013 Elsevier B.V. All rights reserved.

  14. Effect of air bags and restraining devices on the pattern of facial fractures in motor vehicle crashes.

    PubMed

    Simoni, Payman; Ostendorf, Robert; Cox, Artemus J

    2003-01-01

    To examine the relationship between the use of restraining devices and the incidence of specific facial fractures in motor vehicle crashes. Retrospective analysis of patients with facial fractures following a motor vehicle crash. University of Alabama at Birmingham Hospital level I trauma center from 1996 to 2000. Of 3731 patients involved in motor vehicle crashes, a total of 497 patients were found to have facial fractures as determined by International Classification of Diseases, Ninth Revision (ICD-9) codes. Facial fractures were categorized as mandibular, orbital, zygomaticomaxillary complex (ZMC), and nasal. Use of seat belts alone was more effective in decreasing the chance of facial fractures in this population (from 17% to 8%) compared with the use of air bags alone (17% to 11%). The use of seat belts and air bags together decreased the incidence of facial fractures from 17% to 5%. Use of restraining devices in vehicles significantly reduces the chance of incurring facial fractures in a severe motor vehicle crash. However, use of air bags and seat belts does not change the pattern of facial fractures greatly except for ZMC fractures. Air bags are least effective in preventing ZMC fractures. Improving the mechanics of restraining devices might be needed to minimize facial fractures.

  15. Cranio-facial clefts in pre-hispanic America.

    PubMed

    Marius-Nunez, A L; Wasiak, D T

    2015-10-01

    Among the representations of congenital malformations in Moche ceramic art, cranio-facial clefts have been portrayed in pottery found in Moche burials. These pottery vessels were used as domestic items during lifetime and as funerary offerings upon death. The aim of this study was to examine archeological evidence for representations of cranio-facial cleft malformations in Moche vessels. Pottery depicting malformations of the midface in Moche collections in Lima, Peru was studied. The malformations portrayed on the pottery were analyzed using the Tessier classification. Photographs were authorized by the Museo Larco. Three vessels were observed to have median cranio-facial dysraphia in association with a midline cleft of the lower lip and a cleft of the mandible. ML001489 portrays a median cranio-facial dysraphia with an orbital cleft and a midline cleft of the lower lip extending to the mandible. ML001514 represents a median facial dysraphia in association with an orbital facial cleft and a vertical orbital dystopia. ML001491 illustrates a median facial cleft with a soft tissue cleft. Three cases of midline, orbital, and lateral facial clefts have thus been portrayed in Moche full-figure portrait vessels. They represent the earliest registries of congenital cranio-facial malformations in ancient Peru. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  16. Facial Cosmetics Exert a Greater Influence on Processing of the Mouth Relative to the Eyes: Evidence from the N170 Event-Related Potential Component.

    PubMed

    Tanaka, Hideaki

    2016-01-01

    Cosmetic makeup significantly influences facial perception. Because faces consist of similar physical structures, cosmetic makeup is typically used to highlight individual features, particularly those of the eyes (i.e., eye shadow) and mouth (i.e., lipstick). Though event-related potentials have been utilized to study various aspects of facial processing, the influence of cosmetics on specific ERP components remains unclear. The present study aimed to investigate the relationship between the application of cosmetic makeup and the amplitudes of the P1 and N170 event-related potential components during facial perception tasks. Moreover, the influence of visual perception on N170 amplitude was evaluated under three makeup conditions: Eye Shadow, Lipstick, and No Makeup. Electroencephalography was used to monitor 17 participants who were exposed to visual stimuli under each of these three makeup conditions. The results demonstrated that the Lipstick condition elicited a significantly greater N170 amplitude than the No Makeup condition, while P1 amplitude was unaffected by any of the conditions. These findings indicate that the application of cosmetic makeup alters general facial perception but exerts no influence on the perception of low-level visual features. Collectively, these results support the notion that the application of makeup induces subtle alterations in the processing of facial stimuli, with a particular effect on the processing of specific facial components (i.e., the mouth), as reflected by changes in N170 amplitude.

  17. Facial Cosmetics Exert a Greater Influence on Processing of the Mouth Relative to the Eyes: Evidence from the N170 Event-Related Potential Component

    PubMed Central

    Tanaka, Hideaki

    2016-01-01

    Cosmetic makeup significantly influences facial perception. Because faces consist of similar physical structures, cosmetic makeup is typically used to highlight individual features, particularly those of the eyes (i.e., eye shadow) and mouth (i.e., lipstick). Though event-related potentials have been utilized to study various aspects of facial processing, the influence of cosmetics on specific ERP components remains unclear. The present study aimed to investigate the relationship between the application of cosmetic makeup and the amplitudes of the P1 and N170 event-related potential components during facial perception tasks. Moreover, the influence of visual perception on N170 amplitude was evaluated under three makeup conditions: Eye Shadow, Lipstick, and No Makeup. Electroencephalography was used to monitor 17 participants who were exposed to visual stimuli under each of these three makeup conditions. The results demonstrated that the Lipstick condition elicited a significantly greater N170 amplitude than the No Makeup condition, while P1 amplitude was unaffected by any of the conditions. These findings indicate that the application of cosmetic makeup alters general facial perception but exerts no influence on the perception of low-level visual features. Collectively, these results support the notion that the application of makeup induces subtle alterations in the processing of facial stimuli, with a particular effect on the processing of specific facial components (i.e., the mouth), as reflected by changes in N170 amplitude. PMID:27656161

  18. Impact of automobile restraint device utilization on facial fractures and fiscal implications for plastic surgeons.

    PubMed

    Adkinson, Joshua M; Murphy, Robert X

    2011-05-01

    In 2009, the National Highway Traffic Safety Administration projected that 33,963 people would die and millions would be injured in motor vehicle collisions (MVCs). Multiple studies have evaluated the impact of restraint devices in MVCs. This study examines longitudinal changes in facial fractures after MVCs as a result of the utilization of restraint devices. The Pennsylvania Trauma Systems Foundation-Pennsylvania Trauma Outcomes Study database was queried for MVCs from 1989 to 2009. Restraint device use was noted, and facial fractures were identified by International Classification of Diseases-ninth revision codes. Surgeon cost data were extrapolated. More than 15,000 patients sustained ≥1 facial fracture. Only orbital blowout fractures increased over the 20 years. Patients were 2.1% less likely every year to have ≥1 facial fracture, which translated into decreased estimated surgeon charges. Increased use of protective devices by patients involved in MVCs resulted in a change in the incidence of different facial fractures, with a reduced need for reconstructive surgery.

  19. The Right Place at the Right Time: Priming Facial Expressions with Emotional Face Components in Developmental Visual Agnosia

    PubMed Central

    Aviezer, Hillel; Hassin, Ran. R.; Perry, Anat; Dudarev, Veronica; Bentin, Shlomo

    2012-01-01

    The current study examined the nature of deficits in emotion recognition from facial expressions in case LG, an individual with a rare form of developmental visual agnosia (DVA). LG presents with profoundly impaired recognition of facial expressions, yet the underlying nature of his deficit remains unknown. During typical face processing, normal sighted individuals extract information about expressed emotions from face regions with activity diagnostic for specific emotion categories. Given LG’s impairment, we sought to shed light on his emotion perception by examining if priming facial expressions with diagnostic emotional face components would facilitate his recognition of the emotion expressed by the face. LG and control participants matched isolated face components with components appearing in a subsequently presented full-face and then categorized the face’s emotion. Critically, the matched components were from regions which were diagnostic or non-diagnostic of the emotion portrayed by the full face. In experiment 1, when the full faces were briefly presented (150 ms), LG’s performance was strongly influenced by the diagnosticity of the components: His emotion recognition was boosted within normal limits when diagnostic components were used and was obliterated when non-diagnostic components were used. By contrast, in experiment 2, when the face-exposure duration was extended (2000 ms), the beneficial effect of the diagnostic matching was diminished as was the detrimental effect of the non-diagnostic matching. These data highlight the impact of diagnostic facial features in normal expression recognition and suggest that impaired emotion recognition in DVA results from deficient visual integration across diagnostic face components. PMID:22349446

  20. Empirical mode decomposition-based facial pose estimation inside video sequences

    NASA Astrophysics Data System (ADS)

    Qing, Chunmei; Jiang, Jianmin; Yang, Zhijing

    2010-03-01

    We describe a new pose-estimation algorithm via integration of the strength in both empirical mode decomposition (EMD) and mutual information. While mutual information is exploited to measure the similarity between facial images to estimate poses, EMD is exploited to decompose input facial images into a number of intrinsic mode function (IMF) components, which redistribute the effect of noise, expression changes, and illumination variations as such that, when the input facial image is described by the selected IMF components, all the negative effects can be minimized. Extensive experiments were carried out in comparisons to existing representative techniques, and the results show that the proposed algorithm achieves better pose-estimation performances with robustness to noise corruption, illumination variation, and facial expressions.
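
    The mutual-information similarity measure at the heart of this approach is easy to reproduce with a joint histogram. A minimal sketch (the EMD preprocessing of the inputs into IMF components is omitted; the images are random stand-ins):

    ```python
    import numpy as np

    def image_mutual_information(img_a, img_b, bins=32):
        """Histogram-based mutual information between two grayscale images,
        usable as a similarity measure for pose matching."""
        joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
        pxy = joint / joint.sum()
        px, py = pxy.sum(axis=1), pxy.sum(axis=0)
        nz = pxy > 0                       # avoid log(0) terms
        return float(np.sum(pxy[nz] * np.log2(pxy[nz] / np.outer(px, py)[nz])))

    a = np.random.rand(64, 64)
    b = np.clip(a + 0.1 * np.random.rand(64, 64), 0, 1)  # similar image
    print(image_mutual_information(a, b))
    ```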

  1. Discriminability effect on Garner interference: evidence from recognition of facial identity and expression

    PubMed Central

    Wang, Yamin; Fu, Xiaolan; Johnston, Robert A.; Yan, Zheng

    2013-01-01

    Using Garner's speeded classification task, existing studies have demonstrated an asymmetric interference in the recognition of facial identity and facial expression: expression seems hardly able to interfere with identity recognition. However, the discriminability of identity and expression, a potential confounding variable, had not been carefully examined in existing studies. In the current work, we manipulated the discriminability of identity and expression by matching facial shape (long or round) for identity and matching the mouth (open or closed) for facial expression. Garner interference was found either from identity to expression (Experiment 1) or from expression to identity (Experiment 2). Interference was also found in both directions (Experiment 3) or in neither direction (Experiment 4). The results support the view that Garner interference tends to occur under conditions of low discriminability of the relevant dimension, regardless of the facial property. Our findings indicate that Garner interference is not necessarily related to interdependent processing in the recognition of facial identity and expression. They also suggest that discriminability, as a mediating factor, should be carefully controlled in future research. PMID:24391609

  2. Facial nerve mapping and monitoring in lymphatic malformation surgery.

    PubMed

    Chiara, Joseph; Kinney, Greg; Slimp, Jefferson; Lee, Gi Soo; Oliaei, Sepehr; Perkins, Jonathan A

    2009-10-01

    Establish the efficacy of preoperative facial nerve mapping and continuous intraoperative EMG monitoring in protecting the facial nerve during resection of cervicofacial lymphatic malformations. Retrospective study in which patients were clinically followed for at least 6 months postoperatively, and long-term outcome was evaluated. Patient demographics, lesion characteristics (i.e., size, stage, location) were recorded. Operative notes revealed surgical techniques, findings, and complications. Preoperative, short-/long-term postoperative facial nerve function was standardized using the House-Brackmann Classification. Mapping was done prior to incision by percutaneously stimulating the facial nerve and its branches and recording the motor responses. Intraoperative monitoring and mapping were accomplished using a four-channel, free-running EMG. Neurophysiologists continuously monitored EMG responses and blindly analyzed intraoperative findings and final EMG interpretations for abnormalities. Seven patients collectively underwent 8 lymphatic malformation surgeries. Median age was 30 months (2-105 months). Lymphatic malformation diagnosis was recorded in 6/8 surgeries. Facial nerve function was House-Brackmann grade I in 8/8 cases preoperatively. Facial nerve was abnormally elongated in 1/8 cases. EMG monitoring recorded abnormal activity in 4/8 cases--two suggesting facial nerve irritation, and two with possible facial nerve damage. Transient or long-term facial nerve paresis occurred in 1/8 cases (House-Brackmann grade II). Preoperative facial nerve mapping combined with continuous intraoperative EMG and mapping is a successful method of identifying the facial nerve course and protecting it from injury during resection of cervicofacial lymphatic malformations involving the facial nerve.

  3. Attention to gaze and emotion in schizophrenia.

    PubMed

    Schwartz, Barbara L; Vaidya, Chandan J; Howard, James H; Deutsch, Stephen I

    2010-11-01

    Individuals with schizophrenia have difficulty interpreting social and emotional cues such as facial expression, gaze direction, body position, and voice intonation. Nonverbal cues are powerful social signals but are often processed implicitly, outside the focus of attention. The aim of this research was to assess implicit processing of social cues in individuals with schizophrenia. Patients with schizophrenia or schizoaffective disorder and matched controls performed a primary task of word classification with social cues in the background. Participants were asked to classify target words (LEFT/RIGHT) by pressing a key that corresponded to the word, in the context of facial expressions with eye gaze averted to the left or right. Although facial expression and gaze direction were irrelevant to the task, these facial cues influenced word classification performance. Participants were slower to classify target words (e.g., LEFT) that were incongruent to gaze direction (e.g., eyes averted to the right) compared to target words (e.g., LEFT) that were congruent to gaze direction (e.g., eyes averted to the left), but this only occurred for expressions of fear. This pattern did not differ for patients and controls. The results showed that threat-related signals capture the attention of individuals with schizophrenia. These data suggest that implicit processing of eye gaze and fearful expressions is intact in schizophrenia. (c) 2010 APA, all rights reserved

  4. Learning representative features for facial images based on a modified principal component analysis

    NASA Astrophysics Data System (ADS)

    Averkin, Anton; Potapov, Alexey

    2013-05-01

    The paper is devoted to facial image analysis and particularly deals with the problem of automatic evaluation of the attractiveness of human faces. We propose a new approach for the automatic construction of a feature space based on a modified principal component analysis. The input data sets for the algorithm are learning data sets of facial images rated by one person. The proposed approach allows one to extract features of an individual's subjective perception of face beauty and to predict attractiveness values for new facial images that were not included in the learning data set. The Pearson correlation coefficient between the values predicted by our method for new facial images and personal attractiveness estimation values equals 0.89. This means that the proposed approach is promising and can be used for predicting subjective face attractiveness values in real facial image analysis systems.
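
    A minimal sketch of this kind of pipeline: PCA features from rated facial images, a regressor fit to one rater's scores, and the Pearson correlation on held-out faces. The Ridge regressor, data, and dimensions are illustrative assumptions, not the authors' modified PCA.

    ```python
    import numpy as np
    from scipy.stats import pearsonr
    from sklearn.decomposition import PCA
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import train_test_split

    # Stand-in data: flattened facial images rated by a single person.
    rng = np.random.default_rng(3)
    X = rng.random((300, 1024))                  # images
    y = rng.uniform(1, 10, 300)                  # one rater's scores

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    pca = PCA(n_components=30).fit(X_tr)         # learned feature space
    model = Ridge().fit(pca.transform(X_tr), y_tr)

    r, _ = pearsonr(model.predict(pca.transform(X_te)), y_te)
    print(f"Pearson r on unseen faces: {r:.2f}")   # cf. 0.89 in the paper
    ```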

  5. Biomedical visual data analysis to build an intelligent diagnostic decision support system in medical genetics.

    PubMed

    Kuru, Kaya; Niranjan, Mahesan; Tunca, Yusuf; Osvank, Erhan; Azim, Tayyaba

    2014-10-01

    In general, medical geneticists aim to pre-diagnose underlying syndromes based on facial features before performing cytological or molecular analyses where a genotype-phenotype interrelation is possible. However, determining correct genotype-phenotype interrelationships among many syndromes is tedious and labor-intensive, especially for extremely rare syndromes. Thus, a computer-aided system for pre-diagnosis can facilitate effective and efficient decision support, particularly when few similar cases are available, or in remote rural districts where diagnostic knowledge of syndromes is not readily available. The proposed methodology, a visual diagnostic decision support system (visual diagnostic DSS), employs machine learning (ML) algorithms and digital image processing techniques in a hybrid approach for automated diagnosis in medical genetics. This approach uses facial features in reference images of disorders to identify visual genotype-phenotype interrelationships. Our statistical method describes facial image data as principal component features and diagnoses syndromes using these features. The proposed system was trained using a real dataset of previously published face images of subjects with syndromes, which provided accurate diagnostic information. The method was tested using a leave-one-out cross-validation scheme with 15 different syndromes, each comprising 5-9 cases (92 cases in total). An accuracy rate of 83% was achieved using this automated diagnosis technique, which was statistically significant (p<0.01). Furthermore, the sensitivity and specificity values were 0.857 and 0.870, respectively. Our results show that the accurate classification of syndromes is feasible using ML techniques. Thus, a large number of syndromes with characteristic facial anomaly patterns could be diagnosed with diagnostic DSSs similar to that described in the present study, i.e., the visual diagnostic DSS, thereby demonstrating the benefits of using hybrid image processing and ML-based computer-aided diagnostics for identifying facial phenotypes. Copyright © 2014. Published by Elsevier B.V.
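
    The leave-one-out evaluation scheme can be reproduced in a few lines. A sketch with random stand-ins for the 92 cases and 15 syndromes; the PCA-plus-kNN classifier is an illustrative choice, not the study's exact model.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.model_selection import LeaveOneOut
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.pipeline import make_pipeline

    # Stand-ins: facial feature vectors for 92 cases across 15 syndromes.
    rng = np.random.default_rng(4)
    X = rng.random((92, 200))
    y = rng.integers(0, 15, 92)

    clf = make_pipeline(PCA(n_components=20), KNeighborsClassifier(3))
    hits = 0
    for train, test in LeaveOneOut().split(X):
        clf.fit(X[train], y[train])                    # retrain without the held-out case
        hits += int(clf.predict(X[test])[0] == y[test][0])
    print(f"LOOCV accuracy: {hits / len(y):.2%}")      # cf. 83% reported
    ```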

  6. The right place at the right time: priming facial expressions with emotional face components in developmental visual agnosia.

    PubMed

    Aviezer, Hillel; Hassin, Ran R; Perry, Anat; Dudarev, Veronica; Bentin, Shlomo

    2012-04-01

    The current study examined the nature of deficits in emotion recognition from facial expressions in case LG, an individual with a rare form of developmental visual agnosia (DVA). LG presents with profoundly impaired recognition of facial expressions, yet the underlying nature of his deficit remains unknown. During typical face processing, normal sighted individuals extract information about expressed emotions from face regions with activity diagnostic for specific emotion categories. Given LG's impairment, we sought to shed light on his emotion perception by examining if priming facial expressions with diagnostic emotional face components would facilitate his recognition of the emotion expressed by the face. LG and control participants matched isolated face components with components appearing in a subsequently presented full-face and then categorized the face's emotion. Critically, the matched components were from regions which were diagnostic or non-diagnostic of the emotion portrayed by the full face. In experiment 1, when the full faces were briefly presented (150 ms), LG's performance was strongly influenced by the diagnosticity of the components: his emotion recognition was boosted within normal limits when diagnostic components were used and was obliterated when non-diagnostic components were used. By contrast, in experiment 2, when the face-exposure duration was extended (2000 ms), the beneficial effect of the diagnostic matching was diminished as was the detrimental effect of the non-diagnostic matching. These data highlight the impact of diagnostic facial features in normal expression recognition and suggest that impaired emotion recognition in DVA results from deficient visual integration across diagnostic face components. Copyright © 2012 Elsevier Ltd. All rights reserved.

  7. Emotion categories and dimensions in the facial communication of affect: An integrated approach.

    PubMed

    Mehu, Marc; Scherer, Klaus R

    2015-12-01

    We investigated the role of facial behavior in emotional communication, using both categorical and dimensional approaches. We used a corpus of enacted emotional expressions (GEMEP) in which professional actors are instructed, with the help of scenarios, to communicate a variety of emotional experiences. The results of Study 1 replicated earlier findings showing that only a minority of facial action units are associated with specific emotional categories. Likewise, facial behavior did not show a specific association with particular emotional dimensions. Study 2 showed that facial behavior plays a significant role both in the detection of emotions and in the judgment of their dimensional aspects, such as valence, arousal, dominance, and unpredictability. In addition, a mediation model revealed that the association between facial behavior and recognition of the signaler's emotional intentions is mediated by perceived emotional dimensions. We conclude that, from a production perspective, facial action units convey neither specific emotions nor specific emotional dimensions, but are associated with several emotions and several dimensions. From the perceiver's perspective, facial behavior facilitated both dimensional and categorical judgments, and the former mediated the effect of facial behavior on recognition accuracy. The classification of emotional expressions into discrete categories may, therefore, rely on the perception of more general dimensions such as valence and arousal and, presumably, the underlying appraisals that are inferred from facial movements. (c) 2015 APA, all rights reserved.

  8. Role of facial attractiveness in patients with slight-to-borderline treatment need according to the Aesthetic Component of the Index of Orthodontic Treatment Need as judged by eye tracking.

    PubMed

    Johnson, Elizabeth K; Fields, Henry W; Beck, F Michael; Firestone, Allen R; Rosenstiel, Stephen F

    2017-02-01

    Previous eye-tracking research has demonstrated that laypersons view the range of dental attractiveness levels differently depending on facial attractiveness levels. How the borderline levels of dental attractiveness are viewed has not been evaluated in the context of facial attractiveness and compared with those with near-ideal esthetics or those in definite need of orthodontic treatment according to the Aesthetic Component of the Index of Orthodontic Treatment Need scale. Our objective was to determine viewers' levels of visual attention across treatment need levels 3 to 7 for persons considered "attractive," "average," or "unattractive." Facial images of persons at 3 facial attractiveness levels were combined with 5 levels of dental attractiveness (dentitions representing Aesthetic Component of the Index of Orthodontic Treatment Need levels 3-7) using imaging software to form 15 composite images. Each image was viewed twice by 66 lay participants using eye tracking. Both the fixation density (number of fixations per facial area) and the fixation duration (length of time for each facial area) were quantified for each image viewed. Repeated-measures analysis of variance was used to determine how fixation density and duration varied among the 6 facial interest areas (chin, ear, eye, mouth, nose, and other). Viewers demonstrated excellent to good reliability among the 6 interest areas (intra-viewer reliability, 0.70-0.96; inter-viewer reliability, 0.56-0.93). Between Aesthetic Component of the Index of Orthodontic Treatment Need levels 3 and 7, viewers of all facial attractiveness levels showed an increase in attention to the mouth. However, significant differences in fixation density and duration between borderline levels were found only for the attractive models, and only among female viewers. Female viewers paid attention to different areas of the face than did male viewers. The importance of dental attractiveness is amplified in facially attractive female models compared with average and unattractive female models between near-ideal and borderline-severe dentally unattractive levels. Copyright © 2017 American Association of Orthodontists. Published by Elsevier Inc. All rights reserved.
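
    The two eye-tracking measures named above reduce to simple aggregations over fixation records. A toy sketch, assuming fixations have already been mapped to the six interest areas (the record format here is hypothetical):

```python
from collections import defaultdict

# Each fixation: (area_of_interest, duration_ms). The AOI labels follow the
# abstract's six facial interest areas; the tuple format is an assumption.
fixations = [("mouth", 310), ("eye", 220), ("mouth", 180), ("nose", 150)]

density = defaultdict(int)    # fixation density: number of fixations per area
duration = defaultdict(int)   # fixation duration: total time per area (ms)
for aoi, ms in fixations:
    density[aoi] += 1
    duration[aoi] += ms

for aoi in ("chin", "ear", "eye", "mouth", "nose", "other"):
    print(aoi, density[aoi], duration[aoi])
```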

  9. Illuminant color estimation based on pigmentation separation from human skin color

    NASA Astrophysics Data System (ADS)

    Tanaka, Satomi; Kakinuma, Akihiro; Kamijo, Naohiro; Takahashi, Hiroshi; Tsumura, Norimichi

    2015-03-01

    Humans have a visual mechanism called "color constancy" that keeps the perceived colors of an object stable across various light sources. An effective color constancy algorithm that exploits human facial color in a digital color image has been proposed; however, that method produces erroneous estimates because facial color differs between individuals. In this paper, we present a novel color constancy algorithm based on skin color analysis. Skin color analysis separates the skin color into melanin, hemoglobin and shading components. We use the stationary property of Japanese facial color, which is calculated from the melanin and hemoglobin components. As a result, the proposed method uses the subject's facial color in the image without depending on individual differences among Japanese facial colors.
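
    A heavily simplified, hypothetical sketch of the idea: if skin chromaticity corrected for individual pigmentation is assumed stationary across faces, the illuminant can be read off as the deviation of observed skin pixels from that canonical chromaticity. The canonical values below are made up for illustration; the paper derives its stationary property from the separated melanin and hemoglobin components.

```python
import numpy as np

# Assumed canonical (stationary) skin chromaticity, illustrative values only.
CANONICAL_SKIN = np.array([0.46, 0.35, 0.19])

def estimate_illuminant(skin_pixels: np.ndarray) -> np.ndarray:
    """skin_pixels: (n, 3) linear RGB samples taken from the face region."""
    mean_rgb = skin_pixels.mean(axis=0)
    chroma = mean_rgb / mean_rgb.sum()      # observed skin chromaticity
    cast = chroma / CANONICAL_SKIN          # per-channel cast relative to canon
    return cast / cast.sum()                # normalized illuminant estimate
```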

  10. Non-invasive health status detection system using Gabor filters based on facial block texture features.

    PubMed

    Shu, Ting; Zhang, Bob

    2015-04-01

    Blood tests allow doctors to check for certain diseases and conditions. However, extracting blood with a syringe is invasive and slightly painful, and the subsequent analysis is time consuming. In this paper, we propose a new non-invasive system to detect the health status (Healthy or Diseased) of an individual based on facial block texture features extracted using the Gabor filter. Our system first uses a non-invasive capture device to collect facial images. Next, four facial blocks are located on these images to represent them. Afterwards, each facial block is convolved with a Gabor filter bank to calculate its texture value. Classification is finally performed using K-Nearest Neighbor and Support Vector Machines via a Library for Support Vector Machines (with four kernel functions). The system was tested on a dataset consisting of 100 Healthy and 100 Diseased (with 13 forms of illnesses) samples. Experimental results show that the proposed system can detect the health status with an accuracy of 93%, a sensitivity of 94%, and a specificity of 92%, using a combination of the Gabor filters and facial blocks.
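
    A sketch of the per-block texture step, assuming scikit-image and SciPy; the bank's frequencies and orientations are illustrative, and the paper's exact Gabor parameters and texture value definition may differ.

```python
import numpy as np
from scipy.ndimage import convolve
from skimage.filters import gabor_kernel

def block_texture_features(block: np.ndarray) -> np.ndarray:
    """Convolve one facial block with a small Gabor bank; return mean magnitudes."""
    block = block.astype(float)
    feats = []
    for theta in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):
        for freq in (0.1, 0.2):
            k = gabor_kernel(frequency=freq, theta=theta)
            real = convolve(block, np.real(k), mode="reflect")
            imag = convolve(block, np.imag(k), mode="reflect")
            feats.append(np.hypot(real, imag).mean())   # texture energy
    return np.asarray(feats)
```

    The vectors from the four facial blocks would then be concatenated and passed to KNN or an SVM, as in the abstract.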

  11. Multiracial Facial Golden Ratio and Evaluation of Facial Appearance.

    PubMed

    Alam, Mohammad Khursheed; Mohd Noor, Nor Farid; Basri, Rehana; Yew, Tan Fo; Wen, Tay Hui

    2015-01-01

    This study aimed to investigate the association of facial proportion and its relation to the golden ratio with the evaluation of facial appearance among the Malaysian population. This was a cross-sectional study of 286 subjects randomly selected from Universiti Sains Malaysia (USM) Health Campus students (150 females and 136 males; 100 Malaysian Chinese, 100 Malaysian Malay and 86 Malaysian Indian), with a mean age of 21.54 ± 1.56 years (age range, 18-25). Facial indices obtained from direct facial measurements were used for the classification of facial shape into short, ideal and long. A validated structured questionnaire was used to assess subjects' evaluation of their own facial appearance. The mean facial indices of Malaysian Indian (MI), Malaysian Chinese (MC) and Malaysian Malay (MM) were 1.59 ± 0.19, 1.57 ± 0.25 and 1.54 ± 0.23, respectively. Only MC showed significant sexual dimorphism in facial index (P = 0.047; P<0.05), but no significant difference was found between races. Out of the 286 subjects, 49 (17.1%) were of ideal facial shape, 156 (54.5%) short and 81 (28.3%) long. The facial evaluation questionnaire showed that MC had the lowest satisfaction, with mean scores of 2.18 ± 0.97 for overall impression and 2.15 ± 1.04 for facial parts, compared to MM and MI, with mean scores of 1.80 ± 0.97 and 1.64 ± 0.74, respectively, for overall impression, and 1.75 ± 0.95 and 1.70 ± 0.83, respectively, for facial parts. In conclusion: 1) Only 17.1% of Malaysian facial proportions conformed to the golden ratio, with the majority of the population having a short face (54.5%); 2) Facial index did not depend significantly on race; 3) Significant sexual dimorphism was shown among Malaysian Chinese; 4) All three races are generally satisfied with their own facial appearance; 5) No significant association was found between the golden ratio and facial evaluation score among the Malaysian population.
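
    The facial-shape classification step amounts to thresholding the facial index (face height over width) around the golden ratio. A toy sketch; the tolerance band is an assumption, since the abstract does not state the study's exact cut-offs:

```python
PHI = 1.618  # golden ratio

def classify_facial_shape(face_height_mm: float, face_width_mm: float,
                          tol: float = 0.05) -> str:
    """Toy classification of facial shape against the golden ratio.

    The +/- tolerance band is an assumed placeholder for the study's
    actual cut-offs separating 'short', 'ideal' and 'long' faces.
    """
    index = face_height_mm / face_width_mm
    if index < PHI - tol:
        return "short"
    if index > PHI + tol:
        return "long"
    return "ideal"
```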

  12. Multiracial Facial Golden Ratio and Evaluation of Facial Appearance

    PubMed Central

    2015-01-01

    This study aimed to investigate the association of facial proportion and its relation to the golden ratio with the evaluation of facial appearance among the Malaysian population. This was a cross-sectional study of 286 subjects randomly selected from Universiti Sains Malaysia (USM) Health Campus students (150 females and 136 males; 100 Malaysian Chinese, 100 Malaysian Malay and 86 Malaysian Indian), with a mean age of 21.54 ± 1.56 years (age range, 18–25). Facial indices obtained from direct facial measurements were used for the classification of facial shape into short, ideal and long. A validated structured questionnaire was used to assess subjects’ evaluation of their own facial appearance. The mean facial indices of Malaysian Indian (MI), Malaysian Chinese (MC) and Malaysian Malay (MM) were 1.59 ± 0.19, 1.57 ± 0.25 and 1.54 ± 0.23, respectively. Only MC showed significant sexual dimorphism in facial index (P = 0.047; P<0.05), but no significant difference was found between races. Out of the 286 subjects, 49 (17.1%) were of ideal facial shape, 156 (54.5%) short and 81 (28.3%) long. The facial evaluation questionnaire showed that MC had the lowest satisfaction, with mean scores of 2.18 ± 0.97 for overall impression and 2.15 ± 1.04 for facial parts, compared to MM and MI, with mean scores of 1.80 ± 0.97 and 1.64 ± 0.74, respectively, for overall impression, and 1.75 ± 0.95 and 1.70 ± 0.83, respectively, for facial parts. In conclusion: 1) Only 17.1% of Malaysian facial proportions conformed to the golden ratio, with the majority of the population having a short face (54.5%); 2) Facial index did not depend significantly on race; 3) Significant sexual dimorphism was shown among Malaysian Chinese; 4) All three races are generally satisfied with their own facial appearance; 5) No significant association was found between the golden ratio and facial evaluation score among the Malaysian population. PMID:26562655

  13. Millennial Filipino Student Engagement Analyzer Using Facial Feature Classification

    NASA Astrophysics Data System (ADS)

    Manseras, R.; Eugenio, F.; Palaoag, T.

    2018-03-01

    Millennials are on everyone's lips and are a target market for many companies nowadays. In the Philippines, they comprise one third of the total population, and most of them are still in school. A good education system is important in preparing this generation for better careers, and a good education system means quality instruction as one of its input component indicators. In a classroom environment, teachers use facial cues to gauge the affect state of the class. Emerging technologies such as affective computing are among today's trends for improving quality instruction delivery and, together with computer vision, can be used to analyze the affect states of students. This paper proposes a system for classifying student engagement using facial features. Identifying affect state, specifically Millennial Filipino student engagement, is one of the main priorities of every educator, and this led the authors to develop a tool to assess engagement percentage. A multiple face detection framework using Face API was employed to detect as many student faces as possible and gauge the current engagement percentage of the whole class. A binary classifier model using a Support Vector Machine (SVM) was set in the conceptual framework of this study. To assess the accuracy of this model, SVM was compared with two other widely used binary classifiers. Results show that SVM outperformed the Random Forest and Naive Bayes algorithms in most of the experiments on the different test datasets.
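
    A minimal sketch of the classifier comparison the abstract reports, using scikit-learn with synthetic stand-in features (the study's facial features and test datasets are not available here):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

# Synthetic stand-in for per-face engagement features (binary labels).
X, y = make_classification(n_samples=300, n_features=20, random_state=0)

for name, clf in [("SVM", SVC()),
                  ("RandomForest", RandomForestClassifier(random_state=0)),
                  ("NaiveBayes", GaussianNB())]:
    acc = cross_val_score(clf, X, y, cv=5).mean()   # 5-fold CV accuracy
    print(f"{name}: {acc:.3f}")
```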

  14. Modern concepts in facial nerve reconstruction

    PubMed Central

    2010-01-01

    Background Reconstructive surgery of the facial nerve is not daily routine for most head and neck surgeons. The published experience on strategies to ensure optimal functional results for patients is based on small case series with a large variety of surgical techniques. Against this background, it is worthwhile to develop a standardized approach for the diagnosis and treatment of patients seeking facial rehabilitation. Conclusion A standardized approach is feasible: patients with chronic facial palsy first need an exact classification of the palsy's aetiology. A step-by-step clinical examination, with MRI and electromyographic examination if necessary, allows classification of the palsy's aetiology as well as determination of the severity of the palsy and the functional deficits. Considering the patient's desire, age and life expectancy, an individual surgical concept is applicable using three main approaches: a) early extratemporal reconstruction; b) early reconstruction of proximal lesions if extratemporal reconstruction is not possible; c) late reconstruction, or reconstruction in cases of congenital palsy. Twelve to 24 months after the last step of surgical reconstruction, a standardized evaluation of the therapeutic results is recommended to determine the necessity of adjuvant surgical procedures or other adjuvant measures, e.g. botulinum toxin application. To date, controlled trials on the value of physiotherapy and other adjuvant measures are lacking, so no recommendations for their optimal application can be given. PMID:21040532

  15. The role of spatial frequency information for ERP components sensitive to faces and emotional facial expression.

    PubMed

    Holmes, Amanda; Winston, Joel S; Eimer, Martin

    2005-10-01

    To investigate the impact of spatial frequency on emotional facial expression analysis, ERPs were recorded in response to low spatial frequency (LSF), high spatial frequency (HSF), and unfiltered broad spatial frequency (BSF) faces with fearful or neutral expressions, houses, and chairs. In line with previous findings, BSF fearful facial expressions elicited a greater frontal positivity than BSF neutral facial expressions, starting at about 150 ms after stimulus onset. In contrast, this emotional expression effect was absent for HSF and LSF faces. Given that some brain regions involved in emotion processing, such as amygdala and connected structures, are selectively tuned to LSF visual inputs, these data suggest that ERP effects of emotional facial expression do not directly reflect activity in these regions. It is argued that higher order neocortical brain systems are involved in the generation of emotion-specific waveform modulations. The face-sensitive N170 component was neither affected by emotional facial expression nor by spatial frequency information.
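
    Stimuli like these are typically produced by low-pass filtering a broad-spectrum image and taking the residual as the high-pass version. A sketch, assuming SciPy; the sigma value is an arbitrary placeholder, since such studies specify cut-offs in cycles per degree or per face:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def spatial_frequency_versions(face: np.ndarray, sigma: float = 4.0):
    """Create LSF and HSF versions of a broad-spectrum (BSF) face image.

    sigma is an assumed cut-off for illustration only.
    """
    f = face.astype(float)
    lsf = gaussian_filter(f, sigma)   # low spatial frequencies (blurred)
    hsf = f - lsf                     # residual high spatial frequencies
    return lsf, hsf
```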

  16. [Integration of the functional signal of intraoperative facial nerve EMG into a navigation model for surgery of the petrous bone].

    PubMed

    Strauss, G; Strauss, M; Lüders, C; Stopp, S; Shi, J; Dietz, A; Lüth, T

    2008-10-01

    PROBLEM DEFINITION: The goal of this work is the integration of information from intraoperative EMG monitoring of the facial nerve into the radiological data of the petrous bone. The following hypotheses were examined: (I) the facial nerve (N. VII) can be localized intraoperatively with high reliability using the stimulation probe, and a computer program can discriminate true-positive EMG signals from false-positive artifacts; (II) the course of the facial nerve can be registered in three-dimensional space from EMG signals on a nerve model in a laboratory test, the individual points of the nerve can be combined into a route model, and the route model can be integrated into data from digital volume tomography (DVT). (I) Intraoperative EMG signals of the facial nerve were classified by automatic software across 128 measurements, and the results were correlated with the actual intraoperative situation. (II) A nerve phantom was designed and a DVT dataset was acquired. The phantom was registered with a navigation system (Karl Storz NPU, Tuttlingen, Germany), and the stimulation probe of the EMG system was tracked by the navigation system. The navigation system was extended by a processing unit (MiMed, Technische Universität München, Germany) so that the classified EMG parameters of the facial nerve route could be received, processed and assembled into a model of the facial nerve course. Operability was examined at 120 (10 x 12) measuring points. The evaluated classification algorithm labeled the EMG signals of the facial nerve correctly in all measurement events. In all 10 trials the nerve route was successfully visualized as a three-dimensional model, and the different sizes of the individual measuring points correctly reflected the corresponding values of Istim and UEMG. This work proves the feasibility of automatic classification of intraoperative EMG signals of the facial nerve by a processing unit. Furthermore, it shows the feasibility of tracking the position of the stimulation probe and integrating it into a model of the facial nerve route (e.g. in DVT data). The reliability with which the position of the nerve can be captured by the stimulation probe is also incorporated into the resulting route model.

  17. Facial expression recognition based on weber local descriptor and sparse representation

    NASA Astrophysics Data System (ADS)

    Ouyang, Yan

    2018-03-01

    Automatic facial expression recognition has been one of the research hotspots in computer vision for nearly ten years. During this decade, many state-of-the-art methods have been proposed that achieve very high accuracy on face images free of interference. Nowadays, many researchers are taking on the harder task of classifying facial expression images with corruptions and occlusions, and the Sparse Representation based Classification (SRC) framework has been widely used because it is robust to such corruptions and occlusions. This paper therefore proposes a novel facial expression recognition method based on the Weber local descriptor (WLD) and sparse representation. The method consists of three parts: first, the face images are divided into many local patches; then the WLD histogram of each patch is extracted; finally, all the WLD histogram features are concatenated into a single vector and classified with SRC. Experimental results on the Cohn-Kanade database show that the proposed method is robust to occlusions and corruptions.
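
    A simplified sketch of the WLD differential-excitation histogram for one patch, assuming NumPy/SciPy; the orientation channel and the multi-scale variant of WLD are omitted for brevity:

```python
import numpy as np
from scipy.ndimage import convolve

def wld_excitation_histogram(patch: np.ndarray, bins: int = 16) -> np.ndarray:
    """Differential-excitation histogram of one face patch (simplified WLD)."""
    patch = patch.astype(float) + 1e-6                 # avoid division by zero
    ring = np.array([[1, 1, 1], [1, -8, 1], [1, 1, 1]], float)
    # sum of neighbor differences relative to the center pixel, then arctan
    xi = np.arctan(convolve(patch, ring, mode="reflect") / patch)
    hist, _ = np.histogram(xi, bins=bins, range=(-np.pi / 2, np.pi / 2))
    return hist / hist.sum()
```

    The per-patch histograms would then be concatenated and classified with SRC, as described above.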

  18. Neurobiological mechanisms associated with facial affect recognition deficits after traumatic brain injury.

    PubMed

    Neumann, Dawn; McDonald, Brenna C; West, John; Keiski, Michelle A; Wang, Yang

    2016-06-01

    The neurobiological mechanisms that underlie facial affect recognition deficits after traumatic brain injury (TBI) have not yet been identified. Using functional magnetic resonance imaging (fMRI), the study aims were to 1) determine whether there are differences in brain activation during facial affect processing in people with TBI who have facial affect recognition impairments (TBI-I) relative to people with TBI and healthy controls who do not have facial affect recognition impairments (TBI-N and HC, respectively); and 2) identify relationships between neural activity and facial affect recognition performance. A facial affect recognition screening task performed outside the scanner was used to determine group classification; TBI patients who performed greater than one standard deviation below normal performance scores were classified as TBI-I, while TBI patients with normal scores were classified as TBI-N. An fMRI facial recognition paradigm was then performed within the 3T environment. Results from 35 participants are reported (TBI-I = 11, TBI-N = 12, and HC = 12). For the fMRI task, the TBI-I and TBI-N groups scored significantly lower than the HC group. Blood oxygenation level-dependent (BOLD) signals for facial affect recognition, compared to a baseline condition of viewing a scrambled face, revealed lower neural activation in the right fusiform gyrus (FG) in the TBI-I group than in the HC group. Right fusiform gyrus activity correlated with accuracy on the facial affect recognition tasks (both within and outside the scanner). Decreased FG activity suggests facial affect recognition deficits after TBI may be the result of impaired holistic face processing. Future directions and clinical implications are discussed.

  19. Neurophysiology of spontaneous facial expressions: I. Motor control of the upper and lower face is behaviorally independent in adults.

    PubMed

    Ross, Elliott D; Gupta, Smita S; Adnan, Asif M; Holden, Thomas L; Havlicek, Joseph; Radhakrishnan, Sridhar

    2016-03-01

    Facial expressions are described traditionally as monolithic entities. However, humans have the capacity to produce facial blends, in which the upper and lower face simultaneously display different emotional expressions. This, in turn, has led to the Component Theory of facial expressions. Recent neuroanatomical studies in monkeys have demonstrated that there are separate cortical motor areas for controlling the upper and lower face that, presumably, also occur in humans. The lower face is represented on the posterior ventrolateral surface of the frontal lobes in the primary motor and premotor cortices and the upper face is represented on the medial surface of the posterior frontal lobes in the supplementary motor and anterior cingulate cortices. Our laboratory has been engaged in a series of studies exploring the perception and production of facial blends. Using high-speed videography, we began measuring the temporal aspects of facial expressions to develop a more complete understanding of the neurophysiology underlying facial expressions and facial blends. The goal of the research presented here was to determine if spontaneous facial expressions in adults are predominantly monolithic or exhibit independent motor control of the upper and lower face. We found that spontaneous facial expressions are very complex and that the motor control of the upper and lower face is overwhelmingly independent, thus robustly supporting the Component Theory of facial expressions. Seemingly monolithic expressions, be they full facial or facial blends, are most likely the result of timing coincidence rather than synchronous coordination between the ventrolateral and medial cortical motor areas responsible for controlling the lower and upper face, respectively. In addition, we found evidence that the right and left face may also exhibit independent motor control, thus supporting the concept that spontaneous facial expressions are organized predominantly across the horizontal facial axis and secondarily across the vertical axis. Published by Elsevier Ltd.

  20. Facial expressions of emotion and psychopathology in adolescent boys.

    PubMed

    Keltner, D; Moffitt, T E; Stouthamer-Loeber, M

    1995-11-01

    On the basis of the widespread belief that emotions underpin psychological adjustment, the authors tested 3 predicted relations between externalizing problems and anger, internalizing problems and fear and sadness, and the absence of externalizing problems and social-moral emotion (embarrassment). Seventy adolescent boys were classified into 1 of 4 comparison groups on the basis of teacher reports using a behavior problem checklist: internalizers, externalizers, mixed (both internalizers and externalizers), and nondisordered boys. The authors coded the facial expressions of emotion shown by the boys during a structured social interaction. Results supported the 3 hypotheses: (a) Externalizing adolescents showed increased facial expressions of anger, (b) on 1 measure internalizing adolescents showed increased facial expressions of fear, and (c) the absence of externalizing problems (or nondisordered classification) was related to increased displays of embarrassment. Discussion focused on the relations of these findings to hypotheses concerning the role of impulse control in antisocial behavior.

  1. Assessing paedophilia based on the haemodynamic brain response to face images.

    PubMed

    Ponseti, Jorge; Granert, Oliver; Van Eimeren, Thilo; Jansen, Olav; Wolff, Stephan; Beier, Klaus; Deuschl, Günther; Huchzermeier, Christian; Stirn, Aglaja; Bosinski, Hartmut; Roman Siebner, Hartwig

    2016-01-01

    Objective assessment of sexual preferences may be of relevance in the treatment and prognosis of child sexual offenders. Previous research has indicated that this can be achieved by pattern classification of brain responses to sexual child and adult images. Our recent research showed that human face processing is tuned to sexual age preferences. This observation prompted us to test whether paedophilia can be inferred based on the haemodynamic brain responses to adult and child faces. Twenty-four men sexually attracted to prepubescent boys or girls (paedophiles) and 32 men sexually attracted to men or women (teleiophiles) were exposed to images of child and adult, male and female faces during a functional magnetic resonance imaging (fMRI) session. A cross-validated, automatic pattern classification algorithm of brain responses to facial stimuli yielded four misclassified participants (three false positives), corresponding to a specificity of 91% and a sensitivity of 95%. These results indicate that the functional response to facial stimuli can be reliably used for fMRI-based classification of paedophilia, bypassing the problem of showing child sexual stimuli to paedophiles.

  2. Spatio-temporal Event Classification using Time-series Kernel based Structured Sparsity

    PubMed Central

    Jeni, László A.; Lőrincz, András; Szabó, Zoltán; Cohn, Jeffrey F.; Kanade, Takeo

    2016-01-01

    In many behavioral domains, such as facial expression and gesture, sparse structure is prevalent. This sparsity would be well suited for event detection but for one problem. Features typically are confounded by alignment error in space and time. As a consequence, high-dimensional representations such as SIFT and Gabor features have been favored despite their much greater computational cost and potential loss of information. We propose a Kernel Structured Sparsity (KSS) method that can handle both the temporal alignment problem and the structured sparse reconstruction within a common framework, and it can rely on simple features. We characterize spatio-temporal events as time-series of motion patterns and by utilizing time-series kernels we apply standard structured-sparse coding techniques to tackle this important problem. We evaluated the KSS method using both gesture and facial expression datasets that include spontaneous behavior and differ in degree of difficulty and type of ground truth coding. KSS outperformed both sparse and non-sparse methods that utilize complex image features and their temporal extensions. In the case of early facial event classification KSS had 10% higher accuracy as measured by F1 score over kernel SVM methods. PMID:27830214
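
    For intuition, here is a plain DTW-based time-series similarity in the spirit of the kernels used above; note that this naive exponentiated-DTW kernel is not guaranteed to be positive definite, which is one reason proper time-series (global-alignment-style) kernels are used in such work:

```python
import numpy as np

def dtw(a: np.ndarray, b: np.ndarray) -> float:
    """Plain dynamic-time-warping distance between two 1-D motion series."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def ts_similarity(a, b, gamma: float = 0.1) -> float:
    """Exponentiated DTW similarity between two motion-pattern series."""
    return float(np.exp(-gamma * dtw(np.asarray(a, float), np.asarray(b, float))))
```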

  3. Automated facial acne assessment from smartphone images

    NASA Astrophysics Data System (ADS)

    Amini, Mohammad; Vasefi, Fartash; Valdebran, Manuel; Huang, Kevin; Zhang, Haomiao; Kemp, William; MacKinnon, Nicholas

    2018-02-01

    A smartphone mobile medical application is presented that provides analysis of the health of facial skin using a smartphone image and cloud-based image processing techniques. The mobile application uses the camera to capture a front face image of a subject, after which the captured image is spatially calibrated based on fiducial points such as the position of the iris of the eye. A facial recognition algorithm is used to identify features of the human face image, to normalize the image, and to define facial regions of interest (ROI) for acne assessment. We identify acne lesions and classify them into two categories: papules and pustules. Automated facial acne assessment was validated by performing tests on images of 60 digital human models and 10 real human face images. The application was able to identify 92% of acne lesions within five facial ROIs. The classification accuracy for separating papules from pustules was 98%. Combined with in-app documentation of treatment, lifestyle factors, and automated facial acne assessment, the app can be used in both cosmetic and clinical dermatology. It allows users to quantitatively self-measure acne severity and treatment efficacy on an ongoing basis to help them manage their chronic facial acne.
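
    The spatial calibration step can be illustrated with an interpupillary-distance fiducial. Both the anatomical constant and the function below are illustrative assumptions; the app's actual calibration from iris position is not specified in detail in the abstract:

```python
import math

# Assumed anatomical constant: mean adult interpupillary distance ~63 mm.
IPD_MM = 63.0

def mm_per_pixel(left_iris_px: tuple, right_iris_px: tuple) -> float:
    """Image scale from the pixel distance between the two iris centers."""
    dx = right_iris_px[0] - left_iris_px[0]
    dy = right_iris_px[1] - left_iris_px[1]
    return IPD_MM / math.hypot(dx, dy)

# Example: convert a lesion area measured in px^2 into mm^2.
scale = mm_per_pixel((402, 510), (650, 514))
lesion_area_mm2 = 180 * scale ** 2
```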

  4. The neurosurgical treatment of neuropathic facial pain.

    PubMed

    Brown, Jeffrey A

    2014-04-01

    This article reviews the definition, etiology and evaluation, and medical and neurosurgical treatment of neuropathic facial pain. A neuropathic origin for facial pain should be considered when evaluating a patient for rhinologic surgery because of complaints of facial pain. Neuropathic facial pain is caused by vascular compression of the trigeminal nerve in the prepontine cistern and is characterized by an intermittent prickling or stabbing component or a constant burning, searing pain. Medical treatment consists of anticonvulsant medication. Neurosurgical treatment may require microvascular decompression of the trigeminal nerve. Copyright © 2014 Elsevier Inc. All rights reserved.

  5. The importance of skin color and facial structure in perceiving and remembering others: an electrophysiological study.

    PubMed

    Brebner, Joanne L; Krigolson, Olav; Handy, Todd C; Quadflieg, Susanne; Turk, David J

    2011-05-04

    The own-race bias (ORB) is a well-documented recognition advantage for own-race (OR) over cross-race (CR) faces, the origin of which remains unclear. In the current study, event-related potentials (ERPs) were recorded while Caucasian participants age-categorized Black and White faces which were digitally altered to display either a race congruent or incongruent facial structure. The results of a subsequent surprise memory test indicated that regardless of facial structure participants recognized White faces better than Black faces. Additional analyses revealed that temporally-early ERP components associated with face-specific perceptual processing (N170) and the individuation of facial exemplars (N250) were selectively sensitive to skin color. In addition, the N200 (a component that has been linked to increased attention and depth of encoding afforded to in-group and OR faces) was modulated by color and structure, and correlated with subsequent memory performance. However, the LPP component associated with the cognitive evaluation of perceptual input was influenced by racial differences in facial structure alone. These findings suggest that racial differences in skin color and facial structure are detected during the encoding of unfamiliar faces, and that the categorization of conspecifics as members of our social in-group on the basis of their skin color may be a determining factor in our ability to subsequently remember them. Copyright © 2011 Elsevier B.V. All rights reserved.

  6. Combined flaps based on the superficial temporal vascular system for reconstruction of facial defects.

    PubMed

    Zhou, Renpeng; Wang, Chen; Qian, Yunliang; Wang, Danru

    2015-09-01

    Facial defects are multicomponent deficiencies rather than simple soft-tissue defects. Based on different branches of the superficial temporal vascular system, various tissue components can be obtained to reconstruct facial defects individually. From January 2004 to December 2013, 31 patients underwent reconstruction of facial defects with composite flaps based on the superficial temporal vascular system. Twenty cases of nasal defects were repaired with skin and cartilage components, six cases of facial defects were treated with double island flaps of the skin and fascia, three patients underwent eyebrow and lower eyelid reconstruction with hairy and hairless flaps simultaneously, and two patients underwent soft-tissue repair with auricular combined flaps and cranial bone grafts. All flaps survived completely. Donor-site morbidity was minimal, with donor sites closed primarily, and donor areas healed with acceptable cosmetic results. The final outcome was satisfactory. Combined flaps based on the superficial temporal vascular system are a useful and versatile option in facial soft-tissue reconstruction. Copyright © 2015 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.

  7. Evaluating visibility of age spot and freckle based on simulated spectral reflectance distribution and facial color image

    NASA Astrophysics Data System (ADS)

    Hirose, Misa; Toyota, Saori; Tsumura, Norimichi

    2018-02-01

    In this research, we evaluate the visibility of age spots and freckles while changing the blood volume, based on simulated spectral reflectance distributions and actual facial color images, and compare the results. First, we generate three types of spatial distribution of age spots and freckles in patch-like images based on the simulated spectral reflectance. The spectral reflectance is simulated using Monte Carlo simulation of light transport in multi-layered tissue. Next, we reconstruct the facial color image while changing the blood volume. We acquire the concentration distributions of the melanin, hemoglobin and shading components by applying independent component analysis to a facial color image, and we reproduce images using the obtained melanin and shading concentrations and the changed hemoglobin concentration. Finally, we evaluate the visibility of the pigmentations using the simulated spectral reflectance distribution and the facial color images. For the simulated spectral reflectance distribution, we found that visibility decreased as blood volume increased. However, the facial color images show that a specific blood volume reduces the visibility of the actual pigmentations.
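
    A rough sketch of the pigmentation separation used here, in the style of skin-color ICA methods: work in optical-density (-log RGB) space, remove a shading direction, and unmix the remainder into melanin- and hemoglobin-like components. The shading direction, the 8-bit normalization, and the use of FastICA are assumptions; ICA also leaves sign and scale of the components ambiguous.

```python
import numpy as np
from sklearn.decomposition import FastICA

def separate_pigments(rgb: np.ndarray):
    """rgb: (H, W, 3) 8-bit facial image. Returns melanin-like, hemoglobin-like
    and shading maps (unscaled, sign-ambiguous, illustrative only)."""
    flat = rgb.reshape(-1, 3).astype(float) / 255.0 + 1e-6
    density = -np.log(flat)                            # optical density space
    shade_dir = np.ones(3) / np.sqrt(3)                # assumed shading direction
    shading = density @ shade_dir                      # per-pixel shading amount
    residual = density - np.outer(shading, shade_dir)  # pigment plane
    # orthonormal basis of the plane orthogonal to the shading direction
    basis = np.linalg.qr(np.eye(3) - np.outer(shade_dir, shade_dir))[0][:, :2]
    sources = FastICA(n_components=2, random_state=0).fit_transform(residual @ basis)
    h, w = rgb.shape[:2]
    return (sources[:, 0].reshape(h, w),
            sources[:, 1].reshape(h, w),
            shading.reshape(h, w))
```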

  8. Effects of a small talking facial image on autonomic activity: the moderating influence of dispositional BIS and BAS sensitivities and emotions.

    PubMed

    Ravaja, Niklas

    2004-01-01

    We examined the moderating influence of dispositional behavioral inhibition system (BIS) and behavioral activation system (BAS) sensitivities, Negative Affect, and Positive Affect on the relationship between a small moving vs. static facial image and autonomic responses when viewing/listening to news messages read by a newscaster among 36 young adults. Autonomic parameters measured were respiratory sinus arrhythmia (RSA), the low-frequency (LF) component of heart rate variability (HRV), electrodermal activity, and pulse transit time (PTT). The results showed that dispositional BAS sensitivity, particularly BAS Fun Seeking, and Negative Affect interacted with facial image motion in predicting autonomic nervous system activity. A moving facial image was related to lower RSA and LF component of HRV and shorter PTTs as compared to a static facial image among high BAS individuals. Even a small talking facial image may contribute to sustained attentional engagement among high BAS individuals, given that the BAS directs attention toward the positive cue and a moving social stimulus may act as a positive incentive for high BAS individuals.

  9. Facial nerve palsy: analysis of cases reported in children in a suburban hospital in Nigeria.

    PubMed

    Folayan, M O; Arobieke, R I; Eziyi, E; Oyetola, E O; Elusiyan, J

    2014-01-01

    The study describes the epidemiology, treatment, and treatment outcomes of the 10 cases of facial nerve palsy seen in children managed at the Obafemi Awolowo University Teaching Hospitals Complex, Ile-Ife over a 10-year period. It also compares findings with reports from developed countries. This was a retrospective cohort review of pediatric cases of facial nerve palsy encountered in all the clinics run by specialists in the above-named hospital. A diagnosis of facial palsy was based on International Classification of Diseases, Ninth Revision, Clinical Modification codes. Information retrieved from the case notes included sex, age, number of days with the lesion prior to presentation in the clinic, diagnosis, treatment, treatment outcome, and referral clinic. Only 10 cases of facial nerve palsy were diagnosed in the institution during the study period. The prevalence of facial nerve palsy in this hospital was 0.01%. The lesion more commonly affected males and the right side of the face. All cases were associated with infections, mainly mumps (70% of cases). Case management included the use of steroids and eye pads for cases that presented within 7 days, and steroids, eye pads, and physical therapy for cases that presented later. All cases of facial nerve palsy associated with mumps and malaria infection fully recovered. The two cases of facial nerve palsy associated with otitis media only partially recovered. Facial nerve palsy in pediatric patients is more commonly associated with mumps in the study environment. Success was recorded with steroid therapy.

  10. Facial EMG responses to emotional expressions are related to emotion perception ability.

    PubMed

    Künecke, Janina; Hildebrandt, Andrea; Recio, Guillermo; Sommer, Werner; Wilhelm, Oliver

    2014-01-01

    Although most people can identify facial expressions of emotions well, they still differ in this ability. According to embodied simulation theories, understanding the emotions of others is fostered by involuntarily mimicking the perceived expressions, causing a "reactivation" of the corresponding mental state. Some studies suggest automatic facial mimicry during expression viewing; however, findings on the relationship between mimicry and emotion perception abilities are equivocal. The present study investigated individual differences in emotion perception and its relationship to facial muscle responses, recorded with electromyogram (EMG), in response to emotional facial expressions. N = 269 participants completed multiple tasks measuring face and emotion perception. EMG recordings were taken from a subsample (N = 110) in an independent emotion classification task of short videos displaying six emotions. Confirmatory factor analyses of the m. corrugator supercilii in response to angry, happy, sad, and neutral expressions showed that individual differences in corrugator activity can be separated into a general response to all faces and an emotion-related response. Structural equation modeling revealed a substantial relationship between the emotion-related response and emotion perception ability, providing evidence for the role of facial muscle activation in emotion perception from an individual differences perspective.

  11. Facial EMG Responses to Emotional Expressions Are Related to Emotion Perception Ability

    PubMed Central

    Künecke, Janina; Hildebrandt, Andrea; Recio, Guillermo; Sommer, Werner; Wilhelm, Oliver

    2014-01-01

    Although most people can identify facial expressions of emotions well, they still differ in this ability. According to embodied simulation theories, understanding the emotions of others is fostered by involuntarily mimicking the perceived expressions, causing a “reactivation” of the corresponding mental state. Some studies suggest automatic facial mimicry during expression viewing; however, findings on the relationship between mimicry and emotion perception abilities are equivocal. The present study investigated individual differences in emotion perception and its relationship to facial muscle responses, recorded with electromyogram (EMG), in response to emotional facial expressions. N = 269 participants completed multiple tasks measuring face and emotion perception. EMG recordings were taken from a subsample (N = 110) in an independent emotion classification task of short videos displaying six emotions. Confirmatory factor analyses of the m. corrugator supercilii in response to angry, happy, sad, and neutral expressions showed that individual differences in corrugator activity can be separated into a general response to all faces and an emotion-related response. Structural equation modeling revealed a substantial relationship between the emotion-related response and emotion perception ability, providing evidence for the role of facial muscle activation in emotion perception from an individual differences perspective. PMID:24489647

  12. Affective Computing and the Impact of Gender and Age

    PubMed Central

    Rukavina, Stefanie; Gruss, Sascha; Hoffmann, Holger; Tan, Jun-Wen; Walter, Steffen; Traue, Harald C.

    2016-01-01

    Affective computing aims at the detection of users’ mental states, in particular, emotions and dispositions during human-computer interactions. Detection can be achieved by measuring multimodal signals, namely, speech, facial expressions and/or psychobiology. Over the past years, one major approach has been to identify the best features for each signal using different classification methods. Although this is of high priority, other subject-specific variables should not be neglected. In our study, we analyzed the effects of gender, age, personality and gender roles on the extracted psychobiological features (derived from skin conductance level, facial electromyography and heart rate variability) as well as their influence on the classification results. In an experimental human-computer interaction, five different affective states were induced with picture material from the International Affective Picture System and ULM pictures. A total of 127 subjects participated in the study. Among all potentially influencing variables (gender has been reported to be influential), age was the only variable that correlated significantly with psychobiological responses. In summary, the conducted classification processes resulted in 20% classification accuracy differences according to age and gender, especially when comparing the neutral condition with the four other affective states. We suggest taking age and gender specifically into account in future studies in affective computing, as these may lead to an improvement of emotion recognition accuracy. PMID:26939129

  13. Self-organized Evaluation of Dynamic Hand Gestures for Sign Language Recognition

    NASA Astrophysics Data System (ADS)

    Buciu, Ioan; Pitas, Ioannis

    Two main theories exist with respect to face encoding and representation in the human visual system (HVS). The first one refers to the dense (holistic) representation of the face, where faces have "holon"-like appearance. The second one claims that a more appropriate face representation is given by a sparse code, where only a small fraction of the neural cells corresponding to face encoding is activated. Theoretical and experimental evidence suggest that the HVS performs face analysis (encoding, storing, face recognition, facial expression recognition) in a structured and hierarchical way, where both representations have their own contribution and goal. According to neuropsychological experiments, it seems that encoding for face recognition, relies on holistic image representation, while a sparse image representation is used for facial expression analysis and classification. From the computer vision perspective, the techniques developed for automatic face and facial expression recognition fall into the same two representation types. Like in Neuroscience, the techniques which perform better for face recognition yield a holistic image representation, while those techniques suitable for facial expression recognition use a sparse or local image representation. The proposed mathematical models of image formation and encoding try to simulate the efficient storing, organization and coding of data in the human cortex. This is equivalent with embedding constraints in the model design regarding dimensionality reduction, redundant information minimization, mutual information minimization, non-negativity constraints, class information, etc. The presented techniques are applied as a feature extraction step followed by a classification method, which also heavily influences the recognition results.

  14. Human Facial Shape and Size Heritability and Genetic Correlations.

    PubMed

    Cole, Joanne B; Manyama, Mange; Larson, Jacinda R; Liberton, Denise K; Ferrara, Tracey M; Riccardi, Sheri L; Li, Mao; Mio, Washington; Klein, Ophir D; Santorico, Stephanie A; Hallgrímsson, Benedikt; Spritz, Richard A

    2017-02-01

    The human face is an array of variable physical features that together make each of us unique and distinguishable. Striking familial facial similarities underscore a genetic component, but little is known of the genes that underlie facial shape differences. Numerous studies have estimated facial shape heritability using various methods. Here, we used advanced three-dimensional imaging technology and quantitative human genetics analysis to estimate narrow-sense heritability, heritability explained by common genetic variation, and pairwise genetic correlations of 38 measures of facial shape and size in normal African Bantu children from Tanzania. Specifically, we fit a linear mixed model of genetic relatedness between close and distant relatives to jointly estimate variance components that correspond to heritability explained by genome-wide common genetic variation and variance explained by uncaptured genetic variation, the sum representing total narrow-sense heritability. Our significant estimates for narrow-sense heritability of specific facial traits range from 28 to 67%, with horizontal measures being slightly more heritable than vertical or depth measures. Furthermore, for over half of facial traits, >90% of narrow-sense heritability can be explained by common genetic variation. We also find high absolute genetic correlation between most traits, indicating large overlap in underlying genetic loci. Not surprisingly, traits measured in the same physical orientation (i.e., both horizontal or both vertical) have high positive genetic correlations, whereas traits in opposite orientations have high negative correlations. The complex genetic architecture of facial shape informs our understanding of the intricate relationships among different facial features as well as overall facial development. Copyright © 2017 by the Genetics Society of America.
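
    In symbols (notation assumed here), the variance decomposition the abstract describes is:

```latex
% Total phenotypic variance: additive genetic plus environmental.
\sigma^2_P = \sigma^2_A + \sigma^2_E, \qquad
% The linear mixed model splits the additive part into variance captured by
% genome-wide common SNPs and uncaptured genetic variance; their sum over
% the phenotypic variance is the total narrow-sense heritability.
\sigma^2_A = \sigma^2_{\mathrm{SNP}} + \sigma^2_{\mathrm{uncaptured}}, \qquad
h^2 = \frac{\sigma^2_A}{\sigma^2_P}
```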

  15. Relationship between individual differences in functional connectivity and facial-emotion recognition abilities in adults with traumatic brain injury.

    PubMed

    Rigon, A; Voss, M W; Turkstra, L S; Mutlu, B; Duff, M C

    2017-01-01

    Although several studies have demonstrated that facial-affect recognition impairment is common following moderate-severe traumatic brain injury (TBI), and that there are diffuse alterations in large-scale functional brain networks in TBI populations, little is known about the relationship between the two. Here, in a sample of 26 participants with TBI and 20 healthy comparison participants (HC) we measured facial-affect recognition abilities and resting-state functional connectivity (rs-FC) using fMRI. We then used network-based statistics to examine (A) the presence of rs-FC differences between individuals with TBI and HC within the facial-affect processing network, and (B) the association between inter-individual differences in emotion recognition skills and rs-FC within the facial-affect processing network. We found that participants with TBI showed significantly lower rs-FC in a component comprising homotopic and within-hemisphere, anterior-posterior connections within the facial-affect processing network. In addition, within the TBI group, participants with higher emotion-labeling skills showed stronger rs-FC within a network comprised of intra- and inter-hemispheric bilateral connections. Findings indicate that the ability to successfully recognize facial-affect after TBI is related to rs-FC within components of facial-affective networks, and provide new evidence that further our understanding of the mechanisms underlying emotion recognition impairment in TBI.

  16. Foveation: an alternative method to simultaneously preserve privacy and information in face images

    NASA Astrophysics Data System (ADS)

    Alonso, Víctor E.; Enríquez-Caldera, Rogerio; Sucar, Luis Enrique

    2017-03-01

    This paper presents a real-time foveation technique proposed as an alternative method for image obfuscation that simultaneously preserves privacy in face deidentification. The relevance of the proposed technique is discussed through a comparative study of the most common distortion methods for face images and an assessment of the performance and effectiveness of privacy protection. All the techniques presented here are evaluated by running them through face recognition software. Data utility preservation was evaluated under gender and facial expression classification. Results quantifying the tradeoff between privacy protection and image information preservation at different obfuscation levels are presented. Comparative results using the facial expression subset of the FERET database show that the technique achieves a good tradeoff between privacy and awareness, with a 30% recognition rate and a classification accuracy as high as 88% obtained from the common figures of merit using the privacy-awareness map.
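
    A toy grayscale foveation sketch: keep full resolution at the fixation point and blend toward a blurred copy with eccentricity. Real foveation uses a multi-resolution pyramid driven by a cortical resolution-falloff model; the single blur level and linear mask here are simplifications.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def foveate(img: np.ndarray, cx: float, cy: float, radius: float = 40.0):
    """img: 2-D grayscale image; (cx, cy): fixation point in pixels."""
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    ecc = np.hypot(xx - cx, yy - cy)                    # eccentricity map
    mask = np.clip((ecc - radius) / radius, 0.0, 1.0)   # 0 = sharp, 1 = blurred
    blurred = gaussian_filter(img.astype(float), sigma=6)
    return img * (1 - mask) + blurred * mask
```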

  17. Automatic sleep stage classification using two facial electrodes.

    PubMed

    Virkkala, Jussi; Velin, Riitta; Himanen, Sari-Leena; Värri, Alpo; Müller, Kiti; Hasan, Joel

    2008-01-01

    Standard sleep stage classification is based on visual analysis of central EEG, EOG and EMG signals. Automatic analysis with a reduced number of sensors has been studied as an easy alternative to the standard. In this study, a single-channel electro-oculography (EOG) algorithm was developed for the separation of wakefulness, SREM, light sleep (S1, S2) and slow wave sleep (S3, S4). The algorithm was developed and tested with 296 subjects. Additional validation was performed on 16 subjects using a lightweight single-channel Alive Monitor. In the validation study, subjects attached the disposable EOG electrodes themselves at home. In separating the four stages, total agreement (and Cohen's kappa) was 74% (0.59) in the training data set, 73% (0.59) in the testing data set and 74% (0.59) in the validation data set. Self-applicable electro-oculography with only two facial electrodes was found to provide reasonable sleep stage information.
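
    Cohen's kappa, reported alongside raw agreement above, corrects the agreement between the automatic and visual stagings for chance. A small sketch:

```python
import numpy as np

def cohens_kappa(a: np.ndarray, b: np.ndarray) -> float:
    """Chance-corrected agreement between two label sequences (e.g. epochs
    staged automatically vs. visually)."""
    labels = np.unique(np.concatenate([a, b]))
    po = np.mean(a == b)                                           # observed
    pe = sum(np.mean(a == c) * np.mean(b == c) for c in labels)    # by chance
    return (po - pe) / (1 - pe)

# For four roughly balanced stages, ~74% agreement yields kappa near 0.59.
```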

  18. Facial averageness and genetic quality: Testing heritability, genetic correlation with attractiveness, and the paternal age effect.

    PubMed

    Lee, Anthony J; Mitchem, Dorian G; Wright, Margaret J; Martin, Nicholas G; Keller, Matthew C; Zietsch, Brendan P

    2016-01-01

    Popular theory suggests that facial averageness is preferred in a partner for genetic benefits to offspring. However, whether facial averageness is associated with genetic quality is yet to be established. Here, we computed an objective measure of facial averageness for a large sample (N = 1,823) of identical and nonidentical twins and their siblings to test two predictions from the theory that facial averageness reflects genetic quality. First, we use biometrical modelling to estimate the heritability of facial averageness, which is necessary if it reflects genetic quality. We also test for a genetic association between facial averageness and facial attractiveness. Second, we assess whether paternal age at conception (a proxy of mutation load) is associated with facial averageness and facial attractiveness. Our findings are mixed with respect to our hypotheses. While we found that facial averageness does have a genetic component, and a significant phenotypic correlation exists between facial averageness and attractiveness, we did not find a genetic correlation between facial averageness and attractiveness (therefore, we cannot say that the genes that affect facial averageness also affect facial attractiveness) and paternal age at conception was not negatively associated with facial averageness. These findings support some of the previously untested assumptions of the 'genetic benefits' account of facial averageness, but cast doubt on others.

  19. Are Happy Faces Attractive? The Roles of Early vs. Late Processing

    PubMed Central

    Sun, Delin; Chan, Chetwyn C. H.; Fan, Jintu; Wu, Yi; Lee, Tatia M. C.

    2015-01-01

    Facial attractiveness is closely related to romantic love. To understand if the neural underpinnings of perceived facial attractiveness and facial expression are similar constructs, we recorded neural signals using an event-related potential (ERP) methodology for 20 participants who were viewing faces with varied attractiveness and expressions. We found that attractiveness and expression were reflected by two early components, P2-lateral (P2l) and P2-medial (P2m), respectively; their interaction effect was reflected by LPP, a late component. The findings suggested that facial attractiveness and expression are first processed in parallel for discrimination between stimuli. After the initial processing, more attentional resources are allocated to the faces with the most positive or most negative valence in both the attractiveness and expression dimensions. The findings contribute to the theoretical model of face perception. PMID:26648885

  20. Facial expression identification using 3D geometric features from Microsoft Kinect device

    NASA Astrophysics Data System (ADS)

    Han, Dongxu; Al Jawad, Naseer; Du, Hongbo

    2016-05-01

    Facial expression identification is an important part of face recognition and closely related to emotion detection from face images. Various solutions have been proposed in the past using different types of cameras and features. The Microsoft Kinect device has been widely used for multimedia interactions, and more recently it has been increasingly deployed to support scientific investigations. This paper explores the effectiveness of using the device to identify emotional facial expressions such as surprise, smile, and sadness, and evaluates the usefulness of 3D data points on a face mesh structure obtained from the Kinect device. We present a distance-based geometric feature component that is derived from the distances between points on the face mesh and selected reference points in a single frame. The feature components extracted across a sequence of frames starting and ending with a neutral expression represent a whole expression. The feature vector eliminates the need for complex face orientation correction, simplifying the feature extraction process and making it more efficient. We applied a kNN classifier that exploits a feature-component-based similarity measure following the principle of dynamic time warping to determine the closest neighbors. Preliminary tests on a small-scale database of different facial expressions show the promise of the newly developed features and the usefulness of the Kinect device in facial expression identification.
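
    A sketch of the per-frame distance feature, assuming NumPy; the reference-point indices are placeholders for the points the paper selects on the Kinect face mesh:

```python
import numpy as np

def frame_features(mesh_pts: np.ndarray, ref_idx) -> np.ndarray:
    """mesh_pts: (n, 3) tracked 3-D face-mesh points for one frame;
    ref_idx: indices of the selected reference points (placeholders here).
    Returns the flattened point-to-reference distance vector."""
    refs = mesh_pts[ref_idx]                                       # (r, 3)
    d = np.linalg.norm(mesh_pts[:, None, :] - refs[None, :, :], axis=-1)
    return d.ravel()

# A whole expression is the sequence of per-frame vectors from neutral back
# to neutral; kNN then compares sequences with a DTW-style similarity.
```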

  1. Surface facial modelling and allometry in relation to sexual dimorphism.

    PubMed

    Velemínská, J; Bigoni, L; Krajíček, V; Borský, J; Šmahelová, D; Cagáňová, V; Peterka, M

    2012-04-01

    Sexual dimorphism is responsible for a substantial part of human facial variability, the study of which is essential for many scientific fields ranging from evolution to special biomedical topics. Our aim was to analyse the relationship between size variability and facial shape variability of sexual traits in the young adult Central European population and to construct average surface models of adult males and females. The method of geometric morphometrics allowed not only the identification of dimorphic traits, but also the evaluation of static allometry and the visualisation of sexual facial differences. Facial variability in the studied sample was characterised by a strong relationship between facial size and the shape of sexually dimorphic traits. A large face size was associated with facial elongation and vice versa. Regarding sexually dimorphic shape traits, a wide, vaulted and high forehead in combination with a narrow and gracile lower face was typical for females. Variability in dimorphic shape traits was smaller in females than in males. For female classification, sexually dimorphic shape traits are more important, while for males the stronger association is with face size. Males generally had a closer inter-orbital distance and a deeper position of the eyes in relation to the facial plane, a larger and wider straight nose and nostrils, and a more massive lower face. Using pseudo-colour maps to provide a detailed schematic representation of the geometrical differences between the sexes, we attempted to clarify the reasons underlying the development of such differences. Copyright © 2012 Elsevier GmbH. All rights reserved.

  2. Evaluation of facial attractiveness in black people according to the subjective facial analysis criteria.

    PubMed

    Melo, Andréa Reis de; Conti, Ana Cláudia de Castro Ferreira; Almeida-Pedrin, Renata Rodrigues; Didier, Victor; Valarelli, Danilo Pinelli; Capelozza Filho, Leopoldino

    2017-02-01

    The objective of this study was to evaluate the facial attractiveness of 30 black individuals, according to the Subjective Facial Analysis criteria. Frontal and profile view photographs of the 30 individuals were evaluated for facial attractiveness and classified as esthetically unpleasant, acceptable, or pleasant by 50 evaluators: the 30 individuals from the sample, 10 orthodontists, and 10 laymen. Besides assessing facial attractiveness, the evaluators had to identify the structures responsible for the classification as unpleasant or pleasant. Intraexaminer agreement was assessed using Spearman's correlation, agreement within each category using Kendall's concordance coefficient, and agreement between the 3 categories using the chi-square test and proportions. Most of the frontal (53.5%) and profile view (54.9%) photographs were classified as esthetically acceptable. The structures most often identified as esthetically unpleasant were the mouth, lips, and face in the frontal view, and the nose and chin in the profile view. The structures most often identified as esthetically pleasant were harmony, face, and mouth in the frontal view, and harmony and nose in the profile view. The ratings by the examiners in the sample and laymen groups showed statistically significant correlation in both views. The orthodontists agreed with the laymen on the evaluation of the frontal view and disagreed on the profile view, especially regarding whether the images were esthetically unpleasant or acceptable. Based on these results, the evaluation of facial attractiveness according to the Subjective Facial Analysis criteria proved to be applicable and subject to subjective influence; therefore, it is suggested that the patient's opinion regarding facial esthetics should be considered in orthodontic treatment planning.

  3. Optimal Geometrical Set for Automated Marker Placement to Virtualized Real-Time Facial Emotions

    PubMed Central

    Maruthapillai, Vasanthan; Murugappan, Murugappan

    2016-01-01

    In recent years, real-time face recognition has been a major topic of interest in developing intelligent human-machine interaction systems. Over the past several decades, researchers have proposed different algorithms for facial expression recognition, but there has been little focus on detection in real-time scenarios. The present work proposes a new algorithmic method of automated marker placement used to classify six facial expressions: happiness, sadness, anger, fear, disgust, and surprise. Emotional facial expressions were captured using a webcam, while the proposed algorithm placed a set of eight virtual markers on each subject’s face. Facial feature extraction methods, including marker distance (distance between each marker to the center of the face) and change in marker distance (change in distance between the original and new marker positions), were used to extract three statistical features (mean, variance, and root mean square) from the real-time video sequence. The initial position of each marker was subjected to the optical flow algorithm for marker tracking with each emotional facial expression. Finally, the extracted statistical features were mapped into corresponding emotional facial expressions using two simple non-linear classifiers, K-nearest neighbor and probabilistic neural network. The results indicate that the proposed automated marker placement algorithm effectively placed eight virtual markers on each subject’s face and gave a maximum mean emotion classification rate of 96.94% using the probabilistic neural network. PMID:26859884
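
    A minimal sketch of the marker-distance statistics described above, assuming a hypothetical (frames x markers x 2) array of tracked marker positions; the webcam capture, optical-flow tracking, and kNN/PNN classification stages are omitted.

        import numpy as np

        def marker_statistics(marker_xy, face_center):
            # marker_xy: (n_frames, 8, 2) tracked positions of the 8 virtual
            # markers; face_center: (2,) centre of the face.
            dist = np.linalg.norm(marker_xy - face_center, axis=-1)   # marker distance
            delta = dist - dist[0]                 # change relative to first frame
            feats = []
            for x in (dist, delta):
                feats += [x.mean(axis=0),          # mean
                          x.var(axis=0),           # variance
                          np.sqrt((x ** 2).mean(axis=0))]   # root mean square
            return np.concatenate(feats)           # 6 statistics x 8 markers

        rng = np.random.default_rng(0)
        clip = rng.normal(size=(120, 8, 2))        # synthetic 120-frame clip
        print(marker_statistics(clip, np.zeros(2)).shape)   # (48,)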

  4. Optimal Geometrical Set for Automated Marker Placement to Virtualized Real-Time Facial Emotions.

    PubMed

    Maruthapillai, Vasanthan; Murugappan, Murugappan

    2016-01-01

    In recent years, real-time face recognition has been a major topic of interest in developing intelligent human-machine interaction systems. Over the past several decades, researchers have proposed different algorithms for facial expression recognition, but there has been little focus on detection in real-time scenarios. The present work proposes a new algorithmic method of automated marker placement used to classify six facial expressions: happiness, sadness, anger, fear, disgust, and surprise. Emotional facial expressions were captured using a webcam, while the proposed algorithm placed a set of eight virtual markers on each subject's face. Facial feature extraction methods, including marker distance (distance between each marker to the center of the face) and change in marker distance (change in distance between the original and new marker positions), were used to extract three statistical features (mean, variance, and root mean square) from the real-time video sequence. The initial position of each marker was subjected to the optical flow algorithm for marker tracking with each emotional facial expression. Finally, the extracted statistical features were mapped into corresponding emotional facial expressions using two simple non-linear classifiers, K-nearest neighbor and probabilistic neural network. The results indicate that the proposed automated marker placement algorithm effectively placed eight virtual markers on each subject's face and gave a maximum mean emotion classification rate of 96.94% using the probabilistic neural network.

  5. Facial disability index (FDI): Adaptation to Spanish, reliability and validity

    PubMed Central

    Gonzalez-Cardero, Eduardo; Cayuela, Aurelio; Acosta-Feria, Manuel; Gutierrez-Perez, Jose-Luis

    2012-01-01

    Objectives: To adapt to Spanish the facial disability index (FDI) described by VanSwearingen and Brach in 1995 and to assess its reliability and validity in patients with facial nerve paresis after parotidectomy. Study Design: The present study was conducted in two different stages: a) cross-cultural adaptation of the questionnaire and b) cross-sectional study of a control group of 79 Spanish-speaking patients who suffered facial paresis after superficial parotidectomy with facial nerve preservation. The cross-cultural adaptation process comprised the following stages: (I) initial translation, (II) synthesis of the translated document, (III) retro-translation, (IV) review by a board of experts, (V) pilot study of the pre-final draft and (VI) analysis of the pilot study and final draft. Results: The reliability and internal consistency (Cronbach's alpha coefficient) was 0.83 for the complete FDI scale and 0.77 and 0.82 for the physical and social well-being subscales. The analysis of the factorial validity of the main components of the adapted FDI yielded results similar to the original questionnaire. Bivariate correlations between the FDI and the House-Brackmann scale were positive. The variance percentage was calculated for all FDI components. Conclusions: The FDI questionnaire is a specific instrument for assessing facial neuromuscular dysfunction and is a useful tool for determining quality of life in patients with facial nerve paralysis. The Spanish-adapted FDI is equivalent to the original questionnaire and shows similar reliability and validity. The proven reproducibility, reliability and validity of this questionnaire make it a useful additional tool for evaluating the impact of facial nerve paralysis in Spanish-speaking patients. Key words: Parotidectomy, facial nerve paralysis, facial disability. PMID:22926474

  6. Automatic recognition of emotions from facial expressions

    NASA Astrophysics Data System (ADS)

    Xue, Henry; Gertner, Izidor

    2014-06-01

    In the human-computer interaction (HCI) process it is desirable to have an artificially intelligent (AI) system that can identify and categorize human emotions from facial expressions. Such systems can be used in security and in the entertainment industry, and also to study visual perception, social interactions and disorders (e.g. schizophrenia and autism). In this work we survey and compare the performance of different feature extraction algorithms and classification schemes. We introduce a faster feature extraction method that resizes and applies a set of filters to the data images without sacrificing accuracy. In addition, we extended SVM to multiple dimensions while retaining its high accuracy rate. The algorithms were tested using the Japanese Female Facial Expression (JAFFE) Database and the Database of Faces (AT&T Faces).

  7. Assessment of facial golden proportions among young Japanese women.

    PubMed

    Mizumoto, Yasushi; Deguchi, Toshio; Fong, Kelvin W C

    2009-08-01

    Facial proportions are of interest in orthodontics. The null hypothesis is that there is no difference in the golden proportions of soft-tissue facial balance between Japanese and white women. Facial proportions were assessed by examining photographs of 3 groups of Asian women: group 1, 30 young adult patients with a skeletal Class 1 occlusion; group 2, 30 models; and group 3, 14 popular actresses. Photographic prints or slides were digitized for image analysis. Group 1 subjects had standardized photos taken as part of their treatment. Photos of the subjects in groups 2 and 3 were collected from magazines and other sources and were of varying sizes; therefore, the output image size was not considered. The range of measurement errors was 0.17% to 1.16%. ANOVA was selected because the data set was normally distributed with homogeneous variances. The subjects in the 3 groups showed good total facial proportions. The proportions of the face-height components in group 1 were similar to the golden proportion, indicating a longer lower facial height and a shorter nose. Group 2 differed from the golden proportion, with a short lower facial height. Group 3 had golden proportions in all 7 measurements. The proportion of the face width deviated from the golden proportion, indicating a small mouth or wide-set eyes in groups 1 and 2. The null hypothesis held for the group 3 actresses in the facial-height components. Some measurements in groups 1 and 2 deviated from the golden proportion (ratio).

  8. Computerized measurement of facial expression of emotions in schizophrenia.

    PubMed

    Alvino, Christopher; Kohler, Christian; Barrett, Frederick; Gur, Raquel E; Gur, Ruben C; Verma, Ragini

    2007-07-30

    Deficits in the ability to express emotions characterize several neuropsychiatric disorders and are a hallmark of schizophrenia, and there is a need for a method of quantifying expression, which is currently done by clinical ratings. This paper presents the development and validation of a computational framework for quantifying emotional expression differences between patients with schizophrenia and healthy controls. Each face is modeled as a combination of elastic regions, and expression changes are modeled as a deformation between a neutral face and an expressive face. Functions of these deformations, known as the regional volumetric difference (RVD) functions, form distinctive quantitative profiles of expressions. Employing pattern classification techniques, we designed expression classifiers for the four universal emotions of happiness, sadness, anger and fear by training on RVD functions of expression changes. The classifiers were cross-validated and then applied to facial expression images of patients with schizophrenia and healthy controls. The classification score for each image reflects the extent to which the expressed emotion matches the intended emotion. Group-wise statistical analysis revealed this score to be significantly different between healthy controls and patients, especially in the case of anger. The score also correlated with the clinical severity of flat affect. These results encourage the use of such deformation-based expression quantification measures for research in clinical applications that require the automated measurement of facial affect.

  9. Spatially generalizable representations of facial expressions: Decoding across partial face samples.

    PubMed

    Greening, Steven G; Mitchell, Derek G V; Smith, Fraser W

    2018-04-01

    A network of cortical and sub-cortical regions is known to be important in the processing of facial expression. However, to date no study has investigated whether the representations of facial expressions present in this network permit generalization across independent samples of face information (e.g., the eye region vs the mouth region). We presented participants with partial face samples of five expression categories in a rapid event-related fMRI experiment. We reveal a network of face-sensitive regions that contain information about facial expression categories regardless of which part of the face is presented. We further reveal that the neural information present in a subset of these regions, namely dorsal prefrontal cortex (dPFC), superior temporal sulcus (STS), lateral occipital and ventral temporal cortex, and even early visual cortex, enables reliable generalization across independent visual inputs (faces depicting 'eyes only' vs 'eyes removed'). Furthermore, classification performance was correlated with behavioral performance in the STS and dPFC. Our results demonstrate that both higher-level (e.g., STS, dPFC) and lower-level cortical regions contain information useful for facial expression decoding that goes beyond the visual information presented, and implicate a key role for contextual mechanisms such as cortical feedback in facial expression perception under challenging conditions of visual occlusion. Copyright © 2017 Elsevier Ltd. All rights reserved.
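
    The generalization test described here amounts to cross-decoding: train a classifier on trials of one face sample and test it on the independent one. The sketch below uses synthetic stand-ins for the ROI voxel patterns and scikit-learn's linear SVM as one reasonable classifier choice, not necessarily the one used in the study.

        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import LinearSVC

        rng = np.random.default_rng(1)
        # Synthetic ROI voxel patterns: 100 trials x 200 voxels per condition,
        # with expression labels 0-4 for the five categories.
        X_eyes_only = rng.normal(size=(100, 200))
        y_eyes_only = rng.integers(0, 5, size=100)
        X_eyes_removed = rng.normal(size=(100, 200))
        y_eyes_removed = rng.integers(0, 5, size=100)

        # Cross-decoding: fit on one visual input, test on the independent one.
        # Above-chance accuracy implies an expression representation that
        # generalizes across non-overlapping parts of the face.
        clf = make_pipeline(StandardScaler(), LinearSVC())
        clf.fit(X_eyes_only, y_eyes_only)
        print("generalization accuracy:", clf.score(X_eyes_removed, y_eyes_removed))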

  10. Joint Facial Action Unit Detection and Feature Fusion: A Multi-conditional Learning Approach.

    PubMed

    Eleftheriadis, Stefanos; Rudovic, Ognjen; Pantic, Maja

    2016-10-05

    Automated analysis of facial expressions can benefit many domains, from marketing to the clinical diagnosis of neurodevelopmental disorders. Facial expressions are typically encoded as a combination of facial muscle activations, i.e., action units. Depending on context, these action units co-occur in specific patterns, and rarely in isolation. Yet, most existing methods for automatic action unit detection fail to exploit dependencies among them and the corresponding facial features. To address this, we propose a novel multi-conditional latent variable model for simultaneous fusion of facial features and joint action unit detection. Specifically, the proposed model performs feature fusion in a generative fashion via a low-dimensional shared subspace, while simultaneously performing action unit detection using a discriminative classification approach. We show that by combining the merits of both approaches, the proposed methodology outperforms existing purely discriminative/generative methods on the target task. To reduce the number of parameters and avoid overfitting, a novel Bayesian learning approach based on Monte Carlo sampling is proposed to integrate out the shared subspace. We validate the proposed method on posed and spontaneous data from three publicly available datasets (CK+, DISFA and Shoulder-pain), and show that both feature fusion and joint learning of action units lead to improved performance compared to state-of-the-art methods for the task.

  11. Home-use TriPollar RF device for facial skin tightening: Clinical study results.

    PubMed

    Beilin, Ghislaine

    2011-04-01

    Professional, non-invasive, anti-aging treatments based on radio-frequency (RF) technologies are popular for skin tightening and improvement of wrinkles. A new home-use RF device for facial treatments has recently been developed based on TriPollar™ technology. To evaluate the STOP™ home-use device for facial skin tightening using objective and subjective methods. Twenty-three female subjects used the STOP at home for a period of 6 weeks followed by a maintenance period of 6 weeks. Facial skin characteristics were objectively evaluated at baseline and at the end of the treatment and maintenance periods using a three-dimensional imaging system. Additionally, facial wrinkles were classified and subjects scored their satisfaction and sensations. Following STOP treatment, a statistically significant reduction of perioral and periorbital wrinkles was achieved in 90% and 95% of the patients, respectively, with an average periorbital wrinkle reduction of 41%. This objective result correlated well with the periorbital wrinkle classification result of 40%. All patients were satisfied to extremely satisfied with the treatments and all reported moderate to excellent visible results. The clinical study demonstrated the safety and efficacy of the STOP home-use device for facial skin tightening. Treatment can maintain a tighter and suppler skin with improvement of fine lines and wrinkles.

  12. Maxillectomy defects: a suggested classification scheme.

    PubMed

    Akinmoladun, V I; Dosumu, O O; Olusanya, A A; Ikusika, O F

    2013-06-01

    The term "maxillectomy" has been used to describe a variety of surgical procedures for a spectrum of diseases involving a diverse anatomical site. Hence, classifications of maxillectomy defects have often made communication difficult. This article highlights this problem, emphasises the need for a uniform system of classification and suggests a classification system which is simple and comprehensive. Articles related to this subject, especially those with specified classifications of maxillary surgical defects were sourced from the internet through Google, Scopus and PubMed using the search terms maxillectomy defects classification. A manual search through available literature was also done. The review of the materials revealed many classifications and modifications of classifications from the descriptive, reconstructive and prosthodontic perspectives. No globally acceptable classification exists among practitioners involved in the management of diseases in the mid-facial region. There were over 14 classifications of maxillary defects found in the English literature. Attempts made to address the inadequacies of previous classifications have tended to result in cumbersome and relatively complex classifications. A single classification that is based on both surgical and prosthetic considerations is most desirable and is hereby proposed.

  13. SNR-adaptive stream weighting for audio-MES ASR.

    PubMed

    Lee, Ki-Seung

    2008-08-01

    Myoelectric signals (MESs) from the speaker's mouth region have been shown to improve the noise robustness of automatic speech recognizers (ASRs), thus promising to extend their usability in implementing noise-robust ASR. In the recognition system presented herein, extracted audio and facial MES features were integrated by a decision fusion method, where the likelihood score of the audio-MES observation vector was given by a linear combination of the class-conditional observation log-likelihoods of two classifiers, using appropriate weights. We developed a weighting process adaptive to SNR. The main objective of the paper is to determine the optimal SNR classification boundaries and to construct a set of optimum stream weights for each SNR class. These two parameters were determined by a method based on a maximum mutual information criterion. Acoustic and facial MES data were collected from five subjects, using a 60-word vocabulary. Four types of acoustic noise, including babble, car, aircraft, and white noise, were acoustically added to clean speech signals with SNRs ranging from -14 to 31 dB. The classification accuracy of the audio-only ASR was as low as 25.5%, whereas the classification accuracy of the MES-only ASR was 85.2%. The classification accuracy could be further improved by employing the proposed audio-MES weighting method, reaching as high as 89.4% in the case of babble noise. Similar results were also found for the other types of noise.
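
    The decision-fusion rule reduces to a weighted sum of the two classifiers' log-likelihoods, with the weight selected by SNR class. The boundaries and weights below are illustrative assumptions; in the paper both are optimized under a maximum mutual information criterion.

        import numpy as np

        SNR_BOUNDS = [0.0, 15.0]          # dB boundaries giving three SNR classes
        AUDIO_WEIGHTS = [0.2, 0.5, 0.8]   # audio stream weight per class (noisy -> clean)

        def fused_score(ll_audio, ll_mes, snr_db):
            # Decision fusion: linear combination of the class-conditional
            # log-likelihoods from the audio and facial-MES classifiers.
            w = AUDIO_WEIGHTS[int(np.searchsorted(SNR_BOUNDS, snr_db))]
            return w * ll_audio + (1.0 - w) * ll_mes

        # Hypothetical per-word log-likelihoods from the two recognizers:
        ll_audio = np.array([-120.0, -95.0, -110.0])
        ll_mes = np.array([-80.0, -100.0, -70.0])
        print("recognized word:", int(np.argmax(fused_score(ll_audio, ll_mes, -5.0))))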

  14. Neural correlates of mirth and laughter: a direct electrical cortical stimulation study.

    PubMed

    Yamao, Yukihiro; Matsumoto, Riki; Kunieda, Takeharu; Shibata, Sumiya; Shimotake, Akihiro; Kikuchi, Takayuki; Satow, Takeshi; Mikuni, Nobuhiro; Fukuyama, Hidenao; Ikeda, Akio; Miyamoto, Susumu

    2015-05-01

    Laughter consists of both motor and emotional aspects. The emotional component, known as mirth, is usually associated with the motor component, namely, bilateral facial movements. Previous electrical cortical stimulation (ES) studies revealed that mirth was associated with the basal temporal cortex, inferior frontal cortex, and medial frontal cortex. Functional neuroimaging implicated a role for the left inferior frontal and bilateral temporal cortices in humor processing. However, the neural origins and pathways linking mirth with facial movements are still unclear. We hereby report two cases with temporal lobe epilepsy undergoing subdural electrode implantation in whom ES of the left basal temporal cortex elicited both mirth and laughter-related facial muscle movements. In one case with normal hippocampus, high-frequency ES consistently caused contralateral facial movement, followed by bilateral facial movements with mirth. In contrast, in another case with hippocampal sclerosis (HS), ES elicited only mirth at low intensity and short duration, and eventually laughter at higher intensity and longer duration. In both cases, the basal temporal language area (BTLA) was located within or adjacent to the cortex where ES produced mirth. In conclusion, the present direct ES study demonstrated that 1) mirth had a close relationship with language function, 2) intact mesial temporal structures were actively engaged in the beginning of facial movements associated with mirth, and 3) these emotion-related facial movements had contralateral dominance. Copyright © 2014 Elsevier Ltd. All rights reserved.

  15. A novel BCI based on ERP components sensitive to configural processing of human faces

    NASA Astrophysics Data System (ADS)

    Zhang, Yu; Zhao, Qibin; Jing, Jin; Wang, Xingyu; Cichocki, Andrzej

    2012-04-01

    This study introduces a novel brain-computer interface (BCI) based on an oddball paradigm using stimuli of facial images with loss of configural face information (e.g., inversion of the face). To the best of our knowledge, the configural processing of human faces has been widely studied in cognitive neuroscience but has not previously been applied to BCI. Our experiments confirm that the face-sensitive event-related potential (ERP) components N170 and vertex positive potential (VPP) reflect early structural encoding of faces and can be modulated by the configural processing of faces. With the proposed novel paradigm, we investigate the effects of the ERP components N170, VPP and P300 on target detection for BCI. An eight-class BCI platform is developed to analyze ERPs and evaluate target detection performance using linear discriminant analysis without complicated feature extraction processing. The online classification accuracy of 88.7% and information transfer rate of 38.7 bits min-1 using stimuli of inverted faces with only a single trial suggest that the proposed paradigm based on the configural processing of faces is very promising for visual stimuli-driven BCI applications.

  16. A novel BCI based on ERP components sensitive to configural processing of human faces.

    PubMed

    Zhang, Yu; Zhao, Qibin; Jin, Jing; Wang, Xingyu; Cichocki, Andrzej

    2012-04-01

    This study introduces a novel brain-computer interface (BCI) based on an oddball paradigm using stimuli of facial images with loss of configural face information (e.g., inversion of the face). To the best of our knowledge, the configural processing of human faces has been widely studied in cognitive neuroscience but has not previously been applied to BCI. Our experiments confirm that the face-sensitive event-related potential (ERP) components N170 and vertex positive potential (VPP) reflect early structural encoding of faces and can be modulated by the configural processing of faces. With the proposed novel paradigm, we investigate the effects of the ERP components N170, VPP and P300 on target detection for BCI. An eight-class BCI platform is developed to analyze ERPs and evaluate target detection performance using linear discriminant analysis without complicated feature extraction processing. The online classification accuracy of 88.7% and information transfer rate of 38.7 bits min-1 using stimuli of inverted faces with only a single trial suggest that the proposed paradigm based on the configural processing of faces is very promising for visual stimuli-driven BCI applications.
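
    Single-trial target detection with linear discriminant analysis on raw epoch amplitudes, as described, can be sketched in a few lines. The arrays below are synthetic placeholders for epoched EEG; this shows the shape of the pipeline, not the authors' code.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        rng = np.random.default_rng(2)
        # Synthetic epoched EEG: 400 single trials, 16 channels x 50 samples
        # flattened, each time-locked to one of the eight face stimuli.
        X = rng.normal(size=(400, 16 * 50))
        y = rng.integers(0, 8, size=400)   # which target was attended

        lda = LinearDiscriminantAnalysis()
        lda.fit(X[:300], y[:300])
        print("single-trial accuracy:", lda.score(X[300:], y[300:]))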

  17. Brain responses to facial attractiveness induced by facial proportions: evidence from an fMRI study

    PubMed Central

    Shen, Hui; Chau, Desmond K. P.; Su, Jianpo; Zeng, Ling-Li; Jiang, Weixiong; He, Jufang; Fan, Jintu; Hu, Dewen

    2016-01-01

    Brain responses to facial attractiveness induced by facial proportions were investigated using functional magnetic resonance imaging (fMRI) in 41 young adults (22 males and 19 females). The subjects underwent fMRI while they were presented with computer-generated, yet realistic, face images as stimuli, which had varying facial proportions but the same neutral facial expression, bald head and skin tone. Statistical parametric mapping with parametric modulation was used to explore the brain regions whose response was modulated by facial attractiveness ratings (ARs). The results showed significant linear effects of the ARs in the caudate nucleus and the orbitofrontal cortex for all of the subjects, and a non-linear response profile in the right amygdala for the male subjects only. Furthermore, canonical correlation analysis was used to learn the facial ratios that best correlated with facial attractiveness. A regression model on the fMRI-derived facial ratio components demonstrated a strong linear relationship between the visually assessed mean ARs and the predicted ARs. Overall, this study provides, for the first time, direct neurophysiological evidence of the effects of facial ratios on facial attractiveness and suggests that there are notable gender differences in perceiving facial attractiveness as induced by facial proportions. PMID:27779211

  18. Brain responses to facial attractiveness induced by facial proportions: evidence from an fMRI study.

    PubMed

    Shen, Hui; Chau, Desmond K P; Su, Jianpo; Zeng, Ling-Li; Jiang, Weixiong; He, Jufang; Fan, Jintu; Hu, Dewen

    2016-10-25

    Brain responses to facial attractiveness induced by facial proportions were investigated using functional magnetic resonance imaging (fMRI) in 41 young adults (22 males and 19 females). The subjects underwent fMRI while they were presented with computer-generated, yet realistic, face images as stimuli, which had varying facial proportions but the same neutral facial expression, bald head and skin tone. Statistical parametric mapping with parametric modulation was used to explore the brain regions whose response was modulated by facial attractiveness ratings (ARs). The results showed significant linear effects of the ARs in the caudate nucleus and the orbitofrontal cortex for all of the subjects, and a non-linear response profile in the right amygdala for the male subjects only. Furthermore, canonical correlation analysis was used to learn the facial ratios that best correlated with facial attractiveness. A regression model on the fMRI-derived facial ratio components demonstrated a strong linear relationship between the visually assessed mean ARs and the predicted ARs. Overall, this study provides, for the first time, direct neurophysiological evidence of the effects of facial ratios on facial attractiveness and suggests that there are notable gender differences in perceiving facial attractiveness as induced by facial proportions.
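
    The analysis pipeline, canonical correlation to find the most attractiveness-relevant ratio combination followed by a regression on the derived component, can be sketched as follows with synthetic ratios and ratings; the component count and preprocessing are assumptions.

        import numpy as np
        from sklearn.cross_decomposition import CCA
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(3)
        ratios = rng.normal(size=(60, 10))   # 60 faces, 10 facial ratios
        ars = rng.normal(size=(60, 1))       # mean attractiveness rating per face

        # CCA finds the combination of ratios most correlated with the ratings.
        cca = CCA(n_components=1)
        ratio_scores, _ = cca.fit_transform(ratios, ars)

        # Regression of the visually assessed ARs on the derived ratio component.
        model = LinearRegression().fit(ratio_scores, ars.ravel())
        print("R^2 of the predicted ARs:", model.score(ratio_scores, ars.ravel()))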

  19. Linear measurements of the neurocranium are better indicators of population differences than those of the facial skeleton: comparative study of 1,961 skulls.

    PubMed

    Holló, Gábor; Szathmáry, László; Marcsik, Antónia; Barta, Zoltán

    2010-02-01

    The aim of this study was to identify potential differences between two cranial regions used to differentiate human populations. We compared the neurocranium and the facial skeleton using skulls from the Great Hungarian Plain. The skulls date to the 1st-11th centuries, a long span of time that encompasses seven archaeological periods. We analyzed six neurocranial and seven facial measurements. The number of variables was reduced using principal components analysis. Linear mixed-effects models were fitted to the principal components of each archaeological period, and the models were then compared using multiple pairwise tests. The neurocranium showed significant differences in seven cases between nonsubsequent periods and in one case between two subsequent populations. For the facial skeleton, no significant results were found. Our results, which are also compared to previous craniofacial heritability estimates, suggest that the neurocranium is a more conservative region and that population differences can be demonstrated better in the neurocranium than in the facial skeleton.
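
    A rough sketch of the variable-reduction and mixed-model step, under assumed data and an assumed random-effects grouping (the abstract does not specify the models' exact structure).

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(4)
        cols = [f"m{i}" for i in range(1, 7)]            # six neurocranial measures
        df = pd.DataFrame(rng.normal(size=(200, 6)), columns=cols)
        df["period"] = rng.integers(1, 8, size=200)      # seven archaeological periods
        df["site"] = rng.integers(1, 21, size=200)       # assumed grouping factor

        # Reduce the measurements to a principal component, then fit a linear
        # mixed-effects model with period as fixed effect and site as random.
        df["PC1"] = PCA(n_components=1).fit_transform(df[cols])[:, 0]
        fit = smf.mixedlm("PC1 ~ C(period)", df, groups=df["site"]).fit()
        print(fit.summary())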

  20. Evaluation of facial attractiveness from end-of-treatment facial photographs.

    PubMed

    Shafiee, Roxanne; Korn, Edward L; Pearson, Helmer; Boyd, Robert L; Baumrind, Sheldon

    2008-04-01

    Orthodontists typically make judgments of facial attractiveness by examining groupings of profile, full-face, and smiling photographs considered together as a "triplet." The primary objective of this study was to determine the relative contributions of the 3 photographs-each considered separately-to the overall judgment a clinician forms by examining the combination of the 3. End-of-treatment triplet orthodontic photographs of 45 randomly selected orthodontic patients were duplicated. Copies of the profile, full-face, and smiling images were generated, and the images were separated and then pooled by image type for all subjects. Ten judges ranked the 45 photographs of each image type for facial attractiveness in groups of 9 to 12, from "most attractive" to "least attractive." Each judge also ranked the triplet groupings for the same 45 subjects. The mean attractiveness rankings for each type of photograph were then correlated with the mean rankings of each other and the triplets. The rankings of the 3 image types correlated highly with each other and the rankings of the triplets (P <.0001). The rankings of the smiling photographs were most predictive of the rankings of the triplets (r = 0.93); those of the profile photographs were the least predictive (r = 0.76). The difference between these correlations was highly statistically significant (P = .0003). It was also possible to test the extent to which the judges' rankings were influenced by sex, original Angle classification, and extraction status of each patient. No statistically significant preferences were found for sex or Angle classification, and only 1 marginally significant preference was found for extraction pattern. Clinician judges demonstrated a high level of agreement in ranking the facial attractiveness of profile, full-face, and smiling photographs of a group of orthodontically treated patients whose actual differences in physical dimensions were relatively small. The judges' rankings of the smiling photographs were significantly better predictors of their rankings of the triplet of each patient than were their rankings of the profile photographs.

  1. Recognition of children on age-different images: Facial morphology and age-stable features.

    PubMed

    Caplova, Zuzana; Compassi, Valentina; Giancola, Silvio; Gibelli, Daniele M; Obertová, Zuzana; Poppa, Pasquale; Sala, Remo; Sforza, Chiarella; Cattaneo, Cristina

    2017-07-01

    The situation of missing children is one of the most emotional social issues worldwide. The search for and identification of missing children is often hampered, among other factors, by the fact that the facial morphology of long-term missing children changes as they grow. Nowadays, the wide coverage provided by surveillance systems potentially supplies image material for comparison with images of missing children that may facilitate identification. The aim of the study was to identify whether facial features are stable in time and can be utilized for facial recognition by comparing facial images of children at different ages, as well as to test the possible use of moles in recognition. The study was divided into two phases: (1) morphological classification of facial features using an Anthropological Atlas; (2) an algorithm developed in MATLAB® R2014b for assessing the use of moles as age-stable features. The assessment of facial features using Anthropological Atlases showed high mismatch percentages among observers. On average, the mismatch percentages were lower for features describing shape than for those describing size. The nose tip cleft and the chin dimple showed the best agreement between observers regarding both categorization and stability over time. Using the position of moles as a reference point for recognition of the same person in age-different images seems to be a useful method in terms of objectivity, and it can be concluded that moles represent age-stable facial features that may be considered for preliminary recognition. Copyright © 2017 The Chartered Society of Forensic Sciences. Published by Elsevier B.V. All rights reserved.

  2. Age and gender estimation using Region-SIFT and multi-layered SVM

    NASA Astrophysics Data System (ADS)

    Kim, Hyunduk; Lee, Sang-Heon; Sohn, Myoung-Kyu; Hwang, Byunghun

    2018-04-01

    In this paper, we propose an age and gender estimation framework using the region-SIFT feature and a multi-layered SVM classifier. The suggested framework entails three processes. The first step is landmark-based face alignment. The second step is feature extraction. In this step, we introduce the region-SIFT feature extraction method based on facial landmarks. First, we define sub-regions of the face. We then extract SIFT features from each sub-region. In order to reduce the dimensionality of the features, we employ Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA). Finally, we classify age and gender using multi-layered Support Vector Machines (SVMs) for efficient classification. Rather than performing gender estimation and age estimation independently, the multi-layered SVM can improve the classification rate by constructing a classifier that estimates age according to gender. Moreover, we collected a dataset of face images, called DGIST_C, from the internet. A performance evaluation of the proposed method was carried out with the FERET database, the CACD database, and the DGIST_C database. The experimental results demonstrate that the proposed approach performs age and gender estimation very efficiently and accurately.
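
    The multi-layered SVM idea, a gender classifier that routes samples to gender-specific age classifiers, can be sketched as below; the features are synthetic stand-ins for the PCA/LDA-reduced region-SIFT vectors.

        import numpy as np
        from sklearn.svm import SVC

        class MultiLayeredSVM:
            # First layer predicts gender; the second layer holds one
            # age-group classifier per gender.
            def __init__(self):
                self.gender_clf = SVC()
                self.age_clf = {0: SVC(), 1: SVC()}

            def fit(self, X, gender, age_group):
                self.gender_clf.fit(X, gender)
                for g in (0, 1):
                    self.age_clf[g].fit(X[gender == g], age_group[gender == g])
                return self

            def predict(self, X):
                g_pred = self.gender_clf.predict(X)
                y = np.empty(len(X), dtype=int)
                for g in (0, 1):
                    mask = g_pred == g
                    if mask.any():
                        y[mask] = self.age_clf[g].predict(X[mask])
                return y

        rng = np.random.default_rng(5)
        X = rng.normal(size=(300, 40))        # stand-in for reduced region-SIFT
        gender = rng.integers(0, 2, size=300)
        age_group = rng.integers(0, 5, size=300)
        print(MultiLayeredSVM().fit(X, gender, age_group).predict(X[:5]))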

  3. A Method of Face Detection with Bayesian Probability

    NASA Astrophysics Data System (ADS)

    Sarker, Goutam

    2010-10-01

    The objective of face detection is to identify all images which contain a face, irrespective of orientation, illumination conditions, etc. This is a hard problem, because faces are highly variable in size, shape, lighting conditions, and so on. Many methods have been designed and developed to detect faces in a single image. The present paper is based on an `Appearance Based Method' which relies on learning the facial and non-facial features from image examples. This, in turn, is based on statistical analysis of examples and counter-examples of facial images and employs a Bayesian conditional classification rule to estimate the probability that a face (or non-face) lies within an image frame. The detection rate of the present system is very high, and thereby the number of false positive and false negative detections is substantially low.
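
    A minimal version of the Bayesian conditional classification rule, assuming naive (diagonal) Gaussian class-conditional densities over some window features; the feature model and prior are illustrative, not the paper's exact choices.

        import numpy as np

        rng = np.random.default_rng(6)
        faces = rng.normal(1.0, 1.0, size=(500, 20))       # face training features
        non_faces = rng.normal(0.0, 1.0, size=(500, 20))   # counter-examples

        def gaussian_loglik(X, mean, var):
            return -0.5 * np.sum(np.log(2 * np.pi * var) + (X - mean) ** 2 / var, axis=1)

        prior_face = 0.01   # faces are rare among candidate windows
        mu_f, var_f = faces.mean(axis=0), faces.var(axis=0)
        mu_n, var_n = non_faces.mean(axis=0), non_faces.var(axis=0)

        def posterior_face(X):
            # Bayes' rule: P(face | x) from the two class-conditional densities.
            lf = gaussian_loglik(X, mu_f, var_f) + np.log(prior_face)
            ln = gaussian_loglik(X, mu_n, var_n) + np.log(1.0 - prior_face)
            m = np.maximum(lf, ln)   # log-sum-exp stabilization
            return np.exp(lf - m) / (np.exp(lf - m) + np.exp(ln - m))

        print(posterior_face(rng.normal(1.0, 1.0, size=(3, 20))))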

  4. Pattern learning with deep neural networks in EMG-based speech recognition.

    PubMed

    Wand, Michael; Schultz, Tanja

    2014-01-01

    We report on classification of phones and phonetic features from facial electromyographic (EMG) data, within the context of our EMG-based Silent Speech interface. In this paper we show that a Deep Neural Network can be used to perform this classification task, yielding a significant improvement over conventional Gaussian Mixture models. Our central contribution is the visualization of patterns which are learned by the neural network. With increasing network depth, these patterns represent more and more intricate electromyographic activity.
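
    A compact stand-in for the DNN frame classifier, using scikit-learn's MLPClassifier on synthetic EMG features; the paper's network topology and EMG feature extraction will differ.

        import numpy as np
        from sklearn.neural_network import MLPClassifier

        rng = np.random.default_rng(7)
        X = rng.normal(size=(2000, 150))      # framed EMG feature vectors
        y = rng.integers(0, 45, size=2000)    # phone class label per frame

        # A small deep network in place of a Gaussian mixture frame classifier.
        dnn = MLPClassifier(hidden_layer_sizes=(256, 256, 256),
                            activation="relu", max_iter=50)
        dnn.fit(X[:1600], y[:1600])
        print("frame accuracy:", dnn.score(X[1600:], y[1600:]))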

  5. Classification of parotidectomies: a proposal of the European Salivary Gland Society.

    PubMed

    Quer, M; Guntinas-Lichius, O; Marchal, F; Vander Poorten, V; Chevalier, D; León, X; Eisele, D; Dulguerov, P

    2016-10-01

    The objective of this study is to provide a comprehensive classification system for parotidectomy operations. Data sources include Medline publications, the authors' experience, and a consensus round table at the Third European Salivary Gland Society (ESGS) Meeting. The Medline database was searched with the terms "parotidectomy" and "definition". The various definitions of parotidectomy procedures and parotid gland subdivisions were extracted, previous classification systems were re-examined, and a new classification was proposed by consensus. The ESGS proposes to subdivide the parotid parenchyma into five levels: I (lateral superior), II (lateral inferior), III (deep inferior), IV (deep superior), V (accessory). A new classification is proposed in which the type of resection is divided into formal parotidectomy with facial nerve dissection and extracapsular dissection. Parotidectomies are further classified according to the levels removed, as well as the extra-parotid structures ablated.

  6. Accessory oral cavity associated with duplication of the tongue and the mandible in a newborn: a rare case of Diprosopus. Multi-row detector computed tomography diagnostic role.

    PubMed

    Morabito, Rosa; Colonna, Michele R; Mormina, Enricomaria; Stagno d'Alcontres, Ferdinando; Salpietro, Vincenzo; Blandino, Alfredo; Longo, Marcello; Granata, Francesca

    2014-12-01

    Craniofacial duplication is a very rare malformation. The phenotype comprises a wide spectrum, ranging from partial duplication of a few facial structures to complete dicephalus. We report the case of a newborn with an accessory oral cavity associated with duplication of the tongue and the mandible, diagnosed by multi-row detector computed tomography a few days after her birth. Our case of partial craniofacial duplication can be considered as Type II of the Gorlin classification or as an intermediate form between Type I and Type II of the Sun classification. Our experience demonstrates that CT scanning, using appropriate reconstruction algorithms, permits a detailed evaluation of the different structures in an anatomical region. Multi-row CT is also the most accurate diagnostic procedure for the pre-surgical evaluation of craniofacial malformations. Copyright © 2014 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.

  7. Novel dynamic Bayesian networks for facial action element recognition and understanding

    NASA Astrophysics Data System (ADS)

    Zhao, Wei; Park, Jeong-Seon; Choi, Dong-You; Lee, Sang-Woong

    2011-12-01

    In daily life, language is an important tool of communication between people. Besides language, facial action can also provide a great amount of information. Therefore, facial action recognition has become a popular research topic in the field of human-computer interaction (HCI). However, facial action recognition is quite a challenging task due to its complexity. In a literal sense, there are thousands of facial muscular movements, many of which have very subtle differences. Moreover, muscular movements always occur simultaneously when the pose is changed. To address this problem, we first build a fully automatic facial points detection system based on a local Gabor filter bank and principal component analysis. Then, novel dynamic Bayesian networks are proposed to perform facial action recognition using the junction tree algorithm over a limited number of feature points. In order to evaluate the proposed method, we have used the Korean face database for model training. For testing, we used the CUbiC FacePix, facial expressions and emotion database, Japanese female facial expression database, and our own database. Our experimental results clearly demonstrate the feasibility of the proposed approach.

  8. Detection of Terrorist Preparations by an Artificial Intelligence Expert System Employing Fuzzy Signal Detection Theory

    DTIC Science & Technology

    2004-10-25

    FUSEDOT does not require facial recognition, or video surveillance of public areas, both of which are apparently a component of TIA ([26], pp. ... does not use fuzzy signal detection. Involves facial recognition and video surveillance of public areas. Involves monitoring the content of voice ... fuzzy signal detection, which TIA does not. Second, FUSEDOT would be easier to develop, because it does not require the development of facial ...

  9. Tryptophan depletion decreases the recognition of fear in female volunteers.

    PubMed

    Harmer, C J; Rogers, R D; Tunbridge, E; Cowen, P J; Goodwin, G M

    2003-06-01

    Serotonergic processes have been implicated in the modulation of fear conditioning in humans, postulated to occur at the level of the amygdala. The processing of other fear-relevant cues, such as facial expressions, has also been associated with amygdala function, but an effect of serotonin depletion on these processes has not been assessed. The present study investigated the effects of reducing serotonin function, using acute tryptophan depletion, on the recognition of basic facial expressions of emotions in healthy male and female volunteers. A double-blind between-groups design was used, with volunteers being randomly allocated to receive an amino acid drink specifically lacking tryptophan or a control mixture containing a balanced mixture of these amino acids. Participants were given a facial expression recognition task 5 h after drink administration. This task featured examples of six basic emotions (fear, anger, disgust, surprise, sadness and happiness) that had been morphed between each full emotion and neutral in 10% steps. As a control, volunteers were given a famous face classification task matched in terms of response selection and difficulty level. Tryptophan depletion significantly impaired the recognition of fearful facial expressions in female, but not male, volunteers. This was specific since recognition of other basic emotions was comparable in the two groups. There was also no effect of tryptophan depletion on the classification of famous faces or on subjective state ratings of mood or anxiety. These results confirm a role for serotonin in the processing of fear related cues, and in line with previous findings also suggest greater effects of tryptophan depletion in female volunteers. Although acute tryptophan depletion does not typically affect mood in healthy subjects, the present results suggest that subtle changes in the processing of emotional material may occur with this manipulation of serotonin function.

  10. Values of a Patient and Observer Scar Assessment Scale to Evaluate the Facial Skin Graft Scar.

    PubMed

    Chae, Jin Kyung; Kim, Jeong Hee; Kim, Eun Jung; Park, Kun

    2016-10-01

    The patient and observer scar assessment scale (POSAS) recently emerged as a promising method that reflects both the observer's and the patient's opinions in evaluating scars. This tool has been shown to be consistent and reliable in burn scar assessment, but it has not been tested in the setting of skin graft scars in skin cancer patients. To evaluate facial skin graft scars with the POSAS and to compare it with objective scar assessment tools. Twenty-three patients, who were diagnosed with facial cutaneous malignancy and received a skin graft after Mohs micrographic surgery, were recruited. Observer assessment was performed by three independent raters using the observer component of the POSAS and the Vancouver scar scale (VSS). Patient self-assessment was performed using the patient component of the POSAS. To quantify scar color and scar thickness more objectively, spectrophotometry and ultrasonography were applied. Inter-observer reliability was substantial with both the VSS and the observer component of the POSAS (average-measure intraclass correlation coefficients, 0.76 and 0.80, respectively). The observer component consistently showed significant correlations with patients' ratings for the parameters of the POSAS (all p-values < 0.05). The correlation between the subjective assessment using the POSAS and the objective assessment using spectrophotometry and ultrasonography was low. In facial skin graft scar assessment in skin cancer patients, the POSAS showed acceptable inter-observer reliability. This tool was more comprehensive and had a higher correlation with the patient's opinion.

  11. Cleft Palate; A Multidiscipline Approach.

    ERIC Educational Resources Information Center

    Stark, Richard B., Ed.

    Nineteen articles present a multidisciplinary approach to the management of facial clefts. The following subjects are discussed: the history of cleft lip and cleft palate surgery; cogenital defects; classification; the operation of a cleft palate clinic; physical examination of newborns with cleft lip and/or palate; nursing care; anesthesia;…

  12. Reverse correlating love: highly passionate women idealize their partner's facial appearance.

    PubMed

    Gunaydin, Gul; DeLong, Jordan E

    2015-01-01

    A defining feature of passionate love is idealization--evaluating romantic partners in an overly favorable light. Although passionate love can be expected to color how favorably individuals represent their partner in their mind, little is known about how passionate love is linked with visual representations of the partner. Using reverse correlation techniques for the first time to study partner representations, the present study investigated whether women who are passionately in love represent their partner's facial appearance more favorably than individuals who are less passionately in love. In a within-participants design, heterosexual women completed two forced-choice classification tasks, one for their romantic partner and one for a male acquaintance, and a measure of passionate love. In each classification task, participants saw two faces superimposed with noise and selected the face that most resembled their partner (or an acquaintance). Classification images for each of high passion and low passion groups were calculated by averaging across noise patterns selected as resembling the partner or the acquaintance and superimposing the averaged noise on an average male face. A separate group of women evaluated the classification images on attractiveness, trustworthiness, and competence. Results showed that women who feel high (vs. low) passionate love toward their partner tend to represent his face as more attractive and trustworthy, even when controlling for familiarity effects using the acquaintance representation. Using an innovative method to study partner representations, these findings extend our understanding of cognitive processes in romantic relationships.

  13. Reverse Correlating Love: Highly Passionate Women Idealize Their Partner’s Facial Appearance

    PubMed Central

    Gunaydin, Gul; DeLong, Jordan E.

    2015-01-01

    A defining feature of passionate love is idealization—evaluating romantic partners in an overly favorable light. Although passionate love can be expected to color how favorably individuals represent their partner in their mind, little is known about how passionate love is linked with visual representations of the partner. Using reverse correlation techniques for the first time to study partner representations, the present study investigated whether women who are passionately in love represent their partner’s facial appearance more favorably than individuals who are less passionately in love. In a within-participants design, heterosexual women completed two forced-choice classification tasks, one for their romantic partner and one for a male acquaintance, and a measure of passionate love. In each classification task, participants saw two faces superimposed with noise and selected the face that most resembled their partner (or an acquaintance). Classification images for each of high passion and low passion groups were calculated by averaging across noise patterns selected as resembling the partner or the acquaintance and superimposing the averaged noise on an average male face. A separate group of women evaluated the classification images on attractiveness, trustworthiness, and competence. Results showed that women who feel high (vs. low) passionate love toward their partner tend to represent his face as more attractive and trustworthy, even when controlling for familiarity effects using the acquaintance representation. Using an innovative method to study partner representations, these findings extend our understanding of cognitive processes in romantic relationships. PMID:25806540
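
    The classification-image computation reduces to averaging the noise patterns selected as resembling the partner and superimposing the mean on the average base face, as sketched below with synthetic data.

        import numpy as np

        def classification_image(base_face, noise_patterns, chosen):
            # Average the noise patterns of the faces selected as resembling
            # the partner, then superimpose the mean on the average base face.
            mean_noise = noise_patterns[chosen].mean(axis=0)
            return np.clip(base_face + mean_noise, 0.0, 1.0)

        rng = np.random.default_rng(8)
        base = np.full((128, 128), 0.5)                      # average face stand-in
        noise = rng.normal(0.0, 0.1, size=(500, 128, 128))   # one pattern per trial
        picked = rng.random(500) < 0.5                       # which face was chosen
        print(classification_image(base, noise, picked).shape)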

  14. Perceptual and affective mechanisms in facial expression recognition: An integrative review.

    PubMed

    Calvo, Manuel G; Nummenmaa, Lauri

    2016-09-01

    Facial expressions of emotion involve a physical component of morphological changes in a face and an affective component conveying information about the expresser's internal feelings. It remains unresolved how much recognition and discrimination of expressions rely on the perception of morphological patterns or the processing of affective content. This review of research on the role of visual and emotional factors in expression recognition reached three major conclusions. First, behavioral, neurophysiological, and computational measures indicate that basic expressions are reliably recognized and discriminated from one another, albeit the effect may be inflated by the use of prototypical expression stimuli and forced-choice responses. Second, affective content along the dimensions of valence and arousal is extracted early from facial expressions, although this coarse affective representation contributes minimally to categorical recognition of specific expressions. Third, the physical configuration and visual saliency of facial features contribute significantly to expression recognition, with "emotionless" computational models being able to reproduce some of the basic phenomena demonstrated in human observers. We conclude that facial expression recognition, as it has been investigated in conventional laboratory tasks, depends to a greater extent on perceptual than affective information and mechanisms.

  15. Variation in the cranial base orientation and facial skeleton in dry skulls sampled from three major populations.

    PubMed

    Kuroe, Kazuto; Rosas, Antonio; Molleson, Theya

    2004-04-01

    The aim of this study was to analyse the effects of cranial base orientation on the morphology of the craniofacial system in human populations. Three geographically distant populations from Europe (72), Africa (48) and Asia (24) were chosen. Five angular and two linear variables from the cranial base component and six angular and six linear variables from the facial component based on two reference lines of the vertical posterior maxillary and Frankfort horizontal planes were measured. The European sample presented dolichofacial individuals with a larger face height and a smaller face depth derived from a raised cranial base and facial cranium orientation which tended to be similar to the Asian sample. The African sample presented brachyfacial individuals with a reduced face height and a larger face depth as a result of a lowered cranial base and facial cranium orientation. The Asian sample presented dolichofacial individuals with a larger face height and depth due to a raised cranial base and facial cranium orientation. The findings of this study suggest that cranial base orientation and posterior cranial base length appear to be valid discriminating factors between different human populations.

  16. Facial Emotions Recognition using Gabor Transform and Facial Animation Parameters with Neural Networks

    NASA Astrophysics Data System (ADS)

    Harit, Aditya; Joshi, J. C., Col; Gupta, K. K.

    2018-03-01

    This paper proposes an automatic facial emotion recognition algorithm which comprises two main components: feature extraction and expression recognition. The algorithm uses a Gabor filter bank on fiducial points to find the facial expression features. The resulting magnitudes of the Gabor transforms, along with 14 chosen FAPs (Facial Animation Parameters), compose the feature space. There are two stages: the training phase and the recognition phase. In the training stage, the system classifies all training expressions of the 6 different emotions into 6 classes (one for each emotion). In the recognition phase, it recognizes the emotion by applying the Gabor bank to a face image, finding the fiducial points, and feeding the features to the trained neural architecture.
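
    The Gabor feature-extraction stage can be sketched with OpenCV; the kernel parameters, number of orientations, and fiducial points below are assumptions, and a real-valued kernel is used, so the magnitude is approximated by the absolute filter response.

        import cv2
        import numpy as np

        def gabor_magnitudes(gray, fiducial_points, n_orientations=8):
            # Sample the magnitude of a Gabor filter bank at fiducial points.
            feats = []
            for k in range(n_orientations):
                kernel = cv2.getGaborKernel(ksize=(21, 21), sigma=4.0,
                                            theta=np.pi * k / n_orientations,
                                            lambd=10.0, gamma=0.5)
                response = cv2.filter2D(gray.astype(np.float32), -1, kernel)
                feats.append([abs(response[y, x]) for (x, y) in fiducial_points])
            return np.asarray(feats).ravel()

        gray = np.random.default_rng(9).random((128, 128)).astype(np.float32)
        points = [(40, 50), (88, 50), (64, 80), (48, 100), (80, 100)]
        # In the paper these magnitudes are concatenated with 14 FAPs before
        # being fed to the neural classifier.
        print(gabor_magnitudes(gray, points).shape)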

  17. Flexible and inflexible task sets: asymmetric interference when switching between emotional expression, sex, and age classification of perceived faces.

    PubMed

    Schuch, Stefanie; Werheid, Katja; Koch, Iring

    2012-01-01

    The present study investigated whether the processing characteristics of categorizing emotional facial expressions are different from those of categorizing facial age and sex information. Given that emotions change rapidly, it was hypothesized that processing facial expressions involves a more flexible task set that causes less between-task interference than the task sets involved in processing age or sex of a face. Participants switched between three tasks: categorizing a face as looking happy or angry (emotion task), young or old (age task), and male or female (sex task). Interference between tasks was measured by global interference and response interference. Both measures revealed patterns of asymmetric interference. Global between-task interference was reduced when a task was mixed with the emotion task. Response interference, as measured by congruency effects, was larger for the emotion task than for the nonemotional tasks. The results support the idea that processing emotional facial expression constitutes a more flexible task set that causes less interference (i.e., task-set "inertia") than processing the age or sex of a face.

  18. Classification of facial-emotion expression in the application of psychotherapy using Viola-Jones and Edge-Histogram of Oriented Gradient.

    PubMed

    Candra, Henry; Yuwono, Mitchell; Rifai Chai; Nguyen, Hung T; Su, Steven

    2016-08-01

    Psychotherapy requires appropriate recognition of the patient's facial-emotion expression to provide proper treatment during a psychotherapy session. To address this need, this paper proposes a facial emotion recognition system using a combination of the Viola-Jones detector and a feature descriptor we term the Edge-Histogram of Oriented Gradients (E-HOG). The performance of the proposed method is compared across various feature sources, including the face, the eyes, the mouth, as well as both the eyes and the mouth. Seven classes of basic emotions were successfully identified with 96.4% accuracy using a multi-class Support Vector Machine (SVM). The proposed descriptor E-HOG is much leaner to compute than traditional HOG, as shown by a significant improvement in processing time of as much as 1833.33% (p-value = 2.43E-17), with a slight reduction in accuracy of only 1.17% (p-value = 0.0016).
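
    E-HOG is the authors' own descriptor; as a rough approximation one can compute a standard HOG over an edge map of the Viola-Jones face crop, as in this sketch (OpenCV cascade plus scikit-image HOG; parameters are assumptions).

        import cv2
        from skimage.feature import hog

        face_cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

        def ehog_like_features(gray):
            # gray: 8-bit grayscale image. Detect the face with Viola-Jones,
            # then compute HOG over an edge map of the face crop.
            boxes = face_cascade.detectMultiScale(gray, 1.1, 5)
            if len(boxes) == 0:
                return None
            x, y, w, h = boxes[0]
            face = cv2.resize(gray[y:y + h, x:x + w], (64, 64))
            edges = cv2.Canny(face, 100, 200)       # edge map before HOG
            return hog(edges, orientations=9, pixels_per_cell=(8, 8),
                       cells_per_block=(2, 2))

        # Features from labeled images would then train a multi-class SVM
        # (e.g., sklearn.svm.SVC) over the seven basic emotion classes.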

  19. Active learning for solving the incomplete data problem in facial age classification by the furthest nearest-neighbor criterion.

    PubMed

    Wang, Jian-Gang; Sung, Eric; Yau, Wei-Yun

    2011-07-01

    Facial age classification is an approach to classify face images into one of several predefined age groups. One of the difficulties in applying learning techniques to the age classification problem is the large amount of labeled training data required. Acquiring such training data is very costly in terms of age progression, privacy, human time, and effort. Although unlabeled face images can be obtained easily, manually labeling them on a large scale to establish the ground truth would be expensive. The frugal selection of unlabeled data for labeling, so as to reach high classification performance quickly with minimal labeling effort, is a challenging problem. In this paper, we present an active learning approach based on an online incremental bilateral two-dimension linear discriminant analysis (IB2DLDA) which initially learns from a small pool of labeled data and then iteratively selects the most informative samples from the unlabeled set to incrementally improve the classifier. Specifically, we propose a novel data selection criterion called the furthest nearest-neighbor (FNN) that generalizes margin-based uncertainty to the multiclass case and is easy to compute, so that the proposed active learning algorithm can handle a large number of classes and large data sizes efficiently. Empirical experiments on the FG-NET and Morph databases, together with a large unlabeled data set for age categorization problems, show that the proposed approach can achieve results comparable to, or even better than, a conventionally trained active classifier that requires much more labeling effort. Our IB2DLDA-FNN algorithm achieves similar results much faster than random selection and with fewer samples for age categorization. It also achieves results comparable to active SVM but is much faster to train because kernel methods are not needed. Results on a face recognition database and a palmprint/palm-vein database showed that our approach can handle problems with a large number of classes. Our contributions in this paper are twofold. First, we propose IB2DLDA-FNN, the FNN being our novel idea, as a generic online active learning paradigm. Second, we show that it can be another viable tool for active learning of facial age range classification.
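
    A minimal reading of the FNN selection rule, with plain Euclidean distances standing in for the IB2DLDA feature space (the batch size is a placeholder):

        import numpy as np
        from scipy.spatial.distance import cdist

        def fnn_select(unlabeled, labeled, batch_size=10):
            """Pick the unlabeled samples whose nearest labeled neighbor is furthest."""
            d = cdist(unlabeled, labeled)             # pairwise Euclidean distances
            nn_dist = d.min(axis=1)                   # distance to nearest labeled sample
            return np.argsort(nn_dist)[-batch_size:]  # furthest of those -> query for labels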

  20. Effects of Objective 3-Dimensional Measures of Facial Shape and Symmetry on Perceptions of Facial Attractiveness.

    PubMed

    Hatch, Cory D; Wehby, George L; Nidey, Nichole L; Moreno Uribe, Lina M

    2017-09-01

    Meeting patient desires for enhanced facial esthetics requires that providers have standardized and objective methods to measure esthetics. The authors evaluated the effects of objective 3-dimensional (3D) facial shape and asymmetry measurements derived from 3D facial images on perceptions of facial attractiveness. The 3D facial images of 313 adults in Iowa were digitized with 32 landmarks, and objective 3D facial measurements capturing symmetric and asymmetric components of shape variation, centroid size, and fluctuating asymmetry were obtained from the 3D coordinate data using geometric morphometric analyses. Frontal and profile images of study participants were rated for facial attractiveness by 10 volunteers (5 women and 5 men) on a 5-point Likert scale and a visual analog scale. Multivariate regression was used to identify the effects of the objective 3D facial measurements on attractiveness ratings. Several objective 3D facial measurements had marked effects on attractiveness ratings. Shorter facial heights with protrusive chins, midface retrusion, faces with protrusive noses and thin lips, flat mandibular planes with deep labiomental folds, any cants of the lip commissures and floor of the nose, larger faces overall, and increased fluctuating asymmetry were rated as significantly (P < .001) less attractive. Perceptions of facial attractiveness can be explained by specific 3D measurements of facial shapes and fluctuating asymmetry, which have important implications for clinical practice and research. Copyright © 2017 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.

  1. Extracted facial feature of racial closely related faces

    NASA Astrophysics Data System (ADS)

    Liewchavalit, Chalothorn; Akiba, Masakazu; Kanno, Tsuneo; Nagao, Tomoharu

    2010-02-01

    Human faces contain a great deal of demographic information, such as identity, gender, age, race, and emotion. Human beings can perceive these pieces of information and use them as important clues in social interaction with other people. Race perception is considered one of the most delicate and sensitive parts of face perception. There is much research concerning image-based race recognition, but most of it focuses on major racial groups such as Caucasoid, Negroid, and Mongoloid. This paper focuses on how people classify the races of racially closely related groups. As a sample of such a group, we chose Japanese and Thai faces to represent the difference between Northern and Southern Mongoloid. Three psychological experiments were performed to study the strategies of face perception in race classification. The results of the psychological experiments suggest that race perception is an ability that can be learned. Eyes and eyebrows attract the most attention, and the eyes are a significant factor in race perception. Principal Component Analysis (PCA) was performed to extract facial features of the sample race groups. Extracted race features of texture and shape were used to synthesize faces. The results suggest that racial features rely on detailed texture rather than on shape. This research is an indispensable fundamental study of race perception, which is essential for the establishment of a human-like race recognition system.
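
    The PCA feature-extraction step can be sketched as follows (assuming aligned, equally sized grayscale face images and scikit-learn; the number of components is a placeholder):

        import numpy as np
        from sklearn.decomposition import PCA

        def pca_face_features(images, n_components=20):
            X = np.stack([im.ravel() for im in images]).astype(float)  # one row per face
            pca = PCA(n_components=n_components).fit(X)
            return pca, pca.transform(X)   # basis ("eigenfaces") and per-face scores

        # Faces can then be synthesized from modified scores via pca.inverse_transform,
        # which is the spirit of the face-synthesis step described above.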

  2. The facial nerve: anatomy and associated disorders for oral health professionals.

    PubMed

    Takezawa, Kojiro; Townsend, Grant; Ghabriel, Mounir

    2018-04-01

    The facial nerve, the seventh cranial nerve, is of great clinical significance to oral health professionals. Most published literature either addresses the central connections of the nerve or its peripheral distribution but few integrate both of these components and also highlight the main disorders affecting the nerve that have clinical implications in dentistry. The aim of the current study is to provide a comprehensive description of the facial nerve. Multiple aspects of the facial nerve are discussed and integrated, including its neuroanatomy, functional anatomy, gross anatomy, clinical problems that may involve the nerve, and the use of detailed anatomical knowledge in the diagnosis of the site of facial nerve lesion in clinical neurology. Examples are provided of disorders that can affect the facial nerve during its intra-cranial, intra-temporal and extra-cranial pathways, and key aspects of clinical management are discussed. The current study is complemented by original detailed dissections and sketches that highlight key anatomical features and emphasise the extent and nature of anatomical variations displayed by the facial nerve.

  3. The not face: A grammaticalization of facial expressions of emotion.

    PubMed

    Benitez-Quiroz, C Fabian; Wilbur, Ronnie B; Martinez, Aleix M

    2016-05-01

    Facial expressions of emotion are thought to have evolved from the development of facial muscles used in sensory regulation and later adapted to express moral judgment. Negative moral judgment includes the expressions of anger, disgust and contempt. Here, we study the hypothesis that these facial expressions of negative moral judgment have further evolved into a facial expression of negation regularly used as a grammatical marker in human language. Specifically, we show that people from different cultures expressing negation use the same facial muscles as those employed to express negative moral judgment. We then show that this nonverbal signal is used as a co-articulator in speech and that, in American Sign Language, it has been grammaticalized as a non-manual marker. Furthermore, this facial expression of negation exhibits the theta oscillation (3-8 Hz) universally seen in syllable and mouthing production in speech and signing. These results provide evidence for the hypothesis that some components of human language have evolved from facial expressions of emotion, and suggest an evolutionary route for the emergence of grammatical markers. Copyright © 2016 Elsevier B.V. All rights reserved.

  4. The Not Face: A grammaticalization of facial expressions of emotion

    PubMed Central

    Benitez-Quiroz, C. Fabian; Wilbur, Ronnie B.; Martinez, Aleix M.

    2016-01-01

    Facial expressions of emotion are thought to have evolved from the development of facial muscles used in sensory regulation and later adapted to express moral judgment. Negative moral judgment includes the expressions of anger, disgust and contempt. Here, we study the hypothesis that these facial expressions of negative moral judgment have further evolved into a facial expression of negation regularly used as a grammatical marker in human language. Specifically, we show that people from different cultures expressing negation use the same facial muscles as those employed to express negative moral judgment. We then show that this nonverbal signal is used as a co-articulator in speech and that, in American Sign Language, it has been grammaticalized as a non-manual marker. Furthermore, this facial expression of negation exhibits the theta oscillation (3–8 Hz) universally seen in syllable and mouthing production in speech and signing. These results provide evidence for the hypothesis that some components of human language have evolved from facial expressions of emotion, and suggest an evolutionary route for the emergence of grammatical markers. PMID:26872248

  5. Overview of Facial Plastic Surgery and Current Developments

    PubMed Central

    Chuang, Jessica; Barnes, Christian; Wong, Brian J. F.

    2016-01-01

    Facial plastic surgery is a multidisciplinary specialty largely driven by otolaryngology but includes oral maxillary surgery, dermatology, ophthalmology, and plastic surgery. It encompasses both reconstructive and cosmetic components. The scope of practice for facial plastic surgeons in the United States may include rhinoplasty, browlifts, blepharoplasty, facelifts, microvascular reconstruction of the head and neck, craniomaxillofacial trauma reconstruction, and correction of defects in the face after skin cancer resection. Facial plastic surgery also encompasses the use of injectable fillers, neural modulators (e.g., BOTOX Cosmetic, Allergan Pharmaceuticals, Westport, Ireland), lasers, and other devices aimed at rejuvenating skin. Facial plastic surgery is a constantly evolving field with continuing innovative advances in surgical techniques and cosmetic adjunctive technologies. This article aims to give an overview of the various procedures that encompass the field of facial plastic surgery and to highlight the recent advances and trends in procedures and surgical techniques. PMID:28824978

  6. Original and Mirror Face Images and Minimum Squared Error Classification for Visible Light Face Recognition.

    PubMed

    Wang, Rong

    2015-01-01

    In real-world applications, images of faces vary with illumination, facial expression, and pose. More training samples can therefore reveal more of the possible appearances of a face. Although minimum squared error classification (MSEC) is a widely used method, its application to face recognition usually suffers from the limited number of available training samples. In this paper, we improve MSEC by using mirror faces as virtual training samples. We generated the mirror faces from the original training samples and combined both kinds of samples into a new training set. Face recognition experiments show that our method achieves high classification accuracy.
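
    A ridge-regularized least-squares reading of the mirror-face idea (assumptions: aligned grayscale images and integer labels 0..n_classes-1; lam is a placeholder):

        import numpy as np

        def train_msec(images, labels, n_classes, lam=1e-3):
            # originals plus horizontally flipped (mirror) virtual samples
            X = [im.ravel() for im in images] + [np.fliplr(im).ravel() for im in images]
            X = np.asarray(X, float)
            Y = np.eye(n_classes)[np.tile(labels, 2)]      # one-hot targets
            # minimum squared error solution of X W ~ Y (regularized normal equations)
            W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)
            return W                                       # predict: (x @ W).argmax()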

  7. Fashioning the Face: Sensorimotor Simulation Contributes to Facial Expression Recognition.

    PubMed

    Wood, Adrienne; Rychlowska, Magdalena; Korb, Sebastian; Niedenthal, Paula

    2016-03-01

    When we observe a facial expression of emotion, we often mimic it. This automatic mimicry reflects underlying sensorimotor simulation that supports accurate emotion recognition. Why this is so is becoming more obvious: emotions are patterns of expressive, behavioral, physiological, and subjective feeling responses. Activation of one component can therefore automatically activate other components. When people simulate a perceived facial expression, they partially activate the corresponding emotional state in themselves, which provides a basis for inferring the underlying emotion of the expresser. We integrate recent evidence in favor of a role for sensorimotor simulation in emotion recognition. We then connect this account to a domain-general understanding of how sensory information from multiple modalities is integrated to generate perceptual predictions in the brain. Copyright © 2016 Elsevier Ltd. All rights reserved.

  8. Effects of facial color on the subliminal processing of fearful faces.

    PubMed

    Nakajima, K; Minami, T; Nakauchi, S

    2015-12-03

    Recent studies have suggested that both configural information, such as face shape, and surface information are important for face perception. In particular, facial color is sufficiently suggestive of emotional states, as in the phrases: "flushed with anger" and "pale with fear." However, few studies have examined the relationship between facial color and emotional expression. On the other hand, event-related potential (ERP) studies have shown that emotional expressions, such as fear, are processed unconsciously. In this study, we examined how facial color modulated the supraliminal and subliminal processing of fearful faces. We recorded electroencephalograms while participants performed a facial emotion identification task involving masked target faces exhibiting facial expressions (fearful or neutral) and colors (natural or bluish). The results indicated that there was a significant interaction between facial expression and color for the latency of the N170 component. Subsequent analyses revealed that the bluish-colored faces increased the latency effect of facial expressions compared to the natural-colored faces, indicating that the bluish color modulated the processing of fearful expressions. We conclude that the unconscious processing of fearful faces is affected by facial color. Copyright © 2015 IBRO. Published by Elsevier Ltd. All rights reserved.

  9. Principal component analysis of three-dimensional face shape: Identifying shape features that change with age.

    PubMed

    Kurosumi, M; Mizukoshi, K

    2018-05-01

    The types of shape feature that constitute a face have not been comprehensively established, and most previous studies of age-related changes in facial shape have focused on individual characteristics, such as wrinkles and sagging skin. In this study, we quantitatively measured differences in face shape between individuals and investigated how shape features changed with age. We analyzed the faces of 280 Japanese women aged 20-69 years in three dimensions and used principal component analysis to establish the shape features that characterized individual differences. We also evaluated the relationships between each feature and age, clarifying the shape features characteristic of different age groups. Changes in facial shape in middle age included a decreased volume of the upper face and an increased volume of the whole cheeks and around the chin. Changes in older people included an increased volume of the lower cheeks and around the chin, sagging skin, and jaw distortion. Principal component analysis was effective for identifying facial shape features that represent individual and age-related differences. This method allowed straightforward measurements, such as the increase or decrease in cheeks caused by soft tissue changes or skeletal-based changes to the forehead or jaw, simply by acquiring three-dimensional facial images. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
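
    One way to reproduce the analysis in outline (assumptions: landmarked 3D shapes already Procrustes-aligned; scikit-learn and scipy available):

        import numpy as np
        from sklearn.decomposition import PCA
        from scipy.stats import pearsonr

        def age_related_components(shapes, ages, n_components=10):
            """shapes: (n_faces, n_points, 3) aligned coordinates; ages: (n_faces,)."""
            X = shapes.reshape(len(shapes), -1)
            pca = PCA(n_components=n_components).fit(X)
            scores = pca.transform(X)
            # correlation (r, p) of each shape feature with age
            return [pearsonr(scores[:, k], ages) for k in range(n_components)]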

  10. Teaching Emotion Recognition Skills to Children with Autism

    ERIC Educational Resources Information Center

    Ryan, Christian; Charragain, Caitriona Ni

    2010-01-01

    Autism is associated with difficulty interacting with others and an impaired ability to recognize facial expressions of emotion. Previous teaching programmes have not addressed weak central coherence. Emotion recognition training focused on components of facial expressions. The training was administered in small groups ranging from 4 to 7…

  11. ECTODERMAL WNT/β-CATENIN SIGNALING SHAPES THE MOUSE FACE

    PubMed Central

    Reid, Bethany S.; Yang, Hui; Melvin, Vida Senkus; Taketo, Makoto M.; Williams, Trevor

    2010-01-01

    The canonical Wnt/β-catenin pathway is an essential component of multiple developmental processes. To investigate the role of this pathway in the ectoderm during facial morphogenesis, we generated conditional β-catenin mouse mutants using a novel ectoderm-specific Cre recombinase transgenic line. Our results demonstrate that ablating or stabilizing β-catenin in the embryonic ectoderm causes dramatic changes in facial morphology. There are accompanying alterations in the expression of Fgf8 and Shh, key molecules that establish a signaling center critical for facial patterning, the frontonasal ectodermal zone (FEZ). These data indicate that Wnt/β-catenin signaling within the ectoderm is critical for facial development and further suggest that this pathway is an important mechanism for generating the diverse facial shapes of vertebrates during evolution. PMID:21087601

  12. Values of a Patient and Observer Scar Assessment Scale to Evaluate the Facial Skin Graft Scar

    PubMed Central

    Chae, Jin Kyung; Kim, Eun Jung; Park, Kun

    2016-01-01

    Background The patient and observer scar assessment scale (POSAS) recently emerged as a promising method, reflecting both the observer's and the patient's opinions in evaluating scars. This tool was shown to be consistent and reliable in burn scar assessment, but it has not been tested in the setting of skin graft scars in skin cancer patients. Objective To evaluate facial skin graft scars using the POSAS and to compare it with objective scar assessment tools. Methods Twenty-three patients, diagnosed with facial cutaneous malignancy and given skin grafts after Mohs micrographic surgery, were recruited. Observer assessment was performed by three independent raters using the observer component of the POSAS and the Vancouver scar scale (VSS). Patient self-assessment was performed using the patient component of the POSAS. To quantify scar color and scar thickness more objectively, spectrophotometry and ultrasonography were applied. Results Inter-observer reliability was substantial with both the VSS and the observer component of the POSAS (average-measure intraclass correlation coefficients, 0.76 and 0.80, respectively). The observer component consistently showed significant correlations with patients' ratings for the parameters of the POSAS (all p-values<0.05). The correlation between subjective assessment using the POSAS and objective assessment using spectrophotometry and ultrasonography was weak. Conclusion In facial skin graft scar assessment in skin cancer patients, the POSAS showed acceptable inter-observer reliability. This tool was more comprehensive and correlated better with patients' opinions. PMID:27746642

  13. Is empathy necessary to comprehend the emotional faces? The empathic effect on attentional mechanisms (eye movements), cortical correlates (N200 event-related potentials) and facial behaviour (electromyography) in face processing.

    PubMed

    Balconi, Michela; Canavesio, Ylenia

    2016-01-01

    The present research explored the effect of social empathy on processing emotional facial expressions. Previous evidence suggested a close relationship between emotional empathy and both the ability to detect facial emotions and the attentional mechanisms involved. A multi-measure approach was adopted: we investigated the association between trait empathy (Balanced Emotional Empathy Scale) and individuals' performance (response times; RTs), attentional mechanisms (eye movements; number and duration of fixations), correlates of cortical activation (event-related potential (ERP) N200 component), and facial responsiveness (zygomatic and corrugator activity). Trait empathy was found to affect face detection performance (reduced RTs), attentional processes (more scanning eye movements in specific areas of interest), the ERP salience effect (increased N200 amplitude), and electromyographic activity (more facial responses). A second important result was the demonstration of strong, direct correlations among these measures. We suggest that empathy may function as a social facilitator of the processes underlying the detection of facial emotion, and a general "facial response effect" is proposed to explain these results. We assume that empathy influences both cognitive processing and facial responsiveness, such that empathic individuals are more skilful in processing facial emotion.

  14. The effect of motorcycle helmet type, components and fixation status on facial injury in Klang Valley, Malaysia: a case control study

    PubMed Central

    2014-01-01

    Background The effectiveness of helmets in reducing the risk of severe head injury in motorcyclists involved in a crash is well established. There is limited evidence, however, regarding the extent to which helmets protect riders from facial injuries. The objective of this study was to determine the effect of helmet type, components, and fixation status on the risk of facial injuries among Malaysian motorcyclists. Method 755 injured motorcyclists were recruited over a 12-month period in 2010–2011 in the southern Klang Valley, Malaysia, in this case-control study. Of the 755 injured motorcyclists, 391 participants (51.8%) sustained facial injuries (cases) while 364 participants (48.2%) were without facial injury (controls). The outcomes of interest were facial injury and location of facial injury (i.e., upper, middle, and lower face injuries). A binary logistic regression was conducted to examine the association between helmet characteristics and the outcomes, taking into account potential confounders such as age, riding position, alcohol and illicit substance use, type of colliding vehicle, and type of collision. Helmet fixation was defined as the position of the helmet during the crash, i.e., whether it was still secured on the head or had been dislodged. Results Helmet fixation was shown to have a greater effect on facial injury outcome than helmet type. Increased odds of adverse outcomes were observed for non-fixed helmets compared to fixed helmets, with adjusted odds ratio (AOR) = 2.10 (95% CI 1.41-3.13) for facial injury; AOR = 6.64 (95% CI 3.71-11.91) for upper face injury; AOR = 5.36 (95% CI 3.05-9.44) for middle face injury; and AOR = 2.00 (95% CI 1.22-3.26) for lower face injury. Motorcyclists with a damaged visor had increased odds of facial injury (AOR = 5.48, 95% CI 1.46-20.57) compared to those with an undamaged visor. Conclusions A helmet of any type that is properly worn and remains fixed on the head throughout a crash provides some protection against facial injury. Visor damage is a significant contributing factor for facial injury. These findings are discussed with reference to implications for policy and initiatives addressing helmet use and wearing behaviors. PMID:25086638
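
    Adjusted odds ratios of this kind are typically obtained along the lines of the sketch below (statsmodels; the DataFrame and its column names are hypothetical):

        import numpy as np
        import statsmodels.formula.api as smf

        def adjusted_odds_ratios(df):
            # df is a hypothetical DataFrame; outcome and covariate names are illustrative
            model = smf.logit("facial_injury ~ helmet_fixed + C(helmet_type) + age"
                              " + C(collision_type)", data=df).fit()
            return np.exp(model.params), np.exp(model.conf_int())  # AORs and 95% CIs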

  15. Morphological quantitative criteria and aesthetic evaluation of eight female Han face types.

    PubMed

    Zhao, Qiming; Zhou, Rongrong; Zhang, XuDong; Sun, Huafeng; Lu, Xin; Xia, Dongsheng; Song, Mingli; Liang, Yang

    2013-04-01

    Human facial aesthetics relies on the classification of facial features and standards of attractiveness. However, there are no widely accepted quantitative criteria for facial attractiveness, particularly for Chinese Han faces. Establishing quantitative standards of attractiveness for facial landmarks within facial types is important for planning outcomes in cosmetic plastic surgery. The aim of this study was to determine quantitatively the criteria for attractiveness of eight female Chinese Han facial types. A photographic database of young Chinese Han women's faces was created. Photographed faces (450) were classified based on eight established types and scored for attractiveness. Measurements taken at seven standard facial landmarks and their relative proportions were analyzed for correlations to attractiveness scores. Attractive faces of each type were averaged via an image-morphing algorithm to generate synthetic facial types. Results were compared with the neoclassical ideal and data for Caucasians. Morphological proportions corresponding to the highest attractiveness scores for Chinese Han women differed from the neoclassical ideal. In our population of young, normal, healthy Han women, high attractiveness ratings were given to those with greater temporal width and pogonion-gonion distance, and smaller bizygomatic and bigonial widths. As attractiveness scores increased, the ratio of the temporal to bizygomatic widths increased, and the ratio of the distance between the pogonion and gonion to the bizygomatic width also increased slightly. Among the facial types, the oval and inverted triangular were the most attractive. The neoclassical ideal of attractiveness does not apply to Han faces. However, the proportion of faces considered attractive in this population was similar to that of Caucasian populations. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266 .

  16. Positive and negative symptom scores are correlated with activation in different brain regions during facial emotion perception in schizophrenia patients: a voxel-based sLORETA source activity study.

    PubMed

    Kim, Do-Won; Kim, Han-Sung; Lee, Seung-Hwan; Im, Chang-Hwan

    2013-12-01

    Schizophrenia is one of the most devastating of all mental illnesses, and has dimensional characteristics that include both positive and negative symptoms. One problem reported in schizophrenia patients is that they tend to show deficits in face emotion processing, on which negative symptoms are thought to have stronger influence. In this study, four event-related potential (ERP) components (P100, N170, N250, and P300) and their source activities were analyzed using EEG data acquired from 23 schizophrenia patients while they were presented with facial emotion picture stimuli. Correlations between positive and negative syndrome scale (PANSS) scores and source activations during facial emotion processing were calculated to identify the brain areas affected by symptom scores. Our analysis demonstrates that PANSS positive scores are negatively correlated with major areas of the left temporal lobule for early ERP components (P100, N170) and with the right middle frontal lobule for a later component (N250), which indicates that positive symptoms affect both early face processing and facial emotion processing. On the other hand, PANSS negative scores are negatively correlated with several clustered regions, including the left fusiform gyrus (at P100), most of which are not overlapped with regions showing correlations with PANSS positive scores. Our results suggest that positive and negative symptoms affect independent brain regions during facial emotion processing, which may help to explain the heterogeneous characteristics of schizophrenia. © 2013 Elsevier B.V. All rights reserved.

  17. Features versus context: An approach for precise and detailed detection and delineation of faces and facial features.

    PubMed

    Ding, Liya; Martinez, Aleix M

    2010-11-01

    The appearance-based approach to face detection has seen great advances in the last several years. In this approach, we learn the image statistics describing the texture pattern (appearance) of the object class we want to detect, e.g., the face. However, this approach has had limited success in providing an accurate and detailed description of the internal facial features, i.e., eyes, brows, nose, and mouth. In general, this is due to the limited information carried by the learned statistical model. While the face template is relatively rich in texture, facial features (e.g., eyes, nose, and mouth) do not carry enough discriminative information to tell them apart from all possible background images. We resolve this problem by adding the context information of each facial feature in the design of the statistical model. In the proposed approach, the context information defines the image statistics most correlated with the surroundings of each facial component. This means that when we search for a face or facial feature, we look for those locations which most resemble the feature yet are most dissimilar to its context. This dissimilarity with the context features forces the detector to gravitate toward an accurate estimate of the position of the facial feature. Learning to discriminate between feature and context templates is difficult, however, because the context and the texture of the facial features vary widely under changing expression, pose, and illumination, and may even resemble one another. We address this problem with the use of subclass divisions. We derive two algorithms to automatically divide the training samples of each facial feature into a set of subclasses, each representing a distinct construction of the same facial component (e.g., closed versus open eyes) or its context (e.g., different hairstyles). The first algorithm is based on a discriminant analysis formulation. The second algorithm is an extension of the AdaBoost approach. We provide extensive experimental results using still images and video sequences for a total of 3,930 images. We show that the results are almost as good as those obtained with manual detection.
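
    The subclass-division idea can be approximated as below (k-means stands in for the paper's discriminant-analysis and AdaBoost derivations, which are more involved; assumes scikit-learn and 0-based integer class labels):

        import numpy as np
        from sklearn.cluster import KMeans

        def split_into_subclasses(X, y, n_subclasses=3):
            """Give each class several subclasses (e.g., closed vs. open eyes)."""
            sub = np.empty(len(y), dtype=int)
            for c in np.unique(y):
                m = (y == c)
                km = KMeans(n_clusters=n_subclasses, n_init=10).fit(X[m])
                sub[m] = c * n_subclasses + km.labels_   # globally unique subclass ids
            return sub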

  18. Investigation of Severe Craniomaxillofacial Battle Injuries Sustained by U.S. Service Members: A Case Series

    DTIC Science & Technology

    2012-11-05

    [Extraction fragments only: pieces of figure captions describing lip reconstruction with advancement flaps (reconstructive advancement of the lower lip and a buccal mucosa advancement flap to reconstruct the maxillary lip; incision markings) and truncated reference-list entries, ending with: Clark N, Birely B, Manson PN, et al. High energy ballistic and avulsive facial injuries: classification, patterns, and ...]

  19. Emotion-Cognition Interactions in Schizophrenia: Implicit and Explicit Effects of Facial Expression

    ERIC Educational Resources Information Center

    Linden, Stefanie C.; Jackson, Margaret C.; Subramanian, Leena; Wolf, Claudia; Green, Paul; Healy, David; Linden, David E. J.

    2010-01-01

    Working memory (WM) and emotion classification are amongst the cognitive domains where specific deficits have been reported for patients with schizophrenia. In healthy individuals, the capacity of visual working memory is enhanced when the material to be retained is emotionally salient, particularly for angry faces. We investigated whether…

  20. The double auditory meatus--a rare first branchial cleft anomaly: clinical presentation and treatment.

    PubMed

    Stokroos, R J; Manni, J J

    2000-11-01

    To discuss the embryology, classification, clinical experience with, and management of first branchial cleft anomalies. Retrospective case review. Tertiary referral center. Patients with a first branchial cleft anomaly. Surgery or revision surgery. Classifications according to Work, Olsen, and Chilla; previous diagnostic and therapeutic pitfalls; outcome of intervention (including facial nerve function). Between 1984 and 1999, first branchial cleft anomalies were diagnosed in 18 patients. Surgical treatment was the treatment of choice. The authors' approach to Work type I and type II lesions is described, and surgical aspects of revision surgery are discussed. The importance of early establishment of the relationship of the anomaly to the facial nerve is stressed. In 8 patients, previous surgical attempts had been undertaken without the diagnosis having first been established. After intervention, the outcome was favorable. First branchial cleft anomalies occur sporadically in ordinary clinical practice. They may go unrecognized or may be mistaken for tumors or other inflammatory lesions in the periauricular region. However, the distinct clinical features, which can be derived from embryologic development, usually lead to the correct diagnosis. This avoids both treatment delay and eventual failure.

  1. Making heads turn: the effect of familiarity and stimulus rotation on a gender-classification task.

    PubMed

    Stevenage, Sarah V; Osborne, Cara D

    2006-01-01

    Recent work has demonstrated that facial familiarity can moderate the influence of inversion when completing a configural processing task. Here, we examine whether familiarity interacts with intermediate angles of orientation in the same way that it interacts with inversion. Participants were asked to make a gender classification to familiar and unfamiliar faces shown at seven angles of orientation. Speed and accuracy of performance were assessed for stimuli presented (i) as whole faces and (ii) as internal features. When presented as whole faces, the task was easy, as revealed by ceiling levels of accuracy and no effect of familiarity or angle of rotation on response times. However, when stimuli were presented as internal features, an influence of facial familiarity was evident. Unfamiliar faces showed no increase in difficulty across angle of rotation, whereas familiar faces showed a marked increase in difficulty across angle, which was explained by significant linear and cubic trends in the data. Results were interpreted in terms of the benefit gained from a mental representation when face processing was impaired by stimulus rotation.

  2. Sleep stage classification with low complexity and low bit rate.

    PubMed

    Virkkala, Jussi; Värri, Alpo; Hasan, Joel; Himanen, Sari-Leena; Müller, Kiti

    2009-01-01

    Standard sleep stage classification is based on visual analysis of central (usually also frontal and occipital) EEG, two-channel EOG, and submental EMG signals. The process is complex, uses multiple electrodes, and is usually based on relatively high (200-500 Hz) sampling rates. At least 12-bit analog-to-digital conversion is also recommended (with 16-bit storage), resulting in a total bit rate of at least 12.8 kbit/s. This is not a problem for in-house laboratory sleep studies, but for online wireless self-applicable ambulatory sleep studies, lower complexity and lower bit rates are preferred. In this study we further developed an earlier single-channel facial EMG/EOG/EEG-based automatic sleep stage classification method. An algorithm with a simple decision tree separated 30-s epochs into wakefulness, SREM, S1/S2, and SWS using 18-45 Hz beta power and 0.5-6 Hz amplitude. Improvements included low-complexity recursive digital filtering. We also evaluated the effects of a reduced sampling rate, a reduced number of quantization steps, and a reduced dynamic range on the sleep data of 132 training and 131 testing subjects. With the studied algorithm, it was possible to reduce the sampling rate to 50 Hz (with a low-pass filter at 90 Hz), and the dynamic range to 244 μV, with an 8-bit resolution resulting in a bit rate of 0.4 kbit/s. Facial electrodes and a low bit rate enable the use of smaller devices for sleep stage classification in home environments.
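
    The described decision tree can be sketched as follows (scipy; the thresholds are illustrative placeholders, not the study's fitted values, and the 18-45 Hz band is clipped at the Nyquist frequency for low sampling rates):

        import numpy as np
        from scipy.signal import welch, butter, sosfiltfilt

        def classify_epoch(x, fs=200, beta_thr=1.0, amp_sws=75.0, amp_s12=20.0):
            """Classify one 30-s single-channel EMG/EOG/EEG epoch."""
            f, pxx = welch(x, fs=fs, nperseg=int(fs * 4))
            beta = pxx[(f >= 18) & (f <= 45)].sum()      # 18-45 Hz beta power
            sos = butter(4, [0.5, 6], btype="bandpass", fs=fs, output="sos")
            amp = np.ptp(sosfiltfilt(sos, x))            # 0.5-6 Hz amplitude
            if beta > beta_thr:
                return "wake"                            # high beta/EMG while awake
            if amp > amp_sws:
                return "SWS"                             # large slow waves
            return "S1/S2" if amp > amp_s12 else "SREM"  # low beta and low amplitude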

  3. Children's Scripts for Social Emotions: Causes and Consequences Are More Central than Are Facial Expressions

    ERIC Educational Resources Information Center

    Widen, Sherri C.; Russell, James A.

    2010-01-01

    Understanding and recognition of emotions relies on emotion concepts, which are narrative structures (scripts) specifying facial expressions, causes, consequences, label, etc. organized in a temporal and causal order. Scripts and their development are revealed by examining which components better tap which concepts at which ages. This study…

  4. Effects of task demands on the early neural processing of fearful and happy facial expressions

    PubMed Central

    Itier, Roxane J.; Neath-Tavares, Karly N.

    2017-01-01

    Task demands shape how we process environmental stimuli, but their impact on the early neural processing of facial expressions remains unclear. In a within-subject design, ERPs were recorded to the same fearful, happy, and neutral facial expressions presented during gender discrimination, explicit emotion discrimination, and oddball detection tasks, the most studied tasks in the field. Using an eye tracker, fixation on the nose of the face was enforced using a gaze-contingent presentation. Task demands modulated amplitudes from 200–350 ms at occipito-temporal sites spanning the EPN component. Amplitudes were more negative for fearful than neutral expressions starting with the N170, from 150–350 ms, with a temporo-occipital distribution, whereas no clear effect of happy expressions was seen. Task and emotion effects never interacted in any time window or for any of the ERP components analyzed (P1, N170, EPN). Thus, whether emotion is explicitly discriminated or irrelevant to the task at hand, the neural correlates of fearful and happy facial expressions seem immune to these task demands during the first 350 ms of visual processing. PMID:28315309

  5. Sphenoid Sinus and Sphenoid Bone Fractures in Patients with Craniomaxillofacial Trauma

    PubMed Central

    Cantini Ardila, Jorge Ernesto; Mendoza, Miguel Ángel Rivera; Ortega, Viviana Gómez

    2013-01-01

    Background and Purpose Sphenoid bone fractures and sphenoid sinus fractures carry high morbidity due to their association with high-energy trauma. The purpose of this study is to describe individuals with traumatic injuries from different mechanisms and to determine whether there is any relationship between various isolated or combined fractures of the facial skeleton and sphenoid bone and sphenoid sinus fractures. Methods We retrospectively studied the hospital charts of all patients who presented to the trauma center at Hospital de San José with facial fractures from December 2009 to August 2011. All patients were evaluated by computed tomography scan and classified into low-, medium-, and high-energy trauma fractures, according to the classification described by Manson. Design This is a retrospective descriptive study. Results The study data were collected as part of a retrospective analysis. A total of 250 patients presented to the trauma center of the study hospital with facial trauma. Thirty-eight patients were excluded. A total of 212 patients had facial fractures; 33 of these (15.5%) had a combination of sphenoid sinus and sphenoid bone fractures together with facial fractures. There was a male predilection (77.3% males vs. 22.7% females). The mean age of the patients was 37 years. Orbital fractures (78.8%) and maxillary fractures (57.5%) were most commonly associated with sphenoid sinus and sphenoid bone fractures. Conclusions High-energy trauma is more frequently associated with sphenoid fractures than medium- and low-energy trauma. There is a correlation between facial fractures and sphenoid sinus and sphenoid bone fractures. A more exhaustive multicenter case-control study with a larger sample and additional parameters will be essential to reach definite conclusions regarding the spectrum of fractures of the sphenoid bone associated with facial fractures. PMID:24436756

  6. The fractal characteristic of facial anthropometric data for developing PCA fit test panels for youth born in central China.

    PubMed

    Yang, Lei; Wei, Ran; Shen, Henggen

    2017-01-01

    New principal component analysis (PCA) respirator fit test panels had been developed for current American and Chinese civilian workers based on anthropometric surveys. The PCA panels used the first two principal components (PCs) obtained from a set of 10 facial dimensions. Although the PCA panels for American and Chinese subjects adopted a bivariate framework with two PCs, the number of PCs that should be retained in the PCA analysis differed between Chinese and American subjects. For the Chinese youth group, a third PC should be retained in the PCA analysis when developing new fit test panels. In this article, an additional number label (ANL) is used to account for the third PC when the first two PCs are used to construct the PCA half-facepiece respirator fit test panel for the Chinese group. A three-dimensional box-counting method is proposed to estimate the ANLs by calculating fractal dimensions of the facial anthropometric data of Chinese youth. The linear regression coefficients of determination (R²) over the scale-free range are all above 0.960, which demonstrates that the facial anthropometric data of Chinese youth have fractal characteristics. The youth subjects born in Henan province have an ANL of 2.002, which is lower than that of the composite facial anthropometric data of Chinese subjects born in many provinces. Hence, Henan youth subjects have self-similar facial anthropometric characteristics and should use their particular ANL (2.002) as an important tool alongside the PCA panel. The ANL method proposed in this article not only provides a new methodology for quantifying the characteristics of facial anthropometric dimensions for any ethnic/racial group, but also extends the scope of PCA panel studies to higher dimensions.
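
    A plain box-counting estimate of fractal dimension (the ANL is read here as that fitted dimension; assumes the anthropometric points are min-max normalized to the unit cube):

        import numpy as np

        def box_counting_dimension(points, sizes=(2, 4, 8, 16, 32)):
            """points: (n, 3) array scaled to [0, 1]^3."""
            counts = []
            for n in sizes:                                   # n boxes per axis
                idx = np.floor(points * n).clip(0, n - 1).astype(int)
                counts.append(len({tuple(i) for i in idx}))   # occupied boxes
            # slope of log N(eps) versus log(1/eps) estimates the dimension
            slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
            return slope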

  7. Optic nerve coloboma, Dandy-Walker malformation, microglossia, tongue hamartomata, cleft palate and apneic spells: an existing oral-facial-digital syndrome or a new variant?

    PubMed

    Toriello, Helga V; Lemire, Edmond G

    2002-01-01

    We report on a female infant with postaxial polydactyly of the hands, preaxial polydactyly of the right foot, cleft palate, microglossia and tongue hamartomata consistent with an oral-facial-digital syndrome (OFDS). The patient also had optic nerve colobomata, a Dandy-Walker malformation, micrognathia and apneic spells. This combination of clinical features has not been previously reported. This patient either expands the clinical features of one of the existing OFDS or represents a new variant. A review of the literature highlights the difficulties in making a specific diagnosis because of the different classification systems that exist in the literature.

  8. Image ratio features for facial expression recognition application.

    PubMed

    Song, Mingli; Tao, Dacheng; Liu, Zicheng; Li, Xuelong; Zhou, Mengchu

    2010-06-01

    Video-based facial expression recognition is a challenging problem in computer vision and human-computer interaction. To address this problem, texture features have been extracted and widely used, because they can capture image intensity changes caused by skin deformation. However, existing texture features encounter problems with albedo and lighting variations. To solve both problems, we propose a new texture feature called image ratio features. Compared with previously proposed texture features, e.g., high-gradient-component features, image ratio features are more robust to albedo and lighting variations. In addition, to further improve facial expression recognition accuracy based on image ratio features, we combine image ratio features with facial animation parameters (FAPs), which describe the geometric motions of facial feature points. The performance evaluation is based on the Carnegie Mellon University Cohn-Kanade database, our own database, and the Japanese Female Facial Expression database. Experimental results show that the proposed image ratio feature is more robust to albedo and lighting variations, and that the combination of image ratio features and FAPs outperforms each feature alone. In addition, we study asymmetric facial expressions based on our own facial expression database and demonstrate the superior performance of our combined expression recognition system.
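
    One common formulation of the ratio-image idea, shown for intuition only (the paper's exact feature may differ): under a Lambertian model I(x) = albedo(x) * shading(x), dividing an expression frame by a neutral frame of the same subject cancels the albedo term, leaving the intensity change caused by skin deformation:

        import numpy as np

        def image_ratio(expr, neutral, eps=1e-3):
            """Pixel-wise ratio of an expression frame to a neutral reference frame."""
            return expr.astype(float) / (neutral.astype(float) + eps)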

  9. Is moral beauty different from facial beauty? Evidence from an fMRI study

    PubMed Central

    Wang, Tingting; Mo, Ce; Tan, Li Hai; Cant, Jonathan S.; Zhong, Luojin; Cupchik, Gerald

    2015-01-01

    Is moral beauty different from facial beauty? Two functional magnetic resonance imaging experiments were performed to answer this question. Experiment 1 investigated the network of moral aesthetic judgments and facial aesthetic judgments. Participants performed aesthetic judgments and gender judgments on both faces and scenes containing moral acts. The conjunction analysis of the contrasts ‘facial aesthetic judgment > facial gender judgment’ and ‘scene moral aesthetic judgment > scene gender judgment’ identified the common involvement of the orbitofrontal cortex (OFC), inferior temporal gyrus and medial superior frontal gyrus, suggesting that both types of aesthetic judgments are based on the orchestration of perceptual, emotional and cognitive components. Experiment 2 examined the network of facial beauty and moral beauty during implicit perception. Participants performed a non-aesthetic judgment task on both faces (beautiful vs common) and scenes (containing morally beautiful vs neutral information). We observed that facial beauty (beautiful faces > common faces) involved both the cortical reward region OFC and the subcortical reward region putamen, whereas moral beauty (moral beauty scenes > moral neutral scenes) only involved the OFC. Moreover, compared with facial beauty, moral beauty spanned a larger-scale cortical network, indicating more advanced and complex cerebral representations characterizing moral beauty. PMID:25298010

  10. Neurofibromatosis of the head and neck: classification and surgical management.

    PubMed

    Latham, Kerry; Buchanan, Edward P; Suver, Daniel; Gruss, Joseph S

    2015-03-01

    Neurofibromatosis occurs in one in 2500 to one in 3000 live births and presents with variable penetrance and manifestations. The management of these patients is often multidisciplinary because of the complexity of the disease. Plastic surgeons are frequently involved in the surgical management of patients with head and neck involvement. A 20-year retrospective review of patients treated surgically for head and neck neurofibroma was performed. Patients were identified according to International Classification of Diseases, Ninth Revision codes for neurofibromatosis and from the senior author's database. A total of 59 patients with head and neck neurofibroma were identified. These patients were categorized into five distinct, but not exclusive, categories to assist with diagnosis and surgical management. These categories included plexiform, cranioorbital, facial, neck, and parotid/auricular neurofibromatosis. A surgical classification system and clinical characteristics of head and neck neurofibromatosis are presented to assist practitioners with the diagnosis and surgical management of this complex disease. The surgical management of the cranioorbital type is discussed in detail in 24 patients. The importance and safety of facial nerve dissection and preservation using intraoperative nerve monitoring were validated in 16 dissections in 15 patients. Massive involvement of the neck extending from the skull base to the mediastinum, frequently considered inoperable, has been safely resected by the use of access osteotomies of the clavicle and sternum, muscle takedown, and brachial plexus dissection and preservation using intraoperative nerve monitoring. Therapeutic, IV.

  11. A neurophysiological study of facial numbness in multiple sclerosis: Integration with clinical data and imaging findings.

    PubMed

    Koutsis, Georgios; Kokotis, Panagiotis; Papagianni, Aikaterini E; Evangelopoulos, Maria-Eleftheria; Kilidireas, Constantinos; Karandreas, Nikolaos

    2016-09-01

    To integrate neurophysiological findings with clinical and imaging data in a consecutive series of multiple sclerosis (MS) patients who developed facial numbness during the course of an MS attack. Nine consecutive patients with MS and recent-onset facial numbness were studied clinically, imaged with routine MRI, and assessed neurophysiologically with trigeminal somatosensory evoked potential (TSEP), blink reflex (BR), masseter reflex (MR), facial nerve conduction, and facial muscle and masseter EMG studies. All patients had unilateral facial hypoesthesia on examination and lesions in the ipsilateral pontine tegmentum on MRI. All patients had abnormal TSEPs upon stimulation of the affected side, except for one who was tested after the numbness had remitted. The BR was the second most sensitive neurophysiological method, with 6/9 examinations exhibiting an abnormal R1 component. The MR was abnormal in 3/6 patients, always on the affected side. Facial conduction and EMG studies were normal in all patients but one. Facial numbness was always associated with abnormal TSEPs. A concomitant R1 abnormality on BR allowed localization of the responsible pontine lesion, which closely corresponded with the MRI findings. We conclude that neurophysiological assessment of MS patients with facial numbness is a sensitive tool that complements MRI and can improve lesion localization. Copyright © 2016 Elsevier B.V. All rights reserved.

  12. Reduced emotion processing efficiency in healthy males relative to females

    PubMed Central

    Rapport, Lisa J.; Briceno, Emily M.; Haase, Brennan D.; Vederman, Aaron C.; Bieliauskas, Linas A.; Welsh, Robert C.; Starkman, Monica N.; McInnis, Melvin G.; Zubieta, Jon-Kar; Langenecker, Scott A.

    2014-01-01

    This study examined sex differences in categorization of facial emotions and activation of brain regions supportive of those classifications. In Experiment 1, performance on the Facial Emotion Perception Test (FEPT) was examined among 75 healthy females and 63 healthy males. Females were more accurate in the categorization of fearful expressions relative to males. In Experiment 2, 3T functional magnetic resonance imaging data were acquired for a separate sample of 21 healthy females and 17 healthy males while performing the FEPT. Activation to neutral facial expressions was subtracted from activation to sad, angry, fearful and happy facial expressions. Although females and males demonstrated activation in some overlapping regions for all emotions, many regions were exclusive to females or males. For anger, sad and happy, males displayed a larger extent of activation than did females, and greater height of activation was detected in diffuse cortical and subcortical regions. For fear, males displayed greater activation than females only in right postcentral gyri. With one exception in females, performance was not associated with activation. Results suggest that females and males process emotions using different neural pathways, and these differences cannot be explained by performance variations. PMID:23196633

  13. Warriors and peacekeepers: testing a biosocial implicit leadership hypothesis of intergroup relations using masculine and feminine faces.

    PubMed

    Spisak, Brian R; Dekker, Peter H; Krüger, Max; van Vugt, Mark

    2012-01-01

    This paper examines the impact of facial cues on leadership emergence. Using evolutionary social psychology, we expand upon implicit and contingent theories of leadership and propose that different types of intergroup relations elicit different implicit cognitive leadership prototypes. It is argued that a biologically based hormonal connection between behavior and corresponding facial characteristics interacts with evolutionarily consistent social dynamics to influence leadership emergence. We predict that masculine-looking leaders are selected during intergroup conflict (war) and feminine-looking leaders during intergroup cooperation (peace). Across two experiments we show that a general categorization of leader versus nonleader is an initial implicit requirement for emergence, and at a context-specific level facial cues of masculinity and femininity contingently affect war versus peace leadership emergence in the predicted direction. In addition, we replicate our findings in Experiment 1 across culture using Western and East Asian samples. In Experiment 2, we also show that masculine-feminine facial cues are better predictors of leadership than male-female cues. Collectively, our results indicate a multi-level classification of context-specific leadership based on visual cues imbedded in the human face and challenge traditional distinctions of male and female leadership.

  14. Warriors and Peacekeepers: Testing a Biosocial Implicit Leadership Hypothesis of Intergroup Relations Using Masculine and Feminine Faces

    PubMed Central

    Spisak, Brian R.; Dekker, Peter H.; Krüger, Max; van Vugt, Mark

    2012-01-01

    This paper examines the impact of facial cues on leadership emergence. Using evolutionary social psychology, we expand upon implicit and contingent theories of leadership and propose that different types of intergroup relations elicit different implicit cognitive leadership prototypes. It is argued that a biologically based hormonal connection between behavior and corresponding facial characteristics interacts with evolutionarily consistent social dynamics to influence leadership emergence. We predict that masculine-looking leaders are selected during intergroup conflict (war) and feminine-looking leaders during intergroup cooperation (peace). Across two experiments we show that a general categorization of leader versus nonleader is an initial implicit requirement for emergence, and at a context-specific level facial cues of masculinity and femininity contingently affect war versus peace leadership emergence in the predicted direction. In addition, we replicate our findings in Experiment 1 across culture using Western and East Asian samples. In Experiment 2, we also show that masculine-feminine facial cues are better predictors of leadership than male-female cues. Collectively, our results indicate a multi-level classification of context-specific leadership based on visual cues imbedded in the human face and challenge traditional distinctions of male and female leadership. PMID:22276190

  15. Morcellized Omental Transfer for Severe HIV Facial Wasting

    PubMed Central

    Bohorquez, Marlon; Podbielski, Francis J.

    2013-01-01

    Background: A novel surgical technique to reconstruct facial wasting was developed for patients with severe human immunodeficiency virus lipoatrophy and no source of subcutaneous fat for donor material. Fourteen patients underwent endoscopic harvest of omentum, extracorporeal morcellation, and autologous transfer to the face. Methods: Omental fat was harvested using a standard 3-port laparoscopic technique. A mechanical tissue processor created morsels suitable for transfer. Gold-plated, multi-holed catheters delivered living particulate fat to the subcutaneous planes of the buccal, malar, lateral cheek, and temporal regions. Results were evaluated using standardized pre- and postoperative photographs for specific anatomic criteria found along the typical progression of the disease process. Results: Electron microscopy confirmed that morcellized fat retained intact cell walls and was appropriate for autologous transfer. Complications were minor and transient. Patients were discharged home within 24 hours. No patient required open laparotomy. Survival of the adipose grafts was deemed good to excellent in 13 of the 14 cases. Conclusions: Mechanically morcellized omental fat transfer provides a safe option to restore facial volume in those unusual patients with severe wasting and no available subcutaneous tissue for transfer. Consistent anatomic progression of facial wasting permits preoperative classification, counseling of patients, and postoperative evaluation of surgical improvement. PMID:25289268

  16. Role of electrical stimulation added to conventional therapy in patients with idiopathic facial (Bell) palsy.

    PubMed

    Tuncay, Figen; Borman, Pinar; Taşer, Burcu; Ünlü, İlhan; Samim, Erdal

    2015-03-01

    The aim of this study was to determine the efficacy of electrical stimulation when added to conventional physical therapy with regard to clinical and neurophysiologic changes in patients with Bell palsy. This was a randomized controlled trial. Sixty patients diagnosed with Bell palsy (39 right sided, 21 left sided) were included in the study. Patients were randomly divided into two therapy groups. Group 1 received physical therapy applying hot pack, facial expression exercises, and massage to the facial muscles, whereas group 2 received electrical stimulation treatment in addition to the physical therapy, 5 days per week for a period of 3 wks. Patients were evaluated clinically and electrophysiologically before treatment (at the fourth week of the palsy) and again 3 mos later. Outcome measures included the House-Brackmann scale and Facial Disability Index scores, as well as facial nerve latencies and amplitudes of compound muscle action potentials derived from the frontalis and orbicularis oris muscles. Twenty-nine men (48.3%) and 31 women (51.7%) with Bell palsy were included in the study. In group 1, 16 (57.1%) patients had no axonal degeneration and 12 (42.9%) had axonal degeneration, compared with 17 (53.1%) and 15 (46.9%) patients in group 2, respectively. The baseline House-Brackmann and Facial Disability Index scores were similar between the groups. At 3 mos after onset, the Facial Disability Index scores were improved similarly in both groups. The classification of patients according to House-Brackmann scale revealed greater improvement in group 2 than in group 1. The mean motor nerve latencies and compound muscle action potential amplitudes of both facial muscles were statistically shorter in group 2, whereas only the mean motor latency of the frontalis muscle decreased in group 1. The addition of 3 wks of daily electrical stimulation shortly after facial palsy onset (4 wks), improved functional facial movements and electrophysiologic outcome measures at the 3-mo follow-up in patients with Bell palsy. Further research focused on determining the most effective dosage and length of intervention with electrical stimulation is warranted.

  17. Eigen-disfigurement model for simulating plausible facial disfigurement after reconstructive surgery.

    PubMed

    Lee, Juhun; Fingeret, Michelle C; Bovik, Alan C; Reece, Gregory P; Skoracki, Roman J; Hanasono, Matthew M; Markey, Mia K

    2015-03-27

    Patients with facial cancers can experience disfigurement as they may undergo considerable appearance changes from their illness and its treatment. Individuals with difficulties adjusting to facial cancer are concerned about how others perceive and evaluate their appearance. Therefore, it is important to understand how humans perceive disfigured faces. We describe a new strategy that allows simulation of surgically plausible facial disfigurement on a novel face for elucidating human perception of facial disfigurement. Longitudinal 3D facial images of patients (N = 17) with facial disfigurement due to cancer treatment were replicated on a facial mannequin model by applying Thin-Plate Spline (TPS) warping and linear interpolation in polar coordinates. Principal Component Analysis (PCA) was used to capture the longitudinal structural and textural variations found within each patient arising from the treatment; we treated such variations as disfigurement. Each disfigurement was smoothly stitched onto a healthy face by seeking a Poisson solution to guided interpolation, using the gradient of the learned disfigurement as the guidance field vector. The modeling technique was quantitatively evaluated, and panel ratings by experienced medical professionals on the plausibility of the simulations were used to evaluate the proposed disfigurement model. The algorithm reproduced the given face effectively on the facial mannequin model, with less than 4.4 mm maximum error for validation fiducial points that were not used in the processing. Panel ratings showed that the disfigurement model (especially for peripheral disfigurement) yielded predictions comparable to the real disfigurements. The modeling technique is thus able to capture facial disfigurements, and its simulations represent plausible outcomes of reconstructive surgery for facial cancers, so our technique can be used to study human perception of facial disfigurement.
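
    As a rough illustration of the Poisson guided-interpolation step described above: OpenCV's seamlessClone solves the same kind of Poisson problem, using the source gradients as the guidance field. This is a minimal sketch on synthetic placeholder images, not the authors' data or code.

```python
import cv2
import numpy as np

# Synthetic stand-ins: a flat "healthy face" canvas and a darker patch playing
# the role of a learned disfigurement rendered as an image. Real inputs would
# be the fitted texture maps described in the abstract.
healthy = np.full((256, 256, 3), 180, dtype=np.uint8)
patch = np.full((80, 80, 3), 120, dtype=np.uint8)

# Binary mask marking which pixels of the patch participate in the blend
mask = 255 * np.ones(patch.shape[:2], dtype=np.uint8)
center = (128, 128)  # placement of the patch on the target face

# seamlessClone solves a Poisson equation with the source gradients as the
# guidance field, the same guided-interpolation idea used in the paper.
blended = cv2.seamlessClone(patch, healthy, mask, center, cv2.NORMAL_CLONE)
print(blended.shape)
```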

  18. Appraisals Generate Specific Configurations of Facial Muscle Movements in a Gambling Task: Evidence for the Component Process Model of Emotion.

    PubMed

    Gentsch, Kornelia; Grandjean, Didier; Scherer, Klaus R

    2015-01-01

    Scherer's Component Process Model provides a theoretical framework for research on the production mechanism of emotion and facial emotional expression. The model predicts that appraisal results drive facial expressions, which unfold sequentially and cumulatively over time. In two experiments, we examined facial muscle activity changes (via facial electromyography recordings over the corrugator, cheek, and frontalis regions) in response to events in a gambling task. These events were experimentally manipulated feedback stimuli which presented simultaneous information directly affecting goal conduciveness (gambling outcome: win, loss, or break-even) and power appraisals (Experiments 1 and 2), as well as control appraisal (Experiment 2). We repeatedly found main effects of goal conduciveness (starting ~600 ms) and power appraisals (starting ~800 ms after feedback onset). Control appraisal main effects were inconclusive. Interaction effects of goal conduciveness and power appraisals were obtained in both experiments (Experiment 1: over the corrugator and cheek regions; Experiment 2: over the frontalis region), suggesting amplified goal conduciveness effects when power was high in contrast to invariant goal conduciveness effects when power was low. An interaction of goal conduciveness and control appraisals was also found over the cheek region, showing differential goal conduciveness effects when control was high and invariant effects when control was low. These interaction effects suggest that the appraisal of having sufficient control or power affects facial responses towards gambling outcomes. The result pattern suggests that the corrugator and frontalis regions are primarily related to cognitive operations that process motivational pertinence, whereas the cheek region is more influenced by coping implications. Our results provide the first evidence demonstrating that cognitive-evaluative mechanisms related to goal conduciveness, control, and power appraisals affect facial expressions dynamically over time, immediately after an event is perceived. In addition, our results provide further indications for the chronography of appraisal-driven facial movements and the underlying cognitive processes.

  19. Appraisals Generate Specific Configurations of Facial Muscle Movements in a Gambling Task: Evidence for the Component Process Model of Emotion

    PubMed Central

    Gentsch, Kornelia; Grandjean, Didier; Scherer, Klaus R.

    2015-01-01

    Scherer’s Component Process Model provides a theoretical framework for research on the production mechanism of emotion and facial emotional expression. The model predicts that appraisal results drive facial expressions, which unfold sequentially and cumulatively over time. In two experiments, we examined facial muscle activity changes (via facial electromyography recordings over the corrugator, cheek, and frontalis regions) in response to events in a gambling task. These events were experimentally manipulated feedback stimuli which presented simultaneous information directly affecting goal conduciveness (gambling outcome: win, loss, or break-even) and power appraisals (Experiments 1 and 2), as well as control appraisal (Experiment 2). We repeatedly found main effects of goal conduciveness (starting ~600 ms) and power appraisals (starting ~800 ms after feedback onset). Control appraisal main effects were inconclusive. Interaction effects of goal conduciveness and power appraisals were obtained in both experiments (Experiment 1: over the corrugator and cheek regions; Experiment 2: over the frontalis region), suggesting amplified goal conduciveness effects when power was high in contrast to invariant goal conduciveness effects when power was low. An interaction of goal conduciveness and control appraisals was also found over the cheek region, showing differential goal conduciveness effects when control was high and invariant effects when control was low. These interaction effects suggest that the appraisal of having sufficient control or power affects facial responses towards gambling outcomes. The result pattern suggests that the corrugator and frontalis regions are primarily related to cognitive operations that process motivational pertinence, whereas the cheek region is more influenced by coping implications. Our results provide the first evidence demonstrating that cognitive-evaluative mechanisms related to goal conduciveness, control, and power appraisals affect facial expressions dynamically over time, immediately after an event is perceived. In addition, our results provide further indications for the chronography of appraisal-driven facial movements and the underlying cognitive processes. PMID:26295338

  20. Recognizing Action Units for Facial Expression Analysis

    PubMed Central

    Tian, Ying-li; Kanade, Takeo; Cohn, Jeffrey F.

    2010-01-01

    Most automatic expression analysis systems attempt to recognize a small set of prototypic expressions, such as happiness, anger, surprise, and fear. Such prototypic expressions, however, occur rather infrequently. Human emotions and intentions are more often communicated by changes in one or a few discrete facial features. In this paper, we develop an Automatic Face Analysis (AFA) system to analyze facial expressions based on both permanent facial features (brows, eyes, mouth) and transient facial features (deepening of facial furrows) in a nearly frontal-view face image sequence. The AFA system classifies fine-grained changes in facial expression into action units (AUs) of the Facial Action Coding System (FACS), instead of a few prototypic expressions. Multistate face and facial component models are proposed for tracking and modeling the various facial features, including lips, eyes, brows, cheeks, and furrows. During tracking, detailed parametric descriptions of the facial features are extracted. With these parameters as inputs, a group of action units (neutral expression, six upper-face AUs, and 10 lower-face AUs) are recognized whether they occur alone or in combination. The system has achieved average recognition rates of 96.4 percent (95.4 percent if neutral expressions are excluded) for upper-face AUs and 96.7 percent (95.6 percent with neutral expressions excluded) for lower-face AUs. The generalizability of the system has been tested using independent image databases collected and FACS-coded for ground truth by different research teams. PMID:25210210
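
    The pipeline above (tracked feature parameters in, AU labels out) can be caricatured with any off-the-shelf classifier. The sketch below uses a small scikit-learn MLP on invented feature vectors and labels; it is not the authors' network, and the feature layout and class count are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Each row stands in for parameters from the multistate feature models
# (e.g., lip height/width, eye opening, brow distance, furrow presence).
X_train = np.random.rand(200, 15)        # placeholder feature vectors
y_train = np.random.randint(0, 7, 200)   # placeholder labels: neutral + 6 AUs

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
clf.fit(X_train, y_train)

x_new = np.random.rand(1, 15)            # parameters tracked from a new frame
print("predicted AU class:", clf.predict(x_new)[0])
```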

  1. Dividing the Self: Distinct Neural Substrates of Task-Based and Automatic Self-Prioritization after Brain Damage

    ERIC Educational Resources Information Center

    Sui, Jie; Chechlacz, Magdalena; Humphreys, Glyn W.

    2012-01-01

    Facial self-awareness is a basic human ability dependent on a distributed bilateral neural network and revealed through prioritized processing of our own over other faces. Using non-prosopagnosic patients we show, for the first time, that facial self-awareness can be fractionated into different component processes. Patients performed two face…

  2. Differential Gaze Patterns on Eyes and Mouth During Audiovisual Speech Segmentation

    PubMed Central

    Lusk, Laina G.; Mitchel, Aaron D.

    2016-01-01

    Speech is inextricably multisensory: both auditory and visual components provide critical information for all aspects of speech processing, including speech segmentation, the visual components of which have been the target of a growing number of studies. In particular, a recent study (Mitchel and Weiss, 2014) established that adults can utilize facial cues (i.e., visual prosody) to identify word boundaries in fluent speech. The current study expanded upon these results, using an eye tracker to identify highly attended facial features of the audiovisual display used in Mitchel and Weiss (2014). Subjects spent the most time watching the eyes and mouth. A significant trend in gaze durations was found with the longest gaze duration on the mouth, followed by the eyes and then the nose. In addition, eye-gaze patterns changed across familiarization as subjects learned the word boundaries, showing decreased attention to the mouth in later blocks while attention on other facial features remained consistent. These findings highlight the importance of the visual component of speech processing and suggest that the mouth may play a critical role in visual speech segmentation. PMID:26869959
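
    A dwell-time computation of the kind behind such gaze-duration results can be sketched as follows; the AOI rectangles, sampling rate, and gaze samples are all invented for illustration.

```python
from collections import defaultdict

SAMPLE_MS = 1000 / 60.0  # hypothetical 60 Hz eye tracker

# Areas of interest as (x_min, y_min, x_max, y_max) screen rectangles
AOIS = {
    "eyes":  (300, 200, 500, 260),
    "nose":  (360, 260, 440, 330),
    "mouth": (340, 330, 460, 400),
}

def dwell_times(gaze_samples):
    """Sum per-AOI looking time (ms) over (x, y) gaze samples."""
    totals = defaultdict(float)
    for x, y in gaze_samples:
        for name, (x0, y0, x1, y1) in AOIS.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                totals[name] += SAMPLE_MS
                break
    return dict(totals)

print(dwell_times([(400, 220), (400, 350), (410, 352), (50, 50)]))
```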

  3. The effect of forced choice on facial emotion recognition: a comparison to open verbal classification of emotion labels

    PubMed Central

    Limbrecht-Ecklundt, Kerstin; Scheck, Andreas; Jerg-Bretzke, Lucia; Walter, Steffen; Hoffmann, Holger; Traue, Harald C.

    2013-01-01

    Objective: This article includes the examination of potential methodological problems of the application of a forced choice response format in facial emotion recognition. Methodology: 33 subjects were presented with validated facial stimuli. The task was to make a decision about which emotion was shown. In addition, the subjective certainty concerning the decision was recorded. Results: The detection rates are 68% for fear, 81% for sadness, 85% for anger, 87% for surprise, 88% for disgust, and 94% for happiness, and are thus well above the random probability. Conclusion: This study refutes the concern that the use of forced choice formats may not adequately reflect actual recognition performance. The use of standardized tests to examine emotion recognition ability leads to valid results and can be used in different contexts. For example, the images presented here appear suitable for diagnosing deficits in emotion recognition in the context of psychological disorders and for mapping treatment progress. PMID:23798981

  4. Does vigilance to pain make individuals experts in facial recognition of pain?

    PubMed

    Baum, Corinna; Kappesser, Judith; Schneider, Raphaela; Lautenbacher, Stefan

    2013-01-01

    It is well known that individual factors are important in the facial recognition of pain. However, it is unclear whether vigilance to pain as a pain-related attentional mechanism is among these relevant factors. Vigilance to pain may have two different effects on the recognition of facial pain expressions: pain-vigilant individuals may detect pain faces better but overinclude other facial displays, misinterpreting them as expressing pain; or they may be true experts in discriminating between pain and other facial expressions. The present study aimed to test these two hypotheses. Furthermore, pain vigilance was assumed to be a distinct predictor, the impact of which on recognition cannot be completely replaced by related concepts such as pain catastrophizing and fear of pain. Photographs of neutral, happy, angry and pain facial expressions were presented to 40 healthy participants, who were asked to classify them into the appropriate emotion categories and provide a confidence rating for each classification. Additionally, potential predictors of the discrimination performance for pain and anger faces – pain vigilance, pain-related catastrophizing, fear of pain – were assessed using self-report questionnaires. Pain-vigilant participants classified pain faces more accurately and did not misclassify anger as pain faces more frequently. However, vigilance to pain was not related to the confidence of recognition ratings. Pain catastrophizing and fear of pain did not account for the recognition performance. Moderate pain vigilance, as assessed in the present study, appears to be associated with appropriate detection of pain-related cues and not necessarily with the overinclusion of other negative cues.

  5. Does vigilance to pain make individuals experts in facial recognition of pain?

    PubMed Central

    Baum, Corinna; Kappesser, Judith; Schneider, Raphaela; Lautenbacher, Stefan

    2013-01-01

    BACKGROUND: It is well known that individual factors are important in the facial recognition of pain. However, it is unclear whether vigilance to pain as a pain-related attentional mechanism is among these relevant factors. OBJECTIVES: Vigilance to pain may have two different effects on the recognition of facial pain expressions: pain-vigilant individuals may detect pain faces better but overinclude other facial displays, misinterpreting them as expressing pain; or they may be true experts in discriminating between pain and other facial expressions. The present study aimed to test these two hypotheses. Furthermore, pain vigilance was assumed to be a distinct predictor, the impact of which on recognition cannot be completely replaced by related concepts such as pain catastrophizing and fear of pain. METHODS: Photographs of neutral, happy, angry and pain facial expressions were presented to 40 healthy participants, who were asked to classify them into the appropriate emotion categories and provide a confidence rating for each classification. Additionally, potential predictors of the discrimination performance for pain and anger faces – pain vigilance, pain-related catastrophizing, fear of pain – were assessed using self-report questionnaires. RESULTS: Pain-vigilant participants classified pain faces more accurately and did not misclassify anger as pain faces more frequently. However, vigilance to pain was not related to the confidence of recognition ratings. Pain catastrophizing and fear of pain did not account for the recognition performance. CONCLUSIONS: Moderate pain vigilance, as assessed in the present study, appears to be associated with appropriate detection of pain-related cues and not necessarily with the overinclusion of other negative cues. PMID:23717826

  6. Discrimination between Demodex folliculorum (Acari: Demodicidae) isolates from China and Spain based on mitochondrial cox1 sequences*

    PubMed Central

    Zhao, Ya-e; Ma, Jun-xian; Hu, Li; Wu, Li-ping; De Rojas, Manuel

    2013-01-01

    For a long time, classification of Demodex mites has been based mainly on their hosts and phenotypic characteristics. A new subspecies of Demodex folliculorum has been proposed, but not confirmed. Here, cox1 partial sequences of nine isolates of three Demodex species from two geographical sources (China and Spain) were studied to conduct molecular identification of D. folliculorum. Sequencing showed that the mitochondrial cox1 fragments of five D. folliculorum isolates from the facial skin of Chinese individuals were 429 bp long and that their sequence identity was 97.4%. The average sequence divergence was 1.24% among the five Chinese isolates, 0.94% between the two geographical isolate groups (China (5) and Spain (1)), and 2.15% between the two facial tissue sources (facial skin (6) and eyelids (1)). The genetic distance and rate of third-position nucleotide transition/transversion were 0.0125, 2.7 (3/1) among the five Chinese isolates, 0.0094, 3.1 (3/1) between the two geographical isolate groups, and 0.0217, 4.4 (3/1) between the two facial tissue sources. Phylogenetic trees showed that D. folliculorum from the two geographical isolate groups did not form sister clades, while those from different facial tissue sources did. According to the molecular characteristics, it appears that subspecies differentiation might not have occurred and that D. folliculorum isolates from the two geographical sources are of the same population. However, population differentiation might be occurring between isolates from facial skin and eyelids. PMID:24009203
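
    The divergence percentages quoted above come from comparing aligned sequences site by site. A minimal uncorrected p-distance sketch, on made-up stand-in sequences rather than the real cox1 data:

```python
def p_distance(seq_a, seq_b):
    """Proportion of differing sites between two aligned, equal-length sequences."""
    assert len(seq_a) == len(seq_b)
    diffs = sum(1 for a, b in zip(seq_a, seq_b) if a != b)
    return diffs / len(seq_a)

# Hypothetical short fragments, not the 429 bp cox1 sequences from the study
iso1 = "ATGCGTACGTTAGC"
iso2 = "ATGCGTACGTCAGC"
print(f"divergence: {100 * p_distance(iso1, iso2):.2f}%")
```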

  7. [Relationship between Work Ⅱ type of congenital first branchial cleft anomaly and facial nerve and surgical strategies].

    PubMed

    Zhang, B; Chen, L S; Huang, S L; Liang, L; Gong, X X; Wu, P N; Zhang, S Y; Luo, X N; Zhan, J D; Sheng, X L; Lu, Z M

    2017-10-07

    Objective: To investigate the relationship between Work type Ⅱ congenital first branchial cleft anomaly (CFBCA) and the facial nerve and to discuss surgical strategies. Methods: Retrospective analysis of 37 patients with CFBCA who were treated from May 2005 to September 2016. Among the 37 cases with CFBCA, there were 12 males and 25 females; 24 on the left and 13 on the right; the age at diagnosis ranged from 1 to 76 years, with a median age of 20; 24 cases were aged 18 years or less and 13 were older than 18 years; duration of disease ranged from 1 to 10 years (median of 6 years); 4 cases were recurrent after fistula resection. According to the classification of Olsen, all 37 cases were non-cysts (sinus or fistula). The external fistula was located above the mandibular angle in 28 (75.7%) cases and below the angle in 9 (24.3%) cases. Results: Surgeries were performed successfully in all 37 cases. Lesions were located anterior to the facial nerve in 13 (35.1%) cases, coursed between its branches in 3 cases (8.1%), and lay deep to the facial nerve in 21 (56.8%) cases. CFBCA in females with an external fistula below the mandibular angle and a membranous band was more likely to lie deep to the facial nerve than in males with an external fistula above the mandibular angle and without a myringeal web. Conclusions: CFBCA in female patients with an external fistula located below the mandibular angle, a non-cyst lesion by the Olsen classification, or a myringeal web is more likely to lie deep to the facial nerve. Surgeons should take particular care to protect the facial nerve in these patients; if necessary, facial nerve monitoring can be used during surgery to achieve complete resection of the lesions.

  8. Association between ratings of facial attractiveness and patients' motivation for orthognathic surgery.

    PubMed

    Vargo, J K; Gladwin, M; Ngan, P

    2003-02-01

    To compare the judgments of facial esthetics, defects and treatment needs between laypersons and professionals (orthodontists and oral surgeons) as predictors of patients' motivation for orthognathic surgery. Two panels of expert and naïve raters were asked to evaluate photographs of orthognathic surgery patients for facial esthetics, defects and treatment needs. Results were correlated with patients' motivation for surgery. Fifty-seven patients (37 females and 20 males) with a mean age of 26.0 +/- 6.7 years were interviewed prior to orthognathic surgery treatment. Three color photographs of each patient were evaluated by a panel of 14 experts and a panel of 18 laypersons. Each panel of raters was asked to evaluate facial morphology and facial attractiveness and to recommend surgical treatment (independent variables). The dependent variable was the patient's motivation for orthognathic surgery. Outcome measure: rater reliability was analyzed using an unweighted kappa coefficient and a Cronbach alpha coefficient. Correlations and regression analyses were used to quantify the relationship between variables. Expert raters provided reliable ratings of certain morphological features such as excessive gingival display and classification of mandibular facial form and position. Based on the facial photographs, both expert and naïve raters agreed on the facial attractiveness of patients. The best predictors of patients' motivation for surgery were the naïve profile attractiveness rating and the patients' expected change in self-consciousness. Expert raters provide more reliable ratings on certain morphologic features. However, the layperson's profile attractiveness rating and the patients' expected change in self-consciousness were the best predictors of patients' motivation for surgery. These data suggest that patients' motives for treatment are not necessarily related to objectively determined need. Patients' decision to seek treatment was more strongly correlated with laypersons' ratings of attractiveness because patients see what other laypersons see, and are directly or indirectly affected by others' reactions to their appearance. These findings may provide useful information for clinicians in counseling patients who seek orthognathic surgery.
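
    The two reliability statistics named above are easy to reproduce on toy data. The sketch below uses scikit-learn's cohen_kappa_score for the unweighted kappa and a hand-rolled Cronbach alpha; all ratings are invented.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Invented categorical ratings of the same 8 patients by two raters
rater_a = [1, 0, 2, 1, 1, 0, 2, 2]
rater_b = [1, 0, 2, 0, 1, 0, 2, 1]
print("unweighted kappa:", round(cohen_kappa_score(rater_a, rater_b), 3))

def cronbach_alpha(ratings):
    """Internal consistency of a panel; ratings is a subjects x raters matrix."""
    ratings = np.asarray(ratings, dtype=float)
    k = ratings.shape[1]
    item_var = ratings.var(axis=0, ddof=1).sum()    # sum of per-rater variances
    total_var = ratings.sum(axis=1).var(ddof=1)     # variance of subject totals
    return k / (k - 1) * (1 - item_var / total_var)

panel = np.array([[7, 6, 7], [3, 4, 3], [5, 5, 6], [2, 2, 3], [6, 7, 6]])
print("Cronbach alpha:", round(cronbach_alpha(panel), 3))
```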

  9. Estimation of human emotions using thermal facial information

    NASA Astrophysics Data System (ADS)

    Nguyen, Hung; Kotani, Kazunori; Chen, Fan; Le, Bac

    2014-01-01

    In recent years, research on human emotion estimation using thermal infrared (IR) imagery has appealed to many researchers due to its invariance to visible illumination changes. Although infrared imagery is superior to visible imagery in its invariance to illumination changes and appearance differences, it has difficulty handling eyeglasses, which block radiation in the thermal infrared spectrum. As a result, when using infrared imagery for the analysis of human facial information, the eyeglass regions appear dark and the thermal information of the eyes is lost. We propose a temperature space method to correct the effect of eyeglasses using the thermal facial information in the neighboring facial regions, and then use Principal Component Analysis (PCA), the Eigen-space Method based on class-features (EMC), and a combined PCA-EMC method to classify human emotions from the corrected thermal images. We collected the Kotani Thermal Facial Emotion (KTFE) database and performed experiments, which show an improved accuracy rate in estimating human emotions.
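
    The PCA-based classification step can be caricatured as projection into an eigenspace followed by a simple class-prototype rule. Since EMC has no common library implementation, nearest-centroid classification in PCA space stands in for it below, on random placeholder data.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import NearestCentroid

# Placeholder corrected thermal images, flattened to pixel vectors
X_train = np.random.rand(120, 64 * 64)
y_train = np.random.randint(0, 4, 120)   # placeholder emotion labels

pca = PCA(n_components=20).fit(X_train)  # learn the thermal eigenspace
clf = NearestCentroid().fit(pca.transform(X_train), y_train)

x_new = np.random.rand(1, 64 * 64)       # a new corrected thermal image
print("estimated emotion class:", clf.predict(pca.transform(x_new))[0])
```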

  10. Facial duplication: case, review, and embryogenesis.

    PubMed

    Barr, M

    1982-04-01

    The craniofacial anatomy of an infant with facial duplication is described. There were four eyes, two noses, two maxillae, and one mandible. Anterior to the single pituitary the brain was duplicated, and there was bilateral arhinencephaly. Portions of the brain were extruded into a large frontal encephalocele. Cases of symmetrical facial duplication reported in the literature range from two complete faces on a single head (diprosopus) to simple nasal duplication. The variety of patterns of duplication suggests that the doubling of facial components arises in several different ways: forking of the notochord, duplication of the prosencephalon, duplication of the olfactory placodes, and duplication of maxillary and/or mandibular growth centers around the margins of the stomodeal plate. Among reported cases, the female:male ratio is 2:1.

  11. Laterality of facial expressions of emotion: Universal and culture-specific influences.

    PubMed

    Mandal, Manas K; Ambady, Nalini

    2004-01-01

    Recent research indicates that (a) the perception and expression of facial emotion are lateralized to a great extent in the right hemisphere, and, (b) whereas facial expressions of emotion embody universal signals, culture-specific learning moderates the expression and interpretation of these emotions. In the present article, we review the literature on laterality and universality, and propose that, although some components of facial expressions of emotion are governed biologically, others are culturally influenced. We suggest that the left side of the face is more expressive of emotions, is more uninhibited, and displays culture-specific emotional norms. The right side of face, on the other hand, is less susceptible to cultural display norms and exhibits more universal emotional signals. Copyright 2004 IOS Press

  12. Alexithymia and the labeling of facial emotions: response slowing and increased motor and somatosensory processing

    PubMed Central

    2014-01-01

    Background Alexithymia is a personality trait that is characterized by difficulties in identifying and describing feelings. Previous studies have shown that alexithymia is related to problems in recognizing others’ emotional facial expressions when these are presented with temporal constraints. These problems can be less severe when the expressions are visible for a relatively long time. Because the neural correlates of these recognition deficits are still relatively unexplored, we investigated the labeling of facial emotions and brain responses to facial emotions as a function of alexithymia. Results Forty-eight healthy participants had to label the emotional expression (angry, fearful, happy, or neutral) of faces presented for 1 or 3 seconds in a forced-choice format while undergoing functional magnetic resonance imaging. The participants’ level of alexithymia was assessed using self-report and interview. In light of the previous findings, we focused our analysis on the alexithymia component of difficulties in describing feelings. Difficulties describing feelings, as assessed by the interview, were associated with increased reaction times for negative (i.e., angry and fearful) faces, but not with labeling accuracy. Moreover, individuals with higher alexithymia showed increased brain activation in the somatosensory cortex and supplementary motor area (SMA) in response to angry and fearful faces. These cortical areas are known to be involved in the simulation of the bodily (motor and somatosensory) components of facial emotions. Conclusion The present data indicate that alexithymic individuals may use information related to bodily actions rather than affective states to understand the facial expressions of other persons. PMID:24629094

  13. FaceWarehouse: a 3D facial expression database for visual computing.

    PubMed

    Cao, Chen; Weng, Yanlin; Zhou, Shun; Tong, Yiying; Zhou, Kun

    2014-03-01

    We present FaceWarehouse, a database of 3D facial expressions for visual computing applications. We use Kinect, an off-the-shelf RGBD camera, to capture 150 individuals aged 7-80 from various ethnic backgrounds. For each person, we captured the RGBD data of her different expressions, including the neutral expression and 19 other expressions such as mouth-opening, smile, kiss, etc. For every RGBD raw data record, a set of facial feature points on the color image such as eye corners, mouth contour, and the nose tip are automatically localized, and manually adjusted if better accuracy is required. We then deform a template facial mesh to fit the depth data as closely as possible while matching the feature points on the color image to their corresponding points on the mesh. Starting from these fitted face meshes, we construct a set of individual-specific expression blendshapes for each person. These meshes with consistent topology are assembled as a rank-3 tensor to build a bilinear face model with two attributes: identity and expression. Compared with previous 3D facial databases, for every person in our database, there is a much richer matching collection of expressions, enabling depiction of most human facial actions. We demonstrate the potential of FaceWarehouse for visual computing with four applications: facial image manipulation, face component transfer, real-time performance-based facial image animation, and facial animation retargeting from video to image.
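
    The rank-3 tensor assembly behind such a bilinear model can be sketched with plain NumPy: stack registered meshes into a vertices x identity x expression array and take truncated SVDs of its mode unfoldings. The dimensions below are deliberately tiny placeholders, far smaller than the database described above.

```python
import numpy as np

# Hypothetical downsized dimensions: 100 3D vertices, 30 identities, 5 expressions
n_verts, n_ids, n_exprs = 3 * 100, 30, 5
data = np.random.rand(n_verts, n_ids, n_exprs)   # placeholder registered meshes

# Mode-2 (identity) and mode-3 (expression) unfoldings of the data tensor
unfold_id = data.transpose(1, 0, 2).reshape(n_ids, -1)
unfold_expr = data.transpose(2, 0, 1).reshape(n_exprs, -1)

# Truncated SVD of each unfolding yields a basis for each attribute,
# the standard first step of a higher-order SVD for a bilinear model
_, _, id_basis = np.linalg.svd(unfold_id, full_matrices=False)
_, _, expr_basis = np.linalg.svd(unfold_expr, full_matrices=False)
print(id_basis.shape, expr_basis.shape)
```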

  14. Is moral beauty different from facial beauty? Evidence from an fMRI study.

    PubMed

    Wang, Tingting; Mo, Lei; Mo, Ce; Tan, Li Hai; Cant, Jonathan S; Zhong, Luojin; Cupchik, Gerald

    2015-06-01

    Is moral beauty different from facial beauty? Two functional magnetic resonance imaging experiments were performed to answer this question. Experiment 1 investigated the network of moral aesthetic judgments and facial aesthetic judgments. Participants performed aesthetic judgments and gender judgments on both faces and scenes containing moral acts. The conjunction analysis of the contrasts 'facial aesthetic judgment > facial gender judgment' and 'scene moral aesthetic judgment > scene gender judgment' identified the common involvement of the orbitofrontal cortex (OFC), inferior temporal gyrus and medial superior frontal gyrus, suggesting that both types of aesthetic judgments are based on the orchestration of perceptual, emotional and cognitive components. Experiment 2 examined the network of facial beauty and moral beauty during implicit perception. Participants performed a non-aesthetic judgment task on both faces (beautiful vs common) and scenes (containing morally beautiful vs neutral information). We observed that facial beauty (beautiful faces > common faces) involved both the cortical reward region OFC and the subcortical reward region putamen, whereas moral beauty (moral beauty scenes > moral neutral scenes) only involved the OFC. Moreover, compared with facial beauty, moral beauty spanned a larger-scale cortical network, indicating more advanced and complex cerebral representations characterizing moral beauty. © The Author (2014). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  15. [Emotion Recognition in Patients with Peripheral Facial Paralysis - A Pilot Study].

    PubMed

    Konnerth, V; Mohr, G; von Piekartz, H

    2016-02-01

    The perception of emotions is an important component enabling human beings to interact socially in everyday life. The ability to recognize emotions in another person's facial expression is thus a key prerequisite. The present study aimed to evaluate the ability of subjects with peripheral facial paresis to perceive emotions in the faces of healthy individuals. A pilot study was conducted in which 13 people with peripheral facial paresis participated. The assessment included the Facially Expressed Emotion Labeling Test (FEEL-Test), the Facial Laterality Recognition Test (FLR-Test) and the Toronto Alexithymia Scale 26 (TAS-26). The results were compared with data from healthy people in other studies. In contrast to healthy individuals, the subjects with facial paresis had more difficulty recognizing basic emotions; however, the difference was not significant. The participants were significantly slower (right/left: p<0.001) at perceiving facial laterality compared with healthy people. With regard to alexithymia, the tested group showed significantly higher scores (p<0.001) than unimpaired people. This pilot study therefore does not demonstrate an impairment of this patient group's ability to recognize emotions and facial laterality. For future studies the research question should be verified in a larger sample. © Georg Thieme Verlag KG Stuttgart · New York.

  16. Facial identity and facial expression are initially integrated at visual perceptual stages of face processing.

    PubMed

    Fisher, Katie; Towler, John; Eimer, Martin

    2016-01-08

    It is frequently assumed that facial identity and facial expression are analysed in functionally and anatomically distinct streams within the core visual face processing system. To investigate whether expression and identity interact during the visual processing of faces, we employed a sequential matching procedure where participants compared either the identity or the expression of two successively presented faces, and ignored the other irrelevant dimension. Repetitions versus changes of facial identity and expression were varied independently across trials, and event-related potentials (ERPs) were recorded during task performance. Irrelevant facial identity and irrelevant expression both interfered with performance in the expression and identity matching tasks. These symmetrical interference effects show that neither identity nor expression can be selectively ignored during face matching, and suggest that they are not processed independently. N250r components to identity repetitions that reflect identity matching mechanisms in face-selective visual cortex were delayed and attenuated when there was an expression change, demonstrating that facial expression interferes with visual identity matching. These findings provide new evidence for interactions between facial identity and expression within the core visual processing system, and question the hypothesis that these two attributes are processed independently. Copyright © 2015 Elsevier Ltd. All rights reserved.

  17. Component Structure of Individual Differences in True and False Recognition of Faces

    ERIC Educational Resources Information Center

    Bartlett, James C.; Shastri, Kalyan K.; Abdi, Herve; Neville-Smith, Marsha

    2009-01-01

    Principal-component analyses of 4 face-recognition studies uncovered 2 independent components. The first component was strongly related to false-alarm errors with new faces as well as to facial "conjunctions" that recombine features of previously studied faces. The second component was strongly related to hits as well as to the conjunction/new…

  18. Closed-loop control for cardiopulmonary management and intensive care unit sedation using digital imaging

    NASA Astrophysics Data System (ADS)

    Gholami, Behnood

    This dissertation introduces a new problem in the delivery of healthcare whose solution could result in lower cost and higher quality of medical care compared with current practice. In particular, a framework is developed for sedation and cardiopulmonary management of patients in the intensive care unit. A method is introduced to automatically detect pain and agitation in nonverbal patients, specifically sedated patients in the intensive care unit, using their facial expressions. Furthermore, deterministic as well as probabilistic expert systems are developed to suggest the appropriate drug dose based on the patient's sedation level. Patients in the intensive care unit who require mechanical ventilation due to acute respiratory failure also frequently require the administration of sedative agents. The need for sedation arises both from patient anxiety due to the loss of personal control and the unfamiliar and intrusive environment of the intensive care unit, and from pain or other variants of noxious stimuli. In this dissertation, we develop a rule-based expert system for cardiopulmonary management and intensive care unit sedation, and we use probability theory to quantify uncertainty and to extend the proposed rule-based expert system to deal with more realistic situations. Pain assessment in patients who are unable to communicate verbally is a challenging problem; its fundamental limitations stem from subjective assessment criteria rather than quantifiable, measurable data. The relevance vector machine (RVM) classification technique is a Bayesian extension of the support vector machine (SVM) algorithm that achieves comparable performance to the SVM while providing posterior probabilities for class memberships and a sparser model. In this dissertation, we use the RVM classification technique to distinguish pain from non-pain and to assess pain intensity levels, and we correlate our results with the pain intensity assessed by expert and non-expert human examiners. Next, we consider facial expression recognition using an unsupervised learning framework. We show that different facial expressions reside on distinct subspaces if the manifold is unfolded. In particular, semidefinite embedding is used to reduce the dimensionality and unfold the manifold of facial images; generalized principal component analysis is then used to fit a series of subspaces to the data points and associate each data point with a subspace. Data points that belong to the same subspace are shown to belong to the same facial expression. In clinical intensive care unit practice, sedative/analgesic agents are titrated to achieve a specific level of sedation, currently assessed with clinical scoring systems such as the motor activity assessment scale (MAAS), the Richmond agitation-sedation scale (RASS), and the modified Ramsay sedation scale (MRSS). In general, the goal of the clinician is to find the drug dose that maintains the patient at a sedation score corresponding to a moderately sedated state. In this research, we use pharmacokinetic and pharmacodynamic modeling to find an optimal drug dosing control policy to drive the patient to a desired MRSS score. Atrial fibrillation, a cardiac arrhythmia characterized by unsynchronized electrical activity in the atrial chambers of the heart, is a rapidly growing problem in modern societies. One treatment, referred to as catheter ablation, targets specific parts of the left atrium for radio-frequency ablation using an intracardiac catheter. As a first step towards the general solution to the computer-assisted segmentation of the left atrial wall, we use shape learning and shape-based image segmentation to identify the endocardial wall of the left atrium in delayed-enhancement magnetic resonance images. (Abstract shortened by UMI.)
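
    scikit-learn ships no RVM, so the sketch below substitutes an SVM with Platt-scaled outputs to illustrate the posterior class probabilities the dissertation obtains from the RVM; the features and labels are invented placeholders.

```python
import numpy as np
from sklearn.svm import SVC

# Placeholder facial-feature vectors and pain labels (0 = no pain, 1 = pain)
X = np.random.rand(100, 10)
y = np.random.randint(0, 2, 100)

# probability=True enables Platt scaling, giving calibrated posteriors,
# a (non-sparse) stand-in for the RVM's probabilistic outputs
clf = SVC(kernel="rbf", probability=True, random_state=0).fit(X, y)
print("P(pain):", clf.predict_proba(X[:1])[0, 1])
```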

  19. Effects of task demands on the early neural processing of fearful and happy facial expressions.

    PubMed

    Itier, Roxane J; Neath-Tavares, Karly N

    2017-05-15

    Task demands shape how we process environmental stimuli, but their impact on the early neural processing of facial expressions remains unclear. In a within-subject design, ERPs were recorded to the same fearful, happy and neutral facial expressions presented during gender discrimination, explicit emotion discrimination and oddball detection tasks, the most studied tasks in the field. Using an eye tracker, fixation on the nose of the face was enforced with a gaze-contingent presentation. Task demands modulated amplitudes from 200 to 350 ms at occipito-temporal sites spanning the EPN component. Amplitudes were more negative for fearful than neutral expressions starting with the N170, from 150 to 350 ms, with a temporo-occipital distribution, whereas no clear effect of happy expressions was seen. Task and emotion effects never interacted in any time window or for any of the ERP components analyzed (P1, N170, EPN). Thus, whether emotion is explicitly discriminated or irrelevant to the task at hand, the neural correlates of fearful and happy facial expressions seem immune to these task demands during the first 350 ms of visual processing. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. Responsibility and the sense of agency enhance empathy for pain

    PubMed Central

    Lepron, Evelyne; Causse, Michaël; Farrer, Chlöé

    2015-01-01

    Being held responsible for our actions strongly determines our moral judgements and decisions. This study examined whether responsibility also influences our affective reaction to others' emotions. We conducted two experiments in order to assess the effect of responsibility and of a sense of agency (the conscious feeling of controlling an action) on the empathic response to pain. In both experiments, participants were presented with video clips showing an actor's facial expression of pain of varying intensity. The empathic response was assessed with behavioural (pain intensity estimation from facial expressions and unpleasantness for the observer ratings) and electrophysiological measures (facial electromyography). Experiment 1 showed enhanced empathic response (increased unpleasantness for the observer and facial electromyography responses) as participants' degree of responsibility for the actor's pain increased. This effect was mainly accounted for by the decisional component of responsibility (compared with the execution component). In addition, experiment 2 found that participants' unpleasantness rating also increased when they had a sense of agency over the pain, while controlling for decision and execution processes. The findings suggest that increased empathy induced by responsibility and a sense of agency may play a role in regulating our moral conduct. PMID:25473014

  1. Choristoma of the middle ear: a component of a new syndrome?

    PubMed

    Buckmiller, L M; Brodie, H A; Doyle, K J; Nemzek, W

    2001-05-01

    Salivary choristoma of the middle ear is a rare entity. The authors report the 26th known case, which is unique in several respects: the patient had abnormalities of the first and second branchial arches, as well as the otic capsule and facial nerve in ways not yet reported. Our patient presented with bilateral preauricular pits, conchal bands, an ipsilateral facial palsy, and bilateral Mondini-type deformities. A review of the literature revealed salivary choristomas of the middle ear to be frequently associated with branchial arch abnormalities, most commonly the second, as well as abnormalities of the facial nerve. All 25 cases were reviewed and the results reported with respect to clinical presentation, associated abnormalities, operative findings, and hearing results. It has been proposed that choristoma of the middle ear may represent a component of a syndrome along with unilateral hearing loss, abnormalities of the incus and/or stapes, and anomalies of the facial nerve. Eighty-six percent of the reported patients with choristoma have three or four of the four criteria listed to designate middle ear salivary choristoma as part of a syndrome. In the remaining four patients, all of the structures were not assessed.

  2. Acromegaly determination using discriminant analysis of the three-dimensional facial classification in Taiwanese.

    PubMed

    Wang, Ming-Hsu; Lin, Jen-Der; Chang, Chen-Nen; Chiou, Wen-Ko

    2017-08-01

    The aim of this study was to assess the size, angle and positional characteristics of facial anthropometry between acromegalic patients and control subjects, and to identify facial soft-tissue measurements for generating discriminant functions for acromegaly determination in males and females, toward early self-awareness of acromegaly. This is a cross-sectional study. Subjects included 70 patients diagnosed with acromegaly (35 females and 35 males) and 140 gender-matched control individuals. Three-dimensional facial images were collected via a camera system and thirteen landmarks were selected. Eleven measurements from three categories were applied, comprising five frontal widths, three lateral depths and three lateral angular measurements. Descriptive analyses were conducted using means and standard deviations for each measurement. Univariate and multivariate discriminant function analyses were applied in order to calculate the accuracy of acromegaly detection. Patients with acromegaly exhibit soft-tissue facial enlargement and hypertrophy; changes in frontal widths as well as lateral depths and angles were evident. The average accuracies of all functions ranged from 80.0% to 91.4% for female patient detection and from 81.0% to 94.3% for male patient detection. The greatest anomaly was observed in the lateral angles, with greater enlargement of nasofrontal angles for females and greater mentolabial angles for males; the shapes of the lateral angles also changed. The majority of the facial measurements proved dynamic for acromegaly patients; however, it is problematic to detect the disease from progressive body anthropometric changes alone. The discriminant functions developed in this study could help patients, their families, medical practitioners and others to identify and track progressive facial change patterns before possible patients go to the hospital, especially the lateral angles, which can be calculated from relative point-to-point changes derived from 2D lateral imagery without 3D anthropometric measurements. This study provides a novel and easy method to detect acromegaly once patients become aware of abnormal appearance due to facial measurement changes, and it suggests that undiagnosed patients be urged to go to the hospital as soon as possible for early diagnosis of acromegaly.
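
    A discriminant-function detector of the kind described can be sketched with scikit-learn's LinearDiscriminantAnalysis; the 11 measurement columns and all data values below are placeholders mirroring the study's design, not its measurements.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Columns stand in for the 11 facial measurements
# (five frontal widths, three lateral depths, three lateral angles)
X = np.random.rand(210, 11)
y = np.array([1] * 70 + [0] * 140)   # 70 patients, 140 controls, as in the study

lda = LinearDiscriminantAnalysis().fit(X, y)
print("training accuracy:", lda.score(X, y))
print("suspected acromegaly:", bool(lda.predict(np.random.rand(1, 11))[0]))
```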

  3. Long-Face Dentofacial Deformities: Occlusion and Facial Esthetic Surgical Outcomes.

    PubMed

    Posnick, Jeffrey C; Liu, Samuel; Tremont, Timothy J

    2018-06-01

    The purpose of this study was to document malocclusion and facial dysmorphology in a series of patients with long face (LF) and chronic obstructive nasal breathing before treatment, and the outcomes after bimaxillary orthognathic, osseous genioplasty, and intranasal surgery. A retrospective cohort study of patients with LF undergoing bimaxillary, chin, and intranasal (septoplasty and inferior turbinate reduction) surgery was implemented. Predictor variables were grouped into demographic, anatomic, operative, and longitudinal follow-up categories. Primary outcome variables were the initial postoperative occlusion achieved (T2; 5 weeks after surgery) and the occlusion maintained long-term (>2 years after surgery). Six key occlusion parameters were assessed: overjet, overbite, coincidence of dental midlines, canine Angle classification, and molar vertical and transverse positions. The second outcome variable was the facial esthetic result. Photographs in 6 views were analyzed to document 7 facial contour characteristics. Seventy-eight patients met the inclusion criteria. Average age at surgery was 24 years (range, 13 to 54 yr). The study included 53 female patients (68%). Findings confirmed that occlusion after initial surgical healing (T2) met the objectives for all parameters in 97% of patients (76 of 78). Most (68 of 78; 87%) maintained a favorable anterior and posterior occlusion for each parameter studied long-term (mean, 5 years 5 months). Facial contour deformities at presentation included prominent nose (63%), flat cheekbones (96%), flat midface (96%), weak chin (91%), obtuse neck-to-chin angle (56%), wide lip separation (95%), and excess maxillary dental show (99%). Correction of all pretreatment facial contour deformities was confirmed in 92% of patients after surgery. Long-face patients with higher preoperative body mass index levels were more likely to have residual facial dysmorphology after surgery (P = .0009). Using orthognathic surgery techniques, patients with LF dentofacial deformity achieved the planned occlusion and most maintained the corrected occlusion long-term. In unoperated patients with LF, a "facial esthetic type" was identified. Orthognathic surgery proved effective in correcting the associated facial dysmorphology in most patients. Copyright © 2017 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.

  4. Expressive facial animation synthesis by learning speech coarticulation and expression spaces.

    PubMed

    Deng, Zhigang; Neumann, Ulrich; Lewis, J P; Kim, Tae-Yong; Bulut, Murtaza; Narayanan, Shrikanth

    2006-01-01

    Synthesizing expressive facial animation is a very challenging topic within the graphics community. In this paper, we present an expressive facial animation synthesis system enabled by automated learning from facial motion capture data. Accurate 3D motions of the markers on the face of a human subject are captured while he/she recites a predesigned corpus, with specific spoken and visual expressions. We present a novel motion capture mining technique that "learns" speech coarticulation models for diphones and triphones from the recorded data. A Phoneme-Independent Expression Eigenspace (PIEES) that encloses the dynamic expression signals is constructed by motion signal processing (phoneme-based time-warping and subtraction) and Principal Component Analysis (PCA) reduction. New expressive facial animations are synthesized as follows: First, the learned coarticulation models are concatenated to synthesize neutral visual speech according to novel speech input, then a texture-synthesis-based approach is used to generate a novel dynamic expression signal from the PIEES model, and finally the synthesized expression signal is blended with the synthesized neutral visual speech to create the final expressive facial animation. Our experiments demonstrate that the system can effectively synthesize realistic expressive facial animation.
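
    The phoneme-based time-warping step aligns motion signals of unequal length; a textbook dynamic-time-warping distance on 1-D placeholder trajectories gives the flavor. This is a generic DTW sketch, not the authors' implementation, and the signals are invented.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic O(n*m) dynamic time warping between two 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

neutral = np.sin(np.linspace(0, 3, 40))            # placeholder marker trajectory
expressive = np.sin(np.linspace(0, 3, 55)) + 0.1   # same phoneme, different timing
print("DTW distance:", round(dtw_distance(neutral, expressive), 3))
```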

  5. Safety and efficacy evaluation of tretinoin cream 0.02% for the reduction of photodamage: a pilot study.

    PubMed

    Kircik, Leon H

    2012-01-01

    Clinical studies as well as histologic data maintain that tretinoin improves the appearance of photodamage; however, the long-term benefits of tretinoin 0.02% in moderate to severe photodamage have not been established. We performed independent assessments to demonstrate the long-term safety and efficacy of tretinoin emollient cream 0.02% for moderate to severe facial photodamage. A single-center, open-label, single-group observational study followed 19 patients over 52 weeks. Efficacy assessments consisted of the Glogau Photodamage Classification Scale and severity grading of photodamage signs and symptoms. Facial photography and biopsies were taken from three subjects at baseline and final visits. Tolerability was assessed by the investigator. Twelve patients completed 52 weeks of treatment. Mean change in Glogau photodamage demonstrated statistically significant differences at 3, 6, 9, and 12 months (P<.0005). All patients with moderate to severe photodamage had improved to mild photodamage status by 9 months. Statistically significant improvements (P<.05) were observed at all time points for fine wrinkling, tactile roughness, and mottled hyperpigmentation as well as for lentigines at 6, 9, and 12 months and telangiectasia at 12 months. Biopsy samples revealed microscopic improvement in photodamage. Tretinoin cream 0.02% was generally well-tolerated, with few subjects experiencing adverse events. Our pilot study is limited by lack of control and the small study sample. Tretinoin cream 0.02% was safe and effective for moderate to severe photodamage of facial skin and demonstrated sustainable benefits over an entire year based on the clinically validated Glogau classification system and expert visual grading analysis.

  6. Reliable classification of facial phenotypic variation in craniofacial microsomia: a comparison of physical exam and photographs.

    PubMed

    Birgfeld, Craig B; Heike, Carrie L; Saltzman, Babette S; Leroux, Brian G; Evans, Kelly N; Luquetti, Daniela V

    2016-03-31

    Craniofacial microsomia is a common congenital condition for which children receive longitudinal, multidisciplinary team care. However, little is known about the etiology of craniofacial microsomia and few outcome studies have been published. In order to facilitate large, multicenter studies in craniofacial microsomia, we assessed the reliability of phenotypic classification based on photographs by comparison with direct physical examination. Thirty-nine children with craniofacial microsomia underwent a physical examination and photography according to a standardized protocol. Three clinicians completed ratings during the physical examination and, at least a month later, using the respective photographs for each participant. We used descriptive statistics for participant characteristics and intraclass correlation coefficients (ICCs) to assess reliability. The agreement between ratings on photographs and physical exam was greater than 80% for all 15 categories included in the analysis. The ICC estimates were higher than 0.6 for most features. Features with the highest ICC included presence of epibulbar dermoids, ear abnormalities, and colobomas (ICC 0.85, 0.81, and 0.80, respectively). Orbital size, presence of pits, tongue abnormalities, and strabismus had the lowest ICC values (0.17 or less). There was not a strong tendency for either type of rating, physical exam or photograph, to be more likely to designate a feature as abnormal. The agreement between photographs and physical exam regarding the presence of a prior surgery was greater than 90% for most features. Our results suggest that categorization of facial phenotype in children with CFM based on photographs is reliable relative to physical examination for most facial features.
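
    The abstract does not specify which ICC variant was used per feature; a common choice for absolute agreement is the two-way random, single-rater ICC(2,1), sketched here from its ANOVA mean squares on an invented subjects x raters matrix.

```python
import numpy as np

def icc2_1(Y):
    """Shrout-Fleiss ICC(2,1) for a subjects x raters ratings matrix."""
    n, k = Y.shape
    mean_r = Y.mean(axis=0)   # per-rater means
    mean_s = Y.mean(axis=1)   # per-subject means
    grand = Y.mean()
    ms_r = k * ((mean_s - grand) ** 2).sum() / (n - 1)   # between-subjects MS
    ms_c = n * ((mean_r - grand) ** 2).sum() / (k - 1)   # between-raters MS
    sse = ((Y - mean_s[:, None] - mean_r[None, :] + grand) ** 2).sum()
    ms_e = sse / ((n - 1) * (k - 1))                     # residual MS
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

ratings = np.array([[2, 2, 3], [1, 1, 1], [4, 3, 4], [2, 2, 2], [3, 3, 2]],
                   dtype=float)   # 5 hypothetical subjects, 3 raters
print("ICC(2,1):", round(icc2_1(ratings), 3))
```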

  7. A new paradigm of oral cancer detection using digital infrared thermal imaging

    NASA Astrophysics Data System (ADS)

    Chakraborty, M.; Mukhopadhyay, S.; Dasgupta, A.; Banerjee, S.; Mukhopadhyay, S.; Patsa, S.; Ray, J. G.; Chaudhuri, K.

    2016-03-01

    Histopathology is considered the gold standard for oral cancer detection, but a major fraction of the patient population cannot access such healthcare facilities due to poverty. Moreover, such analysis may report false negatives when the test tissue is not collected from the exact cancerous location. The proposed work introduces a pioneering computer-aided paradigm of fast, non-invasive and non-ionizing oral cancer detection using Digital Infrared Thermal Imaging (DITI). Due to aberrant metabolic activity in carcinogenic facial regions, the heat signatures of patients differ from those of normal subjects. The proposed work utilizes asymmetry of the temperature distribution of facial regions as the principal cue for cancer detection. Three views of a subject, viz. front, left and right, are acquired using a long-wave infrared (7.5-13 μm) camera for analysing the distribution of temperature. We study the asymmetry of facial temperature distribution between: a) left and right profile faces and b) the left and right halves of the frontal face. Comparison of temperature distributions suggests that patients manifest greater asymmetry than normal subjects. For classification, we initially use k-means and fuzzy k-means for unsupervised clustering, followed by cluster class prototype assignment based on majority voting. Average classification accuracies of 91.5% and 92.8% are achieved by the k-means and fuzzy k-means frameworks for the frontal face. The corresponding metrics for the profile face are 93.4% and 95%. Combining features of frontal and profile faces, average accuracies increase to 96.2% and 97.6% respectively for the k-means and fuzzy k-means frameworks.
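
    The unsupervised stage (k-means followed by majority-vote labeling of clusters) might look like the following sketch; the asymmetry features and labels are random placeholders, and fuzzy k-means is omitted since scikit-learn does not provide it.

```python
import numpy as np
from collections import Counter
from sklearn.cluster import KMeans

# Placeholder thermal-asymmetry feature vectors and ground-truth labels
X = np.random.rand(80, 6)
y = np.random.randint(0, 2, 80)   # 0 = normal, 1 = patient

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# Majority voting: each cluster takes the most common true label of its members
cluster_label = {c: Counter(y[km.labels_ == c]).most_common(1)[0][0]
                 for c in range(2)}
pred = np.array([cluster_label[c] for c in km.labels_])
print("agreement with labels:", (pred == y).mean())
```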

  8. Face recognition via edge-based Gabor feature representation for plastic surgery-altered images

    NASA Astrophysics Data System (ADS)

    Chude-Olisah, Chollette C.; Sulong, Ghazali; Chude-Okonkwo, Uche A. K.; Hashim, Siti Z. M.

    2014-12-01

    Plastic surgery procedures on the face introduce skin texture variations between images of the same person (intra-subject), thereby making the task of face recognition more difficult than in the normal scenario. Usually, in contemporary face recognition systems, the original gray-level face image is used as input to the Gabor descriptor, which translates to encoding some texture properties of the face image. The texture-encoding process significantly degrades the performance of such systems in the case of plastic surgery due to the presence of surgically induced intra-subject variations. Based on the proposition that the shape of significant facial components such as the eyes, nose, eyebrows, and mouth remains unchanged after plastic surgery, this paper employs an edge-based Gabor feature representation approach for the recognition of surgically altered face images. We use the edge information, which depends on the shapes of the significant facial components, to address the plastic surgery-induced texture variation problems. To ensure that the significant facial components carry useful edge information with little or no false edges, a simple illumination normalization technique is proposed for preprocessing. A Gabor wavelet is applied to the edge image to accentuate the uniqueness of the significant facial components for discriminating among different subjects. The performance of the proposed method is evaluated on the Georgia Tech (GT) and the Labeled Faces in the Wild (LFW) databases with illumination and expression problems, and on a plastic surgery database with texture changes. Results show that the proposed edge-based Gabor feature representation approach is robust against plastic surgery-induced face variations amidst expression and illumination problems and outperforms the existing plastic surgery face recognition methods reported in the literature.
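
    The edge-then-Gabor ordering that distinguishes this approach from gray-level Gabor pipelines can be sketched with OpenCV. The kernel parameters, the synthetic input, and the crude mean-pooled descriptor below are all assumptions for illustration.

```python
import cv2
import numpy as np

# Synthetic stand-in for a normalized face image: a bright blob on black
img = np.zeros((128, 128), dtype=np.uint8)
cv2.circle(img, (64, 64), 40, 255, -1)

# Edge detection first: the edge map, not the gray levels, feeds the Gabor bank
edges = cv2.Canny(img, 80, 160).astype(np.float32)

features = []
for theta in np.arange(0, np.pi, np.pi / 8):   # 8 orientations
    # getGaborKernel(ksize, sigma, theta, lambd, gamma); values are guesses
    kern = cv2.getGaborKernel((21, 21), 4.0, theta, 10.0, 0.5)
    resp = cv2.filter2D(edges, cv2.CV_32F, kern)
    features.append(resp.mean())               # crude pooled descriptor

print("edge-Gabor feature vector:", np.round(features, 3))
```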

  9. Spontaneous facial expressions of emotion of congenitally and noncongenitally blind individuals.

    PubMed

    Matsumoto, David; Willingham, Bob

    2009-01-01

    The study of the spontaneous expressions of blind individuals offers a unique opportunity to understand basic processes concerning the emergence and source of facial expressions of emotion. In this study, the authors compared the expressions of congenitally and noncongenitally blind athletes in the 2004 Paralympic Games with each other and with those produced by sighted athletes in the 2004 Olympic Games. The authors also examined how expressions change from one context to another. There were no differences between congenitally blind, noncongenitally blind, and sighted athletes, either on the level of individual facial actions or in facial emotion configurations. Blind athletes did produce more overall facial activity, but this additional activity was confined to head and eye movements. The blind athletes' expressions differentiated whether they had won or lost a medal match at 3 different points in time, and there were no cultural differences in expression. These findings provide compelling evidence that the production of spontaneous facial expressions of emotion is not dependent on observational learning but simultaneously demonstrates a learned component to the social management of expressions, even among blind individuals.

  10. More than mere mimicry? The influence of emotion on rapid facial reactions to faces.

    PubMed

    Moody, Eric J; McIntosh, Daniel N; Mann, Laura J; Weisser, Kimberly R

    2007-05-01

    Within a second of seeing an emotional facial expression, people typically match that expression. These rapid facial reactions (RFRs), often termed mimicry, are implicated in emotional contagion, social perception, and embodied affect, yet ambiguity remains regarding the mechanism(s) involved. Two studies evaluated whether RFRs to faces are solely nonaffective motor responses or whether emotional processes are involved. Brow (corrugator, related to anger) and forehead (frontalis, related to fear) activity were recorded using facial electromyography (EMG) while undergraduates in two conditions (fear induction vs. neutral) viewed fear, anger, and neutral facial expressions. As predicted, fear induction increased fear expressions to angry faces within 1000 ms of exposure, demonstrating an emotional component of RFRs. This did not merely reflect increased fear from the induction, because responses to neutral faces were unaffected. Considering RFRs to be merely nonaffective automatic reactions is inaccurate. RFRs are not purely motor mimicry; emotion influences early facial responses to faces. The relevance of these data to emotional contagion, autism, and the mirror system-based perspectives on imitation is discussed.

  11. Local ICA for the Most Wanted face recognition

    NASA Astrophysics Data System (ADS)

    Guan, Xin; Szu, Harold H.; Markowitz, Zvi

    2000-04-01

    Facial disguises of FBI Most Wanted criminals are inevitable and must be anticipated in the design of automatic/aided target recognition (ATR) imaging systems. For example, a man's facial hair may hide his mouth and chin but not necessarily the nose and eyes; sunglasses will cover the eyes but not the nose, mouth, and chin. This fact motivates us to build sets of independent component analysis bases separately for each facial region of the entire alleged criminal group. Then, given an alleged criminal face, collective votes of 'yes, no, abstain' are obtained from all facial regions and tallied for a potential alarm. Moreover, an innocent outsider should fall below the alarm threshold and be allowed to pass the checkpoint. From these decisions, the probability of detection (PD) versus false-alarm rate (FAR) curve, i.e., the ROC curve, is obtained.
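
    A hedged sketch of this region-wise ICA voting scheme, assuming scikit-learn: one FastICA basis per facial region, nearest-gallery matching in each region's ICA space, and a yes/no/abstain tally. Region crops, thresholds, and gallery data are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import FastICA

def region_votes(gallery_regions, probe_regions, yes_thr=0.8, no_thr=1.2):
    """gallery_regions: dict region -> (n_imgs, n_pixels) pixel matrix;
    probe_regions: dict region -> (n_pixels,) probe crop."""
    votes = []
    for region, gallery in gallery_regions.items():
        ica = FastICA(n_components=5, random_state=0)
        codes = ica.fit_transform(gallery)             # gallery in ICA space
        probe_code = ica.transform(probe_regions[region][None, :])[0]
        d = np.linalg.norm(codes - probe_code, axis=1).min()
        if d < yes_thr:
            votes.append(1)       # 'yes' vote from this region
        elif d > no_thr:
            votes.append(-1)      # 'no'
        else:
            votes.append(0)       # 'abstain' (e.g., region disguised)
    return sum(votes)             # raise an alarm if the tally is high
```

    The point of the abstain option is that a disguised region (beard, sunglasses) degrades only its own vote rather than the whole decision.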

  12. Comparative Discussion on Psychophysiological Effect of Self-administered Facial Massage by Treatment Method

    NASA Astrophysics Data System (ADS)

    Nozawa, Akio; Takei, Yuya

    The aim of this study was to quantitatively evaluate the effects of self-administered facial massage performed by hand or with a facial roller. The psychophysiological effects of facial massage were evaluated: the central nervous system and the autonomic nervous system were monitored to assess physiological status. The central nervous system was assessed by electroencephalogram (EEG). The autonomic nervous system was assessed by peripheral skin temperature (PST) and heart rate variability (HRV) with spectral analysis; in the spectral analysis of HRV, the high-frequency (HF) components were evaluated. The State-Trait Anxiety Inventory (STAI), Profile of Mood States (POMS) and subjective sensory amount on a Visual Analog Scale (VAS) were administered to evaluate psychological status. These results suggest that self-administered facial massage maintained brain activity and had strong effects on stress alleviation.
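
    A minimal sketch of the HF-component computation for HRV, assuming scipy: resample the RR-interval series to an evenly spaced tachogram, estimate the power spectrum with Welch's method, and integrate the 0.15-0.4 Hz (HF) band. The 4 Hz resampling rate and band edges are conventional assumptions, not values from this study.

```python
import numpy as np
from scipy.signal import welch

def hf_power(rr_ms, fs=4.0):
    """rr_ms: RR intervals in milliseconds, in beat order."""
    t = np.cumsum(rr_ms) / 1000.0                  # beat times in seconds
    grid = np.arange(t[0], t[-1], 1.0 / fs)        # uniform time grid
    tachogram = np.interp(grid, t, rr_ms)          # evenly sampled RR series
    freqs, psd = welch(tachogram - tachogram.mean(), fs=fs,
                       nperseg=min(256, grid.size))
    band = (freqs >= 0.15) & (freqs < 0.40)        # HF band
    return np.trapz(psd[band], freqs[band])        # HF power, ms^2
```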

  13. Do Valenced Odors and Trait Body Odor Disgust Affect Evaluation of Emotion in Dynamic Faces?

    PubMed

    Syrjänen, Elmeri; Liuzza, Marco Tullio; Fischer, Håkan; Olofsson, Jonas K

    2017-12-01

    Disgust is a core emotion that evolved to detect and avoid the ingestion of poisonous food as well as contact with pathogens and other harmful agents. Previous research has shown that multisensory presentation of olfactory and visual information may strengthen the processing of disgust-relevant information. However, it is not known whether these findings extend to dynamic facial stimuli that change from neutral to emotionally expressive, or whether individual differences in trait body odor disgust may influence the processing of disgust-related information. In this preregistered study, we tested whether the classification of dynamic facial expressions as happy or disgusted, and the emotional evaluation of these expressions, would be affected by individual differences in body odor disgust sensitivity and by exposure to a sweat-like, negatively valenced odor (valeric acid), as compared with a soap-like, positively valenced odor (lilac essence) or a no-odor control. Using Bayesian hypothesis testing, we found evidence that odors do not affect recognition of emotion in dynamic faces even when body odor disgust sensitivity was used as a moderator. However, an exploratory analysis suggested that an unpleasant odor context may cause faster RTs for faces, independent of their emotional expression. Our results further our understanding of the scope and limits of odor effects on the perception of facial affect and suggest that further studies should focus on reproducibility, specifying the experimental circumstances under which odor effects on facial expressions may be present versus absent.

  14. Comparing Efficacy and Costs of Four Facial Fillers in Human Immunodeficiency Virus-Associated Lipodystrophy: A Clinical Trial.

    PubMed

    Vallejo, Alfonso; Garcia-Ruano, Angela A; Pinilla, Carmen; Castellano, Michele; Deleyto, Esther; Perez-Cano, Rosa

    2018-03-01

    The objective of this study was to evaluate and compare the safety and effectiveness of four different dermal fillers in the treatment of facial lipoatrophy secondary to human immunodeficiency virus. The authors conducted a clinical trial including 147 patients suffering from human immunodeficiency virus-induced lipoatrophy treated with Sculptra (poly-L-lactic acid), Radiesse (calcium hydroxylapatite), Aquamid (polyacrylamide), or autologous fat. Objective and subjective changes were evaluated during a 24-month follow-up. Number of sessions, total volume injected, and overall costs of treatment were also analyzed. A comparative cost-effectiveness analysis of the treatment options was performed. Objective improvement in facial lipoatrophy, assessed by the surgeon in terms of changes from baseline using the published classification of Fontdevila, was reported in 53 percent of the cases. Patient self-evaluation showed a general improvement after the use of facial fillers. Patients reported being satisfied with the treatment and with the reduced impact of lipodystrophy on their quality of life. Despite the nonsignificant differences observed in the number of sessions and volume, autologous fat showed significantly lower costs than all synthetic fillers (p < 0.05). Surgical treatment of human immunodeficiency virus-associated facial lipoatrophy using dermal fillers is a safe and effective procedure that improves the aesthetic appearance and the quality of life of patients. Permanent fillers and autologous fat achieve the most consistent results over time, with lipofilling being the most cost-effective procedure.

  15. A new atlas for the evaluation of facial features: advantages, limits, and applicability.

    PubMed

    Ritz-Timme, Stefanie; Gabriel, Peter; Obertovà, Zuzana; Boguslawski, Melanie; Mayer, F; Drabik, A; Poppa, Pasquale; De Angelis, Danilo; Ciaffi, Romina; Zanotti, Benedetta; Gibelli, Daniele; Cattaneo, Cristina

    2011-03-01

    Methods for verifying the identity of offenders from video-surveillance images in criminal investigations are currently under scrutiny by several forensic experts around the globe. The anthroposcopic, or morphological, approach based on facial features is the one most frequently used by international forensic experts. However, a specific set of applicable features has not yet been agreed on by the experts. Furthermore, population frequencies of such features have not been recorded, and only a few validation tests have been published. To combat and prevent crime in Europe, the European Commission funded an extensive research project dedicated to the optimization of methods for facial identification of persons on photographs. Within this research project, standardized photographs of 900 males between 20 and 31 years of age from Germany, Italy, and Lithuania were acquired. Based on these photographs, 43 facial features were described and evaluated in detail. These efforts led to the development of a new morphologic atlas, called the DMV atlas ("Düsseldorf Milan Vilnius," from the participating cities). This study is a first attempt at verifying the feasibility of this atlas as a preliminary step toward personal identification by exploring intra- and interobserver error. The analysis yielded mismatch percentages from 19% to 39%, which reflect the subjectivity of the approach and suggest caution in verifying personal identity solely from the classification of facial features. Nonetheless, the use of the atlas leads to a significant improvement in the consistency of evaluation.

  16. Classifying dysmorphic syndromes by using artificial neural network based hierarchical decision tree.

    PubMed

    Özdemir, Merve Erkınay; Telatar, Ziya; Eroğul, Osman; Tunca, Yusuf

    2018-05-01

    Dysmorphic syndromes present with different facial malformations. These malformations are significant for an early diagnosis of dysmorphic syndromes and contain distinctive information for face recognition. In this study we define the characteristic features of each syndrome by considering facial malformations, and automatically classify Fragile X, Hurler, Prader-Willi, Down, and Wolf-Hirschhorn syndromes as well as healthy controls. Reference points are marked on the face images, and ratios between the distances of these points are taken as features. We propose a neural network based hierarchical decision tree structure to classify the syndrome types. We also implement k-nearest neighbor (k-NN) and artificial neural network (ANN) classifiers to compare their classification accuracy with our hierarchical decision tree. The classification accuracies are 50%, 73% and 86.7% with the k-NN, ANN and hierarchical decision tree methods, respectively. The same images were then shown to a clinical expert, who achieved a recognition rate of 46.7%. We develop an efficient system that recognizes different syndrome types automatically from simple, non-invasive imaging data, independent of the patient's age, sex and race, at high accuracy. The promising results indicate that our method can be used by clinical experts for pre-diagnosis of the dysmorphic syndromes.
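
    A minimal sketch of the ratio-based feature idea, assuming scikit-learn: pairwise distances between facial landmarks, turned into scale-free ratios and fed to a simple k-NN baseline. Landmark positions and class labels are synthetic placeholders, not the study's data.

```python
import numpy as np
from itertools import combinations
from sklearn.neighbors import KNeighborsClassifier

def ratio_features(landmarks):
    """landmarks: (n_points, 2) array of (x, y) positions on one face."""
    pairs = list(combinations(range(len(landmarks)), 2))
    dists = np.array([np.linalg.norm(landmarks[i] - landmarks[j])
                      for i, j in pairs])
    # Divide every distance by the first one so features are scale-free
    return dists / dists[0]

# Toy usage: 30 synthetic faces x 8 landmarks, 3 syndrome classes
rng = np.random.default_rng(0)
faces = rng.random((30, 8, 2))
X = np.array([ratio_features(f) for f in faces])
y = rng.integers(0, 3, size=30)
clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)
```

    Using ratios rather than raw distances removes overall face size, which is one way such a system can stay independent of the patient's age.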

  17. Comparative morphometry of facial surface models obtained from a stereo vision system in a healthy population

    NASA Astrophysics Data System (ADS)

    López, Leticia; Gastélum, Alfonso; Chan, Yuk Hin; Delmas, Patrice; Escorcia, Lilia; Márquez, Jorge

    2014-11-01

    Our goal is to obtain three-dimensional measurements of craniofacial morphology in a healthy population, using standard landmarks established by a physical-anthropology specialist and picked from computer reconstructions of the face of each subject. To do this, we designed a multi-stereo vision system that will be used to create a database of human face surfaces from a healthy population, for eventual applications in medicine, forensic sciences and anthropology. The acquisition process consists of obtaining depth-map information from three points of view; each depth map is obtained from a calibrated pair of cameras. The depth maps are used to build a complete, frontal, triangular-surface representation of the subject's face. The triangular surface is used to locate the landmarks, and the measurements are analyzed with a MATLAB script. The classification of the subjects was done with the aid of a specialist anthropologist who defined specific subject indices according to the lengths, areas, ratios, etc., of the different structures and the relationships among facial features. We studied a healthy population, and the indices from this population will be used to obtain representative averages that later help with the study and classification of possible pathologies.
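
    As an illustration of computing one such index from 3-D landmarks on the reconstructed surface (the record mentions a MATLAB script; this sketch uses Python for consistency with the other examples here). The facial index shown (nasion-gnathion height over bizygomatic breadth, times 100) is a standard anthropometric ratio; the coordinates are made-up placeholders, not data from the study.

```python
import numpy as np

landmarks = {                        # hypothetical 3-D positions in mm
    "nasion":   np.array([0.0,  62.0, 95.0]),
    "gnathion": np.array([0.0, -58.0, 88.0]),
    "zygion_l": np.array([-68.0, 10.0, 60.0]),
    "zygion_r": np.array([ 68.0, 10.0, 60.0]),
}

face_height  = np.linalg.norm(landmarks["nasion"] - landmarks["gnathion"])
face_breadth = np.linalg.norm(landmarks["zygion_l"] - landmarks["zygion_r"])
facial_index = 100.0 * face_height / face_breadth   # morphological facial index
```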

  18. Relevance Vector Machine Learning for Neonate Pain Intensity Assessment Using Digital Imaging

    PubMed Central

    Gholami, Behnood; Tannenbaum, Allen R.

    2011-01-01

    Pain assessment in patients who are unable to verbally communicate is a challenging problem. The fundamental limitations in pain assessment in neonates stem from subjective assessment criteria rather than quantifiable and measurable data, which often results in poor-quality and inconsistent pain management. Recent advancements in pattern recognition using relevance vector machine (RVM) learning can assist medical staff in assessing pain by constantly monitoring the patient and providing the clinician with quantifiable data for pain management. The RVM classification technique is a Bayesian extension of the support vector machine (SVM) algorithm, which achieves comparable performance to SVM while providing posterior probabilities for class memberships and a sparser model. If classes represent "pure" facial expressions (i.e., extreme expressions that an observer can identify with a high degree of confidence), then the posterior probability of the membership of some intermediate facial expression to a class can provide an estimate of the intensity of such an expression. In this paper, we use the RVM classification technique to distinguish pain from nonpain in neonates as well as assess their pain intensity levels. We also correlate our results with the pain intensity assessed by expert and nonexpert human examiners. PMID:20172803
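
    The key idea here is that a probabilistic classifier's posterior for the "pain" class can serve as an intensity estimate for intermediate expressions. scikit-learn ships no RVM, so this hedged sketch substitutes logistic regression purely to illustrate the posterior-as-intensity idea; the features are synthetic, not the paper's image descriptors.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
pure_pain = rng.normal(loc=1.0, size=(40, 6))      # "pure" pain exemplars
pure_calm = rng.normal(loc=-1.0, size=(40, 6))     # "pure" non-pain exemplars
X = np.vstack([pure_pain, pure_calm])
y = np.array([1] * 40 + [0] * 40)

model = LogisticRegression().fit(X, y)
intermediate = rng.normal(loc=0.3, size=(1, 6))    # an in-between expression
pain_intensity = model.predict_proba(intermediate)[0, 1]  # value in [0, 1]
```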

  19. Photoanthropometric face iridial proportions for age estimation: An investigation using features selected via a joint mutual information criterion.

    PubMed

    Borges, Díbio L; Vidal, Flávio B; Flores, Marta R P; Melani, Rodolfo F H; Guimarães, Marco A; Machado, Carlos E P

    2018-03-01

    Age assessment from images is of high interest to the forensic community because of the need to establish formal protocols to identify child pornography, missing children and abuse cases, where visual evidence is often the main admissible material. Recently, photoanthropometric methods have been found useful for age estimation, correlating facial proportions in image databases with samples of some age groups. Notwithstanding the advances, newer facial features and further analysis are needed to improve accuracy and establish wider applicability. In this investigation, frontal images of 1000 individuals (500 females, 500 males), equally distributed in five age groups (6, 10, 14, 18, 22 years old), were used in a 10-fold cross-validated experiment for three age-threshold classifications (<10, <14, <18 years old). A set of 40 novel features, based on the relation between landmark distances and the iris diameter, is proposed, and joint mutual information is used to select the most relevant and complementary features for the classification task. In a civil image identification database with diverse ancestry, receiver operating characteristic (ROC) curves were plotted to verify accuracy, and the resultant AUCs achieved 0.971, 0.969, and 0.903 for the age classifications (<10, <14, <18 years old), respectively. These results add support to continuing research in age assessment from images using the metric approach. Still, larger samples are necessary to evaluate reliability under extensive conditions. Copyright © 2017 Elsevier B.V. All rights reserved.
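
    A sketch of the feature-selection step under stated assumptions: rank iris-normalized landmark distances by mutual information with the age-threshold label and keep the top k. scikit-learn's mutual_info_classif scores features individually, which is a simplification of the joint mutual information criterion the paper uses; all data below are synthetic.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif

rng = np.random.default_rng(0)
distances = rng.random((200, 40))            # 40 landmark distances per face
iris_diameter = rng.uniform(10, 12, size=(200, 1))
X = distances / iris_diameter                # iris-proportional features
y = (rng.random(200) > 0.5).astype(int)      # e.g., under-14 vs 14-and-over

selector = SelectKBest(mutual_info_classif, k=10).fit(X, y)
X_top = selector.transform(X)                # 10 most informative features
```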

  20. Classification of rhinoplasties performed in an otorhinolaryngology referral center in Brazil.

    PubMed

    Nunes, Flávio Barbosa; Crosara, Paulo Fernando Tormin Borges; Oliveira, Isamara Simas de; Evangelista, Leandro Farias; Rodrigues, Danilo Santana; Becker, Helena Maria Gonçalves; Guimarães, Roberto Eustáquio Santos

    2014-01-01

    Facial plastic and reconstructive surgery involves the use of surgical procedures to achieve esthetic and functional improvement. It can be used for traumatic, congenital, or developmental injuries. Medicine, with an emphasis on facial plastic surgery, has made progress in several areas, including rhinoplasty, providing good long-term results and higher patient satisfaction. To evaluate cases of rhinoplasty and its subtypes in a referral center, and to understand the relevance of teaching rhinoplasty techniques in an otolaryngology residency service, a retrospective study assessed 325 rhinoplasties performed by third-year medical residents under the supervision of chief residents in charge of the Service of Facial Plastic Surgery in this hospital from January of 2003 to August of 2012. The Service Protocol included the following subtypes: functional, esthetic, post-traumatic, revision, and reconstructive rhinoseptoplasty. Of the rhinoplasties performed, 184 (56.21%) were functional, 59 (18.15%) were post-traumatic, 27 (8.30%) were esthetic, 15 (4.61%) were reconstructive, and 40 (12.30%) were revision procedures. Functional rhinoseptoplasties were the most prevalent type, which highlights the relevance of teaching surgical techniques, not only for septoplasty, but also the inclusion of rhinoplasty techniques in teaching centers. Copyright © 2014 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.

  1. Evolution of middle-late Pleistocene human cranio-facial form: a 3-D approach.

    PubMed

    Harvati, Katerina; Hublin, Jean-Jacques; Gunz, Philipp

    2010-11-01

    The classification and phylogenetic relationships of the middle Pleistocene human fossil record remain one of the most intractable problems in paleoanthropology. Several authors have noted broad resemblances between European and African fossils from this period, suggesting a single taxon ancestral to both modern humans and Neanderthals. Others point out 'incipient' Neanderthal features in the morphology of the European sample and have argued for their inclusion in the Neanderthal lineage exclusively, following a model of accretionary evolution of Neanderthals. We approach these questions using geometric morphometric methods, which allow the intuitive visualization and quantification of features previously described qualitatively. We apply these techniques to evaluate proposed 'incipient' facial, vault, and basicranial traits in a middle-late Pleistocene European hominin sample compared to an African sample of the same time depth. Some of the features examined followed the predictions of the accretion model and relate the middle Pleistocene European material to the later Neanderthals. However, although our analysis showed a clear separation between Neanderthals and early/recent modern humans and morphological proximity between European specimens from OIS 7 to 3, it also shows that the European hominins from the first half of the middle Pleistocene still shared most of their cranio-facial architecture with their African contemporaries. Copyright © 2010 Elsevier Ltd. All rights reserved.
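
    A hedged sketch of the geometric-morphometrics core named here, assuming scipy and scikit-learn: superimpose each landmark configuration on a reference with a Procrustes fit, then run PCA on the aligned shapes. scipy's pairwise procrustes stands in for full generalized Procrustes analysis, and the specimens are synthetic placeholders.

```python
import numpy as np
from scipy.spatial import procrustes
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
specimens = rng.random((25, 12, 3))        # 25 crania x 12 landmarks in 3-D
reference = specimens[0]

aligned = []
for s in specimens:
    # Removes translation, rotation, and scale relative to the reference
    _, aligned_s, _ = procrustes(reference, s)
    aligned.append(aligned_s.ravel())

scores = PCA(n_components=2).fit_transform(np.array(aligned))
# `scores` places each specimen in a low-dimensional shape space
# where group separation (e.g., Neanderthal vs modern) can be assessed
```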

  2. Comparison of BMI, AHI, and apolipoprotein E ε4 (APOE-ε4) alleles among sleep apnea patients with different skeletal classifications.

    PubMed

    Roedig, Jason J; Phillips, Barbara A; Morford, Lorri A; Van Sickels, Joseph E; Falcao-Alencar, Gabriel; Fardo, David W; Hartsfield, James K; Ding, Xiuhua; Kluemper, G Thomas

    2014-04-15

    This case-control study investigated whether variations within the APOE-ε gene were associated with having a convex facial profile (skeletal Class II) compared to exhibiting a straight or concave facial profile (Class I or Class III) among patients with obstructive sleep apnea (OSA). Associations between the apnea-hypopnea index (AHI) and body mass index (BMI) scores for these OSA patients were also examined in the context of facial profile. OSA patients with an AHI ≥ 15 were recruited from a sleep clinic and classified by facial and dental occlusal relationships based on a profile facial analysis, lateral photographs, and dental examination. Saliva was collected as a source of DNA. The APOE-ε1-4 allele-defining single nucleotide polymorphisms (SNPs) rs429358 and rs7412 were genotyped. A χ² analysis was used to assess Hardy-Weinberg equilibrium and for association analysis (significance at p < 0.05). ANOVA and Fisher exact tests were also used. Seventy-six Caucasian OSA patients participated in the study: 25 Class II cases and 51 non-Class II cases. There was no association of the APOE-ε4 allele with facial profile among these OSA patients. Class II OSA patients had significantly lower BMIs (30.7 ± 5.78) than Class I (37.3 ± 6.14) or Class III (37.8 ± 6.17) patients (p < 0.001), although there was no statistical difference in AHI for Class II patients compared with other groups. OSA patients with a Class II convex profile were more likely to have a lower BMI than those in other skeletal groups. In fact, 20% of them were not obese, suggesting that a Class II convex profile may influence or be associated with OSA development independent of BMI.

  3. Face inversion decreased information about facial identity and expression in face-responsive neurons in macaque area TE.

    PubMed

    Sugase-Miyamoto, Yasuko; Matsumoto, Narihisa; Ohyama, Kaoru; Kawano, Kenji

    2014-09-10

    To investigate the effect of face inversion and thatcherization (eye inversion) on temporal processing stages of facial information, single-neuron activities in the temporal cortex (area TE) of two rhesus monkeys were recorded. Test stimuli were colored pictures of monkey faces (four with four different expressions), human faces (three with four different expressions), and geometric shapes. Modifications were made in each face picture, and its four variations were used as stimuli: upright original, inverted original, upright thatcherized, and inverted thatcherized faces. A total of 119 neurons responded to at least one of the upright original facial stimuli. A majority of the neurons (71%) showed activity modulations depending on upright and inverted presentations, and a smaller number of neurons (13%) showed activity modulations depending on original and thatcherized face conditions. In the case of face inversion, information about the fine category (facial identity and expression) decreased, whereas information about the global category (monkey vs human vs shape) was retained for both the original and thatcherized faces. Principal component analysis on the neuronal population responses revealed that the global categorization occurred regardless of face inversion and that the inverted faces were represented near the upright faces in the principal component analysis space. By contrast, face inversion decreased the ability to represent human facial identity and monkey facial expression. Thus, the neuronal population represented inverted faces as faces but failed to represent the identity and expression of the inverted faces, indicating that the neuronal representation in area TE causes the perceptual effect of face inversion. Copyright © 2014 the authors.

  4. Social perception of morbidity in facial nerve paralysis.

    PubMed

    Li, Matthew Ka Ki; Niles, Navin; Gore, Sinclair; Ebrahimi, Ardalan; McGuinness, John; Clark, Jonathan Robert

    2016-08-01

    There are many patient-based and clinician-based scales measuring the severity of facial nerve paralysis and the impact on quality of life; however, the social perception of facial palsy has received little attention. The purpose of this pilot study was to measure the consequences of facial paralysis on selected domains of social perception and compare the social impact of paralysis of the different components. Four patients with typical facial palsies (global, marginal mandibular, zygomatic/buccal, and frontal) and 1 control were photographed. These images were each shown to 100 participants who subsequently rated variables of normality, perceived distress, trustworthiness, intelligence, interaction, symmetry, and disability. Statistical analysis was performed to compare the results among each palsy. Paralyzed faces were considered less normal compared to the control on a scale of 0 to 10 (mean, 8.6; 95% confidence interval [CI] = 8.30-8.86) with global paralysis (mean, 3.4; 95% CI = 3.08-3.80) rated as the most disfiguring, followed by the zygomatic/buccal (mean, 6.0; 95% CI = 5.68-6.37), marginal (mean, 6.5; 95% CI = 6.08-6.86), and then temporal palsies (mean, 6.9; 95% CI = 6.57-7.21). Similar trends were seen when analyzing these palsies for perceived distress, intelligence, and trustworthiness, using a random effects regression model. Our sample suggests that society views paralyzed faces as less normal, less trustworthy, and more distressed. Different components of facial paralysis are worse than others, and surgical correction may need to be prioritized in an evidence-based manner with social morbidity in mind. © 2016 Wiley Periodicals, Inc. Head Neck 38:1158-1163, 2016.

  5. To Capture a Face: A Novel Technique for the Analysis and Quantification of Facial Expressions in American Sign Language

    ERIC Educational Resources Information Center

    Grossman, Ruth B.; Kegl, Judy

    2006-01-01

    American Sign Language uses the face to express vital components of grammar in addition to the more universal expressions of emotion. The study of ASL facial expressions has focused mostly on the perception and categorization of various expression types by signing and nonsigning subjects. Only a few studies of the production of ASL facial…

  6. Retention interval affects visual short-term memory encoding.

    PubMed

    Bankó, Eva M; Vidnyánszky, Zoltán

    2010-03-01

    Humans can efficiently store fine-detailed facial emotional information in visual short-term memory for several seconds. However, an unresolved question is whether the same neural mechanisms underlie high-fidelity short-term memory for emotional expressions at different retention intervals. Here we show that retention interval affects the neural processes of short-term memory encoding using a delayed facial emotion discrimination task. The early sensory P100 component of the event-related potentials (ERP) was larger in the 1-s interstimulus interval (ISI) condition than in the 6-s ISI condition, whereas the face-specific N170 component was larger in the longer ISI condition. Furthermore, the memory-related late P3b component of the ERP responses was also modulated by retention interval: it was reduced in the 1-s ISI as compared with the 6-s condition. The present findings cannot be explained based on differences in sensory processing demands or overall task difficulty because there was no difference in the stimulus information and subjects' performance between the two different ISI conditions. These results reveal that encoding processes underlying high-precision short-term memory for facial emotional expressions are modulated depending on whether information has to be stored for one or for several seconds.

  7. Fixation to features and neural processing of facial expressions in a gender discrimination task

    PubMed Central

    Neath, Karly N.; Itier, Roxane J.

    2017-01-01

    Early face encoding, as reflected by the N170 ERP component, is sensitive to fixation to the eyes. We investigated whether this sensitivity varies with facial expressions of emotion and whether it can also be seen on other ERP components such as the P1 and EPN. Using eye-tracking to manipulate fixation on facial features, we found the N170 to be the only eye-sensitive component, and this was true for fearful, happy and neutral faces. A different effect of fixation to features was seen for the earlier P1, which likely reflected general sensitivity to face position. An early effect of emotion (~120 ms) for happy faces was seen at occipital sites and was sustained until ~350 ms post-stimulus. For fearful faces, an early effect was seen around 80 ms, followed by a later effect appearing at ~150 ms and lasting until ~300 ms at lateral posterior sites. Results suggest that in this emotion-irrelevant gender discrimination task, processing of fearful and happy expressions occurred early and largely independently of the eye sensitivity indexed by the N170. Processing of the two emotions involved different underlying brain networks active at different times. PMID:26277653

  8. Body size and allometric variation in facial shape in children.

    PubMed

    Larson, Jacinda R; Manyama, Mange F; Cole, Joanne B; Gonzalez, Paula N; Percival, Christopher J; Liberton, Denise K; Ferrara, Tracey M; Riccardi, Sheri L; Kimwaga, Emmanuel A; Mathayo, Joshua; Spitzmacher, Jared A; Rolian, Campbell; Jamniczky, Heather A; Weinberg, Seth M; Roseman, Charles C; Klein, Ophir; Lukowiak, Ken; Spritz, Richard A; Hallgrimsson, Benedikt

    2018-02-01

    Morphological integration, or the tendency for covariation, is commonly seen in complex traits such as the human face. The effects of growth on shape, or allometry, represent a ubiquitous but poorly understood axis of integration. We address the question of to what extent age and measures of size converge on a single pattern of allometry for human facial shape. Our study is based on two large cross-sectional cohorts of children, one from Tanzania and the other from the United States (N = 7,173). We employ 3D facial imaging and geometric morphometrics to relate facial shape to age and anthropometric measures. The two populations differ significantly in facial shape, but the magnitude of this difference is small relative to the variation within each group. Allometric variation for facial shape is similar in both populations, representing a small but significant proportion of total variation in facial shape. Different measures of size are associated with overlapping but statistically distinct aspects of shape variation. Only half of the size-related variation in facial shape can be explained by the first principal component of four size measures and age while the remainder associates distinctly with individual measures. Allometric variation in the human face is complex and should not be regarded as a singular effect. This finding has important implications for how size is treated in studies of human facial shape and for the developmental basis for allometric variation more generally. © 2017 Wiley Periodicals, Inc.

  9. Dissociable roles of internal feelings and face recognition ability in facial expression decoding.

    PubMed

    Zhang, Lin; Song, Yiying; Liu, Ling; Liu, Jia

    2016-05-15

    The problem of emotion recognition has been tackled by researchers in both affective computing and cognitive neuroscience. While affective computing relies on analyzing visual features from facial expressions, it has been proposed that humans recognize emotions by internally simulating the emotional states conveyed by others' expressions, in addition to perceptual analysis of facial features. Here we investigated whether and how our internal feelings contributed to the ability to decode facial expressions. In two independent large samples of participants, we observed that individuals who generally experienced richer internal feelings exhibited a higher ability to decode facial expressions, and the contribution of internal feelings was independent of face recognition ability. Further, using voxel-based morphometry, we found that the gray matter volume (GMV) of bilateral superior temporal sulcus (STS) and the right inferior parietal lobule was associated with facial expression decoding through the mediating effect of internal feelings, while the GMV of bilateral STS, precuneus, and the right central opercular cortex contributed to facial expression decoding through the mediating effect of face recognition ability. In addition, the clusters in bilateral STS involved in the two components were neighboring yet separate. Our results may provide clues about the mechanism by which internal feelings, in addition to face recognition ability, serve as an important instrument for humans in facial expression decoding. Copyright © 2016 Elsevier Inc. All rights reserved.

  10. Impaired perception of facial emotion in developmental prosopagnosia.

    PubMed

    Biotti, Federica; Cook, Richard

    2016-08-01

    Developmental prosopagnosia (DP) is a neurodevelopmental condition characterised by difficulties recognising faces. Despite severe difficulties recognising facial identity, expression recognition is typically thought to be intact in DP; case studies have described individuals who are able to correctly label photographic displays of facial emotion, and no group differences have been reported. This pattern of deficits suggests a locus of impairment relatively late in the face processing stream, after the divergence of expression and identity analysis pathways. To date, however, there has been little attempt to investigate emotion recognition systematically in a large sample of developmental prosopagnosics using sensitive tests. In the present study, we describe three complementary experiments that examine emotion recognition in a sample of 17 developmental prosopagnosics. In Experiment 1, we investigated observers' ability to make binary classifications of whole-face expression stimuli drawn from morph continua. In Experiment 2, observers judged facial emotion using only the eye region (the rest of the face was occluded). Analyses of both experiments revealed diminished ability to classify facial expressions in our sample of developmental prosopagnosics, relative to typical observers. Imprecise expression categorisation was particularly evident in those individuals exhibiting apperceptive profiles, associated with problems encoding facial shape accurately. When the sample of prosopagnosics was split into apperceptive and non-apperceptive subgroups, only the apperceptive prosopagnosics were impaired relative to typical observers. In our third experiment, we examined observers' ability to classify the emotion present within segments of vocal affect. Despite difficulties judging facial emotion, the prosopagnosics exhibited excellent recognition of vocal affect. Contrary to the prevailing view, our results suggest that many prosopagnosics do experience difficulties classifying expressions, particularly those with apperceptive profiles. These individuals may have difficulties forming view-invariant structural descriptions at an early stage in the face processing stream, before identity and expression pathways diverge. Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. Following the time course of face gender and expression processing: a task-dependent ERP study.

    PubMed

    Valdés-Conroy, Berenice; Aguado, Luis; Fernández-Cahill, María; Romero-Ferreiro, Verónica; Diéguez-Risco, Teresa

    2014-05-01

    The effects of task demands and the interaction between gender and expression in face perception were studied using event-related potentials (ERPs). Participants performed three different tasks with male and female faces that were emotionally inexpressive or that showed happy or angry expressions. In two of the tasks (gender and expression categorization) facial properties were task-relevant, while in a third task (symbol discrimination) facial information was irrelevant. Effects of expression were observed on the visual P100 component under all task conditions, suggesting the operation of an automatic process that is not influenced by task demands. The earliest interaction between expression and gender was observed later in the face-sensitive N170 component. This component showed differential modulations by specific combinations of gender and expression (e.g., angry male vs. angry female faces). Main effects of expression and task were observed in a later occipito-temporal component peaking around 230 ms post-stimulus onset (EPN, or early posterior negativity): amplitudes were less positive in the presence of angry faces and during performance of the gender and expression tasks. Finally, task demands also modulated a positive component peaking around 400 ms (LPC, or late positive complex) that showed enhanced amplitude for the gender task. The pattern of results obtained here adds new evidence about the sequence of operations involved in face processing and the interaction of facial properties (gender and expression) in response to different task demands. Copyright © 2014 Elsevier B.V. All rights reserved.

  12. In the face of emotions: event-related potentials in supraliminal and subliminal facial expression recognition.

    PubMed

    Balconi, Michela; Lucchiari, Claudio

    2005-02-01

    Is facial expression recognition marked by specific event-related potential (ERP) effects? Are conscious and unconscious elaborations of emotional facial stimuli qualitatively different processes? In Experiment 1, ERPs elicited by supraliminal stimuli were recorded while 21 participants viewed emotional facial expressions of four emotions and a neutral stimulus. Two ERP components (N2 and P3) were analyzed for their peak amplitude and latency measures. First, emotional face-specificity was observed for the negative deflection N2, whereas P3 was not affected by the content of the stimulus (emotional or neutral). A more posterior distribution of ERPs was found for N2. Moreover, a lateralization effect was revealed for negative (right lateralization) and positive (left lateralization) facial expressions. In Experiment 2 (20 participants), 1-ms subliminal stimulation was carried out. Unaware information processing was revealed to be quite similar to aware information processing for peak amplitude but not for latency. In fact, unconscious stimulation produced a more delayed peak variation than conscious stimulation.

  13. Facial and semantic emotional interference: A pilot study on the behavioral and cortical responses to the dual valence association task

    PubMed Central

    2011-01-01

    Background Integration of compatible or incompatible emotional valence and semantic information is an essential aspect of complex social interactions. A modified version of the Implicit Association Test (IAT), called the Dual Valence Association Task (DVAT), was designed to measure conflict-resolution processing arising from the compatibility or incompatibility of semantic and facial valence. The DVAT involves two emotional valence evaluative tasks which elicit two forms of emotional compatible/incompatible associations (facial and semantic). Methods Behavioural measures and event-related potentials were recorded while participants performed the DVAT. Results Behavioural data showed a robust effect that distinguished compatible/incompatible tasks. The effects of valence and contextual association (between facial and semantic stimuli) showed early discrimination in the N170 of faces. The LPP component was modulated by the compatibility of the DVAT. Conclusions Results suggest that the DVAT is a robust paradigm for studying the emotional interference effect in the processing of simultaneous information from semantic and facial stimuli. PMID:21489277

  14. Are There Gender Differences in Emotion Comprehension? Analysis of the Test of Emotion Comprehension.

    PubMed

    Fidalgo, Angel M; Tenenbaum, Harriet R; Aznar, Ana

    2018-01-01

    This article examines whether there are gender differences in understanding the emotions evaluated by the Test of Emotion Comprehension (TEC). The TEC provides a global index of emotion comprehension in children 3-11 years of age, which is the sum of the nine components that constitute emotion comprehension: (1) recognition of facial expressions, (2) understanding of external causes of emotions, (3) understanding of desire-based emotions, (4) understanding of belief-based emotions, (5) understanding of the influence of a reminder on present emotional states, (6) understanding of the possibility of regulating emotional states, (7) understanding of the possibility of hiding emotional states, (8) understanding of mixed emotions, and (9) understanding of moral emotions. We used the answers to the TEC given by 172 English girls and 181 boys from 3 to 8 years of age. First, the nine components into which the TEC is subdivided were analysed for differential item functioning (DIF), taking gender as the grouping variable. To evaluate DIF, the Mantel-Haenszel method and logistic regression analysis were used, applying the Educational Testing Service DIF classification criteria. The results show that the TEC did not display gender DIF. Second, once the absence of DIF had been corroborated, differences between boys and girls in the total TEC score and its components were analysed, controlling for age. Our data are compatible with the hypothesis of independence between gender and level of comprehension in 8 of the 9 components of the TEC. Several hypotheses are discussed that could explain the differences found between boys and girls in the belief component. Given that the belief component is basically a false-belief task, the differences found seem to support findings in the literature indicating that girls perform better on this task.

  15. The time course of individual face recognition: A pattern analysis of ERP signals.

    PubMed

    Nemrodov, Dan; Niemeier, Matthias; Mok, Jenkin Ngo Yin; Nestor, Adrian

    2016-05-15

    An extensive body of work documents the time course of neural face processing in the human visual cortex. However, the majority of this work has focused on specific temporal landmarks, such as the N170 and N250 components, derived through univariate analyses of EEG data. Here, we take on a broader evaluation of ERP signals related to individual face recognition as we attempt to move beyond the leading theoretical and methodological framework through the application of pattern analysis to ERP data. Specifically, we investigate the spatiotemporal profile of identity recognition across variation in emotional expression. To this end, we apply pattern classification to ERP signals both in time, for any single electrode, and in space, across multiple electrodes. Our results confirm the significance of traditional ERP components in face processing. At the same time, though, they support the idea that the temporal profile of face recognition is incompletely described by such components. First, we show that signals associated with different facial identities can be discriminated from each other outside the scope of these components, as early as 70 ms following stimulus presentation. Next, electrodes associated with traditional ERP components as well as, critically, those not associated with such components are shown to contribute information to stimulus discriminability. Last, the levels of ERP-based pattern discrimination are found to correlate with recognition accuracy across subjects, confirming the relevance of these methods for bridging brain and behavior data. Altogether, the current results shed new light on the fine-grained time course of neural face processing and showcase the value of novel methods for pattern analysis in investigating fundamental aspects of visual recognition. Copyright © 2016 Elsevier Inc. All rights reserved.
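
    A minimal sketch of time-resolved ERP pattern classification, assuming scikit-learn: at each time point, cross-validate a classifier on the multi-electrode voltage pattern, yielding a decoding-accuracy time course. The epoch array, electrode count, and two-identity labels are synthetic stand-ins for real recordings.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
epochs = rng.normal(size=(120, 64, 200))    # trials x electrodes x samples
identity = rng.integers(0, 2, size=120)     # two facial identities

accuracy = np.empty(epochs.shape[2])
for t in range(epochs.shape[2]):
    pattern = epochs[:, :, t]               # spatial pattern at time t
    scores = cross_val_score(LinearDiscriminantAnalysis(),
                             pattern, identity, cv=5)
    accuracy[t] = scores.mean()
# `accuracy` traces when identity information emerges in the signal,
# independent of any predefined component window
```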

  16. Auto white balance method using a pigmentation separation technique for human skin color

    NASA Astrophysics Data System (ADS)

    Tanaka, Satomi; Kakinuma, Akihiro; Kamijo, Naohiro; Takahashi, Hiroshi; Tsumura, Norimichi

    2017-02-01

    The human visual system maintains the perception of an object's colors across various light sources. Similarly, current digital cameras feature an auto white balance function, which estimates the illuminant color and corrects the colors of a photograph as if it had been taken under a certain light source. The main subject in a photograph is often a person's face, which could be used to estimate the illuminant color. However, such estimation is adversely affected by differences in facial colors among individuals. The present paper proposes an auto white balance algorithm based on a pigmentation separation method that separates the human skin color image into melanin, hemoglobin and shading components. Pigment densities, which can be calculated from the melanin and hemoglobin components of the face, have a uniform property within the same race. We thus propose a method that uses the subject's facial color in an image and is unaffected by individual differences in facial color among Japanese people.
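
    A hedged sketch of the correction step common to auto-white-balance pipelines: given an illuminant estimate (here, the mean color of a skin region, a crude stand-in for the paper's pigmentation-separation estimate), apply a von Kries-style diagonal gain per channel. All values are illustrative assumptions.

```python
import numpy as np

def white_balance(image, skin_mask, target_gray=0.5):
    """image: float RGB array in [0, 1]; skin_mask: boolean HxW array."""
    illuminant = image[skin_mask].mean(axis=0)       # per-channel estimate
    gains = target_gray / np.clip(illuminant, 1e-6, None)
    return np.clip(image * gains, 0.0, 1.0)          # diagonal correction

img = np.random.rand(64, 64, 3)                      # placeholder photo
mask = np.zeros((64, 64), dtype=bool)
mask[20:40, 20:40] = True                            # hypothetical skin region
balanced = white_balance(img, mask)
```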

  17. Artifacts produced during electrical stimulation of the vestibular nerve in cats. [autonomic nervous system components of motion sickness

    NASA Technical Reports Server (NTRS)

    Tang, P. C.

    1973-01-01

    Evidence is presented to indicate that evoked potentials in the recurrent laryngeal, the cervical sympathetic, and the phrenic nerve, commonly reported as being elicited by vestibular nerve stimulation, may be due to stimulation of structures other than the vestibular nerve. Experiments carried out in decerebrated cats indicated that stimulation of the petrous bone and not that of the vestibular nerve is responsible for the genesis of evoked potentials in the recurrent laryngeal and the cervical sympathetic nerves. The phrenic response to electrical stimulation applied through bipolar straight electrodes appears to be the result of stimulation of the facial nerve in the facial canal by current spread along the petrous bone, since stimulation of the suspended facial nerve evoked potentials only in the phrenic nerve and not in the recurrent laryngeal nerve. These findings indicate that autonomic components of motion sickness represent the secondary reactions and not the primary responses to vestibular stimulation.

  18. Perceptual integration of kinematic components in the recognition of emotional facial expressions.

    PubMed

    Chiovetto, Enrico; Curio, Cristóbal; Endres, Dominik; Giese, Martin

    2018-04-01

    According to a long-standing hypothesis in motor control, complex body motion is organized in terms of movement primitives, reducing massively the dimensionality of the underlying control problems. For body movements, this low-dimensional organization has been convincingly demonstrated by the learning of low-dimensional representations from kinematic and EMG data. In contrast, the effective dimensionality of dynamic facial expressions is unknown, and dominant analysis approaches have been based on heuristically defined facial "action units," which reflect contributions of individual face muscles. We determined the effective dimensionality of dynamic facial expressions by learning of a low-dimensional model from 11 facial expressions. We found an amazingly low dimensionality with only two movement primitives being sufficient to simulate these dynamic expressions with high accuracy. This low dimensionality is confirmed statistically, by Bayesian model comparison of models with different numbers of primitives, and by a psychophysical experiment that demonstrates that expressions, simulated with only two primitives, are indistinguishable from natural ones. In addition, we find statistically optimal integration of the emotion information specified by these primitives in visual perception. Taken together, our results indicate that facial expressions might be controlled by a very small number of independent control units, permitting very low-dimensional parametrization of the associated facial expression.
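
    As an illustration of the dimensionality question posed here, a sketch under stated assumptions: fit a low-dimensional linear model to facial-motion trajectories and check how much variance two components (standing in for the paper's two "movement primitives") capture. The trajectories are synthetic, and plain PCA is a simplification of the paper's actual model learning.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 50)
# 11 expressions x 30 facial control points, built from two latent signals
latents = np.stack([np.sin(2 * np.pi * t), t ** 2])          # (2, 50)
mixing = rng.normal(size=(11 * 30, 2))
trajectories = mixing @ latents + 0.01 * rng.normal(size=(330, 50))

pca = PCA(n_components=2).fit(trajectories)
explained = pca.explained_variance_ratio_.sum()
# If two primitives suffice, `explained` approaches 1.0, as it does here
```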

  19. The impact of acne and facial post-inflammatory hyperpigmentation on quality of life and self-esteem of newly admitted Nigerian undergraduates

    PubMed Central

    Akinboro, Adeolu Oladayo; Ezejiofor, Ogochukwu Ifeanyi; Olanrewaju, Fatai Olatunde; Oripelaye, Mufutau Muphy; Olabode, Olatunde Peter; Ayodele, Olugbenga Edward; Onayemi, Emmanuel Olaniyi

    2018-01-01

    Background Acne and facial post-inflammatory hyperpigmentation are relatively common clinical conditions among adolescents and young adults, and inflict psychosocial injuries on sufferers. Objective To document the psychosocial and self-esteem implications of acne and facial hyperpigmentation on newly admitted undergraduates. Materials and methods A cross-sectional survey was conducted among 200 undergraduates. Demographics and clinical characteristics were obtained and acne was graded using the US Food and Drug Administration 5-category global system of acne classification. Participants completed the Cardiff Acne Disability Index (CADI) and the Rosenberg self-esteem scale (RSES), and data were analyzed using SPSS 20. Results Mean age of acne onset was 16.24 ± 3.32 years. There were 168 (84.0%) cases categorized as almost clear, 24 (12.0%) as mild acne, 4 (2.0%) as moderate acne and 4 (2.0%) as severe acne. Acne with facial hyperpigmentation, compared to acne without hyperpigmentation, was associated with significant level of anxiety in 30 participants (26.5% vs 10.3%, p=0.004) and emotional distress in 40 (35.4% vs 10.3%, p<0.001). Acne severity correlated with total CADI score but not with total RSES score. Quality of life (QoL) was significantly reduced among acne patients with facial hyperpigmentation (1.77±1.62 vs 1.07±1.02, p<0.001) compared to those without hyperpigmentation. Acne and facial hyperpigmentation was associated with social life interference, avoidance of public facilities, poor body image and self-esteem and perception of worse disease. There was no association between gender and QoL but acne was related to a reduction of self-worth. Low self-esteem was present in 1.5%, and severe acne was associated with an occasional feeling of uselessness in the male gender. Conclusion Acne with facial hyperpigmentation induces poorer QoL and self-esteem is impaired only in severe acne. Beyond the medical treatment of acne, dermatologists should routinely assess the QoL and give attention to treatment of facial post-inflammatory hyperpigmentation among people of color. PMID:29785134

  20. The impact of acne and facial post-inflammatory hyperpigmentation on quality of life and self-esteem of newly admitted Nigerian undergraduates.

    PubMed

    Akinboro, Adeolu Oladayo; Ezejiofor, Ogochukwu Ifeanyi; Olanrewaju, Fatai Olatunde; Oripelaye, Mufutau Muphy; Olabode, Olatunde Peter; Ayodele, Olugbenga Edward; Onayemi, Emmanuel Olaniyi

    2018-01-01

    Acne and facial post-inflammatory hyperpigmentation are relatively common clinical conditions among adolescents and young adults, and inflict psychosocial injuries on sufferers. To document the psychosocial and self-esteem implications of acne and facial hyperpigmentation on newly admitted undergraduates. A cross-sectional survey was conducted among 200 undergraduates. Demographics and clinical characteristics were obtained and acne was graded using the US Food and Drug Administration 5-category global system of acne classification. Participants completed the Cardiff Acne Disability Index (CADI) and the Rosenberg self-esteem scale (RSES), and data were analyzed using SPSS 20. Mean age of acne onset was 16.24 ± 3.32 years. There were 168 (84.0%) cases categorized as almost clear, 24 (12.0%) as mild acne, 4 (2.0%) as moderate acne and 4 (2.0%) as severe acne. Acne with facial hyperpigmentation, compared to acne without hyperpigmentation, was associated with significant level of anxiety in 30 participants (26.5% vs 10.3%, p = 0.004) and emotional distress in 40 (35.4% vs 10.3%, p < 0.001). Acne severity correlated with total CADI score but not with total RSES score. Quality of life (QoL) was significantly reduced among acne patients with facial hyperpigmentation (1.77±1.62 vs 1.07±1.02, p < 0.001) compared to those without hyperpigmentation. Acne and facial hyperpigmentation was associated with social life interference, avoidance of public facilities, poor body image and self-esteem and perception of worse disease. There was no association between gender and QoL but acne was related to a reduction of self-worth. Low self-esteem was present in 1.5%, and severe acne was associated with an occasional feeling of uselessness in the male gender. Acne with facial hyperpigmentation induces poorer QoL and self-esteem is impaired only in severe acne. Beyond the medical treatment of acne, dermatologists should routinely assess the QoL and give attention to treatment of facial post-inflammatory hyperpigmentation among people of color.

  1. Transient emotional events and individual affective traits affect emotion recognition in a perceptual decision-making task.

    PubMed

    Qiao-Tasserit, Emilie; Garcia Quesada, Maria; Antico, Lia; Bavelier, Daphne; Vuilleumier, Patrik; Pichon, Swann

    2017-01-01

    Both affective states and personality traits shape how we perceive the social world and interpret emotions. The literature on affective priming has mostly focused on brief influences of emotional stimuli and emotional states on perceptual and cognitive processes. Yet this approach does not fully capture more dynamic processes at the root of emotional states, with such states lingering beyond the duration of the inducing external stimuli. Our goal was to put in perspective three different types of affective states (induced affective states, more sustained mood states and affective traits such as depression and anxiety) and investigate how they may interact and influence emotion perception. Here, we hypothesized that absorption into positive and negative emotional episodes generates sustained affective states that outlast the episode and bias the interpretation of facial expressions in a perceptual decision-making task. We also investigated how such effects are influenced by more sustained mood states and by individual affect traits (depression and anxiety) and whether they interact. Transient emotional states were induced using movie clips, after which participants performed a forced-choice emotion classification task with morphed facial expressions ranging from fear to happiness. Using a psychometric approach, we show that negative (vs. neutral) clips increased participants' propensity to classify ambiguous faces as fearful during several minutes. In contrast, positive movies biased classification toward happiness only for those clips perceived as most absorbing. Negative mood, anxiety and depression had a stronger effect than transient states and increased the propensity to classify ambiguous faces as fearful. These results provide the first evidence that absorption and different temporal dimensions of emotions have a significant effect on how we perceive facial expressions.

  2. Transient emotional events and individual affective traits affect emotion recognition in a perceptual decision-making task

    PubMed Central

    Garcia Quesada, Maria; Antico, Lia; Bavelier, Daphne; Vuilleumier, Patrik; Pichon, Swann

    2017-01-01

    Both affective states and personality traits shape how we perceive the social world and interpret emotions. The literature on affective priming has mostly focused on brief influences of emotional stimuli and emotional states on perceptual and cognitive processes. Yet this approach does not fully capture more dynamic processes at the root of emotional states, with such states lingering beyond the duration of the inducing external stimuli. Our goal was to put in perspective three different types of affective states (induced affective states, more sustained mood states and affective traits such as depression and anxiety) and investigate how they may interact and influence emotion perception. Here, we hypothesized that absorption into positive and negative emotional episodes generates sustained affective states that outlast the episode and bias the interpretation of facial expressions in a perceptual decision-making task. We also investigated how such effects are influenced by more sustained mood states and by individual affect traits (depression and anxiety) and whether they interact. Transient emotional states were induced using movie clips, after which participants performed a forced-choice emotion classification task with morphed facial expressions ranging from fear to happiness. Using a psychometric approach, we show that negative (vs. neutral) clips increased participants’ propensity to classify ambiguous faces as fearful during several minutes. In contrast, positive movies biased classification toward happiness only for those clips perceived as most absorbing. Negative mood, anxiety and depression had a stronger effect than transient states and increased the propensity to classify ambiguous faces as fearful. These results provide the first evidence that absorption and different temporal dimensions of emotions have a significant effect on how we perceive facial expressions. PMID:28151976

  3. Gender classification from video under challenging operating conditions

    NASA Astrophysics Data System (ADS)

    Mendoza-Schrock, Olga; Dong, Guozhu

    2014-06-01

    The literature abounds with papers on gender classification. However, the majority of such research assumes sufficient resolution for the subject's face to be resolved; hence most of it actually falls within the face recognition and facial feature area. A gap exists for gender classification under challenging operating conditions (different seasonal conditions, different clothing, etc.) and when the subject's face cannot be resolved due to lack of resolution. The Seasonal Weather and Gender (SWAG) Database is a novel database that contains subjects walking through a scene under operating conditions that span a calendar year. This paper exploits a subset of that database, the SWAG One dataset, using data mining techniques, traditional classifiers (e.g., Naïve Bayes, Support Vector Machine), and both traditional (Canny edge detection) and non-traditional (height/width ratios) feature extractors to achieve high correct gender classification rates (greater than 85%). Another novelty is the exploitation of frame differentials.
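
    The classification stage described above can be sketched as follows (Python; the feature values are synthetic stand-ins, since the SWAG One extractors are only named, not specified, in the abstract):

    ```python
    # A hedged sketch: traditional classifiers applied to simple non-facial
    # features such as a silhouette height/width ratio and an edge-density
    # summary from a Canny edge map. All numbers are simulated.
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.naive_bayes import GaussianNB
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    n = 200
    # Hypothetical per-subject features for two classes.
    ratio = np.concatenate([rng.normal(3.4, 0.3, n), rng.normal(3.0, 0.3, n)])
    edges = np.concatenate([rng.normal(0.12, 0.02, n), rng.normal(0.15, 0.02, n)])
    X = np.column_stack([ratio, edges])
    y = np.array([0] * n + [1] * n)  # 0 = male, 1 = female (arbitrary coding)

    for clf in (GaussianNB(), SVC(kernel="rbf")):
        scores = cross_val_score(clf, X, y, cv=10)
        print(type(clf).__name__, f"accuracy = {scores.mean():.2f}")
    ```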

  4. [Partial facial duplication (a rare diprosopus): Case report and review of the literature].

    PubMed

    Es-Seddiki, A; Rkain, M; Ayyad, A; Nkhili, H; Amrani, R; Benajiba, N

    2015-12-01

    Diprosopus, or partial facial duplication, is a very rare congenital abnormality and a rare form of conjoined twinning. Partial facial duplication may or may not be symmetric and may involve the nose, the maxilla, the mandible, the palate, the tongue, and the mouth. A male newborn born to consanguineous parents was admitted on his first day of life for facial deformity. He presented with hypertelorism, 2 eyes, a partially duplicated nose (a large flattened nose, 2 columellae, and 2 lateral nostrils separated in the midline by a third deformed opening), two mouths, and a duplicated maxilla. Laboratory tests were normal. Craniofacial CT confirmed the maxillary duplication. This type of craniofacial duplication is a rare entity, with about 35 cases reported in the literature. Our patient was similar to a rare case of living diprosopus reported by Stiehm in 1972. Diprosopus is often associated with abnormalities of the gastrointestinal tract, the central nervous system, and the cardiovascular and respiratory systems, and with a high incidence of cleft lip and palate. Surgical treatment consists of resection of the duplicated components. Copyright © 2015 Elsevier Masson SAS. All rights reserved.

  5. Targeted presurgical decompensation in patients with yaw-dependent facial asymmetry

    PubMed Central

    Kim, Kyung-A; Lee, Ji-Won; Park, Jeong-Ho; Kim, Byoung-Ho; Ahn, Hyo-Won

    2017-01-01

    Facial asymmetry can be classified into the rolling-dominant type (R-type), translation-dominant type (T-type), yawing-dominant type (Y-type), and atypical type (A-type) based on the distorted skeletal components that cause canting, translation, and yawing of the maxilla and/or mandible. Each facial asymmetry type represents dentoalveolar compensations in three dimensions that correspond to the main skeletal discrepancies. To obtain sufficient surgical correction, it is necessary to analyze the main skeletal discrepancies contributing to the facial asymmetry and then the skeletal-dental relationships in the maxilla and mandible separately. Particularly in cases of facial asymmetry accompanied by mandibular yawing, it is not simple to establish presurgical goals of tooth movement, since chin deviation and posterior gonial prominence can be either aggravated or compromised according to the direction of mandibular yawing. Thus, strategic dentoalveolar decompensations targeting the real basal skeletal discrepancies should be performed during presurgical orthodontic treatment to allow for sufficient skeletal correction with stability. In this report, we document targeted decompensation in two asymmetry patients, focusing on the more complicated yaw-dependent types: Y-type and A-type. This may suggest a clinical guideline on targeted decompensation in patients with different types of facial asymmetry. PMID:28523246

  6. Targeted presurgical decompensation in patients with yaw-dependent facial asymmetry.

    PubMed

    Kim, Kyung-A; Lee, Ji-Won; Park, Jeong-Ho; Kim, Byoung-Ho; Ahn, Hyo-Won; Kim, Su-Jung

    2017-05-01

    Facial asymmetry can be classified into the rolling-dominant type (R-type), translation-dominant type (T-type), yawing-dominant type (Y-type), and atypical type (A-type) based on the distorted skeletal components that cause canting, translation, and yawing of the maxilla and/or mandible. Each facial asymmetry type represents dentoalveolar compensations in three dimensions that correspond to the main skeletal discrepancies. To obtain sufficient surgical correction, it is necessary to analyze the main skeletal discrepancies contributing to the facial asymmetry and then the skeletal-dental relationships in the maxilla and mandible separately. Particularly in cases of facial asymmetry accompanied by mandibular yawing, it is not simple to establish presurgical goals of tooth movement, since chin deviation and posterior gonial prominence can be either aggravated or compromised according to the direction of mandibular yawing. Thus, strategic dentoalveolar decompensations targeting the real basal skeletal discrepancies should be performed during presurgical orthodontic treatment to allow for sufficient skeletal correction with stability. In this report, we document targeted decompensation in two asymmetry patients, focusing on the more complicated yaw-dependent types: Y-type and A-type. This may suggest a clinical guideline on targeted decompensation in patients with different types of facial asymmetry.

  7. Evidence-based guideline update: steroids and antivirals for Bell palsy: report of the Guideline Development Subcommittee of the American Academy of Neurology.

    PubMed

    Gronseth, Gary S; Paduga, Remia

    2012-11-27

    To review evidence published since the 2001 American Academy of Neurology (AAN) practice parameter regarding the effectiveness, safety, and tolerability of steroids and antiviral agents for Bell palsy. We searched Medline and the Cochrane Database of Controlled Clinical Trials for studies published since January 2000 that compared facial functional outcomes in patients with Bell palsy receiving steroids/antivirals with patients not receiving these medications. We graded each study (Class I-IV) using the AAN therapeutic classification of evidence scheme. We compared the proportion of patients recovering facial function in the treated group with the proportion of patients recovering facial function in the control group. Nine studies published since June 2000 on patients with Bell palsy receiving steroids/antiviral agents were identified. Two of these studies were rated Class I because of high methodologic quality. For patients with new-onset Bell palsy, steroids are highly likely to be effective and should be offered to increase the probability of recovery of facial nerve function (2 Class I studies, Level A) (risk difference 12.8%-15%). For patients with new-onset Bell palsy, antiviral agents in combination with steroids do not increase the probability of facial functional recovery by >7%. Because of the possibility of a modest increase in recovery, patients might be offered antivirals (in addition to steroids) (Level C). Patients offered antivirals should be counseled that a benefit from antivirals has not been established, and, if there is a benefit, it is likely that it is modest at best.

  8. Facial Structure Predicts Sexual Orientation in Both Men and Women.

    PubMed

    Skorska, Malvina N; Geniole, Shawn N; Vrysen, Brandon M; McCormick, Cheryl M; Bogaert, Anthony F

    2015-07-01

    Biological models have typically framed sexual orientation in terms of effects of variation in fetal androgen signaling on sexual differentiation, although other biological models exist. Despite marked sex differences in facial structure, the relationship between sexual orientation and facial structure is understudied. A total of 52 lesbian women, 134 heterosexual women, 77 gay men, and 127 heterosexual men were recruited at a Canadian campus and various Canadian Pride and sexuality events. We found that facial structure differed depending on sexual orientation; substantial variation in sexual orientation was predicted using facial metrics computed by a facial modelling program from photographs of White faces. At the univariate level, lesbian and heterosexual women differed in 17 facial features (out of 63) and four were unique multivariate predictors in logistic regression. Gay and heterosexual men differed in 11 facial features at the univariate level, of which three were unique multivariate predictors. Some, but not all, of the facial metrics differed between the sexes. Lesbian women had noses that were more turned up (also more turned up in heterosexual men), mouths that were more puckered, smaller foreheads, and marginally more masculine face shapes (also in heterosexual men) than heterosexual women. Gay men had more convex cheeks, shorter noses (also in heterosexual women), and foreheads that were more tilted back relative to heterosexual men. Principal components analysis and discriminant functions analysis generally corroborated these results. The mechanisms underlying variation in craniofacial structure (both related and unrelated to sexual differentiation) may thus be important in understanding the development of sexual orientation.
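
    The multivariate step described above, logistic regression identifying unique predictors among facial metrics, can be sketched as follows (Python; the data are synthetic, and the study's 63 metrics are not reproduced here):

    ```python
    # A minimal sketch, assuming standardized facial metrics as predictors of
    # a binary group label. Larger |coefficient| indicates a stronger unique
    # multivariate predictor, in the spirit of the analysis described.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    n = 150
    X = rng.normal(size=(n, 4))          # hypothetical metrics, e.g. nose tilt
    # Synthetic labels weakly driven by two of the four metrics.
    logits = 0.8 * X[:, 0] - 0.6 * X[:, 2]
    y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

    model = LogisticRegression().fit(X, y)
    print("coefficients:", model.coef_.round(2))
    ```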

  9. Three-dimensional gender differences in facial form of children in the North East of England.

    PubMed

    Bugaighis, Iman; Mattick, Clare R; Tiddeman, Bernard; Hobson, Ross

    2013-06-01

    The aim of this prospective cross-sectional morphometric study was to explore three-dimensional (3D) facial shape and form (shape plus size) variation within and between 8- and 12-year-old Caucasian children; 39 males were age-matched with 41 females. The 3D images were captured using a stereophotogrammetric system, and facial form was recorded by digitizing 39 anthropometric landmarks for each scan. The x, y, z coordinates of each landmark were extracted and used to calculate linear and angular measurements. 3D landmark asymmetry was quantified using Generalized Procrustes Analysis (GPA), and an average face was constructed for each gender. The average faces were superimposed and differences were visualized and quantified. Shape variations were explored using GPA and Principal Component Analysis. Analysis of covariance and Pearson correlation coefficients were used to explore gender differences and to determine any correlation between facial measurements and height or weight. Multivariate analysis was used to ascertain differences in facial measurements or 3D landmark asymmetry. There were no differences in height or weight between genders. There was a significant positive correlation between facial measurements and height and weight, and statistically significant differences in linear facial width measurements between genders. These differences were related to the larger size of males rather than to differences in shape. There were no age- or gender-linked significant differences in 3D landmark asymmetry. Shape analysis confirmed similarities between males and females for facial shape and form in 8- to 12-year-old children. Any differences found were related to differences in facial size rather than shape.
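
    The Procrustes step used in this kind of analysis can be sketched compactly (Python; random landmark arrays stand in for the study's 39 digitized points):

    ```python
    # A minimal sketch: Procrustes superimposition removes differences in
    # position, scale and rotation before shape comparison. scipy's pairwise
    # routine is used; a full GPA iterates alignment to a running mean shape.
    import numpy as np
    from scipy.spatial import procrustes

    rng = np.random.default_rng(2)
    face_a = rng.normal(size=(39, 3))                # hypothetical 3D landmarks
    face_b = face_a + rng.normal(0, 0.05, (39, 3))   # a slightly perturbed copy

    mtx_a, mtx_b, disparity = procrustes(face_a, face_b)
    print(f"Procrustes disparity = {disparity:.4f}")
    # Averaging many aligned configurations yields the kind of "average face"
    # compared between genders in the study.
    ```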

  10. Facial expressions and pair bonds in hylobatids.

    PubMed

    Florkiewicz, Brittany; Skollar, Gabriella; Reichard, Ulrich H

    2018-06-06

    Facial expressions are an important component of primate communication that functions to transmit social information and modulate intentions and motivations. Chimpanzees and macaques, for example, produce a variety of facial expressions when communicating with conspecifics. Hylobatids also produce various facial expressions; however, the origin and function of these facial expressions are still largely unclear. It has been suggested that larger facial expression repertoires may have evolved in the context of social complexity, but this link has yet to be tested on a broader empirical basis. The social complexity hypothesis offers a possible explanation for the evolution of complex communicative signals such as facial expressions, because as the complexity of an individual's social environment increases, so does the need for communicative signals. We used an intraspecies, pair-focused study design to test the link between facial expressions and sociality within hylobatids, specifically the strength of pair bonds. The study compared 206 hr of video and 103 hr of focal animal data for ten hylobatid pairs from three genera (Nomascus, Hoolock, and Hylobates) living at the Gibbon Conservation Center. Using video footage, we explored 5,969 facial expressions along three dimensions: repertoire use, repertoire breadth, and facial expression synchrony (FES). We then used focal animal data to compare dimensions of facial expressiveness to pair bond strength and behavioral synchrony. The hylobatids in our study shared only half (50%) of their facial expressions with those reported in the only other detailed quantitative study of hylobatid facial expressions, while 27 facial expressions were uniquely observed in our study animals. Taken together, hylobatids have a large facial expression repertoire of at least 80 unique facial expressions. Contrary to our prediction, facial repertoire composition was not significantly correlated with pair bond strength, rates of territorial synchrony, or rates of behavioral synchrony. We found that FES was the strongest measure of hylobatid expressiveness and was significantly positively correlated with higher sociality index scores; however, FES showed no significant correlation with behavioral synchrony. No noticeable differences between pairs were found in rates of behavioral or territorial synchrony, and facial repertoire sizes and FES were not significantly correlated with either rate. Our study confirms an important role of facial expressions in maintaining pair bonds and coordinating activities in hylobatids. The data support the hypothesis that facial expressions and sociality have been linked in hylobatid and primate evolution. It is possible that larger facial repertoires contributed to strengthening pair bonds in primates, because richer repertoires provide more opportunities for FES, which can effectively increase the "understanding" between partners through smoother coordination of interaction patterns. This study supports the social complexity hypothesis as the driving force for the evolution of complex communication signaling. © 2018 Wiley Periodicals, Inc.

  11. Early and late temporo-spatial effects of contextual interference during perception of facial affect.

    PubMed

    Frühholz, Sascha; Fehr, Thorsten; Herrmann, Manfred

    2009-10-01

    Contextual features during recognition of facial affect are assumed to modulate the temporal course of emotional face processing. Here, we simultaneously presented colored backgrounds during valence categorizations of facial expressions. Subjects incidentally learned to perceive negative, neutral and positive expressions within a specific colored context. Subsequently, subjects made fast valence judgments while presented with the same face-color combinations as in the first run (congruent trials) or with different face-color combinations (incongruent trials). Incongruent trials induced significantly increased response latencies and significantly decreased performance accuracy. Contextual incongruent information during processing of neutral expressions modulated the P1 and the early posterior negativity (EPN), both localized in occipito-temporal areas. Contextual congruent information during emotional face perception revealed an emotion-related modulation of the P1 for positive expressions and of the N170 and the EPN for negative expressions. The highest amplitude of the N170 was found for negative expressions in a negatively associated context, and the N170 amplitude varied with the amount of overall negative information. Incongruent trials with negative expressions elicited a parietal negativity, which was localized to superior parietal cortex and most likely represents a posterior manifestation of the N450 as an indicator of conflict processing. Sustained activation of the late positive potential (LPP) over parietal cortex for all incongruent trials might reflect enhanced engagement with the facial expression under task conditions of contextual interference. In conclusion, whereas early components seem to be sensitive to the emotional valence of facial expressions in specific contexts, late components seem to subserve interference resolution during emotional face processing.
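
    Component amplitudes like those analyzed above are conventionally quantified as the mean voltage of the averaged ERP within a latency window. A minimal sketch (Python; the data array, sampling rate, and window bounds are placeholders, not values from this study):

    ```python
    # Averaging epochs into an ERP and taking the mean voltage in a
    # component window, e.g. an assumed 140-200 ms window around the N170.
    import numpy as np

    sfreq = 500.0                                   # sampling rate in Hz (assumed)
    times = np.arange(-0.1, 0.5, 1 / sfreq)         # epoch from -100 to 500 ms
    epochs = np.random.randn(80, times.size)        # 80 trials x samples, one channel

    erp = epochs.mean(axis=0)                       # event-related potential
    win = (times >= 0.14) & (times <= 0.20)         # component time window
    print(f"mean amplitude 140-200 ms: {erp[win].mean():.3f} (arbitrary units)")
    ```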

  12. Gender, age, and psychosocial context of the perception of facial esthetics.

    PubMed

    Tole, Nikoleta; Lajnert, Vlatka; Kovacevic Pavicic, Daniela; Spalj, Stjepan

    2014-01-01

    To explore the effects of gender, age, and psychosocial context on the perception of facial esthetics. The study included 1,444 Caucasian subjects aged 16 to 85 years. Two sets of color photographs illustrating 13 male and 13 female Caucasian facial type alterations, representing different skeletal and dentoalveolar components of sagittal maxillary-mandibular relationships, were used to estimate facial profile attractiveness. The examinees graded the profiles on a 0 to 10 numerical rating scale. They graded profiles of their own sex from a social perspective only, whereas opposite-sex profiles were graded separately from both the social and the emotional perspective. The perception of facial esthetics was found to be related to the gender, age, and psychosocial context of evaluation (p < 0.05). The profiles most attractive to men were the orthognathic female profile from the social perspective and the moderate bialveolar protrusion from the emotional perspective. The profiles most attractive to women were the orthognathic male profile when graded from the social aspect and the mild bialveolar retrusion when graded from the emotional aspect. Older assessors gave higher attractiveness grades. When planning treatment that modifies the facial profile, the clinician should bear in mind that the perception of facial profile esthetics is a complex phenomenon influenced by biopsychosocial factors. This study allows a better understanding of the perception of facial esthetics as a concept that encompasses gender, age, and psychosocial context. © 2013 Wiley Periodicals, Inc.

  13. Surgical management of first branchial cleft anomaly presenting as infected retroauricular mass using a microscopic dissection technique.

    PubMed

    Chan, Kai-Chieh; Chao, Wei-Chieh; Wu, Che-Ming

    2012-01-01

    This is a detailed description of the clinical and anatomical presentation of first branchial cleft anomalies presenting as an infected retroauricular mass. Our experience with a microscopic dissection technique with control of the sinus lumen from within the cyst is also described. Between 2001 and 2008, patients with a final histologic diagnosis of first branchial cleft anomaly in the retroauricular area were managed with this technique. Classifications were done in accordance with Work, Olsen, and Chilla. Outcomes were measured as a function of disease recurrence and complications, including facial nerve function. Eight patients with a mean age of 14.2 years were enrolled, including 4 females and 4 males. There were 4 type 1 and 4 type 2 lesions per the Work and Chilla classifications, and 5 sinuses, 2 fistulae, and 1 cyst according to Olsen's classification. All patients presented to the department with acute infection at the time of diagnosis. Five of the 8 patients had previous surgical treatment; 2 of those had up to 3 previous operations. With more than 1 year of follow-up, no patient had disease recurrence or surgery-related complications (facial nerve paresis or paralysis, infection, canal stenosis) requiring reoperation. First branchial cleft anomalies presenting as an infected retroauricular mass can be effectively treated by adopting a microscopic dissection technique with control of the sinus lumen from within the cyst. Copyright © 2012 Elsevier Inc. All rights reserved.

  14. The Evolution of Complex Microsurgical Midface Reconstruction: A Classification Scheme and Reconstructive Algorithm.

    PubMed

    Alam, Daniel; Ali, Yaseen; Klem, Christopher; Coventry, Daniel

    2016-11-01

    Orbito-malar reconstruction after oncological resection represents one of the most challenging facial reconstructive procedures. Until the last few decades, rehabilitation was typically prosthesis based, with a limited role for surgery. The advent of microsurgical techniques allowed large-volume tissue reconstitution from a distant donor site, revolutionizing the potential approaches to these defects. The authors report a novel surgery-based algorithm and classification scheme for complete midface reconstruction, founded on the Gillies principle of like-with-like reconstruction and making significant use of computer-aided virtual planning. With this approach, the authors have been able to achieve significantly better patient outcomes. Copyright © 2016 Elsevier Inc. All rights reserved.

  15. Unusual complication after genioplasty.

    PubMed

    Avelar, Rafael Linard; Sá, Carlos Diego Lopes; Esses, Diego Felipe Silveira; Becker, Otávio Emmel; Soares, Eduardo Costa Studart; de Oliveira, Rogerio Belle

    2014-01-01

    Facial beauty depends on shape, proportion, and harmony between the facial thirds. The chin is one of the most important components of the inferior third and plays an important role in defining facial aesthetics and harmony in both frontal and lateral views. There are two principal therapeutic approaches to mental deformities: alloplastic implants and basilar mental ostectomy, also known as genioplasty. The latter is more commonly used because of its great versatility in correcting three-dimensional deformities of the chin and its lower rate of postoperative complications. Possible transoperative and postoperative complications of genioplasty include mental nerve lesion, bleeding, damage to tooth roots, bone resorption of the mobilized segment, mandibular fracture, ptosis of the lower lip, and failure to stabilize the ostectomized segment. This study presents 2 cases of displacement of the osteotomized segment after genioplasty, caused by facial trauma during the postoperative period of orthognathic surgery and followed by rare complications not previously reported in the literature.

  16. Association between recovery from Bell's palsy and body mass index.

    PubMed

    Choi, S A; Shim, H S; Jung, J Y; Kim, H J; Kim, S H; Byun, J Y; Park, M S; Yeo, S G

    2017-06-01

    Although many factors have been found to be involved in recovery from Bell's palsy, no study has investigated the association between recovery from Bell's palsy and obesity. This study therefore evaluated the association between recovery from Bell's palsy and body mass index (BMI). Subjects were classified into five groups based on BMI (kg/m²). Demographic and clinical characteristics were compared among these groups. Assessed factors included sex, age, time from paralysis to visiting a hospital, the presence of comorbidities such as diabetes mellitus and hypertension, degree of initial facial nerve paralysis by House-Brackmann (H-B) grade and neurophysiological testing, and final recovery rate. Based on BMI, 37 subjects were classified as underweight, 169 as normal weight, 140 as overweight, 155 as obese and 42 as severely obese. Classification of the degree of initial facial nerve paralysis as moderate or severe, according to H-B grade and electroneurography, showed no difference in severity of initial facial paralysis among the five groups (P > 0.05). However, the final recovery rate was significantly higher in the normal weight than in the underweight or obese group (P < 0.05). Obesity or underweight had no effect on the severity of initial facial paralysis, but the final recovery rate was lower in the obese and underweight groups than in the normal group. © 2016 John Wiley & Sons Ltd.
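
    The abstract does not state the cutoffs used for the five BMI bands; a small sketch assuming WHO-style thresholds for illustration (Python):

    ```python
    # Hypothetical BMI grouping; the study's actual cutoffs may differ,
    # particularly if Asian-population criteria were used.
    def bmi_group(weight_kg: float, height_m: float) -> str:
        bmi = weight_kg / height_m ** 2
        if bmi < 18.5:
            return "underweight"
        if bmi < 25.0:
            return "normal weight"
        if bmi < 30.0:
            return "overweight"
        if bmi < 35.0:
            return "obese"
        return "severely obese"

    print(bmi_group(70.0, 1.75))  # -> "normal weight" (BMI about 22.9)
    ```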

  17. Microsurgical Resection of Glomus Jugulare Tumors With Facial Nerve Reconstruction: 3-Dimensional Operative Video.

    PubMed

    Cândido, Duarte N C; de Oliveira, Jean Gonçalves; Borba, Luis A B

    2018-05-08

    Paragangliomas are tumors originating from the paraganglionic system (autonomic nervous system), mostly found in the region around the jugular bulb, for which reason they are also termed glomus jugulare tumors (GJT). Although these lesions appear to be histologically benign, clinically they present with great morbidity, especially due to invasion of nearby structures such as the lower cranial nerves. These are challenging tumors, as they require complex approaches and great knowledge of the skull base. We present the case of a 31-year-old woman, operated on by the senior author, with a 1-year history of tinnitus, vertigo, and progressive hearing loss, who developed facial nerve palsy (House-Brackmann grade IV) 2 months before surgery. Magnetic resonance imaging and computed tomography scans demonstrated a typical lesion with intense flow voids at the jugular foramen region, with invasion of the petrous and tympanic bone, carotid canal, and middle ear, and extension to the infratemporal fossa (type C2 of Fisch's classification for GJT). During the procedure, the mastoid part of the facial nerve was found to be involved by tumor and needed to be resected. We also describe the technique for nerve reconstruction, using an interposition graft from the great auricular nerve harvested at the beginning of the surgery. We achieved total tumor resection with a remarkable postoperative course, and facial nerve function had recovered by 6 months. The patient consented to publication of her images.

  18. Exploring the color feature power for psoriasis risk stratification and classification: A data mining paradigm.

    PubMed

    Shrivastava, Vimal K; Londhe, Narendra D; Sonawane, Rajendra S; Suri, Jasjit S

    2015-10-01

    A large percentage of the dermatologist's decision in psoriasis disease assessment is based on color, yet current computer-aided diagnosis systems for psoriasis risk stratification and classification make little use of the color paradigm. This paper presents an automated psoriasis computer-aided diagnosis (pCAD) system for classification of psoriasis skin images into psoriatic lesion and healthy skin, which addresses two major challenges: (i) fulfilling the color feature requirements and (ii) selecting the powerful dominant color features while retaining high classification accuracy. Fourteen color spaces are explored for psoriasis disease analysis, yielding 86 color features. The pCAD system is implemented in a support-vector-based machine learning framework in which an offline image data set is used to compute the offline color machine-learning parameters; these are then used to transform the online color features and predict class labels for healthy vs. diseased cases. This paradigm uses principal component analysis to select dominant color features while keeping the original color features unaltered. Using a cross-validation protocol, this machine learning approach is compared against standalone grayscale features (60 features) and against the combined grayscale and color feature set (146 features). Using a fixed data size of 540 images with equal numbers of healthy and diseased cases, a 10-fold cross-validation protocol, and an SVM with a polynomial kernel of degree 2, the pCAD system shows an accuracy of 99.94%, with sensitivity and specificity of 99.93% and 99.96%. Using a varying-data-size protocol, the mean classification accuracies for the color, grayscale, and combined scenarios are 92.85%, 93.83%, and 93.99%, respectively, and the corresponding reliabilities are 94.42%, 97.39%, and 96.00%. We conclude that the pCAD system using color space alone is comparable to grayscale space or to combined color and grayscale spaces. We validated the pCAD system against facial color databases, and the results are consistent in accuracy and reliability. Copyright © 2015 Elsevier Ltd. All rights reserved.
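
    The evaluation protocol described above can be sketched with standard tooling (Python; synthetic data stand in for the 86 color features, and a PCA-transform pipeline is substituted for the paper's PCA-based feature ranking, which keeps the original features unaltered):

    ```python
    # A hedged sketch: PCA followed by an SVM with a polynomial kernel of
    # degree 2, scored with 10-fold cross-validation on 540 samples.
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(3)
    X = rng.normal(size=(540, 86))          # 540 images x 86 color features
    y = np.repeat([0, 1], 270)              # healthy vs. psoriatic lesion
    X[y == 1, :10] += 0.8                   # inject a weak synthetic class signal

    pipe = make_pipeline(StandardScaler(), PCA(n_components=20),
                         SVC(kernel="poly", degree=2))
    scores = cross_val_score(pipe, X, y, cv=10)
    print(f"10-fold CV accuracy: {scores.mean():.3f}")
    ```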

  19. The Effect of Secure Attachment State and Infant Facial Expressions on Childless Adults' Parental Motivation.

    PubMed

    Ding, Fangyuan; Zhang, Dajun; Cheng, Gang

    2016-01-01

    This study examined the association between infant facial expressions and parental motivation, as well as the interaction between attachment state and expressions. Two hundred eighteen childless adults (M age = 19.22; 118 males, 100 females) were recruited. Participants completed the Chinese version of the State Adult Attachment Measure and an E-prime test comprising three components: (a) liking, the specific hedonic experience in reaction to laughing, neutral, and crying infant faces; (b) representational responding, actively seeking infant faces with specific expressions; and (c) evoked responding, actively retaining images of three different infant facial expressions. While the first component refers to the "liking" of infants, the second and third components entail the "wanting" of an infant. Random-intercept multilevel models with emotion nested within participants revealed a significant interaction between secure attachment state and emotion on both liking and representational responding. A hierarchical regression analysis was conducted to examine the unique contribution of secure attachment state. After controlling for sex and for anxious and avoidant attachment, secure attachment state positively predicted parental motivation (liking and wanting) in the neutral and crying conditions, but not in the laughing condition. These findings demonstrate the significant role of secure attachment state in parental motivation, specifically when infants display uncertain and negative emotions.

  20. Fixation to features and neural processing of facial expressions in a gender discrimination task.

    PubMed

    Neath, Karly N; Itier, Roxane J

    2015-10-01

    Early face encoding, as reflected by the N170 ERP component, is sensitive to fixation to the eyes. We investigated whether this sensitivity varies with facial expressions of emotion and whether it can also be seen on other ERP components such as the P1 and EPN. Using eye-tracking to manipulate fixation on facial features, we found the N170 to be the only eye-sensitive component, and this was true for fearful, happy, and neutral faces. A different effect of fixation to features was seen for the earlier P1, which likely reflected a general sensitivity to face position. An early effect of emotion (∼120 ms) for happy faces was seen at occipital sites and was sustained until ∼350 ms post-stimulus. For fearful faces, an early effect was seen around 80 ms, followed by a later effect appearing at ∼150 ms and lasting until ∼300 ms at lateral posterior sites. Results suggest that in this emotion-irrelevant gender discrimination task, processing of fearful and happy expressions occurred early and largely independently of the eye sensitivity indexed by the N170. Processing of the two emotions involved different underlying brain networks active at different times. Copyright © 2015 Elsevier Inc. All rights reserved.

  1. Effects of noninvasive facial nerve stimulation in the dog middle cerebral artery occlusion model of ischemic stroke.

    PubMed

    Borsody, Mark K; Yamada, Chisa; Bielawski, Dawn; Heaton, Tamara; Castro Prado, Fernando; Garcia, Andrea; Azpiroz, Joaquín; Sacristan, Emilio

    2014-04-01

    Facial nerve stimulation has been proposed as a new treatment of ischemic stroke because autonomic components of the nerve dilate cerebral arteries and increase cerebral blood flow when activated. A noninvasive facial nerve stimulator device based on pulsed magnetic stimulation was tested in a dog middle cerebral artery occlusion model. We used an ischemic stroke dog model involving injection of autologous blood clot into the internal carotid artery that reliably embolizes to the middle cerebral artery. Thirty minutes after middle cerebral artery occlusion, the geniculate ganglion region of the facial nerve was stimulated for 5 minutes. Brain perfusion was measured using gadolinium-enhanced contrast MRI, and ATP and total phosphate levels were measured using 31P spectroscopy. Separately, a dog model of brain hemorrhage involving puncture of the intracranial internal carotid artery served as an initial examination of facial nerve stimulation safety. Facial nerve stimulation caused a significant improvement in perfusion in the hemisphere affected by ischemic stroke and a reduction in ischemic core volume in comparison to sham stimulation control. The ATP/total phosphate ratio showed a large decrease poststroke in the control group versus a normal level in the stimulation group. The same stimulation administered to dogs with brain hemorrhage did not cause hematoma enlargement. These results support the development and evaluation of a noninvasive facial nerve stimulator device as a treatment of ischemic stroke.

  2. Facial transplantation: A concise update

    PubMed Central

    Barrera-Pulido, Fernando; Gomez-Cia, Tomas; Sicilia-Castro, Domingo; Garcia-Perla-Garcia, Alberto; Gacto-Sanchez, Purificacion; Hernandez-Guisado, Jose-Maria; Lagares-Borrego, Araceli; Narros-Gimenez, Rocio; Gonzalez-Padilla, Juan D.

    2013-01-01

    Objectives: To update the clinical results obtained by the first facial transplantation teams worldwide and to review the literature concerning the main surgical, immunological, ethical, and follow-up aspects described in facially transplanted patients. Study design: MEDLINE search of articles published on "face transplantation" until March 2012. Results: Eighteen clinical cases were studied. The mean patient age was 37.5 years, with a higher prevalence of men. The main surgical indication was gunshot injury (6 patients). All patients had previously undergone multiple conventional surgical reconstructive procedures, which had failed. Altogether, 8 transplant teams from 4 countries participated. Thirteen partial and 5 full face transplantations have been performed. Allografts varied according to the facial anatomical components and the amount of skin, muscle, bone, and other tissue included, though all were grafted successfully and remained viable without significant postoperative surgical complications. The longest patient follow-up was 5 years. Two patients died 2 and 27 months after transplantation. Conclusions: Clinical experience has demonstrated the feasibility of facial transplantation as a valuable reconstructive option, but it is still considered an experimental procedure with unresolved issues. Results show that from a clinical, technical, and immunological standpoint, facial transplantation has achieved functional, aesthetic, and social rehabilitation in severely disfigured patients. Key words: Face transplantation, composite tissue transplantation, face allograft, facial reconstruction, outcomes and complications of face transplantation. PMID:23229268

  3. Neural processing of fearful and happy facial expressions during emotion-relevant and emotion-irrelevant tasks: a fixation-to-feature approach

    PubMed Central

    Neath-Tavares, Karly N.; Itier, Roxane J.

    2017-01-01

    Research suggests an important role of the eyes and mouth for discriminating facial expressions of emotion. A gaze-contingent procedure was used to test the impact of fixation to facial features on the neural response to fearful, happy and neutral facial expressions in an emotion discrimination (Exp. 1) and an oddball detection (Exp. 2) task. The N170 was the only eye-sensitive ERP component, and this sensitivity did not vary across facial expressions. In both tasks, compared to neutral faces, responses to happy expressions were seen as early as 100–120 ms occipitally, while responses to fearful expressions started around 150 ms, on or after the N170, at both occipital and lateral-posterior sites. Analyses of scalp topographies revealed different distributions of these two emotion effects across most of the epoch. Emotion processing interacted with fixation location at different times between tasks. Results suggest a role of both the eyes and mouth in the neural processing of fearful expressions and of the mouth in the processing of happy expressions, before 350 ms. PMID:27430934

  4. Sutural growth restriction and modern human facial evolution: an experimental study in a pig model

    PubMed Central

    Holton, Nathan E; Franciscus, Robert G; Nieves, Mary Ann; Marshall, Steven D; Reimer, Steven B; Southard, Thomas E; Keller, John C; Maddux, Scott D

    2010-01-01

    Facial size reduction and facial retraction are key features that distinguish modern humans from archaic Homo. In order to more fully understand the emergence of modern human craniofacial form, it is necessary to understand the underlying evolutionary basis for these defining characteristics. Although it is well established that the cranial base exerts considerable influence on the evolutionary and ontogenetic development of facial form, less emphasis has been placed on developmental factors intrinsic to the facial skeleton proper. The present analysis was designed to assess anteroposterior facial reduction in a pig model and to examine the potential role that this dynamic has played in the evolution of modern human facial form. Ten female sibship cohorts, each consisting of three individuals, were used; within each cohort, individuals were allocated to one of three groups. In the experimental group (n = 10), microplates were affixed bilaterally across the zygomaticomaxillary and frontonasomaxillary sutures at 2 months of age. The sham group (n = 10) received only screw implantation, and the controls (n = 10) underwent no surgery. Following 4 months of post-surgical growth, we assessed variation in facial form using linear measurements and principal components analysis of Procrustes scaled landmarks. There were no differences between the control and sham groups; however, the experimental group exhibited a highly significant reduction in facial projection and overall size. These changes were associated with significant differences in the infraorbital region of the experimental group, including the presence of an infraorbital depression and an inferiorly and coronally oriented infraorbital plane, in contrast to the flat, superiorly and sagittally oriented infraorbital plane of the control and sham groups. These altered configurations are markedly similar to important additional facial features that differentiate modern humans from archaic Homo, and suggest that facial length restriction via rigid plate fixation is a potentially useful model for assessing the developmental factors that underlie changing patterns in craniofacial form associated with the emergence of modern humans. PMID:19929910

  5. Facial anthropometric differences among gender, ethnicity, and age groups.

    PubMed

    Zhuang, Ziqing; Landsittel, Douglas; Benson, Stacey; Roberge, Raymond; Shaffer, Ronald

    2010-06-01

    The impact of race/ethnicity on facial anthropometric data in the US workforce, and its implications for the development of personal protective equipment, has not been investigated to any significant degree. The proliferation of minority populations in the US workforce has increased the need to investigate differences in facial dimensions among these workers. The objective of this study was to determine face shape and size differences among race and age groups from the National Institute for Occupational Safety and Health survey of 3997 US civilian workers. Survey participants were divided into two gender groups, four racial/ethnic groups, and three age groups. Measurements of height, weight, neck circumference, and 18 facial dimensions were collected using traditional anthropometric techniques. A multivariate analysis of the data was performed using Principal Component Analysis. An exploratory analysis assessed, via a linear model, the effect of different demographic factors on anthropometric features. The 21 anthropometric measurements, body mass index, and the first and second principal component scores were dependent variables, while gender, ethnicity, age, occupation, weight, and height served as independent variables. Gender contributes significantly to size for 19 of 24 dependent variables. African-Americans have statistically shorter, wider, and shallower noses than Caucasians. Hispanic workers have 14 facial features that are significantly larger than those of Caucasians, while their nose protrusion, height, and head length are significantly shorter. The remaining ethnic group was composed primarily of Asian subjects and has statistically different dimensions from Caucasians for 16 anthropometric values. Nineteen anthropometric values for subjects at least 45 years of age are statistically different from those measured for subjects between 18 and 29 years of age. Workers employed in manufacturing, fire fighting, healthcare, law enforcement, and other occupational groups have facial features that differ significantly from those of workers in construction. Statistically significant differences in facial anthropometric dimensions (P < 0.05) were noted between males and females, among all racial/ethnic groups, and between subjects at least 45 years old and workers between 18 and 29 years of age. These findings could be important to the design and manufacture of respirators, as well as to employers responsible for supplying respiratory protective equipment to their employees.
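
    The exploratory linear model described above, an anthropometric dimension regressed on demographic factors and body size, can be sketched as follows (Python; the data frame is fabricated and the variable names are assumptions, not the NIOSH survey's actual fields):

    ```python
    # A minimal sketch of an OLS model with categorical demographic factors.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(4)
    n = 300
    df = pd.DataFrame({
        "face_width": rng.normal(140, 6, n),     # mm, synthetic outcome
        "gender": rng.choice(["male", "female"], n),
        "ethnicity": rng.choice(["caucasian", "african_american",
                                 "hispanic", "other"], n),
        "age": rng.integers(18, 66, n),
        "height": rng.normal(170, 9, n),
        "weight": rng.normal(78, 12, n),
    })

    model = smf.ols("face_width ~ C(gender) + C(ethnicity) + age + height + weight",
                    data=df).fit()
    print(model.summary().tables[1])  # per-factor coefficients and p-values
    ```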

  6. Treatment of Temporal Bone Fractures

    PubMed Central

    Diaz, Rodney C.; Cervenka, Brian; Brodie, Hilary A.

    2016-01-01

    Traumatic injury to the temporal bone can lead to significant morbidity or mortality and knowledge of the pertinent anatomy, pathophysiology of injury, and appropriate management strategies is critical for successful recovery and rehabilitation of such injured patients. Most temporal bone fractures are caused by motor vehicle accidents. Temporal bone fractures are best classified as either otic capsule sparing or otic capsule disrupting-type fractures, as such classification correlates well with risk of concomitant functional complications. The most common complications of temporal bone fractures are facial nerve injury, cerebrospinal fluid (CSF) leak, and hearing loss. Assessment of facial nerve function as soon as possible following injury greatly facilitates clinical decision making. Use of prophylactic antibiotics in the setting of CSF leak is controversial; however, following critical analysis and interpretation of the existing classic and contemporary literature, we believe its use is absolutely warranted. PMID:27648399

  7. Treatment of Temporal Bone Fractures.

    PubMed

    Diaz, Rodney C; Cervenka, Brian; Brodie, Hilary A

    2016-10-01

    Traumatic injury to the temporal bone can lead to significant morbidity or mortality and knowledge of the pertinent anatomy, pathophysiology of injury, and appropriate management strategies is critical for successful recovery and rehabilitation of such injured patients. Most temporal bone fractures are caused by motor vehicle accidents. Temporal bone fractures are best classified as either otic capsule sparing or otic capsule disrupting-type fractures, as such classification correlates well with risk of concomitant functional complications. The most common complications of temporal bone fractures are facial nerve injury, cerebrospinal fluid (CSF) leak, and hearing loss. Assessment of facial nerve function as soon as possible following injury greatly facilitates clinical decision making. Use of prophylactic antibiotics in the setting of CSF leak is controversial; however, following critical analysis and interpretation of the existing classic and contemporary literature, we believe its use is absolutely warranted.

  8. Emotion-independent face recognition

    NASA Astrophysics Data System (ADS)

    De Silva, Liyanage C.; Esther, Kho G. P.

    2000-12-01

    Current face recognition techniques tend to work well when recognizing faces under small variations in lighting, facial expression and pose, but deteriorate under more extreme conditions. In this paper, a face recognition system to recognize faces of known individuals, despite variations in facial expression due to different emotions, is developed. The eigenface approach is used for feature extraction. Classification methods include Euclidean distance, a back-propagation neural network and a generalized regression neural network. These methods yield 100% recognition accuracy when the training database is representative, containing for each person one image of the peak expression of each emotion in addition to the neutral expression. The feature vectors used for comparison in the Euclidean distance method, and for training the neural networks, must comprise all the feature vectors of the training set. These results were obtained for a face database consisting of only four persons.
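
    The eigenface-plus-Euclidean-distance pipeline described above can be sketched in a few lines (Python; random arrays stand in for a real face database, and the tiny sizes are for illustration only):

    ```python
    # A minimal sketch: PCA on flattened face images ("eigenfaces") for
    # feature extraction, then nearest-neighbour classification by Euclidean
    # distance in eigenface space.
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(5)
    train = rng.normal(size=(24, 64 * 64))    # e.g. 4 persons x 6 expressions
    labels = np.repeat(np.arange(4), 6)       # person identity per image
    probe = train[7] + rng.normal(0, 0.1, 64 * 64)  # noisy image of person 1

    pca = PCA(n_components=10).fit(train)     # the eigenfaces
    train_feats = pca.transform(train)
    probe_feat = pca.transform(probe[None, :])

    dists = np.linalg.norm(train_feats - probe_feat, axis=1)  # Euclidean distance
    print("predicted person:", labels[np.argmin(dists)])      # expected: 1
    ```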

  9. The superficial temporal fat pad and its ramifications for temporalis muscle construction in facial approximation.

    PubMed

    Stephan, Carl N; Devine, Matthew

    2009-10-30

    The construction of the facial muscles (particularly those of mastication) is generally thought to enhance the accuracy of facial approximation methods because it increases the attention paid to face anatomy. However, the lack of consideration for non-muscular structures of the face in these "anatomical" methods ironically forces one of the two large masticatory muscles to be exaggerated beyond reality. To demonstrate and resolve this issue, the temporal regions of nineteen caucasoid human cadavers (10 females, 9 males; mean age = 84 years, s = 9 years, range = 58-97 years) were investigated. Soft tissue depths were measured at regular intervals across the temporal fossa in 10 cadavers, and the thicknesses of the muscle and fat components were quantified in the other nine. The measurements indicated that the temporalis muscle generally accounts for <50% of the total soft tissue depth and does not fill the entirety of the fossa (as generally acknowledged in the anatomical literature, but not as followed in facial approximation practice). In addition, a soft tissue bulge was consistently observed in the anteroinferior portion of the temporal fossa (as is also evident in younger individuals), and during dissection this bulge was found to correspond closely to the superficial temporal fat pad (STFP). Thus, the facial surface does not follow the simple undulating curve of the temporalis muscle currently assumed in facial approximation methods. New metric-based facial approximation guidelines are presented to facilitate accurate construction of the STFP and the temporalis muscle in future facial approximation casework. This study warrants further investigation of the temporalis muscle and the STFP in younger age groups and demonstrates that untested facial approximation guidelines, including those propounded to be anatomical, should be regarded cautiously.

  10. Pediatric Temporal Bone Fractures: A 10-Year Experience.

    PubMed

    Wexler, Sonya; Poletto, Erica; Chennupati, Sri Kiran

    2017-11-01

    The aim of the study was to compare the traditional and newer temporal bone fracture classification systems and their reliability in predicting the serious outcomes of hearing loss and facial nerve (FN) injury. We queried the medical record database for hospital visits from 2002 to 2013 related to the search term temporal. A total of 1144 records were identified, and of these, 46 records with documented temporal bone fractures were reviewed for patient age, etiology and classification of the temporal bone fracture, FN examination, and hearing status. Radiology images were available for 38 of these patients, aged 10 months to 16 years, comprising 40 temporal bone fractures for which the otolaryngology service was consulted. Twenty fractures (50.0%) were classified as longitudinal, 5 (12.5%) as transverse, and 15 (37.5%) as mixed. Using the otic capsule sparing (OCS)/violating nomenclature, 32 (80.0%) of the fractures were classified as OCS, 2 (5.0%) as otic capsule violating (OCV), and 6 (15.0%) could not be classified using this system. The otic capsule was involved in 1 (5%) of the longitudinal fractures, none of the transverse fractures, and 1 (6.7%) of the mixed fractures. Sensorineural hearing loss (SNHL) was found in only 2 fractures (5.0%) and conductive hearing loss (CHL) in 6 fractures (15.0%). Two fractures (5.0%) were associated with ipsilateral facial palsy, but no fracture was visualized through the course of the FN canal. Neither the longitudinal/transverse/mixed nor the OCS/OCV classification was a predictor of SNHL, CHL, or FN involvement by Fisher exact statistical analysis (for SNHL: P = 0.37 vs 0.16; for CHL: P = 0.71 vs 0.33; for FN: P = 0.62 vs 0.94, respectively). In this large pediatric series, neither classification system was predictive of SNHL, CHL, or FN palsy. A more robust database of audiologic results would be helpful in demonstrating this relationship.
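
    The statistical test reported above operates on a 2x2 table of classification against outcome; a short sketch (Python; the counts below are illustrative, not the study's data):

    ```python
    # Fisher exact test on a hypothetical 2x2 table of fracture class vs.
    # facial nerve palsy.
    from scipy.stats import fisher_exact

    #        palsy present, palsy absent
    table = [[1, 31],   # otic capsule sparing (hypothetical counts)
             [1, 1]]    # otic capsule violating
    odds_ratio, p_value = fisher_exact(table)
    print(f"odds ratio = {odds_ratio:.2f}, P = {p_value:.3f}")
    ```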

  11. Age and sex-related differences in 431 pediatric facial fractures at a level 1 trauma center.

    PubMed

    Hoppe, Ian C; Kordahi, Anthony M; Paik, Angie M; Lee, Edward S; Granick, Mark S

    2014-10-01

    Knowledge of age- and sex-related changes in the pattern of fractures and concomitant injuries observed in this patient population is helpful in understanding craniofacial development and the treatment of these unique injuries. The goal of this study was to examine all facial fractures occurring in a child and adolescent population (age 18 or less) at a trauma center to determine any age- or sex-related variability in fracture patterns and concomitant injuries. All facial fractures treated at a trauma center over a 12-year period were collected based on International Classification of Diseases, Ninth Revision (ICD-9) codes. The sample was delimited to patients 18 years of age or younger. Age, sex, mechanism, and fracture types were collected and analyzed. During this period, 3147 patients with facial fractures were treated at our institution, 353 of whom were children and adolescents. Upon further review, 68 patients were excluded due to insufficient data, leaving 285 patients with a total of 431 fractures. The most common etiology of injury was assault for males and motor vehicle accidents (MVA) for females. The most common fracture was of the mandible in males and of the orbit in females. The most common etiologies in younger age groups were falls and pedestrians struck; older age groups exhibited a higher incidence of assault-related injuries. Younger age groups showed a propensity for orbital fractures, whereas mandibular fractures predominated in older age groups. Intracranial hemorrhage was the most common concomitant injury across most age groups. The differences noted in etiology of injury, fracture patterns, and concomitant injuries between sexes and age groups likely reflect the differing activities in which each group predominantly engages. In addition, the growing facial skeleton offers varying degrees of protection to the cranial contents as force-absorbing mechanisms develop. Copyright © 2014 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.

  12. A greater decline in female facial attractiveness during middle age reflects women’s loss of reproductive value

    PubMed Central

    Maestripieri, Dario; Klimczuk, Amanda C. E.; Traficonte, Daniel M.; Wilson, M. Claire

    2014-01-01

    Facial attractiveness represents an important component of an individual’s overall attractiveness as a potential mating partner. Perceptions of facial attractiveness are expected to vary with age-related changes in health, reproductive value, and power. In this study, we investigated perceptions of facial attractiveness, power, and personality in two groups of women of pre- and post-menopausal ages (35–50 years and 51–65 years, respectively) and two corresponding groups of men. We tested three hypotheses: (1) that perceived facial attractiveness would be lower for older than for younger men and women; (2) that the age-related reduction in facial attractiveness would be greater for women than for men; and (3) that for men, there would be a larger increase in perceived power at older ages. Eighty facial stimuli were rated by 60 (30 male, 30 female) middle-aged women and men using online surveys. Our three main hypotheses were supported by the data. Consistent with sex differences in mating strategies, the greater age-related decline in female facial attractiveness was driven by male respondents, while the greater age-related increase in male perceived power was driven by female respondents. In addition, we found evidence that some personality ratings were correlated with perceived attractiveness and power ratings. The results of this study are consistent with evolutionary theory and with previous research showing that faces can provide important information about characteristics that men and women value in a potential mating partner such as their health, reproductive value, and power or possession of resources. PMID:24592253

  13. Event-related theta synchronization predicts deficit in facial affect recognition in schizophrenia.

    PubMed

    Csukly, Gábor; Stefanics, Gábor; Komlósi, Sarolta; Czigler, István; Czobor, Pál

    2014-02-01

    Growing evidence suggests that abnormalities in the synchronized oscillatory activity of neurons in schizophrenia may lead to impaired neural activation and temporal coding, and thus to neurocognitive dysfunctions such as deficits in facial affect recognition. To gain insight into the neurobiological processes linked to facial affect recognition, we investigated both induced and evoked oscillatory activity by calculating the Event-Related Spectral Perturbation (ERSP) and the Inter-Trial Coherence (ITC) during facial affect recognition. Fearful and neutral faces as well as nonface patches were presented to 24 patients with schizophrenia and 24 matched healthy controls while EEG was recorded. The participants' task was to recognize facial expressions. Because previous findings with healthy controls showed that facial feature decoding was associated primarily with oscillatory activity in the theta band, we analyzed ERSP and ITC in this frequency band in the time interval of 140-200 ms, which corresponds to the N170 component. Event-related theta activity and phase-locking to facial expressions, but not to nonface patches, predicted emotion recognition performance in both controls and patients. Event-related changes in theta amplitude and phase-locking were significantly weaker in patients than in healthy controls, in line with previous investigations showing decreased neural synchronization in the low frequency bands in patients with schizophrenia. Neural synchrony is thought to underlie distributed information processing. Our results indicate less effective functioning of the recognition process for facial features, which may contribute to less effective social cognition in schizophrenia. PsycINFO Database Record (c) 2014 APA, all rights reserved.
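
    The ITC measure used above has a compact definition: the length of the mean resultant vector of single-trial phases at a given time-frequency point. An illustrative computation (Python; the phases are simulated, whereas in practice they come from a time-frequency decomposition, e.g. wavelets, of each EEG trial):

    ```python
    # ITC = |mean of unit phase vectors|; 0 = no phase locking, 1 = perfect.
    import numpy as np

    rng = np.random.default_rng(6)
    n_trials = 100

    # Simulated theta-band phases: tightly clustered vs. uniformly random.
    locked = rng.vonmises(mu=0.0, kappa=5.0, size=n_trials)
    random_phases = rng.uniform(-np.pi, np.pi, size=n_trials)

    def itc(phases):
        """Inter-trial coherence of a set of phase angles (radians)."""
        return np.abs(np.exp(1j * phases).mean())

    print(f"ITC, phase-locked trials: {itc(locked):.2f}")
    print(f"ITC, random trials:       {itc(random_phases):.2f}")
    ```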

  14. Functionally dissociated aspects in anterior and posterior electrocortical processing of facial threat.

    PubMed

    Schutter, Dennis J L G; de Haan, Edward H F; van Honk, Jack

    2004-06-01

    The angry facial expression is an important socially threatening stimulus argued to have evolved to regulate social hierarchies. In the present study, event-related potentials (ERP) were used to investigate the involvement and temporal dynamics of the frontal and parietal regions in the processing of angry facial expressions. Angry, happy and neutral faces were shown to eighteen healthy right-handed volunteers in a passive viewing task. Stimulus-locked ERPs were recorded from the frontal and parietal scalp sites. The P200, N300 and early contingent negativity variation (eCNV) components of the electric brain potentials were investigated. Analyses revealed statistically significant reductions in P200 amplitudes for the angry facial expression at both frontal and parietal electrode sites. Furthermore, apart from being strongly associated with the anterior P200, the N300 was also more negative for the angry facial expression in the anterior regions. Finally, the eCNV was more pronounced over the parietal sites for the angry facial expressions. The present study demonstrated specific electrocortical correlates underlying the processing of angry facial expressions in the anterior and posterior brain sectors. The P200 is argued to indicate valence tagging by a fast and early detection mechanism. The lowered N300 with an anterior distribution for the angry facial expressions indicates more elaborate evaluation of stimulus relevance. The fact that the P200 and the N300 are highly correlated suggests that they reflect different stages of the same anterior evaluation mechanism. The more pronounced posterior eCNV suggests sustained attention to socially threatening information. Copyright 2004 Elsevier B.V.

  15. A greater decline in female facial attractiveness during middle age reflects women's loss of reproductive value.

    PubMed

    Maestripieri, Dario; Klimczuk, Amanda C E; Traficonte, Daniel M; Wilson, M Claire

    2014-01-01

    Facial attractiveness represents an important component of an individual's overall attractiveness as a potential mating partner. Perceptions of facial attractiveness are expected to vary with age-related changes in health, reproductive value, and power. In this study, we investigated perceptions of facial attractiveness, power, and personality in two groups of women of pre- and post-menopausal ages (35-50 years and 51-65 years, respectively) and two corresponding groups of men. We tested three hypotheses: (1) that perceived facial attractiveness would be lower for older than for younger men and women; (2) that the age-related reduction in facial attractiveness would be greater for women than for men; and (3) that for men, there would be a larger increase in perceived power at older ages. Eighty facial stimuli were rated by 60 (30 male, 30 female) middle-aged women and men using online surveys. Our three main hypotheses were supported by the data. Consistent with sex differences in mating strategies, the greater age-related decline in female facial attractiveness was driven by male respondents, while the greater age-related increase in male perceived power was driven by female respondents. In addition, we found evidence that some personality ratings were correlated with perceived attractiveness and power ratings. The results of this study are consistent with evolutionary theory and with previous research showing that faces can provide important information about characteristics that men and women value in a potential mating partner such as their health, reproductive value, and power or possession of resources.

  16. AB119. Computer-aided facial recognition of Chinese individuals with 22q11.2 deletion-algorithm training using NIH atlas of human malformation syndromes from diverse population

    PubMed Central

    Mok, Gary Tsz Kin; Chung, Brian Hon-Yin

    2017-01-01

    Background: 22q11.2 deletion syndrome (22q11.2DS) is a common genetic disorder with an estimated frequency of 1/4,000. It is a multi-systemic disorder with high phenotypic variability. Our previous work showed substantial under-diagnosis of 22q11.2DS, as 1 in 10 adult patients with conotruncal defects were found to have 22q11.2DS. The National Institutes of Health (NIH) has created an atlas of human malformation syndromes in diverse populations to provide an easy tool to assist clinicians in diagnosing these syndromes across various populations. In this study, we sought to determine whether training computer-aided facial recognition technology using images of ethnicity-matched patients from the NIH Atlas can improve the detection performance of this technology. Methods: Clinical photographs of 16 Chinese subjects with molecularly confirmed 22q11.2DS, from the NIH atlas and its related publication, were used for training the facial recognition technology. The system automatically localizes hundreds of facial fiducial points and takes measurements. The final classification is based on these measurements, as well as an estimated probability of subjects having 22q11.2DS based on the entire facial image. Clinical photographs of 7 patients with molecularly confirmed 22q11.2DS were obtained with informed consent and used for testing the performance in recognizing facial profiles of Chinese subjects before and after training. Results: All 7 test cases improved in ranking and scoring after the software training. In 4 cases, 22q11.2DS did not appear as a possible syndrome match before the training; however, it appeared within the first 10 syndrome matches after training. Conclusions: The present pilot data show that this technology can be trained to recognize patients with 22q11.2DS. It also highlights the need to collect clinical photographs of patients from diverse populations as resources for training the software, which can improve the performance of computer-aided facial recognition technology.

  17. Postparalysis Facial Synkinesis: Clinical Classification and Surgical Strategies

    PubMed Central

    Chang, Tommy Nai-Jen; Lu, Johnny Chuieng-Yi

    2015-01-01

    Background: Postparalysis facial synkinesis (PPFS) can occur after any cause of facial palsy. Current treatments are still inadequate. Surgical intervention, instead of Botox and rehabilitation only, was proposed for different degrees of PPFS. Methods: Seventy patients (43 females and 27 males) with PPFS were enrolled since 1986. They were divided into 4 patterns based on quality of smile and severity of synkinesis. Data were collected for the various clinical presentations: pattern I (n = 14), good smile but synkinesis; pattern II (n = 17), acceptable smile but dominant synkinesis; pattern III (n = 34), unacceptable smile and dominant synkinesis; and pattern IV (n = 5), poor smile and synkinesis. Surgical interventions were based on the pattern of PPFS. Selective myectomy and some cosmetic procedures were performed for pattern I and II patients. Extensive myectomy and neurectomy of the involved muscles and nerves, followed by functioning free-muscle transplantation for facial reanimation in a 1- or 2-stage procedure, were performed for pattern III and many pattern II patients. A classic 2-stage procedure for facial reanimation was performed for pattern IV patients. Results: Minor aesthetic procedures provided some help to pattern I patients but did not cure the problem. They all had short follow-up. Most patients in patterns II (14/17, 82%) and III (34/34, 100%) showed a significant improvement in eye and smile appearance and a significant decrease in synkinetic movements following the aggressive major surgical intervention. Nearly all of the patients treated by the authors neither needed repeated botulinum toxin A injections nor required an intensive rehabilitation program in the follow-up period. Conclusions: Treatment of PPFS remains a challenging problem. Major surgical reconstruction showed more promising and long-lasting results than botulinum toxin A and/or rehabilitation in pattern III and II patients. PMID:25878931

  18. Optimising ballistic facial coverage from military fragmenting munitions: a consensus statement.

    PubMed

    Breeze, J; Tong, D C; Powers, D; Martin, N A; Monaghan, A M; Evriviades, D; Combes, J; Lawton, G; Taylor, C; Kay, A; Baden, J; Reed, B; MacKenzie, N; Gibbons, A J; Heppell, S; Rickard, R F

    2017-02-01

    VIRTUS is the first United Kingdom (UK) military personal armour system to provide components that are capable of protecting the whole face from low velocity ballistic projectiles. Protection is modular, using a helmet worn with ballistic eyewear, a visor, and a mandibular guard. When all four components are worn together the face is completely covered, but the heat, discomfort, and weight may not be optimal in all types of combat. We organized a Delphi consensus group analysis with 29 military consultant surgeons from the UK, United States, Canada, Australia, and New Zealand to identify a potential hierarchy of functional facial units, in order of importance, that require protection. We identified the causes of those facial injuries that are hardest to reconstruct, and the most effective combinations of facial protection. Protection is required from both penetrating projectiles and burns. There was strong consensus that blunt injury to the facial skeleton was currently not a military priority. Functional units that should be prioritised are the eyes and eyelids, followed consecutively by the nose, lips, and ears. Twenty-nine respondents felt that the visor was more important than the mandibular guard if only one piece was to be worn. Essential cover of the brain and eyes is achieved from all directions using a combination of helmet and visor. Nasal cover currently requires the mandibular guard unless the visor can be modified to cover it as well. Any such prototype would need extensive ergonomic and integration assessment, as any changes would have to be acceptable to the people who wear them in the long term. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.

  19. U-series dating and classification of the Apidima 2 hominin from Mani Peninsula, Southern Greece.

    PubMed

    Bartsiokas, Antonis; Arsuaga, Juan Luis; Aubert, Maxime; Grün, Rainer

    2017-08-01

    Laser ablation U-series dating results on a human cranial bone fragment from Apidima, on the western coast of the Mani Peninsula, Southern Greece, indicate a minimum age of 160,000 years. The dated cranial fragment belongs to Apidima 2, which preserves the facial skeleton and a large part of the braincase, lacking the occipital bone. The morphology of the preserved regions of the cranium, and especially that of the facial skeleton, indicates that the fossil belongs to the Neanderthal clade. The dating of the fossil at a minimum age of 160,000 years shows that most of the Neanderthal traits were already present in MIS 6 and perhaps earlier. This makes Apidima 2 the earliest known fossil with a clear Neanderthal facial morphology. Together with the nearby younger Neanderthal specimens from Lakonis and Kalamakia, the Apidima crania are of crucial importance for the evolution of Neanderthals in the area during the Middle to Late Pleistocene. It can be expected that systematic direct dating of the other human fossils from this area will improve our understanding of Neanderthal evolution and demise. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. The Effects of Institutional Classification and Gender on Faculty Inclusion of Syllabus Components

    ERIC Educational Resources Information Center

    Doolittle, Peter E.; Lusk, Danielle L.

    2007-01-01

    The purpose of this research was to explore the effects that gender and institutional classification have on the inclusion of syllabus components. Course syllabi (N = 350) written by men and women from seven types of institutions, based on Carnegie classification, were sampled and evaluated for the presence of 26 syllabus components. The gender…

  1. The Emotional Modulation of Facial Mimicry: A Kinematic Study.

    PubMed

    Tramacere, Antonella; Ferrari, Pier F; Gentilucci, Maurizio; Giuffrida, Valeria; De Marco, Doriana

    2017-01-01

    It is well-established that the observation of an emotional facial expression induces facial mimicry responses in the observer. However, how the interaction between emotional and motor components of facial expressions can modulate the motor behavior of the perceiver is still unknown. We developed a kinematic experiment to evaluate the effect of different oro-facial expressions on the perceiver's face movements. Participants were asked to perform two movements, i.e., lip stretching and lip protrusion, in response to the observation of four meaningful (i.e., smile, angry-mouth, kiss, and spit) and two meaningless mouth gestures. All the stimuli were characterized by different motor patterns (mouth aperture or mouth closure). Response times and kinematic parameters of the movements (amplitude, duration, and mean velocity) were recorded and analyzed. Results evidenced dissociated effects on reaction times and movement kinematics. We found shorter reaction times when a mouth movement was preceded by the observation of a meaningful and motorically congruent oro-facial gesture, in line with the facial mimicry effect. On the contrary, during execution, the perception of a smile was associated with facilitation, in terms of shorter duration and higher velocity, of the incongruent movement, i.e., lip protrusion. The same effect occurred in response to kiss and spit, which significantly facilitated the execution of lip stretching. We called this phenomenon the facial mimicry reversal effect, understood as the overturning of the effect normally observed during facial mimicry. In general, the findings show that both the motor features and the type of emotional oro-facial gesture (conveying positive or negative valence) affect the kinematics of subsequent mouth movements at different levels: while congruent motor features facilitate a general motor response, motor execution can be sped up by gestures that are motorically incongruent with the observed one. Moreover, the valence effect depends on the specific movement required. Results are discussed in relation to Basic Emotion Theory and the embodied cognition framework.

  2. Recognizing Age-Separated Face Images: Humans and Machines

    PubMed Central

    Yadav, Daksha; Singh, Richa; Vatsa, Mayank; Noore, Afzel

    2014-01-01

    Humans utilize facial appearance, gender, expression, aging pattern, and other ancillary information to recognize individuals. It is interesting to observe how humans perceive facial age. Analyzing these properties can help in understanding the phenomenon of facial aging and incorporating the findings can help in designing effective algorithms. Such a study has two components - facial age estimation and age-separated face recognition. Age estimation involves predicting the age of an individual given his/her facial image. On the other hand, age-separated face recognition consists of recognizing an individual given his/her age-separated images. In this research, we investigate which facial cues are utilized by humans for estimating the age of people belonging to various age groups along with analyzing the effect of one's gender, age, and ethnicity on age estimation skills. We also analyze how various facial regions such as binocular and mouth regions influence age estimation and recognition capabilities. Finally, we propose an age-invariant face recognition algorithm that incorporates the knowledge learned from these observations. Key observations of our research are: (1) the age group of newborns and toddlers is easiest to estimate, (2) gender and ethnicity do not affect the judgment of age group estimation, (3) the face, as a global feature, is essential to achieve good performance in age-separated face recognition, and (4) the proposed algorithm yields improved recognition performance compared to existing algorithms and also outperforms a commercial system in the young image as probe scenario. PMID:25474200

  4. Unusual association of congenital middle ear cholesteatoma and first branchial cleft anomaly: management and embryological concepts.

    PubMed

    Nicollas, R; Tardivet, L; Bourlière-Najean, B; Sudre-Levillain, I; Triglia, J M

    2005-02-01

    To report two cases of a previously undescribed association of first branchial cleft fistula and congenital middle ear cholesteatoma, and to discuss management and embryological hypotheses. Retrospective study and review of the literature. Both patients were young girls with no past medical or surgical history. Surgical removal of the first cleft anomaly revealed, in both cases, a fistula routing underneath the facial nerve. Both cholesteatomas were located in the hypotympanum and mesotympanum. In one case, an anatomical link between the two malformations was clearly identified on CT scan. The main embryological theories and classifications are reviewed. A connection between Aimi's and Michaels' theories (congenital cholesteatoma) and Work's classification might explain the reported clinical association.

  5. Gender classification from face images by using local binary pattern and gray-level co-occurrence matrix

    NASA Astrophysics Data System (ADS)

    Uzbaş, Betül; Arslan, Ahmet

    2018-04-01

    Gender determination is an important step in human-computer interaction and identification processes. The human face image is one of the important sources for determining gender. In the present study, gender classification is performed automatically from facial images. In order to classify gender, we propose a combination of features extracted from face, eye and lip regions by using a hybrid method of Local Binary Pattern and Gray-Level Co-Occurrence Matrix. The features have been extracted from automatically obtained face, eye and lip regions. All of the extracted features have been combined and given as input parameters to classification methods (Support Vector Machine, Artificial Neural Networks, Naive Bayes and k-Nearest Neighbor methods) for gender classification. The Nottingham Scan face database, which consists of the frontal face images of 100 people (50 male and 50 female), is used for this purpose. As the result of the experimental studies, the highest success rate has been achieved as 98% by using Support Vector Machine. The experimental results illustrate the efficacy of our proposed method.
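
    A minimal sketch of the LBP + GLCM feature combination described above, using scikit-image (≥0.19 function names) and scikit-learn; the region crops, LBP/GLCM parameters, and classifier settings are illustrative assumptions, not the authors' exact configuration.

    ```python
    # Hedged sketch: hybrid LBP + GLCM features for one grayscale region (0-255).
    import numpy as np
    from skimage.feature import local_binary_pattern, graycomatrix, graycoprops
    from sklearn.svm import SVC

    def region_features(region):
        # Uniform LBP with P=8 yields codes 0..9, hence the 10-bin histogram
        lbp = local_binary_pattern(region, P=8, R=1, method='uniform')
        hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
        glcm = graycomatrix(region.astype(np.uint8), distances=[1],
                            angles=[0, np.pi / 2], levels=256,
                            symmetric=True, normed=True)
        stats = [graycoprops(glcm, p).ravel()
                 for p in ('contrast', 'homogeneity', 'energy', 'correlation')]
        return np.concatenate([hist] + stats)

    # X: per-image concatenation of face, eye, and lip region features; y: gender
    # clf = SVC(kernel='rbf').fit(X_train, y_train)
    ```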

  6. Human homogamy in facial characteristics: does a sexual-imprinting-like mechanism play a role?

    PubMed

    Nojo, Saori; Tamura, Satoshi; Ihara, Yasuo

    2012-09-01

    Human homogamy may be caused in part by individuals' preference for phenotypic similarities. Two types of preference can result in homogamy: individuals may prefer someone who is similar to themselves (self-referent phenotype matching) or to their parents (a sexual-imprinting-like mechanism). In order to examine these possibilities, we compare faces of couples and their family members in two ways. First, "perceived" similarity between a pair of faces is quantified as similarity ratings given to the pair. Second, "physical" similarity between two groups of faces is evaluated on the basis of correlations in principal component scores generated from facial measurements. Our results demonstrate a tendency to homogamy in facial characteristics and suggest that the tendency is due primarily to self-referent phenotype matching. Nevertheless, the presence of a sexual-imprinting-like effect is also partially indicated: whether individuals are involved in facial homogamy may be affected by their relationship with their parents during childhood.

  7. [Surgical correction of cleft palate].

    PubMed

    Kimura, F T; Pavia Noble, A; Soriano Padilla, F; Soto Miranda, A; Medellín Rodríguez, A

    1990-04-01

    This study presents a statistical review of corrective surgery for cleft palate, based on cases treated at the maxillo-facial surgery units of the Pediatrics Hospital of the Centro Médico Nacional and at Centro Médico La Raza of the National Institute of Social Security of Mexico, over a five-year period. The interdisciplinary management performed at the Cleft-Palate Clinic is amply described: an integrated approach involving specialists in maxillo-facial surgery, maxillary orthopedics, genetics, social work and mental hygiene, aimed at reestablishing the stomatological and psychological functions of children afflicted by cleft palate. The frequency and classification of the various techniques practiced in that service are described, as well as surgical statistics for 188 patients, covering a total of 256 palate surgeries performed from March 1984 to March 1989, applying three different techniques and proposing a combination of them in a single operation, in order to avoid complementary surgery.

  8. Neural processing of fearful and happy facial expressions during emotion-relevant and emotion-irrelevant tasks: A fixation-to-feature approach.

    PubMed

    Neath-Tavares, Karly N; Itier, Roxane J

    2016-09-01

    Research suggests an important role of the eyes and mouth for discriminating facial expressions of emotion. A gaze-contingent procedure was used to test the impact of fixation to facial features on the neural response to fearful, happy and neutral facial expressions in an emotion discrimination (Exp. 1) and an oddball detection (Exp. 2) task. The N170 was the only eye-sensitive ERP component, and this sensitivity did not vary across facial expressions. In both tasks, compared to neutral faces, responses to happy expressions were seen as early as 100-120 ms occipitally, while responses to fearful expressions started around 150 ms, on or after the N170, at both occipital and lateral-posterior sites. Analyses of scalp topographies revealed different distributions of these two emotion effects across most of the epoch. Emotion processing interacted with fixation location at different times between tasks. Results suggest a role of both the eyes and mouth in the neural processing of fearful expressions and of the mouth in the processing of happy expressions, before 350 ms. Copyright © 2016 Elsevier B.V. All rights reserved.

  9. Constriction of the buccal branch of the facial nerve produces unilateral craniofacial allodynia.

    PubMed

    Lewis, Susannah S; Grace, Peter M; Hutchinson, Mark R; Maier, Steven F; Watkins, Linda R

    2017-08-01

    Despite pain being a sensory experience, studies of spinal cord ventral root damage have demonstrated that motor neuron injury can induce neuropathic pain. Whether injury of cranial motor nerves can also produce nociceptive hypersensitivity has not been addressed. Herein, we demonstrate that chronic constriction injury (CCI) of the buccal branch of the facial nerve results in long-lasting, unilateral allodynia in the rat. An anterograde and retrograde tracer (3000MW tetramethylrhodamine-conjugated dextran) was not transported to the trigeminal ganglion when applied to the injury site, but was transported to the facial nucleus, indicating that this nerve branch is not composed of trigeminal sensory neurons. Finally, intracisterna magna injection of interleukin-1 (IL-1) receptor antagonist reversed allodynia, implicating the pro-inflammatory cytokine IL-1 in the maintenance of neuropathic pain induced by facial nerve CCI. These data extend the prior evidence that selective injury to motor axons can enhance pain to supraspinal circuits by demonstrating that injury of a facial nerve with predominantly motor axons is sufficient for neuropathic pain, and that the resultant pain has a neuroimmune component. Copyright © 2016 Elsevier Inc. All rights reserved.

  10. [Hemodynamic activities in children with autism while imitating emotional facial expressions: a near-infrared spectroscopy study].

    PubMed

    Mori, Kenji; Mori, Tatsuo; Goji, Aya; Ito, Hiromichi; Toda, Yoshihiro; Fujii, Emiko; Miyazaki, Masahito; Harada, Masafumi; Kagami, Shoji

    2014-07-01

    To examine the hemodynamic activities in the frontal lobe, children with autistic disorder and matched controls underwent near-infrared spectroscopy (NIRS) while imitating emotional facial expressions. The subjects consisted of 10 boys with autistic disorder without mental retardation (9 - 14 years) and 10 normally developing boys (9 - 14 years). The concentrations of oxyhemoglobin (oxy-Hb) were measured with frontal probes using a 34-channel NIRS machine while the subjects imitated emotional facial expressions. The increments in the concentration of oxy-Hb in the pars opercularis of the inferior frontal gyrus in autistic subjects were significantly lower than those in the controls. However, the concentrations of oxy-Hb in this area were significantly elevated in autistic subjects after they were trained to imitate emotional facial expressions. The increments in the concentration of oxy-Hb in this area in autistic subjects were positively correlated with the scores on a test of labeling emotional facial expressions. The pars opercularis of the inferior frontal gyrus is an important component of the mirror neuron system. The present results suggest that mirror neurons could be activated by repeated imitation in children with autistic disorder.

  11. Gender differences in memory processing of female facial attractiveness: evidence from event-related potentials.

    PubMed

    Zhang, Yan; Wei, Bin; Zhao, Peiqiong; Zheng, Minxiao; Zhang, Lili

    2016-06-01

    High rates of agreement in the judgment of facial attractiveness suggest universal principles of beauty. This study investigated gender differences in the recognition memory processing of female facial attractiveness. Thirty-four Chinese heterosexual participants (17 females, 17 males) aged 18-24 years (mean age 21.63 ± 1.51 years) participated in the experiment, which used event-related potentials (ERPs) based on a study-test paradigm. The behavioral results showed that both men and women had significantly higher accuracy rates for attractive faces than for unattractive faces, but men reacted faster to unattractive faces. Gender differences in ERPs showed that attractive faces elicited larger early components, such as P1, N170, and P2, in men than in women. The results indicated that the effects of recognition bias during memory processing modulated by female facial attractiveness are greater for men than for women. Behavioral and ERP evidence indicates that men and women differ in their attentional adhesion to attractive female faces; different mating-related motives may guide the selective processing of attractive men and women. These findings establish a contribution of gender differences to the memory processing of female facial attractiveness from an evolutionary perspective.

  12. Facial attractiveness.

    PubMed

    Little, Anthony C

    2014-11-01

    Facial attractiveness has important social consequences. Despite a widespread belief that beauty cannot be defined, in fact, there is considerable agreement across individuals and cultures on what is found attractive. By considering that attraction and mate choice are critical components of evolutionary selection, we can better understand the importance of beauty. There are many traits that are linked to facial attractiveness in humans and each may in some way impart benefits to individuals who act on their preferences. If a trait is reliably associated with some benefit to the perceiver, then we would expect individuals in a population to find that trait attractive. Such an approach has highlighted face traits such as age, health, symmetry, and averageness, which are proposed to be associated with benefits and so associated with facial attractiveness. This view may postulate that some traits will be universally attractive; however, this does not preclude variation. Indeed, it would be surprising if there existed a template of a perfect face that was not affected by experience, environment, context, or the specific needs of an individual. Research on facial attractiveness has documented how various face traits are associated with attractiveness and various factors that impact on an individual's judgments of facial attractiveness. Overall, facial attractiveness is complex, both in the number of traits that determine attraction and in the large number of factors that can alter attraction to particular faces. A fuller understanding of facial beauty will come with an understanding of how these various factors interact with each other. WIREs Cogn Sci 2014, 5:621-634. doi: 10.1002/wcs.1316 CONFLICT OF INTEREST: The author has declared no conflicts of interest for this article. For further resources related to this article, please visit the WIREs website. © 2014 John Wiley & Sons, Ltd.

  13. Age Differences in the Complexity of Emotion Perception.

    PubMed

    Kim, Seungyoun; Geren, Jennifer L; Knight, Bob G

    2015-01-01

    The current study examined age differences in the number of emotion components used in the judgment of emotion from facial expressions. Fifty-eight younger and 58 older adults were compared on the complexity of perception of emotion from standardized facial expressions that were either clear or ambiguous exemplars of emotion. Using an intra-individual factor analytic approach, results showed that older adults used more emotion components in perceiving emotion in faces than younger adults. Both age groups reported greater emotional complexity for the clear and prototypical emotional stimuli. Age differences in emotional complexity were more pronounced for the ambiguous expressions compared with the clear expressions. These findings demonstrate that older adults showed increased elaboration of emotion, particularly when emotion cues were subtle and provide support for greater emotion differentiation in older adulthood.

  14. Facial Speech Gestures: The Relation between Visual Speech Processing, Phonological Awareness, and Developmental Dyslexia in 10-Year-Olds

    ERIC Educational Resources Information Center

    Schaadt, Gesa; Männel, Claudia; van der Meer, Elke; Pannekamp, Ann; Friederici, Angela D.

    2016-01-01

    Successful communication in everyday life crucially involves the processing of auditory and visual components of speech. Viewing our interlocutor and processing visual components of speech facilitates speech processing by triggering auditory processing. Auditory phoneme processing, analyzed by event-related brain potentials (ERP), has been shown…

  15. Space-by-time manifold representation of dynamic facial expressions for emotion categorization

    PubMed Central

    Delis, Ioannis; Chen, Chaona; Jack, Rachael E.; Garrod, Oliver G. B.; Panzeri, Stefano; Schyns, Philippe G.

    2016-01-01

    Visual categorization is the brain computation that reduces high-dimensional information in the visual environment into a smaller set of meaningful categories. An important problem in visual neuroscience is to identify the visual information that the brain must represent and then use to categorize visual inputs. Here we introduce a new mathematical formalism—termed space-by-time manifold decomposition—that describes this information as a low-dimensional manifold separable in space and time. We use this decomposition to characterize the representations used by observers to categorize the six classic facial expressions of emotion (happy, surprise, fear, disgust, anger, and sad). By means of a Generative Face Grammar, we presented random dynamic facial movements on each experimental trial and used subjective human perception to identify the facial movements that correlate with each emotion category. When the random movements projected onto the categorization manifold region corresponding to one of the emotion categories, observers categorized the stimulus accordingly; otherwise they selected “other.” Using this information, we determined both the Action Unit and temporal components whose linear combinations lead to reliable categorization of each emotion. In a validation experiment, we confirmed the psychological validity of the resulting space-by-time manifold representation. Finally, we demonstrated the importance of temporal sequencing for accurate emotion categorization and identified the temporal dynamics of Action Unit components that cause typical confusions between specific emotions (e.g., fear and surprise) as well as those resolving these confusions. PMID:27305521
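
    The general shape of such a decomposition (our notation, stated as an assumption; the paper defines its own formalism) factorizes each trial's time-by-space movement matrix into shared temporal and spatial modules with trial-specific coefficients:

    ```latex
    % M^{(s)}: trial s, T time samples x K Action Units;
    % W_tem: P temporal modules; W_spa: N spatial modules; A^{(s)}: coefficients.
    M^{(s)} \approx W_{\mathrm{tem}}\, A^{(s)}\, W_{\mathrm{spa}},
    \qquad
    W_{\mathrm{tem}} \in \mathbb{R}^{T \times P},\;
    A^{(s)} \in \mathbb{R}^{P \times N},\;
    W_{\mathrm{spa}} \in \mathbb{R}^{N \times K}
    ```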

  16. Depth Structure from Asymmetric Shading Supports Face Discrimination

    PubMed Central

    Chen, Chien-Chung; Chen, Chin-Mei; Tyler, Christopher W.

    2013-01-01

    To examine the effect of illumination direction on the ability of observers to discriminate between faces, we manipulated the direction of illumination on scanned 3D face models. In order to dissociate the surface reflectance and illumination components of front-view face images, we introduce a symmetry algorithm that can separate the symmetric and asymmetric components of the face in both low and high spatial frequency bands. Based on this approach, hybrid faces stimuli were constructed with different combinations of symmetric and asymmetric spatial content. Discrimination results with these images showed that asymmetric illumination information biased face perception toward the structure of the shading component, while the symmetric illumination information had little, if any, effect. Measures of perceived depth showed that this property increased systematically with the asymmetric but not the symmetric low spatial frequency component. Together, these results suggest that (1) the asymmetric 3D shading information dramatically affects both the perceived facial information and the perceived depth of the facial structure; and (2) these effects both increase as the illumination direction is shifted to the side. Thus, our results support the hypothesis that face processing has a strong 3D component. PMID:23457484
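
    One plausible reading of the symmetry algorithm is the even/odd split sketched below (our reconstruction; the paper's algorithm may define the frequency bands differently). Mirroring a frontal face about its vertical midline separates the symmetric and asymmetric parts, and a Gaussian blur splits each into low and high spatial-frequency bands.

    ```python
    # Hedged sketch: symmetric/asymmetric decomposition of a frontal face image.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def symmetry_split(img, sigma=4.0):
        mirrored = img[:, ::-1]                 # reflect about vertical midline
        sym = 0.5 * (img + mirrored)            # symmetric (even) component
        asym = 0.5 * (img - mirrored)           # asymmetric (odd) component
        low = {'sym': gaussian_filter(sym, sigma),
               'asym': gaussian_filter(asym, sigma)}
        high = {'sym': sym - low['sym'], 'asym': asym - low['asym']}
        return low, high    # recombine bands to build hybrid face stimuli
    ```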

  17. Unconscious Processing of Facial Expressions in Individuals with Internet Gaming Disorder.

    PubMed

    Peng, Xiaozhe; Cui, Fang; Wang, Ting; Jiao, Can

    2017-01-01

    Internet Gaming Disorder (IGD) is characterized by impairments in social communication and the avoidance of social contact. Facial expression processing is the basis of social communication. However, few studies have investigated how individuals with IGD process facial expressions, and whether they have deficits in emotional facial processing remains unclear. The aim of the present study was to explore these two issues by investigating the time course of emotional facial processing in individuals with IGD. A backward masking task was used to investigate the differences between individuals with IGD and normal controls (NC) in the processing of subliminally presented facial expressions (sad, happy, and neutral) with event-related potentials (ERPs). The behavioral results showed that individuals with IGD are slower than NC in response to both sad and neutral expressions in the sad-neutral context. The ERP results showed that individuals with IGD exhibit decreased amplitudes in ERP component N170 (an index of early face processing) in response to neutral expressions compared to happy expressions in the happy-neutral expressions context, which might be due to their expectancies for positive emotional content. The NC, on the other hand, exhibited comparable N170 amplitudes in response to both happy and neutral expressions in the happy-neutral expressions context, as well as sad and neutral expressions in the sad-neutral expressions context. Both individuals with IGD and NC showed comparable ERP amplitudes during the processing of sad expressions and neutral expressions. The present study revealed that individuals with IGD have different unconscious neutral facial processing patterns compared with normal individuals and suggested that individuals with IGD may expect more positive emotion in the happy-neutral expressions context. • The present study investigated whether the unconscious processing of facial expressions is influenced by excessive online gaming. A validated backward masking paradigm was used to investigate whether individuals with Internet Gaming Disorder (IGD) and normal controls (NC) exhibit different patterns in facial expression processing.• The results demonstrated that individuals with IGD respond differently to facial expressions compared with NC on a preattentive level. Behaviorally, individuals with IGD are slower than NC in response to both sad and neutral expressions in the sad-neutral context. The ERP results further showed (1) decreased amplitudes in the N170 component (an index of early face processing) in individuals with IGD when they process neutral expressions compared with happy expressions in the happy-neutral expressions context, whereas the NC exhibited comparable N170 amplitudes in response to these two expressions; (2) both the IGD and NC group demonstrated similar N170 amplitudes in response to sad and neutral faces in the sad-neutral expressions context.• The decreased amplitudes of N170 to neutral faces than happy faces in individuals with IGD might due to their less expectancies for neutral content in the happy-neutral expressions context, while individuals with IGD may have no different expectancies for neutral and sad faces in the sad-neutral expressions context.

  18. Assessment of Emotional Expressions after Full-Face Transplantation.

    PubMed

    Topçu, Çağdaş; Uysal, Hilmi; Özkan, Ömer; Özkan, Özlenen; Polat, Övünç; Bedeloğlu, Merve; Akgül, Arzu; Döğer, Ela Naz; Sever, Refik; Barçın, Nur Ebru; Tombak, Kadriye; Çolak, Ömer Halil

    2017-01-01

    We assessed clinical features as well as sensory and motor recoveries in 3 full-face transplantation patients. A frequency analysis was performed on facial surface electromyography data collected during 6 basic emotional expressions and 4 primary facial movements. Motor progress was assessed using the wavelet packet method by comparison against the mean results obtained from 10 healthy subjects. Analyses were conducted on 1 patient at approximately 1 year after face transplantation and at 2 years after transplantation in the remaining 2 patients. Motor recovery was observed following sensory recovery in all 3 patients; however, the 3 cases had different backgrounds and exhibited different degrees and rates of sensory and motor improvements after transplant. Wavelet packet energy was detected in all patients during emotional expressions and primary movements; however, there were fewer active channels during expressions in transplant patients compared to healthy individuals, and patterns of wavelet packet energy were different for each patient. Finally, high-frequency components were typically detected in patients during emotional expressions, but fewer channels demonstrated these high-frequency components in patients compared to healthy individuals. Our data suggest that the posttransplantation recovery of emotional facial expression requires neural plasticity.
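
    A hedged sketch of a wavelet-packet energy profile for one surface-EMG channel, in the spirit of the analysis above; the wavelet family and decomposition depth are assumptions, and PyWavelets is used for illustration.

    ```python
    # Relative wavelet packet energy per frequency band for one sEMG channel.
    import numpy as np
    import pywt

    def wp_band_energies(signal, wavelet='db4', level=4):
        wp = pywt.WaveletPacket(data=signal, wavelet=wavelet,
                                mode='symmetric', maxlevel=level)
        nodes = wp.get_level(level, order='freq')   # leaves, low -> high freq
        energies = np.array([np.sum(node.data ** 2) for node in nodes])
        return energies / energies.sum()

    # Comparing patient profiles against the mean profile of healthy subjects
    # gives a simple channel-wise index of motor recovery.
    ```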

  19. Early adverse experiences and the neurobiology of facial emotion processing.

    PubMed

    Moulson, Margaret C; Fox, Nathan A; Zeanah, Charles H; Nelson, Charles A

    2009-01-01

    To examine the neurobiological consequences of early institutionalization, the authors recorded event-related potentials (ERPs) from 3 groups of Romanian children--currently institutionalized, previously institutionalized but randomly assigned to foster care, and family-reared children--in response to pictures of happy, angry, fearful, and sad facial expressions of emotion. At 3 assessments (baseline, 30 months, and 42 months), institutionalized children showed markedly smaller amplitudes and longer latencies for the occipital components P1, N170, and P400 compared to family-reared children. By 42 months, ERP amplitudes and latencies of children placed in foster care were intermediate between the institutionalized and family-reared children, suggesting that foster care may be partially effective in ameliorating adverse neural changes caused by institutionalization. The age at which children were placed into foster care was unrelated to their ERP outcomes at 42 months. Facial emotion processing was similar in all 3 groups of children; specifically, fearful faces elicited larger amplitude and longer latency responses than happy faces for the frontocentral components P250 and Nc. These results have important implications for understanding of the role that experience plays in shaping the developing brain.

  20. Age-related differences in morphological characteristics of residual skin surface components collected from the surface of facial skin of healthy male volunteers.

    PubMed

    Chalyk, N E; Bandaletova, T Y; Kyle, N H; Petyaev, I M

    2017-05-01

    The global increase in human longevity has resulted in the emergence of previously ignored ageing-related problems. Skin ageing is a well-known phenomenon, but the active search for scientific approaches to its prevention, and even skin rejuvenation, is a relatively new area. Although the structure and composition of the stratum corneum (SC), the superficial layer of the epidermis, is well studied, relatively little is known about the residual skin surface components (RSSC) that overlay the surface of the SC. The aim of this study was to examine morphological features of RSSC samples non-invasively collected from the surface of human facial skin for the presence of age-related changes. Residual skin surface component samples were collected by swabbing from the facial skin of 60 adult male volunteers allocated to two age groups: 34 subjects aged 18-32 years and 26 subjects aged 58-72 years. The collected samples were analysed microscopically: the size of the lipid droplets was measured; desquamated corneocytes and lipid crystals were counted; and microbial presence was assessed semi-quantitatively. Age-related changes were revealed for all studied components of the RSSC. There was a significant (P = 0.0126) decrease in the size of lipid droplets among older men. Likewise, significantly (P = 0.0252) lower numbers of lipid crystals were present in this group. In contrast, microbial presence in the RSSC was significantly (P = 0.0019) increased in the older group. There was also a trend towards more abundant corneocyte desquamation among older men, but the difference did not reach statistical significance (P = 0.0636). Non-invasively collected RSSC samples provide informative material for studying age-related changes on the surface of the SC of human facial skin. The results of this study confirm earlier observations regarding the age-associated decline in the efficiency of the epidermal barrier and can be used for testing new approaches to skin ageing prevention. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  1. Age and gender classification in the wild with unsupervised feature learning

    NASA Astrophysics Data System (ADS)

    Wan, Lihong; Huo, Hong; Fang, Tao

    2017-03-01

    Inspired by unsupervised feature learning (UFL) within the self-taught learning framework, we propose a method based on UFL, convolution representation, and part-based dimensionality reduction to handle facial age and gender classification, which are two challenging problems under unconstrained circumstances. First, UFL is introduced to learn selective receptive fields (filters) automatically by applying whitening transformation and spherical k-means on random patches collected from unlabeled data. The learning process is fast and has no hyperparameters to tune. Then, the input image is convolved with these filters to obtain filtering responses on which local contrast normalization is applied. Average pooling and feature concatenation are then used to form global face representation. Finally, linear discriminant analysis with part-based strategy is presented to reduce the dimensions of the global representation and to improve classification performances further. Experiments on three challenging databases, namely, Labeled faces in the wild, Gallagher group photos, and Adience, demonstrate the effectiveness of the proposed method relative to that of state-of-the-art approaches.
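
    The filter-learning step lends itself to a compact sketch (our paraphrase of whitening plus spherical k-means; the patch handling and constants are assumptions):

    ```python
    # Hedged sketch: learn filters from random patches via ZCA + spherical k-means.
    import numpy as np

    def learn_filters(patches, k=64, iters=10, eps=1e-2):
        X = patches - patches.mean(axis=1, keepdims=True)   # (n, d) patches
        vals, vecs = np.linalg.eigh(np.cov(X, rowvar=False))
        X = X @ (vecs @ np.diag(1.0 / np.sqrt(vals + eps)) @ vecs.T)  # ZCA
        X /= np.linalg.norm(X, axis=1, keepdims=True) + 1e-8
        D = X[np.random.choice(len(X), k, replace=False)]   # init centroids
        for _ in range(iters):
            assign = np.argmax(X @ D.T, axis=1)             # cosine similarity
            for j in range(k):
                members = X[assign == j]
                if len(members):
                    D[j] = members.sum(axis=0)
            D /= np.linalg.norm(D, axis=1, keepdims=True) + 1e-8
        return D    # rows are the learned filters (receptive fields)
    ```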

  2. Approximated mutual information training for speech recognition using myoelectric signals.

    PubMed

    Guo, Hua J; Chan, A D C

    2006-01-01

    A new training algorithm called approximated maximum mutual information (AMMI) is proposed to improve the accuracy of myoelectric speech recognition using hidden Markov models (HMMs). Previous studies have demonstrated that automatic speech recognition can be performed using myoelectric signals from articulatory muscles of the face. Classification of facial myoelectric signals can be performed using HMMs that are trained using the maximum likelihood (ML) algorithm; however, this algorithm maximizes the likelihood of the observations in the training sequence, which is not directly associated with optimal classification accuracy. The AMMI training algorithm attempts to maximize the mutual information, thereby training the HMMs to optimize their parameters for discrimination. Our results show that AMMI training consistently reduces the error rates compared to those obtained with ML training, increasing the accuracy by approximately 3% on average.
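
    For context, the standard maximum mutual information criterion that AMMI approximates can be stated as follows (our formulation, not the paper's notation): training maximizes the posterior of the correct class c_r for each observation sequence O_r, rather than the likelihood alone.

    ```latex
    F_{\mathrm{MMI}}(\lambda)
      = \sum_{r} \log
        \frac{P_{\lambda}(O_r \mid c_r)\, P(c_r)}
             {\sum_{c} P_{\lambda}(O_r \mid c)\, P(c)}
    ```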

  3. Kruskal-Wallis-based computationally efficient feature selection for face recognition.

    PubMed

    Ali Khan, Sajid; Hussain, Ayyaz; Basit, Abdul; Akram, Sheeraz

    2014-01-01

    Face recognition attains much importance in today's technological world, as do face recognition applications. Most of the existing work uses frontal face images for classification; however, these techniques fail when applied to real-world face images. The proposed technique effectively extracts the prominent facial features. Most of the extracted features are redundant and do not contribute to representing the face. In order to eliminate those redundant features, a computationally efficient algorithm is used to select the more discriminative face features. The selected features are then passed to the classification step. In the classification step, different classifiers are combined in an ensemble to enhance the recognition accuracy, as a single classifier is unable to achieve high accuracy. Experiments are performed on standard face database images and the results are compared with existing techniques.
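
    A minimal version of the Kruskal-Wallis ranking step might look like the sketch below (our reading of the approach; the cutoff k is illustrative):

    ```python
    # Rank features by the Kruskal-Wallis H statistic across classes.
    import numpy as np
    from scipy.stats import kruskal

    def kw_select(X, y, k=50):
        classes = np.unique(y)
        scores = np.array([
            kruskal(*(X[y == c, j] for c in classes)).statistic
            for j in range(X.shape[1])
        ])
        return np.argsort(scores)[::-1][:k]   # indices of the top-k features
    ```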

  4. A Features Selection for Crops Classification

    NASA Astrophysics Data System (ADS)

    Liu, Yifan; Shao, Luyi; Yin, Qiang; Hong, Wen

    2016-08-01

    The components of polarimetric target decomposition reflect differences between targets, since they are linked with the scattering properties of the target and can be imported into an SVM as classification features. The result of the decomposition usually concentrates on a subset of the components. Selecting a combination of components can reduce the number of features imported into the SVM. This feature reduction leads to less computation and to targeted classification of one target when classifying a multi-class area. In this research, we import different combinations of features into the SVM and find a better combination for classification using AGRISAR data.
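
    The combination search can be illustrated with a brute-force sketch (hypothetical component names and cross-validation settings; the paper's selection procedure may differ):

    ```python
    # Pick the decomposition-component subset with the best cross-validated SVM score.
    import itertools
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    def best_combination(features, y, max_size=3):
        # features: dict of component name -> (n_samples,) feature column
        best, best_score = None, -np.inf
        for r in range(1, max_size + 1):
            for combo in itertools.combinations(features, r):
                X = np.column_stack([features[n] for n in combo])
                score = cross_val_score(SVC(), X, y, cv=5).mean()
                if score > best_score:
                    best, best_score = combo, score
        return best, best_score
    ```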

  5. Prevalence profile of odontogenic cysts and tumors on Brazilian sample after the reclassification of odontogenic keratocyst.

    PubMed

    Jaeger, Filipe; de Noronha, Mariana Saturnino; Silva, Maiza Luiza Vieira; Amaral, Márcio Bruno Figueiredo; Grossmann, Soraya de Mattos Carmago; Horta, Martinho Campolina Rebello; de Souza, Paulo Eduardo Alencar; de Aguiar, Maria Cássia Ferreira; Mesquita, Ricardo Alves

    2017-02-01

    The aim of this study was to evaluate the impact of the reclassification of the odontogenic keratocyst (OKC) as a tumor on the prevalence profile of odontogenic cysts (OCs) and odontogenic tumors (OTs). Two referral Oral and Maxillofacial Pathology services in Brazil were evaluated. All cases diagnosed as OCs or OTs were selected and classified according to the 1992 WHO classification (cases before the 2005 WHO classification of tumors, excluding OKC) and to the 2005 WHO classification of tumors (subsequent cases, including keratocystic odontogenic tumor, KCOT). The frequency and prevalence of OCs and OTs were compared before and after the reclassification. Among 27,854 oral biopsies, 4920 (17.66%) were OCs and 992 (3.56%) were OTs. The prevalence of OTs before the 2005 WHO classification of tumors was 2.04%, while the prevalence after the 2005 WHO classification was 11.51% (p < 0.0001). Before 2006, the most frequently diagnosed tumor was odontoma, with 194 cases (39.67%); after the 2005 WHO classification of tumors, KCOT was the most frequent, with 207 cases (41.07%). The increase in the prevalence of OTs after the 2005 WHO classification is related to the improvement of pathology services and to the inclusion of KCOT in the OTs group. Copyright © 2016 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.

  7. Characterization of small-to-medium head-and-face dimensions for developing respirator fit test panels and evaluating fit of filtering facepiece respirators with different faceseal design.

    PubMed

    Lin, Yi-Chun; Chen, Chen-Peng

    2017-01-01

    A respirator fit test panel (RFTP) with a facial size distribution representative of intended users is essential to the evaluation of respirator fit for new models of respirators. In this study an anthropometric survey was conducted among youths representing respirator users in mid-Taiwan to characterize the head-and-face dimensions key to RFTPs for application to small-to-medium facial features. The participants were fit-tested for three N95 masks of different facepiece design, and the results were compared to the facial size distributions specified in the RFTPs of bivariate and principal component analysis (PCA) design developed in this study, to assess the influence of facial characteristics on respirator fit in relation to facepiece design. Nineteen dimensions were measured for 206 participants. In fit testing, the qualitative fit test (QLFT) procedures prescribed by the U.S. Occupational Safety and Health Administration were adopted. As the results show, the bizygomatic breadth of the male and female participants was 90.1% and 90.8%, respectively, of that reported for U.S. youths (P < 0.001). Compared to the bivariate distribution, the PCA design better accommodated variation in facial contours among different respirator user groups or populations, with the RFTPs reported in this study and from the literature consistently covering over 92% of the participants. Overall, the facial fit of filtering facepieces increased with increasing facial dimensions. The total percentages of tests in which the final maneuver completed was "Moving head up-and-down", "Talking" or "Bending over" were 13.3-61.9% for the bivariate RFTPs and 22.9-52.8% for the PCA RFTPs. The respirators with a three-panel flat fold structured into the facepiece provided greater fit, particularly when the users moved their heads. When the facial size distribution in a bivariate RFTP does not sufficiently represent petite facial sizes, fit testing is inclined to overestimate general fit; thus, for small-to-medium facial dimensions a distinct RFTP should be considered.
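
    As an illustration of the PCA panel design (the cell boundaries and bin counts are our assumptions), the facial dimensions can be projected onto the first two principal components and the PC plane gridded into panel cells:

    ```python
    # Hedged sketch: assign subjects to PCA fit-test-panel cells.
    import numpy as np
    from sklearn.decomposition import PCA

    def pca_panel_cells(dims, n_bins=5):
        # dims: (n_subjects, 19) matrix of head-and-face measurements
        pcs = PCA(n_components=2).fit_transform(dims)
        edges = [np.linspace(pcs[:, j].min(), pcs[:, j].max(), n_bins + 1)
                 for j in range(2)]
        cells = [np.digitize(pcs[:, j], edges[j][1:-1]) for j in range(2)]
        return np.stack(cells, axis=1)   # (row, col) panel cell per subject
    ```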

  8. Photo-anthropometric study on face among Garo adult females of Bangladesh.

    PubMed

    Akhter, Z; Banu, M L A; Alam, M M; Hossain, S; Nazneen, M

    2013-08-01

    Facial anthropometry has well-known implications in health-related fields. Measurement of the human face is used in the identification of persons in forensic medicine, plastic surgery, orthodontics, archaeology, hair-style design and the examination of differences between races and ethnicities. Facial anthropometry provides an indication of the variations in facial shape in a specified population. Bangladesh harbours many cultures and people of different races because of the colonial rule of past regimes. Standards based on ethnic or racial data are desirable because these standards reflect the potentially different patterns of craniofacial growth resulting from racial, ethnic and sexual differences. In this context, the present study attempted to establish ethnicity-specific anthropometric data for Christian Garo adult females of Bangladesh. The study was observational, cross-sectional and primarily descriptive in nature, with some analytical components, and was carried out with a total of 100 Christian Garo adult females aged between 25 and 45 years. Three vertical facial dimensions, namely facial height from 'trichion' to 'gnathion', nasal length and total vermilion height, were measured by the photographic method. Though these measurements were taken photographically, they were converted into actual size using one physically measured variable, the distance between the two angles of the mouth (chilion to chilion). The data were then statistically analyzed to establish normative values. The study also examined the possible correlation of facial height from 'trichion' to 'gnathion' with nasal length and total vermilion height. Multiplication factors were estimated for estimating facial height from nasal length and total vermilion height. Comparisons were made between 'estimated' and 'measured' values using the 't' test. The mean (+/- SD) nasal length and total vermilion height were 4.53 +/- 0.36 cm and 1.63 +/- 0.23 cm respectively, and the mean (+/- SD) facial height from 'trichion' to 'gnathion' was 16.88 +/- 1.11 cm. Nasal length and total vermilion height each showed a significant positive correlation with facial height from 'trichion' to 'gnathion'. No significant difference was found between the 'measured' and 'estimated' facial height from 'trichion' to 'gnathion' for either nasal length or total vermilion height.
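
    Using the reported means, the multiplication factors work out as follows (an illustrative calculation; the study's factors may be derived per subject rather than from group means):

    ```latex
    \mathrm{MF}_{\mathrm{NL}} = \frac{\overline{FH}}{\overline{NL}}
      = \frac{16.88}{4.53} \approx 3.73,
    \qquad
    \mathrm{MF}_{\mathrm{VH}} = \frac{16.88}{1.63} \approx 10.36,
    \qquad
    \widehat{FH} \approx \mathrm{MF} \times \text{(measured dimension)}
    ```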

  9. Unilateral hemimandibular hyperactivity: Clinical features of a population of 128 patients.

    PubMed

    Vernucci, Roberto Antonio; Mazzoli, Valentina; Galluccio, Gabriella; Silvestri, Alessandro; Barbato, Ersilia

    2018-07-01

    Facial asymmetries due to unilateral condylar hyperactivity are often a challenge both for maxillo-facial surgeons and for orthodontists; the current literature shows differing opinions about aetiology, classification, treatment approach and timing. We performed a retrospective study of patients suffering from unilateral condylar hyperactivity treated between 1997 and 2015 in our Department; clinical features and treatment options were grouped and compared with the literature. The descriptive analysis investigated variables such as sex, age, side and direction of the asymmetry, condylar activity and type of intervention. The population comprised 128 patients. Hemimandibular hyperactivity occurs equally in both sexes around the second decade, although the age at first consultation ranged from 7 to 49 years. The vertical hyperdevelopment group was almost equal in size to the horizontal group. All the patients with horizontal hyperactivity showed negative scintigraphy and were treated with pre-surgical orthodontics and orthognathic surgery; patients with vertical hyperactivity and positive scintigraphy were treated with condylectomy and post-surgical orthodontics. In our group of patients, the direction of the hyperactivity and the results of the scintigraphy guided treatment choice and timing. Further studies are necessary to explain why, in our group, all the patients with horizontal involvement were negative on scintigraphy. Copyright © 2018 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.

  10. A glasses-type wearable device for monitoring the patterns of food intake and facial activity

    NASA Astrophysics Data System (ADS)

    Chung, Jungman; Chung, Jungmin; Oh, Wonjun; Yoo, Yongkyu; Lee, Won Gu; Bang, Hyunwoo

    2017-01-01

    Here we present a new method for the automatic and objective monitoring of ingestive behaviors, distinguished from other facial activities, through load cells embedded in a pair of glasses named GlasSense. Mastication involves a cyclic movement of the temporomandibular joint, activated by subtle contraction and relaxation of the temporalis muscle. However, such muscular signals are, in general, too weak to sense without amplification or electromyographic analysis. To detect these oscillatory facial signals without the use of an obtrusive device, we incorporated a load cell into each hinge, which serves as a lever mechanism on both sides of the glasses. The signal measured at the load cells thus captures the force amplified mechanically by the hinge. We demonstrated a proof-of-concept validation of the amplification by comparing the force signals between the hinge and the temple. Pattern recognition was applied to extract statistical features and classify featured behavioral patterns, such as natural head movement, chewing, talking, and winking. Overall, the average F1 score of the classification was about 94.0% and the accuracy was above 89%. We believe this approach will be helpful for designing a non-intrusive and unobtrusive eyewear-based ingestive behavior monitoring system.
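
    As a rough sketch of how such load-cell streams might be turned into behavioral classifications: the sampling rate, window length, feature set and classifier below are assumptions for illustration, not the GlasSense configuration.

```python
# Illustrative only: windowed statistical features from two load-cell
# channels feeding a generic classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(sig, fs=100, win_s=2.0):
    """sig: (n_samples, 2) left/right load-cell signal. Returns one row of
    simple statistics (mean, std, peak-to-peak per channel) per window."""
    step = int(fs * win_s)
    rows = []
    for start in range(0, len(sig) - step + 1, step):
        w = sig[start:start + step]
        rows.append(np.concatenate([w.mean(axis=0), w.std(axis=0),
                                    np.ptp(w, axis=0)]))
    return np.array(rows)

# recordings: list of (n_samples, 2) arrays; labels: one behavior per window,
# e.g. "head", "chew", "talk", "wink" (placeholder names).
# X = np.vstack([window_features(r) for r in recordings])
# clf = RandomForestClassifier().fit(X, labels)
```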

  11. Rapid Stress System Drives Chemical Transfer of Fear from Sender to Receiver

    PubMed Central

    de Groot, Jasper H. B.; Smeets, Monique A. M.; Semin, Gün R.

    2015-01-01

    Humans can register another person’s fear not only with their eyes and ears, but also with their nose. Previous research has demonstrated that exposure to body odors from fearful individuals elicited implicit fear in others. The odor of fearful individuals appears to have a distinctive signature that can be produced relatively rapidly, driven by a physiological mechanism that has remained unexplored in earlier research. The apocrine sweat glands in the armpit that are responsible for chemosignal production contain receptors for adrenalin. We therefore expected that the release of adrenalin through activation of the rapid stress response system (i.e., the sympathetic-adrenal medullary system) is what drives the release of fear sweat, as opposed to activation of the slower stress response system (i.e., hypothalamus-pituitary-adrenal axis). To test this assumption, sweat was sampled while eight participants prepared for a speech. Participants had higher heart rates and produced more armpit sweat in the fast stress condition, compared to baseline and the slow stress condition. Importantly, exposure to sweat from participants in the fast stress condition induced in receivers (N = 31) a simulacrum of the state of the sender, evidenced by the emergence of a fearful facial expression (facial electromyography) and vigilant behavior (i.e., faster classification of emotional facial expressions). PMID:25723720

  12. Interactive searching of facial image databases

    NASA Astrophysics Data System (ADS)

    Nicholls, Robert A.; Shepherd, John W.; Shepherd, Jean

    1995-09-01

    A set of psychological facial descriptors has been devised to enable computerized searching of criminal photograph albums. The descriptors have been used to encode image databases of up to twelve thousand images. Using a system called FACES, the databases are searched by translating a witness' verbal description into corresponding facial descriptors. Trials of FACES have shown that this coding scheme is more productive and efficient than searching traditional photograph albums. An alternative method of searching the encoded database using a genetic algorithm is currently being tested. The genetic search method does not require the witness to verbalize a description of the target but merely to indicate a degree of similarity between the target and a limited selection of images from the database. The major drawback of FACES is that it requires manual encoding of images. Research is being undertaken to automate the process; however, it will require an algorithm that can predict human descriptive values. Alternatives to human-derived coding schemes exist using statistical classifications of images. Since databases encoded using statistical classifiers do not have an obvious direct mapping to human-derived descriptors, a search method that does not require the entry of human descriptors is needed. A genetic search algorithm is being tested for this purpose.

  13. Research of Face Recognition with Fisher Linear Discriminant

    NASA Astrophysics Data System (ADS)

    Rahim, R.; Afriliansyah, T.; Winata, H.; Nofriansyah, D.; Ratnadewi; Aryza, S.

    2018-01-01

    Face identification systems are developing rapidly, and these developments drive the advancement of biometric-based identification systems with high accuracy. However, developing a face recognition system that achieves high accuracy remains difficult: human faces show diverse expressions and attribute changes such as eyeglasses, moustaches, beards and others. Fisher Linear Discriminant (FLD) is a class-specific method that separates facial images into classes, maximizing the distance between classes while minimizing within-class scatter, so as to produce better classification.
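
    A minimal sketch of FLD-based face classification with scikit-learn follows; the flattened-image data and identity labels are placeholders.

```python
# Minimal FLD sketch: project flattened face images onto directions that
# maximize between-class scatter relative to within-class scatter.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# X: (n_images, n_pixels) flattened faces; y: identity labels (placeholders).
X = np.random.rand(100, 16 * 16)
y = np.repeat(np.arange(10), 10)

fld = LinearDiscriminantAnalysis()
fld.fit(X, y)
print(fld.score(X, y))  # training accuracy of the discriminant classifier
```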

  14. Toward DNA-based facial composites: preliminary results and validation.

    PubMed

    Claes, Peter; Hill, Harold; Shriver, Mark D

    2014-11-01

    The potential of constructing useful DNA-based facial composites is forensically of great interest. Given the significant identity information coded in the human face, these predictions could help investigations out of an impasse. Although there is substantial evidence that much of the total variation in facial features is genetically mediated, the discovery of which genes and gene variants underlie normal facial variation has been hampered primarily by the multipartite nature of facial variation. Traditionally, such physical complexity is simplified by scalar measurements defined a priori, such as nose or mouth width, or by dimensionality-reduction techniques such as principal component analysis, where each principal coordinate is then treated as a scalar trait. However, as shown in previous and related work, a more impartial and systematic approach to modeling facial morphology is available and can facilitate both the gene discovery steps, as we recently showed, and DNA-based facial composite construction, as we show here. We first use genomic ancestry and sex to create a base-face, which is simply an average sex- and ancestry-matched face. Subsequently, the effects of 24 individual SNPs that have been shown to have significant effects on facial variation are overlaid on the base-face, forming the predicted-face in a process akin to a photomontage or image blending. We next evaluate the accuracy of predicted faces using cross-validation. Physical accuracy of the facial predictions, either locally in particular parts of the face or in terms of overall similarity, is mainly determined by sex and genomic ancestry. The SNP effects maintain the physical accuracy while significantly increasing the distinctiveness of the facial predictions, which would be expected to reduce false positives in perceptual identification tasks. To the best of our knowledge, this is the first effort at generating facial composites from DNA, and the results are preliminary but certainly promising, especially considering the limited amount of genetic information about the face contained in these 24 SNPs. This approach can incorporate additional SNPs as these are discovered and their effects documented. In this context we discuss three main avenues of research: expanding our knowledge of the genetic architecture of facial morphology, improving the predictive modeling of facial morphology by exploring and incorporating alternative prediction models, and increasing the value of the results through the weighted encoding of physical measurements in terms of human perception of faces. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
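
    The additive, photomontage-like construction described here can be sketched schematically as follows; the mesh size, effect maps and genotype encoding are illustrative placeholders, not the authors' data or code.

```python
# Schematic sketch of a base-face plus genotype-weighted SNP effects.
import numpy as np

n_vertices = 5000                      # dense 3D face mesh (assumed size)
base_face = np.zeros((n_vertices, 3))  # average face for given sex/ancestry
snp_effects = np.random.randn(24, n_vertices, 3) * 1e-3  # per-SNP effect maps
genotypes = np.random.randint(0, 3, 24)                  # 0/1/2 minor alleles

# Predicted face = base face + sum of genotype-weighted SNP displacement maps.
predicted_face = base_face + np.tensordot(genotypes, snp_effects, axes=1)
print(predicted_face.shape)  # (5000, 3)
```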

  15. Lyme disease and Bell's palsy: an epidemiological study of diagnosis and risk in England.

    PubMed

    Cooper, Lilli; Branagan-Harris, Michael; Tuson, Richard; Nduka, Charles

    2017-05-01

    Lyme disease is caused by a tick-borne spirochaete of the Borrelia species. It is associated with facial palsy, is increasingly common in England, and may be misdiagnosed as Bell's palsy. To produce an accurate map of Lyme disease diagnosis in England and to identify patients at risk of developing associated facial nerve palsy, to enable prevention, early diagnosis, and effective treatment. Hospital episode statistics (HES) data in England from the Health and Social Care Information Centre were interrogated from April 2011 to March 2015 for International Classification of Diseases 10th revision (ICD-10) codes A69.2 (Lyme disease) and G51.0 (Bell's palsy) in isolation, and as a combination. Patients' age, sex, postcode, month of diagnosis, and socioeconomic groups as defined according to the English Indices of Deprivation (2004) were also collected. Lyme disease hospital diagnosis increased by 42% per year from 2011 to 2015 in England. Higher incidence areas, largely rural, were mapped. A trend towards socioeconomic privilege and the months of July to September was observed. Facial palsy in combination with Lyme disease is also increasing, particularly in younger patients, with a mean age of 41.7 years, compared with 59.6 years for Bell's palsy and 45.9 years for Lyme disease (P = 0.05, analysis of variance [ANOVA]). Healthcare practitioners should have a high index of suspicion for Lyme disease following travel in the areas shown, particularly in the summer months. The authors suggest that patients presenting with facial palsy should be tested for Lyme disease. © British Journal of General Practice 2017.

  16. Information processing of motion in facial expression and the geometry of dynamical systems

    NASA Astrophysics Data System (ADS)

    Assadi, Amir H.; Eghbalnia, Hamid; McMenamin, Brenton W.

    2005-01-01

    An interesting problem in the analysis of video data concerns the design of algorithms that detect perceptually significant features in an unsupervised manner, for instance methods of machine learning for automatic classification of human expression. A geometric formulation of this genre of problems can be modeled with the help of perceptual psychology. In this article, we outline one approach for a special case where video segments are to be classified according to expression of emotion or other similar facial motions. The encoding of realistic facial motions that convey expression of emotions for a particular person P forms a parameter space X_P whose study reveals the "objective geometry" for the problem of unsupervised feature detection from video. The geometric features and discrete representation of the space X_P are independent of subjective evaluations by observers. While the "subjective geometry" of X_P varies from observer to observer, levels of sensitivity and variation in perception of facial expressions appear to share a certain level of universality among members of similar cultures. Therefore, the statistical geometry of invariants of X_P for a sample of the population could provide effective algorithms for the extraction of such features. In cases where the frequency of events is sufficiently large in the sample data, a suitable framework could be provided to facilitate the information-theoretic organization and study of the statistical invariants of such features. This article provides a general approach to encoding motion in terms of a particular genre of dynamical systems and the geometry of their flow. An example is provided to illustrate the general theory.

  17. New vascular classification of port-wine stains: improving prediction of Sturge-Weber risk.

    PubMed

    Waelchli, R; Aylett, S E; Robinson, K; Chong, W K; Martinez, A E; Kinsler, V A

    2014-10-01

    Facial port-wine stains (PWSs) are usually isolated findings; however, when associated with cerebral and ocular vascular malformations they form part of the classical triad of Sturge-Weber syndrome (SWS). To evaluate the associations between the phenotype of facial PWS and the diagnosis of SWS in a cohort with a high rate of SWS. Records were reviewed of all 192 children with a facial PWS seen in 2011-13. Adverse outcome measures were clinical (seizures, abnormal neurodevelopment, glaucoma) and radiological [abnormal magnetic resonance imaging (MRI)], modelled by multivariate logistic regression. The best predictor of adverse outcomes was a PWS involving any part of the forehead, delineated at its inferior border by a line joining the outer canthus of the eye to the top of the ear, and including the upper eyelid. This involves all three divisions of the trigeminal nerve, but corresponds well to the embryonic vascular development of the face. Bilateral distribution was not an independently significant phenotypic feature. Abnormal MRI was a better predictor of all clinical adverse outcome measures than PWS distribution; however, for practical reasons guidelines based on clinical phenotype are proposed. Facial PWS distribution appears to follow the embryonic vasculature of the face, rather than the trigeminal nerve. We propose that children with a PWS on any part of the 'forehead' should have an urgent ophthalmology review and a brain MRI. A prospective study has been established to test the validity of these guidelines. © The Authors. British Journal of Dermatology published by John Wiley & Sons Ltd on behalf of British Association of Dermatologists.

  18. Hybrid Feature Extraction-based Approach for Facial Parts Representation and Recognition

    NASA Astrophysics Data System (ADS)

    Rouabhia, C.; Tebbikh, H.

    2008-06-01

    Face recognition is a specialized image-processing task that has attracted considerable attention in computer vision. In this article, we develop a new facial recognition system, operating on images from video sequences, dedicated to the identification of persons whose faces are partly occluded. The system is based on a hybrid image feature extraction technique called ACPDL2D (Rouabhia et al. 2007), which combines two-dimensional principal component analysis and two-dimensional linear discriminant analysis with a neural network. We performed the feature extraction task on the eye and nose images separately, and then a Multi-Layer Perceptron classifier was used. Compared to the whole face, the simulation results favor the facial parts in terms of memory capacity and recognition rate (99.41% for the eye part, 98.16% for the nose part, and 97.25% for the whole face).
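
    The 2D-PCA stage of such a hybrid pipeline can be sketched as below. This is a generic 2DPCA-plus-MLP illustration, not the ACPDL2D implementation; the crop sizes, labels and projection dimension are placeholders.

```python
# Generic 2DPCA feature extraction feeding an MLP classifier.
import numpy as np
from sklearn.neural_network import MLPClassifier

def two_d_pca(images, k=8):
    """images: (n, h, w). Projects each image onto the top-k eigenvectors
    of the image scatter matrix, giving (n, h, k) feature maps."""
    centered = images - images.mean(axis=0)
    # Image covariance matrix G (w x w), averaged over the training set.
    G = np.einsum('nhw,nhv->wv', centered, centered) / len(images)
    _, eigvecs = np.linalg.eigh(G)
    V = eigvecs[:, -k:]                # top-k eigenvectors (largest eigenvalues)
    return images @ V

# X: eye or nose crops, y: identity labels (placeholders).
X = np.random.rand(50, 32, 32)
y = np.repeat(np.arange(10), 5)
feats = two_d_pca(X).reshape(len(X), -1)
clf = MLPClassifier(max_iter=500).fit(feats, y)
```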

  19. Rapid processing of emotional expressions without conscious awareness.

    PubMed

    Smith, Marie L

    2012-08-01

    Rapid, accurate categorization of the emotional state of our peers is of critical importance, and as such many have proposed that facial expressions of emotion can be processed without conscious awareness. Typically, studies focus selectively on fearful expressions due to their evolutionary significance, leaving the subliminal processing of other facial expressions largely unexplored. Here, I investigated the time course of processing of 3 facial expressions (fearful, disgusted, and happy) plus an emotionally neutral face, during objectively unaware and aware perception. Participants completed the challenging "which expression?" task in response to briefly presented backward-masked expressive faces. Although participants' behavioral responses did not differentiate between the emotional content of the stimuli in the unaware condition, activity over frontal and occipitotemporal (OT) brain regions indicated an emotional modulation of the neuronal response. Over frontal regions this modulation was driven by negative facial expressions and was present on all emotional trials independent of later categorization, whereas the N170 component, recorded at lateral OT electrodes, was enhanced for all facial expressions but only on trials that would later be categorized as emotional. The results indicate that emotional faces, not only fearful ones, are processed without conscious awareness at an early stage, and they highlight the critical importance of considering the categorization response when studying subliminal perception.

  20. Image Based Hair Segmentation Algorithm for the Application of Automatic Facial Caricature Synthesis

    PubMed Central

    Peng, Zhenyun; Zhang, Yaohui

    2014-01-01

    Hair is a salient feature of the human face region and one of the important cues for face analysis. Accurate detection and representation of the hair region is a key component of the automatic synthesis of human facial caricatures. In this paper, an automatic hair detection algorithm for automatic facial caricature synthesis from a single image is proposed. First, hair regions in training images are labeled manually, and the hair position prior distributions and hair color likelihood distribution function are estimated from these labels. Second, the energy function of the test image is constructed from the estimated hair location priors and hair color likelihood; this energy function is optimized using the graph cuts technique and an initial hair region is obtained. Finally, the K-means algorithm and image post-processing techniques are applied to the initial hair region so that the final hair region can be segmented precisely. Experimental results show that the average processing time for each image is about 280 ms and the average hair region detection accuracy is above 90%. The proposed algorithm was applied to a facial caricature synthesis system, and experiments showed that with the proposed hair segmentation algorithm the resulting facial caricatures are vivid and satisfying. PMID:24592182
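
    As an illustration of the graph-cut-plus-K-means pattern (not the paper's specific energy function), the sketch below approximates the graph-cut step with OpenCV's generic grabCut seeded by a crude positional prior, followed by a K-means color refinement; the file name and the prior are placeholders.

```python
# Illustrative stand-in: grabCut as a generic graph-cut segmenter for a
# hair-like region, then K-means clustering of the candidate pixel colors.
import cv2
import numpy as np

img = cv2.imread("face.jpg")  # placeholder path
mask = np.full(img.shape[:2], cv2.GC_PR_BGD, np.uint8)
mask[: img.shape[0] // 3] = cv2.GC_PR_FGD   # crude prior: hair near the top

bgd = np.zeros((1, 65), np.float64)
fgd = np.zeros((1, 65), np.float64)
cv2.grabCut(img, mask, None, bgd, fgd, 5, cv2.GC_INIT_WITH_MASK)
hair = np.isin(mask, (cv2.GC_FGD, cv2.GC_PR_FGD))

# K-means on the candidate hair pixels to separate dominant hair colors
# from outliers before final refinement.
pix = img[hair].astype(np.float32)
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
_, labels, centers = cv2.kmeans(pix, 2, None, criteria, 3,
                                cv2.KMEANS_RANDOM_CENTERS)
```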

  1. Face-body integration of intense emotional expressions of victory and defeat.

    PubMed

    Wang, Lili; Xia, Lisheng; Zhang, Dandan

    2017-01-01

    Human facial expressions can be recognized rapidly and effortlessly. However, for intense emotions from real life, positive and negative facial expressions are difficult to discriminate and the judgment of facial expressions is biased towards simultaneously perceived body expressions. This study employed event-related potentials (ERPs) to investigate the neural dynamics involved in the integration of emotional signals from facial and body expressions of victory and defeat. Emotional expressions of professional players were used to create pictures of face-body compounds, with either matched or mismatched emotional expressions in faces and bodies. Behavioral results showed that congruent emotional information of face and body facilitated the recognition of facial expressions. ERP data revealed larger P1 amplitudes for incongruent compared to congruent stimuli. Also, a main effect of body valence on the P1 was observed, with enhanced amplitudes for the stimuli with losing compared to winning bodies. The main effect of body expression was also observed in N170 and N2, with winning bodies producing larger N170/N2 amplitudes. In the later stage, a significant interaction of congruence by body valence was found on the P3 component. Winning bodies elicited larger P3 amplitudes than losing bodies did when face and body conveyed congruent emotional signals. Beyond what is known from prototypical facial and body expressions, the results of this study help us understand the complexity of emotion evaluation and categorization outside the laboratory.

  2. Face-body integration of intense emotional expressions of victory and defeat

    PubMed Central

    Wang, Lili; Xia, Lisheng; Zhang, Dandan

    2017-01-01

    Human facial expressions can be recognized rapidly and effortlessly. However, for intense emotions from real life, positive and negative facial expressions are difficult to discriminate and the judgment of facial expressions is biased towards simultaneously perceived body expressions. This study employed event-related potentials (ERPs) to investigate the neural dynamics involved in the integration of emotional signals from facial and body expressions of victory and defeat. Emotional expressions of professional players were used to create pictures of face-body compounds, with either matched or mismatched emotional expressions in faces and bodies. Behavioral results showed that congruent emotional information of face and body facilitated the recognition of facial expressions. ERP data revealed larger P1 amplitudes for incongruent compared to congruent stimuli. Also, a main effect of body valence on the P1 was observed, with enhanced amplitudes for the stimuli with losing compared to winning bodies. The main effect of body expression was also observed in N170 and N2, with winning bodies producing larger N170/N2 amplitudes. In the later stage, a significant interaction of congruence by body valence was found on the P3 component. Winning bodies elicited larger P3 amplitudes than losing bodies did when face and body conveyed congruent emotional signals. Beyond what is known from prototypical facial and body expressions, the results of this study help us understand the complexity of emotion evaluation and categorization outside the laboratory. PMID:28245245

  3. Neural measures of the role of affective prosody in empathy for pain.

    PubMed

    Meconi, Federica; Doro, Mattia; Lomoriello, Arianna Schiano; Mastrella, Giulia; Sessa, Paola

    2018-01-10

    Emotional communication often requires the integration of affective prosodic and semantic components from speech with the speaker's facial expression. Affective prosody may have a special role by virtue of its dual nature: pre-verbal on one side and accompanying semantic content on the other. This consideration led us to hypothesize that it could act transversely, encompassing a wide temporal window involving the processing of facial expressions and the semantic content expressed by the speaker. This would allow powerful communication in contexts of potential urgency, such as witnessing the speaker's physical pain. Seventeen participants were shown faces preceded by verbal reports of pain. Facial expressions, the intelligibility of the semantic content of the report (i.e., participants' mother tongue vs. a fictional language) and the affective prosody of the report (neutral vs. painful) were manipulated. We monitored event-related potentials (ERPs) time-locked to the onset of the faces as a function of the semantic content intelligibility and affective prosody of the verbal reports. We found that affective prosody may interact with facial expressions and semantic content in two successive temporal windows, supporting its role as a transverse communication cue.

  4. Three-dimensional face model reproduction method using multiview images

    NASA Astrophysics Data System (ADS)

    Nagashima, Yoshio; Agawa, Hiroshi; Kishino, Fumio

    1991-11-01

    This paper describes a method of reproducing three-dimensional face models using multi-view images for a virtual space teleconferencing system that achieves a realistic visual presence for teleconferencing. The goal of this research, as an integral component of a virtual space teleconferencing system, is to generate a three-dimensional face model from facial images and to synthesize images of the model as viewed virtually from different angles, with natural shadows to suit the lighting conditions of the virtual space. The proposed method is as follows: first, front and side view images of the human face are taken by TV cameras. The 3D data of facial feature points are obtained from the front and side views by an image processing technique based on the color, shape, and correlation of face components. Using these 3D data, the prepared base face models, representing typical Japanese male and female faces, are modified to approximate the input facial image. The personal face model, representing the individual character, is then reproduced. Next, an oblique view image is taken by TV camera. The feature points of the oblique view image are extracted using the same image processing technique. A more precise personal model is reproduced by fitting the boundary of the personal face model to the boundary of the oblique view image. The modified boundary of the personal face model is determined by using the face direction, namely the rotation angle, which is detected from the extracted feature points. After the 3D model is established, new images are synthesized by mapping facial texture onto the model.

  5. Selecting reusable components using algebraic specifications

    NASA Technical Reports Server (NTRS)

    Eichmann, David A.

    1992-01-01

    A significant hurdle confronts the software reuser attempting to select candidate components from a software repository - discriminating between those components without resorting to inspection of the implementation(s). We outline a mixed classification/axiomatic approach to this problem based upon our lattice-based faceted classification technique and Guttag and Horning's algebraic specification techniques. This approach selects candidates by natural language-derived classification, by their interfaces, using signatures, and by their behavior, using axioms. We briefly outline our problem domain and related work. Lattice-based faceted classifications are described; the reader is referred to surveys of the extensive literature for algebraic specification techniques. Behavioral support for reuse queries is presented, followed by the conclusions.

  6. Emotion identification and aging: Behavioral and neural age-related changes.

    PubMed

    Gonçalves, Ana R; Fernandes, Carina; Pasion, Rita; Ferreira-Santos, Fernando; Barbosa, Fernando; Marques-Teixeira, João

    2018-05-01

    Aging is known to alter the processing of facial expressions of emotion (FEE); however, the impact of this alteration is less clear. Additionally, there is little information about the temporal dynamics of the neural processing of facial affect. We examined behavioral and neural age-related changes in the identification of FEE using event-related potentials. Furthermore, we analyzed the relationship between behavioral/neural responses and neuropsychological functioning. For this purpose, 30 younger adults, 29 middle-aged adults and 26 older adults identified FEE. The behavioral results showed a similar performance between groups. The neural results showed no significant differences between groups for the P100 component and an increased N170 amplitude in the older group. Furthermore, a pattern of asymmetric activation was evident in the N170 component. Results also suggest deficits in facial feature decoding abilities, reflected by a reduced N250 amplitude in older adults. Neuropsychological functioning predicted P100 modulation but did not seem to influence emotion identification ability. The findings suggest the existence of a compensatory function that would explain the age-equivalent performance in emotion identification. The study may help future research addressing the behavioral and neural processes involved in the processing of FEE in neurodegenerative conditions. Copyright © 2018 International Federation of Clinical Neurophysiology. Published by Elsevier B.V. All rights reserved.

  7. Restoration of Trigeminal Cutaneous Sensation with Cross-Face Sural Nerve Grafts: A Novel Approach to Facial Sensory Rehabilitation.

    PubMed

    Catapano, Joseph; Scholl, David; Ho, Emily; Zuker, Ronald M; Borschel, Gregory H

    2015-09-01

    Although facial palsy is widely recognized as debilitating for patients, trigeminal nerve palsy and sensory deficits of the face are often overlooked components of disability. Complete anesthesia leaves patients susceptible to occult injury, and facial sensation is an important component of interaction and activities of daily living. Sensory reconstruction is well established in the restoration of hand sensation; however, only one previous report has proposed a surgical strategy for sensory nerve reconstruction of the face using nerve transfers. Nerve transfers, when used alone, have limited application because of their restricted arc of rotation in the face; extending their arc by adding nerve grafts greatly expands their utility. The following cases demonstrate the early results after V2 and V3 reconstruction with cross-face nerve grafts in three patients with acquired trigeminal nerve palsy. Cross-face nerve grafts using the sural nerve permit more proximal reconstruction of the infraorbital and mental nerves, which allows reinnervation of their entire cutaneous distribution. All patients demonstrated improved sensation in the reconstructed dermatomes, and no patients reported donor-site abnormalities. Cross-face nerve grafts result in minimal donor-site morbidity and are promising as a surgical strategy to address sensory deficits of the face. Therapeutic, V.

  8. Non-ablative radiofrequency associated or not with low-level laser therapy on the treatment of facial wrinkles in adult women: A randomized single-blind clinical trial.

    PubMed

    Pereira, Thalita Rodrigues Christovam; Vassão, Patrícia Gabrielli; Venancio, Michele Garcia; Renno, Ana Cláudia Muniz; Aveiro, Mariana Chaves

    2017-06-01

    The objective of this study was to evaluate the effects of non-ablative radiofrequency (RF), associated or not with low-level laser therapy (LLLT), on the appearance of facial wrinkles in adult women. Forty-six participants were randomized into three groups: Control Group (CG, n = 15), RF Group (RG, n = 16), and RF and LLLT Group (RLG, n = 15). Every participant was evaluated at baseline (T0), after eight weeks (T8) and eight weeks after the completion of treatment (follow-up). They were photographed in order to classify nasolabial folds and periorbital wrinkles (Modified Fitzpatrick Wrinkle Scale and Fitzpatrick Wrinkle Classification System, respectively) and improvement in appearance (Global Aesthetic Improvement Scale). Photograph analyses were performed by 3 blinded evaluators. Classification of nasolabial and periorbital wrinkles did not show any significant difference between groups. Aesthetic appearance indicated a significant improvement for nasolabial folds on the right side of the face immediately after treatment (p = 0.018) and at follow-up (p = 0.029). RG presented better results than CG at T8 (p = 0.041, ES = -0.49) and at follow-up (p = 0.041, ES = -0.49), and better than RLG at T8 (p = 0.041, ES = -0.49). RLG presented better results than CG at follow-up (p = 0.007, ES = -0.37). Nasolabial folds and periorbital wrinkles did not change throughout the study; however, some aesthetic improvement was observed. LLLT did not potentiate RF treatment.

  9. Tuning the developing brain to social signals of emotions

    PubMed Central

    Leppänen, Jukka M.; Nelson, Charles A.

    2010-01-01

    Humans in diverse cultures develop a similar capacity to recognize the emotional signals of different facial expressions. This capacity is mediated by a brain network that involves emotion-related brain circuits and higher-level visual representation areas. Recent studies suggest that the key components of this network begin to emerge early in life. The studies also suggest that initial biases in emotion-related brain circuits and the early coupling of these circuits with cortical perceptual areas provide a foundation for the rapid acquisition of representations of those facial features that denote specific emotions. PMID:19050711

  10. Classifying four-category visual objects using multiple ERP components in single-trial ERP.

    PubMed

    Qin, Yu; Zhan, Yu; Wang, Changming; Zhang, Jiacai; Yao, Li; Guo, Xiaojuan; Wu, Xia; Hu, Bin

    2016-08-01

    Object categorization using single-trial electroencephalography (EEG) data measured while participants view images has been studied intensively. In previous studies, multiple event-related potential (ERP) components (e.g., P1, N1, P2, and P3) were used to improve the performance of object categorization of visual stimuli. In this study, we introduce a novel method that uses a multiple-kernel support vector machine to fuse multiple ERP component features. We investigate whether fusing the potentially complementary information of different ERP components (e.g., P1, N1, P2a, and P2b) can improve the performance of four-category visual object classification in single-trial EEGs. We also compare the classification accuracy of different ERP component fusion methods. Our experimental results indicate that classification accuracy increases through multiple ERP fusion. Additional comparative analyses indicate that the multiple-kernel fusion method can achieve a mean classification accuracy higher than 72%, which is substantially better than that achieved with any single ERP component feature (55.07% for the best single ERP component, N1). We compare the classification results with those of other fusion methods and determine that the accuracy of the multiple-kernel fusion method is 5.47%, 4.06%, and 16.90% higher than those of feature concatenation, feature extraction, and decision fusion, respectively. Our study shows that our multiple-kernel fusion method outperforms other fusion methods and thus provides a means to improve the classification performance of single-trial ERPs in brain-computer interface research.
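
    The fusion idea can be sketched with a precomputed combined kernel: one kernel per ERP component feature block, summed with weights and handed to an SVM. The blocks and fixed weights below are placeholders; the paper learns the combination within a multiple-kernel framework.

```python
# Weighted-sum kernel over per-ERP-component feature blocks, used as a
# precomputed SVM kernel. Blocks/weights are illustrative placeholders.
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel

def fused_kernel(blocks_a, blocks_b, weights):
    """blocks_*: list of (n, d_i) arrays, one per ERP component (P1, N1, ...).
    Returns the weighted sum of per-block RBF kernels."""
    return sum(w * rbf_kernel(a, b)
               for w, a, b in zip(weights, blocks_a, blocks_b))

# X_p1, X_n1, ...: per-component features of the same trials; y: object class.
# K_train = fused_kernel([X_p1, X_n1], [X_p1, X_n1], [0.5, 0.5])
# clf = SVC(kernel="precomputed").fit(K_train, y)
```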

  11. Predictive codes of familiarity and context during the perceptual learning of facial identities

    NASA Astrophysics Data System (ADS)

    Apps, Matthew A. J.; Tsakiris, Manos

    2013-11-01

    Face recognition is a key component of successful social behaviour. However, the computational processes that underpin perceptual learning and recognition as faces transition from unfamiliar to familiar are poorly understood. In predictive coding, learning occurs through prediction errors that update stimulus familiarity, but recognition is a function of both stimulus and contextual familiarity. Here we show that behavioural responses on a two-option face recognition task can be predicted by the level of contextual and facial familiarity in a computational model derived from predictive-coding principles. Using fMRI, we show that activity in the superior temporal sulcus varies with the contextual familiarity in the model, whereas activity in the fusiform face area covaries with the prediction error parameter that updated facial familiarity. Our results characterize the key computations underpinning the perceptual learning of faces, highlighting that the functional properties of face-processing areas conform to the principles of predictive coding.
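
    The familiarity-updating mechanism described here amounts to a delta-rule update driven by prediction error; the toy sketch below is an illustration of that principle, with an assumed learning rate and outcome coding rather than the paper's fitted model.

```python
# Toy prediction-error (delta-rule) update of stimulus familiarity.
def update_familiarity(familiarity, outcome, alpha=0.2):
    """familiarity += alpha * prediction error (outcome - familiarity)."""
    return familiarity + alpha * (outcome - familiarity)

f = 0.0
for trial_outcome in [1, 1, 0, 1, 1]:   # 1 = face encountered, 0 = not
    f = update_familiarity(f, trial_outcome)
    print(round(f, 3))                  # familiarity rises across encounters
```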

  12. The facial massage reduced anxiety and negative mood status, and increased sympathetic nervous activity.

    PubMed

    Hatayama, Tomoko; Kitamura, Shingo; Tamura, Chihiro; Nagano, Mayumi; Ohnuki, Koichiro

    2008-12-01

    The aim of this study was to clarify the effects of 45 min of facial massage on the activity of the autonomic nervous system, anxiety and mood in 32 healthy women. Autonomic nervous activity was assessed by heart rate variability (HRV) with spectral analysis. In the spectral analysis of HRV, we evaluated the high-frequency component (HF) and the low- to high-frequency ratio (LF/HF ratio), reflecting parasympathetic nervous activity and sympathetic nervous activity, respectively. The State Trait Anxiety Inventory (STAI) and the Profile of Mood States (POMS) were administered to evaluate psychological status. The STAI score and the negative scales of the POMS were significantly reduced following the massage, and only the LF/HF ratio was significantly enhanced after the massage. It was concluded that facial massage might refresh the subjects by reducing their psychological distress and activating the sympathetic nervous system.
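
    For reference, the spectral HRV indices used here can be computed along these lines. The sketch assumes an evenly resampled RR-interval series and the conventional LF (0.04-0.15 Hz) and HF (0.15-0.4 Hz) bands; resampling details are omitted.

```python
# Welch-PSD estimate of LF and HF band powers and the LF/HF ratio.
import numpy as np
from scipy.signal import welch

def lf_hf_ratio(rr, fs=4.0):
    """rr: RR-interval series (s), evenly resampled at fs Hz."""
    f, psd = welch(rr, fs=fs, nperseg=min(256, len(rr)))
    lf_band = (f >= 0.04) & (f < 0.15)
    hf_band = (f >= 0.15) & (f < 0.40)
    lf = np.trapz(psd[lf_band], f[lf_band])   # sympathetically weighted power
    hf = np.trapz(psd[hf_band], f[hf_band])   # parasympathetic (vagal) power
    return lf / hf
```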

  13. Automatic Recognition of Fetal Facial Standard Plane in Ultrasound Image via Fisher Vector.

    PubMed

    Lei, Baiying; Tan, Ee-Leng; Chen, Siping; Zhuo, Liu; Li, Shengli; Ni, Dong; Wang, Tianfu

    2015-01-01

    Acquisition of the standard plane is the prerequisite of biometric measurement and diagnosis during the ultrasound (US) examination. In this paper, a new algorithm is developed for the automatic recognition of the fetal facial standard planes (FFSPs) such as the axial, coronal, and sagittal planes. Specifically, densely sampled root scale invariant feature transform (RootSIFT) features are extracted and then encoded by Fisher vector (FV). The Fisher network with multi-layer design is also developed to extract spatial information to boost the classification performance. Finally, automatic recognition of the FFSPs is implemented by support vector machine (SVM) classifier based on the stochastic dual coordinate ascent (SDCA) algorithm. Experimental results using our dataset demonstrate that the proposed method achieves an accuracy of 93.27% and a mean average precision (mAP) of 99.19% in recognizing different FFSPs. Furthermore, the comparative analyses reveal the superiority of the proposed method based on FV over the traditional methods.
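
    The RootSIFT descriptor stage can be sketched in a few lines; the image path is a placeholder, and the Fisher-vector encoding and SVM stages are omitted.

```python
# RootSIFT: L1-normalize each SIFT descriptor, then take element-wise
# square roots (Hellinger mapping).
import cv2
import numpy as np

sift = cv2.SIFT_create()
gray = cv2.imread("us_plane.png", cv2.IMREAD_GRAYSCALE)  # placeholder path
kps, desc = sift.detectAndCompute(gray, None)

if desc is not None:
    desc /= (desc.sum(axis=1, keepdims=True) + 1e-7)  # L1 normalization
    root_sift = np.sqrt(desc)                         # RootSIFT descriptors
```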

  14. The eye of the begetter: predicting infant attachment disorganization from women's prenatal interpretations of infant facial expressions.

    PubMed

    Bernstein, Rosemary E; Tenedios, Catherine M; Laurent, Heidemarie K; Measelle, Jeffery R; Ablow, Jennifer C

    2014-01-01

    Infant-caregiver attachment disorganization has been linked to many long-term negative psychosocial outcomes. While various prevention programs appear to be effective in preventing disorganized attachment, methods currently used to identify those at risk are unfortunately either overly general or impractical. The current investigation tested whether women's prenatal biases in identifying infant expressions of emotion--tendencies previously shown to relate to some of the maternal variables associated with infant attachment, including maternal traumatization, trauma symptoms, and maternal sensitivity--could predict infant attachment classification at 18 months postpartum. Logistic regression analyses revealed that together with women's adult history of high betrayal traumatization, response concordance with a normative reference sample in labeling infant expressions as negatively valenced, and the number of infant facial expressions that participants classified as "sad" and "angry" predicted subsequent infant attachment security versus disorganization. Implications for screening and prevention are discussed. © 2014 Michigan Association for Infant Mental Health.

  15. Classification of independent components of EEG into multiple artifact classes.

    PubMed

    Frølich, Laura; Andersen, Tobias S; Mørup, Morten

    2015-01-01

    In this study, we aimed to automatically identify multiple artifact types in EEG. We used multinomial regression to classify independent components of EEG data, selecting from 65 spatial, spectral, and temporal features of independent components using forward selection. The classifier identified neural and five non-neural types of components. Across subjects within studies, high classification performance was obtained; between studies, however, classification was more difficult. For neural-versus-nonneural classification, performance was on par with previous results obtained by others. We found that automatic separation of multiple artifact classes is possible with a small feature set. Our method can reduce manual workload and allow for the selective removal of artifact classes. Identifying artifacts during EEG recording may also be used to instruct subjects to refrain from the activity causing them. Copyright © 2014 Society for Psychophysiological Research.
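
    A minimal sketch pairing multinomial logistic regression with forward feature selection, in the spirit of this setup; the feature matrix and six-class labels below are placeholders for the 65 independent-component features.

```python
# Forward feature selection wrapped around a multinomial logistic classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.feature_selection import SequentialFeatureSelector

X = np.random.rand(300, 65)          # 65 spatial/spectral/temporal features
y = np.random.randint(0, 6, 300)     # neural + five artifact classes

clf = LogisticRegression(max_iter=1000)  # multinomial with lbfgs solver
sel = SequentialFeatureSelector(clf, n_features_to_select=10,
                                direction="forward").fit(X, y)
clf.fit(sel.transform(X), y)             # final model on the selected subset
```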

  16. Feature selection for neural network based defect classification of ceramic components using high frequency ultrasound.

    PubMed

    Kesharaju, Manasa; Nagarajah, Romesh

    2015-09-01

    The motivation for this research stems from the need for a non-destructive testing method capable of detecting and locating defects and microstructural variations within armour ceramic components before they are issued to the soldiers who rely on them for their survival. The development of an automated ultrasonic-inspection-based classification system would make it possible to check each ceramic component and immediately alert the operator to the presence of defects. Generally, in many classification problems the choice of features or dimensionality reduction is significant and simultaneously very difficult, as substantial computational effort is required to evaluate possible feature subsets. In this research, a combination of artificial neural networks and genetic algorithms is used to optimize the feature subset used in the classification of various defects in reaction-sintered silicon carbide ceramic components. Initially, wavelet-based feature extraction is implemented on the region of interest. An artificial neural network classifier is employed to evaluate the performance of these features, and genetic-algorithm-based feature selection is performed. Principal Component Analysis, a popular technique for feature selection, is compared with the genetic-algorithm-based technique in terms of classification accuracy and selection of the optimal number of features. The experimental results confirm that the features identified by Principal Component Analysis led to better classification performance (96%) than those selected by the genetic algorithm (94%). Copyright © 2015 Elsevier B.V. All rights reserved.
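
    A compact, illustrative genetic algorithm for feature-subset selection with a neural-network fitness is sketched below; the population size, rates and data are placeholders rather than the reported configuration.

```python
# Toy GA over binary feature masks, scored by cross-validated ANN accuracy.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.random((120, 30))            # placeholder ultrasonic features
y = rng.integers(0, 3, 120)          # placeholder defect classes

def fitness(mask):
    """Cross-validated ANN accuracy on the selected feature subset."""
    if not mask.any():
        return 0.0
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=300)
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

def crossover(a, b):
    cut = rng.integers(1, len(a))    # single-point crossover
    return np.concatenate([a[:cut], b[cut:]])

pop = rng.random((12, X.shape[1])) < 0.5       # random initial bit-masks
for _ in range(10):                            # generations
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-6:]]     # truncation selection
    children = np.array([crossover(parents[rng.integers(6)],
                                   parents[rng.integers(6)])
                         for _ in range(12)])
    flips = rng.random(children.shape) < 0.05
    pop = np.where(flips, ~children, children) # bit-flip mutation
best = pop[np.argmax([fitness(m) for m in pop])]
```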

  17. Emotion recognition based on physiological changes in music listening.

    PubMed

    Kim, Jonghwa; André, Elisabeth

    2008-12-01

    Little attention has been paid so far to physiological signals for emotion recognition compared to audiovisual emotion channels such as facial expression or speech. This paper investigates the potential of physiological signals as reliable channels for emotion recognition. All essential stages of an automatic recognition system are discussed, from the recording of a physiological dataset to a feature-based multiclass classification. In order to collect a physiological dataset from multiple subjects over many weeks, we used a musical induction method which spontaneously leads subjects to real emotional states, without any deliberate lab setting. Four-channel biosensors were used to measure electromyogram, electrocardiogram, skin conductivity and respiration changes. A wide range of physiological features from various analysis domains, including time/frequency, entropy, geometric analysis, subband spectra, multiscale entropy, etc., is proposed in order to find the best emotion-relevant features and to correlate them with emotional states. The best features extracted are specified in detail and their effectiveness is proven by classification results. Classification of four musical emotions (positive/high arousal, negative/high arousal, negative/low arousal, positive/low arousal) is performed by using an extended linear discriminant analysis (pLDA). Furthermore, by exploiting a dichotomic property of the 2D emotion model, we develop a novel scheme of emotion-specific multilevel dichotomous classification (EMDC) and compare its performance with direct multiclass classification using the pLDA. Improved recognition accuracy of 95% and 70% for subject-dependent and subject-independent classification, respectively, is achieved by using the EMDC scheme.
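
    The dichotomous cascade underlying EMDC can be illustrated as a two-stage classifier: first arousal, then valence conditioned on the predicted arousal. The sketch below uses plain LDA and random placeholder data; it is a simplification of the paper's scheme, which uses an extended LDA (pLDA).

```python
# Two-stage dichotomous classification over the 2D emotion model.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA

# X: physiological features; binary arousal/valence labels per trial.
X = np.random.rand(200, 20)
y_arousal = np.random.randint(0, 2, 200)
y_valence = np.random.randint(0, 2, 200)

arousal_clf = LDA().fit(X, y_arousal)
# One valence classifier per arousal level, trained on that subset only.
valence_clfs = {a: LDA().fit(X[y_arousal == a], y_valence[y_arousal == a])
                for a in (0, 1)}

def predict(x):
    a = int(arousal_clf.predict(x[None])[0])
    v = int(valence_clfs[a].predict(x[None])[0])
    return a, v   # quadrant of the 2D arousal/valence emotion model
```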

  18. Influence of spatial frequency and emotion expression on face processing in patients with panic disorder.

    PubMed

    Shim, Miseon; Kim, Do-Won; Yoon, Sunkyung; Park, Gewnhi; Im, Chang-Hwan; Lee, Seung-Hwan

    2016-06-01

    Deficits in facial emotion processing are a major characteristic of patients with panic disorder. It is known that visual stimuli with different spatial frequencies take distinct neural pathways. This study investigated facial emotion processing involving stimuli presented at broad (BSF), high (HSF), and low (LSF) spatial frequencies in patients with panic disorder. Eighteen patients with panic disorder and 19 healthy controls were recruited. Seven event-related potential (ERP) components (P100, N170, early posterior negativity (EPN), vertex positive potential (VPP), N250, P300, and late positive potential (LPP)) were evaluated while the participants looked at fearful and neutral facial stimuli presented at the three spatial frequencies. When a fearful face was presented, panic disorder patients showed a significantly increased P100 amplitude in response to low spatial frequency compared to high spatial frequency, whereas healthy controls demonstrated significant broad-spatial-frequency-dependent processing in P100 amplitude. VPP amplitude was significantly increased for high and broad spatial frequencies, compared to low spatial frequency, in panic disorder. EPN amplitude differed significantly between HSF and BSF processing, and between LSF and BSF processing, in both groups, regardless of facial expression. The possibly confounding effects of medication could not be controlled. During early visual processing, patients with panic disorder prefer global to detailed information. However, in later processing, panic disorder patients overuse detailed information for the perception of facial expressions. These findings suggest that this distinctive spatial-frequency-dependent facial processing could shed light on the neural pathology associated with panic disorder. Copyright © 2016 Elsevier B.V. All rights reserved.

  19. Dissociation of Neural Substrates of Response Inhibition to Negative Information between Implicit and Explicit Facial Go/Nogo Tasks: Evidence from an Electrophysiological Study

    PubMed Central

    Sun, Shiyue; Carretié, Luis; Zhang, Lei; Dong, Yi; Zhu, Chunyan; Luo, Yuejia; Wang, Kai

    2014-01-01

    Background Although ample evidence suggests that emotion and response inhibition are interrelated at the behavioral and neural levels, neural substrates of response inhibition to negative facial information remain unclear. Thus we used event-related potential (ERP) methods to explore the effects of explicit and implicit facial expression processing in response inhibition. Methods We used implicit (gender categorization) and explicit emotional Go/Nogo tasks (emotion categorization) in which neutral and sad faces were presented. Electrophysiological markers at the scalp and the voxel level were analyzed during the two tasks. Results We detected a task, emotion and trial type interaction effect in the Nogo-P3 stage. Larger Nogo-P3 amplitudes during sad conditions versus neutral conditions were detected with explicit tasks. However, the amplitude differences between the two conditions were not significant for implicit tasks. Source analyses on P3 component revealed that right inferior frontal junction (rIFJ) was involved during this stage. The current source density (CSD) of rIFJ was higher with sad conditions compared to neutral conditions for explicit tasks, rather than for implicit tasks. Conclusions The findings indicated that response inhibition was modulated by sad facial information at the action inhibition stage when facial expressions were processed explicitly rather than implicitly. The rIFJ may be a key brain region in emotion regulation. PMID:25330212

  20. Identification and intensity of disgust: Distinguishing visual, linguistic and facial expressions processing in Parkinson disease.

    PubMed

    Sedda, Anna; Petito, Sara; Guarino, Maria; Stracciari, Andrea

    2017-07-14

    Most studies to date show impaired recognition of facial displays of disgust in Parkinson disease. A general impairment in disgust processing in patients with Parkinson disease might adversely affect their social interactions, given the relevance of this emotion for human relations. However, despite the importance of faces, disgust is also expressed through other formats of visual stimuli, such as sentences and visual images. The aim of our study was to explore disgust processing in a sample of patients affected by Parkinson disease by means of various tests tackling not only facial recognition but also other formats of visual stimuli through which disgust can be recognized. Our results confirm that patients are impaired in recognizing facial displays of disgust. Further analyses show that patients are also impaired and slower for other facial expressions, with the only exception of happiness. Notably, however, patients with Parkinson disease processed visual images and sentences as controls did. Our findings show a dissociation between different formats of visual stimuli of disgust, suggesting that Parkinson disease is not characterized by a general compromise of disgust processing, as is often suggested. The involvement of the basal ganglia-frontal cortex system might spare some cognitive components of emotional processing, related to memory and culture, at least for disgust. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. Brain potentials indicate the effect of other observers' emotions on perceptions of facial attractiveness.

    PubMed

    Huang, Yujing; Pan, Xuwei; Mo, Yan; Ma, Qingguo

    2016-03-23

    Perceptions of facial attractiveness are sensitive to the emotional expression of the perceived face. However, little is known about whether the emotional expression on the face of another observer of the perceived face affects perceptions of facial attractiveness. The present study used the event-related potential technique to examine this social influence. The experiment consisted of two phases. In the first phase, a neutral target face was paired with two images of individuals gazing at the target face with smiling, fearful or neutral expressions. In the second phase, participants were asked to judge the attractiveness of the target face. We found that a target face was rated as more attractive when the other observers gazed at it with positive expressions than when they gazed at it with negative expressions. Additionally, the brain potential results showed that the visual positive component P3, with peak latency from 270 to 330 ms, was larger after participants observed the target face paired with smiling individuals than after they observed the target face paired with neutral individuals. These findings suggest that the facial attractiveness of an individual may be influenced by the emotional expression on the face of another observer of the perceived face. Copyright © 2016. Published by Elsevier Ireland Ltd.

  2. A brain-computer interface for potential non-verbal facial communication based on EEG signals related to specific emotions

    PubMed Central

    Kashihara, Koji

    2014-01-01

    Unlike assistive technology for verbal communication, the brain-machine or brain-computer interface (BMI/BCI) has not been established as a non-verbal communication tool for amyotrophic lateral sclerosis (ALS) patients. Face-to-face communication enables access to rich emotional information, but individuals suffering from neurological disorders, such as ALS and autism, may not express their emotions or communicate their negative feelings. Although emotions may be inferred by looking at facial expressions, emotional prediction for neutral faces necessitates advanced judgment. The process that underlies brain neuronal responses to neutral faces and causes emotional changes remains unknown. To address this problem, therefore, this study attempted to decode conditioned emotional reactions to neutral face stimuli. This direction was motivated by the assumption that if electroencephalogram (EEG) signals can be used to detect patients' emotional responses to specific inexpressive faces, the results could be incorporated into the design and development of BMI/BCI-based non-verbal communication tools. To these ends, this study investigated how a neutral face associated with a negative emotion modulates rapid central responses in face processing and then identified cortical activities. The conditioned neutral face-triggered event-related potentials that originated from the posterior temporal lobe statistically significantly changed during late face processing (600–700 ms) after stimulus, rather than in early face processing activities, such as P1 and N170 responses. Source localization revealed that the conditioned neutral faces increased activity in the right fusiform gyrus (FG). This study also developed an efficient method for detecting implicit negative emotional responses to specific faces by using EEG signals. A classification method based on a support vector machine enables the easy classification of neutral faces that trigger specific individual emotions. In accordance with this classification, a face on a computer morphs into a sad or displeased countenance. The proposed method could be incorporated as a part of non-verbal communication tools to enable emotional expression. PMID:25206321

  3. An automatic method for skeletal patterns classification using craniomaxillary variables on a Colombian population.

    PubMed

    Niño-Sandoval, Tania Camila; Guevara Perez, Sonia V; González, Fabio A; Jaque, Robinson Andrés; Infante-Contreras, Clementina

    2016-04-01

    The mandibular bone is an important element in forensic facial reconstruction, but it may be lost in skeletonized remains; for this reason, it is necessary to facilitate the identification process by simulating the natural mandibular position from craniomaxillary measurements alone. Different modeling techniques have been applied to this task, but they contemplate only the straight facial profile belonging to skeletal pattern Class I, leaving out the 24.5% of the Colombian population with skeletal patterns Class II and III; moreover, craniofacial measurements do not follow a parametric trend or a normal distribution. The aim of this study was to employ an automatic non-parametric method, Support Vector Machines, to classify skeletal patterns from craniomaxillary variables, in order to simulate the natural mandibular position in a contemporary Colombian sample. Lateral cephalograms (229) of Colombian young adults of both sexes were collected. Landmark coordinate protocols were used to create craniomaxillary variables. A Support Vector Machine with a linear kernel was trained on a subset of the available data and evaluated on the remaining samples. The weights of the model were used to select the 10 variables that contributed most to classification accuracy. An accuracy of 74.51% was obtained, defined by the Pr-A-N, N-Pr-A, A-N-Pr, A-Te-Pr, A-Pr-Rhi, Rhi-A-Pr, Pr-A-Te, Te-Pr-A, Zm-A-Pr and PNS-A-Pr angles. The class precision and class recall showed a correct distinction of Class II from Class III and vice versa. Support Vector Machines created a valuable classification model of skeletal patterns using craniomaxillary variables that are not commonly used in the literature, applicable to the 24.5% of the contemporary Colombian sample. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
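
    The weight-based variable selection described here can be sketched as follows; the data below are placeholders for the 229 cephalograms and the craniomaxillary angle variables.

```python
# Linear-kernel SVM with |weight|-based ranking of input variables.
import numpy as np
from sklearn.svm import SVC

X = np.random.rand(229, 30)          # craniomaxillary angles per subject
y = np.random.randint(1, 4, 229)     # skeletal Class I, II, III (placeholder)

svm = SVC(kernel="linear").fit(X, y)
# coef_ holds one weight vector per class pair; rank variables by the
# summed absolute weight across pairs and keep the top 10.
importance = np.abs(svm.coef_).sum(axis=0)
top10 = np.argsort(importance)[-10:]
svm_top = SVC(kernel="linear").fit(X[:, top10], y)
```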

  4. A brain-computer interface for potential non-verbal facial communication based on EEG signals related to specific emotions.

    PubMed

    Kashihara, Koji

    2014-01-01

    Unlike assistive technology for verbal communication, the brain-machine or brain-computer interface (BMI/BCI) has not been established as a non-verbal communication tool for amyotrophic lateral sclerosis (ALS) patients. Face-to-face communication enables access to rich emotional information, but individuals suffering from neurological disorders, such as ALS and autism, may not express their emotions or communicate their negative feelings. Although emotions may be inferred by looking at facial expressions, emotional prediction for neutral faces necessitates advanced judgment. The process that underlies brain neuronal responses to neutral faces and causes emotional changes remains unknown. To address this problem, therefore, this study attempted to decode conditioned emotional reactions to neutral face stimuli. This direction was motivated by the assumption that if electroencephalogram (EEG) signals can be used to detect patients' emotional responses to specific inexpressive faces, the results could be incorporated into the design and development of BMI/BCI-based non-verbal communication tools. To these ends, this study investigated how a neutral face associated with a negative emotion modulates rapid central responses in face processing and then identified cortical activities. The conditioned neutral face-triggered event-related potentials that originated from the posterior temporal lobe statistically significantly changed during late face processing (600-700 ms) after stimulus, rather than in early face processing activities, such as P1 and N170 responses. Source localization revealed that the conditioned neutral faces increased activity in the right fusiform gyrus (FG). This study also developed an efficient method for detecting implicit negative emotional responses to specific faces by using EEG signals. A classification method based on a support vector machine enables the easy classification of neutral faces that trigger specific individual emotions. In accordance with this classification, a face on a computer morphs into a sad or displeased countenance. The proposed method could be incorporated as a part of non-verbal communication tools to enable emotional expression.

  5. Lyme disease and Bell’s palsy: an epidemiological study of diagnosis and risk in England

    PubMed Central

    Cooper, Lilli; Branagan-Harris, Michael; Tuson, Richard; Nduka, Charles

    2017-01-01

    Background Lyme disease is caused by a tick-borne spirochaete of the Borrelia species. It is associated with facial palsy, is increasingly common in England, and may be misdiagnosed as Bell’s palsy. Aim To produce an accurate map of Lyme disease diagnosis in England and to identify patients at risk of developing associated facial nerve palsy, to enable prevention, early diagnosis, and effective treatment. Design and setting Hospital episode statistics (HES) data in England from the Health and Social Care Information Centre were interrogated from April 2011 to March 2015 for International Classification of Diseases 10th revision (ICD-10) codes A69.2 (Lyme disease) and G51.0 (Bell’s palsy) in isolation, and as a combination. Method Patients’ age, sex, postcode, month of diagnosis, and socioeconomic groups as defined according to the English Indices of Deprivation (2004) were also collected. Results Lyme disease hospital diagnosis increased by 42% per year from 2011 to 2015 in England. Higher incidence areas, largely rural, were mapped. A trend towards socioeconomic privilege and the months of July to September was observed. Facial palsy in combination with Lyme disease is also increasing, particularly in younger patients, with a mean age of 41.7 years, compared with 59.6 years for Bell’s palsy and 45.9 years for Lyme disease (P = 0.05, analysis of variance [ANOVA]). Conclusion Healthcare practitioners should have a high index of suspicion for Lyme disease following travel in the areas shown, particularly in the summer months. The authors suggest that patients presenting with facial palsy should be tested for Lyme disease. PMID:28396367

  6. Risk Factors for Clinician-Diagnosed Lyme Arthritis, Facial Palsy, Carditis, and Meningitis in Patients From High-Incidence States

    PubMed Central

    Kwit, Natalie A; Max, Ryan; Mead, Paul S

    2018-01-01

    Abstract Background Clinical features of Lyme disease (LD) range from localized skin lesions to serious disseminated disease. Information on risk factors for Lyme arthritis, facial palsy, carditis, and meningitis is limited but could facilitate disease recognition and elucidate pathophysiology. Methods Patients from high-incidence states treated for LD during 2005–2014 were identified in a nationwide insurance claims database using the International Classification of Diseases, Ninth Revision code for LD (088.81), antibiotic treatment history, and clinically compatible codiagnosis codes for LD manifestations. Results Among 88022 unique patients diagnosed with LD, 5122 (5.8%) patients with 5333 codiagnoses were identified: 2440 (2.8%) arthritis, 1853 (2.1%) facial palsy, 534 (0.6%) carditis, and 506 (0.6%) meningitis. Patients with disseminated LD had lower median age (35 vs 42 years) and higher male proportion (61% vs 50%) than nondisseminated LD. Greatest differential risks included arthritis in males aged 10–14 years (odds ratio [OR], 3.5; 95% confidence interval [CI], 3.0–4.2), facial palsy (OR, 2.1; 95% CI, 1.6–2.7) and carditis (OR, 2.4; 95% CI, 1.6–3.6) in males aged 20–24 years, and meningitis in females aged 10–14 years (OR, 3.4; 95% CI, 2.1–5.5) compared to the 55–59 year referent age group. Males aged 15–29 years had the highest risk for complete heart block, a potentially fatal condition. Conclusions The risk and manifestations of disseminated LD vary by age and sex. Provider education regarding at-risk populations and additional investigations into pathophysiology could enhance early case recognition and improve patient management. PMID:29326960

  7. Intact emotion recognition and experience but dysfunctional emotion regulation in idiopathic Parkinson's disease.

    PubMed

    Ille, Rottraut; Wabnegger, Albert; Schwingenschuh, Petra; Katschnig-Winter, Petra; Kögl-Wallner, Mariella; Wenzel, Karoline; Schienle, Anne

    2016-02-15

    A specific non-motor impairment in Parkinson's disease (PD) concerns difficulties in accurately identifying facial emotions. Findings are numerous but very inconsistent, ranging from general discrimination deficits, to problems with specific emotions, to no impairment at all. By contrast, only a few studies exist on emotion experience and altered affective traits and states in PD. This study investigated the decoding capacity for affective facial expressions, the affective experience of emotion-eliciting images, and affective personality traits in PD. The study sample included 25 patients with mild to moderate symptom intensity and 25 healthy controls (HC) of both sexes. The participants were shown pictures of facial expressions depicting disgust, fear, and anger as well as disgusting and fear-relevant scenes. Additionally, they answered self-report scales for the assessment of affective traits. PD patients had more problems in controlling anger and disgust feelings than HC. Higher disgust sensitivity in PD was associated with lower functioning in everyday life and a lower capacity to recognize angry faces. Furthermore, patients reported less disgust towards poor hygiene and spoiled food, and they stated elevated anxiety. However, the clinical group displayed intact facial emotion decoding and emotion experience. Everyday functioning was reduced in PD and decreased with stronger motor impairment. Furthermore, disease duration was negatively associated with the correct classification of angry faces. Our data indicate that problems with emotion regulation may appear already in earlier disease stages of PD. By contrast, PD patients showed appropriate emotion recognition and experience. However, the data also point to a deterioration of emotion recognition capacity over the course of the disease. Compensatory mechanisms in PD patients with less advanced disease are discussed. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.

  8. Single Channel EEG Artifact Identification Using Two-Dimensional Multi-Resolution Analysis.

    PubMed

    Taherisadr, Mojtaba; Dehzangi, Omid; Parsaei, Hossein

    2017-12-13

    As a diagnostic monitoring approach, electroencephalogram (EEG) signals can be decoded by signal processing methodologies for various health monitoring purposes. However, EEG recordings are contaminated by interference, particularly facial and ocular artifacts generated by the user. This is especially an issue during continuous EEG recording sessions, so identifying such artifacts and separating them from useful EEG components is a key step in using EEG signals for physiological monitoring and diagnosis or for brain-computer interfaces. In this study, we aim to design a new generic framework to process and characterize the EEG recording as a multi-component, non-stationary signal, with the aim of localizing and identifying its components (e.g., artifacts). The proposed method combines three complementary algorithms to enhance the efficiency of the system: time-frequency (TF) analysis and representation, two-dimensional multi-resolution analysis (2D MRA), and feature extraction and classification. A combination of spectro-temporal and geometric features is then extracted by combining key instantaneous TF space descriptors, which enables the system to characterize the non-stationarities in the EEG dynamics. We fit a curvelet transform (as an MRA method) to the 2D TF representation of EEG segments to decompose the given space into various levels of resolution. Such a decomposition efficiently improves the analysis of TF spaces with different characteristics (e.g., resolution). Our experimental results demonstrate that the combination of expansion into TF space, analysis using MRA, extraction of a suitable feature set, and application of a proper predictive model is effective in enhancing EEG artifact identification performance. We also compare the performance of the designed system with another common EEG signal processing technique, namely the 1D wavelet transform. Our experimental results reveal that the proposed method outperforms the 1D wavelet approach.
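
    A minimal sketch of the first two stages under stated assumptions: the EEG segment is expanded into a 2D time-frequency image, and a 2D multi-resolution decomposition is applied to it. A 2D wavelet transform (PyWavelets) stands in for the curvelet transform used in the study, which requires a dedicated library; the sampling rate and file name are placeholders.

        import numpy as np
        import pywt
        from scipy.signal import spectrogram

        fs = 256                                 # assumed sampling rate (Hz)
        segment = np.load("eeg_segment.npy")     # single-channel EEG segment

        # Stage 1: expand the 1D segment into a 2D time-frequency image.
        f, t, tf_image = spectrogram(segment, fs=fs, nperseg=128, noverlap=96)

        # Stage 2: 2D multi-resolution decomposition of the TF image
        # (a wavelet stand-in for the curvelet transform of the study).
        coeffs = pywt.wavedec2(tf_image, wavelet="db4", level=3)

        # Simple spectro-temporal descriptors: energy per resolution level,
        # usable as inputs to a downstream artifact classifier.
        energies = [np.sum(coeffs[0] ** 2)]
        for detail in coeffs[1:]:
            energies.append(sum(np.sum(d ** 2) for d in detail))
        print("Per-level energies:", energies)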

  9. Quantitative analysis of fetal facial morphology using 3D ultrasound and statistical shape modeling: a feasibility study.

    PubMed

    Dall'Asta, Andrea; Schievano, Silvia; Bruse, Jan L; Paramasivam, Gowrishankar; Kaihura, Christine Tita; Dunaway, David; Lees, Christoph C

    2017-07-01

    The antenatal detection of facial dysmorphism using 3-dimensional ultrasound may raise the suspicion of an underlying genetic condition but infrequently leads to a definitive antenatal diagnosis. Despite advances in array and noninvasive prenatal testing, not all genetic conditions can be ascertained from such testing. The aim of this study was to investigate the feasibility of quantitative assessment of fetal face features using prenatal 3-dimensional ultrasound volumes and statistical shape modeling. Thirteen normal and 7 abnormal stored 3-dimensional ultrasound fetal face volumes were analyzed, at a median gestation of 29+4 weeks (range 25+0 to 36+1). The 20 generated 3-dimensional surface meshes were aligned and served as input for a statistical shape model, which computed the mean 3-dimensional face shape and 3-dimensional shape variations using principal component analysis. Ten shape modes explained more than 90% of the total shape variability in the population. While the first mode accounted for overall size differences, the second highlighted shape feature changes from an overall proportionate toward a more asymmetric face shape with a wide prominent forehead and an undersized, posteriorly positioned chin. Analysis of the Mahalanobis distance in principal component analysis shape space suggested differences between normal and abnormal fetuses (median ± interquartile range distance values, 7.31 ± 5.54 for the normal group vs 13.27 ± 9.82 for the abnormal group; P = .056). This feasibility study demonstrates that objective characterization and quantification of fetal facial morphology is possible from 3-dimensional ultrasound. This technique has the potential to assist in utero diagnosis, particularly of rare conditions in which facial dysmorphology is a feature. Copyright © 2017 Elsevier Inc. All rights reserved.
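
    A sketch of the statistical shape modeling step, assuming the meshes have already been rigidly aligned and put in dense point correspondence, with each mesh flattened to a coordinate vector; file names and component counts are hypothetical.

        import numpy as np
        from sklearn.decomposition import PCA

        # Hypothetical input: 20 aligned meshes in dense correspondence,
        # each flattened to a 3 * n_vertices coordinate vector.
        shapes = np.load("aligned_face_meshes.npy")

        pca = PCA(n_components=10).fit(shapes)   # ~10 modes covered >90% here
        scores = pca.transform(shapes)
        print("Cumulative explained variance:",
              pca.explained_variance_ratio_.cumsum())

        # Mahalanobis distance of each face from the mean shape in PCA space:
        # scores are zero-mean, so each mode is scaled by its standard deviation.
        maha = np.sqrt(((scores / np.sqrt(pca.explained_variance_)) ** 2).sum(axis=1))
        print("Mahalanobis distances:", maha)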

  10. A Patient-Assessed Morbidity to Evaluate Outcome in Surgically Treated Vestibular Schwannomas.

    PubMed

    Al-Shudifat, Abdul Rahman; Kahlon, Babar; Höglund, Peter; Lindberg, Sven; Magnusson, Måns; Siesjo, Peter

    2016-10-01

    Outcome after treatment of vestibular schwannomas can be evaluated by health providers in terms of mortality, recurrence, performance, and morbidity. Because mortality and recurrence are rare events, evaluation has to focus on performance and morbidity. The latter has mostly been reported by health providers. In the present study, we validate 2 new scales for patient-assessed performance and morbidity in comparison with different outcome tools, such as quality of life (QOL) (European Quality of Life-5 dimensions [EQ-5D]), facial nerve score, and work capacity. There were 167 total patients in a retrospective (n = 90) and a prospective (n = 50) cohort of surgically treated vestibular schwannomas. A new patient-assessed morbidity score (paMS), a patient-assessed Karnofsky score (paKPS), the patient-assessed QOL (EQ-5D) score, work capacity, and the House-Brackmann facial nerve score were used as outcome measures. Analysis of paMS components and their relation to other outcomes was performed using uni- and multivariate analyses. All outcome instruments, except EQ-5D and paKPS, showed a significant decrease postoperatively. Only the facial nerve score (House-Brackmann) differed significantly between the retrospective and prospective cohorts. Of the 16 components of the paMS, hearing dysfunction, tear dysfunction, balance dysfunction, and eye irritation were most often reported. Both paMS and EQ-5D correlated significantly with work capacity. Standard QOL and performance instruments may not be sufficiently sensitive or specific to measure outcome at the cohort level after surgical treatment of vestibular schwannomas. A morbidity score may yield more detailed information on symptoms that can be relevant for rehabilitation and occupational training after surgery. Copyright © 2016 Elsevier Inc. All rights reserved.

  11. Coding and quantification of a facial expression for pain in lambs.

    PubMed

    Guesgen, M J; Beausoleil, N J; Leach, M; Minot, E O; Stewart, M; Stafford, K J

    2016-11-01

    Facial expressions are routinely used to assess pain in humans, particularly those who are non-verbal. Recently, there has been an interest in developing coding systems for facial grimacing in non-human animals, such as rodents, rabbits, horses and sheep. The aims of this preliminary study were to: 1. Qualitatively identify facial feature changes in lambs experiencing pain as a result of tail-docking and compile these changes to create a Lamb Grimace Scale (LGS); 2. Determine whether human observers can use the LGS to differentiate tail-docked lambs from control lambs and differentiate lambs before and after docking; 3. Determine whether changes in facial action units of the LGS can be objectively quantified in lambs before and after docking; 4. Evaluate effects of restraint of lambs on observers' perceptions of pain using the LGS and on quantitative measures of facial action units. By comparing images of lambs before (no pain) and after (pain) tail-docking, the LGS was devised in consultation with scientists experienced in assessing facial expression in other species. The LGS consists of five facial action units: Orbital Tightening, Mouth Features, Nose Features, Cheek Flattening and Ear Posture. The aims of the study were addressed in two experiments. In Experiment I, still images of the faces of restrained lambs were taken from video footage before and after tail-docking (n=4) or sham tail-docking (n=3). These images were scored by a group of five naïve human observers using the LGS. Because lambs were restrained for the duration of the experiment, Ear Posture was not scored. The scores for the images were averaged to provide one value per feature per period and then scores for the four LGS action units were averaged to give one LGS score per lamb per period. In Experiment II, still images of the faces of nine lambs were taken before and after tail-docking. Stills were taken when lambs were restrained and unrestrained in each period. A different group of five human observers scored the images from Experiment II. Changes in facial action units were also quantified objectively by a researcher using image measurement software. In both experiments LGS scores were analyzed using a linear mixed model to evaluate the effects of tail docking on observers' perception of facial expression changes. Kendall's Index of Concordance was used to measure reliability among observers. In Experiment I, human observers were able to use the LGS to differentiate docked lambs from control lambs. LGS scores significantly increased from before to after treatment in docked lambs but not control lambs. In Experiment II there was a significant increase in LGS scores after docking. This was coupled with changes in other validated indicators of pain after docking in the form of pain-related behaviour. Only two components, Mouth Features and Orbital Tightening, showed significant quantitative changes after docking. The direction of these changes agrees with the description of these facial action units in the LGS. Restraint affected people's perceptions of pain as well as quantitative measures of LGS components. Freely moving lambs were scored lower using the LGS over both periods and had a significantly smaller eye aperture and smaller nose and ear angles than when they were held. Agreement among observers on LGS scores was fair overall (Experiment I: W=0.60; Experiment II: W=0.66). This preliminary study demonstrates changes in lamb facial expression associated with pain.
The results of these experiments should be interpreted with caution due to low lamb numbers. Copyright © 2016 Elsevier B.V. All rights reserved.

  12. Consensus or controversy? The classification and treatment decision-making by 491 maxillofacial surgeons from around the world in three cases of a unilateral mandibular condyle fracture.

    PubMed

    Kommers, Sofie C; Boffano, Paolo; Forouzanfar, Tymour

    2015-12-01

    Many studies are available in the literature on both the classification and the treatment of unilateral mandibular condyle fractures. To date, however, controversy regarding the best treatment for unilateral mandibular condyle fractures remains. In this study, an attempt was made to quantify the level of agreement between a sample of maxillofacial surgeons worldwide on the classification and treatment decisions in three different unilateral mandibular condyle fracture cases. In total, 491 of 3044 participants responded. In all three mandibular condyle fracture cases, a fairly high level of disagreement was found. Only in the case of a subcondylar fracture, assuming dysocclusion was present, did more than 81% of surgeons agree that the best treatment would be open reduction and internal fixation. Based on the study results, there is considerable variation among surgeons worldwide with regard to the treatment of unilateral mandibular condyle fractures. 3D imaging in higher fractures tends to lead to more invasive treatment decisions. Copyright © 2015 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.

  13. [Facial dog bite injuries in children: retrospective study of 77 cases].

    PubMed

    Hersant, B; Cassier, S; Constantinescu, G; Gavelle, P; Vazquez, M-P; Picard, A; Kadlub, N

    2012-06-01

    The face is the area most vulnerable to dog bites in children. Surgical management is an emergency, aiming to prevent infection as well as functional and aesthetic sequelae. The aims of this study were to define a new severity scale and to inform a prevention policy. In our maxillofacial and plastic surgery department, we conducted a retrospective study from 2002 to 2010, including 77 children under 16 years old who were victims of facial dog bites. We analyzed epidemiological and clinical data and surgical outcomes. The mean age was 5.36 years. The dogs were principally class I and II dogs; 27.7% of them had bitten before. In almost all cases, the dog belonged to the family or to close acquaintances. Twenty-one percent of the children came from an unfavourable social environment; 71.43% of the bites involved the central area of the face. The bites were deep in 77% of cases, with amputation or extensive loss of substance in 31% of cases. The healing time was 10.54 months. Nearly a third of the patients required several surgeries; 41.56% of the patients had aesthetic and functional sequelae, and 35.1% of the children had psychological problems afterward. Facial dog bites in children require a multidisciplinary approach and long-term follow-up. We propose a new classification of dog bite severity, more appropriate to the face. Copyright © 2011 Elsevier Masson SAS. All rights reserved.

  14. Description and recognition of faces from 3D data

    NASA Astrophysics Data System (ADS)

    Coombes, Anne M.; Richards, Robin; Linney, Alfred D.; Bruce, Vicki; Fright, Rick

    1992-12-01

    A method based on differential geometry is presented for mathematically describing the shape of the facial surface. Three-dimensional data for the face are collected by optical surface scanning. The method allows the segmentation of the face into regions of a particular 'surface type,' according to the surface curvature. Eight different surface types are produced, all of which have perceptually meaningful interpretations. The correspondence of the surface type regions to the facial features is easily visualized, allowing a qualitative assessment of the face. A quantitative description of the face in terms of the surface type regions can be produced, and the variation of the description between faces is demonstrated. A set of optical surface scans can be registered together and averaged to produce an average male and an average female face. Thus an assessment of how individuals vary from the average can be made, as well as a general statement about the differences between male and female faces. This method will enable an investigation of how reliably faces can be individuated by their surface shape; if feasible, this may form the basis of an automatic system for recognizing faces. It also has applications in physical anthropology, for classification of the face; in facial reconstructive surgery, to quantify the changes in a face altered by surgery or growth; and in visual perception, to assess the recognizability of faces. Examples of some of these applications are presented.
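
    A hedged illustration of curvature-based surface typing: given mean curvature H and Gaussian curvature K estimated at each surface point, the sign pattern of (H, K) yields the eight classical surface types. The labels follow the standard differential-geometry scheme and may differ from those used in the paper; the curvature files are hypothetical.

        import numpy as np

        def surface_type(H, K, eps=1e-6):
            """Label each point by the signs of mean (H) and Gaussian (K) curvature."""
            H = np.where(np.abs(H) < eps, 0.0, H)
            K = np.where(np.abs(K) < eps, 0.0, K)
            types = np.full(H.shape, "flat", dtype=object)   # H == 0, K == 0
            types[(H < 0) & (K > 0)] = "peak"
            types[(H > 0) & (K > 0)] = "pit"
            types[(H < 0) & (K == 0)] = "ridge"
            types[(H > 0) & (K == 0)] = "valley"
            types[(H < 0) & (K < 0)] = "saddle ridge"
            types[(H > 0) & (K < 0)] = "saddle valley"
            types[(H == 0) & (K < 0)] = "minimal"
            return types

        # Hypothetical curvature maps estimated from a scanned facial surface.
        H = np.load("mean_curvature.npy")
        K = np.load("gaussian_curvature.npy")
        print(np.unique(surface_type(H, K), return_counts=True))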

  15. Clinical patterns and epidemiological characteristics of facial melasma in Brazilian women.

    PubMed

    Tamega, A de A; Miot, L D B; Bonfietti, C; Gige, T C; Marques, M E A; Miot, H A

    2013-02-01

    BACKGROUND: Melasma is a common acquired chronic hypermelanosis of sun-exposed areas which significantly impacts quality of life. There are few epidemiological studies in the medical literature concerning these patients. The aim of this study was to characterize clinical and epidemiological data on Brazilian female patients with melasma. A semi-structured questionnaire was administered to melasma patients treated at a dermatology clinic between 2005 and 2010. Associations between variables were assessed with multivariate regression models. We assessed 302 patients; intermediate skin phototypes III (34.4%) and IV (38.4%) were prevalent. Mean disease onset age was 27.5 ± 7.8 years, and a familial occurrence of melasma was identified in 56.3%. The most commonly reported trigger factors were pregnancy (36.4%), contraceptive pills (16.2%) and intense sun exposure (27.2%). The most frequent facial topographies were zygomatic (83.8%), superior labial (51.3%) and frontal (49.7%). Pregnancy-induced melasma was associated with earlier disease onset (OR = 0.86) and with the number of pregnancies (OR = 1.39). Childbearing was correlated with melasma extension. Older disease onset age was associated with darker skin phototypes. Co-occurrence of facial topographies supported the clinical classification into centrofacial and peripheral melasma. This population was characterized by a high prevalence in adult females, intermediate skin phototypes, disease precipitation by hormonal stimuli and a familial genetic influence. © 2012 The Authors. Journal of the European Academy of Dermatology and Venereology © 2012 European Academy of Dermatology and Venereology.

  16. [Study on biopharmaceutics classification system for Chinese materia medica of extract of Huanglian].

    PubMed

    Liu, Yang; Yin, Xiu-Wen; Wang, Zi-Yu; Li, Xue-Lian; Pan, Meng; Li, Yan-Ping; Dong, Ling

    2017-11-01

    One of the advantages of the biopharmaceutics classification system of Chinese materia medica (CMMBCS) is that it expands the level of classification research from single ingredients to the multiple components of a Chinese herb, and from multi-component research to holistic research of the Chinese materia medica. In the present paper, the alkaloids of extract of Huanglian were chosen as the main research object to explore how solubility and intestinal permeability change from single components to multi-component mixtures, and to determine the biopharmaceutical classification of extract of Huanglian at the holistic level. The typical shake-flask method and HPLC were used to measure the solubility of single alkaloid ingredients from extract of Huanglian. The intestinal absorption of the alkaloids was quantified in a single-pass intestinal perfusion experiment, and the permeability coefficient of extract of Huanglian was calculated by a self-defined weight-coefficient method. Copyright© by the Chinese Pharmaceutical Association.

  17. Lethal acrodysgenital dwarfism: a severe lethal condition resembling Smith-Lemli-Opitz syndrome.

    PubMed Central

    Merrer, M L; Briard, M L; Girard, S; Mulliez, N; Moraine, C; Imbert, M C

    1988-01-01

    We report eight cases of a lethal association of failure to thrive, facial dysmorphism, ambiguous genitalia, syndactyly, postaxial polydactyly, and internal developmental anomalies (Hirschsprung's disease, cardiac and renal malformation). This syndrome is likely to be autosomal recessive and resembles Smith-Lemli-Opitz (SLO) syndrome. However, the lethality, the common occurrence of polydactyly, and the sexual ambiguity distinguish this condition from SLO syndrome. A review of published reports supports the separate classification of this syndrome, for which we propose the name lethal acrodysgenital dwarfism. PMID:2831368

  18. Dysmorphism of the middle ear: case report

    PubMed Central

    Solero, P; Ferrara, M; Musto, R; Pira, A; Di Lisi, D

    2005-01-01

    Summary Although there are numerous publications in the literature describing the wide range of diagnoses, classifications and treatments of malformations of the hearing apparatus, even more variations can be found in clinical practice. Indeed, each individual case is unique as far as pathogenesis, clinical course and treatment are concerned. The case reported herein describes a 12-year-old boy affected by cranio-facial dysmorphism and unilateral conductive hearing loss in the right ear, followed from radiological diagnosis (carried out to study a malformation of the ear pinna) to surgical treatment. PMID:16602328

  19. High-resolution face verification using pore-scale facial features.

    PubMed

    Li, Dong; Zhou, Huiling; Lam, Kin-Man

    2015-08-01

    Face recognition methods, which usually represent face images using holistic or local facial features, rely heavily on alignment. Their performance also degrades severely under variations in expression or pose, especially when there is only one gallery image per subject. With the easy access to high-resolution (HR) face images nowadays, some HR face databases have recently been developed. However, few studies have tackled the use of HR information for face recognition or verification. In this paper, we propose a pose-invariant face-verification method, which is robust to alignment errors, using the HR information based on pore-scale facial features. A new keypoint descriptor, namely pore-PCA-SIFT (PPCASIFT), adapted from PCA-SIFT (Principal Component Analysis-Scale Invariant Feature Transform), is devised for the extraction of a compact set of distinctive pore-scale facial features. Having matched the pore-scale features of two face regions, an effective robust-fitting scheme is proposed for the face-verification task. Experiments show that, with only one frontal-view gallery image per subject, our proposed method outperforms a number of standard verification methods, and can achieve excellent accuracy even when the faces are under large variations in expression and pose.

  20. Air classification: Potential treatment method for optimized recycling or utilization of fine-grained air pollution control residues obtained from dry off-gas cleaning high-temperature processing systems.

    PubMed

    Lanzerstorfer, Christof

    2015-11-01

    In the dust collected from the off-gas of high-temperature processes, components that are volatile at the process temperature are usually enriched. When the dust is recycled, the concentration of these volatile components is frequently limited to avoid operational problems. For external utilization, too, the concentration of such volatile components, especially heavy metals, is often restricted. The concentration of the volatile components is usually higher in the fine fractions of the collected dust. Air classification is therefore a potential treatment method to deplete these volatile components from the coarse material by splitting off a fines fraction with an increased concentration of those components. In this work, the procedure of sequential classification using a laboratory air classifier, and the calculations required to evaluate air classification for a given application, were demonstrated using the example of a fly ash sample from a biomass combustion plant. In the investigated example, the Pb content in the coarse fraction could be reduced to 60% of its initial value by separating off 20% fines. For the non-volatile Mg, the content remained almost constant. It can be concluded that air classification is an appropriate method for the treatment of off-gas cleaning residues. © The Author(s) 2015.
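
    A short worked mass balance for the reported split, under the assumption that "reduced to 60%" means 60% of the feed concentration: the 20% fines fraction must then carry the Pb removed from the coarse fraction.

        f_fines = 0.20              # mass fraction split off as fines
        c_feed = 1.0                # Pb concentration in the feed (normalized)
        c_coarse = 0.60 * c_feed    # Pb left in the coarse fraction

        # Feed balance: c_feed = f_fines * c_fines + (1 - f_fines) * c_coarse
        c_fines = (c_feed - (1 - f_fines) * c_coarse) / f_fines
        print(c_fines)              # ~2.6: fines enriched 2.6-fold in Pb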

  1. Classification of nasolabial folds in Asians and the corresponding surgical approaches: By Shanghai 9th People's Hospital.

    PubMed

    Zhang, Lu; Tang, Meng-Yao; Jin, Rong; Zhang, Ying; Shi, Yao-Ming; Sun, Bao-Shan; Zhang, Yu-Guang

    2015-07-01

    One of the earliest signs of aging appears in the nasolabial fold, which is a special anatomical region that requires many factors for comprehensive assessment. Hence, it is inadequate to rely on a single index for the classification of nasolabial folds. Through clinical observation, we found that traditional filling treatments provide little improvement for some patients, which prompted us to seek a more specific and scientific classification standard and assessment system. A total of 900 patients who sought facial rejuvenation treatment at Shanghai 9th People's Hospital were included in this study. We observed the nasolabial fold traits of different age groups and in different states, and the results were compared with the Wrinkle Severity Rating Scale (WSRS). We summarized the data, presented a classification scheme, and proposed a selection of treatment options. Consideration of the anatomical and histological features of nasolabial folds allowed us to divide nasolabial folds into five types, namely the skin type, fat pad type, muscular type, bone retrusion type, and hybrid type. Because different types of nasolabial folds require different treatments, it is crucial to accurately assess and correctly classify the conditions. Copyright © 2015 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.

  2. Independent components analysis to increase efficiency of discriminant analysis methods (FDA and LDA): Application to NMR fingerprinting of wine.

    PubMed

    Monakhova, Yulia B; Godelmann, Rolf; Kuballa, Thomas; Mushtakova, Svetlana P; Rutledge, Douglas N

    2015-08-15

    Discriminant analysis (DA) methods, such as linear discriminant analysis (LDA) or factorial discriminant analysis (FDA), are well-known chemometric approaches for solving classification problems in chemistry. In most applications, principal component analysis (PCA) is used as the first step to generate orthogonal eigenvectors, and the corresponding sample scores are utilized to generate discriminant features for the discrimination. Independent components analysis (ICA) based on the minimization of mutual information can be used as an alternative to PCA as a preprocessing tool for LDA and FDA classification. To illustrate the performance of this ICA/DA methodology, four representative nuclear magnetic resonance (NMR) data sets of wine samples were used. The classification was performed with regard to grape variety, year of vintage and geographical origin. The average increase for ICA/DA in comparison with PCA/DA in the percentage of correct classification varied between 6±1% and 8±2%. The maximum increase in classification efficiency of 11±2% was observed for discrimination of the year of vintage (ICA/FDA) and geographical origin (ICA/LDA). The procedure for determining the number of extracted features (PCs, ICs) for the optimum DA models is discussed. The use of independent components (ICs) instead of principal components (PCs) resulted in improved classification performance of DA methods. The ICA/LDA method is preferable to ICA/FDA for recognition tasks based on NMR spectroscopic measurements. Copyright © 2015 Elsevier B.V. All rights reserved.
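
    A minimal sketch of the ICA/DA idea under stated assumptions: independent component scores, rather than PCA scores, are fed to a linear discriminant classifier, and the two preprocessing choices are compared by cross-validation. Data files and component counts are hypothetical placeholders.

        import numpy as np
        from sklearn.decomposition import FastICA, PCA
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline

        X = np.load("nmr_spectra.npy")      # (n_samples, n_spectral_points)
        y = np.load("grape_variety.npy")    # class labels

        # Same discriminant classifier, two preprocessing choices.
        for reducer in (PCA(n_components=15),
                        FastICA(n_components=15, random_state=0)):
            pipe = make_pipeline(reducer, LinearDiscriminantAnalysis())
            scores = cross_val_score(pipe, X, y, cv=5)
            print(type(reducer).__name__, "accuracy: %.3f" % scores.mean())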

  3. The face and its emotion: right N170 deficits in structural processing and early emotional discrimination in schizophrenic patients and relatives.

    PubMed

    Ibáñez, Agustín; Riveros, Rodrigo; Hurtado, Esteban; Gleichgerrcht, Ezequiel; Urquina, Hugo; Herrera, Eduar; Amoruso, Lucía; Reyes, Migdyrai Martin; Manes, Facundo

    2012-01-30

    Previous studies have reported facial emotion recognition impairments in schizophrenic patients, as well as abnormalities in the N170 component of the event-related potential. Current research on schizophrenia highlights the importance of complexly-inherited brain-based deficits. In order to examine the N170 markers of face structural and emotional processing, DSM-IV diagnosed schizophrenia probands (n=13), unaffected first-degree relatives from multiplex families (n=13), and control subjects (n=13) matched by age, gender and educational level, performed a categorization task which involved words and faces with positive and negative valence. The N170 component, while present in relatives and control subjects, was reduced in patients, not only for faces, but also for face-word differences, suggesting a deficit in structural processing of stimuli. Control subjects showed N170 modulation according to the valence of facial stimuli. However, this discrimination effect was found to be reduced both in patients and relatives. This is the first report showing N170 valence deficits in relatives. Our results suggest a generalized deficit affecting the structural encoding of faces in patients, as well as the emotion discrimination both in patients and relatives. Finally, these findings lend support to the notion that cortical markers of facial discrimination can be validly considered as vulnerability markers. © 2011 Elsevier Ireland Ltd. All rights reserved.

  4. Chimeric anterolateral thigh free flap for reconstruction of complex cranio-orbito-facial defects after skull base cancers resection.

    PubMed

    Cherubino, Mario; Turri-Zanoni, Mario; Battaglia, Paolo; Giudice, Marco; Pellegatta, Igor; Tamborini, Federico; Maggiulli, Francesca; Guzzetti, Luca; Di Giovanna, Danilo; Bignami, Maurizio; Calati, Carolina; Castelnuovo, Paolo; Valdatta, Luigi

    2017-01-01

    Complex cranio-orbito-facial defects after skull base cancer resection require functional and esthetic reconstruction. The introduction of endoscopically assisted excision techniques, together with advances in reconstructive surgery and anesthesiology, has improved the management of these critical patients. We report a series of chimeric anterolateral thigh (ALT) flaps used to reconstruct complex cranio-orbito-facial defects after skull base surgery. A retrospective review of patients who underwent cranio-orbito-facial reconstruction using a chimeric ALT flap from March 2013 to October 2015 at a single tertiary care referral institute was performed. All patients were affected by locally advanced malignant tumors, and the resulting defects involved the skull base in all cases. The ALT flaps were perforator-based flaps with different components: fascia, skin and muscle. The different flap territories had independent vascular supplies and no physical interconnection except where linked by a common source vessel. Ten patients were included in the study. Three patients underwent adjuvant radiotherapy and chemotherapy. The mean hospitalization time was 21 days (range, 8-24 days). One failure was observed. After a mean follow-up of 12.4 months, 3 patients had died of the disease, 2 were alive with disease, and 5 patients (50%) were alive without evidence of disease. The chimeric ALT flap is a reliable and versatile reconstructive option for complex cranio-orbito-facial defects resulting from skull base surgery. The chimeric flap composed of different territories proved adequate for a patient-tailored three-dimensional reconstruction of the defects and able to withstand postoperative adjuvant treatment. Copyright © 2016 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.

  5. Association study of Demodex bacteria and facial dermatoses based on DGGE technique.

    PubMed

    Zhao, YaE; Yang, Fan; Wang, RuiLing; Niu, DongLing; Mu, Xin; Yang, Rui; Hu, Li

    2017-03-01

    The role of bacteria in the facial skin lesions caused by Demodex is unclear. To shed some light on this issue, we conducted a case-control study comparing cases with facial dermatoses to controls with healthy skin using the denaturing gradient gel electrophoresis (DGGE) technique. Bacterial diversity, composition, and principal components were analyzed for Demodex bacteria and the matched facial skin bacteria. Mite examination showed that all 33 cases were infected with Demodex folliculorum (D. f), whereas 16 of the 30 controls were infected with D. f and the remaining 14 controls were infected with Demodex brevis (D. b). The diversity analysis showed that only the evenness index differed significantly between mite bacteria and matched skin bacteria in the cases. The composition analysis showed that the DGGE bands of cases and controls were assigned to 12 taxa of 4 phyla, including Proteobacteria (39.37-52.78%), Firmicutes (2.7-26.77%), Actinobacteria (0-5.71%), and Bacteroidetes (0-2.08%). In cases, the proportion of Staphylococcus in Firmicutes was significantly higher than that in D. f controls and D. b controls, while the proportion of Sphingomonas in Proteobacteria was significantly lower than that in D. f controls. The between-group analysis (BGA) showed that all the banding patterns clustered into three groups, namely D. f cases, D. f controls, and D. b controls. Our study suggests that the bacteria in Demodex likely originate from the matched facial skin bacteria. Proteobacteria and Firmicutes are the two main taxa. The increase of Staphylococcus and decrease of Sphingomonas might be associated with the development of facial dermatoses.

  6. An efficient classification method based on principal component and sparse representation.

    PubMed

    Zhai, Lin; Fu, Shujun; Zhang, Caiming; Liu, Yunxian; Wang, Lu; Liu, Guohua; Yang, Mingqiang

    2016-01-01

    As an important application in optical imaging, palmprint recognition is affected by many unfavorable factors. An effective fusion of blockwise bi-directional two-dimensional principal component analysis and grouping sparse classification is presented. Dimension reduction and normalization are implemented by the blockwise bi-directional two-dimensional principal component analysis of palmprint images to extract feature matrices, which are assembled into an overcomplete dictionary for sparse classification. A subspace orthogonal matching pursuit algorithm is designed to solve the grouping sparse representation. Finally, the classification result is obtained by comparing the residuals between testing and reconstructed images. Experiments carried out on a palmprint database show that this method has better robustness against position and illumination changes in palmprint images, and achieves a higher palmprint recognition rate.
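
    A hedged sketch of the two stages: bi-directional 2D-PCA reduces each palmprint image to a small feature matrix, and a sparse code over a dictionary of training features assigns a test image to the class with the smallest reconstruction residual. The blockwise processing and the subspace OMP variant of the paper are simplified here, and file names are placeholders.

        import numpy as np
        from sklearn.linear_model import OrthogonalMatchingPursuit

        def bd2dpca_bases(images, k):
            """Row- and column-direction projection bases from training images."""
            mean = images.mean(axis=0)
            G_col = sum((A - mean).T @ (A - mean) for A in images)  # column scatter
            G_row = sum((A - mean) @ (A - mean).T for A in images)  # row scatter
            V = np.linalg.eigh(G_col)[1][:, -k:]    # right (column) projection
            U = np.linalg.eigh(G_row)[1][:, -k:]    # left (row) projection
            return U, V

        train = np.load("train_palms.npy")      # (n_train, h, w)
        labels = np.load("train_labels.npy")
        test_img = np.load("test_palm.npy")     # (h, w)

        U, V = bd2dpca_bases(train, k=8)
        D = np.stack([(U.T @ A @ V).ravel() for A in train], axis=1)  # one atom per image
        x = (U.T @ test_img @ V).ravel()

        omp = OrthogonalMatchingPursuit(n_nonzero_coefs=10).fit(D, x)
        coef = omp.coef_

        # Class-wise residuals: reconstruct with each class's coefficients only.
        residuals = {c: np.linalg.norm(x - D[:, labels == c] @ coef[labels == c])
                     for c in np.unique(labels)}
        print("Predicted class:", min(residuals, key=residuals.get))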

  7. Statistical discrimination of footwear: a method for the comparison of accidentals on shoe outsoles inspired by facial recognition techniques.

    PubMed

    Petraco, Nicholas D K; Gambino, Carol; Kubic, Thomas A; Olivio, Dayhana; Petraco, Nicholas

    2010-01-01

    In the field of forensic footwear examination, it is a widely held belief that patterns of accidental marks found on footwear and footwear impressions possess a high degree of "uniqueness." This belief, however, has not been thoroughly studied in a numerical way using controlled experiments. As a result, this form of valuable physical evidence has been the subject of admissibility challenges. In this study, we apply statistical techniques used in facial pattern recognition to a minimal set of information gleaned from accidental patterns. That is, in order to maximize the amount of potential similarity between patterns, we use only the coordinate locations of accidental marks (on the top portion of a footwear impression) to characterize the entire pattern. This allows us to numerically gauge how similar two patterns are to one another in a worst-case scenario, i.e., in the absence of the wealth of information normally available to the footwear examiner, such as accidental mark size and shape. The patterns were recorded from the top portion of the shoe soles (i.e., not the heel) of five shoe pairs. All shoes were the same make and model, and all were worn by the same person for a period of 30 days. We found that in 20-30-dimensional principal component (PC) space (99.5% variance retained), patterns from the same shoe, even at different points in time, tended to cluster closer to each other than patterns from different shoes. Correct shoe identification rates using maximum likelihood linear classification analysis and the hold-one-out procedure ranged from 81% to 100%. Although they retain little variance, three-dimensional PC plots were made and generally corroborated the findings in the much higher-dimensional PC space. This study is intended to be a starting point for future research to build statistical models of the formation and evolution of accidental patterns.
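
    A minimal sketch of the statistical procedure, assuming each accidental pattern has been reduced to a fixed-length feature vector: PCA retains 99.5% of the variance and a linear classifier is validated with the hold-one-out (leave-one-out) procedure. File names are hypothetical, and linear discriminant analysis stands in for the paper's maximum likelihood linear classification.

        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.decomposition import PCA
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import LeaveOneOut, cross_val_score

        X = np.load("pattern_features.npy")   # (n_impressions, n_features)
        y = np.load("shoe_ids.npy")           # which shoe made each impression

        # Keep enough components for 99.5% of the variance, then classify.
        pipe = make_pipeline(PCA(n_components=0.995), LinearDiscriminantAnalysis())
        scores = cross_val_score(pipe, X, y, cv=LeaveOneOut())
        print("Hold-one-out identification rate: %.1f%%" % (100 * scores.mean()))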

  8. Recognizing Facial Slivers.

    PubMed

    Gilad-Gutnick, Sharon; Harmatz, Elia Samuel; Tsourides, Kleovoulos; Yovel, Galit; Sinha, Pawan

    2018-07-01

    We report here an unexpectedly robust ability of healthy human individuals (n = 40) to recognize extremely distorted needle-like facial images, challenging the well-entrenched notion that veridical spatial configuration is necessary for extracting facial identity. In face identification tasks with parametrically compressed internal and external features, we found that the sum of performances on each cue falls significantly short of performance on full faces, despite the equal visual information available from both measures (with full faces essentially being a superposition of internal and external features). We hypothesize that this large deficit stems from the use of positional information about how the internal features are positioned relative to the external features. To test this, we systematically changed the relations between internal and external features and found preferential encoding of vertical but not horizontal spatial relationships in facial representations (n = 20). Finally, we employed magnetoencephalography imaging (n = 20) to demonstrate a close mapping between the behavioral psychometric curve and the amplitude of the M250 face-familiarity evoked response field component, but not the face-sensitive M170, providing evidence that the M250 can be modulated by faces that are perceptually identifiable, irrespective of extreme distortions to the face's veridical configuration. We theorize that the tolerance to compressive distortions has evolved from the need to recognize faces across varying viewpoints. Our findings help clarify the important, but poorly defined, concept of facial configuration and also enable an association between behavioral performance and previously reported neural correlates of face perception.

  9. Influence of maxillary posterior discrepancy on upper molar vertical position and facial vertical dimensions in subjects with or without skeletal open bite

    PubMed Central

    Aliaga-Del Castillo, Aron; Pérez-Vargas, Luis Fernando; Flores-Mir, Carlos

    2016-01-01

    Summary Objectives: To determine the influence of maxillary posterior discrepancy on upper molar vertical position and dentofacial vertical dimensions in individuals with or without skeletal open bite (SOB). Materials and methods: Pre-treatment lateral cephalograms of 139 young adults were examined. The sample was divided into eight groups categorized according to their sagittal and vertical skeletal facial growth pattern and maxillary posterior discrepancy (present or absent). Upper molar vertical position, overbite, lower anterior facial height and facial height ratio were measured. Independent t-test was performed to determine differences between the groups considering maxillary posterior discrepancy. Principal component analysis and MANCOVA test were also used. Results: No statistically significant differences were found comparing the molar vertical position according to maxillary posterior discrepancy for the SOB Class I group or the group with adequate overbite. Significant differences were found in SOB Class II and Class III groups. In addition, an increased molar vertical position was found in the group without posterior discrepancy. Limitations: Some variables closely related to the individual’s intrinsic craniofacial development that could influence the evaluated vertical measurements were not considered. Conclusions and implications: Overall maxillary posterior discrepancy does not appear to have a clear impact on upper molar vertical position or facial vertical dimensions. Only the SOB Class III group without posterior discrepancy had a significantly increased upper molar vertical position. PMID:26385786

  10. Influence of maxillary posterior discrepancy on upper molar vertical position and facial vertical dimensions in subjects with or without skeletal open bite.

    PubMed

    Arriola-Guillén, Luis Ernesto; Aliaga-Del Castillo, Aron; Pérez-Vargas, Luis Fernando; Flores-Mir, Carlos

    2016-06-01

    To determine the influence of maxillary posterior discrepancy on upper molar vertical position and dentofacial vertical dimensions in individuals with or without skeletal open bite (SOB). Pre-treatment lateral cephalograms of 139 young adults were examined. The sample was divided into eight groups categorized according to their sagittal and vertical skeletal facial growth pattern and maxillary posterior discrepancy (present or absent). Upper molar vertical position, overbite, lower anterior facial height and facial height ratio were measured. Independent t-test was performed to determine differences between the groups considering maxillary posterior discrepancy. Principal component analysis and MANCOVA test were also used. No statistically significant differences were found comparing the molar vertical position according to maxillary posterior discrepancy for the SOB Class I group or the group with adequate overbite. Significant differences were found in SOB Class II and Class III groups. In addition, an increased molar vertical position was found in the group without posterior discrepancy. Some variables closely related to the individual's intrinsic craniofacial development that could influence the evaluated vertical measurements were not considered. Overall maxillary posterior discrepancy does not appear to have a clear impact on upper molar vertical position or facial vertical dimensions. Only the SOB Class III group without posterior discrepancy had a significantly increased upper molar vertical position. © The Author 2015. Published by Oxford University Press on behalf of the European Orthodontic Society. All rights reserved. For permissions, please email: journals.permissions@oup.com.

  11. The role of facial appearance on CEO selection after firm misconduct.

    PubMed

    Gomulya, David; Wong, Elaine M; Ormiston, Margaret E; Boeker, Warren

    2017-04-01

    [Correction Notice: An Erratum for this article was reported in Vol 102(4) of Journal of Applied Psychology (see record 2017-10684-001). The wrong figure files were used. All versions of this article have been corrected.] We investigate a particular aspect of CEO successor trustworthiness that may be critically important after a firm has engaged in financial misconduct. Specifically, drawing on prior research that suggests that facial appearance is one critical way in which trustworthiness is signaled, we argue that leaders who convey integrity, a component of trustworthiness, will be more likely to be selected as successors after financial restatement. We predict that such appointments garner more positive reactions by external observers such as investment analysts and the media because these CEOs are perceived as having greater integrity. In an archival study of firms that have announced financial restatements, we find support for our predictions. These findings have implications for research on CEO succession, leadership selection, facial appearance, and firm misconduct. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  12. Judging emotional congruency: Explicit attention to situational context modulates processing of facial expressions of emotion.

    PubMed

    Diéguez-Risco, Teresa; Aguado, Luis; Albert, Jacobo; Hinojosa, José Antonio

    2015-12-01

    The influence of explicit evaluative processes on the contextual integration of facial expressions of emotion was studied in a procedure that required the participants to judge the congruency of happy and angry faces with preceding sentences describing emotion-inducing situations. Judgments were faster on congruent trials in the case of happy faces and on incongruent trials in the case of angry faces. At the electrophysiological level, a congruency effect was observed in the face-sensitive N170 component that showed larger amplitudes on incongruent trials. An interactive effect of congruency and emotion appeared on the LPP (late positive potential), with larger amplitudes in response to happy faces that followed anger-inducing situations. These results show that the deliberate intention to judge the contextual congruency of facial expressions influences not only processes involved in affective evaluation such as those indexed by the LPP but also earlier processing stages that are involved in face perception. Copyright © 2015. Published by Elsevier B.V.

  13. Perceiving emotions in neutral faces: expression processing is biased by affective person knowledge.

    PubMed

    Suess, Franziska; Rabovsky, Milena; Abdel Rahman, Rasha

    2015-04-01

    According to a widely held view, basic emotions such as happiness or anger are reflected in facial expressions that are invariant and uniquely defined by specific facial muscle movements. Accordingly, expression perception should not be vulnerable to influences outside the face. Here, we test this assumption by manipulating the emotional valence of biographical knowledge associated with individual persons. Faces of well-known and initially unfamiliar persons displaying neutral expressions were associated with socially relevant negative, positive or comparatively neutral biographical information. The expressions of faces associated with negative information were classified as more negative than faces associated with neutral information. Event-related brain potential modulations in the early posterior negativity, a component taken to reflect early sensory processing of affective stimuli such as emotional facial expressions, suggest that negative affective knowledge can bias the perception of faces with neutral expressions toward subjectively displaying negative emotions. © The Author (2014). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  14. Face Aging Effect Simulation Using Hidden Factor Analysis Joint Sparse Representation.

    PubMed

    Yang, Hongyu; Huang, Di; Wang, Yunhong; Wang, Heng; Tang, Yuanyan

    2016-06-01

    Face aging simulation has attracted increasing attention in recent years, whereas it remains a challenge to generate convincing and natural age-progressed face images. In this paper, we present a novel approach to this issue using hidden factor analysis joint sparse representation. In contrast to the majority of approaches in the literature, which handle the facial texture as a whole, the proposed aging approach separately models the person-specific facial properties, which tend to be stable over a relatively long period, and the age-specific clues, which gradually change over time. It then transforms the age component to a target age group via sparse reconstruction, yielding the aging effects, which are finally combined with the identity component to achieve the aged face. Experiments are carried out on three face aging databases, and the results clearly demonstrate the effectiveness and robustness of the proposed method in rendering a face with aging effects. In addition, a series of evaluations prove its validity with respect to identity preservation and aging effect generation.
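
    A heavily simplified sketch of the aging idea under stated assumptions: a face feature vector is split into an identity component and an age component, and the age component is sparse-coded over the source age group's dictionary and re-synthesized with the target group's dictionary. Real hidden factor analysis learns the decomposition jointly; here the split and the atom-aligned dictionaries are taken as given, and all file names are hypothetical.

        import numpy as np
        from sklearn.decomposition import SparseCoder

        # Hypothetical, atom-for-atom aligned age dictionaries (n_atoms, dim).
        D_young = np.load("age_dict_young.npy")
        D_old = np.load("age_dict_old.npy")

        identity = np.load("identity_part.npy")  # person-specific component
        age_young = np.load("age_part.npy")      # age component of the same face

        coder = SparseCoder(dictionary=D_young, transform_algorithm="omp",
                            transform_n_nonzero_coefs=5)
        code = coder.transform(age_young[None, :])   # sparse code, young dictionary

        # Re-synthesize the age component with the old dictionary and recombine.
        aged_face = identity + (code @ D_old)[0]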

  15. Characteristics of ballistic and blast injuries.

    PubMed

    Powers, David B; Delo, Robert I

    2013-03-01

    Ballistic wounds are formed by variable, interrelated factors, such as the nature of the tissue, the compositional makeup of the bullet, the distance to the target, and the velocity, shape, and mass of the projectile. This complex interplay of factors, each dependent on the others, makes the prediction of wounding potential difficult. As the facial features are the component of the body most involved in a patient's personality and interaction with society, preservation of form, cosmesis, and functional outcome should remain the primary goals in the management of ballistic injury. A logical, sequential analysis of the injury patterns of the facial complex is an absolutely necessary component of the treatment of craniomaxillofacial ballistic injuries. Fortunately, these skill sets should be well honed in all craniomaxillofacial surgeons through their exposure to generalized trauma, orthognathic, oncologic, and cosmetic surgery patients. Identification of injured tissues, understanding the functional limitations of these injuries, and preservation of both hard and soft tissues to minimize the need for tissue replacement are paramount.

  16. Classification of high-resolution multispectral satellite remote sensing images using extended morphological attribute profiles and independent component analysis

    NASA Astrophysics Data System (ADS)

    Wu, Yu; Zheng, Lijuan; Xie, Donghai; Zhong, Ruofei

    2017-07-01

    In this study, extended morphological attribute profiles (EAPs) and independent component analysis (ICA) were combined for feature extraction from high-resolution multispectral satellite remote sensing images, and the regularized least squares (RLS) approach with the radial basis function (RBF) kernel was applied for classification. Based on the two major independent components, geometrical features were extracted using the EAPs method. Three morphological attributes were calculated and extracted for each independent component: area, standard deviation, and moment of inertia. The extracted geometrical features were classified using the RLS approach and the commonly used LIB-SVM library of support vector machines. Worldview-3 and Chinese GF-2 multispectral images were tested, and the results showed that the features extracted by EAPs and ICA can effectively improve the accuracy of high-resolution multispectral image classification, by 2% compared with the EAPs and principal component analysis (PCA) method, and by 6% compared with APs on the original high-resolution multispectral data. Moreover, the results suggest that both the GURLS and LIB-SVM libraries are well suited for multispectral remote sensing image classification. The GURLS library is easy to use, with automatic parameter selection, but its computation time may be longer than that of the LIB-SVM library. This study should be helpful for the classification of high-resolution multispectral satellite remote sensing images.
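
    A minimal sketch of the classification stage: regularized least squares with an RBF kernel, implemented here as kernel ridge regression on one-hot labels, predicting the class with the largest output. The EAP/ICA feature extraction is assumed to have been run already, and the file names and kernel parameters are placeholders.

        import numpy as np
        from sklearn.kernel_ridge import KernelRidge
        from sklearn.preprocessing import LabelBinarizer

        X_train = np.load("eap_ica_features_train.npy")
        y_train = np.load("class_labels_train.npy")
        X_test = np.load("eap_ica_features_test.npy")

        lb = LabelBinarizer()
        Y = lb.fit_transform(y_train)            # one-hot class indicators

        # RLS with an RBF kernel = kernel ridge regression on the indicators.
        rls = KernelRidge(kernel="rbf", alpha=1.0, gamma=0.1).fit(X_train, Y)
        pred = lb.classes_[np.argmax(rls.predict(X_test), axis=1)]
        print(pred[:10])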

  17. Brain response during the M170 time interval is sensitive to socially relevant information.

    PubMed

    Arviv, Oshrit; Goldstein, Abraham; Weeting, Janine C; Becker, Eni S; Lange, Wolf-Gero; Gilboa-Schechtman, Eva

    2015-11-01

    Deciphering the social meaning of facial displays is a highly complex neurological process. The M170, an event related field component of MEG recording, like its EEG counterpart N170, was repeatedly shown to be associated with structural encoding of faces. However, the scope of information encoded during the M170 time window is still being debated. We investigated the neuronal origin of facial processing of integrated social rank cues (SRCs) and emotional facial expressions (EFEs) during the M170 time interval. Participants viewed integrated facial displays of emotion (happy, angry, neutral) and SRCs (indicated by upward, downward, or straight head tilts). We found that the activity during the M170 time window is sensitive to both EFEs and SRCs. Specifically, highly prominent activation was observed in response to SRC connoting dominance as compared to submissive or egalitarian head cues. Interestingly, the processing of EFEs and SRCs appeared to rely on different circuitry. Our findings suggest that vertical head tilts are processed not only for their sheer structural variance, but as social information. Exploring the temporal unfolding and brain localization of non-verbal cues processing may assist in understanding the functioning of the social rank biobehavioral system. Copyright © 2015 Elsevier Ltd. All rights reserved.

  18. Pre-operative Screening and Manual Drilling Strategies to Reduce the Risk of Thermal Injury During Minimally Invasive Cochlear Implantation Surgery.

    PubMed

    Dillon, Neal P; Fichera, Loris; Kesler, Kyle; Zuniga, M Geraldine; Mitchell, Jason E; Webster, Robert J; Labadie, Robert F

    2017-09-01

    This article presents the development and experimental validation of a methodology to reduce the risk of thermal injury to the facial nerve during minimally invasive cochlear implantation surgery. The first step in this methodology is a pre-operative screening process, in which medical imaging is used to identify those patients that present a significant risk of developing high temperatures at the facial nerve during the drilling phase of the procedure. Such a risk is calculated based on the density of the bone along the drilling path and the thermal conductance between the drilling path and the nerve, and provides a criterion to exclude high-risk patients from receiving the minimally invasive procedure. The second component of the methodology is a drilling strategy for manually-guided drilling near the facial nerve. The strategy utilizes interval drilling and mechanical constraints to enable better control over the procedure and the resulting generation of heat. The approach is tested in fresh cadaver temporal bones using a thermal camera to monitor temperature near the facial nerve. Results indicate that pre-operative screening may successfully exclude high-risk patients and that the proposed drilling strategy enables safe drilling for low-to-moderate risk patients.
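
    For illustration only, the screening criterion described above (bone density along the planned path combined with thermal coupling to the nerve) can be reduced to a scalar score along the following lines; the weighting, units, and any cutoff here are hypothetical and are not taken from the paper.

        # Hypothetical screening score, not the authors' validated metric.
        import numpy as np

        def thermal_risk_score(hu_along_path, dist_to_nerve_mm, k=0.5):
            """Denser bone -> more frictional heat; a shorter path-to-nerve
            distance (higher thermal conductance) -> more heat at the nerve."""
            density = np.clip(hu_along_path, 0, None).mean()      # mean HU on path
            conductance = 1.0 / max(dist_to_nerve_mm.min(), 0.1)  # ~ 1/distance
            return k * density * conductance

        # A patient would be screened out if the score exceeds a validated cutoff.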

  19. Facial patterns in a tropical social wasp correlate with colony membership

    NASA Astrophysics Data System (ADS)

    Baracchi, David; Turillazzi, Stefano; Chittka, Lars

    2016-10-01

    Social insects excel in discriminating nestmates from intruders, typically relying on colony odours. Remarkably, some wasp species achieve such discrimination using visual information. However, while it is universally accepted that odours mediate group-level recognition, the ability to recognise colony members visually has been considered possible only via individual recognition, by which wasps discriminate `friends' and `foes'. Using geometric morphometric analysis, a technique based on a rigorous statistical theory of shape that allows quantitative multivariate analyses of structure shapes, we first quantified facial marking variation in Liostenogaster flavolineata wasps. We then compared this facial variation with that of chemical profiles (generated by cuticular hydrocarbons) within and between colonies. Principal component analysis and discriminant analysis applied to sets of variables containing pure shape information showed that, despite appreciable intra-colony variation, the faces of females belonging to the same colony resemble one another more than those of outsiders. This colony-specific variation in facial patterns was on a par with that observed for odours. While the occurrence of face discrimination at the colony level remains to be tested by behavioural experiments, overall our results suggest that, in this species, wasp faces display adequate information that could potentially be perceived and used by wasps for colony-level recognition.
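
    As a rough sketch of the shape-analysis steps named above, facial landmark configurations can be Procrustes-aligned to a reference and the resulting pure shape variables fed to PCA and a discriminant analysis; this uses pairwise Procrustes fits as a simplification of a full generalized Procrustes superimposition, and all names and parameters are illustrative, not the authors' protocol.

        # Illustrative pipeline, not the authors' exact morphometric analysis.
        import numpy as np
        from scipy.spatial import procrustes
        from sklearn.decomposition import PCA
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        def colony_face_analysis(landmarks, colony_ids, n_pcs=10):
            """landmarks: (n_wasps, n_points, 2) facial landmark coordinates."""
            ref = landmarks[0]
            aligned = np.array([procrustes(ref, lm)[1] for lm in landmarks])
            shapes = aligned.reshape(len(landmarks), -1)   # pure shape variables
            pcs = PCA(n_components=n_pcs).fit_transform(shapes)
            lda = LinearDiscriminantAnalysis().fit(pcs, colony_ids)
            return lda.score(pcs, colony_ids)              # colony assignment rate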

  20. "The role of facial appearance on CEO selection after firm misconduct:" Correction to Gomulya et al. (2016).

    PubMed

    2017-04-01

    Reports an error in "The Role of Facial Appearance on CEO Selection After Firm Misconduct" by David Gomulya, Elaine M. Wong, Margaret E. Ormiston and Warren Boeker (Journal of Applied Psychology, Advanced Online Publication, Dec 19, 2016, np). The wrong figure files were used. All versions of this article have been corrected. (The following abstract of the original article appeared in record 2016-60831-001.) We investigate a particular aspect of CEO successor trustworthiness that may be critically important after a firm has engaged in financial misconduct. Specifically, drawing on prior research that suggests that facial appearance is one critical way in which trustworthiness is signaled, we argue that leaders who convey integrity, a component of trustworthiness, will be more likely to be selected as successors after financial restatement. We predict that such appointments garner more positive reactions by external observers such as investment analysts and the media because these CEOs are perceived as having greater integrity. In an archival study of firms that have announced financial restatements, we find support for our predictions. These findings have implications for research on CEO succession, leadership selection, facial appearance, and firm misconduct. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  1. Children's Recognition of Emotional Facial Expressions Through Photographs and Drawings.

    PubMed

    Brechet, Claire

    2017-01-01

    The author's purpose was to examine children's recognition of emotional facial expressions, by comparing two types of stimulus: photographs and drawings. The author aimed to investigate whether drawings could be considered a more evocative material than photographs, as a function of age and emotion. Five- and 7-year-old children were presented with photographs and drawings displaying facial expressions of 4 basic emotions (i.e., happiness, sadness, anger, and fear) and were asked to perform a matching task by pointing to the face corresponding to the target emotion labeled by the experimenter. The photographs were selected from the Radboud Faces Database, and the drawings were designed on the basis of both the facial components involved in the expression of these emotions and the graphic cues children tend to use when asked to depict these emotions in their own drawings. Our results show that drawings are better recognized than photographs for sadness, anger, and fear (with no difference for happiness, due to a ceiling effect), and that the difference between the 2 types of stimuli tends to be larger for 5-year-olds than for 7-year-olds. These results are discussed in view of their implications, both for future research and for practical application.

  2. Mathematical problems in the application of multilinear models to facial emotion processing experiments

    NASA Astrophysics Data System (ADS)

    Andersen, Anders H.; Rayens, William S.; Li, Ren-Cang; Blonder, Lee X.

    2000-10-01

    In this paper we describe the enormous potential that multilinear models hold for the analysis of data from neuroimaging experiments that rely on functional magnetic resonance imaging (fMRI) or other imaging modalities. A case is made for why one might fully expect that the successful introduction of these models to the neuroscience community could define the next generation of structure-seeking paradigms in the area. In spite of the potential for immediate application, there is much to do from the perspective of statistical science. That is, although multilinear models have already been particularly successful in chemistry and psychology, relatively little is known about their statistical properties. To that end, our research group at the University of Kentucky has made significant progress. In particular, we are in the process of developing formal influence measures for multilinear methods as well as associated classification models and effective implementations. We believe that these problems will be among the most important and useful to the scientific community. Details are presented herein and an application is given in the context of facial emotion processing experiments.

  3. Wavelet-based multicomponent denoising on GPU to improve the classification of hyperspectral images

    NASA Astrophysics Data System (ADS)

    Quesada-Barriuso, Pablo; Heras, Dora B.; Argüello, Francisco; Mouriño, J. C.

    2017-10-01

    Supervised classification handles a wide range of remote sensing hyperspectral applications. Enhancing the spatial organization of the pixels over the image has proven beneficial for the interpretation of the image content, thus increasing classification accuracy. Denoising in the spatial domain of the image has been shown to enhance the structures in the image. This paper proposes a multicomponent denoising approach to increase the accuracy when a classification method is applied. It is computed on multicore CPUs and NVIDIA GPUs. The method combines feature extraction based on a 1D discrete wavelet transform (DWT) applied in the spectral dimension, followed by an Extended Morphological Profile (EMP) and a classifier (SVM or ELM). The multicomponent noise reduction is applied to the EMP just before classification. The denoising recursively applies a separable 2D DWT, after which the number of wavelet coefficients is reduced using a threshold. Finally, inverse 2D DWT filters are applied to reconstruct the noise-free original component. The computational cost of the classifiers, as well as that of the whole classification chain, is high, but it is reduced to real-time behavior for some applications through computation on NVIDIA multi-GPU platforms.
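
    A minimal CPU sketch of the thresholded 2D-DWT denoising applied to each EMP component is shown below, using PyWavelets; the paper's implementation targets multicore CPUs and NVIDIA GPUs, and the wavelet, level, and threshold value here are illustrative assumptions.

        # Sketch only: recursive separable 2D DWT, soft threshold, reconstruct.
        import pywt

        def dwt_denoise2d(band, wavelet="db4", level=2, thresh=0.1):
            coeffs = pywt.wavedec2(band, wavelet, level=level)
            approx, details = coeffs[0], list(coeffs[1:])
            details = [tuple(pywt.threshold(d, thresh, mode="soft") for d in trio)
                       for trio in details]                # shrink detail coeffs
            return pywt.waverec2([approx] + details, wavelet)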

  4. Application of principal component analysis to distinguish patients with schizophrenia from healthy controls based on fractional anisotropy measurements.

    PubMed

    Caprihan, A; Pearlson, G D; Calhoun, V D

    2008-08-15

    Principal component analysis (PCA) is often used to reduce the dimension of data before applying more sophisticated data analysis methods such as non-linear classification algorithms or independent component analysis. This practice is based on selecting components corresponding to the largest eigenvalues. If the ultimate goal is separation of data into two groups, then this set of components need not have the most discriminatory power. We measured the distance between two such populations using the Mahalanobis distance and chose the eigenvectors to maximize it, a modified PCA method which we call discriminant PCA (DPCA). DPCA was applied to diffusion tensor-based fractional anisotropy images to distinguish age-matched schizophrenia subjects from healthy controls. The performance of the proposed method was evaluated by the leave-one-out method. We show that for this fractional anisotropy data set, the classification error with 60 components was close to the minimum error and that the Mahalanobis distance was twice as large with DPCA as with PCA. Finally, by masking the discriminant function with the white matter tracts of the Johns Hopkins University atlas, we identified the left superior longitudinal fasciculus as the tract which gave the least classification error. In addition, with six optimally chosen tracts the classification error was zero.
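
    The core idea can be sketched as follows: compute ordinary PCA eigenvectors, then rank them by how well they separate the two groups (a per-component Mahalanobis-style criterion) instead of by eigenvalue. This is a simplified reading of the method, assuming binary labels coded 0/1, and is not the authors' exact algorithm.

        # Rough DPCA sketch: keep the most discriminative PCA eigenvectors.
        import numpy as np

        def dpca_components(X, y, n_keep=60):
            Xc = X - X.mean(axis=0)
            _, _, Vt = np.linalg.svd(Xc, full_matrices=False)  # PCA eigenvectors
            scores = Xc @ Vt.T                                 # component scores
            g0, g1 = scores[y == 0], scores[y == 1]
            pooled = 0.5 * (g0.var(axis=0) + g1.var(axis=0)) + 1e-12
            separation = (g0.mean(axis=0) - g1.mean(axis=0)) ** 2 / pooled
            keep = np.argsort(separation)[::-1][:n_keep]       # most discriminative
            return Vt[keep]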

  5. Tensor manifold-based extreme learning machine for 2.5-D face recognition

    NASA Astrophysics Data System (ADS)

    Chong, Lee Ying; Ong, Thian Song; Teoh, Andrew Beng Jin

    2018-01-01

    We explore the use of the Gabor regional covariance matrix (GRCM), a flexible matrix-based descriptor that embeds the Gabor features in the covariance matrix, as a 2.5-D facial descriptor and an effective means of feature fusion for 2.5-D face recognition problems. Despite its promise, matching is not a trivial problem for GRCM since it is a special instance of a symmetric positive definite (SPD) matrix that resides in non-Euclidean space as a tensor manifold. This implies that GRCM is incompatible with the existing vector-based classifiers and distance matchers. Therefore, we bridge the gap between the GRCM and the extreme learning machine (ELM), a vector-based classifier, for the 2.5-D face recognition problem. We put forward a tensor manifold-compliant ELM and its two variants by embedding the SPD matrix randomly into reproducing kernel Hilbert space (RKHS) via tensor kernel functions. To preserve the pair-wise distance of the embedded data, we orthogonalize the random-embedded SPD matrix. Hence, classification can be done using a simple ridge regressor, an integrated component of ELM, on the random orthogonal RKHS. Experimental results show that our proposed method is able to improve the recognition performance and further enhance the computational efficiency.

  6. A semi-supervised classification algorithm using the TAD-derived background as training data

    NASA Astrophysics Data System (ADS)

    Fan, Lei; Ambeau, Brittany; Messinger, David W.

    2013-05-01

    In general, spectral image classification algorithms fall into one of two categories: supervised and unsupervised. In unsupervised approaches, the algorithm automatically identifies clusters in the data without a priori information about those clusters (except perhaps the expected number of them). Supervised approaches require an analyst to identify training data to learn the characteristics of the clusters such that they can then classify all other pixels into one of the pre-defined groups. The classification algorithm presented here is a semi-supervised approach based on the Topological Anomaly Detection (TAD) algorithm. The TAD algorithm defines background components based on a mutual k-Nearest Neighbor graph model of the data, along with a spectral connected components analysis. Here, the largest components produced by TAD are used as regions of interest (ROIs), or training data, for a supervised classification scheme. By combining those ROIs with a Gaussian Maximum Likelihood (GML) or a Minimum Distance to the Mean (MDM) algorithm, we are able to achieve a semi-supervised classification method. We test this classification algorithm against data collected by the HyMAP sensor over the Cooke City, MT area and the University of Pavia scene.
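
    The final labeling step can be sketched as below: the largest TAD components supply the labeled training pixels, and a Minimum Distance to the Mean (MDM) rule assigns every remaining pixel to the nearest class mean. The TAD graph construction itself is omitted, and the function and variable names are illustrative.

        # Sketch of the MDM stage only; training pixels come from TAD components.
        import numpy as np

        def mdm_classify(pixels, train_pixels, train_labels):
            classes = np.unique(train_labels)
            means = np.stack([train_pixels[train_labels == c].mean(axis=0)
                              for c in classes])           # one mean per ROI/class
            d = np.linalg.norm(pixels[:, None, :] - means[None], axis=2)
            return classes[d.argmin(axis=1)]               # nearest class mean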

  7. Osteochondroma of the mandibular condyle: a classification system based on computed tomographic appearances.

    PubMed

    Chen, Min-jie; Yang, Chi; Qiu, Ya-ting; Zhou, Qin; Huang, Dong; Shi, Hui-min

    2014-09-01

    The objectives of this study were to introduce a classification of osteochondroma of the mandibular condyle based on computed tomographic images and to present our treatment experiences. From January 2002 to December 2012, a total of 61 patients with condylar osteochondroma were treated in our division. Both clinical and radiologic aspects were reviewed. The average follow-up period was 24.3 months, with a range of 6 to 120 months. Two types of condylar osteochondroma were identified: type 1 (protruding expansion) in 50 patients (82.0%) and type 2 (globular expansion) in 11 patients (18.0%). Type 1 condylar osteochondroma presented in 5 forms: anterior/anteromedial (58%), posterior/posteromedial (6%), medial (16%), lateral (6%), and gigantic (14%). Local resection was performed on patients with type 1 condylar osteochondroma. Subtotal condylectomy/total condylectomy using costochondral graft reconstruction, with or without orthognathic surgery, was performed on patients with type 2 condylar osteochondroma. During the follow-up period, tumor reformation, condyle absorption, and new deformity were not detected. Patients largely reattained facial symmetry. Preoperative classification based on computed tomographic images will help surgeons choose the suitable surgical procedure to treat condylar osteochondroma.

  8. Skull Base Erosion Resulting From Primary Tumors of the Temporomandibular Joint and Skull Base Region: Our Classification and Reconstruction Experience.

    PubMed

    Chen, Min-Jie; Yang, Chi; Zheng, Ji-Si; Bai, Guo; Han, Zi-Xiang; Wang, Yi-Wen

    2018-06-01

    We sought to introduce our classification and reconstruction protocol for skull base erosions in the temporomandibular joint and skull base region. Patients with neoplasms in the temporomandibular joint and skull base region treated from January 2006 to March 2017 were reviewed. Skull base erosion was classified into 3 types according to the size of the defect. We included 33 patients, of whom 5 (15.2%) had type I defects (including 3 in whom free fat grafts were placed and 2 in whom deep temporal fascial fat flaps were placed). There were 8 patients (24.2%) with type II defects, all of whom received deep temporal fascial fat flaps. A total of 20 patients (60.6%) had type III defects, including 17 in whom autogenous bone grafts were placed, 1 in whom titanium mesh was placed, and 2 who received total alloplastic joints. The mean follow-up period was 50 months. All of the patients exhibited stable occlusion and good facial symmetry. No recurrence was noted. Our classification and reconstruction principles allowed reliable morpho-functional skull base reconstruction. Copyright © 2018 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.

  9. Automated reuseable components system study results

    NASA Technical Reports Server (NTRS)

    Gilroy, Kathy

    1989-01-01

    The Automated Reusable Components System (ARCS) was developed under a Phase 1 Small Business Innovative Research (SBIR) contract for the U.S. Army CECOM. The objectives of the ARCS program were: (1) to investigate issues associated with automated reuse of software components, identify alternative approaches, and select promising technologies, and (2) to develop tools that support component classification and retrieval. The approach followed was to research emerging techniques and experimental applications associated with reusable software libraries, to investigate the more mature information retrieval technologies for applicability, and to investigate the applicability of specialized technologies to improve the effectiveness of a reusable component library. Various classification schemes and retrieval techniques were identified and evaluated for potential application in an automated library system for reusable components. Strategies for library organization and management, component submittal and storage, and component search and retrieval were developed. A prototype ARCS was built to demonstrate the feasibility of automating the reuse process. The prototype was created using a subset of the classification and retrieval techniques that were investigated. The demonstration system was exercised and evaluated using reusable Ada components selected from the public domain. A requirements specification for a production-quality ARCS was also developed.

  10. The Role of the Limbic System in Human Communication.

    ERIC Educational Resources Information Center

    Lamendella, John T.

    Linguistics has chosen as its niche the language component of human communication and, naturally enough, the neurolinguist has concentrated on lateralized language systems of the cerebral hemispheres. However, decoding a speaker's total message requires attention to gestures, facial expressions, and prosodic features, as well as other somatic and…

  11. Similar exemplar pooling processes underlie the learning of facial identity and handwriting style: Evidence from typical observers and individuals with Autism.

    PubMed

    Ipser, Alberta; Ring, Melanie; Murphy, Jennifer; Gaigg, Sebastian B; Cook, Richard

    2016-05-01

    Considerable research has addressed whether the cognitive and neural representations recruited by faces are similar to those engaged by other types of visual stimuli. For example, research has examined the extent to which objects of expertise recruit holistic representation and engage the fusiform face area. Little is known, however, about the domain-specificity of the exemplar pooling processes thought to underlie the acquisition of familiarity with particular facial identities. In the present study we sought to compare observers' ability to learn facial identities and handwriting styles from exposure to multiple exemplars. Crucially, while handwritten words and faces differ considerably in their topographic form, both learning tasks share a common exemplar pooling component. In our first experiment, we find that typical observers' ability to learn facial identities and handwriting styles from exposure to multiple exemplars correlates closely. In our second experiment, we show that observers with Autism Spectrum Disorder (ASD) are impaired at both learning tasks. Our findings suggest that similar exemplar pooling processes are recruited when learning facial identities and handwriting styles. Models of exemplar pooling originally developed to explain face learning, may therefore offer valuable insights into exemplar pooling across a range of domains, extending beyond faces. Aberrant exemplar pooling, possibly resulting from structural differences in the inferior longitudinal fasciculus, may underlie difficulties recognising familiar faces often experienced by individuals with ASD, and leave observers overly reliant on local details present in particular exemplars. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. Emotional facial expressions evoke faster orienting responses, but weaker emotional responses at neural and behavioural levels compared to scenes: A simultaneous EEG and facial EMG study.

    PubMed

    Mavratzakis, Aimee; Herbert, Cornelia; Walla, Peter

    2016-01-01

    In the current study, electroencephalography (EEG) was recorded simultaneously with facial electromyography (fEMG) to determine whether emotional faces and emotional scenes are processed differently at the neural level. In addition, it was investigated whether these differences can be observed at the behavioural level via spontaneous facial muscle activity. Emotional content of the stimuli did not affect early P1 activity. Emotional faces elicited enhanced amplitudes of the face-sensitive N170 component, while its counterpart, the scene-related N100, was not sensitive to the emotional content of scenes. At 220-280 ms, the early posterior negativity (EPN) was enhanced only slightly for fearful as compared to neutral or happy faces. However, its amplitudes were significantly enhanced during processing of scenes with positive content, particularly over the right hemisphere. Scenes of positive content also elicited enhanced spontaneous zygomatic activity from 500-750 ms onwards, while happy faces elicited no such changes. Contrastingly, both fearful faces and negative scenes elicited enhanced spontaneous corrugator activity at 500-750 ms after stimulus onset. However, relative to baseline, EMG changes occurred earlier for faces (250 ms) than for scenes (500 ms), whereas for scenes the activity changes were more pronounced over the whole viewing period. Taking all effects into account, the data suggest that emotional facial expressions evoke faster attentional orienting, but weaker affective neural activity and emotional behavioural responses, compared to emotional scenes. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.

  13. Non-invasive stimulation of the vibrissal pad improves recovery of whisking function after simultaneous lesion of the facial and infraorbital nerves in rats.

    PubMed

    Bendella, H; Pavlov, S P; Grosheva, M; Irintchev, A; Angelova, S K; Merkel, D; Sinis, N; Kaidoglou, K; Skouras, E; Dunlop, S A; Angelov, Doychin N

    2011-07-01

    We have recently shown that manual stimulation of target muscles promotes functional recovery after transection and surgical repair to pure motor nerves (facial: whisking and blink reflex; hypoglossal: tongue position). However, following facial nerve repair, manual stimulation is detrimental if sensory afferent input is eliminated by, e.g., infraorbital nerve extirpation. To further understand the interplay between sensory input and motor recovery, we performed simultaneous cut-and-suture lesions on both the facial and the infraorbital nerves and examined whether stimulation of the sensory afferents from the vibrissae by a forced use would improve motor recovery. The efficacy of 3 treatment paradigms was assessed: removal of the contralateral vibrissae to ensure a maximal use of the ipsilateral ones (vibrissal stimulation; Group 2), manual stimulation of the ipsilateral vibrissal muscles (Group 3), and vibrissal stimulation followed by manual stimulation (Group 4). Data were compared to controls which underwent surgery but did not receive any treatment (Group 1). Four months after surgery, all three treatments significantly improved the amplitude of vibrissal whisking to 30° versus 11° in the controls of Group 1. The three treatments also reduced the degree of polyneuronal innervation of target muscle fibers to 37% versus 58% in Group 1. These findings indicate that forced vibrissal use and manual stimulation, either alone or sequentially, reduce target muscle polyinnervation and improve recovery of whisking function when both the sensory and the motor components of the trigemino-facial system regenerate.

  14. Quantified Facial Soft-tissue Strain in Animation Measured by Real-time Dynamic 3-Dimensional Imaging.

    PubMed

    Hsu, Vivian M; Wes, Ari M; Tahiri, Youssef; Cornman-Homonoff, Joshua; Percec, Ivona

    2014-09-01

    The aim of this study is to evaluate and quantify dynamic soft-tissue strain in the human face using real-time 3-dimensional imaging technology. Thirteen subjects (8 women, 5 men) between the ages of 18 and 70 were imaged using a dual-camera system and 3-dimensional optical analysis (ARAMIS, Trilion Quality Systems, Pa.). Each subject was imaged at rest and with the following facial expressions: (1) smile, (2) laughter, (3) surprise, (4) anger, (5) grimace, and (6) pursed lips. The facial strains defining stretch and compression were computed for each subject and compared. The areas of greatest strain were localized to the midface and lower face for all expressions. Subjects over the age of 40 had a statistically significant increase in stretch in the perioral region during lip pursing compared with subjects under the age of 40 (58.4% vs 33.8%, P = 0.015). When specific components of lip pursing were analyzed, there was a significantly greater degree of stretch in the nasolabial fold region in subjects over 40 compared with those under 40 (61.6% vs 32.9%, P = 0.007). Furthermore, we observed a greater degree of asymmetry of strain in the nasolabial fold region in the older age group (18.4% vs 5.4%, P = 0.03). This pilot study illustrates that the face can be objectively and quantitatively evaluated using dynamic major strain analysis. The technology of 3-dimensional optical imaging can be used to advance our understanding of facial soft-tissue dynamics and the effects of animation on facial strain over time.

  15. Derivation of simple rules for complex flow vector fields on the lower part of the human face for robot face design.

    PubMed

    Ishihara, Hisashi; Ota, Nobuyuki; Asada, Minoru

    2017-11-27

    It is quite difficult for android robots to replicate the numerous and various types of human facial expressions owing to limitations in terms of space, mechanisms, and materials. This situation could be improved with greater knowledge regarding these expressions and their deformation rules, i.e., by using a biomimetic approach. In a previous study, we investigated 16 facial deformation patterns and found that each facial point moves almost exclusively in its own principal direction, with different deformation patterns created by different combinations of moving lengths. However, the replication errors caused by moving each control point of a face only in its principal direction were not evaluated for each deformation pattern at that time. Therefore, in this study we calculated the replication errors using the second principal component scores of the 16 sets of flow vectors at each point on the face. More than 60% of the errors were within 1 mm, and approximately 90% of them were within 3 mm. The average error was 1.1 mm. These results indicate that robots can replicate the 16 investigated facial expressions with errors within 3 mm and 1 mm for about 90% and 60% of the vectors, respectively, even if each point on the robot face moves only in its own principal direction. This finding seems promising for the development of robots capable of showing various facial expressions, because significantly fewer types of movements than previously predicted are necessary.
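
    The error computation described above can be sketched for a single control point: project its 16 deformation vectors onto the point's own principal direction and measure what remains off-axis. This is a plausible reconstruction under stated assumptions, not the authors' published code.

        # Per-point replication error after motion restricted to one axis.
        import numpy as np

        def replication_errors(flows):
            """flows: (16, 3) displacements of one control point, one per pattern."""
            _, _, Vt = np.linalg.svd(flows, full_matrices=False)
            principal = Vt[0]                              # the point's main axis
            along = flows @ principal                      # moving length per pattern
            residual = flows - np.outer(along, principal)  # off-axis remainder
            return np.linalg.norm(residual, axis=1)        # replication error (mm)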

  16. Wavelet based de-noising of breath air absorption spectra profiles for improved classification by principal component analysis

    NASA Astrophysics Data System (ADS)

    Kistenev, Yu. V.; Shapovalov, A. V.; Borisov, A. V.; Vrazhnov, D. A.; Nikolaev, V. V.; Nikiforova, O. Yu.

    2015-11-01

    We present a comparison of different mother wavelets used for de-noising model and experimental data represented by profiles of absorption spectra of exhaled air. The impact of wavelet de-noising on the quality of subsequent classification by principal component analysis is also discussed.
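
    A hedged sketch of such a comparison loop follows: denoise each absorption profile with several candidate mother wavelets, then check how well the leading principal components separate two classes afterwards. The wavelet list, threshold rule, and separation measure are illustrative assumptions, not the authors' choices.

        # Sketch only: mother-wavelet comparison for denoising before PCA.
        import numpy as np
        import pywt
        from sklearn.decomposition import PCA

        def denoise(profile, wavelet):
            coeffs = pywt.wavedec(profile, wavelet, level=4)
            coeffs[1:] = [pywt.threshold(c, 0.1 * np.abs(c).max(), mode="soft")
                          for c in coeffs[1:]]
            return pywt.waverec(coeffs, wavelet)[:len(profile)]

        def compare_wavelets(X, labels, wavelets=("db4", "sym8", "coif3")):
            """X: (n_samples, n_points) spectra; labels: binary 0/1 classes."""
            results = {}
            for w in wavelets:
                X_dn = np.array([denoise(p, w) for p in X])
                scores = PCA(n_components=2).fit_transform(X_dn)
                c0 = scores[labels == 0].mean(axis=0)      # class centroids in
                c1 = scores[labels == 1].mean(axis=0)      # the PC plane
                results[w] = np.linalg.norm(c0 - c1)       # crude separation
            return results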

  17. European validation of The Comprehensive International Classification of Functioning, Disability and Health Core Set for Osteoarthritis from the perspective of patients with osteoarthritis of the knee or hip.

    PubMed

    Weigl, Martin; Wild, Heike

    2017-09-15

    To validate the International Classification of Functioning, Disability and Health Comprehensive Core Set for Osteoarthritis from the patient perspective in Europe. This multicenter cross-sectional study involved 375 patients with knee or hip osteoarthritis. Trained health professionals completed the Comprehensive Core Set, and patients completed the Short-Form 36 questionnaire. Content validity was evaluated by calculating prevalences of impairments in body function and structures, limitations in activities and participation and environmental factors, which were either barriers or facilitators. Convergent construct validity was evaluated by correlating the International Classification of Functioning, Disability and Health categories with the Short-Form 36 Physical Component Score and the SF-36 Mental Component Score in a subgroup of 259 patients. The prevalences of all body function, body structure and activities and participation categories were >40%, >32% and >20%, respectively, and all environmental factors were relevant for >16% of patients. Few categories showed relevant differences between knee and hip osteoarthritis. All body function categories and all but two activities and participation categories showed significant correlations with the Physical Component Score. Body functions from the ICF chapter Mental Functions showed higher correlations with the Mental Component Score than with the Physical Component Score. This study supports the validity of the International Classification of Functioning, Disability and Health Comprehensive Core Set for Osteoarthritis. Implications for Rehabilitation Comprehensive International Classification of Functioning, Disability and Health Core Sets were developed as practical tools for application in multidisciplinary assessments. The validity of the Comprehensive International Classification of Functioning, Disability and Health Core Set for Osteoarthritis in this study supports its application in European patients with osteoarthritis. The differences in results between this European validation study and a previous Singaporean validation study underscore the need to validate the International Classification of Functioning, Disability and Health Core Sets in different regions of the world.

  18. 10 CFR 1045.17 - Classification levels.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... classification include detailed technical descriptions of critical features of a nuclear explosive design that... classification include designs for specific weapon components (not revealing critical features), key features of uranium enrichment technologies, or specifications of weapon materials. (3) Confidential. The Director of...

  19. 10 CFR 1045.17 - Classification levels.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... classification include detailed technical descriptions of critical features of a nuclear explosive design that... classification include designs for specific weapon components (not revealing critical features), key features of uranium enrichment technologies, or specifications of weapon materials. (3) Confidential. The Director of...

  20. 10 CFR 1045.17 - Classification levels.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... classification include detailed technical descriptions of critical features of a nuclear explosive design that... classification include designs for specific weapon components (not revealing critical features), key features of uranium enrichment technologies, or specifications of weapon materials. (3) Confidential. The Director of...

  1. 10 CFR 1045.17 - Classification levels.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... classification include detailed technical descriptions of critical features of a nuclear explosive design that... classification include designs for specific weapon components (not revealing critical features), key features of uranium enrichment technologies, or specifications of weapon materials. (3) Confidential. The Director of...

  2. Characterization of craniomaxillofacial battle injuries sustained by United States service members in the current conflicts of Iraq and Afghanistan.

    PubMed

    Lew, Timothy A; Walker, John A; Wenke, Joseph C; Blackbourne, Lorne H; Hale, Robert G

    2010-01-01

    To characterize and describe the craniomaxillofacial (CMF) battlefield injuries sustained by US Service Members in Operation Iraqi Freedom and Operation Enduring Freedom. The Joint Theater Trauma Registry was queried from October 19, 2001, to December 11, 2007, for CMF battlefield injuries. The CMF injuries were identified using "International Classification of Diseases, Ninth Revision, Clinical Modification" codes, and the data were compiled for battlefield-injured service members. Nonbattlefield injuries, killed-in-action cases, and return-to-duty cases were excluded. CMF battlefield injuries were found in 2,014 of the 7,770 battlefield-injured US service members, who sustained a total of 4,783 CMF injuries (2.4 injuries per injured service member). The incidence of CMF battlefield injuries by branch of service was Army, 72%; Marines, 24%; Navy, 2%; and Air Force, 1%. The incidence of penetrating soft-tissue injuries and fractures was 58% and 27%, respectively. Of the fractures, 76% were open. The location of the facial fractures was the mandible in 36%, maxilla/zygoma in 19%, nasal in 14%, and orbit in 11%; the remaining 20% were not otherwise specified. The primary mechanism of injury involved explosive devices (84%). Of the injured US service members, 26% had injuries to the CMF region in the Operation Iraqi Freedom/Operation Enduring Freedom conflicts during a 6-year period. Multiple penetrating soft-tissue injuries and fractures caused by explosive devices were frequently seen. Increased survivability because of body armor, advanced battlefield medicine, and the increased use of explosive devices is probably related to the elevated incidence of CMF battlefield injuries. The current use of "International Classification of Diseases, Ninth Revision, Clinical Modification" codes with the Joint Theater Trauma Registry failed to characterize the severity of facial wounds.

  3. A comparison of autonomous techniques for multispectral image analysis and classification

    NASA Astrophysics Data System (ADS)

    Valdiviezo-N., Juan C.; Urcid, Gonzalo; Toxqui-Quitl, Carina; Padilla-Vivanco, Alfonso

    2012-10-01

    Multispectral imaging has given rise to important applications related to classification and identification of objects in a scene. Because multispectral instruments can be used to estimate the reflectance of materials in the scene, these techniques constitute fundamental tools for materials analysis and quality control. During the last years, a variety of algorithms has been developed to work with multispectral data, whose main purpose has been to perform the correct classification of the objects in the scene. The present study introduces a brief review of some classical techniques, as well as a novel technique, that have been used for such purposes. The use of principal component analysis and K-means clustering as important classification algorithms is discussed here. Moreover, a recent method based on the min-W and max-M lattice auto-associative memories, which was proposed for endmember determination in hyperspectral imagery, is introduced as a classification method. Besides a discussion of their mathematical foundation, we emphasize their main characteristics and the results achieved for two exemplar images composed of objects similar in appearance but spectrally different. The classification results show that the first components computed from principal component analysis can be used to highlight areas with different spectral characteristics. In addition, the use of lattice auto-associative memories provides good results for materials classification even in cases where spectral similarities appear in the spectral responses.
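
    A minimal sketch of the two classical techniques discussed (PCA to highlight spectrally distinct areas, followed by K-means clustering of pixels) is given below; the lattice auto-associative memory method is not reproduced here, and the component and cluster counts are illustrative.

        # Sketch only: PCA + K-means pixel clustering of a multispectral cube.
        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.cluster import KMeans

        def cluster_multispectral(cube, n_components=3, n_clusters=5):
            """cube: (rows, cols, bands) multispectral image."""
            pixels = cube.reshape(-1, cube.shape[-1])
            comp = PCA(n_components=n_components).fit_transform(pixels)
            labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(comp)
            return labels.reshape(cube.shape[:2])          # per-pixel class map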

  4. Physiology-based face recognition in the thermal infrared spectrum.

    PubMed

    Buddharaju, Pradeep; Pavlidis, Ioannis T; Tsiamyrtzis, Panagiotis; Bazakos, Mike

    2007-04-01

    The current dominant approaches to face recognition rely on facial characteristics that are on or over the skin. Some of these characteristics have low permanency, can be altered, and their phenomenology varies significantly with environmental factors (e.g., lighting). Many methodologies have been developed to address these problems to various degrees. However, the current framework of face recognition research has a potential weakness due to its very nature. We present a novel framework for face recognition based on physiological information. The motivation behind this effort is to capitalize on the permanency of innate characteristics that are under the skin. To establish feasibility, we propose a specific methodology to capture facial physiological patterns using the bioheat information contained in thermal imagery. First, the algorithm delineates the human face from the background using the Bayesian framework. Then, it localizes the superficial blood vessel network using image morphology. The extracted vascular network produces contour shapes that are characteristic of each individual. The branching points of the skeletonized vascular network are referred to as Thermal Minutia Points (TMPs) and constitute the feature database. To render the method robust to facial pose variations, we collect five different pose images for each subject to be stored in the database (center, mid-left profile, left profile, mid-right profile, and right profile). During the classification stage, the algorithm first estimates the pose of the test image. Then, it matches the local and global TMP structures extracted from the test image with those of the corresponding pose images in the database. We have conducted experiments on a multipose database of thermal facial images collected in our laboratory, as well as on the time-gap database of the University of Notre Dame. The good experimental results show that the proposed methodology has merit, especially with respect to the problem of low permanence over time. More importantly, the results demonstrate the feasibility of the physiological framework in face recognition and open the way for further methodological and experimental research in the area.
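
    The vascular feature step can be sketched as follows, assuming the vessel network has already been segmented from the thermal image: skeletonize the binary vessel mask and keep branching points (skeleton pixels with three or more skeleton neighbours) as Thermal Minutia Points. This is an illustrative reconstruction, not the authors' implementation.

        # Sketch only: branching points of a skeletonized vessel mask as TMPs.
        import numpy as np
        from scipy.ndimage import convolve
        from skimage.morphology import skeletonize

        def thermal_minutia_points(vessel_mask):
            skel = skeletonize(vessel_mask.astype(bool))
            kernel = np.ones((3, 3)); kernel[1, 1] = 0     # count 8-neighbours
            neighbours = convolve(skel.astype(int), kernel, mode="constant")
            return np.argwhere(skel & (neighbours >= 3))   # (row, col) of TMPs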

  5. Cognitive behavioural therapy attenuates the enhanced early facial stimuli processing in social anxiety disorders: an ERP investigation.

    PubMed

    Cao, Jianqin; Liu, Quanying; Li, Yang; Yang, Jun; Gu, Ruolei; Liang, Jin; Qi, Yanyan; Wu, Haiyan; Liu, Xun

    2017-07-28

    Previous studies of patients with social anxiety have demonstrated abnormal early processing of facial stimuli in social contexts. In other words, patients with social anxiety disorder (SAD) tend to exhibit enhanced early facial processing when compared to healthy controls. Few studies have examined the temporal, event-related potential (ERP)-indexed electrophysiological profiles elicited when individuals with SAD compare faces to objects. Systematic comparisons of ERPs to facial/object stimuli before and after therapy are also lacking. We used a passive visual detection paradigm with upright and inverted faces/objects, which are known to elicit early P1 and N170 components, to study abnormal early face processing and subsequent improvements in this measure in patients with SAD. Seventeen patients with SAD and 17 matched control participants performed a passive visual detection task while undergoing EEG. The healthy controls were compared to patients with SAD pre-therapy to test the hypothesis that patients with SAD show early hypervigilance to facial cues. We compared patients with SAD before and after therapy to test the hypothesis that this early hypervigilance to facial cues can be alleviated. Compared to healthy control (HC) participants, patients with SAD had a more robust P1-N170 slope, but no amplitude effects, in response to both upright and inverted faces and objects. Interestingly, we found that patients with SAD had reduced P1 responses to all objects and faces after therapy, but selectively reduced N170 responses to faces, and especially inverted faces. The slope from P1 to N170 in patients with SAD was flatter post-therapy than pre-therapy. Furthermore, the amplitude of the N170 evoked by facial stimuli was correlated with scores on the Interaction Anxiousness Scale (IAS) after therapy. Our results did not provide electrophysiological support for early hypervigilance to faces in SAD, but confirm that cognitive-behavioural therapy can reduce the early visual processing of faces. These findings have potentially important therapeutic implications for the assessment and treatment of social anxiety. Trial registration HEBDQ2014021.

  6. The use of three-dimensional imaging to evaluate the effect of conventional orthodontic approach in treating a subject with facial asymmetry

    PubMed Central

    Kheir, Nadia Abou; Kau, Chung How

    2016-01-01

    The growth of the craniofacial skeleton takes place from the 3rd week of intra-uterine life until 18 years of age. During this period, the craniofacial complex is affected by extrinsic and intrinsic factors which guide or alter the pattern of growth. Asymmetry can be encountered due to these multifactorial effects or as the normal divergence of the hemifacial counterparts occurs. At present, an orthodontist plays a major role not only in diagnosing dental asymmetry but also facial asymmetry. However, an orthodontist's role in treating or camouflaging the asymmetry can be limited by its severity. The aim of this research is to report a technique for facial three-dimensional (3D) analysis used to measure the progress of a nonsurgical orthodontic treatment approach for a subject with maxillary asymmetry combined with mandibular angular asymmetry. The facial analysis was composed of five parts: upper face asymmetry analysis, maxillary analysis, maxillary cant analysis, mandibular cant analysis, and mandibular asymmetry analysis, applied using the 3D software InVivoDental 5.2.3 (Anatomage Company, San Jose, CA, USA). The five components of the facial analysis were applied to the initial cone-beam computed tomography (T1) for diagnosis. Maxillary analysis, maxillary cant analysis, and mandibular cant analysis were applied to measure the progress of the orthodontic treatment (T2). Twenty-two bilateral linear measurements and sixteen angular criteria were used to analyze the facial structures using different anthropometric landmarks. Only angular mandibular asymmetry was reported. However, at T1 the subject had a maxillary alveolar ridge cant of 9.96° and a dental maxillary cant of 2.95°; the mandibular alveolar ridge cant was 7.41° and the mandibular dental cant was 8.39°. At T2, the largest decreases in cant were reported for the maxillary alveolar ridge (around 2.35°) and the mandibular alveolar ridge (around 3.96°). Facial 3D analysis is considered a useful adjunct in evaluating inter-arch biomechanics. PMID:27563618

  7. A Systematic Classification for HVAC Systems and Components

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Han; Chen, Yan; Zhang, Jian

    Depending on the application, the complexity of an HVAC system can range from a small fan coil unit to a large centralized air conditioning system with primary and secondary distribution loops and central plant components. Currently, the taxonomy of HVAC systems and their components has various aspects and can get quite complex because of the many components and system configurations. For example, based on the cooling and heating medium delivered to terminal units, systems can be classified as air systems, water systems, or air-water systems. In addition, some system names may be used in a confusing manner, such as "unitary system" vs. "packaged system." Without a systematic classification, these components and system terminology can be confusing to understand or differentiate from each other, which creates ambiguity in communication, interpretation, and documentation. It is valuable to organize and classify HVAC systems and components so that they can be easily understood and used in a consistent manner. This paper aims to develop a systematic classification of HVAC systems and components. First, we summarize HVAC component information and definitions based on published literature, such as ASHRAE handbooks, regulations, and rating standards. Then, we identify common HVAC system types and map them to the collected components in a meaningful way. Classification charts are generated and described based on the component information. Six main categories are identified for the HVAC components and equipment: heating and cooling production, heat extraction and rejection, air handling process, distribution system, terminal use, and stand-alone system. Components of each main category are further analyzed and classified in detail. More than fifty system names are identified and grouped based on their characteristics. The results of this paper will be helpful for education, communication, and system and component documentation.
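
    To show how such a classification can be made machine-readable, here is a small illustrative encoding of the six main categories with a few example component mappings; the example components are common equipment types chosen for illustration and are not the paper's full chart.

        # Illustrative encoding of the six main HVAC categories.
        from enum import Enum

        class HVACCategory(Enum):
            HEATING_COOLING_PRODUCTION = 1   # e.g., boilers, chillers
            HEAT_EXTRACTION_REJECTION = 2    # e.g., cooling towers
            AIR_HANDLING_PROCESS = 3         # e.g., air handling units, coils
            DISTRIBUTION_SYSTEM = 4          # e.g., ducts, pumps, piping
            TERMINAL_USE = 5                 # e.g., VAV boxes, fan coil units
            STAND_ALONE_SYSTEM = 6           # e.g., packaged rooftop units

        COMPONENT_CATEGORY = {
            "chiller": HVACCategory.HEATING_COOLING_PRODUCTION,
            "cooling tower": HVACCategory.HEAT_EXTRACTION_REJECTION,
            "vav box": HVACCategory.TERMINAL_USE,
        }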

  8. Middle and inner ear malformations in mutation-proven branchio-oculo-facial (BOF) syndrome: case series and review of the literature.

    PubMed

    Carter, Melissa T; Blaser, Susan; Papsin, Blake; Meschino, Wendy; Reardon, Willie; Klatt, Regan; Babul-Hirji, Riyana; Milunsky, Jeff; Chitayat, David

    2012-08-01

    Hearing impairment is common in individuals with branchio-oculo-facial (BOF) syndrome. The majority of described individuals have conductive hearing impairment due to malformed ossicles and/or external canal stenosis or atresia, although a sensorineural component to the hearing impairment in BOF syndrome is increasingly being reported. Sophisticated computed tomography (CT) of the temporal bone has revealed middle and inner ear malformations in three previous reports. We present middle and inner ear abnormalities in three additional individuals with mutation-proven BOF syndrome. We suggest that temporal bone CT imaging be included in the medical workup of a child with BOF syndrome, in order to guide management. Copyright © 2012 Wiley Periodicals, Inc.

  9. Implicit Binding of Facial Features During Change Blindness

    PubMed Central

    Lyyra, Pessi; Mäkelä, Hanna; Hietanen, Jari K.; Astikainen, Piia

    2014-01-01

    Change blindness refers to the inability to detect visual changes if introduced together with an eye-movement, blink, flash of light, or with distracting stimuli. Evidence of implicit detection of changed visual features during change blindness has been reported in a number of studies using both behavioral and neurophysiological measurements. However, it is not known whether implicit detection occurs only at the level of single features or whether complex organizations of features can be implicitly detected as well. We tested this in adult humans using intact and scrambled versions of schematic faces as stimuli in a change blindness paradigm while recording event-related potentials (ERPs). An enlargement of the face-sensitive N170 ERP component was observed at the right temporal electrode site to changes from scrambled to intact faces, even if the participants were not consciously able to report such changes (change blindness). Similarly, the disintegration of an intact face to scrambled features resulted in attenuated N170 responses during change blindness. Other ERP deflections were modulated by changes, but unlike the N170 component, they were indifferent to the direction of the change. The bidirectional modulation of the N170 component during change blindness suggests that implicit change detection can also occur at the level of complex features in the case of facial stimuli. PMID:24498165

  10. Implicit binding of facial features during change blindness.

    PubMed

    Lyyra, Pessi; Mäkelä, Hanna; Hietanen, Jari K; Astikainen, Piia

    2014-01-01

    Change blindness refers to the inability to detect visual changes if introduced together with an eye-movement, blink, flash of light, or with distracting stimuli. Evidence of implicit detection of changed visual features during change blindness has been reported in a number of studies using both behavioral and neurophysiological measurements. However, it is not known whether implicit detection occurs only at the level of single features or whether complex organizations of features can be implicitly detected as well. We tested this in adult humans using intact and scrambled versions of schematic faces as stimuli in a change blindness paradigm while recording event-related potentials (ERPs). An enlargement of the face-sensitive N170 ERP component was observed at the right temporal electrode site to changes from scrambled to intact faces, even if the participants were not consciously able to report such changes (change blindness). Similarly, the disintegration of an intact face to scrambled features resulted in attenuated N170 responses during change blindness. Other ERP deflections were modulated by changes, but unlike the N170 component, they were indifferent to the direction of the change. The bidirectional modulation of the N170 component during change blindness suggests that implicit change detection can also occur at the level of complex features in the case of facial stimuli.

  11. Impact of Social Cognition on Alcohol Dependence Treatment Outcome: Poorer Facial Emotion Recognition Predicts Relapse/Dropout.

    PubMed

    Rupp, Claudia I; Derntl, Birgit; Osthaus, Friederike; Kemmler, Georg; Fleischhacker, W Wolfgang

    2017-12-01

    Despite growing evidence for neurobehavioral deficits in social cognition in alcohol use disorder (AUD), the clinical relevance remains unclear, and little is known about its impact on treatment outcome. This study prospectively investigated the impact of neurocognitive social abilities at treatment onset on treatment completion. Fifty-nine alcohol-dependent patients were assessed with measures of social cognition including 3 core components of empathy via paradigms measuring: (i) emotion recognition (the ability to recognize emotions via facial expression), (ii) emotional perspective taking, and (iii) affective responsiveness at the beginning of inpatient treatment for alcohol dependence. Subjective measures were also obtained, including estimates of task performance and a self-report measure of empathic abilities (Interpersonal Reactivity Index). According to treatment outcomes, patients were divided into a patient group with a regular treatment course (e.g., with planned discharge and without relapse during treatment) or an irregular treatment course (e.g., relapse and/or premature and unplanned termination of treatment, "dropout"). Compared with patients completing treatment in a regular fashion, patients with relapse and/or dropout of treatment had significantly poorer facial emotion recognition ability at treatment onset. Additional logistic regression analyses confirmed these results and identified poor emotion recognition performance as a significant predictor for relapse/dropout. Self-report (subjective) measures did not correspond with neurobehavioral social cognition measures, respectively objective task performance. Analyses of individual subtypes of facial emotions revealed poorer recognition particularly of disgust, anger, and no (neutral faces) emotion in patients with relapse/dropout. Social cognition in AUD is clinically relevant. Less successful treatment outcome was associated with poorer facial emotion recognition ability at the beginning of treatment. Impaired facial emotion recognition represents a neurocognitive risk factor that should be taken into account in alcohol dependence treatment. Treatments targeting the improvement of these social cognition deficits in AUD may offer a promising future approach. Copyright © 2017 by the Research Society on Alcoholism.

  12. The effect of Ramadan fasting on spatial attention through emotional stimuli

    PubMed Central

    Molavi, Maziyar; Yunus, Jasmy; Utama, Nugraha P

    2016-01-01

    Fasting can influence psychological and mental states. In the current study, the effect of periodic fasting on the processing of emotion through gazed facial expressions, a realistic multisource form of social information, was investigated for the first time. A dynamic cue-target task was applied, with behavioral and event-related potential measurements for 40 participants, to reveal the temporal and spatial brain activities before, during, and after the fasting period. Fasting had several significant effects. The amplitude of the N1 component decreased over the centroparietal scalp during fasting. Furthermore, the reaction time during the fasting period decreased. Self-ratings of arousal deficit, as well as mood, increased during the fasting period. There was a significant contralateral alteration of P1 over the occipital area for happy facial expression stimuli. A significant effect of gazed expression and its interaction with the emotional stimuli was indicated by the amplitude of N1. Furthermore, the findings of the study confirmed the validity effect, a congruency between gaze and target position, as indicated by an increment of P3 amplitude over the centroparietal area, as well as slower reaction times in the behavioral data for incongruent (invalid) conditions between gaze and target position compared with valid conditions. The results of this study show that attention to facial expression stimuli, a kind of communicative social signal, was affected by fasting. Also, fasting improved the mood of practitioners. Moreover, findings from the behavioral and event-related potential data analyses indicated that the neural dynamics of facial emotion are processed faster than those of gazing, as the participants tended to react faster and preferred to rely on the type of facial emotion rather than on gaze direction while doing the task. For happy facial expression stimuli, right-hemisphere activation was greater than left-hemisphere activation, consistent with the emotional lateralization concept rather than the valence concept of emotional processing. PMID:27307772

  13. Intelligence, Surveillance, and Reconnaissance Fusion for Coalition Operations

    DTIC Science & Technology

    2008-07-01

    classification of the targets of interest. The MMI features extracted in this manner have two properties that provide a sound justification for...are generalizations of well-known feature extraction methods such as Principal Components Analysis (PCA) and Independent Component Analysis (ICA...augment (without degrading performance) a large class of generic fusion processes. Keywords: Ontologies, Classifications, Feature extraction, Feature analysis

  14. Facial emotion recognition system for autistic children: a feasible study based on FPGA implementation.

    PubMed

    Smitha, K G; Vinod, A P

    2015-11-01

    Children with autism spectrum disorder have difficulty in understanding emotional and mental states from the facial expressions of the people they interact with. The inability to understand other people's emotions hinders their interpersonal communication. Though many facial emotion recognition algorithms have been proposed in the literature, they are mainly intended for processing by a personal computer, which limits their usability in on-the-move applications where portability is desired. The portability of the system will ensure ease of use and real-time emotion recognition, aiding immediate feedback while communicating with caretakers. Principal component analysis (PCA) has been identified as the least complex feature extraction algorithm to be implemented in hardware. In this paper, we present a detailed study of serial and parallel implementations of PCA in order to identify the most feasible method for realization of a portable emotion detector for autistic children. The proposed emotion recognizer architectures are implemented on a Virtex 7 XC7VX330T FFG1761-3 FPGA. We achieved 82.3% detection accuracy for a word length of 8 bits.
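
    To relate the 8-bit word length to a software reference, a PCA feature extraction with the projections quantized to 8-bit integers can stand in for the FPGA fixed-point datapath; the scaling scheme below is a hypothetical sketch, not the paper's hardware design.

        # Sketch only: eigenface projection quantized to an 8-bit word length.
        import numpy as np

        def project_quantized(face, mean_face, eigenfaces, word_length=8):
            weights = eigenfaces @ (face - mean_face)      # PCA feature vector
            scale = (2 ** (word_length - 1) - 1) / (np.abs(weights).max() + 1e-12)
            return np.round(weights * scale).astype(np.int8)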

  15. Anatomical evidence regarding the existence of sustentaculum facies.

    PubMed

    Frâncu, L L; Hînganu, Delia; Hînganu, M V

    2013-01-01

    The face, seen as a unitary region, is subject to gravitational force. Since it is the main relational and socialization region of each individual, it presents unique means of suspension. The elevation system of the face is complex and includes four different elements: continuity with the epicranial fascia, adhesion of the superficial structures to the peri- and inter-orbital mimic muscles, ligament adhesions, and fixing ligaments of the superficial layers to the zygomatic process and to the facial fat pad. Each of these four elements was evaluated on 12 cephalic extremities, dissected in detail, layer by layer, and the images were captured with an informatics system connected to an operating microscope. The acquired mesoscopic images revealed the presence of a superficial musculo-aponeurotic system (SMAS) through which the anti-gravity suspension of the superficial facial structures becomes possible. This system counteracts facial aging, and the four elevation structures together form the so-called sustentaculum facies. The contribution of each of the four anatomic components and their management in facial rejuvenation surgery are discussed.

  16. Face aging effect simulation model based on multilayer representation and shearlet transform

    NASA Astrophysics Data System (ADS)

    Li, Yuancheng; Li, Yan

    2017-09-01

    In order to extract detailed facial features, we build a face aging simulation model based on multilayer representation and the shearlet transform. The face is divided into three layers: a global face layer, a local features layer, and a texture detail layer, and an aging model is established separately for each. First, the training samples are classified into age groups, and an active appearance model (AAM) is used at the global level to obtain facial features. Regression equations relating shape and texture to age are obtained by fitting support vector regression with a radial basis function kernel, and AAM is likewise used to simulate the aging of facial organs. Then, for the texture detail layer, the significant high-frequency components of the face are acquired using the multiscale shearlet transform. Finally, the simulated aged face images are obtained with a fusion algorithm. Experiments carried out on the FG-NET dataset show that the simulated face images differ little from the original images and achieve a good face aging simulation effect.
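
    The regression step lends itself to a brief sketch: fit an RBF-kernel support vector regression from AAM-style parameters to age. This is a hedged illustration on synthetic data, not the authors' code; the parameter dimensionality and age range are assumptions.

    ```python
    # Hedged sketch of the regression step only: RBF-kernel support vector
    # regression from synthetic AAM-style parameters to age. Dimensions and
    # the age range are invented for illustration.
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVR

    rng = np.random.default_rng(1)
    params = rng.normal(size=(200, 30))    # 200 faces, 30 AAM shape/texture parameters
    age = 20 + 40 * rng.random(200)        # ages spread over 20-60

    model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
    model.fit(params, age)

    # In an aging simulation, the fitted parameter-age relationship is used
    # to move a face's parameters toward an older target age group.
    print(model.predict(params[:3]))
    ```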

  17. Activity in the human brain predicting differential heart rate responses to emotional facial expressions.

    PubMed

    Critchley, Hugo D; Rotshtein, Pia; Nagai, Yoko; O'Doherty, John; Mathias, Christopher J; Dolan, Raymond J

    2005-02-01

    The James-Lange theory of emotion proposes that automatically generated bodily reactions color the subjective emotional experience of stimuli; this proposal necessitates a mechanism by which bodily reactions are differentially generated to reflect stimulus quality. To examine this putative mechanism, we simultaneously measured brain activity and heart rate to identify regions where neural activity predicted the magnitude of heart rate responses to emotional facial expressions. Using a forewarned reaction time task, we showed that orienting heart rate acceleration to emotional face stimuli was modulated as a function of the emotion depicted. The magnitude of evoked heart rate increase, both across the stimulus set and within each emotion category, was predicted by level of activity within a matrix of interconnected brain regions, including amygdala, insula, anterior cingulate, and brainstem. We suggest that these regions provide a substrate for translating visual perception of emotional facial expression into differential cardiac responses and thereby represent an interface for selective generation of visceral reactions that contribute to the embodied component of emotional reaction.

  18. Facial Fractures: Pearls and Perspectives.

    PubMed

    Chaudhry, Obaid; Isakson, Matthew; Franklin, Adam; Maqusi, Suhair; El Amm, Christian

    2018-05-01

    After studying this article, the participant should be able to: 1. Describe the A-frame configuration of anterior facial buttresses, recognize the importance of restoring anterior projection in frontal sinus fractures, and describe an alternative design and donor site of pericranial flaps in frontal sinus fractures. 2. Describe the symptoms and cause of pseudo-Brown syndrome, describe the anatomy and placement of a buttress-spanning plate in nasoorbitoethmoid fractures, and identify appropriate nasal support alternatives for nasoorbitoethmoid fractures. 3. Describe the benefits and disadvantages of different lower lid approaches to the orbital floor and inferior rim, identify late exophthalmos as a complication of reconstructing the orbital floor with nonporous alloplast, and select implant type and size for correction of secondary enophthalmos. 4. Describe closed reduction of low-energy zygomatic body fractures with the Gillies approach and identify situations where internal fixation may be unnecessary, identify situations where plating the inferior orbital rim may be avoided, and select fixation points for osteosynthesis of uncomplicated displaced zygomatic fractures. 5. Understand indications and complications of use for intermaxillary screw systems, understand sequencing panfacial fractures, describe the sulcular approach to mandible fractures, and describe principles and techniques of facial reconstruction after self-inflicted firearm injuries. Treating patients with facial trauma remains a core component of plastic surgery and a significant part of the value of a plastic surgeon to a health system.

  19. The neural correlates of internal and external comparisons: an fMRI study.

    PubMed

    Wen, Xue; Xiang, Yanhui; Cant, Jonathan S; Wang, Tingting; Cupchik, Gerald; Huang, Ruiwang; Mo, Lei

    2017-01-01

    Many previous studies have suggested that various comparisons rely on the same cognitive and neural mechanisms. However, little attention has been paid to exploring the commonalities and differences between the internal comparison based on concepts or rules and the external comparison based on perception. In the present experiment, moral beauty comparison and facial beauty comparison were selected as the representatives of internal comparison and external comparison, respectively. Functional magnetic resonance imaging (fMRI) was used to record brain activity while participants compared the level of moral beauty of two scene drawings containing moral acts or the level of facial beauty of two face photos. In addition, a physical size comparison task with the same stimuli as the beauty comparison was included. We observed that both the internal moral beauty comparison and external facial beauty comparison obeyed a typical distance effect and this behavioral effect recruited a common frontoparietal network involved in comparisons of simple physical magnitudes such as size. In addition, compared to external facial beauty comparison, internal moral beauty comparison induced greater activity in more advanced and complex cortical regions, such as the bilateral middle temporal gyrus and middle occipital gyrus, but weaker activity in the putamen, a subcortical region. Our results provide novel neural evidence for the comparative process and suggest that different comparisons may rely on both common cognitive processes as well as distinct and specific cognitive components.

  20. Effects of facial attractiveness on personality stimuli in an implicit priming task: an ERP study.

    PubMed

    Zhang, Yan; Zheng, Minxiao; Wang, Xiaoying

    2016-08-01

    Using event-related potentials (ERPs) in a priming paradigm, this study examines implicit priming in the association of personality words with facial attractiveness. A total of 16 participants (8 males and 8 females; age range, 19-24 years; mean age, 21.30 years) were asked to judge the color (red or green) of positive or negative personality words after exposure to priming stimuli (attractive and unattractive facial images). Positive personality words primed by attractive faces and negative personality words primed by unattractive faces were defined as congruent trials, whereas positive personality words primed by unattractive faces and negative personality words primed by attractive faces were defined as incongruent trials. Behavioral results showed that trials primed with attractive faces had longer reaction times and higher accuracy rates than trials primed with unattractive faces. Moreover, a more negative ERP deflection (N2 component) was observed in the incongruent condition than in the congruent condition. In addition, personality words presented after attractive faces elicited larger amplitudes from the frontal to the central region (P2 and P350-550 ms) than personality words presented after unattractive faces. The study provides evidence for the facial attractiveness stereotype ('What is beautiful is good') through an implicit priming task.

  1. External auditory canal cholesteatoma and keratosis obturans: the role of imaging in preventing facial nerve injury.

    PubMed

    McCoul, Edward D; Hanson, Matthew B

    2011-12-01

    We conducted a retrospective study to compare the clinical characteristics of external auditory canal cholesteatoma (EACC) with those of a similar entity, keratosis obturans (KO). We also sought to identify those aspects of each disease that may lead to complications. We identified 6 patients in each group. Imaging studies were reviewed for evidence of bony erosion and the proximity of disease to vital structures. All 6 patients in the EACC group had their diagnosis confirmed by computed tomography (CT), which demonstrated widening of the bony external auditory canal; 4 of these patients had critical erosion of bone adjacent to the facial nerve. Of the 6 patients with KO, only 2 had undergone CT, and neither exhibited any significant bony erosion or expansion; 1 of them developed osteomyelitis of the temporal bone and adjacent temporomandibular joint. Another patient manifested KO as part of a dermatophytid reaction. The essential component of treatment in all cases of EACC was microscopic debridement of the ear canal. We conclude that EACC may produce significant erosion of bone with exposure of vital structures, including the facial nerve. Because of the clinical similarity of EACC to KO, misdiagnosis is possible. Temporal bone imaging should be obtained prior to attempts at debridement of suspected EACC. Increased awareness of these uncommon conditions is warranted to prompt appropriate investigation and prevent iatrogenic complications such as facial nerve injury.

  2. Effect of empathy trait on attention to various facial expressions: evidence from N170 and late positive potential (LPP)

    PubMed Central

    2014-01-01

    Background The present study sought to clarify the relationship between empathy trait and attention responses to happy, angry, surprised, afraid, and sad facial expressions. As indices of attention, we recorded event-related potentials (ERP) and focused on N170 and late positive potential (LPP) components. Methods Twenty-two participants (12 males, 10 females) discriminated facial expressions (happy, angry, surprised, afraid, and sad) from emotionally neutral faces under an oddball paradigm. The empathy trait of participants was measured using the Interpersonal Reactivity Index (IRI, J Pers Soc Psychol 44:113–126, 1983). Results Participants with higher IRI scores showed: 1) more negative amplitude of N170 (140 to 200 ms) in the right posterior temporal area elicited by happy, angry, surprised, and afraid faces; 2) more positive amplitude of early LPP (300 to 600 ms) in the parietal area elicited in response to angry and afraid faces; and 3) more positive amplitude of late LPP (600 to 800 ms) in the frontal area elicited in response to happy, angry, surprised, afraid, and sad faces, compared to participants with lower IRI scores. Conclusions These results suggest that individuals with high empathy pay attention to various facial expressions more than those with low empathy, from very-early stage (reflected in N170) to late-stage (reflected in LPP) processing of faces. PMID:24975115

  3. Automatic mimicry reactions as related to differences in emotional empathy.

    PubMed

    Sonnby-Borgström, Marianne

    2002-12-01

    The hypotheses of this investigation were derived by conceiving of automatic mimicking as a component of emotional empathy. Differences between subjects high and low in emotional empathy were investigated. The parameters compared were facial mimicry reactions, as represented by electromyographic (EMG) activity when subjects were exposed to pictures of angry or happy faces, and the degree of correspondence between subjects' facial EMG reactions and their self-reported feelings. The comparisons were made at different stimulus exposure times in order to elicit reactions at different levels of information processing. The high-empathy subjects were found to have a higher degree of mimicking behavior than the low-empathy subjects, a difference that emerged at short exposure times (17-40 ms) representing automatic reactions. Already at these short exposure times, the low-empathy subjects tended to show inverse zygomaticus muscle reactions, namely "smiling" when exposed to an angry face. The high-empathy group was characterized by a significantly higher correspondence between facial expressions and self-reported feelings. No differences were found between the high- and low-empathy subjects in their verbally reported feelings when presented with a happy or an angry face. Thus, the differences between the groups in emotional empathy appeared to be related to differences in automatic somatic reactions to facial stimuli rather than to differences in their conscious interpretation of the emotional situation.

  4. Is it possible to define the ideal lips?

    PubMed

    Kar, M; Muluk, N B; Bafaqeeh, S A; Cingi, C

    2018-02-01

    The lips are an essential component of the symmetry and aesthetics of the face. Cosmetic surgery to modify the lips has recently gained in popularity, but the results are in some cases disastrous. In this review, we describe the features of the ideal lips for an individual's face. The features of the ideal lips with respect to facial anatomy, important anatomical landmarks of the face, the facial proportions of the lips, and ethnic and sexual differences are described. The projection and relative sizes of the upper and lower lips are as significant to lip aesthetics as the proportion of the lips to the rest of the facial structure. Robust, pouty lips are considered sexually attractive by both males and females. Horizontal thirds and the golden ratio describe the proportions that contribute to the beauty and attractiveness of the lips. In young Caucasians, the ideal ratio of the vertical height of the upper lip to that of the lower lip is 1:1.6. Blacks, genetically, have a greater lip volume. The shape and volume of a person's lips are of great importance in the human perception of beauty, and the appearance of the lips in part determines the attractiveness of a person's face. In females, fuller lips in relation to facial width, as well as greater vermilion height, are considered attractive. Copyright © 2018 Società Italiana di Otorinolaringologia e Chirurgia Cervico-Facciale, Rome, Italy.

  5. Combining features from ERP components in single-trial EEG for discriminating four-category visual objects.

    PubMed

    Wang, Changming; Xiong, Shi; Hu, Xiaoping; Yao, Li; Zhang, Jiacai

    2012-10-01

    Images containing visual objects can be successfully categorized using single-trial electroencephalography (EEG) measured while subjects view the images. Previous studies have shown that task-related information contained in event-related potential (ERP) components can discriminate two or three categories of object images. In this study, we investigated whether four categories of objects (human faces, buildings, cats and cars) could be mutually discriminated using single-trial EEG data. The EEG waveforms acquired while subjects were viewing the four categories of object images were segmented into several ERP components (P1, N1, P2a and P2b), and Fisher linear discriminant analysis (Fisher-LDA) was used to classify EEG features extracted from these components. First, we compared classification results using features from single ERP components, and found that the N1 component achieved the highest classification accuracies. Second, we discriminated the four categories of objects using combined features from multiple ERP components, and showed that combining ERP components improved four-category classification accuracy by exploiting the complementary discriminative information in the components. These findings confirm that four categories of object images can be discriminated with single-trial EEG and can guide the selection of effective EEG features for classifying visual objects.
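
    The analysis pattern described here (window the ERP, average within each component's latency range, classify with Fisher-LDA) can be sketched as follows on synthetic data; window boundaries, sampling rate, and channel count are placeholders rather than the study's actual values.

    ```python
    # Sketch of the ERP-window feature scheme on synthetic data: mean
    # amplitude per channel within each component's latency window, then
    # Fisher LDA, for single windows and for their concatenation. Window
    # boundaries, channel count and sampling rate are placeholders.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(2)
    eeg = rng.normal(size=(400, 32, 250))      # trials x channels x samples
    labels = rng.integers(0, 4, 400)           # four object categories

    windows = {"P1": (20, 35), "N1": (35, 50), "P2a": (50, 65), "P2b": (65, 80)}

    def window_features(name):
        lo, hi = windows[name]
        return eeg[:, :, lo:hi].mean(axis=2)   # mean amplitude per channel

    for name in windows:                       # single-component accuracies
        acc = cross_val_score(LinearDiscriminantAnalysis(),
                              window_features(name), labels, cv=5).mean()
        print(name, round(acc, 3))

    combined = np.hstack([window_features(n) for n in windows])
    print("combined", round(cross_val_score(LinearDiscriminantAnalysis(),
                                            combined, labels, cv=5).mean(), 3))
    ```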

  6. Principal component analysis of the Norwegian version of the quality of life in late-stage dementia scale.

    PubMed

    Mjørud, Marit; Kirkevold, Marit; Røsvik, Janne; Engedal, Knut

    2014-01-01

    To investigate which factors the Quality of Life in Late-Stage Dementia (QUALID) scale captures when used among people with dementia (pwd) in nursing homes, and to find out how symptom load varies across the different severity levels of dementia. We included 661 pwd [mean age ± SD, 85.3 ± 8.6 years; 71.4% women]. The QUALID and the Clinical Dementia Rating (CDR) scale were applied. A principal component analysis (PCA) with varimax rotation and Kaiser normalization was applied to test the factor structure. Nonparametric analyses were applied to examine differences in symptom load across the three CDR groups. The mean QUALID score was 21.5 (±7.1), and the CDR score was 1 in 22.5% of patients, 2 in 33.6% and 3 in 43.9%. The results of the statistical measures employed were the following: Cronbach's α of QUALID, 0.74; Bartlett's test of sphericity, p < 0.001; the Kaiser-Meyer-Olkin measure, 0.77. The PCA resulted in three components accounting for 53% of the variance. The first component was 'tension' ('facial expression of discomfort', 'appears physically uncomfortable', 'verbalization suggests discomfort', 'being irritable and aggressive', 'appears calm'; Cronbach's α = 0.69), the second was 'well-being' ('smiles', 'enjoys eating', 'enjoys touching/being touched', 'enjoys social interaction'; Cronbach's α = 0.62) and the third was 'sadness' ('appears sad', 'cries', 'facial expression of discomfort'; Cronbach's α = 0.65). The mean score on the components 'tension' and 'well-being' increased significantly with increasing severity of dementia. Three components of quality of life (QoL) were identified. QoL decreased with increasing severity of dementia. © 2013 S. Karger AG, Basel.
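
    For readers who want to reproduce the procedure in outline, the sketch below runs a PCA on simulated 11-item responses, keeps three components, and applies a textbook varimax rotation; the data are random, so the loadings will not match the published structure.

    ```python
    # Sketch of the factor-structure procedure: PCA on simulated 11-item
    # responses, three components kept, textbook varimax rotation.
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(3)
    items = rng.integers(1, 5, size=(661, 11)).astype(float)   # 661 respondents

    pca = PCA(n_components=3).fit(items)
    loadings = pca.components_.T * np.sqrt(pca.explained_variance_)

    def varimax(L, gamma=1.0, max_iter=100, tol=1e-6):
        """Standard varimax rotation of a loading matrix."""
        p, k = L.shape
        R = np.eye(k)
        last = 0.0
        for _ in range(max_iter):
            rotated = L @ R
            u, s, vt = np.linalg.svd(
                L.T @ (rotated**3 - (gamma / p) * rotated
                       @ np.diag((rotated**2).sum(axis=0))))
            R = u @ vt
            if s.sum() < last * (1 + tol):
                break
            last = s.sum()
        return L @ R

    print(np.round(varimax(loadings), 2))   # which item loads on which component
    ```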

  7. Reverse correlating trustworthy faces in young and older adults

    PubMed Central

    Éthier-Majcher, Catherine; Joubert, Sven; Gosselin, Frédéric

    2013-01-01

    Little is known about how older persons determine whether someone deserves their trust based on facial appearance, a process referred to as "facial trustworthiness." In the past few years, Todorov and colleagues have argued that, in young adults, trustworthiness judgments are an extension of emotional judgments and, therefore, that trust judgments are made along a continuum between anger and happiness (Todorov, 2008; Engell et al., 2010). Evidence from the literature on emotion processing suggests that older adults tend to be less efficient than younger adults in the recognition of negative facial expressions (Calder et al., 2003; Firestone et al., 2007; Ruffman et al., 2008; Chaby and Narme, 2009). Based on Todorov's theory and the fact that older adults seem to be less efficient than younger adults in identifying emotional expressions, one could expect older individuals to have different representations of trustworthy faces and to use different cues than younger adults when making such judgments. We verified this hypothesis using a variation of Mangini and Biederman's (2004) reverse correlation method in order to test and compare classification images resulting from trustworthiness (in the context of money investment), happiness, and anger judgments in two groups of participants: young adults and older healthy adults. Our results show that for elderly participants, both happy and angry representations are correlated with trustworthiness judgments, whereas in young adults, trustworthiness judgments are mainly correlated with happiness representations. These results suggest that young and older adults differ in the way they judge trustworthiness. PMID:24046755
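
    The reverse correlation method itself is compact enough to sketch: superimpose random noise fields on a base image, collect binary judgments, and subtract the mean noise field of "no" trials from that of "yes" trials. The simulated observer below stands in for a participant; everything here is synthetic.

    ```python
    # Simulated reverse correlation: random noise fields, a hidden internal
    # template standing in for the observer, and a classification image
    # computed as mean("yes" noise) - mean("no" noise).
    import numpy as np

    rng = np.random.default_rng(4)
    size, n_trials = 64, 5000
    template = rng.normal(size=(size, size))        # observer's hidden template

    noise = rng.normal(size=(n_trials, size, size))
    # Simulated observer: says "trustworthy" when the noise field correlates
    # positively with the internal template.
    says_yes = (noise * template).sum(axis=(1, 2)) > 0

    classification_image = (noise[says_yes].mean(axis=0)
                            - noise[~says_yes].mean(axis=0))
    # The classification image approximates the template up to scaling:
    print(np.corrcoef(classification_image.ravel(), template.ravel())[0, 1])
    ```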

  8. Kinematic Features of Jaw and Lips Distinguish Symptomatic From Presymptomatic Stages of Bulbar Decline in Amyotrophic Lateral Sclerosis.

    PubMed

    Bandini, Andrea; Green, Jordan R; Wang, Jun; Campbell, Thomas F; Zinman, Lorne; Yunusova, Yana

    2018-05-17

    The goals of this study were to (a) classify speech movements of patients with amyotrophic lateral sclerosis (ALS) in presymptomatic and symptomatic phases of bulbar function decline relying solely on kinematic features of lips and jaw and (b) identify the most important measures that detect the transition between early and late bulbar changes. One hundred ninety-two recordings obtained from 64 patients with ALS were considered for the analysis. Feature selection and classification algorithms were used to analyze lip and jaw movements recorded with Optotrak Certus (Northern Digital Inc.) during a sentence task. A feature set, which included 35 measures of movement range, velocity, acceleration, jerk, and area measures of lips and jaw, was used to classify sessions according to the speaking rate into presymptomatic (> 160 words per minute) and symptomatic (< 160 words per minute) groups. Presymptomatic and symptomatic phases of bulbar decline were distinguished with high accuracy (87%), relying only on lip and jaw movements. The best features that allowed detecting the differences between early and later bulbar stages included cumulative path of lower lip and jaw, peak values of velocity, acceleration, and jerk of lower lip and jaw. The results established a relationship between facial kinematics and bulbar function decline in ALS. Considering that facial movements can be recorded by means of novel inexpensive and easy-to-use, video-based methods, this work supports the development of an automatic system for facial movement analysis to help clinicians in tracking the disease progression in ALS.
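
    A minimal sketch of this kind of pipeline, using generic scikit-learn components rather than the authors' specific feature selection and classification algorithms, might look as follows; the feature values are simulated, and only the 160 words-per-minute threshold is taken from the abstract.

    ```python
    # Generic sketch of the setup (not the authors' algorithms): 35
    # kinematic features per recording, stage labels from the 160 wpm
    # speaking-rate threshold, feature selection wrapped around an SVM.
    import numpy as np
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(5)
    features = rng.normal(size=(192, 35))         # 192 recordings, 35 measures
    speaking_rate = rng.uniform(100, 220, 192)
    symptomatic = (speaking_rate < 160).astype(int)

    clf = make_pipeline(StandardScaler(), SelectKBest(f_classif, k=10), SVC())
    print(cross_val_score(clf, features, symptomatic, cv=5).mean())
    ```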

  9. Spatiotemporal dynamics of similarity-based neural representations of facial identity.

    PubMed

    Vida, Mark D; Nestor, Adrian; Plaut, David C; Behrmann, Marlene

    2017-01-10

    Humans' remarkable ability to quickly and accurately discriminate among thousands of highly similar complex objects demands rapid and precise neural computations. To elucidate the process by which this is achieved, we used magnetoencephalography to measure spatiotemporal patterns of neural activity with high temporal resolution during visual discrimination among a large and carefully controlled set of faces. We also compared these neural data to lower level "image-based" and higher level "identity-based" model-based representations of our stimuli and to behavioral similarity judgments of our stimuli. Between ∼50 and 400 ms after stimulus onset, face-selective sources in right lateral occipital cortex and right fusiform gyrus and sources in a control region (left V1) yielded successful classification of facial identity. In all regions, early responses were more similar to the image-based representation than to the identity-based representation. In the face-selective regions only, responses were more similar to the identity-based representation at several time points after 200 ms. Behavioral responses were more similar to the identity-based representation than to the image-based representation, and their structure was predicted by responses in the face-selective regions. These results provide a temporally precise description of the transformation from low- to high-level representations of facial identity in human face-selective cortex and demonstrate that face-selective cortical regions represent multiple distinct types of information about face identity at different times over the first 500 ms after stimulus onset. These results have important implications for understanding the rapid emergence of fine-grained, high-level representations of object identity, a computation essential to human visual expertise.
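
    The model-comparison logic (correlating the neural dissimilarity structure with image-based and identity-based model representations at each time point) can be sketched with a small representational similarity analysis; all matrices below are simulated, and the latencies are arbitrary.

    ```python
    # Toy representational similarity analysis: at each latency, correlate
    # the neural dissimilarity structure across stimuli with an image-based
    # and an identity-based model. Everything below is simulated.
    import numpy as np
    from scipy.spatial.distance import pdist
    from scipy.stats import spearmanr

    rng = np.random.default_rng(6)
    n_faces, n_sensors, n_times = 30, 50, 100
    responses = rng.normal(size=(n_faces, n_sensors, n_times))

    image_rdm = pdist(rng.normal(size=(n_faces, 20)))      # low-level model
    identity_rdm = pdist(rng.normal(size=(n_faces, 20)))   # high-level model

    for t in (10, 50, 90):                                 # sample latencies
        neural_rdm = pdist(responses[:, :, t])
        print(t,
              round(spearmanr(neural_rdm, image_rdm).correlation, 3),
              round(spearmanr(neural_rdm, identity_rdm).correlation, 3))
    ```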

  10. Ocular Manifestations of Oblique Facial Clefts

    PubMed Central

    Ortube, Maria Carolina; Dipple, Katrina; Setoguchi, Yoshio; Kawamoto, Henry K.; Demer, Joseph L.

    2014-01-01

    Introduction In the Tessier classification, craniofacial clefts are numbered from 0 to 14 and extend along constant axes through the eyebrows, eyelids, maxilla, nostrils, and the lips. We studied a patient with bilateral cleft 10 associated with ocular abnormalities. Method Clinical report with orbital and cranial computed tomography. Results After pregnancy complicated by oligohydramnios, digoxin, and lisinopril exposure, a boy was born with facial and ocular dysmorphism. Examination at age 26 months showed bilateral epibulbar dermoids, covering half the corneal surface, and unilateral morning glory anomaly of the optic nerve. Ductions of the right eye were normal, but the left eye had severely impaired ductions in all directions, left hypotropia, and esotropia. Under anesthesia, the left eye could not be rotated freely in any direction. Bilateral Tessier cleft number 10 was implicated by the presence of colobomata of the middle third of the upper eyelids and eyebrows. As the cleft continued into the hairline, there was marked anterior scalp alopecia. Computed x-ray tomography showed a left middle cranial fossa arachnoid cyst and calcification of the reflected tendon of the superior oblique muscle, trochlea, and underlying sclera, with downward and lateral globe displacement. Discussion Tessier 10 clefts are very rare and usually associated with encephalocele. Bilateral 10 clefts have not been reported previously. In this case, there was coexisting unilateral morning glory anomaly and arachnoid cyst of the left middle cranial fossa but no encephalocele. Conclusions Bilateral Tessier facial cleft 10 may be associated with alopecia, morning glory anomaly, epibulbar dermoids, arachnoid cyst, and restrictive strabismus. PMID:20856062

  11. Influence of Objective Three-Dimensional Measures and Movement Images on Surgeon Treatment Planning for Lip Revision Surgery

    PubMed Central

    Trotman, Carroll-Ann; Phillips, Ceib; Faraway, Julian J.; Hartman, Terry; van Aalst, John A.

    2013-01-01

    Objective To determine whether a systematic evaluation of the facial soft tissues of patients with cleft lip and palate, using facial video images and objective three-dimensional measurements of movement, changes surgeons' treatment plans for lip revision surgery. Design Prospective longitudinal study. Setting The University of North Carolina School of Dentistry. Patients, Participants A group of patients with repaired cleft lip and palate (n = 21), a noncleft control group (n = 37), and surgeons experienced in cleft care. Interventions Lip revision. Main Outcome Measures (1) facial photographic images; (2) facial video images during animations; (3) objective three-dimensional measurements of upper lip movement based on z scores; and (4) objective dynamic and visual three-dimensional measurement of facial soft tissue movement. Results With the use of the video images plus objective three-dimensional measures, changes were made to the problem list of the surgical treatment plan for 86% of the patients (95% confidence interval, 0.64 to 0.97) and the surgical goals for 71% of the patients (95% confidence interval, 0.48 to 0.89). The surgeon group varied in the percentage of patients for whom the problem list was modified, ranging from 24% (95% confidence interval, 8% to 47%) to 48% (95% confidence interval, 26% to 70%) of patients, and the percentage for whom the surgical goals were modified, ranging from 14% (95% confidence interval, 3% to 36%) to 48% (95% confidence interval, 26% to 70%) of patients. Conclusions For all surgeons, the additional assessment components of the systematic evaluation resulted in a change in clinical decision making for some patients. PMID:23855676

  12. Quantified Facial Soft-tissue Strain in Animation Measured by Real-time Dynamic 3-Dimensional Imaging

    PubMed Central

    Hsu, Vivian M.; Wes, Ari M.; Tahiri, Youssef; Cornman-Homonoff, Joshua

    2014-01-01

    Background: The aim of this study is to evaluate and quantify dynamic soft-tissue strain in the human face using real-time 3-dimensional imaging technology. Methods: Thirteen subjects (8 women, 5 men) between the ages of 18 and 70 were imaged using a dual-camera system and 3-dimensional optical analysis (ARAMIS, Trilion Quality Systems, Pa.). Each subject was imaged at rest and with the following facial expressions: (1) smile, (2) laughter, (3) surprise, (4) anger, (5) grimace, and (6) pursed lips. The facial strains defining stretch and compression were computed for each subject and compared. Results: The areas of greatest strain were localized to the midface and lower face for all expressions. Subjects over the age of 40 had a statistically significant increase in stretch in the perioral region during lip pursing compared with subjects under the age of 40 (58.4% vs 33.8%, P = 0.015). When specific components of lip pursing were analyzed, there was a significantly greater degree of stretch in the nasolabial fold region in subjects over 40 compared with those under 40 (61.6% vs 32.9%, P = 0.007). Furthermore, we observed a greater degree of asymmetry of strain in the nasolabial fold region in the older age group (18.4% vs 5.4%, P = 0.03). Conclusions: This pilot study illustrates that the face can be objectively and quantitatively evaluated using dynamic major strain analysis. The technology of 3-dimensional optical imaging can be used to advance our understanding of facial soft-tissue dynamics and the effects of animation on facial strain over time. PMID:25426394

  13. Emotion recognition impairment and apathy after subthalamic nucleus stimulation in Parkinson's disease have separate neural substrates.

    PubMed

    Drapier, D; Péron, J; Leray, E; Sauleau, P; Biseul, I; Drapier, S; Le Jeune, F; Travers, D; Bourguignon, A; Haegelen, C; Millet, B; Vérin, M

    2008-09-01

    To test the hypothesis that emotion recognition and apathy share the same functional circuit involving the subthalamic nucleus (STN), a consecutive series of 17 patients with advanced Parkinson's disease (PD) was assessed 3 months before (M-3) and 3 months after (M+3) STN deep brain stimulation (DBS). Mean (±S.D.) age at surgery was 56.9 (8.7) years; mean disease duration at surgery was 11.8 (2.6) years. Apathy was measured using the Apathy Evaluation Scale (AES) at both M-3 and M+3. Patients were also assessed using a computerised paradigm of facial emotion recognition [Ekman, P., & Friesen, W. V. (1976). Pictures of facial affect. Palo Alto: Consulting Psychologist Press] before and after STN DBS. Prior to this, the Benton Facial Recognition Test was used to check that the ability to perceive faces was intact. Apathy had significantly worsened at M+3 (42.5±8.9, p=0.006) after STN DBS, relative to the preoperative assessment (37.2±5.5). There was also a significant reduction in recognition percentages for facial expressions of fear (43.1%±22.9 vs. 61.6%±21.4, p=0.022) and sadness (52.7%±19.1 vs. 67.6%±22.8, p=0.031) after STN DBS. However, the postoperative worsening of apathy and the emotion recognition impairment were not correlated. Our results confirm that the STN is involved in both the apathy and emotion recognition networks. However, the absence of any correlation between apathy and emotion recognition impairment suggests that the worsening of apathy following surgery cannot be explained by a lack of facial emotion recognition, and that its behavioural and cognitive components should therefore also be taken into consideration.

  14. Facial growth and development in unilateral cleft lip and palate from the time of palatoplasty to the onset of puberty: a longitudinal study.

    PubMed

    Smahel, Z; Müllerová, Z

    1995-01-01

    X-ray cephalometry was used to assess facial growth and development from the time of palate surgery to the onset of puberty (from 5 to 11 years) in 24 boys with unilateral cleft lip and palate treated with primary periosteoplasty (at 8 months) and palatal pushback supplemented by pharyngeal flap surgery (at 5 years). The depth of the maxilla and the height of the upper lip showed the least growth. Increasing protrusion of the mandible and, in particular, increasing retrusion of the maxilla resulted in a flattening of the face and an impairment of sagittal jaw relations. However, it was possible to attain an improvement in overjet, produced by a substantial increase in the proclination of the upper incisors and of the alveolar process. The prominence of the upper lip deteriorated. Anterior growth rotation was absent during the development of the face, though rotation in both directions was quite common in individual cases. The steepness of the mandibular body, vertical jaw relations, and facial vertical proportions remained unchanged. Compared with the pubertal period, growth and development differed only by a more marked proclination of the dentoalveolar component of the maxilla and by an improvement of overjet. Facial convexity and sagittal jaw relations deteriorated in more than 90% of the patients, the overjet in only 20%, yet the prominence of the lip in 70%. Facial convexity and sagittal jaw relations were not correlated with mandibular rotation, but they affected the overjet and the prominence of the upper lip. (ABSTRACT TRUNCATED AT 250 WORDS)

  15. Implicit Processing of the Eyes and Mouth: Evidence from Human Electrophysiology.

    PubMed

    Pesciarelli, Francesca; Leo, Irene; Sarlo, Michela

    2016-01-01

    The current study examined the time course of implicit processing of distinct facial features and the associated event-related potential (ERP) components. To this end, we used a masked priming paradigm to investigate implicit processing of the eyes and mouth in upright and inverted faces, using a prime duration of 33 ms. Two types of prime-target pairs were used: 1. congruent (e.g., open eyes only in both prime and target, or open mouth only in both prime and target); 2. incongruent (e.g., open mouth only in prime and open eyes only in target, or vice versa). The identity of the faces changed between prime and target. Participants pressed one button when the target face had the eyes open and another button when the target face had the mouth open. The behavioral results showed faster RTs for the eyes in upright faces than for the eyes in inverted faces and for the mouth in upright and inverted faces. They also revealed a congruent priming effect for the mouth in upright faces. The ERP findings showed a face orientation effect across all ERP components studied (P1, N1, N170, P2, N2, P3) starting at about 80 ms, and a congruency/priming effect on late components (P2, N2, P3) starting at about 150 ms. Crucially, the results showed that the orientation effect was driven by the eye region (N170, P2) and that the congruency effect started earlier for the eyes (P2) than for the mouth (N2). These findings mark the time course of the processing of internal facial features and provide further evidence that the eyes are automatically processed and that they are very salient facial features that strongly affect the amplitude, latency, and distribution of neural responses to faces.

  16. Pattern classification using an olfactory model with PCA feature selection in electronic noses: study and application.

    PubMed

    Fu, Jun; Huang, Canqin; Xing, Jianguo; Zheng, Junbao

    2012-01-01

    Biologically-inspired models and algorithms are considered promising sensor array signal processing methods for electronic noses. Feature selection is one of the most important issues in developing robust pattern recognition models in machine learning. This paper investigates the classification performance of a bionic olfactory model as the dimensionality of the input feature vector (outer factor) and the number of its parallel channels (inner factor) increase. The principal component analysis technique was applied for feature selection and dimension reduction. Two data sets, three classes of wine derived from different cultivars and five classes of green tea derived from five different provinces of China, were used for the experiments. In the former case, the results showed that the average correct classification rate increased as more principal components were included in the feature vector. In the latter case, the results showed that sufficient parallel channels should be reserved in the model to avoid pattern space crowding. We conclude that 6~8 channels of the model, with a principal component feature vector capturing at least 90% cumulative variance, are adequate for a classification task of 3~5 pattern classes, considering the trade-off between time consumption and classification rate.
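
    The 90%-cumulative-variance rule mentioned in the conclusion is easy to make concrete; the sketch below, on simulated sensor data, selects the number of principal components that first reaches that threshold before building the feature vector.

    ```python
    # Concrete version of the cumulative-variance rule: keep the smallest
    # number of principal components reaching 90% explained variance.
    # Sensor responses are simulated.
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(7)
    sensor_data = rng.normal(size=(150, 16))   # 150 samples, 16-sensor array

    pca = PCA().fit(sensor_data)
    cumvar = np.cumsum(pca.explained_variance_ratio_)
    n_keep = int(np.searchsorted(cumvar, 0.90)) + 1
    print(n_keep, round(cumvar[n_keep - 1], 3))

    features = PCA(n_components=n_keep).fit_transform(sensor_data)
    ```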

  17. Time course of implicit processing and explicit processing of emotional faces and emotional words.

    PubMed

    Frühholz, Sascha; Jellinghaus, Anne; Herrmann, Manfred

    2011-05-01

    Facial expressions are important emotional stimuli during social interactions. Symbolic emotional cues, such as affective words, also convey information regarding emotions that is relevant for social communication. Various studies have demonstrated fast decoding of emotions from words, as was shown for faces, whereas others report a rather delayed decoding of information about emotions from words. Here, we introduced an implicit (color naming) task and an explicit (emotion judgment) task with facial expressions and words, both containing information about emotions, to directly compare the time course of emotion processing using event-related potentials (ERP). The data show that only negative faces affected task performance, resulting in increased error rates compared to neutral faces. Presentation of emotional faces resulted in a modulation of the N170, the EPN, and the LPP components, and these modulations were found during both the explicit and implicit tasks. Emotional words only affected the EPN during the explicit task, but a task-independent effect on the LPP was revealed. Finally, emotional faces modulated source activity in the extrastriate cortex underlying the generation of the N170, EPN and LPP components. Emotional words led to a modulation of source activity corresponding to the EPN and LPP, but they also affected the N170 source on the right hemisphere. These data show that facial expressions affect earlier stages of emotion processing compared to emotional words, but the emotional value of words may have been detected at early stages of emotional processing in the visual cortex, as was indicated by the extrastriate source activity. Copyright © 2011 Elsevier B.V. All rights reserved.

  18. WITHDRAWN: Resorbable versus titanium plates for facial fractures.

    PubMed

    Dorri, Mojtaba; Oliver, Richard

    2018-05-23

    Rigid internal fixation of the jaw bones is a routine procedure for the management of facial fractures. Titanium plates and screws are routinely used for this purpose. The limitations of this system have led to the development of plates manufactured from bioresorbable materials, which, in some cases, obviate the need for a second surgery. However, concerns remain about the stability of fixation, the length of time required for degradation, and the possibility of foreign body reactions. To compare the effectiveness of bioresorbable fixation systems with titanium systems for the management of facial fractures, we searched the following databases: the Cochrane Oral Health Group's Trials Register (to 20th August 2008), the Cochrane Central Register of Controlled Trials (CENTRAL) (The Cochrane Library 2008, Issue 3), MEDLINE (1950 to 20th August 2008), EMBASE (1980 to 20th August 2008), http://www.clinicaltrials.gov/ and http://www.controlled-trials.com (to 20th August 2008). Randomised controlled trials comparing resorbable versus titanium fixation systems used for facial fractures were eligible. Retrieved studies were independently screened by two review authors. Results were to be expressed as random-effects models using mean differences for continuous outcomes and risk ratios for dichotomous outcomes, with 95% confidence intervals; heterogeneity was to be investigated, including both clinical and methodological factors. The search strategy retrieved 53 potentially eligible studies. None of the retrieved studies met our inclusion criteria, and all were excluded from this review. One study is awaiting classification, as we failed to obtain the full text copy. Three ongoing trials were retrieved, two of which were stopped before recruiting the planned number of participants; in one study, excess complications in the resorbable arm were declared as the reason for stopping the trial. This review illustrates that there are no published randomised controlled clinical trials relevant to this review question. There is currently insufficient evidence for the effectiveness of resorbable fixation systems compared with conventional titanium systems for facial fractures, and the findings based on the results of the aborted trials do not suggest that resorbable plates are as effective as titanium plates. In future, the results of ongoing clinical trials may provide reliable, high-level evidence to assist clinicians and patients in decision making. Trialists should design their studies accurately and comprehensively to meet the aims and objectives defined for the study.

  19. Pairwise diversity ranking of polychotomous features for ensemble physiological signal classifiers.

    PubMed

    Gupta, Lalit; Kota, Srinivas; Molfese, Dennis L; Vaidyanathan, Ravi

    2013-06-01

    It is well known that fusion classifiers for physiological signal classification with diverse components (classifiers or data sets) outperform those with less diverse components. Determining component diversity, therefore, is of the utmost importance in the design of fusion classifiers that are often employed in clinical diagnostic and numerous other pattern recognition problems. In this article, a new pairwise diversity-based ranking strategy is introduced to select a subset of ensemble components, which when combined will be more diverse than any other component subset of the same size. The strategy is unified in the sense that the components can be classifiers or data sets. Moreover, the classifiers and data sets can be polychotomous. Classifier-fusion and data-fusion systems are formulated based on the diversity-based selection strategy, and the application of the two fusion strategies are demonstrated through the classification of multichannel event-related potentials. It is observed that for both classifier and data fusion, the classification accuracy tends to increase/decrease when the diversity of the component ensemble increases/decreases. For the four sets of 14-channel event-related potentials considered, it is shown that data fusion outperforms classifier fusion. Furthermore, it is demonstrated that the combination of data components that yield the best performance, in a relative sense, can be determined through the diversity-based selection strategy.
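
    One way to make the selection strategy concrete is sketched below: score component pairs by a plain disagreement rate (one common pairwise diversity measure, not necessarily the paper's) and greedily grow the subset with the highest average pairwise diversity; the predictions are random stand-ins for classifier or data-set outputs.

    ```python
    # Sketch of pairwise diversity-based selection with a plain
    # disagreement rate as the diversity measure (one common choice, not
    # necessarily the paper's). Predictions are random stand-ins for the
    # outputs of classifiers or data-set-specific models.
    import numpy as np

    rng = np.random.default_rng(8)
    preds = rng.integers(0, 4, size=(8, 200))   # 8 components, polychotomous outputs

    def disagreement(i, j):
        return np.mean(preds[i] != preds[j])

    def select_diverse(k):
        """Greedily grow a subset maximizing average pairwise diversity."""
        chosen = [0]                            # arbitrary seed component
        while len(chosen) < k:
            rest = [c for c in range(len(preds)) if c not in chosen]
            best = max(rest, key=lambda c: np.mean([disagreement(c, j)
                                                    for j in chosen]))
            chosen.append(best)
        return chosen

    print(select_diverse(4))
    ```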

  1. Automated rule-base creation via CLIPS-Induce

    NASA Technical Reports Server (NTRS)

    Murphy, Patrick M.

    1994-01-01

    Many CLIPS rule-bases contain one or more rule groups that perform classification. In this paper we describe CLIPS-Induce, an automated system for the creation of a CLIPS classification rule-base from a set of test cases. CLIPS-Induce consists of two components: a decision tree induction component and a CLIPS production extraction component. ID3, a popular decision tree induction algorithm, is used to induce a decision tree from the test cases. CLIPS production extraction is accomplished through a top-down traversal of the decision tree. Nodes of the tree are used to construct query rules, and branches of the tree are used to construct classification rules. The learned CLIPS productions may easily be incorporated into a larger CLIPS system that performs tasks such as accessing a database or displaying information.
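
    The induce-then-extract idea can be sketched with modern tools: the snippet below trains a decision tree (CART via scikit-learn rather than ID3) and walks it top-down, printing each root-to-leaf path as a CLIPS-flavored rule string. The output is illustrative pseudo-CLIPS, not syntactically valid CLIPS, and the data are synthetic.

    ```python
    # Induce-then-extract sketch: a CART tree (scikit-learn) instead of
    # ID3, walked top-down so each root-to-leaf path becomes a rule string.
    # The printed rules are illustrative pseudo-CLIPS, not valid CLIPS.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(9)
    X = rng.random((100, 3))
    y = (X[:, 0] > 0.5).astype(int)            # synthetic classification task
    feature_names = ["f0", "f1", "f2"]

    t = DecisionTreeClassifier(max_depth=3).fit(X, y).tree_

    def emit_rules(node=0, conditions=()):
        if t.children_left[node] == -1:        # leaf: emit a classification rule
            label = int(np.argmax(t.value[node]))
            print(f"(defrule rule-{node} " + " ".join(conditions)
                  + f" => (assert (class {label})))")
            return
        name, thr = feature_names[t.feature[node]], t.threshold[node]
        emit_rules(t.children_left[node], conditions + (f"({name} <= {thr:.2f})",))
        emit_rules(t.children_right[node], conditions + (f"({name} > {thr:.2f})",))

    emit_rules()
    ```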

  2. Applying a graphics user interface to group technology classification and coding at the Boeing Aerospace Company

    NASA Astrophysics Data System (ADS)

    Ness, P. H.; Jacobson, H.

    1984-10-01

    The thrust of 'group technology' is toward the exploitation of similarities in component design and manufacturing process plans to achieve assembly-line flow cost efficiencies for small-batch production. The systematic method devised for identifying similarities in component geometry and processing steps is a coding and classification scheme implemented on interactive CAD/CAM systems. This coding and classification scheme takes advantage of significant increases in computer processing power, allowing rapid searches and retrievals on the basis of a 30-digit code together with user-friendly computer graphics.

  3. Electronic Nose Based on Independent Component Analysis Combined with Partial Least Squares and Artificial Neural Networks for Wine Prediction

    PubMed Central

    Aguilera, Teodoro; Lozano, Jesús; Paredes, José A.; Álvarez, Fernando J.; Suárez, José I.

    2012-01-01

    The aim of this work is to propose an alternative approach to wine classification and prediction based on an electronic nose (e-nose) combined with Independent Component Analysis (ICA) as a dimensionality reduction technique, Partial Least Squares (PLS) to predict sensory descriptors, and Artificial Neural Networks (ANNs) for classification. A total of 26 wines from different regions, varieties and elaboration processes were analyzed with an e-nose and tasted by a sensory panel. Successful results were obtained in most cases for both prediction and classification. PMID:22969387
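
    A rough sketch of the classification branch (ICA for dimensionality reduction feeding a small neural network) is given below; the PLS sensory-prediction branch is omitted, and the e-nose responses and class labels are simulated.

    ```python
    # Sketch of the classification branch only: FastICA for dimensionality
    # reduction feeding a small neural network. The PLS branch is omitted;
    # e-nose responses and wine labels are simulated.
    import numpy as np
    from sklearn.decomposition import FastICA
    from sklearn.model_selection import cross_val_score
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(10)
    responses = rng.normal(size=(130, 16))   # repeated 16-sensor measurements
    wine_class = np.repeat(np.arange(5), 26) # 5 hypothetical wine groups

    pipe = make_pipeline(FastICA(n_components=6, random_state=0),
                         MLPClassifier(hidden_layer_sizes=(16,),
                                       max_iter=2000, random_state=0))
    print(cross_val_score(pipe, responses, wine_class, cv=5).mean())
    ```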

  4. A stable biologically motivated learning mechanism for visual feature extraction to handle facial categorization.

    PubMed

    Rajaei, Karim; Khaligh-Razavi, Seyed-Mahdi; Ghodrati, Masoud; Ebrahimpour, Reza; Shiri Ahmad Abadi, Mohammad Ebrahim

    2012-01-01

    The brain mechanism of extracting visual features for recognizing various objects has consistently been a controversial issue in computational models of object recognition. To extract visual features, we introduce a new, biologically motivated model for facial categorization, which is an extension of the Hubel and Wiesel simple-to-complex cell hierarchy. To address the synaptic stability versus plasticity dilemma, we apply the Adaptive Resonance Theory (ART) for extracting informative intermediate level visual features during the learning process, which also makes this model stable against the destruction of previously learned information while learning new information. Such a mechanism has been suggested to be embedded within known laminar microcircuits of the cerebral cortex. To reveal the strength of the proposed visual feature learning mechanism, we show that when we use this mechanism in the training process of a well-known biologically motivated object recognition model (the HMAX model), it performs better than the HMAX model in face/non-face classification tasks. Furthermore, we demonstrate that our proposed mechanism is capable of following similar trends in performance as humans in a psychophysical experiment using a face versus non-face rapid categorization task.

  5. Gender classification system in uncontrolled environments

    NASA Astrophysics Data System (ADS)

    Zeng, Pingping; Zhang, Yu-Jin; Duan, Fei

    2011-01-01

    Most face analysis systems available today operate mainly on restricted databases of images in terms of size, age, and illumination. In addition, it is frequently assumed that all images are frontal and unoccluded. In practice, in unguided real-time surveillance, the captured faces may often be partially covered and exhibit varying degrees of head rotation. In this paper, a system intended for real-time surveillance with an uncalibrated camera and unguided photography is described. It consists of five parts: face detection, non-face filtering, best-angle face selection, texture normalization, and gender classification. Emphasis is placed on the non-face filtering and best-angle face selection stages, as well as on texture normalization. Best-angle faces are identified by PCA reconstruction, which amounts to an implicit face alignment and yields a large increase in gender classification accuracy. A dynamic skin model and a masked PCA reconstruction algorithm are applied to filter out faces detected in error. In order to fully capture facial-texture and shape-outline features, a hybrid feature combining Gabor wavelets and PHoG (pyramid histogram of gradients) was proposed to balance inner texture and outer contour. A comparative study of the effects of different non-face filtering and texture masking methods in the context of gender classification by SVM is reported through experiments on a set of UT (a company name) face images, a large number of internet images, and the CAS (Chinese Academy of Sciences) face database. Some encouraging results are obtained.
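
    The hybrid-feature idea can be sketched as follows: concatenate Gabor filter responses (inner texture) with a histogram-of-gradients descriptor standing in for PHoG (outline), then train an SVM. Image data, filter settings, and the HOG substitution are all assumptions for illustration, using scikit-image.

    ```python
    # Hybrid-feature sketch: Gabor responses (texture) concatenated with a
    # plain HOG descriptor standing in for PHoG (contour), then an SVM.
    # Images are random placeholders; filter settings are arbitrary.
    import numpy as np
    from skimage.feature import hog
    from skimage.filters import gabor
    from sklearn.svm import SVC

    rng = np.random.default_rng(11)
    images = rng.random((40, 64, 64))
    gender = rng.integers(0, 2, 40)

    def hybrid_features(img):
        g_real, _ = gabor(img, frequency=0.2)              # inner texture
        return np.concatenate([g_real.ravel()[::16],       # subsampled
                               hog(img, pixels_per_cell=(16, 16))])  # outline

    X = np.array([hybrid_features(im) for im in images])
    clf = SVC().fit(X, gender)
    print(clf.score(X, gender))                            # training accuracy
    ```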

  6. HEAR MAPS a classification for congenital microtia/atresia based on the evaluation of 742 patients.

    PubMed

    Roberson, Joseph B; Goldsztein, Hernan; Balaker, Ashley; Schendel, Stephen A; Reinisch, John F

    2013-09-01

    Describe anatomical and radiological findings in 742 patients evaluated for congenital aural atresia and microtia by a multidisciplinary team. Develop a new classification method to enhance multidisciplinary communication regarding patients with congenital aural atresia and microtia. Retrospective chart review with descriptive analysis of findings arising from the evaluation of patients with congenital atresia and microtia between January 2008 and January 2012 at a multidisciplinary tertiary referral center. We developed a classification method based on the acronym HEAR MAPS (Hearing, Ear [microtia], Atresia grade, Remnant earlobe, Mandible development, Asymmetry of soft tissue, Paralysis of the facial nerve and Syndromes). We used this method to evaluate 742 consecutive congenital atresia and microtia patients between 2008 and January of 2012. Grade 3 microtia was the most common external ear malformation (76%). Pre-operative Jahrsdoerfer scale was 9 (19%), 8 (39%), 7 (19%), and 6 or less (22%). Twenty three percent of patients had varying degrees of hypoplasia of the mandible. Less than 10% of patients had an identified associated syndrome. Patients with congenital aural atresia and microtia often require the intervention of audiology, otology, plastic surgery, craniofacial surgery and speech and language professionals to achieve optimal functional and esthetic reconstruction. Good communication between these disciplines is essential for coordination of care. We describe our use of a new classification method that efficiently describes the physical and radiologic findings in microtia/atresia patients to improve communication amongst care providers. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  7. 78 FR 57870 - Agency Information Collection Activities: Registration for Classification as Refugee; Revision of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-20

    ... DEPARTMENT OF HOMELAND SECURITY U.S. Citizenship and Immigration Services [OMB Control Number 1615-0068; Form I-590] Agency Information Collection Activities: Registration for Classification as Refugee... Classification as Refugee. (3) Agency form number, if any, and the applicable component of the DHS sponsoring the...

  8. Comparison of fingerprint and facial biometric verification technologies for user access and patient identification in a clinical environment

    NASA Astrophysics Data System (ADS)

    Guo, Bing; Zhang, Yu; Documet, Jorge; Liu, Brent; Lee, Jasper; Shrestha, Rasu; Wang, Kevin; Huang, H. K.

    2007-03-01

    As clinical imaging and informatics systems continue to be integrated across the healthcare enterprise, the need to prevent patient misidentification and unauthorized access to clinical data becomes more apparent, especially under the Health Insurance Portability and Accountability Act (HIPAA) mandate. Last year, we presented a system to track and verify patients and staff within a clinical environment. This year, we further address the biometric verification component in order to determine which biometric system is the optimal solution for given applications in the complex clinical environment. We installed two biometric identification systems, fingerprint and facial recognition, at an outpatient imaging facility, Healthcare Consultation Center II (HCCII), evaluated each solution, and documented the advantages and pitfalls of each biometric technology in this clinical environment.

  9. A system for automatic artifact removal in ictal scalp EEG based on independent component analysis and Bayesian classification.

    PubMed

    LeVan, P; Urrestarazu, E; Gotman, J

    2006-04-01

    To devise an automated system to remove artifacts from ictal scalp EEG, using independent component analysis (ICA). A Bayesian classifier was used to determine the probability that 2s epochs of seizure segments decomposed by ICA represented EEG activity, as opposed to artifact. The classifier was trained using numerous statistical, spectral, and spatial features. The system's performance was then assessed using separate validation data. The classifier identified epochs representing EEG activity in the validation dataset with a sensitivity of 82.4% and a specificity of 83.3%. An ICA component was considered to represent EEG activity if the sum of the probabilities that its epochs represented EEG exceeded a threshold predetermined using the training data. Otherwise, the component represented artifact. Using this threshold on the validation set, the identification of EEG components was performed with a sensitivity of 87.6% and a specificity of 70.2%. Most misclassified components were a mixture of EEG and artifactual activity. The automated system successfully rejected a good proportion of artifactual components extracted by ICA, while preserving almost all EEG components. The misclassification rate was comparable to the variability observed in human classification. Current ICA methods of artifact removal require a tedious visual classification of the components. The proposed system automates this process and removes simultaneously multiple types of artifacts.
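
    In outline, the pipeline reads: decompose the EEG with ICA, compute features for each 2 s epoch of each component, score the epochs with a Bayesian classifier, and keep a component only if its summed epoch probabilities exceed a threshold. The sketch below follows that outline with naive Bayes, two toy features, random labels, and an arbitrary threshold, so it shows the control flow rather than the trained system.

    ```python
    # Control-flow sketch of the pipeline: ICA decomposition, per-epoch
    # features, a (naive) Bayesian classifier, and a summed-probability
    # threshold per component. Signals, features, labels and the threshold
    # are synthetic stand-ins.
    import numpy as np
    from scipy.stats import kurtosis
    from sklearn.decomposition import FastICA
    from sklearn.naive_bayes import GaussianNB

    rng = np.random.default_rng(12)
    fs, n_ch = 200, 21
    eeg = rng.normal(size=(n_ch, 60 * fs))               # 60 s of "EEG"

    sources = FastICA(n_components=n_ch, random_state=0).fit_transform(eeg.T).T

    def epoch_features(sig):
        epochs = sig.reshape(-1, 2 * fs)                 # 2 s epochs
        return np.c_[epochs.std(axis=1), kurtosis(epochs, axis=1)]

    # Real systems train on expert-labeled epochs; here labels are random.
    train = np.vstack([epoch_features(s) for s in sources[:10]])
    labels = rng.integers(0, 2, len(train))              # 1 = EEG, 0 = artifact
    clf = GaussianNB().fit(train, labels)

    threshold = 15.0                                     # illustrative value
    for i, s in enumerate(sources[10:], start=10):
        p_eeg = clf.predict_proba(epoch_features(s))[:, 1].sum()
        print(f"component {i}: {'keep' if p_eeg > threshold else 'reject'}")
    ```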

  10. Geometric subspace methods and time-delay embedding for EEG artifact removal and classification.

    PubMed

    Anderson, Charles W; Knight, James N; O'Connor, Tim; Kirby, Michael J; Sokolov, Artem

    2006-06-01

    Generalized singular-value decomposition is used to separate multichannel electroencephalogram (EEG) recordings into components found by optimizing a signal-to-noise quotient; these components are used to filter out artifacts. Short-time principal component analysis of time-delay embedded EEG is used to represent windowed EEG data for classifying which mental task is being performed. Examples are presented of the filtering of various artifacts, and results are shown for the classification of EEG from five mental tasks using committees of decision trees.
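
    A minimal sketch of the representation step follows, assuming a single channel, a 20-sample embedding, and fixed-length windows; the function names and parameters are illustrative, not the authors'.

    ```python
    # Time-delay embedding followed by windowed ("short-time") PCA, one reading
    # of the representation described above; all dimensions are assumptions.
    import numpy as np

    def time_delay_embed(x, lags):
        """Stack lagged copies of a 1-D signal: (n,) -> (n - lags + 1, lags)."""
        return np.column_stack([x[k : len(x) - lags + 1 + k] for k in range(lags)])

    def short_time_pca(x, lags=20, window=200, n_components=3):
        """Leading singular values of each window of the embedded signal."""
        emb = time_delay_embed(x, lags)
        feats = []
        for start in range(0, emb.shape[0] - window + 1, window):
            w = emb[start : start + window]
            w = w - w.mean(axis=0)
            # Principal components of this window via SVD of the centered embedding.
            s = np.linalg.svd(w, compute_uv=False)
            feats.append(s[:n_components])
        return np.array(feats)  # one feature row per window, e.g. for decision trees
    ```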

  11. Classification of fMRI independent components using IC-fingerprints and support vector machine classifiers.

    PubMed

    De Martino, Federico; Gentile, Francesco; Esposito, Fabrizio; Balsi, Marco; Di Salle, Francesco; Goebel, Rainer; Formisano, Elia

    2007-01-01

    We present a general method for the classification of independent components (ICs) extracted from functional MRI (fMRI) data sets. The method consists of two steps. In the first step, each fMRI-IC is associated with an IC-fingerprint, i.e., a representation of the component in a multidimensional space of parameters. These parameters are post hoc estimates of global properties of the ICs and are largely independent of a specific experimental design and stimulus timing. In the second step a machine learning algorithm automatically separates the IC-fingerprints into six general classes after preliminary training performed on a small subset of expert-labeled components. We illustrate this approach in a multisubject fMRI study employing visual structure-from-motion stimuli encoding faces and control random shapes. We show that: (1) IC-fingerprints are a valuable tool for the inspection, characterization and selection of fMRI-ICs and (2) automatic classifications of fMRI-ICs in new subjects present a high correspondence with those obtained by expert visual inspection of the components. Importantly, our classification procedure highlights several neurophysiologically interesting processes, the most intriguing of which is reflected, with high intra- and inter-subject reproducibility, in one IC exhibiting a transiently task-related activation in the 'face' region of the primary sensorimotor cortex. This suggests that in addition to or as part of the mirror system, somatotopic regions of the sensorimotor cortex are involved in disambiguating the perception of a moving body part. Finally, we show that the same classification algorithm can be successfully applied, without re-training, to fMRI collected using acquisition parameters, stimulation modality and timing considerably different from those used for training.
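
    In outline, the two-step procedure might look like the sketch below, with scikit-learn's SVC standing in for the machine learning step; the fingerprint parameters shown are illustrative assumptions, not the full set used in the paper.

    ```python
    # Step one: reduce each IC to a small, design-independent parameter vector;
    # step two: classify fingerprints with an SVM trained on expert labels.
    import numpy as np
    from scipy import stats
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    def ic_fingerprint(timecourse, spatial_map):
        """A few global properties of one fMRI IC (simplified assumptions)."""
        power = np.abs(np.fft.rfft(timecourse - timecourse.mean())) ** 2
        power = power / power.sum()
        return np.array([
            stats.kurtosis(spatial_map),                         # spatial non-Gaussianity
            stats.skew(spatial_map),
            -(power * np.log(power + 1e-12)).sum(),              # spectral entropy
            np.corrcoef(timecourse[:-1], timecourse[1:])[0, 1],  # lag-1 autocorrelation
        ])

    def train_fingerprint_classifier(fingerprints, labels):
        """Fit an RBF-kernel SVM on a small expert-labeled subset of ICs."""
        return make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(fingerprints, labels)
    ```

    Because the fingerprint parameters are estimated post hoc and are largely design-independent, the trained classifier can, as the abstract notes, transfer to new subjects and acquisition protocols without re-training.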

  12. Bell's Palsy.

    PubMed

    Reich, Stephen G

    2017-04-01

    Bell's palsy is a common outpatient problem, and while the diagnosis is usually straightforward, a number of diagnostic pitfalls can occur, and a lengthy differential diagnosis exists. Recognition and management of Bell's palsy relies on knowledge of the anatomy and function of the various motor and nonmotor components of the facial nerve. Avoiding diagnostic pitfalls relies on recognizing red flags or features atypical for Bell's palsy, suggesting an alternative cause of peripheral facial palsy. The first American Academy of Neurology (AAN) evidence-based review on the treatment of Bell's palsy in 2001 concluded that corticosteroids were probably effective and that the antiviral acyclovir was possibly effective in increasing the likelihood of a complete recovery from Bell's palsy. Subsequent studies led to a revision of these recommendations in the 2012 evidence-based review, concluding that corticosteroids, when used shortly after the onset of Bell's palsy, were "highly likely" to increase the probability of recovery of facial weakness and should be offered; the addition of an antiviral to steroids may increase the likelihood of recovery but, if so, only by a very modest effect. Bell's palsy is characterized by the spontaneous acute onset of unilateral peripheral facial paresis or palsy in isolation, meaning that no features from the history, neurologic examination, or head and neck examination suggest a specific or alternative cause. In this setting, no further testing is necessary. Even without treatment, the outcome of Bell's palsy is favorable, but treatment with corticosteroids significantly increases the likelihood of improvement.

  13. Beauty is in the ease of the beholding: A neurophysiological test of the averageness theory of facial attractiveness

    PubMed Central

    Trujillo, Logan T.; Jankowitsch, Jessica M.; Langlois, Judith H.

    2014-01-01

    Multiple studies show that people prefer attractive over unattractive faces. But what is an attractive face and why is it preferred? Averageness theory claims that faces are perceived as attractive when their facial configuration approximates the mathematical average facial configuration of the population. Conversely, faces that deviate from this average configuration are perceived as unattractive. The theory predicts that both attractive and mathematically averaged faces should be processed more fluently than unattractive faces, whereas the averaged faces should be processed marginally more fluently than the attractive faces. We compared neurocognitive and behavioral responses to attractive, unattractive, and averaged human faces to test these predictions. We recorded event-related potentials (ERPs) and reaction times (RTs) from 48 adults while they discriminated between human and chimpanzee faces. Participants categorized averaged and high attractive faces as “human” faster than low attractive faces. The posterior N170 (150–225 ms) face-evoked ERP component was smaller in response to high attractive and averaged faces versus low attractive faces. Single-trial EEG analysis indicated that this reduced ERP response arose from the engagement of fewer neural resources and not from a change in the temporal consistency of how those resources were engaged. These findings provide novel evidence that faces are perceived as attractive when they approximate a facial configuration close to the population average and suggest that processing fluency underlies preferences for attractive faces. PMID:24326966

  14. A Novel Classification System for Injuries After Electronic Cigarette Explosions.

    PubMed

    Patterson, Scott B; Beckett, Allison R; Lintner, Alicia; Leahey, Carly; Greer, Ashley; Brevard, Sidney B; Simmons, Jon D; Kahn, Steven A

    Electronic cigarettes (e-cigarettes) contain lithium batteries that have been known to explode and/or cause fires resulting in burn injury. The purpose of this article is to present a case study, review injuries caused by e-cigarettes, and present a novel classification system based on the newly emerging patterns of burns. A case study was presented, and online media reports of e-cigarette burns were queried with the search terms "e-cigarette burns" and "electronic cigarette burns." The reports and injury patterns were tabulated. Analysis was then performed to create a novel classification system based on the distinct injury patterns seen in the study. Two patients were seen at our regional burn center after e-cigarette burns. One had an injury to his thigh and penis that required operative intervention after ignition of the device in his pocket. The second had a facial burn and corneal abrasions when the device exploded while he was inhaling vapor. The Internet search and case studies resulted in 26 cases for evaluation. The burn patterns were divided into direct injuries, from the device igniting, and indirect injuries, in which the device caused a house or car fire. A numerical classification was created: direct injury: type 1 (hand injury) 7 cases, type 2 (face injury) 8 cases, type 3 (waist/groin injury) 11 cases, and type 5a (inhalation injury from using the device) 2 cases; indirect injury: type 4 (house fire injury) 7 cases and type 5b (inhalation injury from a fire started by the device) 4 cases. Multiple e-cigarette injuries are occurring in the United States, and distinct patterns of burns are emerging. The classification system developed in this article will aid in further study and future regulation of these dangerous devices.
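
    For readers who want the taxonomy in machine-readable form, a hedged encoding follows; the descriptions paraphrase the article's injury types.

    ```python
    # Encoding of the numerical burn classification proposed above; the value
    # strings are paraphrases of the article's injury types, not verbatim labels.
    from enum import Enum

    class ECigBurnType(Enum):
        TYPE_1 = "direct: hand injury"
        TYPE_2 = "direct: face injury"
        TYPE_3 = "direct: waist/groin injury"
        TYPE_4 = "indirect: house/car fire injury"
        TYPE_5A = "direct: inhalation injury while using the device"
        TYPE_5B = "indirect: inhalation injury from a fire started by the device"
    ```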

  15. Object-based land cover classification based on fusion of multifrequency SAR data and THAICHOTE optical imagery

    NASA Astrophysics Data System (ADS)

    Sukawattanavijit, Chanika; Srestasathiern, Panu

    2017-10-01

    Land Use and Land Cover (LULC) information is significant for observing and evaluating environmental change. LULC classification using remotely sensed data is a technique widely employed at global and local scales, particularly in urban areas, which have diverse land cover types; these are essential components of the urban terrain and ecosystem. At present, object-based image analysis (OBIA) is becoming widely popular for land cover classification from high-resolution imagery. COSMO-SkyMed SAR data were fused with THAICHOTE (namely, THEOS: Thailand Earth Observation Satellite) optical data for object-based land cover classification. This paper presents a comparison between object-based and pixel-based approaches to image fusion. For the per-pixel method, support vector machines (SVM) were applied to the fused image based on Principal Component Analysis (PCA). For the object-based method, classification was applied to the fused images to separate land cover classes using a nearest neighbor (NN) classifier. Finally, accuracy was assessed by comparing the land cover maps generated from the fused image dataset and from the THAICHOTE image alone. The object-based fusion of COSMO-SkyMed with THAICHOTE images demonstrated the best classification accuracies, well over 85%. In sum, object-based data fusion provides higher land cover classification accuracy than per-pixel data fusion.
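
    As an illustration of the fusion step, the sketch below implements PCA component-substitution fusion, one common recipe for combining a single SAR band with multiband optical imagery; the paper's exact fusion procedure may differ, and the array shapes are assumptions.

    ```python
    # PCA component-substitution fusion: replace the first principal component
    # of the optical bands with a moment-matched SAR band, then invert. A
    # generic sketch, not necessarily the authors' exact recipe.
    import numpy as np
    from sklearn.decomposition import PCA

    def pca_fusion(optical, sar):
        """optical: (rows, cols, bands); sar: (rows, cols), co-registered."""
        r, c, b = optical.shape
        pca = PCA(n_components=b)
        pcs = pca.fit_transform(optical.reshape(-1, b).astype(float))
        # Moment-match the SAR band to the first principal component (mean/std),
        # then substitute it for PC1 and invert the transform.
        sar_flat = sar.reshape(-1).astype(float)
        sar_flat = (sar_flat - sar_flat.mean()) / (sar_flat.std() + 1e-12)
        pcs[:, 0] = sar_flat * pcs[:, 0].std() + pcs[:, 0].mean()
        return pca.inverse_transform(pcs).reshape(r, c, b)
    ```

    The fused bands could then feed either the per-pixel SVM or, after segmentation into objects, the nearest neighbor classifier compared in the abstract.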

  16. Towards Cooperative Predictive Data Mining in Competitive Environments

    NASA Astrophysics Data System (ADS)

    Lisý, Viliam; Jakob, Michal; Benda, Petr; Urban, Štěpán; Pěchouček, Michal

    We study the problem of predictive data mining in a competitive multi-agent setting, in which each agent is assumed to have some partial knowledge required for correctly classifying a set of unlabelled examples. The agents are self-interested and therefore need to reason about the trade-offs between increasing their classification accuracy by collaborating with other agents and disclosing their private classification knowledge to other agents through such collaboration. We analyze the problem and propose a set of components which can enable cooperation in this otherwise competitive task. These components include measures for quantifying private knowledge disclosure, data-mining models suitable for multi-agent predictive data mining, and a set of strategies by which agents can improve their classification accuracy through collaboration. The overall framework and its individual components are validated on a synthetic experimental domain.

  17. Significance of perceptually relevant image decolorization for scene classification

    NASA Astrophysics Data System (ADS)

    Viswanathan, Sowmya; Divakaran, Govind; Soman, Kutti Padanyl

    2017-11-01

    Color images contain luminance and chrominance components representing the intensity and color information, respectively. The objective of this paper is to show the significance of incorporating chrominance information into the task of scene classification. An improved color-to-grayscale image conversion algorithm that effectively incorporates chrominance information is proposed using the color-to-gray structure similarity index and singular value decomposition to improve the perceptual quality of the converted grayscale images. The experimental results, based on an image quality assessment for image decolorization and its success rate (using the Cadik and COLOR250 datasets), show that the proposed image decolorization technique performs better than eight existing benchmark algorithms for image decolorization. In the second part of the paper, the effectiveness of incorporating the chrominance component for scene classification tasks is demonstrated using a deep belief network-based image classification system developed using dense scale-invariant feature transforms. The amount of chrominance information incorporated into the proposed image decolorization technique is confirmed by the improvement in overall scene classification accuracy. Moreover, overall scene classification performance improved when the models obtained using the proposed method were combined with those from conventional decolorization methods.
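
    To make the contrast concrete, the sketch below shows a simple chrominance-aware baseline: projecting pixels onto the leading singular vector of the image's own RGB distribution rather than onto fixed luminance weights. This is only a minimal analogue for illustration, not the proposed C2G-SSIM/SVD algorithm.

    ```python
    # Chrominance-aware decolorization baseline: per-image SVD of the RGB
    # distribution, so color contrasts that fixed luminance weights would
    # discard survive in the grayscale. An illustrative sketch only.
    import numpy as np

    def adaptive_grayscale(rgb):
        """rgb: (rows, cols, 3) float array in [0, 1] -> (rows, cols) grayscale."""
        pixels = rgb.reshape(-1, 3)
        centered = pixels - pixels.mean(axis=0)
        # Leading right-singular vector = direction of maximum color variance.
        vt = np.linalg.svd(centered, full_matrices=False)[2]
        w = vt[0] if vt[0].sum() >= 0 else -vt[0]       # keep polarity intuitive
        gray = pixels @ (w / (np.abs(w).sum() + 1e-12))
        gray = (gray - gray.min()) / (np.ptp(gray) + 1e-12)
        return gray.reshape(rgb.shape[:2])

    # Fixed-weight baseline for comparison: gray = rgb @ [0.299, 0.587, 0.114]
    ```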

  18. Reduction of facial wrinkles by hydrolyzed water-soluble egg membrane associated with reduction of free radical stress and support of matrix production by dermal fibroblasts

    PubMed Central

    Jensen, Gitte S; Shah, Bijal; Holtz, Robert; Patel, Ashok; Lo, Donald C

    2016-01-01

    Objective The aim of this study was to evaluate the effects of water-soluble egg membrane (WSEM) on wrinkle reduction in a clinical pilot study and to elucidate specific mechanisms of action using primary human immune and dermal cell-based bioassays. Methods To evaluate the effects of topical application of WSEM (8%) on human skin, an open-label 8-week study was performed involving 20 healthy females between the ages of 45 and 65 years. High-resolution photography and digital analysis were used to evaluate the wrinkle depth in the facial skin areas beside the eye (crow’s feet). WSEM was tested for total antioxidant capacity and effects on the formation of reactive oxygen species by human polymorphonuclear cells. Human keratinocytes (HaCaT cells) were used for quantitative polymerase chain reaction analysis of the antioxidant response element genes Nqo1, Gclm, Gclc, and Hmox1. Evaluation of effects on human primary dermal fibroblasts in vitro included cellular viability and production of the matrix components collagen and elastin. Results Topical use of a WSEM-containing facial cream for 8 weeks resulted in a significant reduction of wrinkle depth (P<0.05). WSEM contained antioxidants and reduced the formation of reactive oxygen species by inflammatory cells in vitro. Despite the lack of a quantifiable effect on Nrf2, WSEM induced the gene expression of the downstream genes Nqo1, Gclm, Gclc, and Hmox1 in human keratinocytes. Human dermal fibroblasts treated with WSEM produced more collagen and elastin than untreated cells or cells treated with dbcAMP control. The increase in collagen production was statistically significant (P<0.05). Conclusion The topical use of WSEM on facial skin significantly reduced the wrinkle depth. The underlying mechanisms of this effect may be related to protection from free radical damage at the cellular level and induction of several antioxidant response elements, combined with stimulation of human dermal fibroblasts to secrete high levels of matrix components. PMID:27789968

  19. In vitro evaluation of the marginal integrity of CAD/CAM interim crowns.

    PubMed

    Kelvin Khng, Kwang Yong; Ettinger, Ronald L; Armstrong, Steven R; Lindquist, Terry; Gratton, David G; Qian, Fang

    2016-05-01

    The accuracy of interim crowns made with computer-aided design and computer-aided manufacturing (CAD/CAM) systems has not been well investigated. The purpose of this in vitro study was to evaluate the marginal integrity of interim crowns made by CAD/CAM compared with that of conventional polymethylmethacrylate (PMMA) crowns. A dentoform mandibular left second premolar was prepared for a ceramic crown and scanned for the fabrication of 60 stereolithographic resin dies, half of which were scanned to fabricate 15 Telio CAD-CEREC and 15 Paradigm MZ100-E4D crowns. Fifteen Caulk and 15 Jet interim crowns were made on the remaining resin dies. All crowns were cemented with Tempgrip under a 17.8-N load, thermocycled for 1000 cycles, placed in 0.5% acid fuchsin for 24 hours, and embedded in epoxy resin before sectioning from the mid-buccal to mid-lingual surface. The marginal discrepancy was measured using a traveling microscope, and dye penetration was measured as a percentage of the overall length under the crown. The mean vertical marginal discrepancy of the conventionally made interim crowns was greater than that of the CAD/CAM crowns (P=.006), while no difference was found for the horizontal component (P=.276). The mean vertical marginal discrepancy at the facial surface of the Caulk crowns was significantly greater than that of the other 3 types of interim crowns (P<.001). At the facial margin, the mean horizontal component of the Telio crowns was significantly larger than that of the other 3 types, with no difference at the lingual margins (P=.150). The mean percentage dye penetration was significantly greater for the Paradigm MZ100-E4D crowns and significantly smaller for the Jet crowns than for the other crowns (P<.001). However, the mean percentage dye penetration was significantly correlated with the vertical and horizontal marginal discrepancies of the Jet interim crowns at the facial surface and with the horizontal marginal discrepancies of the Caulk interim crowns at the lingual surface (P<.01 in each instance). A significantly smaller vertical marginal discrepancy was found with the interim crowns fabricated by CAD/CAM as compared with PMMA crowns; however, this difference was not observed for the horizontal component. The percentage dye penetration was correlated with vertical and horizontal discrepancies at the facial surface for the Jet interim crowns and with horizontal discrepancies at the lingual surface for the Caulk interim crowns. Copyright © 2016 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.

  20. Superpixel-based spectral classification for the detection of head and neck cancer with hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Chung, Hyunkoo; Lu, Guolan; Tian, Zhiqiang; Wang, Dongsheng; Chen, Zhuo Georgia; Fei, Baowei

    2016-03-01

    Hyperspectral imaging (HSI) is an emerging imaging modality for medical applications. HSI acquires two-dimensional images at various wavelengths; the combination of spectral and spatial information provides quantitative information for cancer detection and diagnosis. This paper proposes using superpixels, principal component analysis (PCA), and a support vector machine (SVM) to distinguish regions of tumor from healthy tissue. The classification method uses two principal components decomposed from the hyperspectral images and obtains an average sensitivity of 93% and an average specificity of 85% for 11 mice. The hyperspectral imaging technology and classification method can have various applications in cancer research and management.
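
    A minimal sketch of such a pipeline follows, using SLIC superpixels as one common choice; the two-component PCA reduction follows the abstract, while the segmentation method, parameters, and helper names are illustrative assumptions.

    ```python
    # Superpixel + PCA + SVM pipeline for hyperspectral classification, in the
    # spirit of the abstract; SLIC and all parameters are assumptions.
    import numpy as np
    from skimage.segmentation import slic
    from sklearn.decomposition import PCA

    def classify_hsi(cube, clf, n_segments=500):
        """cube: (rows, cols, wavelengths); clf: SVM trained on labeled superpixels.

        Returns a per-pixel label map, constant within each superpixel.
        """
        r, c, w = cube.shape
        pca = PCA(n_components=2)                        # two PCs, as in the abstract
        pcs = pca.fit_transform(cube.reshape(-1, w)).reshape(r, c, 2)
        pcs = (pcs - pcs.min()) / (np.ptp(pcs) + 1e-12)  # rescale for segmentation
        segments = slic(pcs, n_segments=n_segments, channel_axis=-1)
        labels = np.zeros(segments.shape, dtype=int)
        for seg_id in np.unique(segments):
            mask = segments == seg_id
            mean_vec = pcs[mask].mean(axis=0)            # mean PC vector per superpixel
            labels[mask] = clf.predict(mean_vec[None, :])[0]
        return labels

    # clf could be, e.g., sklearn.svm.SVC(kernel="rbf") fit on superpixel mean
    # vectors with tumor/healthy labels derived from histology.
    ```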
