Sample records for facial component classification

  1. The Role of Facial Attractiveness and Facial Masculinity/Femininity in Sex Classification of Faces

    PubMed Central

    Hoss, Rebecca A.; Ramsey, Jennifer L.; Griffin, Angela M.; Langlois, Judith H.

    2005-01-01

    We tested whether adults (Experiment 1) and 4–5-year-old children (Experiment 2) identify the sex of high attractive faces faster and more accurately than low attractive faces in a reaction time task. We also assessed whether facial masculinity/femininity facilitated identification of sex. Results showed that attractiveness facilitated adults’ sex classification of both female and male faces and children’s sex classification of female, but not male, faces. Moreover, attractiveness affected the speed and accuracy of sex classification independent of masculinity/femininity. High masculinity in male faces, but not high femininity in female faces, also facilitated sex classification for both adults and children. These findings provide important new data on how the facial cues of attractiveness and masculinity/femininity contribute to the task of sex classification and provide evidence for developmental differences in how adults and children use these cues. Additionally, these findings provide support for Langlois and Roggman’s (1990) averageness theory of attractiveness. PMID:16457167

  2. Facial clefts and facial dysplasia: revisiting the classification.

    PubMed

    Mazzola, Riccardo F; Mazzola, Isabella C

    2014-01-01

    Most craniofacial malformations are identified by their appearance. The majority of classification systems are mainly clinical or anatomical, not related to the different levels of development of the malformation, and the underlying pathology is usually not taken into consideration. In 1976, Tessier first emphasized the relationship between the soft tissues and the underlying bone, stating that "a fissure of the soft tissue corresponds, as a general rule, with a cleft of the bony structure". He introduced a cleft numbering system around the orbit from 0 to 14 depending on the cleft's relationship to the zero line (i.e., the vertical midline cleft of the face). The classification, easy to understand, became widely accepted because the recording of malformations was simple and communication between observers was facilitated. It represented a great breakthrough in identifying craniofacial malformations, which he named clefts. In the present paper, the embryologically based classification of craniofacial malformations that we proposed in 1983 and 1990 has been revisited. Its aim was to clarify some unanswered questions regarding apparently atypical or bizarre anomalies and to establish as much as possible the moment when this event occurred. In our opinion, this classification system may well integrate with the one proposed by Tessier and at the same time tries to find a correlation between clinical observation and morphogenesis. Terminology is important. The overused term cleft should be reserved for true clefts only, developed from disturbances in the union of the embryonic facial processes: between the lateronasal and maxillary process (or oro-naso-ocular cleft); between the medionasal and maxillary process (or cleft of the lip); between the maxillary processes (or cleft of the palate); and between the maxillary and mandibular process (or macrostomia). For the other types of defects, derived from alteration of bone production centers, the word dysplasia should be used instead. Facial

  3. Facial Expression Recognition using Multiclass Ensemble Least-Square Support Vector Machine

    NASA Astrophysics Data System (ADS)

    Lawi, Armin; Sya'Rani Machrizzandi, M.

    2018-03-01

    Facial expression is one of the behavioral characteristics of human beings. The use of a biometric technology system based on facial expression characteristics makes it possible to recognize a person’s mood or emotion. The basic components of a facial expression analysis system are face detection, face image extraction, facial classification and facial expression recognition. This paper uses the Principal Component Analysis (PCA) algorithm to extract facial features for the expression parameters happy, sad, neutral, angry, fear, and disgust. A Multiclass Ensemble Least-Squares Support Vector Machine (MELS-SVM) is then used for facial expression classification. On 185 expression images of 10 persons, the MELS-SVM model with an RBF kernel achieved a high accuracy of 99.998%.
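
    The pipeline described above (PCA feature extraction followed by an SVM classifier) can be illustrated with a minimal scikit-learn sketch. It is not the authors' MELS-SVM ensemble; a single RBF-kernel SVM stands in for it, and the image array and labels are simulated placeholders.

      # Hedged sketch of a PCA + RBF-SVM expression classifier (not the paper's MELS-SVM ensemble).
      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.svm import SVC
      from sklearn.pipeline import make_pipeline
      from sklearn.model_selection import cross_val_score

      # Hypothetical data: 185 flattened grayscale face images and their expression labels.
      rng = np.random.default_rng(0)
      images = rng.random((185, 64 * 64))       # each row is one flattened face image
      labels = rng.integers(0, 6, size=185)     # 0..5 -> happy, sad, neutral, angry, fear, disgust

      # PCA reduces each image to a small set of expression-bearing components,
      # and the RBF-kernel SVM classifies the reduced vectors.
      clf = make_pipeline(PCA(n_components=40, whiten=True), SVC(kernel="rbf", C=10.0, gamma="scale"))
      print("cross-validated accuracy: %.3f" % cross_val_score(clf, images, labels, cv=5).mean())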

  4. Morphologic evaluation and classification of facial asymmetry using 3-dimensional computed tomography.

    PubMed

    Baek, Chaehwan; Paeng, Jun-Young; Lee, Janice S; Hong, Jongrak

    2012-05-01

    A systematic classification is needed for the diagnosis and surgical treatment of facial asymmetry. The purposes of this study were to analyze the skeletal structures of patients with facial asymmetry and to objectively classify these patients into groups according to these structural characteristics. Patients with facial asymmetry and recent computed tomographic images from 2005 through 2009 were included in this study, which was approved by the institutional review board. Linear measurements, angles, and reference planes on 3-dimensional computed tomograms were obtained, including maxillary (upper midline deviation, maxilla canting, and arch form discrepancy) and mandibular (menton deviation, gonion to midsagittal plane, ramus height, and frontal ramus inclination) measurements. All measurements were analyzed using paired t tests with Bonferroni correction followed by K-means cluster analysis using SPSS 13.0 to determine an objective classification of facial asymmetry in the enrolled patients. Kruskal-Wallis test was performed to verify differences among clustered groups. P < .05 was considered statistically significant. Forty-three patients (18 male, 25 female) were included in the study. They were classified into 4 groups based on cluster analysis. Their mean age was 24.3 ± 4.4 years. Group 1 included subjects (44% of patients) with asymmetry caused by a shift or lateralization of the mandibular body. Group 2 included subjects (39%) with a significant difference between the left and right ramus height with menton deviation to the short side. Group 3 included subjects (12%) with atypical asymmetry, including deviation of the menton to the short side, prominence of the angle/gonion on the larger side, and reverse maxillary canting. Group 4 included subjects (5%) with severe maxillary canting, ramus height differences, and menton deviation to the short side. In this study, patients with asymmetry were classified into 4 statistically distinct groups according to
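
    A minimal sketch of the kind of cluster analysis described above: standardized asymmetry measurements partitioned into four groups with K-means. The measurement matrix is a simulated placeholder, and the choice of columns is only an assumption based on the variables listed in the abstract.

      # Hedged sketch: grouping facial-asymmetry measurements with K-means, echoing the cluster analysis above.
      import numpy as np
      from sklearn.preprocessing import StandardScaler
      from sklearn.cluster import KMeans

      # Hypothetical measurement matrix: one row per patient, columns such as menton deviation,
      # ramus height difference, maxillary canting and midline deviation (millimetres or degrees).
      rng = np.random.default_rng(1)
      measurements = rng.normal(size=(43, 7))

      # Standardize so millimetre and degree scales contribute comparably,
      # then partition the patients into four asymmetry groups.
      X = StandardScaler().fit_transform(measurements)
      kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
      for g in range(4):
          print("group", g + 1, "size:", int(np.sum(kmeans.labels_ == g)))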

  5. Targeting specific facial variation for different identification tasks.

    PubMed

    Aeria, Gillian; Claes, Peter; Vandermeulen, Dirk; Clement, John Gerald

    2010-09-10

    A conceptual framework that allows faces to be studied and compared objectively with biological validity is presented. The framework is a logical extension of modern morphometrics and statistical shape analysis techniques. Three dimensional (3D) facial scans were collected from 255 healthy young adults. One scan depicted a smiling facial expression and another scan depicted a neutral expression. These facial scans were modelled in a Principal Component Analysis (PCA) space where Euclidean (ED) and Mahalanobis (MD) distances were used to form similarity measures. Within this PCA space, property pathways were calculated that expressed the direction of change in facial expression. Decomposition of distances into property-independent (D1) and dependent components (D2) along these pathways enabled the comparison of two faces in terms of the extent of a smiling expression. The performance of all distances was tested and compared in two types of experiments: classification tasks and a recognition task. In the classification tasks, individual facial scans were assigned to one or more population groups of smiling or neutral scans. The property-dependent (D2) component of both Euclidean and Mahalanobis distances performed best in the classification task, by correctly assigning 99.8% of scans to the right population group. The recognition task tested if a scan of an individual depicting a smiling/neutral expression could be positively identified when shown a scan of the same person depicting a neutral/smiling expression. ED1 and MD1 performed best, and correctly identified 97.8% and 94.8% of individual scans respectively as belonging to the same person despite differences in facial expression. It was concluded that decomposed components are superior to straightforward distances in achieving positive identifications, and this presents a novel method for quantifying facial similarity. Additionally, although the undecomposed Mahalanobis distance often used in practice outperformed
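
    The decomposition described above can be sketched with plain NumPy: the difference between two faces in PCA space is split into a component along a unit "property pathway" (the property-dependent part, D2) and the orthogonal residual (the property-independent part, D1). Dimensions and values are placeholders.

      # Hedged sketch: splitting the difference between two faces in PCA space into a component along
      # an expression "property pathway" (property-dependent) and an orthogonal residual
      # (property-independent), in the spirit of the decomposition described above.
      import numpy as np

      def decompose(face_a, face_b, pathway):
          """face_a, face_b: PCA score vectors; pathway: unit vector from neutral toward smiling."""
          diff = face_b - face_a
          along = np.dot(diff, pathway) * pathway       # property-dependent part (expression change)
          residual = diff - along                       # property-independent part (identity change)
          return np.linalg.norm(residual), np.linalg.norm(along)   # (D1, D2) as Euclidean norms

      rng = np.random.default_rng(2)
      pathway = rng.normal(size=20)
      pathway /= np.linalg.norm(pathway)
      d1, d2 = decompose(rng.normal(size=20), rng.normal(size=20), pathway)
      print("ED1 (identity-related): %.2f   ED2 (expression-related): %.2f" % (d1, d2))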

  6. Multivariate Pattern Classification of Facial Expressions Based on Large-Scale Functional Connectivity.

    PubMed

    Liang, Yin; Liu, Baolin; Li, Xianglin; Wang, Peiyuan

    2018-01-01

    It is an important question how human beings achieve efficient recognition of others' facial expressions in cognitive neuroscience, and it has been identified that specific cortical regions show preferential activation to facial expressions in previous studies. However, the potential contributions of the connectivity patterns in the processing of facial expressions remained unclear. The present functional magnetic resonance imaging (fMRI) study explored whether facial expressions could be decoded from the functional connectivity (FC) patterns using multivariate pattern analysis combined with machine learning algorithms (fcMVPA). We employed a block design experiment and collected neural activities while participants viewed facial expressions of six basic emotions (anger, disgust, fear, joy, sadness, and surprise). Both static and dynamic expression stimuli were included in our study. A behavioral experiment after scanning confirmed the validity of the facial stimuli presented during the fMRI experiment with classification accuracies and emotional intensities. We obtained whole-brain FC patterns for each facial expression and found that both static and dynamic facial expressions could be successfully decoded from the FC patterns. Moreover, we identified the expression-discriminative networks for the static and dynamic facial expressions, which span beyond the conventional face-selective areas. Overall, these results reveal that large-scale FC patterns may also contain rich expression information to accurately decode facial expressions, suggesting a novel mechanism, which includes general interactions between distributed brain regions, and that contributes to the human facial expression recognition.

  7. Multivariate Pattern Classification of Facial Expressions Based on Large-Scale Functional Connectivity

    PubMed Central

    Liang, Yin; Liu, Baolin; Li, Xianglin; Wang, Peiyuan

    2018-01-01

    It is an important question how human beings achieve efficient recognition of others’ facial expressions in cognitive neuroscience, and it has been identified that specific cortical regions show preferential activation to facial expressions in previous studies. However, the potential contributions of the connectivity patterns in the processing of facial expressions remained unclear. The present functional magnetic resonance imaging (fMRI) study explored whether facial expressions could be decoded from the functional connectivity (FC) patterns using multivariate pattern analysis combined with machine learning algorithms (fcMVPA). We employed a block design experiment and collected neural activities while participants viewed facial expressions of six basic emotions (anger, disgust, fear, joy, sadness, and surprise). Both static and dynamic expression stimuli were included in our study. A behavioral experiment after scanning confirmed the validity of the facial stimuli presented during the fMRI experiment with classification accuracies and emotional intensities. We obtained whole-brain FC patterns for each facial expression and found that both static and dynamic facial expressions could be successfully decoded from the FC patterns. Moreover, we identified the expression-discriminative networks for the static and dynamic facial expressions, which span beyond the conventional face-selective areas. Overall, these results reveal that large-scale FC patterns may also contain rich expression information to accurately decode facial expressions, suggesting a novel mechanism, which includes general interactions between distributed brain regions, and that contributes to the human facial expression recognition. PMID:29615882
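
    The general fcMVPA idea reported in the two records above can be sketched as follows: each block's functional-connectivity pattern (the upper triangle of a region-by-region correlation matrix) becomes a feature vector, and a cross-validated linear SVM decodes the expression label. This is an illustration on simulated data, not the authors' pipeline.

      # Hedged sketch of the general fcMVPA idea: build a functional-connectivity feature vector per
      # block (upper triangle of a region-by-region correlation matrix) and decode the expression
      # label with a cross-validated linear SVM. Data are simulated placeholders.
      import numpy as np
      from sklearn.svm import LinearSVC
      from sklearn.model_selection import cross_val_score

      def fc_vector(timecourses):
          """timecourses: (n_timepoints, n_regions) array -> flattened upper-triangular correlations."""
          corr = np.corrcoef(timecourses, rowvar=False)
          iu = np.triu_indices_from(corr, k=1)
          return corr[iu]

      rng = np.random.default_rng(3)
      n_blocks, n_regions, n_timepoints = 120, 30, 120
      X = np.array([fc_vector(rng.normal(size=(n_timepoints, n_regions))) for _ in range(n_blocks)])
      y = rng.integers(0, 6, size=n_blocks)     # six basic-emotion labels

      scores = cross_val_score(LinearSVC(C=1.0, max_iter=5000), X, y, cv=5)
      print("decoding accuracy: %.3f (chance is about 0.167)" % scores.mean())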

  8. Principal component analysis for surface reflection components and structure in facial images and synthesis of facial images for various ages

    NASA Astrophysics Data System (ADS)

    Hirose, Misa; Toyota, Saori; Ojima, Nobutoshi; Ogawa-Ochiai, Keiko; Tsumura, Norimichi

    2017-08-01

    In this paper, principal component analysis is applied to the distribution of pigmentation, surface reflectance, and landmarks in whole facial images to obtain feature values. The relationship between the obtained feature vectors and the age of the face is then estimated by multiple regression analysis so that facial images can be modulated for women aged 10-70. In a previous study, we analyzed only the distribution of pigmentation, and the reproduced images appeared to be younger than the apparent age of the initial images. We believe that this happened because we did not modulate the facial structures and detailed surfaces, such as wrinkles. By considering landmarks and surface reflectance over the entire face, we were able to analyze the variation in the distributions of facial structures, fine asperity, and pigmentation. As a result, our method is able to appropriately modulate the appearance of a face so that it appears to be the correct age.
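
    A minimal sketch of the general scheme: PCA over concatenated facial features, multiple regression of age on the component scores, and a shift of one face's scores along the regression direction to re-synthesize it at a target age. The feature construction, dimensions, and the helper modulate_to_age are hypothetical.

      # Hedged sketch: PCA over concatenated facial features, multiple regression of age on the
      # component scores, and a shift along the regression direction to re-synthesize a face at a
      # different target age. Features here are simulated placeholders.
      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(4)
      features = rng.normal(size=(100, 500))    # concatenated pigmentation / reflectance / landmark features
      ages = rng.uniform(10, 70, size=100)

      pca = PCA(n_components=20).fit(features)
      scores = pca.transform(features)
      reg = LinearRegression().fit(scores, ages)     # age as a linear function of component scores

      def modulate_to_age(face_features, target_age):
          """Shift one face's PCA scores along the regression direction until the predicted age matches."""
          s = pca.transform(face_features.reshape(1, -1))
          w = reg.coef_
          delta = (target_age - reg.predict(s)[0]) / np.dot(w, w)
          return pca.inverse_transform(s + delta * w)   # re-synthesized feature vector

      aged = modulate_to_age(features[0], 60.0)
      print("predicted age after modulation: %.1f" % reg.predict(pca.transform(aged))[0])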

  9. The face-selective N170 component is modulated by facial color.

    PubMed

    Nakajima, Kae; Minami, Tetsuto; Nakauchi, Shigeki

    2012-08-01

    Faces play an important role in social interaction by conveying information and emotion. Of the various components of the face, color particularly provides important clues with regard to perception of age, sex, health status, and attractiveness. In event-related potential (ERP) studies, the N170 component has been identified as face-selective. To determine the effect of color on face processing, we investigated the modulation of N170 by facial color. We recorded ERPs while subjects viewed facial color stimuli at 8 hue angles, which were generated by rotating the original facial color distribution around the white point by 45° for each human face. Responses to facial color were localized to the left, but not to the right hemisphere. N170 amplitudes gradually increased in proportion to the increase in hue angle from the natural-colored face. This suggests that N170 amplitude in the left hemisphere reflects processing of facial color information. Copyright © 2012 Elsevier Ltd. All rights reserved.

  10. Spontaneous Facial Actions Map onto Emotional Experiences in a Non-social Context: Toward a Component-Based Approach

    PubMed Central

    Namba, Shushi; Kabir, Russell S.; Miyatani, Makoto; Nakao, Takashi

    2017-01-01

    While numerous studies have examined the relationships between facial actions and emotions, they have yet to account for the ways that specific spontaneous facial expressions map onto emotional experiences induced without expressive intent. Moreover, previous studies emphasized that a fine-grained investigation of facial components could establish the coherence of facial actions with actual internal states. Therefore, this study aimed to accumulate evidence for the correspondence between spontaneous facial components and emotional experiences. We reinvestigated data from previous research which secretly recorded spontaneous facial expressions of Japanese participants as they watched film clips designed to evoke four different target emotions: surprise, amusement, disgust, and sadness. The participants rated their emotional experiences via a self-reported questionnaire of 16 emotions. These spontaneous facial expressions were coded using the Facial Action Coding System, the gold standard for classifying visible facial movements. We corroborated each facial action that was present in the emotional experiences by applying stepwise regression models. The results found that spontaneous facial components occurred in ways that cohere to their evolutionary functions based on the rating values of emotional experiences (e.g., the inner brow raiser might be involved in the evaluation of novelty). This study provided new empirical evidence for the correspondence between each spontaneous facial component and first-person internal states of emotion as reported by the expresser. PMID:28522979
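
    A hedged sketch of the analysis style described above: one self-reported emotion rating regressed on binary FACS action-unit occurrences with a forward-stepwise loop. The action-unit matrix is simulated, and the selection criterion (gain in cross-validated R^2) only stands in for the paper's stepwise procedure.

      # Hedged sketch: forward-stepwise regression of one self-reported emotion rating on binary
      # FACS action-unit occurrences; the improvement in cross-validated R^2 stands in for the
      # paper's stepwise criterion. Data are simulated placeholders.
      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(5)
      aus = rng.integers(0, 2, size=(80, 12)).astype(float)       # 80 clips x 12 action units (present/absent)
      rating = aus[:, 1] * 2.0 + rng.normal(scale=0.5, size=80)   # e.g. a "surprise" rating driven by one AU

      selected, remaining, best = [], list(range(aus.shape[1])), -np.inf
      while remaining:
          score, j = max((cross_val_score(LinearRegression(), aus[:, selected + [j]], rating, cv=5).mean(), j)
                         for j in remaining)
          if score <= best:     # stop when no remaining action unit improves the fit
              break
          best = score
          selected.append(j)
          remaining.remove(j)
      print("selected action-unit columns:", selected, "cross-validated R^2: %.2f" % best)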

  11. Identification and Classification of Facial Familiarity in Directed Lying: An ERP Study

    PubMed Central

    Sun, Delin; Chan, Chetwyn C. H.; Lee, Tatia M. C.

    2012-01-01

    Recognizing familiar faces is essential to social functioning, but little is known about how people identify human faces and classify them in terms of familiarity. Face identification involves discriminating familiar faces from unfamiliar faces, whereas face classification involves making an intentional decision to classify faces as “familiar” or “unfamiliar.” This study used a directed-lying task to explore the differentiation between identification and classification processes involved in the recognition of familiar faces. To explore this issue, the participants in this study were shown familiar and unfamiliar faces. They responded to these faces (i.e., as familiar or unfamiliar) in accordance with the instructions they were given (i.e., to lie or to tell the truth) while their EEG activity was recorded. Familiar faces (regardless of lying vs. truth) elicited significantly less negative-going N400f in the middle and right parietal and temporal regions than unfamiliar faces. Regardless of their actual familiarity, the faces that the participants classified as “familiar” elicited more negative-going N400f in the central and right temporal regions than those classified as “unfamiliar.” The P600 was related primarily with the facial identification process. Familiar faces (regardless of lying vs. truth) elicited more positive-going P600f in the middle parietal and middle occipital regions. The results suggest that N400f and P600f play different roles in the processes involved in facial recognition. The N400f appears to be associated with both the identification (judgment of familiarity) and classification of faces, while it is likely that the P600f is only associated with the identification process (recollection of facial information). Future studies should use different experimental paradigms to validate the generalizability of the results of this study. PMID:22363597

  12. Extreme Facial Expressions Classification Based on Reality Parameters

    NASA Astrophysics Data System (ADS)

    Rahim, Mohd Shafry Mohd; Rad, Abdolvahab Ehsani; Rehman, Amjad; Altameem, Ayman

    2014-09-01

    Extreme expressions are emotional expressions stimulated by strong emotion; an example is an expression accompanied by tears. To render such features, additional elements such as a fluid mechanism (particle system) and physics techniques such as SPH are introduced. The fusion of facial animation with SPH exhibits promising results. Accordingly, the proposed fluid technique combined with facial animation is the core of this research for generating complex expressions, such as laughing, smiling, crying (with emerging tears), or sadness escalating to strong crying, as a classification of the extreme expressions that occur on the human face.

  13. Computer Recognition of Facial Profiles

    DTIC Science & Technology

    1974-08-01

    Keywords: facial recognition. Abstract fragment: "A system for the recognition of human faces from ..." Table-of-contents fragments: 2.6 Classification Algorithms; III Facial Recognition and Automatic Training; 3.1 Facial Profile Recognition. Text fragment: "... provide a fair test of the classification system. The work of Goldstein, Harmon, and Lesk [8] indicates, however, that for facial recognition, a ten class ..."

  14. The effects of facial color and inversion on the N170 event-related potential (ERP) component.

    PubMed

    Minami, T; Nakajima, K; Changvisommid, L; Nakauchi, S

    2015-12-17

    Faces are important for social interaction because much can be perceived from facial details, including a person's race, age, and mood. Recent studies have shown that both configural (e.g. face shape and inversion) and surface information (e.g. surface color and reflectance properties) are important for face perception. Therefore, the present study examined the effects of facial color and inverted face properties on event-related potential (ERP) responses, particularly the N170 component. Stimuli consisted of natural and bluish-colored faces. Faces were presented in both upright and upside down orientations. An ANOVA was used to analyze N170 amplitudes and verify the effects of the main independent variables. Analysis of N170 amplitude revealed the significant interactions between stimulus orientation and color. Subsequent analysis indicated that N170 was larger for bluish-colored faces than natural-colored faces, and N170 to natural-colored faces was larger in response to inverted stimulus as compared to upright stimulus. Additionally, a multivariate pattern analysis (MVPA) investigated face-processing dynamics without any prior assumptions. Results distinguished, above chance, both facial color and orientation from single-trial electroencephalogram (EEG) signals. Decoding performance for color classification of inverted faces was significantly diminished as compared to an upright orientation. This suggests that processing orientation is predominant over facial color. Taken together, the present findings elucidate the temporal and spatial distribution of orientation and color processing during face processing. Copyright © 2015 IBRO. Published by Elsevier Ltd. All rights reserved.

  15. Hierarchical Spatio-Temporal Probabilistic Graphical Model with Multiple Feature Fusion for Binary Facial Attribute Classification in Real-World Face Videos.

    PubMed

    Demirkus, Meltem; Precup, Doina; Clark, James J; Arbel, Tal

    2016-06-01

    Recent literature shows that facial attributes, i.e., contextual facial information, can be beneficial for improving the performance of real-world applications, such as face verification, face recognition, and image search. Examples of face attributes include gender, skin color, facial hair, etc. How to robustly obtain these facial attributes (traits) is still an open problem, especially in the presence of the challenges of real-world environments: non-uniform illumination conditions, arbitrary occlusions, motion blur and background clutter. What makes this problem even more difficult is the enormous variability presented by the same subject, due to arbitrary face scales, head poses, and facial expressions. In this paper, we focus on the problem of facial trait classification in real-world face videos. We have developed a fully automatic hierarchical and probabilistic framework that models the collective set of frame class distributions and feature spatial information over a video sequence. The experiments are conducted on a large real-world face video database that we have collected, labelled and made publicly available. The proposed method is flexible enough to be applied to any facial classification problem. Experiments on a large, real-world video database McGillFaces [1] of 18,000 video frames reveal that the proposed framework outperforms alternative approaches, by up to 16.96 and 10.13%, for the facial attributes of gender and facial hair, respectively.

  16. Coherence explored between emotion components: evidence from event-related potentials and facial electromyography.

    PubMed

    Gentsch, Kornelia; Grandjean, Didier; Scherer, Klaus R

    2014-04-01

    Componential theories assume that emotion episodes consist of emergent and dynamic response changes to relevant events in different components, such as appraisal, physiology, motivation, expression, and subjective feeling. In particular, Scherer's Component Process Model hypothesizes that subjective feeling emerges when the synchronization (or coherence) of appraisal-driven changes between emotion components has reached a critical threshold. We examined the prerequisite of this synchronization hypothesis for appraisal-driven response changes in facial expression. The appraisal process was manipulated by using feedback stimuli, presented in a gambling task. Participants' responses to the feedback were investigated in concurrently recorded brain activity related to appraisal (event-related potentials, ERP) and facial muscle activity (electromyography, EMG). Using principal component analysis, the prediction of appraisal-driven response changes in facial EMG was examined. Results support this prediction: early cognitive processes (related to the feedback-related negativity) seem to primarily affect the upper face, whereas processes that modulate P300 amplitudes tend to predominantly drive cheek region responses. Copyright © 2013 Elsevier B.V. All rights reserved.

  17. Learning representative features for facial images based on a modified principal component analysis

    NASA Astrophysics Data System (ADS)

    Averkin, Anton; Potapov, Alexey

    2013-05-01

    The paper is devoted to facial image analysis and particularly deals with the problem of automatic evaluation of the attractiveness of human faces. We propose a new approach for automatic construction of a feature space based on a modified principal component analysis. Input data sets for the algorithm are learning data sets of facial images, which are rated by one person. The proposed approach allows one to extract features of the individual subjective perception of face beauty and to predict attractiveness values for new facial images, which were not included in the learning data set. The Pearson correlation coefficient between the values predicted by our method for new facial images and the personal attractiveness estimation values equals 0.89. This means that the new approach proposed is promising and can be used for predicting subjective face attractiveness values in real facial image analysis systems.

  18. How components of facial width to height ratio differently contribute to the perception of social traits

    PubMed Central

    Lio, Guillaume; Gomez, Alice; Sirigu, Angela

    2017-01-01

    Facial width to height ratio (fWHR) is a morphological cue that correlates with sexual dimorphism and social traits. Currently, it is unclear how the vertical and horizontal components of fWHR distinctly capture faces’ social information. Using a new methodology, we orthogonally manipulated the upper facial height and the bizygomatic width to test their selective effect in the formation of impressions. Subjects (n = 90) saw pairs of faces and had to select the face that better expressed different social traits (trustworthiness, aggressiveness and femininity). We further investigated how sex and fWHR components interact in the formation of these judgements. Across experiments, changes along the vertical component better predicted participants' ratings than changes along the horizontal component. Faces with smaller height were perceived as less trustworthy, less feminine and more aggressive. By dissociating fWHR and testing the contribution of its components independently, we obtained a powerful and discriminative measure of how facial morphology guides social judgements. PMID:28235081

  19. Automatic sleep stage classification using two facial electrodes.

    PubMed

    Virkkala, Jussi; Velin, Riitta; Himanen, Sari-Leena; Värri, Alpo; Müller, Kiti; Hasan, Joel

    2008-01-01

    Standard sleep stage classification is based on visual analysis of central EEG, EOG and EMG signals. Automatic analysis with a reduced number of sensors has been studied as an easy alternative to the standard. In this study, a single-channel electro-oculography (EOG) algorithm was developed for separation of wakefulness, SREM, light sleep (S1, S2) and slow wave sleep (S3, S4). The algorithm was developed and tested with 296 subjects. Additional validation was performed on 16 subjects using a low weight single-channel Alive Monitor. In the validation study, subjects attached the disposable EOG electrodes themselves at home. In separating the four stages total agreement (and Cohen's Kappa) in the training data set was 74% (0.59), in the testing data set 73% (0.59) and in the validation data set 74% (0.59). Self-applicable electro-oculography with only two facial electrodes was found to provide reasonable sleep stage information.
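
    The agreement figures quoted above can be reproduced in form with scikit-learn's accuracy and Cohen's kappa on epoch-by-epoch stage labels; the reference and automatic scorings below are simulated placeholders.

      # Hedged sketch: total agreement and Cohen's kappa between automatic EOG-based sleep stages
      # and reference visual scoring, computed epoch by epoch on simulated labels.
      import numpy as np
      from sklearn.metrics import accuracy_score, cohen_kappa_score

      # Stages coded 0..3: wakefulness, SREM, light sleep (S1-S2), slow wave sleep (S3-S4).
      rng = np.random.default_rng(6)
      reference = rng.integers(0, 4, size=960)                    # one night of 30-second epochs
      automatic = np.where(rng.random(960) < 0.7, reference,      # most epochs forced to agree
                           rng.integers(0, 4, size=960))

      print("total agreement: %.2f" % accuracy_score(reference, automatic))
      print("Cohen's kappa:   %.2f" % cohen_kappa_score(reference, automatic))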

  20. Classifying Facial Actions

    PubMed Central

    Donato, Gianluca; Bartlett, Marian Stewart; Hager, Joseph C.; Ekman, Paul; Sejnowski, Terrence J.

    2010-01-01

    The Facial Action Coding System (FACS) [23] is an objective method for quantifying facial movement in terms of component actions. This system is widely used in behavioral investigations of emotion, cognitive processes, and social interaction. The coding is presently performed by highly trained human experts. This paper explores and compares techniques for automatically recognizing facial actions in sequences of images. These techniques include analysis of facial motion through estimation of optical flow; holistic spatial analysis, such as principal component analysis, independent component analysis, local feature analysis, and linear discriminant analysis; and methods based on the outputs of local filters, such as Gabor wavelet representations and local principal components. Performance of these systems is compared to naive and expert human subjects. Best performances were obtained using the Gabor wavelet representation and the independent component representation, both of which achieved 96 percent accuracy for classifying 12 facial actions of the upper and lower face. The results provide converging evidence for the importance of using local filters, high spatial frequencies, and statistical independence for classifying facial actions. PMID:21188284
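
    A minimal sketch of the Gabor-wavelet representation singled out above: an aligned face image is passed through a small filter bank and the magnitude responses are concatenated into a feature vector. Frequencies, orientations, and image size are illustrative assumptions, not the paper's filter bank.

      # Hedged sketch: a small Gabor filter-bank representation of an aligned face image, the kind of
      # local-filter feature that performed best in the comparison above. Parameters are illustrative.
      import numpy as np
      from skimage.filters import gabor

      def gabor_features(face, frequencies=(0.05, 0.1, 0.2), n_orientations=4):
          """face: 2-D grayscale array -> concatenated magnitude responses of the filter bank."""
          feats = []
          for f in frequencies:
              for k in range(n_orientations):
                  real, imag = gabor(face, frequency=f, theta=k * np.pi / n_orientations)
                  feats.append(np.hypot(real, imag).ravel())      # magnitude response of one filter
          return np.concatenate(feats)

      face = np.random.default_rng(7).random((48, 48))            # placeholder aligned face image
      print("Gabor feature vector length:", gabor_features(face).shape[0])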

  1. A study on facial expressions recognition

    NASA Astrophysics Data System (ADS)

    Xu, Jingjing

    2017-09-01

    In terms of communication, postures and facial expressions of feelings such as happiness, anger and sadness play important roles in conveying information. With the development of the technology, a number of algorithms dealing with face alignment, face landmark detection, classification, facial landmark localization and pose estimation have recently been put forward. However, many challenges and problems still need to be addressed. In this paper, several techniques are summarized and analyzed, all of which relate to handling facial expression recognition and pose: a pose-indexed multi-view method for face alignment, robust facial landmark detection under significant head pose and occlusion, partitioning of the input domain for classification, and robust statistics face formalization.

  2. The Right Place at the Right Time: Priming Facial Expressions with Emotional Face Components in Developmental Visual Agnosia

    PubMed Central

    Aviezer, Hillel; Hassin, Ran. R.; Perry, Anat; Dudarev, Veronica; Bentin, Shlomo

    2012-01-01

    The current study examined the nature of deficits in emotion recognition from facial expressions in case LG, an individual with a rare form of developmental visual agnosia (DVA). LG presents with profoundly impaired recognition of facial expressions, yet the underlying nature of his deficit remains unknown. During typical face processing, normal sighted individuals extract information about expressed emotions from face regions with activity diagnostic for specific emotion categories. Given LG’s impairment, we sought to shed light on his emotion perception by examining if priming facial expressions with diagnostic emotional face components would facilitate his recognition of the emotion expressed by the face. LG and control participants matched isolated face components with components appearing in a subsequently presented full-face and then categorized the face’s emotion. Critically, the matched components were from regions which were diagnostic or non-diagnostic of the emotion portrayed by the full face. In experiment 1, when the full faces were briefly presented (150 ms), LG’s performance was strongly influenced by the diagnosticity of the components: His emotion recognition was boosted within normal limits when diagnostic components were used and was obliterated when non-diagnostic components were used. By contrast, in experiment 2, when the face-exposure duration was extended (2000 ms), the beneficial effect of the diagnostic matching was diminished as was the detrimental effect of the non-diagnostic matching. These data highlight the impact of diagnostic facial features in normal expression recognition and suggest that impaired emotion recognition in DVA results from deficient visual integration across diagnostic face components. PMID:22349446

  3. The right place at the right time: priming facial expressions with emotional face components in developmental visual agnosia.

    PubMed

    Aviezer, Hillel; Hassin, Ran R; Perry, Anat; Dudarev, Veronica; Bentin, Shlomo

    2012-04-01

    The current study examined the nature of deficits in emotion recognition from facial expressions in case LG, an individual with a rare form of developmental visual agnosia (DVA). LG presents with profoundly impaired recognition of facial expressions, yet the underlying nature of his deficit remains unknown. During typical face processing, normal sighted individuals extract information about expressed emotions from face regions with activity diagnostic for specific emotion categories. Given LG's impairment, we sought to shed light on his emotion perception by examining if priming facial expressions with diagnostic emotional face components would facilitate his recognition of the emotion expressed by the face. LG and control participants matched isolated face components with components appearing in a subsequently presented full-face and then categorized the face's emotion. Critically, the matched components were from regions which were diagnostic or non-diagnostic of the emotion portrayed by the full face. In experiment 1, when the full faces were briefly presented (150 ms), LG's performance was strongly influenced by the diagnosticity of the components: his emotion recognition was boosted within normal limits when diagnostic components were used and was obliterated when non-diagnostic components were used. By contrast, in experiment 2, when the face-exposure duration was extended (2000 ms), the beneficial effect of the diagnostic matching was diminished as was the detrimental effect of the non-diagnostic matching. These data highlight the impact of diagnostic facial features in normal expression recognition and suggest that impaired emotion recognition in DVA results from deficient visual integration across diagnostic face components. Copyright © 2012 Elsevier Ltd. All rights reserved.

  4. Emotional facial activation induced by unconsciously perceived dynamic facial expressions.

    PubMed

    Kaiser, Jakob; Davey, Graham C L; Parkhouse, Thomas; Meeres, Jennifer; Scott, Ryan B

    2016-12-01

    Do facial expressions of emotion influence us when not consciously perceived? Methods to investigate this question have typically relied on brief presentation of static images. In contrast, real facial expressions are dynamic and unfold over several seconds. Recent studies demonstrate that gaze contingent crowding (GCC) can block awareness of dynamic expressions while still inducing behavioural priming effects. The current experiment tested for the first time whether dynamic facial expressions presented using this method can induce unconscious facial activation. Videos of dynamic happy and angry expressions were presented outside participants' conscious awareness while EMG measurements captured activation of the zygomaticus major (active when smiling) and the corrugator supercilii (active when frowning). Forced-choice classification of expressions confirmed they were not consciously perceived, while EMG revealed significant differential activation of facial muscles consistent with the expressions presented. This successful demonstration opens new avenues for research examining the unconscious emotional influences of facial expressions. Copyright © 2016 Elsevier B.V. All rights reserved.

  5. Processing of Fear and Anger Facial Expressions: The Role of Spatial Frequency

    PubMed Central

    Comfort, William E.; Wang, Meng; Benton, Christopher P.; Zana, Yossi

    2013-01-01

    Spatial frequency (SF) components encode a portion of the affective value expressed in face images. The aim of this study was to estimate the relative weight of specific frequency spectrum bandwidth on the discrimination of anger and fear facial expressions. The general paradigm was a classification of the expression of faces morphed at varying proportions between anger and fear images in which SF adaptation and SF subtraction are expected to shift classification of facial emotion. A series of three experiments was conducted. In Experiment 1 subjects classified morphed face images that were unfiltered or filtered to remove either low (<8 cycles/face), middle (12–28 cycles/face), or high (>32 cycles/face) SF components. In Experiment 2 subjects were adapted to unfiltered or filtered prototypical (non-morphed) fear face images and subsequently classified morphed face images. In Experiment 3 subjects were adapted to unfiltered or filtered prototypical fear face images with the phase component randomized before classifying morphed face images. Removing mid frequency components from the target images shifted classification toward fear. The same shift was observed under adaptation condition to unfiltered and low- and middle-range filtered fear images. However, when the phase spectrum of the same adaptation stimuli was randomized, no adaptation effect was observed. These results suggest that medium SF components support the perception of fear more than anger at both low and high level of processing. They also suggest that the effect at high-level processing stage is related more to high-level featural and/or configural information than to the low-level frequency spectrum. PMID:23637687
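
    The band filtering described above (specified in cycles per face) can be sketched with an FFT mask that zeroes a chosen radial frequency band; the cutoffs below mirror the 12-28 cycles/face mid band, while the image itself is a placeholder.

      # Hedged sketch: removing a spatial-frequency band, specified in cycles per face width, from a
      # face image via an FFT mask -- analogous to the low/mid/high filtering described above.
      import numpy as np

      def remove_band(face, low_cpf, high_cpf):
          """Zero out frequencies between low_cpf and high_cpf cycles/face (image assumed one face wide)."""
          h, w = face.shape
          fy = np.fft.fftfreq(h) * h                # vertical frequency in cycles per image
          fx = np.fft.fftfreq(w) * w                # horizontal frequency in cycles per image
          radius = np.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)
          keep = ~((radius >= low_cpf) & (radius <= high_cpf))
          return np.real(np.fft.ifft2(np.fft.fft2(face) * keep))

      face = np.random.default_rng(8).random((256, 256))    # placeholder face image
      no_mid = remove_band(face, 12, 28)                    # drop the 12-28 cycles/face band
      print("filtered image shape:", no_mid.shape)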

  6. Single trial classification for the categories of perceived emotional facial expressions: an event-related fMRI study

    NASA Astrophysics Data System (ADS)

    Song, Sutao; Huang, Yuxia; Long, Zhiying; Zhang, Jiacai; Chen, Gongxiang; Wang, Shuqing

    2016-03-01

    Recently, several studies have successfully applied multivariate pattern analysis methods to predict the categories of emotions. These studies are mainly focused on self-experienced emotions, such as the emotional states elicited by music or movies. In fact, most of our social interactions involve perception of emotional information from the expressions of other people, and it is an important basic skill for humans to recognize the emotional facial expressions of other people in a short time. In this study, we aimed to determine the discriminability of perceived emotional facial expressions. In a rapid event-related fMRI design, subjects were instructed to classify four categories of facial expressions (happy, disgust, angry and neutral) by pressing different buttons, and each facial expression stimulus lasted for 2 s. All participants performed 5 fMRI runs. One multivariate pattern analysis method, the support vector machine, was trained to predict the categories of facial expressions. For feature selection, ninety masks defined from the anatomical automatic labeling (AAL) atlas were first generated and each was treated as the input to the classifier; then, the most stable AAL areas were selected according to prediction accuracies, and comprised the final feature sets. Results showed that for the 6 pair-wise classification conditions, the accuracy, sensitivity and specificity were all above chance prediction, among which happy vs. neutral and angry vs. disgust achieved the lowest results. These results suggested that specific neural signatures of perceived emotional facial expressions may exist, and that happy vs. neutral and angry vs. disgust might be more similar in their information representation in the brain.

  7. A Systematic Classification for HVAC Systems and Components

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Han; Chen, Yan; Zhang, Jian

    Depending on the application, the complexity of an HVAC system can range from a small fan coil unit to a large centralized air conditioning system with primary and secondary distribution loops, and central plant components. Currently, the taxonomy of HVAC systems and the components has various aspects, which can get quite complex because of the various components and system configurations. For example, based on cooling and heating medium delivered to terminal units, systems can be classified as either air systems, water systems or air-water systems. In addition, some of the system names might be commonly used in a confusing manner, such as “unitary system” vs. “packaged system.” Without a systematic classification, these components and system terminology can be confusing to understand or differentiate from each other, and it creates ambiguity in communication, interpretation, and documentation. It is valuable to organize and classify HVAC systems and components so that they can be easily understood and used in a consistent manner. This paper aims to develop a systematic classification of HVAC systems and components. First, we summarize the HVAC component information and definitions based on published literature, such as ASHRAE handbooks, regulations, and rating standards. Then, we identify common HVAC system types and map them to the collected components in a meaningful way. Classification charts are generated and described based on the component information. Six main categories are identified for the HVAC components and equipment, i.e., heating and cooling production, heat extraction and rejection, air handling process, distribution system, terminal use, and stand-alone system. Components for each main category are further analyzed and classified in detail. More than fifty system names are identified and grouped based on their characteristics. The result from this paper will be helpful for education, communication, and systems and component

  8. The role of spatial frequency information for ERP components sensitive to faces and emotional facial expression.

    PubMed

    Holmes, Amanda; Winston, Joel S; Eimer, Martin

    2005-10-01

    To investigate the impact of spatial frequency on emotional facial expression analysis, ERPs were recorded in response to low spatial frequency (LSF), high spatial frequency (HSF), and unfiltered broad spatial frequency (BSF) faces with fearful or neutral expressions, houses, and chairs. In line with previous findings, BSF fearful facial expressions elicited a greater frontal positivity than BSF neutral facial expressions, starting at about 150 ms after stimulus onset. In contrast, this emotional expression effect was absent for HSF and LSF faces. Given that some brain regions involved in emotion processing, such as amygdala and connected structures, are selectively tuned to LSF visual inputs, these data suggest that ERP effects of emotional facial expression do not directly reflect activity in these regions. It is argued that higher order neocortical brain systems are involved in the generation of emotion-specific waveform modulations. The face-sensitive N170 component was neither affected by emotional facial expression nor by spatial frequency information.

  9. Multiracial Facial Golden Ratio and Evaluation of Facial Appearance.

    PubMed

    Alam, Mohammad Khursheed; Mohd Noor, Nor Farid; Basri, Rehana; Yew, Tan Fo; Wen, Tay Hui

    2015-01-01

    This study aimed to investigate the association of facial proportion and its relation to the golden ratio with the evaluation of facial appearance among Malaysian population. This was a cross-sectional study with 286 randomly selected from Universiti Sains Malaysia (USM) Health Campus students (150 females and 136 males; 100 Malaysian Chinese, 100 Malaysian Malay and 86 Malaysian Indian), with the mean age of 21.54 ± 1.56 (Age range, 18-25). Facial indices obtained from direct facial measurements were used for the classification of facial shape into short, ideal and long. A validated structured questionnaire was used to assess subjects' evaluation of their own facial appearance. The mean facial indices of Malaysian Indian (MI), Malaysian Chinese (MC) and Malaysian Malay (MM) were 1.59 ± 0.19, 1.57 ± 0.25 and 1.54 ± 0.23 respectively. Only MC showed significant sexual dimorphism in facial index (P = 0.047; P<0.05) but no significant difference was found between races. Out of the 286 subjects, 49 (17.1%) were of ideal facial shape, 156 (54.5%) short and 81 (28.3%) long. The facial evaluation questionnaire showed that MC had the lowest satisfaction with mean score of 2.18 ± 0.97 for overall impression and 2.15 ± 1.04 for facial parts, compared to MM and MI, with mean score of 1.80 ± 0.97 and 1.64 ± 0.74 respectively for overall impression; 1.75 ± 0.95 and 1.70 ± 0.83 respectively for facial parts. 1) Only 17.1% of Malaysian facial proportion conformed to the golden ratio, with majority of the population having short face (54.5%); 2) Facial index did not depend significantly on races; 3) Significant sexual dimorphism was shown among Malaysian Chinese; 4) All three races are generally satisfied with their own facial appearance; 5) No significant association was found between golden ratio and facial evaluation score among Malaysian population.

  10. Multiracial Facial Golden Ratio and Evaluation of Facial Appearance

    PubMed Central

    2015-01-01

    This study aimed to investigate the association of facial proportion and its relation to the golden ratio with the evaluation of facial appearance among Malaysian population. This was a cross-sectional study with 286 randomly selected from Universiti Sains Malaysia (USM) Health Campus students (150 females and 136 males; 100 Malaysian Chinese, 100 Malaysian Malay and 86 Malaysian Indian), with the mean age of 21.54 ± 1.56 (Age range, 18–25). Facial indices obtained from direct facial measurements were used for the classification of facial shape into short, ideal and long. A validated structured questionnaire was used to assess subjects’ evaluation of their own facial appearance. The mean facial indices of Malaysian Indian (MI), Malaysian Chinese (MC) and Malaysian Malay (MM) were 1.59 ± 0.19, 1.57 ± 0.25 and 1.54 ± 0.23 respectively. Only MC showed significant sexual dimorphism in facial index (P = 0.047; P<0.05) but no significant difference was found between races. Out of the 286 subjects, 49 (17.1%) were of ideal facial shape, 156 (54.5%) short and 81 (28.3%) long. The facial evaluation questionnaire showed that MC had the lowest satisfaction with mean score of 2.18 ± 0.97 for overall impression and 2.15 ± 1.04 for facial parts, compared to MM and MI, with mean score of 1.80 ± 0.97 and 1.64 ± 0.74 respectively for overall impression; 1.75 ± 0.95 and 1.70 ± 0.83 respectively for facial parts. In conclusion: 1) Only 17.1% of Malaysian facial proportion conformed to the golden ratio, with majority of the population having short face (54.5%); 2) Facial index did not depend significantly on races; 3) Significant sexual dimorphism was shown among Malaysian Chinese; 4) All three races are generally satisfied with their own facial appearance; 5) No significant association was found between golden ratio and facial evaluation score among Malaysian population. PMID:26562655
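
    The short/ideal/long categorization used in the two records above can be sketched as a comparison of the facial index (facial height divided by width) against the golden ratio; the tolerance band is an assumption, since the abstracts do not state the exact cutoffs.

      # Hedged sketch: classifying a facial index (height / width) as short, ideal or long relative
      # to the golden ratio. The tolerance band is an assumption, not the study's exact cutoff.
      GOLDEN_RATIO = 1.618
      TOLERANCE = 0.05                      # assumed band around the golden ratio

      def classify_face(height_cm: float, width_cm: float) -> str:
          index = height_cm / width_cm
          if index < GOLDEN_RATIO - TOLERANCE:
              return "short"
          if index > GOLDEN_RATIO + TOLERANCE:
              return "long"
          return "ideal"

      for h, w in [(18.5, 12.0), (19.4, 12.0), (21.0, 12.0)]:
          print("index %.2f -> %s" % (h / w, classify_face(h, w)))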

  11. Classification of Computer-Aided Design-Computer-Aided Manufacturing Applications for the Reconstruction of Cranio-Maxillo-Facial Defects.

    PubMed

    Wauters, Lauri D J; Miguel-Moragas, Joan San; Mommaerts, Maurice Y

    2015-11-01

    To gain insight into the methodology of different computer-aided design-computer-aided manufacturing (CAD-CAM) applications for the reconstruction of cranio-maxillo-facial (CMF) defects. We reviewed and analyzed the available literature pertaining to CAD-CAM for use in CMF reconstruction. We proposed a classification system of the techniques of implant and cutting, drilling, and/or guiding template design and manufacturing. The system consisted of 4 classes (I-IV). These classes combine techniques used for both the implant and template to most accurately describe the methodology used. Our classification system can be widely applied. It should facilitate communication and immediate understanding of the methodology of CAD-CAM applications for the reconstruction of CMF defects.

  12. Gender classification under extended operating conditions

    NASA Astrophysics Data System (ADS)

    Rude, Howard N.; Rizki, Mateen

    2014-06-01

    Gender classification is a critical component of a robust image security system. Many techniques exist to perform gender classification using facial features. In contrast, this paper explores gender classification using body features extracted from clothed subjects. Several of the most effective types of features for gender classification identified in literature were implemented and applied to the newly developed Seasonal Weather And Gender (SWAG) dataset. SWAG contains video clips of approximately 2000 samples of human subjects captured over a period of several months. The subjects are wearing casual business attire and outer garments appropriate for the specific weather conditions observed in the Midwest. The results from a series of experiments are presented that compare the classification accuracy of systems that incorporate various types and combinations of features applied to multiple looks at subjects at different image resolutions to determine a baseline performance for gender classification.

  13. Physical therapy for facial paralysis: a tailored treatment approach.

    PubMed

    Brach, J S; VanSwearingen, J M

    1999-04-01

    Bell palsy is an acute facial paralysis of unknown etiology. Although recovery from Bell palsy is expected without intervention, clinical experience suggests that recovery is often incomplete. This case report describes a classification system used to guide treatment and to monitor recovery of an individual with facial paralysis. The patient was a 71-year-old woman with complete left facial paralysis secondary to Bell palsy. Signs and symptoms were assessed using a standardized measure of facial impairment (Facial Grading System [FGS]) and questions regarding functional limitations. A treatment-based category was assigned based on signs and symptoms. Rehabilitation involved muscle re-education exercises tailored to the treatment-based category. In 14 physical therapy sessions over 13 months, the patient had improved facial impairments (initial FGS score= 17/100, final FGS score= 68/100) and no reported functional limitations. Recovery from Bell palsy can be a complicated and lengthy process. The use of a classification system may help simplify the rehabilitation process.

  14. Classification of independent components of EEG into multiple artifact classes.

    PubMed

    Frølich, Laura; Andersen, Tobias S; Mørup, Morten

    2015-01-01

    In this study, we aim to automatically identify multiple artifact types in EEG. We used multinomial regression to classify independent components of EEG data, selecting from 65 spatial, spectral, and temporal features of independent components using forward selection. The classifier identified neural and five nonneural types of components. Between subjects within studies, high classification performances were obtained. Between studies, however, classification was more difficult. For neural versus nonneural classifications, performance was on par with previous results obtained by others. We found that automatic separation of multiple artifact classes is possible with a small feature set. Our method can reduce manual workload and allow for the selective removal of artifact classes. Identifying artifacts during EEG recording may be used to instruct subjects to refrain from activity causing them. Copyright © 2014 Society for Psychophysiological Research.
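
    A minimal sketch of the approach described above: multinomial logistic regression over independent-component features, with scikit-learn's forward sequential selection standing in for the paper's feature-selection procedure. The feature matrix, labels, and number of selected features are placeholders.

      # Hedged sketch: multinomial logistic regression over IC features, with forward sequential
      # selection standing in for the paper's procedure. Features and labels are simulated.
      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.feature_selection import SequentialFeatureSelector
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(9)
      X = rng.normal(size=(300, 12))        # spatial, spectral and temporal features of 300 ICs
      y = rng.integers(0, 6, size=300)      # one neural class plus five non-neural (artifact) classes

      clf = LogisticRegression(max_iter=2000)     # multinomial for multiclass targets
      sfs = SequentialFeatureSelector(clf, n_features_to_select=5, direction="forward").fit(X, y)
      picked = sfs.get_support(indices=True)
      print("forward-selected feature columns:", picked)
      print("cv accuracy with selected features: %.2f" % cross_val_score(clf, X[:, picked], y, cv=5).mean())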

  15. LBP and SIFT based facial expression recognition

    NASA Astrophysics Data System (ADS)

    Sumer, Omer; Gunes, Ece O.

    2015-02-01

    This study compares the performance of local binary patterns (LBP) and the scale invariant feature transform (SIFT) with support vector machines (SVM) in automatic classification of discrete facial expressions. Facial expression recognition is a multiclass classification problem, and seven classes (happiness, anger, sadness, disgust, surprise, fear and contempt) are classified. Using SIFT feature vectors and a linear SVM, 93.1% mean accuracy is acquired on the CK+ database. On the other hand, the performance of the LBP-based classifier with a linear SVM is reported on SFEW using the strictly person independent (SPI) protocol. Seven-class mean accuracy on SFEW is 59.76%. Experiments on both databases showed that LBP features can be used in a fairly descriptive way if a good localization of facial points and partitioning strategy are followed.
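
    A hedged sketch of the LBP half of the comparison above: uniform LBP histograms computed over a grid of face patches and fed to a linear SVM. The patch grid, LBP parameters, and data are illustrative assumptions rather than the exact CK+/SFEW setup.

      # Hedged sketch: uniform LBP histograms over a grid of face patches, fed to a linear SVM.
      # Parameters and data are illustrative, not the exact setup used on CK+ or SFEW.
      import numpy as np
      from skimage.feature import local_binary_pattern
      from sklearn.svm import LinearSVC

      P, R = 8, 1                           # 8 neighbours at radius 1 -> 10 uniform-pattern bins

      def lbp_histogram_grid(face, grid=4):
          """face: 2-D grayscale array -> concatenated per-patch uniform-LBP histograms."""
          lbp = local_binary_pattern(face, P, R, method="uniform")
          h, w = face.shape
          feats = []
          for i in range(grid):
              for j in range(grid):
                  patch = lbp[i * h // grid:(i + 1) * h // grid, j * w // grid:(j + 1) * w // grid]
                  hist, _ = np.histogram(patch, bins=P + 2, range=(0, P + 2), density=True)
                  feats.append(hist)
          return np.concatenate(feats)

      rng = np.random.default_rng(10)
      faces = rng.random((40, 64, 64))      # placeholder aligned face images
      X = np.array([lbp_histogram_grid(f) for f in faces])
      y = rng.integers(0, 7, size=40)       # seven expression classes
      clf = LinearSVC(C=1.0, max_iter=5000).fit(X, y)
      print("training accuracy on placeholder data: %.2f" % clf.score(X, y))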

  16. Novel Noninvasive Brain Disease Detection System Using a Facial Image Sensor

    PubMed Central

    Shu, Ting; Zhang, Bob; Tang, Yuan Yan

    2017-01-01

    Brain disease, including any condition or disability that affects the brain, is fast becoming a leading cause of death. The traditional diagnostic methods of brain disease are time-consuming, inconvenient and non-patient friendly. As more and more individuals undergo examinations to determine if they suffer from any form of brain disease, developing noninvasive, efficient, and patient friendly detection systems will be beneficial. Therefore, in this paper, we propose a novel noninvasive brain disease detection system based on the analysis of facial colors. The system consists of four components. A facial image is first captured through a specialized sensor, where four facial key blocks are next located automatically from the various facial regions. Color features are extracted from each block to form a feature vector for classification via the Probabilistic Collaborative based Classifier. To thoroughly test the system and its performance, seven facial key block combinations were evaluated experimentally. The best result was achieved using the second facial key block, where it showed that the Probabilistic Collaborative based Classifier is the most suitable. The overall performance of the proposed system achieves an accuracy of 95%, a sensitivity of 94.33%, a specificity of 95.67%, and an average processing time (for one sample) of less than 1 min for brain disease detection. PMID:29292716

  17. The fallopian canal: a comprehensive review and proposal of a new classification.

    PubMed

    Mortazavi, M M; Latif, B; Verma, K; Adeeb, N; Deep, A; Griessenauer, C J; Tubbs, R S; Fukushima, T

    2014-03-01

    The facial nerve follows a complex course through the skull base. Understanding its anatomy is crucial during standard skull base approaches and resection of certain skull base tumors closely related to the nerve, especially, tumors at the cerebellopontine angle. Herein, we review the fallopian canal and its implications in surgical approaches to the skull base. Furthermore, we suggest a new classification. Based on the anatomy and literature, we propose that the meatal segment of the facial nerve be included as a component of the fallopian canal. A comprehensive knowledge of the course of the facial nerve is important to those who treat patients with pathology of or near this cranial nerve.

  18. Cysts of the oro-facial region: A Nigerian experience

    PubMed Central

    Lawal, AO; Adisa, AO; Sigbeku, OF

    2012-01-01

    Aim: Though many studies have examined cysts of the jaws, most of them focused on a group of cysts and only a few have examined cysts based on a particular classification. The aim of this study is to review cysts of the oro-facial region seen at a tertiary health centre in Ibadan and to categorize these cases based on the Lucas, Killey and Kay, and WHO classifications. Materials and Methods: All histologically diagnosed oro-facial cysts were retrieved from the oral pathology archives. Information concerning cyst type, topography, age at time of diagnosis and gender of patients was gathered. Data obtained were analyzed with SPSS version 18.0.1. Results: A total of 92 histologically diagnosed oro-facial cysts were seen, comprising 60 (65.2%) males and 32 (34.8%) females. The age range was 4 to 73 years with a mean age of 27.99 ± 15.26 years. The peak incidence was in the third decade. The mandible/maxilla ratio was 1.5:1. The apical periodontal cyst was the most common type, accounting for 50% (n = 46) of the cysts observed. Using the WHO classification, cysts of the soft tissues of the head, face and neck were overwhelmingly more common in males than females, with a ratio of 14:3, while non-epithelial cysts occurred at a 3:1 male/female ratio. Conclusion: This study showed findings similar to previous studies with regard to type, site and age incidence of oro-facial cysts, and also showed that the WHO classification protocol was the most comprehensive classification method for oro-facial cysts. PMID:22923885

  19. Hereditary family signature of facial expression

    PubMed Central

    Peleg, Gili; Katzir, Gadi; Peleg, Ofer; Kamara, Michal; Brodsky, Leonid; Hel-Or, Hagit; Keren, Daniel; Nevo, Eviatar

    2006-01-01

    Although facial expressions of emotion are universal, individual differences create a facial expression “signature” for each person; but is there a unique family facial expression signature? Only a few family studies on the heredity of facial expressions have been performed, none of which compared the gestalt of movements in various emotional states; they compared only a few movements in one or two emotional states. No studies, to our knowledge, have compared movements of congenitally blind subjects with those of their relatives. Using two types of analyses, we show a correlation between movements of congenitally blind subjects and those of their relatives in think-concentrate, sadness, anger, disgust, joy, and surprise, and provide evidence for a unique family facial expression signature. In the “in-out family test” analysis, a particular movement was compared each time across subjects. Results show that the frequency of occurrence of a movement of a congenitally blind subject within his family is significantly higher than that outside of his family in think-concentrate, sadness, and anger. In the “classification test” analysis, in which congenitally blind subjects were classified to their families according to the gestalt of movements, results show 80% correct classification over the entire interview and 75% in anger. Analysis of the movements' frequencies in anger revealed a correlation between the movements' frequencies of congenitally blind individuals and those of their relatives. This study anticipates discovering genes that influence facial expressions, understanding their evolutionary significance, and elucidating repair mechanisms for syndromes lacking facial expression, such as autism. PMID:17043232

  20. Millennial Filipino Student Engagement Analyzer Using Facial Feature Classification

    NASA Astrophysics Data System (ADS)

    Manseras, R.; Eugenio, F.; Palaoag, T.

    2018-03-01

    Millennials are on everyone's lips and are a target market of various companies nowadays. In the Philippines, they comprise one third of the total population and most of them are still in school. Having a good education system is important for this generation to prepare them for better careers, and a good education system requires quality instruction as one of its input component indicators. In a classroom environment, teachers use facial features to gauge the affect state of the class. Emerging technologies like Affective Computing are among today's trends for improving instruction delivery. This, together with computer vision, can be used to analyze the affect states of students and improve the quality of instruction delivery. This paper proposes a system for classifying student engagement using facial features. Identifying affect state, specifically Millennial Filipino student engagement, is one of the main priorities of every educator, and this directed the authors to develop a tool to assess engagement percentage. A multiple face detection framework using Face API was employed to detect as many student faces as possible and gauge the current engagement percentage of the whole class. A binary classifier model using a Support Vector Machine (SVM) was primarily set in the conceptual framework of this study. To achieve the best accuracy for this model, SVM was compared against two of the most widely used binary classifiers. Results show that SVM bested the RandomForest and Naive Bayes algorithms in most of the experiments on the different test datasets.
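
    A small sketch of the classifier comparison described above: an SVM is benchmarked against RandomForest and Gaussian Naive Bayes on engagement labels with cross-validation. The facial features and labels are placeholders.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

X = np.random.rand(200, 34)          # e.g. 34 facial-feature measurements per student
y = np.random.randint(0, 2, 200)     # 1 = engaged, 0 = not engaged

for name, clf in [("SVM", SVC(kernel="rbf")),
                  ("RandomForest", RandomForestClassifier(n_estimators=100)),
                  ("NaiveBayes", GaussianNB())]:
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```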

  1. Complications in Pediatric Facial Fractures

    PubMed Central

    Chao, Mimi T.; Losee, Joseph E.

    2009-01-01

    Despite recent advances in the diagnosis, treatment, and prevention of pediatric facial fractures, little has been published on the complications of these fractures. The existing literature is highly variable regarding both the definition and the reporting of adverse events. Although the incidence of pediatric facial fractures is relatively low, they are strongly associated with other serious injuries. Both the fractures and their treatment may have long-term consequences for the growth and development of the immature face. This article is a selective review of the literature on facial fracture complications, with special emphasis on the complications unique to pediatric patients. We also present our classification system for evaluating adverse outcomes associated with pediatric facial fractures. Prospective, long-term studies are needed to fully understand and appreciate the complexity of treating children with facial fractures and to determine the true incidence, subsequent growth, and nature of their complications. PMID:22110803

  2. Subject-specific and pose-oriented facial features for face recognition across poses.

    PubMed

    Lee, Ping-Han; Hsu, Gee-Sern; Wang, Yun-Wen; Hung, Yi-Ping

    2012-10-01

    Most face recognition scenarios assume that frontal faces or mug shots are available for enrollment in the database, while faces in other poses are collected in the probe set. Given a face from the probe set, one needs to determine whether a match exists in the database. This is under the assumption that, in forensic applications, most suspects have their mug shots available in the database, and face recognition aims at recognizing the suspects when their faces are captured in various poses by a surveillance camera. This paper considers a different scenario: given a face with multiple poses available, which may or may not include a mug shot, develop a method to recognize the face in poses different from those captured. That is, given two disjoint sets of poses of a face, one for enrollment and the other for recognition, this paper reports a method best suited for handling such cases. The proposed method includes feature extraction and classification. For feature extraction, we first cluster the poses of each subject's face in the enrollment set into a few pose classes and then decompose the appearance of the face in each pose class using an Embedded Hidden Markov Model, which allows us to define a set of subject-specific and pose-oriented (SSPO) facial components for each subject. For classification, an Adaboost weighting scheme is used to fuse the component classifiers with SSPO component features. The proposed method is shown to outperform other approaches, including a component-based classifier with local facial features cropped manually, in an extensive performance evaluation study.
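
    A hedged sketch of fusing per-component classifiers at score level, with each component weighted by its validation accuracy. This is a simplified stand-in for the paper's Adaboost weighting over SSPO component classifiers, not its exact procedure; the component feature blocks and labels are synthetic placeholders.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, n_classes = 300, 10
components = {"eyes": rng.random((n, 40)),     # per-component feature blocks
              "nose": rng.random((n, 30)),
              "mouth": rng.random((n, 50))}
y = rng.integers(0, n_classes, n)

idx_train, idx_val = train_test_split(np.arange(n), test_size=0.3, random_state=0)
probas, weights = [], []
for name, X in components.items():
    clf = SVC(kernel="rbf", probability=True).fit(X[idx_train], y[idx_train])
    weights.append(clf.score(X[idx_val], y[idx_val]))   # component reliability
    probas.append(clf.predict_proba(X[idx_val]))

weights = np.array(weights) / np.sum(weights)
fused = sum(w * p for w, p in zip(weights, probas))     # weighted probability fusion
pred = clf.classes_[np.argmax(fused, axis=1)]
print("fused accuracy on the validation split:", np.mean(pred == y[idx_val]))
```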

  3. Automatic prediction of facial trait judgments: appearance vs. structural models.

    PubMed

    Rojas, Mario; Masip, David; Todorov, Alexander; Vitria, Jordi

    2011-01-01

    Evaluating other individuals with respect to personality characteristics plays a crucial role in human relations, and it is the focus of attention for research in diverse fields such as psychology and interactive computer systems. In psychology, face perception has been recognized as a key component of this evaluation system. Multiple studies suggest that observers use face information to infer personality characteristics. Interactive computer systems are trying to take advantage of these findings and apply them to increase the naturalness of interaction and to improve system performance. Here, we experimentally test whether the automatic prediction of facial trait judgments (e.g., dominance) can be made by using the full appearance information of the face, or whether a reduced representation of its structure is sufficient. We evaluate two separate approaches: a holistic representation model using the facial appearance information and a structural model constructed from the relations among facial salient points. State-of-the-art machine learning methods are applied to a) derive a facial trait judgment model from training data and b) predict a facial trait value for any face. Furthermore, we address the issue of whether there are specific structural relations among facial points that predict the perception of facial traits. Experimental results over a set of labeled data (9 different trait evaluations) and classification rules (4 rules) suggest that a) prediction of the perception of facial traits is learnable by both holistic and structural approaches; b) the most reliable prediction of facial trait judgments is obtained by certain types of holistic descriptions of the face appearance; and c) for some traits, such as attractiveness and extroversion, there are relationships between specific structural features and social perceptions.
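
    A sketch contrasting the two approaches compared above: a holistic appearance model built from raw pixel intensities versus a structural model built from pairwise distances between facial salient points, both used to predict a perceived trait rating. All data here are synthetic placeholders, so the printed scores are meaningful only as a template.

```python
import numpy as np
from itertools import combinations
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 150
faces = rng.random((n, 64, 64))            # aligned grayscale face images
landmarks = rng.random((n, 20, 2))         # 20 salient facial points per face
trait = rng.random(n)                      # e.g. perceived dominance rating

X_holistic = faces.reshape(n, -1)          # appearance model: all pixels
pairs = list(combinations(range(20), 2))
X_structural = np.array([[np.linalg.norm(lm[i] - lm[j]) for i, j in pairs]
                         for lm in landmarks])   # structural model: inter-point distances

for name, X in [("holistic", X_holistic), ("structural", X_structural)]:
    r2 = cross_val_score(Ridge(alpha=1.0), X, trait, cv=5, scoring="r2").mean()
    print(f"{name}: mean cross-validated R^2 = {r2:.3f}")
```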

  4. Postparalysis Facial Synkinesis: Clinical Classification and Surgical Strategies

    PubMed Central

    Chang, Tommy Nai-Jen; Lu, Johnny Chuieng-Yi

    2015-01-01

    Background: Postparalysis facial synkinesis (PPFS) can occur after any cause of facial palsy. Current treatments are still inadequate. Surgical intervention, instead of Botox and rehabilitation only, was proposed for different degrees of PPFS. Methods: Seventy patients (43 females and 27 males) with PPFS were enrolled since 1986. They were divided into 4 patterns based on quality of smile and severity of synkinesis. Data were collected for the clinically varied presentations: pattern I (n = 14) with a good smile but synkinesis, pattern II (n = 17) with an acceptable smile but dominant synkinesis, pattern III (n = 34) with an unacceptable smile and dominant synkinesis, and pattern IV (n = 5) with a poor smile and synkinesis. Surgical interventions were based on the pattern of PPFS. Selective myectomy and some cosmetic procedures were performed for pattern I and II patients. Extensive myectomy and neurectomy of the involved muscles and nerves, followed by functioning free-muscle transplantation for facial reanimation in a 1- or 2-stage procedure, were performed for pattern III and many pattern II patients. A classic 2-stage procedure for facial reanimation was performed for pattern IV patients. Results: Minor aesthetic procedures provided some help to pattern I patients but did not cure the problem; these patients all had short follow-up. Most patients in patterns II (14/17, 82%) and III (34/34, 100%) showed a significant improvement in eye and smile appearance and a significant decrease in synkinetic movements following the aggressive major surgical intervention. Nearly all of the patients treated by the authors needed neither repeated botulinum toxin A injection nor an intensive rehabilitation program during the follow-up period. Conclusions: Treatment of PPFS remains a challenging problem. Major surgical reconstruction showed more promising and longer-lasting results than botulinum toxin A and/or rehabilitation in pattern III and II patients. PMID:25878931

  5. Branches of the Facial Artery.

    PubMed

    Hwang, Kun; Lee, Geun In; Park, Hye Jin

    2015-06-01

    The aim of this study is to systematically review the names of the branches of the facial artery, to review the classification of its branching pattern, and to clarify the presence percentage of each branch. In a PubMed search, the search terms "facial," AND "artery," AND "classification OR variant OR pattern" were used. The IBM SPSS Statistics 20 system was used for statistical analysis. Among the 500 titles, 18 articles were selected and reviewed systematically. Most of the articles focused on "classification" according to the "terminal branch." Several authors classified the facial artery according to its terminal branches. Most of them, however, did not define "terminal branch," and there was confusion within the classifications. When the inferior labial artery was absent, 3 different types were used. The "alar branch" or "nasal branch" was used instead of the "lateral nasal branch." The angular branch was used to refer to several different branches. The presence percentage of each branch named in Gray's Anatomy (premasseteric, inferior labial, superior labial, lateral nasal, and angular) varied, and no branch was present with 100% consistency. The superior labial branch was most frequently reported (95.7%, 382 arteries in 399 hemifaces). The angular branch (53.9%, 219 arteries in 406 hemifaces) and the premasseteric branch (53.8%, 43 arteries in 80 hemifaces) were least frequently reported. There were significant differences among the 5 branches (P < 0.05), except between the angular branch and the premasseteric branch and between the superior labial branch and the inferior labial branch. The authors believe identifying the presence percentage of each branch will be helpful for surgical procedures.

  6. Qualitative and Quantitative Analysis for Facial Complexion in Traditional Chinese Medicine

    PubMed Central

    Zhao, Changbo; Li, Guo-zheng; Li, Fufeng; Wang, Zhi; Liu, Chang

    2014-01-01

    Facial diagnosis is an important and very intuitive diagnostic method in Traditional Chinese Medicine (TCM). However, due to its qualitative and experience-based subjective nature, traditional facial diagnosis has certain limitations in clinical medicine. Computerized inspection methods provide classification models to recognize facial complexion (including color and gloss). However, previous works only studied the classification of facial complexion, which we consider qualitative analysis. For quantitative analysis, the severity or degree of facial complexion has not yet been reported. This paper aims to provide both qualitative and quantitative analysis of facial complexion. We propose a novel feature representation of facial complexion from the whole face of patients. The features are established with four chromaticity bases split by luminance distribution in the CIELAB color space. The chromaticity bases are constructed from the facial dominant color using two-level clustering; the optimal luminance distribution is determined through simple experimental comparisons. The features are shown to be more distinctive than previous facial complexion feature representations. Complexion recognition proceeds by training an SVM classifier with the optimal model parameters. In addition, the features are further improved by the weighted fusion of five local regions. Extensive experimental results show that the proposed features achieve the highest facial color recognition performance, with a total accuracy of 86.89%. Furthermore, the proposed recognition framework can analyze both the color and gloss degrees of facial complexion by learning a ranking function. PMID:24967342
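
    A hedged sketch of the colour-feature idea described above: face pixels are converted to CIELAB, split into luminance bands, the chromaticity (a*, b*) within each band is clustered into "chromaticity bases", and per-base pixel proportions feed an SVM. The band edges, cluster counts and data are illustrative assumptions rather than the paper's tuned configuration.

```python
import numpy as np
from skimage.color import rgb2lab
from sklearn.cluster import KMeans
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

L_BANDS = [(0, 40), (40, 60), (60, 80), (80, 100)]   # luminance splits on L*
K_PER_BAND = 4                                       # chromaticity bases per band

def complexion_features(face_rgb, kmeans_per_band):
    lab = rgb2lab(face_rgb).reshape(-1, 3)
    feats = []
    for (lo, hi), km in zip(L_BANDS, kmeans_per_band):
        px = lab[(lab[:, 0] >= lo) & (lab[:, 0] < hi)][:, 1:]   # a*, b* only
        if len(px) == 0:
            feats.extend([0.0] * K_PER_BAND)
            continue
        counts = np.bincount(km.predict(px), minlength=K_PER_BAND)
        feats.extend(counts / counts.sum())          # proportion of pixels per base
    return np.array(feats)

faces = np.random.rand(50, 64, 64, 3)       # placeholder facial crops
labels = np.random.randint(0, 3, 50)        # e.g. red / pale / normal complexion
all_px = rgb2lab(faces.reshape(-1, 1, 3)).reshape(-1, 3)
kms = [KMeans(n_clusters=K_PER_BAND, n_init=10).fit(
           all_px[(all_px[:, 0] >= lo) & (all_px[:, 0] < hi)][:, 1:])
       for lo, hi in L_BANDS]
X = np.array([complexion_features(f, kms) for f in faces])
print(cross_val_score(SVC(), X, labels, cv=5).mean())
```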

  7. Facial Cosmetics Exert a Greater Influence on Processing of the Mouth Relative to the Eyes: Evidence from the N170 Event-Related Potential Component.

    PubMed

    Tanaka, Hideaki

    2016-01-01

    Cosmetic makeup significantly influences facial perception. Because faces consist of similar physical structures, cosmetic makeup is typically used to highlight individual features, particularly those of the eyes (i.e., eye shadow) and mouth (i.e., lipstick). Though event-related potentials (ERPs) have been utilized to study various aspects of facial processing, the influence of cosmetics on specific ERP components remains unclear. The present study aimed to investigate the relationship between the application of cosmetic makeup and the amplitudes of the P1 and N170 event-related potential components during facial perception tasks. The influence of visual perception on N170 amplitude was evaluated under three makeup conditions: Eye Shadow, Lipstick, and No Makeup. Electroencephalography was used to monitor 17 participants who were exposed to visual stimuli under each of these three makeup conditions. The results demonstrated that the Lipstick condition elicited a significantly greater N170 amplitude than the No Makeup condition, while P1 amplitude was unaffected by any of the conditions. These findings indicate that the application of cosmetic makeup alters general facial perception but exerts no influence on the perception of low-level visual features. Collectively, these results support the notion that the application of makeup induces subtle alterations in the processing of facial stimuli, with a particular effect on the processing of specific facial components (i.e., the mouth), as reflected by changes in N170 amplitude.

  8. Facial Cosmetics Exert a Greater Influence on Processing of the Mouth Relative to the Eyes: Evidence from the N170 Event-Related Potential Component

    PubMed Central

    Tanaka, Hideaki

    2016-01-01

    Cosmetic makeup significantly influences facial perception. Because faces consist of similar physical structures, cosmetic makeup is typically used to highlight individual features, particularly those of the eyes (i.e., eye shadow) and mouth (i.e., lipstick). Though event-related potentials (ERPs) have been utilized to study various aspects of facial processing, the influence of cosmetics on specific ERP components remains unclear. The present study aimed to investigate the relationship between the application of cosmetic makeup and the amplitudes of the P1 and N170 event-related potential components during facial perception tasks. The influence of visual perception on N170 amplitude was evaluated under three makeup conditions: Eye Shadow, Lipstick, and No Makeup. Electroencephalography was used to monitor 17 participants who were exposed to visual stimuli under each of these three makeup conditions. The results demonstrated that the Lipstick condition elicited a significantly greater N170 amplitude than the No Makeup condition, while P1 amplitude was unaffected by any of the conditions. These findings indicate that the application of cosmetic makeup alters general facial perception but exerts no influence on the perception of low-level visual features. Collectively, these results support the notion that the application of makeup induces subtle alterations in the processing of facial stimuli, with a particular effect on the processing of specific facial components (i.e., the mouth), as reflected by changes in N170 amplitude. PMID:27656161

  9. The Effects of Institutional Classification and Gender on Faculty Inclusion of Syllabus Components

    ERIC Educational Resources Information Center

    Doolittle, Peter E.; Lusk, Danielle L.

    2007-01-01

    The purpose of this research was to explore the effects that gender and institutional classification have on the inclusion of syllabus components. Course syllabi (N = 350) written by men and women from seven types of institutions, based on Carnegie classification, were sampled and evaluated for the presence of 26 syllabus components. The gender…

  10. Face recognition using an enhanced independent component analysis approach.

    PubMed

    Kwak, Keun-Chang; Pedrycz, Witold

    2007-03-01

    This paper is concerned with an enhanced independent component analysis (ICA) and its application to face recognition. Typically, face representations obtained by ICA involve unsupervised learning and high-order statistics. In this paper, we develop an enhancement of generic ICA by augmenting the method with Fisher linear discriminant analysis (LDA); hence its abbreviation, FICA. The FICA is systematically developed and presented along with its underlying architecture. A comparative analysis explores four distance metrics, as well as classification with support vector machines (SVMs). We demonstrate that the FICA approach leads to the formation of well-separated classes in a low-dimensional subspace and is endowed with a great deal of insensitivity to large variations in illumination and facial expression. Comprehensive experiments are completed on the Face Recognition Technology (FERET) face database; a comparative analysis demonstrates that FICA achieves improved classification rates when compared with other conventional approaches such as eigenface, fisherface, and ICA itself.
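
    A minimal sketch of the ICA-then-LDA idea behind FICA: FastICA provides an unsupervised face representation, Fisher LDA then sharpens class separation, and a nearest-neighbour rule does the matching. The data, component counts and final classifier are illustrative assumptions, not the paper's FERET protocol.

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((120, 32 * 32))        # 120 vectorised face images (placeholders)
y = np.repeat(np.arange(12), 10)      # 12 subjects, 10 images each

fica = make_pipeline(
    FastICA(n_components=40, max_iter=1000, random_state=0),  # unsupervised ICA basis
    LinearDiscriminantAnalysis(),                             # supervised Fisher step
    KNeighborsClassifier(n_neighbors=1),                      # distance-based matching
)
print(cross_val_score(fica, X, y, cv=5).mean())
```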

  11. An efficient classification method based on principal component and sparse representation.

    PubMed

    Zhai, Lin; Fu, Shujun; Zhang, Caiming; Liu, Yunxian; Wang, Lu; Liu, Guohua; Yang, Mingqiang

    2016-01-01

    As an important application of optical imaging, palmprint recognition is affected by many unfavorable factors. An effective fusion of blockwise bi-directional two-dimensional principal component analysis and grouping sparse classification is presented. Dimension reduction and normalization are implemented by the blockwise bi-directional two-dimensional principal component analysis of palmprint images to extract feature matrices, which are assembled into an overcomplete dictionary for sparse classification. A subspace orthogonal matching pursuit algorithm is designed to solve the grouping sparse representation. Finally, the classification result is obtained by comparing the residuals between the testing and reconstructed images. Experiments are carried out on a palmprint database, and the results show that this method is more robust against position and illumination changes in palmprint images and achieves a higher palmprint recognition rate.
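
    A hedged sketch of the general pipeline described above: bilateral two-dimensional PCA reduces each image to a small feature matrix, the training features form a dictionary, and a test sample is coded over that dictionary with orthogonal matching pursuit and assigned to the class with the smallest reconstruction residual. This is a generic 2DPCA-plus-sparse-representation classifier, not the paper's blockwise and grouping variant; all data are placeholders.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def bilateral_2dpca(images, p=8, q=8):
    """Left (h x p) and right (w x q) projection matrices from training images."""
    mean = images.mean(axis=0)
    A = images - mean
    G_row = np.mean([a.T @ a for a in A], axis=0)     # w x w row covariance
    G_col = np.mean([a @ a.T for a in A], axis=0)     # h x h column covariance
    X = np.linalg.eigh(G_row)[1][:, ::-1][:, :q]      # top-q right eigenvectors
    Z = np.linalg.eigh(G_col)[1][:, ::-1][:, :p]      # top-p left eigenvectors
    return mean, Z, X

def project(images, mean, Z, X):
    return np.array([(Z.T @ (img - mean) @ X).ravel() for img in images])

rng = np.random.default_rng(0)
train = rng.random((80, 32, 32))                # placeholder palmprint images
y_train = np.repeat(np.arange(8), 10)           # 8 classes, 10 images each
test = rng.random((5, 32, 32))

mean, Z, X = bilateral_2dpca(train)
D = project(train, mean, Z, X)                  # dictionary: one row per training image
D = D / np.linalg.norm(D, axis=1, keepdims=True)
T = project(test, mean, Z, X)

for t in T:
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=10, fit_intercept=False).fit(D.T, t)
    coef = omp.coef_
    residuals = [np.linalg.norm(t - D.T @ (coef * (y_train == c)))   # class-wise residual
                 for c in np.unique(y_train)]
    print("predicted class:", int(np.argmin(residuals)))
```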

  12. Symmetrical and Asymmetrical Interactions between Facial Expressions and Gender Information in Face Perception.

    PubMed

    Liu, Chengwei; Liu, Ying; Iqbal, Zahida; Li, Wenhui; Lv, Bo; Jiang, Zhongqing

    2017-01-01

    To investigate the interaction between facial expressions and facial gender information during face perception, the present study matched the intensities of the two types of information in face images and then adopted the orthogonal condition of the Garner Paradigm to present the images to participants, who were required to judge the gender and expression of the faces; the gender and expression presentations were varied orthogonally. Gender and expression processing displayed a mutual interaction. On the one hand, the judgment of angry expressions occurred faster when presented with male facial images; on the other hand, the classification of the female gender occurred faster when presented with a happy facial expression than with an angry facial expression. According to the event-related potential results, expression classification was influenced by gender during the face structural processing stage (as indexed by N170), which indicates the promotion or interference of facial gender with the coding of facial expression features. However, gender processing was affected by facial expressions in more stages, including the early (P1) and late (LPC) stages of perceptual processing, reflecting that emotional expression influences gender processing mainly by directing attention.

  13. Active learning for solving the incomplete data problem in facial age classification by the furthest nearest-neighbor criterion.

    PubMed

    Wang, Jian-Gang; Sung, Eric; Yau, Wei-Yun

    2011-07-01

    Facial age classification is an approach to classify face images into one of several predefined age groups. One of the difficulties in applying learning techniques to the age classification problem is the large amount of labeled training data required. Acquiring such training data is very costly in terms of age progress, privacy, human time, and effort. Although unlabeled face images can be obtained easily, it would be expensive to manually label them on a large scale and to obtain the ground truth. The frugal selection of unlabeled data for labeling so as to quickly reach high classification performance with minimal labeling effort is a challenging problem. In this paper, we present an active learning approach based on an online incremental bilateral two-dimension linear discriminant analysis (IB2DLDA), which initially learns from a small pool of labeled data and then iteratively selects the most informative samples from the unlabeled set to incrementally improve the classifier. Specifically, we propose a novel data selection criterion called the furthest nearest-neighbor (FNN) that generalizes margin-based uncertainty to the multiclass case and is easy to compute, so that the proposed active learning algorithm can handle a large number of classes and large data sizes efficiently. Empirical experiments on the FG-NET and Morph databases, together with a large unlabeled data set, for age categorization problems show that the proposed approach can achieve results comparable to, or even better than, those of a conventionally trained active classifier that requires much more labeling effort. Our IB2DLDA-FNN algorithm can achieve similar results much faster than random selection and with fewer samples for age categorization. It can also achieve results comparable with active SVM but is much faster than active SVM in terms of training because kernel methods are not needed. The results on the face recognition database and palmprint/palm vein database showed that our approach can handle
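
    A compact sketch of the furthest nearest-neighbour (FNN) selection idea: at each round, the unlabeled sample whose distance to its nearest labeled sample is largest is queried. A plain logistic regression stands in for the paper's incremental bilateral 2D-LDA, and the data and label oracle are simulated placeholders.

```python
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.random((500, 20))                     # pooled face features (placeholders)
y = rng.integers(0, 4, 500)                   # 4 age groups, revealed only when queried
labeled = list(rng.choice(500, size=20, replace=False))
unlabeled = [i for i in range(500) if i not in labeled]

for _ in range(30):                           # 30 active-learning queries
    d = cdist(X[unlabeled], X[labeled])       # distances to the labeled pool
    nn_dist = d.min(axis=1)                   # each sample's nearest labeled neighbour
    pick = unlabeled[int(np.argmax(nn_dist))] # FNN: furthest such sample is queried next
    labeled.append(pick)                      # simulated oracle provides its label
    unlabeled.remove(pick)

clf = LogisticRegression(max_iter=1000).fit(X[labeled], y[labeled])
print("accuracy on the remaining pool:", clf.score(X[unlabeled], y[unlabeled]))
```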

  14. Static facial expression recognition with convolution neural networks

    NASA Astrophysics Data System (ADS)

    Zhang, Feng; Chen, Zhong; Ouyang, Chao; Zhang, Yifei

    2018-03-01

    Facial expression recognition is currently an active research topic in the fields of computer vision, pattern recognition and artificial intelligence. In this paper, we have developed a convolutional neural network (CNN) for classifying human emotions from static facial expressions into one of seven facial emotion categories. We pre-train our CNN model on the combined FER2013 dataset formed by its train, validation and test sets and fine-tune it on the extended Cohn-Kanade database. In order to reduce overfitting of the models, we utilized different techniques including dropout and batch normalization in addition to data augmentation. According to the experimental results, our CNN model has excellent classification performance and robustness for facial expression recognition.
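
    A hedged PyTorch sketch of a compact CNN of the kind described above, with batch normalisation and dropout for regularisation. The exact layer sizes, the 48 x 48 grayscale input (as in FER2013) and all hyperparameters are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class ExpressionCNN(nn.Module):
    def __init__(self, n_classes=7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
            nn.MaxPool2d(2),                                   # 48 -> 24
            nn.Conv2d(32, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.MaxPool2d(2),                                   # 24 -> 12
            nn.Conv2d(64, 128, 3, padding=1), nn.BatchNorm2d(128), nn.ReLU(),
            nn.MaxPool2d(2),                                   # 12 -> 6
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Dropout(0.5),
            nn.Linear(128 * 6 * 6, 256), nn.ReLU(),
            nn.Dropout(0.5),
            nn.Linear(256, n_classes),                         # 7 expression logits
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = ExpressionCNN()
dummy = torch.randn(8, 1, 48, 48)      # a batch of grayscale face crops
print(model(dummy).shape)              # torch.Size([8, 7])
```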

  15. Cranio-facial clefts in pre-hispanic America.

    PubMed

    Marius-Nunez, A L; Wasiak, D T

    2015-10-01

    Among the representations of congenital malformations in Moche ceramic art, cranio-facial clefts have been portrayed in pottery found in Moche burials. These pottery vessels were used as domestic items during life and as funerary offerings upon death. The aim of this study was to examine the archeological evidence for representations of cranio-facial cleft malformations in Moche vessels. Pottery depicting malformations of the midface in Moche collections in Lima, Peru, was studied. The malformations portrayed on the pottery were analyzed using the Tessier classification. Photographs were authorized by the Larco Museo. Three vessels were observed to have median cranio-facial dysraphia in association with a midline cleft of the lower lip and a cleft of the mandible. ML001489 portrays a median cranio-facial dysraphia with an orbital cleft and a midline cleft of the lower lip extending to the mandible. ML001514 represents a median facial dysraphia in association with an orbital facial cleft and a vertical orbital dystopia. ML001491 illustrates a median facial cleft with a soft tissue cleft. These three cases of midline, orbital and lateral facial clefts portrayed in Moche full-figure portrait vessels represent the earliest registries of congenital cranio-facial malformations in ancient Peru. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  16. Comparing Facial 3D Analysis With DNA Testing to Determine Zygosities of Twins.

    PubMed

    Vuollo, Ville; Sidlauskas, Mantas; Sidlauskas, Antanas; Harila, Virpi; Salomskiene, Loreta; Zhurov, Alexei; Holmström, Lasse; Pirttiniemi, Pertti; Heikkinen, Tuomo

    2015-06-01

    The aim of this study was to compare facial 3D analysis to DNA testing in twin zygosity determinations. Facial 3D images of 106 pairs of young adult Lithuanian twins were taken with a stereophotogrammetric device (3dMD, Atlanta, Georgia) and zygosity was determined according to similarity of facial form. Statistical pattern recognition methodology was used for classification. The results showed that in 75% to 90% of the cases, zygosity determinations were similar to DNA-based results. There were 81 different classification scenarios, including 3 groups, 3 features, 3 different scaling methods, and 3 threshold levels. It appeared that coincidence with 0.5 mm tolerance is the most suitable feature for classification. Also, leaving out scaling improves results in most cases. Scaling was expected to equalize the magnitude of differences and therefore lead to better recognition performance. Still, better classification features and a more effective scaling method or classification in different facial areas could further improve the results. In most of the cases, male pair zygosity recognition was at a higher level compared with females. Erroneously classified twin pairs appear to be obvious outliers in the sample. In particular, faces of young dizygotic (DZ) twins may be so similar that it is very hard to define a feature that would help classify the pair as DZ. Correspondingly, monozygotic (MZ) twins may have faces with quite different shapes. Such anomalous twin pairs are interesting exceptions, but they form a considerable portion in both zygosity groups.

  17. Perceptual integration of kinematic components in the recognition of emotional facial expressions.

    PubMed

    Chiovetto, Enrico; Curio, Cristóbal; Endres, Dominik; Giese, Martin

    2018-04-01

    According to a long-standing hypothesis in motor control, complex body motion is organized in terms of movement primitives, reducing massively the dimensionality of the underlying control problems. For body movements, this low-dimensional organization has been convincingly demonstrated by the learning of low-dimensional representations from kinematic and EMG data. In contrast, the effective dimensionality of dynamic facial expressions is unknown, and dominant analysis approaches have been based on heuristically defined facial "action units," which reflect contributions of individual face muscles. We determined the effective dimensionality of dynamic facial expressions by learning of a low-dimensional model from 11 facial expressions. We found an amazingly low dimensionality with only two movement primitives being sufficient to simulate these dynamic expressions with high accuracy. This low dimensionality is confirmed statistically, by Bayesian model comparison of models with different numbers of primitives, and by a psychophysical experiment that demonstrates that expressions, simulated with only two primitives, are indistinguishable from natural ones. In addition, we find statistically optimal integration of the emotion information specified by these primitives in visual perception. Taken together, our results indicate that facial expressions might be controlled by a very small number of independent control units, permitting very low-dimensional parametrization of the associated facial expression.
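
    A toy sketch of how an effective dimensionality like the one reported above can be estimated: kinematic trajectories are stacked and the number of components needed to explain most of the variance is read off. Plain PCA is used here as a simplified stand-in for the paper's low-dimensional model learning and Bayesian model comparison; the data are synthetic, built from two latent primitives so the expected answer is known.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# 11 expressions x 100 time frames, 30 facial-marker trajectories each,
# generated from 2 latent "primitives" plus noise.
primitives = rng.standard_normal((2, 30))
weights = rng.standard_normal((11 * 100, 2))
kinematics = weights @ primitives + 0.05 * rng.standard_normal((11 * 100, 30))

pca = PCA().fit(kinematics)
cum = np.cumsum(pca.explained_variance_ratio_)
n_primitives = int(np.searchsorted(cum, 0.95) + 1)   # components for 95% variance
print("estimated number of movement primitives:", n_primitives)
```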

  18. Wavelet decomposition based principal component analysis for face recognition using MATLAB

    NASA Astrophysics Data System (ADS)

    Sharma, Mahesh Kumar; Sharma, Shashikant; Leeprechanon, Nopbhorn; Ranjan, Aashish

    2016-03-01

    For the realization of face recognition systems, in static as well as real-time settings, algorithms such as principal component analysis, independent component analysis, linear discriminant analysis, neural networks and genetic algorithms have been used for decades. This paper discusses a wavelet decomposition based principal component analysis approach for face recognition. Principal component analysis is chosen over other algorithms due to its relative simplicity, efficiency, and robustness. Face recognition refers to identifying a person from facial features and bears some resemblance to factor analysis, i.e., the extraction of the principal components of an image. Principal component analysis is subject to some drawbacks, mainly poor discriminatory power and the large computational load of finding eigenvectors. These drawbacks can be greatly reduced by combining wavelet transform decomposition for feature extraction with principal component analysis for pattern representation and classification. From the experimental results, it is envisaged that this face recognition method yields a significant improvement in recognition rate as well as better computational efficiency.
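
    A minimal sketch of the wavelet-plus-PCA pipeline described above: a single-level Haar decomposition keeps the low-frequency approximation sub-band, PCA compresses it into an eigenface-style representation, and a nearest-neighbour rule does the matching. The wavelet, component count and data are illustrative assumptions.

```python
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

def wavelet_features(face):
    cA, (cH, cV, cD) = pywt.dwt2(face, "haar")   # keep only the approximation band
    return cA.ravel()

rng = np.random.default_rng(0)
faces = rng.random((100, 64, 64))                # 10 subjects x 10 images (placeholders)
labels = np.repeat(np.arange(10), 10)

X = np.array([wavelet_features(f) for f in faces])    # 64x64 -> 32x32 = 1024 features
model = make_pipeline(PCA(n_components=40), KNeighborsClassifier(n_neighbors=1))
print(cross_val_score(model, X, labels, cv=5).mean())
```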

  19. Evaluation of facial attractiveness in black people according to the subjective facial analysis criteria.

    PubMed

    Melo, Andréa Reis de; Conti, Ana Cláudia de Castro Ferreira; Almeida-Pedrin, Renata Rodrigues; Didier, Victor; Valarelli, Danilo Pinelli; Capelozza Filho, Leopoldino

    2017-02-01

    The objective of this study was to evaluate facial attractiveness in 30 black individuals according to the Subjective Facial Analysis criteria. Frontal and profile view photographs of the 30 individuals were evaluated for facial attractiveness and classified as esthetically unpleasant, acceptable, or pleasant by 50 evaluators: the 30 individuals from the sample, 10 orthodontists, and 10 laymen. Besides assessing facial attractiveness, the evaluators had to identify the structures responsible for a classification as unpleasant or pleasant. Intraexaminer agreement was assessed using Spearman's correlation, correlation within each category using Kendall's concordance coefficient, and correlation between the 3 categories using the chi-square test and proportions. Most of the frontal (53.5%) and profile view (54.9%) photographs were classified as esthetically acceptable. The structures most identified as esthetically unpleasant were the mouth, lips, and face in the frontal view, and the nose and chin in the profile view. The structures most identified as esthetically pleasant were harmony, face, and mouth in the frontal view, and harmony and nose in the profile view. The ratings by the examiners in the sample and laymen groups showed a statistically significant correlation in both views. The orthodontists agreed with the laymen on the evaluation of the frontal view and disagreed on the profile view, especially regarding whether the images were esthetically unpleasant or acceptable. Based on these results, the evaluation of facial attractiveness according to the Subjective Facial Analysis criteria proved to be applicable and subject to subjective influence; therefore, it is suggested that the patient's opinion regarding facial esthetics should be considered in orthodontic treatment planning.

  20. Brain responses to facial attractiveness induced by facial proportions: evidence from an fMRI study

    PubMed Central

    Shen, Hui; Chau, Desmond K. P.; Su, Jianpo; Zeng, Ling-Li; Jiang, Weixiong; He, Jufang; Fan, Jintu; Hu, Dewen

    2016-01-01

    Brain responses to facial attractiveness induced by facial proportions are investigated by using functional magnetic resonance imaging (fMRI), in 41 young adults (22 males and 19 females). The subjects underwent fMRI while they were presented with computer-generated, yet realistic face images, which had varying facial proportions, but the same neutral facial expression, baldhead and skin tone, as stimuli. Statistical parametric mapping with parametric modulation was used to explore the brain regions with the response modulated by facial attractiveness ratings (ARs). The results showed significant linear effects of the ARs in the caudate nucleus and the orbitofrontal cortex for all of the subjects, and a non-linear response profile in the right amygdala for only the male subjects. Furthermore, canonical correlation analysis was used to learn the most relevant facial ratios that were best correlated with facial attractiveness. A regression model on the fMRI-derived facial ratio components demonstrated a strong linear relationship between the visually assessed mean ARs and the predictive ARs. Overall, this study provided, for the first time, direct neurophysiologic evidence of the effects of facial ratios on facial attractiveness and suggested that there are notable gender differences in perceiving facial attractiveness as induced by facial proportions. PMID:27779211

  1. Brain responses to facial attractiveness induced by facial proportions: evidence from an fMRI study.

    PubMed

    Shen, Hui; Chau, Desmond K P; Su, Jianpo; Zeng, Ling-Li; Jiang, Weixiong; He, Jufang; Fan, Jintu; Hu, Dewen

    2016-10-25

    Brain responses to facial attractiveness induced by facial proportions are investigated by using functional magnetic resonance imaging (fMRI), in 41 young adults (22 males and 19 females). The subjects underwent fMRI while they were presented with computer-generated, yet realistic face images, which had varying facial proportions, but the same neutral facial expression, baldhead and skin tone, as stimuli. Statistical parametric mapping with parametric modulation was used to explore the brain regions with the response modulated by facial attractiveness ratings (ARs). The results showed significant linear effects of the ARs in the caudate nucleus and the orbitofrontal cortex for all of the subjects, and a non-linear response profile in the right amygdala for only the male subjects. Furthermore, canonical correlation analysis was used to learn the most relevant facial ratios that were best correlated with facial attractiveness. A regression model on the fMRI-derived facial ratio components demonstrated a strong linear relationship between the visually assessed mean ARs and the predictive ARs. Overall, this study provided, for the first time, direct neurophysiologic evidence of the effects of facial ratios on facial attractiveness and suggested that there are notable gender differences in perceiving facial attractiveness as induced by facial proportions.

  2. A study of patient facial expressivity in relation to orthodontic/surgical treatment.

    PubMed

    Nafziger, Y J

    1994-09-01

    A dynamic analysis of the faces of patients seeking an aesthetic restoration of facial aberrations with orthognathic treatment requires, besides the routine static study (records, study models, photographs, and cephalometric tracings), the study of their facial expressions. To determine a classification method for the units of expressive facial behavior, the mobility of the face was studied with the aid of the facial action coding system (FACS) created by Ekman and Friesen. Using video recordings of faces and photographic images taken from those recordings, the authors modified a technique of facial analysis structured on the visual observation of the anatomic basis of movement. The technique is based on defining individual facial expressions and then codifying them through minimal anatomic action units, which combine to form facial expressions. With the help of FACS, the facial expressions of 18 patients before and after orthognathic surgery, and of six control subjects without dentofacial deformation, were studied. In total, 6278 facial expressions were registered, from which 18,844 action units were defined. A classification of the facial expressions made by the subject groups, repeated in quantified time frames, allowed the establishment of "rules" or "norms" relating to expression, thus enabling comparisons of facial expressiveness between patients and control subjects. This study indicates that the facial expressions of the patients were more similar to those of the controls after orthognathic surgery. It was possible to distinguish changes in facial expressivity in patients after dentofacial surgery; the type and degree of change depended on the facial structure before surgery. The changes noted tended toward functioning identical to that of subjects who do not suffer from dysmorphosis and toward greater lip

  3. Feature selection for neural network based defect classification of ceramic components using high frequency ultrasound.

    PubMed

    Kesharaju, Manasa; Nagarajah, Romesh

    2015-09-01

    The motivation for this research stems from the need for a non-destructive testing method capable of detecting and locating defects and microstructural variations within armour ceramic components before they are issued to the soldiers who rely on them for their survival. The development of an automated ultrasonic inspection based classification system would make it possible to check each ceramic component and immediately alert the operator to the presence of defects. Generally, in many classification problems the choice of features or dimensionality reduction is significant and at the same time very difficult, as a substantial computational effort is required to evaluate possible feature subsets. In this research, a combination of artificial neural networks and genetic algorithms is used to optimize the feature subset used in the classification of various defects in reaction-sintered silicon carbide ceramic components. Initially, wavelet based feature extraction is applied to the region of interest. An Artificial Neural Network classifier is employed to evaluate the performance of these features, and Genetic Algorithm based feature selection is performed. Principal Component Analysis, a popular technique for feature selection, is compared with the genetic algorithm based technique in terms of classification accuracy and selection of the optimal number of features. The experimental results confirm that features identified by Principal Component Analysis lead to improved performance, with a classification accuracy of 96% compared with 94% for the genetic algorithm. Copyright © 2015 Elsevier B.V. All rights reserved.
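
    A hedged sketch of the two feature-reduction routes compared above: a small genetic algorithm searches binary feature masks scored by a neural-network classifier, and a PCA baseline uses the same classifier. The GA operators, population size and the placeholder wavelet features are illustrative assumptions, not the study's configuration.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((150, 30))                  # 30 wavelet features per inspected region
y = rng.integers(0, 3, 150)                # 3 defect classes (placeholders)

def fitness(mask):
    if mask.sum() == 0:
        return 0.0
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=300, random_state=0)
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

# --- minimal genetic algorithm over binary feature masks ---------------------
pop = rng.integers(0, 2, (10, X.shape[1]))
for gen in range(8):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[::-1][:5]]        # truncation selection
    children = []
    while len(children) < len(pop) - len(parents):
        a, b = parents[rng.integers(0, 5, 2)]
        cut = rng.integers(1, X.shape[1])
        child = np.concatenate([a[:cut], b[cut:]])     # one-point crossover
        flip = rng.random(X.shape[1]) < 0.05           # bit-flip mutation
        children.append(np.where(flip, 1 - child, child))
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("GA-selected features:", np.flatnonzero(best), "accuracy:", fitness(best))

# --- PCA baseline with the same classifier -----------------------------------
pca_clf = make_pipeline(PCA(n_components=10),
                        MLPClassifier(hidden_layer_sizes=(16,), max_iter=300,
                                      random_state=0))
print("PCA baseline accuracy:", cross_val_score(pca_clf, X, y, cv=3).mean())
```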

  4. Signatures of personality on dense 3D facial images.

    PubMed

    Hu, Sile; Xiong, Jieyi; Fu, Pengcheng; Qiao, Lu; Tan, Jingze; Jin, Li; Tang, Kun

    2017-03-06

    It has long been speculated that cues exist on the human face that allow observers to make reliable judgments of others' personality traits. However, direct evidence of an association between facial shape and personality is missing from the current literature. This study assessed the personality attributes of 834 Han Chinese volunteers (405 males and 429 females), utilising the five-factor personality model ('Big Five'), and collected their neutral 3D facial images. Dense anatomical correspondence was established across the 3D facial images in order to allow high-dimensional quantitative analyses of the facial phenotypes. In this paper, we developed a Partial Least Squares (PLS)-based method. We used the composite partial least squares component (CPSLC) to test the association between the self-reported personality scores and the dense 3D facial image data, and then used principal component analysis (PCA) for further validation. Among the five personality factors, agreeableness and conscientiousness in males and extraversion in females were significantly associated with specific facial patterns. The personality-related facial patterns were extracted and their effects were extrapolated on simulated 3D facial models.
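
    A minimal sketch of a PLS association test in the spirit of the analysis above: facial shape features and Big Five scores are linked with partial least squares, and predictive strength for one trait is checked by cross-validation. The paper's composite-component construction and PCA validation are not reproduced; all data here are synthetic placeholders.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 400
face_shape = rng.standard_normal((n, 300))   # e.g. vectorised dense 3D landmark coordinates
big_five = rng.standard_normal((n, 5))       # O, C, E, A, N self-report scores

pls = PLSRegression(n_components=5).fit(face_shape, big_five)
# x_loadings_ shows which shape dimensions co-vary with the personality scores
print("loading matrix shape:", pls.x_loadings_.shape)

# Cross-validated predictive R^2 for a single trait, e.g. conscientiousness (column 1)
r2 = cross_val_score(PLSRegression(n_components=5),
                     face_shape, big_five[:, 1], cv=5, scoring="r2").mean()
print("cross-validated R^2 for conscientiousness:", round(r2, 3))
```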

  5. Modern concepts in facial nerve reconstruction

    PubMed Central

    2010-01-01

    Background: Reconstructive surgery of the facial nerve is not daily routine for most head and neck surgeons. The published experience on strategies to ensure optimal functional results for patients is based on small case series with a large variety of surgical techniques. Against this background, it is worthwhile to develop a standardized approach for the diagnosis and treatment of patients asking for facial rehabilitation. Conclusion: A standardized approach is feasible. Patients with chronic facial palsy first need an exact classification of the palsy's aetiology. A step-by-step clinical examination, if necessary with MRI imaging and electromyographic examination, allows classification of the palsy's aetiology as well as determination of the severity of the palsy and the functional deficits. Considering the patient's desire, age and life expectancy, an individual surgical concept is applicable using three main approaches: a) early extratemporal reconstruction; b) early reconstruction of proximal lesions if extratemporal reconstruction is not possible; c) late reconstruction, or reconstruction in cases of congenital palsy. Twelve to 24 months after the last step of surgical reconstruction, a standardized evaluation of the therapeutic results is recommended to assess the need for adjuvant surgical procedures or other adjuvant measures, e.g. botulinum toxin application. To date, controlled trials on the value of physiotherapy and other adjuvant measures are lacking, so no recommendations can be given for their optimal application. PMID:21040532

  6. Acromegaly determination using discriminant analysis of the three-dimensional facial classification in Taiwanese.

    PubMed

    Wang, Ming-Hsu; Lin, Jen-Der; Chang, Chen-Nen; Chiou, Wen-Ko

    2017-08-01

    The aim of this study was to assess the size, angle and positional characteristics of facial anthropometry in patients with acromegaly versus control subjects, and to identify facial soft tissue measurements for generating discriminant functions for acromegaly determination in males and females, to support early self-awareness of the disease. This is a cross-sectional study. Subjects included 70 patients diagnosed with acromegaly (35 females and 35 males) and 140 gender-matched control individuals. Three-dimensional facial images were collected via a camera system, and thirteen landmarks were selected. Eleven measurements from three categories were applied, including five frontal widths, three lateral depths and three lateral angular measurements. Descriptive analyses were conducted using means and standard deviations for each measurement. Univariate and multivariate discriminant function analyses were applied in order to calculate the accuracy of acromegaly detection. Patients with acromegaly exhibit soft-tissue facial enlargement and hypertrophy; changes in frontal widths as well as lateral depths and angles were evident. The average accuracies of all functions for female patient detection ranged from 80.0% to 91.4%, and for male patient detection from 81.0% to 94.3%. The greatest anomaly was observed in the lateral angles, with greater enlargement of the nasofrontal angle in females and of the mentolabial angle in males; the shapes of the lateral angles also changed. The majority of the facial measurements proved dynamic in acromegaly patients; however, it is problematic to detect the disease from progressive anthropometric changes. The discriminant functions developed in this study could help patients, their families, medical practitioners and others to identify and track progressive facial change patterns before the possible patients
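
    A small sketch of a discriminant function over facial soft-tissue measurements, echoing the univariate and multivariate discriminant analyses above: linear discriminant analysis separates patients from controls and its accuracy is estimated by cross-validation. The eleven measurement values are synthetic placeholders, not the study's data.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_patients, n_controls = 35, 70
measurements = np.vstack([
    rng.normal(loc=1.1, scale=0.15, size=(n_patients, 11)),   # enlarged facial measures
    rng.normal(loc=1.0, scale=0.15, size=(n_controls, 11)),   # control subjects
])
label = np.array([1] * n_patients + [0] * n_controls)          # 1 = acromegaly

lda = LinearDiscriminantAnalysis()
acc = cross_val_score(lda, measurements, label, cv=5).mean()
print("cross-validated detection accuracy:", round(acc, 3))
```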

  7. Facial nerve mapping and monitoring in lymphatic malformation surgery.

    PubMed

    Chiara, Jospeh; Kinney, Greg; Slimp, Jefferson; Lee, Gi Soo; Oliaei, Sepehr; Perkins, Jonathan A

    2009-10-01

    To establish the efficacy of preoperative facial nerve mapping and continuous intraoperative EMG monitoring in protecting the facial nerve during resection of cervicofacial lymphatic malformations. Retrospective study in which patients were clinically followed for at least 6 months postoperatively, and long-term outcome was evaluated. Patient demographics and lesion characteristics (i.e., size, stage, location) were recorded. Operative notes revealed surgical techniques, findings, and complications. Preoperative and short-/long-term postoperative facial nerve function was graded using the House-Brackmann Classification. Mapping was done prior to incision by percutaneously stimulating the facial nerve and its branches and recording the motor responses. Intraoperative monitoring and mapping were accomplished using a four-channel, free-running EMG. Neurophysiologists continuously monitored EMG responses and blindly analyzed intraoperative findings and final EMG interpretations for abnormalities. Seven patients collectively underwent 8 lymphatic malformation surgeries. Median age was 30 months (2-105 months). Lymphatic malformation diagnosis was recorded in 6/8 surgeries. Facial nerve function was House-Brackmann grade I in 8/8 cases preoperatively. The facial nerve was abnormally elongated in 1/8 cases. EMG monitoring recorded abnormal activity in 4/8 cases: two suggesting facial nerve irritation, and two with possible facial nerve damage. Transient or long-term facial nerve paresis occurred in 1/8 cases (House-Brackmann grade II). Preoperative facial nerve mapping combined with continuous intraoperative EMG and mapping is a successful method of identifying the facial nerve course and protecting it from injury during resection of cervicofacial lymphatic malformations involving the facial nerve.

  8. Face-selective regions differ in their ability to classify facial expressions

    PubMed Central

    Zhang, Hui; Japee, Shruti; Nolan, Rachel; Chu, Carlton; Liu, Ning; Ungerleider, Leslie G

    2016-01-01

    Recognition of facial expressions is crucial for effective social interactions. Yet, the extent to which the various face-selective regions in the human brain classify different facial expressions remains unclear. We used functional magnetic resonance imaging (fMRI) and support vector machine pattern classification analysis to determine how well face-selective brain regions are able to decode different categories of facial expression. Subjects participated in a slow event-related fMRI experiment in which they were shown 32 face pictures, portraying four different expressions: neutral, fearful, angry, and happy and belonging to eight different identities. Our results showed that only the amygdala and the posterior superior temporal sulcus (STS) were able to accurately discriminate between these expressions, albeit in different ways: The amygdala discriminated fearful faces from non-fearful faces, whereas STS discriminated neutral from emotional (fearful, angry and happy) faces. In contrast to these findings on the classification of emotional expression, only the fusiform face area (FFA) and anterior inferior temporal cortex (aIT) could discriminate among the various facial identities. Further, the amygdala and STS were better than FFA and aIT at classifying expression, while FFA and aIT were better than the amygdala and STS at classifying identity. Taken together, our findings indicate that the decoding of facial emotion and facial identity occurs in different neural substrates: the amygdala and STS for the former and FFA and aIT for the latter. PMID:26826513

  9. Face-selective regions differ in their ability to classify facial expressions.

    PubMed

    Zhang, Hui; Japee, Shruti; Nolan, Rachel; Chu, Carlton; Liu, Ning; Ungerleider, Leslie G

    2016-04-15

    Recognition of facial expressions is crucial for effective social interactions. Yet, the extent to which the various face-selective regions in the human brain classify different facial expressions remains unclear. We used functional magnetic resonance imaging (fMRI) and support vector machine pattern classification analysis to determine how well face-selective brain regions are able to decode different categories of facial expression. Subjects participated in a slow event-related fMRI experiment in which they were shown 32 face pictures, portraying four different expressions: neutral, fearful, angry, and happy and belonging to eight different identities. Our results showed that only the amygdala and the posterior superior temporal sulcus (STS) were able to accurately discriminate between these expressions, albeit in different ways: the amygdala discriminated fearful faces from non-fearful faces, whereas STS discriminated neutral from emotional (fearful, angry and happy) faces. In contrast to these findings on the classification of emotional expression, only the fusiform face area (FFA) and anterior inferior temporal cortex (aIT) could discriminate among the various facial identities. Further, the amygdala and STS were better than FFA and aIT at classifying expression, while FFA and aIT were better than the amygdala and STS at classifying identity. Taken together, our findings indicate that the decoding of facial emotion and facial identity occurs in different neural substrates: the amygdala and STS for the former and FFA and aIT for the latter. Published by Elsevier Inc.
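
    A hedged sketch of ROI-wise decoding of the kind described in the two records above: trial-wise response patterns from each region feed a linear SVM, and cross-validated accuracy is compared against chance. The voxel counts, trial structure and patterns are simulated placeholders, not the study's fMRI data.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials = 128                                    # e.g. 32 pictures x 4 runs
expressions = rng.integers(0, 4, n_trials)        # neutral / fearful / angry / happy
rois = {"amygdala": 80, "STS": 200, "FFA": 150}   # assumed voxel counts per ROI

for roi, n_vox in rois.items():
    patterns = rng.standard_normal((n_trials, n_vox))   # trial-wise activity patterns
    acc = cross_val_score(LinearSVC(max_iter=5000), patterns, expressions, cv=8).mean()
    print(f"{roi}: decoding accuracy = {acc:.3f} (chance = 0.25)")
```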

  10. Facial attractiveness.

    PubMed

    Little, Anthony C

    2014-11-01

    Facial attractiveness has important social consequences. Despite a widespread belief that beauty cannot be defined, in fact, there is considerable agreement across individuals and cultures on what is found attractive. By considering that attraction and mate choice are critical components of evolutionary selection, we can better understand the importance of beauty. There are many traits that are linked to facial attractiveness in humans and each may in some way impart benefits to individuals who act on their preferences. If a trait is reliably associated with some benefit to the perceiver, then we would expect individuals in a population to find that trait attractive. Such an approach has highlighted face traits such as age, health, symmetry, and averageness, which are proposed to be associated with benefits and so associated with facial attractiveness. This view may postulate that some traits will be universally attractive; however, this does not preclude variation. Indeed, it would be surprising if there existed a template of a perfect face that was not affected by experience, environment, context, or the specific needs of an individual. Research on facial attractiveness has documented how various face traits are associated with attractiveness and various factors that impact on an individual's judgments of facial attractiveness. Overall, facial attractiveness is complex, both in the number of traits that determine attraction and in the large number of factors that can alter attraction to particular faces. A fuller understanding of facial beauty will come with an understanding of how these various factors interact with each other. WIREs Cogn Sci 2014, 5:621-634. doi: 10.1002/wcs.1316. © 2014 John Wiley & Sons, Ltd.

  11. The effect of motorcycle helmet type, components and fixation status on facial injury in Klang Valley, Malaysia: a case control study

    PubMed Central

    2014-01-01

    Background The effectiveness of helmets in reducing the risk of severe head injury in motorcyclists who were involved in a crash is well established. There is limited evidence, however, regarding the extent to which helmets protect riders from facial injuries. The objective of this study was to determine the effect of helmet type, components and fixation status on the risk of facial injuries among Malaysian motorcyclists. Method 755 injured motorcyclists were recruited over a 12-month period in 2010–2011 in southern Klang Valley, Malaysia in this case control study. Of the 755 injured motorcyclists, 391 participants (51.8%) sustained facial injuries (cases) while 364 (48.2%) participants were without facial injury (controls). The outcomes of interest were facial injury and location of facial injury (i.e. upper, middle and lower face injuries). A binary logistic regression was conducted to examine the association between helmet characteristics and the outcomes, taking into account potential confounders such as age, riding position, alcohol and illicit substance use, type of colliding vehicle and type of collision. Helmet fixation was defined as the position of the helmet during the crash, i.e. whether it was still secured on the head or had been dislodged. Results Helmet fixation was shown to have a greater effect on facial injury outcome than helmet type. Increased odds of adverse outcome were observed for the non-fixed helmet compared to the fixed helmet, with adjusted odds ratio (AOR) = 2.10 (95% CI 1.41-3.13) for facial injury; AOR = 6.64 (95% CI 3.71-11.91) for upper face injury; AOR = 5.36 (95% CI 3.05-9.44) for middle face injury; and AOR = 2.00 (95% CI 1.22-3.26) for lower face injury. Motorcyclists with a damaged visor had higher odds of facial injury (AOR = 5.48, 95% CI 1.46-20.57) compared to those with an undamaged visor. Conclusions A helmet of any type that is properly worn and remains fixed on the head throughout a crash will provide some form of

  12. Appraisals Generate Specific Configurations of Facial Muscle Movements in a Gambling Task: Evidence for the Component Process Model of Emotion.

    PubMed

    Gentsch, Kornelia; Grandjean, Didier; Scherer, Klaus R

    2015-01-01

    Scherer's Component Process Model provides a theoretical framework for research on the production mechanism of emotion and facial emotional expression. The model predicts that appraisal results drive facial expressions, which unfold sequentially and cumulatively over time. In two experiments, we examined facial muscle activity changes (via facial electromyography recordings over the corrugator, cheek, and frontalis regions) in response to events in a gambling task. These events were experimentally manipulated feedback stimuli which presented simultaneous information directly affecting goal conduciveness (gambling outcome: win, loss, or break-even) and power appraisals (Experiments 1 and 2), as well as control appraisal (Experiment 2). We repeatedly found main effects of goal conduciveness (starting ~600 ms) and power appraisals (starting ~800 ms after feedback onset). Control appraisal main effects were inconclusive. Interaction effects of goal conduciveness and power appraisals were obtained in both experiments (Experiment 1: over the corrugator and cheek regions; Experiment 2: over the frontalis region), suggesting amplified goal conduciveness effects when power was high, in contrast to invariant goal conduciveness effects when power was low. An interaction of goal conduciveness and control appraisals was also found over the cheek region, showing differential goal conduciveness effects when control was high and invariant effects when control was low. These interaction effects suggest that the appraisal of having sufficient control or power affects facial responses towards gambling outcomes. The result pattern suggests that corrugator and frontalis regions are primarily related to cognitive operations that process motivational pertinence, whereas the cheek region would be more influenced by coping implications. Our results provide the first evidence demonstrating that cognitive-evaluative mechanisms related to goal conduciveness, control, and power appraisals affect

  13. Appraisals Generate Specific Configurations of Facial Muscle Movements in a Gambling Task: Evidence for the Component Process Model of Emotion

    PubMed Central

    Gentsch, Kornelia; Grandjean, Didier; Scherer, Klaus R.

    2015-01-01

    Scherer’s Component Process Model provides a theoretical framework for research on the production mechanism of emotion and facial emotional expression. The model predicts that appraisal results drive facial expressions, which unfold sequentially and cumulatively over time. In two experiments, we examined facial muscle activity changes (via facial electromyography recordings over the corrugator, cheek, and frontalis regions) in response to events in a gambling task. These events were experimentally manipulated feedback stimuli which presented simultaneous information directly affecting goal conduciveness (gambling outcome: win, loss, or break-even) and power appraisals (Experiments 1 and 2), as well as control appraisal (Experiment 2). We repeatedly found main effects of goal conduciveness (starting ~600 ms) and power appraisals (starting ~800 ms after feedback onset). Control appraisal main effects were inconclusive. Interaction effects of goal conduciveness and power appraisals were obtained in both experiments (Experiment 1: over the corrugator and cheek regions; Experiment 2: over the frontalis region), suggesting amplified goal conduciveness effects when power was high, in contrast to invariant goal conduciveness effects when power was low. An interaction of goal conduciveness and control appraisals was also found over the cheek region, showing differential goal conduciveness effects when control was high and invariant effects when control was low. These interaction effects suggest that the appraisal of having sufficient control or power affects facial responses towards gambling outcomes. The result pattern suggests that corrugator and frontalis regions are primarily related to cognitive operations that process motivational pertinence, whereas the cheek region would be more influenced by coping implications. Our results provide the first evidence demonstrating that cognitive-evaluative mechanisms related to goal conduciveness, control, and power appraisals affect

  14. Automated facial acne assessment from smartphone images

    NASA Astrophysics Data System (ADS)

    Amini, Mohammad; Vasefi, Fartash; Valdebran, Manuel; Huang, Kevin; Zhang, Haomiao; Kemp, William; MacKinnon, Nicholas

    2018-02-01

    A smartphone mobile medical application is presented that provides analysis of the health of skin on the face using a smartphone image and cloud-based image processing techniques. The mobile application uses the camera to capture a front face image of a subject, after which the captured image is spatially calibrated based on fiducial points such as the position of the iris of the eye. A facial recognition algorithm is used to identify features of the human face image, to normalize the image, and to define facial regions of interest (ROI) for acne assessment. We identify acne lesions and classify them into two categories: papules and pustules. Automated facial acne assessment was validated by performing tests on images of 60 digital human models and 10 real human face images. The application was able to identify 92% of acne lesions within five facial ROIs. The classification accuracy for separating papules from pustules was 98%. Combined with in-app documentation of treatment, lifestyle factors, and automated facial acne assessment, the app can be used in both cosmetic and clinical dermatology. It allows users to quantitatively self-measure acne severity and treatment efficacy on an ongoing basis to help them manage their chronic facial acne.

  15. Role of facial attractiveness in patients with slight-to-borderline treatment need according to the Aesthetic Component of the Index of Orthodontic Treatment Need as judged by eye tracking.

    PubMed

    Johnson, Elizabeth K; Fields, Henry W; Beck, F Michael; Firestone, Allen R; Rosenstiel, Stephen F

    2017-02-01

    Previous eye-tracking research has demonstrated that laypersons view the range of dental attractiveness levels differently depending on facial attractiveness levels. How the borderline levels of dental attractiveness are viewed has not been evaluated in the context of facial attractiveness and compared with those with near-ideal esthetics or those in definite need of orthodontic treatment according to the Aesthetic Component of the Index of Orthodontic Treatment Need scale. Our objective was to determine the level of viewers' visual attention across treatment need levels 3 to 7 of this scale for persons considered "attractive," "average," or "unattractive." Facial images of persons at 3 facial attractiveness levels were combined with 5 levels of dental attractiveness (dentitions representing Aesthetic Component of the Index of Orthodontic Treatment Need levels 3-7) using imaging software to form 15 composite images. Each image was viewed twice by 66 lay participants using eye tracking. Both the fixation density (number of fixations per facial area) and the fixation duration (length of time for each facial area) were quantified for each image viewed. Repeated-measures analysis of variance was used to determine how fixation density and duration varied among the 6 facial interest areas (chin, ear, eye, mouth, nose, and other). Viewers demonstrated excellent to good reliability among the 6 interest areas (intraviewer reliability, 0.70-0.96; interviewer reliability, 0.56-0.93). Between Aesthetic Component of the Index of Orthodontic Treatment Need levels 3 and 7, viewers of all facial attractiveness levels showed an increase in attention to the mouth. However, significant differences in fixation density and duration between borderline levels were found only with the attractive models, and only among female viewers. Female viewers paid attention to different areas of the face than did male viewers. The importance of dental attractiveness is amplified in facially attractive female

  16. Robust artifactual independent component classification for BCI practitioners.

    PubMed

    Winkler, Irene; Brandl, Stephanie; Horn, Franziska; Waldburger, Eric; Allefeld, Carsten; Tangermann, Michael

    2014-06-01

    EEG artifacts of non-neural origin can be separated from neural signals by independent component analysis (ICA). It is unclear (1) how robustly recently proposed artifact classifiers transfer to novel users, novel paradigms or changed electrode setups, and (2) how artifact cleaning by a machine learning classifier impacts the performance of brain-computer interfaces (BCIs). Addressing (1), the robustness of different strategies with respect to the transfer between paradigms and electrode setups of a recently proposed classifier is investigated on offline data from 35 users and 3 EEG paradigms, which contain 6303 expert-labeled components from two ICA and preprocessing variants. Addressing (2), the effect of artifact removal on single-trial BCI classification is estimated on BCI trials from 101 users and 3 paradigms. We show that (1) the proposed artifact classifier generalizes to completely different EEG paradigms. To obtain similar results under massively reduced electrode setups, a proposed novel strategy improves artifact classification. Addressing (2), ICA artifact cleaning has little influence on average BCI performance when analyzed by state-of-the-art BCI methods. When slow motor-related features are exploited, performance varies strongly between individuals, as artifacts may obstruct relevant neural activity or are inadvertently used for BCI control. Robustness of the proposed strategies can be reproduced by EEG practitioners as the method is made available as an EEGLAB plug-in.

  17. System diagnostics using qualitative analysis and component functional classification

    DOEpatents

    Reifman, J.; Wei, T.Y.C.

    1993-11-23

    A method for detecting and identifying faulty component candidates during off-normal operations of nuclear power plants involves the qualitative analysis of macroscopic imbalances in the conservation equations of mass, energy and momentum in thermal-hydraulic control volumes associated with one or more plant components and the functional classification of components. The qualitative analysis of mass and energy is performed through the associated equations of state, while imbalances in momentum are obtained by tracking mass flow rates which are incorporated into a first knowledge base. The plant components are functionally classified, according to their type, as sources or sinks of mass, energy and momentum, depending upon which of the three balance equations is most strongly affected by a faulty component which is incorporated into a second knowledge base. Information describing the connections among the components of the system forms a third knowledge base. The method is particularly adapted for use in a diagnostic expert system to detect and identify faulty component candidates in the presence of component failures and is not limited to use in a nuclear power plant, but may be used with virtually any type of thermal-hydraulic operating system. 5 figures.

  18. System diagnostics using qualitative analysis and component functional classification

    DOEpatents

    Reifman, Jaques; Wei, Thomas Y. C.

    1993-01-01

    A method for detecting and identifying faulty component candidates during off-normal operations of nuclear power plants involves the qualitative analysis of macroscopic imbalances in the conservation equations of mass, energy and momentum in thermal-hydraulic control volumes associated with one or more plant components and the functional classification of components. The qualitative analysis of mass and energy is performed through the associated equations of state, while imbalances in momentum are obtained by tracking mass flow rates which are incorporated into a first knowledge base. The plant components are functionally classified, according to their type, as sources or sinks of mass, energy and momentum, depending upon which of the three balance equations is most strongly affected by a faulty component which is incorporated into a second knowledge base. Information describing the connections among the components of the system forms a third knowledge base. The method is particularly adapted for use in a diagnostic expert system to detect and identify faulty component candidates in the presence of component failures and is not limited to use in a nuclear power plant, but may be used with virtually any type of thermal-hydraulic operating system.

  19. The neurosurgical treatment of neuropathic facial pain.

    PubMed

    Brown, Jeffrey A

    2014-04-01

    This article reviews the definition, etiology and evaluation, and medical and neurosurgical treatment of neuropathic facial pain. A neuropathic origin for facial pain should be considered when evaluating a patient for rhinologic surgery because of complaints of facial pain. Neuropathic facial pain is caused by vascular compression of the trigeminal nerve in the prepontine cistern and is characterized by an intermittent prickling or stabbing component or a constant burning, searing pain. Medical treatment consists of anticonvulsant medication. Neurosurgical treatment may require microvascular decompression of the trigeminal nerve. Copyright © 2014 Elsevier Inc. All rights reserved.

  20. Effects of Objective 3-Dimensional Measures of Facial Shape and Symmetry on Perceptions of Facial Attractiveness.

    PubMed

    Hatch, Cory D; Wehby, George L; Nidey, Nichole L; Moreno Uribe, Lina M

    2017-09-01

    Meeting patient desires for enhanced facial esthetics requires that providers have standardized and objective methods to measure esthetics. The authors evaluated the effects of objective 3-dimensional (3D) facial shape and asymmetry measurements derived from 3D facial images on perceptions of facial attractiveness. The 3D facial images of 313 adults in Iowa were digitized with 32 landmarks, and objective 3D facial measurements capturing symmetric and asymmetric components of shape variation, centroid size, and fluctuating asymmetry were obtained from the 3D coordinate data using geometric morphometric analyses. Frontal and profile images of study participants were rated for facial attractiveness by 10 volunteers (5 women and 5 men) on a 5-point Likert scale and a visual analog scale. Multivariate regression was used to identify the effects of the objective 3D facial measurements on attractiveness ratings. Several objective 3D facial measurements had marked effects on attractiveness ratings. Shorter facial heights with protrusive chins, midface retrusion, faces with protrusive noses and thin lips, flat mandibular planes with deep labiomental folds, any cants of the lip commissures and floor of the nose, larger faces overall, and increased fluctuating asymmetry were rated as significantly (P < .001) less attractive. Perceptions of facial attractiveness can be explained by specific 3D measurements of facial shapes and fluctuating asymmetry, which have important implications for clinical practice and research. Copyright © 2017 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.

  1. Oxytocin improves facial emotion recognition in young adults with antisocial personality disorder.

    PubMed

    Timmermann, Marion; Jeung, Haang; Schmitt, Ruth; Boll, Sabrina; Freitag, Christine M; Bertsch, Katja; Herpertz, Sabine C

    2017-11-01

    Deficient facial emotion recognition has been suggested to underlie aggression in individuals with antisocial personality disorder (ASPD). As the neuropeptide oxytocin (OT) has been shown to improve facial emotion recognition, it might also exert beneficial effects in individuals with ASPD. In a double-blind, randomized, placebo-controlled crossover trial, 22 individuals with ASPD and 29 healthy control (HC) subjects (matched for age, sex, intelligence, and education) were intranasally administered either OT (24 IU) or a placebo 45 min before participating in an emotion classification paradigm with fearful, angry, and happy faces. We assessed the number of correct classifications and reaction times as indicators of emotion recognition ability. Significant group×substance×emotion interactions were found in correct classifications and reaction times. Compared to HC, individuals with ASPD showed deficits in recognizing fearful and happy faces; these group differences were no longer observable under OT. Additionally, reaction times for angry faces differed significantly between the ASPD and HC groups in the placebo condition. This effect was mainly driven by longer reaction times in HC subjects after placebo administration compared to OT administration, while individuals with ASPD showed, descriptively, the opposite response pattern. Our data indicate an improvement of the recognition of fearful and happy facial expressions by OT in young adults with ASPD. The increased recognition of facial fear is of particular importance, since the correct perception of distress signals in others is thought to inhibit aggression. Beneficial effects of OT might be further mediated by improved recognition of facial happiness, probably reflecting increased social reward responsiveness. Copyright © 2017. Published by Elsevier Ltd.

  2. Evaluation of facial attractiveness from end-of-treatment facial photographs.

    PubMed

    Shafiee, Roxanne; Korn, Edward L; Pearson, Helmer; Boyd, Robert L; Baumrind, Sheldon

    2008-04-01

    Orthodontists typically make judgments of facial attractiveness by examining groupings of profile, full-face, and smiling photographs considered together as a "triplet." The primary objective of this study was to determine the relative contributions of the 3 photographs, each considered separately, to the overall judgment a clinician forms by examining the combination of the 3. End-of-treatment triplet orthodontic photographs of 45 randomly selected orthodontic patients were duplicated. Copies of the profile, full-face, and smiling images were generated, and the images were separated and then pooled by image type for all subjects. Ten judges ranked the 45 photographs of each image type for facial attractiveness in groups of 9 to 12, from "most attractive" to "least attractive." Each judge also ranked the triplet groupings for the same 45 subjects. The mean attractiveness rankings for each type of photograph were then correlated with the mean rankings of each other and the triplets. The rankings of the 3 image types correlated highly with each other and the rankings of the triplets (P <.0001). The rankings of the smiling photographs were most predictive of the rankings of the triplets (r = 0.93); those of the profile photographs were the least predictive (r = 0.76). The difference between these correlations was highly statistically significant (P = .0003). It was also possible to test the extent to which the judges' rankings were influenced by sex, original Angle classification, and extraction status of each patient. No statistically significant preferences were found for sex or Angle classification, and only 1 marginally significant preference was found for extraction pattern. Clinician judges demonstrated a high level of agreement in ranking the facial attractiveness of profile, full-face, and smiling photographs of a group of orthodontically treated patients whose actual differences in physical dimensions were relatively small. The judges' rankings of the smiling

  3. A PCA-Based method for determining craniofacial relationship and sexual dimorphism of facial shapes.

    PubMed

    Shui, Wuyang; Zhou, Mingquan; Maddock, Steve; He, Taiping; Wang, Xingce; Deng, Qingqiong

    2017-11-01

    Previous studies have used principal component analysis (PCA) to investigate the craniofacial relationship, as well as sex determination using facial factors. However, few studies have investigated the extent to which the choice of principal components (PCs) affects the analysis of craniofacial relationship and sexual dimorphism. In this paper, we propose a PCA-based method for visual and quantitative analysis, using 140 samples of 3D heads (70 male and 70 female), produced from computed tomography (CT) images. There are two parts to the method. First, skull and facial landmarks are manually marked to guide the model's registration so that dense corresponding vertices occupy the same relative position in every sample. Statistical shape spaces of the skull and face in dense corresponding vertices are constructed using PCA. Variations in these vertices, captured in every principal component (PC), are visualized to observe shape variability. The correlations of skull- and face-based PC scores are analysed, and linear regression is used to fit the craniofacial relationship. We compute the PC coefficients of a face based on this craniofacial relationship and the PC scores of a skull, and apply the coefficients to estimate a 3D face for the skull. To evaluate the accuracy of the computed craniofacial relationship, the mean and standard deviation of every vertex between the two models are computed, where these models are reconstructed using real PC scores and coefficients. Second, each PC in facial space is analysed for sex determination, for which support vector machines (SVMs) are used. We examined the correlation between PCs and sex, and explored the extent to which the choice of PCs affects the expression of sexual dimorphism. Our results suggest that skull- and face-based PCs can be used to describe the craniofacial relationship and that the accuracy of the method can be improved by using an increased number of face-based PCs. The results show that the accuracy of
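    The regression-in-PC-space idea can be sketched as follows; the landmark data are synthetic and the numbers of vertices and components are arbitrary assumptions, so this only illustrates the general scheme of fitting skull and face shape spaces with PCA and linking them with linear regression.

    ```python
    # Sketch (with synthetic data) of fitting skull and face PCA spaces, regressing
    # face PC scores on skull PC scores, and estimating a face for a new skull.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(1)
    n_samples = 140
    skull = rng.normal(size=(n_samples, 300))   # flattened skull vertex coordinates
    face = skull @ rng.normal(size=(300, 240)) * 0.01 + rng.normal(size=(n_samples, 240))

    pca_skull = PCA(n_components=20).fit(skull)
    pca_face = PCA(n_components=20).fit(face)
    S = pca_skull.transform(skull)              # skull PC scores
    F = pca_face.transform(face)                # face PC scores

    reg = LinearRegression().fit(S, F)          # craniofacial relationship in PC space
    new_skull = rng.normal(size=(1, 300))
    est_face = pca_face.inverse_transform(reg.predict(pca_skull.transform(new_skull)))
    print(est_face.shape)                       # (1, 240): estimated facial vertices
    ```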

  4. The assessment of facial variation in 4747 British school children.

    PubMed

    Toma, Arshed M; Zhurov, Alexei I; Playle, Rebecca; Marshall, David; Rosin, Paul L; Richmond, Stephen

    2012-12-01

    The aim of this study is to identify key components contributing to facial variation in a large population-based sample of 15.5-year-old children (2514 females and 2233 males). The subjects were recruited from the Avon Longitudinal Study of Parents and Children. Three-dimensional facial images were obtained for each subject using two high-resolution Konica Minolta laser scanners. Twenty-one reproducible facial landmarks were identified and their coordinates were recorded. The facial images were registered using Procrustes analysis. Principal component analysis was then employed to identify independent groups of correlated coordinates. For the total data set, 14 principal components (PCs) were identified which explained 82 per cent of the total variance, with the first three components accounting for 46 per cent of the variance. Similar results were obtained for males and females separately with only subtle gender differences in some PCs. Facial features may be treated as a multidimensional statistical continuum with respect to the PCs. The first three PCs characterize the face in terms of height, width, and prominence of the nose. The derived PCs may be useful to identify and classify faces according to a scale of normality.
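    A minimal sketch of the registration-plus-PCA workflow is given below, assuming synthetic landmark configurations and a simple iterative Procrustes alignment (SciPy and scikit-learn assumed); it is not the study's exact processing chain.

    ```python
    # Procrustes-style alignment of landmark configurations followed by PCA.
    import numpy as np
    from scipy.linalg import orthogonal_procrustes
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(2)
    n_subj, n_lmk = 200, 21
    shapes = rng.normal(size=(n_subj, n_lmk, 3))      # 21 landmarks x 3 coordinates

    def normalize(x):
        x = x - x.mean(axis=0)          # remove translation
        return x / np.linalg.norm(x)    # remove scale

    shapes = np.array([normalize(s) for s in shapes])
    mean = shapes[0]
    for _ in range(3):                  # a few generalized-Procrustes iterations
        shapes = np.array([s @ orthogonal_procrustes(s, mean)[0] for s in shapes])
        mean = normalize(shapes.mean(axis=0))

    pca = PCA(n_components=14).fit(shapes.reshape(n_subj, -1))
    print(pca.explained_variance_ratio_.cumsum()[:3])  # variance explained by first PCs
    ```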

  5. Luminance sticker based facial expression recognition using discrete wavelet transform for physically disabled persons.

    PubMed

    Nagarajan, R; Hariharan, M; Satiyan, M

    2012-08-01

    Developing tools to assist physically disabled and immobilized people through facial expression is a challenging area of research and has attracted many researchers recently. In this paper, luminance-sticker-based facial expression recognition is proposed. Recognition of facial expression is carried out by employing the Discrete Wavelet Transform (DWT) as a feature extraction method. Different wavelet families with their different orders (db1 to db20, Coif1 to Coif5 and Sym2 to Sym8) are utilized to investigate their performance in recognizing facial expression and to evaluate their computational time. The standard deviation is computed for the coefficients of the first level of wavelet decomposition for every order of wavelet family. This standard deviation is used to form a set of feature vectors for classification. In this study, conventional validation and cross validation are performed to evaluate the efficiency of the suggested feature vectors. Three different classifiers, namely Artificial Neural Network (ANN), k-Nearest Neighbor (kNN) and Linear Discriminant Analysis (LDA), are used to classify a set of eight facial expressions. The experimental results demonstrate that the proposed method gives very promising classification accuracies.
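    A hedged sketch of this feature-extraction idea, assuming PyWavelets and scikit-learn are available: the standard deviation of the first-level detail coefficients for a few wavelet families forms the feature vector, which is then classified (here with kNN on random stand-in signals).

    ```python
    # DWT standard-deviation features + kNN classification on synthetic signals.
    import numpy as np
    import pywt
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(3)
    n_samples, n_points, n_classes = 160, 256, 8
    signals = rng.normal(size=(n_samples, n_points))        # stand-in measurement traces
    labels = np.repeat(np.arange(n_classes), n_samples // n_classes)

    def dwt_features(signal, wavelets=("db4", "coif3", "sym5")):
        feats = []
        for w in wavelets:
            _, detail = pywt.dwt(signal, w)                 # first-level decomposition
            feats.append(np.std(detail))                    # one scalar per wavelet
        return feats

    X = np.array([dwt_features(s) for s in signals])
    acc = cross_val_score(KNeighborsClassifier(n_neighbors=5), X, labels, cv=5).mean()
    print(f"cross-validated accuracy on random data: {acc:.2f}")
    ```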

  6. Facial soft tissue thickness differences among three skeletal classes in Japanese population.

    PubMed

    Utsuno, Hajime; Kageyama, Toru; Uchida, Keiichi; Kibayashi, Kazuhiko

    2014-03-01

    Facial reconstruction is used in forensic anthropology to recreate the face from unknown human skeletal remains, and to elucidate the antemortem facial appearance. This requires accurate assessment of the skull (age, sex, ancestry, etc.) and thickness data. However, additional information is required to reconstruct the face as the information obtained from the skull is limited. Here, we aimed to examine the information from the skull that is required for accurate facial reconstruction. The human facial profile is classified into 3 shapes: straight, convex, and concave. These facial profiles facilitate recognition of individuals. The skeletal classes used in orthodontics are classified according to these 3 facial types. We have previously reported the differences between Japanese females. In the present study, we applied this classification for facial tissue measurement, compared the differences in tissue depth of each skeletal class for both sexes in the Japanese population, and elucidated the differences between the skeletal classes. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  7. Reliable classification of facial phenotypic variation in craniofacial microsomia: a comparison of physical exam and photographs.

    PubMed

    Birgfeld, Craig B; Heike, Carrie L; Saltzman, Babette S; Leroux, Brian G; Evans, Kelly N; Luquetti, Daniela V

    2016-03-31

    Craniofacial microsomia is a common congenital condition for which children receive longitudinal, multidisciplinary team care. However, little is known about the etiology of craniofacial microsomia and few outcome studies have been published. In order to facilitate large, multicenter studies in craniofacial microsomia, we assessed the reliability of phenotypic classification based on photographs by comparison with direct physical examination. Thirty-nine children with craniofacial microsomia underwent a physical examination and photographs according to a standardized protocol. Three clinicians completed ratings during the physical examination and, at least a month later, using respective photographs for each participant. We used descriptive statistics for participant characteristics and intraclass correlation coefficients (ICCs) to assess reliability. The agreement between ratings on photographs and physical exam was greater than 80 % for all 15 categories included in the analysis. The ICC estimates were higher than 0.6 for most features. Features with the highest ICC included: presence of epibulbar dermoids, ear abnormalities, and colobomas (ICC 0.85, 0.81, and 0.80, respectively). Orbital size, presence of pits, tongue abnormalities, and strabismus had the lowest ICC, values (0.17 or less). There was not a strong tendency for either type of rating, physical exam or photograph, to be more likely to designate a feature as abnormal. The agreement between photographs and physical exam regarding the presence of a prior surgery was greater than 90 % for most features. Our results suggest that categorization of facial phenotype in children with CFM based on photographs is reliable relative to physical examination for most facial features.

  8. [Research on fast classification based on LIBS technology and principal component analysis].

    PubMed

    Yu, Qi; Ma, Xiao-Hong; Wang, Rui; Zhao, Hua-Feng

    2014-11-01

    Laser-induced breakdown spectroscopy (LIBS) and principal component analysis (PCA) were combined to study aluminum alloy classification in the present article. Classification experiments were done on thirteen different kinds of standard samples of aluminum alloy belonging to 4 different types, and the results suggested that the LIBS-PCA method can be used for fast classification of aluminum alloys. PCA was used to analyze the spectral data from the LIBS experiments: the three principal components that contribute the most were identified, the principal component scores of the spectra were calculated, and the scores were plotted in three-dimensional coordinates. It was found that the spectrum sample points show a clear convergence phenomenon according to the type of aluminum alloy they belong to. This result established the three principal components and a preliminary aluminum alloy type zoning. In order to verify its accuracy, 20 different aluminum alloy samples were used in the same experiments to validate the type zoning. The experimental results showed that the spectrum sample points were all located in the corresponding area of their aluminum alloy type, which confirmed the correctness of the earlier type zoning based on the standard samples. On this basis, the identification of an unknown type of aluminum alloy can be performed. All the experimental results showed that the accuracy of the principal component analysis method based on laser-induced breakdown spectroscopy is more than 97.14%, and that it can classify the different types effectively. Compared to commonly used chemical methods, laser-induced breakdown spectroscopy can detect samples in situ and quickly, with little sample preparation; therefore, combining LIBS and PCA in areas such as quality testing and on-line industrial control can save considerable time and cost and greatly improve detection efficiency.
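    The LIBS-PCA workflow can be illustrated with the short sketch below, which projects synthetic stand-in spectra onto the first three principal components and inspects how the classes separate in PC space; the spectra, channel count, and class structure are assumptions for illustration only.

    ```python
    # Project synthetic "spectra" onto 3 PCs and look at per-type centroids.
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(4)
    n_per_type, n_channels = 30, 1024
    baselines = rng.normal(size=(4, n_channels))   # one baseline spectrum per alloy type
    spectra = np.vstack([b + 0.1 * rng.normal(size=(n_per_type, n_channels))
                         for b in baselines])
    types = np.repeat(np.arange(4), n_per_type)

    scores = PCA(n_components=3).fit_transform(spectra)   # 3 PC scores per spectrum
    for t in range(4):
        centroid = scores[types == t].mean(axis=0)
        print(f"type {t}: PC-space centroid {np.round(centroid, 2)}")
    ```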

  9. Facial Emotions Recognition using Gabor Transform and Facial Animation Parameters with Neural Networks

    NASA Astrophysics Data System (ADS)

    Harit, Aditya; Joshi, J. C., Col; Gupta, K. K.

    2018-03-01

    The paper proposes an automatic facial emotion recognition algorithm which comprises two main components: feature extraction and expression recognition. The algorithm uses a Gabor filter bank on fiducial points to find the facial expression features. The resulting magnitudes of the Gabor transforms, along with 14 chosen FAPs (Facial Animation Parameters), compose the feature space. There are two stages: the training phase and the recognition phase. In the training stage, the system classifies all training expressions into 6 different classes, one for each of the 6 emotions considered. In the recognition phase, it applies the Gabor filter bank to a face image, finds the fiducial points, and feeds the resulting features to the trained neural architecture to recognize the emotion.
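    As a rough illustration of Gabor-based feature extraction at fiducial points, the sketch below (OpenCV assumed) filters a stand-in image with a small bank of real Gabor kernels and samples the absolute responses at a few hypothetical fiducial locations; the FAP features and the neural network stage are omitted.

    ```python
    # Gabor filter bank sampled at fiducial points (simplified, real kernels only).
    import cv2
    import numpy as np

    rng = np.random.default_rng(5)
    face = rng.random((128, 128)).astype(np.float32)          # stand-in for a face image
    fiducials = [(40, 50), (40, 78), (70, 64), (95, 64)]      # hypothetical (row, col) points

    features = []
    for theta in np.linspace(0, np.pi, 4, endpoint=False):    # 4 orientations
        for lambd in (4, 8):                                  # 2 wavelengths
            kernel = cv2.getGaborKernel((21, 21), sigma=4.0, theta=theta,
                                        lambd=lambd, gamma=0.5)
            response = cv2.filter2D(face, cv2.CV_32F, kernel)
            features.extend(abs(response[r, c]) for r, c in fiducials)

    print(len(features))   # 4 orientations x 2 wavelengths x 4 points = 32 features
    ```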

  10. Snapshot hyperspectral imaging probe with principal component analysis and confidence ellipse for classification

    NASA Astrophysics Data System (ADS)

    Lim, Hoong-Ta; Murukeshan, Vadakke Matham

    2017-06-01

    Hyperspectral imaging combines imaging and spectroscopy to provide detailed spectral information for each spatial point in the image. This gives a three-dimensional spatial-spatial-spectral datacube with hundreds of spectral images. Probe-based hyperspectral imaging systems have been developed so that they can be used in regions where conventional table-top platforms would find it difficult to access. A fiber bundle, which is made up of specially-arranged optical fibers, has recently been developed and integrated with a spectrograph-based hyperspectral imager. This forms a snapshot hyperspectral imaging probe, which is able to form a datacube using the information from each scan. Compared to the other configurations, which require sequential scanning to form a datacube, the snapshot configuration is preferred in real-time applications where motion artifacts and pixel misregistration can be minimized. Principal component analysis is a dimension-reducing technique that can be applied in hyperspectral imaging to convert the spectral information into uncorrelated variables known as principal components. A confidence ellipse can be used to define the region of each class in the principal component feature space and for classification. This paper demonstrates the use of the snapshot hyperspectral imaging probe to acquire data from samples of different colors. The spectral library of each sample was acquired and then analyzed using principal component analysis. Confidence ellipse was then applied to the principal components of each sample and used as the classification criteria. The results show that the applied analysis can be used to perform classification of the spectral data acquired using the snapshot hyperspectral imaging probe.
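    The confidence-ellipse criterion can be sketched as a Mahalanobis-distance test against a chi-square threshold in the two-dimensional PC space, as below; the spectra are synthetic and the 95% level is an assumed choice.

    ```python
    # PCA + per-class 95% confidence ellipse test in the first two PCs.
    import numpy as np
    from numpy.linalg import inv
    from sklearn.decomposition import PCA
    from scipy.stats import chi2

    rng = np.random.default_rng(6)
    spectra = np.vstack([m + 0.05 * rng.normal(size=(40, 200))
                         for m in rng.normal(size=(3, 200))])   # 3 colour classes
    labels = np.repeat(np.arange(3), 40)

    scores = PCA(n_components=2).fit_transform(spectra)

    def inside_ellipse(point, class_scores, level=0.95):
        """Mahalanobis test: is `point` inside the class's confidence ellipse?"""
        mu = class_scores.mean(axis=0)
        cov = np.cov(class_scores, rowvar=False)
        d2 = (point - mu) @ inv(cov) @ (point - mu)
        return bool(d2 <= chi2.ppf(level, df=2))

    test_point = scores[0]
    print([inside_ellipse(test_point, scores[labels == c]) for c in range(3)])
    ```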

  11. What does magnetic resonance imaging add to the prenatal ultrasound diagnosis of facial clefts?

    PubMed

    Mailáth-Pokorny, M; Worda, C; Krampl-Bettelheim, E; Watzinger, F; Brugger, P C; Prayer, D

    2010-10-01

    Ultrasound is the modality of choice for prenatal detection of cleft lip and palate. Because its accuracy in detecting facial clefts, especially isolated clefts of the secondary palate, can be limited, magnetic resonance imaging (MRI) is used as an additional method for assessing the fetus. The aim of this study was to investigate the role of fetal MRI in the prenatal diagnosis of facial clefts. Thirty-four pregnant women with a mean gestational age of 26 (range, 19-34) weeks underwent in utero MRI, after ultrasound examination had identified either a facial cleft (n = 29) or another suspected malformation (micrognathia (n = 1), cardiac defect (n = 1), brain anomaly (n = 2) or diaphragmatic hernia (n = 1)). The facial cleft was classified postnatally and the diagnoses were compared with the previous ultrasound findings. There were 11 (32.4%) cases with cleft of the primary palate alone, 20 (58.8%) clefts of the primary and secondary palate and three (8.8%) isolated clefts of the secondary palate. In all cases the primary and secondary palate were visualized successfully with MRI. Ultrasound imaging could not detect five (14.7%) facial clefts and misclassified 15 (44.1%) facial clefts. The MRI classification correlated with the postnatal/postmortem diagnosis. In our hands MRI allows detailed prenatal evaluation of the primary and secondary palate. By demonstrating involvement of the palate, MRI provides better detection and classification of facial clefts than does ultrasound alone. Copyright © 2010 ISUOG. Published by John Wiley & Sons, Ltd.

  12. Individual differences in the recognition of facial expressions: an event-related potentials study.

    PubMed

    Tamamiya, Yoshiyuki; Hiraki, Kazuo

    2013-01-01

    Previous studies have shown that early posterior components of event-related potentials (ERPs) are modulated by facial expressions. The goal of the current study was to investigate individual differences in the recognition of facial expressions by examining the relationship between ERP components and the discrimination of facial expressions. Pictures of 3 facial expressions (angry, happy, and neutral) were presented to 36 young adults during ERP recording. Participants were asked to respond with a button press as soon as they recognized the expression depicted. A multiple regression analysis was conducted with ERP components as predictor variables and hits and reaction times in response to the facial expressions as dependent variables. The N170 amplitudes significantly predicted accuracy for angry and happy expressions, and the N170 latencies were predictive of accuracy for neutral expressions. The P2 amplitudes significantly predicted reaction time. The P2 latencies significantly predicted reaction times only for neutral faces. These results suggest that individual differences in the recognition of facial expressions emerge from early components in visual processing.

  13. Robust representation and recognition of facial emotions using extreme sparse learning.

    PubMed

    Shojaeilangari, Seyedehsamaneh; Yau, Wei-Yun; Nandakumar, Karthik; Li, Jun; Teoh, Eam Khwang

    2015-07-01

    Recognition of natural emotions from human faces is an interesting topic with a wide range of potential applications, such as human-computer interaction, automated tutoring systems, image and video retrieval, smart environments, and driver warning systems. Traditionally, facial emotion recognition systems have been evaluated on laboratory controlled data, which is not representative of the environment faced in real-world applications. To robustly recognize the facial emotions in real-world natural situations, this paper proposes an approach called extreme sparse learning, which has the ability to jointly learn a dictionary (set of basis) and a nonlinear classification model. The proposed approach combines the discriminative power of extreme learning machine with the reconstruction property of sparse representation to enable accurate classification when presented with noisy signals and imperfect data recorded in natural settings. In addition, this paper presents a new local spatio-temporal descriptor that is distinctive and pose-invariant. The proposed framework is able to achieve the state-of-the-art recognition accuracy on both acted and spontaneous facial emotion databases.

  14. [Study on clinical effectiveness of acupuncture and moxibustion on acute Bell's facial paralysis: randomized controlled clinical observation].

    PubMed

    Wu, Bin; Li, Ning; Liu, Yi; Huang, Chang-qiong; Zhang, Yong-ling

    2006-03-01

    To investigate the adverse effects of acupuncture on prognosis and the effectiveness of acupuncture combined with far infrared ray in patients with acute Bell's facial paralysis within 48 h of onset. A randomized controlled clinical trial was used, and the patients were divided into 3 groups: group A (early acupuncture group), group B (acupuncture combined with far infrared ray) and group C (acupuncture after 7 days). The facial nerve functional classification at the attack, 7 days after the attack and after treatment, the clinical cure rate at the 6-month follow-up, the average time to cure, and the time to cure of complete facial paralysis were observed in the 3 groups. There were no significant differences among the 3 groups in the facial nerve functional classification 7 days after the attack, the clinical cure rate at the 6-month follow-up or the average time to cure (P > 0.05), but the time to cure of complete facial paralysis in group A and group B was shorter than that in group C (P < 0.05). Patients with acute Bell's facial paralysis can be treated with acupuncture and moxibustion, and traditional moxibustion can be replaced by far infrared ray.

  15. A New Method of Facial Expression Recognition Based on SPE Plus SVM

    NASA Astrophysics Data System (ADS)

    Ying, Zilu; Huang, Mingwei; Wang, Zhen; Wang, Zhewei

    A novel method of facial expression recognition (FER) is presented, which uses stochastic proximity embedding (SPE) for data dimension reduction and a support vector machine (SVM) for expression classification. The proposed algorithm is applied to the Japanese Female Facial Expression (JAFFE) database for FER, and better performance is obtained compared with traditional algorithms such as PCA and LDA. The results further demonstrate the effectiveness of the proposed algorithm.

  16. Classification of fMRI independent components using IC-fingerprints and support vector machine classifiers.

    PubMed

    De Martino, Federico; Gentile, Francesco; Esposito, Fabrizio; Balsi, Marco; Di Salle, Francesco; Goebel, Rainer; Formisano, Elia

    2007-01-01

    We present a general method for the classification of independent components (ICs) extracted from functional MRI (fMRI) data sets. The method consists of two steps. In the first step, each fMRI-IC is associated with an IC-fingerprint, i.e., a representation of the component in a multidimensional space of parameters. These parameters are post hoc estimates of global properties of the ICs and are largely independent of a specific experimental design and stimulus timing. In the second step, a machine learning algorithm automatically separates the IC-fingerprints into six general classes after preliminary training performed on a small subset of expert-labeled components. We illustrate this approach in a multisubject fMRI study employing visual structure-from-motion stimuli encoding faces and control random shapes. We show that: (1) IC-fingerprints are a valuable tool for the inspection, characterization and selection of fMRI-ICs and (2) automatic classifications of fMRI-ICs in new subjects present a high correspondence with those obtained by expert visual inspection of the components. Importantly, our classification procedure highlights several neurophysiologically interesting processes. The most intriguing of these is reflected, with high intra- and inter-subject reproducibility, in one IC exhibiting a transiently task-related activation in the 'face' region of the primary sensorimotor cortex. This suggests that in addition to or as part of the mirror system, somatotopic regions of the sensorimotor cortex are involved in disambiguating the perception of a moving body part. Finally, we show that the same classification algorithm can be successfully applied, without re-training, to fMRI collected using acquisition parameters, stimulation modality and timing considerably different from those used for training.

  17. Empirical mode decomposition-based facial pose estimation inside video sequences

    NASA Astrophysics Data System (ADS)

    Qing, Chunmei; Jiang, Jianmin; Yang, Zhijing

    2010-03-01

    We describe a new pose-estimation algorithm that integrates the strengths of both empirical mode decomposition (EMD) and mutual information. While mutual information is exploited to measure the similarity between facial images to estimate poses, EMD is exploited to decompose input facial images into a number of intrinsic mode function (IMF) components, which redistribute the effects of noise, expression changes, and illumination variations such that, when the input facial image is described by the selected IMF components, these negative effects can be minimized. Extensive experiments were carried out in comparison to existing representative techniques, and the results show that the proposed algorithm achieves better pose-estimation performance with robustness to noise corruption, illumination variation, and facial expressions.
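    The mutual-information similarity measure at the core of this matching step can be sketched with a simple histogram estimator, as below; the EMD/IMF decomposition and the actual pose database are omitted, and the bin count is an arbitrary assumption.

    ```python
    # Histogram-based mutual information between two grayscale images.
    import numpy as np

    def mutual_information(img_a: np.ndarray, img_b: np.ndarray, bins: int = 32) -> float:
        """MI between two equally sized grayscale images, in bits."""
        joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
        pxy = joint / joint.sum()                 # joint intensity distribution
        px = pxy.sum(axis=1, keepdims=True)       # marginals
        py = pxy.sum(axis=0, keepdims=True)
        nz = pxy > 0
        return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

    rng = np.random.default_rng(8)
    a = rng.random((64, 64))
    # MI of an image with itself is high; MI with an unrelated image is near zero.
    print(mutual_information(a, a), mutual_information(a, rng.random((64, 64))))
    ```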

  18. Facial identity and facial expression are initially integrated at visual perceptual stages of face processing.

    PubMed

    Fisher, Katie; Towler, John; Eimer, Martin

    2016-01-08

    It is frequently assumed that facial identity and facial expression are analysed in functionally and anatomically distinct streams within the core visual face processing system. To investigate whether expression and identity interact during the visual processing of faces, we employed a sequential matching procedure where participants compared either the identity or the expression of two successively presented faces, and ignored the other irrelevant dimension. Repetitions versus changes of facial identity and expression were varied independently across trials, and event-related potentials (ERPs) were recorded during task performance. Irrelevant facial identity and irrelevant expression both interfered with performance in the expression and identity matching tasks. These symmetrical interference effects show that neither identity nor expression can be selectively ignored during face matching, and suggest that they are not processed independently. N250r components to identity repetitions that reflect identity matching mechanisms in face-selective visual cortex were delayed and attenuated when there was an expression change, demonstrating that facial expression interferes with visual identity matching. These findings provide new evidence for interactions between facial identity and expression within the core visual processing system, and question the hypothesis that these two attributes are processed independently. Copyright © 2015 Elsevier Ltd. All rights reserved.

  19. Efficient quantitative assessment of facial paralysis using iris segmentation and active contour-based key points detection with hybrid classifier.

    PubMed

    Barbosa, Jocelyn; Lee, Kyubum; Lee, Sunwon; Lodhi, Bilal; Cho, Jae-Gu; Seo, Woo-Keun; Kang, Jaewoo

    2016-03-12

    Facial palsy or paralysis (FP) is the loss of voluntary muscle movement on one side of the face, which can be devastating for patients. Traditional assessment methods depend solely on the clinician's judgment and are therefore time consuming and subjective. Hence, a quantitative assessment system is invaluable for physicians beginning the rehabilitation process, and producing a reliable and robust method remains challenging and is still underway. We introduce a novel approach for the quantitative assessment of facial paralysis that tackles the classification problem of FP type and degree of severity. Specifically, a novel method of quantitative assessment is presented: an algorithm that extracts the human iris and detects facial landmarks, and a hybrid approach combining rule-based and machine learning algorithms to analyze and prognosticate facial paralysis using the captured images. A method combining the optimized Daugman's algorithm and the Localized Active Contour (LAC) model is proposed to efficiently extract the iris and the facial landmarks or key points. To improve the performance of LAC, appropriate parameters of the initial evolving curve for facial feature segmentation are automatically selected. The symmetry score is measured by the ratio between features extracted from the two sides of the face. Hybrid classifiers (i.e. rule-based with regularized logistic regression) were employed for discriminating healthy and unhealthy subjects, for FP type classification, and for facial paralysis grading based on the House-Brackmann (H-B) scale. Quantitative analysis was performed to evaluate the performance of the proposed approach. Experiments show that the proposed method demonstrates its efficiency. Facial movement feature extraction on facial images based on iris segmentation and LAC-based key point detection along with a hybrid classifier provides a more efficient way of addressing the classification problem of facial palsy type and degree
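    A loose sketch of the symmetry-ratio plus regularized logistic regression idea is given below; the left/right measurements are synthetic, the labels are constructed for the toy example, and the House-Brackmann grading step is omitted.

    ```python
    # Left/right symmetry ratios as features for an L2-regularized logistic regression.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(7)
    n_subjects, n_features = 120, 6                    # e.g. eyebrow, eye, mouth measures
    healthy = np.arange(n_subjects) < n_subjects // 2  # first half healthy (toy labels)
    left = rng.uniform(1.0, 2.0, size=(n_subjects, n_features))
    asym = np.where(healthy[:, None],
                    rng.uniform(0.9, 1.0, size=(n_subjects, n_features)),   # near-symmetric
                    rng.uniform(0.5, 0.8, size=(n_subjects, n_features)))   # asymmetric
    right = left * asym

    symmetry = right / left                            # ratio of paired measurements
    clf = LogisticRegression(C=1.0, max_iter=1000)     # L2-regularized by default
    print(cross_val_score(clf, symmetry, healthy, cv=5).mean())
    ```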

  20. An optimized ERP brain-computer interface based on facial expression changes.

    PubMed

    Jin, Jing; Daly, Ian; Zhang, Yu; Wang, Xingyu; Cichocki, Andrzej

    2014-06-01

    Interferences from spatially adjacent non-target stimuli are known to evoke event-related potentials (ERPs) during non-target flashes and, therefore, lead to false positives. This phenomenon was commonly seen in visual attention-based brain-computer interfaces (BCIs) using conspicuous stimuli and is known to adversely affect the performance of BCI systems. Although users try to focus on the target stimulus, they cannot help but be affected by conspicuous changes of the stimuli (such as flashes or presenting images) which were adjacent to the target stimulus. Furthermore, subjects have reported that conspicuous stimuli made them tired and annoyed. In view of this, the aim of this study was to reduce adjacent interference, annoyance and fatigue using a new stimulus presentation pattern based upon facial expression changes. Our goal was not to design a new pattern which could evoke larger ERPs than the face pattern, but to design a new pattern which could reduce adjacent interference, annoyance and fatigue, and evoke ERPs as good as those observed during the face pattern. Positive facial expressions could be changed to negative facial expressions by minor changes to the original facial image. Although the changes are minor, the contrast is big enough to evoke strong ERPs. In this paper, a facial expression change pattern between positive and negative facial expressions was used to attempt to minimize interference effects. This was compared against two different conditions, a shuffled pattern containing the same shapes and colours as the facial expression change pattern, but without the semantic content associated with a change in expression, and a face versus no face pattern. Comparisons were made in terms of classification accuracy and information transfer rate as well as user supplied subjective measures. The results showed that interferences from adjacent stimuli, annoyance and the fatigue experienced by the subjects could be reduced significantly (p < 0.05) by
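    The information transfer rate mentioned here is commonly computed with Wolpaw's formula; the helper below is a generic implementation of that formula and is not necessarily the exact definition used in the study.

    ```python
    # Wolpaw information transfer rate: bits per minute for an N-class BCI.
    import math

    def wolpaw_itr(n_targets: int, accuracy: float, seconds_per_selection: float) -> float:
        """ITR in bits/min given N targets, accuracy P, and selection time T."""
        n, p = n_targets, accuracy
        if p >= 1.0:
            bits = math.log2(n)
        elif p <= 0.0:
            bits = 0.0
        else:
            bits = (math.log2(n) + p * math.log2(p)
                    + (1 - p) * math.log2((1 - p) / (n - 1)))
        return bits * (60.0 / seconds_per_selection)

    print(round(wolpaw_itr(12, 0.90, 4.8), 2))  # example: 12 targets, 90% accuracy
    ```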

  1. An optimized ERP brain-computer interface based on facial expression changes

    NASA Astrophysics Data System (ADS)

    Jin, Jing; Daly, Ian; Zhang, Yu; Wang, Xingyu; Cichocki, Andrzej

    2014-06-01

    Objective. Interferences from spatially adjacent non-target stimuli are known to evoke event-related potentials (ERPs) during non-target flashes and, therefore, lead to false positives. This phenomenon was commonly seen in visual attention-based brain-computer interfaces (BCIs) using conspicuous stimuli and is known to adversely affect the performance of BCI systems. Although users try to focus on the target stimulus, they cannot help but be affected by conspicuous changes of the stimuli (such as flashes or presenting images) which were adjacent to the target stimulus. Furthermore, subjects have reported that conspicuous stimuli made them tired and annoyed. In view of this, the aim of this study was to reduce adjacent interference, annoyance and fatigue using a new stimulus presentation pattern based upon facial expression changes. Our goal was not to design a new pattern which could evoke larger ERPs than the face pattern, but to design a new pattern which could reduce adjacent interference, annoyance and fatigue, and evoke ERPs as good as those observed during the face pattern. Approach. Positive facial expressions could be changed to negative facial expressions by minor changes to the original facial image. Although the changes are minor, the contrast is big enough to evoke strong ERPs. In this paper, a facial expression change pattern between positive and negative facial expressions was used to attempt to minimize interference effects. This was compared against two different conditions, a shuffled pattern containing the same shapes and colours as the facial expression change pattern, but without the semantic content associated with a change in expression, and a face versus no face pattern. Comparisons were made in terms of classification accuracy and information transfer rate as well as user supplied subjective measures. Main results. The results showed that interferences from adjacent stimuli, annoyance and the fatigue experienced by the subjects could be

  2. Facial Scar Revision: Understanding Facial Scar Treatment

    MedlinePlus

    Trust your face to a facial plastic surgeon. A facial plastic surgeon has many options for treating and improving facial scars, including those near features of the face like the eyes or lips.

  3. Human Facial Shape and Size Heritability and Genetic Correlations.

    PubMed

    Cole, Joanne B; Manyama, Mange; Larson, Jacinda R; Liberton, Denise K; Ferrara, Tracey M; Riccardi, Sheri L; Li, Mao; Mio, Washington; Klein, Ophir D; Santorico, Stephanie A; Hallgrímsson, Benedikt; Spritz, Richard A

    2017-02-01

    The human face is an array of variable physical features that together make each of us unique and distinguishable. Striking familial facial similarities underscore a genetic component, but little is known of the genes that underlie facial shape differences. Numerous studies have estimated facial shape heritability using various methods. Here, we used advanced three-dimensional imaging technology and quantitative human genetics analysis to estimate narrow-sense heritability, heritability explained by common genetic variation, and pairwise genetic correlations of 38 measures of facial shape and size in normal African Bantu children from Tanzania. Specifically, we fit a linear mixed model of genetic relatedness between close and distant relatives to jointly estimate variance components that correspond to heritability explained by genome-wide common genetic variation and variance explained by uncaptured genetic variation, the sum representing total narrow-sense heritability. Our significant estimates for narrow-sense heritability of specific facial traits range from 28 to 67%, with horizontal measures being slightly more heritable than vertical or depth measures. Furthermore, for over half of facial traits, >90% of narrow-sense heritability can be explained by common genetic variation. We also find high absolute genetic correlation between most traits, indicating large overlap in underlying genetic loci. Not surprisingly, traits measured in the same physical orientation (i.e., both horizontal or both vertical) have high positive genetic correlations, whereas traits in opposite orientations have high negative correlations. The complex genetic architecture of facial shape informs our understanding of the intricate relationships among different facial features as well as overall facial development. Copyright © 2017 by the Genetics Society of America.
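    In this variance-component framing, narrow-sense heritability is the share of phenotypic variance attributable to the summed additive genetic components; the toy helper below writes that ratio out explicitly, with the example numbers being arbitrary and not taken from the study.

    ```python
    # Narrow-sense heritability from (assumed) variance-component estimates:
    # h^2 = (sigma^2_common + sigma^2_uncaptured) / total phenotypic variance.
    def narrow_sense_h2(var_common: float, var_uncaptured: float, var_env: float) -> float:
        genetic = var_common + var_uncaptured
        return genetic / (genetic + var_env)

    print(narrow_sense_h2(0.30, 0.15, 0.55))  # arbitrary example partition -> 0.45
    ```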

  4. An Automatic Diagnosis Method of Facial Acne Vulgaris Based on Convolutional Neural Network.

    PubMed

    Shen, Xiaolei; Zhang, Jiachi; Yan, Chenjun; Zhou, Hong

    2018-04-11

    In this paper, we present a new automatic diagnosis method for facial acne vulgaris based on convolutional neural networks (CNNs), designed to overcome the shortcoming of previous methods, namely their inability to classify enough types of acne vulgaris. The core of our method is to extract features of images based on CNNs and achieve classification by classifiers. A binary classifier of skin-and-non-skin is used to detect the skin area, and a seven-class classifier is used to achieve the classification of facial acne vulgaris and healthy skin. In the experiments, we compare the effectiveness of our own CNN and the VGG16 neural network, which is pre-trained on the ImageNet data set. We use a ROC curve to evaluate the performance of the binary classifier and a normalized confusion matrix to evaluate the performance of the seven-class classifier. The results of our experiments show that the pre-trained VGG16 neural network is effective in extracting features from facial acne vulgaris images, and that these features are very useful for the follow-up classifiers. Finally, we apply the classifiers based on the pre-trained VGG16 neural network to assist doctors in facial acne vulgaris diagnosis.
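    A minimal sketch of using a pre-trained VGG16 network as a fixed feature extractor for skin images, broadly in line with the approach described (TensorFlow/Keras assumed); the downstream binary and seven-class classifiers are omitted.

    ```python
    # Extract 512-dimensional VGG16 features from (random stand-in) image patches.
    import numpy as np
    from tensorflow.keras.applications import VGG16
    from tensorflow.keras.applications.vgg16 import preprocess_input

    base = VGG16(weights="imagenet", include_top=False, pooling="avg",
                 input_shape=(224, 224, 3))           # global-average-pooled features

    images = np.random.rand(4, 224, 224, 3) * 255.0   # stand-in for facial skin patches
    features = base.predict(preprocess_input(images), verbose=0)
    print(features.shape)                              # (4, 512)
    ```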

  5. Assessment of facial golden proportions among young Japanese women.

    PubMed

    Mizumoto, Yasushi; Deguchi, Toshio; Fong, Kelvin W C

    2009-08-01

    Facial proportions are of interest in orthodontics. The null hypothesis is that there is no difference in golden proportions of the soft-tissue facial balance between Japanese and white women. Facial proportions were assessed by examining photographs of 3 groups of Asian women: group 1, 30 young adult patients with a skeletal Class 1 occlusion; group 2, 30 models; and group 3, 14 popular actresses. Photographic prints or slides were digitized for image analysis. Group 1 subjects had standardized photos taken as part of their treatment. Photos of the subjects in groups 2 and 3 were collected from magazines and other sources and were of varying sizes; therefore, the output image size was not considered. The range of measurement errors was 0.17% to 1.16%. ANOVA was selected because the data set was normally distributed with homogeneous variances. The subjects in the 3 groups showed good total facial proportions. The proportions of the face-height components in group 1 were similar to the golden proportion, which indicated a longer lower facial height and a shorter nose. Group 2 differed from the golden proportion, with a short lower facial height. Group 3 had golden proportions in all 7 measurements. The proportion of the face width deviated from the golden proportion, indicating a small mouth or wide-set eyes in groups 1 and 2. The null hypothesis was supported for the facial height components in the group 3 actresses, whereas some measurements in groups 1 and 2 deviated from the golden proportion (ratio).
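    The golden-proportion comparisons above amount to checking how far a measured ratio of two facial distances lies from φ = (1 + √5)/2 ≈ 1.618. A minimal illustrative computation follows; the two example distances are hypothetical, not study data.

    ```python
    # Compare a facial ratio against the golden proportion (phi).
    # The example distances are hypothetical, not measurements from the study.
    PHI = (1 + 5 ** 0.5) / 2  # ≈ 1.618

    def deviation_from_phi(longer_mm: float, shorter_mm: float):
        """Return the measured ratio and its percentage deviation from phi."""
        ratio = longer_mm / shorter_mm
        return ratio, 100.0 * (ratio - PHI) / PHI

    ratio, dev = deviation_from_phi(121.0, 76.0)  # hypothetical distances in mm
    print(f"ratio = {ratio:.3f}, deviation from phi = {dev:+.1f}%")
    ```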

  6. Recognizing Action Units for Facial Expression Analysis

    PubMed Central

    Tian, Ying-li; Kanade, Takeo; Cohn, Jeffrey F.

    2010-01-01

    Most automatic expression analysis systems attempt to recognize a small set of prototypic expressions, such as happiness, anger, surprise, and fear. Such prototypic expressions, however, occur rather infrequently. Human emotions and intentions are more often communicated by changes in one or a few discrete facial features. In this paper, we develop an Automatic Face Analysis (AFA) system to analyze facial expressions based on both permanent facial features (brows, eyes, mouth) and transient facial features (deepening of facial furrows) in a nearly frontal-view face image sequence. The AFA system recognizes fine-grained changes in facial expression into action units (AUs) of the Facial Action Coding System (FACS), instead of a few prototypic expressions. Multistate face and facial component models are proposed for tracking and modeling the various facial features, including lips, eyes, brows, cheeks, and furrows. During tracking, detailed parametric descriptions of the facial features are extracted. With these parameters as the inputs, a group of action units (neutral expression, six upper face AUs and 10 lower face AUs) are recognized whether they occur alone or in combinations. The system has achieved average recognition rates of 96.4 percent (95.4 percent if neutral expressions are excluded) for upper face AUs and 96.7 percent (95.6 percent with neutral expressions excluded) for lower face AUs. The generalizability of the system has been tested by using independent image databases collected and FACS-coded for ground-truth by different research teams. PMID:25210210

  7. 15 years of research on Oral-Facial-Digital syndromes: from 1 to 16 causal genes

    PubMed Central

    Bruel, Ange-Line; Franco, Brunella; Duffourd, Yannis; Thevenon, Julien; Jego, Laurence; Lopez, Estelle; Deleuze, Jean-François; Doummar, Diane; Giles, Rachel H.; Johnson, Colin A.; Huynen, Martijn A.; Chevrier, Véronique; Burglen, Lydie; Morleo, Manuela; Desguerres, Isabelle; Pierquin, Geneviève; Doray, Bérénice; Gilbert-Dussardier, Brigitte; Reversade, Bruno; Steichen-Gersdorf, Elisabeth; Baumann, Clarisse; Panigrahi, Inusha; Fargeot-Espaliat, Anne; Dieux, Anne; David, Albert; Goldenberg, Alice; Bongers, Ernie; Gaillard, Dominique; Argente, Jesús; Aral, Bernard; Gigot, Nadège; St-Onge, Judith; Birnbaum, Daniel; Phadke, Shubha R.; Cormier-Daire, Valérie; Eguether, Thibaut; Pazour, Gregory J.; Herranz-Pérez, Vicente; Lee, Jaclyn S.; Pasquier, Laurent; Loget, Philippe; Saunier, Sophie; Mégarbané, André; Rosnet, Olivier; Leroux, Michel R.; Wallingford, John B.; Blacque, Oliver E.; Nachury, Maxence V.; Attie-Bitach, Tania; Rivière, Jean-Baptiste; Faivre, Laurence; Thauvin-Robinet, Christel

    2017-01-01

    Oral-facial-digital syndromes (OFDS) comprise a group of rare genetic disorders characterized by facial, oral and digital abnormalities associated with a wide range of additional features (polycystic kidney disease, cerebral malformations and several others), delineating a growing list of OFD subtypes. The most frequent, OFD type I, is caused by a heterozygous mutation in the OFD1 gene encoding a centrosomal protein. The wide clinical heterogeneity of OFDS suggests the involvement of other ciliary genes. For 15 years, we have aimed to identify the molecular bases of OFDS. This effort has been greatly helped by the recent development of whole exome sequencing (WES). Here, we present all our published and unpublished results for WES in 24 OFDS cases. We identified causal variants in five new genes (C2CD3, TMEM107, INTU, KIAA0753, IFT57) and extended the clinical spectrum of four genes implicated in other ciliopathies (C5orf42, TMEM138, TMEM231, WDPCP) to OFDS. Mutations were also detected in two genes previously implicated in OFDS. Functional studies revealed the involvement of centriole elongation, transition zone and intraflagellar transport defects in OFDS, thus characterizing three ciliary protein modules: the KIAA0753-FOPNL-OFD1 complex, a regulator of centriole elongation; the MKS module, a major component of the transition zone; and the CPLANE complex, necessary for IFT-A assembly. OFDS now appear to be a distinct subgroup of ciliopathies with wide heterogeneity, which makes the initial classification obsolete. A clinical classification restricted to the three frequent/well-delineated subtypes could be proposed, and for patients who do not fit one of these three main subtypes, a further classification could be based on the genotype. PMID:28289185

  8. Common component classification: what can we learn from machine learning?

    PubMed

    Anderson, Ariana; Labus, Jennifer S; Vianna, Eduardo P; Mayer, Emeran A; Cohen, Mark S

    2011-05-15

    Machine learning methods have been applied to classifying fMRI scans by studying locations in the brain that exhibit temporal intensity variation between groups, frequently reporting classification accuracy of 90% or better. Although empirical results are quite favorable, one might doubt the ability of classification methods to withstand changes in task ordering and the reproducibility of activation patterns over runs, and question how much of the classification machines' power is due to artifactual noise versus genuine neurological signal. To examine the true strength and power of machine learning classifiers we create and then deconstruct a classifier to examine its sensitivity to physiological noise, task reordering, and across-scan classification ability. The models are trained and tested both within and across runs to assess stability and reproducibility across conditions. We demonstrate the use of independent components analysis for both feature extraction and artifact removal and show that removal of such artifacts can reduce predictive accuracy even when data has been cleaned in the preprocessing stages. We demonstrate how mistakes in the feature selection process can cause the cross-validation error seen in publication to be a biased estimate of the testing error seen in practice and measure this bias by purposefully making flawed models. We discuss other ways to introduce bias and the statistical assumptions lying behind the data and model themselves. Finally we discuss the complications in drawing inference from the smaller sample sizes typically seen in fMRI studies, the effects of small or unbalanced samples on the Type 1 and Type 2 error rates, and how publication bias can give a false confidence of the power of such methods. Collectively this work identifies challenges specific to fMRI classification and methods affecting the stability of models. Copyright © 2010 Elsevier Inc. All rights reserved.
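    The cross-validation pitfall described above (feature selection performed on the full data set before cross-validation) can be reproduced on synthetic data. The sketch below is a generic scikit-learn illustration, not the authors' fMRI pipeline: on pure-noise data, the "flawed" procedure reports inflated accuracy, while selection nested inside each fold stays near chance.

    ```python
    # Illustration of the biased cross-validation effect discussed above.
    # Pure-noise data: honest accuracy should hover near chance (0.5).
    import numpy as np
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(0)
    X = rng.standard_normal((40, 5000))   # 40 "scans", 5000 noise features
    y = np.repeat([0, 1], 20)

    # Flawed: screen features on ALL the data, then cross-validate.
    X_sel = SelectKBest(f_classif, k=20).fit_transform(X, y)
    biased = cross_val_score(LinearSVC(dual=False), X_sel, y, cv=5).mean()

    # Nested: feature screening is refit inside every training fold.
    pipe = make_pipeline(SelectKBest(f_classif, k=20), LinearSVC(dual=False))
    honest = cross_val_score(pipe, X, y, cv=5).mean()

    print(f"biased CV accuracy ~ {biased:.2f}; nested CV accuracy ~ {honest:.2f}")
    ```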

  9. [Effects of a Facial Muscle Exercise Program including Facial Massage for Patients with Facial Palsy].

    PubMed

    Choi, Hyoung Ju; Shin, Sung Hee

    2016-08-01

    The purpose of this study was to examine the effects of a facial muscle exercise program including facial massage on facial muscle function, subjective symptoms related to paralysis, and depression in patients with facial palsy. This was a quasi-experimental study with a non-equivalent control group, non-synchronized design. Participants were 70 patients with facial palsy (experimental group 35, control group 35). For the experimental group, the facial muscle exercise program including facial massage was performed 20 minutes a day, 3 times a week, for two weeks. Data were analyzed using descriptive statistics, χ²-test, Fisher's exact test and independent sample t-test with the SPSS 18.0 program. Facial muscle function of the experimental group improved significantly compared with the control group. There was no significant difference in symptoms related to paralysis between the experimental group and the control group. The level of depression in the experimental group was significantly lower than in the control group. Results suggest that a facial muscle exercise program including facial massage is an effective nursing intervention to improve facial muscle function and decrease depression in patients with facial palsy.

  10. Facial expressions of emotion and psychopathology in adolescent boys.

    PubMed

    Keltner, D; Moffitt, T E; Stouthamer-Loeber, M

    1995-11-01

    On the basis of the widespread belief that emotions underpin psychological adjustment, the authors tested 3 predicted relations between externalizing problems and anger, internalizing problems and fear and sadness, and the absence of externalizing problems and social-moral emotion (embarrassment). Seventy adolescent boys were classified into 1 of 4 comparison groups on the basis of teacher reports using a behavior problem checklist: internalizers, externalizers, mixed (both internalizers and externalizers), and nondisordered boys. The authors coded the facial expressions of emotion shown by the boys during a structured social interaction. Results supported the 3 hypotheses: (a) Externalizing adolescents showed increased facial expressions of anger, (b) on 1 measure internalizing adolescents showed increased facial expressions of fear, and (c) the absence of externalizing problems (or nondisordered classification) was related to increased displays of embarrassment. Discussion focused on the relations of these findings to hypotheses concerning the role of impulse control in antisocial behavior.

  11. Facial transplantation: A concise update

    PubMed Central

    Barrera-Pulido, Fernando; Gomez-Cia, Tomas; Sicilia-Castro, Domingo; Garcia-Perla-Garcia, Alberto; Gacto-Sanchez, Purificacion; Hernandez-Guisado, Jose-Maria; Lagares-Borrego, Araceli; Narros-Gimenez, Rocio; Gonzalez-Padilla, Juan D.

    2013-01-01

    Objectives: Update on the clinical results obtained by the first facial transplantation teams worldwide, and review of the literature concerning the main surgical, immunological, ethical, and follow-up aspects described in facial transplant patients. Study design: MEDLINE search of articles published on “face transplantation” until March 2012. Results: Eighteen clinical cases were studied. The mean patient age was 37.5 years, with a higher prevalence of men. The main surgical indication was gunshot injury (6 patients). All patients had previously undergone multiple conventional surgical reconstructive procedures, which had failed. Altogether, 8 transplant teams from 4 countries participated. Thirteen partial and 5 full face transplantations have been performed. Allografts varied according to the facial anatomical components and the amount of skin, muscle, bone, and other tissues included, though all were grafted successfully and remained viable without significant postoperative surgical complications. The patient with the longest follow-up had been followed for 5 years. Two patients died 2 and 27 months after transplantation. Conclusions: Clinical experience has demonstrated the feasibility of facial transplantation as a valuable reconstructive option, but it is still considered an experimental procedure with unresolved issues. Results show that, from a clinical, technical, and immunological standpoint, facial transplantation has achieved functional, aesthetic, and social rehabilitation in patients with severe facial disfigurement. Key words: Face transplantation, composite tissue transplantation, face allograft, facial reconstruction, outcomes and complications of face transplantation. PMID:23229268

  12. Folliculotropism in pigmented facial macules: Differential diagnosis with reflectance confocal microscopy.

    PubMed

    Persechino, Flavia; De Carvalho, Nathalie; Ciardo, Silvana; De Pace, Barbara; Casari, Alice; Chester, Johanna; Kaleci, Shaniko; Stanganelli, Ignazio; Longo, Caterina; Farnetani, Francesca; Pellacani, Giovanni

    2018-03-01

    Pigmented facial macules are common on sun-damaged skin. The diagnosis of early stage lentigo maligna (LM) and lentigo maligna melanoma (LMM) is challenging. Reflectance confocal microscopy (RCM) has been proven to increase the diagnostic accuracy of facial lesions. A total of 154 pigmented facial macules, retrospectively collected, were evaluated for the presence of previously described RCM features and new parameters depicting aspects of the follicle. Melanocytic nests, roundish pagetoid cells, follicular infiltration, bulgings from the follicles, and many bright dendrites with infiltration of the hair follicle (ie, folliculotropism) were found to be indicative of LM/LMM compared with non-melanocytic skin neoplasms (NMSNs), with an overall sensitivity of 96% and specificity of 83%. Among NMSNs, solar lentigo and lichen planus-like keratosis were more readily distinguishable from LM/LMM because they usually lack malignant features and present characteristic diagnostic parameters, such as an epidermal cobblestone pattern and polycyclic papillary contours. By contrast, distinguishing pigmented actinic keratosis (PAK) was more difficult, requiring evaluation of hair follicle infiltration and bulging structures, because a few bright dendrites were frequently observed in the epidermis but predominantly did not infiltrate the hair follicle (estimated specificity for PAK 53%). A detailed evaluation of the components of folliculotropism may help to improve diagnostic accuracy. The classification of the type, distribution and amount of cells, and the presence of bulging around the follicles, appears to be an important tool for differentiating PAK from LM/LMM on RCM analysis. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
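    For orientation, the sensitivity and specificity figures quoted above follow the standard confusion-matrix definitions (generic formulas, not study-specific ones):

    ```latex
    \text{sensitivity} = \frac{TP}{TP + FN}, \qquad
    \text{specificity} = \frac{TN}{TN + FP}
    ```

    so a sensitivity of 96% means that 96% of LM/LMM lesions showed the RCM malignancy features, and a specificity of 83% means that 83% of non-melanocytic neoplasms were correctly excluded.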

  13. Discriminability effect on Garner interference: evidence from recognition of facial identity and expression

    PubMed Central

    Wang, Yamin; Fu, Xiaolan; Johnston, Robert A.; Yan, Zheng

    2013-01-01

    Using Garner’s speeded classification task, existing studies have demonstrated an asymmetric interference in the recognition of facial identity and facial expression: expression appears unable to interfere with identity recognition. However, the discriminability of identity and expression, a potential confounding variable, had not been carefully examined in existing studies. In the current work, we manipulated the discriminability of identity and expression by matching facial shape (long or round) for identity and matching the mouth (opened or closed) for facial expression. Garner interference was found either from identity to expression (Experiment 1) or from expression to identity (Experiment 2). Interference was also found in both directions (Experiment 3) or in neither direction (Experiment 4). The results support the view that Garner interference tends to occur under conditions of low discriminability of the relevant dimension, regardless of facial property. Our findings indicate that Garner interference is not necessarily related to interdependent processing in the recognition of facial identity and expression. The findings also suggest that discriminability, as a mediating factor, should be carefully controlled in future research. PMID:24391609

  14. Facial duplication: case, review, and embryogenesis.

    PubMed

    Barr, M

    1982-04-01

    The craniofacial anatomy of an infant with facial duplication is described. There were four eyes, two noses, two maxillae, and one mandible. Anterior to the single pituitary the brain was duplicated, and there was bilateral arhinencephaly. Portions of the brain were extruded into a large frontal encephalocele. Cases of symmetrical facial duplication reported in the literature range from two complete faces on a single head (diprosopus) to simple nasal duplication. The variety of patterns of duplication suggests that the doubling of facial components arises in several different ways: forking of the notochord, duplication of the prosencephalon, duplication of the olfactory placodes, and duplication of maxillary and/or mandibular growth centers around the margins of the stomodeal plate. Among reported cases, the female:male ratio is 2:1.

  15. Automatic recognition of emotions from facial expressions

    NASA Astrophysics Data System (ADS)

    Xue, Henry; Gertner, Izidor

    2014-06-01

    In the human-computer interaction (HCI) process it is desirable to have an artificial intelligent (AI) system that can identify and categorize human emotions from facial expressions. Such systems can be used in security, in entertainment industries, and also to study visual perception, social interactions and disorders (e.g. schizophrenia and autism). In this work we survey and compare the performance of different feature extraction algorithms and classification schemes. We introduce a faster feature extraction method that resizes and applies a set of filters to the data images without sacrificing the accuracy. In addition, we have enhanced SVM to multiple dimensions while retaining the high accuracy rate of SVM. The algorithms were tested using the Japanese Female Facial Expression (JAFFE) Database and the Database of Faces (AT&T Faces).

  16. Wait, are you sad or angry? Large exposure time differences required for the categorization of facial expressions of emotion

    PubMed Central

    Du, Shichuan; Martinez, Aleix M.

    2013-01-01

    Abstract Facial expressions of emotion are essential components of human behavior, yet little is known about the hierarchical organization of their cognitive analysis. We study the minimum exposure time needed to successfully classify the six classical facial expressions of emotion (joy, surprise, sadness, anger, disgust, fear) plus neutral as seen at different image resolutions (240 × 160 to 15 × 10 pixels). Our results suggest a consistent hierarchical analysis of these facial expressions regardless of the resolution of the stimuli. Happiness and surprise can be recognized after very short exposure times (10–20 ms), even at low resolutions. Fear and anger are recognized the slowest (100–250 ms), even in high-resolution images, suggesting a later computation. Sadness and disgust are recognized in between (70–200 ms). The minimum exposure time required for successful classification of each facial expression correlates with the ability of a human subject to identify it correctly at low resolutions. These results suggest a fast, early computation of expressions represented mostly by low spatial frequencies or global configural cues and a later, slower process for those categories requiring a more fine-grained analysis of the image. We also demonstrate that those expressions that are mostly visible in higher-resolution images are not recognized as accurately. We summarize implications for current computational models. PMID:23509409

  17. Overview of Facial Plastic Surgery and Current Developments

    PubMed Central

    Chuang, Jessica; Barnes, Christian; Wong, Brian J. F.

    2016-01-01

    Facial plastic surgery is a multidisciplinary specialty largely driven by otolaryngology but includes oral maxillary surgery, dermatology, ophthalmology, and plastic surgery. It encompasses both reconstructive and cosmetic components. The scope of practice for facial plastic surgeons in the United States may include rhinoplasty, browlifts, blepharoplasty, facelifts, microvascular reconstruction of the head and neck, craniomaxillofacial trauma reconstruction, and correction of defects in the face after skin cancer resection. Facial plastic surgery also encompasses the use of injectable fillers, neural modulators (e.g., BOTOX Cosmetic, Allergan Pharmaceuticals, Westport, Ireland), lasers, and other devices aimed at rejuvenating skin. Facial plastic surgery is a constantly evolving field with continuing innovative advances in surgical techniques and cosmetic adjunctive technologies. This article aims to give an overview of the various procedures that encompass the field of facial plastic surgery and to highlight the recent advances and trends in procedures and surgical techniques. PMID:28824978

  18. Image ratio features for facial expression recognition application.

    PubMed

    Song, Mingli; Tao, Dacheng; Liu, Zicheng; Li, Xuelong; Zhou, Mengchu

    2010-06-01

    Video-based facial expression recognition is a challenging problem in computer vision and human-computer interaction. To address this problem, texture features have been extracted and widely used, because they can capture image intensity changes caused by skin deformation. However, existing texture features encounter problems with albedo and lighting variations. To solve both problems, we propose a new texture feature called image ratio features. Compared with previously proposed texture features, e.g., high gradient component features, image ratio features are more robust to albedo and lighting variations. In addition, to further improve facial expression recognition accuracy based on image ratio features, we combine image ratio features with facial animation parameters (FAPs), which describe the geometric motions of facial feature points. The performance evaluation is based on the Carnegie Mellon University Cohn-Kanade database, our own database, and the Japanese Female Facial Expression database. Experimental results show that the proposed image ratio feature is more robust to albedo and lighting variations, and that the combination of image ratio features and FAPs outperforms either feature alone. In addition, we study asymmetric facial expressions based on our own facial expression database and demonstrate the superior performance of our combined expression recognition system.

  19. Emotion categories and dimensions in the facial communication of affect: An integrated approach.

    PubMed

    Mehu, Marc; Scherer, Klaus R

    2015-12-01

    We investigated the role of facial behavior in emotional communication, using both categorical and dimensional approaches. We used a corpus of enacted emotional expressions (GEMEP) in which professional actors are instructed, with the help of scenarios, to communicate a variety of emotional experiences. The results of Study 1 replicated earlier findings showing that only a minority of facial action units are associated with specific emotional categories. Likewise, facial behavior did not show a specific association with particular emotional dimensions. Study 2 showed that facial behavior plays a significant role both in the detection of emotions and in the judgment of their dimensional aspects, such as valence, arousal, dominance, and unpredictability. In addition, a mediation model revealed that the association between facial behavior and recognition of the signaler's emotional intentions is mediated by perceived emotional dimensions. We conclude that, from a production perspective, facial action units convey neither specific emotions nor specific emotional dimensions, but are associated with several emotions and several dimensions. From the perceiver's perspective, facial behavior facilitated both dimensional and categorical judgments, and the former mediated the effect of facial behavior on recognition accuracy. The classification of emotional expressions into discrete categories may, therefore, rely on the perception of more general dimensions such as valence and arousal and, presumably, the underlying appraisals that are inferred from facial movements. © 2015 APA, all rights reserved.

  20. Support vector machine and principal component analysis for microarray data classification

    NASA Astrophysics Data System (ADS)

    Astuti, Widi; Adiwijaya

    2018-03-01

    Cancer is a leading cause of death worldwide, although a significant proportion of cases can be cured if detected early. In recent decades, microarray technology has taken an important role in the diagnosis of cancer. Using data mining techniques, microarray data classification can improve the accuracy of cancer diagnosis compared with traditional techniques. Microarray data are characterized by small sample sizes but very high dimensionality. This poses a challenge for researchers: to provide solutions for microarray data classification with high performance in both accuracy and running time. This research proposes the use of Principal Component Analysis (PCA) as a dimension reduction method together with a Support Vector Machine (SVM), tuned over several kernel functions, as the classifier for microarray data classification. The proposed scheme was applied to seven data sets using 5-fold cross-validation, and evaluation and analysis were conducted in terms of both accuracy and running time. The results showed that the scheme obtained 100% accuracy for the ovarian and lung cancer data when the linear and cubic kernel functions were used. In terms of running time, PCA greatly reduced the running time for every data set.
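    The PCA-then-SVM scheme with 5-fold cross-validation described above can be sketched in a few lines of scikit-learn. The snippet below is an illustration on synthetic data; the number of retained components, the kernels compared, and the data generator are assumptions, not the study's exact configuration.

    ```python
    # Sketch of PCA dimension reduction followed by an SVM classifier,
    # evaluated with 5-fold cross-validation, as described above.
    # Synthetic data stand in for microarray expression matrices.
    from sklearn.datasets import make_classification
    from sklearn.decomposition import PCA
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=100, n_features=2000,
                               n_informative=30, random_state=0)

    for kernel in ["linear", "poly", "rbf"]:
        pipe = make_pipeline(PCA(n_components=30),
                             SVC(kernel=kernel, degree=3))  # degree used only by 'poly' (cubic)
        acc = cross_val_score(pipe, X, y, cv=5).mean()
        print(f"{kernel:>6} kernel: mean 5-fold accuracy = {acc:.3f}")
    ```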

  1. Neural Processing of Facial Identity and Emotion in Infants at High-Risk for Autism Spectrum Disorders

    PubMed Central

    Fox, Sharon E.; Wagner, Jennifer B.; Shrock, Christine L.; Tager-Flusberg, Helen; Nelson, Charles A.

    2013-01-01

    Deficits in face processing and social impairment are core characteristics of autism spectrum disorder. The present work examined 7-month-old infants at high-risk for developing autism and typically developing controls at low-risk, using a face perception task designed to differentiate between the effects of face identity and facial emotions on neural response using functional Near-Infrared Spectroscopy. In addition, we employed independent component analysis, as well as a novel method of condition-related component selection and classification to identify group differences in hemodynamic waveforms and response distributions associated with face and emotion processing. The results indicate similarities of waveforms, but differences in the magnitude, spatial distribution, and timing of responses between groups. These early differences in local cortical regions and the hemodynamic response may, in turn, contribute to differences in patterns of functional connectivity. PMID:23576966

  2. [Facial palsy].

    PubMed

    Cavoy, R

    2013-09-01

    Facial palsy is a daily challenge for clinicians. Determining whether facial nerve palsy is peripheral or central is a key step in the diagnosis. Central nervous system lesions can cause facial palsy that may be easily differentiated from peripheral palsy. The next question is whether a peripheral facial paralysis is idiopathic or symptomatic; a good knowledge of the anatomy of the facial nerve is helpful here. A structured approach is given to identify additional features that distinguish symptomatic facial palsy from the idiopathic form. The most common cause of peripheral facial palsy is the idiopathic form, or Bell's palsy, which remains a diagnosis of exclusion. The most common cause of symptomatic peripheral facial palsy is Ramsay-Hunt syndrome. Early identification of symptomatic facial palsy is important because of its often worse outcome and different management. The prognosis of Bell's palsy is on the whole favorable and is improved with a prompt tapering course of prednisone. In Ramsay-Hunt syndrome, antiviral therapy is added to prednisone. We also discuss current treatment recommendations and review the short- and long-term complications of peripheral facial palsy.

  3. Sound-induced facial synkinesis following facial nerve paralysis.

    PubMed

    Ma, Ming-San; van der Hoeven, Johannes H; Nicolai, Jean-Philippe A; Meek, Marcel F

    2009-08-01

    Facial synkinesis (or synkinesia) (FS) occurs frequently after paresis or paralysis of the facial nerve and is in most cases due to aberrant regeneration of (branches of) the facial nerve. Patients suffer from inappropriate and involuntary synchronous facial muscle contractions. Here we describe two cases of sound-induced facial synkinesis (SFS) after facial nerve injury. As far as we know, this phenomenon has not been described in the English literature before. Patient A presented with right hemifacial palsy after lesion of the facial nerve due to skull base fracture. He reported involuntary muscle activity at the right corner of the mouth, specifically on hearing ringing keys. Patient B suffered from left hemifacial palsy following otitis media and developed involuntary muscle contraction in the facial musculature specifically on hearing clapping hands or a trumpet sound. Both patients were evaluated by means of video, audio and EMG analysis. Possible mechanisms in the pathophysiology of SFS are postulated and therapeutic options are discussed.

  4. Facial disability index (FDI): Adaptation to Spanish, reliability and validity

    PubMed Central

    Gonzalez-Cardero, Eduardo; Cayuela, Aurelio; Acosta-Feria, Manuel; Gutierrez-Perez, Jose-Luis

    2012-01-01

    Objectives: To adapt to Spanish the facial disability index (FDI) described by VanSwearingen and Brach in 1995 and to assess its reliability and validity in patients with facial nerve paresis after parotidectomy. Study Design: The present study was conducted in two different stages: a) cross-cultural adaptation of the questionnaire and b) cross-sectional study of a control group of 79 Spanish-speaking patients who suffered facial paresis after superficial parotidectomy with facial nerve preservation. The cross-cultural adaptation process comprised the following stages: (I) initial translation, (II) synthesis of the translated document, (III) retro-translation, (IV) review by a board of experts, (V) pilot study of the pre-final draft and (VI) analysis of the pilot study and final draft. Results: The reliability and internal consistency of every one of the rating scales included in the FDI (Cronbach’s alpha coefficient) was 0.83 for the complete scale and 0.77 and 0.82 for the physical and the social well-being subscales. The analysis of the factorial validity of the main components of the adapted FDI yielded similar results to the original questionnaire. Bivariate correlations between FDI and House-Brackmann scale were positive. The variance percentage was calculated for all FDI components. Conclusions: The FDI questionnaire is a specific instrument for assessing facial neuromuscular dysfunction which becomes a useful tool in order to determine quality of life in patients with facial nerve paralysis. Spanish adapted FDI is equivalent to the original questionnaire and shows similar reliability and validity. The proven reproducibility, reliability and validity of this questionnaire make it a useful additional tool for evaluating the impact of facial nerve paralysis in Spanish-speaking patients. Key words: Parotidectomy, facial nerve paralysis, facial disability. PMID:22926474
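    The internal-consistency figures reported above (Cronbach's alpha) follow a standard formula that is easy to compute from item-level scores. The sketch below shows that computation on hypothetical data, not the study's questionnaire responses.

    ```python
    # Standard Cronbach's alpha: k/(k-1) * (1 - sum(item variances) / variance of total).
    # Rows = respondents, columns = questionnaire items; the data are hypothetical.
    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1)
        total_var = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

    rng = np.random.default_rng(1)
    latent = rng.normal(size=(79, 1))                  # shared "true" trait per respondent
    scores = latent + 0.8 * rng.normal(size=(79, 10))  # 10 correlated items
    print(f"Cronbach's alpha ~ {cronbach_alpha(scores):.2f}")
    ```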

  5. Classification of high-resolution multispectral satellite remote sensing images using extended morphological attribute profiles and independent component analysis

    NASA Astrophysics Data System (ADS)

    Wu, Yu; Zheng, Lijuan; Xie, Donghai; Zhong, Ruofei

    2017-07-01

    In this study, extended morphological attribute profiles (EAPs) and independent component analysis (ICA) were combined for feature extraction from high-resolution multispectral satellite remote sensing images, and the regularized least squares (RLS) approach with a radial basis function (RBF) kernel was then applied for classification. Based on the two major independent components, geometrical features were extracted using the EAPs method. Three morphological attributes were calculated for each independent component: area, standard deviation, and moment of inertia. The extracted geometrical features were classified using both the RLS approach and the commonly used LIB-SVM implementation of the support vector machine method. Worldview-3 and Chinese GF-2 multispectral images were tested, and the results showed that the features extracted by EAPs and ICA can effectively improve the accuracy of high-resolution multispectral image classification: about 2% higher than EAPs with principal component analysis (PCA), and about 6% higher than APs applied to the original high-resolution multispectral data. Moreover, the results suggest that both the GURLS and LIB-SVM libraries are well suited for multispectral remote sensing image classification. The GURLS library is easy to use, with automatic parameter selection, but its computation time may be longer than that of LIB-SVM. This study should be helpful for classification applications of high-resolution multispectral satellite remote sensing images.
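    The regularized least squares (RLS) classifier with an RBF kernel named above reduces to solving a single linear system in the kernel matrix. The sketch below is a minimal numpy illustration of that classifier family on toy two-class data; the kernel width, regularization constant, and labels are illustrative assumptions (the study itself used the GURLS and LIB-SVM libraries).

    ```python
    # Minimal kernel regularized least squares (RLS) with an RBF kernel:
    # fitting solves (K + lambda*I) alpha = y; prediction is K_new @ alpha.
    import numpy as np

    def rbf_kernel(A, B, gamma):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    def rls_fit(X, y, gamma=0.5, lam=1e-2):
        K = rbf_kernel(X, X, gamma)
        return np.linalg.solve(K + lam * np.eye(len(X)), y)

    def rls_predict(X_train, alpha, X_new, gamma=0.5):
        return rbf_kernel(X_new, X_train, gamma) @ alpha

    # Toy two-class problem with labels in {-1, +1}.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
    y = np.r_[-np.ones(50), np.ones(50)]
    alpha = rls_fit(X, y)
    pred = np.sign(rls_predict(X, alpha, X))
    print(f"training accuracy = {(pred == y).mean():.2f}")
    ```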

  6. Facial dynamics and emotional expressions in facial aging treatments.

    PubMed

    Michaud, Thierry; Gassia, Véronique; Belhaouari, Lakhdar

    2015-03-01

    Facial expressions convey emotions that form the foundation of interpersonal relationships, and many of these emotions promote and regulate our social linkages. Hence, the facial aging symptomatological analysis and the treatment plan must of necessity include knowledge of the facial dynamics and the emotional expressions of the face. This approach aims to more closely meet patients' expectations of natural-looking results, by correcting age-related negative expressions while observing the emotional language of the face. This article will successively describe patients' expectations, the role of facial expressions in relational dynamics, the relationship between facial structures and facial expressions, and the way facial aging mimics negative expressions. Eventually, therapeutic implications for facial aging treatment will be addressed. © 2015 Wiley Periodicals, Inc.

  7. Surface facial modelling and allometry in relation to sexual dimorphism.

    PubMed

    Velemínská, J; Bigoni, L; Krajíček, V; Borský, J; Šmahelová, D; Cagáňová, V; Peterka, M

    2012-04-01

    Sexual dimorphism is responsible for a substantial part of human facial variability, the study of which is essential for many scientific fields ranging from evolution to special biomedical topics. Our aim was to analyse the relationship between facial size variability and the shape variability of sexually dimorphic traits in the young adult Central European population, and to construct average surface models of adult male and female faces. The method of geometric morphometrics allowed not only the identification of dimorphic traits, but also the evaluation of static allometry and the visualisation of sexual facial differences. Facial variability in the studied sample was characterised by a strong relationship between facial size and the shape of sexually dimorphic traits. A large face was associated with facial elongation and vice versa. Regarding shape-based sexually dimorphic traits, a wide, vaulted and high forehead in combination with a narrow and gracile lower face was typical for females. Variability in shape-based dimorphic traits was smaller in females than in males. For female classification, shape-based sexually dimorphic traits are more important, whereas for males the stronger association is with face size. Males generally had a closer inter-orbital distance and a deeper position of the eyes in relation to the facial plane, a larger and wider straight nose and nostrils, and a more massive lower face. Using pseudo-colour maps to provide a detailed schematic representation of the geometrical differences between the sexes, we attempted to clarify the reasons underlying the development of such differences. Copyright © 2012 Elsevier GmbH. All rights reserved.

  8. Age-related differences in morphological characteristics of residual skin surface components collected from the surface of facial skin of healthy male volunteers.

    PubMed

    Chalyk, N E; Bandaletova, T Y; Kyle, N H; Petyaev, I M

    2017-05-01

    Global increase of human longevity results in the emergence of previously ignored ageing-related problems. Skin ageing is a well-known phenomenon, but active search for scientific approaches to its prevention and even skin rejuvenation is a relatively new area. Although the structure and composition of the stratum corneum (SC), the superficial layer of epidermis, is well studied, relatively little is known about the residual skin surface components (RSSC) that overlay the surface of the SC. The aim of this study was to examine morphological features of RSSC samples non-invasively collected from the surface of human facial skin for the presence of age-related changes. Residual skin surface component samples were collected by swabbing from the surface of facial skin of 60 adult male volunteers allocated in two age groups: 34 subjects aged in the range 18-32 years and 26 subjects aged in the range 58-72 years. The collected samples were analysed microscopically: the size of the lipid droplets was measured; desquamated corneocytes and lipid crystals were counted; and microbial presence was assessed semi-quantitatively. Age-related changes were revealed for all studied components of the RSSC. There was a significant (P = 0.0126) decrease in the size of lipid droplets among older men. Likewise, significantly (P = 0.0252) lower numbers of lipid crystals were present in this group. In contrast, microbial presence in the RSSC was significantly (P = 0.0019) increased in the older group. There was also a trend towards more abundant corneocyte desquamation among older men, but the difference has not reached statistical significance (P = 0.0636). Non-invasively collected RSSC samples present an informative material for studying age-related changes on the surface of the SC of human facial skin. The results of this study confirm earlier observations regarding age-associated decline of the efficiency of the epidermal barrier and can be used for testing new approaches to skin

  9. Facial expressions and pair bonds in hylobatids.

    PubMed

    Florkiewicz, Brittany; Skollar, Gabriella; Reichard, Ulrich H

    2018-06-06

    Facial expressions are an important component of primate communication that functions to transmit social information and modulate intentions and motivations. Chimpanzees and macaques, for example, produce a variety of facial expressions when communicating with conspecifics. Hylobatids also produce various facial expressions; however, the origin and function of these facial expressions are still largely unclear. It has been suggested that larger facial expression repertoires may have evolved in the context of social complexity, but this link has yet to be tested at a broader empirical basis. The social complexity hypothesis offers a possible explanation for the evolution of complex communicative signals such as facial expressions, because as the complexity of an individual's social environment increases so does the need for communicative signals. We used an intraspecies, pair-focused study design to test the link between facial expressions and sociality within hylobatids, specifically the strength of pair-bonds. The current study compared 206 hr of video and 103 hr of focal animal data for ten hylobatid pairs from three genera (Nomascus, Hoolock, and Hylobates) living at the Gibbon Conservation Center. Using video footage, we explored 5,969 facial expressions along three dimensions: repertoire use, repertoire breadth, and facial expression synchrony [FES]. We then used focal animal data to compare dimensions of facial expressiveness to pair bond strength and behavioral synchrony. Hylobatids in our study overlapped in only half of their facial expressions (50%) with the only other detailed, quantitative study of hylobatid facial expressions, while 27 facial expressions were uniquely observed in our study animals. Taken together, hylobatids have a large facial expression repertoire of at least 80 unique facial expressions. Contrary to our prediction, facial repertoire composition was not significantly correlated with pair bond strength, rates of territorial synchrony

  10. Recognizing Facial Slivers.

    PubMed

    Gilad-Gutnick, Sharon; Harmatz, Elia Samuel; Tsourides, Kleovoulos; Yovel, Galit; Sinha, Pawan

    2018-07-01

    We report here an unexpectedly robust ability of healthy human individuals (n = 40) to recognize extremely distorted needle-like facial images, challenging the well-entrenched notion that veridical spatial configuration is necessary for extracting facial identity. In face identification tasks of parametrically compressed internal and external features, we found that the sum of performances on each cue falls significantly short of performance on full faces, despite the equal visual information available from both measures (with full faces essentially being a superposition of internal and external features). We hypothesize that this large deficit stems from the use of positional information about how the internal features are positioned relative to the external features. To test this, we systematically changed the relations between internal and external features and found preferential encoding of vertical but not horizontal spatial relationships in facial representations (n = 20). Finally, we employ magnetoencephalography imaging (n = 20) to demonstrate a close mapping between the behavioral psychometric curve and the amplitude of the M250 face-familiarity component, but not the M170 face-sensitive component, of the evoked response field, providing evidence that the M250 can be modulated by faces that are perceptually identifiable, irrespective of extreme distortions to the face's veridical configuration. We theorize that the tolerance to compressive distortions has evolved from the need to recognize faces across varying viewpoints. Our findings help clarify the important, but poorly defined, concept of facial configuration and also enable an association between behavioral performance and previously reported neural correlates of face perception.

  11. Neurobiological mechanisms associated with facial affect recognition deficits after traumatic brain injury.

    PubMed

    Neumann, Dawn; McDonald, Brenna C; West, John; Keiski, Michelle A; Wang, Yang

    2016-06-01

    The neurobiological mechanisms that underlie facial affect recognition deficits after traumatic brain injury (TBI) have not yet been identified. Using functional magnetic resonance imaging (fMRI), study aims were to 1) determine if there are differences in brain activation during facial affect processing in people with TBI who have facial affect recognition impairments (TBI-I) relative to people with TBI and healthy controls who do not have facial affect recognition impairments (TBI-N and HC, respectively); and 2) identify relationships between neural activity and facial affect recognition performance. A facial affect recognition screening task performed outside the scanner was used to determine group classification; TBI patients who performed greater than one standard deviation below normal performance scores were classified as TBI-I, while TBI patients with normal scores were classified as TBI-N. An fMRI facial recognition paradigm was then performed within the 3T environment. Results from 35 participants are reported (TBI-I = 11, TBI-N = 12, and HC = 12). For the fMRI task, TBI-I and TBI-N groups scored significantly lower than the HC group. Blood oxygenation level-dependent (BOLD) signals for facial affect recognition compared to a baseline condition of viewing a scrambled face, revealed lower neural activation in the right fusiform gyrus (FG) in the TBI-I group than the HC group. Right fusiform gyrus activity correlated with accuracy on the facial affect recognition tasks (both within and outside the scanner). Decreased FG activity suggests facial affect recognition deficits after TBI may be the result of impaired holistic face processing. Future directions and clinical implications are discussed.

  12. Facial expression recognition based on weber local descriptor and sparse representation

    NASA Astrophysics Data System (ADS)

    Ouyang, Yan

    2018-03-01

    Automatic facial expression recognition has been one of the research hotspots in computer vision for nearly ten years. During that decade, many state-of-the-art methods were proposed that achieve very high accuracy on face images free of interference. Nowadays, many researchers have begun to tackle the task of classifying facial expression images with corruptions and occlusions, and the Sparse Representation based Classification (SRC) framework has been widely used because it is robust to corruptions and occlusions. This paper therefore proposes a novel facial expression recognition method based on the Weber local descriptor (WLD) and sparse representation. The method has three parts: first, the face images are divided into many local patches; then the WLD histogram of each patch is extracted; finally, all the WLD histogram features are concatenated into a vector and combined with SRC to classify the facial expressions. Experimental results on the Cohn-Kanade database show that the proposed method is robust to occlusions and corruptions.

  13. A multiple maximum scatter difference discriminant criterion for facial feature extraction.

    PubMed

    Song, Fengxi; Zhang, David; Mei, Dayong; Guo, Zhongwei

    2007-12-01

    The maximum scatter difference (MSD) discriminant criterion is a recently presented binary discriminant criterion for pattern classification that utilizes the generalized scatter difference rather than the generalized Rayleigh quotient as a class separability measure, thereby avoiding the singularity problem when addressing small-sample-size problems. MSD classifiers based on this criterion have been quite effective on face-recognition tasks, but as they are binary classifiers, they are not as efficient on large-scale classification tasks. To address the problem, this paper generalizes the classification-oriented binary criterion to its multiple counterpart--the multiple MSD (MMSD) discriminant criterion for facial feature extraction. The MMSD feature-extraction method, which is based on this novel discriminant criterion, is a new subspace-based feature-extraction method. Unlike most other subspace-based feature-extraction methods, the MMSD computes its discriminant vectors from both the range of the between-class scatter matrix and the null space of the within-class scatter matrix. The MMSD is theoretically elegant and easy to calculate. Extensive experimental studies conducted on the benchmark database, FERET, show that the MMSD outperforms state-of-the-art facial feature-extraction methods such as the null space method, direct linear discriminant analysis (LDA), eigenface, Fisherface, and complete LDA.
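    For orientation, the scatter-difference idea behind MSD is commonly written as the criterion below, where w is a projection direction, S_b and S_w are the between-class and within-class scatter matrices, and C is a balance constant; this is the generic form of the criterion rather than the paper's exact multiple-criterion formulation.

    ```latex
    J(w) = w^{\top} S_b\, w \;-\; C\, w^{\top} S_w\, w, \qquad \|w\| = 1
    ```

    Maximizing this difference leads to the leading eigenvectors of \(S_b - C S_w\); unlike the Rayleigh quotient \(w^{\top} S_b w / w^{\top} S_w w\), it requires no inversion of \(S_w\), which is why the small-sample-size singularity mentioned above is avoided.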

  14. Traumatic facial nerve neuroma with facial palsy presenting in infancy.

    PubMed

    Clark, James H; Burger, Peter C; Boahene, Derek Kofi; Niparko, John K

    2010-07-01

    To describe the management of traumatic neuroma of the facial nerve in a child and to review the literature. The patient was a 16-month-old male subject; the interventions were radiological imaging and surgery, and the main outcome measure was facial nerve function. The patient presented at 16 months with a right facial palsy and was found to have a traumatic neuroma of the right facial nerve. A transmastoid, middle fossa resection of the right facial nerve lesion was undertaken with a successful facial nerve-to-hypoglossal nerve anastomosis. The facial palsy improved postoperatively. A traumatic neuroma should be considered in an infant who presents with facial palsy, even in the absence of an obvious history of trauma. The treatment of such a lesion is complex in any age group, but especially in young children. Symptoms, age, lesion size, growth rate, and facial nerve function determine the appropriate management.

  15. Maxillectomy defects: a suggested classification scheme.

    PubMed

    Akinmoladun, V I; Dosumu, O O; Olusanya, A A; Ikusika, O F

    2013-06-01

    The term "maxillectomy" has been used to describe a variety of surgical procedures for a spectrum of diseases involving a diverse anatomical site. Hence, classifications of maxillectomy defects have often made communication difficult. This article highlights this problem, emphasises the need for a uniform system of classification and suggests a classification system which is simple and comprehensive. Articles related to this subject, especially those with specified classifications of maxillary surgical defects were sourced from the internet through Google, Scopus and PubMed using the search terms maxillectomy defects classification. A manual search through available literature was also done. The review of the materials revealed many classifications and modifications of classifications from the descriptive, reconstructive and prosthodontic perspectives. No globally acceptable classification exists among practitioners involved in the management of diseases in the mid-facial region. There were over 14 classifications of maxillary defects found in the English literature. Attempts made to address the inadequacies of previous classifications have tended to result in cumbersome and relatively complex classifications. A single classification that is based on both surgical and prosthetic considerations is most desirable and is hereby proposed.

  16. Facial EMG responses to emotional expressions are related to emotion perception ability.

    PubMed

    Künecke, Janina; Hildebrandt, Andrea; Recio, Guillermo; Sommer, Werner; Wilhelm, Oliver

    2014-01-01

    Although most people can identify facial expressions of emotions well, they still differ in this ability. According to embodied simulation theories, understanding emotions of others is fostered by involuntarily mimicking the perceived expressions, causing a "reactivation" of the corresponding mental state. Some studies suggest automatic facial mimicry during expression viewing; however, findings on the relationship between mimicry and emotion perception abilities are equivocal. The present study investigated individual differences in emotion perception and its relationship to facial muscle responses, recorded with the electromyogram (EMG), in response to emotional facial expressions. N = 269 participants completed multiple tasks measuring face and emotion perception. EMG recordings were taken from a subsample (N = 110) in an independent emotion classification task of short videos displaying six emotions. Confirmatory factor analyses of the m. corrugator supercilii in response to angry, happy, sad, and neutral expressions showed that individual differences in corrugator activity can be separated into a general response to all faces and an emotion-related response. Structural equation modeling revealed a substantial relationship between the emotion-related response and emotion perception ability, providing evidence for the role of facial muscle activation in emotion perception from an individual differences perspective.

  17. Spontaneous Facial Mimicry in Response to Dynamic Facial Expressions

    ERIC Educational Resources Information Center

    Sato, Wataru; Yoshikawa, Sakiko

    2007-01-01

    Based on previous neuroscientific evidence indicating activation of the mirror neuron system in response to dynamic facial actions, we hypothesized that facial mimicry would occur while subjects viewed dynamic facial expressions. To test this hypothesis, dynamic/static facial expressions of anger/happiness were presented using computer-morphing…

  18. The Not Face: A grammaticalization of facial expressions of emotion

    PubMed Central

    Benitez-Quiroz, C. Fabian; Wilbur, Ronnie B.; Martinez, Aleix M.

    2016-01-01

    Facial expressions of emotion are thought to have evolved from the development of facial muscles used in sensory regulation and later adapted to express moral judgment. Negative moral judgment includes the expressions of anger, disgust and contempt. Here, we study the hypothesis that these facial expressions of negative moral judgment have further evolved into a facial expression of negation regularly used as a grammatical marker in human language. Specifically, we show that people from different cultures expressing negation use the same facial muscles as those employed to express negative moral judgment. We then show that this nonverbal signal is used as a co-articulator in speech and that, in American Sign Language, it has been grammaticalized as a non-manual marker. Furthermore, this facial expression of negation exhibits the theta oscillation (3–8 Hz) universally seen in syllable and mouthing production in speech and signing. These results provide evidence for the hypothesis that some components of human language have evolved from facial expressions of emotion, and suggest an evolutionary route for the emergence of grammatical markers. PMID:26872248

  19. Impact of automobile restraint device utilization on facial fractures and fiscal implications for plastic surgeons.

    PubMed

    Adkinson, Joshua M; Murphy, Robert X

    2011-05-01

    In 2009, the National Highway Traffic Safety Administration projected that 33,963 people would die and millions would be injured in motor vehicle collisions (MVCs). Multiple studies have evaluated the impact of restraint devices in MVCs. This study examines longitudinal changes in facial fractures after MVCs as a result of the utilization of restraint devices. The Pennsylvania Trauma Systems Foundation-Pennsylvania Trauma Outcomes Study database was queried for MVCs from 1989 to 2009. Restraint device use was noted, and facial fractures were identified by International Classification of Diseases-ninth revision codes. Surgeon cost data were extrapolated. More than 15,000 patients sustained ≥1 facial fracture. Only orbital blowout fractures increased over the 20 years. Patients were 2.1% less likely every year to have ≥1 facial fracture, which translated into decreased estimated surgeon charges. Increased use of protective devices by patients involved in MVCs resulted in a change in the incidence of different facial fractures, with a reduced need for reconstructive surgery.

  20. Impaired perception of facial emotion in developmental prosopagnosia.

    PubMed

    Biotti, Federica; Cook, Richard

    2016-08-01

    Developmental prosopagnosia (DP) is a neurodevelopmental condition characterised by difficulties recognising faces. Despite severe difficulties recognising facial identity, expression recognition is typically thought to be intact in DP; case studies have described individuals who are able to correctly label photographic displays of facial emotion, and no group differences have been reported. This pattern of deficits suggests a locus of impairment relatively late in the face processing stream, after the divergence of expression and identity analysis pathways. To date, however, there has been little attempt to investigate emotion recognition systematically in a large sample of developmental prosopagnosics using sensitive tests. In the present study, we describe three complementary experiments that examine emotion recognition in a sample of 17 developmental prosopagnosics. In Experiment 1, we investigated observers' ability to make binary classifications of whole-face expression stimuli drawn from morph continua. In Experiment 2, observers judged facial emotion using only the eye-region (the rest of the face was occluded). Analyses of both experiments revealed diminished ability to classify facial expressions in our sample of developmental prosopagnosics, relative to typical observers. Imprecise expression categorisation was particularly evident in those individuals exhibiting apperceptive profiles, associated with problems encoding facial shape accurately. Having split the sample of prosopagnosics into apperceptive and non-apperceptive subgroups, only the apperceptive prosopagnosics were impaired relative to typical observers. In our third experiment, we examined the ability of observers' to classify the emotion present within segments of vocal affect. Despite difficulties judging facial emotion, the prosopagnosics exhibited excellent recognition of vocal affect. Contrary to the prevailing view, our results suggest that many prosopagnosics do experience difficulties

  1. Spatially generalizable representations of facial expressions: Decoding across partial face samples.

    PubMed

    Greening, Steven G; Mitchell, Derek G V; Smith, Fraser W

    2018-04-01

    A network of cortical and sub-cortical regions is known to be important in the processing of facial expression. However, to date no study has investigated whether representations of facial expressions present in this network permit generalization across independent samples of face information (e.g., eye region vs mouth region). We presented participants with partial face samples of five expression categories in a rapid event-related fMRI experiment. We reveal a network of face-sensitive regions that contain information about facial expression categories regardless of which part of the face is presented. We further reveal that the neural information present in a subset of these regions: dorsal prefrontal cortex (dPFC), superior temporal sulcus (STS), lateral occipital and ventral temporal cortex, and even early visual cortex, enables reliable generalization across independent visual inputs (faces depicting the 'eyes only' vs 'eyes removed'). Furthermore, classification performance was correlated to behavioral performance in STS and dPFC. Our results demonstrate that both higher (e.g., STS, dPFC) and lower level cortical regions contain information useful for facial expression decoding that go beyond the visual information presented, and implicate a key role for contextual mechanisms such as cortical feedback in facial expression perception under challenging conditions of visual occlusion. Copyright © 2017 Elsevier Ltd. All rights reserved.
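
    The cross-sample generalization test described above can be illustrated with a small decoding sketch: train a classifier on voxel patterns from one partial-face condition and test it on the other. Everything below is simulated and hypothetical (random "voxel" data, a logistic-regression decoder standing in for whatever classifier the study used); on random data the score will sit near chance.

      # Illustrative cross-condition decoding: fit on 'eyes only' patterns,
      # evaluate on 'eyes removed' patterns. Data and labels are simulated.
      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(0)
      n_trials, n_voxels, n_classes = 100, 500, 5

      X_eyes_only = rng.normal(size=(n_trials, n_voxels))      # condition 1 patterns
      X_eyes_removed = rng.normal(size=(n_trials, n_voxels))   # condition 2 patterns
      y = rng.integers(0, n_classes, size=n_trials)            # expression label per trial

      clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
      clf.fit(X_eyes_only, y)                                  # train on one visual input
      acc = clf.score(X_eyes_removed, y)                       # test on the independent input
      print(f"cross-condition decoding accuracy: {acc:.2f}")   # ~chance on random data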

  2. The role of great auricular-facial nerve neurorrhaphy in facial nerve damage.

    PubMed

    Sun, Yan; Liu, Limei; Han, Yuechen; Xu, Lei; Zhang, Daogong; Wang, Haibo

    2015-01-01

    The facial nerve is easily damaged, and there are many methods for facial nerve reconstruction, such as facial nerve end-to-end anastomosis, the great auricular nerve graft, the sural nerve graft, or hypoglossal-facial nerve anastomosis. However, there has been little study of great auricular-facial nerve neurorrhaphy. The aim of the present study was to identify the role of great auricular-facial nerve neurorrhaphy and its mechanism. Rat models of facial nerve cut (FC), facial nerve end-to-end anastomosis (FF), facial-great auricular neurorrhaphy (FG), and control (Ctrl) were established. Apex nasi amesiality observation, electrophysiology and immunofluorescence assays were employed to investigate the function and mechanism. In the apex nasi amesiality observation, it was found that apex nasi amesiality in the FG group was partly recovered. Additionally, electrophysiology and immunofluorescence assays revealed that facial-great auricular neurorrhaphy could transfer nerve impulses and express AChR, performing better than facial nerve cut but worse than facial nerve end-to-end anastomosis. The present study indicated that great auricular-facial nerve neurorrhaphy is a substantial solution for facial lesion repair, as it efficiently prevents facial muscle atrophy by generating neurotransmitters such as ACh.

  3. Facial approximation-from facial reconstruction synonym to face prediction paradigm.

    PubMed

    Stephan, Carl N

    2015-05-01

    Facial approximation was first proposed as a synonym for facial reconstruction in 1987 due to dissatisfaction with the connotations the latter label held. Since its debut, facial approximation's identity has morphed as anomalies in face prediction have accumulated. Now underpinned by differences in what problems are thought to count as legitimate, facial approximation can no longer be considered a synonym for, or subclass of, facial reconstruction. Instead, two competing paradigms of face prediction have emerged, namely: facial approximation and facial reconstruction. This paper shines a Kuhnian lens across the discipline of face prediction to comprehensively review these developments and outlines the distinguishing features between the two paradigms. © 2015 American Academy of Forensic Sciences.

  4. PCANet: A Simple Deep Learning Baseline for Image Classification?

    PubMed

    Chan, Tsung-Han; Jia, Kui; Gao, Shenghua; Lu, Jiwen; Zeng, Zinan; Ma, Yi

    2015-12-01

    In this paper, we propose a very simple deep learning network for image classification that is based on very basic data processing components: 1) cascaded principal component analysis (PCA); 2) binary hashing; and 3) blockwise histograms. In the proposed architecture, the PCA is employed to learn multistage filter banks. This is followed by simple binary hashing and block histograms for indexing and pooling. This architecture is thus called the PCA network (PCANet) and can be extremely easily and efficiently designed and learned. For comparison and to provide a better understanding, we also introduce and study two simple variations of PCANet: 1) RandNet and 2) LDANet. They share the same topology as PCANet, but their cascaded filters are either randomly selected or learned from linear discriminant analysis. We have extensively tested these basic networks on many benchmark visual data sets for different tasks, including Labeled Faces in the Wild (LFW) for face verification; the MultiPIE, Extended Yale B, AR, Facial Recognition Technology (FERET) data sets for face recognition; and MNIST for hand-written digit recognition. Surprisingly, for all tasks, such a seemingly naive PCANet model is on par with the state-of-the-art features either prefixed, highly hand-crafted, or carefully learned [by deep neural networks (DNNs)]. Even more surprisingly, the model sets new records for many classification tasks on the Extended Yale B, AR, and FERET data sets and on MNIST variations. Additional experiments on other public data sets also demonstrate the potential of PCANet to serve as a simple but highly competitive baseline for texture classification and object recognition.
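
    The record above names the three PCANet building blocks explicitly, so they can be sketched compactly. The following is a minimal, illustrative single-stage version (patch-based PCA filters, binary hashing, blockwise histograms) written against NumPy/SciPy with assumed parameter values; it is not the authors' released implementation.

      # Illustrative single-stage PCANet-style pipeline: learn PCA filters from
      # image patches, binarize the filter responses, and pool them into block
      # histograms. Patch size, filter count and block size are assumptions.
      import numpy as np
      from scipy.signal import convolve2d

      def extract_patches(img, k):
          """Collect all k x k patches (mean-removed) from a 2-D image."""
          H, W = img.shape
          patches = [img[i:i + k, j:j + k].ravel()
                     for i in range(H - k + 1) for j in range(W - k + 1)]
          patches = np.asarray(patches, dtype=float)
          return patches - patches.mean(axis=1, keepdims=True)

      def learn_pca_filters(images, k=5, n_filters=8):
          """Stage-1 filters = leading eigenvectors of the patch covariance."""
          P = np.vstack([extract_patches(img, k) for img in images])
          cov = P.T @ P / len(P)
          _, vecs = np.linalg.eigh(cov)               # eigenvalues in ascending order
          return vecs[:, -n_filters:].T.reshape(n_filters, k, k)

      def pcanet_features(img, filters, block=7):
          """Convolve, binary-hash the filter maps, and build blockwise histograms."""
          maps = np.stack([convolve2d(img, f, mode='same') for f in filters])
          # Binary hashing: each pixel gets an integer code from the signs of the maps.
          codes = sum((maps[i] > 0).astype(int) << i for i in range(len(filters)))
          H, W = codes.shape
          hists = []
          for i in range(0, H - block + 1, block):
              for j in range(0, W - block + 1, block):
                  cell = codes[i:i + block, j:j + block].ravel()
                  hists.append(np.bincount(cell, minlength=2 ** len(filters)))
          return np.concatenate(hists)

      rng = np.random.default_rng(1)
      train_imgs = [rng.normal(size=(28, 28)) for _ in range(20)]
      filters = learn_pca_filters(train_imgs)
      print(pcanet_features(train_imgs[0], filters).shape)

    The resulting histogram vector would then be passed to a simple classifier, and a deeper network can be obtained by repeating the filter-learning step on the stage-1 filter maps.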

  5. Human Facial Expressions as Adaptations: Evolutionary Questions in Facial Expression Research

    PubMed Central

    SCHMIDT, KAREN L.; COHN, JEFFREY F.

    2007-01-01

    The importance of the face in social interaction and social intelligence is widely recognized in anthropology. Yet the adaptive functions of human facial expression remain largely unknown. An evolutionary model of human facial expression as behavioral adaptation can be constructed, given the current knowledge of the phenotypic variation, ecological contexts, and fitness consequences of facial behavior. Studies of facial expression are available, but results are not typically framed in an evolutionary perspective. This review identifies the relevant physical phenomena of facial expression and integrates the study of this behavior with the anthropological study of communication and sociality in general. Anthropological issues with relevance to the evolutionary study of facial expression include: facial expressions as coordinated, stereotyped behavioral phenotypes, the unique contexts and functions of different facial expressions, the relationship of facial expression to speech, the value of facial expressions as signals, and the relationship of facial expression to social intelligence in humans and in nonhuman primates. Human smiling is used as an example of adaptation, and testable hypotheses concerning the human smile, as well as other expressions, are proposed. PMID:11786989

  6. Realistic facial animation generation based on facial expression mapping

    NASA Astrophysics Data System (ADS)

    Yu, Hui; Garrod, Oliver; Jack, Rachael; Schyns, Philippe

    2014-01-01

    Facial expressions reflect a character's internal emotional states or responses to social communication. Although much effort has been devoted to generating realistic facial expressions, this remains a challenging topic because of human sensitivity to subtle facial movements. In this paper, we present a method for facial animation generation that reflects true facial muscle movements with high fidelity. An intermediate model space is introduced to transfer captured static AU peak frames, based on FACS, to the conformed target face. Dynamic parameters derived using a psychophysics method are then integrated to generate facial animation, which is assumed to represent the natural correlation of multiple AUs. Finally, the animation sequence in the intermediate model space is mapped to the target face to produce the final animation.

  7. The role of great auricular-facial nerve neurorrhaphy in facial nerve damage

    PubMed Central

    Sun, Yan; Liu, Limei; Han, Yuechen; Xu, Lei; Zhang, Daogong; Wang, Haibo

    2015-01-01

    Background: The facial nerve is easily damaged, and there are many methods for facial nerve reconstruction, such as facial nerve end-to-end anastomosis, the great auricular nerve graft, the sural nerve graft, or hypoglossal-facial nerve anastomosis. However, there has been little study of great auricular-facial nerve neurorrhaphy. The aim of the present study was to identify the role of great auricular-facial nerve neurorrhaphy and its mechanism. Methods: Rat models of facial nerve cut (FC), facial nerve end-to-end anastomosis (FF), facial-great auricular neurorrhaphy (FG), and control (Ctrl) were established. Apex nasi amesiality observation, electrophysiology and immunofluorescence assays were employed to investigate the function and mechanism. Results: In the apex nasi amesiality observation, it was found that apex nasi amesiality in the FG group was partly recovered. Additionally, electrophysiology and immunofluorescence assays revealed that facial-great auricular neurorrhaphy could transfer nerve impulses and express AChR, performing better than facial nerve cut but worse than facial nerve end-to-end anastomosis. Conclusions: The present study indicated that great auricular-facial nerve neurorrhaphy is a substantial solution for facial lesion repair, as it efficiently prevents facial muscle atrophy by generating neurotransmitters such as ACh. PMID:26550216

  8. Estimation of human emotions using thermal facial information

    NASA Astrophysics Data System (ADS)

    Nguyen, Hung; Kotani, Kazunori; Chen, Fan; Le, Bac

    2014-01-01

    In recent years, research on human emotion estimation using thermal infrared (IR) imagery has attracted many researchers because of its invariance to visible illumination changes. Although infrared imagery is superior to visible imagery in its invariance to illumination changes and appearance differences, it has difficulty handling eyeglasses, which are opaque in the thermal infrared spectrum. As a result, when infrared imagery is used for the analysis of human facial information, the eyeglass regions appear dark and no thermal information is available for the eyes. We propose a temperature space method to correct the effect of eyeglasses using the thermal facial information in the neighboring facial regions, and then use Principal Component Analysis (PCA), the Eigen-space Method based on class-features (EMC), and the PCA-EMC method to classify human emotions from the corrected thermal images. We collected the Kotani Thermal Facial Emotion (KTFE) database and performed experiments, which show an improved accuracy rate in estimating human emotions.
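
    A minimal sketch of the classification half of this pipeline is shown below, assuming the eyeglass correction has already been applied. PCA reduces the vectorized thermal images before a simple classifier; linear discriminant analysis is used here only as a stand-in for the EMC step, and the data are simulated placeholders.

      # PCA + a simple classifier over vectorized (corrected) thermal face images.
      # LDA stands in for the paper's EMC method; images and labels are simulated.
      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import make_pipeline

      rng = np.random.default_rng(2)
      n_samples, h, w, n_emotions = 120, 32, 32, 5
      X = rng.normal(size=(n_samples, h * w))           # vectorized thermal images
      y = rng.integers(0, n_emotions, size=n_samples)   # emotion labels

      model = make_pipeline(PCA(n_components=30), LinearDiscriminantAnalysis())
      scores = cross_val_score(model, X, y, cv=5)
      print(f"mean cross-validated accuracy: {scores.mean():.2f}")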

  9. The not face: A grammaticalization of facial expressions of emotion.

    PubMed

    Benitez-Quiroz, C Fabian; Wilbur, Ronnie B; Martinez, Aleix M

    2016-05-01

    Facial expressions of emotion are thought to have evolved from the development of facial muscles used in sensory regulation and later adapted to express moral judgment. Negative moral judgment includes the expressions of anger, disgust and contempt. Here, we study the hypothesis that these facial expressions of negative moral judgment have further evolved into a facial expression of negation regularly used as a grammatical marker in human language. Specifically, we show that people from different cultures expressing negation use the same facial muscles as those employed to express negative moral judgment. We then show that this nonverbal signal is used as a co-articulator in speech and that, in American Sign Language, it has been grammaticalized as a non-manual marker. Furthermore, this facial expression of negation exhibits the theta oscillation (3-8 Hz) universally seen in syllable and mouthing production in speech and signing. These results provide evidence for the hypothesis that some components of human language have evolved from facial expressions of emotion, and suggest an evolutionary route for the emergence of grammatical markers. Copyright © 2016 Elsevier B.V. All rights reserved.

  10. Effect of air bags and restraining devices on the pattern of facial fractures in motor vehicle crashes.

    PubMed

    Simoni, Payman; Ostendorf, Robert; Cox, Artemus J

    2003-01-01

    To examine the relationship between the use of restraining devices and the incidence of specific facial fractures in motor vehicle crashes. Retrospective analysis of patients with facial fractures following a motor vehicle crash. University of Alabama at Birmingham Hospital level I trauma center from 1996 to 2000. Of 3731 patients involved in motor vehicle crashes, a total of 497 patients were found to have facial fractures as determined by International Classification of Diseases, Ninth Revision (ICD-9) codes. Facial fractures were categorized as mandibular, orbital, zygomaticomaxillary complex (ZMC), and nasal. Use of seat belts alone was more effective in decreasing the chance of facial fractures in this population (from 17% to 8%) compared with the use of air bags alone (17% to 11%). The use of seat belts and air bags together decreased the incidence of facial fractures from 17% to 5%. Use of restraining devices in vehicles significantly reduces the chance of incurring facial fractures in a severe motor vehicle crash. However, use of air bags and seat belts does not change the pattern of facial fractures greatly except for ZMC fractures. Air bags are least effective in preventing ZMC fractures. Improving the mechanics of restraining devices might be needed to minimize facial fractures.

  11. Easy facial analysis using the facial golden mask.

    PubMed

    Kim, Yong-Ha

    2007-05-01

    For over 2000 years, many artists and scientists have tried to understand or quantify the form of the perfect, ideal, or most beautiful face both in art and in vivo (life). A mathematical relationship has been consistently and repeatedly reported to be present in beautiful things. This particular relationship is the golden ratio. It is a mathematical ratio of 1.618:1 that seems to appear recurrently in beautiful things in nature as well as in other things that are seen as beautiful. Dr. Marquardt made the facial golden mask that contains all of the one-dimensional and two-dimensional geometric golden elements formed from the golden ratio. The purpose of this study is to evaluate the usefulness of the golden facial mask. In 40 cases, the authors applied the facial golden mask to preoperative and postoperative photographs and scored each photograph on a 1 to 5 scale from the perspective of their personal aesthetic views. The score was lower when the facial deformity was severe, whereas it was higher when the face was attractive. When the average scores of photographs with and without the facial mask applied were compared using a nonparametric test, statistical significance was not reached (P > 0.05). This implies that the facial golden mask may be used as an analytical tool. The facial golden mask is easy to apply, inexpensive, and relatively objective. Therefore, the authors introduce it as a useful facial analysis tool.

  12. Social perception of morbidity in facial nerve paralysis.

    PubMed

    Li, Matthew Ka Ki; Niles, Navin; Gore, Sinclair; Ebrahimi, Ardalan; McGuinness, John; Clark, Jonathan Robert

    2016-08-01

    There are many patient-based and clinician-based scales measuring the severity of facial nerve paralysis and the impact on quality of life, however, the social perception of facial palsy has received little attention. The purpose of this pilot study was to measure the consequences of facial paralysis on selected domains of social perception and compare the social impact of paralysis of the different components. Four patients with typical facial palsies (global, marginal mandibular, zygomatic/buccal, and frontal) and 1 control were photographed. These images were each shown to 100 participants who subsequently rated variables of normality, perceived distress, trustworthiness, intelligence, interaction, symmetry, and disability. Statistical analysis was performed to compare the results among each palsy. Paralyzed faces were considered less normal compared to the control on a scale of 0 to 10 (mean, 8.6; 95% confidence interval [CI] = 8.30-8.86) with global paralysis (mean, 3.4; 95% CI = 3.08-3.80) rated as the most disfiguring, followed by the zygomatic/buccal (mean, 6.0; 95% CI = 5.68-6.37), marginal (mean, 6.5; 95% CI = 6.08-6.86), and then temporal palsies (mean, 6.9; 95% CI = 6.57-7.21). Similar trends were seen when analyzing these palsies for perceived distress, intelligence, and trustworthiness, using a random effects regression model. Our sample suggests that society views paralyzed faces as less normal, less trustworthy, and more distressed. Different components of facial paralysis are worse than others and surgical correction may need to be prioritized in an evidence-based manner with social morbidity in mind. © 2016 Wiley Periodicals, Inc. Head Neck 38:1158-1163, 2016. © 2016 Wiley Periodicals, Inc.

  13. Facial EMG Responses to Emotional Expressions Are Related to Emotion Perception Ability

    PubMed Central

    Künecke, Janina; Hildebrandt, Andrea; Recio, Guillermo; Sommer, Werner; Wilhelm, Oliver

    2014-01-01

    Although most people can identify facial expressions of emotions well, they still differ in this ability. According to embodied simulation theories, understanding the emotions of others is fostered by involuntarily mimicking the perceived expressions, causing a “reactivation” of the corresponding mental state. Some studies suggest automatic facial mimicry during expression viewing; however, findings on the relationship between mimicry and emotion perception abilities are equivocal. The present study investigated individual differences in emotion perception and its relationship to facial muscle responses - recorded with electromyogram (EMG) - in response to emotional facial expressions. N = 269 participants completed multiple tasks measuring face and emotion perception. EMG recordings were taken from a subsample (N = 110) in an independent emotion classification task of short videos displaying six emotions. Confirmatory factor analyses of the m. corrugator supercilii in response to angry, happy, sad, and neutral expressions showed that individual differences in corrugator activity can be separated into a general response to all faces and an emotion-related response. Structural equation modeling revealed a substantial relationship between the emotion-related response and emotion perception ability, providing evidence for the role of facial muscle activation in emotion perception from an individual differences perspective. PMID:24489647

  14. Effects of facial color on the subliminal processing of fearful faces.

    PubMed

    Nakajima, K; Minami, T; Nakauchi, S

    2015-12-03

    Recent studies have suggested that both configural information, such as face shape, and surface information is important for face perception. In particular, facial color is sufficiently suggestive of emotional states, as in the phrases: "flushed with anger" and "pale with fear." However, few studies have examined the relationship between facial color and emotional expression. On the other hand, event-related potential (ERP) studies have shown that emotional expressions, such as fear, are processed unconsciously. In this study, we examined how facial color modulated the supraliminal and subliminal processing of fearful faces. We recorded electroencephalograms while participants performed a facial emotion identification task involving masked target faces exhibiting facial expressions (fearful or neutral) and colors (natural or bluish). The results indicated that there was a significant interaction between facial expression and color for the latency of the N170 component. Subsequent analyses revealed that the bluish-colored faces increased the latency effect of facial expressions compared to the natural-colored faces, indicating that the bluish color modulated the processing of fearful expressions. We conclude that the unconscious processing of fearful faces is affected by facial color. Copyright © 2015 IBRO. Published by Elsevier Ltd. All rights reserved.

  15. Novel dynamic Bayesian networks for facial action element recognition and understanding

    NASA Astrophysics Data System (ADS)

    Zhao, Wei; Park, Jeong-Seon; Choi, Dong-You; Lee, Sang-Woong

    2011-12-01

    In daily life, language is an important tool of communication between people. Besides language, facial action can also provide a great amount of information. Therefore, facial action recognition has become a popular research topic in the field of human-computer interaction (HCI). However, facial action recognition is quite a challenging task due to its complexity. In a literal sense, there are thousands of facial muscular movements, many of which have very subtle differences. Moreover, muscular movements always occur simultaneously when the pose is changed. To address this problem, we first build a fully automatic facial points detection system based on a local Gabor filter bank and principal component analysis. Then, novel dynamic Bayesian networks are proposed to perform facial action recognition using the junction tree algorithm over a limited number of feature points. In order to evaluate the proposed method, we have used the Korean face database for model training. For testing, we used the CUbiC FacePix, facial expressions and emotion database, Japanese female facial expression database, and our own database. Our experimental results clearly demonstrate the feasibility of the proposed approach.
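
    The facial-point detection front end described above rests on a local Gabor filter bank followed by PCA. The sketch below illustrates only the filter-bank step, assuming OpenCV (cv2) is available; kernel parameters and the patch used are illustrative assumptions, and the dynamic Bayesian network stage is not shown.

      # Small Gabor filter bank for describing candidate facial points.
      # Kernel parameters are assumed values for illustration only.
      import cv2
      import numpy as np

      def gabor_bank(ksize=21, sigma=4.0, lambd=10.0, gamma=0.5, n_orientations=8):
          kernels = []
          for i in range(n_orientations):
              theta = i * np.pi / n_orientations
              kernels.append(cv2.getGaborKernel((ksize, ksize), sigma, theta,
                                                lambd, gamma, 0))
          return kernels

      def gabor_features(patch, kernels):
          """Filter a grayscale patch with every kernel; keep the mean magnitude."""
          return np.array([np.abs(cv2.filter2D(patch, cv2.CV_64F, k)).mean()
                           for k in kernels])

      patch = np.random.default_rng(3).random((32, 32)).astype(np.float32)
      feats = gabor_features(patch, gabor_bank())
      print(feats.shape)   # one response per orientation

    In a full system, such responses would be computed around candidate landmark locations and reduced by PCA before the Bayesian network performs action-element inference.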

  16. Reproducibility of the dynamics of facial expressions in unilateral facial palsy.

    PubMed

    Alagha, M A; Ju, X; Morley, S; Ayoub, A

    2018-02-01

    The aim of this study was to assess the reproducibility of non-verbal facial expressions in unilateral facial paralysis using dynamic four-dimensional (4D) imaging. The Di4D system was used to record five facial expressions of 20 adult patients. The system captured 60 three-dimensional (3D) images per second; each facial expression took 3-4 seconds, which was recorded in real time. Thus a set of 180 3D facial images was generated for each expression. The procedure was repeated after 30 min to assess the reproducibility of the expressions. A mathematical facial mesh consisting of thousands of quasi-point 'vertices' was conformed to the face in order to determine the morphological characteristics in a comprehensive manner. The vertices were tracked throughout the sequence of the 180 images. Five key 3D facial frames from each sequence of images were analyzed. Comparisons were made between the first and second capture of each facial expression to assess the reproducibility of facial movements. Corresponding images were aligned using partial Procrustes analysis, and the root mean square distance between them was calculated and analyzed statistically (paired Student t-test, P<0.05). Facial expressions of lip purse, cheek puff, and raising of eyebrows were reproducible. Facial expressions of maximum smile and forceful eye closure were not reproducible. The limited coordination of various groups of facial muscles contributed to the lack of reproducibility of these facial expressions. 4D imaging is a useful clinical tool for the assessment of facial expressions. Copyright © 2017 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.
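
    The alignment-and-distance step in this record (partial Procrustes followed by a root mean square distance) can be sketched as below. This is a generic orthogonal Procrustes (Kabsch) fit on corresponding vertex sets with simulated data; it is not the Di4D software and the vertex counts are placeholders.

      # Align two corresponding vertex sets (translation + rotation, no scaling)
      # and report the RMS distance between them. Vertex arrays are simulated.
      import numpy as np

      def procrustes_rms(A, B):
          """A, B: (n_vertices, 3) corresponding point sets from two captures."""
          A0 = A - A.mean(axis=0)
          B0 = B - B.mean(axis=0)
          # Optimal rotation of B onto A (Kabsch / orthogonal Procrustes).
          U, _, Vt = np.linalg.svd(B0.T @ A0)
          d = np.sign(np.linalg.det(Vt.T @ U.T))
          R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T      # avoid reflections
          B_aligned = B0 @ R.T
          return np.sqrt(np.mean(np.sum((A0 - B_aligned) ** 2, axis=1)))

      rng = np.random.default_rng(4)
      capture1 = rng.normal(size=(1000, 3))                          # first session
      capture2 = capture1 + rng.normal(scale=0.05, size=(1000, 3))   # repeat session
      print(f"RMS distance after alignment: {procrustes_rms(capture1, capture2):.3f}")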

  17. The Prevalence of Cosmetic Facial Plastic Procedures among Facial Plastic Surgeons.

    PubMed

    Moayer, Roxana; Sand, Jordan P; Han, Albert; Nabili, Vishad; Keller, Gregory S

    2018-04-01

    This is the first study to report on the prevalence of cosmetic facial plastic surgery use among facial plastic surgeons. The aim of this study is to determine the frequency with which facial plastic surgeons have cosmetic procedures themselves. A secondary aim is to determine whether trends in usage of cosmetic facial procedures among facial plastic surgeons are similar to those of nonsurgeons. The study design was an anonymous, five-question, Internet survey distributed via email, set in a single academic institution. Board-certified members of the American Academy of Facial Plastic and Reconstructive Surgery (AAFPRS) were included in this study. Self-reported history of cosmetic facial plastic surgery or minimally invasive procedures was recorded. The survey also queried participants for demographic data. A total of 216 members of the AAFPRS responded to the questionnaire. Ninety percent of respondents were male (n = 192) and 10.3% were female (n = 22). Thirty-three percent of respondents were aged 31 to 40 years (n = 70), 25% were aged 41 to 50 years (n = 53), 21.4% were aged 51 to 60 years (n = 46), and 20.5% were older than 60 years (n = 44). Thirty-six percent of respondents had a surgical cosmetic facial procedure and 75% had at least one minimally invasive cosmetic facial procedure. Facial plastic surgeons are frequent users of cosmetic facial plastic surgery. This finding may be due to access, knowledge base, values, or attitudes. By better understanding surgeon attitudes toward facial plastic surgery, we can improve communication with patients and delivery of care. This study is a first step in understanding use of facial plastic procedures among facial plastic surgeons.

  18. Classification of Hypertrophy of Labia Minora: Consideration of a Multiple Component Approach.

    PubMed

    González, Pablo I

    2015-11-01

    Labia minora hypertrophy, of unknown and under-reported incidence in the general population, is considered a variant of normal anatomy. Its origin is multi-factorial, including genetic, hormonal, and infectious factors, as well as voluntary elongation of the labia minora in some cultures. Consultations with patients bothered by this condition have been increasing, with patients complaining of poor aesthetics and symptoms such as difficulty with vaginal secretions, vulvovaginitis, chronic irritation, and superficial dyspareunia, all of which can have a negative effect on these patients' sexuality and self-esteem. Surgical management of labial hypertrophy is an option for women with these physical complaints or aesthetic issues. Labia minora hypertrophy can consist of multiple components, including the clitoral hood, lateral prepuce, frenulum, and the body of the labia minora. To date, there is not a consensus in the literature with respect to the classification and definition of varying grades of hypertrophy, aside from measurement of the length in centimeters. In order to offer patients the most appropriate surgical technique, an objective and understandable classification that can be used as part of the preoperative evaluation is necessary. Such a classification should have the aim of offering patients the best cosmetic and functional results with the fewest complications.

  19. Optimal Geometrical Set for Automated Marker Placement to Virtualized Real-Time Facial Emotions

    PubMed Central

    Maruthapillai, Vasanthan; Murugappan, Murugappan

    2016-01-01

    In recent years, real-time face recognition has been a major topic of interest in developing intelligent human-machine interaction systems. Over the past several decades, researchers have proposed different algorithms for facial expression recognition, but there has been little focus on detection in real-time scenarios. The present work proposes a new algorithmic method of automated marker placement used to classify six facial expressions: happiness, sadness, anger, fear, disgust, and surprise. Emotional facial expressions were captured using a webcam, while the proposed algorithm placed a set of eight virtual markers on each subject’s face. Facial feature extraction methods, including marker distance (distance between each marker to the center of the face) and change in marker distance (change in distance between the original and new marker positions), were used to extract three statistical features (mean, variance, and root mean square) from the real-time video sequence. The initial position of each marker was subjected to the optical flow algorithm for marker tracking with each emotional facial expression. Finally, the extracted statistical features were mapped into corresponding emotional facial expressions using two simple non-linear classifiers, K-nearest neighbor and probabilistic neural network. The results indicate that the proposed automated marker placement algorithm effectively placed eight virtual markers on each subject’s face and gave a maximum mean emotion classification rate of 96.94% using the probabilistic neural network. PMID:26859884

  20. Optimal Geometrical Set for Automated Marker Placement to Virtualized Real-Time Facial Emotions.

    PubMed

    Maruthapillai, Vasanthan; Murugappan, Murugappan

    2016-01-01

    In recent years, real-time face recognition has been a major topic of interest in developing intelligent human-machine interaction systems. Over the past several decades, researchers have proposed different algorithms for facial expression recognition, but there has been little focus on detection in real-time scenarios. The present work proposes a new algorithmic method of automated marker placement used to classify six facial expressions: happiness, sadness, anger, fear, disgust, and surprise. Emotional facial expressions were captured using a webcam, while the proposed algorithm placed a set of eight virtual markers on each subject's face. Facial feature extraction methods, including marker distance (distance between each marker to the center of the face) and change in marker distance (change in distance between the original and new marker positions), were used to extract three statistical features (mean, variance, and root mean square) from the real-time video sequence. The initial position of each marker was subjected to the optical flow algorithm for marker tracking with each emotional facial expression. Finally, the extracted statistical features were mapped into corresponding emotional facial expressions using two simple non-linear classifiers, K-nearest neighbor and probabilistic neural network. The results indicate that the proposed automated marker placement algorithm effectively placed eight virtual markers on each subject's face and gave a maximum mean emotion classification rate of 96.94% using the probabilistic neural network.
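
    The two records above spell out the feature recipe (per-frame marker-to-centre distances summarised by mean, variance, and root mean square) and one of the classifiers (k-nearest neighbours). The sketch below follows that recipe on simulated marker trajectories; the webcam capture and optical-flow tracking steps are omitted, and all sizes are illustrative assumptions.

      # Distance-based features from tracked virtual markers, classified with kNN.
      # Marker trajectories, clip counts and labels are simulated placeholders.
      import numpy as np
      from sklearn.neighbors import KNeighborsClassifier

      def sequence_features(markers):
          """markers: (n_frames, 8, 2) tracked marker coordinates for one clip."""
          centre = markers.mean(axis=1, keepdims=True)          # face centre per frame
          dist = np.linalg.norm(markers - centre, axis=2)       # (n_frames, 8)
          return np.concatenate([dist.mean(axis=0),             # mean
                                 dist.var(axis=0),              # variance
                                 np.sqrt((dist ** 2).mean(axis=0))])  # RMS

      rng = np.random.default_rng(5)
      clips = rng.normal(size=(60, 30, 8, 2))                   # 60 clips, 30 frames each
      labels = rng.integers(0, 6, size=60)                      # six emotion classes

      X = np.array([sequence_features(c) for c in clips])
      knn = KNeighborsClassifier(n_neighbors=3).fit(X[:40], labels[:40])
      print("held-out accuracy:", knn.score(X[40:], labels[40:]))

    A probabilistic neural network, the better-performing classifier in the records above, could replace the kNN step without changing the feature extraction.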

  1. Real-time classification of signals from three-component seismic sensors using neural nets

    NASA Astrophysics Data System (ADS)

    Bowman, B. C.; Dowla, F.

    1992-05-01

    Adaptive seismic data acquisition systems with capabilities of signal discrimination and event classification are important in treaty monitoring, proliferation, and earthquake early detection systems. Potential applications include monitoring underground chemical explosions, as well as other military, cultural, and natural activities where characteristics of signals change rapidly and without warning. In these applications, the ability to detect and interpret events rapidly without falling behind the influx of the data is critical. We developed a system for real-time data acquisition, analysis, learning, and classification of recorded events employing some of the latest technology in computer hardware, software, and artificial neural network methods. The system is able to train dynamically, and updates its knowledge based on new data. The software is modular and hardware-independent; i.e., the front-end instrumentation is transparent to the analysis system. The software is designed to take advantage of the multiprocessing environment of the Unix operating system. The Unix System V shared memory and static RAM protocols for data access and the semaphore mechanism for interprocess communications were used. As the three-component sensor detects a seismic signal, it is displayed graphically on a color monitor using X11/Xlib graphics with interactive screening capabilities. For interesting events, the triaxial signal polarization is computed, a fast Fourier Transform (FFT) algorithm is applied, and the normalized power spectrum is transmitted to a backpropagation neural network for event classification. The system is currently capable of handling three data channels with a sampling rate of 500 Hz, which covers the bandwidth of most seismic events. The system has been tested in a laboratory setting with artificial events generated in the vicinity of a three-component sensor.
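
    The classification front end (FFT, normalised power spectrum, backpropagation network) can be sketched as follows. Event windows and labels are simulated, and scikit-learn's MLPClassifier stands in here for the original backpropagation network; it is an illustration of the pattern, not the system described above.

      # FFT -> normalised power spectrum -> small feed-forward network.
      # Three-component windows at 500 Hz are simulated placeholders.
      import numpy as np
      from sklearn.neural_network import MLPClassifier

      def power_spectrum_features(window):
          """window: (3, n_samples) three-component seismic window."""
          spec = np.abs(np.fft.rfft(window, axis=1)) ** 2
          spec /= spec.sum(axis=1, keepdims=True)       # normalise each component
          return spec.ravel()

      rng = np.random.default_rng(6)
      n_events, fs, seconds = 80, 500, 2
      windows = rng.normal(size=(n_events, 3, fs * seconds))
      labels = rng.integers(0, 3, size=n_events)        # e.g. explosion / quake / noise

      X = np.array([power_spectrum_features(w) for w in windows])
      clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500)
      clf.fit(X[:60], labels[:60])
      print("held-out accuracy:", clf.score(X[60:], labels[60:]))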

  2. Combined flaps based on the superficial temporal vascular system for reconstruction of facial defects.

    PubMed

    Zhou, Renpeng; Wang, Chen; Qian, Yunliang; Wang, Danru

    2015-09-01

    Facial defects are multicomponent deficiencies rather than simple soft-tissue defects. Based on different branches of the superficial temporal vascular system, various tissue components can be obtained to reconstruct facial defects individually. From January 2004 to December 2013, 31 patients underwent reconstruction of facial defects with composite flaps based on the superficial temporal vascular system. Twenty cases of nasal defects were repaired with skin and cartilage components, six cases of facial defects were treated with double island flaps of the skin and fascia, three patients underwent eyebrow and lower eyelid reconstruction with hairy and hairless flaps simultaneously, and two patients underwent soft-tissue repair with auricular combined flaps and cranial bone grafts. All flaps survived completely. Donor-site morbidity was minimal, and donor sites were closed primarily. Donor areas healed with acceptable cosmetic results. The final outcome was satisfactory. Combined flaps based on the superficial temporal vascular system are a useful and versatile option in facial soft-tissue reconstruction. Copyright © 2015 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.

  3. Facial neuropathy with imaging enhancement of the facial nerve: a case report

    PubMed Central

    Mumtaz, Sehreen; Jensen, Matthew B

    2014-01-01

    A young woman developed unilateral facial neuropathy 2 weeks after a motor vehicle collision involving fractures of the skull and mandible. MRI showed contrast enhancement of the facial nerve. We review the literature describing facial neuropathy after trauma and facial nerve enhancement patterns with different causes of facial neuropathy. PMID:25574155

  4. Fifteen years of research on oral-facial-digital syndromes: from 1 to 16 causal genes.

    PubMed

    Bruel, Ange-Line; Franco, Brunella; Duffourd, Yannis; Thevenon, Julien; Jego, Laurence; Lopez, Estelle; Deleuze, Jean-François; Doummar, Diane; Giles, Rachel H; Johnson, Colin A; Huynen, Martijn A; Chevrier, Véronique; Burglen, Lydie; Morleo, Manuela; Desguerres, Isabelle; Pierquin, Geneviève; Doray, Bérénice; Gilbert-Dussardier, Brigitte; Reversade, Bruno; Steichen-Gersdorf, Elisabeth; Baumann, Clarisse; Panigrahi, Inusha; Fargeot-Espaliat, Anne; Dieux, Anne; David, Albert; Goldenberg, Alice; Bongers, Ernie; Gaillard, Dominique; Argente, Jesús; Aral, Bernard; Gigot, Nadège; St-Onge, Judith; Birnbaum, Daniel; Phadke, Shubha R; Cormier-Daire, Valérie; Eguether, Thibaut; Pazour, Gregory J; Herranz-Pérez, Vicente; Goldstein, Jaclyn S; Pasquier, Laurent; Loget, Philippe; Saunier, Sophie; Mégarbané, André; Rosnet, Olivier; Leroux, Michel R; Wallingford, John B; Blacque, Oliver E; Nachury, Maxence V; Attie-Bitach, Tania; Rivière, Jean-Baptiste; Faivre, Laurence; Thauvin-Robinet, Christel

    2017-06-01

    Oral-facial-digital syndromes (OFDS) gather rare genetic disorders characterised by facial, oral and digital abnormalities associated with a wide range of additional features (polycystic kidney disease, cerebral malformations and several others) to delineate a growing list of OFDS subtypes. The most frequent, OFD type I, is caused by a heterozygous mutation in the OFD1 gene encoding a centrosomal protein. The wide clinical heterogeneity of OFDS suggests the involvement of other ciliary genes. For 15 years, we have aimed to identify the molecular bases of OFDS. This effort has been greatly helped by the recent development of whole-exome sequencing (WES). Here, we present all our published and unpublished results for WES in 24 cases with OFDS. We identified causal variants in five new genes (C2CD3, TMEM107, INTU, KIAA0753 and IFT57) and related the clinical spectrum of four genes in other ciliopathies (C5orf42, TMEM138, TMEM231 and WDPCP) to OFDS. Mutations were also detected in two genes previously implicated in OFDS. Functional studies revealed the involvement of centriole elongation, transition zone and intraflagellar transport defects in OFDS, thus characterising three ciliary protein modules: the complex KIAA0753-FOPNL-OFD1, a regulator of centriole elongation; the Meckel-Gruber syndrome module, a major component of the transition zone; and the CPLANE complex necessary for IFT-A assembly. OFDS now appear to be a distinct subgroup of ciliopathies with wide heterogeneity, which makes the initial classification obsolete. A clinical classification restricted to the three frequent/well-delineated subtypes could be proposed, and for patients who do not fit one of these three main subtypes, a further classification could be based on the genotype. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  5. Measuring Facial Movement

    ERIC Educational Resources Information Center

    Ekman, Paul; Friesen, Wallace V.

    1976-01-01

    The Facial Action Code (FAC) was derived from an analysis of the anatomical basis of facial movement. The development of the method is explained, contrasting it to other methods of measuring facial behavior. An example of how facial behavior is measured is provided, and ideas about research applications are discussed. (Author)

  6. Neutral face classification using personalized appearance models for fast and robust emotion detection.

    PubMed

    Chiranjeevi, Pojala; Gopalakrishnan, Viswanath; Moogi, Pratibha

    2015-09-01

    Facial expression recognition is one of the open problems in computer vision. Robust neutral face recognition in real time is a major challenge for various supervised learning-based facial expression recognition methods. This is due to the fact that supervised methods cannot accommodate all appearance variability across the faces with respect to race, pose, lighting, facial biases, and so on, in the limited amount of training data. Moreover, processing each and every frame to classify emotions is not required, as the user stays neutral for the majority of the time in usual applications like video chat or photo album/web browsing. Detecting the neutral state at an early stage, thereby bypassing those frames in emotion classification, would save computational power. In this paper, we propose a light-weight neutral versus emotion classification engine, which acts as a pre-processor to the traditional supervised emotion classification approaches. It dynamically learns neutral appearance at key emotion (KE) points using a statistical texture model, constructed by a set of reference neutral frames for each user. The proposed method is made robust to various types of user head motions by accounting for affine distortions based on a statistical texture model. Robustness to dynamic shift of KE points is achieved by evaluating the similarities on a subset of neighborhood patches around each KE point using the prior information regarding the directionality of specific facial action units acting on the respective KE point. The proposed method, as a result, improves emotion recognition (ER) accuracy and simultaneously reduces computational complexity of the ER system, as validated on multiple databases.
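
    The gating idea above (learn a per-user neutral texture model at key points, skip frames that look neutral) can be illustrated very roughly as follows. The key-point coordinates, patch size, threshold, and frames are all hypothetical placeholders, and the affine-distortion and action-unit-directionality refinements described in the record are not modelled.

      # Simplified neutral-vs-emotion gate: a per-user mean/std texture template
      # at key points, with a deviation threshold. All values are illustrative.
      import numpy as np

      def patches_at_keypoints(frame, keypoints, size=8):
          half = size // 2
          return np.stack([frame[y - half:y + half, x - half:x + half].ravel()
                           for (x, y) in keypoints])

      def build_neutral_model(neutral_frames, keypoints):
          P = np.stack([patches_at_keypoints(f, keypoints) for f in neutral_frames])
          return P.mean(axis=0), P.std(axis=0) + 1e-6    # per-keypoint texture statistics

      def is_neutral(frame, keypoints, model, threshold=3.0):
          mean, std = model
          z = np.abs((patches_at_keypoints(frame, keypoints) - mean) / std)
          return z.mean() < threshold     # small deviation from the template -> neutral

      rng = np.random.default_rng(7)
      keypoints = [(20, 20), (44, 20), (32, 40), (24, 52), (40, 52)]  # eyes, nose, mouth corners
      neutral_frames = [rng.random((64, 64)) for _ in range(10)]      # reference neutral frames
      model = build_neutral_model(neutral_frames, keypoints)
      print(is_neutral(rng.random((64, 64)), keypoints, model))

    Frames flagged as non-neutral would then be passed on to the full supervised emotion classifier.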

  7. Effect of a Facial Muscle Exercise Device on Facial Rejuvenation

    PubMed Central

    Hwang, Ui-jae; Kwon, Oh-yun; Jung, Sung-hoon; Ahn, Sun-hee; Gwak, Gyeong-tae

    2018-01-01

    Background: The efficacy of facial muscle exercises (FMEs) for facial rejuvenation is controversial. In the majority of previous studies, nonquantitative assessment tools were used to assess the benefits of FMEs. Objectives: This study examined the effectiveness of FMEs using a Pao (MTG, Nagoya, Japan) device to quantify facial rejuvenation. Methods: Fifty females were asked to perform FMEs using a Pao device for 30 seconds twice a day for 8 weeks. Facial muscle thickness and cross-sectional area were measured sonographically. Facial surface distance, surface area, and volumes were determined using a laser scanning system before and after FME. Facial muscle thickness, cross-sectional area, midfacial surface distances, jawline surface distance, and lower facial surface area and volume were compared bilaterally before and after FME using a paired Student t test. Results: The cross-sectional areas of the zygomaticus major and digastric muscles increased significantly (right: P < 0.001, left: P = 0.015), while the midfacial surface distances in the middle (right: P = 0.005, left: P = 0.047) and lower (right: P = 0.028, left: P = 0.019) planes as well as the jawline surface distances (right: P = 0.004, left: P = 0.003) decreased significantly after FME using the Pao device. The lower facial surface areas (right: P = 0.005, left: P = 0.006) and volumes (right: P = 0.001, left: P = 0.002) were also significantly reduced after FME using the Pao device. Conclusions: FME using the Pao device can increase facial muscle thickness and cross-sectional area, thus contributing to facial rejuvenation. Level of Evidence: 4. PMID:29365050

  8. Morcellized Omental Transfer for Severe HIV Facial Wasting

    PubMed Central

    Bohorquez, Marlon; Podbielski, Francis J.

    2013-01-01

    Background: A novel surgical technique to reconstruct facial wasting was developed for patients with severe human immunodeficiency virus lipoatrophy and no source of subcutaneous fat for donor material. Fourteen patients underwent endoscopic harvest of omentum, extracorporeal morcellation, and autologous transfer to the face. Methods: Omental fat was harvested using a standard 3-port laparoscopic technique. A mechanical tissue processor created morsels suitable for transfer. Gold-plated, multi-holed catheters delivered living particulate fat to the subcutaneous planes of the buccal, malar, lateral cheek, and temporal regions. Results were evaluated using standardized pre- and postoperative photographs for specific anatomic criteria found along the typical progression of the disease process. Results: Electron microscopy confirmed that morcellized fat retained intact cell walls and was appropriate for autologous transfer. Complications were minor and transient. Patients were discharged home within 24 hours. No patient required open laparotomy. Survival of the adipose grafts was deemed good to excellent in 13 of the 14 cases. Conclusions: Mechanically morcellized omental fat transfer provides a safe option to restore facial volume in those unusual patients with severe wasting and no available subcutaneous tissue for transfer. Consistent anatomic progression of facial wasting permits preoperative classification, counseling of patients, and postoperative evaluation of surgical improvement. PMID:25289268

  9. Perceptual and affective mechanisms in facial expression recognition: An integrative review.

    PubMed

    Calvo, Manuel G; Nummenmaa, Lauri

    2016-09-01

    Facial expressions of emotion involve a physical component of morphological changes in a face and an affective component conveying information about the expresser's internal feelings. It remains unresolved how much recognition and discrimination of expressions rely on the perception of morphological patterns or the processing of affective content. This review of research on the role of visual and emotional factors in expression recognition reached three major conclusions. First, behavioral, neurophysiological, and computational measures indicate that basic expressions are reliably recognized and discriminated from one another, albeit the effect may be inflated by the use of prototypical expression stimuli and forced-choice responses. Second, affective content along the dimensions of valence and arousal is extracted early from facial expressions, although this coarse affective representation contributes minimally to categorical recognition of specific expressions. Third, the physical configuration and visual saliency of facial features contribute significantly to expression recognition, with "emotionless" computational models being able to reproduce some of the basic phenomena demonstrated in human observers. We conclude that facial expression recognition, as it has been investigated in conventional laboratory tasks, depends to a greater extent on perceptual than affective information and mechanisms.

  10. Home-use TriPollar RF device for facial skin tightening: Clinical study results.

    PubMed

    Beilin, Ghislaine

    2011-04-01

    Professional, non-invasive, anti-aging treatments based on radio-frequency (RF) technologies are popular for skin tightening and improvement of wrinkles. A new home-use RF device for facial treatments has recently been developed based on TriPollar™ technology. To evaluate the STOP™ home-use device for facial skin tightening using objective and subjective methods. Twenty-three female subjects used the STOP at home for a period of 6 weeks followed by a maintenance period of 6 weeks. Facial skin characteristics were objectively evaluated at baseline and at the end of the treatment and maintenance periods using a three-dimensional imaging system. Additionally, facial wrinkles were classified and subjects scored their satisfaction and sensations. Following STOP treatment, a statistically significant reduction of perioral and periorbital wrinkles was achieved in 90% and 95% of the patients, respectively, with an average periorbital wrinkle reduction of 41%. This objective result correlated well with the periorbital wrinkle classification result of 40%. All patients were satisfied to extremely satisfied with the treatments and all reported moderate to excellent visible results. The clinical study demonstrated the safety and efficacy of the STOP home-use device for facial skin tightening. Treatment can maintain a tighter and suppler skin with improvement of fine lines and wrinkles.

  11. Facial nerve palsy: analysis of cases reported in children in a suburban hospital in Nigeria.

    PubMed

    Folayan, M O; Arobieke, R I; Eziyi, E; Oyetola, E O; Elusiyan, J

    2014-01-01

    The study describes the epidemiology, treatment, and treatment outcomes of the 10 cases of facial nerve palsy seen in children managed at the Obafemi Awolowo University Teaching Hospitals Complex, Ile-Ife over a 10-year period. It also compares findings with reports from developed countries. This was a retrospective cohort review of pediatric cases of facial nerve palsy encountered in all the clinics run by specialists in the above-named hospital. A diagnosis of facial palsy was based on International Classification of Diseases, Ninth Revision, Clinical Modification codes. Information retrieved from the case notes included sex, age, number of days with lesion prior to presentation in the clinic, diagnosis, treatment, treatment outcome, and referral clinic. Only 10 cases of facial nerve palsy were diagnosed in the institution during the study period. Prevalence of facial nerve palsy in this hospital was 0.01%. The lesion more commonly affected males and the right side of the face. All cases were associated with infections, mainly mumps (70% of cases). Case management included the use of steroids and eye pads for cases that presented within 7 days, and steroids, eye pads, and physical therapy for cases that presented later. All cases of facial nerve palsy associated with mumps and malaria infection fully recovered. The two cases of facial nerve palsy associated with otitis media only partially recovered. Facial nerve palsy in pediatric patients is more commonly associated with mumps in the study environment. Successes were recorded with steroid therapy.

  12. Facial Fractures.

    PubMed

    Ghosh, Rajarshi; Gopalkrishnan, Kulandaswamy

    2018-06-01

    The aim of this study is to retrospectively analyze the incidence of facial fractures along with age, gender predilection, etiology, commonest site, associated dental injuries, and any complications of patients operated on in the Craniofacial Unit of SDM College of Dental Sciences and Hospital. This retrospective study was conducted at the Department of OMFS, SDM College of Dental Sciences, Dharwad, from January 2003 to December 2013. Data were recorded for the cause of injury, age and gender distribution, frequency and type of injury, localization and frequency of soft tissue injuries, dentoalveolar trauma, facial bone fractures, complications, concomitant injuries, and different treatment protocols. All data were analyzed using the chi-squared test. A total of 1146 patients reported at our unit with facial fractures during these 10 years. Males accounted for a higher frequency of facial fractures (88.8%). The mandible was the commonest bone to be fractured among all the facial bones (71.2%). Maxillary central incisors were the most common teeth to be injured (33.8%) and avulsion was the most common type of injury (44.6%). The commonest postoperative complication was plate infection (11%) leading to plate removal. Other injuries associated with facial fractures were rib fractures, head injuries, and upper and lower limb fractures; among these, rib fractures were seen most frequently (21.6%). This study was performed to compare the different etiologic factors leading to diverse facial fracture patterns. Statistical analysis of these records clarified the relationship of facial fractures with gender, age, associated comorbidities, and other factors.

  13. Evidence of emotion-antecedent appraisal checks in electroencephalography and facial electromyography

    PubMed Central

    Scherer, Klaus R.; Schuller, Björn W.

    2018-01-01

    In the present study, we applied Machine Learning (ML) methods to identify psychobiological markers of cognitive processes involved in the process of emotion elicitation as postulated by the Component Process Model (CPM). In particular, we focused on the automatic detection of five appraisal checks—novelty, intrinsic pleasantness, goal conduciveness, control, and power—in electroencephalography (EEG) and facial electromyography (EMG) signals. We also evaluated the effects on classification accuracy of averaging the raw physiological signals over different numbers of trials, and whether the use of minimal sets of EEG channels localized over specific scalp regions of interest are sufficient to discriminate between appraisal checks. We demonstrated the effectiveness of our approach on two data sets obtained from previous studies. Our results show that novelty and power appraisal checks can be consistently detected in EEG signals above chance level (binary tasks). For novelty, the best classification performance in terms of accuracy was achieved using features extracted from the whole scalp, and by averaging across 20 individual trials in the same experimental condition (UAR = 83.5 ± 4.2; N = 25). For power, the best performance was obtained by using the signals from four pre-selected EEG channels averaged across all trials available for each participant (UAR = 70.6 ± 5.3; N = 24). Together, our results indicate that accurate classification can be achieved with a relatively small number of trials and channels, but that averaging across a larger number of individual trials is beneficial for the classification for both appraisal checks. We were not able to detect any evidence of the appraisal checks under study in the EMG data. The proposed methodology is a promising tool for the study of the psychophysiological mechanisms underlying emotional episodes, and their application to the development of computerized tools (e.g., Brain-Computer Interface) for the study of

  14. Evidence of emotion-antecedent appraisal checks in electroencephalography and facial electromyography.

    PubMed

    Coutinho, Eduardo; Gentsch, Kornelia; van Peer, Jacobien; Scherer, Klaus R; Schuller, Björn W

    2018-01-01

    In the present study, we applied Machine Learning (ML) methods to identify psychobiological markers of cognitive processes involved in the process of emotion elicitation as postulated by the Component Process Model (CPM). In particular, we focused on the automatic detection of five appraisal checks-novelty, intrinsic pleasantness, goal conduciveness, control, and power-in electroencephalography (EEG) and facial electromyography (EMG) signals. We also evaluated the effects on classification accuracy of averaging the raw physiological signals over different numbers of trials, and whether the use of minimal sets of EEG channels localized over specific scalp regions of interest are sufficient to discriminate between appraisal checks. We demonstrated the effectiveness of our approach on two data sets obtained from previous studies. Our results show that novelty and power appraisal checks can be consistently detected in EEG signals above chance level (binary tasks). For novelty, the best classification performance in terms of accuracy was achieved using features extracted from the whole scalp, and by averaging across 20 individual trials in the same experimental condition (UAR = 83.5 ± 4.2; N = 25). For power, the best performance was obtained by using the signals from four pre-selected EEG channels averaged across all trials available for each participant (UAR = 70.6 ± 5.3; N = 24). Together, our results indicate that accurate classification can be achieved with a relatively small number of trials and channels, but that averaging across a larger number of individual trials is beneficial for the classification for both appraisal checks. We were not able to detect any evidence of the appraisal checks under study in the EMG data. The proposed methodology is a promising tool for the study of the psychophysiological mechanisms underlying emotional episodes, and their application to the development of computerized tools (e.g., Brain-Computer Interface) for the study of
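
    The analysis pattern described in the two records above (average raw trials within a condition, extract features, run a binary classification, report the unweighted average recall) is sketched below on simulated EEG. The band-power features and the linear SVM are placeholders, not the study's feature set; UAR is computed as balanced accuracy, which is equivalent for binary tasks.

      # Trial averaging -> band-power features -> binary classification -> UAR.
      # EEG trials, channel counts and labels are simulated placeholders.
      import numpy as np
      from sklearn.metrics import balanced_accuracy_score   # UAR for binary tasks
      from sklearn.model_selection import cross_val_predict
      from sklearn.svm import SVC

      def band_power(trial, fs=256, bands=((4, 8), (8, 13), (13, 30))):
          """trial: (n_channels, n_samples); log band power per channel and band."""
          freqs = np.fft.rfftfreq(trial.shape[1], 1 / fs)
          psd = np.abs(np.fft.rfft(trial, axis=1)) ** 2
          feats = [np.log(psd[:, (freqs >= lo) & (freqs < hi)].mean(axis=1))
                   for lo, hi in bands]
          return np.concatenate(feats)

      rng = np.random.default_rng(8)
      n_obs, n_channels, n_samples, n_avg = 48, 32, 512, 20
      # Each observation is the average of n_avg single trials from one condition.
      X = np.array([band_power(rng.normal(size=(n_avg, n_channels, n_samples)).mean(axis=0))
                    for _ in range(n_obs)])
      y = rng.integers(0, 2, size=n_obs)        # binary appraisal check, e.g. novel vs familiar

      pred = cross_val_predict(SVC(kernel='linear'), X, y, cv=5)
      print(f"UAR: {balanced_accuracy_score(y, pred):.2f}")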

  15. A system for automatic artifact removal in ictal scalp EEG based on independent component analysis and Bayesian classification.

    PubMed

    LeVan, P; Urrestarazu, E; Gotman, J

    2006-04-01

    To devise an automated system to remove artifacts from ictal scalp EEG, using independent component analysis (ICA). A Bayesian classifier was used to determine the probability that 2s epochs of seizure segments decomposed by ICA represented EEG activity, as opposed to artifact. The classifier was trained using numerous statistical, spectral, and spatial features. The system's performance was then assessed using separate validation data. The classifier identified epochs representing EEG activity in the validation dataset with a sensitivity of 82.4% and a specificity of 83.3%. An ICA component was considered to represent EEG activity if the sum of the probabilities that its epochs represented EEG exceeded a threshold predetermined using the training data. Otherwise, the component represented artifact. Using this threshold on the validation set, the identification of EEG components was performed with a sensitivity of 87.6% and a specificity of 70.2%. Most misclassified components were a mixture of EEG and artifactual activity. The automated system successfully rejected a good proportion of artifactual components extracted by ICA, while preserving almost all EEG components. The misclassification rate was comparable to the variability observed in human classification. Current ICA methods of artifact removal require a tedious visual classification of the components. The proposed system automates this process and removes simultaneously multiple types of artifacts.
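
    A rough sketch of the decompose-score-reject-reconstruct loop described above is given below. It uses FastICA and a Gaussian naive Bayes classifier as generic stand-ins for the study's ICA implementation and Bayesian classifier, and the epoch features, training data, and threshold are simulated placeholders.

      # ICA decomposition, per-epoch component scoring, artifact rejection,
      # and reconstruction. Signals, features and classifier are illustrative.
      import numpy as np
      from sklearn.decomposition import FastICA
      from sklearn.naive_bayes import GaussianNB

      def epoch_features(epoch):
          """Very simple stand-in features for a 2-s component epoch."""
          return np.array([epoch.var(), np.abs(np.diff(epoch)).mean(),
                           np.abs(np.fft.rfft(epoch))[1:20].mean()])

      rng = np.random.default_rng(9)
      fs, n_channels, n_seconds = 200, 19, 20
      eeg = rng.normal(size=(n_channels, fs * n_seconds))        # channels x samples

      # Pretend training data for the epoch classifier (1 = EEG, 0 = artifact).
      train_X = rng.normal(size=(200, 3))
      train_y = rng.integers(0, 2, size=200)
      clf = GaussianNB().fit(train_X, train_y)

      ica = FastICA(n_components=n_channels, random_state=0)
      sources = ica.fit_transform(eeg.T).T                       # components x samples

      keep = []
      for comp in sources:
          epochs = comp.reshape(-1, 2 * fs)                      # non-overlapping 2-s epochs
          p_eeg = clf.predict_proba(np.array([epoch_features(e) for e in epochs]))[:, 1]
          keep.append(p_eeg.sum() > 0.5 * len(epochs))           # threshold on summed probability

      sources_clean = sources * np.array(keep)[:, None]          # zero out artifact components
      eeg_clean = ica.inverse_transform(sources_clean.T).T
      print("components kept:", int(np.sum(keep)), "/", n_channels)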

  16. The facial nerve: anatomy and associated disorders for oral health professionals.

    PubMed

    Takezawa, Kojiro; Townsend, Grant; Ghabriel, Mounir

    2018-04-01

    The facial nerve, the seventh cranial nerve, is of great clinical significance to oral health professionals. Most published literature either addresses the central connections of the nerve or its peripheral distribution but few integrate both of these components and also highlight the main disorders affecting the nerve that have clinical implications in dentistry. The aim of the current study is to provide a comprehensive description of the facial nerve. Multiple aspects of the facial nerve are discussed and integrated, including its neuroanatomy, functional anatomy, gross anatomy, clinical problems that may involve the nerve, and the use of detailed anatomical knowledge in the diagnosis of the site of facial nerve lesion in clinical neurology. Examples are provided of disorders that can affect the facial nerve during its intra-cranial, intra-temporal and extra-cranial pathways, and key aspects of clinical management are discussed. The current study is complemented by original detailed dissections and sketches that highlight key anatomical features and emphasise the extent and nature of anatomical variations displayed by the facial nerve.

  17. Recognition of children on age-different images: Facial morphology and age-stable features.

    PubMed

    Caplova, Zuzana; Compassi, Valentina; Giancola, Silvio; Gibelli, Daniele M; Obertová, Zuzana; Poppa, Pasquale; Sala, Remo; Sforza, Chiarella; Cattaneo, Cristina

    2017-07-01

    The situation of missing children is one of the most emotional social issues worldwide. The search for and identification of missing children is often hampered, among other factors, by the fact that the facial morphology of long-term missing children changes as they grow. Nowadays, the wide coverage by surveillance systems potentially provides image material for comparisons with images of missing children that may facilitate identification. The aim of the study was to identify whether facial features are stable in time and can be utilized for facial recognition by comparing facial images of children at different ages, as well as to test the possible use of moles in recognition. The study was divided into two phases: (1) morphological classification of facial features using an Anthropological Atlas; and (2) an algorithm developed in MATLAB® R2014b for assessing the use of moles as age-stable features. The assessment of facial features by Anthropological Atlases showed high mismatch percentages among observers. On average, the mismatch percentages were lower for features describing shape than for those describing size. The nose tip cleft and the chin dimple showed the best agreement between observers regarding both categorization and stability over time. Using the position of moles as a reference point for recognition of the same person on age-different images seems to be a useful method in terms of objectivity, and it can be concluded that moles represent age-stable facial features that may be considered for preliminary recognition. Copyright © 2017 The Chartered Society of Forensic Sciences. Published by Elsevier B.V. All rights reserved.

  18. Facial expression identification using 3D geometric features from Microsoft Kinect device

    NASA Astrophysics Data System (ADS)

    Han, Dongxu; Al Jawad, Naseer; Du, Hongbo

    2016-05-01

    Facial expression identification is an important part of face recognition and closely related to emotion detection from face images. Various solutions have been proposed in the past using different types of cameras and features. The Microsoft Kinect device has been widely used for multimedia interactions. More recently, the device has been increasingly deployed for supporting scientific investigations. This paper explores the effectiveness of using the device in identifying emotional facial expressions such as surprise, smile, and sadness, and evaluates the usefulness of 3D data points on a face mesh structure obtained from the Kinect device. We present a distance-based geometric feature component that is derived from the distances between points on the face mesh and selected reference points in a single frame. The feature components extracted across a sequence of frames starting and ending with a neutral expression represent a whole expression. The feature vector eliminates the need for complex face orientation correction, simplifying the feature extraction process and making it more efficient. We applied a kNN classifier that exploits a feature-component-based similarity measure following the principle of dynamic time warping to determine the closest neighbors. Preliminary tests on a small-scale database of different facial expressions show the promise of the newly developed features and the usefulness of the Kinect device in facial expression identification.
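
    A minimal sketch of the classification step, assuming per-frame feature vectors have already been extracted: a 1-nearest-neighbour rule that compares expression sequences of unequal length with dynamic time warping. The feature dimensions and training sequences below are random stand-ins.

```python
import numpy as np

def dtw_distance(seq_a, seq_b):
    """Dynamic time warping cost between two sequences of feature vectors."""
    n, m = len(seq_a), len(seq_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])  # frame-to-frame distance
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

def knn_predict(query, train_seqs, train_labels, k=1):
    """Label a query sequence by its k nearest training sequences under DTW."""
    dists = [dtw_distance(query, s) for s in train_seqs]
    nearest = np.argsort(dists)[:k]
    labels, counts = np.unique([train_labels[i] for i in nearest], return_counts=True)
    return labels[np.argmax(counts)]

# Toy usage: three training sequences of per-frame geometric features (random stand-ins).
rng = np.random.default_rng(2)
train = [rng.normal(size=(rng.integers(20, 40), 12)) for _ in range(3)]
labels = ["smile", "surprise", "sad"]
print(knn_predict(rng.normal(size=(25, 12)), train, labels, k=1))
```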

  19. Automatic Contour Extraction of Facial Organs for Frontal Facial Images with Various Facial Expressions

    NASA Astrophysics Data System (ADS)

    Kobayashi, Hiroshi; Suzuki, Seiji; Takahashi, Hisanori; Tange, Akira; Kikuchi, Kohki

    This study deals with a method for automatic contour extraction of facial features such as the eyebrows, eyes and mouth from frontal face image sequences with various facial expressions. Because Snakes, one of the best-known contour extraction methods, has several disadvantages, we propose a new method to overcome these issues. We define an elastic contour model in order to hold the contour shape and then determine the elastic energy from the amount of deformation of the elastic contour model. We also utilize the image energy obtained from brightness differences at the control points on the elastic contour model. Applying the dynamic programming method, we determine the contour position where the total of the elastic energy and the image energy becomes minimal. Employing frontal facial image sequences sampled every 1/30 s, changing from neutral to one of six typical facial expressions and obtained from 20 subjects, we evaluated our method and found that it enables highly accurate automatic contour extraction of facial features.
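
    The dynamic-programming step can be illustrated with a simplified sketch: choose one candidate offset per control point so that an elastic term (squared difference between neighbouring offsets) plus an image term is minimal. The image costs below are random placeholders rather than brightness differences from real images.

```python
import numpy as np

def dp_contour(image_cost, elastic_weight=1.0):
    """
    image_cost: (n_points, n_candidates) cost of placing each control point at each
    candidate offset from the elastic contour model. Returns the offset index chosen
    for every control point so that total (image + elastic) energy is minimal.
    """
    n_points, n_cand = image_cost.shape
    offsets = np.arange(n_cand)
    total = np.zeros((n_points, n_cand))
    back = np.zeros((n_points, n_cand), dtype=int)
    total[0] = image_cost[0]
    for i in range(1, n_points):
        # elastic energy: squared difference between consecutive offsets
        trans = elastic_weight * (offsets[None, :] - offsets[:, None]) ** 2
        scores = total[i - 1][:, None] + trans          # (previous candidate, current candidate)
        back[i] = np.argmin(scores, axis=0)
        total[i] = image_cost[i] + np.min(scores, axis=0)
    # backtrack the minimal-energy path
    path = [int(np.argmin(total[-1]))]
    for i in range(n_points - 1, 0, -1):
        path.append(int(back[i, path[-1]]))
    return path[::-1]

# Toy usage: 30 control points, 7 candidate offsets each, random image costs.
rng = np.random.default_rng(3)
print(dp_contour(rng.random((30, 7))))
```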

  20. Eigen-disfigurement model for simulating plausible facial disfigurement after reconstructive surgery.

    PubMed

    Lee, Juhun; Fingeret, Michelle C; Bovik, Alan C; Reece, Gregory P; Skoracki, Roman J; Hanasono, Matthew M; Markey, Mia K

    2015-03-27

    Patients with facial cancers can experience disfigurement as they may undergo considerable appearance changes from their illness and its treatment. Individuals with difficulties adjusting to facial cancer are concerned about how others perceive and evaluate their appearance. Therefore, it is important to understand how humans perceive disfigured faces. We describe a new strategy that allows simulation of surgically plausible facial disfigurement on a novel face for elucidating human perception of facial disfigurement. Longitudinal 3D facial images of patients (N = 17) with facial disfigurement due to cancer treatment were replicated using a facial mannequin model, by applying Thin-Plate Spline (TPS) warping and linear interpolation on the facial mannequin model in polar coordinates. Principal Component Analysis (PCA) was used to capture longitudinal structural and textural variations found within each patient with facial disfigurement arising from the treatment. We treated such variations as disfigurement. Each disfigurement was smoothly stitched on a healthy face by seeking a Poisson solution to guided interpolation using the gradient of the learned disfigurement as the guidance field vector. The modeling technique was quantitatively evaluated. In addition, panel ratings of experienced medical professionals on the plausibility of the simulation were used to evaluate the proposed disfigurement model. The algorithm reproduced the given face effectively using a facial mannequin model with less than 4.4 mm maximum error for the validation fiducial points that were not used for the processing. Panel ratings of experienced medical professionals on the plausibility of the simulation showed that the disfigurement model (especially for peripheral disfigurement) yielded predictions comparable to the real disfigurements. The modeling technique of this study is able to capture facial disfigurements and its simulation represents plausible outcomes of reconstructive surgery

  1. Three-Dimensional Accuracy of Facial Scan for Facial Deformities in Clinics: A New Evaluation Method for Facial Scanner Accuracy.

    PubMed

    Zhao, Yi-Jiao; Xiong, Yu-Xue; Wang, Yong

    2017-01-01

    In this study, the practical accuracy (PA) of optical facial scanners for facial deformity patients in the oral clinic was evaluated. Ten patients with a variety of facial deformities from the oral clinic were included in the study. For each patient, a three-dimensional (3D) face model was acquired via a high-accuracy industrial "line-laser" scanner (Faro) as the reference model, and two test models were obtained via a "stereophotography" scanner (3dMD) and a "structured light" facial scanner (FaceScan), respectively. Registration based on the iterative closest point (ICP) algorithm was executed to align the test models with the reference models, and "3D error", a new measurement indicator calculated by reverse engineering software (Geomagic Studio), was used to evaluate the 3D global and partial (upper, middle, and lower parts of the face) PA of each facial scanner. The respective 3D accuracy of the stereophotography and structured light facial scanners obtained for facial deformities was 0.58±0.11 mm and 0.57±0.07 mm. The 3D accuracy of different facial partitions was inconsistent; the middle face had the best performance. Although the PA of the two facial scanners was lower than their nominal accuracy (NA), both met the requirements for oral clinical use.
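
    A hedged illustration of the "3D error" idea (not the Geomagic workflow used in the study): after a test scan has been registered to the reference scan, compute the mean nearest-neighbour distance from test points to the reference point cloud. All data below are synthetic.

```python
import numpy as np
from scipy.spatial import cKDTree

def mean_3d_error(test_points, reference_points):
    """Mean unsigned distance from each test point to its closest reference point."""
    tree = cKDTree(reference_points)
    distances, _ = tree.query(test_points)
    return distances.mean()

# Toy usage: a reference cloud and a slightly noisy copy standing in for a registered test scan.
rng = np.random.default_rng(4)
reference = rng.random((5000, 3)) * 100.0          # mm
test = reference + rng.normal(0.0, 0.5, reference.shape)
print(f"mean 3D error: {mean_3d_error(test, reference):.2f} mm")
```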

  2. Body size and allometric variation in facial shape in children.

    PubMed

    Larson, Jacinda R; Manyama, Mange F; Cole, Joanne B; Gonzalez, Paula N; Percival, Christopher J; Liberton, Denise K; Ferrara, Tracey M; Riccardi, Sheri L; Kimwaga, Emmanuel A; Mathayo, Joshua; Spitzmacher, Jared A; Rolian, Campbell; Jamniczky, Heather A; Weinberg, Seth M; Roseman, Charles C; Klein, Ophir; Lukowiak, Ken; Spritz, Richard A; Hallgrimsson, Benedikt

    2018-02-01

    Morphological integration, or the tendency for covariation, is commonly seen in complex traits such as the human face. The effects of growth on shape, or allometry, represent a ubiquitous but poorly understood axis of integration. We address the question of to what extent age and measures of size converge on a single pattern of allometry for human facial shape. Our study is based on two large cross-sectional cohorts of children, one from Tanzania and the other from the United States (N = 7,173). We employ 3D facial imaging and geometric morphometrics to relate facial shape to age and anthropometric measures. The two populations differ significantly in facial shape, but the magnitude of this difference is small relative to the variation within each group. Allometric variation for facial shape is similar in both populations, representing a small but significant proportion of total variation in facial shape. Different measures of size are associated with overlapping but statistically distinct aspects of shape variation. Only half of the size-related variation in facial shape can be explained by the first principal component of four size measures and age while the remainder associates distinctly with individual measures. Allometric variation in the human face is complex and should not be regarded as a singular effect. This finding has important implications for how size is treated in studies of human facial shape and for the developmental basis for allometric variation more generally. © 2017 Wiley Periodicals, Inc.

  3. Computerized measurement of facial expression of emotions in schizophrenia.

    PubMed

    Alvino, Christopher; Kohler, Christian; Barrett, Frederick; Gur, Raquel E; Gur, Ruben C; Verma, Ragini

    2007-07-30

    Deficits in the ability to express emotions characterize several neuropsychiatric disorders and are a hallmark of schizophrenia, and there is need for a method of quantifying expression, which is currently done by clinical ratings. This paper presents the development and validation of a computational framework for quantifying emotional expression differences between patients with schizophrenia and healthy controls. Each face is modeled as a combination of elastic regions, and expression changes are modeled as a deformation between a neutral face and an expressive face. Functions of these deformations, known as the regional volumetric difference (RVD) functions, form distinctive quantitative profiles of expressions. Employing pattern classification techniques, we have designed expression classifiers for the four universal emotions of happiness, sadness, anger and fear by training on RVD functions of expression changes. The classifiers were cross-validated and then applied to facial expression images of patients with schizophrenia and healthy controls. The classification score for each image reflects the extent to which the expressed emotion matches the intended emotion. Group-wise statistical analysis revealed this score to be significantly different between healthy controls and patients, especially in the case of anger. This score correlated with clinical severity of flat affect. These results encourage the use of such deformation based expression quantification measures for research in clinical applications that require the automated measurement of facial affect.

  4. Non-invasive health status detection system using Gabor filters based on facial block texture features.

    PubMed

    Shu, Ting; Zhang, Bob

    2015-04-01

    Blood tests allow doctors to check for certain diseases and conditions. However, using a syringe to extract the blood can be deemed invasive, slightly painful, and its analysis time consuming. In this paper, we propose a new non-invasive system to detect the health status (Healthy or Diseased) of an individual based on facial block texture features extracted using the Gabor filter. Our system first uses a non-invasive capture device to collect facial images. Next, four facial blocks are located on these images to represent them. Afterwards, each facial block is convolved with a Gabor filter bank to calculate its texture value. Classification is finally performed using K-Nearest Neighbor and Support Vector Machines via a Library for Support Vector Machines (with four kernel functions). The system was tested on a dataset consisting of 100 Healthy and 100 Diseased (with 13 forms of illnesses) samples. Experimental results show that the proposed system can detect the health status with an accuracy of 93 %, a sensitivity of 94 %, a specificity of 92 %, using a combination of the Gabor filters and facial blocks.
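
    As a rough sketch of this type of pipeline, under the assumption that the facial blocks have already been cropped to grayscale arrays: convolve each block with a small Gabor filter bank, use the mean response magnitudes as texture features, and feed them to a K-Nearest Neighbor classifier. The filter parameters, block sizes, and labels below are placeholders, not the values used in the paper.

```python
import numpy as np
from skimage.filters import gabor
from sklearn.neighbors import KNeighborsClassifier

def gabor_texture_features(block, frequencies=(0.1, 0.2, 0.3), n_orientations=4):
    """Mean Gabor response magnitude for each (frequency, orientation) pair."""
    feats = []
    for f in frequencies:
        for k in range(n_orientations):
            real, imag = gabor(block, frequency=f, theta=k * np.pi / n_orientations)
            feats.append(np.sqrt(real ** 2 + imag ** 2).mean())
    return np.array(feats)

# Toy usage: random "facial blocks" standing in for the cropped regions of each subject.
rng = np.random.default_rng(5)
blocks = [rng.random((32, 32)) for _ in range(20)]
X = np.stack([gabor_texture_features(b) for b in blocks])
y = rng.integers(0, 2, size=len(X))                  # 0 = Healthy, 1 = Diseased (placeholder labels)
print(KNeighborsClassifier(n_neighbors=3).fit(X, y).score(X, y))   # training accuracy on toy data
```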

  5. Genetic Factors That Increase Male Facial Masculinity Decrease Facial Attractiveness of Female Relatives

    PubMed Central

    Lee, Anthony J.; Mitchem, Dorian G.; Wright, Margaret J.; Martin, Nicholas G.; Keller, Matthew C.; Zietsch, Brendan P.

    2014-01-01

    For women, choosing a facially masculine man as a mate is thought to confer genetic benefits to offspring. Crucial assumptions of this hypothesis have not been adequately tested. It has been assumed that variation in facial masculinity is due to genetic variation and that genetic factors that increase male facial masculinity do not increase facial masculinity in female relatives. We objectively quantified the facial masculinity in photos of identical (n = 411) and nonidentical (n = 782) twins and their siblings (n = 106). Using biometrical modeling, we found that much of the variation in male and female facial masculinity is genetic. However, we also found that masculinity of male faces is unrelated to their attractiveness and that facially masculine men tend to have facially masculine, less-attractive sisters. These findings challenge the idea that facially masculine men provide net genetic benefits to offspring and call into question this popular theoretical framework. PMID:24379153

  6. Genetic factors that increase male facial masculinity decrease facial attractiveness of female relatives.

    PubMed

    Lee, Anthony J; Mitchem, Dorian G; Wright, Margaret J; Martin, Nicholas G; Keller, Matthew C; Zietsch, Brendan P

    2014-02-01

    For women, choosing a facially masculine man as a mate is thought to confer genetic benefits to offspring. Crucial assumptions of this hypothesis have not been adequately tested. It has been assumed that variation in facial masculinity is due to genetic variation and that genetic factors that increase male facial masculinity do not increase facial masculinity in female relatives. We objectively quantified the facial masculinity in photos of identical (n = 411) and nonidentical (n = 782) twins and their siblings (n = 106). Using biometrical modeling, we found that much of the variation in male and female facial masculinity is genetic. However, we also found that masculinity of male faces is unrelated to their attractiveness and that facially masculine men tend to have facially masculine, less-attractive sisters. These findings challenge the idea that facially masculine men provide net genetic benefits to offspring and call into question this popular theoretical framework.

  7. [Facial nerve neurinomas].

    PubMed

    Sokołowski, Jacek; Bartoszewicz, Robert; Morawski, Krzysztof; Jamróz, Barbara; Niemczyk, Kazimierz

    2013-01-01

    Evaluation of the diagnosis, surgical technique, and treatment results of facial nerve neurinomas, and their comparison with the literature, was the main purpose of this study. Seven cases of patients (2005-2011) with facial nerve schwannomas were included in a retrospective analysis in the Department of Otolaryngology, Medical University of Warsaw. All patients were assessed with history of the disease, physical examination, hearing tests, computed tomography and/or magnetic resonance imaging, and electronystagmography. Cases were followed for potential complications and recurrences. Neurinoma of the facial nerve occurred in the vertical segment (n=2), the facial nerve geniculum (n=1) and the internal auditory canal (n=4). The symptoms observed in patients were analyzed: facial nerve paresis (n=3), hearing loss (n=2), dizziness (n=1). Magnetic resonance imaging and computed tomography allowed confirmation of the presence of the tumor and assessment of its staging. Schwannoma of the facial nerve was surgically removed using the middle fossa approach (n=5) and by antromastoidectomy (n=2). Anatomical continuity of the facial nerve was achieved in 3 cases. In the twelve months after surgery, facial nerve paresis was rated at level II-III° HB. There was no recurrence of the tumor on radiological follow-up. Facial nerve neurinoma is a rare tumor. Current surgical techniques allow, in most cases, radical removal of the lesion and reconstruction of nerve VII function. The rate of recurrence is low. A tumor of the facial nerve should be considered in the differential diagnosis of nerve VII paresis. Copyright © 2013 Polish Otorhinolaryngology - Head and Neck Surgery Society. Published by Elsevier Urban & Partner Sp. z.o.o. All rights reserved.

  8. Contralateral botulinum toxin injection to improve facial asymmetry after acute facial paralysis.

    PubMed

    Kim, Jin

    2013-02-01

    The application of botulinum toxin to the healthy side of the face in patients with long-standing facial paralysis has been shown to be a minimally invasive technique that improves facial symmetry at rest and during facial motion. Our experience using botulinum toxin therapy for facial sequelae prompted the idea that botulinum toxin might also be useful in acute cases of facial paralysis, leading to improved facial symmetry. In cases in which medical or surgical treatment options are limited because of existing medical problems or advanced age, most patients with acute facial palsy are advised to await spontaneous recovery or are informed that no effective intervention exists. The purpose of this study was to evaluate the effect of botulinum toxin treatment for facial asymmetry in 18 patients after acute facial palsy who could not be optimally treated by medical or surgical management because of severe medical or other problems. From 2009 to 2011, nine patients with Bell's palsy, five with herpes zoster oticus and four with traumatic facial palsy (10 men and 8 women; age range, 22-82 yr; mean, 50.8 yr) participated in this study. Botulinum toxin A (Botox; Allergan Incorporated, Irvine, CA, USA) was injected using a tuberculin syringe with a 27-gauge needle. The amount injected per site varied from 2.5 to 3 U, and the total dose used per patient was 32 to 68 U (mean, 47.5 +/- 8.4 U). After administration of a single dose of botulinum toxin A on the nonparalyzed side of 18 patients with acute facial paralysis, marked relief of facial asymmetry was observed in 8 patients within 1 month of injection. Decreased facial asymmetry and strengthened facial function on the paralyzed side led to an increased HB and SB grade within 6 months after injection. Use of botulinum toxin in acute facial palsy cases is of great value. Such therapy decreases the relative hyperkinesis contralateral to the paralysis, leading to more symmetric function. Especially in patients with medical

  9. Facial averageness and genetic quality: Testing heritability, genetic correlation with attractiveness, and the paternal age effect.

    PubMed

    Lee, Anthony J; Mitchem, Dorian G; Wright, Margaret J; Martin, Nicholas G; Keller, Matthew C; Zietsch, Brendan P

    2016-01-01

    Popular theory suggests that facial averageness is preferred in a partner for genetic benefits to offspring. However, whether facial averageness is associated with genetic quality is yet to be established. Here, we computed an objective measure of facial averageness for a large sample ( N = 1,823) of identical and nonidentical twins and their siblings to test two predictions from the theory that facial averageness reflects genetic quality. First, we use biometrical modelling to estimate the heritability of facial averageness, which is necessary if it reflects genetic quality. We also test for a genetic association between facial averageness and facial attractiveness. Second, we assess whether paternal age at conception (a proxy of mutation load) is associated with facial averageness and facial attractiveness. Our findings are mixed with respect to our hypotheses. While we found that facial averageness does have a genetic component, and a significant phenotypic correlation exists between facial averageness and attractiveness, we did not find a genetic correlation between facial averageness and attractiveness (therefore, we cannot say that the genes that affect facial averageness also affect facial attractiveness) and paternal age at conception was not negatively associated with facial averageness. These findings support some of the previously untested assumptions of the 'genetic benefits' account of facial averageness, but cast doubt on others.

  10. Quantitative facial asymmetry: using three-dimensional photogrammetry to measure baseline facial surface symmetry.

    PubMed

    Taylor, Helena O; Morrison, Clinton S; Linden, Olivia; Phillips, Benjamin; Chang, Johnny; Byrne, Margaret E; Sullivan, Stephen R; Forrest, Christopher R

    2014-01-01

    Although symmetry is hailed as a fundamental goal of aesthetic and reconstructive surgery, our tools for measuring this outcome have been limited and subjective. With the advent of three-dimensional photogrammetry, surface geometry can be captured, manipulated, and measured quantitatively. Until now, few normative data existed with regard to facial surface symmetry. Here, we present a method for reproducibly calculating overall facial symmetry and present normative data on 100 subjects. We enrolled 100 volunteers who underwent three-dimensional photogrammetry of their faces in repose. We collected demographic data on age, sex, and race and subjectively scored facial symmetry. We calculated the root mean square deviation (RMSD) between the native and reflected faces, reflecting about a plane of maximum symmetry. We analyzed the interobserver reliability of the subjective assessment of facial asymmetry and the quantitative measurements and compared the subjective and objective values. We also classified areas of greatest asymmetry as localized to the upper, middle, or lower facial thirds. This cluster of normative data was compared with a group of patients with subtle but increasing amounts of facial asymmetry. We imaged 100 subjects by three-dimensional photogrammetry. There was a poor interobserver correlation between subjective assessments of asymmetry (r = 0.56). There was a high interobserver reliability for quantitative measurements of facial symmetry RMSD calculations (r = 0.91-0.95). The mean RMSD for this normative population was found to be 0.80 ± 0.24 mm. Areas of greatest asymmetry were distributed as follows: 10% upper facial third, 49% central facial third, and 41% lower facial third. Precise measurement permitted discrimination of subtle facial asymmetry within this normative group and distinguished norms from patients with subtle facial asymmetry, with placement of RMSDs along an asymmetry ruler. Facial surface symmetry, which is poorly assessed
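
    A minimal sketch of the RMSD symmetry measure, assuming the plane of maximum symmetry has already been found and aligned with x = 0: reflect the facial point cloud, match each mirrored point to its nearest original point, and take the root mean square of those distances. The point cloud below is synthetic.

```python
import numpy as np
from scipy.spatial import cKDTree

def symmetry_rmsd(points):
    """RMSD between a facial point cloud and its reflection about the plane x = 0."""
    mirrored = points * np.array([-1.0, 1.0, 1.0])   # reflect across the assumed mid-sagittal plane
    distances, _ = cKDTree(points).query(mirrored)
    return np.sqrt(np.mean(distances ** 2))

# Toy usage: a roughly symmetric cloud with small asymmetric noise (mm scale).
rng = np.random.default_rng(6)
half = rng.random((2000, 3)) * np.array([50.0, 100.0, 80.0])
cloud = np.vstack([half, half * np.array([-1, 1, 1]) + rng.normal(0, 0.4, half.shape)])
print(f"RMSD = {symmetry_rmsd(cloud):.2f} mm")
```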

  11. Long-Face Dentofacial Deformities: Occlusion and Facial Esthetic Surgical Outcomes.

    PubMed

    Posnick, Jeffrey C; Liu, Samuel; Tremont, Timothy J

    2018-06-01

    The purpose of this study was to document malocclusion and facial dysmorphology in a series of patients with long face (LF) and chronic obstructive nasal breathing before treatment and the outcomes after bimaxillary orthognathic, osseous genioplasty, and intranasal surgery. A retrospective cohort study of patients with LF undergoing bimaxillary, chin, and intranasal (septoplasty and inferior turbinate reduction) surgery was implemented. Predictor variables were grouped into demographic, anatomic, operative, and longitudinal follow-up categories. Primary outcome variables were the initial postoperative occlusion achieved (T2; 5 weeks after surgery) and the occlusion maintained long-term (>2 years after surgery). Six key occlusion parameters were assessed: overjet, overbite, coincidence of dental midlines, canine Angle classification, and molar vertical and transverse positions. The second outcome variable was the facial esthetic results. Photographs in 6 views were analyzed to document 7 facial contour characteristics. Seventy-eight patients met the inclusion criteria. Average age at surgery was 24 years (range, 13 to 54 yr). The study included 53 female patients (68%). Findings confirmed that occlusion after initial surgical healing (T2) met the objectives for all parameters in 97% of patients (76 of 78). Most (68 of 78; 87%) maintained a favorable anterior and posterior occlusion for each parameter studied long-term (mean, 5 years 5 months). Facial contour deformities at presentation included prominent nose (63%), flat cheekbones (96%), flat midface (96%), weak chin (91%), obtuse neck-to-chin angle (56%), wide lip separation (95%), and excess maxillary dental show (99%). Correction of all pretreatment facial contour deformities was confirmed in 92% of patients after surgery. Long-face patients with higher preoperative body mass index levels were more likely to have residual facial dysmorphology after surgery (P = .0009). Using orthognathic surgery

  12. Outcome of facial physiotherapy in patients with prolonged idiopathic facial palsy.

    PubMed

    Watson, G J; Glover, S; Allen, S; Irving, R M

    2015-04-01

    This study investigated whether patients who remain symptomatic more than a year following idiopathic facial paralysis gain benefit from tailored facial physiotherapy. A two-year retrospective review was conducted of all symptomatic patients. Data collected included: age, gender, duration of symptoms, Sunnybrook facial grading system scores pre-treatment and at last visit, and duration of treatment. The study comprised 22 patients (with a mean age of 50.5 years (range, 22-75 years)) who had been symptomatic for more than a year following idiopathic facial paralysis. The mean duration of symptoms was 45 months (range, 12-240 months). The mean duration of follow up was 10.4 months (range, 2-36 months). Prior to treatment, the mean Sunnybrook facial grading system score was 59 (standard deviation = 3.5); this had increased to 83 (standard deviation = 2.7) at the last visit, with an average improvement in score of 23 (standard deviation = 2.9). This increase was significant (p < 0.001). Tailored facial therapy can improve facial grading scores in patients who remain symptomatic for prolonged periods.

  13. Expressive facial animation synthesis by learning speech coarticulation and expression spaces.

    PubMed

    Deng, Zhigang; Neumann, Ulrich; Lewis, J P; Kim, Tae-Yong; Bulut, Murtaza; Narayanan, Shrikanth

    2006-01-01

    Synthesizing expressive facial animation is a very challenging topic within the graphics community. In this paper, we present an expressive facial animation synthesis system enabled by automated learning from facial motion capture data. Accurate 3D motions of the markers on the face of a human subject are captured while he/she recites a predesigned corpus, with specific spoken and visual expressions. We present a novel motion capture mining technique that "learns" speech coarticulation models for diphones and triphones from the recorded data. A Phoneme-Independent Expression Eigenspace (PIEES) that encloses the dynamic expression signals is constructed by motion signal processing (phoneme-based time-warping and subtraction) and Principal Component Analysis (PCA) reduction. New expressive facial animations are synthesized as follows: First, the learned coarticulation models are concatenated to synthesize neutral visual speech according to novel speech input, then a texture-synthesis-based approach is used to generate a novel dynamic expression signal from the PIEES model, and finally the synthesized expression signal is blended with the synthesized neutral visual speech to create the final expressive facial animation. Our experiments demonstrate that the system can effectively synthesize realistic expressive facial animation.

  14. NOTE: Entropy-based automated classification of independent components separated from fMCG

    NASA Astrophysics Data System (ADS)

    Comani, S.; Srinivasan, V.; Alleva, G.; Romani, G. L.

    2007-03-01

    Fetal magnetocardiography (fMCG) is a noninvasive technique suitable for the prenatal diagnosis of the fetal heart function. Reliable fetal cardiac signals can be reconstructed from multi-channel fMCG recordings by means of independent component analysis (ICA). However, the identification of the separated components is usually accomplished by visual inspection. This paper discusses a novel automated system based on entropy estimators, namely approximate entropy (ApEn) and sample entropy (SampEn), for the classification of independent components (ICs). The system was validated on 40 fMCG datasets of normal fetuses with the gestational age ranging from 22 to 37 weeks. Both ApEn and SampEn were able to measure the stability and predictability of the physiological signals separated with ICA, and the entropy values of the three categories were significantly different at p <0.01. The system performances were compared with those of a method based on the analysis of the time and frequency content of the components. The outcomes of this study showed a superior performance of the entropy-based system, in particular for early gestation, with an overall ICs detection rate of 98.75% and 97.92% for ApEn and SampEn respectively, as against a value of 94.50% obtained with the time-frequency-based system.
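
    A rough sketch of sample entropy (SampEn), one of the two estimators named above; the template length m and tolerance r follow common defaults rather than the paper's settings, and the counting scheme is a simplified textbook variant.

```python
import numpy as np

def sample_entropy(signal, m=2, r_factor=0.2):
    """SampEn(m, r): -log of the ratio of (m+1)-length to m-length template matches."""
    x = np.asarray(signal, dtype=float)
    r = r_factor * x.std()

    def count_matches(length):
        templates = np.array([x[i:i + length] for i in range(len(x) - length)])
        count = 0
        for i in range(len(templates)):
            # Chebyshev distance to all later templates (self-matches excluded)
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(d <= r)
        return count

    b, a = count_matches(m), count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

# A regular signal should yield lower SampEn than white noise.
t = np.linspace(0, 10, 1000)
rng = np.random.default_rng(7)
print(sample_entropy(np.sin(2 * np.pi * t)), sample_entropy(rng.normal(size=1000)))
```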

  15. Joint Facial Action Unit Detection and Feature Fusion: A Multi-conditional Learning Approach.

    PubMed

    Eleftheriadis, Stefanos; Rudovic, Ognjen; Pantic, Maja

    2016-10-05

    Automated analysis of facial expressions can benefit many domains, from marketing to clinical diagnosis of neurodevelopmental disorders. Facial expressions are typically encoded as a combination of facial muscle activations, i.e., action units. Depending on context, these action units co-occur in specific patterns, and rarely in isolation. Yet, most existing methods for automatic action unit detection fail to exploit dependencies among them, and the corresponding facial features. To address this, we propose a novel multi-conditional latent variable model for simultaneous fusion of facial features and joint action unit detection. Specifically, the proposed model performs feature fusion in a generative fashion via a low-dimensional shared subspace, while simultaneously performing action unit detection using a discriminative classification approach. We show that by combining the merits of both approaches, the proposed methodology outperforms existing purely discriminative/generative methods for the target task. To reduce the number of parameters, and avoid overfitting, a novel Bayesian learning approach based on Monte Carlo sampling is proposed, to integrate out the shared subspace. We validate the proposed method on posed and spontaneous data from three publicly available datasets (CK+, DISFA and Shoulder-pain), and show that both feature fusion and joint learning of action units leads to improved performance compared to the state-of-the-art methods for the task.

  16. Facial Expression Recognition: Can Preschoolers with Cochlear Implants and Hearing Aids Catch It?

    ERIC Educational Resources Information Center

    Wang, Yifang; Su, Yanjie; Fang, Ping; Zhou, Qingxia

    2011-01-01

    Tager-Flusberg and Sullivan (2000) presented a cognitive model of theory of mind (ToM), in which they thought ToM included two components: a social-perceptual component and a social-cognitive component. Facial expression recognition (FER) is an ability tapping the social-perceptual component. Previous findings suggested that normal hearing…

  17. Facial Structure Predicts Sexual Orientation in Both Men and Women.

    PubMed

    Skorska, Malvina N; Geniole, Shawn N; Vrysen, Brandon M; McCormick, Cheryl M; Bogaert, Anthony F

    2015-07-01

    Biological models have typically framed sexual orientation in terms of effects of variation in fetal androgen signaling on sexual differentiation, although other biological models exist. Despite marked sex differences in facial structure, the relationship between sexual orientation and facial structure is understudied. A total of 52 lesbian women, 134 heterosexual women, 77 gay men, and 127 heterosexual men were recruited at a Canadian campus and various Canadian Pride and sexuality events. We found that facial structure differed depending on sexual orientation; substantial variation in sexual orientation was predicted using facial metrics computed by a facial modelling program from photographs of White faces. At the univariate level, lesbian and heterosexual women differed in 17 facial features (out of 63) and four were unique multivariate predictors in logistic regression. Gay and heterosexual men differed in 11 facial features at the univariate level, of which three were unique multivariate predictors. Some, but not all, of the facial metrics differed between the sexes. Lesbian women had noses that were more turned up (also more turned up in heterosexual men), mouths that were more puckered, smaller foreheads, and marginally more masculine face shapes (also in heterosexual men) than heterosexual women. Gay men had more convex cheeks, shorter noses (also in heterosexual women), and foreheads that were more tilted back relative to heterosexual men. Principal components analysis and discriminant functions analysis generally corroborated these results. The mechanisms underlying variation in craniofacial structure--both related and unrelated to sexual differentiation--may thus be important in understanding the development of sexual orientation.

  18. Facial trauma.

    PubMed

    Peeters, N; Lemkens, P; Leach, R; Gemels, B; Schepers, S; Lemmens, W

    Facial trauma. Patients with facial trauma must be assessed in a systematic way so as to avoid missing any injury. Severe and disfiguring facial injuries can be distracting. However, clinicians must first focus on the basics of trauma care, following the Advanced Trauma Life Support (ATLS) system of care. Maxillofacial trauma occurs in a significant number of severely injured patients. Life- and sight-threatening injuries must be excluded during the primary and secondary surveys. Special attention must be paid to sight-threatening injuries in stabilized patients through early referral to an appropriate specialist or the early initiation of emergency care treatment. The gold standard for the radiographic evaluation of facial injuries is computed tomography (CT) imaging. Nasal fractures are the most frequent isolated facial fractures. Isolated nasal fractures are principally diagnosed through history and clinical examination. Closed reduction is the most frequently performed treatment for isolated nasal fractures, with a fractured nasal septum as a predictor of failure. Ear, nose and throat surgeons, maxillofacial surgeons and ophthalmologists must all develop an adequate treatment plan for patients with complex maxillofacial trauma.

  19. Convolutional neural networks with balanced batches for facial expressions recognition

    NASA Astrophysics Data System (ADS)

    Battini Sönmez, Elena; Cangelosi, Angelo

    2017-03-01

    This paper considers the issue of fully automatic emotion classification on 2D faces. In spite of the great effort made in recent years, traditional machine learning approaches based on hand-crafted feature extraction followed by a classification stage have failed to produce a real-time automatic facial expression recognition system. The proposed architecture uses Convolutional Neural Networks (CNN), which are built as a collection of interconnected processing elements to simulate the brain of human beings. The basic idea of CNNs is to learn a hierarchical representation of the input data, which results in better classification performance. In this work we present a block-based CNN algorithm, which uses noise as a data augmentation technique and builds batches with a balanced number of samples per class. The proposed architecture is a very simple yet powerful CNN, which can yield state-of-the-art accuracy on the very competitive benchmark of the Extended Cohn-Kanade database.
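
    The two ideas named above (noise as data augmentation and class-balanced batches) can be sketched independently of any particular CNN framework. The generator below is a hypothetical illustration with arbitrary image shapes, class counts, and noise level.

```python
import numpy as np

def balanced_noisy_batches(images, labels, per_class, noise_std=0.05, seed=0):
    """Yield batches containing `per_class` noise-augmented samples of every class."""
    rng = np.random.default_rng(seed)
    classes = np.unique(labels)
    by_class = {c: np.flatnonzero(labels == c) for c in classes}
    while True:
        idx = np.concatenate([rng.choice(by_class[c], per_class, replace=True) for c in classes])
        rng.shuffle(idx)
        chosen = images[idx]
        batch = chosen + rng.normal(0.0, noise_std, chosen.shape)   # additive-noise augmentation
        yield batch.astype(np.float32), labels[idx]

# Toy usage with an imbalanced dummy set of 48x48 grayscale faces and 7 expression classes.
rng = np.random.default_rng(8)
X = rng.random((500, 48, 48, 1))
y = rng.choice(7, size=500, p=[0.4, 0.2, 0.1, 0.1, 0.1, 0.05, 0.05])
xb, yb = next(balanced_noisy_batches(X, y, per_class=8))
print(xb.shape, np.bincount(yb))   # every class appears exactly 8 times per batch
```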

  20. FaceWarehouse: a 3D facial expression database for visual computing.

    PubMed

    Cao, Chen; Weng, Yanlin; Zhou, Shun; Tong, Yiying; Zhou, Kun

    2014-03-01

    We present FaceWarehouse, a database of 3D facial expressions for visual computing applications. We use Kinect, an off-the-shelf RGBD camera, to capture 150 individuals aged 7-80 from various ethnic backgrounds. For each person, we captured the RGBD data of her different expressions, including the neutral expression and 19 other expressions such as mouth-opening, smile, kiss, etc. For every RGBD raw data record, a set of facial feature points on the color image such as eye corners, mouth contour, and the nose tip are automatically localized, and manually adjusted if better accuracy is required. We then deform a template facial mesh to fit the depth data as closely as possible while matching the feature points on the color image to their corresponding points on the mesh. Starting from these fitted face meshes, we construct a set of individual-specific expression blendshapes for each person. These meshes with consistent topology are assembled as a rank-3 tensor to build a bilinear face model with two attributes: identity and expression. Compared with previous 3D facial databases, for every person in our database, there is a much richer matching collection of expressions, enabling depiction of most human facial actions. We demonstrate the potential of FaceWarehouse for visual computing with four applications: facial image manipulation, face component transfer, real-time performance-based facial image animation, and facial animation retargeting from video to image.
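
    A small sketch of how a rank-3 bilinear face model can be queried: contract a (vertices x identities x expressions) core tensor with an identity weight vector and an expression weight vector to obtain one mesh. All dimensions and tensors below are invented placeholders, not FaceWarehouse data.

```python
import numpy as np

rng = np.random.default_rng(9)
n_vertices, n_identities, n_expressions = 3 * 1000, 50, 20   # 1000 3-D vertices, flattened

core = rng.normal(size=(n_vertices, n_identities, n_expressions))   # bilinear face model core
w_id = rng.dirichlet(np.ones(n_identities))        # identity weights (sum to 1)
w_exp = rng.dirichlet(np.ones(n_expressions))      # expression weights (sum to 1)

# mesh[v] = sum_i sum_e core[v, i, e] * w_id[i] * w_exp[e]
mesh = np.einsum("vie,i,e->v", core, w_id, w_exp).reshape(-1, 3)
print(mesh.shape)   # (1000, 3): one generated face for this identity/expression pair
```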

  1. Spontaneous facial expressions of emotion of congenitally and noncongenitally blind individuals.

    PubMed

    Matsumoto, David; Willingham, Bob

    2009-01-01

    The study of the spontaneous expressions of blind individuals offers a unique opportunity to understand basic processes concerning the emergence and source of facial expressions of emotion. In this study, the authors compared the expressions of congenitally and noncongenitally blind athletes in the 2004 Paralympic Games with each other and with those produced by sighted athletes in the 2004 Olympic Games. The authors also examined how expressions change from 1 context to another. There were no differences between congenitally blind, noncongenitally blind, and sighted athletes, either on the level of individual facial actions or in facial emotion configurations. Blind athletes did produce more overall facial activity, but these were isolated to head and eye movements. The blind athletes' expressions differentiated whether they had won or lost a medal match at 3 different points in time, and there were no cultural differences in expression. These findings provide compelling evidence that the production of spontaneous facial expressions of emotion is not dependent on observational learning but simultaneously demonstrates a learned component to the social management of expressions, even among blind individuals.

  2. The importance of skin color and facial structure in perceiving and remembering others: an electrophysiological study.

    PubMed

    Brebner, Joanne L; Krigolson, Olav; Handy, Todd C; Quadflieg, Susanne; Turk, David J

    2011-05-04

    The own-race bias (ORB) is a well-documented recognition advantage for own-race (OR) over cross-race (CR) faces, the origin of which remains unclear. In the current study, event-related potentials (ERPs) were recorded while Caucasian participants age-categorized Black and White faces which were digitally altered to display either a race congruent or incongruent facial structure. The results of a subsequent surprise memory test indicated that regardless of facial structure participants recognized White faces better than Black faces. Additional analyses revealed that temporally-early ERP components associated with face-specific perceptual processing (N170) and the individuation of facial exemplars (N250) were selectively sensitive to skin color. In addition, the N200 (a component that has been linked to increased attention and depth of encoding afforded to in-group and OR faces) was modulated by color and structure, and correlated with subsequent memory performance. However, the LPP component associated with the cognitive evaluation of perceptual input was influenced by racial differences in facial structure alone. These findings suggest that racial differences in skin color and facial structure are detected during the encoding of unfamiliar faces, and that the categorization of conspecifics as members of our social in-group on the basis of their skin color may be a determining factor in our ability to subsequently remember them. Copyright © 2011 Elsevier B.V. All rights reserved.

  3. [Emotion Recognition in Patients with Peripheral Facial Paralysis - A Pilot Study].

    PubMed

    Konnerth, V; Mohr, G; von Piekartz, H

    2016-02-01

    The perception of emotions is an important component enabling human beings to engage in social interaction in everyday life. Thus, the ability to recognize emotions in another person's facial expression is a key prerequisite. The following study aimed at evaluating the ability of subjects with 'peripheral facial paresis' to perceive emotions, in comparison with healthy individuals. A pilot study was conducted in which 13 people with 'peripheral facial paresis' participated. The assessment included the 'Facially Expressed Emotion Labeling Test' (FEEL-Test), the 'Facial Laterality Recognition Test' (FLR-Test) and the 'Toronto Alexithymia Scale 26' (TAS 26). The results were compared with data of healthy people from other studies. In contrast to healthy individuals, the subjects with 'facial paresis' showed more difficulty in recognizing basic emotions; however, the difference was not significant. The participants showed significantly lower speed (right/left: p < 0.001) in the perception of facial laterality compared with healthy people. With regard to alexithymia, the tested group showed significantly higher scores (p < 0.001) compared with unimpaired people. The present pilot study does not demonstrate an impairment of this specific patient group's ability to recognize emotions and facial laterality. For future studies, the research question should be examined in a larger sample. © Georg Thieme Verlag KG Stuttgart · New York.

  4. Slowing down presentation of facial movements and vocal sounds enhances facial expression recognition and induces facial-vocal imitation in children with autism.

    PubMed

    Tardif, Carole; Lainé, France; Rodriguez, Mélissa; Gepner, Bruno

    2007-09-01

    This study examined the effects of slowing down presentation of facial expressions and their corresponding vocal sounds on facial expression recognition and facial and/or vocal imitation in children with autism. Twelve autistic children and twenty-four normal control children were presented with emotional and non-emotional facial expressions on CD-Rom, under audio or silent conditions, and under dynamic visual conditions (slowly, very slowly, at normal speed) plus a static control. Overall, children with autism showed lower performance in expression recognition and more induced facial-vocal imitation than controls. In the autistic group, facial expression recognition and induced facial-vocal imitation were significantly enhanced in slow conditions. Findings may give new perspectives for understanding and intervention for verbal and emotional perceptive and communicative impairments in autistic populations.

  5. Association Among Facial Paralysis, Depression, and Quality of Life in Facial Plastic Surgery Patients

    PubMed Central

    Nellis, Jason C.; Ishii, Masaru; Byrne, Patrick J.; Boahene, Kofi D. O.; Dey, Jacob K.; Ishii, Lisa E.

    2017-01-01

    IMPORTANCE Though anecdotally linked, few studies have investigated the impact of facial paralysis on depression and quality of life (QOL). OBJECTIVE To measure the association between depression, QOL, and facial paralysis in patients seeking treatment at a facial plastic surgery clinic. DESIGN, SETTING, PARTICIPANTS Data were prospectively collected for patients with all-cause facial paralysis and control patients initially presenting to a facial plastic surgery clinic from 2013 to 2015. The control group included a heterogeneous patient population presenting to the facial plastic surgery clinic for evaluation. Patients who had prior facial reanimation surgery or missing demographic and psychometric data were excluded from analysis. MAIN OUTCOMES AND MEASURES Demographics, facial paralysis etiology, facial paralysis severity (graded on the House-Brackmann scale), Beck depression inventory, and QOL scores in both groups were examined. Potential confounders, including self-reported attractiveness and mood, were collected and analyzed. Self-reported scores were measured using a 0 to 100 visual analog scale. RESULTS A total of 263 patients (mean age, 48.8 years; 66.9% female) were analyzed. There were 175 control patients and 88 patients with facial paralysis. Sex distributions were not significantly different between the facial paralysis and control groups. Patients with facial paralysis had significantly higher depression, lower self-reported attractiveness, lower mood, and lower QOL scores. Overall, 37 patients with facial paralysis (42.1%) screened positive for depression, with the greatest likelihood in patients with House-Brackmann grade 3 or greater (odds ratio, 10.8; 95% CI, 5.13–22.75) compared with 13 control patients (8.1%) (P < .001). In multivariate regression, facial paralysis and female sex were significantly associated with higher depression scores (constant, 2.08 [95% CI, 0.77–3.39]; facial paralysis effect, 5.98 [95% CI, 4.38–7

  6. The Emotional Modulation of Facial Mimicry: A Kinematic Study.

    PubMed

    Tramacere, Antonella; Ferrari, Pier F; Gentilucci, Maurizio; Giuffrida, Valeria; De Marco, Doriana

    2017-01-01

    It is well established that the observation of emotional facial expressions induces facial mimicry responses in observers. However, how the interaction between emotional and motor components of facial expressions can modulate the motor behavior of the perceiver is still unknown. We have developed a kinematic experiment to evaluate the effect of different oro-facial expressions on the perceiver's face movements. Participants were asked to perform two movements, i.e., lip stretching and lip protrusion, in response to the observation of four meaningful (i.e., smile, angry-mouth, kiss, and spit) and two meaningless mouth gestures. All the stimuli were characterized by different motor patterns (mouth aperture or mouth closure). Response times and kinematic parameters of the movements (amplitude, duration, and mean velocity) were recorded and analyzed. Results showed dissociated effects on reaction times and movement kinematics. We found shorter reaction times when a mouth movement was preceded by the observation of a meaningful and motorically congruent oro-facial gesture, in line with the facial mimicry effect. On the contrary, during execution, the perception of a smile was associated with facilitation, in terms of shorter duration and higher velocity, of the incongruent movement, i.e., lip protrusion. The same effect was found in response to kiss and spit, which significantly facilitated the execution of lip stretching. We called this phenomenon the facial mimicry reversal effect, understood as the overturning of the effect normally observed during facial mimicry. In general, the findings show that both motor features and types of emotional oro-facial gestures (conveying positive or negative valence) affect the kinematics of subsequent mouth movements at different levels: while congruent motor features facilitate a general motor response, motor execution could be speeded by gestures that are motorically incongruent with the observed one. Moreover, valence effect depends on

  7. Channels selection using independent component analysis and scalp map projection for EEG-based driver fatigue classification.

    PubMed

    Rifai Chai; Naik, Ganesh R; Sai Ho Ling; Tran, Yvonne; Craig, Ashley; Nguyen, Hung T

    2017-07-01

    This paper presents a classification of driver fatigue with electroencephalography (EEG) channel selection analysis. The system employs independent component analysis (ICA) with scalp map back projection to select the dominant EEG channels. After channel selection, the features of the selected EEG channels were extracted based on power spectral density (PSD) and then classified using a Bayesian neural network. The results of the ICA decomposition with the back-projected scalp map and a threshold showed that the EEG channels can be reduced from 32 channels to 16 dominant channels involved in fatigue assessment, which included AF3, F3, FC1, FC5, T7, CP5, P3, O1, P4, P8, CP6, T8, FC2, F8, AF4, FP2. Fatigue vs. alert classification using the selected 16 channels yielded a sensitivity of 76.8%, a specificity of 74.3% and an accuracy of 75.5%. Also, the classification results of the selected 16 channels are comparable to those using the original 32 channels. The selected 16 channels are therefore preferable for ergonomic improvement of the EEG-based fatigue classification system.
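
    A hedged sketch of the feature step described above, assuming epochs from the selected channels are already available: per-channel band powers from Welch power spectral density estimates, with a simple Gaussian naive Bayes classifier standing in for the paper's Bayesian neural network. The band limits, sampling rate, and labels below are illustrative.

```python
import numpy as np
from scipy.signal import welch
from sklearn.naive_bayes import GaussianNB

BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_power_features(epoch, fs=128):
    """Band powers for every channel of one EEG epoch (channels x samples)."""
    freqs, psd = welch(epoch, fs=fs, nperseg=fs * 2, axis=-1)
    feats = []
    for low, high in BANDS.values():
        mask = (freqs >= low) & (freqs < high)
        feats.append(psd[:, mask].mean(axis=1))
    return np.concatenate(feats)

# Toy usage: 40 epochs of 16 selected channels, random labels (0 = alert, 1 = fatigued).
rng = np.random.default_rng(10)
epochs = rng.normal(size=(40, 16, 128 * 4))
X = np.stack([band_power_features(e) for e in epochs])
y = rng.integers(0, 2, size=40)
print(GaussianNB().fit(X, y).score(X, y))   # training accuracy on toy data
```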

  8. Facial fractures in children.

    PubMed

    Boyette, Jennings R

    2014-10-01

    Facial trauma in children differs from adults. The growing facial skeleton presents several challenges to the reconstructive surgeon. A thorough understanding of the patterns of facial growth and development is needed to form an individualized treatment strategy. A proper diagnosis must be made and treatment options weighed against the risk of causing further harm to facial development. This article focuses on the management of facial fractures in children. Discussed are common fracture patterns based on the development of the facial structure, initial management, diagnostic strategies, new concepts and old controversies regarding radiologic examinations, conservative versus operative intervention, risks of growth impairment, and resorbable fixation. Copyright © 2014 Elsevier Inc. All rights reserved.

  9. Toward DNA-based facial composites: preliminary results and validation.

    PubMed

    Claes, Peter; Hill, Harold; Shriver, Mark D

    2014-11-01

    The potential of constructing useful DNA-based facial composites is forensically of great interest. Given the significant identity information coded in the human face, these predictions could help investigations out of an impasse. Although there is substantial evidence that much of the total variation in facial features is genetically mediated, the discovery of which genes and gene variants underlie normal facial variation has been hampered primarily by the multipartite nature of facial variation. Traditionally, such physical complexity is simplified by simple scalar measurements defined a priori, such as nose or mouth width, or alternatively using dimensionality reduction techniques such as principal component analysis, where each principal coordinate is then treated as a scalar trait. However, as shown in previous and related work, a more impartial and systematic approach to modeling facial morphology is available and can facilitate both the gene discovery steps, as we recently showed, and DNA-based facial composite construction, as we show here. We first use genomic ancestry and sex to create a base-face, which is simply an average sex- and ancestry-matched face. Subsequently, the effects of 24 individual SNPs that have been shown to have significant effects on facial variation are overlaid on the base-face forming the predicted-face in a process akin to a photomontage or image blending. We next evaluate the accuracy of predicted faces using cross-validation. Physical accuracy of the facial predictions, either locally in particular parts of the face or in terms of overall similarity, is mainly determined by sex and genomic ancestry. The SNP-effects maintain the physical accuracy while significantly increasing the distinctiveness of the facial predictions, which would be expected to reduce false positives in perceptual identification tasks. To the best of our knowledge, this is the first effort at generating facial composites from DNA and the results are preliminary
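
    Purely as an illustration of the compositing idea (not the authors' statistical model): start from a sex- and ancestry-matched average face and add per-SNP shape offsets scaled by the genotype. The base face, effect fields, and genotype below are invented placeholders.

```python
import numpy as np

rng = np.random.default_rng(12)
n_vertices = 7000
base_faces = {("female", "european"): rng.normal(size=(n_vertices, 3))}  # matched average faces
snp_effects = rng.normal(scale=0.05, size=(24, n_vertices, 3))           # one offset field per SNP
genotype = rng.integers(0, 3, size=24)                                   # 0/1/2 effect-allele counts

def predicted_face(sex, ancestry, genotype):
    """Base-face plus the summed, genotype-weighted SNP effect fields."""
    face = base_faces[(sex, ancestry)].copy()
    face += np.tensordot(genotype, snp_effects, axes=1)   # sum_k genotype[k] * snp_effects[k]
    return face

print(predicted_face("female", "european", genotype).shape)   # (7000, 3)
```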

  10. Fashioning the Face: Sensorimotor Simulation Contributes to Facial Expression Recognition.

    PubMed

    Wood, Adrienne; Rychlowska, Magdalena; Korb, Sebastian; Niedenthal, Paula

    2016-03-01

    When we observe a facial expression of emotion, we often mimic it. This automatic mimicry reflects underlying sensorimotor simulation that supports accurate emotion recognition. Why this is so is becoming more obvious: emotions are patterns of expressive, behavioral, physiological, and subjective feeling responses. Activation of one component can therefore automatically activate other components. When people simulate a perceived facial expression, they partially activate the corresponding emotional state in themselves, which provides a basis for inferring the underlying emotion of the expresser. We integrate recent evidence in favor of a role for sensorimotor simulation in emotion recognition. We then connect this account to a domain-general understanding of how sensory information from multiple modalities is integrated to generate perceptual predictions in the brain. Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. Slowing down Presentation of Facial Movements and Vocal Sounds Enhances Facial Expression Recognition and Induces Facial-Vocal Imitation in Children with Autism

    ERIC Educational Resources Information Center

    Tardif, Carole; Laine, France; Rodriguez, Melissa; Gepner, Bruno

    2007-01-01

    This study examined the effects of slowing down presentation of facial expressions and their corresponding vocal sounds on facial expression recognition and facial and/or vocal imitation in children with autism. Twelve autistic children and twenty-four normal control children were presented with emotional and non-emotional facial expressions on…

  12. Neurophysiology of spontaneous facial expressions: I. Motor control of the upper and lower face is behaviorally independent in adults.

    PubMed

    Ross, Elliott D; Gupta, Smita S; Adnan, Asif M; Holden, Thomas L; Havlicek, Joseph; Radhakrishnan, Sridhar

    2016-03-01

    Facial expressions are described traditionally as monolithic entities. However, humans have the capacity to produce facial blends, in which the upper and lower face simultaneously display different emotional expressions. This, in turn, has led to the Component Theory of facial expressions. Recent neuroanatomical studies in monkeys have demonstrated that there are separate cortical motor areas for controlling the upper and lower face that, presumably, also occur in humans. The lower face is represented on the posterior ventrolateral surface of the frontal lobes in the primary motor and premotor cortices and the upper face is represented on the medial surface of the posterior frontal lobes in the supplementary motor and anterior cingulate cortices. Our laboratory has been engaged in a series of studies exploring the perception and production of facial blends. Using high-speed videography, we began measuring the temporal aspects of facial expressions to develop a more complete understanding of the neurophysiology underlying facial expressions and facial blends. The goal of the research presented here was to determine if spontaneous facial expressions in adults are predominantly monolithic or exhibit independent motor control of the upper and lower face. We found that spontaneous facial expressions are very complex and that the motor control of the upper and lower face is overwhelmingly independent, thus robustly supporting the Component Theory of facial expressions. Seemingly monolithic expressions, be they full facial or facial blends, are most likely the result of a timing coincidence rather than synchronous coordination between the ventrolateral and medial cortical motor areas responsible for controlling the lower and upper face, respectively. In addition, we found evidence that the right and left face may also exhibit independent motor control, thus supporting the concept that spontaneous facial expressions are organized predominantly across the horizontal facial

  13. Wavelet based de-noising of breath air absorption spectra profiles for improved classification by principal component analysis

    NASA Astrophysics Data System (ADS)

    Kistenev, Yu. V.; Shapovalov, A. V.; Borisov, A. V.; Vrazhnov, D. A.; Nikolaev, V. V.; Nikiforova, O. Yu.

    2015-11-01

    The results of comparing different mother wavelets used to de-noise model and experimental data, represented by profiles of absorption spectra of exhaled air, are presented. The impact of wavelet de-noising on the quality of classification by principal component analysis is also discussed.
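
    A minimal sketch of the processing chain described above: wavelet de-noising of 1-D absorption profiles followed by principal component analysis. It assumes the pywt and scikit-learn packages; the mother wavelet, threshold rule, and synthetic spectra are illustrative choices, not those of the authors.

```python
import numpy as np
import pywt
from sklearn.decomposition import PCA

def wavelet_denoise(spectrum, wavelet="db4", level=4):
    """Soft-threshold wavelet de-noising of a 1-D absorption spectrum profile."""
    coeffs = pywt.wavedec(spectrum, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745          # noise estimate from finest scale
    threshold = sigma * np.sqrt(2 * np.log(len(spectrum)))  # universal threshold
    coeffs[1:] = [pywt.threshold(c, threshold, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(spectrum)]

# Synthetic stand-in for exhaled-air absorption profiles (rows = samples).
rng = np.random.default_rng(1)
clean = np.sin(np.linspace(0, 8 * np.pi, 512))
spectra = clean + 0.3 * rng.normal(size=(40, 512))

denoised = np.vstack([wavelet_denoise(s) for s in spectra])
scores = PCA(n_components=2).fit_transform(denoised)        # features for later classification
print(scores.shape)
```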

  14. Is moral beauty different from facial beauty? Evidence from an fMRI study

    PubMed Central

    Wang, Tingting; Mo, Ce; Tan, Li Hai; Cant, Jonathan S.; Zhong, Luojin; Cupchik, Gerald

    2015-01-01

    Is moral beauty different from facial beauty? Two functional magnetic resonance imaging experiments were performed to answer this question. Experiment 1 investigated the network of moral aesthetic judgments and facial aesthetic judgments. Participants performed aesthetic judgments and gender judgments on both faces and scenes containing moral acts. The conjunction analysis of the contrasts ‘facial aesthetic judgment > facial gender judgment’ and ‘scene moral aesthetic judgment > scene gender judgment’ identified the common involvement of the orbitofrontal cortex (OFC), inferior temporal gyrus and medial superior frontal gyrus, suggesting that both types of aesthetic judgments are based on the orchestration of perceptual, emotional and cognitive components. Experiment 2 examined the network of facial beauty and moral beauty during implicit perception. Participants performed a non-aesthetic judgment task on both faces (beautiful vs common) and scenes (containing morally beautiful vs neutral information). We observed that facial beauty (beautiful faces > common faces) involved both the cortical reward region OFC and the subcortical reward region putamen, whereas moral beauty (moral beauty scenes > moral neutral scenes) only involved the OFC. Moreover, compared with facial beauty, moral beauty spanned a larger-scale cortical network, indicating more advanced and complex cerebral representations characterizing moral beauty. PMID:25298010

  15. Chronic, burning facial pain following cosmetic facial surgery.

    PubMed

    Eisenberg, E; Yaari, A; Har-Shai, Y

    1996-01-01

    Chronic, burning facial pain as a result of cosmetic facial surgery has rarely been reported. During 1994, two female patients presented to our Pain Relief Clinic with chronic facial pain that had developed following aesthetic facial surgery. One patient had undergone bilateral transpalpebral surgery for removal of intraorbital fat to correct exophthalmos, and the other had classical face and anterior hairline forehead lifts. Pain in both patients was similar in that it was bilateral, symmetric, burning in quality, and aggravated by external stimuli, mainly light touch. It was resistant to multiple analgesic medications, and was associated with significant depression and disability. Diagnostic local (lidocaine) and systemic (lidocaine and phentolamine) nerve blocks failed to provide relief. Psychological evaluation revealed that the two patients had clear psychosocial factors that seemed to have further compounded their pain complaints. Tricyclic antidepressants (and biofeedback training in one patient) were modestly effective and produced only partial pain relief.

  16. Facial anatomy.

    PubMed

    Marur, Tania; Tuna, Yakup; Demirci, Selman

    2014-01-01

    Dermatologic problems of the face affect both function and aesthetics, which are based on complex anatomical features. Treating dermatologic problems while preserving the aesthetics and functions of the face requires knowledge of normal anatomy. To perform invasive procedures on the face successfully, it is essential to understand its underlying topographic anatomy. This chapter presents the anatomy of the facial musculature and neurovascular structures in a systematic way, with some clinically important aspects. We describe the attachments of the mimetic and masticatory muscles and emphasize their functions and nerve supply. We highlight clinically relevant facial topographic anatomy by explaining the course and location of the sensory and motor nerves of the face and the facial vasculature with their relations. Additionally, this chapter reviews the recent nomenclature of the branching pattern of the facial artery. © 2013 Elsevier Inc. All rights reserved.

  17. Evaluating visibility of age spot and freckle based on simulated spectral reflectance distribution and facial color image

    NASA Astrophysics Data System (ADS)

    Hirose, Misa; Toyota, Saori; Tsumura, Norimichi

    2018-02-01

    In this research, we evaluate the visibility of age spots and freckles while changing the blood volume, based on simulated spectral reflectance distributions and actual facial color images, and compare the results. First, we generate three types of spatial distributions of age spots and freckles in patch-like images based on the simulated spectral reflectance. The spectral reflectance is simulated using Monte Carlo simulation of light transport in multi-layered tissue. Next, we reconstruct the facial color image while changing the blood volume. We acquire the concentration distributions of the melanin, hemoglobin, and shading components by applying independent component analysis to a facial color image. We reproduce images using the obtained melanin and shading concentrations and the changed hemoglobin concentration. Finally, we evaluate the visibility of the pigmentations using the simulated spectral reflectance distribution and the facial color images. For the simulated spectral reflectance distribution, we found that visibility became lower as the blood volume increased. However, the results from the facial color images show that a specific blood volume reduces the visibility of the actual pigmentations.
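
    The decomposition step can be sketched as follows. This is only a rough illustration under stated assumptions: FastICA from scikit-learn stands in for the authors' independent component analysis, a simple log transform stands in for their skin color model, and the image is random data rather than a real facial photograph.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Toy sketch: separate a facial color image into pigmentation-related components
# with ICA, in the spirit of the melanin/hemoglobin/shading decomposition
# described above (the exact color model used by the authors is not reproduced).
rng = np.random.default_rng(2)
h, w = 64, 64
rgb = rng.random((h, w, 3)) * 0.8 + 0.1                     # stand-in facial image in [0.1, 0.9]

optical_density = -np.log(rgb.reshape(-1, 3))               # simple Beer-Lambert-style transform
ica = FastICA(n_components=3, random_state=0)
sources = ica.fit_transform(optical_density)                # per-pixel component concentrations
component_maps = sources.reshape(h, w, 3)

# A modified image can then be resynthesized after scaling one component,
# e.g. the (assumed) hemoglobin-related channel, to mimic a blood-volume change.
modified = sources.copy()
modified[:, 1] *= 1.5
rgb_modified = np.exp(-ica.inverse_transform(modified)).reshape(h, w, 3)
print(component_maps.shape, rgb_modified.shape)
```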

  18. Children's Facial Trustworthiness Judgments: Agreement and Relationship with Facial Attractiveness.

    PubMed

    Ma, Fengling; Xu, Fen; Luo, Xianming

    2016-01-01

    This study examined developmental changes in children's abilities to make trustworthiness judgments based on faces and the relationship between a child's perception of trustworthiness and facial attractiveness. One hundred and one 8-, 10-, and 12-year-olds, along with 37 undergraduates, were asked to judge the trustworthiness of 200 faces. Next, they issued facial attractiveness judgments. The results indicated that children made consistent trustworthiness and attractiveness judgments based on facial appearance, but with-adult and within-age agreement levels of facial judgments increased with age. Additionally, the agreement levels of judgments made by girls were higher than those by boys. Furthermore, the relationship between trustworthiness and attractiveness judgments increased with age, and the relationship between two judgments made by girls was closer than those by boys. These findings suggest that face-based trait judgment ability develops throughout childhood and that, like adults, children may use facial attractiveness as a heuristic cue that signals a stranger's trustworthiness.

  19. Children's Facial Trustworthiness Judgments: Agreement and Relationship with Facial Attractiveness

    PubMed Central

    Ma, Fengling; Xu, Fen; Luo, Xianming

    2016-01-01

    This study examined developmental changes in children's abilities to make trustworthiness judgments based on faces and the relationship between a child's perception of trustworthiness and facial attractiveness. One hundred and one 8-, 10-, and 12-year-olds, along with 37 undergraduates, were asked to judge the trustworthiness of 200 faces. Next, they issued facial attractiveness judgments. The results indicated that children made consistent trustworthiness and attractiveness judgments based on facial appearance, but with-adult and within-age agreement levels of facial judgments increased with age. Additionally, the agreement levels of judgments made by girls were higher than those by boys. Furthermore, the relationship between trustworthiness and attractiveness judgments increased with age, and the relationship between two judgments made by girls was closer than those by boys. These findings suggest that face-based trait judgment ability develops throughout childhood and that, like adults, children may use facial attractiveness as a heuristic cue that signals a stranger's trustworthiness. PMID:27148111

  20. Development and validation of a facial expression database based on the dimensional and categorical model of emotions.

    PubMed

    Fujimura, Tomomi; Umemura, Hiroyuki

    2018-01-15

    The present study describes the development and validation of a facial expression database comprising five different horizontal face angles in dynamic and static presentations. The database includes twelve expression types portrayed by eight Japanese models. This database was inspired by the dimensional and categorical model of emotions: surprise, fear, sadness, anger with open mouth, anger with closed mouth, disgust with open mouth, disgust with closed mouth, excitement, happiness, relaxation, sleepiness, and neutral (static only). The expressions were validated using emotion classification and Affect Grid rating tasks [Russell, Weiss, & Mendelsohn, 1989. Affect Grid: A single-item scale of pleasure and arousal. Journal of Personality and Social Psychology, 57(3), 493-502]. The results indicate that most of the expressions were recognised as the intended emotions and could systematically represent affective valence and arousal. Furthermore, face angle and facial motion information influenced emotion classification and valence and arousal ratings. Our database will be available online at the following URL: https://www.dh.aist.go.jp/database/face2017/.

  1. Facial anthropometric differences among gender, ethnicity, and age groups.

    PubMed

    Zhuang, Ziqing; Landsittel, Douglas; Benson, Stacey; Roberge, Raymond; Shaffer, Ronald

    2010-06-01

    The impact of race/ethnicity on facial anthropometric data in the US workforce, and its implications for the development of personal protective equipment, has not been investigated to any significant degree. The proliferation of minority populations in the US workforce has increased the need to investigate differences in facial dimensions among these workers. The objective of this study was to determine the face shape and size differences among race and age groups from the National Institute for Occupational Safety and Health survey of 3997 US civilian workers. Survey participants were divided into two gender groups, four racial/ethnic groups, and three age groups. Measurements of height, weight, neck circumference, and 18 facial dimensions were collected using traditional anthropometric techniques. A multivariate analysis of the data was performed using Principal Component Analysis. An exploratory analysis to determine the effect that different demographic factors had on anthropometric features was performed using a linear model. The 21 anthropometric measurements, body mass index, and the first and second principal component scores were dependent variables, while gender, ethnicity, age, occupation, weight, and height served as independent variables. Gender significantly contributes to size for 19 of 24 dependent variables. African-Americans have statistically shorter, wider, and shallower noses than Caucasians. Hispanic workers have 14 facial features that are significantly larger than Caucasians, while their nose protrusion, height, and head length are significantly shorter. The other ethnic group was composed primarily of Asian subjects and has statistically different dimensions from Caucasians for 16 anthropometric values. Nineteen anthropometric values for subjects at least 45 years of age are statistically different from those measured for subjects between 18 and 29 years of age. Workers employed in manufacturing, fire fighting, healthcare, law enforcement, and other occupational
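
    A compact sketch of the analysis strategy (PCA scores on the facial measurements, then a linear model with demographic predictors), using simulated data. Column names, group codings, and effect structure are hypothetical and are not taken from the NIOSH survey.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

# Illustrative sketch only: the NIOSH survey data are not reproduced here.
rng = np.random.default_rng(3)
n = 500
df = pd.DataFrame({
    "gender": rng.integers(0, 2, n),
    "age_group": rng.integers(0, 3, n),
    "ethnicity": rng.integers(0, 4, n),
    "height": rng.normal(170, 10, n),
    "weight": rng.normal(75, 12, n),
})
# 18 facial dimensions (hypothetical columns standing in for the measured ones).
facial = rng.normal(size=(n, 18)) + df[["height"]].to_numpy() * 0.01

# First two principal component scores summarize overall face size/shape.
pc_scores = PCA(n_components=2).fit_transform(facial)

# Linear model: demographic factors as predictors of the first PC score.
X = pd.get_dummies(df[["gender", "age_group", "ethnicity"]].astype("category"),
                   drop_first=True).join(df[["height", "weight"]]).to_numpy(dtype=float)
model = LinearRegression().fit(X, pc_scores[:, 0])
print(model.coef_.round(3))
```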

  2. Interactive searching of facial image databases

    NASA Astrophysics Data System (ADS)

    Nicholls, Robert A.; Shepherd, John W.; Shepherd, Jean

    1995-09-01

    A set of psychological facial descriptors has been devised to enable computerized searching of criminal photograph albums. The descriptors have been used to encode image databases of up to twelve thousand images. Using a system called FACES, the databases are searched by translating a witness' verbal description into corresponding facial descriptors. Trials of FACES have shown that this coding scheme is more productive and efficient than searching traditional photograph albums. An alternative method of searching the encoded database using a genetic algorithm is currently being tested. The genetic search method does not require the witness to verbalize a description of the target but merely to indicate a degree of similarity between the target and a limited selection of images from the database. The major drawback of FACES is that it requires manual encoding of images. Research is being undertaken to automate the process; however, it will require an algorithm which can predict human descriptive values. Alternatives to human-derived coding schemes exist using statistical classifications of images. Since databases encoded using statistical classifiers do not have an obvious direct mapping to human-derived descriptors, a search method which does not require the entry of human descriptors is required. A genetic search algorithm is being tested for such a purpose.
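
    The genetic search idea can be sketched as follows, with the witness's similarity judgments simulated by distance to a hidden target encoding. Population size, mutation scale, and the feature encoding are illustrative assumptions, not details of the FACES system.

```python
import numpy as np

# Sketch of the genetic search idea: candidate faces are encoded as feature vectors,
# and a witness's similarity ratings act as the fitness function. Here the witness
# is simulated by distance to a hidden target; all parameters are illustrative.
rng = np.random.default_rng(4)
n_features, pop_size, n_generations = 20, 12, 40
target = rng.random(n_features)                              # the (unknown) target face encoding

def witness_rating(candidate):
    """Simulated similarity rating (higher = more similar to the target)."""
    return -np.linalg.norm(candidate - target)

population = rng.random((pop_size, n_features))
for _ in range(n_generations):
    fitness = np.array([witness_rating(c) for c in population])
    parents = population[np.argsort(fitness)][-pop_size // 2:]   # keep the best half
    children = []
    for _ in range(pop_size - len(parents)):
        a, b = parents[rng.integers(len(parents), size=2)]
        cut = rng.integers(1, n_features)                        # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        child += rng.normal(scale=0.05, size=n_features)         # mutation
        children.append(np.clip(child, 0, 1))
    population = np.vstack([parents, children])

best = population[np.argmax([witness_rating(c) for c in population])]
print(np.round(np.abs(best - target).mean(), 3))
```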

  3. Facial Fractures: Pearls and Perspectives.

    PubMed

    Chaudhry, Obaid; Isakson, Matthew; Franklin, Adam; Maqusi, Suhair; El Amm, Christian

    2018-05-01

    After studying this article, the participant should be able to: 1. Describe the A-frame configuration of anterior facial buttresses, recognize the importance of restoring anterior projection in frontal sinus fractures, and describe an alternative design and donor site of pericranial flaps in frontal sinus fractures. 2. Describe the symptoms and cause of pseudo-Brown syndrome, describe the anatomy and placement of a buttress-spanning plate in nasoorbitoethmoid fractures, and identify appropriate nasal support alternatives for nasoorbitoethmoid fractures. 3. Describe the benefits and disadvantages of different lower lid approaches to the orbital floor and inferior rim, identify late exophthalmos as a complication of reconstructing the orbital floor with nonporous alloplast, and select implant type and size for correction of secondary enophthalmos. 4. Describe closed reduction of low-energy zygomatic body fractures with the Gillies approach and identify situations where internal fixation may be unnecessary, identify situations where plating the inferior orbital rim may be avoided, and select fixation points for osteosynthesis of uncomplicated displaced zygomatic fractures. 5. Understand indications and complications of use for intermaxillary screw systems, understand sequencing panfacial fractures, describe the sulcular approach to mandible fractures, and describe principles and techniques of facial reconstruction after self-inflicted firearm injuries. Treating patients with facial trauma remains a core component of plastic surgery and a significant part of the value of a plastic surgeon to a health system.

  4. [Peripheral facial nerve lesion induced long-term dendritic retraction in pyramidal cortico-facial neurons].

    PubMed

    Urrego, Diana; Múnera, Alejandro; Troncoso, Julieta

    2011-01-01

    Little evidence is available concerning the morphological modifications of motor cortex neurons associated with peripheral nerve injuries, and the consequences of those injuries on post-lesion functional recovery. Dendritic branching of cortico-facial neurons was characterized with respect to the effects of irreversible facial nerve injury. Twenty-four adult male rats were distributed into four groups: a sham group (no lesion surgery) and three groups with dendritic assessment at 1, 3 and 5 weeks post surgery. Eighteen lesioned animals underwent surgical transection of the mandibular and buccal branches of the facial nerve. Dendritic branching was examined in contralateral primary motor cortex slices stained with the Golgi-Cox technique. Layer V pyramidal (cortico-facial) neurons from sham and injured animals were reconstructed and their dendritic branching was compared using Sholl analysis. Animals with facial nerve lesions displayed persistent vibrissal paralysis throughout the five-week observation period. Compared with control animal neurons, cortico-facial pyramidal neurons of surgically injured animals displayed shrinkage of their dendritic branches at statistically significant levels. This shrinkage persisted for at least five weeks after facial nerve injury. Irreversible facial motoneuron axonal damage induced persistent dendritic arborization shrinkage in contralateral cortico-facial neurons. This morphological reorganization may be the physiological basis of functional sequelae observed in peripheral facial palsy patients.

  5. Does vigilance to pain make individuals experts in facial recognition of pain?

    PubMed

    Baum, Corinna; Kappesser, Judith; Schneider, Raphaela; Lautenbacher, Stefan

    2013-01-01

    It is well known that individual factors are important in the facial recognition of pain. However, it is unclear whether vigilance to pain as a pain-related attentional mechanism is among these relevant factors. Vigilance to pain may have two different effects on the recognition of facial pain expressions: pain-vigilant individuals may detect pain faces better but overinclude other facial displays, misinterpreting them as expressing pain; or they may be true experts in discriminating between pain and other facial expressions. The present study aimed to test these two hypotheses. Furthermore, pain vigilance was assumed to be a distinct predictor, the impact of which on recognition cannot be completely replaced by related concepts such as pain catastrophizing and fear of pain. Photographs of neutral, happy, angry and pain facial expressions were presented to 40 healthy participants, who were asked to classify them into the appropriate emotion categories and provide a confidence rating for each classification. Additionally, potential predictors of the discrimination performance for pain and anger faces - pain vigilance, pain-related catastrophizing, fear of pain - were assessed using self-report questionnaires. Pain-vigilant participants classified pain faces more accurately and did not misclassify anger as pain faces more frequently. However, vigilance to pain was not related to the confidence of recognition ratings. Pain catastrophizing and fear of pain did not account for the recognition performance. Moderate pain vigilance, as assessed in the present study, appears to be associated with appropriate detection of pain-related cues and not necessarily with the overinclusion of other negative cues.

  6. Laterality of facial expressions of emotion: Universal and culture-specific influences.

    PubMed

    Mandal, Manas K; Ambady, Nalini

    2004-01-01

    Recent research indicates that (a) the perception and expression of facial emotion are lateralized to a great extent in the right hemisphere, and, (b) whereas facial expressions of emotion embody universal signals, culture-specific learning moderates the expression and interpretation of these emotions. In the present article, we review the literature on laterality and universality, and propose that, although some components of facial expressions of emotion are governed biologically, others are culturally influenced. We suggest that the left side of the face is more expressive of emotions, is more uninhibited, and displays culture-specific emotional norms. The right side of the face, on the other hand, is less susceptible to cultural display norms and exhibits more universal emotional signals. Copyright 2004 IOS Press

  7. Repeated short presentations of morphed facial expressions change recognition and evaluation of facial expressions.

    PubMed

    Moriya, Jun; Tanno, Yoshihiko; Sugiura, Yoshinori

    2013-11-01

    This study investigated whether sensitivity to and evaluation of facial expressions varied with repeated exposure to non-prototypical facial expressions for a short presentation time. A morphed facial expression was presented for 500 ms repeatedly, and participants were required to indicate whether each facial expression was happy or angry. We manipulated the distribution of presentations of the morphed facial expressions for each facial stimulus. Some of the individuals depicted in the facial stimuli expressed anger frequently (i.e., anger-prone individuals), while the others expressed happiness frequently (i.e., happiness-prone individuals). After being exposed to the faces of anger-prone individuals, the participants became less sensitive to those individuals' angry faces. Further, after being exposed to the faces of happiness-prone individuals, the participants became less sensitive to those individuals' happy faces. We also found a relative increase in the social desirability of happiness-prone individuals after exposure to the facial stimuli.

  8. Targeted presurgical decompensation in patients with yaw-dependent facial asymmetry

    PubMed Central

    Kim, Kyung-A; Lee, Ji-Won; Park, Jeong-Ho; Kim, Byoung-Ho; Ahn, Hyo-Won

    2017-01-01

    Facial asymmetry can be classified into the rolling-dominant type (R-type), translation-dominant type (T-type), yawing-dominant type (Y-type), and atypical type (A-type) based on the distorted skeletal components that cause canting, translation, and yawing of the maxilla and/or mandible. Each facial asymmetry type represents dentoalveolar compensations in three dimensions that correspond to the main skeletal discrepancies. To obtain sufficient surgical correction, it is necessary to analyze the main skeletal discrepancies contributing to the facial asymmetry and then the skeletal-dental relationships in the maxilla and mandible separately. Particularly in cases of facial asymmetry accompanied by mandibular yawing, it is not simple to establish pre-surgical goals of tooth movement since chin deviation and posterior gonial prominence can be either aggravated or compromised according to the direction of mandibular yawing. Thus, strategic dentoalveolar decompensations targeting the real basal skeletal discrepancies should be performed during presurgical orthodontic treatment to allow for sufficient skeletal correction with stability. In this report, we document targeted decompensation of two asymmetry patients focusing on more complicated yaw-dependent types than others: Y-type and A-type. This may suggest a clinical guideline on targeted decompensation in patients with different types of facial asymmetries. PMID:28523246

  9. Targeted presurgical decompensation in patients with yaw-dependent facial asymmetry.

    PubMed

    Kim, Kyung-A; Lee, Ji-Won; Park, Jeong-Ho; Kim, Byoung-Ho; Ahn, Hyo-Won; Kim, Su-Jung

    2017-05-01

    Facial asymmetry can be classified into the rolling-dominant type (R-type), translation-dominant type (T-type), yawing-dominant type (Y-type), and atypical type (A-type) based on the distorted skeletal components that cause canting, translation, and yawing of the maxilla and/or mandible. Each facial asymmetry type represents dentoalveolar compensations in three dimensions that correspond to the main skeletal discrepancies. To obtain sufficient surgical correction, it is necessary to analyze the main skeletal discrepancies contributing to the facial asymmetry and then the skeletal-dental relationships in the maxilla and mandible separately. Particularly in cases of facial asymmetry accompanied by mandibular yawing, it is not simple to establish pre-surgical goals of tooth movement since chin deviation and posterior gonial prominence can be either aggravated or compromised according to the direction of mandibular yawing. Thus, strategic dentoalveolar decompensations targeting the real basal skeletal discrepancies should be performed during presurgical orthodontic treatment to allow for sufficient skeletal correction with stability. In this report, we document targeted decompensation of two asymmetry patients focusing on more complicated yaw-dependent types than others: Y-type and A-type. This may suggest a clinical guideline on targeted decompensation in patients with different types of facial asymmetries.

  10. Gender, age, and psychosocial context of the perception of facial esthetics.

    PubMed

    Tole, Nikoleta; Lajnert, Vlatka; Kovacevic Pavicic, Daniela; Spalj, Stjepan

    2014-01-01

    To explore the effects of gender, age, and psychosocial context on the perception of facial esthetics. The study included 1,444 Caucasian subjects aged 16 to 85 years. Two sets of color photographs illustrating 13 male and 13 female Caucasian facial type alterations, representing different skeletal and dentoalveolar components of sagittal maxillary-mandibular relationships, were used to estimate the facial profile attractiveness. The examinees graded the profiles based on a 0 to 10 numerical rating scale. The examinees graded the profiles of their own sex only from a social perspective, whereas opposite sex profiles were graded both from the social and emotional perspective separately. The perception of facial esthetics was found to be related to the gender, age, and psychosocial context of evaluation (p < 0.05). The most attractive profiles to men are the orthognathic female profile from the social perspective and the moderate bialveolar protrusion from the emotional perspective. The most attractive profile to women is the orthognathic male profile, when graded from the social aspect, and the mild bialveolar retrusion when graded from the emotional aspect. The age increase of the assessor results in a higher attractiveness grade. When planning treatment that modifies the facial profile, the clinician should bear in mind that the perception of facial profile esthetics is a complex phenomenon influenced by biopsychosocial factors. This study allows a better understanding of the concept of perception of facial esthetics that includes gender, age, and psychosocial context. © 2013 Wiley Periodicals, Inc.

  11. Chondromyxoid fibroma of the mastoid facial nerve canal mimicking a facial nerve schwannoma.

    PubMed

    Thompson, Andrew L; Bharatha, Aditya; Aviv, Richard I; Nedzelski, Julian; Chen, Joseph; Bilbao, Juan M; Wong, John; Saad, Reda; Symons, Sean P

    2009-07-01

    Chondromyxoid fibroma of the skull base is a rare entity. Involvement of the temporal bone is particularly rare. We present an unusual case of progressive facial nerve paralysis with imaging and clinical findings most suggestive of a facial nerve schwannoma. The lesion was tubular in appearance, expanded the mastoid facial nerve canal, protruded out of the stylomastoid foramen, and enhanced homogeneously. The only unusual imaging feature was minor calcification within the tumor. Surgery revealed an irregular, cystic lesion. Pathology diagnosed a chondromyxoid fibroma involving the mastoid portion of the facial nerve canal, destroying the facial nerve.

  12. Association between ratings of facial attractiveness and patients' motivation for orthognathic surgery.

    PubMed

    Vargo, J K; Gladwin, M; Ngan, P

    2003-02-01

    To compare the judgments of facial esthetics, defects and treatment needs between laypersons and professionals (orthodontists and oral surgeons) as predictors of patients' motivation for orthognathic surgery. Two panels of expert and naïve raters were asked to evaluate photographs of orthognathic surgery patients for facial esthetics, defects and treatment needs. Results were correlated with patients' motivation for surgery. Fifty-seven patients (37 females and 20 males) with a mean age of 26.0 +/- 6.7 years were interviewed prior to orthognathic surgery treatment. Three color photographs of each patient were evaluated by a panel of 14 experts and a panel of 18 laypersons. Each panel of raters was asked to evaluate facial morphology and facial attractiveness and to recommend surgical treatment (independent variables). The dependent variable was the patient's motivation for orthognathic surgery. Outcome measure: reliability of raters was analyzed using an unweighted Kappa coefficient and a Cronbach alpha coefficient. Correlations and regression analyses were used to quantify the relationship between variables. Expert raters provided reliable ratings of certain morphological features such as excessive gingival display and classification of mandibular facial form and position. Based on the facial photographs, both expert and naïve raters agreed on the facial attractiveness of patients. The best predictors of patients' motivation for surgery were the naïve profile attractiveness rating and the patients' expected change in self-consciousness. Expert raters provide more reliable ratings on certain morphologic features. However, the layperson's profile attractiveness rating and the patients' expected change in self-consciousness were the best predictors of patients' motivation for surgery. These data suggest that patients' motives for treatment are not necessarily related to objectively determined need. Patients' decision to seek treatment was more correlated to laypersons

  13. Caricaturing facial expressions.

    PubMed

    Calder, A J; Rowland, D; Young, A W; Nimmo-Smith, I; Keane, J; Perrett, D I

    2000-08-14

    The physical differences between facial expressions (e.g. fear) and a reference norm (e.g. a neutral expression) were altered to produce photographic-quality caricatures. In Experiment 1, participants rated caricatures of fear, happiness and sadness for their intensity of these three emotions; a second group of participants rated how 'face-like' the caricatures appeared. With increasing levels of exaggeration the caricatures were rated as more emotionally intense, but less 'face-like'. Experiment 2 demonstrated a similar relationship between emotional intensity and level of caricature for six different facial expressions. Experiments 3 and 4 compared intensity ratings of facial expression caricatures prepared relative to a selection of reference norms - a neutral expression, an average expression, or a different facial expression (e.g. anger caricatured relative to fear). Each norm produced a linear relationship between caricature and rated intensity of emotion; this finding is inconsistent with two-dimensional models of the perceptual representation of facial expression. An exemplar-based multidimensional model is proposed as an alternative account.
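
    The geometric core of the caricaturing procedure, exaggerating the difference between an expression and a reference norm by a factor k, can be sketched on landmark coordinates. The landmarks here are random stand-ins, and the photographic-quality texture morphing used for the published stimuli is not reproduced.

```python
import numpy as np

# Minimal sketch of the caricaturing idea on 2-D landmarks: exaggerate the
# difference between an expression and a reference norm by a factor k.
rng = np.random.default_rng(5)
n_landmarks = 68
neutral_norm = rng.random((n_landmarks, 2))                   # reference norm
fear_expression = neutral_norm + rng.normal(scale=0.02, size=(n_landmarks, 2))

def caricature(expression, norm, k):
    """k = 0 returns the norm, k = 1 the original expression, k > 1 a caricature."""
    return norm + k * (expression - norm)

for k in (0.5, 1.0, 1.5):                                     # anti-caricature, original, caricature
    exaggerated = caricature(fear_expression, neutral_norm, k)
    print(k, np.linalg.norm(exaggerated - neutral_norm).round(4))
```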

  14. Is moral beauty different from facial beauty? Evidence from an fMRI study.

    PubMed

    Wang, Tingting; Mo, Lei; Mo, Ce; Tan, Li Hai; Cant, Jonathan S; Zhong, Luojin; Cupchik, Gerald

    2015-06-01

    Is moral beauty different from facial beauty? Two functional magnetic resonance imaging experiments were performed to answer this question. Experiment 1 investigated the network of moral aesthetic judgments and facial aesthetic judgments. Participants performed aesthetic judgments and gender judgments on both faces and scenes containing moral acts. The conjunction analysis of the contrasts 'facial aesthetic judgment > facial gender judgment' and 'scene moral aesthetic judgment > scene gender judgment' identified the common involvement of the orbitofrontal cortex (OFC), inferior temporal gyrus and medial superior frontal gyrus, suggesting that both types of aesthetic judgments are based on the orchestration of perceptual, emotional and cognitive components. Experiment 2 examined the network of facial beauty and moral beauty during implicit perception. Participants performed a non-aesthetic judgment task on both faces (beautiful vs common) and scenes (containing morally beautiful vs neutral information). We observed that facial beauty (beautiful faces > common faces) involved both the cortical reward region OFC and the subcortical reward region putamen, whereas moral beauty (moral beauty scenes > moral neutral scenes) only involved the OFC. Moreover, compared with facial beauty, moral beauty spanned a larger-scale cortical network, indicating more advanced and complex cerebral representations characterizing moral beauty. © The Author (2014). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  15. Classification of parotidectomies: a proposal of the European Salivary Gland Society.

    PubMed

    Quer, M; Guntinas-Lichius, O; Marchal, F; Vander Poorten, V; Chevalier, D; León, X; Eisele, D; Dulguerov, P

    2016-10-01

    The objective of this study is to provide a comprehensive classification system for parotidectomy operations. Data sources include Medline publications, the authors' experience, and a consensus round table at the Third European Salivary Gland Society (ESGS) Meeting. The Medline database was searched with the terms "parotidectomy" and "definition". The various definitions of parotidectomy procedures and parotid gland subdivisions were extracted. Previous classification systems were re-examined and a new classification was proposed by consensus. The ESGS proposes to subdivide the parotid parenchyma into five levels: I (lateral superior), II (lateral inferior), III (deep inferior), IV (deep superior), V (accessory). A new classification is proposed in which the type of resection is divided into formal parotidectomy with facial nerve dissection and extracapsular dissection. Parotidectomies are further classified according to the levels removed, as well as the extra-parotid structures ablated. A new classification of parotidectomy procedures is proposed.

  16. Enhancing facial features by using clear facial features

    NASA Astrophysics Data System (ADS)

    Rofoo, Fanar Fareed Hanna

    2017-09-01

    The similarity of features between individuals of the same ethnicity motivated this project. The idea is to extract features from a clear facial image and impose them on a blurred facial image of the same ethnic origin, as an approach to enhancing the blurred image. A database of clear images was assembled, containing 30 individuals equally divided into five ethnicities: Arab, African, Chinese, European and Indian. Software was built to pre-process the images in order to align the features of the clear and blurred images. The approach was to extract features from a clear facial image, or from a template built from clear facial images, using the wavelet transform, and to impose them on the blurred image using the inverse wavelet transform. The results of this approach were not satisfactory, as the features did not all align: in most cases the eyes were aligned but the nose or mouth were not. In a second approach we dealt with each feature separately, but in some cases a blocky effect appeared on the features because no closely matching features were available. In general, the small available database did not allow the goal results to be achieved because of the limited number of individuals. Colour information and feature similarity could be investigated further to achieve better results, with a larger database providing closer matches in each ethnicity to improve the enhancement process.
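
    A rough sketch of the wavelet-based feature transfer described above, assuming the pywt package and two pre-aligned grayscale images. The choice of wavelet, decomposition level, and which coefficients to swap are illustrative assumptions rather than the settings used in the project.

```python
import numpy as np
import pywt

# Rough sketch of the idea described above: replace the detail (high-frequency)
# wavelet coefficients of a blurred face with those of a clear, aligned template
# of the same ethnicity, then reconstruct with the inverse transform.
# Both images are synthetic stand-ins and assumed to be pre-aligned.
rng = np.random.default_rng(6)
clear_template = rng.random((128, 128))
blurred_face = rng.random((128, 128))

wavelet, level = "db2", 2
coeffs_clear = pywt.wavedec2(clear_template, wavelet, level=level)
coeffs_blur = pywt.wavedec2(blurred_face, wavelet, level=level)

# Keep the blurred image's low-frequency approximation, inject the template's details.
merged = [coeffs_blur[0]] + coeffs_clear[1:]
enhanced = pywt.waverec2(merged, wavelet)
print(enhanced.shape)
```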

  17. Relationship between individual differences in functional connectivity and facial-emotion recognition abilities in adults with traumatic brain injury.

    PubMed

    Rigon, A; Voss, M W; Turkstra, L S; Mutlu, B; Duff, M C

    2017-01-01

    Although several studies have demonstrated that facial-affect recognition impairment is common following moderate-severe traumatic brain injury (TBI), and that there are diffuse alterations in large-scale functional brain networks in TBI populations, little is known about the relationship between the two. Here, in a sample of 26 participants with TBI and 20 healthy comparison participants (HC) we measured facial-affect recognition abilities and resting-state functional connectivity (rs-FC) using fMRI. We then used network-based statistics to examine (A) the presence of rs-FC differences between individuals with TBI and HC within the facial-affect processing network, and (B) the association between inter-individual differences in emotion recognition skills and rs-FC within the facial-affect processing network. We found that participants with TBI showed significantly lower rs-FC in a component comprising homotopic and within-hemisphere, anterior-posterior connections within the facial-affect processing network. In addition, within the TBI group, participants with higher emotion-labeling skills showed stronger rs-FC within a network comprised of intra- and inter-hemispheric bilateral connections. Findings indicate that the ability to successfully recognize facial-affect after TBI is related to rs-FC within components of facial-affective networks, and provide new evidence that further our understanding of the mechanisms underlying emotion recognition impairment in TBI.

  18. Human Classification Based on Gestural Motions by Using Components of PCA

    NASA Astrophysics Data System (ADS)

    Aziz, Azri A.; Wan, Khairunizam; Za'aba, S. K.; B, Shahriman A.; Adnan, Nazrul H.; H, Asyekin; R, Zuradzman M.

    2013-12-01

    Lately, the study of human capabilities with the aim of integrating them into machines has become a widely discussed topic. Humans are blessed with special abilities: they can hear, see, sense, speak, think and understand each other. Giving such abilities to machines to improve human life is the researchers' aim for a better quality of life in the future. This research concentrated on human gesture, specifically arm motions, for distinguishing individuals, which led to the development of a hand gesture database. We try to differentiate human physical characteristics based on hand gestures represented by arm trajectories. Subjects were selected with different body sizes, and the acquired data then underwent a resampling process. The results discuss the classification of humans based on arm trajectories using Principal Component Analysis (PCA).
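
    A small sketch of the classification idea: resampled arm trajectories are flattened, projected onto principal components, and classified. The simulated trajectories, the number of components, and the nearest-neighbour classifier are illustrative choices, not the study's actual data or classifier.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

# Illustrative sketch: arm trajectories are resampled to a fixed length,
# flattened, and projected onto principal components before classification.
# The gesture data themselves are simulated here.
rng = np.random.default_rng(7)
n_subjects, reps, n_points = 5, 20, 100

def simulate_trajectory(subject_id):
    t = np.linspace(0, 1, n_points)
    x = np.sin(2 * np.pi * t * (1 + 0.1 * subject_id)) + 0.05 * rng.normal(size=n_points)
    y = np.cos(2 * np.pi * t) * (1 + 0.05 * subject_id) + 0.05 * rng.normal(size=n_points)
    return np.column_stack([x, y]).ravel()                  # resampled, flattened (x, y) trajectory

X = np.array([simulate_trajectory(s) for s in range(n_subjects) for _ in range(reps)])
y = np.repeat(np.arange(n_subjects), reps)

scores = PCA(n_components=5).fit_transform(X)               # PCA components as gesture features
clf = KNeighborsClassifier(n_neighbors=3).fit(scores[::2], y[::2])
print("held-out accuracy:", clf.score(scores[1::2], y[1::2]).round(2))
```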

  19. Facial reanimation by muscle-nerve neurotization after facial nerve sacrifice. Case report.

    PubMed

    Taupin, A; Labbé, D; Babin, E; Fromager, G

    2016-12-01

    Recovering a certain degree of mimicry after sacrifice of the facial nerve is a clinically recognized finding. The authors report a case of hemifacial reanimation suggesting a phenomenon of muscle-to-nerve neurotization. A woman underwent a parotidectomy with sacrifice of the left facial nerve, indicated for a recurrent tumor in the gland. The distal branches of the facial nerve, isolated at the time of resection, were buried in the underlying masseter muscle. The patient recovered voluntary hemifacial motricity. Electromyographic analysis of the motor activity of the zygomaticus major before and after block of the masseter nerve showed a dependence between the mimic muscles and the masseter muscle. Several hypotheses have been advanced to explain the spontaneous reanimation of facial paralysis. This clinical case argues in favor of muscle-to-nerve neurotization from the masseter muscle to the distal branches of the facial nerve. It illustrates the quality of motricity that can be obtained with this procedure. The authors describe a simple technique of implanting the distal branches of the facial nerve in the masseter muscle during radical parotidectomy with facial nerve sacrifice, with recovery of resting tone as well as good-quality voluntary mimicry. Copyright © 2015 Elsevier Masson SAS. All rights reserved.

  20. Changing perception: facial reanimation surgery improves attractiveness and decreases negative facial perception.

    PubMed

    Dey, Jacob K; Ishii, Masaru; Boahene, Kofi D O; Byrne, Patrick J; Ishii, Lisa E

    2014-01-01

    Determine the effect of facial reanimation surgery on observer-graded attractiveness and negative facial perception of patients with facial paralysis. Randomized controlled experiment. Ninety observers viewed images of paralyzed faces, smiling and in repose, before and after reanimation surgery, as well as normal comparison faces. Observers rated the attractiveness of each face and characterized the paralyzed faces by rating severity, disfigured/bothersome, and importance to repair. Iterated factor analysis indicated these highly correlated variables measure a common domain, so they were combined to create the disfigured, important to repair, bothersome, severity (DIBS) factor score. Mixed effects linear regression determined the effect of facial reanimation surgery on attractiveness and DIBS score. Facial paralysis induces an attractiveness penalty of 2.51 on a 10-point scale for faces in repose and 3.38 for smiling faces. Mixed effects linear regression showed that reanimation surgery improved attractiveness for faces both in repose and smiling by 0.84 (95% confidence interval [CI]: 0.67, 1.01) and 1.24 (95% CI: 1.07, 1.42) respectively. Planned hypothesis tests confirmed statistically significant differences in attractiveness ratings between postoperative and normal faces, indicating attractiveness was not completely normalized. Regression analysis also showed that reanimation surgery decreased DIBS by 0.807 (95% CI: 0.704, 0.911) for faces in repose and 0.989 (95% CI: 0.886, 1.093), an entire standard deviation, for smiling faces. Facial reanimation surgery increases attractiveness and decreases negative facial perception of patients with facial paralysis. These data emphasize the need to optimize reanimation surgery to restore not only function, but also symmetry and cosmesis to improve facial perception and patient quality of life. © 2013 The American Laryngological, Rhinological and Otological Society, Inc.

  1. Automated Facial Recognition of Computed Tomography-Derived Facial Images: Patient Privacy Implications.

    PubMed

    Parks, Connie L; Monson, Keith L

    2017-04-01

    The recognizability of facial images extracted from publicly available medical scans raises patient privacy concerns. This study examined how accurately facial images extracted from computed tomography (CT) scans are objectively matched with corresponding photographs of the scanned individuals. The test subjects were 128 adult Americans ranging in age from 18 to 60 years, representing both sexes and three self-identified population (ancestral descent) groups (African, European, and Hispanic). Using facial recognition software, the 2D images of the extracted facial models were compared for matches against five differently sized photo galleries. Depending on the scanning protocol and gallery size, in 6-61 % of the cases, a correct life photo match for a CT-derived facial image was the top ranked image in the generated candidate lists, even when blind searching in excess of 100,000 images. In 31-91 % of the cases, a correct match was located within the top 50 images. Few significant differences (p > 0.05) in match rates were observed between the sexes or across the three age cohorts. Highly significant differences (p < 0.01) were, however, observed across the three ancestral cohorts and between the two CT scanning protocols. Results suggest that the probability of a match between a facial image extracted from a medical scan and a photograph of the individual is moderately high. The facial image data inherent in commonly employed medical imaging modalities may therefore need to be considered a potentially identifiable form of "comparable" facial imagery and protected as such under patient privacy legislation.

  2. Driver Fatigue Classification With Independent Component by Entropy Rate Bound Minimization Analysis in an EEG-Based System.

    PubMed

    Chai, Rifai; Naik, Ganesh R; Nguyen, Tuan Nghia; Ling, Sai Ho; Tran, Yvonne; Craig, Ashley; Nguyen, Hung T

    2017-05-01

    This paper presents a two-class electroencephalography-based classification for classifying driver fatigue (fatigue state versus alert state) from 43 healthy participants. The system uses independent component analysis by entropy rate bound minimization (ERBM-ICA) for source separation, autoregressive (AR) modeling for feature extraction, and a Bayesian neural network as the classification algorithm. The classification results demonstrate a sensitivity of 89.7%, a specificity of 86.8%, and an accuracy of 88.2%. The combination of ERBM-ICA (source separator), AR (feature extractor), and Bayesian neural network (classifier) provides the best outcome, with a p-value < 0.05 and the highest value of area under the receiver operating curve (AUC-ROC = 0.93) against other methods such as power spectral density as feature extractor (AUC-ROC = 0.81). The results of this study suggest the method could be utilized effectively in a countermeasure device for driver fatigue identification and other adverse event applications.
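
    A sketch of the feature-extraction and classification stages only, on simulated EEG, assuming the statsmodels and scikit-learn packages. AR coefficients per channel serve as features, and a standard scikit-learn neural network stands in for the Bayesian neural network; the ERBM-ICA source-separation step is not available in common libraries and is omitted here.

```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg
from sklearn.neural_network import MLPClassifier

# Sketch of the feature/classifier stages only: AR coefficients per EEG channel
# as features, with a generic neural network standing in for the Bayesian
# network used in the paper. All data are simulated.
rng = np.random.default_rng(8)
n_trials, n_channels, n_samples, ar_order = 60, 4, 256, 8

def ar_features(trial):
    """Concatenate AR model coefficients across channels for one trial."""
    feats = []
    for channel in trial:
        model = AutoReg(channel, lags=ar_order).fit()
        feats.append(model.params[1:])                       # drop the intercept term
    return np.concatenate(feats)

eeg = rng.normal(size=(n_trials, n_channels, n_samples))
labels = rng.integers(0, 2, size=n_trials)                   # 0 = alert, 1 = fatigued
eeg[labels == 1] += 0.3 * np.sin(np.linspace(0, 40 * np.pi, n_samples))  # crude class difference

X = np.array([ar_features(t) for t in eeg])
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X[:40], labels[:40])
print("held-out accuracy:", clf.score(X[40:], labels[40:]).round(2))
```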

  3. External facial features modify the representation of internal facial features in the fusiform face area.

    PubMed

    Axelrod, Vadim; Yovel, Galit

    2010-08-15

    Most studies of face identity have excluded external facial features by either removing them or covering them with a hat. However, external facial features may modify the representation of internal facial features. Here we assessed whether the representation of face identity in the fusiform face area (FFA), which has been primarily studied for internal facial features, is modified by differences in external facial features. We presented faces in which external and internal facial features were manipulated independently. Our findings show that the FFA was sensitive to differences in external facial features, but this effect was significantly larger when the external and internal features were aligned than misaligned. We conclude that the FFA generates a holistic representation in which the internal and the external facial features are integrated. These results indicate that to better understand real-life face recognition both external and internal features should be included. Copyright (c) 2010 Elsevier Inc. All rights reserved.

  4. Role of electrical stimulation added to conventional therapy in patients with idiopathic facial (Bell) palsy.

    PubMed

    Tuncay, Figen; Borman, Pinar; Taşer, Burcu; Ünlü, İlhan; Samim, Erdal

    2015-03-01

    The aim of this study was to determine the efficacy of electrical stimulation when added to conventional physical therapy with regard to clinical and neurophysiologic changes in patients with Bell palsy. This was a randomized controlled trial. Sixty patients diagnosed with Bell palsy (39 right sided, 21 left sided) were included in the study. Patients were randomly divided into two therapy groups. Group 1 received physical therapy applying hot packs, facial expression exercises, and massage to the facial muscles, whereas group 2 received electrical stimulation treatment in addition to the physical therapy, 5 days per week for a period of 3 wks. Patients were evaluated clinically and electrophysiologically before treatment (at the fourth week of the palsy) and again 3 mos later. Outcome measures included the House-Brackmann scale and Facial Disability Index scores, as well as facial nerve latencies and amplitudes of compound muscle action potentials derived from the frontalis and orbicularis oris muscles. Twenty-nine men (48.3%) and 31 women (51.7%) with Bell palsy were included in the study. In group 1, 16 (57.1%) patients had no axonal degeneration and 12 (42.9%) had axonal degeneration, compared with 17 (53.1%) and 15 (46.9%) patients in group 2, respectively. The baseline House-Brackmann and Facial Disability Index scores were similar between the groups. At 3 mos after onset, the Facial Disability Index scores were improved similarly in both groups. The classification of patients according to the House-Brackmann scale revealed greater improvement in group 2 than in group 1. The mean motor nerve latencies and compound muscle action potential amplitudes of both facial muscles were statistically shorter in group 2, whereas only the mean motor latency of the frontalis muscle decreased in group 1. The addition of 3 wks of daily electrical stimulation shortly after facial palsy onset (4 wks) improved functional facial movements and electrophysiologic outcome measures at

  5. Does vigilance to pain make individuals experts in facial recognition of pain?

    PubMed Central

    Baum, Corinna; Kappesser, Judith; Schneider, Raphaela; Lautenbacher, Stefan

    2013-01-01

    BACKGROUND: It is well known that individual factors are important in the facial recognition of pain. However, it is unclear whether vigilance to pain as a pain-related attentional mechanism is among these relevant factors. OBJECTIVES: Vigilance to pain may have two different effects on the recognition of facial pain expressions: pain-vigilant individuals may detect pain faces better but overinclude other facial displays, misinterpreting them as expressing pain; or they may be true experts in discriminating between pain and other facial expressions. The present study aimed to test these two hypotheses. Furthermore, pain vigilance was assumed to be a distinct predictor, the impact of which on recognition cannot be completely replaced by related concepts such as pain catastrophizing and fear of pain. METHODS: Photographs of neutral, happy, angry and pain facial expressions were presented to 40 healthy participants, who were asked to classify them into the appropriate emotion categories and provide a confidence rating for each classification. Additionally, potential predictors of the discrimination performance for pain and anger faces – pain vigilance, pain-related catastrophizing, fear of pain – were assessed using self-report questionnaires. RESULTS: Pain-vigilant participants classified pain faces more accurately and did not misclassify anger as pain faces more frequently. However, vigilance to pain was not related to the confidence of recognition ratings. Pain catastrophizing and fear of pain did not account for the recognition performance. CONCLUSIONS: Moderate pain vigilance, as assessed in the present study, appears to be associated with appropriate detection of pain-related cues and not necessarily with the overinclusion of other negative cues. PMID:23717826

  6. Does Facial Amimia Impact the Recognition of Facial Emotions? An EMG Study in Parkinson’s Disease

    PubMed Central

    Argaud, Soizic; Delplanque, Sylvain; Houvenaghel, Jean-François; Auffret, Manon; Duprez, Joan; Vérin, Marc; Grandjean, Didier; Sauleau, Paul

    2016-01-01

    According to embodied simulation theory, understanding other people’s emotions is fostered by facial mimicry. However, studies assessing the effect of facial mimicry on the recognition of emotion are still controversial. In Parkinson’s disease (PD), one of the most distinctive clinical features is facial amimia, a reduction in facial expressiveness, but patients also show emotional disturbances. The present study used the pathological model of PD to examine the role of facial mimicry on emotion recognition by investigating EMG responses in PD patients during a facial emotion recognition task (anger, joy, neutral). Our results evidenced a significant decrease in facial mimicry for joy in PD, essentially linked to the absence of reaction of the zygomaticus major and the orbicularis oculi muscles in response to happy avatars, whereas facial mimicry for expressions of anger was relatively preserved. We also confirmed that PD patients were less accurate in recognizing positive and neutral facial expressions and highlighted a beneficial effect of facial mimicry on the recognition of emotion. We thus provide additional arguments for embodied simulation theory suggesting that facial mimicry is a potential lever for therapeutic actions in PD even if it seems not to be necessarily required in recognizing emotion as such. PMID:27467393

  7. Incongruence Between Observers’ and Observed Facial Muscle Activation Reduces Recognition of Emotional Facial Expressions From Video Stimuli

    PubMed Central

    Wingenbach, Tanja S. H.; Brosnan, Mark; Pfaltz, Monique C.; Plichta, Michael M.; Ashwin, Chris

    2018-01-01

    According to embodied cognition accounts, viewing others’ facial emotion can elicit the respective emotion representation in observers which entails simulations of sensory, motor, and contextual experiences. In line with that, published research found viewing others’ facial emotion to elicit automatic matched facial muscle activation, which was further found to facilitate emotion recognition. Perhaps making congruent facial muscle activity explicit produces an even greater recognition advantage. If there is conflicting sensory information, i.e., incongruent facial muscle activity, this might impede recognition. The effects of actively manipulating facial muscle activity on facial emotion recognition from videos were investigated across three experimental conditions: (a) explicit imitation of viewed facial emotional expressions (stimulus-congruent condition), (b) pen-holding with the lips (stimulus-incongruent condition), and (c) passive viewing (control condition). It was hypothesised that (1) experimental condition (a) and (b) result in greater facial muscle activity than (c), (2) experimental condition (a) increases emotion recognition accuracy from others’ faces compared to (c), (3) experimental condition (b) lowers recognition accuracy for expressions with a salient facial feature in the lower, but not the upper face area, compared to (c). Participants (42 males, 42 females) underwent a facial emotion recognition experiment (ADFES-BIV) while electromyography (EMG) was recorded from five facial muscle sites. The experimental conditions’ order was counter-balanced. Pen-holding caused stimulus-incongruent facial muscle activity for expressions with facial feature saliency in the lower face region, which reduced recognition of lower face region emotions. Explicit imitation caused stimulus-congruent facial muscle activity without modulating recognition. Methodological implications are discussed. PMID:29928240

  9. [The application of facial liposuction and fat grafting in the remodeling of facial contour].

    PubMed

    Wen, Huicai; Ma, Li; Sui, Ynnpeng; Jian, Xueping

    2015-03-01

    To investigate the application of facial liposuction and fat grafting in the remodeling of facial contour. From Nov. 2008 to Mar. 2014, 49 cases received facial liposuction and fat grafting to improve facial contours. Subcutaneous facial liposuction with the tumescent technique and chin fat grafting were performed in all cases; buccal fat pad excision was performed in 7 cases, masseter injection of botulinum toxin type A in 9 cases, temporal fat grafting in 25 cases, and forehead fat grafting in 15 cases. Marked improvement was achieved in all patients, with stable results during the follow-up period of 6-24 months. Complications such as asymmetry, unevenness, and sagging were retreated with acceptable results. The combined application of liposuction and fat grafting can effectively and easily improve the facial contour with low risk.

  10. Facial Orientation and Facial Shape in Extant Great Apes: A Geometric Morphometric Analysis of Covariation

    PubMed Central

    Neaux, Dimitri; Guy, Franck; Gilissen, Emmanuel; Coudyzer, Walter; Vignaud, Patrick; Ducrocq, Stéphane

    2013-01-01

    The organization of the bony face is complex, its morphology being influenced in part by the rest of the cranium. Characterizing the facial morphological variation and craniofacial covariation patterns in extant hominids is fundamental to the understanding of their evolutionary history. Numerous studies on hominid facial shape have proposed hypotheses concerning the relationship between the anterior facial shape, facial block orientation and basicranial flexion. In this study we test these hypotheses in a sample of adult specimens belonging to three extant hominid genera (Homo, Pan and Gorilla). Intraspecific variation and covariation patterns are analyzed using geometric morphometric methods and multivariate statistics, such as partial least squares on three-dimensional landmark coordinates. Our results indicate significant intraspecific covariation between facial shape, facial block orientation and basicranial flexion. Hominids share similar characteristics in the relationship between anterior facial shape and facial block orientation. Modern humans exhibit a specific pattern in the covariation between anterior facial shape and basicranial flexion. This peculiar feature underscores the role of modern humans' highly flexed basicranium in the overall integration of the cranium. Furthermore, our results are consistent with the hypothesis of a relationship between the reduction of the value of the cranial base angle and a downward rotation of the facial block in modern humans, and to a lesser extent in chimpanzees. PMID:23441232

  11. Facial diplegia: a clinical dilemma.

    PubMed

    Chakrabarti, Debaprasad; Roy, Mukut; Bhattacharyya, Amrit K

    2013-06-01

    Bilateral facial paralysis is a rare clinical entity and presents as a diagnostic challenge. Unlike its unilateral counterpart, facial diplegia is seldom secondary to Bell's palsy. Occurring at a frequency of 0.3% to 2% of all facial palsies, it often indicates ominous medical conditions. Guillain-Barré syndrome needs to be considered as a differential diagnosis in all cases of facial diplegia, where timely treatment would be rewarding. Here a case of bilateral facial palsy due to Guillain-Barré syndrome with an atypical presentation is reported.

  12. Visual attention during the evaluation of facial attractiveness is influenced by facial angles and smile.

    PubMed

    Kim, Seol Hee; Hwang, Soonshin; Hong, Yeon-Ju; Kim, Jae-Jin; Kim, Kyung-Ho; Chung, Chooryung J

    2018-05-01

    To examine the changes in visual attention influenced by facial angles and smile during the evaluation of facial attractiveness. Thirty-three young adults were asked to rate the overall facial attractiveness (task 1 and 3) or to select the most attractive face (task 2) by looking at multiple panel stimuli consisting of 0°, 15°, 30°, 45°, 60°, and 90° rotated facial photos with or without a smile for three model face photos and a self-photo (self-face). Eye gaze and fixation time (FT) were monitored by the eye-tracking device during the performance. Participants were asked to fill out a subjective questionnaire asking, "Which face was primarily looked at when evaluating facial attractiveness?" When rating the overall facial attractiveness (task 1) for model faces, FT was highest for the 0° face and lowest for the 90° face regardless of the smile (P < .01). However, when the most attractive face was to be selected (task 2), the FT of the 0° face decreased, while it significantly increased for the 45° face (P < .001). When facial attractiveness was evaluated with the simplified panels combined with facial angles and smile (task 3), the FT of the 0° smiling face was the highest (P < .01). While most participants reported that they looked mainly at the 0° smiling face when rating facial attractiveness, visual attention was broadly distributed within facial angles. Laterally rotated faces and presence of a smile highly influence visual attention during the evaluation of facial esthetics.

  13. Effects of a small talking facial image on autonomic activity: the moderating influence of dispositional BIS and BAS sensitivities and emotions.

    PubMed

    Ravaja, Niklas

    2004-01-01

    We examined the moderating influence of dispositional behavioral inhibition system (BIS) and behavioral activation system (BAS) sensitivities, Negative Affect, and Positive Affect on the relationship between a small moving vs. static facial image and autonomic responses when viewing/listening to news messages read by a newscaster among 36 young adults. Autonomic parameters measured were respiratory sinus arrhythmia (RSA), low-frequency (LF) component of heart rate variability (HRV), electrodermal activity, and pulse transit time (PTT). The results showed that dispositional BAS sensitivity, particularly BAS Fun Seeking, and Negative Affect interacted with facial image motion in predicting autonomic nervous system activity. A moving facial image was related to lower RSA and LF component of HRV and shorter PTTs as compared to a static facial image among high BAS individuals. Even a small talking facial image may contribute to sustained attentional engagement among high BAS individuals, given that the BAS directs attention toward the positive cue and a moving social stimulus may act as a positive incentive for high BAS individuals.

  14. Hypoglossal-facial nerve "side"-to-side neurorrhaphy for facial paralysis resulting from closed temporal bone fractures.

    PubMed

    Su, Diya; Li, Dezhi; Wang, Shiwei; Qiao, Hui; Li, Ping; Wang, Binbin; Wan, Hong; Schumacher, Michael; Liu, Song

    2018-06-06

    Closed temporal bone fractures due to cranial trauma often result in facial nerve injury, frequently inducing incomplete facial paralysis. Conventional hypoglossal-facial nerve end-to-end neurorrhaphy may not be suitable for these injuries because sacrifice of the lesioned facial nerve for neurorrhaphy destroys the remnant axons and/or potential spontaneous innervation. We therefore modified the classical method, performing hypoglossal-facial nerve "side"-to-side neurorrhaphy with an interpositional predegenerated nerve graft to treat these injuries. Five patients who experienced facial paralysis resulting from closed temporal bone fractures due to cranial trauma were treated with the "side"-to-side neurorrhaphy. An additional 4 patients did not receive the neurorrhaphy and served as controls. Before treatment, all patients had suffered House-Brackmann (H-B) grade V or VI facial paralysis for a mean of 5 months. During the 12-30 months of follow-up period, no further detectable deficits were observed, but an improvement in facial nerve function was evidenced over time in the 5 neurorrhaphy-treated patients. At the end of follow-up, the improved facial function reached H-B grade II in 3, grade III in 1 and grade IV in 1 of the 5 patients, consistent with the electrophysiological examinations. In the control group, two patients showed slightly spontaneous innervation with facial function improved from H-B grade VI to V, and the other patients remained unchanged at H-B grade V or VI. We concluded that the hypoglossal-facial nerve "side"-to-side neurorrhaphy can preserve the injured facial nerve and is suitable for treating significant incomplete facial paralysis resulting from closed temporal bone fractures, providing an evident beneficial effect. Moreover, this treatment may be performed earlier after the onset of facial paralysis in order to reduce the unfavorable changes to the injured facial nerve and atrophy of its target muscles due to long-term denervation and allow axonal

  15. Dissociable roles of internal feelings and face recognition ability in facial expression decoding.

    PubMed

    Zhang, Lin; Song, Yiying; Liu, Ling; Liu, Jia

    2016-05-15

    The problem of emotion recognition has been tackled by researchers in both affective computing and cognitive neuroscience. While affective computing relies on analyzing visual features from facial expressions, it has been proposed that humans recognize emotions by internally simulating the emotional states conveyed by others' expressions, in addition to perceptual analysis of facial features. Here we investigated whether and how our internal feelings contributed to the ability to decode facial expressions. In two independent large samples of participants, we observed that individuals who generally experienced richer internal feelings exhibited a higher ability to decode facial expressions, and the contribution of internal feelings was independent of face recognition ability. Further, using voxel-based morphometry, we found that the gray matter volume (GMV) of bilateral superior temporal sulcus (STS) and the right inferior parietal lobule was associated with facial expression decoding through the mediating effect of internal feelings, while the GMV of bilateral STS, precuneus, and the right central opercular cortex contributed to facial expression decoding through the mediating effect of face recognition ability. In addition, the clusters in bilateral STS involved in the two components were neighboring yet separate. Our results may provide clues about the mechanism by which internal feelings, in addition to face recognition ability, serve as an important instrument for humans in facial expression decoding. Copyright © 2016 Elsevier Inc. All rights reserved.

  16. The face is not an empty canvas: how facial expressions interact with facial appearance.

    PubMed

    Hess, Ursula; Adams, Reginald B; Kleck, Robert E

    2009-12-12

    Faces are not simply blank canvases upon which facial expressions write their emotional messages. In fact, facial appearance and facial movement are both important social signalling systems in their own right. We here provide multiple lines of evidence for the notion that the social signals derived from facial appearance on the one hand and facial movement on the other interact in a complex manner, sometimes reinforcing and sometimes contradicting one another. Faces provide information on who a person is. Sex, age, ethnicity, personality and other characteristics that can define a person and the social group the person belongs to can all be derived from the face alone. The present article argues that faces interact with the perception of emotion expressions because this information informs a decoder's expectations regarding an expresser's probable emotional reactions. Facial appearance also interacts more directly with the interpretation of facial movement because some of the features that are used to derive personality or sex information are also features that closely resemble certain emotional expressions, thereby enhancing or diluting the perceived strength of particular expressions.

  17. [Integration of the functional signal of intraoperative EMG of the facial nerve in to navigation model for surgery of the petrous bone].

    PubMed

    Strauss, G; Strauss, M; Lüders, C; Stopp, S; Shi, J; Dietz, A; Lüth, T

    2008-10-01

    PROBLEM DEFINITION: The goal of this work is the integration of the information from intraoperative EMG monitoring of the facial nerve into the radiological data of the petrous bone. The following hypotheses are examined: (I) the N. VII can be identified intraoperatively with high reliability by the stimulation probe, and a computer program is able to discriminate true-positive EMG signals from false-positive artifacts. (II) The course of the facial nerve can be registered in three-dimensional space from EMG signals on a nerve model in the laboratory; the individual points of the nerve can be combined into a route model, and the route model can be integrated into the data of digital volume tomography (DVT). (I) Intraoperative EMG signals of the facial nerve were classified in 128 measurements by automatic software, and the results were correlated with the actual intraoperative situation. (II) A nerve phantom was designed and a DVT data set was acquired. The phantom was registered with a navigation system (Karl Storz NPU, Tuttlingen, Germany), and the stimulation probe of the EMG system was tracked by the navigation system. The navigation system was extended by a processing unit (MiMed, Technische Universität München, Germany), so that the classified EMG parameters of the facial nerve course can be received, processed, and assembled into a model of the facial nerve route. Operability was examined at 120 (10 x 12) measuring points. The evaluated algorithm for classifying EMG signals of the facial nerve produced correct results in all measurement events. In all 10 attempts, the nerve route was successfully visualized as a three-dimensional model. The different sizes of the individual measuring points correctly reflect the corresponding values of Istim and UEMG. This work proves the feasibility of automatic classification of intraoperative EMG signals of the facial nerve by a processing unit. Furthermore the work shows the feasibility of tracking of the position of the
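
    The record above does not describe the classification rule itself, so the following Python sketch is purely hypothetical: it shows one simple kind of rule an automatic EMG classifier might apply, accepting a response as stimulation-evoked only if a rectified burst exceeds an amplitude threshold inside a fixed post-stimulus latency window; threshold, window, and sampling rate are invented values, not parameters from the paper.

      # Hypothetical amplitude/latency rule for discriminating evoked EMG
      # responses from artifacts; all numbers are illustrative assumptions.
      import numpy as np

      def is_true_response(emg, sfreq, stim_sample, amp_uV=100.0, window_ms=(2.0, 10.0)):
          """emg in microvolts (1-D); True if a burst is detected in the latency window."""
          lo = stim_sample + int(window_ms[0] * 1e-3 * sfreq)
          hi = stim_sample + int(window_ms[1] * 1e-3 * sfreq)
          return bool(np.max(np.abs(emg[lo:hi])) >= amp_uV)

      # Example: 20 kHz sweep with a synthetic burst about 5 ms after the stimulus.
      sfreq, stim = 20000, 1000
      sweep = np.random.default_rng(5).normal(0, 10, 5000)
      sweep[stim + 100: stim + 140] += 300.0          # simulated evoked burst
      print(is_true_response(sweep, sfreq, stim))     # True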

  18. Accurate landmarking of three-dimensional facial data in the presence of facial expressions and occlusions using a three-dimensional statistical facial feature model.

    PubMed

    Zhao, Xi; Dellandréa, Emmanuel; Chen, Liming; Kakadiaris, Ioannis A

    2011-10-01

    Three-dimensional face landmarking aims at automatically localizing facial landmarks and has a wide range of applications (e.g., face recognition, face tracking, and facial expression analysis). Existing methods assume neutral facial expressions and unoccluded faces. In this paper, we propose a general learning-based framework for reliable landmark localization on 3-D facial data under challenging conditions (i.e., facial expressions and occlusions). Our approach relies on a statistical model, called 3-D statistical facial feature model, which learns both the global variations in configurational relationships between landmarks and the local variations of texture and geometry around each landmark. Based on this model, we further propose an occlusion classifier and a fitting algorithm. Results from experiments on three publicly available 3-D face databases (FRGC, BU-3-DFE, and Bosphorus) demonstrate the effectiveness of our approach, in terms of landmarking accuracy and robustness, in the presence of expressions and occlusions.

  19. Large Intratemporal Facial Nerve Schwannoma without Facial Palsy: Surgical Strategy of Tumor Removal and Functional Reconstruction.

    PubMed

    Yetiser, Sertac

    2018-06-08

    Three patients with large intratemporal facial schwannomas underwent tumor removal and facial nerve reconstruction with hypoglossal anastomosis. The surgical strategy for the cases was tailored to the location of the mass and its extension along the facial nerve. The aim was to provide data on the different clinical aspects of facial nerve schwannoma, the appropriate planning for management, and the predictive outcomes of facial function. Three patients with facial schwannomas (two men and one woman, ages 45, 36, and 52 years, respectively) who presented to the clinic between 2009 and 2015 were reviewed. They all had hearing loss but normal facial function. All patients were operated on with radical tumor removal via mastoidectomy and subtotal petrosectomy and simultaneous cranial nerve (CN) 7-CN 12 anastomosis. Multiple segments of the facial nerve were involved, ranging in size from 3 to 7 cm. In the follow-up period of 9 to 24 months, there was no tumor recurrence. Facial function was scored House-Brackmann grades II and III, but two patients are still in the process of functional recovery. Conservative treatment with sparing of the nerve is considered in patients with small tumors. Excision of a large facial schwannoma with immediate hypoglossal nerve grafting as a primary procedure can provide satisfactory facial nerve function. One of the disadvantages of performing anastomosis is that there is not enough neural tissue just before the bifurcation of the main stump to allow neural suturing without tension, because middle fossa extension of the facial schwannoma frequently involves the main facial nerve at the stylomastoid foramen. Reanimation should therefore proceed with extensive backward mobilization of the hypoglossal nerve. Georg Thieme Verlag KG Stuttgart · New York.

  20. Impaired Overt Facial Mimicry in Response to Dynamic Facial Expressions in High-Functioning Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Yoshimura, Sayaka; Sato, Wataru; Uono, Shota; Toichi, Motomi

    2015-01-01

    Previous electromyographic studies have reported that individuals with autism spectrum disorders (ASD) exhibited atypical patterns of facial muscle activity in response to facial expression stimuli. However, whether such activity is expressed in visible facial mimicry remains unknown. To investigate this issue, we videotaped facial responses in…

  1. Fusion of Modis and Palsar Principal Component Images Through Curvelet Transform for Land Cover Classification

    NASA Astrophysics Data System (ADS)

    Singh, Dharmendra; Kumar, Harish

    -tion mode (HH/HV or VV/VH), polarimetric (PLR) mode (HH/HV/VH/VV), and ScanSAR (WB) mode (HH/VV) [15]. This makes PALSAR imagery very attractive for a spatially and temporally consistent monitoring system. The principle of Principal Component Analysis (PCA) is that most of the information within all the bands can be compressed into a much smaller number of bands with little loss of information. It allows us to extract the low-dimensional subspaces that capture the main linear correlations among the high-dimensional image data. This facilitates viewing the explained variance or signal in the available imagery, allowing both gross and more subtle features in the imagery to be seen. In this paper we explore a fusion technique for enhancing the land cover classification of low-resolution satellite data, especially freely available satellite data. For this purpose, we fuse the PALSAR principal component data with the MODIS principal component data. Initially, MODIS band 1 and band 2 are considered and their principal component is computed. Similarly, the PALSAR HH, HV and VV polarized data are considered, and their principal component is also computed. Consequently, the PALSAR principal component image is fused with the MODIS principal component image. The aim of this paper is to analyze the effect on classification accuracy for major land cover types, such as agriculture, water and urban areas, of fusing PALSAR data with MODIS data. The curvelet transform has been applied for fusion of these two satellite images, and the Minimum Distance classification technique has been applied to the resultant fused image. It is qualitatively and visually observed that the overall classification accuracy of the MODIS image after fusion is enhanced. This type of fusion technique may be quite helpful in the near future for using freely available satellite data to develop monitoring systems for different land cover classes on the earth.
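
    As an illustration of the pipeline sketched in this record, the following minimal Python sketch computes the first principal component of a stack of co-registered bands and fuses the two PC images, followed by a toy minimum-distance classification. The arrays are synthetic placeholders for the MODIS and PALSAR data, and a simple pixel-wise average is used as a stand-in for the curvelet-domain fusion, which requires a dedicated curvelet library.

      # Minimal sketch: band-wise PCA, a stand-in fusion, and minimum-distance
      # classification on synthetic placeholder imagery (not the paper's data).
      import numpy as np

      def first_principal_component(bands):
          """bands: (n_bands, rows, cols) -> first principal-component image."""
          n, r, c = bands.shape
          X = bands.reshape(n, -1).T              # pixels x bands
          X = X - X.mean(axis=0)
          eigvals, eigvecs = np.linalg.eigh(np.cov(X, rowvar=False))
          return (X @ eigvecs[:, -1]).reshape(r, c)   # project on largest-variance axis

      def minimum_distance_classify(image, class_means):
          """Assign each pixel to the class with the nearest mean value."""
          dists = np.abs(image[..., None] - np.asarray(class_means))
          return dists.argmin(axis=-1)

      rng = np.random.default_rng(1)
      modis = rng.random((2, 200, 200))            # stand-in for MODIS band 1, band 2
      palsar = rng.random((3, 200, 200))           # stand-in for PALSAR HH, HV, VV
      fused = 0.5 * first_principal_component(modis) + 0.5 * first_principal_component(palsar)
      labels = minimum_distance_classify(fused, class_means=[0.2, 0.5, 0.8])  # e.g. water/agriculture/urban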

  2. Outcome of a graduated minimally invasive facial reanimation in patients with facial paralysis.

    PubMed

    Holtmann, Laura C; Eckstein, Anja; Stähr, Kerstin; Xing, Minzhi; Lang, Stephan; Mattheis, Stefan

    2017-08-01

    Peripheral paralysis of the facial nerve is the most frequent of all cranial nerve disorders. Despite advances in facial surgery, the functional and aesthetic reconstruction of a paralyzed face remains a challenge. Graduated minimally invasive facial reanimation is based on a modular principle. According to the patients' needs, precondition, and expectations, the following modules can be performed: temporalis muscle transposition and facelift, nasal valve suspension, endoscopic brow lift, and eyelid reconstruction. Applying a concept of a graduated minimally invasive facial reanimation may help minimize surgical trauma and reduce morbidity. Twenty patients underwent a graduated minimally invasive facial reanimation. A retrospective chart review was performed with a follow-up examination between 1 and 8 months after surgery. The FACEgram software was used to calculate pre- and postoperative eyelid closure, the level of brows, nasal, and philtral symmetry as well as oral commissure position at rest and oral commissure excursion with smile. As a patient-oriented outcome parameter, the Glasgow Benefit Inventory questionnaire was applied. There was a statistically significant improvement in the postoperative score of eyelid closure, brow asymmetry, nasal asymmetry, philtral asymmetry as well as oral commissure symmetry at rest (p < 0.05). Smile evaluation revealed no significant change of oral commissure excursion. The mean Glasgow Benefit Inventory score indicated substantial improvement in patients' overall quality of life. If a primary facial nerve repair or microneurovascular tissue transfer cannot be applied, graduated minimally invasive facial reanimation is a promising option to restore facial function and symmetry at rest.

  3. Mime therapy improves facial symmetry in people with long-term facial nerve paresis: a randomised controlled trial.

    PubMed

    Beurskens, Carien H G; Heymans, Peter G

    2006-01-01

    What is the effect of mime therapy on facial symmetry and severity of paresis in people with facial nerve paresis? Randomised controlled trial. 50 people with facial nerve paresis for more than nine months, recruited from the outpatient departments of two metropolitan hospitals. The experimental group received three months of mime therapy consisting of massage, relaxation, inhibition of synkinesis, and co-ordination and emotional expression exercises. The control group was placed on a waiting list. Assessments were made on admission to the trial and three months later by a measurer blinded to group allocation. Facial symmetry was measured using the Sunnybrook Facial Grading System. Severity of paresis was measured using the House-Brackmann Facial Grading System. After three months of mime therapy, the experimental group had improved their facial symmetry by 20.4 points (95% CI 10.4 to 30.4) on the Sunnybrook Facial Grading System compared with the control group. In addition, the experimental group had reduced the severity of their paresis by 0.6 grade (95% CI 0.1 to 1.1) on the House-Brackmann Facial Grading System compared with the control group. These effects were independent of age, sex, and duration of paresis. Mime therapy improves facial symmetry and reduces the severity of paresis in people with facial nerve paresis.

  4. Guide to Understanding Facial Palsy

    MedlinePlus

    ... to many different facial muscles. These muscles control facial expression. The coordinated activity of this nerve and these ... involves a weakness of the muscles responsible for facial expression and side-to-side eye movement. Moebius syndrome ...

  5. Managing the Pediatric Facial Fracture

    PubMed Central

    Cole, Patrick; Kaufman, Yoav; Hollier, Larry H.

    2009-01-01

    Facial fracture management is often complex and demanding, particularly within the pediatric population. Although facial fractures in this group are uncommon relative to their incidence in adult counterparts, a thorough understanding of issues relevant to pediatric facial fracture management is critical to optimal long-term success. Here, we discuss several issues germane to pediatric facial fractures and review significant factors in their evaluation, diagnosis, and management. PMID:22110800

  6. [Facial paralysis in children].

    PubMed

    Muler, H; Paquelin, F; Cotin, G; Luboinski, B; Henin, J M

    1975-01-01

    Facial paralyses in children may be grouped under headings displaying a certain amount of individuality. Chronologically, first to be described are neonatal facial paralyses. These are common and are nearly always cured within a few days. Some of these cases are due to the mastoid being crushed at birth with or without the use of forceps. The intra-osseous pathway of the facial nerve is then affected throughout its length. However, a cure is often spontaneous. When this desirable development does not take place within three months, the nerve should be freed by decompressive surgery. The special anatomy of the facial nerve in the new-born baby makes this a delicate operation. Later, in all stages of acute otitis, acute mastoiditis or chronic otitis, facial paralysis can be seen. Treatment depends on the stage reached by the otitis: paracentesis, mastoidectomy, various scraping procedures, and, of course, antibiotherapy. The other causes of facial paralysis in children are very much less common: a frigore or viral, traumatic, occurring in the course of acute poliomyelitis, shingles or tumours of the middle ear. To these must be added exceptional causes such as vitamin D intoxication, idiopathic hypercalcaemia and certain haemopathies.

  7. [Facial tics and spasms].

    PubMed

    Potgieser, Adriaan R E; van Dijk, J Marc C; Elting, Jan Willem J; de Koning-Tijssen, Marina A J

    2014-01-01

    Facial tics and spasms are socially incapacitating, but effective treatment is often available. The clinical picture is sufficient for distinguishing between the different diseases that cause this affliction. We describe three cases of patients with facial tics or spasms: one case of tics, which are familiar to many physicians; one case of blepharospasm; and one case of hemifacial spasm. We discuss the differential diagnosis and the treatment possibilities for facial tics and spasms. Early diagnosis and treatment are important because of the associated social incapacitation. Botulinum toxin should be considered as a treatment option for facial tics, and a curative neurosurgical intervention should be considered for hemifacial spasms.

  8. Event-related theta synchronization predicts deficit in facial affect recognition in schizophrenia.

    PubMed

    Csukly, Gábor; Stefanics, Gábor; Komlósi, Sarolta; Czigler, István; Czobor, Pál

    2014-02-01

    Growing evidence suggests that abnormalities in the synchronized oscillatory activity of neurons in schizophrenia may lead to impaired neural activation and temporal coding and thus to neurocognitive dysfunctions, such as deficits in facial affect recognition. To gain insight into the neurobiological processes linked to facial affect recognition, we investigated both induced and evoked oscillatory activity by calculating the Event-Related Spectral Perturbation (ERSP) and the Inter-Trial Coherence (ITC) during facial affect recognition. Fearful and neutral faces as well as nonface patches were presented to 24 patients with schizophrenia and 24 matched healthy controls while EEG was recorded. The participants' task was to recognize facial expressions. Because previous findings with healthy controls showed that facial feature decoding was associated primarily with oscillatory activity in the theta band, we analyzed ERSP and ITC in this frequency band in the time interval of 140-200 ms, which corresponds to the N170 component. Event-related theta activity and phase-locking to facial expressions, but not to nonface patches, predicted emotion recognition performance in both controls and patients. Event-related changes in theta amplitude and phase-locking were found to be significantly weaker in patients compared with healthy controls, which is in line with previous investigations showing decreased neural synchronization in the low frequency bands in patients with schizophrenia. Neural synchrony is thought to underlie distributed information processing. Our results indicate a less effective functioning in the recognition process of facial features, which may contribute to a less effective social cognition in schizophrenia. PsycINFO Database Record (c) 2014 APA, all rights reserved.
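
    For readers unfamiliar with the two measures named above, the following minimal Python sketch estimates theta-band power change (an ERSP-style quantity, in dB relative to a baseline window) and inter-trial coherence from single-channel EEG epochs. It uses a band-pass filter plus Hilbert transform as a simple stand-in for the wavelet decomposition typically used; the epoch array, sampling rate, and baseline window are illustrative assumptions, not the study's parameters.

      # Minimal sketch: theta-band (4-7 Hz) power change and inter-trial coherence
      # from epochs shaped (n_trials, n_times); synthetic data only.
      import numpy as np
      from scipy.signal import butter, filtfilt, hilbert

      def theta_ersp_itc(epochs, sfreq, baseline_samples=50):
          b, a = butter(4, [4.0, 7.0], btype="band", fs=sfreq)
          analytic = hilbert(filtfilt(b, a, epochs, axis=1), axis=1)
          power = np.abs(analytic) ** 2
          mean_power = power.mean(axis=0)                         # average over trials
          baseline = mean_power[:baseline_samples].mean()
          ersp = 10 * np.log10(mean_power / baseline)             # power change vs baseline (dB)
          itc = np.abs((analytic / np.abs(analytic)).mean(axis=0))  # phase consistency in [0, 1]
          return ersp, itc

      # Illustrative call: 24 trials of 600 ms sampled at 500 Hz (synthetic noise).
      rng = np.random.default_rng(0)
      ersp, itc = theta_ersp_itc(rng.standard_normal((24, 300)), sfreq=500.0)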

  9. Outcome of different facial nerve reconstruction techniques.

    PubMed

    Mohamed, Aboshanif; Omi, Eigo; Honda, Kohei; Suzuki, Shinsuke; Ishikawa, Kazuo

    There is no technique of facial nerve reconstruction that guarantees facial function recovery up to grade III. To evaluate the efficacy and safety of different facial nerve reconstruction techniques. Facial nerve reconstruction was performed in 22 patients (facial nerve interpositional graft in 11 patients and hypoglossal-facial nerve transfer in another 11 patients). All patients had facial function House-Brackmann (HB) grade VI, either caused by trauma or after resection of a tumor. All patients underwent primary nerve reconstruction except 7 patients, in whom late reconstruction was performed two weeks to four months after the initial surgery. The follow-up period was at least two years. With the facial nerve interpositional graft technique, we achieved facial function HB grade III in eight patients and grade IV in three patients. Synkinesis was found in eight patients, and facial contracture with synkinesis was found in two patients. With regard to hypoglossal-facial nerve transfer using different modifications, we achieved facial function HB grade III in nine patients and grade IV in two patients. Facial contracture, synkinesis and tongue atrophy were found in three patients, and synkinesis was found in five patients. However, those who had primary direct facial-hypoglossal end-to-side anastomosis showed the best results without any neurological deficit. Among the various reanimation techniques, when indicated, direct end-to-side facial-hypoglossal anastomosis through epineural suturing is the most effective technique, with excellent outcomes for facial reanimation and preservation of tongue movement, particularly when performed as a primary technique. Copyright © 2016 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.

  10. Recognizing Facial Expressions Automatically from Video

    NASA Astrophysics Data System (ADS)

    Shan, Caifeng; Braspenning, Ralph

    Facial expressions, resulting from movements of the facial muscles, are the face changes in response to a person's internal emotional states, intentions, or social communications. There is a considerable history associated with the study of facial expressions. Darwin [22], who argued that all mammals show emotions reliably in their faces, was the first to describe in detail the specific facial expressions associated with emotions in animals and humans. Since then, facial expression analysis has been an area of great research interest for behavioral scientists [27]. Psychological studies [48, 3] suggest that facial expressions, as the main mode of nonverbal communication, play a vital role in human face-to-face communication. For illustration, we show some examples of facial expressions in Fig. 1.

  11. More than mere mimicry? The influence of emotion on rapid facial reactions to faces.

    PubMed

    Moody, Eric J; McIntosh, Daniel N; Mann, Laura J; Weisser, Kimberly R

    2007-05-01

    Within a second of seeing an emotional facial expression, people typically match that expression. These rapid facial reactions (RFRs), often termed mimicry, are implicated in emotional contagion, social perception, and embodied affect, yet ambiguity remains regarding the mechanism(s) involved. Two studies evaluated whether RFRs to faces are solely nonaffective motor responses or whether emotional processes are involved. Brow (corrugator, related to anger) and forehead (frontalis, related to fear) activity were recorded using facial electromyography (EMG) while undergraduates in two conditions (fear induction vs. neutral) viewed fear, anger, and neutral facial expressions. As predicted, fear induction increased fear expressions to angry faces within 1000 ms of exposure, demonstrating an emotional component of RFRs. This did not merely reflect increased fear from the induction, because responses to neutral faces were unaffected. Considering RFRs to be merely nonaffective automatic reactions is inaccurate. RFRs are not purely motor mimicry; emotion influences early facial responses to faces. The relevance of these data to emotional contagion, autism, and the mirror system-based perspectives on imitation is discussed.

  12. A neurophysiological study of facial numbness in multiple sclerosis: Integration with clinical data and imaging findings.

    PubMed

    Koutsis, Georgios; Kokotis, Panagiotis; Papagianni, Aikaterini E; Evangelopoulos, Maria-Eleftheria; Kilidireas, Constantinos; Karandreas, Nikolaos

    2016-09-01

    To integrate neurophysiological findings with clinical and imaging data in a consecutive series of multiple sclerosis (MS) patients developing facial numbness during the course of an MS attack. Nine consecutive patients with MS and recent-onset facial numbness were studied clinically, imaged with routine MRI, and assessed neurophysiologically with trigeminal somatosensory evoked potential (TSEP), blink reflex (BR), masseter reflex (MR), facial nerve conduction, facial muscle and masseter EMG studies. All patients had unilateral facial hypoesthesia on examination and lesions in the ipsilateral pontine tegmentum on MRI. All patients had abnormal TSEPs upon stimulation of the affected side, excepting one that was tested following remission of numbness. BR was the second most sensitive neurophysiological method with 6/9 examinations exhibiting an abnormal R1 component. The MR was abnormal in 3/6 patients, always on the affected side. Facial conduction and EMG studies were normal in all patients but one. Facial numbness was always related to abnormal TSEPs. A concomitant R1 abnormality on BR allowed localization of the responsible pontine lesion, which closely corresponded with MRI findings. We conclude that neurophysiological assessment of MS patients with facial numbness is a sensitive tool, which complements MRI, and can improve lesion localization. Copyright © 2016 Elsevier B.V. All rights reserved.

  13. Facial transplantation for massive traumatic injuries.

    PubMed

    Alam, Daniel S; Chi, John J

    2013-10-01

    This article describes the challenges of facial reconstruction and the role of facial transplantation in certain facial defects and injuries. This information is of value to surgeons assessing facial injuries with massive soft tissue loss or injury. Copyright © 2013 Elsevier Inc. All rights reserved.

  14. Shades of Emotion: What the Addition of Sunglasses or Masks to Faces Reveals about the Development of Facial Expression Processing

    ERIC Educational Resources Information Center

    Roberson, Debi; Kikutani, Mariko; Doge, Paula; Whitaker, Lydia; Majid, Asifa

    2012-01-01

    Three studies investigated developmental changes in facial expression processing, between 3 years-of-age and adulthood. For adults and older children, the addition of sunglasses to upright faces caused an equivalent decrement in performance to face inversion. However, younger children showed "better" classification of expressions of faces wearing…

  15. Quality of life assessment in facial palsy: validation of the Dutch Facial Clinimetric Evaluation Scale.

    PubMed

    Kleiss, Ingrid J; Beurskens, Carien H G; Stalmeier, Peep F M; Ingels, Koen J A O; Marres, Henri A M

    2015-08-01

    This study aimed at validating an existing health-related quality of life questionnaire for patients with facial palsy for implementation in the Dutch language and culture. The Facial Clinimetric Evaluation Scale was translated into the Dutch language using a forward-backward translation method. A pilot test with the translated questionnaire was performed in 10 patients with facial palsy and 10 normal subjects. Finally, cross-cultural adaptation was accomplished at our outpatient clinic for facial palsy. Analyses for internal consistency, test-retest reliability, construct validity and responsiveness were performed. Ninety-three patients completed the Dutch Facial Clinimetric Evaluation Scale, the Dutch Facial Disability Index, and the Dutch Short Form (36) Health Survey. Cronbach's α, representing internal consistency, was 0.800. Test-retest reliability was shown by an intraclass correlation coefficient of 0.737. Correlations with the House-Brackmann score, Sunnybrook score, Facial Disability Index physical function, and social/well-being function were -0.292, 0.570, 0.713, and 0.575, respectively. The SF-36 domains correlate best with the FaCE social function domain, with the strongest correlation between the two social function domains (r = 0.576). The FaCE score increased statistically significantly in the 35 patients receiving botulinum toxin type A (P = 0.042, Student t-test). The domains 'facial comfort' and 'social function' also improved statistically significantly (P = 0.022 and P = 0.046, respectively, Student t-test). The Dutch Facial Clinimetric Evaluation Scale shows good psychometric values and can be implemented in the management of Dutch-speaking patients with facial palsy in the Netherlands. Translation of the instrument into other languages may lead to widespread use, making evaluation and comparison possible among different providers.
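
    As a reminder of how the internal-consistency figure quoted above is obtained, the following minimal Python sketch computes Cronbach's alpha from a respondents-by-items score matrix; the simulated 93 x 15 matrix is a placeholder and is not the study's data.

      # Minimal sketch: Cronbach's alpha from a (n_respondents, k_items) matrix.
      import numpy as np

      def cronbach_alpha(scores):
          scores = np.asarray(scores, dtype=float)
          k = scores.shape[1]
          item_vars = scores.var(axis=0, ddof=1).sum()      # sum of item variances
          total_var = scores.sum(axis=1).var(ddof=1)        # variance of total scores
          return (k / (k - 1)) * (1.0 - item_vars / total_var)

      rng = np.random.default_rng(2)
      latent = rng.normal(size=(93, 1))                     # 93 respondents, as in the study
      items = latent + 0.5 * rng.normal(size=(93, 15))      # 15 correlated items (illustrative)
      print(round(cronbach_alpha(items), 3))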

  16. Facial nerve conduction after sclerotherapy in children with facial lymphatic malformations: report of two cases.

    PubMed

    Lin, Pei-Jung; Guo, Yuh-Cherng; Lin, Jan-You; Chang, Yu-Tang

    2007-04-01

    Surgical excision is thought to be the standard treatment of choice for lymphatic malformations. However, when the lesions are limited to the face only, surgical scar and facial nerve injury may impair cosmetics and facial expression. Sclerotherapy, an injection of a sclerosing agent directly through the skin into a lesion, is an alternative method. By evaluating facial nerve conduction, we observed the long-term effect of facial lymphatic malformations after intralesional injection of OK-432 and correlated the findings with anatomic outcomes. One 12-year-old boy with a lesion over the right-side preauricular area adjacent to the main trunk of facial nerve and the other 5-year-old boy with a lesion in the left-sided cheek involving the buccinator muscle were enrolled. The follow-up data of more than one year, including clinical appearance, computed tomography (CT) scan and facial nerve evaluation were collected. The facial nerve conduction study was normal in both cases. Blink reflex in both children revealed normal results as well. Complete resolution was noted on outward appearance and CT scan. The neurophysiologic data were compatible with good anatomic and functional outcomes. Our report suggests that the inflammatory reaction of OK-432 did not interfere with adjacent facial nerve conduction.

  17. Selective attention to a facial feature with and without facial context: an ERP-study.

    PubMed

    Wijers, A A; Van Besouw, N J P; Mulder, G

    2002-04-01

    The present experiment addressed the question whether selectively attending to a facial feature (mouth shape) would benefit from the presence of a correct facial context. Subjects attended selectively to one of two possible mouth shapes belonging to photographs of a face with a happy or sad expression, respectively. These mouths were presented randomly either in isolation, embedded in the original photos, or in an exchanged facial context. The ERP effect of attending mouth shape was a lateral posterior negativity, anterior positivity with an onset latency of 160-200 ms; this effect was completely unaffected by the type of facial context. When the mouth shape and the facial context conflicted, this resulted in a medial parieto-occipital positivity with an onset latency of 180 ms, independent of the relevance of the mouth shape. Finally, there was a late (onset at approx. 400 ms) expression (happy vs. sad) effect, which was strongly lateralized to the right posterior hemisphere and was most prominent for attended stimuli in the correct facial context. For the isolated mouth stimuli, a similarly distributed expression effect was observed at an earlier latency range (180-240 ms). These data suggest the existence of separate, independent and neuroanatomically segregated processors engaged in the selective processing of facial features and the detection of contextual congruence and emotional expression of face stimuli. The data do not support that early selective attention processes benefit from top-down constraints provided by the correct facial context.

  18. Facial Nerve Paralysis due to a Pleomorphic Adenoma with the Imaging Characteristics of a Facial Nerve Schwannoma

    PubMed Central

    Nader, Marc-Elie; Bell, Diana; Sturgis, Erich M.; Ginsberg, Lawrence E.; Gidley, Paul W.

    2014-01-01

    Background: Facial nerve paralysis in a patient with a salivary gland mass usually denotes malignancy. However, facial paralysis can also be caused by benign salivary gland tumors. Methods: We present a case of facial nerve paralysis due to a benign salivary gland tumor that had the imaging characteristics of an intraparotid facial nerve schwannoma. Results: The patient presented to our clinic 4 years after the onset of facial nerve paralysis initially diagnosed as Bell palsy. Computed tomography demonstrated filling and erosion of the stylomastoid foramen with a mass on the facial nerve. Postoperative histopathology showed the presence of a pleomorphic adenoma. Facial paralysis was thought to be caused by extrinsic nerve compression. Conclusions: This case illustrates the difficulty of accurate preoperative diagnosis of a parotid gland mass and reinforces the concept that facial nerve paralysis in the context of salivary gland tumors may not always indicate malignancy. PMID:25083397

  20. A new atlas for the evaluation of facial features: advantages, limits, and applicability.

    PubMed

    Ritz-Timme, Stefanie; Gabriel, Peter; Obertovà, Zuzana; Boguslawski, Melanie; Mayer, F; Drabik, A; Poppa, Pasquale; De Angelis, Danilo; Ciaffi, Romina; Zanotti, Benedetta; Gibelli, Daniele; Cattaneo, Cristina

    2011-03-01

    Methods for the verification of the identity of offenders in cases involving video-surveillance images in criminal investigation events are currently under scrutiny by several forensic experts around the globe. The anthroposcopic, or morphological, approach based on facial features is the most frequently used by international forensic experts. However, a specific set of applicable features has not yet been agreed on by the experts. Furthermore, population frequencies of such features have not been recorded, and only few validation tests have been published. To combat and prevent crime in Europe, the European Commission funded an extensive research project dedicated to the optimization of methods for facial identification of persons on photographs. Within this research project, standardized photographs of 900 males between 20 and 31 years of age from Germany, Italy, and Lithuania were acquired. Based on these photographs, 43 facial features were described and evaluated in detail. These efforts led to the development of a new model of a morphologic atlas, called DMV atlas ("Düsseldorf Milan Vilnius," from the participating cities). This study is the first attempt at verifying the feasibility of this atlas as a preliminary step to personal identification by exploring the intra- and interobserver error. The analysis yielded mismatch percentages from 19% to 39%, which reflect the subjectivity of the approach and suggest caution in verifying personal identity only from the classification of facial features. Nonetheless, the use of the atlas leads to a significant improvement of consistency in the evaluation.

  1. [Partial facial duplication (a rare diprosopus): Case report and review of the literature].

    PubMed

    Es-Seddiki, A; Rkain, M; Ayyad, A; Nkhili, H; Amrani, R; Benajiba, N

    2015-12-01

    Diprosopus, or partial facial duplication, is a very rare congenital abnormality and a rare form of conjoined twinning. Partial facial duplication may be symmetric or not and may involve the nose, the maxilla, the mandible, the palate, the tongue and the mouth. A male newborn born to consanguineous parents was admitted on his first day of life for facial deformity. He presented with hypertelorism, 2 eyes, a tendency to nose duplication (a flattened large nose, 2 columellae, 2 lateral nostrils separated in the midline by a third deformed opening), two mouths and a duplicated maxilla. Laboratory tests were normal. Craniofacial CT confirmed the maxillary duplication. This type of craniofacial duplication is a rare entity, with about 35 reported cases in the literature. Our patient was similar to a rare case of living diprosopus reported by Stiehm in 1972. Diprosopus is often associated with abnormalities of the gastrointestinal tract, the central nervous system, the cardiovascular and respiratory systems, and with a high incidence of cleft lip and palate. Surgical treatment consists of resection of the duplicated components. Copyright © 2015 Elsevier Masson SAS. All rights reserved.

  2. Advances in facial reanimation.

    PubMed

    Tate, James R; Tollefson, Travis T

    2006-08-01

    Facial paralysis often has a significant emotional impact on patients. Along with the myriad of new surgical techniques in managing facial paralysis comes the challenge of selecting the most effective procedure for the patient. This review delineates common surgical techniques and reviews state-of-the-art techniques. The options for dynamic reanimation of the paralyzed face must be examined in the context of several patient factors, including age, overall health, and patient desires. The best functional results are obtained with direct facial nerve anastomosis and interpositional nerve grafts. In long-standing facial paralysis, temporalis muscle transfer gives a dependable and quick result. Microvascular free tissue transfer is a reliable technique with reanimation potential whose results continue to improve as microsurgical expertise increases. Postoperative results can be improved with ancillary soft tissue procedures, as well as botulinum toxin. The paper provides an overview of recent advances in facial reanimation, including preoperative assessment, surgical reconstruction options, and postoperative management.

  3. Facial paralysis for the plastic surgeon.

    PubMed

    Kosins, Aaron M; Hurvitz, Keith A; Evans, Gregory Rd; Wirth, Garrett A

    2007-01-01

    Facial paralysis presents a significant and challenging reconstructive problem for plastic surgeons. An aesthetically pleasing and acceptable outcome requires not only good surgical skills and techniques, but also knowledge of facial nerve anatomy and an understanding of the causes of facial paralysis. The loss of the ability to move the face has both social and functional consequences for the patient. At the Facial Palsy Clinic in Edinburgh, Scotland, 22,954 patients were surveyed, and over 50% were found to have a considerable degree of psychological distress and social withdrawal as a consequence of their facial paralysis. Functionally, patients present with unilateral or bilateral loss of voluntary and nonvoluntary facial muscle movements. Signs and symptoms can include an asymmetric smile, synkinesis, epiphora or dry eye, abnormal blink, problems with speech articulation, drooling, hyperacusis, change in taste and facial pain. With respect to facial paralysis, surgeons tend to focus on the surgical, or 'hands-on', aspect. However, it is believed that an understanding of the disease process is equally (if not more) important to a successful surgical outcome. The purpose of the present review is to describe the anatomy and diagnostic patterns of the facial nerve, and the epidemiology and common causes of facial paralysis, including clinical features and diagnosis. Treatment options for paralysis are vast, and may include nerve decompression, facial reanimation surgery and botulinum toxin injection, but these are beyond the scope of the present paper.

  5. Augmentation of linear facial anthropometrics through modern morphometrics: a facial convexity example.

    PubMed

    Wei, R; Claes, P; Walters, M; Wholley, C; Clement, J G

    2011-06-01

    The facial region has traditionally been quantified using linear anthropometrics. These are well established in dentistry, but require expertise to be used effectively. The aim of this study was to augment the utility of linear anthropometrics by applying them in conjunction with modern 3-D morphometrics. Facial images of 75 males and 94 females aged 18-25 years with self-reported Caucasian ancestry were used. An anthropometric mask was applied to establish corresponding quasi-landmarks on the images in the dataset. A statistical face-space, encoding shape covariation, was established. The facial median plane was extracted facilitating both manual and automated indication of commonly used midline landmarks. From both indications, facial convexity angles were calculated and compared. The angles were related to the face-space using a regression based pathway enabling the visualization of facial form associated with convexity variation. Good agreement between the manual and automated angles was found (Pearson correlation: 0.9478-0.9474, Dahlberg root mean squared error: 1.15°-1.24°). The population mean angle was 166.59°-166.29° (SD 5.09°-5.2°) for males-females. The angle-pathway provided valuable feedback. Linear facial anthropometrics can be extended when used in combination with a face-space derived from 3-D scans and the exploration of property pathways inferred in a statistically verifiable way. © 2011 Australian Dental Association.
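
    The following minimal Python sketch computes a facial convexity angle from three midline landmarks, assuming the common soft-tissue definition of the angle at subnasale between glabella and pogonion (G-Sn-Pog); the abstract does not restate the exact landmark set used in the study, and the coordinates below are invented for illustration.

      # Minimal sketch: convexity angle (degrees) at subnasale from 3-D landmarks.
      import numpy as np

      def convexity_angle(glabella, subnasale, pogonion):
          u = np.asarray(glabella, float) - np.asarray(subnasale, float)
          v = np.asarray(pogonion, float) - np.asarray(subnasale, float)
          cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
          return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

      # Illustrative coordinates in millimetres (made up, not study data);
      # prints roughly 166 degrees, in the range reported above.
      print(round(convexity_angle((0, 60, 10), (0, 0, 18), (0, -55, 12)), 1))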

  6. Classification and Weakly Supervised Pain Localization using Multiple Segment Representation.

    PubMed

    Sikka, Karan; Dhall, Abhinav; Bartlett, Marian Stewart

    2014-10-01

    Automatic pain recognition from videos is a vital clinical application and, owing to its spontaneous nature, poses interesting challenges to automatic facial expression recognition (AFER) research. Previous pain vs no-pain systems have highlighted two major challenges: (1) ground truth is provided for the sequence, but the presence or absence of the target expression for a given frame is unknown, and (2) the time point and the duration of the pain expression event(s) in each video are unknown. To address these issues we propose a novel framework (referred to as MS-MIL) where each sequence is represented as a bag containing multiple segments, and multiple instance learning (MIL) is employed to handle this weakly labeled data in the form of sequence level ground-truth. These segments are generated via multiple clustering of a sequence or running a multi-scale temporal scanning window, and are represented using a state-of-the-art Bag of Words (BoW) representation. This work extends the idea of detecting facial expressions through 'concept frames' to 'concept segments' and argues through extensive experiments that algorithms such as MIL are needed to reap the benefits of such representation. The key advantages of our approach are: (1) joint detection and localization of painful frames using only sequence-level ground-truth, (2) incorporation of temporal dynamics by representing the data not as individual frames but as segments, and (3) extraction of multiple segments, which is well suited to signals with uncertain temporal location and duration in the video. Extensive experiments on UNBC-McMaster Shoulder Pain dataset highlight the effectiveness of the approach by achieving competitive results on both tasks of pain classification and localization in videos. We also empirically evaluate the contributions of different components of MS-MIL. The paper also includes the visualization of discriminative facial patches, important for pain detection, as discovered by our
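
    To make the bag-of-segments idea concrete, the following minimal Python sketch trains a generic MI-SVM-style multiple instance learner in which each video is a bag of segment-level feature vectors, only bag labels are available, and a bag is scored by max-pooling its segment scores. It is a simplified illustration on synthetic data, not the authors' MS-MIL implementation.

      # Minimal multiple-instance-learning sketch with max-pooled bag scores.
      import numpy as np
      from sklearn.svm import LinearSVC

      def fit_mil(bags, labels, n_iters=5):
          """bags: list of (n_segments_i, n_features) arrays; labels: 0/1 per bag."""
          reps = np.stack([b.mean(axis=0) for b in bags])   # initial bag representatives
          clf = LinearSVC(C=1.0).fit(reps, labels)
          for _ in range(n_iters):
              # Re-select, per bag, the segment the current model scores highest.
              reps = np.stack([b[np.argmax(clf.decision_function(b))] for b in bags])
              clf = LinearSVC(C=1.0).fit(reps, labels)
          return clf

      def predict_bag(clf, bag):
          return int(clf.decision_function(bag).max() > 0)  # max-pooled bag decision

      # Synthetic example: 20 bags, 6 segments each, 50-dim Bag-of-Words features.
      rng = np.random.default_rng(3)
      bags = [rng.random((6, 50)) for _ in range(20)]
      labels = rng.integers(0, 2, 20)
      model = fit_mil(bags, labels)
      print(predict_bag(model, bags[0]))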

  7. Recursive heuristic classification

    NASA Technical Reports Server (NTRS)

    Wilkins, David C.

    1994-01-01

    The author will describe a new problem-solving approach called recursive heuristic classification, whereby a subproblem of heuristic classification is itself formulated and solved by heuristic classification. This allows the construction of more knowledge-intensive classification programs in a way that yields a clean organization. Further, standard knowledge acquisition and learning techniques for heuristic classification can be used to create, refine, and maintain the knowledge base associated with the recursively called classification expert system. The method of recursive heuristic classification was used in the Minerva blackboard shell for heuristic classification. Minerva recursively calls itself every problem-solving cycle to solve the important blackboard scheduler task, which involves assigning a desirability rating to alternative problem-solving actions. Knowing these ratings is critical to the use of an expert system as a component of a critiquing or apprenticeship tutoring system. One innovation of this research is a method called dynamic heuristic classification, which allows selection among dynamically generated classification categories instead of requiring them to be pre-enumerated.
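
    A minimal Python sketch of the recursive idea (not the Minerva shell itself) is shown below: the same classification routine that solves the top-level task is invoked again on the scheduler subproblem of rating candidate actions. All rule bases and numbers are illustrative placeholders:

        def classify(case, rules):
            """Heuristic classification: return the category whose heuristic rule
            gives the highest score for this case."""
            return max(rules, key=lambda category: rules[category](case))

        # Illustrative top-level rule base (all rules and numbers are made up).
        diagnosis_rules = {
            "fault_A": lambda c: 2.0 * c.get("temperature", 0),
            "fault_B": lambda c: 1.0 * c.get("pressure", 0),
        }

        # Scheduler subproblem: rating candidate problem-solving actions is itself
        # solved by calling classify() again on a small 'desirability' rule base.
        desirability_rules = {
            "high": lambda a: 1.0 if a["cost"] < 1.0 else 0.0,
            "low":  lambda a: 1.0 if a["cost"] >= 1.0 else 0.0,
        }

        def schedule(actions):
            rank = {"high": 1, "low": 0}
            rated = [(a["name"], classify(a, desirability_rules)) for a in actions]
            return sorted(rated, key=lambda pair: rank[pair[1]], reverse=True)

        print(classify({"temperature": 3, "pressure": 4}, diagnosis_rules))
        print(schedule([{"name": "ask_temperature", "cost": 0.5},
                        {"name": "run_lab_test", "cost": 2.0}]))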

  8. A Features Selection for Crops Classification

    NASA Astrophysics Data System (ADS)

    Liu, Yifan; Shao, Luyi; Yin, Qiang; Hong, Wen

    2016-08-01

    The components of polarimetric target decomposition reflect differences between targets, since they are linked with the targets' scattering properties and can be imported into an SVM as classification features. The result of the decomposition usually concentrates on a subset of the components. Selecting a combination of components therefore reduces the number of features imported into the SVM. This feature reduction leads to less computation and to targeted classification of a single target class when classifying a multi-class area. In this research, we import different combinations of features into the SVM and identify a better combination for classification using AGRISAR data.
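
    The feature-combination search described above can be sketched as a loop over component subsets evaluated with a cross-validated SVM; the snippet below uses scikit-learn and random placeholder data in place of the actual AGRISAR decomposition components:

        from itertools import combinations

        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        # One row per pixel/parcel; columns are decomposition components
        # (placeholder values), y holds crop-class labels.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(300, 4))
        y = rng.integers(0, 3, size=300)
        names = ["Ps", "Pd", "Pv", "Pc"]

        best = None
        for k in (2, 3):
            for combo in combinations(range(X.shape[1]), k):
                clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
                acc = cross_val_score(clf, X[:, list(combo)], y, cv=5).mean()
                if best is None or acc > best[0]:
                    best = (acc, [names[i] for i in combo])

        print("best combination:", best[1], "cv accuracy: %.3f" % best[0])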

  9. Marquardt’s Facial Golden Decagon Mask and Its Fitness with South Indian Facial Traits

    PubMed Central

    Gandikota, Chandra Sekhar; Yadagiri, Poornima K; Manne, Ranjit; Juvvadi, Shubhaker Rao; Farah, Tamkeen; Vattipelli, Shilpa; Gumbelli, Sangeetha

    2016-01-01

    Introduction The mathematical ratio of 1:1.618, famously known as the golden ratio, seems to appear recurrently in beautiful things in nature as well as in other things that are seen as beautiful. Dr. Marquardt developed a facial golden mask that contains all of the one-dimensional and two-dimensional geometric golden elements formed from the golden ratio, and he claimed that beauty is universal: beautiful faces conform to the facial golden mask regardless of sex and race. Aim The purpose of this study was to evaluate the goodness of fit of the golden facial mask to South Indian facial traits. Materials and Methods A total of 150 subjects (75 males and 75 females) with attractive faces were selected, meeting the cephalometric orthodontic standards of a skeletal Class I relation. Facial aesthetics was confirmed by aesthetic evaluation of frontal photographs of the subjects by a panel of ten evaluators, including five orthodontists and five maxillofacial surgeons. The well-proportioned photographs were superimposed with the golden mask along the reference lines to evaluate the goodness of fit. Results South Indian males and females invariably show wider inter-zygomatic and inter-gonial widths than the golden mask. Most South Indian females and males show decreased mid-facial height compared with the golden mask, while the total facial height is more or less equal to that of the mask. Conclusion Ethnic or individual discrepancies cannot be totally ignored, as in our study the mask did not fit the South Indian facial traits exactly, but the beauty ratios came close to those of the mask. To overcome this difficulty, there is a need to develop variants of the golden facial mask for different ethnic groups. PMID:27190951

  10. Automatic Classification of Artifactual ICA-Components for Artifact Removal in EEG Signals

    PubMed Central

    2011-01-01

    Background Artifacts contained in EEG recordings hamper both the visual interpretation by experts and the algorithmic processing and analysis (e.g. for Brain-Computer Interfaces (BCI) or for Mental State Monitoring). While hand-optimized selection of source components derived from Independent Component Analysis (ICA) to clean EEG data is widespread, the field could greatly profit from automated solutions based on Machine Learning methods. Existing ICA-based removal strategies depend on explicit recordings of an individual's artifacts or have not been shown to reliably identify muscle artifacts. Methods We propose an automatic method for the classification of general artifactual source components. They are estimated by TDSEP, an ICA method that takes temporal correlations into account. The linear classifier is based on an optimized feature subset determined by a Linear Programming Machine (LPM). The subset is composed of features from the frequency, spatial, and temporal domains. A subject-independent classifier was trained on 640 TDSEP components (reaction time (RT) study, n = 12) that were hand labeled by experts as artifactual or brain sources and tested on 1080 new components of RT data from the same study. Generalization was tested on new data from two studies (auditory Event Related Potential (ERP) paradigm, n = 18; motor imagery BCI paradigm, n = 80) that used data with different channel setups and from new subjects. Results Based on six features only, the optimized linear classifier performed on a par with the inter-expert disagreement (<10% Mean Squared Error (MSE)) on the RT data. On data of the auditory ERP study, the same pre-calculated classifier generalized well and achieved 15% MSE. On data of the motor imagery paradigm, we demonstrate that the discriminant information used for BCI is preserved when removing up to 60% of the most artifactual source components. Conclusions We propose a universal and efficient classifier of ICA components for

  11. Automatic classification of artifactual ICA-components for artifact removal in EEG signals.

    PubMed

    Winkler, Irene; Haufe, Stefan; Tangermann, Michael

    2011-08-02

    Artifacts contained in EEG recordings hamper both the visual interpretation by experts and the algorithmic processing and analysis (e.g. for Brain-Computer Interfaces (BCI) or for Mental State Monitoring). While hand-optimized selection of source components derived from Independent Component Analysis (ICA) to clean EEG data is widespread, the field could greatly profit from automated solutions based on Machine Learning methods. Existing ICA-based removal strategies depend on explicit recordings of an individual's artifacts or have not been shown to reliably identify muscle artifacts. We propose an automatic method for the classification of general artifactual source components. They are estimated by TDSEP, an ICA method that takes temporal correlations into account. The linear classifier is based on an optimized feature subset determined by a Linear Programming Machine (LPM). The subset is composed of features from the frequency, spatial, and temporal domains. A subject-independent classifier was trained on 640 TDSEP components (reaction time (RT) study, n = 12) that were hand labeled by experts as artifactual or brain sources and tested on 1080 new components of RT data from the same study. Generalization was tested on new data from two studies (auditory Event Related Potential (ERP) paradigm, n = 18; motor imagery BCI paradigm, n = 80) that used data with different channel setups and from new subjects. Based on six features only, the optimized linear classifier performed on a par with the inter-expert disagreement (<10% Mean Squared Error (MSE)) on the RT data. On data of the auditory ERP study, the same pre-calculated classifier generalized well and achieved 15% MSE. On data of the motor imagery paradigm, we demonstrate that the discriminant information used for BCI is preserved when removing up to 60% of the most artifactual source components. We propose a universal and efficient classifier of ICA components for the subject-independent removal of
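
    The classification step common to both records above (a linear classifier over a small set of per-component features) can be sketched as follows; scikit-learn's LinearSVC stands in for the Linear Programming Machine, and the six features and labels are random placeholders rather than the published feature set:

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import LinearSVC

        # One row per ICA component, six placeholder features (e.g. band-power
        # ratios, spatial smoothness, autocorrelation); 1 = artifact, 0 = brain.
        rng = np.random.default_rng(1)
        X = rng.normal(size=(640, 6))
        y = rng.integers(0, 2, size=640)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
        clf = make_pipeline(StandardScaler(), LinearSVC())  # linear stand-in for the LPM
        clf.fit(X_tr, y_tr)
        print("held-out accuracy: %.2f" % clf.score(X_te, y_te))

        # Cleaning step: components predicted as artifacts would be zeroed out
        # before back-projecting the remaining sources to the sensor space.
        predicted_artifacts = np.flatnonzero(clf.predict(X) == 1)
        print(len(predicted_artifacts), "components flagged as artifactual")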

  12. The MPI Facial Expression Database — A Validated Database of Emotional and Conversational Facial Expressions

    PubMed Central

    Kaulard, Kathrin; Cunningham, Douglas W.; Bülthoff, Heinrich H.; Wallraven, Christian

    2012-01-01

    The ability to communicate is one of the core aspects of human life. For this, we use not only verbal but also nonverbal signals of remarkable complexity. Among the latter, facial expressions belong to the most important information channels. Despite the large variety of facial expressions we use in daily life, research on facial expressions has so far mostly focused on the emotional aspect. Consequently, most databases of facial expressions available to the research community also include only emotional expressions, neglecting the largely unexplored aspect of conversational expressions. To fill this gap, we present the MPI facial expression database, which contains a large variety of natural emotional and conversational expressions. The database contains 55 different facial expressions performed by 19 German participants. Expressions were elicited with the help of a method-acting protocol, which guarantees both well-defined and natural facial expressions. The method-acting protocol was based on every-day scenarios, which are used to define the necessary context information for each expression. All facial expressions are available in three repetitions, in two intensities, as well as from three different camera angles. A detailed frame annotation is provided, from which a dynamic and a static version of the database have been created. In addition to describing the database in detail, we also present the results of an experiment with two conditions that serve to validate the context scenarios as well as the naturalness and recognizability of the video sequences. Our results provide clear evidence that conversational expressions can be recognized surprisingly well from visual information alone. The MPI facial expression database will enable researchers from different research fields (including the perceptual and cognitive sciences, but also affective computing, as well as computer vision) to investigate the processing of a wider range of natural facial expressions

  13. Optimising ballistic facial coverage from military fragmenting munitions: a consensus statement.

    PubMed

    Breeze, J; Tong, D C; Powers, D; Martin, N A; Monaghan, A M; Evriviades, D; Combes, J; Lawton, G; Taylor, C; Kay, A; Baden, J; Reed, B; MacKenzie, N; Gibbons, A J; Heppell, S; Rickard, R F

    2017-02-01

    VIRTUS is the first United Kingdom (UK) military personal armour system to provide components that are capable of protecting the whole face from low velocity ballistic projectiles. Protection is modular, using a helmet worn with ballistic eyewear, a visor, and a mandibular guard. When all four components are worn together the face is completely covered, but the heat, discomfort, and weight may not be optimal in all types of combat. We organized a Delphi consensus group analysis with 29 military consultant surgeons from the UK, United States, Canada, Australia, and New Zealand to identify a potential hierarchy of functional facial units in order of importance that require protection. We identified the causes of those facial injuries that are hardest to reconstruct, and the most effective combinations of facial protection. Protection is required from both penetrating projectiles and burns. There was strong consensus that blunt injury to the facial skeleton was currently not a military priority. Functional units that should be prioritised are eyes and eyelids, followed consecutively by the nose, lips, and ears. Twenty-nine respondents felt that the visor was more important than the mandibular guard if only one piece was to be worn. Essential cover of the brain and eyes is achieved from all directions using a combination of helmet and visor. Nasal cover currently requires the mandibular guard unless the visor can be modified to cover it as well. Any such prototype would need extensive ergonomics and assessment of integration, as any changes would have to be acceptable to the people who wear them in the long term. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.

  14. Two Ways to Facial Expression Recognition? Motor and Visual Information Have Different Effects on Facial Expression Recognition.

    PubMed

    de la Rosa, Stephan; Fademrecht, Laura; Bülthoff, Heinrich H; Giese, Martin A; Curio, Cristóbal

    2018-06-01

    Motor-based theories of facial expression recognition propose that the visual perception of facial expression is aided by sensorimotor processes that are also used for the production of the same expression. Accordingly, sensorimotor and visual processes should provide congruent emotional information about a facial expression. Here, we report evidence that challenges this view. Specifically, the repeated execution of facial expressions has the opposite effect on the recognition of a subsequent facial expression than the repeated viewing of facial expressions. Moreover, the findings of the motor condition, but not of the visual condition, were correlated with a nonsensory condition in which participants imagined an emotional situation. These results can be well accounted for by the idea that facial expression recognition is not always mediated by motor processes but can also be recognized on visual information alone.

  15. Facial Soft Tissue Trauma

    PubMed Central

    Kretlow, James D.; McKnight, Aisha J.; Izaddoost, Shayan A.

    2010-01-01

    Traumatic facial soft tissue injuries are commonly encountered in the emergency department by plastic surgeons and other providers. Although rarely life-threatening, the treatment of these injuries can be complex and may have significant impact on the patient's facial function and aesthetics. This article provides a review of the relevant literature related to this topic and describes the authors' approach to the evaluation and management of the patient with facial soft tissue injuries. PMID:22550459

  16. Quantitative Magnetic Resonance Imaging Volumetry of Facial Muscles in Healthy Subjects and Patients with Facial Palsy

    PubMed Central

    Volk, Gerd F.; Karamyan, Inna; Klingner, Carsten M.; Reichenbach, Jürgen R.

    2014-01-01

    Background: Magnetic resonance imaging (MRI) has not yet been established systematically to detect structural muscular changes after facial nerve lesion. The purpose of this pilot study was to investigate quantitative assessment of MRI muscle volume data for facial muscles. Methods: Ten healthy subjects and 5 patients with facial palsy were recruited. Using manual or semiautomatic segmentation of 3T MRI, volume measurements were performed for the frontal, procerus, risorius, corrugator supercilii, orbicularis oculi, nasalis, zygomaticus major, zygomaticus minor, levator labii superioris, orbicularis oris, depressor anguli oris, depressor labii inferioris, and mentalis, as well as for the masseter and temporalis as masticatory muscles for control. Results: All muscles except the frontal (identification in 4/10 volunteers), procerus (4/10), risorius (6/10), and zygomaticus minor (8/10) were identified in all volunteers. Sex or age effects were not seen (all P > 0.05). There was no facial asymmetry with exception of the zygomaticus major (larger on the left side; P = 0.012). The exploratory examination of 5 patients revealed considerably smaller muscle volumes on the palsy side 2 months after facial injury. One patient with chronic palsy showed substantial muscle volume decrease, which also occurred in another patient with incomplete chronic palsy restricted to the involved facial area. Facial nerve reconstruction led to mixed results of decreased but also increased muscle volumes on the palsy side compared with the healthy side. Conclusions: First systematic quantitative MRI volume measures of 5 different clinical presentations of facial paralysis are provided. PMID:25289366

  17. Coding and quantification of a facial expression for pain in lambs.

    PubMed

    Guesgen, M J; Beausoleil, N J; Leach, M; Minot, E O; Stewart, M; Stafford, K J

    2016-11-01

    human observers scored the images from Experiment II. Changes in facial action units were also quantified objectively by a researcher using image measurement software. In both experiments, LGS scores were analyzed using a linear mixed model to evaluate the effects of tail docking on observers' perception of facial expression changes. Kendall's Index of Concordance was used to measure reliability among observers. In Experiment I, human observers were able to use the LGS to differentiate docked lambs from control lambs. LGS scores significantly increased from before to after treatment in docked lambs but not control lambs. In Experiment II there was a significant increase in LGS scores after docking. This was coupled with changes in other validated indicators of pain after docking in the form of pain-related behaviour. Only two components, Mouth Features and Orbital Tightening, showed significant quantitative changes after docking. The direction of these changes agrees with the description of these facial action units in the LGS. Restraint affected people's perceptions of pain as well as quantitative measures of LGS components. Freely moving lambs were scored lower using the LGS over both periods and had a significantly smaller eye aperture and smaller nose and ear angles than when they were held. Agreement among observers for LGS scores was fair overall (Experiment I: W=0.60; Experiment II: W=0.66). This preliminary study demonstrates changes in lamb facial expression associated with pain. The results of these experiments should be interpreted with caution due to low lamb numbers. Copyright © 2016 Elsevier B.V. All rights reserved.
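
    The reported agreement statistic, Kendall's coefficient of concordance W, can be computed from a raters-by-subjects score matrix as in the following sketch (placeholder scores, no tie correction):

        import numpy as np
        from scipy.stats import rankdata

        def kendalls_w(scores):
            """Kendall's coefficient of concordance (no tie correction).
            scores: (n_raters, n_subjects) array of LGS-style scores."""
            ranks = np.apply_along_axis(rankdata, 1, np.asarray(scores, float))
            m, n = ranks.shape
            rank_sums = ranks.sum(axis=0)
            s = np.sum((rank_sums - rank_sums.mean()) ** 2)
            return 12.0 * s / (m ** 2 * (n ** 3 - n))

        # Toy example: 3 observers scoring the same 5 lambs (placeholder scores).
        scores = [[1.0, 2.5, 2.0, 3.0, 1.5],
                  [1.5, 2.0, 2.5, 3.5, 1.0],
                  [1.0, 3.0, 2.0, 3.0, 2.0]]
        print("W = %.2f" % kendalls_w(scores))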

  18. Unconscious Processing of Facial Expressions in Individuals with Internet Gaming Disorder.

    PubMed

    Peng, Xiaozhe; Cui, Fang; Wang, Ting; Jiao, Can

    2017-01-01

    Internet Gaming Disorder (IGD) is characterized by impairments in social communication and the avoidance of social contact. Facial expression processing is the basis of social communication. However, few studies have investigated how individuals with IGD process facial expressions, and whether they have deficits in emotional facial processing remains unclear. The aim of the present study was to explore these two issues by investigating the time course of emotional facial processing in individuals with IGD. A backward masking task was used to investigate the differences between individuals with IGD and normal controls (NC) in the processing of subliminally presented facial expressions (sad, happy, and neutral) with event-related potentials (ERPs). The behavioral results showed that individuals with IGD are slower than NC in response to both sad and neutral expressions in the sad-neutral context. The ERP results showed that individuals with IGD exhibit decreased amplitudes in ERP component N170 (an index of early face processing) in response to neutral expressions compared to happy expressions in the happy-neutral expressions context, which might be due to their expectancies for positive emotional content. The NC, on the other hand, exhibited comparable N170 amplitudes in response to both happy and neutral expressions in the happy-neutral expressions context, as well as sad and neutral expressions in the sad-neutral expressions context. Both individuals with IGD and NC showed comparable ERP amplitudes during the processing of sad expressions and neutral expressions. The present study revealed that individuals with IGD have different unconscious neutral facial processing patterns compared with normal individuals and suggested that individuals with IGD may expect more positive emotion in the happy-neutral expressions context. • The present study investigated whether the unconscious processing of facial expressions is influenced by excessive online gaming. A validated

  19. [Prosopagnosia and facial expression recognition].

    PubMed

    Koyama, Shinichi

    2014-04-01

    This paper reviews clinical neuropsychological studies that have indicated that the recognition of a person's identity and the recognition of facial expressions are processed by different cortical and subcortical areas of the brain. The fusiform gyrus, especially the right fusiform gyrus, plays an important role in the recognition of identity. The superior temporal sulcus, amygdala, and medial frontal cortex play important roles in facial-expression recognition. Both facial recognition and facial-expression recognition are highly intellectual processes that involve several regions of the brain.

  20. Management of Chronic Facial Pain

    PubMed Central

    Williams, Christopher G.; Dellon, A. Lee; Rosson, Gedge D.

    2009-01-01

    Pain persisting for at least 6 months is defined as chronic. Chronic facial pain conditions often take on lives of their own deleteriously changing the lives of the sufferer. Although much is known about facial pain, it is clear that those physicians who treat these conditions should continue elucidating the mechanisms and defining successful treatment strategies for these life-changing conditions. This article will review many of the classic causes of chronic facial pain due to the trigeminal nerve and its branches that are amenable to surgical therapies. Testing of facial sensibility is described and its utility introduced. We will also introduce some of the current hypotheses of atypical facial pain and headaches secondary to chronic nerve compressions and will suggest possible treatment strategies. PMID:22110799

  1. Facial mimicry in its social setting

    PubMed Central

    Seibt, Beate; Mühlberger, Andreas; Likowski, Katja U.; Weyers, Peter

    2015-01-01

    In interpersonal encounters, individuals often exhibit changes in their own facial expressions in response to emotional expressions of another person. Such changes are often called facial mimicry. While this tendency first appeared to be an automatic tendency of the perceiver to show the same emotional expression as the sender, evidence is now accumulating that situation, person, and relationship jointly determine whether and for which emotions such congruent facial behavior is shown. We review the evidence regarding the moderating influence of such factors on facial mimicry with a focus on understanding the meaning of facial responses to emotional expressions in a particular constellation. From this, we derive recommendations for a research agenda with a stronger focus on the most common forms of encounters, actual interactions with known others, and on assessing potential mediators of facial mimicry. We conclude that facial mimicry is modulated by many factors: attention deployment and sensitivity, detection of valence, emotional feelings, and social motivations. We posit that these are the more proximal causes of changes in facial mimicry due to changes in its social setting. PMID:26321970

  2. Gender classification from video under challenging operating conditions

    NASA Astrophysics Data System (ADS)

    Mendoza-Schrock, Olga; Dong, Guozhu

    2014-06-01

    The literature is abundant with papers on gender classification research. However, the majority of such research is based on the assumption that there is enough resolution that the subject's face can be resolved. Hence the majority of the research is actually in the face recognition and facial feature area. A gap exists for gender classification under challenging operating conditions—different seasonal conditions, different clothing, etc.—and when the subject's face cannot be resolved due to lack of resolution. The Seasonal Weather and Gender (SWAG) Database is a novel database that contains subjects walking through a scene under operating conditions that span a calendar year. This paper exploits a subset of that database—the SWAG One dataset—using data mining techniques, traditional classifiers (e.g., Naïve Bayes, Support Vector Machine), and traditional (Canny edge detection, etc.) and non-traditional (height/width ratios, etc.) feature extractors to achieve high correct gender classification rates (greater than 85%). Another novelty is the exploitation of frame differentials.
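
    The classification setup described above can be sketched with off-the-shelf classifiers over simple hand-crafted features; the feature table below is a random placeholder standing in for SWAG One measurements such as height/width ratios:

        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.naive_bayes import GaussianNB
        from sklearn.svm import SVC

        # Placeholder feature table: one row per walking subject, with columns
        # such as silhouette height/width ratio, stride period and edge density.
        rng = np.random.default_rng(2)
        X = rng.normal(size=(200, 3))
        y = rng.integers(0, 2, size=200)  # placeholder gender labels

        for name, clf in [("Naive Bayes", GaussianNB()), ("SVM", SVC())]:
            acc = cross_val_score(clf, X, y, cv=5).mean()
            print(name, "cross-validated accuracy: %.2f" % acc)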

  3. Facial neuroma masquerading as acoustic neuroma.

    PubMed

    Sayegh, Eli T; Kaur, Gurvinder; Ivan, Michael E; Bloch, Orin; Cheung, Steven W; Parsa, Andrew T

    2014-10-01

    Facial nerve neuromas are rare benign tumors that may be initially misdiagnosed as acoustic neuromas when situated near the auditory apparatus. We describe a patient with a large cystic tumor with associated trigeminal, facial, audiovestibular, and brainstem dysfunction, which was suspicious for acoustic neuroma on preoperative neuroimaging. Intraoperative investigation revealed a facial nerve neuroma located in the cerebellopontine angle and internal acoustic canal. Gross total resection of the tumor via retrosigmoid craniotomy was curative. Transection of the facial nerve necessitated facial reanimation 4 months later via hypoglossal-facial cross-anastomosis. Clinicians should recognize the natural history, diagnostic approach, and management of this unusual and mimetic lesion. Copyright © 2014 Elsevier Ltd. All rights reserved.

  4. Comparative Discussion on Psychophysiological Effect of Self-administered Facial Massage by Treatment Method

    NASA Astrophysics Data System (ADS)

    Nozawa, Akio; Takei, Yuya

    The aim of this study was to quantitatively evaluate the effects of self-administered facial massage performed by hand or with a facial roller. The psychophysiological effects of facial massage were evaluated. Physiological status was assessed via the central nervous system and the autonomic nervous system. The central nervous system was assessed by electroencephalogram (EEG). The autonomic nervous system was assessed by peripheral skin temperature (PST) and heart rate variability (HRV) with spectral analysis. In the spectral analysis of HRV, the high-frequency (HF) components were evaluated. The State-Trait Anxiety Inventory (STAI), the Profile of Mood States (POMS), and subjective sensory ratings on a Visual Analog Scale (VAS) were administered to evaluate psychological status. The results suggest that self-administered facial massage maintained brain activity and had strong stress-alleviating effects.
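
    The HF component of HRV mentioned above is conventionally obtained from a power spectrum of the resampled RR-interval series. The sketch below uses Welch's method over the standard 0.15-0.40 Hz band; the band limits, the 4 Hz resampling rate, and the toy RR series are conventional assumptions, not values taken from this record:

        import numpy as np
        from scipy.integrate import trapezoid
        from scipy.interpolate import interp1d
        from scipy.signal import welch

        def hf_power(rr_ms, fs=4.0, band=(0.15, 0.40)):
            """High-frequency HRV power (ms^2) from RR intervals in milliseconds."""
            rr = np.asarray(rr_ms, float)
            t = np.cumsum(rr) / 1000.0                     # beat times (s)
            grid = np.arange(t[0], t[-1], 1.0 / fs)        # even resampling grid
            rr_even = interp1d(t, rr, kind="cubic")(grid)  # resampled tachogram
            f, pxx = welch(rr_even - rr_even.mean(), fs=fs,
                           nperseg=min(256, len(grid)))
            mask = (f >= band[0]) & (f <= band[1])
            return trapezoid(pxx[mask], f[mask])

        # Toy RR series: ~70 bpm with a 0.3 Hz respiratory modulation.
        beats = np.arange(300)
        rr = 857 + 40 * np.sin(2 * np.pi * 0.3 * beats * 0.857)
        print("HF power: %.1f ms^2" % hf_power(rr))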

  5. Functionally dissociated aspects in anterior and posterior electrocortical processing of facial threat.

    PubMed

    Schutter, Dennis J L G; de Haan, Edward H F; van Honk, Jack

    2004-06-01

    The angry facial expression is an important socially threatening stimulus argued to have evolved to regulate social hierarchies. In the present study, event-related potentials (ERP) were used to investigate the involvement and temporal dynamics of the frontal and parietal regions in the processing of angry facial expressions. Angry, happy and neutral faces were shown to eighteen healthy right-handed volunteers in a passive viewing task. Stimulus-locked ERPs were recorded from the frontal and parietal scalp sites. The P200, N300 and early contingent negative variation (eCNV) components of the electric brain potentials were investigated. Analyses revealed statistically significant reductions in P200 amplitudes for the angry facial expression at both frontal and parietal electrode sites. Furthermore, apart from being strongly associated with the anterior P200, the N300 was also more negative for the angry facial expression in the anterior regions. Finally, the eCNV was more pronounced over the parietal sites for the angry facial expressions. The present study demonstrated specific electrocortical correlates underlying the processing of angry facial expressions in the anterior and posterior brain sectors. The P200 is argued to indicate valence tagging by a fast and early detection mechanism. The lowered N300 with an anterior distribution for the angry facial expressions indicates more elaborate evaluation of stimulus relevance. The fact that the P200 and the N300 are highly correlated suggests that they reflect different stages of the same anterior evaluation mechanism. The more pronounced posterior eCNV suggests sustained attention to socially threatening information. Copyright 2004 Elsevier B.V.

  6. Variation in the cranial base orientation and facial skeleton in dry skulls sampled from three major populations.

    PubMed

    Kuroe, Kazuto; Rosas, Antonio; Molleson, Theya

    2004-04-01

    The aim of this study was to analyse the effects of cranial base orientation on the morphology of the craniofacial system in human populations. Three geographically distant populations from Europe (72), Africa (48) and Asia (24) were chosen. Five angular and two linear variables from the cranial base component and six angular and six linear variables from the facial component based on two reference lines of the vertical posterior maxillary and Frankfort horizontal planes were measured. The European sample presented dolichofacial individuals with a larger face height and a smaller face depth derived from a raised cranial base and facial cranium orientation which tended to be similar to the Asian sample. The African sample presented brachyfacial individuals with a reduced face height and a larger face depth as a result of a lowered cranial base and facial cranium orientation. The Asian sample presented dolichofacial individuals with a larger face height and depth due to a raised cranial base and facial cranium orientation. The findings of this study suggest that cranial base orientation and posterior cranial base length appear to be valid discriminating factors between different human populations.

  7. Facial Transplantation Surgery Introduction

    PubMed Central

    2015-01-01

    Severely disfiguring facial injuries can have a devastating impact on the patient's quality of life. During the past decade, vascularized facial allotransplantation has progressed from an experimental possibility to a clinical reality in the fields of disease, trauma, and congenital malformations. This technique may now be considered a viable option for repairing complex craniofacial defects for which the results of autologous reconstruction remain suboptimal. Vascularized facial allotransplantation permits optimal anatomical reconstruction and provides desired functional, esthetic, and psychosocial benefits that are far superior to those achieved with conventional methods. Along with dramatic improvements in their functional statuses, patients regain the ability to make facial expressions such as smiling and to perform various functions such as smelling, eating, drinking, and speaking. The ideas in the 1997 movie "Face/Off" have now been realized in the clinical field. The objective of this article is to introduce this new surgical field, provide a basis for examining the status of the field of face transplantation, and stimulate and enhance facial transplantation studies in Korea. PMID:26028914

  8. Facial transplantation surgery introduction.

    PubMed

    Eun, Seok-Chan

    2015-06-01

    Severely disfiguring facial injuries can have a devastating impact on the patient's quality of life. During the past decade, vascularized facial allotransplantation has progressed from an experimental possibility to a clinical reality in the fields of disease, trauma, and congenital malformations. This technique may now be considered a viable option for repairing complex craniofacial defects for which the results of autologous reconstruction remain suboptimal. Vascularized facial allotransplantation permits optimal anatomical reconstruction and provides desired functional, esthetic, and psychosocial benefits that are far superior to those achieved with conventional methods. Along with dramatic improvements in their functional statuses, patients regain the ability to make facial expressions such as smiling and to perform various functions such as smelling, eating, drinking, and speaking. The ideas in the 1997 movie "Face/Off" have now been realized in the clinical field. The objective of this article is to introduce this new surgical field, provide a basis for examining the status of the field of face transplantation, and stimulate and enhance facial transplantation studies in Korea.

  9. Facial reanimation with gracilis muscle transfer neurotized to cross-facial nerve graft versus masseteric nerve: a comparative study using the FACIAL CLIMA evaluating system.

    PubMed

    Hontanilla, Bernardo; Marre, Diego; Cabello, Alvaro

    2013-06-01

    Longstanding unilateral facial paralysis is best addressed with microneurovascular muscle transplantation. Neurotization can be obtained from the cross-facial or the masseter nerve. The authors present a quantitative comparison of both procedures using the FACIAL CLIMA system. Forty-seven patients with complete unilateral facial paralysis underwent reanimation with a free gracilis transplant neurotized to either a cross-facial nerve graft (group I, n=20) or to the ipsilateral masseteric nerve (group II, n=27). Commissural displacement and commissural contraction velocity were measured using the FACIAL CLIMA system. Postoperative intragroup commissural displacement and commissural contraction velocity means of the reanimated versus the normal side were first compared using the independent samples t test. Mean percentage of recovery of both parameters were compared between the groups using the independent samples t test. Significant differences of mean commissural displacement and commissural contraction velocity between the reanimated side and the normal side were observed in group I (p=0.001 and p=0.014, respectively) but not in group II. Intergroup comparisons showed that both commissural displacement and commissural contraction velocity were higher in group II, with significant differences for commissural displacement (p=0.048). Mean percentage of recovery of both parameters was higher in group II, with significant differences for commissural displacement (p=0.042). Free gracilis muscle transfer neurotized by the masseteric nerve is a reliable technique for reanimation of longstanding facial paralysis. Compared with cross-facial nerve graft neurotization, this technique provides better symmetry and a higher degree of recovery. Therapeutic, III.

  10. Alexithymia and the labeling of facial emotions: response slowing and increased motor and somatosensory processing

    PubMed Central

    2014-01-01

    Background Alexithymia is a personality trait that is characterized by difficulties in identifying and describing feelings. Previous studies have shown that alexithymia is related to problems in recognizing others’ emotional facial expressions when these are presented with temporal constraints. These problems can be less severe when the expressions are visible for a relatively long time. Because the neural correlates of these recognition deficits are still relatively unexplored, we investigated the labeling of facial emotions and brain responses to facial emotions as a function of alexithymia. Results Forty-eight healthy participants had to label the emotional expression (angry, fearful, happy, or neutral) of faces presented for 1 or 3 seconds in a forced-choice format while undergoing functional magnetic resonance imaging. The participants’ level of alexithymia was assessed using self-report and interview. In light of the previous findings, we focused our analysis on the alexithymia component of difficulties in describing feelings. Difficulties describing feelings, as assessed by the interview, were associated with increased reaction times for negative (i.e., angry and fearful) faces, but not with labeling accuracy. Moreover, individuals with higher alexithymia showed increased brain activation in the somatosensory cortex and supplementary motor area (SMA) in response to angry and fearful faces. These cortical areas are known to be involved in the simulation of the bodily (motor and somatosensory) components of facial emotions. Conclusion The present data indicate that alexithymic individuals may use information related to bodily actions rather than affective states to understand the facial expressions of other persons. PMID:24629094

  11. Are facial injuries really different? An observational cohort study comparing appearance concern and psychological distress in facial trauma and non-facial trauma patients.

    PubMed

    Rahtz, Emmylou; Bhui, Kamaldeep; Hutchison, Iain; Korszun, Ania

    2018-01-01

    Facial injuries are widely assumed to lead to stigma and significant psychosocial burden. Experimental studies of face perception support this idea, but there is very little empirical evidence to guide treatment. This study sought to address the gap. Data were collected from 193 patients admitted to hospital following facial or other trauma. Ninety (90) participants were successfully followed up 8 months later. Participants completed measures of appearance concern and psychological distress (post-traumatic stress symptoms (PTSS), depressive symptoms, anxiety symptoms). Participants were classified by site of injury (facial or non-facial injury). The overall levels of appearance concern were comparable to those of the general population, and there was no evidence of more appearance concern among people with facial injuries. Women and younger people were significantly more likely to experience appearance concern at baseline. Baseline and 8-month psychological distress, although common in the sample, did not differ according to the site of injury. Changes in appearance concern were, however, strongly associated with psychological distress at follow-up. We conclude that although appearance concern is severe among some people with facial injury, it is not especially different to those with non-facial injuries or the general public; changes in appearance concern, however, appear to correlate with psychological distress. We therefore suggest that interventions might focus on those with heightened appearance concern and should target cognitive bias and psychological distress. Copyright © 2017 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.

  12. The Relationships between Processing Facial Identity, Emotional Expression, Facial Speech, and Gaze Direction during Development

    ERIC Educational Resources Information Center

    Spangler, Sibylle M.; Schwarzer, Gudrun; Korell, Monika; Maier-Karius, Johanna

    2010-01-01

    Four experiments were conducted with 5- to 11-year-olds and adults to investigate whether facial identity, facial speech, emotional expression, and gaze direction are processed independently of or in interaction with one another. In a computer-based, speeded sorting task, participants sorted faces according to facial identity while disregarding…

  13. Intratemporal facial nerve ultrastructure in patients with idiopathic facial paralysis: viral infection evidence study.

    PubMed

    Florez, Rosangela Aló Maluza; Lang, Raquel; Veridiano, Adriano Mora; Zanini, Renato de Oliveira; Calió, Pedro Luiz; Simões, Ricardo Dos Santos; Testa, José Ricardo Gurgel

    2010-01-01

    The etiology of idiopathic peripheral facial palsy (IPFP) is still uncertain; however, some authors suggest the possibility of a viral infection. The aim was to analyze the ultrastructure of the facial nerve, seeking viral evidence that might provide etiological data. We studied 20 patients with peripheral facial palsy (PFP), with moderate to severe FP, of both genders, between 18-60 years of age, from the Clinic of Facial Nerve Disorders. The patients were divided into two groups - Study: eleven patients with IPFP; and Control: nine patients with trauma- or tumor-related PFP. The fragments were obtained from the facial nerve sheath or from fragments of its stumps - material that would otherwise be discarded or sent for pathology examination during facial nerve repair surgery. The removed tissue was fixed in 2% glutaraldehyde and studied under transmission electron microscopy. In the study group we observed intense cellular repair activity, with increased collagen fibers and fibroblasts containing developed organelles, but no viral particles. In the control group this repair activity was not evident, and no viral particles were observed either. There were no viral particles, and there was evidence of intense repair activity but not of viral infection.

  14. [Relationship between Work Ⅱ type of congenital first branchial cleft anomaly and facial nerve and surgical strategies].

    PubMed

    Zhang, B; Chen, L S; Huang, S L; Liang, L; Gong, X X; Wu, P N; Zhang, S Y; Luo, X N; Zhan, J D; Sheng, X L; Lu, Z M

    2017-10-07

    Objective: To investigate the relationship between Work type Ⅱ congenital first branchial cleft anomaly (CFBCA) and the facial nerve and to discuss surgical strategies. Methods: Retrospective analysis of 37 patients with CFBCA who were treated from May 2005 to September 2016. Among the 37 cases, there were 12 males and 25 females; 24 lesions were on the left and 13 on the right; the age at diagnosis ranged from 1 to 76 years, with a median age of 20 years; 24 cases were aged 18 years or less and 13 were older than 18 years; duration of disease ranged from 1 to 10 years (median of 6 years); 4 cases were recurrent after fistula resection. According to the classification of Olsen, all 37 cases were non-cystic (sinus or fistula). The external fistula was located above the mandibular angle in 28 (75.7%) cases and below the angle in 9 (24.3%) cases. Results: Surgery was performed successfully in all 37 cases. Lesions were located anterior to the facial nerve in 13 (35.1%) cases, coursed between its branches in 3 cases (8.1%), and lay deep to the facial nerve in 21 (56.8%) cases. CFBCA in females with an external fistula below the mandibular angle and a membranous band was more likely to lie deep to the facial nerve than in males with an external fistula above the mandibular angle and no myringeal web. Conclusions: CFBCA in female patients with an external fistula located below the mandibular angle, a non-cystic lesion of Olsen type, or a myringeal web is more likely to lie deep to the facial nerve. Surgeons should take particular care to protect the facial nerve in these patients; if necessary, facial nerve monitoring can be used during surgery to achieve complete resection of the lesions.

  15. Peripheral facial palsy in children.

    PubMed

    Yılmaz, Unsal; Cubukçu, Duygu; Yılmaz, Tuba Sevim; Akıncı, Gülçin; Ozcan, Muazzez; Güzel, Orkide

    2014-11-01

    The aim of this study is to evaluate the types and clinical characteristics of peripheral facial palsy in children. The hospital charts of children diagnosed with peripheral facial palsy were reviewed retrospectively. A total of 81 children (42 female and 39 male) with a mean age of 9.2 ± 4.3 years were included in the study. Causes of facial palsy were 65 (80.2%) idiopathic (Bell palsy) facial palsy, 9 (11.1%) otitis media/mastoiditis, and tumor, trauma, congenital facial palsy, chickenpox, Melkersson-Rosenthal syndrome, enlarged lymph nodes, and familial Mediterranean fever (each 1; 1.2%). Five (6.1%) patients had recurrent attacks. In patients with Bell palsy, female/male and right/left ratios were 36/29 and 35/30, respectively. Of them, 31 (47.7%) had a history of preceding infection. The overall rate of complete recovery was 98.4%. A wide variety of disorders can present with peripheral facial palsy in children. Therefore, careful investigation and differential diagnosis is essential. © The Author(s) 2013.

  16. Values of a Patient and Observer Scar Assessment Scale to Evaluate the Facial Skin Graft Scar.

    PubMed

    Chae, Jin Kyung; Kim, Jeong Hee; Kim, Eun Jung; Park, Kun

    2016-10-01

    The Patient and Observer Scar Assessment Scale (POSAS) recently emerged as a promising method, reflecting both the observer's and the patient's opinions in evaluating a scar. This tool has been shown to be consistent and reliable in burn scar assessment, but it has not been tested in the setting of skin graft scars in skin cancer patients. To evaluate facial skin graft scars using the POSAS and to compare it with objective scar assessment tools. Twenty-three patients, who were diagnosed with facial cutaneous malignancy and received skin grafts after Mohs micrographic surgery, were recruited. Observer assessment was performed by three independent raters using the observer component of the POSAS and the Vancouver scar scale (VSS). Patient self-assessment was performed using the patient component of the POSAS. To quantify scar color and scar thickness more objectively, spectrophotometry and ultrasonography were applied. Inter-observer reliability was substantial with both the VSS and the observer component of the POSAS (average-measures intraclass correlation coefficient, 0.76 and 0.80, respectively). The observer component consistently showed significant correlations with patients' ratings for the parameters of the POSAS (all p-values < 0.05). The correlation between subjective assessment using the POSAS and objective assessment using spectrophotometry and ultrasonography was low. In facial skin graft scar assessment in skin cancer patients, the POSAS showed acceptable inter-observer reliability. This tool was more comprehensive and correlated more highly with the patient's opinion.
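
    The average-measures intraclass correlation coefficient used above for inter-observer reliability can be computed from a subjects-by-raters matrix with the Shrout and Fleiss ICC(2,k) formula, as in this sketch with placeholder ratings:

        import numpy as np

        def icc_2k(ratings):
            """Average-measures intraclass correlation, two-way random effects,
            absolute agreement (Shrout & Fleiss ICC(2,k))."""
            y = np.asarray(ratings, float)   # shape: (n_subjects, k_raters)
            n, k = y.shape
            grand = y.mean()
            msr = k * np.sum((y.mean(axis=1) - grand) ** 2) / (n - 1)  # subjects
            msc = n * np.sum((y.mean(axis=0) - grand) ** 2) / (k - 1)  # raters
            sse = np.sum((y - grand) ** 2) - msr * (n - 1) - msc * (k - 1)
            mse = sse / ((n - 1) * (k - 1))
            return (msr - mse) / (msr + (msc - mse) / n)

        # Toy example: 5 scars scored by 3 observers on a 1-10 scale.
        ratings = [[3, 4, 3], [6, 6, 7], [2, 3, 2], [8, 7, 8], [5, 5, 6]]
        print("ICC(2,k) = %.2f" % icc_2k(ratings))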

  17. Effects of task demands on the early neural processing of fearful and happy facial expressions

    PubMed Central

    Itier, Roxane J.; Neath-Tavares, Karly N.

    2017-01-01

    Task demands shape how we process environmental stimuli, but their impact on the early neural processing of facial expressions remains unclear. In a within-subject design, ERPs were recorded to the same fearful, happy and neutral facial expressions presented during a gender discrimination task, an explicit emotion discrimination task, and an oddball detection task, the most studied tasks in the field. Using an eye tracker, fixation on the nose of the face was enforced using a gaze-contingent presentation. Task demands modulated amplitudes from 200–350ms at occipito-temporal sites spanning the EPN component. Amplitudes were more negative for fearful than neutral expressions starting with the N170, from 150–350ms, with a temporo-occipital distribution, whereas no clear effect of happy expressions was seen. Task and emotion effects never interacted in any time window or for any of the ERP components analyzed (P1, N170, EPN). Thus, whether emotion is explicitly discriminated or irrelevant for the task at hand, the neural correlates of fearful and happy facial expressions seem immune to these task demands during the first 350ms of visual processing. PMID:28315309

  18. High-resolution face verification using pore-scale facial features.

    PubMed

    Li, Dong; Zhou, Huiling; Lam, Kin-Man

    2015-08-01

    Face recognition methods, which usually represent face images using holistic or local facial features, rely heavily on alignment. Their performance also suffers severe degradation under variations in expression or pose, especially when there is only one gallery image per subject. With the easy access to high-resolution (HR) face images nowadays, some HR face databases have recently been developed. However, few studies have tackled the use of HR information for face recognition or verification. In this paper, we propose a pose-invariant face-verification method, which is robust to alignment errors, using HR information based on pore-scale facial features. A new keypoint descriptor, namely pore-Principal Component Analysis (PCA)-Scale Invariant Feature Transform (PPCASIFT), adapted from PCA-SIFT, is devised for the extraction of a compact set of distinctive pore-scale facial features. Having matched the pore-scale features of two face regions, an effective robust-fitting scheme is proposed for the face-verification task. Experiments show that, with only one frontal-view gallery image per subject, our proposed method outperforms a number of standard verification methods and can achieve excellent accuracy even when the faces are under large variations in expression and pose.
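
    The general pipeline the record builds on (PCA-compressed keypoint descriptors followed by nearest-neighbour matching with a ratio test) can be sketched as below; this is not the authors' PPCASIFT implementation, and the descriptors and PCA basis are random placeholders:

        import numpy as np

        def pca_project(descriptors, mean, basis):
            """Project raw patch descriptors onto a learned PCA basis."""
            return (descriptors - mean) @ basis

        def match(desc_a, desc_b, ratio=0.8):
            """Nearest-neighbour matching with a ratio test."""
            matches = []
            for i, d in enumerate(desc_a):
                dists = np.linalg.norm(desc_b - d, axis=1)
                j, j2 = np.argsort(dists)[:2]
                if dists[j] < ratio * dists[j2]:
                    matches.append((i, j))
            return matches

        # Placeholder 'pore-scale' descriptors from two face regions; the PCA
        # basis would normally be learned from a training set of skin patches.
        rng = np.random.default_rng(4)
        raw_a, raw_b = rng.normal(size=(50, 64)), rng.normal(size=(60, 64))
        mean = raw_a.mean(axis=0)
        basis = np.linalg.qr(rng.normal(size=(64, 20)))[0]  # orthonormal 64 -> 20
        pairs = match(pca_project(raw_a, mean, basis), pca_project(raw_b, mean, basis))
        print(len(pairs), "tentative matches (to be filtered by robust fitting)")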

  19. Impact of facial defect reconstruction on attractiveness and negative facial perception.

    PubMed

    Dey, Jacob K; Ishii, Masaru; Boahene, Kofi D O; Byrne, Patrick; Ishii, Lisa E

    2015-06-01

    Measure the impact of facial defect reconstruction on observer-graded attractiveness and negative facial perception. Prospective, randomized, controlled experiment. One hundred twenty casual observers viewed images of faces with defects of varying sizes and locations before and after reconstruction as well as normal comparison faces. Observers rated attractiveness, defect severity, and how disfiguring, bothersome, and important to repair they considered each face. Facial defects decreased attractiveness -2.26 (95% confidence interval [CI]: -2.45, -2.08) on a 10-point scale. Mixed effects linear regression showed this attractiveness penalty varied with defect size and location, with large and central defects generating the greatest penalty. Reconstructive surgery increased attractiveness 1.33 (95% CI: 1.18, 1.47), an improvement dependent upon size and location, restoring some defect categories to near normal ranges of attractiveness. Iterated principal factor analysis indicated the disfiguring, important to repair, bothersome, and severity variables were highly correlated and measured a common domain; thus, they were combined to create the disfigured, important to repair, bothersome, severity (DIBS) factor score, representing negative facial perception. The DIBS regression showed defect faces have a 1.5 standard deviation increase in negative perception (DIBS: 1.69, 95% CI: 1.61, 1.77) compared to normal faces, which decreased by a similar magnitude after surgery (DIBS: -1.44, 95% CI: -1.49, -1.38). These findings varied with defect size and location. Surgical reconstruction of facial defects increased attractiveness and decreased negative social facial perception, an impact that varied with defect size and location. These new social perception data add to the evidence base demonstrating the value of high-quality reconstructive surgery. NA. © 2015 The American Laryngological, Rhinological and Otological Society, Inc.
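
    Combining several highly correlated ratings into a single factor score, as done for the DIBS score above, can be approximated by projecting standardized ratings onto their first principal component; the sketch below uses simulated correlated ratings and is only a stand-in for iterated principal factor analysis:

        import numpy as np

        def first_factor_score(ratings):
            """Standardize correlated ratings and project them onto the first
            principal component (a simple one-factor score)."""
            z = (ratings - ratings.mean(axis=0)) / ratings.std(axis=0, ddof=1)
            eigvals, eigvecs = np.linalg.eigh(np.cov(z, rowvar=False))
            loading = eigvecs[:, -1]          # eigenvector of the largest eigenvalue
            if loading.sum() < 0:             # fix sign: higher = more negative perception
                loading = -loading
            return z @ loading

        # Simulated ratings: disfiguring, important-to-repair, bothersome, severity.
        rng = np.random.default_rng(5)
        shared = rng.normal(size=(100, 1))                  # common factor
        ratings = shared + 0.3 * rng.normal(size=(100, 4))  # four correlated items
        dibs_like = first_factor_score(ratings)
        print("mean %.2f, sd %.2f" % (dibs_like.mean(), dibs_like.std(ddof=1)))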

  20. Imaging the Facial Nerve: A Contemporary Review

    PubMed Central

    Gupta, Sachin; Mends, Francine; Hagiwara, Mari; Fatterpekar, Girish; Roehm, Pamela C.

    2013-01-01

    Imaging plays a critical role in the evaluation of a number of facial nerve disorders. The facial nerve has a complex anatomical course; thus, a thorough understanding of the course of the facial nerve is essential to localize the sites of pathology. Facial nerve dysfunction can occur from a variety of causes, which can often be identified on imaging. Computed tomography and magnetic resonance imaging are helpful for identifying bony facial canal and soft tissue abnormalities, respectively. Ultrasound of the facial nerve has been used to predict functional outcomes in patients with Bell's palsy. More recently, diffusion tensor tractography has appeared as a new modality which allows three-dimensional display of facial nerve fibers. PMID:23766904

  1. Facial nerve palsy due to birth trauma

    MedlinePlus

    Seventh cranial nerve palsy due to birth trauma; Facial palsy - birth trauma; Facial palsy - neonate; Facial palsy - infant ... An infant's facial nerve is also called the seventh cranial nerve. It can be damaged just before or at the time of delivery. ...

  2. Facial Nerve Paralysis due to Chronic Otitis Media: Prognosis in Restoration of Facial Function after Surgical Intervention

    PubMed Central

    Kim, Jin; Jung, Gu-Hyun; Park, See-Young

    2012-01-01

    Purpose Facial paralysis is an uncommon but significant complication of chronic otitis media (COM). Surgical eradication of the disease is the most viable way to overcome facial paralysis arising from it. In an effort to guide treatment of this rare complication, we analyzed the prognosis of facial function after surgical treatment. Materials and Methods A total of 3435 patients with COM, who underwent various otologic surgeries over a period of 20 years, were analyzed retrospectively. Forty-six patients (1.33%) had facial nerve paralysis caused by COM. We analyzed prognostic factors affecting surgical outcomes, including the delay before surgery, the extent of disease, the presence or absence of cholesteatoma, and the type of surgery. Results Surgical intervention had a good effect on the restoration of facial function in cases with a shorter interval from onset of facial paralysis to surgery and in cases of sudden onset without cholesteatoma. No previous ear surgery and a healthy bony labyrinth indicated a good postoperative prognosis. Conclusion COM causing facial paralysis is most frequently due to cholesteatoma, and the presence of cholesteatoma decreased the effectiveness of surgical treatment and indicated a poor prognosis after surgery. In our experience, early surgical intervention can be crucial to recovery of facial function. To prevent recurrent cholesteatoma, which leads to local destruction of the facial nerve, complete eradication of the disease in one procedure cannot be overemphasized in the treatment of patients with COM. PMID:22477011

  3. Use of Facial Recognition Software to Identify Disaster Victims With Facial Injuries.

    PubMed

    Broach, John; Yong, Rothsovann; Manuell, Mary-Elise; Nichols, Constance

    2017-10-01

    After large-scale disasters, victim identification frequently presents a challenge and a priority for responders attempting to reunite families and ensure proper identification of deceased persons. The purpose of this investigation was to determine whether currently commercially available facial recognition software can successfully identify disaster victims with facial injuries. Photos of 106 people were taken before and after application of moulage designed to simulate traumatic facial injuries. These photos as well as photos from volunteers' personal photo collections were analyzed by using facial recognition software to determine whether this technology could accurately identify a person with facial injuries. The study results suggest that a responder could expect to get a correct match between submitted photos and photos of injured patients between 39% and 45% of the time and a much higher percentage of correct returns if submitted photos were of optimal quality with percentages correct exceeding 90% in most situations. The present results suggest that the use of this software would provide significant benefit to responders. Although a correct result was returned only 40% of the time, this would still likely represent a benefit for a responder trying to identify hundreds or thousands of victims. (Disaster Med Public Health Preparedness. 2017;11:568-572).

  4. [Neural mechanisms of facial recognition].

    PubMed

    Nagai, Chiyoko

    2007-01-01

    We review recent research on the neural mechanisms of facial recognition in light of three aspects: facial discrimination and identification, recognition of facial expressions, and face perception in itself. First, it has been demonstrated that the fusiform gyrus, especially the right fusiform gyrus, plays a main role in facial discrimination and identification. However, whether the FFA (fusiform face area) is really a special area for facial processing is controversial; some researchers insist that the FFA is related to 'becoming an expert' for certain kinds of visual objects, including faces. The neural mechanisms of prosopagnosia are deeply relevant to this issue. Second, the amygdala appears to be strongly involved in the recognition of facial expressions, especially fear. The amygdala, connected with the superior temporal sulcus and the orbitofrontal cortex, appears to modulate cortical function. The amygdala and the superior temporal sulcus are related to gaze recognition, which explains why a patient with bilateral amygdala damage selectively failed to recognize fearful expressions: information from the eyes is necessary for fear recognition. Finally, even a newborn infant can recognize a face as a face, which is congruent with the innate hypothesis of facial recognition. Some researchers speculate that the neural basis of such face perception is a subcortical network comprising the amygdala, the superior colliculus, and the pulvinar. This network would be related to the covert recognition that prosopagnosic patients retain.

  5. Extracted facial feature of racial closely related faces

    NASA Astrophysics Data System (ADS)

    Liewchavalit, Chalothorn; Akiba, Masakazu; Kanno, Tsuneo; Nagao, Tomoharu

    2010-02-01

    Human faces contain a great deal of demographic information such as identity, gender, age, race and emotion. Human beings can perceive these pieces of information and use them as important clues in social interaction with other people. Race perception is considered one of the most delicate and sensitive parts of face perception. There has been much research on image-based race recognition, but most of it focuses on major race groups such as Caucasoid, Negroid and Mongoloid. This paper focuses on how people classify race within racially closely related groups. As a sample of a racially closely related group, we chose Japanese and Thai faces to represent the difference between Northern and Southern Mongoloid. Three psychological experiments were performed to study the strategies of face perception in race classification. The results suggest that race perception is an ability that can be learned. Eyes and eyebrows attract the most attention, and the eyes are a significant factor in race perception. Principal Component Analysis (PCA) was performed to extract facial features of the sample race groups. Extracted race features of texture and shape were used to synthesize faces. The results suggest that racial features rely on detailed texture rather than on shape. This research is an indispensable and fundamental study of race perception, which is essential for the establishment of a human-like race recognition system.
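    As an illustration of the kind of PCA-based feature extraction described above, the following minimal sketch projects pre-aligned, flattened face images onto their leading principal axes ("eigenface"-style features). It is not the authors' implementation; the array shapes, component count and random stand-in data are assumptions for illustration only.

        import numpy as np

        def pca_face_features(faces, n_components=20):
            """Project aligned, flattened face images onto their top principal axes.

            faces: array of shape (n_samples, n_pixels), one flattened grayscale
                   face per row. Returns the mean face, the principal axes and
                   the low-dimensional feature vectors.
            """
            mean_face = faces.mean(axis=0)
            centered = faces - mean_face
            # SVD of the centered data matrix yields the principal axes directly.
            _, _, vt = np.linalg.svd(centered, full_matrices=False)
            axes = vt[:n_components]            # (n_components, n_pixels)
            features = centered @ axes.T        # (n_samples, n_components)
            return mean_face, axes, features

        # Random stand-in data; in the study these would be aligned Japanese/Thai faces.
        faces = np.random.rand(60, 64 * 64)
        mean_face, axes, feats = pca_face_features(faces, n_components=10)
        print(feats.shape)  # (60, 10)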

  6. Noninvasive Facial Rejuvenation. Part 1: Patient-Directed

    PubMed Central

    Commander, Sarah Jane; Chang, Daniel; Fakhro, Abdulla; Nigro, Marjory G.; Lee, Edward I.

    2016-01-01

    A proper knowledge of noninvasive facial rejuvenation is integral to the practice of a cosmetic surgeon. Noninvasive facial rejuvenation can be divided into patient- versus physician-directed modalities. Patient-directed facial rejuvenation combines the use of facial products such as sunscreen, moisturizers, retinoids, α-hydroxy acids, and various antioxidants to both maintain youthful skin and rejuvenate damaged skin. Physicians may recommend and often prescribe certain products, but the patients are in control of this type of facial rejuvenation. On the other hand, physician-directed facial rejuvenation entails modalities that require direct physician involvement, such as neuromodulators, filler injections, laser resurfacing, microdermabrasion, and chemical peels. With the successful integration of each of these modalities, a complete facial regimen can be established and patient satisfaction can be maximized. This article is the first in a three-part series describing noninvasive facial rejuvenation. The authors focus on patient-directed facial rejuvenation. It is important, however, to emphasize that even in a patient-directed modality, a physician's involvement through education and guidance is integral to its success. PMID:27478421

  7. Features versus context: An approach for precise and detailed detection and delineation of faces and facial features.

    PubMed

    Ding, Liya; Martinez, Aleix M

    2010-11-01

    The appearance-based approach to face detection has seen great advances in the last several years. In this approach, we learn the image statistics describing the texture pattern (appearance) of the object class we want to detect, e.g., the face. However, this approach has had limited success in providing an accurate and detailed description of the internal facial features, i.e., eyes, brows, nose, and mouth. In general, this is due to the limited information carried by the learned statistical model. While the face template is relatively rich in texture, facial features (e.g., eyes, nose, and mouth) do not carry enough discriminative information to tell them apart from all possible background images. We resolve this problem by adding the context information of each facial feature in the design of the statistical model. In the proposed approach, the context information defines the image statistics most correlated with the surroundings of each facial component. This means that when we search for a face or facial feature, we look for those locations which most resemble the feature yet are most dissimilar to its context. This dissimilarity with the context features forces the detector to gravitate toward an accurate estimate of the position of the facial feature. Learning to discriminate between feature and context templates is difficult, however, because the context and the texture of the facial features vary widely under changing expression, pose, and illumination, and may even resemble one another. We address this problem with the use of subclass divisions. We derive two algorithms to automatically divide the training samples of each facial feature into a set of subclasses, each representing a distinct construction of the same facial component (e.g., closed versus open eyes) or its context (e.g., different hairstyles). The first algorithm is based on a discriminant analysis formulation. The second algorithm is an extension of the AdaBoost approach. We provide

  8. Sleep stage classification with low complexity and low bit rate.

    PubMed

    Virkkala, Jussi; Värri, Alpo; Hasan, Joel; Himanen, Sari-Leena; Müller, Kiti

    2009-01-01

    Standard sleep stage classification is based on visual analysis of central (usually also frontal and occipital) EEG, two-channel EOG, and submental EMG signals. The process is complex, uses multiple electrodes, and is usually based on relatively high (200-500 Hz) sampling rates. Also, at least 12-bit analog-to-digital conversion is recommended (with 16-bit storage), resulting in a total bit rate of at least 12.8 kbit/s. This is not a problem for in-house laboratory sleep studies, but for online wireless self-applicable ambulatory sleep studies, lower complexity and lower bit rates are preferred. In this study we further developed an earlier single-channel facial EMG/EOG/EEG-based automatic sleep stage classification. An algorithm with a simple decision tree separated 30 s epochs into wakefulness, SREM, S1/S2 and SWS using 18-45 Hz beta power and 0.5-6 Hz amplitude. Improvements included low-complexity recursive digital filtering. We also evaluated the effects of a reduced sampling rate, a reduced number of quantization steps and a reduced dynamic range on the sleep data of 132 training and 131 testing subjects. With the studied algorithm, it was possible to reduce the sampling rate to 50 Hz (with a low-pass filter at 90 Hz) and the dynamic range to 244 microV, with an 8-bit resolution, resulting in a bit rate of 0.4 kbit/s. Facial electrodes and a low bit rate enable the use of smaller devices for sleep stage classification in home environments.
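    The bit-rate figures quoted above follow from simple arithmetic (sampling rate x bits per sample x number of channels). A small sketch of that calculation; the four-channel count used for the standard montage is an assumption for illustration, not a figure taken from the study.

        def bit_rate_kbps(sampling_hz, bits_per_sample, channels=1):
            """Raw storage bit rate in kbit/s."""
            return sampling_hz * bits_per_sample * channels / 1000.0

        # Standard laboratory recording: ~200 Hz sampling, 16-bit storage and
        # (assumed here) four signals, e.g. EEG, two EOG channels and EMG.
        print(bit_rate_kbps(200, 16, channels=4))   # 12.8 kbit/s

        # Reduced single facial EMG/EOG/EEG channel: 50 Hz, 8-bit resolution.
        print(bit_rate_kbps(50, 8, channels=1))     # 0.4 kbit/s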

  9. Pediatric facial injuries: It's management.

    PubMed

    Singh, Geeta; Mohammad, Shadab; Pal, U S; Hariram; Malkunje, Laxman R; Singh, Nimisha

    2011-07-01

    Facial injuries in children always present a challenge in respect of their diagnosis and management. Since these children are of a growing age, every care should be taken so that the overall growth pattern of the facial skeleton in these children is not later jeopardized. The aim was to assess the most feasible method for the management of facial injuries in children without hampering facial growth. Sixty child patients with facial trauma were selected randomly for this study. On the basis of examination and investigations, a suitable management approach involving rest and observation, open or closed reduction and immobilization, trans-osseous (TO) wiring, mini bone plate fixation, splinting and replantation, elevation and fixation of the zygoma, etc. was carried out. In our study, falls were the predominant cause of facial injuries in children. There was a 1.09% incidence of facial injuries in children up to 16 years of age amongst the total patients. The age-wise distribution of fractures amongst groups (I, II and III) was found to be 26.67%, 51.67% and 21.67%, respectively. The male to female patient ratio was 3:1. The majority of cases of facial injuries were seen in Group II patients (6-11 years), i.e. 51.67%. Mandibular fracture was found to be the most common fracture (0.60%), followed by dentoalveolar (0.27%), mandibular + midface (0.07%) and midface (0.02%) fractures. Most of the mandibular fractures were found in the parasymphysis region. Simple fracture was the commonest in the mandible. Most of the mandibular and midface fractures in children were amenable to conservative therapies except a few which required surgical intervention.

  10. Three-dimensional gender differences in facial form of children in the North East of England.

    PubMed

    Bugaighis, Iman; Mattick, Clare R; Tiddeman, Bernard; Hobson, Ross

    2013-06-01

    The aim of the prospective cross-sectional morphometric study was to explore three-dimensional (3D) facial shape and form (shape plus size) variation within and between 8- and 12-year-old Caucasian children; 39 males were age-matched with 41 females. The 3D images were captured using a stereophotogrammetric system, and facial form was recorded by digitizing 39 anthropometric landmarks for each scan. The x, y, z coordinates of each landmark were extracted and used to calculate linear and angular measurements. 3D landmark asymmetry was quantified using Generalized Procrustes Analysis (GPA), and an average face was constructed for each gender. The average faces were superimposed and differences were visualized and quantified. Shape variations were explored using GPA and Principal Component Analysis. Analysis of covariance and Pearson correlation coefficients were used to explore gender differences and to determine any correlation between facial measurements and height or weight. Multivariate analysis was used to ascertain differences in facial measurements or 3D landmark asymmetry. There were no differences in height or weight between genders. There was a significant positive correlation between facial measurements and height and weight, and there were statistically significant differences in linear facial width measurements between genders. These differences were related to the larger size of males rather than differences in shape. There were no age- or gender-linked significant differences in 3D landmark asymmetry. Shape analysis confirmed similarities between males and females for facial shape and form in 8- to 12-year-old children. Any differences found were related to differences in facial size rather than shape.
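    For readers unfamiliar with Generalized Procrustes Analysis, the sketch below shows the core step: aligning one 3D landmark configuration to a reference by removing translation, scale and rotation. It is a generic illustration, not the study's code; the landmark counts and the iteration against a mean shape are assumptions.

        import numpy as np

        def procrustes_align(landmarks, reference):
            """Align a 3D landmark configuration to a reference configuration.

            landmarks, reference: arrays of shape (n_landmarks, 3).
            Translation, centroid size and rotation are removed, as in one
            step of a Generalized Procrustes Analysis.
            """
            X = landmarks - landmarks.mean(axis=0)
            Y = reference - reference.mean(axis=0)
            X = X / np.linalg.norm(X)           # unit centroid size
            Y = Y / np.linalg.norm(Y)
            U, _, Vt = np.linalg.svd(X.T @ Y)   # optimal rotation (Procrustes/Kabsch)
            d = np.sign(np.linalg.det(U @ Vt))  # avoid an improper reflection
            R = U @ np.diag([1.0, 1.0, d]) @ Vt
            return X @ R

        # Full GPA iterates this alignment against a running mean shape until the
        # mean stabilizes; PCA of the aligned coordinates then gives the shape
        # components explored in the study.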

  11. Operant conditioning of facial displays of pain.

    PubMed

    Kunz, Miriam; Rainville, Pierre; Lautenbacher, Stefan

    2011-06-01

    The operant model of chronic pain posits that nonverbal pain behavior, such as facial expressions, is sensitive to reinforcement, but experimental evidence supporting this assumption is sparse. The aim of the present study was to investigate in a healthy population a) whether facial pain behavior can indeed be operantly conditioned using a discriminative reinforcement schedule to increase and decrease facial pain behavior and b) to what extent these changes affect pain experience indexed by self-ratings. In the experimental group (n = 29), the participants were reinforced every time that they showed pain-indicative facial behavior (up-conditioning) or a neutral expression (down-conditioning) in response to painful heat stimulation. Once facial pain behavior was successfully up- or down-conditioned, respectively (which occurred in 72% of participants), facial pain displays and self-report ratings were assessed. In addition, a control group (n = 11) was used that was yoked to the reinforcement plans of the experimental group. During the conditioning phases, reinforcement led to significant changes in facial pain behavior in the majority of the experimental group (p < .001) but not in the yoked control group (p > .136). Fine-grained analyses of facial muscle movements revealed a similar picture. Furthermore, the decline in facial pain displays (as observed during down-conditioning) strongly predicted changes in pain ratings (R(2) = 0.329). These results suggest that a) facial pain displays are sensitive to reinforcement and b) that changes in facial pain displays can affect self-report ratings.

  12. In the face of emotions: event-related potentials in supraliminal and subliminal facial expression recognition.

    PubMed

    Balconi, Michela; Lucchiari, Claudio

    2005-02-01

    Is facial expression recognition marked by specific event-related potentials (ERPs) effects? Are conscious and unconscious elaborations of emotional facial stimuli qualitatively different processes? In Experiment 1, ERPs elicited by supraliminal stimuli were recorded when 21 participants viewed emotional facial expressions of four emotions and a neutral stimulus. Two ERP components (N2 and P3) were analyzed for their peak amplitude and latency measures. First, emotional face-specificity was observed for the negative deflection N2, whereas P3 was not affected by the content of the stimulus (emotional or neutral). A more posterior distribution of ERPs was found for N2. Moreover, a lateralization effect was revealed for negative (right lateralization) and positive (left lateralization) facial expressions. In Experiment 2 (20 participants), 1-ms subliminal stimulation was carried out. Unaware information processing was revealed to be quite similar to aware information processing for peak amplitude but not for latency. In fact, unconscious stimulation produced a more delayed peak variation than conscious stimulation.

  13. Facial Displays Are Tools for Social Influence.

    PubMed

    Crivelli, Carlos; Fridlund, Alan J

    2018-05-01

    Based on modern theories of signal evolution and animal communication, the behavioral ecology view of facial displays (BECV) reconceives our 'facial expressions of emotion' as social tools that serve as lead signs to contingent action in social negotiation. BECV offers an externalist, functionalist view of facial displays that is not bound to Western conceptions about either expressions or emotions. It easily accommodates recent findings of diversity in facial displays, their public context-dependency, and the curious but common occurrence of solitary facial behavior. Finally, BECV restores continuity of human facial behavior research with modern functional accounts of non-human communication, and provides a non-mentalistic account of facial displays well-suited to new developments in artificial intelligence and social robotics. Copyright © 2018 The Authors. Published by Elsevier Ltd.. All rights reserved.

  14. Does Facial Resemblance Enhance Cooperation?

    PubMed Central

    Giang, Trang; Bell, Raoul; Buchner, Axel

    2012-01-01

    Facial self-resemblance has been proposed to serve as a kinship cue that facilitates cooperation between kin. In the present study, facial resemblance was manipulated by morphing stimulus faces with the participants' own faces or control faces (resulting in self-resemblant or other-resemblant composite faces). A norming study showed that the perceived degree of kinship was higher for the participants and the self-resemblant composite faces than for actual first-degree relatives. Effects of facial self-resemblance on trust and cooperation were tested in a paradigm that has proven to be sensitive to facial trustworthiness, facial likability, and facial expression. First, participants played a cooperation game in which the composite faces were shown. Then, likability ratings were assessed. In a source memory test, participants were required to identify old and new faces, and were asked to remember whether the faces belonged to cooperators or cheaters in the cooperation game. Old-new recognition was enhanced for self-resemblant faces in comparison to other-resemblant faces. However, facial self-resemblance had no effects on the degree of cooperation in the cooperation game, on the emotional evaluation of the faces as reflected in the likability judgments, and on the expectation that a face belonged to a cooperator rather than to a cheater. Therefore, the present results are clearly inconsistent with the assumption of an evolved kin recognition module built into the human face recognition system. PMID:23094095

  15. Ocular Manifestations of Oblique Facial Clefts

    PubMed Central

    Ortube, Maria Carolina; Dipple, Katrina; Setoguchi, Yoshio; Kawamoto, Henry K.; Demer, Joseph L.

    2014-01-01

    Introduction: In the Tessier classification, craniofacial clefts are numbered from 0 to 14 and extend along constant axes through the eyebrows, eyelids, maxilla, nostrils, and the lips. We studied a patient with bilateral cleft 10 associated with ocular abnormalities. Method: Clinical report with orbital and cranial computed tomography. Results: After pregnancy complicated by oligohydramnios, digoxin, and lisinopril exposure, a boy was born with facial and ocular dysmorphism. Examination at age 26 months showed bilateral epibulbar dermoids, covering half the corneal surface, and unilateral morning glory anomaly of the optic nerve. Ductions of the right eye were normal, but the left eye had severely impaired ductions in all directions, left hypotropia, and esotropia. Under anesthesia, the left eye could not be rotated freely in any direction. Bilateral Tessier cleft number 10 was implicated by the presence of colobomata of the middle third of the upper eyelids and eyebrows. As the cleft continued into the hairline, there was marked anterior scalp alopecia. Computed x-ray tomography showed a left middle cranial fossa arachnoid cyst and calcification of the reflected tendon of the superior oblique muscle, trochlea, and underlying sclera, with downward and lateral globe displacement. Discussion: Tessier 10 clefts are very rare and usually associated with encephalocele. Bilateral 10 clefts have not been reported previously. In this case, there was coexisting unilateral morning glory anomaly and arachnoid cyst of the left middle cranial fossa but no encephalocele. Conclusions: Bilateral Tessier facial cleft 10 may be associated with alopecia, morning glory anomaly, epibulbar dermoids, arachnoid cyst, and restrictive strabismus. PMID:20856062

  16. Facial nerve paralysis secondary to occult malignant neoplasms.

    PubMed

    Boahene, Derek O; Olsen, Kerry D; Driscoll, Colin; Lewis, Jean E; McDonald, Thomas J

    2004-04-01

    This study reviewed patients with unilateral facial paralysis and normal clinical and imaging findings who underwent diagnostic facial nerve exploration. Study design and setting: Fifteen patients with facial paralysis and normal findings were seen in the Mayo Clinic Department of Otorhinolaryngology. Eleven patients were misdiagnosed as having Bell palsy or idiopathic paralysis. Progressive facial paralysis with sequential involvement of adjacent facial nerve branches occurred in all 15 patients. Seven patients had a history of regional skin squamous cell carcinoma, 13 patients had surgical exploration to rule out a neoplastic process, and 2 patients had negative exploration. At last follow-up, 5 patients were alive. Patients with facial paralysis and normal clinical and imaging findings should be considered for facial nerve exploration when the patient has a history of pain or regional skin cancer, involvement of other cranial nerves, and prolonged facial paralysis. Occult malignancy of the facial nerve may cause unilateral facial paralysis in patients with normal clinical and imaging findings.

  17. Pediatric facial injuries: It's management

    PubMed Central

    Singh, Geeta; Mohammad, Shadab; Pal, U. S.; Hariram; Malkunje, Laxman R.; Singh, Nimisha

    2011-01-01

    Background: Facial injuries in children always present a challenge in respect of their diagnosis and management. Since these children are of a growing age, every care should be taken so that the overall growth pattern of the facial skeleton in these children is not later jeopardized. Purpose: To assess the most feasible method for the management of facial injuries in children without hampering facial growth. Materials and Methods: Sixty child patients with facial trauma were selected randomly for this study. On the basis of examination and investigations, a suitable management approach involving rest and observation, open or closed reduction and immobilization, trans-osseous (TO) wiring, mini bone plate fixation, splinting and replantation, elevation and fixation of the zygoma, etc. was carried out. Results and Conclusion: In our study, falls were the predominant cause of facial injuries in children. There was a 1.09% incidence of facial injuries in children up to 16 years of age amongst the total patients. The age-wise distribution of fractures amongst groups (I, II and III) was found to be 26.67%, 51.67% and 21.67%, respectively. The male to female patient ratio was 3:1. The majority of cases of facial injuries were seen in Group II patients (6-11 years), i.e. 51.67%. Mandibular fracture was found to be the most common fracture (0.60%), followed by dentoalveolar (0.27%), mandibular + midface (0.07%) and midface (0.02%) fractures. Most of the mandibular fractures were found in the parasymphysis region. Simple fracture was the commonest in the mandible. Most of the mandibular and midface fractures in children were amenable to conservative therapies except a few which required surgical intervention. PMID:22639504

  18. Understanding recovery: changes in the relationships of the International Classification of Functioning (ICF) components over time.

    PubMed

    Davis, A M; Perruccio, A V; Ibrahim, S; Hogg-Johnson, S; Wong, R; Badley, E M

    2012-12-01

    The International Classification of Functioning, Disability and Health framework describes human functioning through body structure and function, activity and participation in the context of a person's social and physical environment. This work tested the temporal relationships of these components. Our hypotheses were: 1) there would be associations among physical impairment, activity limitations and participation restrictions within time; 2) prior status of a component would be associated with future status; 3) prior status of one component would influence status of a second component (e.g. prior activity limitations would be associated with current participation restrictions); and, 4) the magnitude of the within time relationships of the components would vary over time. Participants from Canada with primary hip or knee joint replacement (n = 931), an intervention with predictable improvement in pain and disability, completed standardized outcome measures pre-surgery and five times in the first year post-surgery. These included physical impairment (pain), activity limitations and participation restrictions. ICF component relationships were evaluated cross-sectionally and longitudinally using path analysis adjusting for age, sex, BMI, hip vs. knee, low back pain and mood. All component scores improved significantly over time. The path coefficients supported the hypotheses in that both within and across time, physical impairment was associated with activity limitation and activity limitation was associated with participation restriction; prior status and change in a component were associated with current status in another component; and, the magnitude of the path coefficients varied over time with stronger associations among components to three months post surgery than later in recovery with the exception of the association between impairment and participation restrictions which was of similar magnitude at all times. This work enhances understanding of the

  19. Effect of facial neuromuscular re-education on facial symmetry in patients with Bell's palsy: a randomized controlled trial.

    PubMed

    Manikandan, N

    2007-04-01

    To determine the effect of facial neuromuscular re-education over conventional therapeutic measures in improving facial symmetry in patients with Bell's palsy. Randomized controlled trial. Neurorehabilitation unit. Fifty-nine patients diagnosed with Bell's palsy were included in the study after they met the inclusion criteria. Patients were randomly divided into two groups: control (n = 30) and experimental (n = 29). Control group patients received conventional therapeutic measures while the facial neuromuscular re-education group patients received techniques that were tailored to each patient in three sessions per day for six days per week for a period of two weeks. All the patients were evaluated using a Facial Grading Scale before treatment and after three months. The Facial Grading Scale scores showed significant improvement in both control (mean 32 (range 9.7-54) to 54.5 (42.2-71.7)) and the experimental (33 (18-43.5) to 66 (54-76.7)) group. Facial Grading Scale change scores showed that experimental group (27.5 (20-43.77)) improved significantly more than the control group (16.5 (12.2-24.7)). Analysis of Facial Grading Scale subcomponents did not show statistical significance, except in the movement score (12 (8-16) to 24 (12-18)). Individualized facial neuromuscular re-education is more effective in improving facial symmetry in patients with Bell's palsy than conventional therapeutic measures.

  20. Automated and objective action coding of facial expressions in patients with acute facial palsy.

    PubMed

    Haase, Daniel; Minnigerode, Laura; Volk, Gerd Fabian; Denzler, Joachim; Guntinas-Lichius, Orlando

    2015-05-01

    Aim of the present observational single center study was to objectively assess facial function in patients with idiopathic facial palsy with a new computer-based system that automatically recognizes action units (AUs) defined by the Facial Action Coding System (FACS). Still photographs using posed facial expressions of 28 healthy subjects and of 299 patients with acute facial palsy were automatically analyzed for bilateral AU expression profiles. All palsies were graded with the House-Brackmann (HB) grading system and with the Stennert Index (SI). Changes of the AU profiles during follow-up were analyzed for 77 patients. The initial HB grading of all patients was 3.3 ± 1.2. SI at rest was 1.86 ± 1.3 and during motion 3.79 ± 4.3. Healthy subjects showed a significant AU asymmetry score of 21 ± 11 % and there was no significant difference to patients (p = 0.128). At initial examination of patients, the number of activated AUs was significantly lower on the paralyzed side than on the healthy side (p < 0.0001). The final examination for patients took place 4 ± 6 months post baseline. The number of activated AUs and the ratio between affected and healthy side increased significantly between baseline and final examination (both p < 0.0001). The asymmetry score decreased between baseline and final examination (p < 0.0001). The number of activated AUs on the healthy side did not change significantly (p = 0.779). Radical rethinking in facial grading is worthwhile: automated FACS delivers fast and objective global and regional data on facial motor function for use in clinical routine and clinical trials.

  1. Forensic Facial Reconstruction: The Final Frontier.

    PubMed

    Gupta, Sonia; Gupta, Vineeta; Vij, Hitesh; Vij, Ruchieka; Tyagi, Nutan

    2015-09-01

    Forensic facial reconstruction can be used to identify unknown human remains when other techniques fail. In this article, we review the different methods of facial reconstruction reported in the literature. There are several techniques of facial reconstruction, which vary from two-dimensional drawings to three-dimensional clay models. With the advancement in 3D technology, a rapid, efficient and cost-effective computerized 3D forensic facial reconstruction method has been developed, which has brought down the degree of error previously encountered. There are several methods of manual facial reconstruction, but the combination Manchester method has been reported to be the best and most accurate method for the positive recognition of an individual. Recognition allows the involved government agencies to make a list of suspected victims. This list can then be narrowed down and a positive identification may be given by the more conventional methods of forensic medicine. Facial reconstruction makes visual identification by the individual's family and associates easier and more definite.

  2. Values of a Patient and Observer Scar Assessment Scale to Evaluate the Facial Skin Graft Scar

    PubMed Central

    Chae, Jin Kyung; Kim, Eun Jung; Park, Kun

    2016-01-01

    Background: The patient and observer scar assessment scale (POSAS) recently emerged as a promising method, reflecting both the observer's and the patient's opinions in evaluating scars. This tool has been shown to be consistent and reliable in burn scar assessment, but it has not been tested in the setting of skin graft scars in skin cancer patients. Objective: To evaluate facial skin graft scars using the POSAS and to compare it with objective scar assessment tools. Methods: Twenty-three patients who were diagnosed with facial cutaneous malignancy and received a skin graft after Mohs micrographic surgery were recruited. Observer assessment was performed by three independent raters using the observer component of the POSAS and the Vancouver scar scale (VSS). Patient self-assessment was performed using the patient component of the POSAS. To quantify scar color and scar thickness more objectively, spectrophotometry and ultrasonography were applied. Results: Inter-observer reliability was substantial with both the VSS and the observer component of the POSAS (average measure intraclass correlation coefficient, 0.76 and 0.80, respectively). The observer component consistently showed significant correlations with patients' ratings for the parameters of the POSAS (all p-values<0.05). The correlation between subjective assessment using the POSAS and objective assessment using spectrophotometry and ultrasonography was weak. Conclusion: In facial skin graft scar assessment in skin cancer patients, the POSAS showed acceptable inter-observer reliability. This tool was more comprehensive and had a higher correlation with the patient's opinion. PMID:27746642

  3. Perceived functional impact of abnormal facial appearance.

    PubMed

    Rankin, Marlene; Borah, Gregory L

    2003-06-01

    Functional facial deformities are usually described as those that impair respiration, eating, hearing, or speech. Yet facial scars and cutaneous deformities have a significant negative effect on social functionality that has been poorly documented in the scientific literature. Insurance companies are declining payment for reconstructive surgical procedures for facial deformities caused by congenital disabilities and after cancer or trauma operations that do not affect mechanical facial activity. The purpose of this study was to establish a large, sample-based evaluation of the perceived social functioning, interpersonal characteristics, and employability indices for a range of facial appearances (normal and abnormal). Adult volunteer evaluators (n = 210) provided their subjective perceptions based on facial physical appearance, and an analysis of the consequences of facial deformity on parameters of preferential treatment was performed. A two-group comparative research design rated the differences among 10 examples of digitally altered facial photographs of actual patients among various age and ethnic groups with "normal" and "abnormal" congenital deformities or posttrauma scars. Photographs of adult patients with observable congenital and posttraumatic deformities (abnormal) were digitally retouched to eliminate the stigmatic defects (normal). The normal and abnormal photographs of identical patients were evaluated by the large sample study group on nine parameters of social functioning, such as honesty, employability, attractiveness, and effectiveness, using a visual analogue rating scale. Patients with abnormal facial characteristics were rated as significantly less honest (p = 0.007), less employable (p = 0.001), less trustworthy (p = 0.01), less optimistic (p = 0.001), less effective (p = 0.02), less capable (p = 0.002), less intelligent (p = 0.03), less popular (p = 0.001), and less attractive (p = 0.001) than were the same patients with normal facial appearance.

  4. Three-dimensional analysis of facial morphology.

    PubMed

    Liu, Yun; Kau, Chung How; Talbert, Leslie; Pan, Feng

    2014-09-01

    The objectives of this study were to evaluate sexual dimorphism for facial features within Chinese and African American populations and to compare the facial morphology by sex between these 2 populations. Three-dimensional facial images were acquired by using the portable 3dMDface System, which captured 189 subjects from 2 population groups of Chinese (n = 72) and African American (n = 117). Each population was categorized into male and female groups for evaluation. All subjects in the groups were aged between 18 and 30 years and had no apparent facial anomalies. A total of 23 anthropometric landmarks were identified on the three-dimensional faces of each subject. Twenty-one measurements in 4 regions, including 19 distances and 2 angles, were not only calculated but also compared within and between the Chinese and African American populations. The Student's t-test was used to analyze each data set obtained within each subgroup. Distinct facial differences were presented between the examined subgroups. When comparing the sex differences of facial morphology in the Chinese population, significant differences were noted in 71.43% of the parameters calculated, and the same proportion was found in the African American group. The facial morphologic differences between the Chinese and African American populations were evaluated by sex. The proportion of significant differences in the parameters calculated was 90.48% for females and 95.24% for males between the 2 populations. The African American population had a more convex profile and greater face width than those of the Chinese population. Sexual dimorphism for facial features was presented in both the Chinese and African American populations. In addition, there were significant differences in facial morphology between these 2 populations.
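    As a small illustration of how linear and angular measurements are derived from landmark coordinates of the kind described above, the sketch below computes a Euclidean distance and an angle from 3D points. The landmark names and coordinates are hypothetical, chosen only to show the calculation.

        import numpy as np

        def distance(p, q):
            """Euclidean distance between two 3D landmarks (same units as input)."""
            return float(np.linalg.norm(np.asarray(p, dtype=float) - np.asarray(q, dtype=float)))

        def angle_deg(a, vertex, b):
            """Angle in degrees at `vertex` formed by landmarks a and b."""
            u = np.asarray(a, dtype=float) - np.asarray(vertex, dtype=float)
            v = np.asarray(b, dtype=float) - np.asarray(vertex, dtype=float)
            cos_t = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
            return float(np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0))))

        # Hypothetical (x, y, z) coordinates in millimetres.
        exocanthion_r = (55.1, 92.3, 10.2)
        exocanthion_l = (-54.8, 92.0, 10.5)
        nasion = (0.2, 95.6, 28.4)
        print(distance(exocanthion_r, exocanthion_l))          # a facial width measure
        print(angle_deg(exocanthion_r, nasion, exocanthion_l)) # an angular measure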

  5. [Surgical treatment in otogenic facial nerve palsy].

    PubMed

    Feng, Guo-Dong; Gao, Zhi-Qiang; Zhai, Meng-Yao; Lü, Wei; Qi, Fang; Jiang, Hong; Zha, Yang; Shen, Peng

    2008-06-01

    To study the characteristics of facial nerve palsy due to four different otologic diseases, including chronic otitis media, Hunt syndrome, tumor, and physical or chemical factors, and to discuss the principles of the surgical management of otogenic facial nerve palsy. The clinical characteristics of 24 patients with otogenic facial nerve palsy caused by the four different otologic diseases were retrospectively analyzed; all cases underwent surgical management between October 1991 and March 2007. Facial nerve function was evaluated with the House-Brackmann (HB) grading system. The 24 patients (10 males and 14 females) were analyzed: 12 cases were due to cholesteatoma, 3 to chronic otitis media, 3 to Hunt syndrome, 2 resulted from acute otitis media, 2 were due to physical or chemical factors and 2 to tumor. All cases were treated with operations including facial nerve decompression, lesion resection with facial nerve decompression, or lesion resection without facial nerve decompression; one patient's facial nerve was resected because of the tumor. According to the HB grading system, grade I recovery was attained in 4 cases, grade II in 10 cases, grade III in 6 cases, grade IV in 2 cases, grade V in 2 cases and grade VI in 1 case. Complete removal of the lesions was the basic requirement in surgery for otogenic facial palsy; moreover, it was important to perform facial nerve decompression soon after lesion removal.

  6. Evolution of middle-late Pleistocene human cranio-facial form: a 3-D approach.

    PubMed

    Harvati, Katerina; Hublin, Jean-Jacques; Gunz, Philipp

    2010-11-01

    The classification and phylogenetic relationships of the middle Pleistocene human fossil record remains one of the most intractable problems in paleoanthropology. Several authors have noted broad resemblances between European and African fossils from this period, suggesting a single taxon ancestral to both modern humans and Neanderthals. Others point out 'incipient' Neanderthal features in the morphology of the European sample and have argued for their inclusion in the Neanderthal lineage exclusively, following a model of accretionary evolution of Neanderthals. We approach these questions using geometric morphometric methods which allow the intuitive visualization and quantification of features previously described qualitatively. We apply these techniques to evaluate proposed cranio-facial 'incipient' facial, vault, and basicranial traits in a middle-late Pleistocene European hominin sample when compared to a sample of the same time depth from Africa. Some of the features examined followed the predictions of the accretion model and relate the middle Pleistocene European material to the later Neanderthals. However, although our analysis showed a clear separation between Neanderthals and early/recent modern humans and morphological proximity between European specimens from OIS 7 to 3, it also shows that the European hominins from the first half of the middle Pleistocene still shared most of their cranio-facial architecture with their African contemporaries. Copyright © 2010 Elsevier Ltd. All rights reserved.

  7. Constriction of the buccal branch of the facial nerve produces unilateral craniofacial allodynia.

    PubMed

    Lewis, Susannah S; Grace, Peter M; Hutchinson, Mark R; Maier, Steven F; Watkins, Linda R

    2017-08-01

    Despite pain being a sensory experience, studies of spinal cord ventral root damage have demonstrated that motor neuron injury can induce neuropathic pain. Whether injury of cranial motor nerves can also produce nociceptive hypersensitivity has not been addressed. Herein, we demonstrate that chronic constriction injury (CCI) of the buccal branch of the facial nerve results in long-lasting, unilateral allodynia in the rat. An anterograde and retrograde tracer (3000MW tetramethylrhodamine-conjugated dextran) was not transported to the trigeminal ganglion when applied to the injury site, but was transported to the facial nucleus, indicating that this nerve branch is not composed of trigeminal sensory neurons. Finally, intracisterna magna injection of interleukin-1 (IL-1) receptor antagonist reversed allodynia, implicating the pro-inflammatory cytokine IL-1 in the maintenance of neuropathic pain induced by facial nerve CCI. These data extend the prior evidence that selective injury to motor axons can enhance pain to supraspinal circuits by demonstrating that injury of a facial nerve with predominantly motor axons is sufficient for neuropathic pain, and that the resultant pain has a neuroimmune component. Copyright © 2016 Elsevier Inc. All rights reserved.

  8. Effects of task demands on the early neural processing of fearful and happy facial expressions.

    PubMed

    Itier, Roxane J; Neath-Tavares, Karly N

    2017-05-15

    Task demands shape how we process environmental stimuli, but their impact on the early neural processing of facial expressions remains unclear. In a within-subject design, ERPs were recorded to the same fearful, happy and neutral facial expressions presented during gender discrimination, explicit emotion discrimination and oddball detection tasks, the most studied tasks in the field. Using an eye tracker, fixation on the nose of the face was enforced using a gaze-contingent presentation. Task demands modulated amplitudes from 200 to 350 ms at occipito-temporal sites spanning the EPN component. Amplitudes were more negative for fearful than neutral expressions starting on the N170 from 150 to 350 ms, with a temporo-occipital distribution, whereas no clear effect of happy expressions was seen. Task and emotion effects never interacted in any time window or for the ERP components analyzed (P1, N170, EPN). Thus, whether emotion is explicitly discriminated or irrelevant to the task at hand, the neural correlates of fearful and happy facial expressions seem immune to these task demands during the first 350 ms of visual processing. Copyright © 2017 Elsevier B.V. All rights reserved.

  9. Using robust principal component analysis to alleviate day-to-day variability in EEG based emotion classification.

    PubMed

    Ping-Keng Jao; Yuan-Pin Lin; Yi-Hsuan Yang; Tzyy-Ping Jung

    2015-08-01

    An emerging challenge for emotion classification using electroencephalography (EEG) is how to effectively alleviate day-to-day variability in raw data. This study employed robust principal component analysis (RPCA) to address the problem, under the hypothesis that background or emotion-irrelevant EEG perturbations produce much of the variability across days and tend to submerge emotion-related EEG dynamics. The empirical results of this study validated our hypothesis and demonstrated RPCA's feasibility through the analysis of a five-day dataset of 12 subjects. RPCA made it possible to separate the sparse emotion-relevant EEG dynamics from the accompanying background perturbations across days. Subsequently, leveraging the RPCA-purified EEG trials from more days steadily improved emotion-classification performance, which was not found when using the raw EEG features. Therefore, incorporating RPCA into existing emotion-aware machine-learning frameworks on a longitudinal dataset for each individual may shed light on the development of a robust affective brain-computer interface (ABCI) that can alleviate ecological inter-day variability.
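    For reference, robust PCA is commonly posed as principal component pursuit, splitting a data matrix M into a low-rank part L and a sparse part S. The sketch below is a generic ADMM-style solver, not the study's implementation; the parameter defaults and the trials-by-features layout of the EEG matrix are assumptions.

        import numpy as np

        def rpca(M, lam=None, mu=None, n_iter=200):
            """Principal component pursuit: decompose M into low-rank L plus sparse S."""
            m, n = M.shape
            lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
            mu = mu if mu is not None else 0.25 * m * n / np.abs(M).sum()
            shrink = lambda X, t: np.sign(X) * np.maximum(np.abs(X) - t, 0.0)
            S = np.zeros_like(M, dtype=float)
            Y = np.zeros_like(M, dtype=float)
            for _ in range(n_iter):
                # Singular-value thresholding for the low-rank component.
                U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
                L = U @ np.diag(shrink(sig, 1.0 / mu)) @ Vt
                # Elementwise soft thresholding for the sparse component.
                S = shrink(M - L + Y / mu, lam / mu)
                Y = Y + mu * (M - L - S)
            return L, S

        # With an EEG feature matrix (trials x features) pooled across days, the sparse
        # part would carry the emotion-related dynamics and the low-rank part the
        # shared background perturbations, in line with the study's hypothesis.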

  10. Rapid Facial Reactions to Emotional Facial Expressions in Typically Developing Children and Children with Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Beall, Paula M.; Moody, Eric J.; McIntosh, Daniel N.; Hepburn, Susan L.; Reed, Catherine L.

    2008-01-01

    Typical adults mimic facial expressions within 1000ms, but adults with autism spectrum disorder (ASD) do not. These rapid facial reactions (RFRs) are associated with the development of social-emotional abilities. Such interpersonal matching may be caused by motor mirroring or emotional responses. Using facial electromyography (EMG), this study…

  11. An Assessment of How Facial Mimicry Can Change Facial Morphology: Implications for Identification.

    PubMed

    Gibelli, Daniele; De Angelis, Danilo; Poppa, Pasquale; Sforza, Chiarella; Cattaneo, Cristina

    2017-03-01

    The assessment of facial mimicry is important in forensic anthropology; in addition, modern 3D image acquisition systems may help with the analysis of facial surfaces. This study aimed to present a novel method for comparing 3D profiles across different facial expressions. Ten male adults, aged between 30 and 40 years, underwent acquisitions by stereophotogrammetry (VECTRA-3D®) with different expressions (neutral, happy, sad, angry, surprised). Each expression acquisition was then superimposed on the neutral one according to nine landmarks, and the root mean square (RMS) value between the two expressions was calculated. The largest difference from the neutral standard was shown by the happy expression (RMS 4.11 mm), followed by the surprised (RMS 2.74 mm), sad (RMS 1.3 mm), and angry expressions (RMS 1.21 mm). This pilot study shows that 3D-3D superimposition may provide reliable results concerning facial alteration due to mimicry. © 2016 American Academy of Forensic Sciences.
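    The RMS value reported above is simply the root mean square of the point-to-point distances between the superimposed surfaces. A minimal sketch of that calculation, assuming the two scans have already been registered and sampled at corresponding points (the toy arrays are illustrative only):

        import numpy as np

        def rms_difference(neutral, expression):
            """RMS distance (same units as input) between corresponding 3D points
            of a neutral scan and an expression scan after superimposition."""
            d = np.linalg.norm(np.asarray(expression) - np.asarray(neutral), axis=1)
            return float(np.sqrt(np.mean(d ** 2)))

        # Toy example: a uniform 2 mm displacement of every point gives RMS = 2 mm.
        neutral = np.zeros((100, 3))
        happy = neutral + np.array([0.0, 2.0, 0.0])
        print(rms_difference(neutral, happy))   # 2.0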

  12. Acneiform facial eruptions

    PubMed Central

    Cheung, Melody J.; Taher, Muba; Lauzon, Gilles J.

    2005-01-01

    OBJECTIVE To summarize clinical recognition and current management strategies for four types of acneiform facial eruptions common in young women: acne vulgaris, rosacea, folliculitis, and perioral dermatitis. QUALITY OF EVIDENCE Many randomized controlled trials (level I evidence) have studied treatments for acne vulgaris over the years. Treatment recommendations for rosacea, folliculitis, and perioral dermatitis are based predominantly on comparison and open-label studies (level II evidence) as well as expert opinion and consensus statements (level III evidence). MAIN MESSAGE Young women with acneiform facial eruptions often present in primary care. Differentiating between morphologically similar conditions is often difficult. Accurate diagnosis is important because treatment approaches are different for each disease. CONCLUSION Careful visual assessment with an appreciation for subtle morphologic differences and associated clinical factors will help with diagnosis of these common acneiform facial eruptions and lead to appropriate management. PMID:15856972

  13. Linear measurements of the neurocranium are better indicators of population differences than those of the facial skeleton: comparative study of 1,961 skulls.

    PubMed

    Holló, Gábor; Szathmáry, László; Marcsik, Antónia; Barta, Zoltán

    2010-02-01

    The aim of this study is to identify potential differences between two cranial regions used to differentiate human populations. We compared the neurocranium and the facial skeleton using skulls from the Great Hungarian Plain. The skulls date to the 1st-11th centuries, a long span of time that encompasses seven archaeological periods. We analyzed six neurocranial and seven facial measurements. The number of variables was reduced using principal components analysis. Linear mixed-effects models were fitted to the principal components of each archaeological period, and the models were then compared using multiple pairwise tests. The neurocranium showed significant differences in seven cases between non-subsequent periods and in one case between two subsequent populations. For the facial skeleton, no significant results were found. Our results, which are also compared to previous craniofacial heritability estimates, suggest that the neurocranium is a more conservative region and that population differences can be detected better in the neurocranium than in the facial skeleton.

  14. Early and late temporo-spatial effects of contextual interference during perception of facial affect.

    PubMed

    Frühholz, Sascha; Fehr, Thorsten; Herrmann, Manfred

    2009-10-01

    Contextual features during recognition of facial affect are assumed to modulate the temporal course of emotional face processing. Here, we simultaneously presented colored backgrounds during valence categorizations of facial expressions. Subjects incidentally learned to perceive negative, neutral and positive expressions within a specific colored context. Subsequently, subjects made fast valence judgments while presented with the same face-color-combinations as in the first run (congruent trials) or with different face-color-combinations (incongruent trials). Incongruent trials induced significantly increased response latencies and significantly decreased performance accuracy. Contextual incongruent information during processing of neutral expressions modulated the P1 and the early posterior negativity (EPN) both localized in occipito-temporal areas. Contextual congruent information during emotional face perception revealed an emotion-related modulation of the P1 for positive expressions and of the N170 and the EPN for negative expressions. Highest amplitude of the N170 was found for negative expressions in a negatively associated context and the N170 amplitude varied with the amount of overall negative information. Incongruent trials with negative expressions elicited a parietal negativity which was localized to superior parietal cortex and which most likely represents a posterior manifestation of the N450 as an indicator of conflict processing. A sustained activation of the late LPP over parietal cortex for all incongruent trials might reflect enhanced engagement with facial expression during task conditions of contextual interference. In conclusion, whereas early components seem to be sensitive to the emotional valence of facial expression in specific contexts, late components seem to subserve interference resolution during emotional face processing.

  15. A classification of components of workplace disability management programs: results from a systematic review.

    PubMed

    Gensby, U; Labriola, M; Irvin, E; Amick, B C; Lund, T

    2014-06-01

    This paper presents results from a Campbell systematic review on the nature and effectiveness of workplace disability management (WPDM) programs promoting return to work (RTW), as implemented and practiced by employers. A classification of WPDM program components, based on the review results, is proposed. Twelve databases were searched from 1948 to July 2010 for peer-reviewed studies of WPDM programs provided by employers to re-entering workers with occupational or non-occupational illnesses or injuries. Screening of articles, risk of bias assessment and data extraction were conducted in pairs of reviewers. Studies were clustered around various dimensions of the design and context of programs. 16,932 records were identified by the initial search, and 599 papers were assessed for relevance. Thirteen studies met the inclusion criteria. Twelve peer-reviewed articles (two non-randomized studies and ten single-group experimental before-and-after studies), covering ten different WPDM programs, informed the synthesis of results. Narrative descriptions of the included program characteristics provided insight into program scope, components, procedures and the human resources involved. However, there were insufficient data on the characteristics of the samples, and the effect sizes were uncertain. A taxonomy classifying policies and practices around WPDM programs is proposed. There is insufficient evidence to draw conclusions on the effectiveness of employer-provided WPDM programs promoting RTW. It was not possible to determine whether specific program components or specific sets of components drive effectiveness. The proposed taxonomy may guide future WPDM program evaluation and clarify the setup of programs offered, helping to identify gaps in existing company strategies.

  16. Early adverse experiences and the neurobiology of facial emotion processing.

    PubMed

    Moulson, Margaret C; Fox, Nathan A; Zeanah, Charles H; Nelson, Charles A

    2009-01-01

    To examine the neurobiological consequences of early institutionalization, the authors recorded event-related potentials (ERPs) from 3 groups of Romanian children--currently institutionalized, previously institutionalized but randomly assigned to foster care, and family-reared children--in response to pictures of happy, angry, fearful, and sad facial expressions of emotion. At 3 assessments (baseline, 30 months, and 42 months), institutionalized children showed markedly smaller amplitudes and longer latencies for the occipital components P1, N170, and P400 compared to family-reared children. By 42 months, ERP amplitudes and latencies of children placed in foster care were intermediate between the institutionalized and family-reared children, suggesting that foster care may be partially effective in ameliorating adverse neural changes caused by institutionalization. The age at which children were placed into foster care was unrelated to their ERP outcomes at 42 months. Facial emotion processing was similar in all 3 groups of children; specifically, fearful faces elicited larger amplitude and longer latency responses than happy faces for the frontocentral components P250 and Nc. These results have important implications for understanding of the role that experience plays in shaping the developing brain.

  17. The effect of forced choice on facial emotion recognition: a comparison to open verbal classification of emotion labels

    PubMed Central

    Limbrecht-Ecklundt, Kerstin; Scheck, Andreas; Jerg-Bretzke, Lucia; Walter, Steffen; Hoffmann, Holger; Traue, Harald C.

    2013-01-01

    Objective: This article includes the examination of potential methodological problems of the application of a forced choice response format in facial emotion recognition. Methodology: 33 subjects were presented with validated facial stimuli. The task was to make a decision about which emotion was shown. In addition, the subjective certainty concerning the decision was recorded. Results: The detection rates are 68% for fear, 81% for sadness, 85% for anger, 87% for surprise, 88% for disgust, and 94% for happiness, and are thus well above the random probability. Conclusion: This study refutes the concern that the use of forced choice formats may not adequately reflect actual recognition performance. The use of standardized tests to examine emotion recognition ability leads to valid results and can be used in different contexts. For example, the images presented here appear suitable for diagnosing deficits in emotion recognition in the context of psychological disorders and for mapping treatment progress. PMID:23798981

  18. Mutual information-based facial expression recognition

    NASA Astrophysics Data System (ADS)

    Hazar, Mliki; Hammami, Mohamed; Hanêne, Ben-Abdallah

    2013-12-01

    This paper introduces a novel low-computation discriminative-region representation for the expression analysis task. The proposed approach relies on studies in psychology showing that most of the regions descriptive of, and responsible for, facial expression are located around a few face parts. The contribution of this work lies in a new approach that supports automatic facial expression recognition based on automatic region selection. The region selection step, which aims to select the descriptive regions responsible for facial expression, was performed using a Mutual Information (MI) technique. For facial feature extraction, we applied Local Binary Patterns (LBP) to the gradient image to encode salient micro-patterns of facial expressions. Experimental studies have shown that using discriminative regions provides better results than using the whole face while reducing the feature vector dimension.
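    A rough sketch of the feature side of such a pipeline is shown below: basic 8-neighbour LBP codes computed on the gradient-magnitude image, summarized as a histogram per candidate region. It is an illustration under assumptions (plain 3x3 LBP, simple numpy gradients), not the authors' exact descriptor; region selection by mutual information with the expression labels could then be done with an off-the-shelf estimator such as scikit-learn's mutual_info_classif.

        import numpy as np

        def lbp_codes(img):
            """Basic 8-neighbour Local Binary Pattern codes of a 2-D grayscale image."""
            h, w = img.shape
            codes = np.zeros((h - 2, w - 2), dtype=np.int32)
            center = img[1:-1, 1:-1]
            offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                       (1, 1), (1, 0), (1, -1), (0, -1)]
            for bit, (dy, dx) in enumerate(offsets):
                neighbour = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
                codes |= (neighbour >= center).astype(np.int32) << bit
            return codes

        def gradient_lbp_histogram(region, bins=256):
            """Normalized LBP histogram of the gradient-magnitude image of one region."""
            gy, gx = np.gradient(region.astype(float))
            grad = np.sqrt(gx ** 2 + gy ** 2)
            hist, _ = np.histogram(lbp_codes(grad), bins=bins, range=(0, bins))
            return hist / max(hist.sum(), 1)

        # One descriptor per candidate face region; the regions whose descriptors
        # share the most mutual information with the expression labels would be kept.
        region = np.random.rand(32, 32)
        print(gradient_lbp_histogram(region).shape)   # (256,)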

  19. Social Use of Facial Expressions in Hylobatids

    PubMed Central

    Scheider, Linda; Waller, Bridget M.; Oña, Leonardo; Burrows, Anne M.; Liebal, Katja

    2016-01-01

    Non-human primates use various communicative means in interactions with others. While primate gestures are commonly considered to be intentionally and flexibly used signals, facial expressions are often referred to as inflexible, automatic expressions of affective internal states. To explore whether and how non-human primates use facial expressions in specific communicative interactions, we studied five species of small apes (gibbons) by employing a newly established Facial Action Coding System for hylobatid species (GibbonFACS). We found that, despite individuals often being in close proximity to each other, in social (as opposed to non-social contexts) the duration of facial expressions was significantly longer when gibbons were facing another individual compared to non-facing situations. Social contexts included grooming, agonistic interactions and play, whereas non-social contexts included resting and self-grooming. Additionally, gibbons used facial expressions while facing another individual more often in social contexts than non-social contexts where facial expressions were produced regardless of the attentional state of the partner. Also, facial expressions were more likely ‘responded to’ by the partner’s facial expressions when facing another individual than non-facing. Taken together, our results indicate that gibbons use their facial expressions differentially depending on the social context and are able to use them in a directed way in communicative interactions with other conspecifics. PMID:26978660

  20. Facial Anthropometric Norms among Kosovo - Albanian Adults.

    PubMed

    Staka, Gloria; Asllani-Hoxha, Flurije; Bimbashi, Venera

    2017-09-01

    The development of an anthropometric craniofacial database is a necessary multidisciplinary proposal. The aim of this study was to establish facial anthropometric norms and to investigate sexual dimorphism in facial variables among Kosovo Albanian adults. The sample included 204 students of the Dental School, Faculty of Medicine, University of Pristina. Using direct anthropometry, a series of 8 standard facial measurements was taken on each subject with a digital caliper with an accuracy of 0.01 mm (Boss, Hamburg, Germany). The normative data and percentile rankings were calculated. Gender differences in facial variables were analyzed using the t-test for independent samples (p<0.05). The index of sexual dimorphism (ISD) and the percentage of sexual dimorphism were calculated for each facial measurement. Normative data for all facial anthropometric measurements were higher in males than in females, and the male average norms differed significantly from the female average norms (p<0.05). The highest index of sexual dimorphism (ISD) was found for the lower facial height, 1.120, for which the highest percentage of sexual dimorphism, 12.01%, was also found. The lowest ISD was found for intercanthal width, 1.022, accompanied by the lowest percentage of sexual dimorphism, 2.23%. The obtained results establish facial anthropometric norms among Kosovo Albanian adults. Sexual dimorphism was confirmed for each facial measurement.
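    The two dimorphism statistics quoted above appear to follow a simple ratio of male to female means; the sketch below shows that calculation under that assumption (ISD = male mean / female mean, percentage = (ISD - 1) x 100), with hypothetical measurement values for illustration.

        def sexual_dimorphism(male_mean, female_mean):
            """Index of sexual dimorphism (ISD) and percentage of dimorphism for one
            facial measurement, assuming ISD = male/female and pct = (ISD - 1) * 100."""
            isd = male_mean / female_mean
            return isd, (isd - 1.0) * 100.0

        # Hypothetical lower facial height means (mm), not values from the study.
        isd, pct = sexual_dimorphism(72.5, 64.7)
        print(round(isd, 3), round(pct, 2))   # ~1.121, ~12.06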

  1. Facial Specialty. Teacher Edition. Cosmetology Series.

    ERIC Educational Resources Information Center

    Oklahoma State Dept. of Vocational and Technical Education, Stillwater. Curriculum and Instructional Materials Center.

    This publication is one of a series of curriculum guides designed to direct and support instruction in vocational cosmetology programs in the State of Oklahoma. It contains seven units for the facial specialty: identifying enemies of the skin, using aromatherapy on the skin, giving facials without the aid of machines, giving facials with the aid…

  2. Influence of gravity upon some facial signs.

    PubMed

    Flament, F; Bazin, R; Piot, B

    2015-06-01

    Facial clinical signs and their integration are the basis of the perception that others have of us, notably the age they imagine we are. Measuring facial modifications in motion, before and after application of a skin-care regimen, is essential for describing efficacy in facial dynamics. Quantifying how the face changes with respect to gravity allows us to address how facial shape is 'controlled' during daily activities. Standardized photographs of the faces of 30 Caucasian female subjects of various ages (24-73 years) were successively taken in upright and supine positions within a short time interval. All these pictures were then reframed - so that any bias due to facial features was avoided when evaluating a single sign - for clinical rating of several facial signs by trained experts using published standardized photographic scales. For all subjects, the supine position increased facial width but not height, giving a fuller appearance to the face. More importantly, the supine position changed the severity of facial ageing features (e.g. wrinkles) compared to the upright position, and whether these features were attenuated or exacerbated depended on their facial location. The supine position mostly modifies signs of the lower half of the face, whereas those of the upper half appear unchanged or slightly accentuated. These changes appear much more marked in the older groups, where some deep labial folds almost vanish. These alterations decreased the perceived ages of the subjects by an average of 3.8 years. Although preliminary, this study suggests that a 90° rotation of the facial skin with respect to gravity induces rapid rearrangements in which changes in tensional forces within and across the face, motility of interstitial free water in the underlying skin tissue and/or alterations of facial Langer lines likely play a significant role. © 2015 Society of Cosmetic Scientists and the Société Fran

  3. [Idiopathic facial paralysis in children].

    PubMed

    Achour, I; Chakroun, A; Ayedi, S; Ben Rhaiem, Z; Mnejja, M; Charfeddine, I; Hammami, B; Ghorbel, A

    2015-05-01

    Idiopathic facial palsy is the most common cause of facial nerve palsy in children. Controversy exists regarding treatment options. The objectives of this study were to review the epidemiological and clinical characteristics as well as the outcome of idiopathic facial palsy in children to suggest appropriate treatment. A retrospective study was conducted on children with a diagnosis of idiopathic facial palsy from 2007 to 2012. A total of 37 cases (13 males, 24 females) with a mean age of 13.9 years were included in this analysis. The mean duration between onset of Bell's palsy and consultation was 3 days. Of these patients, 78.3% had moderately severe (grade IV) or severe paralysis (grade V on the House and Brackmann grading). Twenty-seven patients were treated in an outpatient context, three patients were hospitalized, and seven patients were treated as outpatients and subsequently hospitalized. All patients received corticosteroids. Eight of them also received antiviral treatment. The complete recovery rate was 94.6% (35/37). The duration of complete recovery was 7.4 weeks. Children with idiopathic facial palsy have a very good prognosis. The complete recovery rate exceeds 90%. However, controversy exists regarding treatment options. High-quality studies have been conducted on adult populations. Medical treatment based on corticosteroids alone or combined with antiviral treatment is certainly effective in improving facial function outcomes in adults. In children, the recommendation for prescription of steroids and antiviral drugs based on adult treatment appears to be justified. Randomized controlled trials in the pediatric population are recommended to define a strategy for management of idiopathic facial paralysis. Copyright © 2015 Elsevier Masson SAS. All rights reserved.

  4. Sutural growth restriction and modern human facial evolution: an experimental study in a pig model

    PubMed Central

    Holton, Nathan E; Franciscus, Robert G; Nieves, Mary Ann; Marshall, Steven D; Reimer, Steven B; Southard, Thomas E; Keller, John C; Maddux, Scott D

    2010-01-01

    Facial size reduction and facial retraction are key features that distinguish modern humans from archaic Homo. In order to more fully understand the emergence of modern human craniofacial form, it is necessary to understand the underlying evolutionary basis for these defining characteristics. Although it is well established that the cranial base exerts considerable influence on the evolutionary and ontogenetic development of facial form, less emphasis has been placed on developmental factors intrinsic to the facial skeleton proper. The present analysis was designed to assess anteroposterior facial reduction in a pig model and to examine the potential role that this dynamic has played in the evolution of modern human facial form. Ten female sibship cohorts, each consisting of three individuals, were allocated to one of three groups. In the experimental group (n = 10), microplates were affixed bilaterally across the zygomaticomaxillary and frontonasomaxillary sutures at 2 months of age. The sham group (n = 10) received only screw implantation and the controls (n = 10) underwent no surgery. Following 4 months of post-surgical growth, we assessed variation in facial form using linear measurements and principal components analysis of Procrustes scaled landmarks. There were no differences between the control and sham groups; however, the experimental group exhibited a highly significant reduction in facial projection and overall size. These changes were associated with significant differences in the infraorbital region of the experimental group including the presence of an infraorbital depression and an inferiorly and coronally oriented infraorbital plane in contrast to a flat, superiorly and sagittally oriented infraorbital plane in the control and sham groups. These altered configurations are markedly similar to important additional facial features that differentiate modern humans from archaic Homo, and suggest that facial length restriction via rigid plate fixation is a

  5. Reconstruction of facial nerve injuries in children.

    PubMed

    Fattah, Adel; Borschel, Gregory H; Zuker, Ron M

    2011-05-01

    Facial nerve trauma is uncommon in children, and many spontaneously recover some function; nonetheless, loss of facial nerve activity leads to functional impairment of the ocular and oral sphincters and the nasal orifice. In many cases, the impediment posed by facial asymmetry and reduced mimetic function more significantly affects the child's psychosocial interactions. As such, reconstruction of the facial nerve affords great benefits in quality of life. The therapeutic strategy is dependent on numerous factors, including the cause of facial nerve injury, the deficit, the prognosis for recovery, and the time elapsed since the injury. The options for treatment include a diverse range of surgical techniques including static lifts and slings, nerve repairs, nerve grafts and nerve transfers, regional muscle transfer, and microvascular free muscle transfer. We review our strategies for addressing facial nerve injuries in children.

  6. Component Structure of Individual Differences in True and False Recognition of Faces

    ERIC Educational Resources Information Center

    Bartlett, James C.; Shastri, Kalyan K.; Abdi, Herve; Neville-Smith, Marsha

    2009-01-01

    Principal-component analyses of 4 face-recognition studies uncovered 2 independent components. The first component was strongly related to false-alarm errors with new faces as well as to facial "conjunctions" that recombine features of previously studied faces. The second component was strongly related to hits as well as to the conjunction/new…

  7. Contemporary solutions for the treatment of facial nerve paralysis.

    PubMed

    Garcia, Ryan M; Hadlock, Tessa A; Klebuc, Michael J; Simpson, Roger L; Zenn, Michael R; Marcus, Jeffrey R

    2015-06-01

    After reviewing this article, the participant should be able to: 1. Understand the most modern indications and technique for neurotization, including masseter-to-facial nerve transfer (fifth-to-seventh cranial nerve transfer). 2. Contrast the advantages and limitations associated with contiguous muscle transfers and free-muscle transfers for facial reanimation. 3. Understand the indications for a two-stage and one-stage free gracilis muscle transfer for facial reanimation. 4. Apply nonsurgical adjuvant treatments for acute facial nerve paralysis. Facial expression is a complex neuromotor and psychomotor process that is disrupted in patients with facial paralysis, breaking the link between emotion and physical expression. Contemporary reconstructive options are being implemented in patients with facial paralysis. While static procedures provide facial symmetry at rest, true 'facial reanimation' requires restoration of facial movement. Contemporary treatment options include neurotization procedures (a new motor nerve is used to restore innervation to a viable muscle), contiguous regional muscle transfer (most commonly temporalis muscle transfer), microsurgical free muscle transfer, and nonsurgical adjuvants used to balance facial symmetry. Each approach has advantages and disadvantages along with ongoing controversies and should be individualized for each patient. Treatments for patients with facial paralysis continue to evolve in order to restore the complex psychomotor process of facial expression.

  8. Improving posttraumatic facial scars.

    PubMed

    Ardeshirpour, Farhad; Shaye, David A; Hilger, Peter A

    2013-10-01

    Posttraumatic soft-tissue injuries of the face are often the most lasting sequelae of facial trauma. The burden of posttraumatic scarring lies in both its physical deformity and its psychosocial ramifications. This review outlines a variety of techniques to improve facial scars and limit their lasting effects. Copyright © 2013 Elsevier Inc. All rights reserved.

  9. Reverse correlating love: highly passionate women idealize their partner's facial appearance.

    PubMed

    Gunaydin, Gul; DeLong, Jordan E

    2015-01-01

    A defining feature of passionate love is idealization--evaluating romantic partners in an overly favorable light. Although passionate love can be expected to color how favorably individuals represent their partner in their mind, little is known about how passionate love is linked with visual representations of the partner. Using reverse correlation techniques for the first time to study partner representations, the present study investigated whether women who are passionately in love represent their partner's facial appearance more favorably than individuals who are less passionately in love. In a within-participants design, heterosexual women completed two forced-choice classification tasks, one for their romantic partner and one for a male acquaintance, and a measure of passionate love. In each classification task, participants saw two faces superimposed with noise and selected the face that most resembled their partner (or an acquaintance). Classification images for the high-passion and low-passion groups were calculated by averaging across noise patterns selected as resembling the partner or the acquaintance and superimposing the averaged noise on an average male face. A separate group of women evaluated the classification images on attractiveness, trustworthiness, and competence. Results showed that women who feel high (vs. low) passionate love toward their partner tend to represent his face as more attractive and trustworthy, even when controlling for familiarity effects using the acquaintance representation. Using an innovative method to study partner representations, these findings extend our understanding of cognitive processes in romantic relationships.
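
    A minimal sketch of the reverse-correlation step described above - averaging the noise patterns of the chosen stimuli and superimposing the average on a base face - is given below; the array shapes, noise weighting, and random data are illustrative assumptions rather than the authors' exact pipeline.

      # Sketch: build a classification image by averaging the noise of selected trials
      # and superimposing it on an average base face. All parameters are illustrative.
      import numpy as np

      def classification_image(base_face, noise_patterns, chosen, noise_weight=0.5):
          """base_face: (H, W) array in [0, 1]; noise_patterns: (n_trials, H, W);
          chosen: boolean mask of the noise patterns selected as resembling the partner."""
          mean_noise = noise_patterns[chosen].mean(axis=0)
          span = mean_noise.max() - mean_noise.min() + 1e-12
          mean_noise = (mean_noise - mean_noise.min()) / span         # rescale to [0, 1]
          ci = (1 - noise_weight) * base_face + noise_weight * mean_noise
          return np.clip(ci, 0.0, 1.0)

      # Hypothetical usage with random data standing in for the experiment's stimuli.
      rng = np.random.default_rng(0)
      base = rng.random((256, 256))
      noise = rng.standard_normal((500, 256, 256))
      selected = rng.random(500) > 0.5
      ci_high_passion = classification_image(base, noise, selected)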

  10. Facial Animations: Future Research Directions & Challenges

    NASA Astrophysics Data System (ADS)

    Alkawaz, Mohammed Hazim; Mohamad, Dzulkifli; Rehman, Amjad; Basori, Ahmad Hoirul

    2014-06-01

    Computer facial animation is now used in a wide range of fields, bringing human and social dimensions to computer games, films and interactive multimedia. Authoring complex and subtle facial expressions remains challenging and fraught with problems. As a result, most facial animation is currently authored with general-purpose computer animation techniques, which often limits the quality and quantity of production. Growing computing power, a better understanding of the face, software sophistication and newly emerging face-centric methods are promising but still immature. This paper therefore surveys current and emerging work by facial animation experts in order to define and categorize the recent state of the field, its observed bottlenecks and its developing techniques. It further presents a real-time simulation model of human worry and howling, with a detailed discussion of the perception of astonishment, sorrow, annoyance and panic.

  11. Facial animation on an anatomy-based hierarchical face model

    NASA Astrophysics Data System (ADS)

    Zhang, Yu; Prakash, Edmond C.; Sung, Eric

    2003-04-01

    In this paper we propose a new hierarchical 3D facial model based on anatomical knowledge that provides high fidelity for realistic facial expression animation. Like a real human face, the facial model has a hierarchical biomechanical structure, incorporating a physically-based approximation to facial skin tissue, a set of anatomically-motivated facial muscle actuators and an underlying skull structure. The deformable skin model has a multi-layer structure to approximate different types of soft tissue. It takes into account the nonlinear stress-strain relationship of the skin and the fact that soft tissue is almost incompressible. Different types of muscle models have been developed to simulate the distribution of the muscle force on the skin due to muscle contraction. Thanks to the presence of the skull model, our facial model achieves both more accurate facial deformation and consideration of facial anatomy during the interactive definition of facial muscles. Under the muscular force, the deformation of the facial skin is evaluated using numerical integration of the governing dynamic equations. The dynamic facial animation algorithm runs at an interactive rate and generates flexible and realistic facial expressions.
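
    The abstract does not give the governing equations; as a hedged illustration of the general idea (numerically integrating a damped spring-mass approximation of skin under muscle forces), here is a minimal sketch using semi-implicit Euler. The stiffness, damping and mass values, and the single-layer topology, are illustrative simplifications of the multi-layer model described above.

      # Sketch: semi-implicit Euler integration of a damped mass-spring "skin" sheet
      # driven by an external muscle force. Parameters and topology are illustrative only.
      import numpy as np

      def step(pos, vel, rest, springs, muscle_force, k=50.0, c=0.5, m=0.01, dt=1e-3):
          """pos, vel: (n, 3) node positions/velocities; rest: spring rest lengths;
          springs: (s, 2) index pairs; muscle_force: (n, 3) external force per node."""
          force = muscle_force.copy()
          i, j = springs[:, 0], springs[:, 1]
          d = pos[j] - pos[i]
          length = np.linalg.norm(d, axis=1, keepdims=True) + 1e-12
          f = k * (length - rest[:, None]) * (d / length)      # Hooke's law along each spring
          np.add.at(force, i, f)
          np.add.at(force, j, -f)
          force -= c * vel                                      # viscous damping
          vel = vel + dt * force / m                            # semi-implicit Euler
          pos = pos + dt * vel
          return pos, vel

      # Hypothetical usage: a 2-node "spring" pulled by a constant muscle force.
      pos = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
      vel = np.zeros_like(pos)
      springs = np.array([[0, 1]])
      rest = np.array([1.0])
      pull = np.array([[0.0, 0.0, 0.0], [0.02, 0.0, 0.0]])
      for _ in range(100):
          pos, vel = step(pos, vel, rest, springs, pull)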

  12. The identification of unfolding facial expressions.

    PubMed

    Fiorentini, Chiara; Schmidt, Susanna; Viviani, Paolo

    2012-01-01

    We asked whether the identification of emotional facial expressions (FEs) involves the simultaneous perception of the facial configuration or the detection of emotion-specific diagnostic cues. We recorded at high speed (500 frames/s) the unfolding of the FE in five actors, each expressing six emotions (anger, surprise, happiness, disgust, fear, sadness). Recordings were coded every 10 frames (20 ms of real time) with the Facial Action Coding System (FACS, Ekman et al 2002, Salt Lake City, UT: Research Nexus eBook) to identify the facial actions contributing to each expression, and their intensity changes over time. Recordings were shown in slow motion (1/20 of recording speed) to one hundred observers in a forced-choice identification task. Participants were asked to identify the emotion during the presentation as soon as they felt confident to do so. Responses were recorded along with the associated response times (RTs). The RT probability density functions for both correct and incorrect responses were correlated with the facial activity during the presentation. There were systematic correlations between facial activities, response probabilities, and RT peaks, and significant differences in RT distributions for correct and incorrect answers. The results show that a reliable response is possible long before the full FE configuration is reached. This suggests that identification is reached by integrating in time individual diagnostic facial actions, and does not require perceiving the full apex configuration.

  13. Automatic classification of retinal three-dimensional optical coherence tomography images using principal component analysis network with composite kernels

    NASA Astrophysics Data System (ADS)

    Fang, Leyuan; Wang, Chong; Li, Shutao; Yan, Jun; Chen, Xiangdong; Rabbani, Hossein

    2017-11-01

    We present an automatic method, termed the principal component analysis network with composite kernel (PCANet-CK), for the classification of three-dimensional (3-D) retinal optical coherence tomography (OCT) images. Specifically, the proposed PCANet-CK method first utilizes the PCANet to automatically learn features from each B-scan of the 3-D retinal OCT images. Then, multiple kernels are separately applied to a set of very important features of the B-scans and these kernels are fused together, which can jointly exploit the correlations among features of the 3-D OCT images. Finally, the fused (composite) kernel is incorporated into an extreme learning machine for the OCT image classification. We tested our proposed algorithm on two real 3-D spectral domain OCT (SD-OCT) datasets (of normal subjects and subjects with macular edema and age-related macular degeneration), which demonstrated its effectiveness.
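
    As a hedged sketch of the kernel-fusion idea (not the authors' exact PCANet features or settings), the example below builds RBF kernels on two hypothetical feature groups, fuses them with a weighted sum, and plugs the composite kernel into a simple kernel-based extreme learning machine; the features, fusion weights and regularization constant are all illustrative assumptions.

      # Sketch: composite (fused) RBF kernels fed into a kernel extreme learning machine.
      # Feature groups, fusion weights and the regularization constant C are illustrative.
      import numpy as np

      def rbf(A, B, gamma):
          d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
          return np.exp(-gamma * d2)

      def composite_kernel(groups_a, groups_b, gammas, weights):
          """groups_*: lists of (n, d_k) feature blocks, one per feature source."""
          return sum(w * rbf(a, b, g) for a, b, g, w in zip(groups_a, groups_b, gammas, weights))

      def kelm_fit(K_train, y, n_classes, C=10.0):
          T = np.eye(n_classes)[y]                              # one-hot targets
          return np.linalg.solve(np.eye(len(K_train)) / C + K_train, T)

      def kelm_predict(K_test_train, beta):
          return np.argmax(K_test_train @ beta, axis=1)

      # Hypothetical usage with two random feature groups standing in for learned features.
      rng = np.random.default_rng(1)
      f1, f2 = rng.random((60, 32)), rng.random((60, 16))
      y = rng.integers(0, 2, 60)
      gammas, weights = (0.5, 1.0), (0.6, 0.4)
      K = composite_kernel([f1, f2], [f1, f2], gammas, weights)
      beta = kelm_fit(K, y, n_classes=2)
      pred = kelm_predict(K, beta)                              # training-set predictions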

  14. Reconstruction of facial nerve after radical parotidectomy.

    PubMed

    Renkonen, Suvi; Sayed, Farid; Keski-Säntti, Harri; Ylä-Kotola, Tuija; Bäck, Leif; Suominen, Sinikka; Kanerva, Mervi; Mäkitie, Antti A

    2015-01-01

    Most patients benefitted from immediate facial nerve grafting after radical parotidectomy. Even weak movement is valuable and can be augmented with secondary static operations. Post-operative radiotherapy does not seem to affect the final outcome of facial function. During radical parotidectomy, the sacrifice of the facial nerve results in severe disfigurement of the face. Data on the principles and outcome of facial nerve reconstruction and reanimation after radical parotidectomy are limited and no consensus exists on the best practice. This study retrospectively reviewed all patients having undergone radical parotidectomy and immediate facial nerve reconstruction with a free, non-vascularized nerve graft at the Helsinki University Hospital, Helsinki, Finland during the years 1990-2010. There were 31 patients (18 male; mean age = 54.7 years; range = 30-82) and 23 of them had a sufficient follow-up time. Facial nerve function recovery was seen in 18 (78%) of the 23 patients with a minimum of 2-year follow-up and adequate reporting available. Only slight facial movement was observed in five (22%), moderate or good movement in nine (39%), and excellent movement in four (17%) patients. Twenty-two (74%) patients received post-operative radiotherapy and 16 (70%) of them had some recovery of facial nerve function. Nineteen (61%) patients needed secondary static reanimation of the face.

  15. Modeling 3D Facial Shape from DNA

    PubMed Central

    Claes, Peter; Liberton, Denise K.; Daniels, Katleen; Rosana, Kerri Matthes; Quillen, Ellen E.; Pearson, Laurel N.; McEvoy, Brian; Bauchet, Marc; Zaidi, Arslan A.; Yao, Wei; Tang, Hua; Barsh, Gregory S.; Absher, Devin M.; Puts, David A.; Rocha, Jorge; Beleza, Sandra; Pereira, Rinaldo W.; Baynam, Gareth; Suetens, Paul; Vandermeulen, Dirk; Wagner, Jennifer K.; Boster, James S.; Shriver, Mark D.

    2014-01-01

    Human facial diversity is substantial, complex, and largely scientifically unexplained. We used spatially dense quasi-landmarks to measure face shape in population samples with mixed West African and European ancestry from three locations (United States, Brazil, and Cape Verde). Using bootstrapped response-based imputation modeling (BRIM), we uncover the relationships between facial variation and the effects of sex, genomic ancestry, and a subset of craniofacial candidate genes. The facial effects of these variables are summarized as response-based imputed predictor (RIP) variables, which are validated using self-reported sex, genomic ancestry, and observer-based facial ratings (femininity and proportional ancestry) and judgments (sex and population group). By jointly modeling sex, genomic ancestry, and genotype, the independent effects of particular alleles on facial features can be uncovered. Results on a set of 20 genes showing significant effects on facial features provide support for this approach as a novel means to identify genes affecting normal-range facial features and for approximating the appearance of a face from genetic markers. PMID:24651127

  16. Space-by-time manifold representation of dynamic facial expressions for emotion categorization

    PubMed Central

    Delis, Ioannis; Chen, Chaona; Jack, Rachael E.; Garrod, Oliver G. B.; Panzeri, Stefano; Schyns, Philippe G.

    2016-01-01

    Visual categorization is the brain computation that reduces high-dimensional information in the visual environment into a smaller set of meaningful categories. An important problem in visual neuroscience is to identify the visual information that the brain must represent and then use to categorize visual inputs. Here we introduce a new mathematical formalism—termed space-by-time manifold decomposition—that describes this information as a low-dimensional manifold separable in space and time. We use this decomposition to characterize the representations used by observers to categorize the six classic facial expressions of emotion (happy, surprise, fear, disgust, anger, and sad). By means of a Generative Face Grammar, we presented random dynamic facial movements on each experimental trial and used subjective human perception to identify the facial movements that correlate with each emotion category. When the random movements projected onto the categorization manifold region corresponding to one of the emotion categories, observers categorized the stimulus accordingly; otherwise they selected “other.” Using this information, we determined both the Action Unit and temporal components whose linear combinations lead to reliable categorization of each emotion. In a validation experiment, we confirmed the psychological validity of the resulting space-by-time manifold representation. Finally, we demonstrated the importance of temporal sequencing for accurate emotion categorization and identified the temporal dynamics of Action Unit components that cause typical confusions between specific emotions (e.g., fear and surprise) as well as those resolving these confusions. PMID:27305521
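
    The decomposition itself is described only at a high level above; as a hedged, much-simplified stand-in for the idea of a representation separable in space and time, the sketch below factorizes a time x Action-Unit activity matrix into a few temporal and spatial components with a truncated SVD. The authors' actual method is more elaborate; the dimensions and rank here are illustrative.

      # Sketch: a simplified separable space-by-time factorization of facial-movement data.
      # M (time x Action Units) ~ sum_k temporal_k outer spatial_k, via truncated SVD.
      # This is an illustrative stand-in, not the paper's exact decomposition.
      import numpy as np

      def space_by_time(M, n_components):
          U, s, Vt = np.linalg.svd(M, full_matrices=False)
          temporal = U[:, :n_components] * s[:n_components]     # time courses of components
          spatial = Vt[:n_components]                           # Action-Unit loadings
          return temporal, spatial

      rng = np.random.default_rng(2)
      M = rng.random((120, 42))                                 # 120 frames x 42 Action Units (hypothetical)
      temporal, spatial = space_by_time(M, n_components=5)
      reconstruction = temporal @ spatial                       # low-dimensional approximation of M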

  17. How to Avoid Facial Nerve Injury in Mastoidectomy?

    PubMed Central

    Ryu, Nam-Gyu

    2016-01-01

    Unexpected iatrogenic facial nerve paralysis not only causes facial disfigurement, but also has a devastating effect on the social, psychological, and economic aspects of the affected person's life. The aims of this study were to postulate where surgeons had mistakenly drilled, or where the nerve had been obscured by granulations or fibrous bands, and to look for a surgical approach focused on the safety of the facial nerve in mastoid surgery. We found 14 cases of iatrogenic facial nerve injury (IFNI) during mastoid surgery over 5 years in Korea. The medical records of all the patients were obtained, and the injured facial nerve segment was analyzed together with the mastoidectomy technique. Eleven patients underwent facial nerve exploration and three patients had conservative management. Of the iatrogenic facial nerve injuries, 43% (6 cases) occurred in the tympanic segment, 28.5% (4 cases) in the second genu combined with the tympanic segment, and 28.5% (4 cases) in the mastoid segment. Surgeons should try to identify the facial nerve using the available landmarks and keep in mind the anomalies of the facial nerve. With the use of intraoperative facial nerve monitoring, IFNI could be avoided in more cases. Many authors have emphasized the importance of intraoperative facial nerve monitoring, even in primary otologic surgery. However, the anatomical understanding of intratemporal landmarks, together with meticulous dissection, cannot be emphasized enough as a way to prevent IFNI. PMID:27626078

  18. Spatio-temporal Event Classification using Time-series Kernel based Structured Sparsity

    PubMed Central

    Jeni, László A.; Lőrincz, András; Szabó, Zoltán; Cohn, Jeffrey F.; Kanade, Takeo

    2016-01-01

    In many behavioral domains, such as facial expression and gesture, sparse structure is prevalent. This sparsity would be well suited for event detection but for one problem. Features typically are confounded by alignment error in space and time. As a consequence, high-dimensional representations such as SIFT and Gabor features have been favored despite their much greater computational cost and potential loss of information. We propose a Kernel Structured Sparsity (KSS) method that can handle both the temporal alignment problem and the structured sparse reconstruction within a common framework, and it can rely on simple features. We characterize spatio-temporal events as time-series of motion patterns and by utilizing time-series kernels we apply standard structured-sparse coding techniques to tackle this important problem. We evaluated the KSS method using both gesture and facial expression datasets that include spontaneous behavior and differ in degree of difficulty and type of ground truth coding. KSS outperformed both sparse and non-sparse methods that utilize complex image features and their temporal extensions. In the case of early facial event classification KSS had 10% higher accuracy as measured by F1 score over kernel SVM methods. PMID:27830214
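
    As a hedged illustration of the time-series-kernel idea (not the specific kernel used in this work), the sketch below turns a dynamic-time-warping distance between motion-pattern time series into a similarity via exp(-DTW/gamma); note that this construction is a common heuristic and is not guaranteed to be positive semidefinite.

      # Sketch: a DTW-based time-series similarity usable as a (heuristic) kernel between
      # motion-pattern sequences. Gamma and the toy sequences are illustrative assumptions.
      import numpy as np

      def dtw(a, b):
          """Classic O(len(a)*len(b)) dynamic time warping distance between 1-D sequences."""
          n, m = len(a), len(b)
          D = np.full((n + 1, m + 1), np.inf)
          D[0, 0] = 0.0
          for i in range(1, n + 1):
              for j in range(1, m + 1):
                  cost = abs(a[i - 1] - b[j - 1])
                  D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
          return D[n, m]

      def ts_kernel_matrix(sequences, gamma=1.0):
          n = len(sequences)
          K = np.zeros((n, n))
          for i in range(n):
              for j in range(i, n):
                  K[i, j] = K[j, i] = np.exp(-dtw(sequences[i], sequences[j]) / gamma)
          return K

      # Hypothetical usage: three short motion-intensity traces of different lengths.
      seqs = [np.sin(np.linspace(0, 3, 40)), np.sin(np.linspace(0, 3, 55)), np.cos(np.linspace(0, 3, 40))]
      K = ts_kernel_matrix(seqs, gamma=5.0)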

  19. Preservation of Facial Nerve Function Repaired by Using Fibrin Glue-Coated Collagen Fleece for a Totally Transected Facial Nerve during Vestibular Schwannoma Surgery

    PubMed Central

    Choi, Kyung-Sik; Kim, Min-Su; Jang, Sung-Ho

    2014-01-01

    Recently, increasing rates of facial nerve preservation after vestibular schwannoma (VS) surgery have been achieved. However, the management of a partially or completely damaged facial nerve remains an important issue. The authors report a patient who had a good recovery after facial nerve reconstruction using fibrin glue-coated collagen fleece for a totally transected facial nerve during VS surgery. We verified the anatomical preservation and functional outcome of the facial nerve with postoperative diffusion tensor (DT) imaging facial nerve tractography, electroneurography (ENoG) and the House-Brackmann (HB) grade. DT imaging tractography on the 3rd postoperative day revealed preservation of the facial nerve, and the facial nerve degeneration ratio was 94.1% on ENoG at the 7th postoperative day. At the 3-month and 1-year postoperative follow-up examinations with DT imaging facial nerve tractography and ENoG, good facial nerve function was observed. PMID:25024825

  20. Health professionals identify components of the International Classification of Functioning, Disability and Health (ICF) in questionnaires for the upper limb

    PubMed Central

    Philbois, Stella V.; Martins, Jaqueline; Souza, Cesário S.; Sampaio, Rosana F.; Oliveira, Anamaria S.

    2016-01-01

    BACKGROUND: Several Brazilian studies have addressed the International Classification of Functioning, Disability and Health (ICF), but few have analyzed health professionals' knowledge of the ICF. OBJECTIVE: To verify whether the classification of the items in the Brazilian-Portuguese versions of The Shoulder Pain and Disability Index (SPADI) and The Disabilities of the Arm, Shoulder and Hand (DASH) questionnaires, obtained from health professionals who worked with patients having upper limb injuries, could be related to ICF components as defined by other studies. METHOD: There were 4 participants for the group "professionals with high familiarity with the ICF (PHF)" and 19 for the group of "professionals with some or no familiarity with the ICF (PSNF)". The participants judged whether the items on the two questionnaires belonged to the ICF body function, body structure or activity-participation component, and marked a confidence level for each trial using a numerical scale ranging from zero to 10. The items were classified by the discriminant content validity method using the Student's t-test and the Hochberg correction. The ratings were compared to the literature by the percentage of agreement and the Kappa coefficient. RESULTS: The percentage of agreement of the ratings from the PSNF and PHF groups with the literature was equal to or greater than 77%. For the DASH, the agreement of the PSNF and PHF groups with the literature was, respectively, moderate (Kappa=0.46 to 0.48) and substantial (Kappa=0.62 to 0.70). CONCLUSIONS: Health professionals were able to relate most items on the 2 questionnaires to the three components of the ICF, demonstrating some ease of understanding the ICF components. However, the relation of the concept of pain to the body function component is not clear for professionals and deserves a more attentive approach. PMID:26786076
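
    For readers unfamiliar with the agreement statistics mentioned above, the short sketch below computes percentage agreement and Cohen's kappa for two hypothetical item ratings using scikit-learn; the labels are invented for illustration and are not the study's data.

      # Sketch: percentage agreement and Cohen's kappa between two hypothetical item ratings
      # ("bf" = body function, "ap" = activity-participation). Data are invented.
      from sklearn.metrics import cohen_kappa_score

      group_rating = ["bf", "bf", "ap", "ap", "ap", "bf", "ap", "ap", "bf", "ap"]
      literature   = ["bf", "ap", "ap", "ap", "ap", "bf", "ap", "bf", "bf", "ap"]

      agreement = sum(a == b for a, b in zip(group_rating, literature)) / len(literature)
      kappa = cohen_kappa_score(group_rating, literature)
      print(f"agreement = {agreement:.0%}, kappa = {kappa:.2f}")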

  1. Measurement of facial movements with Photoshop software during treatment of facial nerve palsy.

    PubMed

    Pourmomeny, Abbas Ali; Zadmehr, Hassan; Hossaini, Mohsen

    2011-10-01

    Evaluating the function of the facial nerve is essential in order to determine the influence of various treatment methods. The aim of this study was to evaluate and assess the agreement of a Photoshop-based scaling system versus the facial grading system (FGS). In this semi-experimental study, thirty subjects with facial nerve paralysis were recruited. The evaluation of all patients before and after the treatment was performed by FGS and Photoshop measurements. The mean values of FGS before and after the treatment were 35 ± 25 and 67 ± 24, respectively (p < 0.001). In the Photoshop assessment, mean changes of facial expression on the impaired side relative to the normal side in the rest position and in three main movements of the face were 3.4 ± 0.55 and 4.04 ± 0.49 millimeters before and after the treatment, respectively (p < 0.001). Spearman's correlation coefficient between the values from the two methods was 0.66 (p < 0.001). Evaluating facial nerve palsy using Photoshop was more objective than using the FGS. Therefore, it may be recommended to use this method instead.
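
    A minimal sketch of the agreement statistic reported above (Spearman's rank correlation between the two assessment methods) is shown below using SciPy, with invented paired values standing in for the FGS and Photoshop measurements.

      # Sketch: Spearman rank correlation between two assessment methods (invented data).
      import numpy as np
      from scipy.stats import spearmanr

      rng = np.random.default_rng(6)
      fgs_scores = rng.uniform(0, 100, 30)                       # hypothetical FGS values
      photoshop_mm = 0.04 * fgs_scores + rng.normal(0, 0.5, 30)  # hypothetical displacement (mm)

      rho, p_value = spearmanr(fgs_scores, photoshop_mm)
      print(f"Spearman rho = {rho:.2f}, p = {p_value:.3g}")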

  2. The superficial temporal fat pad and its ramifications for temporalis muscle construction in facial approximation.

    PubMed

    Stephan, Carl N; Devine, Matthew

    2009-10-30

    The construction of the facial muscles (particularly those of mastication) is generally thought to enhance the accuracy of facial approximation methods because they increase attention paid to face anatomy. However, the lack of consideration for non-muscular structures of the face when using these "anatomical" methods ironically forces one of the two large masticatory muscles to be exaggerated beyond reality. To demonstrate and resolve this issue the temporal region of nineteen caucasoid human cadavers (10 females, 9 males; mean age=84 years, s=9 years, range=58-97 years) were investigated. Soft tissue depths were measured at regular intervals across the temporal fossa in 10 cadavers, and the thickness of the muscle and fat components quantified in nine other cadavers. The measurements indicated that the temporalis muscle generally accounts for <50% of the total soft tissue depth, and does not fill the entirety of the fossa (as generally known in the anatomical literature, but not as followed in facial approximation practice). In addition, a soft tissue bulge was consistently observed in the anteroinferior portion of the temporal fossa (as also evident in younger individuals), and during dissection, this bulge was found to closely correspond to the superficial temporal fat pad (STFP). Thus, the facial surface does not follow a simple undulating curve of the temporalis muscle as currently undertaken in facial approximation methods. New metric-based facial approximation guidelines are presented to facilitate accurate construction of the STFP and the temporalis muscle for future facial approximation casework. This study warrants further investigations of the temporalis muscle and the STFP in younger age groups and demonstrates that untested facial approximation guidelines, including those propounded to be anatomical, should be cautiously regarded.

  3. Face Processing in Children with Autism Spectrum Disorder: Independent or Interactive Processing of Facial Identity and Facial Expression?

    ERIC Educational Resources Information Center

    Krebs, Julia F.; Biswas, Ajanta; Pascalis, Olivier; Kamp-Becker, Inge; Remschmidt, Helmuth; Schwarzer, Gudrun

    2011-01-01

    The current study investigated if deficits in processing emotional expression affect facial identity processing and vice versa in children with autism spectrum disorder. Children with autism and IQ and age matched typically developing children classified faces either by emotional expression, thereby ignoring facial identity or by facial identity…

  4. Fractional CO2 laser resurfacing of photoaged facial and non-facial skin: histologic and clinical results and side effects.

    PubMed

    Sasaki, Gordon H; Travis, Heather M; Tucker, Barbara

    2009-12-01

    CO2 fractional ablation offers the potential for facial and non-facial skin resurfacing with minimal downtime and rapid recovery. The purpose of this study was (i) to document the average depths and density of adnexal structures in non-lasered facial and non-facial body skin; (ii) to determine injury in ex vivo human thigh skin with varying fractional laser modes; and (iii) to evaluate the clinical safety and efficacy of treatments. Histologies were obtained from non-lasered facial and non-facial skin from 121 patients and from 14 samples of excised lasered thigh skin. Seventy-one patients were evaluated after varying energy (mJ) and density settings by superficial ablation, deeper penetration, and combined treatment. Skin thickness and adnexal density in non-lasered skin exhibited variable ranges: epidermis (47-105 μm); papillary dermis (61-105 μm); reticular dermis (983-1986 μm); hair follicles (2-14/HPF); sebaceous glands (2-23/HPF); sweat glands (2-7/HPF). Histological studies of samples from human thigh skin demonstrated that increased fluences in the superficial, deep and combined modes resulted in predictably deeper levels of ablation and thermal injury. An increase in density settings results in total ablation of the epidermis. Clinical improvement of rhytids and pigmentation in facial and non-facial skin was proportional to increasing energy and density settings. Patient assessments and clinical gradings of outcomes (Wilcoxon's test) correlated with more aggressive settings. Prior knowledge of normal skin depths and adnexal densities, as well as ex vivo skin laser-injury profiles at varying fluences and densities, improves the safety and efficiency of fractional CO2 for photorejuvenation of facial and non-facial skin.

  5. Facial nerve paralysis in children

    PubMed Central

    Ciorba, Andrea; Corazzi, Virginia; Conz, Veronica; Bianchini, Chiara; Aimoni, Claudia

    2015-01-01

    Facial nerve palsy is a condition with several implications, particularly when occurring in childhood. It represents a serious clinical problem, as it raises significant concerns for doctors regarding its etiology, treatment options and outcome, as well as for the young patients and their parents, because of the functional and aesthetic outcomes. There are several described causes of facial nerve paralysis in children, as it can be congenital (due to delivery traumas and genetic or malformative diseases) or acquired (due to infective, inflammatory, neoplastic, traumatic or iatrogenic causes). Nonetheless, in approximately 40%-75% of the cases, the cause of unilateral facial paralysis still remains idiopathic. A careful diagnostic workup and differential diagnosis are particularly recommended in cases of pediatric facial nerve palsy, in order to establish the most appropriate treatment, as the therapeutic approach differs in relation to the etiology. PMID:26677445

  6. Enhanced Facial Symmetry Assessment in Orthodontists

    PubMed Central

    Jackson, Tate H.; Clark, Kait; Mitroff, Stephen R.

    2013-01-01

    Assessing facial symmetry is an evolutionarily important process, which suggests that individual differences in this ability should exist. As existing data are inconclusive, the current study explored whether a group trained in facial symmetry assessment, orthodontists, possessed enhanced abilities. Symmetry assessment was measured using face and non-face stimuli among orthodontic residents and two control groups: university participants with no symmetry training and airport security luggage screeners, a group previously shown to possess expert visual search skills unrelated to facial symmetry. Orthodontic residents were more accurate at assessing symmetry in both upright and inverted faces compared to both control groups, but not for non-face stimuli. These differences are not likely due to motivational biases or a speed-accuracy tradeoff—orthodontic residents were slower than the university participants but not the security screeners. Understanding such individual differences in facial symmetry assessment may inform the perception of facial attractiveness. PMID:24319342

  7. Association of Frontal and Lateral Facial Attractiveness.

    PubMed

    Gu, Jeffrey T; Avilla, David; Devcic, Zlatko; Karimi, Koohyar; Wong, Brian J F

    2018-01-01

    Despite the large number of studies focused on defining frontal or lateral facial attractiveness, no reports have examined whether a significant association between frontal and lateral facial attractiveness exists. To examine the association between frontal and lateral facial attractiveness and to identify anatomical features that may influence discordance between frontal and lateral facial beauty. Paired frontal and lateral facial synthetic images of 240 white women (age range, 18-25 years) were evaluated from September 30, 2004, to September 29, 2008, using an internet-based focus group (n = 600) on an attractiveness Likert scale of 1 to 10, with 1 being least attractive and 10 being most attractive. Data analysis was performed from December 6, 2016, to March 30, 2017. The association between frontal and lateral attractiveness scores was determined using linear regression. Outliers were defined as data outside the 95% individual prediction interval. To identify features that contribute to score discordance between frontal and lateral attractiveness scores, each of these image pairs was scrutinized by an evaluator panel for facial features that were present in the frontal or lateral projections and absent in the other respective facial projections. Attractiveness scores obtained from internet-based focus groups. For the 240 white women studied (mean [SD] age, 21.4 [2.2] years), attractiveness scores ranged from 3.4 to 9.5 for frontal images and 3.3 to 9.4 for lateral images. The mean (SD) frontal attractiveness score was 6.9 (1.4), whereas the mean (SD) lateral attractiveness score was 6.4 (1.3). Simple linear regression of frontal and lateral attractiveness scores resulted in a coefficient of determination of r2 = 0.749. Eight outlier pairs were identified and analyzed by panel evaluation. Panel evaluation revealed no clinically applicable association between frontal and lateral images among outliers; however, contributory facial features were suggested
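
    A hedged sketch of the outlier definition used above (points falling outside the 95% individual prediction interval of a simple linear regression) is given below using statsmodels; the score arrays are invented stand-ins for the frontal and lateral ratings.

      # Sketch: flag frontal/lateral score pairs falling outside the 95% prediction interval
      # of a simple linear regression. Scores below are invented, not the study's data.
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(3)
      frontal = rng.uniform(3.3, 9.5, 240)
      lateral = 0.8 * frontal + 1.0 + rng.normal(0, 0.6, 240)

      X = sm.add_constant(frontal)
      fit = sm.OLS(lateral, X).fit()
      print(f"r^2 = {fit.rsquared:.3f}")

      pred = fit.get_prediction(X).summary_frame(alpha=0.05)    # individual (obs) 95% interval
      outliers = (lateral < pred["obs_ci_lower"]) | (lateral > pred["obs_ci_upper"])
      print(f"{int(outliers.sum())} outlier pair(s) flagged")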

  8. Composite Artistry Meets Facial Recognition Technology: Exploring the Use of Facial Recognition Technology to Identify Composite Images

    DTIC Science & Technology

    2011-09-01

    be submitted into a facial recognition program for comparison with millions of possible matches, offering abundant opportunities to identify the...to leverage the robust number of comparative opportunities associated with facial recognition programs. This research investigates the efficacy of...combining composite forensic artistry with facial recognition technology to create a viable investigative tool to identify suspects, as well as better

  9. Facial recognition in education system

    NASA Astrophysics Data System (ADS)

    Krithika, L. B.; Venkatesh, K.; Rathore, S.; Kumar, M. Harish

    2017-11-01

    Human beings make extensive use of emotions for conveying messages and resolving them. Emotion detection and face recognition can provide an interface between individuals and technologies, and face recognition is among the most successful applications of recognition analysis. Many different techniques have been used to recognize facial expressions and to detect emotion under varying poses. In this paper, we present an efficient method to recognize facial expressions by tracking face points and the distances between them. The method can automatically identify a person's face movements and facial expression in an image, capturing different aspects of emotion and facial expression.

  10. Objectifying Facial Expressivity Assessment of Parkinson's Patients: Preliminary Study

    PubMed Central

    Patsis, Georgios; Jiang, Dongmei; Sahli, Hichem; Kerckhofs, Eric; Vandekerckhove, Marie

    2014-01-01

    Patients with Parkinson's disease (PD) can exhibit a reduction of spontaneous facial expression, designated as “facial masking,” a symptom in which facial muscles become rigid. To improve clinical assessment of facial expressivity of PD, this work attempts to quantify the dynamic facial expressivity (facial activity) of PD by automatically recognizing facial action units (AUs) and estimating their intensity. Spontaneous facial expressivity was assessed by comparing 7 PD patients with 8 control participants. To voluntarily produce spontaneous facial expressions that resemble those typically triggered by emotions, six emotions (amusement, sadness, anger, disgust, surprise, and fear) were elicited using movie clips. During the movie clips, physiological signals (facial electromyography (EMG) and electrocardiogram (ECG)) and frontal face video of the participants were recorded. The participants were asked to report on their emotional states throughout the experiment. We first examined the effectiveness of the emotion manipulation by evaluating the participant's self-reports. Disgust-induced emotions were significantly higher than the other emotions. Thus we focused on the analysis of the recorded data during watching disgust movie clips. The proposed facial expressivity assessment approach captured differences in facial expressivity between PD patients and controls. Also differences between PD patients with different progression of Parkinson's disease have been observed. PMID:25478003

  11. Objectifying facial expressivity assessment of Parkinson's patients: preliminary study.

    PubMed

    Wu, Peng; Gonzalez, Isabel; Patsis, Georgios; Jiang, Dongmei; Sahli, Hichem; Kerckhofs, Eric; Vandekerckhove, Marie

    2014-01-01

    Patients with Parkinson's disease (PD) can exhibit a reduction of spontaneous facial expression, designated as "facial masking," a symptom in which facial muscles become rigid. To improve clinical assessment of facial expressivity of PD, this work attempts to quantify the dynamic facial expressivity (facial activity) of PD by automatically recognizing facial action units (AUs) and estimating their intensity. Spontaneous facial expressivity was assessed by comparing 7 PD patients with 8 control participants. To voluntarily produce spontaneous facial expressions that resemble those typically triggered by emotions, six emotions (amusement, sadness, anger, disgust, surprise, and fear) were elicited using movie clips. During the movie clips, physiological signals (facial electromyography (EMG) and electrocardiogram (ECG)) and frontal face video of the participants were recorded. The participants were asked to report on their emotional states throughout the experiment. We first examined the effectiveness of the emotion manipulation by evaluating the participant's self-reports. Disgust-induced emotions were significantly higher than the other emotions. Thus we focused on the analysis of the recorded data during watching disgust movie clips. The proposed facial expressivity assessment approach captured differences in facial expressivity between PD patients and controls. Also differences between PD patients with different progression of Parkinson's disease have been observed.

  12. Compensation procedures for facial asymmetries.

    PubMed

    Kozol, F

    1995-01-01

    Why would a patient complain of "fuzzy and uncomfortable" vision with a variety of glasses? Perhaps because the practitioner has failed to take facial asymmetry into account. Methods of measuring facial asymmetry and optically correcting for it are discussed.

  13. Facial gunshot wound debridement: debridement of facial soft tissue gunshot wounds.

    PubMed

    Shvyrkov, Michael B

    2013-01-01

    Over the period 1981-1985 the author treated 1486 patients with facial gunshot wounds sustained in combat in Afghanistan. In the last quarter of the 20th century, more powerful and destructive weapons, such as M-16 rifles, AK-47 and Kalashnikov submachine guns, became available and a new approach to gunshot wound debridement is required. Modern surgeons have little experience in the treatment of such wounds because of rare contact with similar pathology. This article is intended to explore modern wound debridement. The management of 502 isolated soft tissue injuries is presented. Existing principles recommend the sparing of damaged tissues. The author's experience was that tissue sparing led to a high rate of complications (47.6%). Radical primary surgical debridement (RPSD) of wounds was then adopted, with radical excision of necrotic non-viable wound margins containing infection to the point of active capillary bleeding and immediate primary wound closure. After radical debridement, wound infection and breakdown decreased by a factor of 10. Plastic operations with local and remote soft tissue were performed on 14.7% of the wounded. Only 0.7% of patients required discharge from the army due to facial muscle paralysis and/or facial skin impregnation with particles of gunpowder from mine explosions. Keywords: gunshot face wound; modern debridement. Copyright © 2012 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.

  14. Gender differences in memory processing of female facial attractiveness: evidence from event-related potentials.

    PubMed

    Zhang, Yan; Wei, Bin; Zhao, Peiqiong; Zheng, Minxiao; Zhang, Lili

    2016-06-01

    High rates of agreement in the judgment of facial attractiveness suggest universal principles of beauty. This study investigated gender differences in recognition memory processing of female facial attractiveness. Thirty-four Chinese heterosexual participants (17 females, 17 males) aged 18-24 years (mean age 21.63 ± 1.51 years) participated in the experiment, which used event-related potentials (ERPs) based on a study-test paradigm. The behavioral results showed that both men and women had significantly higher accuracy rates for attractive faces than for unattractive faces, but men reacted faster to unattractive faces. Gender differences in ERPs showed that attractive faces elicited larger early components such as P1, N170, and P2 in men than in women. The results indicated that the effects of recognition bias during memory processing modulated by female facial attractiveness are greater for men than for women. Behavioral and ERP evidence indicates that men and women differ in their attentional adhesion to attractive female faces; different mating-related motives may guide the selective processing of attractive men and women. These findings establish, from an evolutionary perspective, a contribution of gender differences to the memory processing of female facial attractiveness.

  15. Diagnosis and surgical outcomes of intraparotid facial nerve schwannoma showing normal facial nerve function.

    PubMed

    Lee, D W; Byeon, H K; Chung, H P; Choi, E C; Kim, S-H; Park, Y M

    2013-07-01

    The findings of intraparotid facial nerve schwannoma (FNS) using preoperative diagnostic tools, including ultrasonography (US)-guided fine needle aspiration biopsy, computed tomography (CT) scan, and magnetic resonance imaging (MRI), were analyzed to determine if there are any useful findings that might suggest the presence of a lesion. Treatment guidelines are suggested. The medical records of 15 patients who were diagnosed with an intraparotid FNS were retrospectively analyzed. US and CT scans provide clinicians with only limited information; gadolinium enhanced T1-weighted images from MRI provide more specific findings. Tumors could be removed successfully with surgical exploration, preserving facial nerve function at the same time. Gadolinium-enhanced T1-weighted MRI showed more characteristic findings for the diagnosis of intraparotid FNS. Intraparotid FNS without facial palsy can be diagnosed with MRI preoperatively, and surgical exploration is a suitable treatment modality which can remove the tumor and preserve facial nerve function. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.

  16. Biometrics: A Look at Facial Recognition

    DTIC Science & Technology

    a facial recognition system in the city’s Oceanfront tourist area. The system has been tested and has recently been fully implemented. Senator...Kenneth W. Stolle, the Chairman of the Virginia State Crime Commission, established a Facial Recognition Technology Sub-Committee to examine the issue of... facial recognition technology. This briefing begins by defining biometrics and discussing examples of the technology. It then explains how biometrics

  17. Facial trauma: general principles of management.

    PubMed

    Hollier, Larry H; Sharabi, Safa E; Koshy, John C; Stal, Samuel

    2010-07-01

    Facial fractures are common problems encountered by the plastic surgeon. Although ubiquitous in nature, their optimal treatment requires precise knowledge of the most recent evidence-based and technologically advanced recommendations. This article discusses a variety of contemporary issues regarding facial fractures, including physical and radiologic diagnosis, treatment pearls and caveats, and the role of various synthetic materials and plating technologies for optimal facial fracture fixation.

  18. Agency and facial emotion judgment in context.

    PubMed

    Ito, Kenichi; Masuda, Takahiko; Li, Liman Man Wai

    2013-06-01

    Past research showed that East Asians' belief in holism was expressed as their tendencies to include background facial emotions into the evaluation of target faces more than North Americans. However, this pattern can be interpreted as North Americans' tendency to downplay background facial emotions due to their conceptualization of facial emotion as volitional expression of internal states. Examining this alternative explanation, we investigated whether different types of contextual information produce varying degrees of effect on one's face evaluation across cultures. In three studies, European Canadians and East Asians rated the intensity of target facial emotions surrounded with either affectively salient landscape sceneries or background facial emotions. The results showed that, although affectively salient landscapes influenced the judgment of both cultural groups, only European Canadians downplayed the background facial emotions. The role of agency as differently conceptualized across cultures and multilayered systems of cultural meanings are discussed.

  19. WITHDRAWN: Resorbable versus titanium plates for facial fractures.

    PubMed

    Dorri, Mojtaba; Oliver, Richard

    2018-05-23

    Rigid internal fixation of the jaw bones is a routine procedure for the management of facial fractures. Titanium plates and screws are routinely used for this purpose. The limitations of this system have led to the development of plates manufactured from bioresorbable materials which, in some cases, obviate the need for a second surgery. However, concerns remain about the stability of fixation, the length of time required for their degradation and the possibility of foreign body reactions. To compare the effectiveness of bioresorbable fixation systems with titanium systems for the management of facial fractures. We searched the following databases: The Cochrane Oral Health Group's Trials Register (to 20th August 2008), the Cochrane Central Register of Controlled Trials (CENTRAL) (The Cochrane Library 2008, Issue 3), MEDLINE (1950 to 20th August 2008), EMBASE (from 1980 to 20th August 2008), http://www.clinicaltrials.gov/ and http://www.controlled-trials.com (to 20th August 2008). Randomised controlled trials comparing resorbable versus titanium fixation systems used for facial fractures. Retrieved studies were independently screened by two review authors. Results were to be expressed as random-effects models using mean differences for continuous outcomes and risk ratios for dichotomous outcomes with 95% confidence intervals. Heterogeneity was to be investigated including both clinical and methodological factors. The search strategy retrieved 53 potentially eligible studies. None of the retrieved studies met our inclusion criteria and all were excluded from this review. One study is awaiting classification as we failed to obtain the full text copy. Three ongoing trials were retrieved, two of which were stopped before recruiting the planned number of participants. In one study, the excess of complications in the resorbable arm was declared as the reason for stopping the trial. This review illustrates that there are no published randomised controlled clinical

  20. Aberrant patterns of visual facial information usage in schizophrenia.

    PubMed

    Clark, Cameron M; Gosselin, Frédéric; Goghari, Vina M

    2013-05-01

    Deficits in facial emotion perception have been linked to poorer functional outcome in schizophrenia. However, the relationship between abnormal emotion perception and functional outcome remains poorly understood. To better understand the nature of facial emotion perception deficits in schizophrenia, we used the Bubbles Facial Emotion Perception Task to identify differences in usage of visual facial information in schizophrenia patients (n = 20) and controls (n = 20), when differentiating between angry and neutral facial expressions. As hypothesized, schizophrenia patients required more facial information than controls to accurately differentiate between angry and neutral facial expressions, and they relied on different facial features and spatial frequencies to differentiate these facial expressions. Specifically, schizophrenia patients underutilized the eye regions, overutilized the nose and mouth regions, and virtually ignored information presented at the lowest levels of spatial frequency. In addition, a post hoc one-tailed t test revealed a positive relationship of moderate strength between the degree of divergence from "normal" visual facial information usage in the eye region and lower overall social functioning. These findings provide direct support for aberrant patterns of visual facial information usage in schizophrenia in differentiating between socially salient emotional states. © 2013 American Psychological Association
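
    As a hedged sketch of the general Bubbles idea referenced above (revealing a face only through a sparse set of Gaussian apertures so that the informative regions can be inferred from response accuracy), the code below builds a random bubble mask and applies it to an image; the number of bubbles, aperture width, and the omission of per-spatial-frequency processing are illustrative simplifications of the actual task.

      # Sketch: reveal an image through randomly placed Gaussian apertures ("bubbles").
      # Number of bubbles and sigma are illustrative; the real task also samples
      # separate spatial-frequency bands, which is omitted here.
      import numpy as np

      def bubble_mask(shape, n_bubbles=10, sigma=12.0, rng=None):
          rng = rng or np.random.default_rng()
          h, w = shape
          yy, xx = np.mgrid[0:h, 0:w]
          mask = np.zeros(shape)
          for _ in range(n_bubbles):
              cy, cx = rng.uniform(0, h), rng.uniform(0, w)
              mask += np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))
          return np.clip(mask, 0.0, 1.0)

      rng = np.random.default_rng(4)
      face = rng.random((256, 256))                    # stand-in for a face image in [0, 1]
      mask = bubble_mask(face.shape, rng=rng)
      revealed = face * mask                           # stimulus shown on one trial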

  1. Classification of facial-emotion expression in the application of psychotherapy using Viola-Jones and Edge-Histogram of Oriented Gradient.

    PubMed

    Candra, Henry; Yuwono, Mitchell; Rifai Chai; Nguyen, Hung T; Su, Steven

    2016-08-01

    Psychotherapy requires appropriate recognition of a patient's facial-emotion expression to provide proper treatment during a session. To address this need, this paper proposes a facial emotion recognition system that combines the Viola-Jones detector with a feature descriptor we term Edge-Histogram of Oriented Gradients (E-HOG). The performance of the proposed method is compared across various feature sources, including the whole face, the eyes, the mouth, and both the eyes and the mouth. Seven classes of basic emotions were identified with 96.4% accuracy using a multi-class Support Vector Machine (SVM). The proposed E-HOG descriptor is much cheaper to compute than traditional HOG, as shown by a processing-time improvement as high as 1833.33% (p-value = 2.43E-17) with a slight reduction in accuracy of only 1.17% (p-value = 0.0016).
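
    As a hedged sketch of this kind of pipeline (not the authors' E-HOG implementation), the code below detects a face with OpenCV's Viola-Jones cascade, describes it with the standard HOG descriptor from scikit-image, and feeds the features to a multi-class SVM. The cascade file ships with OpenCV; the training images, labels, and crop size are hypothetical.

    ```python
    import cv2
    import numpy as np
    from skimage.feature import hog
    from sklearn.svm import SVC

    # Viola-Jones face detector shipped with OpenCV.
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def face_hog(gray_image):
        """Detect the largest face and describe it with a standard HOG vector."""
        faces = detector.detectMultiScale(gray_image, scaleFactor=1.1, minNeighbors=5)
        if len(faces) == 0:
            return None
        x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
        crop = cv2.resize(gray_image[y:y + h, x:x + w], (64, 64))
        return hog(crop, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))

    # Hypothetical training data: grayscale face images and emotion labels (0-6).
    # X = np.stack([face_hog(img) for img in train_images])
    # y = np.array(train_labels)
    # clf = SVC(kernel="rbf", decision_function_shape="ovr").fit(X, y)
    # prediction = clf.predict(face_hog(test_image)[None, :])
    ```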

  2. Marker optimization for facial motion acquisition and deformation.

    PubMed

    Le, Binh H; Zhu, Mingyang; Deng, Zhigang

    2013-11-01

    A long-standing problem in marker-based facial motion capture is determining the optimal mocap marker layout. Despite its wide range of potential applications, this problem has not yet been systematically explored. This paper describes an approach to compute optimized marker layouts for facial motion acquisition as optimization of characteristic control points from a set of high-resolution, ground-truth facial mesh sequences. Specifically, the thin-shell linear deformation model is imposed onto the example pose reconstruction process via optional hard constraints such as symmetry and multiresolution constraints. Through our experiments and comparisons, we validate the effectiveness, robustness, and accuracy of our approach. Besides guiding minimal yet effective placement of facial mocap markers, we also describe and demonstrate two selected applications: marker-based facial mesh skinning and multiresolution facial performance capture.
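
    The paper's thin-shell formulation is not reproduced here; as a rough illustration of the underlying idea (choosing a small set of control points whose trajectories best reconstruct the full mesh motion), the hedged sketch below greedily selects marker vertices by linear least-squares reconstruction error. The data shapes and the synthetic motion matrix are assumptions.

    ```python
    import numpy as np

    def select_markers(X, k):
        """Greedy marker (vertex) selection for mesh motion data.

        X: array of shape (frames, 3 * n_vertices); columns come in xyz triples.
        Returns indices of k vertices whose trajectories best reconstruct X in
        the least-squares sense, a simplified proxy for layout optimization.
        """
        n_vertices = X.shape[1] // 3
        chosen = []
        for _ in range(k):
            best, best_err = None, np.inf
            for v in range(n_vertices):
                if v in chosen:
                    continue
                cols = [3 * u + d for u in chosen + [v] for d in range(3)]
                B = X[:, cols]
                W, *_ = np.linalg.lstsq(B, X, rcond=None)
                err = np.linalg.norm(X - B @ W)
                if err < best_err:
                    best, best_err = v, err
            chosen.append(best)
        return chosen

    # Synthetic example: 50 frames of a 40-vertex mesh, pick 6 markers.
    X = np.random.rand(50, 120)
    print(select_markers(X, 6))
    ```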

  3. Automated reuseable components system study results

    NASA Technical Reports Server (NTRS)

    Gilroy, Kathy

    1989-01-01

    The Automated Reusable Components System (ARCS) was developed under a Phase 1 Small Business Innovative Research (SBIR) contract for the U.S. Army CECOM. The objectives of the ARCS program were: (1) to investigate issues associated with automated reuse of software components, identify alternative approaches, and select promising technologies, and (2) to develop tools that support component classification and retrieval. The approach followed was to research emerging techniques and experimental applications associated with reusable software libraries, to investigate the more mature information retrieval technologies for applicability, and to investigate the applicability of specialized technologies to improve the effectiveness of a reusable component library. Various classification schemes and retrieval techniques were identified and evaluated for potential application in an automated library system for reusable components. Strategies for library organization and management, component submittal and storage, and component search and retrieval were developed. A prototype ARCS was built to demonstrate the feasibility of automating the reuse process. The prototype was created using a subset of the classification and retrieval techniques that were investigated. The demonstration system was exercised and evaluated using reusable Ada components selected from the public domain. A requirements specification for a production-quality ARCS was also developed.
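
    As a toy illustration of the kind of classification-and-retrieval strategy described (not the actual ARCS design), the sketch below indexes components by facet/value pairs and retrieves those matching a query; all component names and facets are invented.

    ```python
    # Minimal faceted classification and retrieval in the spirit of a reusable
    # component library: each component carries facet/value pairs and a query
    # matches components that satisfy every requested facet value.
    from dataclasses import dataclass, field

    @dataclass
    class Component:
        name: str
        facets: dict = field(default_factory=dict)   # e.g. {"function": "sort", ...}

    LIBRARY = [
        Component("quick_sort", {"function": "sort", "object": "array", "language": "Ada"}),
        Component("list_search", {"function": "search", "object": "list", "language": "Ada"}),
    ]

    def retrieve(query):
        """Return components whose facets match all key/value pairs in the query."""
        return [c for c in LIBRARY if all(c.facets.get(k) == v for k, v in query.items())]

    print([c.name for c in retrieve({"function": "sort", "language": "Ada"})])
    ```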

  4. Face inversion decreased information about facial identity and expression in face-responsive neurons in macaque area TE.

    PubMed

    Sugase-Miyamoto, Yasuko; Matsumoto, Narihisa; Ohyama, Kaoru; Kawano, Kenji

    2014-09-10

    To investigate the effect of face inversion and thatcherization (eye inversion) on temporal processing stages of facial information, single neuron activities in the temporal cortex (area TE) of two rhesus monkeys were recorded. Test stimuli were colored pictures of monkey faces (four with four different expressions), human faces (three with four different expressions), and geometric shapes. Modifications were made to each face picture, and its four variations were used as stimuli: upright original, inverted original, upright thatcherized, and inverted thatcherized faces. A total of 119 neurons responded to at least one of the upright original facial stimuli. A majority of the neurons (71%) showed activity modulations depending on upright and inverted presentations, and a smaller number of neurons (13%) showed activity modulations depending on original and thatcherized face conditions. In the case of face inversion, information about the fine category (facial identity and expression) decreased, whereas information about the global category (monkey vs human vs shape) was retained for both the original and thatcherized faces. Principal component analysis on the neuronal population responses revealed that the global categorization occurred regardless of the face inversion and that the inverted faces were represented near the upright faces in the principal component analysis space. By contrast, face inversion decreased the ability to represent human facial identity and monkey facial expression. Thus, the neuronal population represented inverted faces as faces but failed to represent the identity and expression of the inverted faces, indicating that the neuronal representation in area TE causes the perceptual effect of face inversion. Copyright © 2014 the authors.

  5. Image Based Hair Segmentation Algorithm for the Application of Automatic Facial Caricature Synthesis

    PubMed Central

    Peng, Zhenyun; Zhang, Yaohui

    2014-01-01

    Hair is a salient feature of the human face region and one of the important cues for face analysis. Accurate detection and representation of the hair region is one of the key components for automatic synthesis of human facial caricatures. In this paper, an automatic hair detection algorithm for the automatic synthesis of facial caricatures from a single image is proposed. First, hair regions in training images are labeled manually, and the hair position prior distributions and hair color likelihood distribution function are estimated efficiently from these labels. Second, the energy function of the test image is constructed from the estimated prior distributions of hair location and the hair color likelihood. This energy function is then minimized using the graph-cuts technique to obtain an initial hair region. Finally, the K-means algorithm and image post-processing techniques are applied to the initial hair region so that the final hair region can be segmented precisely. Experimental results show that the average processing time for each image is about 280 ms and the average hair region detection accuracy is above 90%. The proposed algorithm is applied to a facial caricature synthesis system; experiments show that with the proposed hair segmentation algorithm the resulting facial caricatures are vivid and satisfying. PMID:24592182
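
    The sketch below is a simplified stand-in for the described pipeline: OpenCV's grabCut (a graph-cut segmenter) is seeded with a crude location prior instead of the paper's learned position and color energy, and the K-means refinement is omitted. The synthetic input image and the "upper third is probably hair" prior are illustrative assumptions.

    ```python
    import cv2
    import numpy as np

    def segment_hair(image_bgr, prior_mask):
        """Graph-cut hair segmentation seeded by a location prior.

        prior_mask: uint8 array, 1 where hair is likely, 0 elsewhere. cv2.grabCut
        stands in for the paper's custom energy minimization; K-means and
        morphology could refine the result further.
        """
        mask = np.where(prior_mask > 0, cv2.GC_PR_FGD, cv2.GC_PR_BGD).astype(np.uint8)
        bgd, fgd = np.zeros((1, 65), np.float64), np.zeros((1, 65), np.float64)
        cv2.grabCut(image_bgr, mask, None, bgd, fgd, 5, cv2.GC_INIT_WITH_MASK)
        return np.isin(mask, (cv2.GC_FGD, cv2.GC_PR_FGD)).astype(np.uint8)

    # Hypothetical usage: random noise stands in for a face image, and the upper
    # third of the frame is marked as "probably hair".
    rng = np.random.default_rng(0)
    img = rng.integers(0, 256, (240, 320, 3), dtype=np.uint8)
    prior = np.zeros(img.shape[:2], np.uint8)
    prior[: img.shape[0] // 3, :] = 1
    hair = segment_hair(img, prior)
    ```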

  6. Amblyopia Associated with Congenital Facial Nerve Paralysis.

    PubMed

    Iwamura, Hitoshi; Kondo, Kenji; Sawamura, Hiromasa; Baba, Shintaro; Yasuhara, Kazuo; Yamasoba, Tatsuya

    2016-01-01

    The association between congenital facial paralysis and visual development has not been thoroughly studied. Of 27 pediatric cases of congenital facial paralysis, we identified 3 patients who developed amblyopia, a visual acuity decrease caused by abnormal visual development, as a comorbidity. These 3 patients had facial paralysis in the periocular region and developed amblyopia on the paralyzed side. They started treatment by wearing an eye patch immediately after diagnosis and before the critical visual developmental period; all patients responded to the treatment. Our findings suggest that the incidence of amblyopia in cases of congenital facial paralysis, particularly paralysis involving the periocular region, is higher than in the general pediatric population. Interestingly, 2 of the 3 patients developed anisometropic amblyopia due to hyperopia of the affected eye, implying that the periocular facial paralysis may have affected the refraction of the eye through as yet unspecified mechanisms. Therefore, physicians who manage facial paralysis should keep this pathology in mind, and when they see pediatric patients with congenital facial paralysis involving the periocular region, they should consult an ophthalmologist as soon as possible. © 2016 S. Karger AG, Basel.

  7. Classification of time-of-flight secondary ion mass spectrometry spectra from complex Cu-Fe sulphides by principal component analysis and artificial neural networks.

    PubMed

    Kalegowda, Yogesh; Harmer, Sarah L

    2013-01-08

    An artificial neural network (ANN) classifier and a hybrid principal component analysis-artificial neural network (PCA-ANN) classifier have been successfully implemented for the classification of static time-of-flight secondary ion mass spectrometry (ToF-SIMS) mass spectra collected from complex Cu-Fe sulphides (chalcopyrite, bornite, chalcocite and pyrite) at different flotation conditions. ANNs are very good pattern classifiers because of their ability to learn and generalise patterns that are not linearly separable, their fault and noise tolerance, and their high parallelism. In the first approach, fragments from the whole ToF-SIMS spectrum were used as input to the ANN; the model yielded high overall correct classification rates of 100% for feed samples, 88% for conditioned feed samples and 91% for Eh-modified samples. In the second approach, the hybrid pattern classifier PCA-ANN was applied. PCA is a very effective multivariate data analysis tool used to enhance species features and reduce data dimensionality. Principal component (PC) scores, which accounted for 95% of the raw spectral data variance, were used as input to the ANN; the model yielded high overall correct classification rates of 88% for conditioned feed samples and 95% for Eh-modified samples. Copyright © 2012 Elsevier B.V. All rights reserved.
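
    A hedged scikit-learn sketch of the hybrid PCA-ANN idea follows: PCA retaining 95% of the variance feeds a small feed-forward network. The arrays are synthetic stand-ins for ToF-SIMS spectra, and the layer size, scaling, and cross-validation scheme are assumptions rather than the paper's settings.

    ```python
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import cross_val_score

    # Synthetic stand-in for ToF-SIMS spectra: 120 samples x 500 m/z intensities,
    # four mineral classes (chalcopyrite, bornite, chalcocite, pyrite).
    rng = np.random.default_rng(0)
    X = rng.random((120, 500))
    y = rng.integers(0, 4, 120)

    # PCA retaining 95% of the variance feeds a small feed-forward ANN.
    model = make_pipeline(
        StandardScaler(),
        PCA(n_components=0.95),
        MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
    )
    print(cross_val_score(model, X, y, cv=5).mean())
    ```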

  8. Hepatitis Diagnosis Using Facial Color Image

    NASA Astrophysics Data System (ADS)

    Liu, Mingjia; Guo, Zhenhua

    Facial color diagnosis is an important diagnostic method in traditional Chinese medicine (TCM). However, due to its qualitative, subjective and experience-based nature, traditional facial color diagnosis has very limited application in clinical medicine. To circumvent the subjective and qualitative problems of TCM facial color diagnosis, in this paper we present a novel computer-aided facial color diagnosis method (CAFCDM). The method has three parts: a face image database, an image preprocessing module and a diagnosis engine. The face image database was built from a group of 116 patients affected by 2 kinds of liver disease and 29 healthy volunteers. The quantitative color feature is extracted from facial images using popular digital image processing techniques. Then, a KNN classifier is employed to model the relationship between the quantitative color feature and diseases. The results show that the method can properly identify three groups: healthy, severe hepatitis with jaundice and severe hepatitis without jaundice, with accuracy higher than 73%.
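
    As a rough sketch of the classification step only (not the CAFCDM implementation), the code below derives a simple color feature per face and trains a KNN classifier; the feature definition, image sizes, and labels are illustrative assumptions.

    ```python
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.model_selection import cross_val_score

    def color_feature(face_rgb):
        """Illustrative quantitative color feature: per-channel mean and std."""
        pixels = face_rgb.reshape(-1, 3).astype(float)
        return np.concatenate([pixels.mean(axis=0), pixels.std(axis=0)])

    # Synthetic stand-in for the face-image database: 145 faces, 3 diagnostic
    # groups (healthy, hepatitis with jaundice, hepatitis without jaundice).
    rng = np.random.default_rng(1)
    faces = rng.integers(0, 256, size=(145, 64, 64, 3), dtype=np.uint8)
    labels = rng.integers(0, 3, size=145)

    X = np.stack([color_feature(f) for f in faces])
    knn = KNeighborsClassifier(n_neighbors=5)
    print(cross_val_score(knn, X, labels, cv=5).mean())
    ```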

  9. Selecting reusable components using algebraic specifications

    NASA Technical Reports Server (NTRS)

    Eichmann, David A.

    1992-01-01

    A significant hurdle confronts the software reuser attempting to select candidate components from a software repository - discriminating between those components without resorting to inspection of the implementation(s). We outline a mixed classification/axiomatic approach to this problem based upon our lattice-based faceted classification technique and Guttag and Horning's algebraic specification techniques. This approach selects candidates by natural language-derived classification, by their interfaces, using signatures, and by their behavior, using axioms. We briefly outline our problem domain and related work. Lattice-based faceted classifications are described; the reader is referred to surveys of the extensive literature for algebraic specification techniques. Behavioral support for reuse queries is presented, followed by the conclusions.
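
    A toy sketch of signature-based matching, one of the retrieval mechanisms described, is given below; behavioral (axiom-based) matching is omitted, and all component names and types are invented.

    ```python
    # Toy signature-based retrieval: a component matches a query when its argument
    # types and return type line up. Axiom-based behavioral matching, as in the
    # algebraic-specification approach, is omitted for brevity.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Signature:
        name: str
        arg_types: tuple
        return_type: str

    REPOSITORY = [
        Signature("push", ("Stack", "Item"), "Stack"),
        Signature("pop", ("Stack",), "Stack"),
        Signature("enqueue", ("Queue", "Item"), "Queue"),
    ]

    def match(query_args, query_return):
        return [s for s in REPOSITORY
                if s.arg_types == query_args and s.return_type == query_return]

    print([s.name for s in match(("Stack", "Item"), "Stack")])   # -> ['push']
    ```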

  10. Measuring facial expression of emotion.

    PubMed

    Wolf, Karsten

    2015-12-01

    Research into emotions has increased in recent decades, especially on the subject of recognition of emotions. However, studies of the facial expressions of emotion were compromised by technical problems with visible video analysis and electromyography in experimental settings. These have only recently been overcome. There have been new developments in the field of automated computerized facial recognition, allowing real-time identification of facial expression in social environments. This review addresses three approaches to measuring facial expression of emotion and describes their specific contributions to understanding emotion in the healthy population and in persons with mental illness. Despite recent progress, studies on human emotions have been hindered by the lack of consensus on an emotion theory suited to examining the dynamic aspects of emotion and its expression. Studying the expression of emotion in patients with mental health conditions for diagnostic and therapeutic purposes will profit from theoretical and methodological progress.

  11. Surgical Approaches to Facial Nerve Deficits

    PubMed Central

    Birgfeld, Craig; Neligan, Peter

    2011-01-01

    The facial nerve is one of the most commonly injured cranial nerves. Once injured, the effects on form, function, and psyche are profound. We review the anatomy of the facial nerve from the brain stem to its terminal branches. We also discuss the physical exam findings of facial nerve injury at various levels. Finally, we describe various reconstructive options for reanimating the face and restoring both form and function. PMID:22451822

  12. Children's Recognition of Emotional Facial Expressions Through Photographs and Drawings.

    PubMed

    Brechet, Claire

    2017-01-01

    The author's purpose was to examine children's recognition of emotional facial expressions by comparing two types of stimulus: photographs and drawings. The author aimed to investigate whether drawings could be considered a more evocative material than photographs, as a function of age and emotion. Five- and 7-year-old children were presented with photographs and drawings displaying facial expressions of 4 basic emotions (i.e., happiness, sadness, anger, and fear) and were asked to perform a matching task by pointing to the face corresponding to the target emotion labeled by the experimenter. The photographs were selected from the Radboud Faces Database, and the drawings were designed on the basis of both the facial components involved in the expression of these emotions and the graphic cues children tend to use when asked to depict these emotions in their own drawings. Our results show that drawings are better recognized than photographs for sadness, anger, and fear (with no difference for happiness, due to a ceiling effect), and that the difference between the two types of stimuli tends to be larger for 5-year-olds than for 7-year-olds. These results are discussed in view of their implications, both for future research and for practical application.

  13. A mixture model with a reference-based automatic selection of components for disease classification from protein and/or gene expression levels

    PubMed Central

    2011-01-01

    Background: Bioinformatics data analysis often uses a linear mixture model representing samples as additive mixtures of components. Properly constrained blind matrix factorization methods extract those components using mixture samples only. However, automatic selection of the extracted components to be retained for classification analysis remains an open issue. Results: The method proposed here is applied to well-studied protein and genomic datasets of ovarian, prostate and colon cancers to extract components for disease prediction. It achieves average sensitivities of 96.2% (sd = 2.7%), 97.6% (sd = 2.8%) and 90.8% (sd = 5.5%) and average specificities of 93.6% (sd = 4.1%), 99% (sd = 2.2%) and 79.4% (sd = 9.8%) in 100 independent two-fold cross-validations. Conclusions: We propose an additive mixture model of a sample for feature extraction using, in principle, sparseness constrained factorization on a sample-by-sample basis. In contrast, existing methods factorize the complete dataset simultaneously. The sample model is composed of a reference sample representing the control and/or case (disease) groups and a test sample. Each sample is decomposed into two or more components that are selected automatically (without using label information) as control specific, case specific and not differentially expressed (neutral). The number of components is determined by cross-validation. Automatic assignment of features (m/z ratios or genes) to a particular component is based on thresholds estimated from each sample directly. Due to the locality of decomposition, the strength of the expression of each feature across the samples can vary, yet the features will still be allocated to the related disease and/or control specific component. Since label information is not used in the selection process, case and control specific components can be used for classification. That is not the case with standard factorization methods. Moreover, the component selected by the proposed method as disease specific

  14. A greater decline in female facial attractiveness during middle age reflects women's loss of reproductive value.

    PubMed

    Maestripieri, Dario; Klimczuk, Amanda C E; Traficonte, Daniel M; Wilson, M Claire

    2014-01-01

    Facial attractiveness represents an important component of an individual's overall attractiveness as a potential mating partner. Perceptions of facial attractiveness are expected to vary with age-related changes in health, reproductive value, and power. In this study, we investigated perceptions of facial attractiveness, power, and personality in two groups of women of pre- and post-menopausal ages (35-50 years and 51-65 years, respectively) and two corresponding groups of men. We tested three hypotheses: (1) that perceived facial attractiveness would be lower for older than for younger men and women; (2) that the age-related reduction in facial attractiveness would be greater for women than for men; and (3) that for men, there would be a larger increase in perceived power at older ages. Eighty facial stimuli were rated by 60 (30 male, 30 female) middle-aged women and men using online surveys. Our three main hypotheses were supported by the data. Consistent with sex differences in mating strategies, the greater age-related decline in female facial attractiveness was driven by male respondents, while the greater age-related increase in male perceived power was driven by female respondents. In addition, we found evidence that some personality ratings were correlated with perceived attractiveness and power ratings. The results of this study are consistent with evolutionary theory and with previous research showing that faces can provide important information about characteristics that men and women value in a potential mating partner such as their health, reproductive value, and power or possession of resources.

  15. Comparing Efficacy and Costs of Four Facial Fillers in Human Immunodeficiency Virus-Associated Lipodystrophy: A Clinical Trial.

    PubMed

    Vallejo, Alfonso; Garcia-Ruano, Angela A; Pinilla, Carmen; Castellano, Michele; Deleyto, Esther; Perez-Cano, Rosa

    2018-03-01

    The objective of this study was to evaluate and compare the safety and effectiveness of four different dermal fillers in the treatment of facial lipoatrophy secondary to human immunodeficiency virus. The authors conducted a clinical trial including 147 patients suffering from human immunodeficiency virus-induced lipoatrophy treated with Sculptra (poly-L-lactic acid), Radiesse (calcium hydroxylapatite), Aquamid (polyacrylamide), or autologous fat. Objective and subjective changes were evaluated during a 24-month follow-up. Number of sessions, total volume injected, and overall costs of treatment were also analyzed. A comparative cost-effectiveness analysis of the treatment options was performed. Objective improvement in facial lipoatrophy, assessed by the surgeon in terms of changes from baseline using the published classification of Fontdevila, was reported in 53 percent of the cases. Patient self-evaluation showed a general improvement after the use of facial fillers. Patients reported being satisfied with the treatment and with the reduced impact of lipodystrophy on their quality of life. Despite the nonsignificant differences observed in the number of sessions and volume, autologous fat showed significantly lower costs than all synthetic fillers (p < 0.05). Surgical treatment of human immunodeficiency virus-associated facial lipoatrophy using dermal fillers is a safe and effective procedure that improves the aesthetic appearance and the quality of life of patients. Permanent fillers and autologous fat achieve the most consistent results over time, with lipofilling being the most cost-effective procedure.

  16. Automatic classification of retinal three-dimensional optical coherence tomography images using principal component analysis network with composite kernels.

    PubMed

    Fang, Leyuan; Wang, Chong; Li, Shutao; Yan, Jun; Chen, Xiangdong; Rabbani, Hossein

    2017-11-01

    We present an automatic method, termed the principal component analysis network with composite kernel (PCANet-CK), for the classification of three-dimensional (3-D) retinal optical coherence tomography (OCT) images. Specifically, the proposed PCANet-CK method first utilizes the PCANet to automatically learn features from each B-scan of the 3-D retinal OCT images. Then, multiple kernels are separately applied to a set of very important features of the B-scans and fused together, which can jointly exploit the correlations among features of the 3-D OCT images. Finally, the fused (composite) kernel is incorporated into an extreme learning machine for the OCT image classification. We tested the proposed algorithm on two real 3-D spectral domain OCT (SD-OCT) datasets (of normal subjects and subjects with macular edema and age-related macular degeneration), which demonstrated its effectiveness. © 2017 Society of Photo-Optical Instrumentation Engineers (SPIE).
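
    A hedged sketch of the composite-kernel idea follows: RBF kernels computed on two feature sets are fused by a weighted sum and passed to a precomputed-kernel SVM, which stands in for the paper's PCANet features and extreme learning machine. The feature shapes, class labels, fusion weight, and gamma are assumptions.

    ```python
    import numpy as np
    from sklearn.metrics.pairwise import rbf_kernel
    from sklearn.svm import SVC

    # Two hypothetical feature sets extracted from the same B-scans (stand-ins
    # for PCANet features); a weighted sum of their RBF kernels forms the
    # composite kernel.
    rng = np.random.default_rng(0)
    F1, F2 = rng.random((80, 40)), rng.random((80, 25))
    y = rng.integers(0, 3, 80)                 # normal / edema / AMD (illustrative)

    w = 0.6                                    # kernel fusion weight (assumed)
    K = w * rbf_kernel(F1, gamma=0.1) + (1 - w) * rbf_kernel(F2, gamma=0.1)

    # An SVM with a precomputed kernel stands in for the extreme learning machine.
    clf = SVC(kernel="precomputed").fit(K, y)
    print(clf.score(K, y))                     # training accuracy on the toy data
    ```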

  17. A novel BCI based on ERP components sensitive to configural processing of human faces.

    PubMed

    Zhang, Yu; Zhao, Qibin; Jin, Jing; Wang, Xingyu; Cichocki, Andrzej

    2012-04-01

    This study introduces a novel brain-computer interface (BCI) based on an oddball paradigm using stimuli of facial images with loss of configural face information (e.g., inversion of face). To the best of our knowledge, till now the configural processing of human faces has not been applied to BCI but widely studied in cognitive neuroscience research. Our experiments confirm that the face-sensitive event-related potential (ERP) components N170 and vertex positive potential (VPP) have reflected early structural encoding of faces and can be modulated by the configural processing of faces. With the proposed novel paradigm, we investigate the effects of ERP components N170, VPP and P300 on target detection for BCI. An eight-class BCI platform is developed to analyze ERPs and evaluate the target detection performance using linear discriminant analysis without complicated feature extraction processing. The online classification accuracy of 88.7% and information transfer rate of 38.7 bits min(-1) using stimuli of inverted faces with only single trial suggest that the proposed paradigm based on the configural processing of faces is very promising for visual stimuli-driven BCI applications.
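
    As an illustration of the classification step only (not the authors' pipeline), the sketch below applies linear discriminant analysis to synthetic single-trial amplitude features taken from assumed N170/VPP/P300 time windows; the feature layout and the injected target effect are invented.

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    # Synthetic single-trial features: mean amplitudes in assumed N170, VPP and
    # P300 windows at 8 electrodes; labels mark target vs non-target stimuli.
    rng = np.random.default_rng(0)
    n_trials, n_features = 400, 3 * 8
    X = rng.normal(size=(n_trials, n_features))
    y = rng.integers(0, 2, n_trials)
    X[y == 1] += 0.3                           # toy target-related amplitude shift

    lda = LinearDiscriminantAnalysis()
    print(cross_val_score(lda, X, y, cv=5).mean())
    ```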

  19. Facial Feedback Mechanisms in Autistic Spectrum Disorders

    PubMed Central

    van den Heuvel, Claudia; Smeets, Raymond C.

    2008-01-01

    Facial feedback mechanisms of adolescents with Autistic Spectrum Disorders (ASD) were investigated utilizing three studies. Facial expressions, which became activated via automatic (Studies 1 and 2) or intentional (Study 2) mimicry, or via holding a pen between the teeth (Study 3), influenced corresponding emotions for controls, while individuals with ASD remained emotionally unaffected. Thus, individuals with ASD do not experience feedback from activated facial expressions as controls do. This facial feedback-impairment enhances our understanding of the social and emotional lives of individuals with ASD. PMID:18293075

  20. Facial Morphogenesis of the Earliest Europeans

    PubMed Central

    Lacruz, Rodrigo S.; de Castro, José María Bermúdez; Martinón-Torres, María; O’Higgins, Paul; Paine, Michael L.; Carbonell, Eudald; Arsuaga, Juan Luis; Bromage, Timothy G.

    2013-01-01

    The modern human face differs from that of our early ancestors in that the facial profile is relatively retracted (orthognathic). This change in facial profile is associated with a characteristic spatial distribution of bone deposition and resorption: growth remodeling. For humans, surface resorption commonly dominates on anteriorly-facing areas of the subnasal region of the maxilla and mandible during development. We mapped the distribution of facial growth remodeling activities on the 900–800 ky maxilla ATD6-69 assigned to H. antecessor, and on the 1.5 My cranium KNM-WT 15000, part of an associated skeleton assigned to African H. erectus. We show that, as in H. sapiens, H. antecessor shows bone resorption over most of the subnasal region. This pattern contrasts with that seen in KNM-WT 15000 where evidence of bone deposition, not resorption, was identified. KNM-WT 15000 is similar to Australopithecus and the extant African apes in this localized area of bone deposition. These new data point to diversity of patterns of facial growth in fossil Homo. The similarities in facial growth in H. antecessor and H. sapiens suggest that one key developmental change responsible for the characteristic facial morphology of modern humans can be traced back at least to H. antecessor. PMID:23762314

  1. Peripheral facial weakness (Bell's palsy).

    PubMed

    Basić-Kes, Vanja; Dobrota, Vesna Dermanović; Cesarik, Marijan; Matovina, Lucija Zadro; Madzar, Zrinko; Zavoreo, Iris; Demarin, Vida

    2013-06-01

    Peripheral facial weakness is facial nerve damage that results in muscle weakness on one side of the face. It may be idiopathic (Bell's palsy) or may have a detectable cause. Almost 80% of peripheral facial weakness cases are primary and the rest are secondary. The most frequent causes of secondary peripheral facial weakness are systemic viral infections, trauma, surgery, diabetes, local infections, tumor, immune disorders, drugs, degenerative diseases of the central nervous system, etc. The diagnosis relies upon the presence of typical signs and symptoms, blood chemistry tests, cerebrospinal fluid investigations, nerve conduction studies and neuroimaging methods (cerebral MRI, x-ray of the skull and mastoid). Treatment of secondary peripheral facial weakness is based on therapy for the underlying disorder, unlike the treatment of Bell's palsy, which is controversial due to the lack of large, randomized, controlled, prospective studies. There are some indications that steroids or antiviral agents are beneficial, but there are also studies that show no beneficial effect. Additional treatments include eye protection, physiotherapy, acupuncture, botulinum toxin, or surgery. Bell's palsy has a benign prognosis: about 80% of patients recover completely, 15% experience some degree of permanent nerve damage, and severe consequences remain in 5% of patients.

  2. Comparison of Direct Side-to-End and End-to-End Hypoglossal-Facial Anastomosis for Facial Nerve Repair.

    PubMed

    Samii, Madjid; Alimohamadi, Maysam; Khouzani, Reza Karimi; Rashid, Masoud Rafizadeh; Gerganov, Venelin

    2015-08-01

    The hypoglossal-facial anastomosis (HFA) is the gold standard for facial reanimation in patients with severe facial nerve palsy. The major drawbacks of the classic HFA technique are lingual morbidities due to hypoglossal nerve transection. The side-to-end HFA is a modification of the classic technique with fewer tongue-related morbidities. In this study we compared the outcomes of the classic end-to-end and the direct side-to-end HFA surgeries performed at our center with regard to the facial reanimation success rate and tongue-related morbidities. Twenty-six successive cases of HFA were enrolled. In 9 of them end-to-end anastomoses were performed, and 17 had direct side-to-end anastomoses. The House-Brackmann (HB) and Pitty and Tator (PT) scales were used to document surgical outcome. Hemiglossal atrophy, swallowing, and hypoglossal nerve function were assessed at follow-up. The original pathology was vestibular schwannoma in 15, meningioma in 4, brain stem glioma in 4, and other pathologies in 3. The mean interval between facial palsy and HFA was 18 months (range: 0-60). The median follow-up period was 20 months. The PT grade at follow-up was worse in patients with a longer interval between facial palsy and HFA (P value: 0.041). The lesion type was the only other factor that affected PT grade (the best results in vestibular schwannoma and the worst in the other pathologies group, P value: 0.038). The recovery period for facial tonicity was longer in patients with radiation therapy before HFA (13.5 vs. 8.5 months) and in those with a longer than 2-year interval from facial palsy to HFA (13.5 vs. 8.5 months). Although no significant difference between the side-to-end and the end-to-end groups was seen in terms of facial nerve functional recovery, patients in the side-to-end group had a significantly lower rate of lingual morbidities (tongue hemiatrophy: 100% vs. 5.8%, swallowing difficulty: 55% vs. 11.7%, speech disorder: 33% vs. 0%). With the side-to-end HFA

  3. 10 CFR 1045.17 - Classification levels.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... classification include detailed technical descriptions of critical features of a nuclear explosive design that... classification include designs for specific weapon components (not revealing critical features), key features of uranium enrichment technologies, or specifications of weapon materials. (3) Confidential. The Director of...

  4. 10 CFR 1045.17 - Classification levels.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... classification include detailed technical descriptions of critical features of a nuclear explosive design that... classification include designs for specific weapon components (not revealing critical features), key features of uranium enrichment technologies, or specifications of weapon materials. (3) Confidential. The Director of...

  5. 10 CFR 1045.17 - Classification levels.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... classification include detailed technical descriptions of critical features of a nuclear explosive design that... classification include designs for specific weapon components (not revealing critical features), key features of uranium enrichment technologies, or specifications of weapon materials. (3) Confidential. The Director of...

  6. 10 CFR 1045.17 - Classification levels.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... classification include detailed technical descriptions of critical features of a nuclear explosive design that... classification include designs for specific weapon components (not revealing critical features), key features of uranium enrichment technologies, or specifications of weapon materials. (3) Confidential. The Director of...

  7. Effect of neuromuscular electrical stimulation on facial muscle strength and oral function in stroke patients with facial palsy

    PubMed Central

    Choi, Jong-Bae

    2016-01-01

    [Purpose] The aim of this study was to investigate the effect of neuromuscular electrical stimulation on facial muscle strength and oral function in stroke patients with facial palsy. [Subjects and Methods] Nine subjects received the electrical stimulation and traditional dysphagia therapy. Electrical stimulation was applied to stimulate each subject’s facial muscles 30 minutes a day, 5 days a week, for 4 weeks. [Results] Subjects showed significant improvement in cheek and lip strength and oral function after the intervention. [Conclusion] This study demonstrates that electrical stimulation improves facial muscle strength and oral function in stroke patients with dysphagia. PMID:27799689

  8. Cerebellum and processing of negative facial emotions: cerebellar transcranial DC stimulation specifically enhances the emotional recognition of facial anger and sadness.

    PubMed

    Ferrucci, Roberta; Giannicola, Gaia; Rosa, Manuela; Fumagalli, Manuela; Boggio, Paulo Sergio; Hallett, Mark; Zago, Stefano; Priori, Alberto

    2012-01-01

    Some evidence suggests that the cerebellum participates in the complex network processing emotional facial expression. To evaluate the role of the cerebellum in recognising facial expressions we delivered transcranial direct current stimulation (tDCS) over the cerebellum and prefrontal cortex. A facial emotion recognition task was administered to 21 healthy subjects before and after cerebellar tDCS; we also tested subjects with a visual attention task and a visual analogue scale (VAS) for mood. Anodal and cathodal cerebellar tDCS both significantly enhanced sensory processing in response to negative facial expressions (anodal tDCS, p=.0021; cathodal tDCS, p=.018), but left positive emotion and neutral facial expressions unchanged (p>.05). tDCS over the right prefrontal cortex left facial expressions of both negative and positive emotion unchanged. These findings suggest that the cerebellum is specifically involved in processing facial expressions of negative emotion.

  9. Measurement of facial movements with Photoshop software during treatment of facial nerve palsy

    PubMed Central

    Pourmomeny, Abbas Ali; Zadmehr, Hassan; Hossaini, Mohsen

    2011-01-01

    BACKGROUND: Evaluating the function of the facial nerve is essential in order to determine the influence of various treatment methods. The aim of this study was to assess the agreement between a Photoshop-based scaling system and the facial grading system (FGS). METHODS: In this semi-experimental study, thirty subjects with facial nerve paralysis were recruited. All patients were evaluated before and after treatment with both the FGS and Photoshop measurements. RESULTS: The mean FGS values before and after treatment were 35 ± 25 and 67 ± 24, respectively (p < 0.001). In the Photoshop assessment, the mean change of facial expression on the impaired side relative to the normal side, measured at rest and during three main facial movements, was 3.4 ± 0.55 mm before and 4.04 ± 0.49 mm after treatment (p < 0.001). Spearman's correlation coefficient between the values obtained by the two methods was 0.66 (p < 0.001). CONCLUSIONS: Evaluating facial nerve palsy using Photoshop was more objective than using the FGS and may therefore be recommended instead. PMID:22973325

  10. Human homogamy in facial characteristics: does a sexual-imprinting-like mechanism play a role?

    PubMed

    Nojo, Saori; Tamura, Satoshi; Ihara, Yasuo

    2012-09-01

    Human homogamy may be caused in part by individuals' preference for phenotypic similarities. Two types of preference can result in homogamy: individuals may prefer someone who is similar to themselves (self-referent phenotype matching) or to their parents (a sexual-imprinting-like mechanism). In order to examine these possibilities, we compare faces of couples and their family members in two ways. First, "perceived" similarity between a pair of faces is quantified as similarity ratings given to the pair. Second, "physical" similarity between two groups of faces is evaluated on the basis of correlations in principal component scores generated from facial measurements. Our results demonstrate a tendency to homogamy in facial characteristics and suggest that the tendency is due primarily to self-referent phenotype matching. Nevertheless, the presence of a sexual-imprinting-like effect is also partially indicated: whether individuals are involved in facial homogamy may be affected by their relationship with their parents during childhood.

  11. Enhanced facial texture illumination normalization for face recognition.

    PubMed

    Luo, Yong; Guan, Ye-Peng

    2015-08-01

    An uncontrolled lighting condition is one of the most critical challenges for practical face recognition applications. An enhanced facial texture illumination normalization method is put forward to resolve this challenge. An adaptive relighting algorithm is developed to improve the brightness uniformity of face images. Facial texture is extracted by using an illumination estimation difference algorithm. An anisotropic histogram-stretching algorithm is proposed to minimize the intraclass distance of facial skin and maximize the dynamic range of facial texture distribution. Compared with the existing methods, the proposed method can more effectively eliminate the redundant information of facial skin and illumination. Extensive experiments show that the proposed method has superior performance in normalizing illumination variation and enhancing facial texture features for illumination-insensitive face recognition.
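
    The following is a simplified, hedged sketch of the general idea (estimate large-scale illumination, keep the texture residual, stretch its histogram); it is not the authors' relighting or anisotropic stretching algorithms, and the blur size, percentiles, and synthetic test image are assumptions.

    ```python
    import cv2
    import numpy as np

    def normalize_illumination(gray, blur_ksize=21, p_low=1, p_high=99):
        """Estimate large-scale illumination with a Gaussian blur, remove it in
        the log domain to keep facial texture, then stretch the result to
        [0, 255]. A simplified stand-in for the paper's algorithms.
        """
        img = gray.astype(np.float32) + 1.0
        illumination = cv2.GaussianBlur(img, (blur_ksize, blur_ksize), 0)
        texture = np.log(img) - np.log(illumination)
        lo, hi = np.percentile(texture, [p_low, p_high])
        stretched = np.clip((texture - lo) / (hi - lo + 1e-8), 0, 1)
        return (stretched * 255).astype(np.uint8)

    # Toy usage with random noise standing in for a grayscale face image.
    rng = np.random.default_rng(0)
    face = rng.integers(0, 256, (128, 128), dtype=np.uint8)
    normalized = normalize_illumination(face)
    ```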

  12. Facial morphology and children's categorization of facial expressions of emotions: a comparison between Asian and Caucasian faces.

    PubMed

    Gosselin, P; Larocque, C

    2000-09-01

    The effects of Asian and Caucasian facial morphology were examined by having Canadian children categorize pictures of facial expressions of basic emotions. The pictures were selected from the Japanese and Caucasian Facial Expressions of Emotion set developed by D. Matsumoto and P. Ekman (1989). Sixty children between the ages of 5 and 10 years were presented with short stories and an array of facial expressions, and were asked to point to the expression that best depicted the specific emotion experienced by the characters. The results indicated that expressions of fear and surprise were better categorized from Asian faces, whereas expressions of disgust were better categorized from Caucasian faces. These differences originated in some specific confusions between expressions.

  13. Influence of facial convexity on facial attractiveness in Japanese.

    PubMed

    Ioi, H; Nakata, S; Nakasima, A; Counts, Al

    2007-11-01

    The purpose of this study was to assess and determine the range of the top three most-favored facial profiles for each sex from a series of profiles of varying facial convexity, and to evaluate the clinically acceptable facial profiles for Japanese adults. Questionnaire-based study. Silhouettes of average male and female profiles were constructed from the profiles of 30 Japanese males and females with normal occlusions. Chin positions were protruded or retruded by 2, 4, 6, 8 and 10 degrees from the average profile. Forty-one orthodontists and 50 dental students were asked to select the three most-favored profiles for each sex, and they were also asked to indicate whether they would seek surgical orthodontic treatment if that image represented their own profile. For males, both the orthodontists and dental students chose the average profile as the most-favored profile. For females, both the orthodontists and dental students chose a slightly more retruded chin position as the most-favored profile. Japanese raters tended to choose class II profiles as more acceptable than class III profiles for both males and females. These findings suggest that Japanese patients with class III profiles tend to seek surgical orthodontic treatment more often.

  14. The fractal characteristic of facial anthropometric data for developing PCA fit test panels for youth born in central China.

    PubMed

    Yang, Lei; Wei, Ran; Shen, Henggen

    2017-01-01

    New principal component analysis (PCA) respirator fit test panels have been developed for current American and Chinese civilian workers based on anthropometric surveys. The PCA panels used the first two principal components (PCs) obtained from a set of 10 facial dimensions. Although the PCA panels for American and Chinese subjects adopted the bivariate framework with two PCs, the number of PCs retained in the PCA differed between Chinese and American subjects. For the Chinese youth group, the third PC should be retained in the PCA for developing new fit test panels. In this article, an additional number label (ANL) is used to account for the third PC when the first two PCs are used to construct the PCA half-facepiece respirator fit test panel for the Chinese group. The three-dimensional box-counting method is proposed to estimate the ANLs by calculating fractal dimensions of the facial anthropometric data of the Chinese youth. The linear regression R² values over the scale-free range are all above 0.960, which demonstrates that the facial anthropometric data of the Chinese youth have a fractal characteristic. Youth subjects born in Henan province have an ANL of 2.002, which is lower than that of the composite facial anthropometric data of Chinese subjects born in many provinces. Hence, Henan youth subjects have a self-similar facial anthropometric characteristic and should use this particular ANL (2.002) as an important tool alongside the PCA panel. The ANL method proposed in this article not only provides a new methodology for quantifying the characteristics of facial anthropometric dimensions for any ethnic/racial group, but also extends the scope of PCA panel studies to higher dimensions.
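
    A minimal box-counting sketch is shown below: occupied boxes are counted at several scales and the fractal dimension is estimated from the slope of the log-log fit. The random point cloud and the number of scales are illustrative assumptions, not the survey data.

    ```python
    import numpy as np

    def box_counting_dimension(points, n_scales=6):
        """Estimate the fractal (box-counting) dimension of a 3-D point set.

        points: array of shape (n, 3). The slope of log(box count) versus
        log(1 / box size) over the scaling range approximates the dimension.
        """
        pts = (points - points.min(axis=0)) / (np.ptp(points, axis=0) + 1e-12)
        sizes = 1.0 / 2 ** np.arange(1, n_scales + 1)
        counts = [len(np.unique(np.floor(pts / s), axis=0)) for s in sizes]
        slope, _ = np.polyfit(np.log(1.0 / sizes), np.log(counts), 1)
        return slope

    # Toy usage with random "facial measurement" points (illustrative only).
    rng = np.random.default_rng(0)
    print(box_counting_dimension(rng.random((500, 3))))
    ```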

  15. Facial paralysis due to an occult parotid abscess.

    PubMed

    Orhan, Kadir Serkan; Demirel, Tayfun; Kocasoy-Orhan, Elif; Yenigül, Kubilay

    2008-01-01

    Facial paralysis associated with benign diseases of the parotid gland is very rare. It has been reported in approximately 16 cases of acute suppurative parotitis or parotid abscess. We present a 45-year-old woman who developed facial paralysis secondary to an occult parotid abscess. Initially, there was no facial paralysis and the signs and symptoms were suggestive of acute parotitis, for which medical treatment was initiated. Three days later, left-sided facial palsy of HB (House-Brackmann) grade 5 developed. Ultrasonography revealed a pretragal, hypoechoic mass, 10x8 mm in size, causing inflammation in the surrounding tissue. Fine needle aspiration biopsy obtained from the mass revealed polymorphonuclear leukocytes and lymphocytes. No malignant cells were observed. The lesion was diagnosed as an occult parotid abscess. After a week, the mass disappeared and facial paralysis improved to HB grade 4. At the end of the first month, facial paralysis improved to HB grade 1. At three months, facial nerve function was nearly normal.

  16. Electrical and transcranial magnetic stimulation of the facial nerve: diagnostic relevance in acute isolated facial nerve palsy.

    PubMed

    Happe, Svenja; Bunten, Sabine

    2012-01-01

    Unilateral facial weakness is common. Transcranial magnetic stimulation (TMS) allows identification of a conduction failure at the level of the canalicular portion of the facial nerve and may help to confirm the diagnosis. We retrospectively analyzed 216 patients with the diagnosis of peripheral facial palsy. The electrophysiological investigations included the blink reflex, preauricular electrical stimulation and the response to TMS at the labyrinthine part of the canalicular portion of the facial nerve within 3 days after symptom onset. A similar reduction or loss of the TMS amplitude (p < 0.005) on the affected side was seen in each patient group. Of the 216 patients (107 female, mean age 49.7 ± 18.0 years), 193 were diagnosed with Bell's palsy. Test results of the remaining patients led to the diagnosis of infectious [including herpes simplex, varicella zoster infection and borreliosis (n = 13)] and noninfectious [including diabetes and neoplasm (n = 10)] etiology. A conduction block in TMS supports the diagnosis of peripheral facial palsy without being specific for Bell's palsy. These data shed light on the TMS-based diagnosis of peripheral facial palsy and its ability to localize the site of lesion within the fallopian canal regardless of the underlying pathology. Copyright © 2012 S. Karger AG, Basel.

  17. Reverse Correlating Love: Highly Passionate Women Idealize Their Partner’s Facial Appearance

    PubMed Central

    Gunaydin, Gul; DeLong, Jordan E.

    2015-01-01

    A defining feature of passionate love is idealization—evaluating romantic partners in an overly favorable light. Although passionate love can be expected to color how favorably individuals represent their partner in their mind, little is known about how passionate love is linked with visual representations of the partner. Using reverse correlation techniques for the first time to study partner representations, the present study investigated whether women who are passionately in love represent their partner’s facial appearance more favorably than individuals who are less passionately in love. In a within-participants design, heterosexual women completed two forced-choice classification tasks, one for their romantic partner and one for a male acquaintance, and a measure of passionate love. In each classification task, participants saw two faces superimposed with noise and selected the face that most resembled their partner (or an acquaintance). Classification images for each of high passion and low passion groups were calculated by averaging across noise patterns selected as resembling the partner or the acquaintance and superimposing the averaged noise on an average male face. A separate group of women evaluated the classification images on attractiveness, trustworthiness, and competence. Results showed that women who feel high (vs. low) passionate love toward their partner tend to represent his face as more attractive and trustworthy, even when controlling for familiarity effects using the acquaintance representation. Using an innovative method to study partner representations, these findings extend our understanding of cognitive processes in romantic relationships. PMID:25806540
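
    As an illustration of the reverse-correlation step (not the study's exact procedure), the sketch below averages the noise fields of the selected trials and superimposes the result on a base face; the array shapes, synthetic data, and blending weight are assumptions.

    ```python
    import numpy as np

    def classification_image(base_face, noise_fields, chosen, weight=0.5):
        """Reverse-correlation classification image.

        base_face: 2-D array (average face); noise_fields: (n_trials, H, W) noise
        added to the base on each trial; chosen: boolean array marking the noise
        patterns selected as resembling the partner. The mean of the selected
        noise is superimposed on the base face for later rating.
        """
        mean_noise = noise_fields[chosen].mean(axis=0)
        ci = base_face + weight * mean_noise
        return (ci - ci.min()) / (ci.max() - ci.min())   # rescale to [0, 1]

    # Toy usage with synthetic data.
    rng = np.random.default_rng(0)
    base = rng.random((128, 128))
    noise = rng.normal(size=(300, 128, 128))
    picked = rng.random(300) > 0.5
    image = classification_image(base, noise, picked)
    ```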

  18. Small vestibular schwannomas presenting with facial nerve palsy.

    PubMed

    Espahbodi, Mana; Carlson, Matthew L; Fang, Te-Yung; Thompson, Reid C; Haynes, David S

    2014-06-01

    To describe the surgical management and convalescence of two patients presenting with severe facial nerve weakness associated with small intracanalicular vestibular schwannomas (VS). Retrospective review. Two adult female patients presenting with audiovestibular symptoms and subacute facial nerve paralysis (House-Brackmann Grade IV and V). In both cases, post-contrast T1-weighted magnetic resonance imaging revealed an enhancing lesion within the internal auditory canal without lateral extension beyond the fundus. Translabyrinthine exploration demonstrated vestibular nerve origin of tumor, extrinsic to the facial nerve, and frozen section pathology confirmed schwannoma. Gross total tumor resection with VIIth cranial nerve preservation and decompression of the labyrinthine segment of the facial nerve was performed. Both patients recovered full motor function between 6 and 8 months after surgery. Although rare, small VS may cause severe facial neuropathy, mimicking the presentation of facial nerve schwannomas and other less common pathologies. In the absence of labyrinthine extension on MRI, surgical exploration is the only reliable means of establishing a diagnosis. In the case of confirmed VS, early gross total resection with facial nerve preservation and labyrinthine segment decompression may afford full motor recovery-an outcome that cannot be achieved with facial nerve grafting.

  19. Do Dynamic Compared to Static Facial Expressions of Happiness and Anger Reveal Enhanced Facial Mimicry?

    PubMed Central

    Rymarczyk, Krystyna; Żurawski, Łukasz; Jankowiak-Siuda, Kamila; Szatkowska, Iwona

    2016-01-01

    Facial mimicry is the spontaneous response to others’ facial expressions by mirroring or matching the interaction partner. Recent evidence suggested that mimicry may not be only an automatic reaction but could be dependent on many factors, including social context, type of task in which the participant is engaged, or stimulus properties (dynamic vs static presentation). In the present study, we investigated the impact of dynamic facial expression and sex differences on facial mimicry and judgment of emotional intensity. Electromyography recordings were recorded from the corrugator supercilii, zygomaticus major, and orbicularis oculi muscles during passive observation of static and dynamic images of happiness and anger. The ratings of the emotional intensity of facial expressions were also analysed. As predicted, dynamic expressions were rated as more intense than static ones. Compared to static images, dynamic displays of happiness also evoked stronger activity in the zygomaticus major and orbicularis oculi, suggesting that subjects experienced positive emotion. No muscles showed mimicry activity in response to angry faces. Moreover, we found that women exhibited greater zygomaticus major muscle activity in response to dynamic happiness stimuli than static stimuli. Our data support the hypothesis that people mimic positive emotions and confirm the importance of dynamic stimuli in some emotional processing. PMID:27390867

  20. Classification of narcotics in solid mixtures using principal component analysis and Raman spectroscopy.

    PubMed

    Ryder, Alan G

    2002-03-01

    Eighty-five solid samples consisting of illegal narcotics diluted with several different materials were analyzed by near-infrared (785 nm excitation) Raman spectroscopy. Principal Component Analysis (PCA) was employed to classify the samples according to narcotic type. The best sample discrimination was obtained by using the first derivative of the Raman spectra. Furthermore, restricting the spectral variables for PCA to 2 or 3% of the original spectral data according to the most intense peaks in the Raman spectrum of the pure narcotic resulted in a rapid discrimination method for classifying samples according to narcotic type. This method allows for the easy discrimination between cocaine, heroin, and MDMA mixtures even when the Raman spectra are complex or very similar. This approach of restricting the spectral variables also decreases the computational time by a factor of 30 (compared to the complete spectrum), making the methodology attractive for rapid automatic classification and identification of suspect materials.
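
    A hedged sketch of the described workflow follows: first-derivative spectra, variables restricted to the most intense channels of a pure-narcotic reference spectrum, then a two-component PCA. The spectra here are synthetic and the number of retained channels is an assumption approximating the 2-3% figure.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    # Synthetic stand-ins: 85 mixture spectra and one pure-narcotic reference
    # spectrum, each with 1,000 Raman shift channels (values are illustrative).
    rng = np.random.default_rng(0)
    spectra = rng.random((85, 1000))
    reference = rng.random(1000)

    # First derivative of each spectrum along the wavenumber axis.
    d_spectra = np.gradient(spectra, axis=1)

    # Keep roughly 2-3% of the variables: the channels where the pure narcotic's
    # reference spectrum is most intense, then project onto two principal components.
    keep = np.argsort(reference)[-25:]
    scores = PCA(n_components=2).fit_transform(d_spectra[:, keep])
    print(scores.shape)        # (85, 2): score plot to inspect for class clusters
    ```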

  1. Facial recognition in children after perinatal stroke.

    PubMed

    Ballantyne, A O; Trauner, D A

    1999-04-01

    To examine the effects of prenatal or perinatal stroke on the facial recognition skills of children and young adults. It was hypothesized that the nature and extent of facial recognition deficits seen in patients with early-onset lesions would be different from that seen in adults with later-onset neurologic impairment. Numerous studies with normal and neurologically impaired adults have found a right-hemisphere superiority for facial recognition. In contrast, little is known about facial recognition in children after early focal brain damage. Forty subjects had single, unilateral brain lesions from pre- or perinatal strokes (20 had left-hemisphere damage, and 20 had right-hemisphere damage), and 40 subjects were controls who were individually matched to the lesion subjects on the basis of age, sex, and socioeconomic status. Each subject was given the Short-Form of Benton's Test of Facial Recognition. Data were analyzed using the Wilcoxon matched-pairs signed-rank test and multiple regression. The lesion subjects performed significantly more poorly than did matched controls. There was no clear-cut lateralization effect, with the left-hemisphere group performing significantly more poorly than matched controls and the right-hemisphere group showing a trend toward poorer performance. Parietal lobe involvement, regardless of lesion side, adversely affected facial recognition performance in the lesion group. Results could not be accounted for by IQ differences between lesion and control groups, nor was lesion severity systematically related to facial recognition performance. Pre- or perinatal unilateral brain damage results in a subtle disturbance in facial recognition ability, independent of the side of the lesion. Parietal lobe involvement, in particular, has an adverse effect on facial recognition skills. These findings suggest that the parietal lobes may be involved in the acquisition of facial recognition ability from a very early point in brain development, but

  2. Fixation to features and neural processing of facial expressions in a gender discrimination task

    PubMed Central

    Neath, Karly N.; Itier, Roxane J.

    2017-01-01

    Early face encoding, as reflected by the N170 ERP component, is sensitive to fixation to the eyes. We investigated whether this sensitivity varies with facial expressions of emotion and whether it can also be seen on other ERP components such as the P1 and EPN. Using eye-tracking to manipulate fixation on facial features, we found the N170 to be the only eye-sensitive component, and this was true for fearful, happy and neutral faces. A different effect of fixation to features was seen for the earlier P1, which likely reflected general sensitivity to face position. An early effect of emotion (~120 ms) for happy faces was seen at occipital sites and was sustained until ~350 ms post-stimulus. For fearful faces, an early effect was seen around 80 ms, followed by a later effect appearing at ~150 ms until ~300 ms at lateral posterior sites. Results suggest that in this emotion-irrelevant gender discrimination task, processing of fearful and happy expressions occurred early and largely independently of the eye-sensitivity indexed by the N170. Processing of the two emotions involved different underlying brain networks active at different times. PMID:26277653

  3. Effects of noninvasive facial nerve stimulation in the dog middle cerebral artery occlusion model of ischemic stroke.

    PubMed

    Borsody, Mark K; Yamada, Chisa; Bielawski, Dawn; Heaton, Tamara; Castro Prado, Fernando; Garcia, Andrea; Azpiroz, Joaquín; Sacristan, Emilio

    2014-04-01

    Facial nerve stimulation has been proposed as a new treatment of ischemic stroke because autonomic components of the nerve dilate cerebral arteries and increase cerebral blood flow when activated. A noninvasive facial nerve stimulator device based on pulsed magnetic stimulation was tested in a dog middle cerebral artery occlusion model. We used an ischemic stroke dog model involving injection of autologous blood clot into the internal carotid artery that reliably embolizes to the middle cerebral artery. Thirty minutes after middle cerebral artery occlusion, the geniculate ganglion region of the facial nerve was stimulated for 5 minutes. Brain perfusion was measured using gadolinium-enhanced contrast MRI, and ATP and total phosphate levels were measured using 31P spectroscopy. Separately, a dog model of brain hemorrhage involving puncture of the intracranial internal carotid artery served as an initial examination of facial nerve stimulation safety. Facial nerve stimulation caused a significant improvement in perfusion in the hemisphere affected by ischemic stroke and a reduction in ischemic core volume in comparison to sham stimulation control. The ATP/total phosphate ratio showed a large decrease poststroke in the control group versus a normal level in the stimulation group. The same stimulation administered to dogs with brain hemorrhage did not cause hematoma enlargement. These results support the development and evaluation of a noninvasive facial nerve stimulator device as a treatment of ischemic stroke.

  4. Association study of Demodex bacteria and facial dermatoses based on DGGE technique.

    PubMed

    Zhao, YaE; Yang, Fan; Wang, RuiLing; Niu, DongLing; Mu, Xin; Yang, Rui; Hu, Li

    2017-03-01

    The role of bacteria in the facial skin lesions caused by Demodex is unclear. To shed light on this issue, we conducted a case-control study comparing cases with facial dermatoses with controls with healthy skin, using the denaturing gradient gel electrophoresis (DGGE) technique. Bacterial diversity, composition, and principal components were analyzed for Demodex-associated bacteria and the matched facial skin bacteria. Mite examination showed that all 33 cases were infected with Demodex folliculorum (D. f), whereas 16 of the 30 controls were infected with D. f and the remaining 14 controls with Demodex brevis (D. b). The diversity analysis showed that only the evenness index differed significantly between mite bacteria and matched skin bacteria in the cases. The composition analysis showed that the DGGE bands of cases and controls were assigned to 12 taxa of 4 phyla: Proteobacteria (39.37-52.78%), Firmicutes (2.7-26.77%), Actinobacteria (0-5.71%), and Bacteroidetes (0-2.08%). In cases, the proportion of Staphylococcus within Firmicutes was significantly higher than in D. f controls and D. b controls, while the proportion of Sphingomonas within Proteobacteria was significantly lower than in D. f controls. The between-group analysis (BGA) showed that the banding patterns clustered into three groups: D. f cases, D. f controls, and D. b controls. Our study suggests that the bacteria carried by Demodex likely originate from the matched facial skin bacteria. Proteobacteria and Firmicutes are the two main taxa. The increase in Staphylococcus and decrease in Sphingomonas might be associated with the development of facial dermatoses.
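
    The diversity measures mentioned above (for example, the evenness index) are simple functions of the relative band intensities in a DGGE lane. The sketch below is a minimal illustration of that computation with made-up intensities, not the study's actual analysis pipeline; the Shannon index and Pielou's evenness are used here as representative diversity and evenness measures.

        # A minimal sketch, not the study's analysis pipeline: compute the Shannon index and
        # Pielou's evenness from DGGE band intensities for one sample. The intensities below are
        # hypothetical; in practice they come from densitometry of the gel lanes.
        import numpy as np

        def shannon_and_evenness(band_intensities):
            """Return (Shannon H', Pielou's J') for one DGGE lane."""
            x = np.asarray(band_intensities, dtype=float)
            p = x / x.sum()                      # relative abundance of each band
            p = p[p > 0]
            H = -np.sum(p * np.log(p))           # Shannon diversity
            J = H / np.log(len(p))               # evenness: H' scaled by its maximum ln(S)
            return H, J

        print(shannon_and_evenness([12.0, 3.5, 8.1, 0.9, 5.4]))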

  5. Facial palsy after dental procedures - Is viral reactivation responsible?

    PubMed

    Gaudin, Robert A; Remenschneider, Aaron K; Phillips, Katie; Knipfer, Christian; Smeets, Ralf; Heiland, Max; Hadlock, Tessa A

    2017-01-01

    Herpes labialis viral reactivation has been reported following dental procedures, but the incidence, characteristics and outcomes of delayed peripheral facial nerve palsy following dental work are poorly understood. Herein we describe the unique features of delayed facial paresis following dental procedures. An institutional retrospective review was performed to identify patients diagnosed with delayed facial nerve palsy within 30 days of dental manipulation. Demographics, prodromal signs and symptoms, initial medical treatment and outcomes were assessed. Of 2471 patients with facial palsy, 16 (0.7%) had delayed facial paresis following ipsilateral dental procedures. Average age at presentation was 44 years and 56% (9/16) were female. Clinical evaluation was consistent with Bell's palsy in 14 patients (88%) and Ramsay-Hunt syndrome in 2 patients (12%). Patients developed facial paresis an average of 3.9 days after the dental procedure, with all individuals developing a flaccid paralysis (House-Brackmann (HB) grade VI) during the acute stage. Half of the patients developed persistent facial palsy in the form of non-flaccid facial paralysis (HB III-IV). Facial palsy, like herpes labialis, can occur in the days following dental procedures and may also be related to viral reactivation. In this small cohort, long-term facial outcomes appear worse than for spontaneous Bell's palsy. Copyright © 2016 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.

  6. Regional facial asymmetries and attractiveness of the face.

    PubMed

    Kaipainen, Anu E; Sieber, Kevin R; Nada, Rania M; Maal, Thomas J; Katsaros, Christos; Fudalej, Piotr S

    2016-12-01

    Facial attractiveness is an important factor in our social interactions. It is still not entirely clear which factors influence the attractiveness of a face; facial asymmetry appears to play a certain role. The aim of the present study was to assess the association between facial attractiveness and regional facial asymmetries evaluated on three-dimensional (3D) images. 3D facial images of 59 young adult patients (23 male, 36 female; age 16-25 years), taken before orthodontic treatment, were evaluated for asymmetry. The same 3D images were presented to 12 lay judges who rated the attractiveness of each subject on a 100 mm visual analogue scale. Reliability of the method was assessed with Bland-Altman plots and Cronbach's alpha coefficient. All subjects showed a certain amount of asymmetry in all regions of the face; most asymmetry was found in the chin and cheek areas and less in the lip, nose and forehead areas. No statistically significant differences in regional facial asymmetries were found between male and female subjects (P > 0.05). Regression analyses demonstrated that the judgement of facial attractiveness was not influenced by absolute regional facial asymmetries when gender, facial width-to-height ratio and type of malocclusion were controlled for (P > 0.05). A potential limitation of the study is that other biologic and cultural factors influencing the perception of facial attractiveness were not controlled for. A small amount of asymmetry was present in all subjects assessed in this study, and asymmetry of this magnitude may not influence the assessment of facial attractiveness. © The Author 2015. Published by Oxford University Press on behalf of the European Orthodontic Society. All rights reserved. For permissions, please email: journals.permissions@oup.com.

  7. Facial bacterial infections: folliculitis.

    PubMed

    Laureano, Ana Cristina; Schwartz, Robert A; Cohen, Philip J

    2014-01-01

    Facial bacterial infections are most commonly caused by infections of the hair follicles. Wherever pilosebaceous units are found, folliculitis can occur, with the most frequent bacterial culprit being Staphylococcus aureus. We review the different origins of facial folliculitis, distinguishing bacterial forms from other infectious and non-infectious mimickers. We also distinguish folliculitis from pseudofolliculitis and perifolliculitis. Clinical features, etiology, pathology, and management options are discussed. Copyright © 2014. Published by Elsevier Inc.

  8. Combat-related facial burns: analysis of strategic pitfalls.

    PubMed

    Johnson, Benjamin W; Madson, Andrew Q; Bong-Thakur, Sarah; Tucker, David; Hale, Robert G; Chan, Rodney K

    2015-01-01

    Burns constitute approximately 10% of all combat-related injuries to the head and neck region. We postulated that the combat environment presents unique challenges not commonly encountered among civilian injuries. The purpose of the present study was to determine the features commonly seen among combat facial burns that result in therapeutic challenges and might contribute to undesired outcomes. The present study was a retrospective study performed using a query of the Burn Registry at the US Army Institute of Surgical Research Burn Center for all active duty facial burn admissions from October 2001 to February 2011. The demographic data, total body surface area of the burn, facial region body surface area involvement, and dates of injury, first operation, and first facial operation were tabulated and compared. A subset analysis of severe facial burns, defined by a greater than 7% facial region body surface area, was performed with a thorough medical record review to determine the presence of associated injuries. Of all the military burn injuries, 67.1% (n = 558) involved the face. Of these, 81.3% (n = 454) were combat related. The combat facial burns had a mean total body surface area of 21.4% and a mean facial region body surface area of 3.2%. The interval from the date of injury to the first operative encounter was 6.6 ± 0.8 days, and to the first facial operation 19.8 ± 2.0 days. A subset analysis of the severe facial burns revealed that the first facial operation and the definitive coverage operation were performed at 13.45 ± 2.6 days and 31.9 ± 4.1 days after the injury, respectively. The mortality rate for this subset of patients was 32% (n = 10), with a high rate of associated inhalational injuries (61%, n = 19), limb amputations (29%, n = 9), and facial allograft usage (48%, n = 15), and a mean facial autograft thickness of 10.5/1,000th in. Combat-related facial burns present multiple challenges, which can contribute to suboptimal long

  9. Identification of facial shape by applying golden ratio to the facial measurements: an interracial study in malaysian population.

    PubMed

    Packiriswamy, Vasanthakumar; Kumar, Pramod; Rao, Mohandas

    2012-12-01

    The "golden ratio" is considered a universal facial aesthetic standard. Researchers have suggested that deviation from the golden ratio can result in the development of facial abnormalities. This study was designed to examine facial morphology and to identify individuals with normal, short, and long faces. We studied 300 subjects of Malaysian nationality aged 18-28 years of Chinese, Indian, and Malay extraction. The parameters measured were physiognomical facial height and width of the face, and the physiognomical facial index was calculated. Face shape was classified based on the golden ratio. The independent t test was used to test differences between sexes and among the races. The mean values of the measurements and the index showed significant sexual and interracial differences. Of the 300 subjects, face shape was normal in 60 subjects, short in 224 subjects, and long in 16 subjects. As anticipated, the measurements varied according to gender and race. Only 60 subjects had a regular face shape; the remaining 240 subjects had an irregular (short or long) face shape. Since individuals with short and long face shapes may be at risk of developing various disorders, knowledge of facial shapes in a given population is important for early diagnostic and treatment procedures.
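
    As a rough illustration of the classification rule described above, the following sketch computes the height-to-width ratio of a face and labels it normal, short, or long relative to the golden ratio. The ±5% tolerance band and the example measurements are assumptions for illustration; the exact cut-offs used in the study are not stated here.

        # A minimal sketch (not the authors' protocol): classify face shape by comparing the
        # physiognomical height-to-width ratio with the golden ratio. The +/- 5% tolerance band
        # is a hypothetical choice.

        GOLDEN_RATIO = 1.618
        TOLERANCE = 0.05  # assumed band around the golden ratio

        def classify_face_shape(facial_height_mm: float, facial_width_mm: float) -> str:
            """Return 'normal', 'short', or 'long' from two physiognomical measurements."""
            ratio = facial_height_mm / facial_width_mm
            if abs(ratio - GOLDEN_RATIO) <= GOLDEN_RATIO * TOLERANCE:
                return "normal"
            return "short" if ratio < GOLDEN_RATIO else "long"

        # Example: a face 118 mm high and 80 mm wide gives a ratio of about 1.48 -> 'short'.
        print(classify_face_shape(118, 80))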

  10. Hybrid Feature Extraction-based Approach for Facial Parts Representation and Recognition

    NASA Astrophysics Data System (ADS)

    Rouabhia, C.; Tebbikh, H.

    2008-06-01

    Face recognition is a specialized image-processing task that has attracted considerable attention in computer vision. In this article, we develop a new facial recognition system from video sequence images, dedicated to identifying persons whose faces are partly occluded. The system is based on a hybrid image feature extraction technique called ACPDL2D (Rouabhia et al. 2007), which combines two-dimensional principal component analysis and two-dimensional linear discriminant analysis with a neural network. We performed the feature extraction task on the eyes and nose images separately, and then a Multi-Layer Perceptron classifier was used. Compared to the whole face, the simulation results favor the facial parts in terms of memory capacity and recognition rate (99.41% for the eyes part, 98.16% for the nose part, and 97.25% for the whole face).
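
    To make the feature-extraction idea concrete, the sketch below applies plain 2D-PCA to image patches and feeds the projected features to a multi-layer perceptron. It is a minimal, generic version of that kind of pipeline, not the authors' ACPDL2D implementation (the 2D-LDA stage is omitted), and the patch sizes, component count, and random data are illustrative assumptions.

        # A minimal sketch, not the ACPDL2D pipeline itself: 2D-PCA feature extraction on
        # eye-region image patches followed by a multi-layer perceptron classifier.
        import numpy as np
        from sklearn.neural_network import MLPClassifier

        rng = np.random.default_rng(0)
        n, h, w, d = 120, 32, 48, 8               # 120 patches of 32x48 pixels, keep 8 projection axes
        patches = rng.random((n, h, w))           # stand-in for eye/nose image patches
        labels = rng.integers(0, 10, size=n)      # stand-in identities

        # 2D-PCA: eigenvectors of the image covariance matrix G = mean_i (A_i - Abar)^T (A_i - Abar)
        mean_img = patches.mean(axis=0)
        centered = patches - mean_img
        G = np.einsum('nij,nik->jk', centered, centered) / n   # (w, w) image covariance
        eigvals, eigvecs = np.linalg.eigh(G)
        X = eigvecs[:, ::-1][:, :d]                            # top-d eigenvectors, shape (w, d)

        features = (patches @ X).reshape(n, -1)                # project each image: A_i X, then flatten

        clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
        clf.fit(features, labels)
        print("training accuracy:", clf.score(features, labels))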

  11. A small-world network model of facial emotion recognition.

    PubMed

    Takehara, Takuma; Ochiai, Fumio; Suzuki, Naoto

    2016-01-01

    Various models have been proposed to increase understanding of the cognitive basis of facial emotions. Despite those efforts, interactions between facial emotions have received minimal attention. If collective behaviours relating to each facial emotion in the comprehensive cognitive system could be assumed, specific facial emotion relationship patterns might emerge. In this study, we demonstrate that the framework of complex networks can effectively capture those patterns. We generated 81 facial emotion images (6 prototypes and 75 morphs) and then asked participants to rate degrees of similarity in 3240 facial emotion pairs in a paired comparison task. A facial emotion network constructed on the basis of similarity clearly forms a small-world network, which features an extremely short average network distance and close connectivity. Further, even if two facial emotions have opposing valences, they are connected within only two steps. In addition, we show that intermediary morphs are crucial for maintaining full network integration, whereas prototypes are not at all important. These results suggest the existence of collective behaviours in the cognitive systems of facial emotions and also describe why people can efficiently recognize facial emotions in terms of information transmission and propagation. For comparison, we constructed three simulated networks: one based on the categorical model, one based on the dimensional model, and one random network. The results reveal that small-world connectivity in facial emotion networks is clearly different from that of those networks, suggesting that a small-world network is the most suitable model for capturing the cognitive basis of facial emotions.
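
    A minimal sketch of the small-world check described above: build a similarity-thresholded graph over the 81 facial emotion images, then compare its average path length and clustering coefficient with a random graph of the same size. The similarity values and the 0.8 threshold are hypothetical stand-ins for the participants' ratings, not the study's data.

        # A minimal sketch, assuming hypothetical similarity ratings: build an emotion-similarity
        # graph, then compare its average path length and clustering with a same-size random graph,
        # the usual small-world check.
        import numpy as np
        import networkx as nx

        rng = np.random.default_rng(1)
        n_faces = 81
        similarity = rng.random((n_faces, n_faces))
        similarity = (similarity + similarity.T) / 2          # symmetric pairwise similarity ratings

        threshold = 0.8                                        # assumed cut-off: link highly similar pairs
        G = nx.Graph()
        G.add_nodes_from(range(n_faces))
        for i in range(n_faces):
            for j in range(i + 1, n_faces):
                if similarity[i, j] >= threshold:
                    G.add_edge(i, j)

        if nx.is_connected(G):
            L = nx.average_shortest_path_length(G)
            C = nx.average_clustering(G)
            # Reference random graph with the same number of nodes and edges
            R = nx.gnm_random_graph(n_faces, G.number_of_edges(), seed=1)
            L_rand = nx.average_shortest_path_length(R) if nx.is_connected(R) else float('nan')
            C_rand = nx.average_clustering(R)
            # Small-world signature: C well above C_rand while L stays close to L_rand
            print(f"L={L:.2f} (random {L_rand:.2f}), C={C:.2f} (random {C_rand:.2f})")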

  12. Iatrogenic facial palsy: the cost.

    PubMed

    Pulec, J L

    1996-11-01

    The cost of iatrogenic facial paralysis can be high. Ways to avoid facial nerve injury during surgery and, should it occur, ways to minimize the disability and cost are discussed. These include adequate preparation and training by the surgeon, the exercise of sound judgment, high moral standards on the part of the surgeon, adequate preoperative diagnosis and surgical instrumentation, and thorough preoperative oral and written informed consent. Should facial nerve injury occur, immediate consultation and reparative decompression, anastomosis or grafting should be performed to obtain the best ultimate result. The value of prompt, competent, sympathetic and continuing concern offered by the surgeon to the patient cannot be overemphasized.

  13. [Presurgical orthodontics for facial asymmetry].

    PubMed

    Labarrère, H

    2003-03-01

    As with the treatment of all facial deformities, orthodontic pre-surgical preparation for facial asymmetry should aim at correcting severe occlusal discrepancies, not solely on the basis of a narrow occlusal analysis but also in a way that will not disturb the proposed surgical protocol. In addition, facial asymmetries require specific adjustments that are difficult to derive and to apply because of the inherently atypical morphological orientation of both the alveolar and the basal bony support. Three treated cases illustrate different solutions to the problems posed by pathological torque: this torque must be considered with respect to the proposed surgical changes, within the framework of their limitations and their possible contra-indications.

  14. Facial measurements for frame design.

    PubMed

    Tang, C Y; Tang, N; Stewart, M C

    1998-04-01

    Anthropometric data for the purpose of spectacle frame design are scarce in the literature. Definitions of the facial features to be measured with existing systems of facial measurement are often not specific enough for frame design and manufacturing. Currently, for individual frame design, experienced personnel collect data with facial rules or instruments. A new measuring system is proposed, making use of a template in the form of a spectacle frame. Upon fitting the template onto a subject, most of the measuring references can be defined. Such a system can be administered by less extensively trained personnel and can be used for research covering a larger population.

  15. Pattern of facial palsy in a typical Nigerian specialist hospital.

    PubMed

    Lamina, S; Hanif, S

    2012-12-01

    Data on the incidence of facial palsy are generally lacking in Nigeria. The aim was to assess the six-year incidence of facial palsy in Murtala Muhammed Specialist Hospital (MMSH), Kano, Nigeria. The records of patients diagnosed with facial problems between January 2000 and December 2005 were scrutinized. Data on diagnosis, age, sex, side affected, occupation and causes were obtained. A total of 698 patients with facial problems were recorded. Five hundred and ninety-four (85%) were diagnosed with facial palsy. Among the diagnosed cases, males (56.2%) had a higher incidence than females; the 20-34 years age group (40.3%) had the greatest prevalence; the commonest cause of facial palsy was idiopathic (39.1%), and it was most common among businessmen (31.6%). Right-sided facial palsy (52.2%) was predominant. The incidence of facial palsy was highest in 2003 (25.3%) and decreased from 2004. It was concluded that the incidence of facial palsy was high and that Bell's palsy remains the most common cause of facial (nerve) paralysis.

  16. Facial Palsy Following Embolization of a Juvenile Nasopharyngeal Angiofibroma.

    PubMed

    Tawfik, Kareem O; Harmon, Jeffrey J; Walters, Zoe; Samy, Ravi; de Alarcon, Alessandro; Stevens, Shawn M; Abruzzo, Todd

    2018-05-01

    To describe a case of the rare complication of facial palsy following preoperative embolization of a juvenile nasopharyngeal angiofibroma (JNA), and to illustrate the vascular supply to the facial nerve and thereby highlight the etiology of the facial nerve palsy. The angiography and magnetic resonance (MR) imaging of a case of facial palsy following preoperative embolization of a JNA are reviewed. A 13-year-old male developed left-sided facial palsy following preoperative embolization of a left-sided JNA. Evaluation of MR imaging studies and retrospective review of the angiographic data suggested errant embolization of particles into the petrosquamosal branch of the middle meningeal artery (MMA), a branch of the internal maxillary artery (IMA), through collateral vasculature. The petrosquamosal branch of the MMA is the predominant blood supply to the facial nerve in the facial canal. The facial palsy resolved, since complete infarction of the nerve was likely prevented by collateral blood supply from the stylomastoid artery. Facial palsy is a potential complication of embolization of the IMA, a branch of the external carotid artery (ECA). This is secondary to ischemia of the facial nerve due to embolization of its vascular supply. Clinicians should be aware of this potential complication and counsel patients accordingly prior to embolization for JNA.

  17. Virtual transplantation in designing a facial prosthesis for extensive maxillofacial defects that cross the facial midline using computer-assisted technology.

    PubMed

    Feng, Zhi-hong; Dong, Yan; Bai, Shi-zhu; Wu, Guo-feng; Bi, Yun-peng; Wang, Bo; Zhao, Yi-min

    2010-01-01

    The aim of this article was to demonstrate a novel approach to designing facial prostheses, using the transplantation concept and computer-assisted technology, for extensive maxillofacial defects that cross the facial midline. The three-dimensional (3D) facial surface images of a patient and his relative were reconstructed using data obtained through optical scanning. Based on these images, the corresponding portion of the relative's face was transplanted onto the region of the patient's face where the defect was located, which could not be rehabilitated using mirror projection, to design the virtual facial prosthesis without the eye. A 3D model of an artificial eye that mimicked the patient's remaining one was developed, transplanted, and fit onto the virtual prosthesis. A personalized retention structure for the artificial eye was designed on the virtual facial prosthesis. The wax prosthesis was manufactured through rapid prototyping, and the definitive silicone prosthesis was completed. The size, shape, and cosmetic appearance of the prosthesis were satisfactory and matched the defect area well. The patient's facial appearance was recovered perfectly with the prosthesis, as determined through clinical evaluation. The optical 3D imaging and computer-aided design/computer-assisted manufacturing system used in this study can design and fabricate facial prostheses more precisely than conventional manual sculpting techniques. The discomfort generally associated with such conventional methods was greatly decreased. The virtual transplantation used to design the facial prosthesis for the maxillofacial defect that crossed the facial midline, and the development of the retention structure for the eye, were both feasible.

  18. Nerve crush but not displacement-induced stretch of the intra-arachnoidal facial nerve promotes facial palsy after cerebellopontine angle surgery.

    PubMed

    Bendella, Habib; Brackmann, Derald E; Goldbrunner, Roland; Angelov, Doychin N

    2016-10-01

    Little is known about the reasons for the occurrence of facial nerve palsy after removal of cerebellopontine angle tumors. Since the intra-arachnoidal portion of the facial nerve is considered to be so vulnerable that even the slightest tension or pinch may result in ruptured axons, we tested whether a graded stretch or a controlled crush would affect the postoperative motor performance of the facial (vibrissal) muscles in rats. Thirty Wistar rats, divided into five groups (one intact control group and four with facial nerve lesions), were used. Under inhalation anesthesia, the occipital squama was opened, the cerebellum gently retracted to the left, and the intra-arachnoidal segment of the right facial nerve exposed. A mechanical displacement of the brainstem by 1 or 3 mm toward the midline, or an electromagnet-controlled crush of the facial nerve with tweezers at a closure velocity of 50 or 100 mm/s, was applied. On the next day, whisking motor performance was determined by video-based motion analysis. Even the larger (3 mm) mechanical displacement of the brainstem had no harmful effect: the amplitude of the vibrissal whisks was in the normal range of 50°-60°. On the other hand, even the light nerve crush (50 mm/s) injured the facial nerve and resulted in paralyzed vibrissal muscles (amplitude of 10°-15°). We conclude that, contrary to the generally acknowledged assumptions, it is the nerve crush, and not the displacement-induced stretching of the intra-arachnoidal facial trunk, that promotes facial palsy after cerebellopontine angle surgery in rats.

  19. 3D-Ultrasonography for evaluation of facial muscles in patients with chronic facial palsy or defective healing: a pilot study.

    PubMed

    Volk, Gerd Fabian; Pohlmann, Martin; Finkensieper, Mira; Chalmers, Heather J; Guntinas-Lichius, Orlando

    2014-01-01

    While standardized methods are established to examine the pathway from the motor cortex to the peripheral nerve in patients with facial palsy, a reliable method to evaluate the facial muscles for therapy planning in patients with long-term palsy is lacking. A 3D ultrasonographic (US) acquisition system, driven by a motorized linear mover combined with a conventional US probe, was used to acquire 3D data sets of several facial muscles on both sides of the face in a healthy subject and in seven patients with different types of unilateral degenerative facial nerve lesions. The US results were correlated with the duration of palsy and the electromyography results. Consistent 3D US-based volumetry through bilateral comparison was feasible for parts of the frontalis, orbicularis oculi, depressor anguli oris, depressor labii inferioris, and mentalis muscles. With the exception of the frontal muscle, the facial muscle volumes were much smaller on the palsy side (minimum: 3% for the depressor labii inferioris muscle) than on the healthy side in patients with severe facial nerve lesions. In contrast, the frontal muscles did not show a side difference. In the two patients with defective healing after spontaneous regeneration, a decrease in muscle volume was not seen. Moreover, synkinesis and hyperkinesis were associated with muscle hypertrophy on the palsy side compared with the healthy side. 3D ultrasonography seems to be a promising tool for regional and quantitative evaluation of the facial muscles in patients with facial palsy receiving facial reconstructive surgery or conservative treatment.

  20. 3D-Ultrasonography for evaluation of facial muscles in patients with chronic facial palsy or defective healing: a pilot study

    PubMed Central

    2014-01-01

    Background While standardized methods are established to examine the pathway from the motor cortex to the peripheral nerve in patients with facial palsy, a reliable method to evaluate the facial muscles for therapy planning in patients with long-term palsy is lacking. Methods A 3D ultrasonographic (US) acquisition system, driven by a motorized linear mover combined with a conventional US probe, was used to acquire 3D data sets of several facial muscles on both sides of the face in a healthy subject and in seven patients with different types of unilateral degenerative facial nerve lesions. Results The US results were correlated with the duration of palsy and the electromyography results. Consistent 3D US-based volumetry through bilateral comparison was feasible for parts of the frontalis, orbicularis oculi, depressor anguli oris, depressor labii inferioris, and mentalis muscles. With the exception of the frontal muscle, the facial muscle volumes were much smaller on the palsy side (minimum: 3% for the depressor labii inferioris muscle) than on the healthy side in patients with severe facial nerve lesions. In contrast, the frontal muscles did not show a side difference. In the two patients with defective healing after spontaneous regeneration, a decrease in muscle volume was not seen. Moreover, synkinesis and hyperkinesis were associated with muscle hypertrophy on the palsy side compared with the healthy side. Conclusion 3D ultrasonography seems to be a promising tool for regional and quantitative evaluation of the facial muscles in patients with facial palsy receiving facial reconstructive surgery or conservative treatment. PMID:24782657

  1. A unified probabilistic framework for spontaneous facial action modeling and understanding.

    PubMed

    Tong, Yan; Chen, Jixu; Ji, Qiang

    2010-02-01

    Facial expression is a natural and powerful means of human communication. Recognizing spontaneous facial actions, however, is very challenging due to subtle facial deformation, frequent head movements, and ambiguous and uncertain facial motion measurements. Because of these challenges, current research in facial expression recognition is limited to posed expressions and often in frontal view. A spontaneous facial expression is characterized by rigid head movements and nonrigid facial muscular movements. More importantly, it is the coherent and consistent spatiotemporal interactions among rigid and nonrigid facial motions that produce a meaningful facial expression. Recognizing this fact, we introduce a unified probabilistic facial action model based on the Dynamic Bayesian network (DBN) to simultaneously and coherently represent rigid and nonrigid facial motions, their spatiotemporal dependencies, and their image measurements. Advanced machine learning methods are introduced to learn the model based on both training data and subjective prior knowledge. Given the model and the measurements of facial motions, facial action recognition is accomplished through probabilistic inference by systematically integrating visual measurements with the facial action model. Experiments show that compared to the state-of-the-art techniques, the proposed system yields significant improvements in recognizing both rigid and nonrigid facial motions, especially for spontaneous facial expressions.
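
    The following sketch is not the authors' DBN, but a minimal forward-filtering example in the same spirit: a temporal probabilistic model is combined with noisy per-frame measurements to infer whether a single facial action unit is active. The transition and detection probabilities are illustrative assumptions.

        # Not the authors' model: a minimal forward-filtering sketch tracking the probability
        # that one facial action unit (AU) is active from noisy frame-by-frame detector outputs.
        import numpy as np

        # P(state_t | state_{t-1}): rows = previous state (inactive, active)
        transition = np.array([[0.95, 0.05],
                               [0.10, 0.90]])
        # P(detector says "active" | true state): assumed false-positive and true-positive rates
        p_detect = np.array([0.15, 0.85])

        def filter_au(detections, prior=np.array([0.9, 0.1])):
            """Return P(AU active | detections up to frame t) for each frame t."""
            belief = prior.copy()
            out = []
            for d in detections:                  # d is 1 if the per-frame detector fired, else 0
                belief = transition.T @ belief    # predict: propagate through the temporal model
                like = p_detect if d else 1 - p_detect
                belief = like * belief            # update with the image measurement
                belief /= belief.sum()
                out.append(belief[1])
            return out

        print([round(p, 2) for p in filter_au([0, 1, 1, 1, 0, 0])])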

  2. [The history of facial paralysis].

    PubMed

    Glicenstein, J

    2015-10-01

    Facial paralysis has been a recognized condition since Antiquity, and was mentioned by Hippocrates. In the 17th century, in 1687, the Dutch physician Stalpart van der Wiel rendered a detailed observation. It was, however, Charles Bell who, in 1821, provided the description that specified the role of the facial nerve. Facial nerve surgery began at the end of the 19th century. Three different techniques were used successively: nerve anastomosis (XI-VII, Ballance 1895; XII-VII, Korte 1903), myoplasties (Lexer 1908), and suspensions (Stein 1913). Bunnell successfully accomplished the first direct facial nerve repair in the temporal bone in 1927, and in 1932 Ballance and Duel experimented with nerve grafts. Thanks to progress in microsurgical techniques, the first faciofacial anastomosis was realized in 1970 (Smith, Scaramella), and an account of the first microneurovascular muscle transfer was published in 1976 by Harii. Treatment of eyelid paralysis was at the origin of numerous operations beginning in the 1960s, including the palpebral spring (Morel Fatio 1962), the silicone sling (Arion 1972), upper-lid loading with a gold plate (Illig 1968), magnets (Muhlbauer 1973) and trans-facial nerve grafts (Anderl 1973). By the end of the 20th century, surgeons had at their disposal a wide range of valid techniques for facial nerve surgery, including modernized versions of older techniques. Copyright © 2015 Elsevier Masson SAS. All rights reserved.

  3. Neurofibromatosis of the head and neck: classification and surgical management.

    PubMed

    Latham, Kerry; Buchanan, Edward P; Suver, Daniel; Gruss, Joseph S

    2015-03-01

    Neurofibromatosis is common and presents with variable penetrance and manifestations in one in 2500 to one in 3000 live births. The management of these patients is often multidisciplinary because of the complexity of the disease. Plastic surgeons are frequently involved in the surgical management of patients with head and neck involvement. A 20-year retrospective review of patients treated surgically for head and neck neurofibroma was performed. Patients were identified according to International Classification of Diseases, Ninth Revision codes for neurofibromatosis and from the senior author's database. A total of 59 patients with head and neck neurofibroma were identified. These patients were categorized into five distinct, but not exclusive, categories to assist with diagnosis and surgical management: plexiform, cranioorbital, facial, neck, and parotid/auricular neurofibromatosis. A surgical classification system and the clinical characteristics of head and neck neurofibromatosis are presented to assist practitioners with diagnosis and surgical management of this complex disease. The surgical management of the cranioorbital type is discussed in detail in 24 patients. The importance and safety of facial nerve dissection and preservation using intraoperative nerve monitoring were validated in 16 dissections in 15 patients. Massive involvement of the neck extending from the skull base to the mediastinum, frequently considered inoperable, has been safely resected with the use of access osteotomies of the clavicle and sternum, muscle takedown, and brachial plexus dissection and preservation using intraoperative nerve monitoring. Therapeutic, IV.

  4. Positive facial expressions during retrieval of self-defining memories.

    PubMed

    Gandolphe, Marie Charlotte; Nandrino, Jean Louis; Delelis, Gérald; Ducro, Claire; Lavallee, Audrey; Saloppe, Xavier; Moustafa, Ahmed A; El Haj, Mohamad

    2017-11-14

    In this study, we investigated, for the first time, facial expressions during the retrieval of Self-defining memories (i.e., those vivid and emotionally intense memories of enduring concerns or unresolved conflicts). Participants self-rated the emotional valence of their Self-defining memories, and autobiographical retrieval was analyzed with facial analysis software. This software (Facereader) synthesizes facial expression information (e.g., from the cheek, lip and eyebrow muscles) to describe and categorize facial expressions (i.e., neutral, happy, sad, surprised, angry, scared, and disgusted facial expressions). We found that participants showed more emotional than neutral facial expressions during the retrieval of Self-defining memories. We also found that participants showed more positive than negative facial expressions during the retrieval of Self-defining memories. Interestingly, participants attributed positive valence to the retrieved memories. These findings are the first to demonstrate the consistency between facial expressions and the emotional subjective experience of Self-defining memories. They provide valuable physiological information about the emotional experience of the past.

  5. [Changes in facial nerve function, morphology and neurotrophic factor III expression following three types of facial nerve injury].

    PubMed

    Zhang, Lili; Wang, Haibo; Fan, Zhaomin; Han, Yuechen; Xu, Lei; Zhang, Haiyan

    2011-01-01

    To study the changes in facial nerve function, morphology, and neurotrophic factor III (NT-3) expression following three types of facial nerve injury. Changes in facial nerve function (in terms of blink reflex (BF), vibrissae movement (VM) and position of the nasal tip) were assessed in 45 rats in response to three types of facial nerve injury: partial section of the extratemporal segment (group 1), partial section of the facial canal segment (group 2), and complete transection of the facial canal segment (group 3). All facial nerve specimens were taken from the lesion site on the 1st, 7th and 21st post-surgery days (PSD) and cut into two parts at the site of the lesion. Changes in morphology and NT-3 expression were evaluated using an improved trichrome stain and immunohistochemistry, respectively. Changes in facial nerve function: in group 1, all animals had no blink reflex and weak vibrissae movement on the 1st PSD; the blink reflex had partly recovered in 80% of the rats and the vibrissae movement had returned to normal in 40% of the rats by the 7th PSD; facial nerve function was almost normal in 60% of the rats by the 21st PSD. In group 2, all rats had left facial nerve paralysis on the 1st PSD; the blink reflex had partly recovered in 40% of the rats and the vibrissae movement was weak in 80% of the rats by the 7th PSD; the blink reflex was almost normal in 80% of the rats and the vibrissae movement had completely recovered in 40% of the rats by the 21st PSD. In group 3, no recovery occurred at any time point. Changes in morphology: in group 1, the size of the nerve fibers differed in the facial canal segment and some myelin sheaths and axons had degenerated by the 7th PSD; fiber degeneration had turned into regeneration by the 21st PSD. In group 2, the morphologic changes were similar to those in group 1, although the degenerated fibers were more numerous and dispersed across the transection site by the 7th PSD; regeneration of nerve fibers occurred by the 21st PSD. In group 3, most of the fibers

  6. Wood identification of Dalbergia nigra (CITES Appendix I) using quantitative wood anatomy, principal components analysis and naive Bayes classification.

    PubMed

    Gasson, Peter; Miller, Regis; Stekel, Dov J; Whinder, Frances; Zieminska, Kasia

    2010-01-01

    Dalbergia nigra is one of the most valuable timber species of its genus, having been traded for over 300 years. Due to over-exploitation it is facing extinction, and trade has been banned under CITES Appendix I since 1992. Current methods, primarily comparative wood anatomy, are inadequate for conclusive species identification. This study aims to find a set of anatomical characters that distinguish the wood of D. nigra from other commercially important species of Dalbergia from Latin America. Qualitative and quantitative wood anatomy, principal components analysis and naïve Bayes classification were conducted on 43 specimens of Dalbergia: eight D. nigra and 35 from six other Latin American species. Dalbergia cearensis and D. miscolobium can be distinguished from D. nigra on the basis of vessel frequency for the former, and ray frequency for the latter. Principal components analysis was unable to provide any further basis for separating the species. Naïve Bayes classification using four characters (minimum vessel diameter, frequency of solitary vessels, mean ray width, and frequency of axially fused rays) classified all eight D. nigra specimens correctly with no false negatives, but there was a false positive rate of 36.36%. Wood anatomy alone cannot distinguish D. nigra from all other commercially important Dalbergia species likely to be encountered by customs officials, but it can be used to reduce the number of specimens that would need further study.
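
    A minimal sketch of the classification step with fabricated measurements (not the study's data): a Gaussian naive Bayes model is trained on the four quantitative characters the authors report as useful, then applied to a hypothetical unknown specimen.

        # A minimal sketch with made-up measurements: Gaussian naive Bayes on four quantitative
        # wood-anatomy characters (minimum vessel diameter, frequency of solitary vessels,
        # mean ray width, frequency of axially fused rays). Feature values are illustrative.
        import numpy as np
        from sklearn.naive_bayes import GaussianNB

        rng = np.random.default_rng(42)
        # Hypothetical feature vectors: [min vessel diameter (um), % solitary vessels,
        #                                mean ray width (um), axially fused rays per mm]
        d_nigra = rng.normal([60, 40, 30, 2.0], [8, 6, 4, 0.5], size=(8, 4))
        others  = rng.normal([90, 55, 22, 1.0], [12, 8, 5, 0.4], size=(35, 4))

        X = np.vstack([d_nigra, others])
        y = np.array([1] * 8 + [0] * 35)          # 1 = D. nigra, 0 = other Dalbergia species

        clf = GaussianNB().fit(X, y)
        new_specimen = [[65, 42, 29, 1.8]]         # measurements from a hypothetical unknown specimen
        print("P(D. nigra):", clf.predict_proba(new_specimen)[0, 1])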

  7. Hypoglossal-facial-jump-anastomosis without an interposition nerve graft.

    PubMed

    Beutner, Dirk; Luers, Jan C; Grosheva, Maria

    2013-10-01

    The hypoglossal-facial anastomosis is the most frequently applied procedure for the reanimation of a long-lasting peripheral facial nerve paralysis. The use of an interposition graft with an end-to-side anastomosis to the hypoglossal nerve allows preservation of tongue function, but it requires two anastomosis sites and a free second donor nerve. We describe a modified technique of hypoglossal-facial jump anastomosis without an interposition graft and present the first results. Retrospective case study. We performed the facial nerve reconstruction in five patients. The indications for surgery were a long-standing facial paralysis with a preserved portion distal to the geniculate ganglion, absent voluntary activity on needle facial electromyography, and an intact bilateral hypoglossal nerve. Following mastoidectomy, the facial nerve was mobilized in the fallopian canal down to its bifurcation in the parotid gland and cut in its tympanic portion distal to the lesion. Then, a tension-free end-to-side suture to the hypoglossal nerve was performed. Facial function was monitored for up to 16 months postoperatively. The reconstruction technique succeeded in all patients: facial function improved to House-Brackmann grade III within an average of 10 months. This modified technique of hypoglossal-facial reanimation is a valid method with good clinical results, especially in cases of a preserved intramastoidal facial nerve. Level 4. Copyright © 2013 The American Laryngological, Rhinological and Otological Society, Inc.

  8. A greater decline in female facial attractiveness during middle age reflects women’s loss of reproductive value

    PubMed Central

    Maestripieri, Dario; Klimczuk, Amanda C. E.; Traficonte, Daniel M.; Wilson, M. Claire

    2014-01-01

    Facial attractiveness represents an important component of an individual’s overall attractiveness as a potential mating partner. Perceptions of facial attractiveness are expected to vary with age-related changes in health, reproductive value, and power. In this study, we investigated perceptions of facial attractiveness, power, and personality in two groups of women of pre- and post-menopausal ages (35–50 years and 51–65 years, respectively) and two corresponding groups of men. We tested three hypotheses: (1) that perceived facial attractiveness would be lower for older than for younger men and women; (2) that the age-related reduction in facial attractiveness would be greater for women than for men; and (3) that for men, there would be a larger increase in perceived power at older ages. Eighty facial stimuli were rated by 60 (30 male, 30 female) middle-aged women and men using online surveys. Our three main hypotheses were supported by the data. Consistent with sex differences in mating strategies, the greater age-related decline in female facial attractiveness was driven by male respondents, while the greater age-related increase in male perceived power was driven by female respondents. In addition, we found evidence that some personality ratings were correlated with perceived attractiveness and power ratings. The results of this study are consistent with evolutionary theory and with previous research showing that faces can provide important information about characteristics that men and women value in a potential mating partner such as their health, reproductive value, and power or possession of resources. PMID:24592253

  9. Masseteric-facial nerve transposition for reanimation of the smile in incomplete facial paralysis.

    PubMed

    Hontanilla, Bernardo; Marre, Diego

    2015-12-01

    Incomplete facial paralysis occurs in about a third of patients with Bell's palsy. Although their faces are symmetrical at rest, when they smile they have varying degrees of disfigurement. Currently, cross-face nerve grafting is one of the most useful techniques for reanimation. Transfer of the masseteric nerve, although widely used for complete paralysis, has not to our knowledge been reported for incomplete palsy. Between December 2008 and November 2013, we reanimated the faces of 9 patients (2 men and 7 women) with incomplete unilateral facial paralysis with transposition of the masseteric nerve. Sex, age at operation, cause of paralysis, duration of denervation, recipient nerves used, and duration of follow-up were recorded. Commissural excursion, velocity, and patients' satisfaction were evaluated with the FACIAL CLIMA and a questionnaire, respectively. The mean (SD) age at operation was 39 (±6) years and the duration of denervation was 29 (±19) months. There were no complications that required further intervention. Duration of follow-up ranged from 6-26 months. FACIAL CLIMA showed improvement in both commissural excursion and velocity of more than two thirds in 6 patients, more than one half in 2 patients and less than one half in one. Qualitative evaluation showed a slight or pronounced improvement in 7/9 patients. The masseteric nerve is a reliable alternative for reanimation of the smile in patients with incomplete facial paralysis. Its main advantages include its consistent anatomy, a one-stage operation, and low morbidity at the donor site. Copyright © 2015 The British Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.

  10. Original and Mirror Face Images and Minimum Squared Error Classification for Visible Light Face Recognition.

    PubMed

    Wang, Rong

    2015-01-01

    In real-world applications, images of faces vary with illumination, facial expression, and pose. More training samples can better reveal the possible appearances of a face. Although minimum squared error classification (MSEC) is a widely used method, its application to face recognition usually suffers from the problem of a limited number of training samples. In this paper, we improve MSEC by using mirror faces as virtual training samples. We generate mirror faces from the original training samples and combine both kinds of samples into a new training set. The face recognition experiments show that our method obtains high classification accuracy.
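
    A minimal sketch of the idea with synthetic data: mirror every training face horizontally, pool original and mirrored samples, and perform minimum squared error classification by ridge-regularized regression onto one-hot class indicators. The data, image size, and ridge parameter are illustrative assumptions, not the paper's exact formulation.

        # A minimal sketch, under illustrative data: augment the training set with horizontally
        # mirrored faces, then classify by regressing image vectors onto one-hot class indicators.
        import numpy as np

        rng = np.random.default_rng(0)
        n, h, w, n_classes = 40, 24, 24, 4
        faces = rng.random((n, h, w))
        labels = rng.integers(0, n_classes, size=n)

        # Mirror augmentation: left-right flip of every training image
        mirrored = faces[:, :, ::-1]
        X = np.vstack([faces.reshape(n, -1), mirrored.reshape(n, -1)])
        y = np.concatenate([labels, labels])

        Y = np.eye(n_classes)[y]                                  # one-hot class indicator targets
        lam = 1e-2                                                # assumed ridge parameter
        W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)

        def classify(img):
            """Assign the class whose predicted indicator value is largest."""
            return int(np.argmax(img.reshape(1, -1) @ W))

        print(classify(faces[0]), labels[0])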

  11. Microsurgical Resection of Glomus Jugulare Tumors With Facial Nerve Reconstruction: 3-Dimensional Operative Video.

    PubMed

    Cândido, Duarte N C; de Oliveira, Jean Gonçalves; Borba, Luis A B

    2018-05-08

    Paragangliomas are tumors originating from the paraganglionic system (autonomic nervous system), mostly found in the region around the jugular bulb, for which reason they are also termed glomus jugulare tumors (GJT). Although these lesions appear histologically benign, clinically they present with great morbidity, especially due to invasion of nearby structures such as the lower cranial nerves. These are challenging tumors, as they require complex approaches and great knowledge of the skull base. We present the case of a 31-year-old woman, operated on by the senior author, with a 1-year history of tinnitus, vertigo, and progressive hearing loss, which evolved to facial nerve palsy (House-Brackmann IV) 2 months before surgery. Magnetic resonance imaging and computed tomography scans demonstrated a typical lesion with intense flow voids at the jugular foramen region, with invasion of the petrous and tympanic bone, carotid canal, and middle ear, and extension to the infratemporal fossa (type C2 of Fisch's classification for GJT). During the procedure the mastoid part of the facial nerve was found to be involved by tumor and had to be resected. We also describe the technique for nerve reconstruction, using an interposition graft from the great auricular nerve, harvested at the beginning of the surgery. We achieved total tumor resection with a remarkable postoperative course. The patient also recovered facial function by 6 months. The patient consented to publication of her images.

  12. [Measuring impairment of facial affects recognition in schizophrenia. Preliminary study of the facial emotions recognition task (TREF)].

    PubMed

    Gaudelus, B; Virgile, J; Peyroux, E; Leleu, A; Baudouin, J-Y; Franck, N

    2015-06-01

    The impairment of social cognition, including facial affect recognition, is a well-established trait in schizophrenia, and specific cognitive remediation programs focusing on facial affect recognition have been developed by different teams worldwide. However, even though social cognitive impairments have been confirmed, previous studies have also shown heterogeneity of results between subjects. Therefore, individual abilities should be assessed before proposing such programs. Most research teams apply tasks based on facial affect recognition by Ekman et al. or Gur et al.; however, these tasks are not easily applicable in clinical practice. Here, we present the Facial Emotions Recognition Test (TREF), which is designed to identify facial affect recognition impairments in clinical practice. The test is composed of 54 photos and evaluates abilities in the recognition of six universal emotions (joy, anger, sadness, fear, disgust and contempt). Each of these emotions is represented with colored photos of four different models (two men and two women) at nine intensity levels from 20 to 100%. Each photo is presented for 10 seconds; no time limit for responding is applied. The present study compared scores on the TREF in a sample of healthy controls (64 subjects) and people with stabilized schizophrenia (45 subjects) according to the DSM-IV-TR criteria. We analysed global scores for all emotions, as well as subscores for each emotion, between these two groups, taking into account gender differences. Our results were consistent with previous findings. Applying the TREF, we confirmed an impairment in facial affect recognition in schizophrenia by showing significant differences between the two groups in their global results (76.45% for healthy controls versus 61.28% for people with schizophrenia), as well as in subscores for each emotion except joy. Scores for women were significantly higher than for men in the population

  13. Children's Scripts for Social Emotions: Causes and Consequences Are More Central than Are Facial Expressions

    ERIC Educational Resources Information Center

    Widen, Sherri C.; Russell, James A.

    2010-01-01

    Understanding and recognition of emotions relies on emotion concepts, which are narrative structures (scripts) specifying facial expressions, causes, consequences, label, etc. organized in a temporal and causal order. Scripts and their development are revealed by examining which components better tap which concepts at which ages. This study…

  14. Identification of Facial Shape by Applying Golden Ratio to the Facial Measurements: An Interracial Study in Malaysian Population

    PubMed Central

    Packiriswamy, Vasanthakumar; Kumar, Pramod; Rao, Mohandas

    2012-01-01

    Background: The “golden ratio” is considered a universal facial aesthetic standard. Researchers have suggested that deviation from the golden ratio can result in the development of facial abnormalities. Aims: This study was designed to examine facial morphology and to identify individuals with normal, short, and long faces. Materials and Methods: We studied 300 subjects of Malaysian nationality aged 18-28 years of Chinese, Indian, and Malay extraction. The parameters measured were physiognomical facial height and width of the face, and the physiognomical facial index was calculated. Face shape was classified based on the golden ratio. The independent t test was used to test differences between sexes and among the races. Results: The mean values of the measurements and the index showed significant sexual and interracial differences. Of the 300 subjects, face shape was normal in 60 subjects, short in 224 subjects, and long in 16 subjects. Conclusion: As anticipated, the measurements varied according to gender and race. Only 60 subjects had a regular face shape; the remaining 240 subjects had an irregular (short or long) face shape. Since individuals with short and long face shapes may be at risk of developing various disorders, knowledge of facial shapes in a given population is important for early diagnostic and treatment procedures. PMID:23272303

  15. Valproic Acid Promotes Survival of Facial Motor Neurons in Adult Rats After Facial Nerve Transection: a Pilot Study.

    PubMed

    Zhang, Lili; Fan, Zhaomin; Han, Yuechen; Xu, Lei; Liu, Wenwen; Bai, Xiaohui; Zhou, Meijuan; Li, Jianfeng; Wang, Haibo

    2018-04-01

    Valproic acid (VPA), a medication primarily used to treat epilepsy and bipolar disorder, has been applied to the repair of central and peripheral nervous system injury. The present study investigated the effect of VPA on functional recovery, survival of facial motor neurons (FMNs), and protein expression in rats after facial nerve trunk transection, using functional measurement, Nissl staining, TUNEL, immunofluorescence, and Western blot. Following facial nerve injury, all rats in the VPA group showed better functional recovery than the normal saline (NS) group, with significant differences at the given time points. The Nissl staining results demonstrated that the number of surviving FMNs in the VPA group was higher than in the NS group. TUNEL staining showed that axonal injury of the facial nerve could lead to apoptosis of FMNs. Treatment with VPA, however, significantly reduced cell apoptosis by decreasing the expression of Bax protein, and increased neuronal survival by upregulating brain-derived neurotrophic factor (BDNF) and growth-associated protein-43 (GAP-43) expression in injured FMNs compared with the NS group. Overall, our findings suggest that VPA may advance functional recovery, reduce lesion-induced apoptosis, and promote neuron survival after facial nerve transection in rats. This study provides experimental evidence for better understanding the mechanisms of injury and repair in peripheral facial paralysis.

  16. Delayed facial nerve decompression for Bell's palsy.

    PubMed

    Kim, Sang Hoon; Jung, Junyang; Lee, Jong Ha; Byun, Jae Yong; Park, Moon Suh; Yeo, Seung Geun

    2016-07-01

    Incomplete recovery of facial motor function continues to be a long-term sequela in some patients with Bell's palsy. The purpose of this study was to investigate the efficacy of transmastoid facial nerve decompression after steroid and antiviral treatment in patients with late-stage Bell's palsy. Twelve patients underwent surgical decompression for Bell's palsy 21-70 days after onset, whereas 22 patients were followed up after steroid and antiviral therapy without decompression. Surgical criteria included greater than 90% degeneration on electroneuronography and no voluntary electromyography potentials. This was a retrospective study of electrodiagnostic data and medical chart review between 2006 and 2013. Recovery from facial palsy was assessed using the House-Brackmann grading system. The final recovery rate did not differ significantly between the two groups; however, all patients in the decompression group recovered to at least House-Brackmann grade III at final follow-up. Although the postoperative hearing threshold was increased in both groups, there was no significant between-group difference in hearing threshold. Transmastoid decompression of the facial nerve in patients with severe late-stage Bell's palsy at risk of a poor facial nerve outcome reduced severe complications of facial palsy with minimal morbidity.

  17. Searching for proprioceptors in human facial muscles.

    PubMed

    Cobo, Juan L; Abbate, Francesco; de Vicente, Juan C; Cobo, Juan; Vega, José A

    2017-02-15

    The human craniofacial muscles innervated by the facial nerve typically lack muscle spindles. However, these muscles have proprioception that participates in the coordination of facial movements. A functional substitution of facial proprioceptors by cutaneous mechanoreceptors has been proposed, but at present this alternative has not been demonstrated. Here we have investigated whether other kinds of sensory structures are present in two human facial muscles (zygomatic major and buccal). Human cheeks were removed from Spanish cadavers and processed for immunohistochemical detection of nerve fibers (neurofilament proteins and S100 protein) and two putative mechanoproteins (acid-sensing ion channel 2 and transient receptor potential vanilloid 4) associated with mechanosensing. Nerves of different calibers were found in the connective septa and within the muscle itself. In all the muscles analysed, capsular corpuscle-like structures resembling elongated or round Ruffini-like corpuscles were observed. Moreover, the axon profiles within these structures displayed immunoreactivity for both putative mechanoproteins. The present results demonstrate the presence of sensory structures in facial muscles that can substitute for typical muscle spindles as the source of facial proprioception. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. Facial biometrics of peri-oral changes in Crohn's disease.

    PubMed

    Zou, L; Adegun, O K; Willis, A; Fortune, Farida

    2014-05-01

    Crohn's disease is a chronic relapsing and remitting inflammatory condition which can affect any part of the gastrointestinal tract. In the oro-facial region, patients can present with peri-oral swellings which result in severe facial disfigurement. To date, assessing the degree of facial change and evaluating treatment outcomes has relied on clinical observation and semi-quantitative methods. In this paper, we describe the development of a robust and reproducible measurement strategy using 3-D facial biometrics to objectively quantify the extent and progression of oro-facial Crohn's disease. Using facial laser scanning, 32 serial images from 13 Crohn's patients attending the Oral Medicine clinic were acquired during relapse, remission, and post-treatment phases. Utilising theories of coordinate metrology, the facial images were subjected to registration, identification of regions of interest, and reproducible repositioning prior to obtaining volume measurements. To quantify the changes in tissue volume, scan images from consecutive appointments were compared to the baseline (first) scan image. A reproducibility test was performed to ascertain the degree of uncertainty in the volume measurements. 3-D facial biometric imaging is a reliable method to identify and quantify peri-oral swelling in Crohn's patients. Comparison of facial scan images at different phases of the disease precisely revealed profile and volume changes. The volume measurements were highly reproducible, as judged from the 1% standard deviation. 3-D facial biometric measurements in Crohn's patients with oro-facial involvement offer a quick, robust, economical and objective approach for guided therapeutic intervention and routine assessment of treatment efficacy in the clinic.
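
    A minimal sketch of the volume-quantification step, assuming the scans have already been registered onto a common grid and a peri-oral region of interest has been defined: the swelling volume is approximated by integrating the depth difference between baseline and follow-up height maps over the ROI. The grid spacing and toy data are assumptions, not the paper's coordinate-metrology workflow.

        # A minimal sketch, not the paper's pipeline: approximate tissue volume change between two
        # registered facial scans by summing depth differences (mm) over a region of interest.
        import numpy as np

        pixel_area_mm2 = 0.25                       # assumed 0.5 mm x 0.5 mm sampling grid

        def volume_change_mm3(baseline_depth, followup_depth, roi_mask):
            """Approximate tissue volume change (mm^3) inside the ROI between two registered scans."""
            diff = followup_depth - baseline_depth  # positive where tissue has swollen outward
            return float(np.sum(diff[roi_mask]) * pixel_area_mm2)

        baseline = np.zeros((200, 200))
        followup = baseline.copy()
        followup[80:120, 80:120] += 2.0             # a 2 mm bulge over a 20 mm x 20 mm patch
        roi = np.zeros_like(baseline, dtype=bool)
        roi[60:140, 60:140] = True

        print(volume_change_mm3(baseline, followup, roi), "mm^3")   # ~800 mm^3 for this toy bulge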

  19. Sensitive and Motor Neuroanastomosis After Facial Trauma.

    PubMed

    Ribeiro-Junior, Paulo Domingos; Senko, Ricardo Alexandre Galdioli; Mendes, Gabriel Cury Batista; Peres, Fernando Gianzanti

    2016-10-01

    The facial nerve has great functional and aesthetic importance to the face, and damage to its structure can lead to major complications. This article reports a clinical case of neuroanastomosis of the facial nerve after facial trauma, describing the surgical procedure and postoperative follow-up. A trauma patient with an extensive cut injury to the right mandibular body, causing neurotmesis of the VIIth cranial nerve and a fracture of the right mandibular angle, was treated. During surgical exploration, the nerve segments were identified and a neuroanastomosis was performed using 10-0 nylon, after reduction and internal fixation of the mandibular fracture. Postoperatively, an 8-month follow-up showed good evolution and preservation of motor function of the mimetic muscles of the face, highlighting the success of the surgical treatment. Nerve damage resulting from facial trauma can be a challenge for surgical treatment, but when treatment is properly conducted the damaged nerve can be functionally restored.

  20. Facial patterns in a tropical social wasp correlate with colony membership

    NASA Astrophysics Data System (ADS)

    Baracchi, David; Turillazzi, Stefano; Chittka, Lars

    2016-10-01

    Social insects excel in discriminating nestmates from intruders, typically relying on colony odours. Remarkably, some wasp species achieve such discrimination using visual information. However, while it is universally accepted that odours mediate group-level recognition, the ability to recognise colony members visually has been considered possible only via individual recognition, by which wasps discriminate 'friends' from 'foes'. Using geometric morphometric analysis, a technique based on a rigorous statistical theory of shape that allows quantitative multivariate analyses of structure shapes, we first quantified facial marking variation in Liostenogaster flavolineata wasps. We then compared this facial variation with that of chemical profiles (generated by cuticular hydrocarbons) within and between colonies. Principal component analysis and discriminant analysis applied to sets of variables containing pure shape information showed that, despite appreciable intra-colony variation, the faces of females belonging to the same colony resemble one another more than those of outsiders. This colony-specific variation in facial patterns was on a par with that observed for odours. While the occurrence of face discrimination at the colony level remains to be tested by behavioural experiments, overall our results suggest that, in this species, wasp faces display adequate information that might potentially be perceived and used by wasps for colony-level recognition.
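    As a rough illustration of the shape-analysis step, the sketch below applies PCA followed by linear discriminant analysis to synthetic, flattened landmark coordinates and asks how well colony membership can be recovered from facial shape alone. It is not the authors' geometric-morphometric pipeline; the toy data, component count, and cross-validation setup are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Toy data: 60 wasps from 6 colonies, 20 facial landmarks (x, y) per wasp,
# already superimposed (e.g. by Procrustes alignment) so only shape remains.
n_colonies, per_colony, n_landmarks = 6, 10, 20
colony_means = rng.normal(0, 1, size=(n_colonies, n_landmarks * 2))
X = np.vstack([m + rng.normal(0, 0.5, size=(per_colony, n_landmarks * 2))
               for m in colony_means])            # flattened landmark coordinates
y = np.repeat(np.arange(n_colonies), per_colony)  # colony labels

# PCA reduces the shape variables, then LDA asks whether colony membership
# can be recovered from facial shape information.
model = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
scores = cross_val_score(model, X, y, cv=5)
print(f"cross-validated colony assignment accuracy: {scores.mean():.2f}")
```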

  1. Periocular Reconstruction in Patients with Facial Paralysis.

    PubMed

    Joseph, Shannon S; Joseph, Andrew W; Douglas, Raymond S; Massry, Guy G

    2016-04-01

    Facial paralysis can result in serious ocular consequences. All patients with orbicularis oculi weakness in the setting of facial nerve injury should undergo a thorough ophthalmologic evaluation. The main goal of management in these patients is to protect the ocular surface and preserve visual function. Patients with expected recovery of facial nerve function may only require temporary and conservative measures to protect the ocular surface. Patients with prolonged or unlikely recovery of facial nerve function benefit from surgical rehabilitation of the periorbital complex. Current reconstructive procedures are most commonly intended to improve coverage of the eye but cannot restore blink. Copyright © 2016 Elsevier Inc. All rights reserved.

  2. Sad Facial Expressions Increase Choice Blindness

    PubMed Central

    Wang, Yajie; Zhao, Song; Zhang, Zhijie; Feng, Wenfeng

    2018-01-01

    Previous studies have discovered a fascinating phenomenon known as choice blindness: individuals fail to detect mismatches between the face they choose and the face replaced by the experimenter. Although previous studies have reported a couple of factors that can modulate the magnitude of choice blindness, the potential effect of facial expression on choice blindness has not yet been explored. Using faces with sad and neutral expressions (Experiment 1) and faces with happy and neutral expressions (Experiment 2) in the classic choice blindness paradigm, the present study investigated the effects of facial expressions on choice blindness. The results showed that the detection rate was significantly lower for sad faces than for neutral faces, whereas no significant difference was observed between happy and neutral faces. An exploratory analysis of verbal reports found that participants who reported fewer facial features for sad (as compared to neutral) expressions also tended to show a lower detection rate for sad (as compared to neutral) faces. These findings indicated that sad facial expressions increased choice blindness, which might have resulted from inhibition of further processing of the detailed facial features by the less attractive sad expressions (as compared to neutral expressions). PMID:29358926

  3. Sad Facial Expressions Increase Choice Blindness.

    PubMed

    Wang, Yajie; Zhao, Song; Zhang, Zhijie; Feng, Wenfeng

    2017-01-01

    Previous studies have discovered a fascinating phenomenon known as choice blindness: individuals fail to detect mismatches between the face they choose and the face replaced by the experimenter. Although previous studies have reported a couple of factors that can modulate the magnitude of choice blindness, the potential effect of facial expression on choice blindness has not yet been explored. Using faces with sad and neutral expressions (Experiment 1) and faces with happy and neutral expressions (Experiment 2) in the classic choice blindness paradigm, the present study investigated the effects of facial expressions on choice blindness. The results showed that the detection rate was significantly lower for sad faces than for neutral faces, whereas no significant difference was observed between happy and neutral faces. An exploratory analysis of verbal reports found that participants who reported fewer facial features for sad (as compared to neutral) expressions also tended to show a lower detection rate for sad (as compared to neutral) faces. These findings indicated that sad facial expressions increased choice blindness, which might have resulted from inhibition of further processing of the detailed facial features by the less attractive sad expressions (as compared to neutral expressions).

  4. Allergenic Ingredients in Facial Wet Wipes.

    PubMed

    Aschenbeck, Kelly A; Warshaw, Erin M

    Allergic contact dermatitis commonly occurs on the face. Facial cleansing wipes may be an underrecognized source of allergens. The aim of this study was to determine the frequency of potentially allergenic ingredients in facial wet wipes. Ingredient lists from name brand and generic facial wipes from 4 large retailers were analyzed. In the 178 facial wipes examined, a total of 485 ingredients were identified (average, 16.7 ingredients per wipe). Excluding botanicals, the top 15 potentially allergenic ingredients were glycerin (64.0%), fragrance (63.5%), phenoxyethanol (53.9%), citric acid (51.1%), disodium EDTA (44.4%), sorbic acid derivatives (39.3%), tocopherol derivatives (38.8%), polyethylene glycol derivatives (32.6%), glyceryl stearate (31.5%), sodium citrate (29.8%), glucosides (27.5%), cetearyl alcohol (25.8%), propylene glycol (25.3%), sodium benzoate (24.2%), and ceteareth-20 (23.6%)/parabens (23.6%). Of note, methylisothiazolinone (2.2%) and methylchloroisothiazolinone (1.1%) were uncommon. The top potential allergens of botanical origin included Aloe barbadensis (41.0%), chamomile extracts (27.0%), tea extracts (21.3%), Cucumis sativus (20.2%), and Hamamelis virginiana (10.7%). Many potential allergens are present in facial wet wipes, including fragrances, preservatives, botanicals, glucosides, and propylene glycol.

  5. Perception of health from facial cues

    PubMed Central

    Henderson, Audrey J.; Holzleitner, Iris J.; Talamas, Sean N.

    2016-01-01

    Impressions of health are integral to social interactions, yet poorly understood. A review of the literature reveals multiple facial characteristics that potentially act as cues to health judgements. The cues vary in their stability across time: structural shape cues including symmetry and sexual dimorphism alter slowly across the lifespan and have been found to have weak links to actual health, but show inconsistent effects on perceived health. Facial adiposity changes over a medium time course and is associated with both perceived and actual health. Skin colour alters over a short time and has strong effects on perceived health, yet links to health outcomes have barely been evaluated. Reviewing suggested an additional influence of demeanour as a perceptual cue to health. We, therefore, investigated the association of health judgements with multiple facial cues measured objectively from two-dimensional and three-dimensional facial images. We found evidence for independent contributions of face shape and skin colour cues to perceived health. Our empirical findings: (i) reinforce the role of skin yellowness; (ii) demonstrate the utility of global face shape measures of adiposity; and (iii) emphasize the role of affect in facial images with nominally neutral expression in impressions of health. PMID:27069057

  6. [Peripheral paralysis of facial nerve in children].

    PubMed

    Steczkowska-Klucznik, Małgorzata; Kaciński, Marek

    2006-01-01

    Peripheral facial paresis is one of the most commonly diagnosed neuropathies in adults and in children. Many factors can trigger facial paresis; the most frequent are infection, carcinoma, and demyelinating disease. A particularly important and interesting problem is idiopathic facial paresis (Bell's palsy). Currently, the main targets of research are to establish its etiology (infectious, genetic, immunologic) and to find the most appropriate treatment.

  7. A comparison of facial expression properties in five hylobatid species.

    PubMed

    Scheider, Linda; Liebal, Katja; Oña, Leonardo; Burrows, Anne; Waller, Bridget

    2014-07-01

    Little is known about facial communication in lesser apes (family Hylobatidae) and how their facial expressions (and their use) relate to social organization. We investigated facial expressions (defined as combinations of facial movements) in social interactions of mated pairs in five hylobatid species belonging to three genera, using a recently developed objective coding system, the Facial Action Coding System for hylobatid species (GibbonFACS). We described three important properties of their facial expressions and compared them between genera. First, we compared the rate of facial expressions, defined as the number of facial expressions per unit of time. Second, we compared repertoire size, defined as the number of different types of facial expressions used, independent of their frequency. Third, we compared the diversity of expression, defined as the repertoire weighted by the rate of use of each type of facial expression. We observed a higher rate and diversity of facial expression, but no larger repertoire, in Symphalangus (siamangs) compared to Hylobates and Nomascus species. In line with previous research, these results suggest that siamangs differ from other hylobatids in certain aspects of their social behavior. To investigate whether differences in facial expressions are linked to hylobatid socio-ecology, we used a phylogenetic generalized least squares (PGLS) regression analysis to correlate these properties with two social factors: group size and level of monogamy. No relationship between the properties of facial expressions and these socio-ecological factors was found. One explanation could be that facial expressions in hylobatid species are subject to phylogenetic inertia and do not differ sufficiently between species to reveal correlations with factors such as group size and monogamy level. © 2014 Wiley Periodicals, Inc.

  8. Information processing of motion in facial expression and the geometry of dynamical systems

    NASA Astrophysics Data System (ADS)

    Assadi, Amir H.; Eghbalnia, Hamid; McMenamin, Brenton W.

    2005-01-01

    An interesting problem in the analysis of video data concerns the design of algorithms that detect perceptually significant features in an unsupervised manner, for instance, methods of machine learning for the automatic classification of human expression. A geometric formulation of this genre of problems could be modeled with the help of perceptual psychology. In this article, we outline one approach for a special case where video segments are to be classified according to expression of emotion or other similar facial motions. The encoding of realistic facial motions that convey expression of emotions for a particular person P forms a parameter space X_P whose study reveals the "objective geometry" for the problem of unsupervised feature detection from video. The geometric features and discrete representation of the space X_P are independent of subjective evaluations by observers. While the "subjective geometry" of X_P varies from observer to observer, levels of sensitivity and variation in perception of facial expressions appear to share a certain level of universality among members of similar cultures. Therefore, the statistical geometry of invariants of X_P for a population sample could provide effective algorithms for the extraction of such features. In cases where the frequency of events in the sample data is sufficiently large, a suitable framework could be provided to facilitate the information-theoretic organization and study of statistical invariants of such features. This article provides a general approach to encoding motion in terms of a particular genre of dynamical systems and the geometry of their flow. An example is provided to illustrate the general theory.

  9. Dermatological Feasibility of Multimodal Facial Color Imaging Modality for Cross-Evaluation of Facial Actinic Keratosis

    PubMed Central

    Bae, Youngwoo; Son, Taeyoon; Nelson, J. Stuart; Kim, Jae-Hong; Choi, Eung Ho; Jung, Byungjo

    2010-01-01

    Background/Purpose Digital color image analysis is currently considered as a routine procedure in dermatology. In our previous study, a multimodal facial color imaging modality (MFCIM), which provides a conventional, parallel- and cross-polarization, and fluorescent color image, was introduced for objective evaluation of various facial skin lesions. This study introduces a commercial version of MFCIM, DermaVision-PRO, for routine clinical use in dermatology and demonstrates its dermatological feasibility for cross-evaluation of skin lesions. Methods/Results Sample images of subjects with actinic keratosis or non-melanoma skin cancers were obtained at four different imaging modes. Various image analysis methods were applied to cross-evaluate the skin lesion and, finally, extract valuable diagnostic information. DermaVision-PRO is potentially a useful tool as an objective macroscopic imaging modality for quick prescreening and cross-evaluation of facial skin lesions. Conclusion DermaVision-PRO may be utilized as a useful tool for cross-evaluation of widely distributed facial skin lesions and an efficient database management of patient information. PMID:20923462

  10. FGF–2 is required to prevent astrogliosis in the facial nucleus after facial nerve injury and mechanical stimulation of denervated vibrissal muscles

    PubMed Central

    Hizay, Arzu; Seitz, Mark; Grosheva, Maria; Sinis, Nektarios; Kaya, Yasemin; Bendella, Habib; Sarikcioglu, Levent; Dunlop, Sarah A.; Angelov, Doychin N.

    2016-01-01

    Recently, we have shown that manual stimulation of paralyzed vibrissal muscles after facial-facial anastomosis reduced the poly-innervation of neuromuscular junctions and restored vibrissal whisking. Using gene knockouts, we found a differential dependence of manual stimulation effects on growth factors. Thus, insulin-like growth factor-1 and brain-derived neurotrophic factor are required to underpin manual stimulation-mediated improvements, whereas FGF-2 is not. The lack of dependence on FGF-2 in mediating these peripheral effects prompted us to look centrally, i.e. within the facial nucleus, where increased astrogliosis after facial-facial anastomosis follows "synaptic stripping". We measured the intensity of Cy3 fluorescence after immunostaining for glial fibrillary acidic protein (GFAP) as an indirect indicator of the synaptic coverage of axotomized neurons in the facial nucleus of mice lacking FGF-2 (FGF-2-/- mice). There was no difference in GFAP-Cy3 fluorescence (pixel number, gray value range 17–103) between intact wildtype mice (2.12±0.37×10^7) and their intact FGF-2-/- counterparts (2.12±0.27×10^7), nor after facial-facial anastomosis plus handling (wildtype: 4.06±0.32×10^7; FGF-2-/-: 4.39±0.17×10^7). However, after facial-facial anastomosis, GFAP-Cy3 fluorescence remained elevated in FGF-2-/- animals (4.54±0.12×10^7), whereas manual stimulation reduced the intensity of GFAP immunofluorescence in wildtype mice to values that were not significantly different from intact mice (2.63±0.39×10^7). We conclude that FGF-2 is not required to underpin the beneficial effects of manual stimulation at the neuromuscular junction, but it is required to minimize astrogliosis in the brainstem and, by implication, to restore the synaptic coverage of recovering facial motoneurons. PMID:28276669
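    As a minimal illustration of the fluorescence measure reported above (pixel number within a fixed gray-value range), the sketch below counts the pixels of an 8-bit section image whose gray value falls in 17-103. The function name and the synthetic image are purely illustrative assumptions, not the authors' image-analysis workflow.

```python
import numpy as np

def gfap_pixel_count(image, lo=17, hi=103):
    """Count pixels whose gray value lies in [lo, hi], used here as a proxy
    for the area of GFAP-Cy3 immunofluorescence in a facial-nucleus section."""
    mask = (image >= lo) & (image <= hi)
    return int(mask.sum())

# Example on a synthetic 8-bit image standing in for a microscopy section
rng = np.random.default_rng(2)
section = rng.integers(0, 256, size=(512, 512), dtype=np.uint8)
print(gfap_pixel_count(section))
```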

  11. Spontaneous and posed facial expression in Parkinson's disease.

    PubMed

    Smith, M C; Smith, M K; Ellgring, H

    1996-09-01

    Spontaneous and posed emotional facial expressions in individuals with Parkinson's disease (PD, n = 12) were compared with those of healthy age-matched controls (n = 12). The intensity and amount of facial expression in PD patients were expected to be reduced for spontaneous but not posed expressions. Emotional stimuli were video clips selected from films, 2-5 min in duration, designed to elicit feelings of happiness, sadness, fear, disgust, or anger. Facial movements were coded using Ekman and Friesen's (1978) Facial Action Coding System (FACS). In addition, participants rated their emotional experience on 9-point Likert scales. The PD group showed significantly less overall facial reactivity than did controls when viewing the films. The predicted Group X Condition (spontaneous vs. posed) interaction effect on smile intensity was found when PD participants with more severe disease were compared with those with milder disease and with controls. In contrast, ratings of emotional experience were similar for both groups. Depression was positively associated with emotion rating but not with measures of facial activity. Spontaneous facial expression appears to be selectively affected in PD, whereas posed expression and emotional experience remain relatively intact.

  12. Automated Video Based Facial Expression Analysis of Neuropsychiatric Disorders

    PubMed Central

    Wang, Peng; Barrett, Frederick; Martin, Elizabeth; Milanova, Marina; Gur, Raquel E.; Gur, Ruben C.; Kohler, Christian; Verma, Ragini

    2008-01-01

    Deficits in emotional expression are prominent in several neuropsychiatric disorders, including schizophrenia. Available clinical facial expression evaluations provide subjective and qualitative measurements, which are based on static 2D images that do not capture the temporal dynamics and subtleties of expression changes. Therefore, there is a need for automated, objective and quantitative measurements of facial expressions captured using videos. This paper presents a computational framework that creates probabilistic expression profiles for video data and can potentially help to automatically quantify emotional expression differences between patients with neuropsychiatric disorders and healthy controls. Our method automatically detects and tracks facial landmarks in videos, and then extracts geometric features to characterize facial expression changes. To analyze temporal facial expression changes, we employ probabilistic classifiers that analyze facial expressions in individual frames, and then propagate the probabilities throughout the video to capture the temporal characteristics of facial expressions. The applications of our method to healthy controls and case studies of patients with schizophrenia and Asperger’s syndrome demonstrate the capability of the video-based expression analysis method in capturing subtleties of facial expression. Such results can pave the way for a video based method for quantitative analysis of facial expressions in clinical research of disorders that cause affective deficits. PMID:18045693
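    The idea of propagating per-frame classifier probabilities through a video can be sketched as a simple forward pass with a persistence-biased transition matrix. This is only an assumed, minimal stand-in for the probabilistic framework described in the paper; the function name, transition model, and example scores are hypothetical.

```python
import numpy as np

def propagate_probabilities(frame_probs, stay=0.9):
    """Temporally smooth per-frame expression probabilities.

    frame_probs : (n_frames, n_classes) array of per-frame classifier outputs
    stay        : probability that the expression persists between frames

    A simple forward pass: each frame's belief combines the classifier output
    with the belief carried over from the previous frame through a transition
    matrix that favours staying in the same expression.
    """
    n_frames, n_classes = frame_probs.shape
    T = np.full((n_classes, n_classes), (1 - stay) / (n_classes - 1))
    np.fill_diagonal(T, stay)

    belief = np.empty_like(frame_probs)
    belief[0] = frame_probs[0] / frame_probs[0].sum()
    for t in range(1, n_frames):
        predicted = belief[t - 1] @ T          # carry belief forward in time
        updated = predicted * frame_probs[t]   # fuse with frame-level evidence
        belief[t] = updated / updated.sum()
    return belief

# Example: noisy per-frame scores for 3 expression classes over 5 frames
probs = np.array([[0.6, 0.3, 0.1],
                  [0.4, 0.4, 0.2],
                  [0.7, 0.2, 0.1],
                  [0.3, 0.5, 0.2],
                  [0.6, 0.3, 0.1]])
print(propagate_probabilities(probs).round(2))
```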

  13. Acupuncture treatment of facial palsy.

    PubMed

    Bokhari, Syed Zahid Hussain; Zahid, Syeda Samina

    2010-01-01

    Bell's palsy is an idiopathic, acute peripheral-nerve palsy involving the facial nerve, which supplies all the muscles of facial expression. This study was conducted to evaluate the effects of electro-acupuncture on patients with facial palsy. The study was conducted on patients with facial palsy at a private clinic in Peshawar during 1999-2009, and 49 cases were included. Cases within the first two weeks of illness, and those with a history of stroke or an upper motor neuron lesion, were excluded. Electro-acupuncture was used as the main therapeutic technique. Patients received acupuncture at four major points on the face for 20-25 minutes every day for 10 days. Specific points were used for the nasolabial fold and watering of the eye. After a week of rest, patients were re-evaluated, and another course of treatment of 5-10 days was sufficient in most cases. The frequency of electro-acupuncture was kept at 60-80 cycles per minute. The 49 patients studied had durations of illness ranging from 3 weeks to over a year. Cases with a duration of illness of 3 weeks onward showed rapid recovery of palsy symptoms with electro-acupuncture. All cases showed recovery, although palsy of the angle of the mouth did not recover completely. Electro-acupuncture is effective in treating facial palsy.

  14. Idiopathic ophthalmodynia and idiopathic rhinalgia: two topographic facial pain syndromes.

    PubMed

    Pareja, Juan A; Cuadrado, María L; Porta-Etessam, Jesús; Fernández-de-las-Peñas, César; Gili, Pablo; Caminero, Ana B; Cebrián, José L

    2010-09-01

    To describe 2 topographic facial pain conditions with the pain clearly localized in the eye (idiopathic ophthalmodynia) or in the nose (idiopathic rhinalgia), and to propose their distinction from persistent idiopathic facial pain. Persistent idiopathic facial pain, burning mouth syndrome, atypical odontalgia, and facial arthromyalgia are idiopathic facial pain syndromes that have been separated according to topographical criteria. Still, some other facial pain syndromes might have been veiled under the broad term of persistent idiopathic facial pain. Over a 10-year period, we studied all patients referred to our neurological clinic because of facial pain of unknown etiology that might deviate from all well-characterized facial pain syndromes. In a group of patients we identified 2 consistent clinical pictures with pain precisely located either in the eye (n=11) or in the nose (n=7). Clinical features resembled those of other localized idiopathic facial pain syndromes, the key difference being the topographic distribution of the pain. Both idiopathic ophthalmodynia and idiopathic rhinalgia appear to be specific pain syndromes with a distinctive location, and may deserve a nosologic status just as other focal pain syndromes of the face. Whether all such focal syndromes are topographic variants of persistent idiopathic facial pain or independent disorders remains a controversial issue.

  15. Regional Brain Responses Are Biased Toward Infant Facial Expressions Compared to Adult Facial Expressions in Nulliparous Women.

    PubMed

    Li, Bingbing; Cheng, Gang; Zhang, Dajun; Wei, Dongtao; Qiao, Lei; Wang, Xiangpeng; Che, Xianwei

    2016-01-01

    Recent neuroimaging studies suggest that neutral infant faces compared to neutral adult faces elicit greater activity in brain areas associated with face processing, attention, empathic response, reward, and movement. However, whether infant facial expressions evoke larger brain responses than adult facial expressions remains unclear. Here, we performed event-related functional magnetic resonance imaging in nulliparous women while they were presented with images of matched unfamiliar infant and adult facial expressions (happy, neutral, and uncomfortable/sad) in a pseudo-randomized order. We found that the bilateral fusiform and right lingual gyrus were overall more activated during the presentation of infant facial expressions compared to adult facial expressions. Uncomfortable infant faces compared to sad adult faces evoked greater activation in the bilateral fusiform gyrus, precentral gyrus, postcentral gyrus, posterior cingulate cortex-thalamus, and precuneus. Neutral infant faces activated larger brain responses in the left fusiform gyrus compared to neutral adult faces. Happy infant faces compared to happy adult faces elicited larger responses in areas of the brain associated with emotion and reward processing using a more liberal threshold of p < 0.005 uncorrected. Furthermore, the level of the test subjects' Interest-In-Infants was positively associated with the intensity of right fusiform gyrus response to infant faces and uncomfortable infant faces compared to sad adult faces. In addition, the Perspective Taking subscale score on the Interpersonal Reactivity Index-Chinese was significantly correlated with precuneus activity during uncomfortable infant faces compared to sad adult faces. Our findings suggest that regional brain areas may bias cognitive and emotional responses to infant facial expressions compared to adult facial expressions among nulliparous women, and this bias may be modulated by individual differences in Interest-In-Infants and

  16. Regional Brain Responses Are Biased Toward Infant Facial Expressions Compared to Adult Facial Expressions in Nulliparous Women

    PubMed Central

    Zhang, Dajun; Wei, Dongtao; Qiao, Lei; Wang, Xiangpeng; Che, Xianwei

    2016-01-01

    Recent neuroimaging studies suggest that neutral infant faces compared to neutral adult faces elicit greater activity in brain areas associated with face processing, attention, empathic response, reward, and movement. However, whether infant facial expressions evoke larger brain responses than adult facial expressions remains unclear. Here, we performed event-related functional magnetic resonance imaging in nulliparous women while they were presented with images of matched unfamiliar infant and adult facial expressions (happy, neutral, and uncomfortable/sad) in a pseudo-randomized order. We found that the bilateral fusiform and right lingual gyrus were overall more activated during the presentation of infant facial expressions compared to adult facial expressions. Uncomfortable infant faces compared to sad adult faces evoked greater activation in the bilateral fusiform gyrus, precentral gyrus, postcentral gyrus, posterior cingulate cortex-thalamus, and precuneus. Neutral infant faces activated larger brain responses in the left fusiform gyrus compared to neutral adult faces. Happy infant faces compared to happy adult faces elicited larger responses in areas of the brain associated with emotion and reward processing using a more liberal threshold of p < 0.005 uncorrected. Furthermore, the level of the test subjects’ Interest-In-Infants was positively associated with the intensity of right fusiform gyrus response to infant faces and uncomfortable infant faces compared to sad adult faces. In addition, the Perspective Taking subscale score on the Interpersonal Reactivity Index-Chinese was significantly correlated with precuneus activity during uncomfortable infant faces compared to sad adult faces. Our findings suggest that regional brain areas may bias cognitive and emotional responses to infant facial expressions compared to adult facial expressions among nulliparous women, and this bias may be modulated by individual differences in Interest-In-Infants and

  17. Categorical Perception of Affective and Linguistic Facial Expressions

    ERIC Educational Resources Information Center

    McCullough, Stephen; Emmorey, Karen

    2009-01-01

    Two experiments investigated categorical perception (CP) effects for affective facial expressions and linguistic facial expressions from American Sign Language (ASL) for Deaf native signers and hearing non-signers. Facial expressions were presented in isolation (Experiment 1) or in an ASL verb context (Experiment 2). Participants performed ABX…

  18. Facial First Impressions Across Culture: Data-Driven Modeling of Chinese and British Perceivers' Unconstrained Facial Impressions.

    PubMed

    Sutherland, Clare A M; Liu, Xizi; Zhang, Lingshan; Chu, Yingtung; Oldmeadow, Julian A; Young, Andrew W

    2018-04-01

    People form first impressions from facial appearance rapidly, and these impressions can have considerable social and economic consequences. Three dimensions can explain Western perceivers' impressions of Caucasian faces: approachability, youthful-attractiveness, and dominance. Impressions along these dimensions are theorized to be based on adaptive cues to threat detection or sexual selection, making it likely that they are universal. We tested whether the same dimensions of facial impressions emerge across culture by building data-driven models of first impressions of Asian and Caucasian faces derived from Chinese and British perceivers' unconstrained judgments. We then cross-validated the dimensions with computer-generated average images. We found strong evidence for common approachability and youthful-attractiveness dimensions across perceiver and face race, with some evidence of a third dimension akin to capability. The models explained ~75% of the variance in facial impressions. In general, the findings demonstrate substantial cross-cultural agreement in facial impressions, especially on the most salient dimensions.
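    The kind of data-driven dimension modelling described here can be sketched by running PCA on a faces-by-traits rating matrix and checking how much variance the first few components capture. The toy data below are synthetic, and the setup is only an assumed illustration of the general approach, not the authors' actual cross-cultural models.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)

# Toy rating matrix: 100 faces rated on 12 trait adjectives (already z-scored).
# In a data-driven model, the principal components of such a matrix are the
# candidate impression dimensions (e.g. approachability, youthful-attractiveness).
n_faces, n_traits = 100, 12
latent = rng.normal(size=(n_faces, 3))             # three underlying dimensions
loadings = rng.normal(size=(3, n_traits))
ratings = latent @ loadings + rng.normal(scale=0.5, size=(n_faces, n_traits))

pca = PCA().fit(ratings)
explained = pca.explained_variance_ratio_[:3].sum()
print(f"variance explained by the first three components: {explained:.0%}")
```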

  19. Factors contributing to the adaptation aftereffects of facial expression.

    PubMed

    Butler, Andrea; Oruc, Ipek; Fox, Christopher J; Barton, Jason J S

    2008-01-29

    Previous studies have demonstrated the existence of adaptation aftereffects for facial expressions. Here we investigated which aspects of facial stimuli contribute to these aftereffects. In Experiment 1, we examined the role of local adaptation to image elements such as curvature, shape and orientation, independent of expression, by using hybrid faces constructed from either the same or opposing expressions. While hybrid faces made with consistent expressions generated aftereffects as large as those with normal faces, there were no aftereffects from hybrid faces made from different expressions, despite the fact that these contained the same local image elements. In Experiment 2, we examined the role of facial features independent of the normal face configuration by contrasting adaptation with whole faces to adaptation with scrambled faces. We found that scrambled faces also generated significant aftereffects, indicating that expressive features without a normal facial configuration could generate expression aftereffects. In Experiment 3, we examined the role of facial configuration by using schematic faces made from line elements that in isolation do not carry expression-related information (e.g. curved segments and straight lines) but that convey an expression when arranged in a normal facial configuration. We obtained a significant aftereffect for facial configurations but not scrambled configurations of these line elements. We conclude that facial expression aftereffects are not due to local adaptation to image elements but due to high-level adaptation of neural representations that involve both facial features and facial configuration.

  20. Facial palsy: what can the multidisciplinary team do?

    PubMed Central

    Butler, Daniel P; Grobbelaar, Adriaan O

    2017-01-01

    The functional and psychosocial impact of facial paralysis on the patient is significant. In response, a broad spectrum of treatment options exist and are provided by a multitude of health care practitioners. The cause and duration of the facial weakness can vary widely and the optimal care pathway varies. To optimize patient outcome, those involved in the care of patients with facial palsy should collaborate within comprehensive multidisciplinary teams (MDTs). At an international level, those involved in the care of patients with facial paralysis should aim to create standardized guidelines on which outcome domains matter most to patients to aid the identification of high quality care. This review summarizes the causes and treatment options for facial paralysis and discusses the subsequent importance of multidisciplinary care in the management of patients with this condition. Further discussion is given to the extended role of the MDT in determining what constitutes quality in facial palsy care to aid the creation of accepted care pathways and delineate best practice. PMID:29026314