Sample records for robust facial component

  1. Empirical mode decomposition-based facial pose estimation inside video sequences

    NASA Astrophysics Data System (ADS)

    Qing, Chunmei; Jiang, Jianmin; Yang, Zhijing

    2010-03-01

    We describe a new pose-estimation algorithm that integrates the strengths of empirical mode decomposition (EMD) and mutual information. While mutual information is exploited to measure the similarity between facial images to estimate poses, EMD is exploited to decompose input facial images into a number of intrinsic mode function (IMF) components, which redistribute the effects of noise, expression changes, and illumination variations such that, when the input facial image is described by the selected IMF components, these negative effects are minimized. Extensive experiments were carried out in comparison with existing representative techniques, and the results show that the proposed algorithm achieves better pose-estimation performance, with robustness to noise corruption, illumination variation, and facial expressions.
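
    A minimal sketch of the mutual-information half of this pipeline, assuming a histogram-based estimate over grayscale intensities; the bin count and function names are illustrative, not from the paper:

    ```python
    import numpy as np

    def mutual_information(img_a: np.ndarray, img_b: np.ndarray, bins: int = 64) -> float:
        """Estimate I(A; B) from the joint histogram of pixel intensities."""
        joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
        pxy = joint / joint.sum()                    # joint probability table
        px = pxy.sum(axis=1, keepdims=True)          # marginal of image A
        py = pxy.sum(axis=0, keepdims=True)          # marginal of image B
        nz = pxy > 0                                 # skip empty cells, avoid log(0)
        return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
    ```

    In a scheme of this kind, the probe face (described by its selected IMF components) would be scored against reference images at known poses, and the highest-MI reference would give the pose estimate.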

  2. Image ratio features for facial expression recognition application.

    PubMed

    Song, Mingli; Tao, Dacheng; Liu, Zicheng; Li, Xuelong; Zhou, Mengchu

    2010-06-01

    Video-based facial expression recognition is a challenging problem in computer vision and human-computer interaction. To address this problem, texture features have been extracted and widely used, because they can capture image intensity changes caused by skin deformation. However, existing texture features encounter problems with albedo and lighting variations. To solve both problems, we propose a new texture feature called image ratio features. Compared with previously proposed texture features, e.g., high gradient component features, image ratio features are more robust to albedo and lighting variations. In addition, to further improve facial expression recognition accuracy based on image ratio features, we combine image ratio features with facial animation parameters (FAPs), which describe the geometric motions of facial feature points. The performance evaluation is based on the Carnegie Mellon University Cohn-Kanade database, our own database, and the Japanese Female Facial Expression database. Experimental results show that the proposed image ratio feature is more robust to albedo and lighting variations and that the combination of image ratio features and FAPs outperforms each feature alone. In addition, we study asymmetric facial expressions based on our own facial expression database and demonstrate the superior performance of our combined expression recognition system.
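
    A minimal sketch of the albedo-cancelling intuition behind ratio features, assuming a Lambertian-style model in which intensity factors into per-pixel albedo times shading: dividing an expressive face by an aligned neutral face of the same person cancels the unknown albedo, leaving the shading change caused by skin deformation. Alignment is assumed already done, and the epsilon is illustrative:

    ```python
    import numpy as np

    def ratio_image(expressive: np.ndarray, neutral: np.ndarray, eps: float = 1e-6) -> np.ndarray:
        """Per-pixel ratio; with I = albedo * shading, the albedo term cancels."""
        expressive = expressive.astype(np.float64)
        neutral = neutral.astype(np.float64)
        return expressive / (neutral + eps)          # albedo-insensitive deformation map
    ```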

  3. Facial and semantic emotional interference: A pilot study on the behavioral and cortical responses to the dual valence association task

    PubMed Central

    2011-01-01

    Background Integration of compatible or incompatible emotional valence and semantic information is an essential aspect of complex social interactions. A modified version of the Implicit Association Test (IAT), called the Dual Valence Association Task (DVAT), was designed in order to measure conflict resolution processing arising from the compatibility/incompatibility of semantic and facial valence. The DVAT involves two emotional valence evaluative tasks which elicit two forms of emotional compatible/incompatible associations (facial and semantic). Methods Behavioural measures and event-related potentials were recorded while participants performed the DVAT. Results Behavioural data showed a robust effect that distinguished compatible/incompatible tasks. The effects of valence and contextual association (between facial and semantic stimuli) showed early discrimination in the N170 component for faces. The LPP component was modulated by the compatibility of the DVAT. Conclusions The results suggest that the DVAT is a robust paradigm for studying the emotional interference effect in the processing of simultaneous information from semantic and facial stimuli. PMID:21489277

  4. Sparse coding for flexible, robust 3D facial-expression synthesis.

    PubMed

    Lin, Yuxu; Song, Mingli; Quynh, Dao Thi Phuong; He, Ying; Chen, Chun

    2012-01-01

    Computer animation researchers have been extensively investigating 3D facial-expression synthesis for decades. However, flexible, robust production of realistic 3D facial expressions is still technically challenging. A proposed modeling framework applies sparse coding to synthesize 3D expressive faces, using specified coefficients or expression examples. It also robustly recovers facial expressions from noisy and incomplete data. This approach can synthesize higher-quality expressions in less time than the state-of-the-art techniques.
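
    A minimal sketch of synthesis by sparse coding under the assumption that an expressive face (here a flattened vector of 3-D vertex coordinates) is reconstructed as a sparse combination of exemplar faces; scikit-learn's Lasso is an illustrative stand-in for the paper's solver, and all names and parameters are assumptions:

    ```python
    import numpy as np
    from sklearn.linear_model import Lasso

    def sparse_expression_coeffs(dictionary: np.ndarray, target: np.ndarray,
                                 alpha: float = 0.01) -> np.ndarray:
        """dictionary: (n_coords, n_exemplars); target: (n_coords,)."""
        solver = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000)
        solver.fit(dictionary, target)               # L1 penalty keeps few exemplars active
        return solver.coef_                          # sparse weights over exemplars

    def synthesize(dictionary: np.ndarray, coeffs: np.ndarray) -> np.ndarray:
        return dictionary @ coeffs                   # reconstructed expressive face
    ```

    Because only a few exemplars receive nonzero weight, noisy or incomplete targets are reconstructed from plausible combinations of clean examples, which is the robustness the abstract refers to.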

  5. High-resolution face verification using pore-scale facial features.

    PubMed

    Li, Dong; Zhou, Huiling; Lam, Kin-Man

    2015-08-01

    Face recognition methods, which usually represent face images using holistic or local facial features, rely heavily on alignment. Their performance also degrades severely under variations in expression or pose, especially when there is only one gallery image per subject. With the easy access to high-resolution (HR) face images nowadays, some HR face databases have recently been developed. However, few studies have tackled the use of HR information for face recognition or verification. In this paper, we propose a pose-invariant face-verification method, robust to alignment errors, that uses HR information based on pore-scale facial features. A new keypoint descriptor, pore-PCA-SIFT (PPCASIFT), adapted from PCA-SIFT (Principal Component Analysis-Scale Invariant Feature Transform), is devised for the extraction of a compact set of distinctive pore-scale facial features. Having matched the pore-scale features of two face regions, an effective robust-fitting scheme is proposed for the face-verification task. Experiments show that, with only one frontal-view gallery image per subject, our proposed method outperforms a number of standard verification methods and achieves excellent accuracy even when the faces are under large variations in expression and pose.

  6. Subject independent facial expression recognition with robust face detection using a convolutional neural network.

    PubMed

    Matsugu, Masakazu; Mori, Katsuhiko; Mitari, Yusuke; Kaneda, Yuji

    2003-01-01

    Reliable detection of ordinary facial expressions (e.g., smiles), despite variability among individuals as well as in face appearance, is an important step toward the realization of a perceptual user interface with autonomous perception of persons. We describe a rule-based algorithm for robust facial expression recognition combined with robust face detection using a convolutional neural network. In this study, we address the problem of subject independence as well as translation, rotation, and scale invariance in the recognition of facial expressions. The results show reliable detection of smiles with a recognition rate of 97.6% for 5600 still images of more than 10 subjects. The proposed algorithm also demonstrated the ability to discriminate smiling from talking based on the saliency score obtained from voting visual cues. To the best of our knowledge, it is the first facial expression recognition model with the property of subject independence combined with robustness to variability in facial appearance.

  7. Neurophysiology of spontaneous facial expressions: I. Motor control of the upper and lower face is behaviorally independent in adults.

    PubMed

    Ross, Elliott D; Gupta, Smita S; Adnan, Asif M; Holden, Thomas L; Havlicek, Joseph; Radhakrishnan, Sridhar

    2016-03-01

    Facial expressions are described traditionally as monolithic entities. However, humans have the capacity to produce facial blends, in which the upper and lower face simultaneously display different emotional expressions. This, in turn, has led to the Component Theory of facial expressions. Recent neuroanatomical studies in monkeys have demonstrated that there are separate cortical motor areas for controlling the upper and lower face that, presumably, also occur in humans. The lower face is represented on the posterior ventrolateral surface of the frontal lobes in the primary motor and premotor cortices, and the upper face is represented on the medial surface of the posterior frontal lobes in the supplementary motor and anterior cingulate cortices. Our laboratory has been engaged in a series of studies exploring the perception and production of facial blends. Using high-speed videography, we began measuring the temporal aspects of facial expressions to develop a more complete understanding of the neurophysiology underlying facial expressions and facial blends. The goal of the research presented here was to determine whether spontaneous facial expressions in adults are predominantly monolithic or exhibit independent motor control of the upper and lower face. We found that spontaneous facial expressions are very complex and that the motor control of the upper and lower face is overwhelmingly independent, thus robustly supporting the Component Theory of facial expressions. Seemingly monolithic expressions, be they full facial or facial blends, are most likely the result of a timing coincidence rather than synchronous coordination between the ventrolateral and medial cortical motor areas responsible for controlling the lower and upper face, respectively. In addition, we found evidence that the right and left face may also exhibit independent motor control, supporting the concept that spontaneous facial expressions are organized predominantly across the horizontal facial axis and secondarily across the vertical axis. Published by Elsevier Ltd.

  8. Wavelet decomposition based principal component analysis for face recognition using MATLAB

    NASA Astrophysics Data System (ADS)

    Sharma, Mahesh Kumar; Sharma, Shashikant; Leeprechanon, Nopbhorn; Ranjan, Aashish

    2016-03-01

    For the realization of face recognition systems in the static as well as the real-time frame, algorithms such as principal component analysis, independent component analysis, linear discriminant analysis, neural networks, and genetic algorithms have been used for decades. This paper discusses a wavelet decomposition-based principal component analysis approach to face recognition. Principal component analysis is chosen over other algorithms due to its relative simplicity, efficiency, and robustness. Face recognition refers to identifying a person from facial images and bears some resemblance to factor analysis, i.e., the extraction of the principal components of an image. Principal component analysis suffers from some drawbacks, chiefly its poor discriminatory power and the large computational load of finding eigenvectors. These drawbacks can be greatly reduced by combining wavelet transform decomposition for feature extraction with principal component analysis for pattern representation and classification, analyzing the facial images jointly in the space and frequency domains. The experimental results show that this face recognition method achieves a significant percentage improvement in recognition rate as well as better computational efficiency.
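
    A minimal sketch of the combination described above, assuming a level-1 Haar decomposition whose low-frequency approximation band feeds PCA; the wavelet family, level, and component count are illustrative:

    ```python
    import numpy as np
    import pywt
    from sklearn.decomposition import PCA

    def wavelet_features(faces: np.ndarray) -> np.ndarray:
        """faces: (n_images, H, W) -> flattened level-1 approximation bands."""
        approx = [pywt.dwt2(f, "haar")[0] for f in faces]   # keep cA, drop detail bands
        return np.stack([a.ravel() for a in approx])

    faces = np.random.rand(100, 64, 64)          # stand-in for a face database
    X = wavelet_features(faces)                  # quarter-size, smoothed representation
    pca = PCA(n_components=20).fit(X)            # eigenfaces in the wavelet domain
    projections = pca.transform(X)               # compact features for a classifier
    ```

    Because the approximation band has a quarter of the original pixels, the eigenvector computation that dominates plain PCA becomes correspondingly cheaper, which is the efficiency gain the abstract refers to.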

  9. A study on facial expressions recognition

    NASA Astrophysics Data System (ADS)

    Xu, Jingjing

    2017-09-01

    In terms of communication, postures and facial expressions of feelings such as happiness, anger, and sadness play important roles in conveying information. With the development of the technology, a number of algorithms dealing with face alignment, face landmark detection, classification, facial landmark localization, and pose estimation have recently been put forward. However, many challenges and problems remain to be addressed. In this paper, several techniques for handling facial expression recognition and pose are summarized and analyzed, including a pose-indexed multi-view method for face alignment, robust facial landmark detection under significant head pose and occlusion, partitioning of the input domain for classification, and robust statistical face frontalization.

  10. Face recognition via edge-based Gabor feature representation for plastic surgery-altered images

    NASA Astrophysics Data System (ADS)

    Chude-Olisah, Chollette C.; Sulong, Ghazali; Chude-Okonkwo, Uche A. K.; Hashim, Siti Z. M.

    2014-12-01

    Plastic surgery procedures on the face introduce skin texture variations between images of the same person (intra-subject), thereby making the task of face recognition more difficult than in the normal scenario. Usually, in contemporary face recognition systems, the original gray-level face image is used as input to the Gabor descriptor, which translates to encoding some texture properties of the face image. The texture-encoding process significantly degrades the performance of such systems in the case of plastic surgery due to the presence of surgically induced intra-subject variations. Based on the proposition that the shape of significant facial components such as the eyes, nose, eyebrows, and mouth remains unchanged after plastic surgery, this paper employs an edge-based Gabor feature representation approach for the recognition of surgically altered face images. We use the edge information, which is dependent on the shapes of the significant facial components, to address the plastic surgery-induced texture variation problems. To ensure that the significant facial components represent useful edge information with little or no false edges, a simple illumination normalization technique is proposed for preprocessing. A Gabor wavelet is applied to the edge image to accentuate the uniqueness of the significant facial components for discriminating among different subjects. The performance of the proposed method is evaluated on the Georgia Tech (GT) and the Labeled Faces in the Wild (LFW) databases with illumination and expression problems, and on the plastic surgery database with texture changes. Results show that the proposed edge-based Gabor feature representation approach is robust against plastic surgery-induced face variations amidst expression and illumination problems and outperforms the existing plastic surgery face recognition methods reported in the literature.
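
    A minimal sketch of the edge-then-Gabor idea: compute an edge map that depends on the shapes of the significant facial components, then accentuate it with a small Gabor bank. Canny stands in for the edge detector, the proposed illumination normalization is omitted, and all parameters are assumptions:

    ```python
    import cv2
    import numpy as np

    face = np.random.randint(0, 256, (128, 128), dtype=np.uint8)  # stand-in face image
    edges = cv2.Canny(face, 80, 160).astype(np.float32)           # shape-dependent edge map

    responses = []
    for theta in np.linspace(0, np.pi, 4, endpoint=False):        # 4 orientations
        kernel = cv2.getGaborKernel(ksize=(21, 21), sigma=4.0, theta=theta,
                                    lambd=10.0, gamma=0.5)
        responses.append(cv2.filter2D(edges, cv2.CV_32F, kernel))
    feature = np.concatenate([r.ravel() for r in responses])      # edge-based Gabor vector
    ```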

  11. SparCLeS: dynamic l₁ sparse classifiers with level sets for robust beard/moustache detection and segmentation.

    PubMed

    Le, T Hoang Ngan; Luu, Khoa; Savvides, Marios

    2013-08-01

    Robust facial hair detection and segmentation is a highly valued soft biometric attribute for carrying out forensic facial analysis. In this paper, we propose a novel and fully automatic system, called SparCLeS, for beard/moustache detection and segmentation in challenging facial images. SparCLeS uses the multiscale self-quotient (MSQ) algorithm to preprocess facial images and deal with illumination variation. Histogram of oriented gradients (HOG) features are extracted from the preprocessed images and a dynamic sparse classifier is built using these features to classify a facial region as either containing skin or facial hair. A level set based approach, which makes use of the advantages of both global and local information, is then used to segment the regions of a face containing facial hair. Experimental results demonstrate the effectiveness of our proposed system in detecting and segmenting facial hair regions in images drawn from three databases, i.e., the NIST Multiple Biometric Grand Challenge (MBGC) still face database, the NIST Color Facial Recognition Technology FERET database, and the Labeled Faces in the Wild (LFW) database.
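
    A minimal sketch of the HOG feature extraction step using scikit-image; the MSQ preprocessing and the dynamic sparse classifier are not reproduced here, and all parameters are illustrative assumptions:

    ```python
    import numpy as np
    from skimage.feature import hog

    patch = np.random.rand(32, 32)               # stand-in for a preprocessed facial region
    features = hog(patch, orientations=9, pixels_per_cell=(8, 8),
                   cells_per_block=(2, 2), block_norm="L2-Hys")
    # 'features' would then be scored by the dynamic sparse classifier to label
    # the region as skin versus facial hair.
    ```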

  12. Autonomous facial recognition system inspired by human visual system based logarithmical image visualization technique

    NASA Astrophysics Data System (ADS)

    Wan, Qianwen; Panetta, Karen; Agaian, Sos

    2017-05-01

    Autonomous facial recognition systems are widely used in real-life applications, such as homeland border security, law enforcement identification and authentication, and video-based surveillance analysis. Issues such as low image quality and non-uniform illumination, as well as variations in pose and facial expression, can impair the performance of recognition systems. To address the non-uniform illumination challenge, we present a novel robust autonomous facial recognition system inspired by the human visual system and based on a so-called logarithmic image visualization technique. In this paper, the proposed method, for the first time, couples the logarithmic image visualization technique with the local binary pattern to perform discriminative feature extraction for a facial recognition system. The Yale database, the Yale-B database, and the AT&T database are used for accuracy and efficiency testing. Extensive computer simulation demonstrates the method's efficiency, accuracy, and robustness to illumination variation.
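
    A minimal sketch pairing a logarithmic intensity mapping (a generic stand-in for the paper's human-visual-system-inspired visualization technique) with local-binary-pattern feature extraction; all parameters are illustrative:

    ```python
    import numpy as np
    from skimage.feature import local_binary_pattern

    def log_lbp_histogram(face: np.ndarray, P: int = 8, R: int = 1) -> np.ndarray:
        log_face = np.log1p(face.astype(np.float64))          # compress illumination range
        log_face = np.uint8(255 * log_face / log_face.max())  # rescale for LBP coding
        codes = local_binary_pattern(log_face, P, R, method="uniform")
        hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
        return hist                                           # illumination-robust descriptor
    ```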

  13. Face Aging Effect Simulation Using Hidden Factor Analysis Joint Sparse Representation.

    PubMed

    Yang, Hongyu; Huang, Di; Wang, Yunhong; Wang, Heng; Tang, Yuanyan

    2016-06-01

    Face aging simulation has received increasing attention in recent years, yet it remains a challenge to generate convincing and natural age-progressed face images. In this paper, we present a novel approach to this issue using hidden factor analysis joint sparse representation. In contrast to the majority of approaches in the literature, which handle the facial texture integrally, the proposed aging approach separately models the person-specific facial properties, which tend to be stable over a relatively long period, and the age-specific clues, which gradually change over time. It then transforms the age component to a target age group via sparse reconstruction, yielding aging effects, which are finally combined with the identity component to achieve the aged face. Experiments are carried out on three face aging databases, and the results clearly demonstrate the effectiveness and robustness of the proposed method in rendering a face with aging effects. In addition, a series of evaluations proves its validity with respect to identity preservation and aging effect generation.

  14. Composite Artistry Meets Facial Recognition Technology: Exploring the Use of Facial Recognition Technology to Identify Composite Images

    DTIC Science & Technology

    2011-09-01

    be submitted into a facial recognition program for comparison with millions of possible matches, offering abundant opportunities to identify the...to leverage the robust number of comparative opportunities associated with facial recognition programs. This research investigates the efficacy of...combining composite forensic artistry with facial recognition technology to create a viable investigative tool to identify suspects, as well as better

  15. Posed versus spontaneous facial expressions are modulated by opposite cerebral hemispheres.

    PubMed

    Ross, Elliott D; Pulusu, Vinay K

    2013-05-01

    Clinical research has indicated that the left face is more expressive than the right face, suggesting that modulation of facial expressions is lateralized to the right hemisphere. The findings, however, are controversial because the results explain, on average, approximately 4% of the data variance. Using high-speed videography, we sought to determine if movement-onset asymmetry was a more powerful research paradigm than terminal movement asymmetry. The results were very robust, explaining up to 70% of the data variance. Posed expressions began overwhelmingly on the right face whereas spontaneous expressions began overwhelmingly on the left face. This dichotomy was most robust for upper facial expressions. In addition, movement-onset asymmetries did not predict terminal movement asymmetries, which were not significantly lateralized. The results support recent neuroanatomic observations that upper versus lower facial movements have different forebrain motor representations and recent behavioral constructs that posed versus spontaneous facial expressions are modulated preferentially by opposite cerebral hemispheres and that spontaneous facial expressions are graded rather than non-graded movements. Published by Elsevier Ltd.

  16. Facial biometrics of peri-oral changes in Crohn's disease.

    PubMed

    Zou, L; Adegun, O K; Willis, A; Fortune, Farida

    2014-05-01

    Crohn's disease is a chronic relapsing and remitting inflammatory condition which can affect any part of the gastrointestinal tract. In the oro-facial region, patients can present with peri-oral swellings, which result in severe facial disfigurement. To date, assessing the degree of facial change and evaluating treatment outcomes has relied on clinical observation and semi-quantitative methods. In this paper, we describe the development of a robust and reproducible measurement strategy using 3-D facial biometrics to objectively quantify the extent and progression of oro-facial Crohn's disease. Using facial laser scanning, 32 serial images from 13 Crohn's patients attending the Oral Medicine clinic were acquired during relapse, remission, and post-treatment phases. Utilising theories of coordinate metrology, the facial images were subjected to registration, identification of regions of interest, and reproducible repositioning prior to obtaining volume measurements. To quantify the changes in tissue volume, scan images from consecutive appointments were compared to the baseline (first) scan image. A reproducibility test was performed to ascertain the degree of uncertainty in the volume measurements. 3-D facial biometric imaging is a reliable method to identify and quantify peri-oral swelling in Crohn's patients. Comparison of facial scan images at different phases of the disease precisely revealed profile and volume changes. The volume measurements were highly reproducible, as adjudged from the 1% standard deviation. 3-D facial biometric measurement in Crohn's patients with oro-facial involvement offers a quick, robust, economical, and objective approach for guided therapeutic intervention and routine assessment of treatment efficacy in the clinic.

  17. Spontaneous Facial Actions Map onto Emotional Experiences in a Non-social Context: Toward a Component-Based Approach

    PubMed Central

    Namba, Shushi; Kabir, Russell S.; Miyatani, Makoto; Nakao, Takashi

    2017-01-01

    While numerous studies have examined the relationships between facial actions and emotions, they have yet to account for the ways that specific spontaneous facial expressions map onto emotional experiences induced without expressive intent. Moreover, previous studies emphasized that a fine-grained investigation of facial components could establish the coherence of facial actions with actual internal states. Therefore, this study aimed to accumulate evidence for the correspondence between spontaneous facial components and emotional experiences. We reinvestigated data from previous research which secretly recorded the spontaneous facial expressions of Japanese participants as they watched film clips designed to evoke four different target emotions: surprise, amusement, disgust, and sadness. The participants rated their emotional experiences via a self-reported questionnaire of 16 emotions. These spontaneous facial expressions were coded using the Facial Action Coding System, the gold standard for classifying visible facial movements. We corroborated each facial action that was present in the emotional experiences by applying stepwise regression models. The results showed that spontaneous facial components occurred in ways that cohere with their evolutionary functions, based on the rating values of emotional experiences (e.g., the inner brow raiser might be involved in the evaluation of novelty). This study provides new empirical evidence for the correspondence between each spontaneous facial component and first-person internal states of emotion as reported by the expresser. PMID:28522979

  18. Supradural inflammatory soup in awake and freely moving rats induces facial allodynia that is blocked by putative immune modulators.

    PubMed

    Wieseler, Julie; Ellis, Amanda; McFadden, Andrew; Stone, Kendra; Brown, Kimberley; Cady, Sara; Bastos, Leandro F; Sprunger, David; Rezvani, Niloofar; Johnson, Kirk; Rice, Kenner C; Maier, Steven F; Watkins, Linda R

    2017-06-01

    Facial allodynia is a migraine symptom that is generally considered to represent a pivotal point in migraine progression. Treatment before development of facial allodynia tends to be more successful than treatment afterwards. As such, understanding the underlying mechanisms of facial allodynia may lead to a better understanding of the mechanisms underlying migraine. Migraine facial allodynia is modeled by applying inflammatory soup (histamine, bradykinin, serotonin, prostaglandin E2) over the dura. Whether glial and/or immune activation contributes to such pain is unknown. Here we tested if trigeminal nucleus caudalis (Sp5C) glial and/or immune cells are activated following supradural inflammatory soup, and if putative glial/immune inhibitors suppress the consequent facial allodynia. Inflammatory soup was administered via bilateral indwelling supradural catheters in freely moving rats, inducing robust and reliable facial allodynia. Gene expression for microglial/macrophage activation markers, interleukin-1β, and tumor necrosis factor-α increased following inflammatory soup along with robust expression of facial allodynia. This provided the basis for pursuing studies of the behavioral effects of 3 diverse immunomodulatory drugs on facial allodynia. Pretreatment with either of two compounds broadly used as putative glial/immune inhibitors (minocycline, ibudilast) prevented the development of facial allodynia, as did treatment after supradural inflammatory soup but prior to the expression of facial allodynia. Lastly, the toll-like receptor 4 (TLR4) antagonist (+)-naltrexone likewise blocked development of facial allodynia after supradural inflammatory soup. Taken together, these exploratory data support that activated glia and/or immune cells may drive the development of facial allodynia in response to supradural inflammatory soup in unanesthetized male rats. Copyright © 2017 Elsevier B.V. All rights reserved.

  19. Facial expression recognition based on Weber local descriptor and sparse representation

    NASA Astrophysics Data System (ADS)

    Ouyang, Yan

    2018-03-01

    Automatic facial expression recognition has been one of the research hotspots in computer vision for nearly ten years. During that decade, many state-of-the-art methods have been proposed that achieve very high accuracy on face images free of interference. Nowadays, many researchers have begun to tackle the task of classifying facial expression images with corruptions and occlusions, and the sparse representation-based classification (SRC) framework has been widely used because it is robust to corruptions and occlusions. This paper therefore proposes a novel facial expression recognition method based on the Weber local descriptor (WLD) and sparse representation. The method consists of three parts: first, the face images are divided into many local patches; then the WLD histogram of each patch is extracted; finally, all the WLD histograms are concatenated into a single feature vector and combined with SRC to classify the facial expressions. Experimental results on the Cohn-Kanade database show that the proposed method is robust to occlusions and corruptions.
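
    A minimal sketch of the WLD differential-excitation component behind those patch histograms: for each pixel, the intensity differences to its eight neighbors are summed, divided by the center intensity, and passed through arctan. The descriptor's orientation component is omitted and the bin count is an assumption:

    ```python
    import numpy as np
    from scipy.ndimage import convolve

    def wld_excitation(patch: np.ndarray) -> np.ndarray:
        patch = patch.astype(np.float64) + 1e-6             # avoid division by zero
        kernel = np.array([[1, 1, 1], [1, -8, 1], [1, 1, 1]], float)
        diff_sum = convolve(patch, kernel, mode="reflect")  # sum of neighbor differences
        return np.arctan(diff_sum / patch)                  # differential excitation map

    def wld_histogram(patch: np.ndarray, bins: int = 16) -> np.ndarray:
        xi = wld_excitation(patch)
        hist, _ = np.histogram(xi, bins=bins, range=(-np.pi / 2, np.pi / 2), density=True)
        return hist                                         # per-patch feature fed to SRC
    ```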

  20. Classifying Facial Actions

    PubMed Central

    Donato, Gianluca; Bartlett, Marian Stewart; Hager, Joseph C.; Ekman, Paul; Sejnowski, Terrence J.

    2010-01-01

    The Facial Action Coding System (FACS) [23] is an objective method for quantifying facial movement in terms of component actions. This system is widely used in behavioral investigations of emotion, cognitive processes, and social interaction. The coding is presently performed by highly trained human experts. This paper explores and compares techniques for automatically recognizing facial actions in sequences of images. These techniques include analysis of facial motion through estimation of optical flow; holistic spatial analysis, such as principal component analysis, independent component analysis, local feature analysis, and linear discriminant analysis; and methods based on the outputs of local filters, such as Gabor wavelet representations and local principal components. Performance of these systems is compared to naive and expert human subjects. Best performances were obtained using the Gabor wavelet representation and the independent component representation, both of which achieved 96 percent accuracy for classifying 12 facial actions of the upper and lower face. The results provide converging evidence for the importance of using local filters, high spatial frequencies, and statistical independence for classifying facial actions. PMID:21188284

  1. Wireless electronic-tattoo for long-term high fidelity facial muscle recordings

    NASA Astrophysics Data System (ADS)

    Inzelberg, Lilah; David Pur, Moshe; Steinberg, Stanislav; Rand, David; Farah, Maroun; Hanein, Yael

    2017-05-01

    Facial surface electromyography (sEMG) is a powerful tool for the objective evaluation of human facial expressions and has accordingly been suggested in recent years for a wide range of psychological and neurological assessment applications. Owing to technical challenges, in particular the cumbersome gelled electrodes, the use of facial sEMG has so far been limited. Using innovative temporary tattoos optimized specifically for facial applications, we demonstrate the use of sEMG as a platform for robust identification of facial muscle activation. In particular, differentiation between diverse facial muscles is demonstrated. We also demonstrate a wireless version of the system. The potential use of the presented technology for user-experience monitoring and objective psychological and neurological evaluation is discussed.

  2. Recognizing Facial Slivers.

    PubMed

    Gilad-Gutnick, Sharon; Harmatz, Elia Samuel; Tsourides, Kleovoulos; Yovel, Galit; Sinha, Pawan

    2018-07-01

    We report here an unexpectedly robust ability of healthy human individuals (n = 40) to recognize extremely distorted needle-like facial images, challenging the well-entrenched notion that veridical spatial configuration is necessary for extracting facial identity. In face identification tasks using parametrically compressed internal and external features, we found that the sum of performances on each cue falls significantly short of performance on full faces, despite the equal visual information available from both measures (with full faces essentially being a superposition of internal and external features). We hypothesize that this large deficit stems from the use of positional information about how the internal features are positioned relative to the external features. To test this, we systematically changed the relations between internal and external features and found preferential encoding of vertical but not horizontal spatial relationships in facial representations (n = 20). Finally, we employed magnetoencephalography imaging (n = 20) to demonstrate a close mapping between the behavioral psychometric curve and the amplitude of the M250 face-familiarity component, but not the M170 face-sensitive evoked response field component, providing evidence that the M250 can be modulated by faces that are perceptually identifiable, irrespective of extreme distortions to the face's veridical configuration. We theorize that the tolerance to compressive distortions has evolved from the need to recognize faces across varying viewpoints. Our findings help clarify the important, but poorly defined, concept of facial configuration and also enable an association between behavioral performance and previously reported neural correlates of face perception.

  3. Facial expression recognition based on improved local ternary pattern and stacked auto-encoder

    NASA Astrophysics Data System (ADS)

    Wu, Yao; Qiu, Weigen

    2017-08-01

    In order to enhance the robustness of facial expression recognition, we propose a method based on an improved local ternary pattern (LTP) combined with a stacked auto-encoder (SAE). The method extracts features using the improved LTP and then uses the stacked auto-encoder as the detector and classifier of those features, realizing the combination of the improved LTP and the SAE in facial expression recognition. The recognition rate on the CK+ database improves significantly.

  4. Multiple mechanisms in the perception of face gender: Effect of sex-irrelevant features.

    PubMed

    Komori, Masashi; Kawamura, Satoru; Ishihara, Shigekazu

    2011-06-01

    Effects of sex-relevant and sex-irrelevant facial features on the evaluation of facial gender were investigated. Participants rated masculinity of 48 male facial photographs and femininity of 48 female facial photographs. Eighty feature points were measured on each of the facial photographs. Using a generalized Procrustes analysis, facial shapes were converted into multidimensional vectors, with the average face as a starting point. Each vector was decomposed into a sex-relevant subvector and a sex-irrelevant subvector which were, respectively, parallel and orthogonal to the main male-female axis. Principal components analysis (PCA) was performed on the sex-irrelevant subvectors. One principal component was negatively correlated with both perceived masculinity and femininity, and another was correlated only with femininity, though both components were orthogonal to the male-female dimension (and thus by definition sex-irrelevant). These results indicate that evaluation of facial gender depends on sex-irrelevant as well as sex-relevant facial features.
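
    A minimal numpy sketch of the decomposition described above: a Procrustes-aligned shape vector (expressed relative to the average face) is split into a sex-relevant subvector parallel to the male-female axis and a sex-irrelevant residual orthogonal to it; variable names are illustrative:

    ```python
    import numpy as np

    def split_by_axis(shape_vec: np.ndarray, mf_axis: np.ndarray):
        """shape_vec: face minus average face; mf_axis: mean male minus mean female."""
        axis = mf_axis / np.linalg.norm(mf_axis)
        sex_relevant = (shape_vec @ axis) * axis            # projection onto the axis
        sex_irrelevant = shape_vec - sex_relevant           # orthogonal residual
        return sex_relevant, sex_irrelevant
    ```

    PCA over the stacked sex-irrelevant residuals would then yield the components whose correlations with perceived masculinity and femininity the study reports.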

  5. Is it possible to define the ideal lips?

    PubMed

    Kar, M; Muluk, N B; Bafaqeeh, S A; Cingi, C

    2018-02-01

    The lips are an essential component of the symmetry and aesthetics of the face. Cosmetic surgery to modify the lips has recently gained in popularity, but the results are in some cases disastrous. In this review, we describe the features of the ideal lips for an individual's face. The features of the ideal lips with respect to facial anatomy, important anatomical landmarks of the face, the facial proportions of the lips, and ethnic and sexual differences are described. The projection and relative sizes of the upper and lower lips are as significant to lip aesthetics as the proportion of the lips to the rest of the facial structure. Robust, pouty lips are considered to be sexually attractive by both males and females. Horizontal thirds and the golden ratio describe the proportions that contribute to the beauty and attractiveness of the lips. In young Caucasians, the ideal ratio of the vertical height of the upper lip to that of the lower lip is 1:1.6. Blacks, genetically, have a greater lip volume. The shape and volume of a person's lips are of great importance in the human perception of beauty. The appearance of the lips in part determines the attractiveness of a person's face. In females, fuller lips in relation to facial width, as well as greater vermilion height, are considered attractive. Copyright © 2018 Società Italiana di Otorinolaringologia e Chirurgia Cervico-Facciale, Rome, Italy.

  6. Toward identifying specification requirements for digital bone-anchored prosthesis design incorporating substructure fabrication: a pilot study.

    PubMed

    Eggbeer, Dominic; Bibb, Richard; Evans, Peter

    2006-01-01

    This paper is the first in a series that aims to identify the specification requirements for advanced digital technologies that may be used to design and fabricate complex, soft tissue facial prostheses. Following a review of previously reported techniques, appropriate and currently available technologies were selected and applied in a pilot study. This study uses a range of optical surface scanning, computerized tomography, computer-aided design, and rapid prototyping technologies to capture, design, and fabricate a bone-anchored auricular prosthesis, including the retentive components. The techniques are assessed in terms of their effectiveness, and the results are used to identify future research and specification requirements to direct developments. The case study identifies that while digital technologies may be used to design implant-retained facial prostheses, many limitations need to be addressed to make the techniques clinically viable. It also identifies the need to develop a more robust specification that covers areas such as resolution, accuracy, materials, and design, against which potential technologies may be assessed. There is a need to develop a specification against which potential technologies may be assessed for their suitability in soft tissue facial prosthetics. The specification will be developed using further experimental research studies.

  7. Marker optimization for facial motion acquisition and deformation.

    PubMed

    Le, Binh H; Zhu, Mingyang; Deng, Zhigang

    2013-11-01

    A long-standing problem in marker-based facial motion capture is determining the optimal facial mocap marker layout. Despite its wide range of potential applications, this problem has not yet been systematically explored. This paper describes an approach to computing optimized marker layouts for facial motion acquisition as an optimization of characteristic control points from a set of high-resolution, ground-truth facial mesh sequences. Specifically, a thin-shell linear deformation model is imposed onto the example pose reconstruction process via optional hard constraints such as symmetry and multiresolution constraints. Through our experiments and comparisons, we validate the effectiveness, robustness, and accuracy of our approach. Besides guiding minimal yet effective placement of facial mocap markers, we also describe and demonstrate two selected applications: marker-based facial mesh skinning and multiresolution facial performance capture.

  8. Facial expression recognition based on improved deep belief networks

    NASA Astrophysics Data System (ADS)

    Wu, Yao; Qiu, Weigen

    2017-08-01

    In order to improve the robustness of facial expression recognition, a method based on the local binary pattern (LBP) combined with improved deep belief networks (DBNs) is proposed. This method uses LBP to extract features and then uses the improved deep belief networks as the detector and classifier of those features, realizing the combination of LBP and improved DBNs in facial expression recognition. On the JAFFE (Japanese Female Facial Expression) database, the recognition rate improves significantly.

  9. Individual differences in the recognition of facial expressions: an event-related potentials study.

    PubMed

    Tamamiya, Yoshiyuki; Hiraki, Kazuo

    2013-01-01

    Previous studies have shown that early posterior components of event-related potentials (ERPs) are modulated by facial expressions. The goal of the current study was to investigate individual differences in the recognition of facial expressions by examining the relationship between ERP components and the discrimination of facial expressions. Pictures of three facial expressions (angry, happy, and neutral) were presented to 36 young adults during ERP recording. Participants were asked to respond with a button press as soon as they recognized the expression depicted. A multiple regression analysis, with ERP components as predictor variables, assessed hits and reaction times in response to the facial expressions as dependent variables. The N170 amplitudes significantly predicted accuracy for angry and happy expressions, and the N170 latencies were predictive of accuracy for neutral expressions. The P2 amplitudes significantly predicted reaction time. The P2 latencies significantly predicted reaction times only for neutral faces. These results suggest that individual differences in the recognition of facial expressions emerge from early components in visual processing.

  10. Moving Faces

    ERIC Educational Resources Information Center

    Journal of College Science Teaching, 2005

    2005-01-01

    A recent study by Zara Ambadar and Jeffrey F. Cohn of the University of Pittsburgh and Jonathan W. Schooler of the University of British Columbia, examined how motion affects people's judgment of subtle facial expressions. Two experiments demonstrated robust effects of motion in facilitating the perception of subtle facial expressions depicting…

  11. Robust representation and recognition of facial emotions using extreme sparse learning.

    PubMed

    Shojaeilangari, Seyedehsamaneh; Yau, Wei-Yun; Nandakumar, Karthik; Li, Jun; Teoh, Eam Khwang

    2015-07-01

    Recognition of natural emotions from human faces is an interesting topic with a wide range of potential applications, such as human-computer interaction, automated tutoring systems, image and video retrieval, smart environments, and driver warning systems. Traditionally, facial emotion recognition systems have been evaluated on laboratory-controlled data, which is not representative of the environments faced in real-world applications. To robustly recognize facial emotions in real-world natural situations, this paper proposes an approach called extreme sparse learning, which has the ability to jointly learn a dictionary (set of basis vectors) and a nonlinear classification model. The proposed approach combines the discriminative power of the extreme learning machine with the reconstruction property of sparse representation to enable accurate classification when presented with noisy signals and imperfect data recorded in natural settings. In addition, this paper presents a new local spatio-temporal descriptor that is distinctive and pose-invariant. The proposed framework is able to achieve state-of-the-art recognition accuracy on both acted and spontaneous facial emotion databases.

  12. Facial Expression Recognition using Multiclass Ensemble Least-Square Support Vector Machine

    NASA Astrophysics Data System (ADS)

    Lawi, Armin; Sya'Rani Machrizzandi, M.

    2018-03-01

    Facial expression is one of the behavioral characteristics of human beings. The use of a biometric technology system with facial expression characteristics makes it possible to recognize a person's mood or emotion. The basic components of a facial expression analysis system are face detection, face image extraction, facial classification, and facial expression recognition. This paper uses the Principal Component Analysis (PCA) algorithm to extract facial features with expression parameters, i.e., happy, sad, neutral, angry, fear, and disgusted. A Multiclass Ensemble Least-Squares Support Vector Machine (MELS-SVM) is then used for the classification of facial expressions. The MELS-SVM model, evaluated on our 185 images of different expressions from 10 persons, showed a high accuracy of 99.998% using an RBF kernel.
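
    A minimal sketch of the PCA-then-classify pipeline with an RBF kernel; scikit-learn's SVC is an illustrative stand-in for the MELS-SVM ensemble, and the data here is synthetic:

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import SVC

    X = np.random.rand(185, 64 * 64)             # stand-in for flattened face images
    y = np.random.randint(0, 6, size=185)        # six expression labels

    model = make_pipeline(PCA(n_components=40),  # PCA-based expression features
                          SVC(kernel="rbf", C=10.0))
    model.fit(X, y)
    print(model.predict(X[:5]))                  # predicted expression labels
    ```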

  13. Neutral face classification using personalized appearance models for fast and robust emotion detection.

    PubMed

    Chiranjeevi, Pojala; Gopalakrishnan, Viswanath; Moogi, Pratibha

    2015-09-01

    Facial expression recognition is one of the open problems in computer vision. Robust neutral face recognition in real time is a major challenge for various supervised learning-based facial expression recognition methods. This is due to the fact that supervised methods cannot accommodate all appearance variability across faces with respect to race, pose, lighting, facial biases, and so on, in the limited amount of training data. Moreover, processing each and every frame to classify emotions is not required, as the user stays neutral for the majority of the time in usual applications like video chat or photo album/web browsing. Detecting the neutral state at an early stage, thereby bypassing those frames in emotion classification, saves computational power. In this paper, we propose a light-weight neutral versus emotion classification engine, which acts as a pre-processor to the traditional supervised emotion classification approaches. It dynamically learns the neutral appearance at key emotion (KE) points using a statistical texture model constructed from a set of reference neutral frames for each user. The proposed method is made robust to various types of user head motion by accounting for affine distortions based on the statistical texture model. Robustness to dynamic shifts of the KE points is achieved by evaluating similarities on a subset of neighborhood patches around each KE point, using prior information regarding the directionality of the specific facial action units acting on the respective KE point. As a result, the proposed method improves emotion recognition (ER) accuracy while reducing the computational complexity of the ER system, as validated on multiple databases.

  14. Static facial expression recognition with convolution neural networks

    NASA Astrophysics Data System (ADS)

    Zhang, Feng; Chen, Zhong; Ouyang, Chao; Zhang, Yifei

    2018-03-01

    Facial expression recognition is a currently active research topic in the fields of computer vision, pattern recognition, and artificial intelligence. In this paper, we have developed a convolutional neural network (CNN) for classifying human emotions from static facial expressions into one of seven facial emotion categories. We pre-train our CNN model on the combined FER2013 dataset, formed by its train, validation, and test sets, and fine-tune it on the extended Cohn-Kanade database. To reduce overfitting, we utilized dropout and batch normalization in addition to data augmentation. According to the experimental results, our CNN model has excellent classification performance and robustness for facial expression recognition.
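
    A minimal sketch of a CNN of this kind for seven-class expression recognition on 48x48 grayscale inputs (the FER2013 format), using the dropout and batch normalization the abstract mentions; the exact architecture is an assumption:

    ```python
    import torch
    import torch.nn as nn

    class ExpressionCNN(nn.Module):
        def __init__(self, n_classes: int = 7):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
                nn.MaxPool2d(2),                         # 48x48 -> 24x24
                nn.Conv2d(32, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
                nn.MaxPool2d(2),                         # 24x24 -> 12x12
            )
            self.classifier = nn.Sequential(
                nn.Flatten(), nn.Dropout(0.5),           # regularization against overfitting
                nn.Linear(64 * 12 * 12, n_classes),
            )

        def forward(self, x):
            return self.classifier(self.features(x))

    logits = ExpressionCNN()(torch.randn(8, 1, 48, 48))  # batch of 8 face crops
    ```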

  15. Gender discrimination and prediction on the basis of facial metric information.

    PubMed

    Fellous, J M

    1997-07-01

    Horizontal and vertical facial measurements are statistically independent. Discriminant analysis shows that five such normalized distances explain over 95% of the gender differences in "training" samples and predict the gender of 90% of novel test faces exhibiting various facial expressions. The robustness of the method and its results are assessed. It is argued that these distances (termed fiducial) are compatible with those found experimentally in psychophysical and neurophysiological studies. Consequently, partial explanations for the effects observed in these experiments can be found in the intrinsic statistical nature of the facial stimuli used.
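
    A minimal sketch of discriminant analysis on a handful of normalized facial distances, with synthetic data standing in for the measured fiducial distances; scikit-learn's LDA is an illustrative stand-in:

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))                  # five normalized fiducial distances
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # synthetic male/female labels

    lda = LinearDiscriminantAnalysis().fit(X[:150], y[:150])
    accuracy = lda.score(X[150:], y[150:])         # held-out gender prediction accuracy
    print(f"test accuracy: {accuracy:.2f}")
    ```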

  16. Accurate landmarking of three-dimensional facial data in the presence of facial expressions and occlusions using a three-dimensional statistical facial feature model.

    PubMed

    Zhao, Xi; Dellandréa, Emmanuel; Chen, Liming; Kakadiaris, Ioannis A

    2011-10-01

    Three-dimensional face landmarking aims at automatically localizing facial landmarks and has a wide range of applications (e.g., face recognition, face tracking, and facial expression analysis). Existing methods assume neutral facial expressions and unoccluded faces. In this paper, we propose a general learning-based framework for reliable landmark localization on 3-D facial data under challenging conditions (i.e., facial expressions and occlusions). Our approach relies on a statistical model, called 3-D statistical facial feature model, which learns both the global variations in configurational relationships between landmarks and the local variations of texture and geometry around each landmark. Based on this model, we further propose an occlusion classifier and a fitting algorithm. Results from experiments on three publicly available 3-D face databases (FRGC, BU-3-DFE, and Bosphorus) demonstrate the effectiveness of our approach, in terms of landmarking accuracy and robustness, in the presence of expressions and occlusions.

  17. Person-independent facial expression analysis by fusing multiscale cell features

    NASA Astrophysics Data System (ADS)

    Zhou, Lubing; Wang, Han

    2013-03-01

    Automatic facial expression recognition is an interesting and challenging task. To achieve satisfactory accuracy, deriving a robust facial representation is especially important. A novel appearance-based feature, the multiscale cell local intensity increasing pattern (MC-LIIP), is presented for representing facial images and conducting person-independent facial expression analysis. The LIIP uses a decimal number to encode the texture or intensity distribution around each pixel via pixel-to-pixel intensity comparison. To boost noise resistance, MC-LIIP carries out the comparison computation on the average values of scalable cells instead of individual pixels. The facial descriptor fuses region-based histograms of MC-LIIP features from various scales, so as to encode not only the textural microstructures but also the macrostructures of facial images. Finally, a support vector machine classifier is applied for expression recognition. Experimental results on the CK+ and Karolinska Directed Emotional Faces databases show the superiority of the proposed method.

  18. Enhanced subliminal emotional responses to dynamic facial expressions.

    PubMed

    Sato, Wataru; Kubota, Yasutaka; Toichi, Motomi

    2014-01-01

    Emotional processing without conscious awareness plays an important role in human social interaction. Several behavioral studies reported that subliminal presentation of photographs of emotional facial expressions induces unconscious emotional processing. However, it was difficult to elicit strong and robust effects using this method. We hypothesized that dynamic presentations of facial expressions would enhance subliminal emotional effects and tested this hypothesis with two experiments. Fearful or happy facial expressions were presented dynamically or statically in either the left or the right visual field for 20 (Experiment 1) and 30 (Experiment 2) ms. Nonsense target ideographs were then presented, and participants reported their preference for them. The results consistently showed that dynamic presentations of emotional facial expressions induced more evident emotional biases toward subsequent targets than did static ones. These results indicate that dynamic presentations of emotional facial expressions induce more evident unconscious emotional processing.

  19. Implementation of facial recognition with Microsoft Kinect v2 sensor for patient verification.

    PubMed

    Silverstein, Evan; Snyder, Michael

    2017-06-01

    The aim of this study was to present a straightforward implementation of facial recognition using the Microsoft Kinect v2 sensor for patient identification in a radiotherapy setting. A facial recognition system was created with the Microsoft Kinect v2, using a facial mapping library distributed with the Kinect v2 SDK as the basis for the algorithm. The system extracts 31 fiducial points representing various facial landmarks, which are used both in the creation of a reference data set and in subsequent evaluations of real-time sensor data in the matching algorithm. To test the algorithm, a database of 39 faces was created, each with 465 vectors derived from the fiducial points, and a one-to-one matching procedure was performed to obtain sensitivity and specificity data for the facial identification system. ROC curves were plotted to display system performance and identify thresholds for match determination. In addition, system performance as a function of ambient light intensity was tested. Using optimized parameters in the matching algorithm, the sensitivity of the system over 5299 trials was 96.5% and the specificity was 96.7%. The results indicate a fairly robust methodology for verifying, in real time, a specific face through comparison with a precollected reference data set. In its current implementation, the process of data collection for each face and the subsequent matching session averaged approximately 30 s, which may be too onerous to provide a realistic supplement to patient identification in a clinical setting. Despite the time commitment, the data collection process was well tolerated by all participants and was most robust when consistent ambient light conditions were maintained across both the reference recording session and subsequent real-time identification sessions. A facial recognition system can be implemented for patient identification using the Microsoft Kinect v2 sensor and the distributed SDK. In its present form, the system is accurate, if time consuming, and further iterations of the method could provide a robust, easy to implement, and cost-effective supplement to traditional patient identification methods. © 2017 American Association of Physicists in Medicine.
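
    A minimal sketch of the verification step, with pairwise distances between the 31 fiducial points standing in for the paper's 465 per-face vectors; the cosine similarity metric and threshold are illustrative assumptions:

    ```python
    import numpy as np
    from itertools import combinations

    def distance_signature(points: np.ndarray) -> np.ndarray:
        """points: (31, 3) fiducial coordinates -> all 465 pairwise distances."""
        return np.array([np.linalg.norm(points[i] - points[j])
                         for i, j in combinations(range(len(points)), 2)])

    def verify(probe: np.ndarray, reference: np.ndarray, threshold: float = 0.98) -> bool:
        a, b = distance_signature(probe), distance_signature(reference)
        cosine = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
        return cosine >= threshold                # accept identity above the threshold
    ```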

  20. Principal component analysis for surface reflection components and structure in facial images and synthesis of facial images for various ages

    NASA Astrophysics Data System (ADS)

    Hirose, Misa; Toyota, Saori; Ojima, Nobutoshi; Ogawa-Ochiai, Keiko; Tsumura, Norimichi

    2017-08-01

    In this paper, principal component analysis is applied to the distribution of pigmentation, surface reflectance, and landmarks in whole facial images to obtain feature values. The relationship between the obtained feature vectors and the age of the face is then estimated by multiple regression analysis, so that facial images can be modulated for women aged 10-70. In a previous study, we analyzed only the distribution of pigmentation, and the reproduced images appeared younger than the apparent age of the initial images. We believe this happened because we did not modulate the facial structures and detailed surface features, such as wrinkles. By considering landmarks and surface reflectance over the entire face, we were able to analyze the variation in the distributions of facial structure, fine asperity, and pigmentation. As a result, our method can appropriately modulate the appearance of a face so that it appears to be the correct age.

  1. The face-selective N170 component is modulated by facial color.

    PubMed

    Nakajima, Kae; Minami, Tetsuto; Nakauchi, Shigeki

    2012-08-01

    Faces play an important role in social interaction by conveying information and emotion. Of the various components of the face, color particularly provides important clues with regard to perception of age, sex, health status, and attractiveness. In event-related potential (ERP) studies, the N170 component has been identified as face-selective. To determine the effect of color on face processing, we investigated the modulation of N170 by facial color. We recorded ERPs while subjects viewed facial color stimuli at 8 hue angles, which were generated by rotating the original facial color distribution around the white point by 45° for each human face. Responses to facial color were localized to the left, but not to the right hemisphere. N170 amplitudes gradually increased in proportion to the increase in hue angle from the natural-colored face. This suggests that N170 amplitude in the left hemisphere reflects processing of facial color information. Copyright © 2012 Elsevier Ltd. All rights reserved.

  2. The assessment of facial variation in 4747 British school children.

    PubMed

    Toma, Arshed M; Zhurov, Alexei I; Playle, Rebecca; Marshall, David; Rosin, Paul L; Richmond, Stephen

    2012-12-01

    The aim of this study is to identify key components contributing to facial variation in a large population-based sample of 15.5-year-old children (2514 females and 2233 males). The subjects were recruited from the Avon Longitudinal Study of Parents and Children. Three-dimensional facial images were obtained for each subject using two high-resolution Konica Minolta laser scanners. Twenty-one reproducible facial landmarks were identified and their coordinates were recorded. The facial images were registered using Procrustes analysis. Principal component analysis was then employed to identify independent groups of correlated coordinates. For the total data set, 14 principal components (PCs) were identified which explained 82 per cent of the total variance, with the first three components accounting for 46 per cent of the variance. Similar results were obtained for males and females separately with only subtle gender differences in some PCs. Facial features may be treated as a multidimensional statistical continuum with respect to the PCs. The first three PCs characterize the face in terms of height, width, and prominence of the nose. The derived PCs may be useful to identify and classify faces according to a scale of normality.

  3. How components of facial width to height ratio differently contribute to the perception of social traits

    PubMed Central

    Lio, Guillaume; Gomez, Alice; Sirigu, Angela

    2017-01-01

    Facial width to height ratio (fWHR) is a morphological cue that correlates with sexual dimorphism and social traits. Currently, it is unclear how the vertical and horizontal components of fWHR distinctly capture faces' social information. Using a new methodology, we orthogonally manipulated the upper facial height and the bizygomatic width to test their selective effects on the formation of impressions. Subjects (n = 90) saw pairs of faces and had to select the face that better expressed a given social trait (trustworthiness, aggressiveness, or femininity). We further investigated how sex and fWHR components interact in the formation of these judgements. Across experiments, changes along the vertical component predicted participants' ratings better than changes along the horizontal component. Faces with a smaller upper facial height were perceived as less trustworthy, less feminine, and more aggressive. By dissociating fWHR and testing the contribution of its components independently, we obtained a powerful and discriminative measure of how facial morphology guides social judgements. PMID:28235081
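
    Both fWHR components are plain landmark distances, which is what makes them easy to manipulate independently. A sketch under the common operationalization (width = bizygomatic distance, height = upper facial height); the landmark names and coordinates are illustrative, not from the paper.

        import numpy as np

        def fwhr_components(lm):
            """Return width, height, and their ratio from 2-D landmarks."""
            width = np.linalg.norm(lm["zygion_right"] - lm["zygion_left"])
            height = np.linalg.norm(lm["upper_lip"] - lm["mid_brow"])
            return width, height, width / height

        lm = {k: np.array(v, dtype=float) for k, v in {
            "zygion_left": (20, 55), "zygion_right": (108, 55),
            "mid_brow": (64, 30), "upper_lip": (64, 78)}.items()}
        w, h, ratio = fwhr_components(lm)
        # Orthogonal manipulation: scale h while holding w fixed, or vice versa.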

  4. Motion-artifact-robust, polarization-resolved second-harmonic-generation microscopy based on rapid polarization switching with electro-optic Pockels cell and its application to in vivo visualization of collagen fiber orientation in human facial skin

    PubMed Central

    Tanaka, Yuji; Hase, Eiji; Fukushima, Shuichiro; Ogura, Yuki; Yamashita, Toyonobu; Hirao, Tetsuji; Araki, Tsutomu; Yasui, Takeshi

    2014-01-01

    Polarization-resolved second-harmonic-generation (PR-SHG) microscopy is a powerful tool for investigating collagen fiber orientation quantitatively with low invasiveness. However, the waiting time for the mechanical polarization rotation makes it too sensitive to motion artifacts and hence has hampered its use in various applications in vivo. In the work described in this article, we constructed a motion-artifact-robust PR-SHG microscope based on rapid polarization switching at every pixel with an electro-optic Pockels cell (PC), in synchronization with step-wise raster scanning of the focus spot and alternate acquisition of vertical- and horizontal-polarization-resolved SHG signals. The constructed PC-based PR-SHG microscope enabled us to visualize orientation mapping of dermal collagen fiber in human facial skin in vivo without the influence of motion artifacts. Furthermore, the results suggested a location and/or age dependence of the collagen fiber orientation in human facial skin. The robustness to motion artifacts in the collagen orientation measurement will expand the application scope of SHG microscopy in dermatology and collagen-related fields. PMID:24761292

  5. The relationship of cranial, orbital and nasal cavity size with the morphology of the supraorbital region in modern Homo sapiens.

    PubMed

    Nowaczewska, Wioletta; Łapicka, Urszula; Cieślik, Agata; Biecek, Przemysław

    2017-09-01

    Morphological variation of the supraorbital region (SR) in human crania has been investigated and its potential sources suggested, along with the importance of the size of the facial skeleton, neurocranium, and orbit for the formation of this region. However, previous studies have not indicated whether facial size exhibits a stronger association with SR robusticity than neurocranial size or sex; moreover, the association between orbital volume and SR robusticity has been analysed only in non-human primate skulls. In this study we investigate whether the size of the facial skeleton, neurocranium, two measures of relative orbital size (orbital volume and estimated orbital aperture area), the relative size of the nasal cavity, and the relative estimated area of the anterior nasal cavity opening are related to SR robusticity; we also examine which of these analysed relationships is strongest, as well as independent of the influence of the other traits, in a geographically diverse modern human cranial sample. The results of Spearman's rank and partial rank correlations (encompassing models including or excluding sex and geographic origin) show a relationship between most of the above-mentioned variables and SR robusticity, with the exception of the estimated relative area of the orbital opening (in the case of the results of Spearman's rank correlations) and the traits of the nasal cavity. Of all the analysed traits, sex appears to be the most important for the formation of SR robusticity and, of two measures of cranial size, neurocranial size was the most significant. The strong relationship between SR robusticity and relative orbital volume was observed in models without the geographic origin factor. The results concerning analysed models suggest the influence of this factor on this relationship; however, to explain this influence, further studies are needed.

  6. Real Time 3D Facial Movement Tracking Using a Monocular Camera

    PubMed Central

    Dong, Yanchao; Wang, Yanming; Yue, Jiguang; Hu, Zhencheng

    2016-01-01

    The paper proposes a robust framework for 3D facial movement tracking in real time using a monocular camera. It is designed to estimate the 3D face pose and local facial animation such as eyelid movement and mouth movement. The framework first utilizes the Discriminative Shape Regression method to locate the facial feature points on the 2D image and fuses the 2D data with a 3D face model using an Extended Kalman Filter to yield 3D facial movement information. An alternating optimizing strategy is adopted to fit different persons automatically. Experiments show that the proposed framework can track the 3D facial movement across various poses and illumination conditions. Given the real face scale, the framework can track the eyelid with an error of 1 mm and the mouth with an error of 2 mm. The tracking result is reliable for expression analysis or mental state inference. PMID:27463714
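
    The fusion step (2D feature points corrected against a 3D model with an Extended Kalman Filter) can be sketched generically. The state here is a single 3D landmark under a pinhole camera; the paper's actual state vector covers pose and animation parameters, so the projection model and Jacobian below are simplified stand-ins.

        import numpy as np

        f = 500.0  # assumed focal length in pixels

        def h(x):
            """Pinhole projection of a 3-D point to the image plane."""
            X, Y, Z = x
            return np.array([f * X / Z, f * Y / Z])

        def H_jac(x):
            """Jacobian of the projection with respect to the 3-D point."""
            X, Y, Z = x
            return np.array([[f / Z, 0.0, -f * X / Z**2],
                             [0.0, f / Z, -f * Y / Z**2]])

        def ekf_update(x, P, z, R):
            """One EKF correction: fuse a detected 2-D feature point z."""
            Hj = H_jac(x)
            S = Hj @ P @ Hj.T + R                # innovation covariance
            K = P @ Hj.T @ np.linalg.inv(S)      # Kalman gain
            x = x + K @ (z - h(x))
            P = (np.eye(len(x)) - K @ Hj) @ P
            return x, P

        x = np.array([0.0, 0.0, 600.0])          # prior 3-D position (mm)
        P = np.eye(3) * 100.0
        z = np.array([12.0, -8.0])               # detected feature point (px)
        x, P = ekf_update(x, P, z, R=np.eye(2) * 4.0)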

  7. Real Time 3D Facial Movement Tracking Using a Monocular Camera.

    PubMed

    Dong, Yanchao; Wang, Yanming; Yue, Jiguang; Hu, Zhencheng

    2016-07-25

    The paper proposes a robust framework for 3D facial movement tracking in real time using a monocular camera. It is designed to estimate the 3D face pose and local facial animation such as eyelid movement and mouth movement. The framework first utilizes the Discriminative Shape Regression method to locate the facial feature points on the 2D image and fuses the 2D data with a 3D face model using an Extended Kalman Filter to yield 3D facial movement information. An alternating optimizing strategy is adopted to fit different persons automatically. Experiments show that the proposed framework can track the 3D facial movement across various poses and illumination conditions. Given the real face scale, the framework can track the eyelid with an error of 1 mm and the mouth with an error of 2 mm. The tracking result is reliable for expression analysis or mental state inference.

  8. A Robust Shape Reconstruction Method for Facial Feature Point Detection.

    PubMed

    Tan, Shuqiu; Chen, Dongyi; Guo, Chenggang; Huang, Zhiqi

    2017-01-01

    Facial feature point detection has seen great research advances in recent years. Numerous methods have been developed and applied in practical face analysis systems. However, it is still a quite challenging task because of the large variability in expression and gestures and the existence of occlusions in real-world photographs. In this paper, we present a robust sparse reconstruction method for face alignment problems. Instead of a direct regression between the feature space and the shape space, the concept of shape increment reconstruction is introduced. Moreover, a set of coupled overcomplete dictionaries, termed the shape increment dictionary and the local appearance dictionary, are learned in a regressive manner to select robust features and fit shape increments. Additionally, to make the learned model more generalized, we select the best matched parameter set through extensive validation tests. Experimental results on three public datasets demonstrate that the proposed method achieves better robustness than the state-of-the-art methods.
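
    The core "shape increment" idea (regress an update toward the true shape from appearance at the current estimate, apply it, and repeat over a short cascade) can be sketched with plain ridge regressors standing in for the paper's coupled dictionaries. Everything below, including the toy feature function, is an illustrative assumption.

        import numpy as np
        from sklearn.linear_model import Ridge

        rng = np.random.default_rng(0)
        n, k = 500, 10                   # training faces, landmarks
        S_true = rng.normal(size=(n, 2 * k))        # ground-truth shapes
        S = S_true + rng.normal(scale=0.5, size=S_true.shape)  # initial guesses

        def features(shapes):
            """Toy stand-in for local appearance sampled at the current shape."""
            return np.hstack([shapes, shapes ** 2])

        cascade = []
        for _ in range(3):               # a short regression cascade
            phi = features(S)
            delta = S_true - S                       # target shape increments
            reg = Ridge(alpha=1.0).fit(phi, delta)
            S = S + reg.predict(phi)                 # apply increment, iterate
            cascade.append(reg)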

  9. Facial Expression Recognition: Can Preschoolers with Cochlear Implants and Hearing Aids Catch It?

    ERIC Educational Resources Information Center

    Wang, Yifang; Su, Yanjie; Fang, Ping; Zhou, Qingxia

    2011-01-01

    Tager-Flusberg and Sullivan (2000) presented a cognitive model of theory of mind (ToM), in which they thought ToM included two components--a social-perceptual component and a social-cognitive component. Facial expression recognition (FER) is an ability tapping the social-perceptual component. Previous findings suggested that normal hearing…

  10. Signatures of personality on dense 3D facial images.

    PubMed

    Hu, Sile; Xiong, Jieyi; Fu, Pengcheng; Qiao, Lu; Tan, Jingze; Jin, Li; Tang, Kun

    2017-03-06

    It has long been speculated that cues on the human face exist that allow observers to make reliable judgments of others' personality traits. However, direct evidence of association between facial shapes and personality is missing from the current literature. This study assessed the personality attributes of 834 Han Chinese volunteers (405 males and 429 females), utilising the five-factor personality model ('Big Five'), and collected their neutral 3D facial images. Dense anatomical correspondence was established across the 3D facial images in order to allow high-dimensional quantitative analyses of the facial phenotypes. In this paper, we developed a Partial Least Squares (PLS)-based method. We used the composite partial least squares component (CPSLC) to test the association between the self-tested personality scores and the dense 3D facial image data, then used principal component analysis (PCA) for further validation. Among the five personality factors, agreeableness and conscientiousness in males and extraversion in females were significantly associated with specific facial patterns. The personality-related facial patterns were extracted and their effects were extrapolated on simulated 3D facial models.
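
    The association step can be sketched with scikit-learn's PLSRegression: the paired latent scores give, for each component, a personality axis and the facial pattern most correlated with it. The CPSLC construction and significance testing are the paper's own refinements, and the data below are random stand-ins.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        rng = np.random.default_rng(0)
        faces = rng.normal(size=(834, 3000))   # dense 3-D face features
        big5 = rng.normal(size=(834, 5))       # Big Five personality scores

        pls = PLSRegression(n_components=2).fit(big5, faces)
        # Correlation of the first pair of latent scores measures how strongly
        # a personality axis maps onto a facial pattern.
        r = np.corrcoef(pls.x_scores_[:, 0], pls.y_scores_[:, 0])[0, 1]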

  11. Cognitive behavioural therapy attenuates the enhanced early facial stimuli processing in social anxiety disorders: an ERP investigation.

    PubMed

    Cao, Jianqin; Liu, Quanying; Li, Yang; Yang, Jun; Gu, Ruolei; Liang, Jin; Qi, Yanyan; Wu, Haiyan; Liu, Xun

    2017-07-28

    Previous studies of patients with social anxiety have demonstrated abnormal early processing of facial stimuli in social contexts. In other words, patients with social anxiety disorder (SAD) tend to exhibit enhanced early facial processing when compared to healthy controls. Few studies have examined the temporal electrophysiological event-related potential (ERP)-indexed profiles when individuals with SAD compare faces to objects. Systematic comparisons of ERPs to facial/object stimuli before and after therapy are also lacking. We used a passive visual detection paradigm with upright and inverted faces/objects, which are known to elicit early P1 and N170 components, to study abnormal early face processing and subsequent improvements in this measure in patients with SAD. Seventeen patients with SAD and 17 matched control participants performed a passive visual detection paradigm task while undergoing EEG. The healthy controls were compared to patients with SAD pre-therapy to test the hypothesis that patients with SAD have early hypervigilance to facial cues. We compared patients with SAD before and after therapy to test the hypothesis that the early hypervigilance to facial cues in patients with SAD can be alleviated. Compared to healthy control (HC) participants, patients with SAD had a more robust P1-N170 slope but no amplitude effects in response to both upright and inverted faces and objects. Interestingly, we found that patients with SAD had reduced P1 responses to all objects and faces after therapy, but had selectively reduced N170 responses to faces, especially inverted faces. Moreover, the slope from P1 to N170 in patients with SAD was flatter post-therapy than pre-therapy. Furthermore, the amplitude of N170 evoked by the facial stimuli was correlated with scores on the interaction anxiousness scale (IAS) after therapy. Our results did not provide electrophysiological support for the early hypervigilance hypothesis in SAD to faces, but confirm that cognitive-behavioural therapy can reduce the early visual processing of faces. These findings have potentially important therapeutic implications in the assessment and treatment of social anxiety. Trial registration HEBDQ2014021.

  12. Coherence explored between emotion components: evidence from event-related potentials and facial electromyography.

    PubMed

    Gentsch, Kornelia; Grandjean, Didier; Scherer, Klaus R

    2014-04-01

    Componential theories assume that emotion episodes consist of emergent and dynamic response changes to relevant events in different components, such as appraisal, physiology, motivation, expression, and subjective feeling. In particular, Scherer's Component Process Model hypothesizes that subjective feeling emerges when the synchronization (or coherence) of appraisal-driven changes between emotion components has reached a critical threshold. We examined the prerequisite of this synchronization hypothesis for appraisal-driven response changes in facial expression. The appraisal process was manipulated by using feedback stimuli, presented in a gambling task. Participants' responses to the feedback were investigated in concurrently recorded brain activity related to appraisal (event-related potentials, ERP) and facial muscle activity (electromyography, EMG). Using principal component analysis, the prediction of appraisal-driven response changes in facial EMG was examined. Results support this prediction: early cognitive processes (related to the feedback-related negativity) seem to primarily affect the upper face, whereas processes that modulate P300 amplitudes tend to predominantly drive cheek region responses. Copyright © 2013 Elsevier B.V. All rights reserved.

  13. iFER: facial expression recognition using automatically selected geometric eye and eyebrow features

    NASA Astrophysics Data System (ADS)

    Oztel, Ismail; Yolcu, Gozde; Oz, Cemil; Kazan, Serap; Bunyak, Filiz

    2018-03-01

    Facial expressions have an important role in interpersonal communications and estimation of emotional states or intentions. Automatic recognition of facial expressions has led to many practical applications and became one of the important topics in computer vision. We present a facial expression recognition system that relies on geometry-based features extracted from eye and eyebrow regions of the face. The proposed system detects keypoints on frontal face images and forms a feature set using geometric relationships among groups of detected keypoints. The obtained feature set is refined and reduced using the sequential forward selection (SFS) algorithm and fed to a support vector machine classifier to recognize five facial expression classes. The proposed system, iFER (eye-eyebrow only facial expression recognition), is robust to lower face occlusions that may be caused by beards, mustaches, scarves, etc. and to lower face motion during speech production. Preliminary experiments on benchmark datasets produced promising results, outperforming previous facial expression recognition studies using partial face features and yielding results comparable to studies using whole face information, only slightly lower (by ~2.5%) than the best whole-face facial expression recognition system while using only ~1/3 of the facial region.
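
    The selection-plus-classification loop maps directly onto standard tooling: wrap the SVM in a sequential forward selector, keep the feature subset that cross-validates best, and train on the reduced set. A minimal sketch with made-up geometric features; the feature count and SVM kernel are assumptions.

        import numpy as np
        from sklearn.feature_selection import SequentialFeatureSelector
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        X = rng.normal(size=(300, 40))     # geometric eye/eyebrow features
        y = rng.integers(0, 5, size=300)   # five expression classes

        svm = SVC(kernel="rbf")
        sfs = SequentialFeatureSelector(svm, n_features_to_select=10,
                                        direction="forward", cv=5).fit(X, y)
        X_sel = sfs.transform(X)           # refined feature set
        svm.fit(X_sel, y)                  # final expression classifier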

  14. Facial Cosmetics Exert a Greater Influence on Processing of the Mouth Relative to the Eyes: Evidence from the N170 Event-Related Potential Component.

    PubMed

    Tanaka, Hideaki

    2016-01-01

    Cosmetic makeup significantly influences facial perception. Because faces consist of similar physical structures, cosmetic makeup is typically used to highlight individual features, particularly those of the eyes (i.e., eye shadow) and mouth (i.e., lipstick). Though event-related potentials have been utilized to study various aspects of facial processing, the influence of cosmetics on specific ERP components remains unclear. The present study aimed to investigate the relationship between the application of cosmetic makeup and the amplitudes of the P1 and N170 event-related potential components during facial perception tasks. Moreover, the influence of visual perception on N170 amplitude was evaluated under three makeup conditions: Eye Shadow, Lipstick, and No Makeup. Electroencephalography was used to monitor 17 participants who were exposed to visual stimuli under each of these three makeup conditions. The results of the present study demonstrated that the Lipstick condition elicited a significantly greater N170 amplitude than the No Makeup condition, while P1 amplitude was unaffected by any of the conditions. Such findings indicate that the application of cosmetic makeup alters general facial perception but exerts no influence on the perception of low-level visual features. Collectively, these results support the notion that the application of makeup induces subtle alterations in the processing of facial stimuli, with a particular effect on the processing of specific facial components (i.e., the mouth), as reflected by changes in N170 amplitude.

  15. Facial Cosmetics Exert a Greater Influence on Processing of the Mouth Relative to the Eyes: Evidence from the N170 Event-Related Potential Component

    PubMed Central

    Tanaka, Hideaki

    2016-01-01

    Cosmetic makeup significantly influences facial perception. Because faces consist of similar physical structures, cosmetic makeup is typically used to highlight individual features, particularly those of the eyes (i.e., eye shadow) and mouth (i.e., lipstick). Though event-related potentials have been utilized to study various aspects of facial processing, the influence of cosmetics on specific ERP components remains unclear. The present study aimed to investigate the relationship between the application of cosmetic makeup and the amplitudes of the P1 and N170 event-related potential components during facial perception tasks. Moreover, the influence of visual perception on N170 amplitude was evaluated under three makeup conditions: Eye Shadow, Lipstick, and No Makeup. Electroencephalography was used to monitor 17 participants who were exposed to visual stimuli under each of these three makeup conditions. The results of the present study demonstrated that the Lipstick condition elicited a significantly greater N170 amplitude than the No Makeup condition, while P1 amplitude was unaffected by any of the conditions. Such findings indicate that the application of cosmetic makeup alters general facial perception but exerts no influence on the perception of low-level visual features. Collectively, these results support the notion that the application of makeup induces subtle alterations in the processing of facial stimuli, with a particular effect on the processing of specific facial components (i.e., the mouth), as reflected by changes in N170 amplitude. PMID:27656161

  16. The Right Place at the Right Time: Priming Facial Expressions with Emotional Face Components in Developmental Visual Agnosia

    PubMed Central

    Aviezer, Hillel; Hassin, Ran. R.; Perry, Anat; Dudarev, Veronica; Bentin, Shlomo

    2012-01-01

    The current study examined the nature of deficits in emotion recognition from facial expressions in case LG, an individual with a rare form of developmental visual agnosia (DVA). LG presents with profoundly impaired recognition of facial expressions, yet the underlying nature of his deficit remains unknown. During typical face processing, normal sighted individuals extract information about expressed emotions from face regions with activity diagnostic for specific emotion categories. Given LG’s impairment, we sought to shed light on his emotion perception by examining if priming facial expressions with diagnostic emotional face components would facilitate his recognition of the emotion expressed by the face. LG and control participants matched isolated face components with components appearing in a subsequently presented full-face and then categorized the face’s emotion. Critically, the matched components were from regions which were diagnostic or non-diagnostic of the emotion portrayed by the full face. In experiment 1, when the full faces were briefly presented (150 ms), LG’s performance was strongly influenced by the diagnosticity of the components: His emotion recognition was boosted within normal limits when diagnostic components were used and was obliterated when non-diagnostic components were used. By contrast, in experiment 2, when the face-exposure duration was extended (2000 ms), the beneficial effect of the diagnostic matching was diminished as was the detrimental effect of the non-diagnostic matching. These data highlight the impact of diagnostic facial features in normal expression recognition and suggest that impaired emotion recognition in DVA results from deficient visual integration across diagnostic face components. PMID:22349446

  17. Face and emotion expression processing and the serotonin transporter polymorphism 5-HTTLPR/rs25531.

    PubMed

    Hildebrandt, A; Kiy, A; Reuter, M; Sommer, W; Wilhelm, O

    2016-06-01

    Face cognition, including face identity and facial expression processing, is a crucial component of socio-emotional abilities that characterizes humans as highly developed social beings. However, for these trait domains, molecular genetic studies investigating gene-behavior associations based on well-founded phenotype definitions are still rare. We examined the relationship between 5-HTTLPR/rs25531 polymorphisms - related to serotonin reuptake - and the ability to perceive and recognize faces and emotional expressions in human faces. For this aim we conducted structural equation modeling on data from 230 young adults, obtained by using a comprehensive, multivariate task battery with maximal effort tasks. By additionally modeling fluid intelligence and immediate and delayed memory factors, we aimed to address the discriminant relationships of the 5-HTTLPR/rs25531 polymorphisms with socio-emotional abilities. We found a robust association between the 5-HTTLPR/rs25531 polymorphism and facial emotion perception. Carriers of two long (L) alleles outperformed carriers of one or two S alleles. Weaker associations were present for face identity perception and memory for emotional facial expressions. There was no association between the 5-HTTLPR/rs25531 polymorphism and non-social abilities, demonstrating the discriminant validity of the relationships. We discuss the implications and possible neural mechanisms underlying these novel findings. © 2016 John Wiley & Sons Ltd and International Behavioural and Neural Genetics Society.

  18. Asymmetric synthesis of isoindolones by chiral cyclopentadienyl-rhodium(III)-catalyzed C-H functionalizations.

    PubMed

    Ye, Baihua; Cramer, Nicolai

    2014-07-21

    Directed Cp*Rh(III)-catalyzed carbon-hydrogen (C-H) bond functionalizations have evolved as a powerful strategy for the construction of heterocycles. Despite their high value, the development of related asymmetric reactions lags behind, due to the limited availability of robust and tunable chiral cyclopentadienyl ligands. Rhodium complexes comprising a chiral Cp ligand with an atropchiral biaryl backbone enable an asymmetric synthesis of isoindolones from arylhydroxamates and alkyl donor/acceptor diazo derivatives as a one-carbon component under mild conditions. The complex guides the substrates with high double facial selectivity, yielding the chiral isoindolones in good yields and excellent enantioselectivities. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  19. Learning representative features for facial images based on a modified principal component analysis

    NASA Astrophysics Data System (ADS)

    Averkin, Anton; Potapov, Alexey

    2013-05-01

    The paper is devoted to facial image analysis and particularly deals with the problem of automatic evaluation of the attractiveness of human faces. We propose a new approach for automatic construction of a feature space based on a modified principal component analysis. Input data sets for the algorithm are learning data sets of facial images rated by one person. The proposed approach allows one to extract features of an individual's subjective perception of facial beauty and to predict attractiveness values for new facial images that were not included in the learning data set. The Pearson correlation coefficient between values predicted by our method for new facial images and personal attractiveness estimation values equals 0.89. This suggests that the proposed approach is promising and can be used for predicting subjective facial attractiveness values in real facial image analysis systems.

  20. Neuroticism and facial emotion recognition in healthy adults.

    PubMed

    Andric, Sanja; Maric, Nadja P; Knezevic, Goran; Mihaljevic, Marina; Mirjanic, Tijana; Velthorst, Eva; van Os, Jim

    2016-04-01

    The aim of the present study was to examine whether healthy individuals with higher levels of neuroticism, a robust independent predictor of psychopathology, exhibit altered facial emotion recognition performance. Facial emotion recognition accuracy was investigated in 104 healthy adults using the Degraded Facial Affect Recognition Task (DFAR). Participants' degree of neuroticism was estimated using neuroticism scales extracted from the Eysenck Personality Questionnaire and the Revised NEO Personality Inventory. A significant negative correlation between the degree of neuroticism and the percentage of correct answers on the DFAR was found only for the happy facial expression (significant after applying Bonferroni correction). Altered sensitivity to emotional context represents a useful and easily obtained cognitive phenotype that correlates strongly with inter-individual variations in neuroticism linked to stress vulnerability and subsequent psychopathology. The present findings could have implications for early intervention strategies and staging models in psychiatry. © 2015 Wiley Publishing Asia Pty Ltd.

  1. The right place at the right time: priming facial expressions with emotional face components in developmental visual agnosia.

    PubMed

    Aviezer, Hillel; Hassin, Ran R; Perry, Anat; Dudarev, Veronica; Bentin, Shlomo

    2012-04-01

    The current study examined the nature of deficits in emotion recognition from facial expressions in case LG, an individual with a rare form of developmental visual agnosia (DVA). LG presents with profoundly impaired recognition of facial expressions, yet the underlying nature of his deficit remains unknown. During typical face processing, normal sighted individuals extract information about expressed emotions from face regions with activity diagnostic for specific emotion categories. Given LG's impairment, we sought to shed light on his emotion perception by examining if priming facial expressions with diagnostic emotional face components would facilitate his recognition of the emotion expressed by the face. LG and control participants matched isolated face components with components appearing in a subsequently presented full-face and then categorized the face's emotion. Critically, the matched components were from regions which were diagnostic or non-diagnostic of the emotion portrayed by the full face. In experiment 1, when the full faces were briefly presented (150 ms), LG's performance was strongly influenced by the diagnosticity of the components: his emotion recognition was boosted within normal limits when diagnostic components were used and was obliterated when non-diagnostic components were used. By contrast, in experiment 2, when the face-exposure duration was extended (2000 ms), the beneficial effect of the diagnostic matching was diminished as was the detrimental effect of the non-diagnostic matching. These data highlight the impact of diagnostic facial features in normal expression recognition and suggest that impaired emotion recognition in DVA results from deficient visual integration across diagnostic face components. Copyright © 2012 Elsevier Ltd. All rights reserved.

  2. Role of facial attractiveness in patients with slight-to-borderline treatment need according to the Aesthetic Component of the Index of Orthodontic Treatment Need as judged by eye tracking.

    PubMed

    Johnson, Elizabeth K; Fields, Henry W; Beck, F Michael; Firestone, Allen R; Rosenstiel, Stephen F

    2017-02-01

    Previous eye-tracking research has demonstrated that laypersons view the range of dental attractiveness levels differently depending on facial attractiveness levels. How the borderline levels of dental attractiveness are viewed has not been evaluated in the context of facial attractiveness and compared with those with near-ideal esthetics or those in definite need of orthodontic treatment according to the Aesthetic Component of the Index of Orthodontic Treatment Need scale. Our objective was to determine the level of viewers' visual attention across treatment need categories (levels 3-7) for persons considered "attractive," "average," or "unattractive." Facial images of persons at 3 facial attractiveness levels were combined with 5 levels of dental attractiveness (dentitions representing Aesthetic Component of the Index of Orthodontic Treatment Need levels 3-7) using imaging software to form 15 composite images. Each image was viewed twice by 66 lay participants using eye tracking. Both the fixation density (number of fixations per facial area) and the fixation duration (length of time for each facial area) were quantified for each image viewed. Repeated-measures analysis of variance was used to determine how fixation density and duration varied among the 6 facial interest areas (chin, ear, eye, mouth, nose, and other). Viewers demonstrated excellent to good reliability among the 6 interest areas (intraviewer reliability, 0.70-0.96; interviewer reliability, 0.56-0.93). Between Aesthetic Component of the Index of Orthodontic Treatment Need levels 3 and 7, viewers of all facial attractiveness levels showed an increase in attention to the mouth. However, only with the attractive models did female viewers show significant differences in fixation density and duration between borderline levels. Female viewers paid attention to different areas of the face than did male viewers. The importance of dental attractiveness is amplified in facially attractive female models compared with average and unattractive female models between near-ideal and borderline-severe dentally unattractive levels. Copyright © 2017 American Association of Orthodontists. Published by Elsevier Inc. All rights reserved.

  3. Hierarchical Spatio-Temporal Probabilistic Graphical Model with Multiple Feature Fusion for Binary Facial Attribute Classification in Real-World Face Videos.

    PubMed

    Demirkus, Meltem; Precup, Doina; Clark, James J; Arbel, Tal

    2016-06-01

    Recent literature shows that facial attributes, i.e., contextual facial information, can be beneficial for improving the performance of real-world applications, such as face verification, face recognition, and image search. Examples of face attributes include gender, skin color, facial hair, etc. How to robustly obtain these facial attributes (traits) is still an open problem, especially in the presence of the challenges of real-world environments: non-uniform illumination conditions, arbitrary occlusions, motion blur, and background clutter. What makes this problem even more difficult is the enormous variability presented by the same subject, due to arbitrary face scales, head poses, and facial expressions. In this paper, we focus on the problem of facial trait classification in real-world face videos. We have developed a fully automatic hierarchical and probabilistic framework that models the collective set of frame class distributions and feature spatial information over a video sequence. The experiments are conducted on a large real-world face video database that we have collected, labelled, and made publicly available. The proposed method is flexible enough to be applied to any facial classification problem. Experiments on the large, real-world video database McGillFaces [1] of 18,000 video frames reveal that the proposed framework outperforms alternative approaches by up to 16.96% and 10.13% for the facial attributes of gender and facial hair, respectively.

  4. Illuminant color estimation based on pigmentation separation from human skin color

    NASA Astrophysics Data System (ADS)

    Tanaka, Satomi; Kakinuma, Akihiro; Kamijo, Naohiro; Takahashi, Hiroshi; Tsumura, Norimichi

    2015-03-01

    Humans have a visual mechanism called "color constancy" that maintains the perceived colors of objects across various light sources. An effective color constancy algorithm was previously proposed that uses the human facial color in a digital color image; however, that method produces erroneous estimates because of differences in individual facial colors. In this paper, we present a novel color constancy algorithm based on skin color analysis. Skin color analysis is a method for separating skin color into melanin, hemoglobin, and shading components. We exploit the stationary property of Japanese facial color, which is calculated from the melanin and hemoglobin components. As a result, our method can use a subject's facial color in an image without depending on individual differences among Japanese facial colors.
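
    Pigmentation separation of this kind is commonly done by independent component analysis on skin pixels in optical density (negative log) space, where melanin and hemoglobin mix approximately linearly. The sketch below follows that recipe with FastICA and ignores the shading component, which is a simplifying assumption.

        import numpy as np
        from sklearn.decomposition import FastICA

        rng = np.random.default_rng(0)
        skin = rng.random((5000, 3)) * 0.8 + 0.1   # stand-in skin RGB pixels

        # Pigment densities mix linearly in optical density space.
        od = -np.log(skin)

        # Two independent sources: melanin and hemoglobin density maps.
        ica = FastICA(n_components=2, random_state=0)
        densities = ica.fit_transform(od)   # (n_pixels, 2) pigment densities
        axes = ica.mixing_                  # columns ~ pigment absorbance axes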

  5. The role of spatial frequency information for ERP components sensitive to faces and emotional facial expression.

    PubMed

    Holmes, Amanda; Winston, Joel S; Eimer, Martin

    2005-10-01

    To investigate the impact of spatial frequency on emotional facial expression analysis, ERPs were recorded in response to low spatial frequency (LSF), high spatial frequency (HSF), and unfiltered broad spatial frequency (BSF) faces with fearful or neutral expressions, houses, and chairs. In line with previous findings, BSF fearful facial expressions elicited a greater frontal positivity than BSF neutral facial expressions, starting at about 150 ms after stimulus onset. In contrast, this emotional expression effect was absent for HSF and LSF faces. Given that some brain regions involved in emotion processing, such as amygdala and connected structures, are selectively tuned to LSF visual inputs, these data suggest that ERP effects of emotional facial expression do not directly reflect activity in these regions. It is argued that higher order neocortical brain systems are involved in the generation of emotion-specific waveform modulations. The face-sensitive N170 component was neither affected by emotional facial expression nor by spatial frequency information.
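
    LSF/HSF face stimuli of this kind are typically produced by low-pass filtering and by subtracting the low-pass image from the original. The sketch below uses a Gaussian filter; the authors' exact cutoff frequencies (which depend on viewing distance) are not reproduced here.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def lsf_hsf(face, sigma=4.0):
            """Split a face image into low and high spatial-frequency versions."""
            lsf = gaussian_filter(face.astype(float), sigma)  # low frequencies
            hsf = face - lsf + face.mean()   # high frequencies, re-centered
            return lsf, hsf

        face = np.random.default_rng(0).random((128, 128))    # stand-in image
        lsf, hsf = lsf_hsf(face)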

  6. Intra-temporal facial nerve centerline segmentation for navigated temporal bone surgery

    NASA Astrophysics Data System (ADS)

    Voormolen, Eduard H. J.; van Stralen, Marijn; Woerdeman, Peter A.; Pluim, Josien P. W.; Noordmans, Herke J.; Regli, Luca; Berkelbach van der Sprenkel, Jan W.; Viergever, Max A.

    2011-03-01

    Approaches through the temporal bone require surgeons to drill away bone to expose a target skull base lesion while evading vital structures contained within it, such as the sigmoid sinus, jugular bulb, and facial nerve. We hypothesize that an augmented neuronavigation system, which continuously calculates the distance to these structures and warns if the surgeon drills too close, will aid in making safe surgical approaches. Contemporary image guidance systems lack an automated method to segment the inhomogeneous and complexly curved facial nerve. Therefore, we developed a segmentation method to delineate the intra-temporal facial nerve centerline from clinically available temporal bone CT images semi-automatically. Our method requires the user to provide the start- and end-point of the facial nerve in a patient's CT scan, after which it iteratively matches an active appearance model based on the shape and texture of forty facial nerves. Its performance was evaluated on 20 patients by comparison to our gold standard: manually segmented facial nerve centerlines. Our segmentation method delineates facial nerve centerlines with a maximum error along the whole trajectory of 0.40 ± 0.20 mm (mean ± standard deviation). These results demonstrate that our model-based segmentation method can robustly segment facial nerve centerlines. Next, we can investigate whether integration of this automated facial nerve delineation with a distance-calculating neuronavigation interface results in a system that can adequately warn surgeons during temporal bone drilling, and effectively diminishes risks of iatrogenic facial nerve palsy.

  7. Targeting specific facial variation for different identification tasks.

    PubMed

    Aeria, Gillian; Claes, Peter; Vandermeulen, Dirk; Clement, John Gerald

    2010-09-10

    A conceptual framework that allows faces to be studied and compared objectively with biological validity is presented. The framework is a logical extension of modern morphometrics and statistical shape analysis techniques. Three dimensional (3D) facial scans were collected from 255 healthy young adults. One scan depicted a smiling facial expression and another scan depicted a neutral expression. These facial scans were modelled in a Principal Component Analysis (PCA) space where Euclidean (ED) and Mahalanobis (MD) distances were used to form similarity measures. Within this PCA space, property pathways were calculated that expressed the direction of change in facial expression. Decomposition of distances into property-independent (D1) and dependent (D2) components along these pathways enabled the comparison of two faces in terms of the extent of a smiling expression. The performance of all distances was tested and compared in two types of experiments: Classification tasks and a Recognition task. In the Classification tasks, individual facial scans were assigned to one or more population groups of smiling or neutral scans. The property-dependent (D2) component of both Euclidean and Mahalanobis distances performed best in the Classification task, correctly assigning 99.8% of scans to the right population group. The Recognition task tested whether a scan of an individual depicting a smiling/neutral expression could be positively identified when shown a scan of the same person depicting a neutral/smiling expression. ED1 and MD1 performed best, correctly identifying 97.8% and 94.8% of individual scans respectively as belonging to the same person despite differences in facial expression. It was concluded that decomposed components are superior to straightforward distances in achieving positive identifications, and that this decomposition presents a novel method for quantifying facial similarity. Additionally, although the undecomposed Mahalanobis distance often used in practice outperformed the Euclidean distance, the result was the opposite for the decomposed distances. Crown Copyright 2010. Published by Elsevier Ireland Ltd. All rights reserved.
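
    The decomposition this abstract describes reduces, in PCA coordinates, to splitting the difference vector between two faces into a component along the property pathway (D2, the property-dependent part) and an orthogonal residual (D1, the property-independent part). A Euclidean sketch follows; the Mahalanobis variant would first whiten the coordinates, and all data here are stand-ins.

        import numpy as np

        def decompose(face_a, face_b, pathway):
            """Split the difference between two faces (in PCA space) into
            a component along a property pathway, e.g. neutral-to-smile,
            and a property-independent residual orthogonal to it."""
            u = pathway / np.linalg.norm(pathway)
            diff = face_b - face_a
            d2 = diff @ u                      # property-dependent length
            residual = diff - d2 * u
            d1 = np.linalg.norm(residual)      # property-independent part
            return d1, abs(d2)

        rng = np.random.default_rng(0)
        neutral, smiling = rng.normal(size=(2, 255, 20))   # PCA coordinates
        # Pathway estimated as the mean displacement from neutral to smiling.
        pathway = smiling.mean(axis=0) - neutral.mean(axis=0)
        d1, d2 = decompose(neutral[0], smiling[0], pathway)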

  8. Segmentation of human face using gradient-based approach

    NASA Astrophysics Data System (ADS)

    Baskan, Selin; Bulut, M. Mete; Atalay, Volkan

    2001-04-01

    This paper describes a method for automatic segmentation of facial features such as eyebrows, eyes, nose, mouth, and ears in color images. This work is an initial step for a wide range of applications based on feature-based approaches, such as face recognition, lip-reading, gender estimation, facial expression analysis, etc. The human face can be characterized by its skin color and nearly elliptical shape. For this purpose, face detection is performed using color and shape information. Uniform illumination is assumed. No restrictions on glasses, make-up, beard, etc. are imposed. Facial features are extracted using the vertically and horizontally oriented gradient projections. The gradient of a minimum with respect to its neighboring maxima gives the boundaries of a facial feature. Each facial feature has a different horizontal characteristic. These characteristics are derived by extensive experimentation with many face images. Using fuzzy set theory, the similarity between the candidate and the feature characteristic under consideration is calculated. The gradient-based method is supplemented with anthropometrical information for robustness. Ear detection is performed using contour-based shape descriptors. This method detects the facial features and circumscribes each facial feature with the smallest rectangle possible. The AR database is used for testing. The developed method is also suitable for real-time systems.
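
    Gradient projections themselves take only a few lines: sum the gradient magnitude along rows and columns, then read feature bands off the extrema. A minimal sketch on a stand-in grayscale crop; the fuzzy matching and anthropometric checks described above are not reproduced.

        import numpy as np

        def gradient_projections(gray):
            """Horizontal and vertical projections of gradient magnitude.

            Rows with large horizontal-projection values mark feature bands
            (eyes, nostrils, mouth); minima between neighboring maxima bound
            each feature vertically, and the vertical projection bounds it
            horizontally."""
            gy, gx = np.gradient(gray.astype(float))
            mag = np.hypot(gx, gy)
            return mag.sum(axis=1), mag.sum(axis=0)

        face = np.random.default_rng(0).random((120, 90))   # stand-in crop
        horiz, vert = gradient_projections(face)
        eye_row = int(np.argmax(horiz[:60]))   # strongest band, upper half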

  9. Facial Affect Recognition Using Regularized Discriminant Analysis-Based Algorithms

    NASA Astrophysics Data System (ADS)

    Lee, Chien-Cheng; Huang, Shin-Sheng; Shih, Cheng-Yuan

    2010-12-01

    This paper presents a novel and effective method for facial expression recognition, covering happiness, disgust, fear, anger, sadness, surprise, and the neutral state. The proposed method utilizes a regularized discriminant analysis-based boosting algorithm (RDAB) with effective Gabor features to recognize the facial expressions. An entropy criterion is applied to select effective Gabor features, a subset of informative and nonredundant Gabor features. The proposed RDAB algorithm uses RDA as a learner in the boosting algorithm. RDA combines the strengths of linear discriminant analysis (LDA) and quadratic discriminant analysis (QDA). It solves the small sample size and ill-posed problems suffered by QDA and LDA through a regularization technique. Additionally, this study uses the particle swarm optimization (PSO) algorithm to estimate optimal parameters in RDA. Experiment results demonstrate that our approach can accurately and robustly recognize facial expressions.
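
    The RDA at the heart of the method regularizes each class covariance in two steps: a blend with the pooled (LDA-like) covariance, then shrinkage toward a scaled identity. A sketch of that covariance construction, with (lambda, gamma) as the two parameters the paper tunes by PSO; the data are stand-ins for Gabor features.

        import numpy as np

        def rda_covariances(X, y, lam, gamma):
            """Friedman-style regularized class covariances.

            lam blends each class covariance with the pooled covariance
            (lam=1 recovers LDA, lam=0 QDA); gamma then shrinks toward a
            scaled identity to stabilize small-sample estimates."""
            d = X.shape[1]
            pooled = np.cov(X.T, bias=True)
            covs = {}
            for c in np.unique(y):
                Sc = np.cov(X[y == c].T, bias=True)
                S = (1 - lam) * Sc + lam * pooled
                covs[c] = (1 - gamma) * S + gamma * (np.trace(S) / d) * np.eye(d)
            return covs

        rng = np.random.default_rng(0)
        X = rng.normal(size=(300, 8))        # Gabor-feature vectors (stand-in)
        y = rng.integers(0, 7, size=300)     # seven expression classes
        covs = rda_covariances(X, y, lam=0.5, gamma=0.1)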

  10. Hemispheric differences in recognizing upper and lower facial displays of emotion.

    PubMed

    Prodan, C I; Orbelo, D M; Testa, J A; Ross, E D

    2001-01-01

    To determine if there are hemispheric differences in processing upper versus lower facial displays of emotion. Recent evidence suggests that there are two broad classes of emotions with differential hemispheric lateralization. Primary emotions (e.g. anger, fear) and associated displays are innate, are recognized across all cultures, and are thought to be modulated by the right hemisphere. Social emotions (e.g., guilt, jealousy) and associated "display rules" are learned during early child development, vary across cultures, and are thought to be modulated by the left hemisphere. Display rules are used by persons to alter, suppress or enhance primary emotional displays for social purposes. During deceitful behaviors, a subject's true emotional state is often leaked through upper rather than lower facial displays, giving rise to facial blends of emotion. We hypothesized that upper facial displays are processed preferentially by the right hemisphere, as part of the primary emotional system, while lower facial displays are processed preferentially by the left hemisphere, as part of the social emotional system. 30 strongly right-handed adult volunteers were tested tachistoscopically by randomly flashing facial displays of emotion to the right and left visual fields. The stimuli were line drawings of facial blends with different emotions displayed on the upper versus lower face. The subjects were tested under two conditions: 1) without instructions and 2) with instructions to attend to the upper face. Without instructions, the subjects robustly identified the emotion displayed on the lower face, regardless of visual field presentation. With instructions to attend to the upper face, for the left visual field they robustly identified the emotion displayed on the upper face. For the right visual field, they continued to identify the emotion displayed on the lower face, but to a lesser degree. Our results support the hypothesis that hemispheric differences exist in the ability to process upper versus lower facial displays of emotion. Attention appears to enhance the ability to explore these hemispheric differences under experimental conditions. Our data also support the recent observation that the right hemisphere has a greater ability to recognize deceitful behaviors compared with the left hemisphere. This may be attributable to the different roles the hemispheres play in modulating social versus primary emotions and related behaviors.

  11. Reconstructing 3D Face Model with Associated Expression Deformation from a Single Face Image via Constructing a Low-Dimensional Expression Deformation Manifold.

    PubMed

    Wang, Shu-Fan; Lai, Shang-Hong

    2011-10-01

    Facial expression modeling is central to facial expression recognition and expression synthesis for facial animation. In this work, we propose a manifold-based 3D face reconstruction approach to estimating the 3D face model and the associated expression deformation from a single face image. With the proposed robust weighted feature map (RWF), we can obtain the dense correspondences between 3D face models and build a nonlinear 3D expression manifold from a large set of 3D facial expression models. Then a Gaussian mixture model in this manifold is learned to represent the distribution of expression deformation. By combining the merits of morphable neutral face model and the low-dimensional expression manifold, a novel algorithm is developed to reconstruct the 3D face geometry as well as the facial deformation from a single face image in an energy minimization framework. Experimental results on simulated and real images are shown to validate the effectiveness and accuracy of the proposed algorithm.

  12. Objective grading of facial paralysis using Local Binary Patterns in video processing.

    PubMed

    He, Shu; Soraghan, John J; O'Reilly, Brian F

    2008-01-01

    This paper presents a novel framework for objective measurement of facial paralysis in biomedical videos. The motion information in the horizontal and vertical directions and the appearance features on the apex frames are extracted based on Local Binary Patterns (LBP) on the temporal-spatial domain in each facial region. These features are temporally and spatially enhanced by the application of block schemes. A multi-resolution extension of uniform LBP is proposed to efficiently combine the micro-patterns and large-scale patterns into a feature vector, which increases the algorithmic robustness and reduces noise effects while still retaining computational simplicity. The symmetry of facial movements is measured by the Resistor-Average Distance (RAD) between LBP features extracted from the two sides of the face. A Support Vector Machine (SVM) is applied to provide quantitative evaluation of facial paralysis based on the House-Brackmann (H-B) Scale. The proposed method is validated by experiments with 197 subject videos, which demonstrate its accuracy and efficiency.
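
    The symmetry measure pairs region-wise LBP histograms from the two halves of the face with the Resistor-Average Distance, which combines the two directed KL divergences the way parallel resistors combine. The sketch below uses skimage's plain spatial LBP as a stand-in for the paper's temporal-spatial, multi-resolution variant.

        import numpy as np
        from skimage.feature import local_binary_pattern

        def lbp_hist(region, P=8, R=1):
            """Uniform LBP histogram of one facial region."""
            codes = local_binary_pattern(region, P, R, method="uniform")
            hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2),
                                   density=True)
            return hist + 1e-9              # avoid zero bins in the KL terms

        def kl(p, q):
            return float(np.sum(p * np.log(p / q)))

        def resistor_average(p, q):
            """RAD(P, Q) = 1 / (1/KL(P||Q) + 1/KL(Q||P))."""
            a, b = kl(p, q), kl(q, p)
            return a * b / (a + b)

        rng = np.random.default_rng(0)
        face = (rng.random((100, 100)) * 255).astype(np.uint8)   # stand-in
        left, right = face[:, :50], np.fliplr(face[:, 50:])      # mirrored halves
        asymmetry = resistor_average(lbp_hist(left), lbp_hist(right))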

  13. Super-resolution method for face recognition using nonlinear mappings on coherent features.

    PubMed

    Huang, Hua; He, Huiting

    2011-01-01

    Low resolution (LR) of face images significantly decreases the performance of face recognition. To address this problem, we present a super-resolution method that uses nonlinear mappings to infer coherent features that favor higher recognition by nearest neighbor (NN) classifiers for recognition of a single LR face image. Canonical correlation analysis is applied to establish the coherent subspaces between the principal component analysis (PCA) based features of high-resolution (HR) and LR face images. Then, a nonlinear mapping between HR/LR features can be built by radial basis functions (RBFs) with lower regression errors in the coherent feature space than in the PCA feature space. Thus, we can compute super-resolved coherent features corresponding to an input LR image efficiently and accurately according to the trained RBF model. Face identity can then be obtained by feeding these super-resolved features to a simple NN classifier. Extensive experiments on the Facial Recognition Technology, University of Manchester Institute of Science and Technology, and Olivetti Research Laboratory databases show that the proposed method outperforms the state-of-the-art face recognition algorithms for a single LR image in terms of both recognition rate and robustness to facial variations of pose and expression.
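
    The coherent-feature step is canonical correlation analysis between PCA features of paired HR and LR faces; in the resulting subspaces the two representations are maximally correlated, which is what makes the later LR-to-HR regression easier. A sketch of that step alone, with random stand-in data; the RBF mapping and NN matching are left out.

        import numpy as np
        from sklearn.cross_decomposition import CCA
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(0)
        hr = rng.normal(size=(400, 4096))   # high-res face vectors (stand-in)
        lr = rng.normal(size=(400, 256))    # matching low-res vectors

        Xh = PCA(n_components=50).fit_transform(hr)
        Xl = PCA(n_components=50).fit_transform(lr)

        # CCA finds paired subspaces where HR and LR features are maximally
        # correlated ("coherent").
        cca = CCA(n_components=20).fit(Xh, Xl)
        Ch, Cl = cca.transform(Xh, Xl)       # coherent features for training
        probe = cca.transform(Xh[:1], Xl[:1])[1]   # an LR probe, projected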

  14. Deficits in motor abilities and developmental fractionation of imitation performance in high-functioning autism spectrum disorders.

    PubMed

    Biscaldi, Monica; Rauh, Reinhold; Irion, Lisa; Jung, Nikolai H; Mall, Volker; Fleischhaker, Christian; Klein, Christoph

    2014-07-01

    The co-occurrence of motor and imitation disabilities often characterises the spectrum of deficits seen in patients with autism spectrum disorders (ASD). Whether these seemingly separate deficits are inter-related and whether, in particular, motor deficits contribute to the expression of imitation deficits is the topic of the present study and was investigated by comparing these deficits' cross-sectional developmental trajectories. To that end, different components of motor performance assessed in the Zurich Neuromotor Assessment and imitation abilities for facial movements and non-meaningful gestures were tested in 70 subjects (aged 6-29 years), including 36 patients with high-functioning ASD and 34 age-matched typically developed (TD) participants. The results show robust deficits in probands with ASD in timed motor performance and in the quality of movement, which are all independent of age, with one exception: only diadochokinesis improves moderately with increasing age in ASD probands. Imitation of facial movements and of non-meaningful hand, finger, and hand-finger gestures not related to social context or tool use is also impaired in ASD subjects, but in contrast to motor performance this deficit improves overall with age. A general imitation factor, extracted from the highly inter-correlated imitation tests, is differentially correlated with components of neuromotor performance in ASD and TD participants. By fractionating developmentally stable motor deficits from developmentally dynamic imitation deficits, we infer that imitation deficits are primarily cognitive in nature.

  15. The neurosurgical treatment of neuropathic facial pain.

    PubMed

    Brown, Jeffrey A

    2014-04-01

    This article reviews the definition, etiology and evaluation, and medical and neurosurgical treatment of neuropathic facial pain. A neuropathic origin for facial pain should be considered when evaluating a patient for rhinologic surgery because of complaints of facial pain. Neuropathic facial pain is caused by vascular compression of the trigeminal nerve in the prepontine cistern and is characterized by an intermittent prickling or stabbing component or a constant burning, searing pain. Medical treatment consists of anticonvulsant medication. Neurosurgical treatment may require microvascular decompression of the trigeminal nerve. Copyright © 2014 Elsevier Inc. All rights reserved.

  16. The importance of skin color and facial structure in perceiving and remembering others: an electrophysiological study.

    PubMed

    Brebner, Joanne L; Krigolson, Olav; Handy, Todd C; Quadflieg, Susanne; Turk, David J

    2011-05-04

    The own-race bias (ORB) is a well-documented recognition advantage for own-race (OR) over cross-race (CR) faces, the origin of which remains unclear. In the current study, event-related potentials (ERPs) were recorded while Caucasian participants age-categorized Black and White faces which were digitally altered to display either a race congruent or incongruent facial structure. The results of a subsequent surprise memory test indicated that regardless of facial structure participants recognized White faces better than Black faces. Additional analyses revealed that temporally-early ERP components associated with face-specific perceptual processing (N170) and the individuation of facial exemplars (N250) were selectively sensitive to skin color. In addition, the N200 (a component that has been linked to increased attention and depth of encoding afforded to in-group and OR faces) was modulated by color and structure, and correlated with subsequent memory performance. However, the LPP component associated with the cognitive evaluation of perceptual input was influenced by racial differences in facial structure alone. These findings suggest that racial differences in skin color and facial structure are detected during the encoding of unfamiliar faces, and that the categorization of conspecifics as members of our social in-group on the basis of their skin color may be a determining factor in our ability to subsequently remember them. Copyright © 2011 Elsevier B.V. All rights reserved.

  17. Combined flaps based on the superficial temporal vascular system for reconstruction of facial defects.

    PubMed

    Zhou, Renpeng; Wang, Chen; Qian, Yunliang; Wang, Danru

    2015-09-01

    Facial defects are multicomponent deficiencies rather than simple soft-tissue defects. Based on different branches of the superficial temporal vascular system, various tissue components can be obtained to reconstruct facial defects individually. From January 2004 to December 2013, 31 patients underwent reconstruction of facial defects with composite flaps based on the superficial temporal vascular system. Twenty cases of nasal defects were repaired with skin and cartilage components, six cases of facial defects were treated with double island flaps of the skin and fascia, three patients underwent eyebrow and lower eyelid reconstruction with hairy and hairless flaps simultaneously, and two patients underwent soft-tissue repair with auricular combined flaps and cranial bone grafts. All flaps survived completely. Donor-site morbidity was minimal, and donor sites were closed primarily. Donor areas healed with acceptable cosmetic results. The final outcome was satisfactory. Combined flaps based on the superficial temporal vascular system are a useful and versatile option in facial soft-tissue reconstruction. Copyright © 2015 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.

  18. Evaluating visibility of age spot and freckle based on simulated spectral reflectance distribution and facial color image

    NASA Astrophysics Data System (ADS)

    Hirose, Misa; Toyota, Saori; Tsumura, Norimichi

    2018-02-01

    In this research, we evaluate the visibility of age spots and freckles while varying the blood volume, based on simulated spectral reflectance distributions and actual facial color images, and compare the results. First, we generate three types of spatial distributions of age spots and freckles in patch-like images based on the simulated spectral reflectance. The spectral reflectance is simulated using Monte Carlo simulation of light transport in multi-layered tissue. Next, we reconstruct the facial color image with varying blood volume. We acquire the concentration distributions of the melanin, hemoglobin, and shading components by applying independent component analysis to a facial color image. We reproduce images using the obtained melanin and shading concentrations and the changed hemoglobin concentration. Finally, we evaluate the visibility of pigmentation using the simulated spectral reflectance distributions and the facial color images. For the simulated spectral reflectance distributions, we found that visibility became lower as blood volume increased. However, the facial color images show that a specific blood volume reduces the visibility of the actual pigmentations.

  19. Effects of a small talking facial image on autonomic activity: the moderating influence of dispositional BIS and BAS sensitivities and emotions.

    PubMed

    Ravaja, Niklas

    2004-01-01

    We examined the moderating influence of dispositional behavioral inhibition system (BIS) and behavioral activation system (BAS) sensitivities, Negative Affect, and Positive Affect on the relationship between a small moving vs. static facial image and autonomic responses in 36 young adults viewing/listening to news messages read by a newscaster. Autonomic parameters measured were respiratory sinus arrhythmia (RSA), the low-frequency (LF) component of heart rate variability (HRV), electrodermal activity, and pulse transit time (PTT). The results showed that dispositional BAS sensitivity, particularly BAS Fun Seeking, and Negative Affect interacted with facial image motion in predicting autonomic nervous system activity. A moving facial image was related to lower RSA, a lower LF component of HRV, and shorter PTTs compared with a static facial image among high-BAS individuals. Even a small talking facial image may contribute to sustained attentional engagement among high-BAS individuals, given that the BAS directs attention toward positive cues and a moving social stimulus may act as a positive incentive for such individuals.
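
    For reference, a minimal sketch of one standard way to estimate the LF component of HRV (assuming scipy/numpy, the conventional 0.04-0.15 Hz LF band, and 4 Hz resampling; the abstract does not specify the authors' exact processing):

    ```python
    # Sketch only: LF power of heart rate variability from R-R intervals (s).
    import numpy as np
    from scipy.interpolate import interp1d
    from scipy.signal import welch

    def lf_power(rr_intervals, fs=4.0, band=(0.04, 0.15)):
        t = np.cumsum(rr_intervals)                  # beat times in seconds
        grid = np.arange(t[0], t[-1], 1.0 / fs)      # uniform resampling grid
        rr = interp1d(t, rr_intervals, kind='cubic')(grid)
        f, pxx = welch(rr - rr.mean(), fs=fs, nperseg=min(256, len(grid)))
        mask = (f >= band[0]) & (f <= band[1])
        return np.trapz(pxx[mask], f[mask])          # LF power in s^2
    ```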

  20. Men’s Facial Width-to-Height Ratio Predicts Aggression: A Meta-Analysis

    PubMed Central

    Haselhuhn, Michael P.; Ormiston, Margaret E.; Wong, Elaine M.

    2015-01-01

    Recent research has identified men’s facial width-to-height ratio (fWHR) as a reliable predictor of aggressive tendencies and behavior. Other research, however, has failed to replicate the fWHR-aggression relationship and has questioned whether previous findings are robust. In the current paper, we synthesize existing work by conducting a meta-analysis to estimate whether and how fWHR predicts aggression. Our results indicate a small, but significant, positive relationship between men’s fWHR and aggression. PMID:25849992
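
    A minimal sketch of the inverse-variance pooling at the core of such a meta-analysis, assuming a fixed-effect model over Fisher z-transformed correlations (the authors' actual model and moderator analyses may differ):

    ```python
    # Sketch only: pool per-study correlations into one effect size with a
    # 95% confidence interval; rs are correlations, ns are sample sizes.
    import numpy as np

    def pool_correlations(rs, ns):
        z = np.arctanh(np.asarray(rs, float))     # Fisher z-transform
        w = np.asarray(ns, float) - 3.0           # 1/Var(z) = n - 3
        z_bar = np.sum(w * z) / np.sum(w)
        se = 1.0 / np.sqrt(np.sum(w))
        return tuple(np.tanh([z_bar, z_bar - 1.96 * se, z_bar + 1.96 * se]))

    # e.g. pool_correlations([0.16, 0.05, 0.21], [120, 98, 187])
    ```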

  1. Human Facial Shape and Size Heritability and Genetic Correlations.

    PubMed

    Cole, Joanne B; Manyama, Mange; Larson, Jacinda R; Liberton, Denise K; Ferrara, Tracey M; Riccardi, Sheri L; Li, Mao; Mio, Washington; Klein, Ophir D; Santorico, Stephanie A; Hallgrímsson, Benedikt; Spritz, Richard A

    2017-02-01

    The human face is an array of variable physical features that together make each of us unique and distinguishable. Striking familial facial similarities underscore a genetic component, but little is known of the genes that underlie facial shape differences. Numerous studies have estimated facial shape heritability using various methods. Here, we used advanced three-dimensional imaging technology and quantitative human genetics analysis to estimate narrow-sense heritability, heritability explained by common genetic variation, and pairwise genetic correlations of 38 measures of facial shape and size in normal African Bantu children from Tanzania. Specifically, we fit a linear mixed model of genetic relatedness between close and distant relatives to jointly estimate variance components that correspond to heritability explained by genome-wide common genetic variation and variance explained by uncaptured genetic variation, the sum representing total narrow-sense heritability. Our significant estimates for narrow-sense heritability of specific facial traits range from 28 to 67%, with horizontal measures being slightly more heritable than vertical or depth measures. Furthermore, for over half of facial traits, >90% of narrow-sense heritability can be explained by common genetic variation. We also find high absolute genetic correlation between most traits, indicating large overlap in underlying genetic loci. Not surprisingly, traits measured in the same physical orientation (i.e., both horizontal or both vertical) have high positive genetic correlations, whereas traits in opposite orientations have high negative correlations. The complex genetic architecture of facial shape informs our understanding of the intricate relationships among different facial features as well as overall facial development. Copyright © 2017 by the Genetics Society of America.
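
    In symbols, the variance-components model described above can be sketched as follows (notation assumed here, not taken from the paper):

    ```latex
    % y: a facial trait; K_SNP: relatedness from genome-wide common SNPs;
    % K_ped: relatedness capturing the uncaptured genetic variation.
    \begin{align*}
      y &= X\beta + g_{1} + g_{2} + e, \qquad
      g_{1} \sim \mathcal{N}(0,\ \sigma_{1}^{2} K_{\mathrm{SNP}}), \quad
      g_{2} \sim \mathcal{N}(0,\ \sigma_{2}^{2} K_{\mathrm{ped}}), \quad
      e \sim \mathcal{N}(0,\ \sigma_{e}^{2} I) \\
      h^{2} &= \frac{\sigma_{1}^{2} + \sigma_{2}^{2}}
                    {\sigma_{1}^{2} + \sigma_{2}^{2} + \sigma_{e}^{2}},
      \qquad
      h^{2}_{\mathrm{SNP}} = \frac{\sigma_{1}^{2}}
                    {\sigma_{1}^{2} + \sigma_{2}^{2} + \sigma_{e}^{2}}
    \end{align*}
    ```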

  2. A System for Studying Facial Nerve Function in Rats through Simultaneous Bilateral Monitoring of Eyelid and Whisker Movements

    PubMed Central

    Heaton, James T.; Kowaleski, Jeffrey M.; Bermejo, Roberto; Zeigler, H. Philip; Ahlgren, David J.; Hadlock, Tessa A.

    2008-01-01

    The occurrence of inappropriate co-contraction of facially innervated muscles in humans (synkinesis) is a common sequela of facial nerve injury and recovery. We have developed a system for studying facial nerve function and synkinesis in restrained rats using non-contact opto-electronic techniques that enable simultaneous bilateral monitoring of eyelid and whisker movements. Whisking is monitored in high spatio-temporal resolution using laser micrometers, and eyelid movements are detected using infrared diode and phototransistor pairs that respond to the increased reflection when the eyelids cover the cornea. To validate the system, eight rats were tested with multiple five-minute sessions that included corneal air puffs to elicit blink and scented air flows to elicit robust whisking. Four rats then received unilateral facial nerve section and were tested at weeks 3–6. Whisking and eye blink behavior occurred both spontaneously and under stimulus control, with no detectable difference from published whisking data. Proximal facial nerve section caused an immediate ipsilateral loss of whisking and eye blink response, but some ocular closures emerged due to retractor bulbi muscle function. The independence observed between whisker and eyelid control indicates that this system may provide a powerful tool for identifying abnormal co-activation of facial zones resulting from aberrant axonal regeneration. PMID:18442856

  3. Relationship between individual differences in functional connectivity and facial-emotion recognition abilities in adults with traumatic brain injury.

    PubMed

    Rigon, A; Voss, M W; Turkstra, L S; Mutlu, B; Duff, M C

    2017-01-01

    Although several studies have demonstrated that facial-affect recognition impairment is common following moderate-severe traumatic brain injury (TBI), and that there are diffuse alterations in large-scale functional brain networks in TBI populations, little is known about the relationship between the two. Here, in a sample of 26 participants with TBI and 20 healthy comparison participants (HC), we measured facial-affect recognition abilities and resting-state functional connectivity (rs-FC) using fMRI. We then used network-based statistics to examine (A) the presence of rs-FC differences between individuals with TBI and HC within the facial-affect processing network, and (B) the association between inter-individual differences in emotion recognition skills and rs-FC within the facial-affect processing network. We found that participants with TBI showed significantly lower rs-FC in a component comprising homotopic and within-hemisphere, anterior-posterior connections within the facial-affect processing network. In addition, within the TBI group, participants with higher emotion-labeling skills showed stronger rs-FC within a network comprising intra- and inter-hemispheric bilateral connections. Findings indicate that the ability to successfully recognize facial affect after TBI is related to rs-FC within components of facial-affective networks, and provide new evidence that furthers our understanding of the mechanisms underlying emotion recognition impairment in TBI.

  4. Altered saccadic targets when processing facial expressions under different attentional and stimulus conditions.

    PubMed

    Boutsen, Frank A; Dvorak, Justin D; Pulusu, Vinay K; Ross, Elliott D

    2017-04-01

    Depending on a subject's attentional bias, robust changes in emotional perception occur when facial blends (different emotions expressed on upper/lower face) are presented tachistoscopically. If no instructions are given, subjects overwhelmingly identify the lower facial expression when blends are presented to either visual field. If asked to attend to the upper face, subjects overwhelmingly identify the upper facial expression in the left visual field but remain slightly biased to the lower facial expression in the right visual field. The current investigation sought to determine whether differences in initial saccadic targets could help explain the perceptual biases described above. Ten subjects were presented with full and blend facial expressions under different attentional conditions. No saccadic differences were found for left versus right visual field presentations or for full facial versus blend stimuli. When asked to identify the presented emotion, saccades were directed to the lower face. When asked to attend to the upper face, saccades were directed to the upper face. When asked to attend to the upper face and try to identify the emotion, saccades were directed to the upper face but to a lesser degree. Thus, saccadic behavior supports the concept that there are cognitive-attentional pre-attunements when subjects visually process facial expressions. However, these pre-attunements do not fully explain the perceptual superiority of the left visual field for identifying the upper facial expression when facial blends are presented tachistoscopically. Hence other perceptual factors must be in play, such as the phenomenon of virtual scanning. Published by Elsevier Ltd.

  5. Facial averageness and genetic quality: Testing heritability, genetic correlation with attractiveness, and the paternal age effect.

    PubMed

    Lee, Anthony J; Mitchem, Dorian G; Wright, Margaret J; Martin, Nicholas G; Keller, Matthew C; Zietsch, Brendan P

    2016-01-01

    Popular theory suggests that facial averageness is preferred in a partner for genetic benefits to offspring. However, whether facial averageness is associated with genetic quality is yet to be established. Here, we computed an objective measure of facial averageness for a large sample (N = 1,823) of identical and nonidentical twins and their siblings to test two predictions from the theory that facial averageness reflects genetic quality. First, we use biometrical modelling to estimate the heritability of facial averageness, which is necessary if it reflects genetic quality. We also test for a genetic association between facial averageness and facial attractiveness. Second, we assess whether paternal age at conception (a proxy of mutation load) is associated with facial averageness and facial attractiveness. Our findings are mixed with respect to our hypotheses. We found that facial averageness does have a genetic component and that a significant phenotypic correlation exists between facial averageness and attractiveness. However, we did not find a genetic correlation between facial averageness and attractiveness (so we cannot say that the genes that affect facial averageness also affect facial attractiveness), and paternal age at conception was not negatively associated with facial averageness. These findings support some of the previously untested assumptions of the 'genetic benefits' account of facial averageness but cast doubt on others.

  6. Are Happy Faces Attractive? The Roles of Early vs. Late Processing

    PubMed Central

    Sun, Delin; Chan, Chetwyn C. H.; Fan, Jintu; Wu, Yi; Lee, Tatia M. C.

    2015-01-01

    Facial attractiveness is closely related to romantic love. To understand if the neural underpinnings of perceived facial attractiveness and facial expression are similar constructs, we recorded neural signals using an event-related potential (ERP) methodology for 20 participants who were viewing faces with varied attractiveness and expressions. We found that attractiveness and expression were reflected by two early components, P2-lateral (P2l) and P2-medial (P2m), respectively; their interaction effect was reflected by LPP, a late component. The findings suggested that facial attractiveness and expression are first processed in parallel for discrimination between stimuli. After the initial processing, more attentional resources are allocated to the faces with the most positive or most negative valence in both the attractiveness and expression dimensions. The findings contribute to the theoretical model of face perception. PMID:26648885

  7. Facial expression identification using 3D geometric features from Microsoft Kinect device

    NASA Astrophysics Data System (ADS)

    Han, Dongxu; Al Jawad, Naseer; Du, Hongbo

    2016-05-01

    Facial expression identification is an important part of face recognition and is closely related to emotion detection from face images. Various solutions have been proposed in the past using different types of cameras and features. The Microsoft Kinect device has been widely used for multimedia interactions and, more recently, has been increasingly deployed to support scientific investigations. This paper explores the effectiveness of using the device to identify emotional facial expressions such as surprise, smile, and sadness, and evaluates the usefulness of the 3D data points on a face mesh structure obtained from the Kinect device. We present a distance-based geometric feature component derived from the distances between points on the face mesh and selected reference points in a single frame. The feature components extracted across a sequence of frames that starts and ends with the neutral expression represent a whole expression. The feature vector eliminates the need for complex face orientation correction, simplifying the feature extraction process and making it more efficient. We applied a kNN classifier that exploits a feature-component-based similarity measure following the principle of dynamic time warping to determine the closest neighbors. Preliminary tests on a small-scale database of different facial expressions show the promise of the newly developed features and the usefulness of the Kinect device in facial expression identification.
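
    A minimal sketch of the per-frame distance features and a DTW-based kNN matcher in the spirit of the description above, assuming numpy (all names are illustrative, not the authors'):

    ```python
    # Sketch only: distances from mesh points to reference points per frame,
    # DTW over frame sequences, and majority-vote kNN classification.
    import numpy as np

    def frame_features(mesh_points, reference_points):
        d = mesh_points[:, None, :] - reference_points[None, :, :]
        return np.linalg.norm(d, axis=2).ravel()

    def dtw_distance(seq_a, seq_b):
        n, m = len(seq_a), len(seq_b)
        acc = np.full((n + 1, m + 1), np.inf)
        acc[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])
                acc[i, j] = cost + min(acc[i - 1, j], acc[i, j - 1], acc[i - 1, j - 1])
        return acc[n, m]

    def knn_classify(query_seq, training_set, k=3):
        """training_set: list of (frame-feature sequence, label) pairs."""
        nearest = sorted((dtw_distance(query_seq, s), lbl) for s, lbl in training_set)[:k]
        votes = [lbl for _, lbl in nearest]
        return max(set(votes), key=votes.count)
    ```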

  8. 3D face recognition under expressions, occlusions, and pose variations.

    PubMed

    Drira, Hassen; Ben Amor, Boulbaba; Srivastava, Anuj; Daoudi, Mohamed; Slama, Rim

    2013-09-01

    We propose a novel geometric framework for analyzing 3D faces, with the specific goals of comparing, matching, and averaging their shapes. Here we represent facial surfaces by radial curves emanating from the nose tips and use elastic shape analysis of these curves to develop a Riemannian framework for analyzing shapes of full facial surfaces. This representation, along with the elastic Riemannian metric, seems natural for measuring facial deformations and is robust to challenges such as large facial expressions (especially those with open mouths), large pose variations, missing parts, and partial occlusions due to glasses, hair, and so on. This framework is shown to be promising from both empirical and theoretical perspectives. In terms of the empirical evaluation, our results match or improve upon the state-of-the-art methods on three prominent databases: FRGCv2, GavabDB, and Bosphorus, each posing a different type of challenge. From a theoretical perspective, this framework allows for formal statistical inferences, such as the estimation of missing facial parts using PCA on tangent spaces and computing average shapes.

  9. Facial disability index (FDI): Adaptation to Spanish, reliability and validity

    PubMed Central

    Gonzalez-Cardero, Eduardo; Cayuela, Aurelio; Acosta-Feria, Manuel; Gutierrez-Perez, Jose-Luis

    2012-01-01

    Objectives: To adapt the facial disability index (FDI) described by VanSwearingen and Brach in 1995 to Spanish, and to assess its reliability and validity in patients with facial nerve paresis after parotidectomy. Study Design: The study was conducted in two stages: a) cross-cultural adaptation of the questionnaire and b) a cross-sectional study of 79 Spanish-speaking patients who suffered facial paresis after superficial parotidectomy with facial nerve preservation. The cross-cultural adaptation comprised the following stages: (I) initial translation, (II) synthesis of the translated document, (III) back-translation, (IV) review by a board of experts, (V) pilot study of the pre-final draft, and (VI) analysis of the pilot study and final draft. Results: Internal consistency (Cronbach's alpha) was 0.83 for the complete scale, and 0.77 and 0.82 for the physical and social well-being subscales, respectively. Analysis of the factorial validity of the main components of the adapted FDI yielded results similar to the original questionnaire. Bivariate correlations between the FDI and the House-Brackmann scale were positive. The variance percentage was calculated for all FDI components. Conclusions: The FDI questionnaire is a specific instrument for assessing facial neuromuscular dysfunction and a useful tool for determining quality of life in patients with facial nerve paralysis. The Spanish-adapted FDI is equivalent to the original questionnaire and shows similar reliability and validity. The proven reproducibility, reliability, and validity of this questionnaire make it a useful additional tool for evaluating the impact of facial nerve paralysis in Spanish-speaking patients. Key words: Parotidectomy, facial nerve paralysis, facial disability. PMID:22926474
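
    A minimal sketch of the Cronbach's alpha computation reported above, assuming numpy and an (n_subjects, n_items) matrix of questionnaire responses (names illustrative):

    ```python
    # Sketch only: internal-consistency estimate for a rating scale.
    import numpy as np

    def cronbach_alpha(items):
        items = np.asarray(items, float)
        k = items.shape[1]                              # number of items
        item_var = items.var(axis=0, ddof=1).sum()      # sum of item variances
        total_var = items.sum(axis=1).var(ddof=1)       # variance of total score
        return (k / (k - 1.0)) * (1.0 - item_var / total_var)
    ```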

  10. Supramolecular Disassembly of Facially Amphiphilic Dendrimer Assemblies in Response to Physical, Chemical, and Biological Stimuli

    PubMed Central

    2015-01-01

    Conspectus: Supramolecular assemblies formed from spontaneous self-assembly of amphiphilic macromolecules are explored as biomimetic architectures and for applications in areas such as sensing, drug delivery, and diagnostics. Macromolecular assemblies are usually preferred, compared with their simpler small molecule counterparts, due to their low critical aggregate concentrations (CAC) and high thermodynamic stability. This Account focuses on the structural and functional aspects of assemblies formed from dendrimers, specifically facially amphiphilic dendrons that form micelle or inverse micelle type supramolecular assemblies depending on the nature of the solvent medium. The micelle type assemblies formed from facially amphiphilic dendrons sequester hydrophobic guest molecules in their interiors. The stability of these assemblies is dependent on the relative compatibility of the hydrophilic and hydrophobic functionalities with water, often referred to as the hydrophilic–lipophilic balance (HLB). Disruption of the HLB, using an external stimulus, could lead to disassembly of the aggregates, which can then be utilized to cause an actuation event, such as guest molecule release. Studying these possibilities has led to (i) a robust and general strategy for stimulus-induced disassembly and molecular release and (ii) the introduction of a new approach to protein-responsive supramolecular disassembly. The latter strategy provides a particularly novel avenue for impacting biomedical applications. Most stimuli-sensitive supramolecular assemblies have been designed to be responsive to factors such as pH, temperature, and redox conditions. The reason for this interest stems from the fact that certain disease microenvironments have aberrations in these factors. However, these variations are secondary imbalances in biology; imbalances in protein activity are the primary reasons for most, if not all, human pathology. There have been no robust strategies in stimulus-responsive assemblies that respond to these variations. The facially amphiphilic dendrimers provide a unique opportunity to explore this possibility. Similarly, the propensity of these molecules to form inverse micelles in apolar solvents and thus bind polar guest molecules, combined with the fact that these assemblies do not thermodynamically equilibrate in biphasic mixtures, was used to predictably simplify peptide mixtures. The structure–property relationships developed from these studies have led to a selective and highly sensitive detection of peptides in complex mixtures. Selectivity in peptide extraction was achieved using charge complementarity between the peptides and the hydrophilic components present in inverse micellar interiors. These findings will have implications in areas such as proteomics and biomarker detection. PMID:24937682

  11. Assessment of facial golden proportions among young Japanese women.

    PubMed

    Mizumoto, Yasushi; Deguchi, Toshio; Fong, Kelvin W C

    2009-08-01

    Facial proportions are of interest in orthodontics. The null hypothesis is that there is no difference in the golden proportions of soft-tissue facial balance between Japanese and white women. Facial proportions were assessed by examining photographs of 3 groups of Asian women: group 1, 30 young adult patients with a skeletal Class I occlusion; group 2, 30 models; and group 3, 14 popular actresses. Photographic prints or slides were digitized for image analysis. Group 1 subjects had standardized photos taken as part of their treatment; photos of the subjects in groups 2 and 3 were collected from magazines and other sources and were of varying sizes, so the output image size was not considered. The range of measurement errors was 0.17% to 1.16%. ANOVA was selected because the data set was normally distributed with homogeneous variances. The subjects in the 3 groups showed good total facial proportions. The proportions of the face-height components in group 1 were similar to the golden proportion, indicating a longer lower facial height and a shorter nose. Group 2 differed from the golden proportion, with a short lower facial height. Group 3 had golden proportions in all 7 measurements. The proportions of the face width deviated from the golden proportion, indicating a small mouth or wide-set eyes in groups 1 and 2. The null hypothesis was verified for the facial-height components in the group 3 actresses. Some measurements in groups 1 and 2 deviated from the golden proportion (ratio).
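
    As a worked illustration of comparing a facial proportion with the golden ratio (the specific landmark pairs behind the 7 measurements are not given in the abstract; the numbers below are hypothetical):

    ```python
    # Sketch only: relative deviation (%) of a facial ratio from the golden
    # proportion, phi ~ 1.618.
    GOLDEN = (1 + 5 ** 0.5) / 2

    def deviation_from_golden(longer_mm, shorter_mm):
        ratio = longer_mm / shorter_mm
        return 100.0 * (ratio - GOLDEN) / GOLDEN

    # e.g. deviation_from_golden(71.0, 43.0) -> ratio 1.651, about +2.1%
    ```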

  12. Intact mirror mechanisms for automatic facial emotions in children and adolescents with autism spectrum disorder.

    PubMed

    Schulte-Rüther, Martin; Otte, Ellen; Adigüzel, Kübra; Firk, Christine; Herpertz-Dahlmann, Beate; Koch, Iring; Konrad, Kerstin

    2017-02-01

    It has been suggested that an early deficit in the human mirror neuron system (MNS) is an important feature of autism. Recent findings related to simple hand and finger movements do not support a general dysfunction of the MNS in autism. Studies investigating facial actions (e.g., emotional expressions) have been more consistent but have mostly relied on passive observation tasks. We used a new variant of a compatibility task for the assessment of automatic facial mimicry responses that allowed for simultaneous control of attention to facial stimuli. We used facial electromyography in 18 children and adolescents with autism spectrum disorder (ASD) and 18 typically developing controls (TDCs). We observed a robust compatibility effect in ASD; that is, the execution of a facial expression was facilitated if a congruent facial expression was observed. Time course analysis of RT distributions and comparison to a classic compatibility task (symbolic Simon task) revealed that the facial compatibility effect appeared early and increased with time, suggesting fast and sustained activation of motor codes during observation of facial expressions. We observed a negative correlation of the compatibility effect with age across participants and in ASD, and a positive correlation between self-rated empathy and congruency for smiling faces in TDCs but not in ASD. This pattern of results suggests that basic motor mimicry is intact in ASD but is not associated with complex social cognitive abilities such as emotion understanding and empathy. Autism Res 2017, 10: 298-310. © 2016 International Society for Autism Research, Wiley Periodicals, Inc.

  13. Sex differences in facial emotion recognition across varying expression intensity levels from videos.

    PubMed

    Wingenbach, Tanja S H; Ashwin, Chris; Brosnan, Mark

    2018-01-01

    There has been much research on sex differences in the ability to recognise facial expressions of emotions, with results generally showing a female advantage in reading emotional expressions from the face. However, most of the research to date has used static images and/or 'extreme' examples of facial expressions. Therefore, little is known about how expression intensity and dynamic stimuli might affect the commonly reported female advantage in facial emotion recognition. The current study investigated sex differences in accuracy of response (Hu; unbiased hit rates) and response latencies for emotion recognition using short video stimuli (1 sec) of 10 different facial emotion expressions (anger, disgust, fear, sadness, surprise, happiness, contempt, pride, embarrassment, neutral) across three variations in the intensity of the emotional expression (low, intermediate, high) in an adolescent and adult sample (N = 111; 51 male, 60 female) aged between 16 and 45 (M = 22.2, SD = 5.7). Overall, females showed more accurate facial emotion recognition compared to males and were faster in correctly recognising facial emotions. The female advantage in reading expressions from the faces of others was unaffected by expression intensity levels and emotion categories used in the study. The effects were specific to recognition of emotions, as males and females did not differ in the recognition of neutral faces. Together, the results showed a robust sex difference favouring females in facial emotion recognition using video stimuli of a wide range of emotions and expression intensity variations.
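
    A minimal sketch of the Wagner-style unbiased hit rate (Hu), the accuracy measure named above, assuming numpy and a stimulus-by-response confusion count matrix (names illustrative):

    ```python
    # Sketch only: Hu = hits^2 / (row total * column total) per category.
    import numpy as np

    def unbiased_hit_rates(confusion):
        c = np.asarray(confusion, float)
        hits = np.diag(c)
        stim_totals = c.sum(axis=1)      # times each emotion was shown
        resp_totals = c.sum(axis=0)      # times each label was chosen
        with np.errstate(divide='ignore', invalid='ignore'):
            hu = np.where(resp_totals > 0,
                          hits ** 2 / (stim_totals * resp_totals), 0.0)
        return hu                        # one value in [0, 1] per emotion
    ```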

  15. Toward automated face detection in thermal and polarimetric thermal imagery

    NASA Astrophysics Data System (ADS)

    Gordon, Christopher; Acosta, Mark; Short, Nathan; Hu, Shuowen; Chan, Alex L.

    2016-05-01

    Visible-spectrum face detection algorithms perform reliably under controlled lighting conditions. However, variations in illumination and the application of cosmetics can distort the features used by common face detectors, thereby degrading their detection performance. Thermal and polarimetric thermal facial imaging is relatively invariant to illumination and robust to the application of makeup, because it measures emitted radiation instead of reflected light. The objective of this work is to evaluate a government off-the-shelf wavelet-based naïve-Bayes face detection algorithm and a commercial off-the-shelf Viola-Jones cascade face detection algorithm on face imagery acquired in different spectral bands. New classifiers were trained using the Viola-Jones cascade object detection framework with preprocessed facial imagery. Preprocessing with Difference-of-Gaussians (DoG) filtering reduces the modality gap between facial signatures across the different spectral bands, enabling more correlated histogram-of-oriented-gradients (HOG) features to be extracted from the preprocessed thermal and visible face images. Since training data are much more limited in the thermal spectrum than in the visible spectrum, it is not feasible to train a robust multi-modal face detector using thermal imagery alone. A large training dataset was therefore constituted from DoG-filtered visible and thermal imagery and used to generate a custom-trained Viola-Jones detector, which achieved a 40% increase in face detection rate on a testing dataset compared with a pre-trained baseline face detector. Insights gained in this research are valuable for the development of more robust multi-modal face detectors.
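
    A minimal sketch of the Difference-of-Gaussians preprocessing step described above, assuming OpenCV; the sigma values are illustrative, not the paper's:

    ```python
    # Sketch only: DoG filtering to narrow the visible/thermal modality gap
    # before training a cascade detector.
    import cv2
    import numpy as np

    def dog_filter(gray, sigma_small=1.0, sigma_large=2.0):
        img = gray.astype(np.float32)
        dog = (cv2.GaussianBlur(img, (0, 0), sigma_small)
               - cv2.GaussianBlur(img, (0, 0), sigma_large))
        # Normalize to 8 bits so both modalities share one value range.
        return cv2.normalize(dog, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    ```

    The preprocessed visible and thermal images would then feed the cascade training step (e.g., opencv_traincascade) to build the custom detector described above.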

  16. Neural correlates of mirth and laughter: a direct electrical cortical stimulation study.

    PubMed

    Yamao, Yukihiro; Matsumoto, Riki; Kunieda, Takeharu; Shibata, Sumiya; Shimotake, Akihiro; Kikuchi, Takayuki; Satow, Takeshi; Mikuni, Nobuhiro; Fukuyama, Hidenao; Ikeda, Akio; Miyamoto, Susumu

    2015-05-01

    Laughter consists of both motor and emotional aspects. The emotional component, known as mirth, is usually associated with the motor component, namely, bilateral facial movements. Previous electrical cortical stimulation (ES) studies revealed that mirth was associated with the basal temporal cortex, inferior frontal cortex, and medial frontal cortex. Functional neuroimaging implicated a role for the left inferior frontal and bilateral temporal cortices in humor processing. However, the neural origins and pathways linking mirth with facial movements are still unclear. We hereby report two cases with temporal lobe epilepsy undergoing subdural electrode implantation in whom ES of the left basal temporal cortex elicited both mirth and laughter-related facial muscle movements. In one case with normal hippocampus, high-frequency ES consistently caused contralateral facial movement, followed by bilateral facial movements with mirth. In contrast, in another case with hippocampal sclerosis (HS), ES elicited only mirth at low intensity and short duration, and eventually laughter at higher intensity and longer duration. In both cases, the basal temporal language area (BTLA) was located within or adjacent to the cortex where ES produced mirth. In conclusion, the present direct ES study demonstrated that 1) mirth had a close relationship with language function, 2) intact mesial temporal structures were actively engaged in the beginning of facial movements associated with mirth, and 3) these emotion-related facial movements had contralateral dominance. Copyright © 2014 Elsevier Ltd. All rights reserved.

  17. Brain responses to facial attractiveness induced by facial proportions: evidence from an fMRI study

    PubMed Central

    Shen, Hui; Chau, Desmond K. P.; Su, Jianpo; Zeng, Ling-Li; Jiang, Weixiong; He, Jufang; Fan, Jintu; Hu, Dewen

    2016-01-01

    Brain responses to facial attractiveness induced by facial proportions were investigated using functional magnetic resonance imaging (fMRI) in 41 young adults (22 males and 19 females). The subjects underwent fMRI while they were presented with computer-generated yet realistic face images as stimuli, which had varying facial proportions but the same neutral facial expression, bald head, and skin tone. Statistical parametric mapping with parametric modulation was used to explore the brain regions whose response was modulated by facial attractiveness ratings (ARs). The results showed significant linear effects of the ARs in the caudate nucleus and the orbitofrontal cortex for all of the subjects, and a non-linear response profile in the right amygdala for only the male subjects. Furthermore, canonical correlation analysis was used to learn the most relevant facial ratios that were best correlated with facial attractiveness. A regression model on the fMRI-derived facial ratio components demonstrated a strong linear relationship between the visually assessed mean ARs and the predicted ARs. Overall, this study provided, for the first time, direct neurophysiologic evidence of the effects of facial ratios on facial attractiveness and suggested that there are notable gender differences in perceiving facial attractiveness as induced by facial proportions. PMID:27779211
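
    A minimal sketch of the canonical correlation step described above, assuming scikit-learn with synthetic stand-in arrays (41 subjects, illustrative feature counts; not the authors' data or exact analysis):

    ```python
    # Sketch only: first canonical correlation between facial ratios and
    # attractiveness ratings.
    import numpy as np
    from sklearn.cross_decomposition import CCA

    rng = np.random.default_rng(0)
    facial_ratios = rng.normal(size=(41, 10))   # toy: 10 ratios per subject
    ratings = rng.normal(size=(41, 1))          # toy: mean AR per subject

    cca = CCA(n_components=1)
    x_scores, y_scores = cca.fit_transform(facial_ratios, ratings)
    r = np.corrcoef(x_scores[:, 0], y_scores[:, 0])[0, 1]
    print(f"first canonical correlation: {r:.2f}")
    # cca.x_weights_ shows which ratios load most on the canonical variate.
    ```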

  19. Linear measurements of the neurocranium are better indicators of population differences than those of the facial skeleton: comparative study of 1,961 skulls.

    PubMed

    Holló, Gábor; Szathmáry, László; Marcsik, Antónia; Barta, Zoltán

    2010-02-01

    The aim of this study is to identify potential differences between two cranial regions used to differentiate human populations. We compared the neurocranium and the facial skeleton using skulls from the Great Hungarian Plain. The skulls date to the 1st-11th centuries, a long span of time that encompasses seven archaeological periods. We analyzed six neurocranial and seven facial measurements. The number of variables was reduced using principal components analysis. Linear mixed-effects models were fitted to the principal components of each archaeological period, and the models were then compared using multiple pairwise tests. The neurocranium showed significant differences in seven cases between nonsubsequent periods and in one case between two subsequent populations. For the facial skeleton, no significant differences were found. Our results, which we also compare with previous craniofacial heritability estimates, suggest that the neurocranium is the more conservative region and that population differences can be detected better in the neurocranium than in the facial skeleton.

  20. Deficits in facial affect recognition among antisocial populations: a meta-analysis.

    PubMed

    Marsh, Abigail A; Blair, R J R

    2008-01-01

    Individuals with disorders marked by antisocial behavior frequently show deficits in recognizing displays of facial affect. Antisociality may be associated with specific deficits in identifying fearful expressions, which would implicate dysfunction in neural structures that subserve fearful expression processing. A meta-analysis of 20 studies was conducted to assess: (a) if antisocial populations show any consistent deficits in recognizing six emotional expressions; (b) beyond any generalized impairment, whether specific fear recognition deficits are apparent; and (c) if deficits in fear recognition are a function of task difficulty. Results show a robust link between antisocial behavior and specific deficits in recognizing fearful expressions. This impairment cannot be attributed solely to task difficulty. These results suggest dysfunction among antisocial individuals in specified neural substrates, namely the amygdala, involved in processing fearful facial affect.

  1. Gender classification under extended operating conditions

    NASA Astrophysics Data System (ADS)

    Rude, Howard N.; Rizki, Mateen

    2014-06-01

    Gender classification is a critical component of a robust image security system. Many techniques exist to perform gender classification using facial features. In contrast, this paper explores gender classification using body features extracted from clothed subjects. Several of the most effective types of features for gender classification identified in literature were implemented and applied to the newly developed Seasonal Weather And Gender (SWAG) dataset. SWAG contains video clips of approximately 2000 samples of human subjects captured over a period of several months. The subjects are wearing casual business attire and outer garments appropriate for the specific weather conditions observed in the Midwest. The results from a series of experiments are presented that compare the classification accuracy of systems that incorporate various types and combinations of features applied to multiple looks at subjects at different image resolutions to determine a baseline performance for gender classification.

  2. Support vector machine-based facial-expression recognition method combining shape and appearance

    NASA Astrophysics Data System (ADS)

    Han, Eun Jung; Kang, Byung Jun; Park, Kang Ryoung; Lee, Sangyoun

    2010-11-01

    Facial expression recognition can be widely used for various applications, such as emotion-based human-machine interaction, intelligent robot interfaces, and face recognition robust to expression variation. Previous approaches can be classified as either shape- or appearance-based. The shape-based method has the disadvantage that facial feature points vary between individuals even for similar expressions, which can reduce recognition accuracy. The appearance-based method is limited in that the textural information of the face is very sensitive to variations in illumination. To overcome these problems, a new facial-expression recognition method is proposed that combines both shape and appearance information based on the support vector machine (SVM). This research is novel in three ways compared to previous work. First, the facial feature points are automatically detected using an active appearance model, and shape-based recognition is performed using the ratios between the facial feature points based on the facial action coding system. Second, an SVM trained to recognize same- and different-expression classes is proposed to combine the two matching scores obtained from the shape- and appearance-based recognition. Finally, a single SVM is trained to discriminate four expressions: neutral, smile, anger, and scream. By assigning the input facial image the expression for which the SVM output is minimal, the accuracy of the expression recognition is much enhanced. Experimental results showed that the recognition accuracy of the proposed method was better than that of previous studies and other fusion methods.
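
    A minimal sketch of the two-stage SVM idea described above, assuming scikit-learn with synthetic stand-in data (the active-appearance-model feature extraction is not reproduced; all data here are toy):

    ```python
    # Sketch only: (1) an SVM fuses shape- and appearance-based matching
    # scores into a same/different-expression decision; (2) a single
    # multi-class SVM separates neutral, smile, anger, and scream.
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)

    pair_scores = rng.normal(size=(200, 2))      # [shape score, appearance score]
    same_pair = (pair_scores.sum(axis=1) > 0).astype(int)   # toy labels
    fusion_svm = SVC(kernel='rbf').fit(pair_scores, same_pair)

    features = rng.normal(size=(200, 16))        # fused per-image features (toy)
    labels = rng.integers(0, 4, size=200)        # 0..3 = four expressions
    expression_svm = SVC(kernel='rbf', decision_function_shape='ovr')
    expression_svm.fit(features, labels)
    print(expression_svm.predict(features[:5]))
    ```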

  3. Novel dynamic Bayesian networks for facial action element recognition and understanding

    NASA Astrophysics Data System (ADS)

    Zhao, Wei; Park, Jeong-Seon; Choi, Dong-You; Lee, Sang-Woong

    2011-12-01

    In daily life, language is an important tool of communication between people, but facial actions can also provide a great amount of information. Facial action recognition has therefore become a popular research topic in the field of human-computer interaction (HCI). However, facial action recognition is quite a challenging task due to its complexity: there are literally thousands of facial muscular movements, many of which have very subtle differences, and muscular movements occur simultaneously when the pose changes. To address this problem, we first build a fully automatic facial point detection system based on a local Gabor filter bank and principal component analysis. Then, novel dynamic Bayesian networks are proposed to perform facial action recognition using the junction tree algorithm over a limited number of feature points. To evaluate the proposed method, we used the Korean face database for model training, and the CUbiC FacePix database, the facial expressions and emotion database, the Japanese female facial expression database, and our own database for testing. Our experimental results clearly demonstrate the feasibility of the proposed approach.

  4. Detection of Terrorist Preparations by an Artificial Intelligence Expert System Employing Fuzzy Signal Detection Theory

    DTIC Science & Technology

    2004-10-25

    FUSEDOT does not require facial recognition, or video surveillance of public areas, both of which are apparently a component of TIA ([26], pp. ...) ... does not use fuzzy signal detection. Involves facial recognition and video surveillance of public areas. Involves monitoring the content of voice ... fuzzy signal detection, which TIA does not. Second, FUSEDOT would be easier to develop, because it does not require the development of facial ...

  5. Values of a Patient and Observer Scar Assessment Scale to Evaluate the Facial Skin Graft Scar.

    PubMed

    Chae, Jin Kyung; Kim, Jeong Hee; Kim, Eun Jung; Park, Kun

    2016-10-01

    The patient and observer scar assessment scale (POSAS) recently emerged as a promising method for evaluating scars, reflecting both the observer's and the patient's opinions. This tool has been shown to be consistent and reliable in burn scar assessment, but it had not been tested on skin graft scars in skin cancer patients. To evaluate facial skin graft scars with the POSAS and to compare it with objective scar assessment tools, twenty-three patients who were diagnosed with facial cutaneous malignancy and received skin grafts after Mohs micrographic surgery were recruited. Observer assessment was performed by three independent raters using the observer component of the POSAS and the Vancouver scar scale (VSS). Patient self-assessment was performed using the patient component of the POSAS. To quantify scar color and scar thickness more objectively, spectrophotometry and ultrasonography were applied. Inter-observer reliability was substantial with both the VSS and the observer component of the POSAS (average-measure intraclass correlation coefficients, 0.76 and 0.80, respectively). The observer component consistently showed significant correlations with patients' ratings on the POSAS parameters (all p-values < 0.05). The correlation between the subjective POSAS assessments and the objective spectrophotometric and ultrasonographic measurements was low. In facial skin graft scar assessment in skin cancer patients, the POSAS showed acceptable inter-observer reliability, was more comprehensive, and correlated more strongly with the patient's opinion.

  6. Face Processing: Models For Recognition

    NASA Astrophysics Data System (ADS)

    Turk, Matthew A.; Pentland, Alexander P.

    1990-03-01

    The human ability to process faces is remarkable. We can identify perhaps thousands of faces learned throughout our lifetime and read facial expression to understand such subtle qualities as emotion. These skills are quite robust, despite sometimes large changes in the visual stimulus due to expression, aging, and distractions such as glasses or changes in hairstyle or facial hair. Computers which model and recognize faces will be useful in a variety of applications, including criminal identification, human-computer interface, and animation. We discuss models for representing faces and their applicability to the task of recognition, and present techniques for identifying faces and detecting eye blinks.
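
    A minimal sketch of the eigenface-style pipeline associated with this work, assuming numpy; a toy simplification of the face-space models discussed (names illustrative):

    ```python
    # Sketch only: PCA over vectorized faces, recognition by nearest
    # neighbour in the projected "face space".
    import numpy as np

    def fit_eigenfaces(faces, n_components=8):
        """faces: (n_images, n_pixels) matrix of vectorized grayscale faces."""
        mean = faces.mean(axis=0)
        _, _, vt = np.linalg.svd(faces - mean, full_matrices=False)
        return mean, vt[:n_components]            # mean face, eigenfaces

    def recognize(probe, gallery, labels, mean, eigenfaces):
        """Return the label of the nearest gallery face in face space."""
        p = eigenfaces @ (probe - mean)
        g = (gallery - mean) @ eigenfaces.T
        return labels[np.argmin(np.linalg.norm(g - p, axis=1))]
    ```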

  7. Total Face, Eyelids, Ears, Scalp, and Skeletal Subunit Transplant Research Procurement: A Translational Simulation Model.

    PubMed

    Sosin, Michael; Ceradini, Daniel J; Hazen, Alexes; Sweeney, Nicole G; Brecht, Lawrence E; Levine, Jamie P; Staffenberg, David A; Saadeh, Pierre B; Bernstein, G Leslie; Rodriguez, Eduardo D

    2016-05-01

    Cadaveric face transplant models are routinely used for technical allograft design, perfusion assessment, and transplant simulation but are associated with substantial limitations. The purpose of this study was to describe the experience of implementing a translational donor research facial procurement and solid organ allograft recovery model. Institutional review board approval was obtained, and a 49-year-old, brain-dead donor was identified for facial vascularized composite allograft research procurement. The family generously consented to donation of solid organs and the total face, eyelids, ears, scalp, and skeletal subunit allograft. The successful sequence of computed tomographic scanning, fabrication and postprocessing of patient-specific cutting guides, tracheostomy placement, preoperative fluorescent angiography, silicone mask facial impression, donor facial allograft recovery, postprocurement fluorescent angiography, and successful recovery of kidneys and liver occurred without any donor instability. Preservation of the bilateral external carotid arteries, facial arteries, occipital arteries, and bilateral thyrolinguofacial and internal jugular veins provided reliable and robust perfusion to the entirety of the allograft. Total time of facial procurement was 10 hours 57 minutes. Essential to clinical face transplant outcomes is the preparedness of the institution, multidisciplinary face transplant team, organ procurement organization, and solid organ transplant colleagues. A translational facial research procurement and solid organ recovery model serves as an educational experience to modify processes and address procedural, anatomical, and logistical concerns for institutions developing a clinical face transplantation program. This methodical approach best simulates the stressors and challenges that can be expected during clinical face transplantation. Level of Evidence: Therapeutic, V.

  8. Biometric identification based on novel frequency domain facial asymmetry measures

    NASA Astrophysics Data System (ADS)

    Mitra, Sinjini; Savvides, Marios; Vijaya Kumar, B. V. K.

    2005-03-01

    In the modern world, the ever-growing need to ensure a system's security has spurred the growth of the newly emerging technology of biometric identification. The present paper introduces a novel set of facial biometrics based on quantified facial asymmetry measures in the frequency domain. In particular, we show that these biometrics work well for face images showing expression variations and have the potential to do so in presence of illumination variations as well. A comparison of the recognition rates with those obtained from spatial domain asymmetry measures based on raw intensity values suggests that the frequency domain representation is more robust to intra-personal distortions and is a novel approach for performing biometric identification. In addition, some feature analysis based on statistical methods comparing the asymmetry measures across different individuals and across different expressions is presented.
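
    One plausible minimal sketch of a frequency-domain asymmetry feature in the spirit of the above, assuming numpy (the paper's actual quantified asymmetry measures are not given in the abstract):

    ```python
    # Sketch only: compare a face with its left-right mirror and summarize
    # the difference in the 2-D Fourier domain.
    import numpy as np

    def frequency_asymmetry(face):
        """face: (H, W) grayscale image, roughly centered on the midline."""
        diff = face.astype(float) - np.fliplr(face).astype(float)
        spectrum = np.abs(np.fft.fft2(diff))     # magnitude spectrum of asymmetry
        return spectrum.ravel() / (np.linalg.norm(spectrum) + 1e-12)
    ```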

  9. Perceptual and affective mechanisms in facial expression recognition: An integrative review.

    PubMed

    Calvo, Manuel G; Nummenmaa, Lauri

    2016-09-01

    Facial expressions of emotion involve a physical component of morphological changes in a face and an affective component conveying information about the expresser's internal feelings. It remains unresolved how much recognition and discrimination of expressions rely on the perception of morphological patterns or the processing of affective content. This review of research on the role of visual and emotional factors in expression recognition reached three major conclusions. First, behavioral, neurophysiological, and computational measures indicate that basic expressions are reliably recognized and discriminated from one another, albeit the effect may be inflated by the use of prototypical expression stimuli and forced-choice responses. Second, affective content along the dimensions of valence and arousal is extracted early from facial expressions, although this coarse affective representation contributes minimally to categorical recognition of specific expressions. Third, the physical configuration and visual saliency of facial features contribute significantly to expression recognition, with "emotionless" computational models being able to reproduce some of the basic phenomena demonstrated in human observers. We conclude that facial expression recognition, as it has been investigated in conventional laboratory tasks, depends to a greater extent on perceptual than affective information and mechanisms.

  10. Variation in the cranial base orientation and facial skeleton in dry skulls sampled from three major populations.

    PubMed

    Kuroe, Kazuto; Rosas, Antonio; Molleson, Theya

    2004-04-01

    The aim of this study was to analyse the effects of cranial base orientation on the morphology of the craniofacial system in human populations. Three geographically distant populations, from Europe (n = 72), Africa (n = 48), and Asia (n = 24), were chosen. Five angular and two linear variables were measured for the cranial base component, and six angular and six linear variables for the facial component, based on two reference lines: the vertical posterior maxillary plane and the Frankfort horizontal plane. The European sample presented dolichofacial individuals with a larger face height and a smaller face depth, derived from a raised cranial base and facial cranium orientation, which tended to be similar to the Asian sample. The African sample presented brachyfacial individuals with a reduced face height and a larger face depth as a result of a lowered cranial base and facial cranium orientation. The Asian sample presented dolichofacial individuals with a larger face height and depth due to a raised cranial base and facial cranium orientation. The findings of this study suggest that cranial base orientation and posterior cranial base length appear to be valid discriminating factors between different human populations.

  11. Facial Emotions Recognition using Gabor Transform and Facial Animation Parameters with Neural Networks

    NASA Astrophysics Data System (ADS)

    Harit, Aditya; Joshi, J. C., Col; Gupta, K. K.

    2018-03-01

    The paper proposes an automatic facial emotion recognition algorithm comprising two main components: feature extraction and expression recognition. The algorithm applies a Gabor filter bank at fiducial points to extract facial expression features; the resulting Gabor transform magnitudes, along with 14 chosen FAPs (facial animation parameters), compose the feature space. There are two stages: a training phase and a recognition phase. In the training stage, the system assigns the training expressions for the six emotions considered to six classes, one per emotion. In the recognition phase, it applies the Gabor bank to a face image, locates the fiducial points, and feeds the resulting features to the trained neural architecture to recognize the emotion.
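
    A minimal sketch of the Gabor feature-extraction half of such a pipeline, assuming OpenCV; the filter parameters are illustrative, and the magnitude of a real-valued Gabor response is used as a simplification (the 14 FAPs would be appended before classification):

    ```python
    # Sketch only: Gabor responses sampled at fiducial points.
    import cv2
    import numpy as np

    def gabor_features(gray, fiducial_points, wavelengths=(4, 8), n_orient=4):
        feats = []
        img = gray.astype(np.float32)
        for lam in wavelengths:
            for k in range(n_orient):
                theta = k * np.pi / n_orient
                # (ksize, sigma, theta, lambda, gamma)
                kern = cv2.getGaborKernel((21, 21), lam / 2.0, theta, lam, 0.5)
                resp = cv2.filter2D(img, cv2.CV_32F, kern)
                feats.extend(abs(resp[y, x]) for (x, y) in fiducial_points)
        return np.array(feats)
    ```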

  12. Exploring the Role of Spatial Frequency Information during Neural Emotion Processing in Human Infants.

    PubMed

    Jessen, Sarah; Grossmann, Tobias

    2017-01-01

    Enhanced attention to fear expressions in adults is primarily driven by information from low as opposed to high spatial frequencies contained in faces. However, little is known about the role of spatial frequency information in emotion processing during infancy. In the present study, we examined the role of low compared to high spatial frequencies in the processing of happy and fearful facial expressions by using filtered face stimuli and measuring event-related brain potentials (ERPs) in 7-month-old infants (N = 26). Our results revealed that infants' brains discriminated between emotional facial expressions containing high, but not low, spatial frequencies. Specifically, happy faces containing high spatial frequencies elicited a smaller Nc amplitude than fearful faces containing high spatial frequencies and happy and fearful faces containing low spatial frequencies. Our results demonstrate that already in infancy spatial frequency content influences the processing of facial emotions. Furthermore, we observed that fearful facial expressions elicited a comparable Nc response for high and low spatial frequencies, suggesting a robust detection of fearful faces irrespective of spatial frequency content, whereas the detection of happy facial expressions was contingent upon frequency content. In summary, these data provide new insights into the neural processing of facial emotions in early development by highlighting the differential role played by spatial frequencies in the detection of fear and happiness.
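
    A minimal sketch of producing low- and high-spatial-frequency stimulus versions by Gaussian filtering, assuming scipy; the cutoff is illustrative, as the abstract does not give the filter parameters used for the infant stimuli:

    ```python
    # Sketch only: split a grayscale face into low- and high-SF versions.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def split_spatial_frequencies(face, sigma=6.0):
        low_sf = gaussian_filter(face.astype(float), sigma)   # low-pass
        high_sf = face - low_sf                               # residual high-pass
        return low_sf, high_sf
    ```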

  13. Clinical outcomes of facial transplantation: a review.

    PubMed

    Shanmugarajah, Kumaran; Hettiaratchy, Shehan; Clarke, Alex; Butler, Peter E M

    2011-01-01

    A total of 18 composite tissue allotransplants of the face have currently been reported. Prior to the start of the face transplant programme, there had been intense debate over the risks and benefits of performing this experimental surgery. This review examines the surgical, functional and aesthetic, immunological and psychological outcomes of facial transplantation thus far, based on the predicted risks outlined in early publications from teams around the world. The initial experience has demonstrated that facial transplantation is surgically feasible. Functional and aesthetic outcomes have been very encouraging with good motor and sensory recovery and improvements to important facial functions observed. Episodes of acute rejection have been common, as predicted, but easily controlled with increases in systemic immunosuppression. Psychological improvements have been remarkable and have resulted in the reintegration of patients into the outside world, social networks and even the workplace. Complications of immunosuppression and patient mortality have been observed in the initial series. These have highlighted rigorous patient selection as the key predictor of success. The overall early outcomes of the face transplant programme have been generally more positive than many predicted. This initial success is testament to the robust approach of teams. Dissemination of outcomes and ongoing refinement of the process may allow facial transplantation to eventually become a first-line reconstructive option for those with extensive facial disfigurements. Copyright © 2011 Surgical Associates Ltd. Published by Elsevier Ltd. All rights reserved.

  14. Hybrid generative-discriminative approach to age-invariant face recognition

    NASA Astrophysics Data System (ADS)

    Sajid, Muhammad; Shafique, Tamoor

    2018-03-01

    Age-invariant face recognition is still a challenging research problem due to the complex aging process, which involves different types of facial tissue: skin, fat, muscles, and bones. Most related studies that have addressed the aging problem have focused on generative representations (aging simulation) or discriminative representations (feature-based approaches). Designing an appropriate hybrid approach that takes into account both generative and discriminative representations for age-invariant face recognition remains an open problem. We perform hybrid matching to achieve robustness to aging variations. This approach automatically segments the eyes, nose-bridge, and mouth regions, which are relatively insensitive to aging compared with the rest of the facial regions, which are age-sensitive. The aging variations of the age-sensitive facial parts are compensated using a demographic-aware generative model based on a bridged denoising autoencoder, while the age-insensitive facial parts are represented by pixel-average vector-based local binary patterns. Deep convolutional neural networks are used to extract features of the age-sensitive and age-insensitive facial parts, and the resulting feature vectors are fused to produce the recognition results. Extensive experimental results on the morphological face database II (MORPH II), the face and gesture recognition network (FG-NET) database, and the verification subset of the cross-age celebrity dataset (CACD-VS) demonstrate the effectiveness of the proposed method for age-invariant face recognition.

  15. Human Age Estimation Method Robust to Camera Sensor and/or Face Movement

    PubMed Central

    Nguyen, Dat Tien; Cho, So Ra; Pham, Tuyen Danh; Park, Kang Ryoung

    2015-01-01

    Human age can be employed in many useful real-life applications, such as customer service systems, automatic vending machines, entertainment, etc. To obtain age information, image-based age estimation systems have been developed using information from the human face. However, current age estimation systems face limitations because of various factors such as camera motion and optical blurring, facial expressions, gender, etc. Motion blurring usually appears in face images because of movement of the camera sensor and/or movement of the face during image acquisition. Facial features in captured images can therefore be distorted according to the amount of motion, which degrades the performance of age estimation systems. In this paper, the problem caused by motion blurring is addressed and a solution is proposed to make age estimation systems robust to the effects of motion blurring. Experimental results show that our method enhances age estimation performance compared with systems that do not employ it. PMID:26334282
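
    The abstract does not give the blur model, but motion blurring of this kind is commonly simulated by convolving a face image with a linear streak kernel. A minimal sketch, assuming a grayscale numpy image and SciPy (the kernel length and angle are illustrative parameters, not the paper's values):

        import numpy as np
        from scipy.ndimage import convolve, rotate

        def motion_blur(image, length=9, angle_deg=0.0):
            """Simulate camera/face movement with a linear motion-blur kernel."""
            kernel = np.zeros((length, length))
            kernel[length // 2, :] = 1.0                       # horizontal streak
            kernel = rotate(kernel, angle_deg, reshape=False, order=1)
            kernel = np.clip(kernel, 0.0, None)                # remove interpolation undershoot
            kernel /= kernel.sum()
            return convolve(image.astype(float), kernel, mode='nearest')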

  16. Effects of Objective 3-Dimensional Measures of Facial Shape and Symmetry on Perceptions of Facial Attractiveness.

    PubMed

    Hatch, Cory D; Wehby, George L; Nidey, Nichole L; Moreno Uribe, Lina M

    2017-09-01

    Meeting patient desires for enhanced facial esthetics requires that providers have standardized and objective methods for measuring esthetics. The authors evaluated the effects of objective 3-dimensional (3D) facial shape and asymmetry measurements derived from 3D facial images on perceptions of facial attractiveness. The 3D facial images of 313 adults in Iowa were digitized with 32 landmarks, and objective 3D facial measurements capturing symmetric and asymmetric components of shape variation, centroid size, and fluctuating asymmetry were obtained from the 3D coordinate data using geometric morphometric analyses. Frontal and profile images of study participants were rated for facial attractiveness by 10 volunteers (5 women and 5 men) on a 5-point Likert scale and a visual analog scale. Multivariate regression was used to identify the effects of the objective 3D facial measurements on attractiveness ratings. Several objective 3D facial measurements had marked effects on attractiveness ratings. Shorter facial heights with protrusive chins, midface retrusion, faces with protrusive noses and thin lips, flat mandibular planes with deep labiomental folds, any cant of the lip commissures or floor of the nose, larger faces overall, and increased fluctuating asymmetry were rated as significantly (P < .001) less attractive. Perceptions of facial attractiveness can be explained by specific 3D measurements of facial shape and fluctuating asymmetry, which has important implications for clinical practice and research. Copyright © 2017 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.
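
    Of the measurements named here, centroid size has a standard definition in geometric morphometrics: the square root of the summed squared distances of the landmarks from their centroid. A minimal numpy sketch (the n x 3 landmark array shape is an assumption):

        import numpy as np

        def centroid_size(landmarks):
            """Centroid size of a 3D landmark configuration (n x 3 array)."""
            centered = landmarks - landmarks.mean(axis=0)
            return np.sqrt((centered ** 2).sum())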

  17. The facial nerve: anatomy and associated disorders for oral health professionals.

    PubMed

    Takezawa, Kojiro; Townsend, Grant; Ghabriel, Mounir

    2018-04-01

    The facial nerve, the seventh cranial nerve, is of great clinical significance to oral health professionals. Most published literature either addresses the central connections of the nerve or its peripheral distribution but few integrate both of these components and also highlight the main disorders affecting the nerve that have clinical implications in dentistry. The aim of the current study is to provide a comprehensive description of the facial nerve. Multiple aspects of the facial nerve are discussed and integrated, including its neuroanatomy, functional anatomy, gross anatomy, clinical problems that may involve the nerve, and the use of detailed anatomical knowledge in the diagnosis of the site of facial nerve lesion in clinical neurology. Examples are provided of disorders that can affect the facial nerve during its intra-cranial, intra-temporal and extra-cranial pathways, and key aspects of clinical management are discussed. The current study is complemented by original detailed dissections and sketches that highlight key anatomical features and emphasise the extent and nature of anatomical variations displayed by the facial nerve.

  18. Characterization of the Tissue and Stromal Cell Components of Micro-Superficial Enhanced Fluid Fat Injection (Micro-SEFFI) for Facial Aging Treatment.

    PubMed

    Rossi, Martina; Roda, Barbara; Zia, Silvia; Vigliotta, Ilaria; Zannini, Chiara; Alviano, Francesco; Bonsi, Laura; Zattoni, Andrea; Reschiglian, Pierluigi; Gennai, Alessandro

    2018-06-14

    New microfat preparations provide material suitable for use as a regenerative filler for different facial areas. To support the development of new robust techniques for regenerative purposes, the cellular content of the sample should be considered. To evaluate the stromal vascular fraction (SVF) cell components of micro-superficial enhanced fluid fat injection (micro-SEFFI) samples, obtained via a technique to harvest re-injectable tissue with minimal manipulation; the results were compared to those obtained from SEFFI samples. Microscopy analysis was performed to visualize the tissue structure. Micro-SEFFI samples were also fractionated using Celector®, an innovative non-invasive separation technique, to provide an initial evaluation of sample fluidity and composition. SVFs obtained from SEFFI and micro-SEFFI were studied. Adipose stromal cells (ASCs) were isolated and characterized by proliferation and differentiation capacity assays. Microscopic and quality analyses of micro-SEFFI samples by Celector® confirmed the high fluidity and characterized the samples' cellular composition in terms of red blood cell contamination, the presence of cell aggregates and extracellular matrix fragments. ASCs were isolated from adipose tissue harvested using the SEFFI and micro-SEFFI systems. These cells were shown to have a good proliferation rate and differentiation potential towards mesenchymal lineages. Despite the small sizes and low cellularity observed in micro-SEFFI-derived tissue, we were able to isolate stem cells. This result partially explains the regenerative potential of autologous micro-SEFFI tissue grafts. In addition, using the novel Celector® technology, tissues used for aging treatment were characterized analytically, and the adipose tissue composition was evaluated with no need for extra sample processing.

  19. The not face: A grammaticalization of facial expressions of emotion.

    PubMed

    Benitez-Quiroz, C Fabian; Wilbur, Ronnie B; Martinez, Aleix M

    2016-05-01

    Facial expressions of emotion are thought to have evolved from the development of facial muscles used in sensory regulation and later adapted to express moral judgment. Negative moral judgment includes the expressions of anger, disgust and contempt. Here, we study the hypothesis that these facial expressions of negative moral judgment have further evolved into a facial expression of negation regularly used as a grammatical marker in human language. Specifically, we show that people from different cultures expressing negation use the same facial muscles as those employed to express negative moral judgment. We then show that this nonverbal signal is used as a co-articulator in speech and that, in American Sign Language, it has been grammaticalized as a non-manual marker. Furthermore, this facial expression of negation exhibits the theta oscillation (3-8 Hz) universally seen in syllable and mouthing production in speech and signing. These results provide evidence for the hypothesis that some components of human language have evolved from facial expressions of emotion, and suggest an evolutionary route for the emergence of grammatical markers. Copyright © 2016 Elsevier B.V. All rights reserved.

  20. The Not Face: A grammaticalization of facial expressions of emotion

    PubMed Central

    Benitez-Quiroz, C. Fabian; Wilbur, Ronnie B.; Martinez, Aleix M.

    2016-01-01

    Facial expressions of emotion are thought to have evolved from the development of facial muscles used in sensory regulation and later adapted to express moral judgment. Negative moral judgment includes the expressions of anger, disgust and contempt. Here, we study the hypothesis that these facial expressions of negative moral judgment have further evolved into a facial expression of negation regularly used as a grammatical marker in human language. Specifically, we show that people from different cultures expressing negation use the same facial muscles as those employed to express negative moral judgment. We then show that this nonverbal signal is used as a co-articulator in speech and that, in American Sign Language, it has been grammaticalized as a non-manual marker. Furthermore, this facial expression of negation exhibits the theta oscillation (3–8 Hz) universally seen in syllable and mouthing production in speech and signing. These results provide evidence for the hypothesis that some components of human language have evolved from facial expressions of emotion, and suggest an evolutionary route for the emergence of grammatical markers. PMID:26872248

  1. Overview of Facial Plastic Surgery and Current Developments

    PubMed Central

    Chuang, Jessica; Barnes, Christian; Wong, Brian J. F.

    2016-01-01

    Facial plastic surgery is a multidisciplinary specialty largely driven by otolaryngology but includes oral maxillary surgery, dermatology, ophthalmology, and plastic surgery. It encompasses both reconstructive and cosmetic components. The scope of practice for facial plastic surgeons in the United States may include rhinoplasty, browlifts, blepharoplasty, facelifts, microvascular reconstruction of the head and neck, craniomaxillofacial trauma reconstruction, and correction of defects in the face after skin cancer resection. Facial plastic surgery also encompasses the use of injectable fillers, neural modulators (e.g., BOTOX Cosmetic, Allergan Pharmaceuticals, Westport, Ireland), lasers, and other devices aimed at rejuvenating skin. Facial plastic surgery is a constantly evolving field with continuing innovative advances in surgical techniques and cosmetic adjunctive technologies. This article aims to give an overview of the various procedures that encompass the field of facial plastic surgery and to highlight the recent advances and trends in procedures and surgical techniques. PMID:28824978

  2. Fashioning the Face: Sensorimotor Simulation Contributes to Facial Expression Recognition.

    PubMed

    Wood, Adrienne; Rychlowska, Magdalena; Korb, Sebastian; Niedenthal, Paula

    2016-03-01

    When we observe a facial expression of emotion, we often mimic it. This automatic mimicry reflects underlying sensorimotor simulation that supports accurate emotion recognition. Why this is so is becoming more obvious: emotions are patterns of expressive, behavioral, physiological, and subjective feeling responses. Activation of one component can therefore automatically activate other components. When people simulate a perceived facial expression, they partially activate the corresponding emotional state in themselves, which provides a basis for inferring the underlying emotion of the expresser. We integrate recent evidence in favor of a role for sensorimotor simulation in emotion recognition. We then connect this account to a domain-general understanding of how sensory information from multiple modalities is integrated to generate perceptual predictions in the brain. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. Peripheral facial nerve lesions induce changes in the firing properties of primary motor cortex layer 5 pyramidal cells.

    PubMed

    Múnera, A; Cuestas, D M; Troncoso, J

    2012-10-25

    Facial nerve lesions elicit long-lasting changes in vibrissal primary motor cortex (M1) muscular representation in rodents. Reorganization of cortical representation has been attributed to potentiation of preexisting horizontal connections coming from neighboring muscle representation. However, changes in layer 5 pyramidal neuron activity induced by facial nerve lesion have not yet been explored. To do so, the effect of irreversible facial nerve injury on electrophysiological properties of layer 5 pyramidal neurons was characterized. Twenty-four adult male Wistar rats were randomly subjected to two experimental treatments: either surgical transection of mandibular and buccal branches of the facial nerve (n=18) or sham surgery (n=6). Unitary and population activity of vibrissal M1 layer 5 pyramidal neurons recorded in vivo under general anesthesia was compared between sham-operated and facial nerve-injured animals. Injured animals were allowed either one (n=6), three (n=6), or five (n=6) weeks recovery before recording in order to characterize the evolution of changes in electrophysiological activity. As compared to control, facial nerve-injured animals displayed the following sustained and significant changes in spontaneous activity: increased basal firing frequency, decreased spike-associated local field oscillation amplitude, and decreased spontaneous theta burst firing frequency. Significant changes in evoked-activity with whisker pad stimulation included: increased short latency population spike amplitude, decreased long latency population oscillations amplitude and frequency, and decreased peak frequency during evoked single-unit burst firing. Taken together, such changes demonstrate that peripheral facial nerve lesions induce robust and sustained changes of layer 5 pyramidal neurons in vibrissal motor cortex. Copyright © 2012 IBRO. Published by Elsevier Ltd. All rights reserved.

  4. Effects of facial color on the subliminal processing of fearful faces.

    PubMed

    Nakajima, K; Minami, T; Nakauchi, S

    2015-12-03

    Recent studies have suggested that both configural information, such as face shape, and surface information are important for face perception. In particular, facial color is sufficiently suggestive of emotional states, as in the phrases "flushed with anger" and "pale with fear." However, few studies have examined the relationship between facial color and emotional expression. On the other hand, event-related potential (ERP) studies have shown that emotional expressions, such as fear, are processed unconsciously. In this study, we examined how facial color modulated the supraliminal and subliminal processing of fearful faces. We recorded electroencephalograms while participants performed a facial emotion identification task involving masked target faces exhibiting facial expressions (fearful or neutral) and colors (natural or bluish). The results indicated a significant interaction between facial expression and color for the latency of the N170 component. Subsequent analyses revealed that the bluish-colored faces increased the latency effect of facial expressions compared to the natural-colored faces, indicating that the bluish color modulated the processing of fearful expressions. We conclude that the unconscious processing of fearful faces is affected by facial color. Copyright © 2015 IBRO. Published by Elsevier Ltd. All rights reserved.

  5. Principal component analysis of three-dimensional face shape: Identifying shape features that change with age.

    PubMed

    Kurosumi, M; Mizukoshi, K

    2018-05-01

    The types of shape feature that constitute a face have not been comprehensively established, and most previous studies of age-related changes in facial shape have focused on individual characteristics, such as wrinkles, sagging skin, etc. In this study, we quantitatively measured differences in face shape between individuals and investigated how shape features change with age. We analyzed the faces of 280 Japanese women aged 20-69 years in three dimensions and used principal component analysis to establish the shape features that characterize individual differences. We also evaluated the relationships between each feature and age, clarifying the shape features characteristic of different age groups. Changes in facial shape in middle age were a decreased volume of the upper face and an increased volume of the whole cheeks and around the chin. Changes in older people were an increased volume of the lower cheeks and around the chin, sagging skin, and jaw distortion. Principal component analysis was effective for identifying facial shape features that represent individual and age-related differences. This method allowed straightforward measurements, such as the increase or decrease in cheek volume caused by soft tissue changes or skeletal-based changes to the forehead or jaw, simply by acquiring three-dimensional facial images. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
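
    A minimal sketch of this kind of analysis (Python with scikit-learn; the input files and the number of retained components are hypothetical) runs PCA on aligned landmark coordinates and correlates the resulting shape-feature scores with age:

        import numpy as np
        from sklearn.decomposition import PCA

        # hypothetical inputs: flattened, Procrustes-aligned 3D coordinates
        faces = np.load('aligned_faces.npy')   # (n_subjects, n_landmarks * 3)
        ages = np.load('ages.npy')             # (n_subjects,)

        pca = PCA(n_components=10)
        scores = pca.fit_transform(faces)      # per-subject shape-feature scores
        print(pca.explained_variance_ratio_)   # variance captured by each feature

        # correlate each shape feature with age to find age-related components
        r = [np.corrcoef(scores[:, i], ages)[0, 1] for i in range(scores.shape[1])]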

  6. Teaching Emotion Recognition Skills to Children with Autism

    ERIC Educational Resources Information Center

    Ryan, Christian; Charragain, Caitriona Ni

    2010-01-01

    Autism is associated with difficulty interacting with others and an impaired ability to recognize facial expressions of emotion. Previous teaching programmes have not addressed weak central coherence. Emotion recognition training focused on components of facial expressions. The training was administered in small groups ranging from 4 to 7…

  7. ECTODERMAL WNT/β-CATENIN SIGNALING SHAPES THE MOUSE FACE

    PubMed Central

    Reid, Bethany S.; Yang, Hui; Melvin, Vida Senkus; Taketo, Makoto M.; Williams, Trevor

    2010-01-01

    The canonical Wnt/β-catenin pathway is an essential component of multiple developmental processes. To investigate the role of this pathway in the ectoderm during facial morphogenesis, we generated conditional β-catenin mouse mutants using a novel ectoderm-specific Cre recombinase transgenic line. Our results demonstrate that ablating or stabilizing β-catenin in the embryonic ectoderm causes dramatic changes in facial morphology. There are accompanying alterations in the expression of Fgf8 and Shh, key molecules that establish a signaling center critical for facial patterning, the frontonasal ectodermal zone (FEZ). These data indicate that Wnt/β-catenin signaling within the ectoderm is critical for facial development and further suggest that this pathway is an important mechanism for generating the diverse facial shapes of vertebrates during evolution. PMID:21087601

  8. Values of a Patient and Observer Scar Assessment Scale to Evaluate the Facial Skin Graft Scar

    PubMed Central

    Chae, Jin Kyung; Kim, Eun Jung; Park, Kun

    2016-01-01

    Background The patient and observer scar assessment scale (POSAS) recently emerged as a promising method that reflects both the observer's and the patient's opinions in evaluating scars. This tool has been shown to be consistent and reliable in burn scar assessment, but it has not been tested in the setting of skin graft scars in skin cancer patients. Objective To evaluate the POSAS as applied to facial skin graft scars and to compare it with objective scar assessment tools. Methods Twenty-three patients who had been diagnosed with facial cutaneous malignancy and received skin grafts after Mohs micrographic surgery were recruited. Observer assessment was performed by three independent raters using the observer component of the POSAS and the Vancouver scar scale (VSS). Patient self-assessment was performed using the patient component of the POSAS. To quantify scar color and scar thickness more objectively, spectrophotometry and ultrasonography were applied. Results Inter-observer reliability was substantial with both the VSS and the observer component of the POSAS (average-measure intraclass correlation coefficients, 0.76 and 0.80, respectively). The observer component consistently showed significant correlations with patients' ratings for the parameters of the POSAS (all p-values<0.05). The correlation between subjective assessment using the POSAS and objective assessment using spectrophotometry and ultrasonography was low. Conclusion In facial skin graft scar assessment in skin cancer patients, the POSAS showed acceptable inter-observer reliability. This tool was more comprehensive and correlated better with the patient's opinion. PMID:27746642

  9. Is empathy necessary to comprehend the emotional faces? The empathic effect on attentional mechanisms (eye movements), cortical correlates (N200 event-related potentials) and facial behaviour (electromyography) in face processing.

    PubMed

    Balconi, Michela; Canavesio, Ylenia

    2016-01-01

    The present research explored the effect of social empathy on the processing of emotional facial expressions. Previous evidence suggested a close relationship between emotional empathy and both the ability to detect facial emotions and the attentional mechanisms involved. A multi-measure approach was adopted: we investigated the association between trait empathy (Balanced Emotional Empathy Scale) and individuals' performance (response times; RTs), attentional mechanisms (eye movements; number and duration of fixations), correlates of cortical activation (the event-related potential (ERP) N200 component), and facial responsiveness (zygomatic and corrugator muscle activity). Trait empathy was found to affect face detection performance (reduced RTs), attentional processes (more scanning eye movements in specific areas of interest), the ERP salience effect (increased N200 amplitude), and electromyographic activity (more facial responses). A second important result was the demonstration of strong, direct correlations among these measures. We suggest that empathy may function as a social facilitator of the processes underlying the detection of facial emotion, and a general "facial response effect" is proposed to explain these results. We assume that empathy influences both cognitive processing and facial responsiveness, such that empathic individuals are more skilful in processing facial emotion.

  10. The effect of motorcycle helmet type, components and fixation status on facial injury in Klang Valley, Malaysia: a case control study

    PubMed Central

    2014-01-01

    Background The effectiveness of helmets in reducing the risk of severe head injury in motorcyclists involved in a crash is well established. There is limited evidence, however, regarding the extent to which helmets protect riders from facial injuries. The objective of this study was to determine the effect of helmet type, components and fixation status on the risk of facial injuries among Malaysian motorcyclists. Method 755 injured motorcyclists were recruited over a 12-month period in 2010–2011 in the southern Klang Valley, Malaysia in this case control study. Of the 755 injured motorcyclists, 391 participants (51.8%) sustained facial injuries (cases) while 364 (48.2%) participants were without facial injury (controls). The outcomes of interest were facial injury and location of facial injury (i.e. upper, middle and lower face injuries). A binary logistic regression was conducted to examine the association between helmet characteristics and the outcomes, taking into account potential confounders such as age, riding position, alcohol and illicit substance use, type of colliding vehicle and type of collision. Helmet fixation was defined as the position of the helmet during the crash, i.e. whether it was still secured on the head or had been dislodged. Results Helmet fixation was shown to have a greater effect on facial injury outcome than helmet type. Increased odds of an adverse outcome were observed for non-fixed helmets compared to fixed helmets, with adjusted odds ratio (AOR) = 2.10 (95% CI 1.41-3.13) for facial injury; AOR = 6.64 (95% CI 3.71-11.91) for upper face injury; AOR = 5.36 (95% CI 3.05-9.44) for middle face injury; and AOR = 2.00 (95% CI 1.22-3.26) for lower face injury. Motorcyclists with visor damage were shown to have increased odds of facial injury (AOR = 5.48, 95% CI 1.46-20.57) compared to those with an undamaged visor. Conclusions A helmet of any type that is properly worn and remains fixed on the head throughout a crash will provide some form of protection against facial injury. Visor damage is a significant contributing factor for facial injury. These findings are discussed with reference to implications for policy and initiatives addressing helmet use and wearing behaviors. PMID:25086638
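
    A binary logistic regression of this form, yielding adjusted odds ratios with 95% confidence intervals, can be sketched as follows (Python with statsmodels; the file and column names are hypothetical and the confounder list is abbreviated):

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        # hypothetical case-control data: 1 = facial injury, 0 = no facial injury
        df = pd.read_csv('crash_data.csv')
        X = sm.add_constant(df[['non_fixed_helmet', 'visor_damaged',
                                'age', 'alcohol_use']])   # confounders included
        model = sm.Logit(df['facial_injury'], X).fit()

        aor = np.exp(model.params)        # adjusted odds ratios
        ci = np.exp(model.conf_int())     # 95% confidence intervals
        print(pd.concat([aor, ci], axis=1))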

  11. Robust kernel collaborative representation for face recognition

    NASA Astrophysics Data System (ADS)

    Huang, Wei; Wang, Xiaohui; Ma, Yanbo; Jiang, Yuzheng; Zhu, Yinghui; Jin, Zhong

    2015-05-01

    One of the greatest challenges of representation-based face recognition is that the training samples are usually insufficient. In other words, the training set usually does not include enough samples to capture the variety of high-dimensional face images caused by illuminations, facial expressions, and postures. When the test sample is significantly different from the training samples of the same subject, recognition performance is sharply reduced. We propose a robust kernel collaborative representation based on virtual samples for face recognition. We argue that the virtual training set conveys some reasonable and possible variations of the original training samples. Hence, we design a new objective function to more closely match the representation coefficients generated from the original and virtual training sets. To further improve robustness, we implement the corresponding representation-based face recognition in kernel space. It is noteworthy that any kind of virtual training sample can be used in our method. We use noised face images to obtain virtual face samples; the noise can be approximately viewed as a reflection of the variety of illuminations, facial expressions, and postures. Our work provides a simple and feasible way to obtain virtual face samples: Gaussian noise (or other types of noise) is imposed on the original training samples to produce possible variations of them. Experimental results on the FERET, Georgia Tech, and ORL face databases show that the proposed method is more robust than two state-of-the-art face recognition methods, CRC and kernel CRC.
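
    The virtual-sample idea can be sketched in a few lines: each original training face is perturbed with Gaussian noise to produce extra training samples (a minimal numpy sketch; the noise level and number of copies are assumptions, not the paper's settings):

        import numpy as np

        def make_virtual_samples(train_images, sigma=10.0, copies=2, seed=0):
            """Augment each training face with Gaussian-noise 'virtual' samples
            approximating illumination/expression/pose variation."""
            rng = np.random.default_rng(seed)
            augmented = [train_images]
            for _ in range(copies):
                noise = rng.normal(0.0, sigma, size=train_images.shape)
                augmented.append(np.clip(train_images + noise, 0, 255))
            return np.concatenate(augmented, axis=0)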

  12. 3D Face Modeling Using the Multi-Deformable Method

    PubMed Central

    Hwang, Jinkyu; Yu, Sunjin; Kim, Joongrock; Lee, Sangyoun

    2012-01-01

    In this paper, we focus on the accuracy of 3D face modeling techniques that use corresponding features in multiple views, which is quite sensitive to feature extraction errors. To solve the problem, we adopt a statistical model-based 3D face modeling approach in a mirror system consisting of two mirrors and a camera. The overall procedure of our 3D facial modeling method has two primary steps: 3D facial shape estimation using a multiple 3D face deformable model, and texture mapping using seamless cloning, a type of gradient-domain blending. To evaluate our method's performance, we generate 3D faces of 30 individuals and then carry out two tests: an accuracy test and a robustness test. Our method shows not only highly accurate 3D face shape results when compared with the ground truth, but also robustness to feature extraction errors. Moreover, 3D face rendering results intuitively show that our method is more robust to feature extraction errors than other 3D face modeling methods. An additional contribution of our method is that a wide range of face textures can be acquired by the mirror system. Using this texture map, we generate realistic 3D faces for individuals at the end of the paper. PMID:23201976
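
    The seamless-cloning step corresponds to OpenCV's Poisson-based seamlessClone, which can stand in for the texture-blending stage (a sketch under the assumption that the source patch is smaller than the target image; the file names are hypothetical):

        import cv2
        import numpy as np

        src = cv2.imread('face_texture.png')    # texture patch to blend
        dst = cv2.imread('rendered_face.png')   # target face rendering
        mask = 255 * np.ones(src.shape[:2], np.uint8)   # blend the whole patch
        center = (dst.shape[1] // 2, dst.shape[0] // 2)

        # gradient-domain (Poisson) blending, as used for seamless texture mapping
        blended = cv2.seamlessClone(src, dst, mask, center, cv2.NORMAL_CLONE)
        cv2.imwrite('blended.png', blended)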

  13. Comparison of different methods for gender estimation from face image of various poses

    NASA Astrophysics Data System (ADS)

    Ishii, Yohei; Hongo, Hitoshi; Niwa, Yoshinori; Yamamoto, Kazuhiko

    2003-04-01

    Recently, gender estimation from face images has been studied for frontal facial images. However, it is difficult to obtain such facial images consistently in application systems for security, surveillance and marketing research. In order to build such systems, a method is required to estimate gender from images of various facial poses. In this paper, three different classifiers are compared in appearance-based gender estimation using four directional features (FDF). The classifiers are linear discriminant analysis (LDA), Support Vector Machines (SVMs) and Sparse Network of Winnows (SNoW). Face images used for the experiments were obtained from 35 viewpoints, with the viewing direction varied by +/-45 degrees horizontally and +/-30 degrees vertically at 15-degree intervals. Although LDA showed the best performance for frontal facial images, the SVM with a Gaussian kernel achieved the best performance (86.0%) over the facial images from all 35 viewpoints. These results suggest that the SVM with a Gaussian kernel is robust to changes in viewpoint when estimating gender. Furthermore, the estimation rate at each of the 35 viewpoints was quite close to the overall average, suggesting that learning face images from multiple directions within a single class makes gender estimation feasible across the experimented range of viewpoints.
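
    The best-performing classifier here corresponds to scikit-learn's SVC with an RBF (Gaussian) kernel. A minimal sketch, assuming the FDF feature vectors and gender labels are already extracted (the file names and hyperparameters are hypothetical):

        import numpy as np
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_score

        X = np.load('fdf_features.npy')    # hypothetical four-directional features
        y = np.load('gender_labels.npy')   # 0 = female, 1 = male

        clf = SVC(kernel='rbf', C=1.0, gamma='scale')   # Gaussian kernel
        print(cross_val_score(clf, X, y, cv=5).mean())  # cross-validated accuracy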

  14. Music-Elicited Emotion Identification Using Optical Flow Analysis of Human Face

    NASA Astrophysics Data System (ADS)

    Kniaz, V. V.; Smirnova, Z. N.

    2015-05-01

    Human emotion identification from image sequences is in high demand nowadays. The range of possible applications varies from the automatic smile-shutter function of consumer-grade digital cameras to Biofied Building technologies, which enable communication between a building space and its residents. The highly perceptual nature of human emotions makes their classification and identification complex. The main difficulty arises from the subjective quality of the emotional classification of events that elicit human emotions. A variety of methods for the formal classification of emotions have been developed in musical psychology. This work focuses on identifying human emotions evoked by musical pieces using human face tracking and optical flow analysis. A facial feature tracking algorithm for estimating facial feature speed and position is presented. Facial features were extracted from each image sequence using human face tracking with local binary pattern (LBP) features. Accurate relative speeds of facial features were estimated using optical flow analysis. The obtained relative positions and speeds were used as the output facial emotion vector. The algorithm was tested using original software and recorded image sequences. The proposed technique proves to give robust identification of human emotions elicited by musical pieces. The estimated models could be used for human emotion identification from image sequences in such fields as emotion-based musical backgrounds or mood-dependent radio.
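
    The tracking-plus-flow pipeline can be approximated with OpenCV's pyramidal Lucas-Kanade optical flow; in this sketch, generic corner detection stands in for the paper's LBP-based feature localisation, and the clip name is hypothetical:

        import cv2
        import numpy as np

        cap = cv2.VideoCapture('face_clip.mp4')            # hypothetical input clip
        ok, prev = cap.read()
        prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
        # stand-in for the paper's LBP-based facial feature localisation:
        pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=50,
                                      qualityLevel=0.01, minDistance=7)

        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
            good = status.ravel() == 1
            speeds = np.linalg.norm((nxt - pts).reshape(-1, 2)[good], axis=1)
            # `speeds` (pixels/frame) would feed the facial emotion vector
            prev_gray, pts = gray, nxt[good].reshape(-1, 1, 2)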

  15. Positive and negative symptom scores are correlated with activation in different brain regions during facial emotion perception in schizophrenia patients: a voxel-based sLORETA source activity study.

    PubMed

    Kim, Do-Won; Kim, Han-Sung; Lee, Seung-Hwan; Im, Chang-Hwan

    2013-12-01

    Schizophrenia is one of the most devastating of all mental illnesses and has dimensional characteristics that include both positive and negative symptoms. One problem reported in schizophrenia patients is that they tend to show deficits in facial emotion processing, on which negative symptoms are thought to have a stronger influence. In this study, four event-related potential (ERP) components (P100, N170, N250, and P300) and their source activities were analyzed using EEG data acquired from 23 schizophrenia patients while they were presented with facial emotion picture stimuli. Correlations between positive and negative syndrome scale (PANSS) scores and source activations during facial emotion processing were calculated to identify the brain areas affected by symptom scores. Our analysis demonstrates that PANSS positive scores are negatively correlated with major areas of the left temporal lobule for early ERP components (P100, N170) and with the right middle frontal lobule for a later component (N250), which indicates that positive symptoms affect both early face processing and facial emotion processing. On the other hand, PANSS negative scores are negatively correlated with several clustered regions, including the left fusiform gyrus (at P100), most of which do not overlap with regions showing correlations with PANSS positive scores. Our results suggest that positive and negative symptoms affect independent brain regions during facial emotion processing, which may help to explain the heterogeneous characteristics of schizophrenia. © 2013 Elsevier B.V. All rights reserved.

  16. Features versus context: An approach for precise and detailed detection and delineation of faces and facial features.

    PubMed

    Ding, Liya; Martinez, Aleix M

    2010-11-01

    The appearance-based approach to face detection has seen great advances in the last several years. In this approach, we learn the image statistics describing the texture pattern (appearance) of the object class we want to detect, e.g., the face. However, this approach has had limited success in providing an accurate and detailed description of the internal facial features, i.e., eyes, brows, nose, and mouth. In general, this is due to the limited information carried by the learned statistical model. While the face template is relatively rich in texture, facial features (e.g., eyes, nose, and mouth) do not carry enough discriminative information to tell them apart from all possible background images. We resolve this problem by adding the context information of each facial feature in the design of the statistical model. In the proposed approach, the context information defines the image statistics most correlated with the surroundings of each facial component. This means that when we search for a face or facial feature, we look for those locations which most resemble the feature yet are most dissimilar to its context. This dissimilarity with the context features forces the detector to gravitate toward an accurate estimate of the position of the facial feature. Learning to discriminate between feature and context templates is difficult, however, because the context and the texture of the facial features vary widely under changing expression, pose, and illumination, and may even resemble one another. We address this problem with the use of subclass divisions. We derive two algorithms to automatically divide the training samples of each facial feature into a set of subclasses, each representing a distinct construction of the same facial component (e.g., closed versus open eyes) or its context (e.g., different hairstyles). The first algorithm is based on a discriminant analysis formulation. The second algorithm is an extension of the AdaBoost approach. We provide extensive experimental results using still images and video sequences for a total of 3,930 images. We show that the results are almost as good as those obtained with manual detection.
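
    The paper's two subclass-division algorithms are discriminant-analysis-based and AdaBoost-based; as a simple stand-in that conveys the idea, the training patches of one facial feature can be clustered into visually distinct subclasses (a k-means sketch, explicitly not the paper's method):

        import numpy as np
        from sklearn.cluster import KMeans

        def divide_into_subclasses(feature_patches, n_subclasses=4, seed=0):
            """Stand-in for the subclass-division step: cluster training patches
            of one facial feature (e.g., eyes) into distinct subclasses
            (closed vs. open, different illumination, ...)."""
            X = feature_patches.reshape(len(feature_patches), -1)
            km = KMeans(n_clusters=n_subclasses, n_init=10, random_state=seed).fit(X)
            return km.labels_    # subclass index per training patch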

  17. Efficient quantitative assessment of facial paralysis using iris segmentation and active contour-based key points detection with hybrid classifier.

    PubMed

    Barbosa, Jocelyn; Lee, Kyubum; Lee, Sunwon; Lodhi, Bilal; Cho, Jae-Gu; Seo, Woo-Keun; Kang, Jaewoo

    2016-03-12

    Facial palsy or paralysis (FP) is the loss of voluntary muscle movement on one side of the face, which can be devastating for patients. Traditional assessment methods depend solely on the clinician's judgment and are therefore time-consuming and subjective. Hence, a quantitative assessment system is invaluable for physicians beginning the rehabilitation process, yet producing a reliable and robust method is challenging and still under way. We introduce a novel approach to the quantitative assessment of facial paralysis that tackles the classification problem for FP type and degree of severity. Specifically, a novel method of quantitative assessment is presented: an algorithm that extracts the human iris and detects facial landmarks, and a hybrid approach combining rule-based and machine learning algorithms to analyze and prognosticate facial paralysis using the captured images. A method combining the optimized Daugman's algorithm and a Localized Active Contour (LAC) model is proposed to efficiently extract the iris and the facial landmarks (key points). To improve the performance of the LAC model, appropriate parameters of the initial evolving curve for facial feature segmentation are automatically selected. The symmetry score is measured by the ratio between features extracted from the two sides of the face. Hybrid classifiers (i.e. rule-based with regularized logistic regression) were employed for discriminating healthy and unhealthy subjects, for FP type classification, and for facial paralysis grading based on the House-Brackmann (H-B) scale. Quantitative analysis was performed to evaluate the performance of the proposed approach, and experiments demonstrate its efficiency. Facial movement feature extraction based on iris segmentation and LAC-based key point detection, along with a hybrid classifier, provides a more efficient way of addressing the classification problem of facial palsy type and degree of severity. Combining iris segmentation and the key point-based method has several merits that are essential for our real application. Aside from the facial key points, iris segmentation provides a significant contribution as it describes the changes in iris exposure while performing facial expressions. It reveals the significant difference between the healthy side and the severe palsy side when raising the eyebrows with both eyes directed upward, and can model the typical changes in the iris region.
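
    The symmetry score described here, a ratio between features extracted from the two sides of the face, can be sketched as follows (numpy; the pairing of measurements and the epsilon guard are assumptions):

        import numpy as np

        def symmetry_score(left_vals, right_vals, eps=1e-6):
            """Ratio-based symmetry of paired measurements from the two facial
            sides (e.g., iris exposure, landmark displacements).
            1.0 = perfectly symmetric; values near 0 suggest severe palsy."""
            left = np.asarray(left_vals, float)
            right = np.asarray(right_vals, float)
            ratios = np.minimum(left, right) / (np.maximum(left, right) + eps)
            return float(ratios.mean())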

  18. Children's Scripts for Social Emotions: Causes and Consequences Are More Central than Are Facial Expressions

    ERIC Educational Resources Information Center

    Widen, Sherri C.; Russell, James A.

    2010-01-01

    Understanding and recognition of emotions relies on emotion concepts, which are narrative structures (scripts) specifying facial expressions, causes, consequences, label, etc. organized in a temporal and causal order. Scripts and their development are revealed by examining which components better tap which concepts at which ages. This study…

  19. Effects of task demands on the early neural processing of fearful and happy facial expressions

    PubMed Central

    Itier, Roxane J.; Neath-Tavares, Karly N.

    2017-01-01

    Task demands shape how we process environmental stimuli, but their impact on the early neural processing of facial expressions remains unclear. In a within-subject design, ERPs were recorded to the same fearful, happy and neutral facial expressions presented during a gender discrimination task, an explicit emotion discrimination task and an oddball detection task, the most studied tasks in the field. Using an eye tracker, fixation on the nose of the face was enforced using a gaze-contingent presentation. Task demands modulated amplitudes from 200–350 ms at occipito-temporal sites spanning the EPN component. Amplitudes were more negative for fearful than neutral expressions starting with the N170, from 150–350 ms, with a temporo-occipital distribution, whereas no clear effect of happy expressions was seen. Task and emotion effects never interacted in any time window or for any of the ERP components analyzed (P1, N170, EPN). Thus, whether emotion is explicitly discriminated or irrelevant to the task at hand, the neural correlates of fearful and happy facial expressions seem immune to task demands during the first 350 ms of visual processing. PMID:28315309

  20. Sub-component modeling for face image reconstruction in video communications

    NASA Astrophysics Data System (ADS)

    Shiell, Derek J.; Xiao, Jing; Katsaggelos, Aggelos K.

    2008-08-01

    Emerging communications trends point to streaming video as a new form of content delivery. These systems are implemented over wired networks, such as cable or ethernet, and over wireless networks, cell phones, and portable game systems. Such communications systems require sophisticated methods of compression and error-resilient encoding to enable communications across band-limited and noisy delivery channels. Additionally, the transmitted video data must be of high enough quality to ensure a satisfactory end-user experience. Traditionally, video compression makes use of temporal and spatial coherence to reduce the information required to represent an image. In many communications systems, the communications channel is characterized by a probabilistic model describing the capacity or fidelity of the channel. The implication is that information is lost or distorted in the channel and requires concealment on the receiving end. We demonstrate a generative-model-based transmission scheme to compress human face images in video, which has the advantage of a potentially higher compression ratio while maintaining robustness to errors and data corruption. This is accomplished by training an offline face model and using the model to reconstruct face images on the receiving end. We propose a sub-component AAM that models the appearance of each facial sub-component individually, and show face reconstruction results under different types of video degradation using weighted and non-weighted versions of the sub-component AAM.

  1. The fractal characteristic of facial anthropometric data for developing PCA fit test panels for youth born in central China.

    PubMed

    Yang, Lei; Wei, Ran; Shen, Henggen

    2017-01-01

    New principal component analysis (PCA) respirator fit test panels have been developed for current American and Chinese civilian workers based on anthropometric surveys. The PCA panels used the first two principal components (PCs) obtained from a set of 10 facial dimensions. Although the PCA panels for American and Chinese subjects adopted the bivariate framework with two PCs, the number of PCs retained in the PCA analysis differed between Chinese and American subjects. For the Chinese youth group, a third PC should be retained in the PCA analysis when developing new fit test panels. In this article, an additional number label (ANL) is used to account for the third PC when the first two PCs are used to construct the PCA half-facepiece respirator fit test panel for the Chinese group. A three-dimensional box-counting method is proposed to estimate the ANLs by calculating fractal dimensions of the facial anthropometric data of the Chinese youth. The linear regression coefficients of determination (R²) over the scale-free range all exceed 0.960, which demonstrates that the facial anthropometric data of the Chinese youth have fractal characteristics. The youth subjects born in Henan province have an ANL of 2.002, which is lower than that of the composite facial anthropometric data of Chinese subjects born in many provinces. Hence, Henan youth subjects exhibit self-similar facial anthropometric characteristics and should use their particular ANL (2.002) as an important tool alongside the PCA panel. The ANL method proposed in this article not only provides a new methodology for quantifying the characteristics of facial anthropometric dimensions for any ethnic/racial group, but also extends the scope of PCA panel studies to higher dimensions.
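
    The three-dimensional box-counting estimate can be sketched directly: count occupied grid boxes at several resolutions and fit a line in log-log space (a numpy sketch; the grid resolutions are illustrative assumptions):

        import numpy as np

        def box_counting_dimension(points, sizes=(2, 4, 8, 16, 32)):
            """Estimate the fractal (box-counting) dimension of 3D anthropometric
            data: count occupied grid boxes at several resolutions and fit
            log(count) against log(resolution)."""
            mn, mx = points.min(0), points.max(0)
            unit = (points - mn) / (mx - mn + 1e-12)          # scale to [0, 1]^3
            counts = []
            for n in sizes:
                idx = np.floor(unit * n).clip(0, n - 1).astype(int)
                counts.append(len(np.unique(idx, axis=0)))    # occupied boxes
            slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
            return slope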

  2. Is moral beauty different from facial beauty? Evidence from an fMRI study

    PubMed Central

    Wang, Tingting; Mo, Ce; Tan, Li Hai; Cant, Jonathan S.; Zhong, Luojin; Cupchik, Gerald

    2015-01-01

    Is moral beauty different from facial beauty? Two functional magnetic resonance imaging experiments were performed to answer this question. Experiment 1 investigated the network of moral aesthetic judgments and facial aesthetic judgments. Participants performed aesthetic judgments and gender judgments on both faces and scenes containing moral acts. The conjunction analysis of the contrasts ‘facial aesthetic judgment > facial gender judgment’ and ‘scene moral aesthetic judgment > scene gender judgment’ identified the common involvement of the orbitofrontal cortex (OFC), inferior temporal gyrus and medial superior frontal gyrus, suggesting that both types of aesthetic judgments are based on the orchestration of perceptual, emotional and cognitive components. Experiment 2 examined the network of facial beauty and moral beauty during implicit perception. Participants performed a non-aesthetic judgment task on both faces (beautiful vs common) and scenes (containing morally beautiful vs neutral information). We observed that facial beauty (beautiful faces > common faces) involved both the cortical reward region OFC and the subcortical reward region putamen, whereas moral beauty (moral beauty scenes > moral neutral scenes) only involved the OFC. Moreover, compared with facial beauty, moral beauty spanned a larger-scale cortical network, indicating more advanced and complex cerebral representations characterizing moral beauty. PMID:25298010

  3. A neurophysiological study of facial numbness in multiple sclerosis: Integration with clinical data and imaging findings.

    PubMed

    Koutsis, Georgios; Kokotis, Panagiotis; Papagianni, Aikaterini E; Evangelopoulos, Maria-Eleftheria; Kilidireas, Constantinos; Karandreas, Nikolaos

    2016-09-01

    To integrate neurophysiological findings with clinical and imaging data in a consecutive series of multiple sclerosis (MS) patients developing facial numbness during the course of an MS attack. Nine consecutive patients with MS and recent-onset facial numbness were studied clinically, imaged with routine MRI, and assessed neurophysiologically with trigeminal somatosensory evoked potential (TSEP), blink reflex (BR), masseter reflex (MR), facial nerve conduction, facial muscle and masseter EMG studies. All patients had unilateral facial hypoesthesia on examination and lesions in the ipsilateral pontine tegmentum on MRI. All patients had abnormal TSEPs upon stimulation of the affected side, except for one who was tested following remission of numbness. The BR was the second most sensitive neurophysiological method, with 6/9 examinations exhibiting an abnormal R1 component. The MR was abnormal in 3/6 patients, always on the affected side. Facial conduction and EMG studies were normal in all patients but one. Facial numbness was always related to abnormal TSEPs. A concomitant R1 abnormality on BR allowed localization of the responsible pontine lesion, which closely corresponded with the MRI findings. We conclude that neurophysiological assessment of MS patients with facial numbness is a sensitive tool that complements MRI and can improve lesion localization. Copyright © 2016 Elsevier B.V. All rights reserved.

  4. Eigen-disfigurement model for simulating plausible facial disfigurement after reconstructive surgery.

    PubMed

    Lee, Juhun; Fingeret, Michelle C; Bovik, Alan C; Reece, Gregory P; Skoracki, Roman J; Hanasono, Matthew M; Markey, Mia K

    2015-03-27

    Patients with facial cancers can experience disfigurement, as they may undergo considerable appearance changes from their illness and its treatment. Individuals with difficulties adjusting to facial cancer are concerned about how others perceive and evaluate their appearance. Therefore, it is important to understand how humans perceive disfigured faces. We describe a new strategy that allows simulation of surgically plausible facial disfigurement on a novel face, for elucidating human perception of facial disfigurement. Longitudinal 3D facial images of patients (N = 17) with facial disfigurement due to cancer treatment were replicated on a facial mannequin model by applying Thin-Plate Spline (TPS) warping and linear interpolation on the facial mannequin model in polar coordinates. Principal Component Analysis (PCA) was used to capture the longitudinal structural and textural variations found within each patient's facial disfigurement arising from the treatment; we treated such variations as the disfigurement. Each disfigurement was smoothly stitched onto a healthy face by seeking a Poisson solution to guided interpolation, using the gradient of the learned disfigurement as the guidance field vector. The modeling technique was quantitatively evaluated, and panel ratings of experienced medical professionals on the plausibility of the simulations were used to evaluate the proposed disfigurement model. The algorithm reproduced the given faces effectively on the facial mannequin model, with less than 4.4 mm maximum error for validation fiducial points that were not used for the processing. Panel ratings showed that the disfigurement model (especially for peripheral disfigurement) yielded predictions comparable to real disfigurements. The modeling technique of this study is able to capture facial disfigurements, and its simulations represent plausible outcomes of reconstructive surgery for facial cancers. Thus, our technique can be used to study human perception of facial disfigurement.
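
    The TPS warping step can be approximated with SciPy's thin-plate-spline radial basis interpolator, mapping a dense patient surface into the mannequin's coordinate frame (a sketch; the file names and array shapes are hypothetical):

        import numpy as np
        from scipy.interpolate import RBFInterpolator

        # hypothetical fiducial landmarks on the patient face and mannequin model
        src_pts = np.load('patient_landmarks.npy')    # (n, 3)
        dst_pts = np.load('mannequin_landmarks.npy')  # (n, 3)

        # thin-plate-spline mapping from patient space to mannequin space
        tps = RBFInterpolator(src_pts, dst_pts, kernel='thin_plate_spline')

        surface = np.load('patient_surface.npy')      # dense (m, 3) mesh vertices
        warped = tps(surface)                          # vertices in mannequin space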

  5. Appraisals Generate Specific Configurations of Facial Muscle Movements in a Gambling Task: Evidence for the Component Process Model of Emotion.

    PubMed

    Gentsch, Kornelia; Grandjean, Didier; Scherer, Klaus R

    2015-01-01

    Scherer's Component Process Model provides a theoretical framework for research on the production mechanism of emotion and facial emotional expression. The model predicts that appraisal results drive facial expressions, which unfold sequentially and cumulatively over time. In two experiments, we examined facial muscle activity changes (via facial electromyography recordings over the corrugator, cheek, and frontalis regions) in response to events in a gambling task. These events were experimentally manipulated feedback stimuli which presented simultaneous information directly affecting goal conduciveness (gambling outcome: win, loss, or break-even) and power appraisals (Experiments 1 and 2), as well as control appraisal (Experiment 2). We repeatedly found main effects of goal conduciveness (starting ~600 ms) and power appraisals (starting ~800 ms after feedback onset). Control appraisal main effects were inconclusive. Interaction effects of goal conduciveness and power appraisals were obtained in both experiments (Experiment 1: over the corrugator and cheek regions; Experiment 2: over the frontalis region), suggesting amplified goal conduciveness effects when power was high, in contrast to invariant goal conduciveness effects when power was low. An interaction of goal conduciveness and control appraisals was also found over the cheek region, showing differential goal conduciveness effects when control was high and invariant effects when control was low. These interaction effects suggest that the appraisal of having sufficient control or power affects facial responses towards gambling outcomes. The result pattern suggests that the corrugator and frontalis regions are primarily related to cognitive operations that process motivational pertinence, whereas the cheek region is more influenced by coping implications. Our results provide the first evidence demonstrating that cognitive-evaluative mechanisms related to goal conduciveness, control, and power appraisals affect facial expressions dynamically over time, immediately after an event is perceived. In addition, our results provide further indications for the chronography of appraisal-driven facial movements and the underlying cognitive processes.

  6. Appraisals Generate Specific Configurations of Facial Muscle Movements in a Gambling Task: Evidence for the Component Process Model of Emotion

    PubMed Central

    Gentsch, Kornelia; Grandjean, Didier; Scherer, Klaus R.

    2015-01-01

    Scherer’s Component Process Model provides a theoretical framework for research on the production mechanism of emotion and facial emotional expression. The model predicts that appraisal results drive facial expressions, which unfold sequentially and cumulatively over time. In two experiments, we examined facial muscle activity changes (via facial electromyography recordings over the corrugator, cheek, and frontalis regions) in response to events in a gambling task. These events were experimentally manipulated feedback stimuli which presented simultaneous information directly affecting goal conduciveness (gambling outcome: win, loss, or break-even) and power appraisals (Experiments 1 and 2), as well as control appraisal (Experiment 2). We repeatedly found main effects of goal conduciveness (starting ~600 ms) and power appraisals (starting ~800 ms after feedback onset). Control appraisal main effects were inconclusive. Interaction effects of goal conduciveness and power appraisals were obtained in both experiments (Experiment 1: over the corrugator and cheek regions; Experiment 2: over the frontalis region), suggesting amplified goal conduciveness effects when power was high, in contrast to invariant goal conduciveness effects when power was low. An interaction of goal conduciveness and control appraisals was also found over the cheek region, showing differential goal conduciveness effects when control was high and invariant effects when control was low. These interaction effects suggest that the appraisal of having sufficient control or power affects facial responses towards gambling outcomes. The result pattern suggests that the corrugator and frontalis regions are primarily related to cognitive operations that process motivational pertinence, whereas the cheek region is more influenced by coping implications. Our results provide the first evidence demonstrating that cognitive-evaluative mechanisms related to goal conduciveness, control, and power appraisals affect facial expressions dynamically over time, immediately after an event is perceived. In addition, our results provide further indications for the chronography of appraisal-driven facial movements and the underlying cognitive processes. PMID:26295338

  7. Recognizing Action Units for Facial Expression Analysis

    PubMed Central

    Tian, Ying-li; Kanade, Takeo; Cohn, Jeffrey F.

    2010-01-01

    Most automatic expression analysis systems attempt to recognize a small set of prototypic expressions, such as happiness, anger, surprise, and fear. Such prototypic expressions, however, occur rather infrequently. Human emotions and intentions are more often communicated by changes in one or a few discrete facial features. In this paper, we develop an Automatic Face Analysis (AFA) system to analyze facial expressions based on both permanent facial features (brows, eyes, mouth) and transient facial features (deepening of facial furrows) in a nearly frontal-view face image sequence. The AFA system classifies fine-grained changes in facial expression into action units (AUs) of the Facial Action Coding System (FACS), instead of a few prototypic expressions. Multistate face and facial component models are proposed for tracking and modeling the various facial features, including lips, eyes, brows, cheeks, and furrows. During tracking, detailed parametric descriptions of the facial features are extracted. With these parameters as inputs, a group of action units (neutral expression, six upper face AUs and 10 lower face AUs) are recognized, whether they occur alone or in combination. The system has achieved average recognition rates of 96.4 percent (95.4 percent if neutral expressions are excluded) for upper face AUs and 96.7 percent (95.6 percent with neutral expressions excluded) for lower face AUs. The generalizability of the system has been tested using independent image databases collected and FACS-coded for ground truth by different research teams. PMID:25210210

  8. Dividing the Self: Distinct Neural Substrates of Task-Based and Automatic Self-Prioritization after Brain Damage

    ERIC Educational Resources Information Center

    Sui, Jie; Chechlacz, Magdalena; Humphreys, Glyn W.

    2012-01-01

    Facial self-awareness is a basic human ability dependent on a distributed bilateral neural network and revealed through prioritized processing of our own over other faces. Using non-prosopagnosic patients we show, for the first time, that facial self-awareness can be fractionated into different component processes. Patients performed two face…

  9. Differential Gaze Patterns on Eyes and Mouth During Audiovisual Speech Segmentation

    PubMed Central

    Lusk, Laina G.; Mitchel, Aaron D.

    2016-01-01

    Speech is inextricably multisensory: both auditory and visual components provide critical information for all aspects of speech processing, including speech segmentation; the visual contribution to segmentation has been the target of a growing number of studies. In particular, a recent study (Mitchel and Weiss, 2014) established that adults can utilize facial cues (i.e., visual prosody) to identify word boundaries in fluent speech. The current study expanded upon these results, using an eye tracker to identify highly attended facial features of the audiovisual display used in Mitchel and Weiss (2014). Subjects spent the most time watching the eyes and mouth. A significant trend in gaze durations was found, with the longest gaze duration on the mouth, followed by the eyes and then the nose. In addition, eye-gaze patterns changed across familiarization as subjects learned the word boundaries, showing decreased attention to the mouth in later blocks while attention to other facial features remained consistent. These findings highlight the importance of the visual component of speech processing and suggest that the mouth may play a critical role in visual speech segmentation. PMID:26869959

  10. Intraspecific Variation in Learning: Worker Wasps Are Less Able to Learn and Remember Individual Conspecific Faces than Queen Wasps.

    PubMed

    Tibbetts, Elizabeth A; Injaian, Allison; Sheehan, Michael J; Desjardins, Nicole

    2018-05-01

    Research on individual recognition often focuses on species-typical recognition abilities rather than assessing intraspecific variation in recognition. As individual recognition is cognitively costly, the capacity for recognition may vary within species. We test how individual face recognition differs between nest-founding queens (foundresses) and workers in Polistes fuscatus paper wasps. Individual recognition mediates dominance interactions among foundresses. Three previously published experiments have shown that foundresses (1) benefit by advertising their identity with distinctive facial patterns that facilitate recognition, (2) have robust memories of individuals, and (3) rapidly learn to distinguish between face images. Like foundresses, workers have variable facial patterns and are capable of individual recognition. However, worker dominance interactions are muted. Therefore, individual recognition may be less important for workers than for foundresses. We find that (1) workers with unique faces receive amounts of aggression similar to those of workers with common faces, indicating that wasps do not benefit from advertising their individual identity with a unique appearance; (2) workers lack robust memories for individuals, as they cannot remember unique conspecifics after a 6-day separation; and (3) workers learn to distinguish between facial images more slowly than foundresses during training. The recognition differences between foundresses and workers are notable because Polistes lack discrete castes; foundresses and workers are morphologically similar, and workers can take over as queens. Overall, social benefits and receiver capacity for individual recognition are surprisingly plastic.

  11. Dynamic texture recognition using local binary patterns with an application to facial expressions.

    PubMed

    Zhao, Guoying; Pietikäinen, Matti

    2007-06-01

    Dynamic texture (DT) is an extension of texture to the temporal domain. Description and recognition of DTs have attracted growing attention. In this paper, a novel approach for recognizing DTs is proposed and its simplifications and extensions to facial image analysis are also considered. First, the textures are modeled with volume local binary patterns (VLBP), which are an extension of the LBP operator widely used in ordinary texture analysis, combining motion and appearance. To make the approach computationally simple and easy to extend, only the co-occurrences of the local binary patterns on three orthogonal planes (LBP-TOP) are then considered. A block-based method is also proposed to deal with specific dynamic events such as facial expressions in which local information and its spatial locations should also be taken into account. In experiments with two DT databases, DynTex and Massachusetts Institute of Technology (MIT), both the VLBP and LBP-TOP clearly outperformed the earlier approaches. The proposed block-based method was evaluated with the Cohn-Kanade facial expression database with excellent results. The advantages of our approach include local processing, robustness to monotonic gray-scale changes, and simple computation.
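    A minimal sketch of the LBP-TOP idea — computing local binary pattern histograms on the three orthogonal planes of a video volume and concatenating them — is given below, using scikit-image's LBP operator. The parameters (P, R, uniform mapping) and the averaging over slices are illustrative simplifications; the paper's block-based variant additionally splits the volume spatially.

    ```python
    import numpy as np
    from skimage.feature import local_binary_pattern

    P, R = 8, 1  # neighbors and radius for the LBP operator (illustrative)

    def lbp_hist(img):
        """Uniform-LBP histogram of one 2-D slice."""
        codes = local_binary_pattern(img, P, R, method="uniform")
        n_bins = P + 2  # P+1 uniform patterns plus one non-uniform bin
        h, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins))
        return h / max(h.sum(), 1)

    def lbp_top(volume):
        """Concatenate mean LBP histograms from the XY, XT, and YT planes of a
        (time, height, width) gray-scale video volume."""
        xy = np.mean([lbp_hist(volume[t]) for t in range(volume.shape[0])], axis=0)
        xt = np.mean([lbp_hist(volume[:, y, :]) for y in range(volume.shape[1])], axis=0)
        yt = np.mean([lbp_hist(volume[:, :, x]) for x in range(volume.shape[2])], axis=0)
        return np.concatenate([xy, xt, yt])

    video = np.random.rand(30, 64, 64)   # toy clip: 30 frames of 64x64
    descriptor = lbp_top(video)          # 3 * (P + 2) = 30-dimensional feature
    ```

    The XY plane captures appearance while the XT and YT planes capture motion, which is what lets a single histogram descriptor describe a dynamic texture.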

  12. Cost-Sensitive Local Binary Feature Learning for Facial Age Estimation.

    PubMed

    Lu, Jiwen; Liong, Venice Erin; Zhou, Jie

    2015-12-01

    In this paper, we propose a cost-sensitive local binary feature learning (CS-LBFL) method for facial age estimation. Unlike the conventional facial age estimation methods that employ hand-crafted descriptors or holistically learned descriptors for feature representation, our CS-LBFL method learns discriminative local features directly from raw pixels for face representation. Motivated by the fact that facial age estimation is a cost-sensitive computer vision problem and local binary features are more robust to illumination and expression variations than holistic features, we learn a series of hashing functions to project raw pixel values extracted from face patches into low-dimensional binary codes, where binary codes with similar chronological ages are projected as close as possible, and those with dissimilar chronological ages are projected as far as possible. Then, we pool and encode these local binary codes within each face image as a real-valued histogram feature for face representation. Moreover, we propose a cost-sensitive local binary multi-feature learning method to jointly learn multiple sets of hashing functions using face patches extracted from different scales to exploit complementary information. Our methods achieve competitive performance on four widely used face aging data sets.
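    Structurally, the feature pipeline described above amounts to projecting patch pixel vectors through hashing functions, binarizing, and pooling the resulting codes into a histogram. The sketch below shows that skeleton with a random projection standing in for the learned cost-sensitive hashing functions (learning them is the core of CS-LBFL and is omitted here); the patch size, code length, and stand-in projection are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def extract_patches(face, size=8, stride=8):
        """Vectorize non-overlapping pixel patches from a 2-D face image."""
        H, W = face.shape
        return np.array([face[i:i+size, j:j+size].ravel()
                         for i in range(0, H - size + 1, stride)
                         for j in range(0, W - size + 1, stride)])

    def binary_codes(patches, proj):
        """Project raw pixel vectors into low-dimensional binary codes."""
        return (patches @ proj > 0).astype(np.uint8)

    def pooled_histogram(codes):
        """Encode each patch's bits as an integer and pool into a histogram."""
        ints = codes @ (1 << np.arange(codes.shape[1]))
        h, _ = np.histogram(ints, bins=2 ** codes.shape[1],
                            range=(0, 2 ** codes.shape[1]))
        return h / h.sum()

    face = rng.random((64, 64))
    proj = rng.standard_normal((64, 8))  # stand-in for the learned hashing functions
    feat = pooled_histogram(binary_codes(extract_patches(face), proj))
    ```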

  13. The axillary approach to raising the latissimus dorsi free flap for facial re-animation: a descriptive surgical technique.

    PubMed

    Leckenby, Jonathan; Butler, Daniel; Grobbelaar, Adriaan

    2015-01-01

    The latissimus dorsi flap is popular due to the versatile nature of its applications. When used as a pedicled flap, it provides a robust solution when soft tissue coverage is required following breast, thoracic, and head and neck surgery. Its utilization as a free flap is extensive due to the muscle's size, constant anatomy, large caliber of the pedicle, and the fact that it can be used for functional muscle transfers. In facial palsy it provides the surgeon with a long neurovascular pedicle that is invaluable in situations where commonly used facial vessels are not available, in congenital cases, where previous free functional muscle transfers have been attempted, or in patients where a one-stage procedure is indicated and a long nerve is required to reach the contralateral side. Although some facial palsy surgeons use the trans-axillary approach, an operative guide to raising the flap by this method has not previously been provided. A clear guide to raising the flap with the patient in the supine position is described in detail; this approach offers the benefits of reducing the risk of brachial plexus injury and allowing two surgical teams to work synchronously, reducing operative time.

  14. Estimation of human emotions using thermal facial information

    NASA Astrophysics Data System (ADS)

    Nguyen, Hung; Kotani, Kazunori; Chen, Fan; Le, Bac

    2014-01-01

    In recent years, research on human emotion estimation using thermal infrared (IR) imagery has appealed to many researchers due to its invariance to visible illumination changes. Although infrared imagery is superior to visible imagery in its invariance to illumination changes and appearance differences, it has difficulty handling eyeglasses, which are opaque in the thermal infrared spectrum. As a result, when using infrared imagery for the analysis of human facial information, the eyeglass regions appear dark and the eyes' thermal information is unavailable. We propose a temperature space method to correct the eyeglasses' effect using the thermal facial information in the neighboring facial regions, and then use Principal Component Analysis (PCA), the Eigen-space Method based on class features (EMC), and a combined PCA-EMC method to classify human emotions from the corrected thermal images. We collected the Kotani Thermal Facial Emotion (KTFE) database and performed experiments, which show an improved accuracy rate in estimating human emotions.
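    A hedged sketch of the classification stage: PCA on vectorized (corrected) thermal images followed by a nearest-class-mean decision, used here as a simple stand-in for the class-feature-based eigen-space method (EMC). The toy data, component count, and classifier choice are assumptions for illustration.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(2)

    # Toy stand-in data: 60 vectorized thermal face images, 3 emotion labels.
    X = rng.random((60, 32 * 32))
    y = rng.integers(0, 3, 60)

    pca = PCA(n_components=20).fit(X)   # eigen-space of the training images
    Z = pca.transform(X)

    # Nearest-class-mean classification in the PCA space (a simple stand-in
    # for EMC, which builds its eigen-space from class features instead).
    class_means = np.stack([Z[y == c].mean(axis=0) for c in range(3)])

    def classify(img):
        z = pca.transform(img.reshape(1, -1))
        return int(np.argmin(np.linalg.norm(class_means - z, axis=1)))
    ```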

  15. Facial duplication: case, review, and embryogenesis.

    PubMed

    Barr, M

    1982-04-01

    The craniofacial anatomy of an infant with facial duplication is described. There were four eyes, two noses, two maxillae, and one mandible. Anterior to the single pituitary the brain was duplicated, and there was bilateral arhinencephaly. Portions of the brain were extruded into a large frontal encephalocele. Cases of symmetrical facial duplication reported in the literature range from two complete faces on a single head (diprosopus) to simple nasal duplication. The variety of patterns of duplication suggests that the doubling of facial components arises in several different ways: forking of the notochord, duplication of the prosencephalon, duplication of the olfactory placodes, and duplication of maxillary and/or mandibular growth centers around the margins of the stomodeal plate. Among reported cases, the female:male ratio is 2:1.

  16. Laterality of facial expressions of emotion: Universal and culture-specific influences.

    PubMed

    Mandal, Manas K; Ambady, Nalini

    2004-01-01

    Recent research indicates that (a) the perception and expression of facial emotion are lateralized to a great extent in the right hemisphere, and (b) whereas facial expressions of emotion embody universal signals, culture-specific learning moderates the expression and interpretation of these emotions. In the present article, we review the literature on laterality and universality, and propose that, although some components of facial expressions of emotion are governed biologically, others are culturally influenced. We suggest that the left side of the face is more expressive of emotions, is more uninhibited, and displays culture-specific emotional norms. The right side of the face, on the other hand, is less susceptible to cultural display norms and exhibits more universal emotional signals. Copyright 2004 IOS Press

  17. Alexithymia and the labeling of facial emotions: response slowing and increased motor and somatosensory processing

    PubMed Central

    2014-01-01

    Background Alexithymia is a personality trait that is characterized by difficulties in identifying and describing feelings. Previous studies have shown that alexithymia is related to problems in recognizing others’ emotional facial expressions when these are presented with temporal constraints. These problems can be less severe when the expressions are visible for a relatively long time. Because the neural correlates of these recognition deficits are still relatively unexplored, we investigated the labeling of facial emotions and brain responses to facial emotions as a function of alexithymia. Results Forty-eight healthy participants had to label the emotional expression (angry, fearful, happy, or neutral) of faces presented for 1 or 3 seconds in a forced-choice format while undergoing functional magnetic resonance imaging. The participants’ level of alexithymia was assessed using self-report and interview. In light of the previous findings, we focused our analysis on the alexithymia component of difficulties in describing feelings. Difficulties describing feelings, as assessed by the interview, were associated with increased reaction times for negative (i.e., angry and fearful) faces, but not with labeling accuracy. Moreover, individuals with higher alexithymia showed increased brain activation in the somatosensory cortex and supplementary motor area (SMA) in response to angry and fearful faces. These cortical areas are known to be involved in the simulation of the bodily (motor and somatosensory) components of facial emotions. Conclusion The present data indicate that alexithymic individuals may use information related to bodily actions rather than affective states to understand the facial expressions of other persons. PMID:24629094

  18. FaceWarehouse: a 3D facial expression database for visual computing.

    PubMed

    Cao, Chen; Weng, Yanlin; Zhou, Shun; Tong, Yiying; Zhou, Kun

    2014-03-01

    We present FaceWarehouse, a database of 3D facial expressions for visual computing applications. We use Kinect, an off-the-shelf RGBD camera, to capture 150 individuals aged 7-80 from various ethnic backgrounds. For each person, we captured the RGBD data of her different expressions, including the neutral expression and 19 other expressions such as mouth-opening, smile, kiss, etc. For every RGBD raw data record, a set of facial feature points on the color image such as eye corners, mouth contour, and the nose tip are automatically localized, and manually adjusted if better accuracy is required. We then deform a template facial mesh to fit the depth data as closely as possible while matching the feature points on the color image to their corresponding points on the mesh. Starting from these fitted face meshes, we construct a set of individual-specific expression blendshapes for each person. These meshes with consistent topology are assembled as a rank-3 tensor to build a bilinear face model with two attributes: identity and expression. Compared with previous 3D facial databases, for every person in our database, there is a much richer matching collection of expressions, enabling depiction of most human facial actions. We demonstrate the potential of FaceWarehouse for visual computing with four applications: facial image manipulation, face component transfer, real-time performance-based facial image animation, and facial animation retargeting from video to image.
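    The bilinear model described above can be queried by contracting the rank-3 core tensor with an identity weight vector and an expression weight vector, roughly as sketched below. The tensor dimensions and random weights are illustrative placeholders, not FaceWarehouse's actual sizes.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Toy core tensor: (3 * n_vertices) x n_identity x n_expression, as in a
    # bilinear model built from meshes with consistent topology.
    n_verts, n_id, n_exp = 100, 10, 5
    core = rng.standard_normal((3 * n_verts, n_id, n_exp))

    def synthesize(core, w_id, w_exp):
        """Contract the core tensor with identity and expression weights to
        produce one face mesh (n_vertices x 3)."""
        verts = np.einsum('vie,i,e->v', core, w_id, w_exp)
        return verts.reshape(-1, 3)

    mesh = synthesize(core, rng.random(n_id), rng.random(n_exp))
    ```

    Fixing the identity weights and varying only the expression weights is what enables applications such as expression transfer and retargeting.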

  19. Is moral beauty different from facial beauty? Evidence from an fMRI study.

    PubMed

    Wang, Tingting; Mo, Lei; Mo, Ce; Tan, Li Hai; Cant, Jonathan S; Zhong, Luojin; Cupchik, Gerald

    2015-06-01

    Is moral beauty different from facial beauty? Two functional magnetic resonance imaging experiments were performed to answer this question. Experiment 1 investigated the network of moral aesthetic judgments and facial aesthetic judgments. Participants performed aesthetic judgments and gender judgments on both faces and scenes containing moral acts. The conjunction analysis of the contrasts 'facial aesthetic judgment > facial gender judgment' and 'scene moral aesthetic judgment > scene gender judgment' identified the common involvement of the orbitofrontal cortex (OFC), inferior temporal gyrus and medial superior frontal gyrus, suggesting that both types of aesthetic judgments are based on the orchestration of perceptual, emotional and cognitive components. Experiment 2 examined the network of facial beauty and moral beauty during implicit perception. Participants performed a non-aesthetic judgment task on both faces (beautiful vs common) and scenes (containing morally beautiful vs neutral information). We observed that facial beauty (beautiful faces > common faces) involved both the cortical reward region OFC and the subcortical reward region putamen, whereas moral beauty (moral beauty scenes > moral neutral scenes) only involved the OFC. Moreover, compared with facial beauty, moral beauty spanned a larger-scale cortical network, indicating more advanced and complex cerebral representations characterizing moral beauty. © The Author (2014). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  20. [Emotion Recognition in Patients with Peripheral Facial Paralysis - A Pilot Study].

    PubMed

    Konnerth, V; Mohr, G; von Piekartz, H

    2016-02-01

    The perception of emotions is an important component in enabling human beings to engage in social interaction in everyday life. The ability to recognize emotions in another person's facial expressions is thus a key prerequisite. The following study aimed to evaluate the ability of subjects with peripheral facial paresis to perceive emotions in the faces of healthy individuals. A pilot study was conducted in which 13 people with peripheral facial paresis participated. The assessment included the Facially Expressed Emotion Labeling Test (FEEL-Test), the Facial Laterality Recognition Test (FLR-Test), and the Toronto Alexithymia Scale 26 (TAS-26). The results were compared with data of healthy people from other studies. Compared to healthy individuals, the subjects with facial paresis showed more difficulty in recognizing basic emotions; however, the difference was not significant. The participants were significantly slower (right/left: p<0.001) in perceiving facial laterality compared to healthy people. With regard to alexithymia, the tested group scored significantly higher (p<0.001) than unimpaired people. The present pilot study therefore does not demonstrate an impairment of this specific patient group's ability to recognize emotions and facial laterality. For future studies, the research question should be verified with a larger sample size. © Georg Thieme Verlag KG Stuttgart · New York.

  1. Facial identity and facial expression are initially integrated at visual perceptual stages of face processing.

    PubMed

    Fisher, Katie; Towler, John; Eimer, Martin

    2016-01-08

    It is frequently assumed that facial identity and facial expression are analysed in functionally and anatomically distinct streams within the core visual face processing system. To investigate whether expression and identity interact during the visual processing of faces, we employed a sequential matching procedure where participants compared either the identity or the expression of two successively presented faces, and ignored the other irrelevant dimension. Repetitions versus changes of facial identity and expression were varied independently across trials, and event-related potentials (ERPs) were recorded during task performance. Irrelevant facial identity and irrelevant expression both interfered with performance in the expression and identity matching tasks. These symmetrical interference effects show that neither identity nor expression can be selectively ignored during face matching, and suggest that they are not processed independently. N250r components to identity repetitions that reflect identity matching mechanisms in face-selective visual cortex were delayed and attenuated when there was an expression change, demonstrating that facial expression interferes with visual identity matching. These findings provide new evidence for interactions between facial identity and expression within the core visual processing system, and question the hypothesis that these two attributes are processed independently. Copyright © 2015 Elsevier Ltd. All rights reserved.

  2. Dissociation between recognition and detection advantage for facial expressions: a meta-analysis.

    PubMed

    Nummenmaa, Lauri; Calvo, Manuel G

    2015-04-01

    Happy facial expressions are recognized faster and more accurately than other expressions in categorization tasks, whereas detection in visual search tasks is widely believed to be faster for angry than happy faces. We used meta-analytic techniques for resolving this categorization versus detection advantage discrepancy for positive versus negative facial expressions. Effect sizes were computed on the basis of the r statistic for a total of 34 recognition studies with 3,561 participants and 37 visual search studies with 2,455 participants, yielding a total of 41 effect sizes for recognition accuracy, 25 for recognition speed, and 125 for visual search speed. Random effects meta-analysis was conducted to estimate effect sizes at the population level. For recognition tasks, an advantage in recognition accuracy and speed for happy expressions was found for all stimulus types. In contrast, for visual search tasks, moderator analysis revealed that a happy face detection advantage was restricted to photographic faces, whereas a clear angry face advantage was found for schematic and "smiley" faces. A robust detection advantage for nonhappy faces was observed even when stimulus emotionality was distorted by inversion or rearrangement of the facial features, suggesting that visual features primarily drive the search. We conclude that the recognition advantage for happy faces is a genuine phenomenon related to processing of facial expression category and affective valence. In contrast, detection advantages toward either happy (photographic stimuli) or nonhappy (schematic) faces are contingent on visual stimulus features rather than facial expression, and may not involve categorical or affective processing. (c) 2015 APA, all rights reserved.
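    For readers who want the mechanics, the sketch below pools correlation effect sizes with Fisher's z transform and DerSimonian-Laird random-effects weighting, the standard textbook recipe for a random-effects meta-analysis of r statistics; it is not necessarily the exact estimator used by the authors.

    ```python
    import numpy as np

    def random_effects_meta(r_values, n_values):
        """DerSimonian-Laird random-effects pooling of correlation effect
        sizes via Fisher's z transform."""
        r = np.asarray(r_values, dtype=float)
        n = np.asarray(n_values, dtype=float)
        z = np.arctanh(r)             # Fisher z
        v = 1.0 / (n - 3)             # within-study variance of z
        w = 1.0 / v
        z_fixed = np.sum(w * z) / np.sum(w)
        q = np.sum(w * (z - z_fixed) ** 2)
        c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
        tau2 = max(0.0, (q - (len(r) - 1)) / c)   # between-study variance
        w_star = 1.0 / (v + tau2)
        z_pooled = np.sum(w_star * z) / np.sum(w_star)
        return np.tanh(z_pooled)      # back-transform to the r metric

    # e.g., three hypothetical studies
    print(random_effects_meta([0.30, 0.45, 0.25], [120, 80, 200]))
    ```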

  3. Component Structure of Individual Differences in True and False Recognition of Faces

    ERIC Educational Resources Information Center

    Bartlett, James C.; Shastri, Kalyan K.; Abdi, Herve; Neville-Smith, Marsha

    2009-01-01

    Principal-component analyses of 4 face-recognition studies uncovered 2 independent components. The first component was strongly related to false-alarm errors with new faces as well as to facial "conjunctions" that recombine features of previously studied faces. The second component was strongly related to hits as well as to the conjunction/new…

  4. Functionally distinct smiles elicit different physiological responses in an evaluative context.

    PubMed

    Martin, Jared D; Abercrombie, Heather C; Gilboa-Schechtman, Eva; Niedenthal, Paula M

    2018-03-01

    When people are being evaluated, their whole body responds. Verbal feedback causes robust activation in the hypothalamic-pituitary-adrenal (HPA) axis. What about nonverbal evaluative feedback? Recent discoveries about the social functions of facial expression have documented three morphologically distinct smiles, which serve the functions of reinforcement, social smoothing, and social challenge. In the present study, participants saw instances of one of three smile types from an evaluator during a modified social stress test. We find evidence in support of the claim that functionally different smiles are sufficient to augment or dampen HPA axis activity. We also find that responses to the meanings of smiles as evaluative feedback are more differentiated in individuals with higher baseline high-frequency heart rate variability (HF-HRV), which is associated with facial expression recognition accuracy. The differentiation is especially evident in response to smiles that are more ambiguous in context. Findings suggest that facial expressions have deep physiological implications and that smiles regulate the social world in a highly nuanced fashion.

  5. What is adapted in face adaptation? The neural representations of expression in the human visual system.

    PubMed

    Fox, Christopher J; Barton, Jason J S

    2007-01-05

    The neural representation of facial expression within the human visual system is not well defined. Using an adaptation paradigm, we examined aftereffects on expression perception produced by various stimuli. Adapting to a face, which was used to create morphs between two expressions, substantially biased expression perception within the morphed faces away from the adapting expression. This adaptation was not based on low-level image properties, as a different image of the same person displaying that expression produced equally robust aftereffects. Smaller but significant aftereffects were generated by images of different individuals, irrespective of gender. Non-face visual, auditory, or verbal representations of emotion did not generate significant aftereffects. These results suggest that adaptation affects at least two neural representations of expression: one specific to the individual (not the image), and one that represents expression across different facial identities. The identity-independent aftereffect suggests the existence of a 'visual semantic' for facial expression in the human visual system.

  6. Suppression on your own terms: internally generated displays of craving suppression predict rebound effects.

    PubMed

    Sayers, W Michael; Sayette, Michael A

    2013-09-01

    Research on emotion suppression has shown a rebound effect, in which expression of the targeted emotion increases following a suppression attempt. In prior investigations, participants have been explicitly instructed to suppress their responses, which has drawn the act of suppression into metaconsciousness. Yet emerging research emphasizes the importance of nonconscious approaches to emotion regulation. This study is the first in which a craving rebound effect was evaluated without simultaneously raising awareness about suppression. We aimed to link spontaneously occurring attempts to suppress cigarette craving to increased smoking motivation assessed immediately thereafter. Smokers (n = 66) received a robust cued smoking-craving manipulation while their facial responses were videotaped and coded using the Facial Action Coding System. Following smoking-cue exposure, participants completed a behavioral choice task previously found to index smoking motivation. Participants evincing suppression-related facial expressions during cue exposure subsequently valued smoking more than did those not displaying these expressions, which suggests that internally generated suppression can exert powerful rebound effects.

  7. Effects of task demands on the early neural processing of fearful and happy facial expressions.

    PubMed

    Itier, Roxane J; Neath-Tavares, Karly N

    2017-05-15

    Task demands shape how we process environmental stimuli, but their impact on the early neural processing of facial expressions remains unclear. In a within-subject design, ERPs were recorded to the same fearful, happy and neutral facial expressions presented during a gender discrimination task, an explicit emotion discrimination task, and an oddball detection task, the most studied tasks in the field. Using an eye tracker, fixation on the nose was enforced using a gaze-contingent presentation. Task demands modulated amplitudes from 200 to 350 ms at occipito-temporal sites spanning the EPN component. Amplitudes were more negative for fearful than neutral expressions starting with the N170, from 150 to 350 ms, with a temporo-occipital distribution, whereas no clear effect of happy expressions was seen. Task and emotion effects never interacted in any time window or for any of the ERP components analyzed (P1, N170, EPN). Thus, whether emotion is explicitly discriminated or irrelevant to the task at hand, the neural correlates of fearful and happy facial expressions seem immune to these task demands during the first 350 ms of visual processing. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. Responsibility and the sense of agency enhance empathy for pain

    PubMed Central

    Lepron, Evelyne; Causse, Michaël; Farrer, Chlöé

    2015-01-01

    Being held responsible for our actions strongly determines our moral judgements and decisions. This study examined whether responsibility also influences our affective reaction to others' emotions. We conducted two experiments in order to assess the effect of responsibility and of a sense of agency (the conscious feeling of controlling an action) on the empathic response to pain. In both experiments, participants were presented with video clips showing an actor's facial expression of pain of varying intensity. The empathic response was assessed with behavioural (pain intensity estimation from facial expressions and unpleasantness for the observer ratings) and electrophysiological measures (facial electromyography). Experiment 1 showed enhanced empathic response (increased unpleasantness for the observer and facial electromyography responses) as participants' degree of responsibility for the actor's pain increased. This effect was mainly accounted for by the decisional component of responsibility (compared with the execution component). In addition, experiment 2 found that participants' unpleasantness rating also increased when they had a sense of agency over the pain, while controlling for decision and execution processes. The findings suggest that increased empathy induced by responsibility and a sense of agency may play a role in regulating our moral conduct. PMID:25473014

  9. Choristoma of the middle ear: a component of a new syndrome?

    PubMed

    Buckmiller, L M; Brodie, H A; Doyle, K J; Nemzek, W

    2001-05-01

    Salivary choristoma of the middle ear is a rare entity. The authors report the 26th known case, which is unique in several respects: the patient had abnormalities of the first and second branchial arches, as well as the otic capsule and facial nerve in ways not yet reported. Our patient presented with bilateral preauricular pits, conchal bands, an ipsilateral facial palsy, and bilateral Mondini-type deformities. A review of the literature revealed salivary choristomas of the middle ear to be frequently associated with branchial arch abnormalities, most commonly the second, as well as abnormalities of the facial nerve. All 25 cases were reviewed and the results reported with respect to clinical presentation, associated abnormalities, operative findings, and hearing results. It has been proposed that choristoma of the middle ear may represent a component of a syndrome along with unilateral hearing loss, abnormalities of the incus and/or stapes, and anomalies of the facial nerve. Eighty-six percent of the reported patients with choristoma have three or four of the four criteria listed to designate middle ear salivary choristoma as part of a syndrome. In the remaining four patients, all of the structures were not assessed.

  10. Expressive facial animation synthesis by learning speech coarticulation and expression spaces.

    PubMed

    Deng, Zhigang; Neumann, Ulrich; Lewis, J P; Kim, Tae-Yong; Bulut, Murtaza; Narayanan, Shrikanth

    2006-01-01

    Synthesizing expressive facial animation is a very challenging topic within the graphics community. In this paper, we present an expressive facial animation synthesis system enabled by automated learning from facial motion capture data. Accurate 3D motions of the markers on the face of a human subject are captured while he/she recites a predesigned corpus, with specific spoken and visual expressions. We present a novel motion capture mining technique that "learns" speech coarticulation models for diphones and triphones from the recorded data. A Phoneme-Independent Expression Eigenspace (PIEES) that encloses the dynamic expression signals is constructed by motion signal processing (phoneme-based time-warping and subtraction) and Principal Component Analysis (PCA) reduction. New expressive facial animations are synthesized as follows: First, the learned coarticulation models are concatenated to synthesize neutral visual speech according to novel speech input, then a texture-synthesis-based approach is used to generate a novel dynamic expression signal from the PIEES model, and finally the synthesized expression signal is blended with the synthesized neutral visual speech to create the final expressive facial animation. Our experiments demonstrate that the system can effectively synthesize realistic expressive facial animation.

  11. Visual Speech Contributes to Phonetic Learning in 6-Month-Old Infants

    ERIC Educational Resources Information Center

    Teinonen, Tuomas; Aslin, Richard N.; Alku, Paavo; Csibra, Gergely

    2008-01-01

    Previous research has shown that infants match vowel sounds to facial displays of vowel articulation [Kuhl, P. K., & Meltzoff, A. N. (1982). The bimodal perception of speech in infancy. "Science, 218", 1138-1141; Patterson, M. L., & Werker, J. F. (1999). Matching phonetic information in lips and voice is robust in 4.5-month-old infants. "Infant…

  12. Spontaneous facial expressions of emotion of congenitally and noncongenitally blind individuals.

    PubMed

    Matsumoto, David; Willingham, Bob

    2009-01-01

    The study of the spontaneous expressions of blind individuals offers a unique opportunity to understand basic processes concerning the emergence and source of facial expressions of emotion. In this study, the authors compared the expressions of congenitally and noncongenitally blind athletes in the 2004 Paralympic Games with each other and with those produced by sighted athletes in the 2004 Olympic Games. The authors also examined how expressions change from 1 context to another. There were no differences between congenitally blind, noncongenitally blind, and sighted athletes, either at the level of individual facial actions or in facial emotion configurations. Blind athletes did produce more overall facial activity, but the differences were confined to head and eye movements. The blind athletes' expressions differentiated whether they had won or lost a medal match at 3 different points in time, and there were no cultural differences in expression. These findings provide compelling evidence that the production of spontaneous facial expressions of emotion is not dependent on observational learning but simultaneously demonstrates a learned component to the social management of expressions, even among blind individuals.

  13. More than mere mimicry? The influence of emotion on rapid facial reactions to faces.

    PubMed

    Moody, Eric J; McIntosh, Daniel N; Mann, Laura J; Weisser, Kimberly R

    2007-05-01

    Within a second of seeing an emotional facial expression, people typically match that expression. These rapid facial reactions (RFRs), often termed mimicry, are implicated in emotional contagion, social perception, and embodied affect, yet ambiguity remains regarding the mechanism(s) involved. Two studies evaluated whether RFRs to faces are solely nonaffective motor responses or whether emotional processes are involved. Brow (corrugator, related to anger) and forehead (frontalis, related to fear) activity were recorded using facial electromyography (EMG) while undergraduates in two conditions (fear induction vs. neutral) viewed fear, anger, and neutral facial expressions. As predicted, fear induction increased fear expressions to angry faces within 1000 ms of exposure, demonstrating an emotional component of RFRs. This did not merely reflect increased fear from the induction, because responses to neutral faces were unaffected. Considering RFRs to be merely nonaffective automatic reactions is inaccurate. RFRs are not purely motor mimicry; emotion influences early facial responses to faces. The relevance of these data to emotional contagion, autism, and the mirror system-based perspectives on imitation is discussed.

  14. Local ICA for the Most Wanted face recognition

    NASA Astrophysics Data System (ADS)

    Guan, Xin; Szu, Harold H.; Markowitz, Zvi

    2000-04-01

    Facial disguises of FBI Most Wanted criminals are inevitable and anticipated in our design of automatic/aided target recognition (ATR) imaging systems. For example, a man's facial hair may hide his mouth and chin but not necessarily the nose and eyes. Sunglasses will cover the eyes but not the nose, mouth, and chin. This fact motivates us to build sets of independent component analysis (ICA) bases separately for each facial region of the entire alleged criminal group. Then, given an alleged criminal face, collective votes are obtained from all facial regions in terms of 'yes, no, abstain' and are tallied for a potential alarm. Moreover, an innocent outsider should fall below the alarm threshold and be allowed to pass the checkpoint. A probability-of-detection (PD) versus false-alarm-rate (FAR) plot, i.e., an ROC curve, is obtained.
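    A minimal sketch of the per-region ICA-plus-voting idea, using scikit-learn's FastICA: each facial region gets its own basis, a probe region votes according to its reconstruction error, and votes are tallied for an alarm. The thresholds, region set, and reconstruction-error criterion are illustrative assumptions rather than the paper's design.

    ```python
    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(4)

    # Toy gallery: 50 vectorized images per facial region (eyes, nose, mouth).
    regions = {name: rng.random((50, 200)) for name in ("eyes", "nose", "mouth")}
    icas = {name: FastICA(n_components=10, random_state=0).fit(X)
            for name, X in regions.items()}

    def region_vote(name, probe, t_yes=0.05, t_no=0.15):
        """Vote 'yes' / 'no' / 'abstain' from the ICA reconstruction error of
        one region (thresholds are arbitrary illustrations)."""
        ica = icas[name]
        recon = ica.inverse_transform(ica.transform(probe.reshape(1, -1)))
        err = np.linalg.norm(recon - probe) / np.linalg.norm(probe)
        return "yes" if err < t_yes else ("no" if err > t_no else "abstain")

    probe = rng.random(200)
    votes = [region_vote(name, probe) for name in regions]
    alarm = votes.count("yes") > votes.count("no")   # tally the collective votes
    ```

    Because each region votes independently, occluding one region (e.g., sunglasses over the eyes) only removes one vote instead of defeating the whole match.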

  15. Comparative Discussion on Psychophysiological Effect of Self-administered Facial Massage by Treatment Method

    NASA Astrophysics Data System (ADS)

    Nozawa, Akio; Takei, Yuya

    The aim of this study was to quantitatively evaluate the effects of self-administered facial massage performed by hand or with a facial roller. The psychophysiological effects of facial massage were evaluated in terms of both the central nervous system and the autonomic nervous system. The central nervous system was assessed by electroencephalogram (EEG). The autonomic nervous system was assessed by peripheral skin temperature (PST) and heart rate variability (HRV) with spectral analysis; in the spectral analysis of HRV, the high-frequency (HF) component was evaluated. The State-Trait Anxiety Inventory (STAI), the Profile of Mood States (POMS), and subjective sensory ratings on a Visual Analog Scale (VAS) were administered to evaluate psychological status. The results suggest that facial massage maintained brain activity and had strong stress-alleviating effects.
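    The HF component of HRV mentioned above is conventionally the 0.15-0.40 Hz band power of the R-R interval series. A sketch of that computation, resampling the tachogram to an even grid and integrating a Welch power spectral density over the HF band, is given below; the resampling rate and synthetic data are assumptions.

    ```python
    import numpy as np
    from scipy.signal import welch

    def hf_power(rr_ms, fs=4.0):
        """High-frequency (0.15-0.40 Hz) HRV power from R-R intervals in ms:
        resample the tachogram to an even grid, then integrate the Welch PSD."""
        rr = np.asarray(rr_ms, dtype=float)
        t = np.cumsum(rr) / 1000.0                   # beat times in seconds
        grid = np.arange(t[0], t[-1], 1.0 / fs)
        tachogram = np.interp(grid, t, rr)
        f, psd = welch(tachogram - tachogram.mean(), fs=fs,
                       nperseg=min(256, len(grid)))
        band = (f >= 0.15) & (f < 0.40)
        return np.trapz(psd[band], f[band])          # HF power in ms^2

    # e.g., five minutes of synthetic ~800 ms beats with respiratory modulation
    rng = np.random.default_rng(5)
    rr = 800 + 30 * np.sin(2 * np.pi * 0.25 * np.arange(375) * 0.8) \
        + rng.normal(0, 10, 375)
    print(hf_power(rr))
    ```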

  16. Face inversion decreased information about facial identity and expression in face-responsive neurons in macaque area TE.

    PubMed

    Sugase-Miyamoto, Yasuko; Matsumoto, Narihisa; Ohyama, Kaoru; Kawano, Kenji

    2014-09-10

    To investigate the effect of face inversion and thatcherization (eye inversion) on temporal processing stages of facial information, single neuron activities in the temporal cortex (area TE) of two rhesus monkeys were recorded. Test stimuli were colored pictures of monkey faces (four with four different expressions), human faces (three with four different expressions), and geometric shapes. Modifications were made in each face picture, and its four variations were used as stimuli: upright original, inverted original, upright thatcherized, and inverted thatcherized faces. A total of 119 neurons responded to at least one of the upright original facial stimuli. A majority of the neurons (71%) showed activity modulations depending on upright and inverted presentations, and fewer neurons (13%) showed activity modulations depending on original and thatcherized face conditions. In the case of face inversion, information about the fine category (facial identity and expression) decreased, whereas information about the global category (monkey vs human vs shape) was retained for both the original and thatcherized faces. Principal component analysis on the neuronal population responses revealed that the global categorization occurred regardless of the face inversion and that the inverted faces were represented near the upright faces in the principal component analysis space. By contrast, the face inversion decreased the ability to represent human facial identity and monkey facial expression. Thus, the neuronal population represented inverted faces as faces but failed to represent the identity and expression of the inverted faces, indicating that the neuronal representation in area TE causes the perceptual effect of face inversion. Copyright © 2014 the authors 0270-6474/14/3412457-13$15.00/0.

  17. Social perception of morbidity in facial nerve paralysis.

    PubMed

    Li, Matthew Ka Ki; Niles, Navin; Gore, Sinclair; Ebrahimi, Ardalan; McGuinness, John; Clark, Jonathan Robert

    2016-08-01

    There are many patient-based and clinician-based scales measuring the severity of facial nerve paralysis and its impact on quality of life; however, the social perception of facial palsy has received little attention. The purpose of this pilot study was to measure the consequences of facial paralysis on selected domains of social perception and to compare the social impact of paralysis of the different components. Four patients with typical facial palsies (global, marginal mandibular, zygomatic/buccal, and frontal) and 1 control were photographed. These images were each shown to 100 participants, who subsequently rated variables of normality, perceived distress, trustworthiness, intelligence, interaction, symmetry, and disability. Statistical analysis was performed to compare the results among the palsies. Paralyzed faces were considered less normal than the control face (control mean, 8.6; 95% confidence interval [CI] = 8.30-8.86, on a scale of 0 to 10), with global paralysis (mean, 3.4; 95% CI = 3.08-3.80) rated as the most disfiguring, followed by the zygomatic/buccal (mean, 6.0; 95% CI = 5.68-6.37), marginal mandibular (mean, 6.5; 95% CI = 6.08-6.86), and then temporal palsies (mean, 6.9; 95% CI = 6.57-7.21). Similar trends were seen when analyzing these palsies for perceived distress, intelligence, and trustworthiness, using a random-effects regression model. Our sample suggests that society views paralyzed faces as less normal, less trustworthy, and more distressed. Different components of facial paralysis are perceived worse than others, and surgical correction may need to be prioritized in an evidence-based manner with social morbidity in mind. © 2016 Wiley Periodicals, Inc. Head Neck 38:1158-1163, 2016.

  18. To Capture a Face: A Novel Technique for the Analysis and Quantification of Facial Expressions in American Sign Language

    ERIC Educational Resources Information Center

    Grossman, Ruth B.; Kegl, Judy

    2006-01-01

    American Sign Language uses the face to express vital components of grammar in addition to the more universal expressions of emotion. The study of ASL facial expressions has focused mostly on the perception and categorization of various expression types by signing and nonsigning subjects. Only a few studies of the production of ASL facial…

  19. Retention interval affects visual short-term memory encoding.

    PubMed

    Bankó, Eva M; Vidnyánszky, Zoltán

    2010-03-01

    Humans can efficiently store fine-detailed facial emotional information in visual short-term memory for several seconds. However, an unresolved question is whether the same neural mechanisms underlie high-fidelity short-term memory for emotional expressions at different retention intervals. Here we show that retention interval affects the neural processes of short-term memory encoding using a delayed facial emotion discrimination task. The early sensory P100 component of the event-related potentials (ERP) was larger in the 1-s interstimulus interval (ISI) condition than in the 6-s ISI condition, whereas the face-specific N170 component was larger in the longer ISI condition. Furthermore, the memory-related late P3b component of the ERP responses was also modulated by retention interval: it was reduced in the 1-s ISI as compared with the 6-s condition. The present findings cannot be explained based on differences in sensory processing demands or overall task difficulty because there was no difference in the stimulus information and subjects' performance between the two different ISI conditions. These results reveal that encoding processes underlying high-precision short-term memory for facial emotional expressions are modulated depending on whether information has to be stored for one or for several seconds.

  20. Fixation to features and neural processing of facial expressions in a gender discrimination task

    PubMed Central

    Neath, Karly N.; Itier, Roxane J.

    2017-01-01

    Early face encoding, as reflected by the N170 ERP component, is sensitive to fixation to the eyes. Whether this sensitivity varies with facial expressions of emotion, and whether it can also be seen on other ERP components such as the P1 and EPN, was investigated. Using eye-tracking to manipulate fixation on facial features, we found the N170 to be the only eye-sensitive component, and this was true for fearful, happy and neutral faces. A different effect of fixation to features was seen for the earlier P1 that likely reflected general sensitivity to face position. An early effect of emotion (~120 ms) for happy faces was seen at occipital sites and was sustained until ~350 ms post-stimulus. For fearful faces, an early effect was seen around 80 ms, followed by a later effect appearing at ~150 ms until ~300 ms at lateral posterior sites. Results suggest that in this emotion-irrelevant gender discrimination task, processing of fearful and happy expressions occurred early and largely independently of the eye-sensitivity indexed by the N170. Processing of the two emotions involved different underlying brain networks active at different times. PMID:26277653

  1. The effects of facial color and inversion on the N170 event-related potential (ERP) component.

    PubMed

    Minami, T; Nakajima, K; Changvisommid, L; Nakauchi, S

    2015-12-17

    Faces are important for social interaction because much can be perceived from facial details, including a person's race, age, and mood. Recent studies have shown that both configural (e.g. face shape and inversion) and surface information (e.g. surface color and reflectance properties) are important for face perception. Therefore, the present study examined the effects of facial color and inverted face properties on event-related potential (ERP) responses, particularly the N170 component. Stimuli consisted of natural and bluish-colored faces. Faces were presented in both upright and upside down orientations. An ANOVA was used to analyze N170 amplitudes and verify the effects of the main independent variables. Analysis of N170 amplitude revealed the significant interactions between stimulus orientation and color. Subsequent analysis indicated that N170 was larger for bluish-colored faces than natural-colored faces, and N170 to natural-colored faces was larger in response to inverted stimulus as compared to upright stimulus. Additionally, a multivariate pattern analysis (MVPA) investigated face-processing dynamics without any prior assumptions. Results distinguished, above chance, both facial color and orientation from single-trial electroencephalogram (EEG) signals. Decoding performance for color classification of inverted faces was significantly diminished as compared to an upright orientation. This suggests that processing orientation is predominant over facial color. Taken together, the present findings elucidate the temporal and spatial distribution of orientation and color processing during face processing. Copyright © 2015 IBRO. Published by Elsevier Ltd. All rights reserved.
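    Single-trial decoding of the kind reported above is typically set up as a cross-validated classifier on trials-by-features EEG matrices. The sketch below shows such a pipeline with scikit-learn (standardization plus a linear SVM); the trial counts, channel/time dimensions, and classifier are illustrative, not the study's actual MVPA configuration.

    ```python
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(6)

    # Toy single-trial EEG: 200 trials x 32 channels x 100 time samples,
    # labels 0/1 standing in for natural- vs bluish-colored faces.
    X = rng.standard_normal((200, 32, 100)).reshape(200, -1)
    y = rng.integers(0, 2, 200)

    clf = make_pipeline(StandardScaler(), LinearSVC(dual=False))
    scores = cross_val_score(clf, X, y, cv=5)   # chance level is 0.5
    print(scores.mean())
    ```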

  2. Body size and allometric variation in facial shape in children.

    PubMed

    Larson, Jacinda R; Manyama, Mange F; Cole, Joanne B; Gonzalez, Paula N; Percival, Christopher J; Liberton, Denise K; Ferrara, Tracey M; Riccardi, Sheri L; Kimwaga, Emmanuel A; Mathayo, Joshua; Spitzmacher, Jared A; Rolian, Campbell; Jamniczky, Heather A; Weinberg, Seth M; Roseman, Charles C; Klein, Ophir; Lukowiak, Ken; Spritz, Richard A; Hallgrimsson, Benedikt

    2018-02-01

    Morphological integration, or the tendency for covariation, is commonly seen in complex traits such as the human face. The effects of growth on shape, or allometry, represent a ubiquitous but poorly understood axis of integration. We address the extent to which age and different measures of size converge on a single pattern of allometry for human facial shape. Our study is based on two large cross-sectional cohorts of children, one from Tanzania and the other from the United States (N = 7,173). We employ 3D facial imaging and geometric morphometrics to relate facial shape to age and anthropometric measures. The two populations differ significantly in facial shape, but the magnitude of this difference is small relative to the variation within each group. Allometric variation for facial shape is similar in both populations, representing a small but significant proportion of total variation in facial shape. Different measures of size are associated with overlapping but statistically distinct aspects of shape variation. Only half of the size-related variation in facial shape can be explained by the first principal component of four size measures and age, while the remainder associates distinctly with individual measures. Allometric variation in the human face is complex and should not be regarded as a singular effect. This finding has important implications for how size is treated in studies of human facial shape and for the developmental basis for allometric variation more generally. © 2017 Wiley Periodicals, Inc.
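    Allometry in geometric morphometrics is commonly quantified by regressing Procrustes shape coordinates on (log) centroid size and reporting the proportion of shape variance explained. The sketch below illustrates that computation on toy data; it is a generic recipe, not the authors' full analysis, which combined multiple size measures and age.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(7)

    # Toy data: 500 faces, 30 Procrustes-aligned 3-D landmarks each, plus
    # log centroid size as the size measure.
    shapes = rng.standard_normal((500, 30 * 3))
    log_cs = rng.normal(5.0, 0.1, 500)

    # Multivariate regression of shape on size: the fitted coefficient
    # vector is the allometric axis of shape variation.
    model = LinearRegression().fit(log_cs.reshape(-1, 1), shapes)
    predicted = model.predict(log_cs.reshape(-1, 1))

    ss_total = np.sum((shapes - shapes.mean(axis=0)) ** 2)
    ss_size = np.sum((predicted - shapes.mean(axis=0)) ** 2)
    print(f"shape variance explained by size: {100 * ss_size / ss_total:.2f}%")
    ```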

  3. Dissociable roles of internal feelings and face recognition ability in facial expression decoding.

    PubMed

    Zhang, Lin; Song, Yiying; Liu, Ling; Liu, Jia

    2016-05-15

    The problem of emotion recognition has been tackled by researchers in both affective computing and cognitive neuroscience. While affective computing relies on analyzing visual features from facial expressions, it has been proposed that humans recognize emotions by internally simulating the emotional states conveyed by others' expressions, in addition to perceptual analysis of facial features. Here we investigated whether and how our internal feelings contributed to the ability to decode facial expressions. In two independent large samples of participants, we observed that individuals who generally experienced richer internal feelings exhibited a higher ability to decode facial expressions, and the contribution of internal feelings was independent of face recognition ability. Further, using voxel-based morphometry, we found that the gray matter volume (GMV) of bilateral superior temporal sulcus (STS) and the right inferior parietal lobule was associated with facial expression decoding through the mediating effect of internal feelings, while the GMV of bilateral STS, precuneus, and the right central opercular cortex contributed to facial expression decoding through the mediating effect of face recognition ability. In addition, the clusters in bilateral STS involved in the two components were neighboring yet separate. Our results may provide clues about the mechanism by which internal feelings, in addition to face recognition ability, serve as an important instrument for humans in facial expression decoding. Copyright © 2016 Elsevier Inc. All rights reserved.

  4. Following the time course of face gender and expression processing: a task-dependent ERP study.

    PubMed

    Valdés-Conroy, Berenice; Aguado, Luis; Fernández-Cahill, María; Romero-Ferreiro, Verónica; Diéguez-Risco, Teresa

    2014-05-01

    The effects of task demands and the interaction between gender and expression in face perception were studied using event-related potentials (ERPs). Participants performed three different tasks with male and female faces that were emotionally inexpressive or that showed happy or angry expressions. In two of the tasks (gender and expression categorization) facial properties were task-relevant, while in a third task (symbol discrimination) facial information was irrelevant. Effects of expression were observed on the visual P100 component under all task conditions, suggesting the operation of an automatic process that is not influenced by task demands. The earliest interaction between expression and gender was observed later in the face-sensitive N170 component. This component showed differential modulations by specific combinations of gender and expression (e.g., angry male vs. angry female faces). Main effects of expression and task were observed in a later occipito-temporal component peaking around 230 ms post-stimulus onset (EPN, or early posterior negativity). Amplitudes were less positive in the presence of angry faces and during performance of the gender and expression tasks. Finally, task demands also modulated a positive component peaking around 400 ms (LPC, or late positive complex) that showed enhanced amplitude for the gender task. The pattern of results obtained here adds new evidence about the sequence of operations involved in face processing and the interaction of facial properties (gender and expression) in response to different task demands. Copyright © 2014 Elsevier B.V. All rights reserved.

  5. In the face of emotions: event-related potentials in supraliminal and subliminal facial expression recognition.

    PubMed

    Balconi, Michela; Lucchiari, Claudio

    2005-02-01

    Is facial expression recognition marked by specific event-related potentials (ERPs) effects? Are conscious and unconscious elaborations of emotional facial stimuli qualitatively different processes? In Experiment 1, ERPs elicited by supraliminal stimuli were recorded when 21 participants viewed emotional facial expressions of four emotions and a neutral stimulus. Two ERP components (N2 and P3) were analyzed for their peak amplitude and latency measures. First, emotional face-specificity was observed for the negative deflection N2, whereas P3 was not affected by the content of the stimulus (emotional or neutral). A more posterior distribution of ERPs was found for N2. Moreover, a lateralization effect was revealed for negative (right lateralization) and positive (left lateralization) facial expressions. In Experiment 2 (20 participants), 1-ms subliminal stimulation was carried out. Unaware information processing was revealed to be quite similar to aware information processing for peak amplitude but not for latency. In fact, unconscious stimulation produced a more delayed peak variation than conscious stimulation.

  6. Automatic segmentation of the facial nerve and chorda tympani using image registration and statistical priors

    NASA Astrophysics Data System (ADS)

    Noble, Jack H.; Warren, Frank M.; Labadie, Robert F.; Dawant, Benoit M.

    2008-03-01

    In cochlear implant surgery, an electrode array is permanently implanted in the cochlea to stimulate the auditory nerve and allow deaf people to hear. A minimally invasive surgical technique has recently been proposed--percutaneous cochlear access--in which a single hole is drilled from the skull surface to the cochlea. For the method to be feasible, a safe and effective drilling trajectory must be determined using a pre-operative CT. Segmentation of the structures of the ear would improve trajectory planning safety and efficiency and enable the possibility of automated planning. Two important structures of the ear, the facial nerve and chorda tympani, present difficulties in intensity based segmentation due to their diameter (as small as 1.0 and 0.4 mm) and adjacent inter-patient variable structures of similar intensity in CT imagery. A multipart, model-based segmentation algorithm is presented in this paper that accomplishes automatic segmentation of the facial nerve and chorda tympani. Segmentation results are presented for 14 test ears and are compared to manually segmented surfaces. The results show that mean error in structure wall localization is 0.2 and 0.3 mm for the facial nerve and chorda, proving the method we propose is robust and accurate.
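    The reported wall-localization error is a surface-distance measure. One common way to compute such an error, a symmetric mean nearest-neighbor distance between points sampled from the automatic and manual surfaces, is sketched below; whether the authors used this exact symmetric variant is an assumption.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def mean_surface_error(auto_pts, manual_pts):
        """Symmetric mean nearest-neighbor distance (in mm) between point
        sets sampled from automatic and manual segmentation surfaces."""
        d_am = cKDTree(manual_pts).query(auto_pts)[0]
        d_ma = cKDTree(auto_pts).query(manual_pts)[0]
        return 0.5 * (d_am.mean() + d_ma.mean())

    rng = np.random.default_rng(8)
    auto = rng.random((1000, 3))                    # toy surface samples in mm
    manual = auto + rng.normal(0, 0.2, auto.shape)  # perturbed "manual" surface
    print(mean_surface_error(auto, manual))
    ```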

  7. The Axillary Approach to Raising the Latissimus Dorsi Free Flap for Facial Re-Animation: A Descriptive Surgical Technique

    PubMed Central

    Butler, Daniel; Grobbelaar, Adriaan

    2015-01-01

    The latissimus dorsi flap is popular due to the versatile nature of its applications. When used as a pedicled flap, it provides a robust solution when soft tissue coverage is required following breast, thoracic, and head and neck surgery. Its utilization as a free flap is extensive due to the muscle's size, constant anatomy, large caliber of the pedicle, and the fact that it can be used for functional muscle transfers. In facial palsy it provides the surgeon with a long neurovascular pedicle that is invaluable in situations where commonly used facial vessels are not available, in congenital cases, where previous free functional muscle transfers have been attempted, or in patients where a one-stage procedure is indicated and a long nerve is required to reach the contralateral side. Although some facial palsy surgeons use the trans-axillary approach, an operative guide to raising the flap by this method has not previously been provided. A clear guide to raising the flap with the patient in the supine position is described in detail; this approach offers the benefits of reducing the risk of brachial plexus injury and allowing two surgical teams to work synchronously, reducing operative time. PMID:25606493

  8. Robust kernel representation with statistical local features for face recognition.

    PubMed

    Yang, Meng; Zhang, Lei; Shiu, Simon Chi-Keung; Zhang, David

    2013-06-01

    Factors such as misalignment, pose variation, and occlusion make robust face recognition a difficult problem. It is known that statistical features such as local binary pattern are effective for local feature extraction, whereas the recently proposed sparse or collaborative representation-based classification has shown interesting results in robust face recognition. In this paper, we propose a novel robust kernel representation model with statistical local features (SLF) for robust face recognition. Initially, multipartition max pooling is used to enhance the invariance of SLF to image registration error. Then, a kernel-based representation model is proposed to fully exploit the discrimination information embedded in the SLF, and robust regression is adopted to effectively handle the occlusion in face images. Extensive experiments are conducted on benchmark face databases, including extended Yale B, AR (A. Martinez and R. Benavente), multiple pose, illumination, and expression (multi-PIE), facial recognition technology (FERET), face recognition grand challenge (FRGC), and labeled faces in the wild (LFW), which have different variations of lighting, expression, pose, and occlusions, demonstrating the promising performance of the proposed method.
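
    A minimal sketch of the statistical-local-feature idea follows, assuming uniform local binary pattern (LBP) histograms pooled over sub-partitions via scikit-image; grid and partition sizes are illustrative choices, not the paper's configuration, and the kernel representation and robust regression stages are omitted.

        import numpy as np
        from skimage.feature import local_binary_pattern

        def slf_descriptor(gray, grid=(4, 4), P=8, R=1):
            """Uniform-LBP histograms per cell, max-pooled over 2x2 sub-cells.

            Pooling histograms over neighbouring sub-cells gives tolerance to
            small registration errors, in the spirit of multipartition max
            pooling; all sizes here are illustrative.
            """
            lbp = local_binary_pattern(gray, P, R, method="uniform")
            n_bins = P + 2                                    # uniform + "other"
            h, w = gray.shape
            ch, cw = h // (2 * grid[0]), w // (2 * grid[1])   # sub-cell size
            feats = []
            for i in range(grid[0]):
                for j in range(grid[1]):
                    hists = []
                    for di in range(2):                       # 2x2 sub-cells
                        for dj in range(2):
                            r0 = (2 * i + di) * ch
                            c0 = (2 * j + dj) * cw
                            patch = lbp[r0:r0 + ch, c0:c0 + cw]
                            hist, _ = np.histogram(patch, bins=n_bins,
                                                   range=(0, n_bins),
                                                   density=True)
                            hists.append(hist)
                    feats.append(np.max(hists, axis=0))       # max pooling
            return np.concatenate(feats)

        # Example on a random stand-in "face" image:
        face = np.random.rand(128, 128)
        print(slf_descriptor(face).shape)                     # (4*4*10,) = (160,)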

  9. Auto white balance method using a pigmentation separation technique for human skin color

    NASA Astrophysics Data System (ADS)

    Tanaka, Satomi; Kakinuma, Akihiro; Kamijo, Naohiro; Takahashi, Hiroshi; Tsumura, Norimichi

    2017-02-01

    The human visual system maintains the perceived colors of an object across various light sources. Similarly, current digital cameras feature an auto white balance function, which estimates the illuminant color and corrects the colors of a photograph as if it had been taken under a canonical light source. The main subject in a photograph is often a person's face, which could be used to estimate the illuminant color; such estimation, however, is adversely affected by differences in facial colors among individuals. The present paper proposes an auto white balance algorithm based on a pigmentation separation method that decomposes a human skin color image into melanin, hemoglobin, and shading components. Pigment densities calculated from the melanin and hemoglobin components of the face are approximately uniform within an ethnic group. We thus propose a method that estimates the illuminant from the subject's facial color in an image and is unaffected by individual differences in facial color among Japanese people.
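
    A highly simplified sketch of the idea is given below: in the pigmentation-separation model, shading scales all channels equally (a shift along (1,1,1) in log-RGB), so projecting that component out leaves a skin chromaticity driven by the pigments, which can be compared against a canonical skin color to obtain von Kries-style gains. The canonical value here is a placeholder; the real method uses densities learned from measured skin data.

        import numpy as np

        def estimate_wb_gains(skin_rgb, canonical_rgb=(0.80, 0.62, 0.55)):
            """Estimate per-channel white-balance gains from facial skin pixels.

            skin_rgb : (N, 3) skin pixels in (0, 1]. The canonical skin
            chromaticity is an illustrative placeholder, not a published value.
            """
            log_rgb = np.log(np.clip(np.asarray(skin_rgb, float), 1e-4, 1.0))
            gray = np.ones(3) / np.sqrt(3.0)
            # Shading shifts log-RGB along (1,1,1); project it out.
            chroma = log_rgb - np.outer(log_rgb @ gray, gray)
            canon = np.log(np.asarray(canonical_rgb))
            canon_chroma = canon - (canon @ gray) * gray
            illum_log = chroma.mean(axis=0) - canon_chroma  # illuminant offset
            return np.exp(-illum_log)                       # per-channel gains

        # Example: synthetic skin pixels under a warm (blue-deficient) light
        # should yield a blue gain above 1.
        rng = np.random.default_rng(0)
        skin = np.clip(rng.normal([0.75, 0.55, 0.42], 0.03, size=(1000, 3)),
                       0.05, 1.0)
        gains = estimate_wb_gains(skin)
        balanced = np.clip(skin * gains, 0.0, 1.0)
        print(np.round(gains, 3))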

  10. Artifacts produced during electrical stimulation of the vestibular nerve in cats. [autonomic nervous system components of motion sickness

    NASA Technical Reports Server (NTRS)

    Tang, P. C.

    1973-01-01

    Evidence is presented to indicate that evoked potentials in the recurrent laryngeal, the cervical sympathetic, and the phrenic nerve, commonly reported as being elicited by vestibular nerve stimulation, may be due to stimulation of structures other than the vestibular nerve. Experiments carried out in decerebrated cats indicated that stimulation of the petrous bone and not that of the vestibular nerve is responsible for the genesis of evoked potentials in the recurrent laryngeal and the cervical sympathetic nerves. The phrenic response to electrical stimulation applied through bipolar straight electrodes appears to be the result of stimulation of the facial nerve in the facial canal by current spread along the petrous bone, since stimulation of the suspended facial nerve evoked potentials only in the phrenic nerve and not in the recurrent laryngeal nerve. These findings indicate that autonomic components of motion sickness represent the secondary reactions and not the primary responses to vestibular stimulation.

  11. Does skull shape mediate the relationship between objective features and subjective impressions about the face?

    PubMed

    Marečková, Klára; Chakravarty, M Mallar; Huang, Mei; Lawrence, Claire; Leonard, Gabriel; Perron, Michel; Pike, Bruce G; Richer, Louis; Veillette, Suzanne; Pausova, Zdenka; Paus, Tomáš

    2013-10-01

    In our previous work, we described facial features associated with a successful recognition of the sex of the face (Marečková et al., 2011). These features were based on landmarks placed on the surface of faces reconstructed from magnetic resonance (MR) images; their position was therefore influenced by both soft tissue (fat and muscle) and bone structure of the skull. Here, we ask whether bone structure has dissociable influences on observers' identification of the sex of the face. To answer this question, we used a novel method of studying skull morphology using MR images and explored the relationship between skull features, facial features, and sex recognition in a large sample of adolescents (n=876; including 475 adolescents from our original report). To determine whether skull features mediate the relationship between facial features and identification accuracy, we performed mediation analysis using bootstrapping. In males, skull features fully mediated the relationship between facial features and sex judgments. In females, the skull mediated this relationship only after adjusting facial features for the amount of body fat (estimated with bioimpedance). While body fat had a very slight positive influence on correct sex judgments about male faces, there was a robust negative influence of body fat on the correct sex judgments about female faces. Overall, these results suggest that craniofacial bone structure is essential for correct sex judgments about a male face. In females, body fat negatively influences the accuracy of sex judgments, and craniofacial bone structure alone cannot explain the relationship between facial features and identification of a face as female. Copyright © 2013 Elsevier Inc. All rights reserved.
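
    Bootstrap mediation of the kind used here reduces to resampling the product of two regression paths. The following is a minimal numpy sketch on simulated placeholder data (the variable names and effect sizes are invented for illustration), not the authors' analysis pipeline.

        import numpy as np

        def indirect_effect(x, m, y):
            """Product-of-paths a*b: x -> m (path a), then m -> y adjusting for x."""
            a = np.polyfit(x, m, 1)[0]                   # slope of m on x
            X = np.column_stack([np.ones_like(x), x, m])
            b = np.linalg.lstsq(X, y, rcond=None)[0][2]  # slope of y on m | x
            return a * b

        # Toy data in the spirit of the design: facial feature score x,
        # skull feature m, accuracy of sex judgments y (all simulated).
        rng = np.random.default_rng(0)
        n = 400
        x = rng.normal(size=n)
        m = 0.6 * x + rng.normal(scale=0.8, size=n)
        y = 0.5 * m + rng.normal(scale=0.8, size=n)

        boots = np.empty(2000)
        for i in range(2000):
            idx = rng.integers(0, n, size=n)             # resample with replacement
            boots[i] = indirect_effect(x[idx], m[idx], y[idx])
        lo, hi = np.percentile(boots, [2.5, 97.5])
        print(f"indirect effect 95% CI: [{lo:.3f}, {hi:.3f}]")  # excludes 0 => mediation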

  12. Associations of physical strength with facial shape in an African pastoralist society, the Maasai of Northern Tanzania.

    PubMed

    Butovskaya, Marina L; Windhager, Sonja; Karelin, Dimitri; Mezentseva, Anna; Schaefer, Katrin; Fink, Bernhard

    2018-01-01

    Previous research has documented associations of physical strength and facial morphology predominantly in men of Western societies. Faces of strong men tend to be more robust, are rounder and have a prominent jawline compared with faces of weak men. Here, we investigate whether the morphometric patterns of strength-face relationships reported for members of industrialized societies can also be found in members of an African pastoralist society, the Maasai of Northern Tanzania. Handgrip strength (HGS) measures and facial photographs were collected from a sample of 185 men and 120 women of the Maasai in the Ngorongoro Conservation Area. In young adults (20-29 years; n = 95) and mid-adults (30-50 years; n = 114), we digitized 71 somatometric landmarks and semilandmarks to capture variation in facial morphology and performed shape regressions of landmark coordinates upon HGS. Results were visualized in the form of thin-plate spline deformation grids and geometric morphometric morphs. Individuals with higher HGS tended to have wider faces with a lower and broader forehead, a wider distance between the medial canthi of the eyes, a wider nose, fuller lips, and a larger, squarer lower facial outline compared with weaker individuals of the same age-sex group. In mid-adult men, these associations were weaker than in the other age-sex groups. We conclude that the patterns of HGS relationships with face shape in the Maasai are similar to those reported from related investigations in samples of industrialized societies. We discuss differences between the present and related studies with regard to knowledge about the causes for age- and sex-related facial shape variation and physical strength associations.
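
    The core computation, Procrustes alignment followed by regression of shape coordinates on strength, can be sketched briefly. The example below uses a simplified generalized Procrustes alignment and random placeholder data; semilandmark sliding and the thin-plate spline visualization of the study are omitted.

        import numpy as np
        from scipy.linalg import orthogonal_procrustes

        def gpa(configs, iters=3):
            """Simplified generalized Procrustes alignment of (n, k, 2) landmarks."""
            X = configs - configs.mean(axis=1, keepdims=True)       # center
            X /= np.linalg.norm(X, axis=(1, 2), keepdims=True)      # scale
            mean = X[0]
            for _ in range(iters):
                for i in range(len(X)):
                    R, _ = orthogonal_procrustes(X[i], mean)        # rotate to mean
                    X[i] = X[i] @ R
                mean = X.mean(axis=0)
                mean /= np.linalg.norm(mean)
            return X

        # Toy data: 50 faces, 71 2-D landmarks, plus handgrip strength (kg).
        rng = np.random.default_rng(1)
        shapes = rng.normal(size=(50, 71, 2))
        hgs = rng.uniform(20, 60, size=50)

        aligned = gpa(shapes).reshape(50, -1)
        # Shape regression: each Procrustes coordinate on strength.
        A = np.column_stack([np.ones(50), hgs])
        coefs, *_ = np.linalg.lstsq(A, aligned, rcond=None)
        slopes = coefs[1].reshape(71, 2)   # per-landmark displacement per kg HGS
        print(slopes.shape)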

  13. Facial thermal variations: A new marker of emotional arousal.

    PubMed

    Kosonogov, Vladimir; De Zorzi, Lucas; Honoré, Jacques; Martínez-Velázquez, Eduardo S; Nandrino, Jean-Louis; Martinez-Selva, José M; Sequeira, Henrique

    2017-01-01

    Functional infrared thermal imaging (fITI) is considered a promising method to measure emotional autonomic responses through facial cutaneous thermal variations. However, the facial thermal response to emotions still needs to be investigated within the framework of the dimensional approach to emotions. The main aim of this study was to assess how facial thermal variations index the emotional arousal and valence dimensions of visual stimuli. Twenty-four participants were presented with three groups of standardized emotional pictures (unpleasant, neutral and pleasant) from the International Affective Picture System. Facial temperature was recorded at the nose tip, an important region of interest for facial thermal variations, and compared to electrodermal responses, a robust index of emotional arousal. Both types of responses were also compared to subjective ratings of the pictures. An emotional arousal effect was found on the amplitude and latency of thermal responses and on the amplitude and frequency of electrodermal responses. Participants showed greater thermal and dermal responses to emotional than to neutral pictures, with no difference between pleasant and unpleasant ones. Thermal responses correlated with subjective ratings, and dermal responses tended to do so. Finally, in the emotional conditions compared to the neutral one, the frequency of simultaneous thermal and dermal responses increased while isolated thermal or dermal responses decreased. Overall, this study provides convergent evidence that fITI is a promising method for indexing the arousal dimension of emotional stimulation and, consequently, a credible alternative to the classical recording of electrodermal activity. The present research offers an original way to unveil autonomic involvement in emotional processes and opens new perspectives for measuring them in touchless conditions.

  14. Perceptual integration of kinematic components in the recognition of emotional facial expressions.

    PubMed

    Chiovetto, Enrico; Curio, Cristóbal; Endres, Dominik; Giese, Martin

    2018-04-01

    According to a long-standing hypothesis in motor control, complex body motion is organized in terms of movement primitives, reducing massively the dimensionality of the underlying control problems. For body movements, this low-dimensional organization has been convincingly demonstrated by the learning of low-dimensional representations from kinematic and EMG data. In contrast, the effective dimensionality of dynamic facial expressions is unknown, and dominant analysis approaches have been based on heuristically defined facial "action units," which reflect contributions of individual face muscles. We determined the effective dimensionality of dynamic facial expressions by learning a low-dimensional model from 11 facial expressions. We found a strikingly low dimensionality, with only two movement primitives being sufficient to simulate these dynamic expressions with high accuracy. This low dimensionality is confirmed statistically, by Bayesian model comparison of models with different numbers of primitives, and by a psychophysical experiment that demonstrates that expressions simulated with only two primitives are indistinguishable from natural ones. In addition, we find statistically optimal integration of the emotion information specified by these primitives in visual perception. Taken together, our results indicate that facial expressions might be controlled by a very small number of independent control units, permitting very low-dimensional parametrization of the associated facial expression.
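
    To make the primitive-learning idea concrete, here is a minimal sketch that recovers two primitives from nonnegative motion data with scikit-learn's NMF. This is only one of several possible decompositions (the study's own model differs, e.g. in allowing time delays), and all data below are simulated placeholders.

        import numpy as np
        from sklearn.decomposition import NMF

        # Toy stand-in for facial motion: frames stacked over 11 expressions,
        # each frame a vector of nonnegative activation values.
        rng = np.random.default_rng(0)
        T, d = 11 * 60, 30
        true_primitives = rng.random((2, d))
        weights = rng.random((T, 2))
        motion = weights @ true_primitives + 0.01 * rng.random((T, d))

        nmf = NMF(n_components=2, init="nndsvda", max_iter=500, random_state=0)
        scores = nmf.fit_transform(motion)      # time courses of the 2 primitives
        components = nmf.components_            # the primitives themselves
        err = (np.linalg.norm(motion - scores @ components)
               / np.linalg.norm(motion))
        print(f"relative reconstruction error with 2 primitives: {err:.3f}")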

  15. Novel Noninvasive Brain Disease Detection System Using a Facial Image Sensor

    PubMed Central

    Shu, Ting; Zhang, Bob; Tang, Yuan Yan

    2017-01-01

    Brain disease, including any condition or disability that affects the brain, is fast becoming a leading cause of death. Traditional diagnostic methods for brain disease are time-consuming, inconvenient, and not patient friendly. As more and more individuals undergo examinations to determine whether they suffer from any form of brain disease, developing noninvasive, efficient, and patient-friendly detection systems will be beneficial. Therefore, in this paper, we propose a novel noninvasive brain disease detection system based on the analysis of facial colors. The system consists of four components. A facial image is first captured through a specialized sensor; four facial key blocks are then located automatically within the various facial regions. Color features are extracted from each block to form a feature vector for classification via the Probabilistic Collaborative-based Classifier. To thoroughly test the system and its performance, seven facial key block combinations were evaluated. The best result was achieved using the second facial key block, for which the Probabilistic Collaborative-based Classifier proved the most suitable. The overall performance of the proposed system achieves an accuracy of 95%, a sensitivity of 94.33%, a specificity of 95.67%, and an average processing time (for one sample) of <1 min for brain disease detection. PMID:29292716
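
    The plain collaborative-representation core of such a classifier is easy to sketch: code the test vector over all training samples jointly with a ridge solution, then assign the class whose part of the code best reconstructs it. The probabilistic variant named in the record refines this; the sketch below, on simulated color features, shows only the least-squares core.

        import numpy as np

        def crc_classify(D, labels, y, lam=0.01):
            """Collaborative-representation classification (CRC-RLS style).

            D      : (d, n) dictionary whose columns are training feature vectors
            labels : (n,) class label of each column
            y      : (d,) test feature vector
            """
            n = D.shape[1]
            P = np.linalg.solve(D.T @ D + lam * np.eye(n), D.T)  # precomputable
            alpha = P @ y
            residuals = {}
            for c in np.unique(labels):
                mask = labels == c
                residuals[c] = np.linalg.norm(y - D[:, mask] @ alpha[mask])
            return min(residuals, key=residuals.get)

        # Toy example: 3 classes, 20 training vectors each, 12-D color features.
        rng = np.random.default_rng(2)
        centers = rng.normal(size=(3, 12))
        D = np.hstack([(centers[c] + 0.1 * rng.normal(size=(20, 12))).T
                       for c in range(3)])
        labels = np.repeat([0, 1, 2], 20)
        test = centers[1] + 0.1 * rng.normal(size=12)
        print(crc_classify(D, labels, test))   # expected: 1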

  16. Attentional and affective biases for attractive females emerge early in development.

    PubMed

    Rennels, Jennifer Lynn; Verba, Stephanie Ann

    2017-01-01

    Predominant experience with females early in development results in infants developing an attractive, female-like facial representation that guides children's attention toward and affective preferences for attractive females. When combined with increased interest in the other sex at puberty, these early emerging biases might help explain the robust prosocial and financial biases men exhibit toward attractive women during adulthood.

  17. Gaze Behavior Consistency among Older and Younger Adults When Looking at Emotional Faces

    PubMed Central

    Chaby, Laurence; Hupont, Isabelle; Avril, Marie; Luherne-du Boullay, Viviane; Chetouani, Mohamed

    2017-01-01

    The identification of non-verbal emotional signals, and especially of facial expressions, is essential for successful social communication among humans. Previous research has reported an age-related decline in facial emotion identification and argued for socio-emotional or aging-brain model explanations. However, age-related differences in the gaze strategies that accompany facial emotion processing remain under-explored. In this study, 22 young (22.2 years) and 22 older (70.4 years) adults were instructed to look at basic facial expressions while their gaze movements were recorded by an eye-tracker. Participants were then asked to identify each emotion, and the unbiased hit rate was used as the performance measure. Gaze data were first analyzed using traditional measures of fixations over two preferential regions of the face (upper and lower areas) for each emotion. Then, to better capture core gaze changes with advancing age, spatio-temporal gaze behaviors were examined in more depth using data-driven analyses (dimension reduction, clustering). Results first confirmed that older adults performed worse than younger adults at identifying facial expressions, except for "joy" and "disgust," and this was accompanied by a gaze preference toward the lower face; this preference was maintained during the whole time course of stimulus presentation. More importantly, trials corresponding to older adults were more tightly clustered, suggesting that the gaze behavior patterns of older adults are more consistent than those of younger adults. This study demonstrates that, confronted with emotional faces, younger and older adults do not prioritize or ignore the same facial areas. Older adults mainly adopted a focused-gaze strategy, focusing only on the lower part of the face throughout the stimulus display time. This consistency may constitute a robust and distinctive "social signature" of emotional identification in aging. Younger adults, however, were more dispersed in terms of gaze behavior and used a more exploratory-gaze strategy, repeatedly visiting both facial areas. PMID:28450841
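
    The unbiased hit rate used as the performance measure here is, to my understanding, Wagner's (1993) Hu, computable directly from a stimulus-by-response confusion matrix; the sketch and the numbers below are illustrative, not the study's data.

        import numpy as np

        def unbiased_hit_rate(confusion):
            """Wagner's unbiased hit rate from a stimulus x response matrix.

            Hu for emotion i = hits_i**2 / (row_total_i * column_total_i),
            which discounts a bias toward overusing a particular response label.
            """
            confusion = np.asarray(confusion, dtype=float)
            hits = np.diag(confusion)
            row = confusion.sum(axis=1)   # times each emotion was shown
            col = confusion.sum(axis=0)   # times each label was chosen
            return hits ** 2 / (row * col)

        # Rows: joy, disgust, fear (shown); columns: same labels (chosen).
        conf = [[18,  1,  1],
                [ 3, 14,  3],
                [ 2,  6, 12]]
        print(unbiased_hit_rate(conf))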

  18. [Partial facial duplication (a rare diprosopus): Case report and review of the literature].

    PubMed

    Es-Seddiki, A; Rkain, M; Ayyad, A; Nkhili, H; Amrani, R; Benajiba, N

    2015-12-01

    Diprosopus, or partial facial duplication, is a very rare congenital abnormality and a rare form of conjoined twinning. Partial facial duplication may or may not be symmetric and may involve the nose, maxilla, mandible, palate, tongue, and mouth. A male newborn of consanguineous parents was admitted on his first day of life for facial deformity. He presented with hypertelorism, two eyes, partial nose duplication (a large flattened nose, two columellae, and two lateral nostrils separated in the midline by a third deformed opening), two mouths, and a duplicated maxilla. Laboratory tests were normal. Craniofacial CT confirmed the maxillary duplication. This type of craniofacial duplication is a rare entity, with about 35 cases reported in the literature. Our patient was similar to a rare case of living diprosopus reported by Stiehm in 1972. Diprosopus is often associated with abnormalities of the gastrointestinal tract, central nervous system, and cardiovascular and respiratory systems, and with a high incidence of cleft lip and palate. Surgical treatment consists of resection of the duplicated components. Copyright © 2015 Elsevier Masson SAS. All rights reserved.

  19. Targeted presurgical decompensation in patients with yaw-dependent facial asymmetry

    PubMed Central

    Kim, Kyung-A; Lee, Ji-Won; Park, Jeong-Ho; Kim, Byoung-Ho; Ahn, Hyo-Won; Kim, Su-Jung

    2017-01-01

    Facial asymmetry can be classified into the rolling-dominant type (R-type), translation-dominant type (T-type), yawing-dominant type (Y-type), and atypical type (A-type) based on the distorted skeletal components that cause canting, translation, and yawing of the maxilla and/or mandible. Each facial asymmetry type represents dentoalveolar compensations in three dimensions that correspond to the main skeletal discrepancies. To obtain sufficient surgical correction, it is necessary to analyze the main skeletal discrepancies contributing to the facial asymmetry and then the skeletal-dental relationships in the maxilla and mandible separately. Particularly in cases of facial asymmetry accompanied by mandibular yawing, it is not simple to establish pre-surgical goals of tooth movement, since chin deviation and posterior gonial prominence can be either aggravated or compromised according to the direction of mandibular yawing. Thus, strategic dentoalveolar decompensations targeting the real basal skeletal discrepancies should be performed during presurgical orthodontic treatment to allow for sufficient skeletal correction with stability. In this report, we document targeted decompensation in two asymmetry patients, focusing on the more complicated yaw-dependent types: Y-type and A-type. This may suggest a clinical guideline for targeted decompensation in patients with different types of facial asymmetry. PMID:28523246

  1. Automatic prediction of facial trait judgments: appearance vs. structural models.

    PubMed

    Rojas, Mario; Masip, David; Todorov, Alexander; Vitria, Jordi

    2011-01-01

    Evaluating other individuals with respect to personality characteristics plays a crucial role in human relations and it is the focus of attention for research in diverse fields such as psychology and interactive computer systems. In psychology, face perception has been recognized as a key component of this evaluation system. Multiple studies suggest that observers use face information to infer personality characteristics. Interactive computer systems are trying to take advantage of these findings and apply them to increase the natural aspect of interaction and to improve the performance of interactive computer systems. Here, we experimentally test whether the automatic prediction of facial trait judgments (e.g. dominance) can be made by using the full appearance information of the face and whether a reduced representation of its structure is sufficient. We evaluate two separate approaches: a holistic representation model using the facial appearance information and a structural model constructed from the relations among facial salient points. State-of-the-art machine learning methods are applied to a) derive a facial trait judgment model from training data and b) predict a facial trait value for any face. Furthermore, we address the issue of whether there are specific structural relations among facial points that predict perception of facial traits. Experimental results over a set of labeled data (9 different trait evaluations) and classification rules (4 rules) suggest that a) prediction of perception of facial traits is learnable by both holistic and structural approaches; b) the most reliable prediction of facial trait judgments is obtained by certain types of holistic descriptions of facial appearance; and c) for some traits such as attractiveness and extroversion, there are relationships between specific structural features and social perceptions.
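
    A structural model of the kind described, relations among salient points fed to a learned predictor, can be sketched in a few lines; the features (pairwise distances), the ridge regressor, and all data below are illustrative stand-ins, not the authors' models.

        import numpy as np
        from itertools import combinations
        from sklearn.linear_model import RidgeCV

        def structural_features(landmarks):
            """Pairwise inter-landmark distances as a simple structural description."""
            return np.array([np.linalg.norm(landmarks[i] - landmarks[j])
                             for i, j in combinations(range(len(landmarks)), 2)])

        # Toy data: 200 faces, 20 salient points each, plus simulated ratings.
        rng = np.random.default_rng(3)
        faces = rng.normal(size=(200, 20, 2))
        X = np.array([structural_features(f) for f in faces])
        w = rng.normal(size=X.shape[1]) * (rng.random(X.shape[1]) < 0.05)
        ratings = X @ w + 0.5 * rng.normal(size=200)   # sparse "ground truth"

        model = RidgeCV(alphas=np.logspace(-3, 3, 13)).fit(X[:150], ratings[:150])
        print("held-out R^2:", round(model.score(X[150:], ratings[150:]), 2))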

  2. Facial Structure Predicts Sexual Orientation in Both Men and Women.

    PubMed

    Skorska, Malvina N; Geniole, Shawn N; Vrysen, Brandon M; McCormick, Cheryl M; Bogaert, Anthony F

    2015-07-01

    Biological models have typically framed sexual orientation in terms of effects of variation in fetal androgen signaling on sexual differentiation, although other biological models exist. Despite marked sex differences in facial structure, the relationship between sexual orientation and facial structure is understudied. A total of 52 lesbian women, 134 heterosexual women, 77 gay men, and 127 heterosexual men were recruited at a Canadian campus and various Canadian Pride and sexuality events. We found that facial structure differed depending on sexual orientation; substantial variation in sexual orientation was predicted using facial metrics computed by a facial modelling program from photographs of White faces. At the univariate level, lesbian and heterosexual women differed in 17 facial features (out of 63) and four were unique multivariate predictors in logistic regression. Gay and heterosexual men differed in 11 facial features at the univariate level, of which three were unique multivariate predictors. Some, but not all, of the facial metrics differed between the sexes. Lesbian women had noses that were more turned up (also more turned up in heterosexual men), mouths that were more puckered, smaller foreheads, and marginally more masculine face shapes (also in heterosexual men) than heterosexual women. Gay men had more convex cheeks, shorter noses (also in heterosexual women), and foreheads that were more tilted back relative to heterosexual men. Principal components analysis and discriminant functions analysis generally corroborated these results. The mechanisms underlying variation in craniofacial structure--both related and unrelated to sexual differentiation--may thus be important in understanding the development of sexual orientation.

  3. Three-dimensional gender differences in facial form of children in the North East of England.

    PubMed

    Bugaighis, Iman; Mattick, Clare R; Tiddeman, Bernard; Hobson, Ross

    2013-06-01

    The aim of this prospective cross-sectional morphometric study was to explore three-dimensional (3D) facial shape and form (shape plus size) variation within and between 8- and 12-year-old Caucasian children; 39 males were age-matched with 41 females. The 3D images were captured using a stereophotogrammetric system, and facial form was recorded by digitizing 39 anthropometric landmarks for each scan. The x, y, z coordinates of each landmark were extracted and used to calculate linear and angular measurements. 3D landmark asymmetry was quantified using Generalized Procrustes Analysis (GPA) and an average face was constructed for each gender. The average faces were superimposed and differences were visualized and quantified. Shape variations were explored using GPA and Principal Component Analysis. Analysis of covariance and Pearson correlation coefficients were used to explore gender differences and to determine any correlation between facial measurements and height or weight. Multivariate analysis was used to ascertain differences in facial measurements or 3D landmark asymmetry. There were no differences in height or weight between genders. There was a significant positive correlation between facial measurements and height and weight, and statistically significant differences in linear facial width measurements between genders. These differences were related to the larger size of males rather than differences in shape. There were no age- or gender-linked significant differences in 3D landmark asymmetry. Shape analysis confirmed similarities between males and females for facial shape and form in 8- to 12-year-old children. Any differences found were related to differences in facial size rather than shape.

  4. Facial expressions and pair bonds in hylobatids.

    PubMed

    Florkiewicz, Brittany; Skollar, Gabriella; Reichard, Ulrich H

    2018-06-06

    Facial expressions are an important component of primate communication that functions to transmit social information and modulate intentions and motivations. Chimpanzees and macaques, for example, produce a variety of facial expressions when communicating with conspecifics. Hylobatids also produce various facial expressions; however, the origin and function of these facial expressions are still largely unclear. It has been suggested that larger facial expression repertoires may have evolved in the context of social complexity, but this link has yet to be tested at a broader empirical basis. The social complexity hypothesis offers a possible explanation for the evolution of complex communicative signals such as facial expressions, because as the complexity of an individual's social environment increases so does the need for communicative signals. We used an intraspecies, pair-focused study design to test the link between facial expressions and sociality within hylobatids, specifically the strength of pair-bonds. The current study compared 206 hr of video and 103 hr of focal animal data for ten hylobatid pairs from three genera (Nomascus, Hoolock, and Hylobates) living at the Gibbon Conservation Center. Using video footage, we explored 5,969 facial expressions along three dimensions: repertoire use, repertoire breadth, and facial expression synchrony [FES]. We then used focal animal data to compare dimensions of facial expressiveness to pair bond strength and behavioral synchrony. Hylobatids in our study overlapped in only half of their facial expressions (50%) with the only other detailed, quantitative study of hylobatid facial expressions, while 27 facial expressions were uniquely observed in our study animals. Taken together, hylobatids have a large facial expression repertoire of at least 80 unique facial expressions. Contrary to our prediction, facial repertoire composition was not significantly correlated with pair bond strength, rates of territorial synchrony, or rates of behavioral synchrony. We found that FES was the strongest measure of hylobatid expressiveness and was significantly positively correlated with higher sociality index scores; however, FES showed no significant correlation with behavioral synchrony. No noticeable differences between pairs were found regarding rates of behavioral or territorial synchrony. Facial repertoire sizes and FES were not significantly correlated with rates of behavioral synchrony or territorial synchrony. Our study confirms an important role of facial expressions in maintaining pair bonds and coordinating activities in hylobatids. Data support the hypothesis that facial expressions and sociality have been linked in hylobatid and primate evolution. It is possible that larger facial repertoires may have contributed to strengthening pair bonds in primates, because richer facial repertoires provide more opportunities for FES which can effectively increase the "understanding" between partners through smoother coordination of interaction patterns. This study supports the social complexity hypothesis as the driving force for the evolution of complex communication signaling. © 2018 Wiley Periodicals, Inc.

  5. Early and late temporo-spatial effects of contextual interference during perception of facial affect.

    PubMed

    Frühholz, Sascha; Fehr, Thorsten; Herrmann, Manfred

    2009-10-01

    Contextual features during recognition of facial affect are assumed to modulate the temporal course of emotional face processing. Here, we simultaneously presented colored backgrounds during valence categorizations of facial expressions. Subjects incidentally learned to perceive negative, neutral and positive expressions within a specific colored context. Subsequently, subjects made fast valence judgments while presented with the same face-color-combinations as in the first run (congruent trials) or with different face-color-combinations (incongruent trials). Incongruent trials induced significantly increased response latencies and significantly decreased performance accuracy. Contextual incongruent information during processing of neutral expressions modulated the P1 and the early posterior negativity (EPN) both localized in occipito-temporal areas. Contextual congruent information during emotional face perception revealed an emotion-related modulation of the P1 for positive expressions and of the N170 and the EPN for negative expressions. Highest amplitude of the N170 was found for negative expressions in a negatively associated context and the N170 amplitude varied with the amount of overall negative information. Incongruent trials with negative expressions elicited a parietal negativity which was localized to superior parietal cortex and which most likely represents a posterior manifestation of the N450 as an indicator of conflict processing. A sustained activation of the late LPP over parietal cortex for all incongruent trials might reflect enhanced engagement with facial expression during task conditions of contextual interference. In conclusion, whereas early components seem to be sensitive to the emotional valence of facial expression in specific contexts, late components seem to subserve interference resolution during emotional face processing.

  6. Gender, age, and psychosocial context of the perception of facial esthetics.

    PubMed

    Tole, Nikoleta; Lajnert, Vlatka; Kovacevic Pavicic, Daniela; Spalj, Stjepan

    2014-01-01

    To explore the effects of gender, age, and psychosocial context on the perception of facial esthetics. The study included 1,444 Caucasian subjects aged 16 to 85 years. Two sets of color photographs illustrating 13 male and 13 female Caucasian facial type alterations, representing different skeletal and dentoalveolar components of sagittal maxillary-mandibular relationships, were used to estimate the facial profile attractiveness. The examinees graded the profiles based on a 0 to 10 numerical rating scale. The examinees graded the profiles of their own sex only from a social perspective, whereas opposite sex profiles were graded both from the social and emotional perspective separately. The perception of facial esthetics was found to be related to the gender, age, and psychosocial context of evaluation (p < 0.05). The most attractive profiles to men are the orthognathic female profile from the social perspective and the moderate bialveolar protrusion from the emotional perspective. The most attractive profile to women is the orthognathic male profile, when graded from the social aspect, and the mild bialveolar retrusion when graded from the emotional aspect. The age increase of the assessor results in a higher attractiveness grade. When planning treatment that modifies the facial profile, the clinician should bear in mind that the perception of facial profile esthetics is a complex phenomenon influenced by biopsychosocial factors. This study allows a better understanding of the concept of perception of facial esthetics that includes gender, age, and psychosocial context. © 2013 Wiley Periodicals, Inc.

  7. Unusual complication after genioplasty.

    PubMed

    Avelar, Rafael Linard; Sá, Carlos Diego Lopes; Esses, Diego Felipe Silveira; Becker, Otávio Emmel; Soares, Eduardo Costa Studart; de Oliveira, Rogerio Belle

    2014-01-01

    Facial beauty depends on shape, proportion, and harmony between the facial thirds. The chin is one of the most important components of the inferior third and plays an important role in defining facial aesthetics and harmony in both frontal and lateral views. There are two principal therapeutic approaches to mental deformities: alloplastic implants and mental basilar ostectomy, also known as genioplasty. The latter is more commonly used because of its great versatility in correcting three-dimensional deformities of the chin and its lower rates of postoperative complications. Possible transoperative and postoperative complications of genioplasty include mental nerve lesion, bleeding, damage to tooth roots, bone resorption of the mobilized segment, mandibular fracture, ptosis of the lower lip, and failure to stabilize the ostectomized segment. This study presents two cases of displacement of the osteotomized segment after genioplasty, associated with facial trauma during the postoperative period of orthognathic surgery and followed by rare complications not previously reported in the literature.

  8. The fallopian canal: a comprehensive review and proposal of a new classification.

    PubMed

    Mortazavi, M M; Latif, B; Verma, K; Adeeb, N; Deep, A; Griessenauer, C J; Tubbs, R S; Fukushima, T

    2014-03-01

    The facial nerve follows a complex course through the skull base. Understanding its anatomy is crucial during standard skull base approaches and resection of certain skull base tumors closely related to the nerve, especially, tumors at the cerebellopontine angle. Herein, we review the fallopian canal and its implications in surgical approaches to the skull base. Furthermore, we suggest a new classification. Based on the anatomy and literature, we propose that the meatal segment of the facial nerve be included as a component of the fallopian canal. A comprehensive knowledge of the course of the facial nerve is important to those who treat patients with pathology of or near this cranial nerve.

  9. Processing of Fear and Anger Facial Expressions: The Role of Spatial Frequency

    PubMed Central

    Comfort, William E.; Wang, Meng; Benton, Christopher P.; Zana, Yossi

    2013-01-01

    Spatial frequency (SF) components encode a portion of the affective value expressed in face images. The aim of this study was to estimate the relative weight of specific frequency-spectrum bandwidths in the discrimination of anger and fear facial expressions. The general paradigm was classification of the expression of faces morphed at varying proportions between anger and fear images, in which SF adaptation and SF subtraction were expected to shift the classification of facial emotion. A series of three experiments was conducted. In Experiment 1, subjects classified morphed face images that were unfiltered or filtered to remove either low (<8 cycles/face), middle (12–28 cycles/face), or high (>32 cycles/face) SF components. In Experiment 2, subjects were adapted to unfiltered or filtered prototypical (non-morphed) fear face images and subsequently classified morphed face images. In Experiment 3, subjects were adapted to unfiltered or filtered prototypical fear face images with the phase component randomized before classifying morphed face images. Removing mid-frequency components from the target images shifted classification toward fear. The same shift was observed after adaptation to unfiltered and to low- and middle-range filtered fear images. However, when the phase spectrum of the same adaptation stimuli was randomized, no adaptation effect was observed. These results suggest that medium SF components support the perception of fear more than anger at both low and high levels of processing. They also suggest that the effect at the high-level processing stage is related more to high-level featural and/or configural information than to the low-level frequency spectrum. PMID:23637687
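
    Band-limiting a face image in cycles/face is a standard FFT operation; a minimal numpy sketch follows, assuming a square image cropped so the face spans the frame (so cycles/image approximates cycles/face). The cutoffs mirror the bands named above, but the hard (non-smoothed) mask is a simplification.

        import numpy as np

        def bandpass_cycles_per_face(img, low=None, high=None):
            """Keep spatial frequencies between `low` and `high` cycles/image.

            Assumes a square image; radial frequency index = cycles/image.
            """
            f = np.fft.fftshift(np.fft.fft2(img))
            n = img.shape[0]
            yy, xx = np.mgrid[-n // 2:n - n // 2, -n // 2:n - n // 2]
            radius = np.hypot(yy, xx)
            mask = np.ones_like(radius, dtype=bool)
            if low is not None:
                mask &= radius >= low
            if high is not None:
                mask &= radius <= high
            return np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))

        face = np.random.rand(256, 256)                  # stand-in face image
        lsf = bandpass_cycles_per_face(face, high=8)     # low SF  (<8 c/f)
        msf = bandpass_cycles_per_face(face, low=12, high=28)  # mid SF
        hsf = bandpass_cycles_per_face(face, low=32)     # high SF (>32 c/f)
        print(lsf.shape, msf.shape, hsf.shape)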

  10. The Effect of Secure Attachment State and Infant Facial Expressions on Childless Adults' Parental Motivation.

    PubMed

    Ding, Fangyuan; Zhang, Dajun; Cheng, Gang

    2016-01-01

    This study examined the association between infant facial expressions and parental motivation as well as the interaction between attachment state and expressions. Two-hundred eighteen childless adults (M age = 19.22, 118 males, 100 females) were recruited. Participants completed the Chinese version of the State Adult Attachment Measure and the E-prime test, which comprised three components (a) liking, the specific hedonic experience in reaction to laughing, neutral, and crying infant faces; (b) representational responding, actively seeking infant faces with specific expressions; and (c) evoked responding, actively retaining images of three different infant facial expressions. While the first component refers to the "liking" of infants, the second and third components entail the "wanting" of an infant. Random intercepts multilevel models with emotion nested within participants revealed a significant interaction between secure attachment state and emotion on both liking and representational response. A hierarchical regression analysis was conducted to examine the unique contributions of secure attachment state. Findings demonstrated that, after controlling for sex, anxious, and avoidant, secure attachment state positively predicted parental motivations (liking and wanting) in the neutral and crying conditions, but not the laughing condition. These findings demonstrate the significant role of secure attachment state in parental motivation, specifically when infants display uncertain and negative emotions.

  11. Fixation to features and neural processing of facial expressions in a gender discrimination task.

    PubMed

    Neath, Karly N; Itier, Roxane J

    2015-10-01

    Early face encoding, as reflected by the N170 ERP component, is sensitive to fixation to the eyes. We investigated whether this sensitivity varies with facial expressions of emotion and whether it extends to other ERP components such as the P1 and EPN. Using eye-tracking to manipulate fixation on facial features, we found the N170 to be the only eye-sensitive component, and this was true for fearful, happy and neutral faces. A different effect of fixation to features was seen for the earlier P1, which likely reflected general sensitivity to face position. An early effect of emotion (∼120 ms) for happy faces was seen at occipital sites and was sustained until ∼350 ms post-stimulus. For fearful faces, an early effect was seen around 80 ms, followed by a later effect from ∼150 ms to ∼300 ms at lateral posterior sites. Results suggest that in this emotion-irrelevant gender discrimination task, processing of fearful and happy expressions occurred early and largely independently of the eye-sensitivity indexed by the N170. Processing of the two emotions involved different underlying brain networks active at different times. Copyright © 2015 Elsevier Inc. All rights reserved.

  12. Facial Attractiveness Ratings from Video-Clips and Static Images Tell the Same Story

    PubMed Central

    Rhodes, Gillian; Lie, Hanne C.; Thevaraja, Nishta; Taylor, Libby; Iredell, Natasha; Curran, Christine; Tan, Shi Qin Claire; Carnemolla, Pia; Simmons, Leigh W.

    2011-01-01

    Most of what we know about what makes a face attractive and why we have the preferences we do is based on attractiveness ratings of static images of faces, usually photographs. However, several reports that such ratings fail to correlate significantly with ratings made to dynamic video clips, which provide richer samples of appearance, challenge the validity of this literature. Here, we tested the validity of attractiveness ratings made to static images, using a substantial sample of male faces. We found that these ratings agreed very strongly with ratings made to videos of these men, despite the presence of much more information in the videos (multiple views, neutral and smiling expressions and speech-related movements). Not surprisingly, given this high agreement, the components of video-attractiveness were also very similar to those reported previously for static-attractiveness. Specifically, averageness, symmetry and masculinity were all significant components of attractiveness rated from videos. Finally, regression analyses yielded very similar effects of attractiveness on success in obtaining sexual partners, whether attractiveness was rated from videos or static images. These results validate the widespread use of attractiveness ratings made to static images in evolutionary and social psychological research. We speculate that this validity may stem from our tendency to make rapid and robust judgements of attractiveness. PMID:22096491

  13. Fusiform gyrus face selectivity relates to individual differences in facial recognition ability.

    PubMed

    Furl, Nicholas; Garrido, Lúcia; Dolan, Raymond J; Driver, Jon; Duchaine, Bradley

    2011-07-01

    Regions of the occipital and temporal lobes, including a region in the fusiform gyrus (FG), have been proposed to constitute a "core" visual representation system for faces, in part because they show face selectivity and face repetition suppression. But recent fMRI studies of developmental prosopagnosics (DPs) raise questions about whether these measures relate to face processing skills. Although DPs manifest deficient face processing, most studies to date have not shown unequivocal reductions of functional responses in the proposed core regions. We scanned 15 DPs and 15 non-DP control participants with fMRI while employing factor analysis to derive behavioral components related to face identification or other processes. Repetition suppression specific to facial identities in FG or to expression in FG and STS did not show compelling relationships with face identification ability. However, we identified robust relationships between face selectivity and face identification ability in FG across our sample for several convergent measures, including voxel-wise statistical parametric mapping, peak face selectivity in individually defined "fusiform face areas" (FFAs), and anatomical extents (cluster sizes) of those FFAs. None of these measures showed associations with behavioral expression or object recognition ability. As a group, DPs had reduced face-selective responses in bilateral FFA when compared with non-DPs. Individual DPs were also more likely than non-DPs to lack expected face-selective activity in core regions. These findings associate individual differences in face processing ability with selectivity in core face processing regions. This confirms that face selectivity can provide a valid marker for neural mechanisms that contribute to face identification ability.

  14. Effects of noninvasive facial nerve stimulation in the dog middle cerebral artery occlusion model of ischemic stroke.

    PubMed

    Borsody, Mark K; Yamada, Chisa; Bielawski, Dawn; Heaton, Tamara; Castro Prado, Fernando; Garcia, Andrea; Azpiroz, Joaquín; Sacristan, Emilio

    2014-04-01

    Facial nerve stimulation has been proposed as a new treatment of ischemic stroke because autonomic components of the nerve dilate cerebral arteries and increase cerebral blood flow when activated. A noninvasive facial nerve stimulator device based on pulsed magnetic stimulation was tested in a dog middle cerebral artery occlusion model. We used an ischemic stroke dog model involving injection of autologous blood clot into the internal carotid artery that reliably embolizes to the middle cerebral artery. Thirty minutes after middle cerebral artery occlusion, the geniculate ganglion region of the facial nerve was stimulated for 5 minutes. Brain perfusion was measured using gadolinium-enhanced contrast MRI, and ATP and total phosphate levels were measured using 31P spectroscopy. Separately, a dog model of brain hemorrhage involving puncture of the intracranial internal carotid artery served as an initial examination of facial nerve stimulation safety. Facial nerve stimulation caused a significant improvement in perfusion in the hemisphere affected by ischemic stroke and a reduction in ischemic core volume in comparison to sham stimulation control. The ATP/total phosphate ratio showed a large decrease poststroke in the control group versus a normal level in the stimulation group. The same stimulation administered to dogs with brain hemorrhage did not cause hematoma enlargement. These results support the development and evaluation of a noninvasive facial nerve stimulator device as a treatment of ischemic stroke.

  15. Facial transplantation: A concise update

    PubMed Central

    Barrera-Pulido, Fernando; Gomez-Cia, Tomas; Sicilia-Castro, Domingo; Garcia-Perla-Garcia, Alberto; Gacto-Sanchez, Purificacion; Hernandez-Guisado, Jose-Maria; Lagares-Borrego, Araceli; Narros-Gimenez, Rocio; Gonzalez-Padilla, Juan D.

    2013-01-01

    Objectives: To update the clinical results obtained by the first facial transplantation teams worldwide and to review the literature concerning the main surgical, immunological, ethical, and follow-up aspects described for facially transplanted patients. Study design: MEDLINE search of articles published on "face transplantation" until March 2012. Results: Eighteen clinical cases were studied. The mean patient age was 37.5 years, with a higher prevalence of men. The main surgical indication was gunshot injury (6 patients). All patients had previously undergone multiple conventional surgical reconstructive procedures, which had failed. Altogether 8 transplant teams from 4 countries participated. Thirteen partial and 5 full face transplantations have been performed. Allografts varied in the facial anatomical components and the amount of skin, muscle, bone, and other tissues included, though all were grafted successfully and remained viable without significant postoperative surgical complications. The patient with the longest follow-up was at 5 years. Two patients died 2 and 27 months after transplantation. Conclusions: Clinical experience has demonstrated the feasibility of facial transplantation as a valuable reconstructive option, but it is still considered an experimental procedure with unresolved issues. Results show that from a clinical, technical, and immunological standpoint, facial transplantation has achieved functional, aesthetic, and social rehabilitation in severely disfigured patients. Key words:Face transplantation, composite tissue transplantation, face allograft, facial reconstruction, outcomes and complications of face transplantation. PMID:23229268

  16. Robust estimation for partially linear models with large-dimensional covariates

    PubMed Central

    Zhu, LiPing; Li, RunZe; Cui, HengJian

    2014-01-01

    We are concerned with robust estimation procedures to estimate the parameters in partially linear models with large-dimensional covariates. To enhance interpretability, we suggest implementing a nonconcave regularization method in the robust estimation procedure to select important covariates from the linear component. We establish consistency for both the linear and the nonlinear components when the covariate dimension diverges at the rate of o(n), where n is the sample size. We show that the robust estimate of the linear component performs asymptotically as well as its oracle counterpart, which assumes the baseline function and the unimportant covariates to be known a priori. With a consistent estimator of the linear component, we estimate the nonparametric component by robust local linear regression. The robust estimate of the nonlinear component is proved to perform asymptotically as well as if the linear component were known in advance. Comprehensive simulation studies are carried out, and an application is presented, to examine the finite-sample performance of the proposed procedures. PMID:24955087
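
    The partially linear model y = X·beta + g(t) + error can be fitted by a simple robust backfitting loop; the sketch below alternates a Huber fit for the linear part with a kernel smooth for g. This is only the unpenalized core idea under heavy-tailed noise: the paper's nonconcave penalty, variable selection, and robust local linear (rather than Nadaraya-Watson) smoother are omitted.

        import numpy as np
        from sklearn.linear_model import HuberRegressor

        def fit_partially_linear(X, t, y, bandwidth=0.3, iters=5):
            """Backfitting sketch for y = X @ beta + g(t) + noise."""
            g = np.zeros_like(y)
            for _ in range(iters):
                huber = HuberRegressor(max_iter=200).fit(X, y - g)  # robust linear part
                resid = y - X @ huber.coef_ - huber.intercept_
                K = np.exp(-0.5 * ((t[:, None] - t[None, :]) / bandwidth) ** 2)
                g = (K @ resid) / K.sum(axis=1)        # kernel smooth of residuals
            return huber.coef_, g

        rng = np.random.default_rng(4)
        n = 300
        X = rng.normal(size=(n, 5))
        t = rng.uniform(0, 1, n)
        y = (X @ np.array([1.5, 0.0, 0.0, -2.0, 0.0]) + np.sin(2 * np.pi * t)
             + 0.3 * rng.standard_t(df=2, size=n))     # heavy-tailed noise
        beta, _ = fit_partially_linear(X, t, y)
        print(np.round(beta, 2))   # should approximate [1.5, 0, 0, -2, 0]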

  18. Neural processing of fearful and happy facial expressions during emotion-relevant and emotion-irrelevant tasks: a fixation-to-feature approach

    PubMed Central

    Neath-Tavares, Karly N.; Itier, Roxane J.

    2017-01-01

    Research suggests an important role of the eyes and mouth for discriminating facial expressions of emotion. A gaze-contingent procedure was used to test the impact of fixation to facial features on the neural response to fearful, happy and neutral facial expressions in an emotion discrimination (Exp.1) and an oddball detection (Exp.2) task. The N170 was the only eye-sensitive ERP component, and this sensitivity did not vary across facial expressions. In both tasks, compared to neutral faces, responses to happy expressions were seen as early as 100–120ms occipitally, while responses to fearful expressions started around 150ms, on or after the N170, at both occipital and lateral-posterior sites. Analyses of scalp topographies revealed different distributions of these two emotion effects across most of the epoch. Emotion processing interacted with fixation location at different times between tasks. Results suggest a role of both the eyes and mouth in the neural processing of fearful expressions and of the mouth in the processing of happy expressions, before 350ms. PMID:27430934

  19. Sutural growth restriction and modern human facial evolution: an experimental study in a pig model

    PubMed Central

    Holton, Nathan E; Franciscus, Robert G; Nieves, Mary Ann; Marshall, Steven D; Reimer, Steven B; Southard, Thomas E; Keller, John C; Maddux, Scott D

    2010-01-01

    Facial size reduction and facial retraction are key features that distinguish modern humans from archaic Homo. In order to more fully understand the emergence of modern human craniofacial form, it is necessary to understand the underlying evolutionary basis for these defining characteristics. Although it is well established that the cranial base exerts considerable influence on the evolutionary and ontogenetic development of facial form, less emphasis has been placed on developmental factors intrinsic to the facial skeleton proper. The present analysis was designed to assess anteroposterior facial reduction in a pig model and to examine the potential role that this dynamic has played in the evolution of modern human facial form. Ten female sibship cohorts, each consisting of three individuals, were allocated to one of three groups. In the experimental group (n = 10), microplates were affixed bilaterally across the zygomaticomaxillary and frontonasomaxillary sutures at 2 months of age. The sham group (n = 10) received only screw implantation and the controls (n = 10) underwent no surgery. Following 4 months of post-surgical growth, we assessed variation in facial form using linear measurements and principal components analysis of Procrustes scaled landmarks. There were no differences between the control and sham groups; however, the experimental group exhibited a highly significant reduction in facial projection and overall size. These changes were associated with significant differences in the infraorbital region of the experimental group including the presence of an infraorbital depression and an inferiorly and coronally oriented infraorbital plane, in contrast to a flat, superiorly and sagittally oriented infraorbital plane in the control and sham groups. These altered configurations are markedly similar to important additional facial features that differentiate modern humans from archaic Homo, and suggest that facial length restriction via rigid plate fixation is a potentially useful model to assess the developmental factors that underlie changing patterns in craniofacial form associated with the emergence of modern humans. PMID:19929910

  20. Facial anthropometric differences among gender, ethnicity, and age groups.

    PubMed

    Zhuang, Ziqing; Landsittel, Douglas; Benson, Stacey; Roberge, Raymond; Shaffer, Ronald

    2010-06-01

    The impact of race/ethnicity on facial anthropometric data in the US workforce, and its implications for the development of personal protective equipment, has not been investigated to any significant degree. The proliferation of minority populations in the US workforce has increased the need to investigate differences in facial dimensions among these workers. The objective of this study was to determine face shape and size differences among gender, race, and age groups from the National Institute for Occupational Safety and Health survey of 3997 US civilian workers. Survey participants were divided into two gender groups, four racial/ethnic groups, and three age groups. Measurements of height, weight, neck circumference, and 18 facial dimensions were collected using traditional anthropometric techniques. A multivariate analysis of the data was performed using Principal Component Analysis. An exploratory analysis using a linear model assessed the effects of demographic factors on anthropometric features: the 21 anthropometric measurements, body mass index, and the first and second principal component scores were dependent variables, while gender, ethnicity, age, occupation, weight, and height served as independent variables. Gender contributes significantly to size for 19 of 24 dependent variables. African-Americans have statistically shorter, wider, and shallower noses than Caucasians. Hispanic workers have 14 facial features that are significantly larger than those of Caucasians, while their nose protrusion, height, and head length are significantly shorter. The other ethnic group was composed primarily of Asian subjects and has statistically different dimensions from Caucasians for 16 anthropometric values. Nineteen anthropometric values for subjects at least 45 years of age are statistically different from those measured for subjects between 18 and 29 years of age. Workers employed in manufacturing, fire fighting, healthcare, law enforcement, and other occupational groups have facial features that differ significantly from those of workers in construction. Statistically significant differences in facial anthropometric dimensions (P < 0.05) were noted between males and females, among all racial/ethnic groups, and between subjects who were at least 45 years old and workers between 18 and 29 years of age. These findings could be important to the design and manufacture of respirators, as well as to employers responsible for supplying respiratory protective equipment to their employees.
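
    The analysis pattern, PCA over standardized measurements, then a linear model relating component scores to demographics, is easy to sketch. All data below are simulated placeholders (a gender shift injected into 21 fake measurements), not the NIOSH survey.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.linear_model import LinearRegression
        from sklearn.preprocessing import StandardScaler

        # Toy stand-in: 21 facial measurements for 500 workers, gender 0/1.
        # The real analysis also enters ethnicity, age, occupation, height,
        # and weight as covariates.
        rng = np.random.default_rng(5)
        gender = rng.integers(0, 2, size=500)
        meas = rng.normal(size=(500, 21)) + 0.8 * gender[:, None]

        z = StandardScaler().fit_transform(meas)
        pcs = PCA(n_components=2).fit_transform(z)   # PC1 ~ overall size here

        model = LinearRegression().fit(gender.reshape(-1, 1), pcs[:, 0])
        # Note: the sign of a principal component is arbitrary, so only the
        # magnitude of this coefficient is meaningful.
        print("PC1 shift between gender codings:", round(model.coef_[0], 2))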

  1. A unified classifier for robust face recognition based on combining multiple subspace algorithms

    NASA Astrophysics Data System (ADS)

    Ijaz Bajwa, Usama; Ahmad Taj, Imtiaz; Waqas Anwar, Muhammad

    2012-10-01

    Face recognition, the fastest growing biometric technology, has expanded manifold in the last few years. Various new algorithms and commercial systems have been proposed and developed. However, none of the proposed or developed algorithms is a complete solution: an algorithm may work very well on one set of image variations, say illumination changes, but may not work properly on another, such as expression variations. This study is motivated by the fact that no single classifier can claim to show generally better performance against all facial image variations. To overcome this shortcoming and achieve generality, combining several classifiers using various strategies has been studied extensively, including the question of which classifiers are suitable for the task. The study is based on the outcome of a comprehensive comparative analysis conducted on a combination of six subspace extraction algorithms and four distance metrics on three facial databases. The analysis leads to the selection of the most suitable classifiers, each of which performs better on one task or another. These classifiers are then combined into an ensemble classifier by two different strategies, weighted sum and re-ranking. The results of the ensemble classifier show that these strategies can be effectively used to construct a single classifier that can successfully handle varying facial image conditions of illumination, aging, and facial expressions.
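
    The weighted-sum fusion strategy can be stated very compactly. The sketch below is a generic illustration, not the authors' implementation: per-classifier similarity scores are min-max normalized so that different subspace methods and distance metrics become comparable, then combined with fixed weights before ranking gallery identities.

    ```python
    import numpy as np

    def weighted_sum_fusion(score_matrices, weights):
        """score_matrices: list of (n_probes x n_gallery) similarity arrays,
        one per base classifier; weights: one scalar per classifier.
        Returns the index of the best-matching gallery entry per probe."""
        fused = np.zeros_like(score_matrices[0], dtype=float)
        for scores, w in zip(score_matrices, weights):
            lo, hi = scores.min(), scores.max()
            fused += w * (scores - lo) / (hi - lo + 1e-12)  # normalize to [0, 1]
        return fused.argmax(axis=1)
    ```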

  2. The superficial temporal fat pad and its ramifications for temporalis muscle construction in facial approximation.

    PubMed

    Stephan, Carl N; Devine, Matthew

    2009-10-30

    The construction of the facial muscles (particularly those of mastication) is generally thought to enhance the accuracy of facial approximation methods because they increase attention paid to face anatomy. However, the lack of consideration for non-muscular structures of the face when using these "anatomical" methods ironically forces one of the two large masticatory muscles to be exaggerated beyond reality. To demonstrate and resolve this issue, the temporal regions of nineteen caucasoid human cadavers (10 females, 9 males; mean age=84 years, s=9 years, range=58-97 years) were investigated. Soft tissue depths were measured at regular intervals across the temporal fossa in 10 cadavers, and the thickness of the muscle and fat components was quantified in the other nine cadavers. The measurements indicated that the temporalis muscle generally accounts for <50% of the total soft tissue depth, and does not fill the entirety of the fossa (as generally known in the anatomical literature, but not as followed in facial approximation practice). In addition, a soft tissue bulge was consistently observed in the anteroinferior portion of the temporal fossa (as also evident in younger individuals), and during dissection, this bulge was found to closely correspond to the superficial temporal fat pad (STFP). Thus, the facial surface does not follow a simple undulating curve of the temporalis muscle as currently undertaken in facial approximation methods. New metric-based facial approximation guidelines are presented to facilitate accurate construction of the STFP and the temporalis muscle for future facial approximation casework. This study warrants further investigations of the temporalis muscle and the STFP in younger age groups and demonstrates that untested facial approximation guidelines, including those propounded to be anatomical, should be cautiously regarded.

  3. Appearance-based human gesture recognition using multimodal features for human computer interaction

    NASA Astrophysics Data System (ADS)

    Luo, Dan; Gao, Hua; Ekenel, Hazim Kemal; Ohya, Jun

    2011-03-01

    The use of gesture as a natural interface plays a crucially important role in achieving intelligent Human Computer Interaction (HCI). Human gestures convey meaning through different components of visual action, such as motion of the hands, facial expression, and the torso. So far, in the field of gesture recognition, most previous works have focused on the manual component of gestures. In this paper, we present an appearance-based multimodal gesture recognition framework, which combines different groups of features, such as facial expression features and hand motion features, extracted from image frames captured by a single web camera. We consider 12 classes of human gestures with facial expressions, conveying neutral, negative, and positive meanings, drawn from American Sign Language (ASL). We combine the features at two levels by employing two fusion strategies. At the feature level, an early feature combination is performed by concatenating and weighting the different feature groups, and LDA is used to choose the most discriminative elements by projecting the features onto a discriminative expression space. The second strategy is applied at the decision level: weighted decisions from the single modalities are fused at a later stage. A condensation-based algorithm is adopted for classification. We collected a data set with three to seven recording sessions and conducted experiments with both combination techniques. Experimental results showed that facial analysis improves hand gesture recognition and that decision-level fusion performs better than feature-level fusion.
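
    A minimal sketch of the feature-level (early) fusion step is shown below, assuming pre-extracted per-frame feature arrays. It concatenates weighted facial-expression and hand-motion features and projects them with scikit-learn's LDA, mirroring the description above rather than reproducing the authors' system (which also includes decision-level fusion and condensation-based classification).

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    def early_fusion_lda(face_feats, hand_feats, labels, w_face=0.5, w_hand=0.5):
        """face_feats, hand_feats: (n_samples x d1), (n_samples x d2) arrays;
        labels: gesture class per sample. Weight, concatenate, and project
        onto a discriminative subspace (at most n_classes - 1 dimensions)."""
        X = np.hstack([w_face * face_feats, w_hand * hand_feats])
        lda = LinearDiscriminantAnalysis()
        return lda.fit(X, labels)  # lda.transform(X) gives the projected features
    ```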

  4. Robustness of Flexible Systems With Component-Level Uncertainties

    NASA Technical Reports Server (NTRS)

    Maghami, Peiman G.

    2000-01-01

    Robustness of flexible systems in the presence of model uncertainties at the component level is considered. Specifically, an approach for formulating robustness of flexible systems in the presence of frequency and damping uncertainties at the component level is presented. The synthesis of the components is based on a modification of a controls-based algorithm for component mode synthesis. The formulation deals first with the robustness of synthesized flexible systems. It is then extended to deal with global (non-synthesized) dynamic models with component-level uncertainties by projecting uncertainties from the component level to the system level. A numerical example involving a two-dimensional simulated docking problem is worked out to demonstrate the feasibility of the proposed approach.

  5. A greater decline in female facial attractiveness during middle age reflects women’s loss of reproductive value

    PubMed Central

    Maestripieri, Dario; Klimczuk, Amanda C. E.; Traficonte, Daniel M.; Wilson, M. Claire

    2014-01-01

    Facial attractiveness represents an important component of an individual’s overall attractiveness as a potential mating partner. Perceptions of facial attractiveness are expected to vary with age-related changes in health, reproductive value, and power. In this study, we investigated perceptions of facial attractiveness, power, and personality in two groups of women of pre- and post-menopausal ages (35–50 years and 51–65 years, respectively) and two corresponding groups of men. We tested three hypotheses: (1) that perceived facial attractiveness would be lower for older than for younger men and women; (2) that the age-related reduction in facial attractiveness would be greater for women than for men; and (3) that for men, there would be a larger increase in perceived power at older ages. Eighty facial stimuli were rated by 60 (30 male, 30 female) middle-aged women and men using online surveys. Our three main hypotheses were supported by the data. Consistent with sex differences in mating strategies, the greater age-related decline in female facial attractiveness was driven by male respondents, while the greater age-related increase in male perceived power was driven by female respondents. In addition, we found evidence that some personality ratings were correlated with perceived attractiveness and power ratings. The results of this study are consistent with evolutionary theory and with previous research showing that faces can provide important information about characteristics that men and women value in a potential mating partner such as their health, reproductive value, and power or possession of resources. PMID:24592253

  6. Event-related theta synchronization predicts deficit in facial affect recognition in schizophrenia.

    PubMed

    Csukly, Gábor; Stefanics, Gábor; Komlósi, Sarolta; Czigler, István; Czobor, Pál

    2014-02-01

    Growing evidence suggests that abnormalities in the synchronized oscillatory activity of neurons in schizophrenia may lead to impaired neural activation and temporal coding, and thus to neurocognitive dysfunctions, such as deficits in facial affect recognition. To gain insight into the neurobiological processes linked to facial affect recognition, we investigated both induced and evoked oscillatory activity by calculating the Event Related Spectral Perturbation (ERSP) and the Inter Trial Coherence (ITC) during facial affect recognition. Fearful and neutral faces as well as nonface patches were presented to 24 patients with schizophrenia and 24 matched healthy controls while EEG was recorded. The participants' task was to recognize facial expressions. Because previous findings with healthy controls showed that facial feature decoding was associated primarily with oscillatory activity in the theta band, we analyzed ERSP and ITC in this frequency band in the time interval of 140-200 ms, which corresponds to the N170 component. Event-related theta activity and phase-locking to facial expressions, but not to nonface patches, predicted emotion recognition performance in both controls and patients. Event-related changes in theta amplitude and phase-locking were found to be significantly weaker in patients than in healthy controls, in line with previous investigations showing decreased neural synchronization in the low frequency bands in patients with schizophrenia. Neural synchrony is thought to underlie distributed information processing. Our results indicate less effective functioning of the facial feature recognition process, which may contribute to less effective social cognition in schizophrenia. PsycINFO Database Record (c) 2014 APA, all rights reserved.
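
    ERSP and ITC are standard quantities, so a small illustration may help. The sketch below is not the authors' pipeline: it convolves single-trial EEG with a hand-built complex Morlet wavelet at an assumed theta frequency, then takes mean power across trials as a simplified ERSP (without baseline normalization) and the magnitude of the across-trial mean of unit phase vectors as the ITC.

    ```python
    import numpy as np

    def theta_ersp_itc(epochs, sfreq, freq=6.0, n_cycles=5.0):
        """epochs: (n_trials x n_times) single-channel EEG segments.
        Returns time courses of theta power (ERSP-like) and inter-trial
        coherence (ITC) at the given frequency."""
        sigma_t = n_cycles / (2.0 * np.pi * freq)    # temporal width of wavelet
        t = np.arange(-3 * sigma_t, 3 * sigma_t, 1.0 / sfreq)
        wavelet = np.exp(2j * np.pi * freq * t) * np.exp(-t**2 / (2 * sigma_t**2))
        wavelet /= np.sum(np.abs(wavelet))           # crude amplitude normalization
        analytic = np.array([np.convolve(tr, wavelet, mode="same") for tr in epochs])
        ersp = (np.abs(analytic) ** 2).mean(axis=0)  # mean power across trials
        itc = np.abs((analytic / np.abs(analytic)).mean(axis=0))
        return ersp, itc
    ```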

  7. Functionally dissociated aspects in anterior and posterior electrocortical processing of facial threat.

    PubMed

    Schutter, Dennis J L G; de Haan, Edward H F; van Honk, Jack

    2004-06-01

    The angry facial expression is an important socially threatening stimulus argued to have evolved to regulate social hierarchies. In the present study, event-related potentials (ERP) were used to investigate the involvement and temporal dynamics of the frontal and parietal regions in the processing of angry facial expressions. Angry, happy and neutral faces were shown to eighteen healthy right-handed volunteers in a passive viewing task. Stimulus-locked ERPs were recorded from the frontal and parietal scalp sites. The P200, N300 and early contingent negative variation (eCNV) components of the electric brain potentials were investigated. Analyses revealed statistically significant reductions in P200 amplitudes for the angry facial expression at both frontal and parietal electrode sites. Furthermore, apart from being strongly associated with the anterior P200, the N300 was also more negative for the angry facial expression in the anterior regions. Finally, the eCNV was more pronounced over the parietal sites for the angry facial expressions. The present study demonstrated specific electrocortical correlates underlying the processing of angry facial expressions in the anterior and posterior brain sectors. The P200 is argued to indicate valence tagging by a fast and early detection mechanism. The lowered N300 with an anterior distribution for the angry facial expressions indicates more elaborate evaluation of stimulus relevance. The fact that the P200 and the N300 are highly correlated suggests that they reflect different stages of the same anterior evaluation mechanism. The more pronounced posterior eCNV suggests sustained attention to socially threatening information. Copyright 2004 Elsevier B.V.

  8. A greater decline in female facial attractiveness during middle age reflects women's loss of reproductive value.

    PubMed

    Maestripieri, Dario; Klimczuk, Amanda C E; Traficonte, Daniel M; Wilson, M Claire

    2014-01-01

    Facial attractiveness represents an important component of an individual's overall attractiveness as a potential mating partner. Perceptions of facial attractiveness are expected to vary with age-related changes in health, reproductive value, and power. In this study, we investigated perceptions of facial attractiveness, power, and personality in two groups of women of pre- and post-menopausal ages (35-50 years and 51-65 years, respectively) and two corresponding groups of men. We tested three hypotheses: (1) that perceived facial attractiveness would be lower for older than for younger men and women; (2) that the age-related reduction in facial attractiveness would be greater for women than for men; and (3) that for men, there would be a larger increase in perceived power at older ages. Eighty facial stimuli were rated by 60 (30 male, 30 female) middle-aged women and men using online surveys. Our three main hypotheses were supported by the data. Consistent with sex differences in mating strategies, the greater age-related decline in female facial attractiveness was driven by male respondents, while the greater age-related increase in male perceived power was driven by female respondents. In addition, we found evidence that some personality ratings were correlated with perceived attractiveness and power ratings. The results of this study are consistent with evolutionary theory and with previous research showing that faces can provide important information about characteristics that men and women value in a potential mating partner such as their health, reproductive value, and power or possession of resources.

  9. A robust two-way switching control system for remote piloting and stabilization of low-cost quadrotor UAVs

    NASA Astrophysics Data System (ADS)

    Ripamonti, Francesco; Resta, Ferruccio; Vivani, Andrea

    2015-04-01

    The aim of this paper is to present two control logics and an attitude estimator for UAV stabilization and remote piloting that are as robust as possible to physical parameter variations and to other external disturbances. Moreover, they need to be implementable on low-cost micro-controllers in order to be attractive for commercial drones. As an example, possible applications of the two switching control logics could be area surveillance and facial recognition by means of a camera mounted on the drone: the high-computational-speed logic is used to reach the target, at which point the high-stability logic is activated in order to complete the recognition task.
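
    The two-way switching idea can be caricatured with a controller that swaps between aggressive and conservative proportional-derivative gains once the drone is close to its target. All gains and the switching criterion below are illustrative assumptions, not values from the paper.

    ```python
    class TwoWaySwitchingController:
        """Toy sketch: a fast-response gain set for reaching the target and a
        high-stability set for hovering during recognition tasks."""

        FAST = {"kp": 8.0, "kd": 1.5}     # aggressive gains, quick convergence
        STABLE = {"kp": 3.0, "kd": 2.5}   # conservative gains, well damped

        def __init__(self, switch_radius=0.5):
            self.switch_radius = switch_radius  # meters from the target

        def torque(self, att_error, att_rate, dist_to_target):
            gains = self.STABLE if dist_to_target < self.switch_radius else self.FAST
            return gains["kp"] * att_error - gains["kd"] * att_rate
    ```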

  10. Optimising ballistic facial coverage from military fragmenting munitions: a consensus statement.

    PubMed

    Breeze, J; Tong, D C; Powers, D; Martin, N A; Monaghan, A M; Evriviades, D; Combes, J; Lawton, G; Taylor, C; Kay, A; Baden, J; Reed, B; MacKenzie, N; Gibbons, A J; Heppell, S; Rickard, R F

    2017-02-01

    VIRTUS is the first United Kingdom (UK) military personal armour system to provide components that are capable of protecting the whole face from low velocity ballistic projectiles. Protection is modular, using a helmet worn with ballistic eyewear, a visor, and a mandibular guard. When all four components are worn together, the face is completely covered, but the heat, discomfort, and weight may not be optimal in all types of combat. We organized a Delphi consensus group analysis with 29 military consultant surgeons from the UK, United States, Canada, Australia, and New Zealand to identify a potential hierarchy, in order of importance, of the functional facial units that require protection. We identified the causes of those facial injuries that are hardest to reconstruct, and the most effective combinations of facial protection. Protection is required from both penetrating projectiles and burns. There was strong consensus that blunt injury to the facial skeleton was currently not a military priority. Functional units that should be prioritised are eyes and eyelids, followed consecutively by the nose, lips, and ears. Twenty-nine respondents felt that the visor was more important than the mandibular guard if only one piece was to be worn. Essential cover of the brain and eyes is achieved from all directions using a combination of helmet and visor. Nasal cover currently requires the mandibular guard unless the visor can be modified to cover it as well. Any such prototype would need extensive ergonomic assessment and evaluation of its integration, as any changes would have to be acceptable to the people who wear the equipment in the long term. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.

  11. Neurons in the human amygdala encode face identity, but not gaze direction.

    PubMed

    Mormann, Florian; Niediek, Johannes; Tudusciuc, Oana; Quesada, Carlos M; Coenen, Volker A; Elger, Christian E; Adolphs, Ralph

    2015-11-01

    The amygdala is important for face processing, and direction of eye gaze is one of the most socially salient facial signals. Recording from over 200 neurons in the amygdala of neurosurgical patients, we found robust encoding of the identity of neutral-expression faces, but not of their direction of gaze. Processing of gaze direction may rely on a predominantly cortical network rather than the amygdala.

  12. Simultaneous Monitoring of Ballistocardiogram and Photoplethysmogram Using Camera

    PubMed Central

    Shao, Dangdang; Tsow, Francis; Liu, Chenbin; Yang, Yuting; Tao, Nongjian

    2017-01-01

    We present a noncontact method to measure Ballistocardiogram (BCG) and Photoplethysmogram (PPG) simultaneously using a single camera. The method tracks the motion of facial features to determine the displacement BCG, and extracts the corresponding velocity and acceleration BCGs by taking first and second temporal derivatives of the displacement BCG, respectively. The measured BCG waveforms are consistent with those reported in the literature and also with those recorded with an accelerometer-based reference method. The method also tracks PPG based on the reflected light from the same facial region, which makes it possible to track both BCG and PPG with the same optics. We verify the robustness and reproducibility of the noncontact method in a small pilot study with 23 subjects. The presented method is the first demonstration of simultaneous BCG and PPG monitoring that does not require the subject to wear any extra equipment or marker. PMID:27362754
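
    The derivative chain described above is straightforward to express in code. The sketch below, with an assumed displacement trace and camera frame rate, derives the velocity and acceleration BCGs from the displacement BCG by numerical differentiation.

    ```python
    import numpy as np

    def bcg_from_displacement(y_disp, fps):
        """y_disp: vertical displacement of tracked facial features per frame;
        fps: camera frame rate. Returns velocity and acceleration BCGs."""
        dt = 1.0 / fps
        velocity = np.gradient(y_disp, dt)        # first temporal derivative
        acceleration = np.gradient(velocity, dt)  # second temporal derivative
        return velocity, acceleration
    ```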

  13. The Emotional Modulation of Facial Mimicry: A Kinematic Study.

    PubMed

    Tramacere, Antonella; Ferrari, Pier F; Gentilucci, Maurizio; Giuffrida, Valeria; De Marco, Doriana

    2017-01-01

    It is well established that the observation of emotional facial expressions induces facial mimicry responses in observers. However, how the interaction between emotional and motor components of facial expressions modulates the motor behavior of the perceiver is still unknown. We developed a kinematic experiment to evaluate the effect of different oro-facial expressions on perceivers' face movements. Participants were asked to perform two movements, i.e., lip stretching and lip protrusion, in response to the observation of four meaningful (i.e., smile, angry-mouth, kiss, and spit) and two meaningless mouth gestures. All the stimuli were characterized by different motor patterns (mouth aperture or mouth closure). Response times and kinematic parameters of the movements (amplitude, duration, and mean velocity) were recorded and analyzed. Results revealed dissociated effects on reaction times and movement kinematics. We found shorter reaction times when a mouth movement was preceded by the observation of a meaningful and motorically congruent oro-facial gesture, in line with the facial mimicry effect. On the contrary, during execution, the perception of a smile was associated with facilitation, in terms of shorter duration and higher velocity, of the incongruent movement, i.e., lip protrusion. The same effect was found in response to kiss and spit, which significantly facilitated the execution of lip stretching. We call this phenomenon the facial mimicry reversal effect, the overturning of the effect normally observed during facial mimicry. In general, the findings show that both the motor features and the emotional type of oro-facial gestures (conveying positive or negative valence) affect the kinematics of subsequent mouth movements at different levels: while congruent motor features facilitate a general motor response, motor execution can be sped up by gestures that are motorically incongruent with the observed one. Moreover, the valence effect depends on the specific movement required. Results are discussed in relation to the Basic Emotion Theory and the embodied cognition framework.

  14. Recognizing Age-Separated Face Images: Humans and Machines

    PubMed Central

    Yadav, Daksha; Singh, Richa; Vatsa, Mayank; Noore, Afzel

    2014-01-01

    Humans utilize facial appearance, gender, expression, aging pattern, and other ancillary information to recognize individuals. It is interesting to observe how humans perceive facial age. Analyzing these properties can help in understanding the phenomenon of facial aging and incorporating the findings can help in designing effective algorithms. Such a study has two components - facial age estimation and age-separated face recognition. Age estimation involves predicting the age of an individual given his/her facial image. On the other hand, age-separated face recognition consists of recognizing an individual given his/her age-separated images. In this research, we investigate which facial cues are utilized by humans for estimating the age of people belonging to various age groups along with analyzing the effect of one's gender, age, and ethnicity on age estimation skills. We also analyze how various facial regions such as binocular and mouth regions influence age estimation and recognition capabilities. Finally, we propose an age-invariant face recognition algorithm that incorporates the knowledge learned from these observations. Key observations of our research are: (1) the age group of newborns and toddlers is easiest to estimate, (2) gender and ethnicity do not affect the judgment of age group estimation, (3) face as a global feature, is essential to achieve good performance in age-separated face recognition, and (4) the proposed algorithm yields improved recognition performance compared to existing algorithms and also outperforms a commercial system in the young-image-as-probe scenario. PMID:25474200

  15. Recognizing age-separated face images: humans and machines.

    PubMed

    Yadav, Daksha; Singh, Richa; Vatsa, Mayank; Noore, Afzel

    2014-01-01

    Humans utilize facial appearance, gender, expression, aging pattern, and other ancillary information to recognize individuals. It is interesting to observe how humans perceive facial age. Analyzing these properties can help in understanding the phenomenon of facial aging and incorporating the findings can help in designing effective algorithms. Such a study has two components--facial age estimation and age-separated face recognition. Age estimation involves predicting the age of an individual given his/her facial image. On the other hand, age-separated face recognition consists of recognizing an individual given his/her age-separated images. In this research, we investigate which facial cues are utilized by humans for estimating the age of people belonging to various age groups along with analyzing the effect of one's gender, age, and ethnicity on age estimation skills. We also analyze how various facial regions such as binocular and mouth regions influence age estimation and recognition capabilities. Finally, we propose an age-invariant face recognition algorithm that incorporates the knowledge learned from these observations. Key observations of our research are: (1) the age group of newborns and toddlers is easiest to estimate, (2) gender and ethnicity do not affect the judgment of age group estimation, (3) face as a global feature, is essential to achieve good performance in age-separated face recognition, and (4) the proposed algorithm yields improved recognition performance compared to existing algorithms and also outperforms a commercial system in the young-image-as-probe scenario.

  16. Self-adaptive signals separation for non-contact heart rate estimation from facial video in realistic environments.

    PubMed

    Liu, Xuenan; Yang, Xuezhi; Jin, Jing; Li, Jiangshan

    2018-06-05

    Recent research indicates that facial epidermis color varies with the rhythm of the heart beat. This variation can be captured by consumer-level cameras and, astonishingly, used to estimate heart rate (HR). Although numerous methods have been proposed in the last few years, the estimated HR is not yet as precise as required in practical environments where illumination interference, facial expressions, or motion artifacts are involved. A novel algorithm is proposed to make non-contact HR estimation more robust. First, the face of the subject is detected and tracked to follow head movement. The facial region is then divided into several blocks, and the chrominance feature of each block is extracted to establish a raw HR sub-signal. Self-adaptive signals separation (SASS) is performed to separate the noiseless HR sub-signals from the raw sub-signals. On that basis, the noiseless sub-signals rich in HR information are selected using a weight-based scheme to establish the holistic HR signal, from which the average HR is computed using the wavelet transform and a data filter. Forty subjects took part in our experiments; their facial videos were recorded by a normal webcam at a frame rate of 30 fps under ambient lighting conditions. The average HR estimated by our method correlates strongly with ground-truth measurements, as indicated by experimental results in the static scenario (Pearson's correlation r=0.980) and the dynamic scenario (Pearson's correlation r=0.897). Compared to the newest method, our method decreases the error rate by 38.63% and increases the Pearson's correlation by 15.59%, indicating that it clearly outperforms state-of-the-art non-contact HR estimation methods in realistic environments. © 2018 Institute of Physics and Engineering in Medicine.
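
    The block-wise extraction and fusion flow can be approximated as follows. This sketch substitutes a CHROM-style chrominance projection (de Haan and Jeanne, 2013) for the paper's chrominance feature and a plain average of band-passed block sub-signals for the SASS separation and weight-based selection, so it illustrates the overall pipeline rather than the proposed algorithm.

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt, welch

    def chrominance_signal(rgb):
        """rgb: (n_frames x 3) mean R, G, B of one facial block."""
        norm = rgb / rgb.mean(axis=0)                 # remove mean skin tone
        x = 3.0 * norm[:, 0] - 2.0 * norm[:, 1]
        y = 1.5 * norm[:, 0] + norm[:, 1] - 1.5 * norm[:, 2]
        return x - (np.std(x) / np.std(y)) * y        # CHROM-style projection

    def estimate_hr(block_rgbs, fps):
        """Band-pass each block's sub-signal to the 0.7-4 Hz cardiac band,
        average the blocks (a stand-in for SASS plus weighted selection), and
        read the average HR off the fused power spectrum."""
        b, a = butter(3, [0.7 / (fps / 2), 4.0 / (fps / 2)], btype="band")
        fused = sum(filtfilt(b, a, chrominance_signal(rgb)) for rgb in block_rgbs)
        f, pxx = welch(fused, fs=fps, nperseg=min(len(fused), 512))
        band = (f >= 0.7) & (f <= 4.0)
        return 60.0 * f[band][np.argmax(pxx[band])]   # beats per minute
    ```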

  17. Human homogamy in facial characteristics: does a sexual-imprinting-like mechanism play a role?

    PubMed

    Nojo, Saori; Tamura, Satoshi; Ihara, Yasuo

    2012-09-01

    Human homogamy may be caused in part by individuals' preference for phenotypic similarities. Two types of preference can result in homogamy: individuals may prefer someone who is similar to themselves (self-referent phenotype matching) or to their parents (a sexual-imprinting-like mechanism). In order to examine these possibilities, we compare faces of couples and their family members in two ways. First, "perceived" similarity between a pair of faces is quantified as similarity ratings given to the pair. Second, "physical" similarity between two groups of faces is evaluated on the basis of correlations in principal component scores generated from facial measurements. Our results demonstrate a tendency to homogamy in facial characteristics and suggest that the tendency is due primarily to self-referent phenotype matching. Nevertheless, the presence of a sexual-imprinting-like effect is also partially indicated: whether individuals are involved in facial homogamy may be affected by their relationship with their parents during childhood.

  18. The masculinity paradox: facial masculinity and beardedness interact to determine women's ratings of men's facial attractiveness.

    PubMed

    Dixson, B J W; Sulikowski, D; Gouda-Vossos, A; Rantala, M J; Brooks, R C

    2016-11-01

    In many species, male secondary sexual traits have evolved via female choice as they confer indirect (i.e. genetic) benefits or direct benefits such as enhanced fertility or survival. In humans, the role of men's characteristically masculine androgen-dependent facial traits in determining men's attractiveness has presented an enduring paradox in studies of human mate preferences. Male-typical facial features such as a pronounced brow ridge and a more robust jawline may signal underlying health, whereas beards may signal men's age and masculine social dominance. However, masculine faces are judged as more attractive for short-term relationships over less masculine faces, whereas beards are judged as more attractive than clean-shaven faces for long-term relationships. Why such divergent effects occur between preferences for two sexually dimorphic traits remains unresolved. In this study, we used computer graphic manipulation to morph male faces varying in facial hair from clean-shaven, light stubble, heavy stubble and full beards to appear more (+25% and +50%) or less (-25% and -50%) masculine. Women (N = 8520) were assigned to treatments wherein they rated these stimuli for physical attractiveness in general, for a short-term liaison or a long-term relationship. Results showed a significant interaction between beardedness and masculinity on attractiveness ratings. Masculinized and, to an even greater extent, feminized faces were less attractive than unmanipulated faces when all were clean-shaven, and stubble and beards dampened the polarizing effects of extreme masculinity and femininity. Relationship context also had effects on ratings, with facial hair enhancing long-term, and not short-term, attractiveness. Effects of facial masculinization appear to have been due to small differences in the relative attractiveness of each masculinity level under the three treatment conditions and not to any change in the order of their attractiveness. Our findings suggest that beardedness may be attractive when judging long-term relationships as a signal of intrasexual formidability and the potential to provide direct benefits to females. More generally, our results hint at a divergence of signalling function, which may result in a subtle trade-off in women's preferences, for two highly sexually dimorphic androgen-dependent facial traits. © 2016 European Society For Evolutionary Biology. Journal of Evolutionary Biology © 2016 European Society For Evolutionary Biology.

  19. Neural processing of fearful and happy facial expressions during emotion-relevant and emotion-irrelevant tasks: A fixation-to-feature approach.

    PubMed

    Neath-Tavares, Karly N; Itier, Roxane J

    2016-09-01

    Research suggests an important role of the eyes and mouth for discriminating facial expressions of emotion. A gaze-contingent procedure was used to test the impact of fixation to facial features on the neural response to fearful, happy and neutral facial expressions in an emotion discrimination (Exp.1) and an oddball detection (Exp.2) task. The N170 was the only eye-sensitive ERP component, and this sensitivity did not vary across facial expressions. In both tasks, compared to neutral faces, responses to happy expressions were seen as early as 100-120 ms occipitally, while responses to fearful expressions started around 150 ms, on or after the N170, at both occipital and lateral-posterior sites. Analyses of scalp topographies revealed different distributions of these two emotion effects across most of the epoch. Emotion processing interacted with fixation location at different times between tasks. Results suggest a role of both the eyes and mouth in the neural processing of fearful expressions and of the mouth in the processing of happy expressions, before 350 ms. Copyright © 2016 Elsevier B.V. All rights reserved.

  20. Constriction of the buccal branch of the facial nerve produces unilateral craniofacial allodynia.

    PubMed

    Lewis, Susannah S; Grace, Peter M; Hutchinson, Mark R; Maier, Steven F; Watkins, Linda R

    2017-08-01

    Despite pain being a sensory experience, studies of spinal cord ventral root damage have demonstrated that motor neuron injury can induce neuropathic pain. Whether injury of cranial motor nerves can also produce nociceptive hypersensitivity has not been addressed. Herein, we demonstrate that chronic constriction injury (CCI) of the buccal branch of the facial nerve results in long-lasting, unilateral allodynia in the rat. An anterograde and retrograde tracer (3000 MW tetramethylrhodamine-conjugated dextran) was not transported to the trigeminal ganglion when applied to the injury site, but was transported to the facial nucleus, indicating that this nerve branch is not composed of trigeminal sensory neurons. Finally, intracisterna magna injection of interleukin-1 (IL-1) receptor antagonist reversed allodynia, implicating the pro-inflammatory cytokine IL-1 in the maintenance of neuropathic pain induced by facial nerve CCI. These data extend the prior evidence that selective injury to motor axons can enhance pain signaling to supraspinal circuits by demonstrating that injury of a facial nerve with predominantly motor axons is sufficient for neuropathic pain, and that the resultant pain has a neuroimmune component. Copyright © 2016 Elsevier Inc. All rights reserved.

  1. [Hemodynamic activities in children with autism while imitating emotional facial expressions: a near-infrared spectroscopy study].

    PubMed

    Mori, Kenji; Mori, Tatsuo; Goji, Aya; Ito, Hiromichi; Toda, Yoshihiro; Fujii, Emiko; Miyazaki, Masahito; Harada, Masafumi; Kagami, Shoji

    2014-07-01

    To examine the hemodynamic activities in the frontal lobe, children with autistic disorder and matched controls underwent near-infrared spectroscopy (NIRS) while imitating emotional facial expressions. The subjects consisted of 10 boys with autistic disorder without mental retardation (9 - 14 years) and 10 normally developing boys (9 - 14 years). The concentrations of oxyhemoglobin (oxy-Hb) were measured with frontal probes using a 34-channel NIRS machine while the subjects imitated emotional facial expressions. The increments in the concentration of oxy-Hb in the pars opercularis of the inferior frontal gyrus in autistic subjects were significantly lower than those in the controls. However, the concentrations of oxy-Hb in this area were significantly elevated in autistic subjects after they were trained to imitate emotional facial expressions. The increments in the concentration of oxy-Hb in this area in autistic subjects were positively correlated with the scores on a test of labeling emotional facial expressions. The pars opercularis of the inferior frontal gyrus is an important component of the mirror neuron system. The present results suggest that mirror neurons could be activated by repeated imitation in children with autistic disorder.

  2. Gender differences in memory processing of female facial attractiveness: evidence from event-related potentials.

    PubMed

    Zhang, Yan; Wei, Bin; Zhao, Peiqiong; Zheng, Minxiao; Zhang, Lili

    2016-06-01

    High rates of agreement in the judgment of facial attractiveness suggest universal principles of beauty. This study investigated gender differences in recognition memory processing of female facial attractiveness. Thirty-four Chinese heterosexual participants (17 females, 17 males) aged 18-24 years (mean age 21.63 ± 1.51 years) took part in the experiment, which used event-related potentials (ERPs) based on a study-test paradigm. The behavioral results showed that both men and women had significantly higher accuracy rates for attractive faces than for unattractive faces, but men reacted faster to unattractive faces. Gender differences in ERPs showed that attractive faces elicited larger early components, such as P1, N170, and P2, in men than in women. The results indicated that the effects of recognition bias during memory processing modulated by female facial attractiveness are greater for men than for women. Behavioral and ERP evidence indicates that men and women differ in their attentional adhesion to attractive female faces; different mating-related motives may guide the selective processing of attractive faces by men and women. These findings document, from an evolutionary perspective, a contribution of gender differences to the memory processing of female facial attractiveness.

  3. Facial Emotion Recognition: A Survey and Real-World User Experiences in Mixed Reality

    PubMed Central

    Mehta, Dhwani; Siddiqui, Mohammad Faridul Haque

    2018-01-01

    The extensive range of possible applications has made emotion recognition an ineluctable and challenging topic in the field of computer science. Non-verbal cues such as gestures, body movement, and facial expressions convey feeling and feedback to the user. This discipline of Human–Computer Interaction relies on algorithmic robustness and on the sensitivity of the sensor to improve recognition. Sensors play a significant role in accurate detection by providing very high-quality input, hence increasing the efficiency and reliability of the system. Automatic recognition of human emotions would help in teaching social intelligence to machines. This paper presents a brief study of the various approaches and techniques of emotion recognition. The survey covers a succinct review of the databases that are used as data sets for algorithms detecting emotions from facial expressions. Later, the mixed reality device Microsoft HoloLens (MHL) is introduced for observing emotion recognition in Augmented Reality (AR). A brief introduction of its sensors, their application in emotion recognition, and some preliminary results of emotion recognition using MHL are presented. The paper then concludes by comparing results of emotion recognition by the MHL and a regular webcam. PMID:29389845

  4. Facial Emotion Recognition: A Survey and Real-World User Experiences in Mixed Reality.

    PubMed

    Mehta, Dhwani; Siddiqui, Mohammad Faridul Haque; Javaid, Ahmad Y

    2018-02-01

    The extensive range of possible applications has made emotion recognition an ineluctable and challenging topic in the field of computer science. Non-verbal cues such as gestures, body movement, and facial expressions convey feeling and feedback to the user. This discipline of Human-Computer Interaction relies on algorithmic robustness and on the sensitivity of the sensor to improve recognition. Sensors play a significant role in accurate detection by providing very high-quality input, hence increasing the efficiency and reliability of the system. Automatic recognition of human emotions would help in teaching social intelligence to machines. This paper presents a brief study of the various approaches and techniques of emotion recognition. The survey covers a succinct review of the databases that are used as data sets for algorithms detecting emotions from facial expressions. Later, the mixed reality device Microsoft HoloLens (MHL) is introduced for observing emotion recognition in Augmented Reality (AR). A brief introduction of its sensors, their application in emotion recognition, and some preliminary results of emotion recognition using MHL are presented. The paper then concludes by comparing results of emotion recognition by the MHL and a regular webcam.

  5. Affect of the unconscious: visually suppressed angry faces modulate our decisions.

    PubMed

    Almeida, Jorge; Pajtas, Petra E; Mahon, Bradford Z; Nakayama, Ken; Caramazza, Alfonso

    2013-03-01

    Emotional and affective processing imposes itself over cognitive processes and modulates our perception of the surrounding environment. In two experiments, we addressed the issue of whether nonconscious processing of affect can take place even under deep states of unawareness, such as those induced by interocular suppression techniques, and can elicit an affective response that can influence our understanding of the surrounding environment. In Experiment 1, participants judged the likeability of an unfamiliar item--a Chinese character--that was preceded by a face expressing a particular emotion (either happy or angry). The face was rendered invisible through an interocular suppression technique (continuous flash suppression; CFS). In Experiment 2, backward masking (BM), a less robust masking technique, was used to render the facial expressions invisible. We found that despite equivalent phenomenological suppression of the visual primes under CFS and BM, different patterns of affective processing were obtained with the two masking techniques. Under BM, nonconscious affective priming was obtained for both happy and angry invisible facial expressions. However, under CFS, nonconscious affective priming was obtained only for angry facial expressions. We discuss an interpretation of this dissociation between affective processing and visual masking techniques in terms of distinct routes from the retina to the amygdala.

  6. Uncovering gender discrimination cues in a realistic setting.

    PubMed

    Dupuis-Roy, Nicolas; Fortin, Isabelle; Fiset, Daniel; Gosselin, Frédéric

    2009-02-10

    Which face cues do we use for gender discrimination? Few studies have tried to answer this question, and the few that have typically used only a small set of grayscale stimuli, often distorted and presented a large number of times. Here, we reassessed the importance of facial cues for gender discrimination in a more realistic setting. We applied Bubbles, a technique that minimizes bias toward specific facial features and does not necessitate the distortion of stimuli, to a set of 300 color photographs of Caucasian faces, each presented only once to 30 participants. Results show that the region of the eyes and the eyebrows, probably in the light-dark channel, is the most important facial cue for accurate gender discrimination, and that the mouth region drives fast correct responses (but not fast incorrect responses); the gender discrimination information in the mouth region is concentrated in the red-green color channel. Together, these results suggest that, when color is informative in the mouth region, humans use it and respond rapidly; when it is not informative, they have to rely on the more robust but more sluggish luminance information in the eye-eyebrow region.
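
    At its core, the Bubbles analysis compares the random apertures shown on correct versus incorrect trials. The difference-of-means sketch below is a simplified stand-in for the full technique (which typically smooths the masks and tests the result against a noise distribution); the mask and accuracy arrays are assumed inputs.

    ```python
    import numpy as np

    def bubbles_classification_image(masks, correct):
        """masks: (n_trials x H x W) aperture masks shown on each trial;
        correct: boolean accuracy per trial. Regions with positive values
        were revealed more often on correct trials, i.e. diagnostic cues."""
        masks = np.asarray(masks, dtype=float)
        correct = np.asarray(correct, dtype=bool)
        ci = masks[correct].mean(axis=0) - masks[~correct].mean(axis=0)
        return (ci - ci.mean()) / ci.std()   # z-scored classification image
    ```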

  7. Sex differences in facial emotion perception ability across the lifespan.

    PubMed

    Olderbak, Sally; Wilhelm, Oliver; Hildebrandt, Andrea; Quoidbach, Jordi

    2018-03-22

    Perception of emotion in the face is a key component of human social cognition and is considered vital for many domains of life; however, little is known about how this ability differs across the lifespan for men and women. We addressed this question with a large community sample (N = 100,257) of persons ranging from younger than 15 to older than 60 years of age. Participants were viewers of the television show "Tout le Monde Joue", and the task was presented on television, with participants responding via their mobile devices. Applying latent variable modeling, and establishing measurement invariance between males and females and across age, we found that, for both males and females, emotion perception abilities peak between the ages of 15 and 30, with poorer performance by younger adults and declining performance after the age of 30. In addition, we show a consistent advantage by females across the lifespan, which decreases in magnitude with increasing age. This large-scale study, with a wide range of people and testing environments, suggests that these effects are largely robust. Implications are discussed.

  8. Subject-specific and pose-oriented facial features for face recognition across poses.

    PubMed

    Lee, Ping-Han; Hsu, Gee-Sern; Wang, Yun-Wen; Hung, Yi-Ping

    2012-10-01

    Most face recognition scenarios assume that frontal faces or mug shots are available for enrollment in the database, while faces of other poses are collected in the probe set. Given a face from the probe set, one needs to determine whether a match in the database exists. This reflects the assumption that, in forensic applications, most suspects have their mug shots available in the database, and face recognition aims at recognizing the suspects when their faces are captured in various poses by a surveillance camera. This paper considers a different scenario: given a face with multiple poses available, which may or may not include a mug shot, develop a method to recognize the face in poses different from those captured. That is, given two disjoint sets of poses of a face, one for enrollment and the other for recognition, this paper reports a method best suited for handling such cases. The proposed method includes feature extraction and classification. For feature extraction, we first cluster the poses of each subject's face in the enrollment set into a few pose classes and then decompose the appearance of the face in each pose class using an Embedded Hidden Markov Model, which allows us to define a set of subject-specific and pose-oriented (SSPO) facial components for each subject. For classification, an Adaboost weighting scheme is used to fuse the component classifiers with SSPO component features. The proposed method is shown to outperform other approaches, including a component-based classifier with manually cropped local facial features, in an extensive performance evaluation study.

  9. Facial attractiveness.

    PubMed

    Little, Anthony C

    2014-11-01

    Facial attractiveness has important social consequences. Despite a widespread belief that beauty cannot be defined, in fact, there is considerable agreement across individuals and cultures on what is found attractive. By considering that attraction and mate choice are critical components of evolutionary selection, we can better understand the importance of beauty. There are many traits that are linked to facial attractiveness in humans and each may in some way impart benefits to individuals who act on their preferences. If a trait is reliably associated with some benefit to the perceiver, then we would expect individuals in a population to find that trait attractive. Such an approach has highlighted face traits such as age, health, symmetry, and averageness, which are proposed to be associated with benefits and so associated with facial attractiveness. This view may postulate that some traits will be universally attractive; however, this does not preclude variation. Indeed, it would be surprising if there existed a template of a perfect face that was not affected by experience, environment, context, or the specific needs of an individual. Research on facial attractiveness has documented how various face traits are associated with attractiveness and various factors that impact on an individual's judgments of facial attractiveness. Overall, facial attractiveness is complex, both in the number of traits that determine attraction and in the large number of factors that can alter attraction to particular faces. A fuller understanding of facial beauty will come with an understanding of how these various factors interact with each other. WIREs Cogn Sci 2014, 5:621-634. doi: 10.1002/wcs.1316. © 2014 John Wiley & Sons, Ltd.

  10. Age Differences in the Complexity of Emotion Perception.

    PubMed

    Kim, Seungyoun; Geren, Jennifer L; Knight, Bob G

    2015-01-01

    The current study examined age differences in the number of emotion components used in the judgment of emotion from facial expressions. Fifty-eight younger and 58 older adults were compared on the complexity of perception of emotion from standardized facial expressions that were either clear or ambiguous exemplars of emotion. Using an intra-individual factor analytic approach, results showed that older adults used more emotion components in perceiving emotion in faces than younger adults. Both age groups reported greater emotional complexity for the clear and prototypical emotional stimuli. Age differences in emotional complexity were more pronounced for the ambiguous expressions compared with the clear expressions. These findings demonstrate that older adults showed increased elaboration of emotion, particularly when emotion cues were subtle and provide support for greater emotion differentiation in older adulthood.

  11. Facial Speech Gestures: The Relation between Visual Speech Processing, Phonological Awareness, and Developmental Dyslexia in 10-Year-Olds

    ERIC Educational Resources Information Center

    Schaadt, Gesa; Männel, Claudia; van der Meer, Elke; Pannekamp, Ann; Friederici, Angela D.

    2016-01-01

    Successful communication in everyday life crucially involves the processing of auditory and visual components of speech. Viewing our interlocutor and processing visual components of speech facilitates speech processing by triggering auditory processing. Auditory phoneme processing, analyzed by event-related brain potentials (ERP), has been shown…

  12. Space-by-time manifold representation of dynamic facial expressions for emotion categorization

    PubMed Central

    Delis, Ioannis; Chen, Chaona; Jack, Rachael E.; Garrod, Oliver G. B.; Panzeri, Stefano; Schyns, Philippe G.

    2016-01-01

    Visual categorization is the brain computation that reduces high-dimensional information in the visual environment into a smaller set of meaningful categories. An important problem in visual neuroscience is to identify the visual information that the brain must represent and then use to categorize visual inputs. Here we introduce a new mathematical formalism—termed space-by-time manifold decomposition—that describes this information as a low-dimensional manifold separable in space and time. We use this decomposition to characterize the representations used by observers to categorize the six classic facial expressions of emotion (happy, surprise, fear, disgust, anger, and sad). By means of a Generative Face Grammar, we presented random dynamic facial movements on each experimental trial and used subjective human perception to identify the facial movements that correlate with each emotion category. When the random movements projected onto the categorization manifold region corresponding to one of the emotion categories, observers categorized the stimulus accordingly; otherwise they selected “other.” Using this information, we determined both the Action Unit and temporal components whose linear combinations lead to reliable categorization of each emotion. In a validation experiment, we confirmed the psychological validity of the resulting space-by-time manifold representation. Finally, we demonstrated the importance of temporal sequencing for accurate emotion categorization and identified the temporal dynamics of Action Unit components that cause typical confusions between specific emotions (e.g., fear and surprise) as well as those resolving these confusions. PMID:27305521
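
    The paper's sample-based tri-factorization is involved, but its flavor can be conveyed with two successive non-negative matrix factorizations: one across concatenated trials to obtain spatial (Action Unit) modules, and one across their activation time courses to obtain temporal modules. The sketch below is a loose approximation under the assumption of non-negative input, not the authors' algorithm.

    ```python
    import numpy as np
    from sklearn.decomposition import NMF

    def space_by_time(trials, n_temporal=3, n_spatial=4):
        """trials: (n_trials x n_times x n_units) non-negative facial movement
        data, e.g. Action Unit activations over time. Returns temporal modules,
        spatial modules, and per-trial coefficients."""
        n_trials, n_times, n_units = trials.shape
        X = trials.reshape(-1, n_units)              # (trials*times) x units
        nmf_s = NMF(n_components=n_spatial, init="nndsvda", max_iter=500)
        act = nmf_s.fit_transform(X)                 # spatial-module activations
        spatial = nmf_s.components_                  # n_spatial x n_units
        A = act.reshape(n_trials, n_times, n_spatial)
        nmf_t = NMF(n_components=n_temporal, init="nndsvda", max_iter=500)
        coef = nmf_t.fit_transform(A.transpose(0, 2, 1).reshape(-1, n_times))
        temporal = nmf_t.components_                 # n_temporal x n_times
        return temporal, spatial, coef.reshape(n_trials, n_spatial, n_temporal)
    ```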

  13. Depth Structure from Asymmetric Shading Supports Face Discrimination

    PubMed Central

    Chen, Chien-Chung; Chen, Chin-Mei; Tyler, Christopher W.

    2013-01-01

    To examine the effect of illumination direction on the ability of observers to discriminate between faces, we manipulated the direction of illumination on scanned 3D face models. In order to dissociate the surface reflectance and illumination components of front-view face images, we introduce a symmetry algorithm that can separate the symmetric and asymmetric components of the face in both low and high spatial frequency bands. Based on this approach, hybrid face stimuli were constructed with different combinations of symmetric and asymmetric spatial content. Discrimination results with these images showed that asymmetric illumination information biased face perception toward the structure of the shading component, while the symmetric illumination information had little, if any, effect. Measures of perceived depth showed that perceived depth increased systematically with the asymmetric, but not the symmetric, low spatial frequency component. Together, these results suggest that (1) the asymmetric 3D shading information dramatically affects both the perceived facial information and the perceived depth of the facial structure; and (2) these effects both increase as the illumination direction is shifted to the side. Thus, our results support the hypothesis that face processing has a strong 3D component. PMID:23457484
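
    For midline-aligned frontal images, the symmetric/asymmetric separation has a particularly compact form: the even and odd parts of the image under left-right mirroring, each split into low and high spatial-frequency bands. The sketch below assumes such alignment and uses a Gaussian blur as the band splitter; the authors' exact filtering choices are not reproduced here.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def symmetry_decompose(face, sigma=4.0):
        """face: (H x W) midline-aligned frontal face image. Returns the
        symmetric and asymmetric components, each split into low and high
        spatial-frequency bands."""
        mirrored = face[:, ::-1]
        symmetric = 0.5 * (face + mirrored)    # left-right even part
        asymmetric = 0.5 * (face - mirrored)   # left-right odd part (shading)
        bands = {}
        for name, comp in (("sym", symmetric), ("asym", asymmetric)):
            low = gaussian_filter(comp, sigma) # low spatial frequencies
            bands[name] = {"low": low, "high": comp - low}
        return bands
    ```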

  14. Unconscious Processing of Facial Expressions in Individuals with Internet Gaming Disorder.

    PubMed

    Peng, Xiaozhe; Cui, Fang; Wang, Ting; Jiao, Can

    2017-01-01

    Internet Gaming Disorder (IGD) is characterized by impairments in social communication and the avoidance of social contact. Facial expression processing is the basis of social communication. However, few studies have investigated how individuals with IGD process facial expressions, and whether they have deficits in emotional facial processing remains unclear. The aim of the present study was to explore these two issues by investigating the time course of emotional facial processing in individuals with IGD. A backward masking task was used to investigate the differences between individuals with IGD and normal controls (NC) in the processing of subliminally presented facial expressions (sad, happy, and neutral) with event-related potentials (ERPs). The behavioral results showed that individuals with IGD are slower than NC in response to both sad and neutral expressions in the sad-neutral context. The ERP results showed that individuals with IGD exhibit decreased amplitudes in the ERP component N170 (an index of early face processing) in response to neutral expressions compared to happy expressions in the happy-neutral expressions context, which might be due to their expectancies for positive emotional content. The NC, on the other hand, exhibited comparable N170 amplitudes in response to both happy and neutral expressions in the happy-neutral expressions context, as well as to sad and neutral expressions in the sad-neutral expressions context. Both individuals with IGD and NC showed comparable ERP amplitudes during the processing of sad and neutral expressions. The present study revealed that individuals with IGD have different unconscious neutral facial processing patterns compared with normal individuals and suggested that individuals with IGD may expect more positive emotion in the happy-neutral expressions context.

    • The present study investigated whether the unconscious processing of facial expressions is influenced by excessive online gaming. A validated backward masking paradigm was used to investigate whether individuals with Internet Gaming Disorder (IGD) and normal controls (NC) exhibit different patterns in facial expression processing.

    • The results demonstrated that individuals with IGD respond differently to facial expressions compared with NC at a preattentive level. Behaviorally, individuals with IGD are slower than NC in response to both sad and neutral expressions in the sad-neutral context. The ERP results further showed (1) decreased amplitudes in the N170 component (an index of early face processing) in individuals with IGD when they process neutral expressions compared with happy expressions in the happy-neutral expressions context, whereas the NC exhibited comparable N170 amplitudes in response to these two expressions; and (2) both the IGD and NC groups demonstrated similar N170 amplitudes in response to sad and neutral faces in the sad-neutral expressions context.

    • The decreased N170 amplitudes for neutral faces relative to happy faces in individuals with IGD might be due to their lower expectancies for neutral content in the happy-neutral expressions context, whereas individuals with IGD may have similar expectancies for neutral and sad faces in the sad-neutral expressions context.

  15. Is ecstasy an "empathogen"? Effects of ±3,4-methylenedioxymethamphetamine on prosocial feelings and identification of emotional states in others.

    PubMed

    Bedi, Gillinder; Hyman, David; de Wit, Harriet

    2010-12-15

    Users of ±3,4-methylenedioxymethamphetamine (MDMA), "ecstasy," report that the drug produces unusual psychological effects, including increased empathy and prosocial feelings. These "empathogenic" effects are cited as reasons for recreational ecstasy use and also form the basis for the proposed use of MDMA in psychotherapy. However, they have yet to be characterized in controlled studies. Here, we investigate effects of MDMA on an important social cognitive capacity, the identification of emotional expression in others, and on socially relevant mood states. Over four sessions, healthy ecstasy-using volunteers (n = 21) received MDMA (.75, 1.5 mg/kg), methamphetamine (METH) (20 mg), and placebo under double-blind, randomized conditions. They completed self-report ratings of relevant affective states and undertook tasks in which they identified emotions from images of faces, pictures of eyes, and vocal cues. MDMA (1.5 mg/kg) significantly increased ratings of feeling "loving" and "friendly", and MDMA (.75 mg/kg) increased "loneliness". Both MDMA (1.5 mg/kg) and METH increased "playfulness"; only METH increased "sociability". MDMA (1.5 mg/kg) robustly decreased accuracy of facial fear recognition relative to placebo. The drug MDMA increased "empathogenic" feelings but reduced accurate identification of threat-related facial emotional signals in others, findings consistent with increased social approach behavior rather than empathy. This effect of MDMA on social cognition has implications for both recreational and therapeutic use. In recreational users, acute drug effects might alter social risk-taking while intoxicated. Socioemotional processing alterations such as those documented here might underlie possible psychotherapeutic benefits of this drug; further investigation of such mechanisms could inform treatment design to maximize active components of MDMA-assisted psychotherapy. Copyright © 2010 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.

  16. Assessment of Emotional Expressions after Full-Face Transplantation.

    PubMed

    Topçu, Çağdaş; Uysal, Hilmi; Özkan, Ömer; Özkan, Özlenen; Polat, Övünç; Bedeloğlu, Merve; Akgül, Arzu; Döğer, Ela Naz; Sever, Refik; Barçın, Nur Ebru; Tombak, Kadriye; Çolak, Ömer Halil

    2017-01-01

    We assessed clinical features as well as sensory and motor recoveries in 3 full-face transplantation patients. A frequency analysis was performed on facial surface electromyography data collected during 6 basic emotional expressions and 4 primary facial movements. Motor progress was assessed using the wavelet packet method by comparison against the mean results obtained from 10 healthy subjects. Analyses were conducted on 1 patient at approximately 1 year after face transplantation and at 2 years after transplantation in the remaining 2 patients. Motor recovery was observed following sensory recovery in all 3 patients; however, the 3 cases had different backgrounds and exhibited different degrees and rates of sensory and motor improvements after transplant. Wavelet packet energy was detected in all patients during emotional expressions and primary movements; however, there were fewer active channels during expressions in transplant patients compared to healthy individuals, and patterns of wavelet packet energy were different for each patient. Finally, high-frequency components were typically detected in patients during emotional expressions, but fewer channels demonstrated these high-frequency components in patients compared to healthy individuals. Our data suggest that the posttransplantation recovery of emotional facial expression requires neural plasticity.
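
    For readers unfamiliar with the method, a minimal sketch of per-band wavelet packet energy for a single surface-EMG channel is given below, using PyWavelets; the signal, wavelet ('db4') and decomposition level are placeholder assumptions, not the authors' settings.

      import numpy as np
      import pywt

      def wavelet_packet_energies(signal, wavelet="db4", level=4):
          """Energy of each terminal wavelet-packet node, ordered by frequency band."""
          wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
          nodes = wp.get_level(level, order="freq")
          return np.array([np.sum(node.data ** 2) for node in nodes])

      # Placeholder: 1 s of synthetic EMG-like noise at 1 kHz sampling
      rng = np.random.default_rng(0)
      emg = rng.standard_normal(1000)
      energies = wavelet_packet_energies(emg)
      print(energies / energies.sum())  # relative energy per frequency band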

  17. Early adverse experiences and the neurobiology of facial emotion processing.

    PubMed

    Moulson, Margaret C; Fox, Nathan A; Zeanah, Charles H; Nelson, Charles A

    2009-01-01

    To examine the neurobiological consequences of early institutionalization, the authors recorded event-related potentials (ERPs) from 3 groups of Romanian children--currently institutionalized, previously institutionalized but randomly assigned to foster care, and family-reared children--in response to pictures of happy, angry, fearful, and sad facial expressions of emotion. At 3 assessments (baseline, 30 months, and 42 months), institutionalized children showed markedly smaller amplitudes and longer latencies for the occipital components P1, N170, and P400 compared to family-reared children. By 42 months, ERP amplitudes and latencies of children placed in foster care were intermediate between the institutionalized and family-reared children, suggesting that foster care may be partially effective in ameliorating adverse neural changes caused by institutionalization. The age at which children were placed into foster care was unrelated to their ERP outcomes at 42 months. Facial emotion processing was similar in all 3 groups of children; specifically, fearful faces elicited larger amplitude and longer latency responses than happy faces for the frontocentral components P250 and Nc. These results have important implications for understanding the role that experience plays in shaping the developing brain.

  18. A 2D range Hausdorff approach to 3D facial recognition.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koch, Mark William; Russ, Trina Denise; Little, Charles Quentin

    2004-11-01

    This paper presents a 3D facial recognition algorithm based on the Hausdorff distance metric. The standard 3D formulation of the Hausdorff matching algorithm has been modified to operate on a 2D range image, enabling a reduction in computation from O(N²) to O(N) without large storage requirements. The Hausdorff distance is known for its robustness to data outliers and inconsistent data between two data sets, making it a suitable choice for dealing with the inherent problems in many 3D datasets due to sensor noise and object self-occlusion. For optimal performance, the algorithm assumes a good initial alignment between probe and template datasets. However, to minimize the error between two faces, the alignment can be iteratively refined. Results from the algorithm are presented using 3D face images from the Face Recognition Grand Challenge database version 1.0.
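
    To make the matching idea concrete, the sketch below computes a symmetric Hausdorff distance between two point sets with SciPy; this is the generic point-set formulation, not the paper's O(N) range-image variant, and the point clouds are placeholders.

      import numpy as np
      from scipy.spatial.distance import directed_hausdorff

      def hausdorff(a, b):
          """Symmetric Hausdorff distance between two (N x d) point sets."""
          return max(directed_hausdorff(a, b)[0], directed_hausdorff(b, a)[0])

      rng = np.random.default_rng(1)
      probe = rng.random((500, 3))     # placeholder probe face points
      template = rng.random((500, 3))  # placeholder template face points
      print(hausdorff(probe, template))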

  19. An Automatic Registration Algorithm for 3D Maxillofacial Model

    NASA Astrophysics Data System (ADS)

    Qiu, Luwen; Zhou, Zhongwei; Guo, Jixiang; Lv, Jiancheng

    2016-09-01

    3D image registration aims at aligning two 3D data sets in a common coordinate system, and has been widely used in computer vision, pattern recognition and computer assisted surgery. One challenging problem in 3D registration is that point-wise correspondences between two point sets are often unknown a priori. In this work, we develop an automatic algorithm for 3D maxillofacial model registration, including facial surface and skull models. Our proposed registration algorithm can achieve a good alignment between partial and whole maxillofacial models in spite of ambiguous matching, which has a potential application in oral and maxillofacial reparative and reconstructive surgery. The proposed algorithm includes three steps: (1) 3D-SIFT feature extraction and FPFH descriptor construction; (2) feature matching using SAC-IA; (3) coarse rigid alignment and refinement by ICP. Experiments on facial surfaces and mandible skull models demonstrate the efficiency and robustness of our algorithm.
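
    Step (3) of the pipeline can be illustrated with a bare-bones point-to-point ICP refinement in NumPy/SciPy; the feature-based coarse alignment (3D-SIFT/FPFH/SAC-IA) is assumed to have been done already, and the point clouds are synthetic placeholders.

      import numpy as np
      from scipy.spatial import cKDTree

      def icp(src, dst, iters=50):
          """Refine a rigid alignment of src onto dst by point-to-point ICP."""
          src = src.copy()
          tree = cKDTree(dst)
          for _ in range(iters):
              # 1. Nearest-neighbour correspondences
              _, idx = tree.query(src)
              matched = dst[idx]
              # 2. Best rigid transform via SVD (Kabsch)
              mu_s, mu_d = src.mean(axis=0), matched.mean(axis=0)
              U, _, Vt = np.linalg.svd((src - mu_s).T @ (matched - mu_d))
              R = (U @ Vt).T
              if np.linalg.det(R) < 0:      # avoid reflections
                  Vt[-1] *= -1
                  R = (U @ Vt).T
              t = mu_d - R @ mu_s
              src = src @ R.T + t
          return src

      # Placeholder clouds: dst is a slightly rotated/translated copy of src
      rng = np.random.default_rng(2)
      src = rng.random((300, 3))
      theta = 0.1
      Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
                     [np.sin(theta),  np.cos(theta), 0],
                     [0, 0, 1]])
      dst = src @ Rz.T + np.array([0.05, -0.02, 0.01])
      aligned = icp(src, dst)
      print(np.abs(aligned - dst).max())  # should shrink toward zero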

  20. The Vividness of Happiness in Dynamic Facial Displays of Emotion

    PubMed Central

    Becker, D. Vaughn; Neel, Rebecca; Srinivasan, Narayanan; Neufeld, Samantha; Kumar, Devpriya; Fouse, Shannon

    2012-01-01

    Rapid identification of facial expressions can profoundly affect social interactions, yet most research to date has focused on static rather than dynamic expressions. In four experiments, we show that when a non-expressive face becomes expressive, happiness is detected more rapidly than anger. When the change occurs peripheral to the focus of attention, however, dynamic anger is better detected when it appears in the left visual field (LVF), whereas dynamic happiness is better detected in the right visual field (RVF), consistent with hemispheric differences in the processing of approach- and avoidance-relevant stimuli. The central advantage for happiness is nevertheless the more robust effect, persisting even when information of either high or low spatial frequency is eliminated. Indeed, a survey of past research on the visual search for emotional expressions finds better support for a happiness detection advantage, and the explanation may lie in the coevolution of the signal and the receiver. PMID:22247755

  1. Behavioral and Neural Adaptation in Approach Behavior.

    PubMed

    Wang, Shuo; Falvello, Virginia; Porter, Jenny; Said, Christopher P; Todorov, Alexander

    2018-06-01

    People often make approachability decisions based on perceived facial trustworthiness. However, it remains unclear how people learn trustworthiness from a population of faces and whether this learning influences their approachability decisions. Here we investigated the neural underpinning of approach behavior and tested two important hypotheses: whether the amygdala adapts to different trustworthiness ranges and whether the amygdala is modulated by task instructions and evaluative goals. We showed that participants adapted to the stimulus range of perceived trustworthiness when making approach decisions and that these decisions were further modulated by the social context. The right amygdala showed both a linear response and a quadratic response to trustworthiness level, as observed in prior studies. Notably, the amygdala's response to trustworthiness was modulated by neither stimulus range nor social context, suggesting that the behavioral adaptation was not mirrored in amygdala activity. Together, our data reveal a robust behavioral adaptation to different trustworthiness ranges as well as a neural substrate underlying approach behavior based on perceived facial trustworthiness.

  2. Joint sparse learning for 3-D facial expression generation.

    PubMed

    Song, Mingli; Tao, Dacheng; Sun, Shengpeng; Chen, Chun; Bu, Jiajun

    2013-08-01

    3-D facial expression generation, including synthesis and retargeting, has received intensive attention in recent years, because it is important to produce realistic 3-D faces with specific expressions in modern film production and computer games. In this paper, we present joint sparse learning (JSL) to learn mapping functions and their respective inverses to model the relationship between the high-dimensional 3-D faces (of different expressions and identities) and their corresponding low-dimensional representations. Based on JSL, we can effectively and efficiently generate various expressions of a 3-D face by either synthesizing or retargeting. Furthermore, JSL is able to restore 3-D faces with holes by learning a mapping function between incomplete and intact data. Experimental results on a wide range of 3-D faces demonstrate the effectiveness of the proposed approach by comparison with representative methods in terms of quality, time cost, and robustness.
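
    The joint formulation itself is beyond a few lines, but its key ingredient, a sparse low-dimensional code for high-dimensional face data with an approximate inverse mapping, can be sketched generically with scikit-learn's dictionary learning on placeholder data; this stands in for, and is much simpler than, the paper's JSL.

      import numpy as np
      from sklearn.decomposition import DictionaryLearning

      rng = np.random.default_rng(3)
      faces = rng.standard_normal((200, 3000))  # placeholder: 200 flattened 3-D face vectors

      # Learn a dictionary so each face is approximated by a sparse combination of atoms
      dl = DictionaryLearning(n_components=30, transform_algorithm="lasso_lars",
                              transform_alpha=0.1, max_iter=20, random_state=0)
      codes = dl.fit_transform(faces)    # low-dimensional sparse codes
      recon = codes @ dl.components_     # approximate inverse mapping
      print(codes.shape, np.mean((faces - recon) ** 2))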

  3. Age-related differences in morphological characteristics of residual skin surface components collected from the surface of facial skin of healthy male volunteers.

    PubMed

    Chalyk, N E; Bandaletova, T Y; Kyle, N H; Petyaev, I M

    2017-05-01

    The global increase in human longevity results in the emergence of previously ignored ageing-related problems. Skin ageing is a well-known phenomenon, but the active search for scientific approaches to its prevention and even skin rejuvenation is a relatively new area. Although the structure and composition of the stratum corneum (SC), the superficial layer of epidermis, is well studied, relatively little is known about the residual skin surface components (RSSC) that overlay the surface of the SC. The aim of this study was to examine morphological features of RSSC samples non-invasively collected from the surface of human facial skin for the presence of age-related changes. Residual skin surface component samples were collected by swabbing from the surface of facial skin of 60 adult male volunteers allocated to two age groups: 34 subjects aged in the range 18-32 years and 26 subjects aged in the range 58-72 years. The collected samples were analysed microscopically: the size of the lipid droplets was measured; desquamated corneocytes and lipid crystals were counted; and microbial presence was assessed semi-quantitatively. Age-related changes were revealed for all studied components of the RSSC. There was a significant (P = 0.0126) decrease in the size of lipid droplets among older men. Likewise, significantly (P = 0.0252) lower numbers of lipid crystals were present in this group. In contrast, microbial presence in the RSSC was significantly (P = 0.0019) increased in the older group. There was also a trend towards more abundant corneocyte desquamation among older men, but the difference did not reach statistical significance (P = 0.0636). Non-invasively collected RSSC samples provide informative material for studying age-related changes on the surface of the SC of human facial skin. The results of this study confirm earlier observations regarding the age-associated decline of the efficiency of the epidermal barrier and can be used for testing new approaches to skin ageing prevention. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  4. Performance enhancement for audio-visual speaker identification using dynamic facial muscle model.

    PubMed

    Asadpour, Vahid; Towhidkhah, Farzad; Homayounpour, Mohammad Mehdi

    2006-10-01

    The science of human identification using physiological characteristics, or biometry, has been of great concern in security systems. However, robust multimodal identification systems based on audio-visual information have not been thoroughly investigated yet. Therefore, the aim of this work is to propose a model-based feature extraction method that employs physiological characteristics of the facial muscles producing lip movements. This approach adopts the intrinsic properties of muscles such as viscosity, elasticity, and mass, which are extracted from the dynamic lip model. These parameters are exclusively dependent on the neuro-muscular properties of the speaker; consequently, imitation of valid speakers could be reduced to a large extent. These parameters are applied to a hidden Markov model (HMM) audio-visual identification system. In this work, a combination of audio and video features has been employed by adopting a multistream pseudo-synchronized HMM training method. Noise-robust audio features such as Mel-frequency cepstral coefficients (MFCC), spectral subtraction (SS), and relative spectra perceptual linear prediction (J-RASTA-PLP) have been used to evaluate the performance of the multimodal system when efficient audio feature extraction methods are utilized. The superior performance of the proposed system is demonstrated on a large multispeaker database of continuously spoken digits, along with a sentence that is phonetically rich. To evaluate the robustness of the algorithms, some experiments were performed on genetically identical twins. Furthermore, changes in speaker voice were simulated with drug inhalation tests. At a 3 dB signal-to-noise ratio (SNR), the dynamic muscle model improved the identification rate of the audio-visual system from 91 to 98%. Results on identical twins revealed an apparent improvement in performance for the dynamic muscle model-based system, in which the identification rate of the audio-visual system was enhanced from 87 to 96%.
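
    As a hint of the audio front-end only, the sketch below extracts MFCC features with librosa on a synthetic tone standing in for an utterance; the MES features and the multistream HMM itself are not shown, and all parameters are illustrative assumptions.

      import librosa

      sr = 16000
      y = librosa.tone(440, sr=sr, duration=1.0)   # placeholder signal standing in for speech
      mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
      delta = librosa.feature.delta(mfcc)          # velocity features commonly appended for HMMs
      print(mfcc.shape, delta.shape)               # (13, n_frames) each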

  5. Unified framework for automated iris segmentation using distantly acquired face images.

    PubMed

    Tan, Chun-Wei; Kumar, Ajay

    2012-09-01

    Remote human identification using iris biometrics has broad civilian and surveillance applications, and its success requires the development of a robust segmentation algorithm to automatically extract the iris region. This paper presents a new iris segmentation framework which can robustly segment iris images acquired using near-infrared or visible illumination. The proposed approach exploits multiple higher-order local pixel dependencies to robustly classify the eye region pixels into iris or noniris regions. Face and eye detection modules have been incorporated in the unified framework to automatically provide the localized eye region from the facial image for iris segmentation. We develop a robust postprocessing algorithm to effectively mitigate the noisy pixels caused by misclassification. Experimental results presented in this paper suggest significant improvement in the average segmentation errors over the previously proposed approaches, i.e., improvements of 47.5%, 34.1%, and 32.6% on the UBIRIS.v2, FRGC, and CASIA.v4 at-a-distance databases, respectively. The usefulness of the proposed approach is also ascertained from recognition experiments on three different publicly available databases.

  6. Characterization of small-to-medium head-and-face dimensions for developing respirator fit test panels and evaluating fit of filtering facepiece respirators with different faceseal design

    PubMed Central

    Lin, Yi-Chun

    2017-01-01

    A respirator fit test panel (RFTP) with a facial size distribution representative of intended users is essential to the evaluation of respirator fit for new models of respirators. In this study an anthropometric survey was conducted among youths representing respirator users in mid-Taiwan to characterize the head-and-face dimensions key to RFTPs for application to small-to-medium facial features. The participants were fit-tested with three N95 masks of different facepiece designs, and the results were compared to the facial size distributions specified in the bivariate and principal component analysis (PCA) RFTP designs developed in this study, to determine the influence of facial characteristics on respirator fit in relation to facepiece design. Nineteen dimensions were measured for 206 participants. In fit testing, the qualitative fit test (QLFT) procedures prescribed by the U.S. Occupational Safety and Health Administration were adopted. As the results show, the bizygomatic breadths of the male and female participants were 90.1 and 90.8% of their counterparts reported for U.S. youths (P < 0.001), respectively. Compared to the bivariate distribution, the PCA design better accommodated variation in facial contours among different respirator user groups or populations, with the RFTPs reported in this study and from the literature consistently covering over 92% of the participants. Overall, the facial fit of filtering facepieces increased with increasing facial dimensions. The total percentages of tests in which the final maneuver completed was “Moving head up-and-down”, “Talking” or “Bending over” were 13.3–61.9% in the bivariate RFTP and 22.9–52.8% in the PCA RFTP. Respirators with a three-panel flat-fold facepiece structure provided greater fit, particularly when the users moved their heads. When the facial size distribution in a bivariate RFTP does not sufficiently represent petite facial sizes, fit testing is inclined to overestimate the general fit; thus, for small-to-medium facial dimensions a distinct RFTP should be considered. PMID:29176833

  7. Characterization of small-to-medium head-and-face dimensions for developing respirator fit test panels and evaluating fit of filtering facepiece respirators with different faceseal design.

    PubMed

    Lin, Yi-Chun; Chen, Chen-Peng

    2017-01-01

    A respirator fit test panel (RFTP) with a facial size distribution representative of intended users is essential to the evaluation of respirator fit for new models of respirators. In this study an anthropometric survey was conducted among youths representing respirator users in mid-Taiwan to characterize the head-and-face dimensions key to RFTPs for application to small-to-medium facial features. The participants were fit-tested with three N95 masks of different facepiece designs, and the results were compared to the facial size distributions specified in the bivariate and principal component analysis (PCA) RFTP designs developed in this study, to determine the influence of facial characteristics on respirator fit in relation to facepiece design. Nineteen dimensions were measured for 206 participants. In fit testing, the qualitative fit test (QLFT) procedures prescribed by the U.S. Occupational Safety and Health Administration were adopted. As the results show, the bizygomatic breadths of the male and female participants were 90.1 and 90.8% of their counterparts reported for U.S. youths (P < 0.001), respectively. Compared to the bivariate distribution, the PCA design better accommodated variation in facial contours among different respirator user groups or populations, with the RFTPs reported in this study and from the literature consistently covering over 92% of the participants. Overall, the facial fit of filtering facepieces increased with increasing facial dimensions. The total percentages of tests in which the final maneuver completed was "Moving head up-and-down", "Talking" or "Bending over" were 13.3-61.9% in the bivariate RFTP and 22.9-52.8% in the PCA RFTP. Respirators with a three-panel flat-fold facepiece structure provided greater fit, particularly when the users moved their heads. When the facial size distribution in a bivariate RFTP does not sufficiently represent petite facial sizes, fit testing is inclined to overestimate the general fit; thus, for small-to-medium facial dimensions a distinct RFTP should be considered.
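
    A PCA-based panel of this kind can be sketched generically: project the measured dimensions onto the first two principal components and grid the PC plane into panel cells. The synthetic measurements and the 3 x 3 quantile grid below are illustrative assumptions, not the study's actual design.

      import numpy as np
      from sklearn.decomposition import PCA

      rng = np.random.default_rng(4)
      dims = rng.normal(loc=100.0, scale=8.0, size=(206, 19))  # placeholder: 19 dimensions, 206 subjects

      pcs = PCA(n_components=2).fit_transform(dims)

      # Grid the PC1-PC2 plane into a 3 x 3 panel using inner quantiles
      edges = [np.quantile(pcs[:, k], [1 / 3, 2 / 3]) for k in range(2)]
      cells = (np.digitize(pcs[:, 0], edges[0]), np.digitize(pcs[:, 1], edges[1]))
      counts = np.zeros((3, 3), dtype=int)
      for i, j in zip(*cells):
          counts[i, j] += 1
      print(counts)  # subjects accommodated per panel cell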

  8. Photo-anthropometric study on face among Garo adult females of Bangladesh.

    PubMed

    Akhter, Z; Banu, M L A; Alam, M M; Hossain, S; Nazneen, M

    2013-08-01

    Facial anthropometry has well-known implications in health-related fields. Measurement of the human face is used for person identification in forensic medicine, plastic surgery, orthodontics, archeology, hair-style design and examination of the differences between races and ethnicities. Facial anthropometry provides an indication of the variations in facial shape in a specified population. Bangladesh harbours many cultures and people of different races because of the colonial rules of past regimes. Standards based on ethnic or racial data are desirable because these standards reflect the potentially different patterns of craniofacial growth resulting from racial, ethnic and sexual differences. In this context, the present study attempted to establish ethnicity-specific anthropometric data for Christian Garo adult females of Bangladesh. The study was observational, cross-sectional and primarily descriptive in nature, with some analytical components, and was carried out with a total of 100 Christian Garo adult females aged between 25 and 45 years. Three vertical facial dimensions, facial height from 'trichion' to 'gnathion', nasal length and total vermilion height, were measured by a photographic method. Although these measurements were taken photographically, they were converted into actual size using one of the physically measured variables, the distance between the two angles of the mouth (chilion to chilion). The data were then statistically analyzed to find their normative values. The study also examined the possible correlation of facial height from 'trichion' to 'gnathion' with nasal length and total vermilion height. Multiplication factors were estimated for estimating facial height from nasal length and total vermilion height. Comparisons were made between 'estimated' and 'measured' values by using the t-test. The mean (+/- SD) of nasal length and total vermilion height were 4.53 +/- 0.36 cm and 1.63 +/- 0.23 cm respectively, and the mean (+/- SD) of facial height from 'trichion' to 'gnathion' was 16.88 +/- 1.11 cm. Nasal length and total vermilion height also showed a significant positive correlation with facial height from 'trichion' to 'gnathion'. No significant difference was found between the 'measured' and 'estimated' facial height from 'trichion' to 'gnathion' for nasal length and total vermilion height.

  9. SNR-adaptive stream weighting for audio-MES ASR.

    PubMed

    Lee, Ki-Seung

    2008-08-01

    Myoelectric signals (MESs) from the speaker's mouth region have been shown to improve the noise robustness of automatic speech recognizers (ASRs), thus promising to extend their usability in implementing noise-robust ASR. In the recognition system presented herein, extracted audio and facial MES features were integrated by a decision fusion method, where the likelihood score of the audio-MES observation vector was given by a linear combination of the class-conditional observation log-likelihoods of two classifiers, using appropriate weights. We developed a weighting process adaptive to SNR. The main objective of the paper is to determine the optimal SNR classification boundaries and to construct a set of optimum stream weights for each SNR class. These two parameters were determined by a method based on a maximum mutual information criterion. Acoustic and facial MES data were collected from five subjects, using a 60-word vocabulary. Four types of acoustic noise, including babble, car, aircraft, and white noise, were acoustically added to clean speech signals at SNRs ranging from -14 to 31 dB. The classification accuracy of the audio-only ASR was as low as 25.5%, whereas the classification accuracy of the MES ASR was 85.2%. The classification accuracy could be further improved by employing the proposed audio-MES weighting method, reaching 89.4% in the case of babble noise. A similar result was also found for the other types of noise.
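
    The decision fusion itself reduces to a weighted sum of class-conditional log-likelihoods, with the weight selected by SNR class; a toy version with made-up boundaries and weights (the paper derives both from a maximum mutual information criterion) might look like this.

      import numpy as np

      # Hypothetical SNR class boundaries (dB) and per-class audio weights;
      # the paper derives these from a maximum mutual information criterion.
      SNR_BOUNDS = [-5.0, 10.0]          # three SNR classes
      AUDIO_WEIGHTS = [0.1, 0.5, 0.9]    # low SNR -> trust the MES stream more

      def fused_score(loglik_audio, loglik_mes, snr_db):
          w = AUDIO_WEIGHTS[np.searchsorted(SNR_BOUNDS, snr_db)]
          return w * loglik_audio + (1.0 - w) * loglik_mes

      # Example: per-word log-likelihoods from the two classifiers (placeholders)
      audio = np.array([-120.0, -95.0, -110.0])
      mes = np.array([-80.0, -90.0, -70.0])
      print(np.argmax(fused_score(audio, mes, snr_db=-10.0)))  # noisy: leans on MES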

  10. Toward DNA-based facial composites: preliminary results and validation.

    PubMed

    Claes, Peter; Hill, Harold; Shriver, Mark D

    2014-11-01

    The potential of constructing useful DNA-based facial composites is forensically of great interest. Given the significant identity information coded in the human face, these predictions could help investigations out of an impasse. Although there is substantial evidence that much of the total variation in facial features is genetically mediated, the discovery of which genes and gene variants underlie normal facial variation has been hampered primarily by the multipartite nature of facial variation. Traditionally, such physical complexity is simplified by simple scalar measurements defined a priori, such as nose or mouth width, or alternatively using dimensionality reduction techniques such as principal component analysis, where each principal coordinate is then treated as a scalar trait. However, as shown in previous and related work, a more impartial and systematic approach to modeling facial morphology is available and can facilitate both the gene discovery steps, as we recently showed, and DNA-based facial composite construction, as we show here. We first use genomic ancestry and sex to create a base-face, which is simply an average sex- and ancestry-matched face. Subsequently, the effects of 24 individual SNPs that have been shown to have significant effects on facial variation are overlaid on the base-face, forming the predicted-face in a process akin to a photomontage or image blending. We next evaluate the accuracy of predicted faces using cross-validation. Physical accuracy of the facial predictions, either locally in particular parts of the face or in terms of overall similarity, is mainly determined by sex and genomic ancestry. The SNP effects maintain the physical accuracy while significantly increasing the distinctiveness of the facial predictions, which would be expected to reduce false positives in perceptual identification tasks. To the best of our knowledge this is the first effort at generating facial composites from DNA, and the results are preliminary but certainly promising, especially considering the limited amount of genetic information about the face contained in these 24 SNPs. This approach can incorporate additional SNPs as these are discovered and their effects documented. In this context we discuss three main avenues of research: expanding our knowledge of the genetic architecture of facial morphology, improving the predictive modeling of facial morphology by exploring and incorporating alternative prediction models, and increasing the value of the results through the weighted encoding of physical measurements in terms of human perception of faces. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
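
    Conceptually, the composite is additive: a sex- and ancestry-matched base-face plus a genotype-weighted sum of per-SNP effect vectors on the facial mesh. A toy sketch with made-up shapes and genotype coding:

      import numpy as np

      rng = np.random.default_rng(5)
      n_vertices = 7000                                        # placeholder mesh size
      base_face = rng.standard_normal(3 * n_vertices)          # sex/ancestry-matched average
      snp_effects = rng.standard_normal((24, 3 * n_vertices)) * 0.01  # per-SNP shape effects
      genotypes = rng.integers(0, 3, size=24)                  # 0/1/2 copies of each effect allele

      # Overlay SNP effects on the base-face, additively in genotype dose
      predicted_face = base_face + genotypes @ snp_effects
      print(predicted_face.shape)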

  11. Hybrid Feature Extraction-based Approach for Facial Parts Representation and Recognition

    NASA Astrophysics Data System (ADS)

    Rouabhia, C.; Tebbikh, H.

    2008-06-01

    Face recognition is a specialized area of image processing that has attracted considerable attention in computer vision. In this article, we develop a new facial recognition system, working on images from video sequences, dedicated to identifying persons whose faces are partly occluded. This system is based on a hybrid image feature extraction technique called ACPDL2D (Rouabhia et al. 2007), which combines two-dimensional principal component analysis and two-dimensional linear discriminant analysis with a neural network. We performed the feature extraction task on the eye and nose images separately, and then a Multi-Layer Perceptron classifier is used. Compared to the whole face, the simulation results are in favor of the facial parts in terms of memory capacity and recognition (99.41% for the eyes, 98.16% for the nose and 97.25% for the whole face).

  12. Rapid processing of emotional expressions without conscious awareness.

    PubMed

    Smith, Marie L

    2012-08-01

    Rapid, accurate categorization of the emotional state of our peers is of critical importance, and as such many have proposed that facial expressions of emotion can be processed without conscious awareness. Typically, studies focus selectively on fearful expressions due to their evolutionary significance, leaving the subliminal processing of other facial expressions largely unexplored. Here, I investigated the time course of processing of 3 facial expressions (fearful, disgusted, and happy) plus an emotionally neutral face, during objectively unaware and aware perception. Participants completed the challenging "which expression?" task in response to briefly presented backward-masked expressive faces. Although participants' behavioral responses did not differentiate between the emotional content of the stimuli in the unaware condition, activity over frontal and occipitotemporal (OT) brain regions indicated an emotional modulation of the neuronal response. Over frontal regions this modulation was driven by negative facial expressions and was present on all emotional trials independent of later categorization, whereas the N170 component, recorded at lateral OT electrodes, was enhanced for all facial expressions but only on trials that would later be categorized as emotional. The results indicate that emotional faces, not only fearful ones, are processed without conscious awareness at an early stage, and highlight the critical importance of considering the categorization response when studying subliminal perception.

  13. Image Based Hair Segmentation Algorithm for the Application of Automatic Facial Caricature Synthesis

    PubMed Central

    Peng, Zhenyun; Zhang, Yaohui

    2014-01-01

    Hair is a salient feature of the human face region and one of the important cues for face analysis. Accurate detection and representation of the hair region is one of the key components for the automatic synthesis of human facial caricatures. In this paper, an automatic hair detection algorithm for the application of automatic facial caricature synthesis from a single image is proposed. Firstly, hair regions in training images are labeled manually, and the hair position prior distributions and hair color likelihood distribution function are estimated from these labels efficiently. Secondly, the energy function of the test image is constructed according to the estimated prior distributions of hair location and hair color likelihood. This energy function is then optimized using the graph cuts technique to obtain an initial hair region. Finally, the K-means algorithm and image postprocessing techniques are applied to the initial hair region so that the final hair region can be segmented precisely. Experimental results show that the average processing time for each image is about 280 ms and the average hair region detection accuracy is above 90%. The proposed algorithm was applied to a facial caricature synthesis system, and experiments showed that with the proposed hair segmentation algorithm the facial caricatures are vivid and satisfying. PMID:24592182
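
    The final refinement step can be illustrated with a pixel-level K-means in RGB space; the location priors, energy function and graph-cut stage are omitted, and the image, cluster count and "darkest cluster is hair" rule are placeholder assumptions.

      import numpy as np
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(6)
      image = rng.integers(0, 256, size=(120, 80, 3), dtype=np.uint8)  # placeholder image

      pixels = image.reshape(-1, 3).astype(float)
      km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(pixels)

      # Crude heuristic for illustration: take the darkest cluster as the hair candidate
      hair_cluster = km.cluster_centers_.sum(axis=1).argmin()
      hair_mask = (km.labels_ == hair_cluster).reshape(image.shape[:2])
      print(hair_mask.mean())  # fraction of pixels labelled as hair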

  14. Face-body integration of intense emotional expressions of victory and defeat.

    PubMed

    Wang, Lili; Xia, Lisheng; Zhang, Dandan

    2017-01-01

    Human facial expressions can be recognized rapidly and effortlessly. However, for intense emotions from real life, positive and negative facial expressions are difficult to discriminate, and the judgment of facial expressions is biased towards simultaneously perceived body expressions. This study employed event-related potentials (ERPs) to investigate the neural dynamics involved in the integration of emotional signals from facial and body expressions of victory and defeat. Emotional expressions of professional players were used to create pictures of face-body compounds, with either matched or mismatched emotional expressions in faces and bodies. Behavioral results showed that congruent emotional information of face and body facilitated the recognition of facial expressions. ERP data revealed larger P1 amplitudes for incongruent compared to congruent stimuli. Also, a main effect of body valence on the P1 was observed, with enhanced amplitudes for the stimuli with losing compared to winning bodies. The main effect of body expression was also observed in N170 and N2, with winning bodies producing larger N170/N2 amplitudes. In the later stage, a significant interaction of congruence by body valence was found on the P3 component. Winning bodies elicited larger P3 amplitudes than losing bodies did when face and body conveyed congruent emotional signals. Beyond the knowledge based on prototypical facial and body expressions, the results of this study help us understand the complexity of emotion evaluation and categorization outside the laboratory.

  15. Face-body integration of intense emotional expressions of victory and defeat

    PubMed Central

    Wang, Lili; Xia, Lisheng; Zhang, Dandan

    2017-01-01

    Human facial expressions can be recognized rapidly and effortlessly. However, for intense emotions from real life, positive and negative facial expressions are difficult to discriminate, and the judgment of facial expressions is biased towards simultaneously perceived body expressions. This study employed event-related potentials (ERPs) to investigate the neural dynamics involved in the integration of emotional signals from facial and body expressions of victory and defeat. Emotional expressions of professional players were used to create pictures of face-body compounds, with either matched or mismatched emotional expressions in faces and bodies. Behavioral results showed that congruent emotional information of face and body facilitated the recognition of facial expressions. ERP data revealed larger P1 amplitudes for incongruent compared to congruent stimuli. Also, a main effect of body valence on the P1 was observed, with enhanced amplitudes for the stimuli with losing compared to winning bodies. The main effect of body expression was also observed in N170 and N2, with winning bodies producing larger N170/N2 amplitudes. In the later stage, a significant interaction of congruence by body valence was found on the P3 component. Winning bodies elicited larger P3 amplitudes than losing bodies did when face and body conveyed congruent emotional signals. Beyond the knowledge based on prototypical facial and body expressions, the results of this study help us understand the complexity of emotion evaluation and categorization outside the laboratory. PMID:28245245

  16. Neural measures of the role of affective prosody in empathy for pain.

    PubMed

    Meconi, Federica; Doro, Mattia; Lomoriello, Arianna Schiano; Mastrella, Giulia; Sessa, Paola

    2018-01-10

    Emotional communication often requires the integration of affective prosodic and semantic components of speech with the speaker's facial expression. Affective prosody may have a special role by virtue of its dual nature: pre-verbal on one side and accompanying semantic content on the other. This consideration led us to hypothesize that it could act transversely, encompassing a wide temporal window involving the processing of facial expressions and of the semantic content expressed by the speaker. This would allow powerful communication in contexts of potential urgency, such as witnessing the speaker's physical pain. Seventeen participants were shown faces preceded by verbal reports of pain. Facial expressions, the intelligibility of the semantic content of the report (i.e., participants' mother tongue vs. a fictional language) and the affective prosody of the report (neutral vs. painful) were manipulated. We monitored event-related potentials (ERPs) time-locked to the onset of the faces as a function of the semantic content intelligibility and affective prosody of the verbal reports. We found that affective prosody may interact with facial expressions and semantic content in two successive temporal windows, supporting its role as a transverse communication cue.

  17. Three-dimensional face model reproduction method using multiview images

    NASA Astrophysics Data System (ADS)

    Nagashima, Yoshio; Agawa, Hiroshi; Kishino, Fumio

    1991-11-01

    This paper describes a method of reproducing three-dimensional face models from multi-view images for a virtual space teleconferencing system that achieves a realistic visual presence. The goal of this research, as an integral component of such a system, is to generate a three-dimensional face model from facial images and to synthesize images of the model virtually viewed from different angles, with natural shading to suit the lighting conditions of the virtual space. The proposed method is as follows: first, front and side view images of the human face are taken by TV cameras. The 3D data of facial feature points are obtained from the front and side views by an image processing technique based on the color, shape, and correlation of face components. Using these 3D data, prepared base face models, representing typical Japanese male and female faces, are modified to approximate the input facial image. The personal face model, representing the individual character, is then reproduced. Next, an oblique view image is taken by a TV camera. The feature points of the oblique view image are extracted using the same image processing technique. A more precise personal model is reproduced by fitting the boundary of the personal face model to the boundary of the oblique view image. The modified boundary of the personal face model is determined using the face direction, namely the rotation angle, which is detected based on the extracted feature points. After the 3D model is established, new images are synthesized by mapping facial texture onto the model.

  18. 2D DOST based local phase pattern for face recognition

    NASA Astrophysics Data System (ADS)

    Moniruzzaman, Md.; Alam, Mohammad S.

    2017-05-01

    A new two-dimensional (2-D) Discrete Orthogonal Stockwell Transform (DOST) based Local Phase Pattern (LPP) technique has been proposed for efficient face recognition. The proposed technique uses the 2-D DOST as a preliminary preprocessing step and the local phase pattern to form a robust feature signature that can effectively accommodate various 3D facial distortions and illumination variations. The S-transform, an extension of the ideas of the continuous wavelet transform (CWT), is known for its local spectral phase properties in time-frequency representation (TFR). It provides a frequency-dependent resolution of the time-frequency space and absolutely referenced local phase information while maintaining a direct relationship with the Fourier spectrum, which is unique in TFR. Utilizing the 2-D S-transform for preprocessing and building the local phase pattern from the extracted phase information yields a fast and efficient technique for face recognition. The proposed technique shows better correlation discrimination compared to alternative pattern recognition techniques such as wavelet- or Gabor-based face recognition. The performance of the proposed method has been tested using the Yale and extended Yale facial databases under different conditions such as illumination variation and 3D changes in facial expression. Test results show that the proposed technique yields better performance compared to alternative time-frequency representation (TFR) based face recognition techniques.

  19. Affect of the unconscious: Visually suppressed angry faces modulate our decisions

    PubMed Central

    Pajtas, Petra E.; Mahon, Bradford Z.; Nakayama, Ken; Caramazza, Alfonso

    2016-01-01

    Emotional and affective processing imposes itself over cognitive processes and modulates our perception of the surrounding environment. In two experiments, we addressed the issue of whether nonconscious processing of affect can take place even under deep states of unawareness, such as those induced by interocular suppression techniques, and can elicit an affective response that can influence our understanding of the surrounding environment. In Experiment 1, participants judged the likeability of an unfamiliar item—a Chinese character—that was preceded by a face expressing a particular emotion (either happy or angry). The face was rendered invisible through an interocular suppression technique (continuous flash suppression; CFS). In Experiment 2, backward masking (BM), a less robust masking technique, was used to render the facial expressions invisible. We found that despite equivalent phenomenological suppression of the visual primes under CFS and BM, different patterns of affective processing were obtained with the two masking techniques. Under BM, nonconscious affective priming was obtained for both happy and angry invisible facial expressions. However, under CFS, nonconscious affective priming was obtained only for angry facial expressions. We discuss an interpretation of this dissociation between affective processing and visual masking techniques in terms of distinct routes from the retina to the amygdala. PMID:23224765

  20. Are face representations depth cue invariant?

    PubMed

    Dehmoobadsharifabadi, Armita; Farivar, Reza

    2016-06-01

    The visual system can process three-dimensional depth cues defining surfaces of objects, but it is unclear whether such information contributes to complex object recognition, including face recognition. The processing of different depth cues involves both dorsal and ventral visual pathways. We investigated whether facial surfaces defined by individual depth cues resulted in meaningful face representations-representations that maintain the relationship between the population of faces as defined in a multidimensional face space. We measured face identity aftereffects for facial surfaces defined by individual depth cues (Experiments 1 and 2) and tested whether the aftereffect transfers across depth cues (Experiments 3 and 4). Facial surfaces and their morphs to the average face were defined purely by one of shading, texture, motion, or binocular disparity. We obtained identification thresholds for matched (matched identity between adapting and test stimuli), non-matched (non-matched identity between adapting and test stimuli), and no-adaptation (showing only the test stimuli) conditions for each cue and across different depth cues. We found robust face identity aftereffects in both sets of experiments. Our results suggest that depth cues do contribute to forming meaningful face representations that are depth cue invariant. Depth cue invariance would require integration of information across different areas and different pathways for object recognition, and this in turn has important implications for cortical models of visual object recognition.

  1. Quantifying facial paralysis using the Kinect v2.

    PubMed

    Gaber, Amira; Taher, Mona F; Wahed, Manal Abdel

    2015-01-01

    Assessment of facial paralysis (FP) and quantitative grading of facial asymmetry are essential in order to quantify the extent of the condition as well as to follow its improvement or progression. As such, there is a need for an accurate quantitative grading system that is easy to use, inexpensive and has minimal inter-observer variability. A comprehensive automated system to quantify and grade FP is the main objective of this work. An initial prototype has been presented by the authors. The present research aims to enhance the accuracy and robustness of one of this system's modules: the resting symmetry module. This is achieved by including several modifications to the computation method of the symmetry index (SI) for the eyebrows, eyes and mouth. These modifications are the gamma correction technique, the area of the eyes, and the slope of the mouth. The system was tested on normal subjects and showed promising results. With the modified method, the mean SI of the eyebrows decreased slightly from 98.42% to 98.04%, while the mean SI for the eyes and mouth increased from 96.93% to 99.63% and from 95.6% to 98.11%, respectively. The system is easy to use, inexpensive, automated and fast, has no inter-observer variability and is thus well suited for clinical use.
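
    The abstract does not give the SI formula; one common way to score the left-right symmetry of a paired facial feature is the ratio form sketched below, together with the standard gamma-correction transform the authors mention. Both are illustrative assumptions, not the authors' exact computation.

      import numpy as np

      def symmetry_index(left, right):
          """Percent symmetry of a paired feature (e.g., eye area, brow height)."""
          return 100.0 * (1.0 - abs(left - right) / max(left, right))

      def gamma_correct(image, gamma=0.8):
          """Standard gamma correction on a normalised [0, 1] image."""
          return np.clip(image, 0.0, 1.0) ** gamma

      print(symmetry_index(420.0, 410.0))  # placeholder eye areas in pixels^2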

  2. The impact of facial abnormalities and their spatial position on perception of cuteness and attractiveness of infant faces

    PubMed Central

    Lewis, Jennifer; Roberson, Debi

    2017-01-01

    Research has demonstrated that how “cute” an infant is perceived to be has consequences for caregiving. Infants with facial abnormalities receive lower ratings of cuteness, but relatively little is known about how different abnormalities and their location affect these aesthetic judgements. The objective of the current study was to compare the impact of different abnormalities on the perception of infant faces, while controlling for infant identity. In two experiments, adult participants gave ratings of cuteness and attractiveness in response to face images that had been edited to introduce common facial abnormalities. Stimulus faces displayed either a haemangioma (a small, benign birth mark), strabismus (an abnormal alignment of the eyes) or a cleft lip (an abnormal opening in the upper lip). In Experiment 1, haemangioma had less of a detrimental effect on ratings than the more severe abnormalities. In Experiment 2, we manipulated the position of a haemangioma on the face. We found small but robust effects of this position, with abnormalities in the top and on the left of the face receiving lower cuteness ratings. This is consistent with previous research showing that people attend more to the top of the face (particularly the eyes) and to the left hemifield. PMID:28749958

  3. The comparison of robust partial least squares regression with robust principal component regression on a real

    NASA Astrophysics Data System (ADS)

    Polat, Esra; Gunay, Suleyman

    2013-10-01

    One of the problems encountered in Multiple Linear Regression (MLR) is multicollinearity, which causes the overestimation of the regression parameters and increases their variance. Hence, when multicollinearity is present, biased estimation procedures such as classical Principal Component Regression (CPCR) and Partial Least Squares Regression (PLSR) are performed. The SIMPLS algorithm is the leading PLSR algorithm because of its speed and efficiency, and because its results are easier to interpret. However, both CPCR and SIMPLS yield very unreliable results when the data set contains outlying observations. Therefore, Hubert and Vanden Branden (2003) presented a robust PCR (RPCR) method and a robust PLSR (RPLSR) method called RSIMPLS. In RPCR, a robust Principal Component Analysis (PCA) method for high-dimensional data is first applied to the independent variables; the dependent variables are then regressed on the scores using a robust regression method. RSIMPLS is constructed from a robust covariance matrix for high-dimensional data and robust linear regression. The purpose of this study is to show the usage of the RPCR and RSIMPLS methods on an econometric data set by comparing the two methods on an inflation model of Turkey. The considered methods are compared in terms of predictive ability and goodness of fit by using a robust Root Mean Squared Error of Cross-validation (R-RMSECV), a robust R2 value and the Robust Component Selection (RCS) statistic.
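
    For orientation, a classical (non-robust) PLSR fit with cross-validated RMSE is sketched below with scikit-learn; the robust RPCR/RSIMPLS estimators of Hubert and Vanden Branden are not available there, so this is only the baseline against which they are compared, on placeholder data.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(7)
      X = rng.standard_normal((60, 10))                          # placeholder predictors
      y = X[:, :3].sum(axis=1) + 0.1 * rng.standard_normal(60)   # placeholder response

      pls = PLSRegression(n_components=3)
      mse = -cross_val_score(pls, X, y, cv=5, scoring="neg_mean_squared_error")
      print(np.sqrt(mse.mean()))  # classical RMSECV, not the robust R-RMSECV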

  4. Emotion identification and aging: Behavioral and neural age-related changes.

    PubMed

    Gonçalves, Ana R; Fernandes, Carina; Pasion, Rita; Ferreira-Santos, Fernando; Barbosa, Fernando; Marques-Teixeira, João

    2018-05-01

    Aging is known to alter the processing of facial expressions of emotion (FEE); however, the impact of this alteration is less clear. Additionally, there is little information about the temporal dynamics of the neural processing of facial affect. We examined behavioral and neural age-related changes in the identification of FEE using event-related potentials. Furthermore, we analyzed the relationship between behavioral/neural responses and neuropsychological functioning. To this end, 30 younger adults, 29 middle-aged adults and 26 older adults identified FEE. The behavioral results showed similar performance between groups. The neural results showed no significant differences between groups for the P100 component and an increased N170 amplitude in the older group. Furthermore, a pattern of asymmetric activation was evident in the N170 component. Results also suggest deficits in facial feature decoding abilities, reflected in a reduced N250 amplitude in older adults. Neuropsychological functioning predicted P100 modulation but did not seem to influence emotion identification ability. The findings suggest the existence of a compensatory function that would explain the age-equivalent performance in emotion identification. The study may help future research addressing the behavioral and neural processes involved in the processing of FEE in neurodegenerative conditions. Copyright © 2018 International Federation of Clinical Neurophysiology. Published by Elsevier B.V. All rights reserved.

  5. Restoration of Trigeminal Cutaneous Sensation with Cross-Face Sural Nerve Grafts: A Novel Approach to Facial Sensory Rehabilitation.

    PubMed

    Catapano, Joseph; Scholl, David; Ho, Emily; Zuker, Ronald M; Borschel, Gregory H

    2015-09-01

    Although facial palsy is considered debilitating for patients, trigeminal nerve palsy and sensory deficits of the face are overlooked components of disability. Complete anesthesia leaves patients susceptible to occult injury, and facial sensation is an important component of interaction and activities of daily living. Sensory reconstruction is well established in the restoration of hand sensation; however, only one previous report has proposed a surgical strategy for sensory nerve reconstruction of the face using nerve transfers. Nerve transfers, when used alone, have limited application because of their restricted arc of rotation in the face; extending their arc by adding nerve grafts greatly expands their utility. The following cases demonstrate the early results after V2 and V3 reconstruction with cross-face nerve grafts in three patients with acquired trigeminal nerve palsy. Cross-face nerve grafts using the sural nerve permit more proximal reconstruction of the infraorbital and mental nerves, which allows reinnervation of their entire cutaneous distribution. All patients demonstrated improved sensation in the reconstructed dermatomes, and no patients reported donor-site abnormalities. Cross-face nerve grafts result in minimal donor-site morbidity and are promising as a surgical strategy to address sensory deficits of the face. Therapeutic, V.

  6. Tuning the developing brain to social signals of emotions

    PubMed Central

    Leppänen, Jukka M.; Nelson, Charles A.

    2010-01-01

    Humans in diverse cultures develop a similar capacity to recognize the emotional signals of different facial expressions. This capacity is mediated by a brain network that involves emotion-related brain circuits and higher-level visual representation areas. Recent studies suggest that the key components of this network begin to emerge early in life. The studies also suggest that initial biases in emotion-related brain circuits and the early coupling of these circuits with cortical perceptual areas provide a foundation for the rapid acquisition of representations of those facial features that denote specific emotions. PMID:19050711

  7. Neural Processing of Facial Identity and Emotion in Infants at High-Risk for Autism Spectrum Disorders

    PubMed Central

    Fox, Sharon E.; Wagner, Jennifer B.; Shrock, Christine L.; Tager-Flusberg, Helen; Nelson, Charles A.

    2013-01-01

    Deficits in face processing and social impairment are core characteristics of autism spectrum disorder. The present work examined 7-month-old infants at high risk for developing autism and typically developing, low-risk controls, using a face perception task designed to differentiate between the effects of face identity and facial emotion on the neural response measured with functional near-infrared spectroscopy. In addition, we employed independent component analysis, as well as a novel method of condition-related component selection and classification, to identify group differences in hemodynamic waveforms and response distributions associated with face and emotion processing. The results indicate similarities in waveforms but differences in the magnitude, spatial distribution, and timing of responses between groups. These early differences in local cortical regions and the hemodynamic response may, in turn, contribute to differences in patterns of functional connectivity. PMID:23576966

  8. Face recognition using an enhanced independent component analysis approach.

    PubMed

    Kwak, Keun-Chang; Pedrycz, Witold

    2007-03-01

    This paper is concerned with an enhanced independent component analysis (ICA) and its application to face recognition. Typically, face representations obtained by ICA involve unsupervised learning and high-order statistics. In this paper, we develop an enhancement of generic ICA by augmenting the method with Fisher linear discriminant analysis (LDA); hence its abbreviation, FICA. FICA is systematically developed and presented along with its underlying architecture. A comparative analysis explores four distance metrics, as well as classification with support vector machines (SVMs). We demonstrate that the FICA approach leads to the formation of well-separated classes in a low-dimensional subspace and is endowed with a great deal of insensitivity to large variations in illumination and facial expression. Comprehensive experiments are completed on the Face Recognition Technology (FERET) face database; a comparative analysis demonstrates that FICA comes with improved classification rates when compared with some other conventional approaches such as eigenface, fisherface, and ICA itself.
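
    In spirit, FICA chains an ICA representation with Fisher LDA; a generic scikit-learn approximation (a pipeline, not the authors' exact formulation) on placeholder data:

      import numpy as np
      from sklearn.decomposition import FastICA
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.pipeline import make_pipeline

      rng = np.random.default_rng(8)
      X = rng.standard_normal((100, 50))   # placeholder: 100 face feature vectors
      y = rng.integers(0, 5, size=100)     # placeholder: 5 identities

      clf = make_pipeline(FastICA(n_components=20, random_state=0, max_iter=500),
                          LinearDiscriminantAnalysis())
      clf.fit(X, y)
      print(clf.score(X, y))  # training accuracy on the toy data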

  9. Predictive codes of familiarity and context during the perceptual learning of facial identities

    NASA Astrophysics Data System (ADS)

    Apps, Matthew A. J.; Tsakiris, Manos

    2013-11-01

    Face recognition is a key component of successful social behaviour. However, the computational processes that underpin perceptual learning and recognition as faces transition from unfamiliar to familiar are poorly understood. In predictive coding, learning occurs through prediction errors that update stimulus familiarity, but recognition is a function of both stimulus and contextual familiarity. Here we show that behavioural responses on a two-option face recognition task can be predicted by the level of contextual and facial familiarity in a computational model derived from predictive-coding principles. Using fMRI, we show that activity in the superior temporal sulcus varies with the contextual familiarity in the model, whereas activity in the fusiform face area covaries with the prediction error parameter that updated facial familiarity. Our results characterize the key computations underpinning the perceptual learning of faces, highlighting that the functional properties of face-processing areas conform to the principles of predictive coding.
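
    The core learning rule of such a model can be caricatured as a delta-rule update of stimulus familiarity by prediction error, with recognition read out from facial plus contextual familiarity; the learning rate, weighting and context value below are illustrative assumptions, not the authors' fitted model.

      # Toy delta-rule sketch: familiarity is updated by a prediction error on
      # each exposure; recognition combines facial and contextual familiarity.
      alpha = 0.3          # learning rate (assumption)
      familiarity = 0.0    # facial familiarity, starts unfamiliar
      context = 0.5        # contextual familiarity (assumption)

      for exposure in range(10):
          prediction_error = 1.0 - familiarity   # face present vs. expected
          familiarity += alpha * prediction_error
          recognition = 0.5 * familiarity + 0.5 * context
          print(f"exposure {exposure}: PE={prediction_error:.3f} recog={recognition:.3f}")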

  10. The facial massage reduced anxiety and negative mood status, and increased sympathetic nervous activity.

    PubMed

    Hatayama, Tomoko; Kitamura, Shingo; Tamura, Chihiro; Nagano, Mayumi; Ohnuki, Koichiro

    2008-12-01

    The aim of this study was to clarify the effects of 45 min of facial massage on autonomic nervous system activity, anxiety, and mood in 32 healthy women. Autonomic nervous activity was assessed by heart rate variability (HRV) with spectral analysis. In the spectral analysis of HRV, we evaluated the high-frequency component (HF) and the low- to high-frequency ratio (LF/HF ratio), reflecting parasympathetic and sympathetic nervous activity, respectively. The State-Trait Anxiety Inventory (STAI) and the Profile of Mood States (POMS) were administered to evaluate psychological status. The STAI score and the negative scales of the POMS were significantly reduced following the massage, and only the LF/HF ratio was significantly enhanced after the massage. It was concluded that facial massage might refresh the subjects by reducing their psychological distress and activating the sympathetic nervous system.
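
    For readers who want to reproduce the spectral measures described above, here is a minimal sketch, assuming scipy and a synthetic RR-interval series as a stand-in for real recordings; the 0.04-0.15 Hz (LF) and 0.15-0.40 Hz (HF) band limits follow common HRV conventions.

        import numpy as np
        from scipy.interpolate import interp1d
        from scipy.signal import welch

        rr = 0.8 + 0.05 * np.random.default_rng(0).standard_normal(300)  # RR intervals (s)
        t = np.cumsum(rr)                         # beat times
        fs = 4.0                                  # resampling rate (Hz)
        t_uniform = np.arange(t[0], t[-1], 1 / fs)
        rr_uniform = interp1d(t, rr)(t_uniform)   # evenly sampled tachogram

        f, pxx = welch(rr_uniform - rr_uniform.mean(), fs=fs, nperseg=256)
        lf_band = (f >= 0.04) & (f < 0.15)
        hf_band = (f >= 0.15) & (f < 0.40)
        lf = np.trapz(pxx[lf_band], f[lf_band])   # low-frequency band power
        hf = np.trapz(pxx[hf_band], f[hf_band])   # high-frequency (parasympathetic) power
        print(f"HF power = {hf:.2e}, LF/HF = {lf / hf:.2f}")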

  11. Template protection and its implementation in 3D face recognition systems

    NASA Astrophysics Data System (ADS)

    Zhou, Xuebing

    2007-04-01

    As biometric recognition systems are widely applied in various application areas, security and privacy risks have recently attracted the attention of the biometric community. Template protection techniques prevent stored reference data from revealing private biometric information and enhance the security of biometric systems against attacks such as identity theft and cross matching. This paper concentrates on a template protection algorithm that merges methods from cryptography, error correction coding, and biometrics. The key component of the algorithm is the conversion of biometric templates into binary vectors. It is shown that the binary vectors should be robust, uniformly distributed, statistically independent, and collision-free so that authentication performance can be optimized and information leakage can be avoided. Depending on the statistical character of the biometric template, different approaches for transforming biometric templates into compact binary vectors are presented. The proposed methods are integrated into a 3D face recognition system and tested on the 3D facial images of the FRGC database. It is shown that the resulting binary vectors provide an authentication performance similar to that of the original 3D face templates. A high security level is achieved with reasonable false acceptance and false rejection rates, based on an efficient statistical analysis. The algorithm estimates the statistical character of biometric templates from a number of biometric samples in the enrollment database. For the FRGC 3D face database, our tests show only a small difference in robustness and discriminative power between the classification results obtained under the assumption of uniformly distributed templates and those obtained under the assumption of Gaussian distributed templates.
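
    The binarization step is the part most easily illustrated in code. Below is a minimal sketch, assuming numpy, of thresholding each template component at its population median so that each output bit is approximately uniformly distributed; real systems add error-correction coding and cryptographic hashing on top, which are omitted here.

        import numpy as np

        rng = np.random.default_rng(1)
        enrollment = rng.normal(size=(500, 64))     # stand-in population of templates
        thresholds = np.median(enrollment, axis=0)  # per-component medians

        def binarize(template, thresholds):
            """Map a real-valued template to a binary vector."""
            return (template > thresholds).astype(np.uint8)

        probe = rng.normal(size=64)
        bits = binarize(probe, thresholds)
        print(bits[:16], "mean bit =", bits.mean())  # mean near 0.5 -> near-uniform bits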

  12. [Negative symptoms, emotion and cognition in schizophrenia].

    PubMed

    Fakra, E; Belzeaux, R; Azorin, J-M; Adida, M

    2015-12-01

    For a long time, the treatment of schizophrenia focused essentially on managing positive symptoms. Yet even though these symptoms are the most conspicuous, negative symptoms are more enduring, more resistant to pharmacological treatment, and associated with a worse prognosis. In the last two decades, attention has shifted toward cognitive deficits, as these are most robustly associated with functional outcome. But it appears that the modest improvement in cognition obtained in schizophrenia through pharmacological treatment or, more purposely, by cognitive enhancement therapy has led to only limited improvement in functional outcome. Authors have argued that pure cognitive processes, such as those evaluated and trained in many of these programs, may be too distant from real-life conditions, as the latter are largely based on social interactions. Consequently, the field of social cognition, at the interface of cognition and emotion, has emerged. In the first part of this article, we examine the links, in schizophrenia, between negative symptoms, cognition, and emotions from a therapeutic standpoint. Nonetheless, the investigation of emotion in schizophrenia may also hold promise for understanding the pathophysiology of this disorder. In the second part, we illustrate this research by relying on the heuristic value of an elementary marker of social cognition, facial affect recognition. Facial affect recognition has been repeatedly reported to be impaired in schizophrenia, and some authors have argued that this deficit could constitute an endophenotype of the illness. We examine how facial affect processing has been used to explore broader emotion dysfunction in schizophrenia through behavioural and imaging studies. In particular, fMRI paradigms using facial affect have shown distinctive patterns of amygdala engagement in schizophrenia, suggesting an intact capacity to engage the limbic system that may, however, not be advantageous. Finally, we analyse facial affect processing at the cognitive-perceptual level, and the ability of patients with schizophrenia to manipulate featural and configural information in faces. Copyright © 2015 Elsevier Masson SAS. All rights reserved.

  13. Influence of spatial frequency and emotion expression on face processing in patients with panic disorder.

    PubMed

    Shim, Miseon; Kim, Do-Won; Yoon, Sunkyung; Park, Gewnhi; Im, Chang-Hwan; Lee, Seung-Hwan

    2016-06-01

    Deficits in facial emotion processing are a major characteristic of patients with panic disorder. It is known that visual stimuli with different spatial frequencies take distinct neural pathways. This study investigated facial emotion processing involving stimuli presented at broad, high, and low spatial frequencies in patients with panic disorder. Eighteen patients with panic disorder and 19 healthy controls were recruited. Seven event-related potential (ERP) components (P100, N170, early posterior negativity (EPN), vertex positive potential (VPP), N250, P300, and late positive potential (LPP)) were evaluated while the participants looked at fearful and neutral facial stimuli presented at three spatial frequencies. When a fearful face was presented, patients with panic disorder showed a significantly increased P100 amplitude in response to low spatial frequency compared with high spatial frequency, whereas healthy controls demonstrated significant broad-spatial-frequency-dependent processing in P100 amplitude. VPP amplitude was significantly increased in response to high and broad spatial frequencies, compared with low spatial frequency, in panic disorder. EPN amplitude differed significantly between high and broad spatial frequency processing, and between low and broad spatial frequency processing, in both groups, regardless of facial expression. The possibly confounding effects of medication could not be controlled. During early visual processing, patients with panic disorder prefer global to detailed information. However, in later processing, they overuse detailed information for the perception of facial expressions. These findings suggest that this distinctive spatial-frequency-dependent facial processing could shed light on the neural pathology associated with panic disorder. Copyright © 2016 Elsevier B.V. All rights reserved.

  14. Wait, are you sad or angry? Large exposure time differences required for the categorization of facial expressions of emotion

    PubMed Central

    Du, Shichuan; Martinez, Aleix M.

    2013-01-01

    Facial expressions of emotion are essential components of human behavior, yet little is known about the hierarchical organization of their cognitive analysis. We study the minimum exposure time needed to successfully classify the six classical facial expressions of emotion (joy, surprise, sadness, anger, disgust, fear) plus neutral as seen at different image resolutions (240 × 160 to 15 × 10 pixels). Our results suggest a consistent hierarchical analysis of these facial expressions regardless of the resolution of the stimuli. Happiness and surprise can be recognized after very short exposure times (10–20 ms), even at low resolutions. Fear and anger are recognized the slowest (100–250 ms), even in high-resolution images, suggesting a later computation. Sadness and disgust are recognized in between (70–200 ms). The minimum exposure time required for successful classification of each facial expression correlates with the ability of a human subject to identify it correctly at low resolutions. These results suggest a fast, early computation of expressions represented mostly by low spatial frequencies or global configural cues and a later, slower process for those categories requiring a more fine-grained analysis of the image. We also demonstrate that those expressions that are mostly visible in higher-resolution images are not recognized as accurately. We summarize implications for current computational models. PMID:23509409

  15. Dissociation of Neural Substrates of Response Inhibition to Negative Information between Implicit and Explicit Facial Go/Nogo Tasks: Evidence from an Electrophysiological Study

    PubMed Central

    Sun, Shiyue; Carretié, Luis; Zhang, Lei; Dong, Yi; Zhu, Chunyan; Luo, Yuejia; Wang, Kai

    2014-01-01

    Background: Although ample evidence suggests that emotion and response inhibition are interrelated at the behavioral and neural levels, the neural substrates of response inhibition to negative facial information remain unclear. We therefore used event-related potential (ERP) methods to explore the effects of explicit and implicit facial expression processing on response inhibition. Methods: We used implicit (gender categorization) and explicit (emotion categorization) emotional Go/Nogo tasks in which neutral and sad faces were presented. Electrophysiological markers at the scalp and the voxel level were analyzed during the two tasks. Results: We detected a task, emotion, and trial type interaction effect in the Nogo-P3 stage. Larger Nogo-P3 amplitudes during sad conditions versus neutral conditions were detected with explicit tasks. However, the amplitude differences between the two conditions were not significant for implicit tasks. Source analyses of the P3 component revealed that the right inferior frontal junction (rIFJ) was involved at this stage. The current source density (CSD) of the rIFJ was higher in sad conditions than in neutral conditions for explicit tasks, but not for implicit tasks. Conclusions: The findings indicated that response inhibition was modulated by sad facial information at the action inhibition stage when facial expressions were processed explicitly rather than implicitly. The rIFJ may be a key brain region in emotion regulation. PMID:25330212

  16. Identification and intensity of disgust: Distinguishing visual, linguistic and facial expressions processing in Parkinson disease.

    PubMed

    Sedda, Anna; Petito, Sara; Guarino, Maria; Stracciari, Andrea

    2017-07-14

    Most studies to date show an impairment in recognizing facial displays of disgust in Parkinson disease. A general impairment in disgust processing in patients with Parkinson disease might adversely affect their social interactions, given the relevance of this emotion to human relations. However, despite the importance of faces, disgust is also expressed through other formats of visual stimuli, such as sentences and visual images. The aim of our study was to explore disgust processing in a sample of patients affected by Parkinson disease by means of various tests tackling not only facial recognition but also the other formats of visual stimuli through which disgust can be recognized. Our results confirm that patients are impaired in recognizing facial displays of disgust. Further analyses show that patients are also impaired and slower for other facial expressions, with the only exception of happiness. Notably, however, patients with Parkinson disease processed visual images and sentences as controls did. Our findings show a dissociation among different formats of visual stimuli of disgust, suggesting that Parkinson disease is not characterized by a general compromise of disgust processing, as often suggested. The involvement of the basal ganglia-frontal cortex system might spare some cognitive components of emotional processing, related to memory and culture, at least for disgust. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. Brain potentials indicate the effect of other observers' emotions on perceptions of facial attractiveness.

    PubMed

    Huang, Yujing; Pan, Xuwei; Mo, Yan; Ma, Qingguo

    2016-03-23

    Perceptions of facial attractiveness are sensitive to the emotional expression of the perceived face. However, little is known about whether the emotional expression on the face of another observer of the perceived face affects perceptions of facial attractiveness. The present study used the event-related potential technique to examine this social influence. The experiment consisted of two phases. In the first phase, a neutral target face was paired with two images of individuals gazing at the target face with smiling, fearful, or neutral expressions. In the second phase, participants were asked to judge the attractiveness of the target face. We found that a target face was judged more attractive when the other observers gazed at it with positive expressions than when they gazed at it with negative expressions. Additionally, the brain potential results showed that the visual positive component P3, with peak latency from 270 to 330 ms, was larger after participants observed the target face paired with smiling individuals than after the target face paired with neutral individuals. These findings suggest that the facial attractiveness of an individual may be influenced by the emotional expression on the face of another observer of the perceived face. Copyright © 2016. Published by Elsevier Ireland Ltd.

  18. Robustness of remote stress detection from visible spectrum recordings

    NASA Astrophysics Data System (ADS)

    Kaur, Balvinder; Moses, Sophia; Luthra, Megha; Ikonomidou, Vasiliki N.

    2016-05-01

    In our recent work, we have shown that it is possible to extract high-fidelity timing information of the cardiac pulse wave from visible spectrum videos, which can then be used as a basis for stress detection. In that approach, we used both heart rate variability (HRV) metrics and the differential pulse transit time (dPTT) as indicators of the presence of stress. One of the main concerns in this analysis is its robustness in the presence of noise, as the remotely acquired signal, which we call the blood wave (BW) signal, is degraded with respect to the signal acquired using contact sensors. In this work, we discuss the robustness of our metrics in the presence of multiplicative noise. Specifically, we study the effects of subtle motion due to respiration and of changes in illumination levels due to light flickering on the BW signal, the HRV-driven features, and the dPTT. Our sensitivity study involved both Monte Carlo simulations and experimental data from human facial videos, and indicates that our metrics are robust even under moderate amounts of noise. The generated results will help the remote stress detection community develop requirements for visible-spectrum-based stress detection systems.
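
    The flavor of such a sensitivity study is easy to reproduce. Here is a minimal Monte Carlo sketch, assuming scipy, that corrupts a synthetic pulse signal with multiplicative noise and measures how the estimated beat-to-beat period degrades; the signal shape, noise levels, and peak-based estimator are illustrative assumptions, not the authors' BW pipeline.

        import numpy as np
        from scipy.signal import find_peaks

        fs = 30.0                                  # video frame rate (Hz)
        t = np.arange(0, 30, 1 / fs)
        clean = 1 + 0.05 * np.sin(2 * np.pi * 1.2 * t)  # ~72 bpm pulse proxy

        rng = np.random.default_rng(2)
        for sigma in (0.0, 0.01, 0.05):            # multiplicative noise levels
            errs = []
            for _ in range(200):                   # Monte Carlo trials
                noisy = clean * (1 + sigma * rng.standard_normal(t.size))
                peaks, _ = find_peaks(noisy, distance=int(0.5 * fs))
                period = np.diff(peaks).mean() / fs
                errs.append(abs(period - 1 / 1.2))
            print(f"sigma={sigma:.2f}  mean period error = {np.mean(errs) * 1000:.1f} ms")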

  19. Dramatic Enhancement of Graphene Oxide/Silk Nanocomposite Membranes: Increasing Toughness, Strength, and Young's modulus via Annealing of Interfacial Structures.

    PubMed

    Wang, Yaxian; Ma, Ruilong; Hu, Kesong; Kim, Sunghan; Fang, Guangqiang; Shao, Zhengzhong; Tsukruk, Vladimir V

    2016-09-21

    We demonstrate that stronger and more robust nacre-like laminated GO (graphene oxide)/SF (silk fibroin) nanocomposite membranes can be obtained by selectively tailoring the interfacial interactions between the "bricks" (GO sheets) and the "mortar" (silk interlayers) via controlled water vapor annealing. This facile annealing process relaxes the secondary structure of silk backbones confined between flexible GO sheets. The increased mobility leads to a significant increase in ultimate strength (by up to 41%), Young's modulus (up to 75%), and toughness (up to 45%). We suggest that local silk recrystallization is initiated in proximity to the GO surface by hydrophobic surface regions serving as nucleation sites for the formation of β-sheet domains, followed by SF assembly into nanofibrils. Strong hydrophobic-hydrophobic interactions between GO layers and SF nanofibrils result in enhanced shear strength of the layered packing. The work presented here not only gives a better understanding of SF and GO interfacial interactions, but also provides insight into how to enhance the mechanical properties of nacre-mimicking nanocomposites by adjusting the delicate interactions between heterogeneous "brick" and adaptive "mortar" components with water/temperature annealing routines.

  20. Quantitative analysis of fetal facial morphology using 3D ultrasound and statistical shape modeling: a feasibility study.

    PubMed

    Dall'Asta, Andrea; Schievano, Silvia; Bruse, Jan L; Paramasivam, Gowrishankar; Kaihura, Christine Tita; Dunaway, David; Lees, Christoph C

    2017-07-01

    The antenatal detection of facial dysmorphism using 3-dimensional ultrasound may raise the suspicion of an underlying genetic condition but infrequently leads to a definitive antenatal diagnosis. Despite advances in array and noninvasive prenatal testing, not all genetic conditions can be ascertained from such testing. The aim of this study was to investigate the feasibility of quantitative assessment of fetal face features using prenatal 3-dimensional ultrasound volumes and statistical shape modeling. STUDY DESIGN: Thirteen normal and 7 abnormal stored 3-dimensional ultrasound fetal face volumes were analyzed, at a median gestation of 29+4 weeks (range, 25+0 to 36+1). The 20 3-dimensional surface meshes generated were aligned and served as input for a statistical shape model, which computed the mean 3-dimensional face shape and 3-dimensional shape variations using principal component analysis. Ten shape modes explained more than 90% of the total shape variability in the population. While the first mode accounted for overall size differences, the second highlighted shape feature changes from an overall proportionate toward a more asymmetric face shape with a wide prominent forehead and an undersized, posteriorly positioned chin. Analysis of the Mahalanobis distance in principal component analysis shape space suggested differences between normal and abnormal fetuses (median and interquartile range distance values, 7.31 ± 5.54 for the normal group vs 13.27 ± 9.82 for the abnormal group) (P = .056). This feasibility study demonstrates that objective characterization and quantification of fetal facial morphology is possible from 3-dimensional ultrasound. This technique has the potential to assist in utero diagnosis, particularly of rare conditions in which facial dysmorphology is a feature. Copyright © 2017 Elsevier Inc. All rights reserved.
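
    The core of the statistical shape model is straightforward to sketch. The following minimal example, assuming scikit-learn and random stand-in "meshes" in place of the 20 aligned fetal-face meshes, computes the PCA modes and scores each case by its Mahalanobis distance in the retained principal-component space (where scores are decorrelated, the distance reduces to a variance-normalized Euclidean norm).

        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(3)
        meshes = rng.normal(size=(20, 3000))   # 20 cases x (1000 vertices * 3 coords)

        pca = PCA(n_components=10)             # 10 modes, mirroring the ~90% reported
        scores = pca.fit_transform(meshes)     # per-case PC scores
        print("variance explained:", pca.explained_variance_ratio_.sum())

        # Mahalanobis distance in PC space: normalize each score by its mode variance.
        maha = np.sqrt(((scores / np.sqrt(pca.explained_variance_)) ** 2).sum(axis=1))
        print("per-case Mahalanobis distances:", np.round(maha, 2))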

  1. A Patient-Assessed Morbidity to Evaluate Outcome in Surgically Treated Vestibular Schwannomas.

    PubMed

    Al-Shudifat, Abdul Rahman; Kahlon, Babar; Höglund, Peter; Lindberg, Sven; Magnusson, Måns; Siesjo, Peter

    2016-10-01

    Outcome after treatment of vestibular schwannomas can be evaluated by health providers as mortality, recurrence, performance, and morbidity. Because mortality and recurrence are rare events, evaluation has to focus on performance and morbidity. The latter has mostly been reported by health providers. In the present study, we validate 2 new scales for patient-assessed performance and morbidity in comparison with different outcome tools, such as quality of life (QOL) (European Quality of Life-5 dimensions [EQ-5D]), facial nerve score, and work capacity. There were 167 total patients in a retrospective (n = 90) and prospective (n = 50) cohort of surgically treated vestibular schwannomas. A new patient-assessed morbidity score (paMS), a patient-assessed Karnofsky score (paKPS), the patient-assessed QOL (EQ-5D) score, work capacity, and the House-Brackmann facial nerve score were used as outcome measures. Analysis of paMS components and their relation to other outcomes was done as uni- and multivariate analysis. All outcome instruments, except EQ-5D and paKPS, showed a significant decrease postoperatively. Only the facial nerve score (House-Brackmann facial nerve score) differed significantly between the retrospective and prospective cohorts. Out of the 16 components of the paMS, hearing dysfunction, tear dysfunction, balance dysfunction, and eye irritation were most often reported. Both paMS and EQ-5D correlated significantly with work capacity. Standard QOL and performance instruments may not be sufficiently sensitive or specific to measure outcome at the cohort level after surgical treatment of vestibular schwannomas. A morbidity score may yield more detailed information on symptoms that can be relevant for rehabilitation and occupational training after surgery. Copyright © 2016 Elsevier Inc. All rights reserved.

  2. Coding and quantification of a facial expression for pain in lambs.

    PubMed

    Guesgen, M J; Beausoleil, N J; Leach, M; Minot, E O; Stewart, M; Stafford, K J

    2016-11-01

    Facial expressions are routinely used to assess pain in humans, particularly those who are non-verbal. Recently, there has been an interest in developing coding systems for facial grimacing in non-human animals, such as rodents, rabbits, horses and sheep. The aims of this preliminary study were to: 1. Qualitatively identify facial feature changes in lambs experiencing pain as a result of tail-docking and compile these changes to create a Lamb Grimace Scale (LGS); 2. Determine whether human observers can use the LGS to differentiate tail-docked lambs from control lambs and differentiate lambs before and after docking; 3. Determine whether changes in facial action units of the LGS can be objectively quantified in lambs before and after docking; 4. Evaluate effects of restraint of lambs on observers' perceptions of pain using the LGS and on quantitative measures of facial action units. The LGS was devised by comparing images of lambs before (no pain) and after (pain) tail-docking, in consultation with scientists experienced in assessing facial expression in other species. The LGS consists of five facial action units: Orbital Tightening, Mouth Features, Nose Features, Cheek Flattening and Ear Posture. The aims of the study were addressed in two experiments. In Experiment I, still images of the faces of restrained lambs were taken from video footage before and after tail-docking (n=4) or sham tail-docking (n=3). These images were scored by a group of five naïve human observers using the LGS. Because lambs were restrained for the duration of the experiment, Ear Posture was not scored. The scores for the images were averaged to provide one value per feature per period, and scores for the four LGS action units were then averaged to give one LGS score per lamb per period. In Experiment II, still images of the faces of nine lambs were taken before and after tail-docking. Stills were taken when lambs were restrained and unrestrained in each period. A different group of five human observers scored the images from Experiment II. Changes in facial action units were also quantified objectively by a researcher using image measurement software. In both experiments LGS scores were analyzed using a linear mixed model to evaluate the effects of tail docking on observers' perception of facial expression changes. Kendall's Index of Concordance was used to measure reliability among observers. In Experiment I, human observers were able to use the LGS to differentiate docked lambs from control lambs. LGS scores significantly increased from before to after treatment in docked lambs but not control lambs. In Experiment II there was a significant increase in LGS scores after docking. This was coupled with changes in other validated indicators of pain after docking in the form of pain-related behaviour. Only two components, Mouth Features and Orbital Tightening, showed significant quantitative changes after docking. The direction of these changes agrees with the description of these facial action units in the LGS. Restraint affected people's perceptions of pain as well as quantitative measures of LGS components. Freely moving lambs were scored lower using the LGS over both periods and had a significantly smaller eye aperture and smaller nose and ear angles than when they were held. Agreement among observers on LGS scores was fair overall (Experiment I: W=0.60; Experiment II: W=0.66). This preliminary study demonstrates changes in lamb facial expression associated with pain.
The results of these experiments should be interpreted with caution due to low lamb numbers. Copyright © 2016 Elsevier B.V. All rights reserved.

  3. Bidirectional communication between amygdala and fusiform gyrus during facial recognition.

    PubMed

    Herrington, John D; Taylor, James M; Grupe, Daniel W; Curby, Kim M; Schultz, Robert T

    2011-06-15

    Decades of research have documented the specialization of the fusiform gyrus (FG) for facial information processing. Recent theories indicate that FG activity is shaped by input from the amygdala, but effective connectivity from the amygdala to the FG remains undocumented. In this fMRI study, 39 participants completed a face recognition task. Eleven participants underwent the same experiment approximately four months later. Robust face-selective activation of the FG, amygdala, and lateral occipital cortex was observed. Dynamic causal modeling and Bayesian model selection (BMS) were used to test the intrinsic connections between these structures and their modulation by face perception. BMS results strongly favored a dynamic causal model with bidirectional, face-modulated amygdala-FG connections. However, the right-hemisphere connections diminished at time 2, with the face modulation parameter no longer surviving Bonferroni correction. These findings suggest that the amygdala strongly influences FG function during face perception, and that this influence is shaped by experience and stimulus salience. Copyright © 2011 Elsevier Inc. All rights reserved.

  4. Human fatigue expression recognition through image-based dynamic multi-information and bimodal deep learning

    NASA Astrophysics Data System (ADS)

    Zhao, Lei; Wang, Zengcai; Wang, Xiaojin; Qi, Yazhou; Liu, Qing; Zhang, Guoxin

    2016-09-01

    Human fatigue is an important cause of traffic accidents. To improve the safety of transportation, we propose, in this paper, a framework for fatigue expression recognition using image-based facial dynamic multi-information and a bimodal deep neural network. First, the landmarks of the face region and the texture of the eye region, which complement each other in fatigue expression recognition, are extracted from facial image sequences captured by a single camera. Then, two stacked autoencoder neural networks are trained, one for landmarks and one for texture. Finally, the two trained networks are combined by learning a joint layer on top of them to construct a bimodal deep neural network. The model can be used to extract a unified representation that fuses the landmark and texture modalities and to classify fatigue expressions accurately. The proposed system is tested on a human fatigue dataset obtained from an actual driving environment. The experimental results demonstrate that the proposed method performs stably and robustly, with an average accuracy of 96.2%.
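
    A minimal PyTorch sketch of the bimodal idea is given below: one encoder per modality (facial landmarks and eye-region texture), a shared joint layer, and a fatigue/alert classifier. The layer sizes and input dimensions are illustrative assumptions; the original system also pretrains each branch as a stacked autoencoder, which is omitted here.

        import torch
        import torch.nn as nn

        class BimodalNet(nn.Module):
            def __init__(self, landmark_dim=136, texture_dim=256, joint_dim=64):
                super().__init__()
                self.landmark_enc = nn.Sequential(
                    nn.Linear(landmark_dim, 128), nn.ReLU(),
                    nn.Linear(128, 64), nn.ReLU())
                self.texture_enc = nn.Sequential(
                    nn.Linear(texture_dim, 128), nn.ReLU(),
                    nn.Linear(128, 64), nn.ReLU())
                self.joint = nn.Sequential(nn.Linear(128, joint_dim), nn.ReLU())
                self.classifier = nn.Linear(joint_dim, 2)  # fatigued vs. alert

            def forward(self, landmarks, texture):
                # Concatenate the two modality encodings, fuse, then classify.
                fused = torch.cat([self.landmark_enc(landmarks),
                                   self.texture_enc(texture)], dim=1)
                return self.classifier(self.joint(fused))

        net = BimodalNet()
        logits = net(torch.randn(8, 136), torch.randn(8, 256))  # a batch of 8 frames
        print(logits.shape)  # torch.Size([8, 2])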

  5. Correlation based efficient face recognition and color change detection

    NASA Astrophysics Data System (ADS)

    Elbouz, M.; Alfalou, A.; Brosseau, C.; Alam, M. S.; Qasmi, S.

    2013-01-01

    Identifying the human face via correlation is a topic attracting widespread interest. At the heart of this technique lies the comparison of an unknown target image to a known reference database of images. However, the color information in the target image remains notoriously difficult to interpret. In this paper, we report a new technique which: (i) is robust against illumination change, (ii) offers the discrimination ability to detect color change between faces having similar shape, and (iii) is specifically designed to detect red colored stains (i.e., facial bleeding). We adopt the VanderLugt correlator (VLC) architecture with a segmented phase filter and decompose the color target image using normalized red, green, and blue (RGB) and hue, saturation, and value (HSV) scales. We propose a new strategy to effectively utilize color information in signatures to further increase the discrimination ability. The proposed algorithm has been found to be very efficient for discriminating face subjects with different skin colors and those having color stains in different areas of the facial image.
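
    The color decomposition step can be sketched in a few lines. The example below, assuming numpy and matplotlib's color utilities, splits a stand-in RGB image into normalized RGB chromaticity and HSV channels, the kind of six-channel stack from which correlation signatures could then be built; the correlator itself is not shown.

        import numpy as np
        from matplotlib.colors import rgb_to_hsv

        rng = np.random.default_rng(4)
        img = rng.random((64, 64, 3))              # stand-in RGB face image in [0, 1]

        total = img.sum(axis=2, keepdims=True) + 1e-8
        rgb_norm = img / total                     # normalized r, g, b (sum to 1)
        hsv = rgb_to_hsv(img)                      # hue, saturation, value channels

        channels = np.concatenate([rgb_norm, hsv], axis=2)  # 6-channel signature stack
        print(channels.shape)                      # (64, 64, 6)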

  6. Wanting it Too Much: An Inverse Relation Between Social Motivation and Facial Emotion Recognition in Autism Spectrum Disorder

    PubMed Central

    Garman, Heather D.; Spaulding, Christine J.; Webb, Sara Jane; Mikami, Amori Yee; Morris, James P.

    2016-01-01

    This study examined social motivation and early-stage face perception as frameworks for understanding impairments in facial emotion recognition (FER) in a well-characterized sample of youth with autism spectrum disorders (ASD). Early-stage face perception (N170 event-related potential latency) was recorded while participants completed a standardized FER task, and social motivation was obtained via parent report. Participants with greater social motivation exhibited poorer FER, while those with shorter N170 latencies exhibited better FER for child angry face stimuli. Social motivation partially mediated the relationship between a faster N170 and better FER. These effects were all robust to variations in IQ, age, and ASD severity. These findings argue against theories implicating social motivation as uniformly valuable for individuals with ASD, and augment models suggesting a close link between early-stage face perception, social motivation, and FER in this population. Broader implications for models and development of FER in ASD are discussed. PMID:26743637

  7. Wanting it Too Much: An Inverse Relation Between Social Motivation and Facial Emotion Recognition in Autism Spectrum Disorder.

    PubMed

    Garman, Heather D; Spaulding, Christine J; Webb, Sara Jane; Mikami, Amori Yee; Morris, James P; Lerner, Matthew D

    2016-12-01

    This study examined social motivation and early-stage face perception as frameworks for understanding impairments in facial emotion recognition (FER) in a well-characterized sample of youth with autism spectrum disorders (ASD). Early-stage face perception (N170 event-related potential latency) was recorded while participants completed a standardized FER task, and social motivation was obtained via parent report. Participants with greater social motivation exhibited poorer FER, while those with shorter N170 latencies exhibited better FER for child angry face stimuli. Social motivation partially mediated the relationship between a faster N170 and better FER. These effects were all robust to variations in IQ, age, and ASD severity. These findings argue against theories implicating social motivation as uniformly valuable for individuals with ASD, and augment models suggesting a close link between early-stage face perception, social motivation, and FER in this population. Broader implications for models and development of FER in ASD are discussed.

  8. A PCA-Based method for determining craniofacial relationship and sexual dimorphism of facial shapes.

    PubMed

    Shui, Wuyang; Zhou, Mingquan; Maddock, Steve; He, Taiping; Wang, Xingce; Deng, Qingqiong

    2017-11-01

    Previous studies have used principal component analysis (PCA) to investigate the craniofacial relationship, as well as sex determination using facial factors. However, few studies have investigated the extent to which the choice of principal components (PCs) affects the analysis of craniofacial relationship and sexual dimorphism. In this paper, we propose a PCA-based method for visual and quantitative analysis, using 140 samples of 3D heads (70 male and 70 female), produced from computed tomography (CT) images. There are two parts to the method. First, skull and facial landmarks are manually marked to guide the model's registration so that dense corresponding vertices occupy the same relative position in every sample. Statistical shape spaces of the skull and face in dense corresponding vertices are constructed using PCA. Variations in these vertices, captured in every principal component (PC), are visualized to observe shape variability. The correlations of skull- and face-based PC scores are analysed, and linear regression is used to fit the craniofacial relationship. We compute the PC coefficients of a face based on this craniofacial relationship and the PC scores of a skull, and apply the coefficients to estimate a 3D face for the skull. To evaluate the accuracy of the computed craniofacial relationship, the mean and standard deviation of every vertex between the two models are computed, where these models are reconstructed using real PC scores and coefficients. Second, each PC in facial space is analysed for sex determination, for which support vector machines (SVMs) are used. We examined the correlation between PCs and sex, and explored the extent to which the choice of PCs affects the expression of sexual dimorphism. Our results suggest that skull- and face-based PCs can be used to describe the craniofacial relationship and that the accuracy of the method can be improved by using an increased number of face-based PCs. The results show that the accuracy of the sex classification is related to the choice of PCs. The highest sex classification rate is 91.43% using our method. Copyright © 2017 Elsevier Ltd. All rights reserved.
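
    The two analyses lend themselves to a compact sketch. Assuming scikit-learn and synthetic PC scores standing in for the 140-sample dataset, the example below fits (1) a linear regression from skull PCs to face PCs, the craniofacial relationship used to estimate a face for a skull, and (2) a linear SVM on face PCs for sex classification; all dimensions and data are illustrative assumptions.

        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import SVC

        rng = np.random.default_rng(5)
        skull_pcs = rng.normal(size=(140, 30))                 # skull-space PC scores
        face_pcs = skull_pcs @ rng.normal(size=(30, 20)) * 0.1 \
                   + rng.normal(size=(140, 20))                # correlated face scores
        sex = rng.integers(0, 2, size=140)                     # 70/70 split in the paper

        craniofacial = LinearRegression().fit(skull_pcs, face_pcs)
        face_est = craniofacial.predict(skull_pcs[:1])         # face PCs for one skull

        svm = SVC(kernel="linear")
        acc = cross_val_score(svm, face_pcs, sex, cv=5).mean() # sex classification
        print("estimated face PC shape:", face_est.shape, " CV accuracy:", round(acc, 3))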

  9. Reading emotions from faces in two indigenous societies.

    PubMed

    Crivelli, Carlos; Jarillo, Sergio; Russell, James A; Fernández-Dols, José-Miguel

    2016-07-01

    That all humans recognize certain specific emotions from their facial expression (the Universality Thesis) is a pillar of research, theory, and application in the psychology of emotion. Its most rigorous test occurs in indigenous societies with limited contact with external cultural influences, but such tests are scarce. Here we report 2 such tests. Study 1 was of children and adolescents (N = 68; aged 6-16 years) of the Trobriand Islands (Papua New Guinea, South Pacific) with a Western control group from Spain (N = 113, of similar ages). Study 2 was of children and adolescents (N = 36; same age range) of Matemo Island (Mozambique, Africa). In both studies, participants were shown an array of prototypical facial expressions and asked to point to the person feeling a specific emotion: happiness, fear, anger, disgust, or sadness. The Spanish control group matched faces to emotions as predicted by the Universality Thesis: matching was seen on 83% to 100% of trials. For the indigenous societies, in both studies, the Universality Thesis was moderately supported for happiness: smiles were matched to happiness on 58% and 56% of trials, respectively. For other emotions, however, results were even more modest: 7% to 46% in the Trobriand Islands and 22% to 53% in Matemo Island. These results were robust across age, gender, static versus dynamic display of the facial expressions, and between- versus within-subjects design. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  10. Decoding Task and Stimulus Representations in Face-responsive Cortex

    PubMed Central

    Kliemann, Dorit; Jacoby, Nir; Anzellotti, Stefano; Saxe, Rebecca R.

    2017-01-01

    Faces provide rich social information about others' stable traits (e.g., age) and fleeting states of mind (e.g., emotional expression). While some of these facial aspects may be processed automatically, observers can also deliberately attend to some features while ignoring others. It remains unclear how internal goals (e.g., task context) influence the representational geometry of variable and stable facial aspects in face-responsive cortex. We investigated neural response patterns related to decoding (i) the intention to attend to a facial aspect before its perception, (ii) the attended aspect of a face, and (iii) stimulus properties. We measured neural responses while subjects watched videos of dynamic positive and negative expressions and judged the age or the expression's valence. Split-half multivoxel pattern analyses (MVPA) showed that (i) the intention to attend to a specific aspect of a face can be decoded from left fronto-lateral, but not face-responsive, regions; (ii) during face perception, the attended aspect (age vs. emotion) could be robustly decoded from almost all face-responsive regions; and (iii) a stimulus property (valence) was represented in the right posterior superior temporal sulcus and medial prefrontal cortices. The effect of deliberately shifting the focus of attention on representations suggests a powerful influence of top-down signals on the cortical representation of social information, varying across cortical regions and likely reflecting neural flexibility to optimally integrate internal goals and dynamic perceptual input. PMID:27978778
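
    A split-half MVPA of the kind reported above can be sketched briefly. Assuming numpy and random "voxel patterns" as stand-ins for real fMRI data, the example splits trials into halves, computes per-condition mean patterns in each half, and tests whether within-condition correlations exceed between-condition correlations.

        import numpy as np

        rng = np.random.default_rng(6)
        n_trials, n_voxels = 40, 100
        cond = np.repeat([0, 1], n_trials // 2)    # e.g., attend-age vs. attend-emotion
        signal = rng.normal(size=(2, n_voxels))    # one true pattern per condition
        data = signal[cond] + rng.normal(scale=2.0, size=(n_trials, n_voxels))

        half = rng.permutation(n_trials) < n_trials // 2   # random split-half labels
        means = {(c, h): data[(cond == c) & (half == h)].mean(axis=0)
                 for c in (0, 1) for h in (True, False)}

        within = np.mean([np.corrcoef(means[c, True], means[c, False])[0, 1]
                          for c in (0, 1)])
        between = np.mean([np.corrcoef(means[c, True], means[1 - c, False])[0, 1]
                           for c in (0, 1)])
        print(f"within = {within:.3f}, between = {between:.3f}, decodable: {within > between}")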

  11. The face and its emotion: right N170 deficits in structural processing and early emotional discrimination in schizophrenic patients and relatives.

    PubMed

    Ibáñez, Agustín; Riveros, Rodrigo; Hurtado, Esteban; Gleichgerrcht, Ezequiel; Urquina, Hugo; Herrera, Eduar; Amoruso, Lucía; Reyes, Migdyrai Martin; Manes, Facundo

    2012-01-30

    Previous studies have reported facial emotion recognition impairments in schizophrenic patients, as well as abnormalities in the N170 component of the event-related potential. Current research on schizophrenia highlights the importance of complexly-inherited brain-based deficits. In order to examine the N170 markers of face structural and emotional processing, DSM-IV diagnosed schizophrenia probands (n=13), unaffected first-degree relatives from multiplex families (n=13), and control subjects (n=13) matched by age, gender and educational level, performed a categorization task which involved words and faces with positive and negative valence. The N170 component, while present in relatives and control subjects, was reduced in patients, not only for faces, but also for face-word differences, suggesting a deficit in structural processing of stimuli. Control subjects showed N170 modulation according to the valence of facial stimuli. However, this discrimination effect was found to be reduced both in patients and relatives. This is the first report showing N170 valence deficits in relatives. Our results suggest a generalized deficit affecting the structural encoding of faces in patients, as well as the emotion discrimination both in patients and relatives. Finally, these findings lend support to the notion that cortical markers of facial discrimination can be validly considered as vulnerability markers. © 2011 Elsevier Ireland Ltd. All rights reserved.

  12. Chimeric anterolateral thigh free flap for reconstruction of complex cranio-orbito-facial defects after skull base cancers resection.

    PubMed

    Cherubino, Mario; Turri-Zanoni, Mario; Battaglia, Paolo; Giudice, Marco; Pellegatta, Igor; Tamborini, Federico; Maggiulli, Francesca; Guzzetti, Luca; Di Giovanna, Danilo; Bignami, Maurizio; Calati, Carolina; Castelnuovo, Paolo; Valdatta, Luigi

    2017-01-01

    Complex cranio-orbito-facial defects after skull base cancer resection call for functional and esthetic reconstruction. The introduction of endoscopically assisted excision techniques, together with advances in reconstructive surgery and anesthesiology, has improved the management of these critical patients. We report a series of chimeric anterolateral thigh (ALT) flaps used to reconstruct complex cranio-orbito-facial defects after skull base surgery. A retrospective review was performed of patients who underwent cranio-orbito-facial reconstruction using a chimeric ALT flap from March 2013 to October 2015 at a single tertiary care referral institute. All patients were affected by locally advanced malignant tumors, and the resulting defects involved the skull base in all cases. The ALT flaps were perforator-based flaps with different components: fascia, skin, and muscle. The different flap territories had independent vascular supplies and were free of any physical interconnection except where linked by a common source vessel. Ten patients were included in the study. Three patients underwent adjuvant radiotherapy and chemotherapy. The mean hospitalization time was 21 days (range, 8-24 days). One failure was observed. After a mean follow-up of 12.4 months, 3 patients had died of the disease, 2 were alive with disease, and 5 patients (50%) were alive without evidence of disease. The chimeric ALT flap is a reliable and versatile reconstructive option for complex cranio-orbito-facial defects resulting from skull base surgery. The chimeric flap composed of different territories proved adequate for a patient-tailored three-dimensional reconstruction of the defects and able to withstand postoperative adjuvant treatments. Copyright © 2016 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.

  13. Association study of Demodex bacteria and facial dermatoses based on DGGE technique.

    PubMed

    Zhao, YaE; Yang, Fan; Wang, RuiLing; Niu, DongLing; Mu, Xin; Yang, Rui; Hu, Li

    2017-03-01

    The role of bacteria in the facial skin lesions caused by Demodex is unclear. To shed light on this issue, we conducted a case-control study comparing cases with facial dermatoses to controls with healthy skin, using the denaturing gradient gel electrophoresis (DGGE) technique. Bacterial diversity, composition, and principal components were analyzed for Demodex bacteria and the matched facial skin bacteria. Mite examination showed that all 33 cases were infected with Demodex folliculorum (D. f), whereas 16 of the 30 controls were infected with D. f and the remaining 14 controls were infected with Demodex brevis (D. b). The diversity analysis showed that only the evenness index differed statistically between mite bacteria and matched skin bacteria in the cases. The composition analysis showed that the DGGE bands of cases and controls were assigned to 12 taxa of 4 phyla: Proteobacteria (39.37-52.78%), Firmicutes (2.7-26.77%), Actinobacteria (0-5.71%), and Bacteroidetes (0-2.08%). In cases, the proportion of Staphylococcus in Firmicutes was significantly higher than that in D. f controls and D. b controls, while the proportion of Sphingomonas in Proteobacteria was significantly lower than that in D. f controls. The between-group analysis (BGA) showed that the banding patterns clustered into three groups: D. f cases, D. f controls, and D. b controls. Our study suggests that the bacteria in Demodex likely derive from the matched facial skin bacteria. Proteobacteria and Firmicutes are the two main taxa. The increase of Staphylococcus and decrease of Sphingomonas might be associated with the development of facial dermatoses.

  14. Influence of maxillary posterior discrepancy on upper molar vertical position and facial vertical dimensions in subjects with or without skeletal open bite

    PubMed Central

    Aliaga-Del Castillo, Aron; Pérez-Vargas, Luis Fernando; Flores-Mir, Carlos

    2016-01-01

    Objectives: To determine the influence of maxillary posterior discrepancy on upper molar vertical position and dentofacial vertical dimensions in individuals with or without skeletal open bite (SOB). Materials and methods: Pre-treatment lateral cephalograms of 139 young adults were examined. The sample was divided into eight groups categorized according to their sagittal and vertical skeletal facial growth pattern and maxillary posterior discrepancy (present or absent). Upper molar vertical position, overbite, lower anterior facial height and facial height ratio were measured. An independent t-test was performed to determine differences between the groups considering maxillary posterior discrepancy. Principal component analysis and a MANCOVA test were also used. Results: No statistically significant differences were found when comparing the molar vertical position according to maxillary posterior discrepancy for the SOB Class I group or the group with adequate overbite. Significant differences were found in the SOB Class II and Class III groups. In addition, an increased molar vertical position was found in the group without posterior discrepancy. Limitations: Some variables closely related to the individual's intrinsic craniofacial development that could influence the evaluated vertical measurements were not considered. Conclusions and implications: Overall, maxillary posterior discrepancy does not appear to have a clear impact on upper molar vertical position or facial vertical dimensions. Only the SOB Class III group without posterior discrepancy had a significantly increased upper molar vertical position. PMID:26385786

  15. Influence of maxillary posterior discrepancy on upper molar vertical position and facial vertical dimensions in subjects with or without skeletal open bite.

    PubMed

    Arriola-Guillén, Luis Ernesto; Aliaga-Del Castillo, Aron; Pérez-Vargas, Luis Fernando; Flores-Mir, Carlos

    2016-06-01

    To determine the influence of maxillary posterior discrepancy on upper molar vertical position and dentofacial vertical dimensions in individuals with or without skeletal open bite (SOB). Pre-treatment lateral cephalograms of 139 young adults were examined. The sample was divided into eight groups categorized according to their sagittal and vertical skeletal facial growth pattern and maxillary posterior discrepancy (present or absent). Upper molar vertical position, overbite, lower anterior facial height and facial height ratio were measured. An independent t-test was performed to determine differences between the groups considering maxillary posterior discrepancy. Principal component analysis and a MANCOVA test were also used. No statistically significant differences were found when comparing the molar vertical position according to maxillary posterior discrepancy for the SOB Class I group or the group with adequate overbite. Significant differences were found in the SOB Class II and Class III groups. In addition, an increased molar vertical position was found in the group without posterior discrepancy. Some variables closely related to the individual's intrinsic craniofacial development that could influence the evaluated vertical measurements were not considered. Overall, maxillary posterior discrepancy does not appear to have a clear impact on upper molar vertical position or facial vertical dimensions. Only the SOB Class III group without posterior discrepancy had a significantly increased upper molar vertical position. © The Author 2015. Published by Oxford University Press on behalf of the European Orthodontic Society. All rights reserved. For permissions, please email: journals.permissions@oup.com.

  16. Design for robustness of unique, multi-component engineering systems

    NASA Astrophysics Data System (ADS)

    Shelton, Kenneth A.

    2007-12-01

    The purpose of this research is to advance the science of conceptual designing for robustness in unique, multi-component engineering systems. Robustness is herein defined as the ability of an engineering system to operate within a desired performance range even if the actual configuration has differences from specifications within specified tolerances. These differences are caused by three sources, namely manufacturing errors, system degradation (operational wear and tear), and parts availability. Unique, multi-component engineering systems are defined as systems produced in unique or very small production numbers. They typically have design and manufacturing costs on the order of billions of dollars, and have multiple, competing performance objectives. Design time for these systems must be minimized due to competition, high manpower costs, long manufacturing times, technology obsolescence, and limited available manpower expertise. Most importantly, design mistakes cannot be easily corrected after the systems are operational. For all these reasons, robustness of these systems is absolutely critical. This research examines the space satellite industry in particular. Although inherent robustness assurance is absolutely critical, it is difficult to achieve in practice. The current state of the art for robustness in the industry is to overdesign components and subsystems with redundancy and margin. The shortfall is that it is not known whether the added margins were necessary or sufficient given the risk management preferences of the designer or engineering system customer. To address this shortcoming, new assessment criteria to evaluate robustness in design concepts have been developed. The criteria comprise the "Value Distance", addressing manufacturing errors and system degradation, and the "Component Distance", addressing parts availability. They are based on an evolutionary computation format that uses a string of alleles to describe the components in the design concept. These allele values are unitless themselves, but map to both configuration descriptions and attribute values. The Value Distance and Component Distance are metrics that measure the relative differences between two design concepts using the allele values, and all differences in a population of design concepts are calculated relative to a reference design, called the "base design". The base design is the top-ranked member of the population in weighted terms of robustness and performance. Robustness is determined based on the change in multi-objective performance as Value Distance and Component Distance (and thus differences in design) increase. It is assessed as acceptable if differences in design configurations up to specified tolerances result in performance changes that remain within a specified performance range. The design configuration difference tolerances and performance range together define the designer's risk management preferences for the final design concepts. Additionally, a complementary visualization capability was developed, called the "Design Solution Topography". This concept allows the visualization of a population of design concepts, and is a 3-axis plot where each point represents an entire design concept. The axes are the Value Distance, Component Distance and Performance Objective.
The key benefit of the Design Solution Topography is that it allows the designer to visually identify and interpret the overall robustness of the current population of design concepts for a particular performance objective. In a multi-objective problem, each performance objective has its own Design Solution Topography view. These new concepts are implemented in an evolutionary computation-based conceptual designing method called the "Design for Robustness Method" that produces robust design concepts. The design procedures associated with this method enable designers to evaluate and ensure robustness in selected designs that also perform within a desired performance range. The method uses an evolutionary computation-based procedure to generate populations of large numbers of alternative design concepts, which are assessed for robustness using the Value Distance, Component Distance and Design Solution Topography procedures. The Design for Robustness Method provides a working conceptual designing structure in which to implement and gain the benefits of these new concepts. In the included experiments, the method was used on several mathematical examples to demonstrate feasibility, which showed favorable results as compared to existing known methods. Furthermore, it was tested on a real-world satellite conceptual designing problem to illustrate the applicability and benefits to industry. Risk management insights were demonstrated for the robustness-related issues of manufacturing errors, operational degradation, parts availability, and impacts based on selections of particular types of components.
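
    A minimal sketch of the two distance metrics, assuming a design concept is encoded as a string of integer alleles mapping to component choices and attribute values (an illustrative encoding, not the dissertation's exact one): Component Distance counts alleles that differ from the base design, while Value Distance sums the magnitudes of the attribute-value differences.

        import numpy as np

        base = np.array([3, 1, 4, 1, 5, 9, 2, 6])  # base (top-ranked) design's alleles
        alt = np.array([3, 2, 4, 1, 5, 8, 2, 6])   # a perturbed alternative concept

        component_distance = int(np.sum(base != alt))       # number of differing alleles
        value_distance = float(np.sum(np.abs(base - alt)))  # total attribute difference

        print(component_distance, value_distance)  # 2 and 2.0 for this pair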

  17. The role of facial appearance on CEO selection after firm misconduct.

    PubMed

    Gomulya, David; Wong, Elaine M; Ormiston, Margaret E; Boeker, Warren

    2017-04-01

    We investigate a particular aspect of CEO successor trustworthiness that may be critically important after a firm has engaged in financial misconduct. Specifically, drawing on prior research that suggests that facial appearance is one critical way in which trustworthiness is signaled, we argue that leaders who convey integrity, a component of trustworthiness, will be more likely to be selected as successors after financial restatement. We predict that such appointments garner more positive reactions by external observers such as investment analysts and the media because these CEOs are perceived as having greater integrity. In an archival study of firms that have announced financial restatements, we find support for our predictions. These findings have implications for research on CEO succession, leadership selection, facial appearance, and firm misconduct. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  18. Judging emotional congruency: Explicit attention to situational context modulates processing of facial expressions of emotion.

    PubMed

    Diéguez-Risco, Teresa; Aguado, Luis; Albert, Jacobo; Hinojosa, José Antonio

    2015-12-01

    The influence of explicit evaluative processes on the contextual integration of facial expressions of emotion was studied in a procedure that required the participants to judge the congruency of happy and angry faces with preceding sentences describing emotion-inducing situations. Judgments were faster on congruent trials in the case of happy faces and on incongruent trials in the case of angry faces. At the electrophysiological level, a congruency effect was observed in the face-sensitive N170 component that showed larger amplitudes on incongruent trials. An interactive effect of congruency and emotion appeared on the LPP (late positive potential), with larger amplitudes in response to happy faces that followed anger-inducing situations. These results show that the deliberate intention to judge the contextual congruency of facial expressions influences not only processes involved in affective evaluation such as those indexed by the LPP but also earlier processing stages that are involved in face perception. Copyright © 2015. Published by Elsevier B.V.

  19. Perceiving emotions in neutral faces: expression processing is biased by affective person knowledge.

    PubMed

    Suess, Franziska; Rabovsky, Milena; Abdel Rahman, Rasha

    2015-04-01

    According to a widely held view, basic emotions such as happiness or anger are reflected in facial expressions that are invariant and uniquely defined by specific facial muscle movements. Accordingly, expression perception should not be vulnerable to influences outside the face. Here, we test this assumption by manipulating the emotional valence of biographical knowledge associated with individual persons. Faces of well-known and initially unfamiliar persons displaying neutral expressions were associated with socially relevant negative, positive or comparatively neutral biographical information. The expressions of faces associated with negative information were classified as more negative than faces associated with neutral information. Event-related brain potential modulations in the early posterior negativity, a component taken to reflect early sensory processing of affective stimuli such as emotional facial expressions, suggest that negative affective knowledge can bias the perception of faces with neutral expressions toward subjectively displaying negative emotions. © The Author (2014). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  20. Characteristics of ballistic and blast injuries.

    PubMed

    Powers, David B; Delo, Robert I

    2013-03-01

    Ballistic injury wounds are formed by variable interrelated factors, such as the nature of the tissue, the compositional makeup of the bullet, distance to the target, and the velocity, shape, and mass of the projectile. Because these factors interact to determine the ultimate outcome, wounding potential is difficult to predict. As the facial features are the component of the body most involved in a patient's personality and interaction with society, preservation of form, cosmesis, and functional outcome should remain the primary goals in the management of ballistic injury. A logical, sequential analysis of the injury patterns to the facial complex is an absolutely necessary component for the treatment of craniomaxillofacial ballistic injuries. Fortunately, these skill sets should be well honed in all craniomaxillofacial surgeons through their exposure to generalized trauma, orthognathic, oncologic, and cosmetic surgery patients. Identification of injured tissues, understanding the functional limitations of these injuries, and preservation of both hard and soft tissues to minimize the need for tissue replacement are paramount.

  1. The Influence Function of Principal Component Analysis by Self-Organizing Rule.

    PubMed

    Higuchi; Eguchi

    1998-07-28

    This article is concerned with a neural network approach to principal component analysis (PCA). An algorithm for PCA by the self-organizing rule has been proposed, and its robustness was observed in the simulation study of Xu and Yuille (1995). In this article, the robustness of the algorithm against outliers is investigated by using the theory of influence functions. The influence function of the principal component vector is given in an explicit form. Through this expression, the method is shown to be robust against outliers lying in any direction orthogonal to the principal component vector. In addition, a statistic generated by the self-organizing rule is proposed to assess the influence of data in PCA.
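
    The self-organizing rule referred to above is closely related to Oja's Hebbian update for extracting the leading principal component. As a minimal sketch of what such a rule looks like (this is the plain Oja form, not necessarily the article's exact robust variant; the learning rate, epoch count, and function name are illustrative):

    ```python
    import numpy as np

    def oja_first_pc(X, eta=0.01, epochs=50, seed=0):
        """Estimate the first principal component with a self-organizing (Oja-type) rule."""
        rng = np.random.default_rng(seed)
        n, d = X.shape
        w = rng.normal(size=d)
        w /= np.linalg.norm(w)
        Xc = X - X.mean(axis=0)              # zero-mean data
        for _ in range(epochs):
            for x in Xc[rng.permutation(n)]:
                y = w @ x                    # projection onto the current estimate
                w += eta * y * (x - y * w)   # Hebbian growth with self-normalizing decay
        return w / np.linalg.norm(w)
    ```

    An influence-function analysis of the kind described here asks how a single outlying observation perturbs the fixed point of such an update; perturbations orthogonal to the converged direction are the ones the article shows to be harmless.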

  2. Skull shapes of the Lissodelphininae: radiation, adaptation and asymmetry.

    PubMed

    Galatius, Anders; Goodall, R Natalie P

    2016-06-01

    Within Delphinidae, the sub-family Lissodelphininae consists of 8 Southern Ocean species and 2 North Pacific species. Lissodelphininae is a result of recent phylogenetic revisions based on molecular methods. Thus, morphological radiation within the taxon has not been investigated previously. The sub-family consists of ecologically diverse groups such as (1) the Cephalorhynchus genus of 4 small species inhabiting coastal and shelf waters, (2) the robust species in the Lagenorhynchus genus with the coastal La. australis, the offshore La. cruciger, the pelagic species La. obscurus and La. obliquidens, and (3) the morphologically aberrant genus Lissodelphis. Here, the shapes of 164 skulls from adults of all 10 species were compared using 3-dimensional geometric morphometrics. The Lissodelphininae skulls were supplemented by samples of Lagenorhynchus albirostris and Delphinus delphis to obtain a context for the variation found within the subfamily. Principal components analysis was used to map the most important components of shape variation on phylogeny. The first component of shape variation described an elongation of the rostrum, lateral and dorsoventral compression of the neurocranium and smaller temporal fossa. The two Lissodelphis species were on the high extreme of this spectrum, while Lagenorhynchus australis, La. cruciger and Cephalorhynchus heavisidii were at the low extreme. Along the second component, La. cruciger was isolated from the other species by its expanded neurocranium and concave facial profile. Shape variation supports the gross phylogenetic relationships proposed by recent molecular studies. However, despite the great diversity of ecology and external morphology within the subfamily, shape variation of the feeding apparatus was modest, indicating a similar mode of feeding across the subfamily. All 10 species were similar in their pattern of skull asymmetry, but interestingly, two species using narrowband high frequency clicks (La. cruciger and C. hectori) were among the most asymmetric species, contradicting previous interpretations of odontocete skull asymmetry. J. Morphol. 277:776-785, 2016. © 2016 Wiley Periodicals, Inc.

  3. Brain response during the M170 time interval is sensitive to socially relevant information.

    PubMed

    Arviv, Oshrit; Goldstein, Abraham; Weeting, Janine C; Becker, Eni S; Lange, Wolf-Gero; Gilboa-Schechtman, Eva

    2015-11-01

    Deciphering the social meaning of facial displays is a highly complex neurological process. The M170, an event-related field component of MEG recordings, like its EEG counterpart the N170, has repeatedly been shown to be associated with the structural encoding of faces. However, the scope of information encoded during the M170 time window is still being debated. We investigated the neuronal origin of facial processing of integrated social rank cues (SRCs) and emotional facial expressions (EFEs) during the M170 time interval. Participants viewed integrated facial displays of emotion (happy, angry, neutral) and SRCs (indicated by upward, downward, or straight head tilts). We found that the activity during the M170 time window is sensitive to both EFEs and SRCs. Specifically, highly prominent activation was observed in response to SRCs connoting dominance as compared with submissive or egalitarian head cues. Interestingly, the processing of EFEs and SRCs appeared to rely on different circuitry. Our findings suggest that vertical head tilts are processed not only for their sheer structural variance, but as social information. Exploring the temporal unfolding and brain localization of non-verbal cue processing may assist in understanding the functioning of the social rank biobehavioral system. Copyright © 2015 Elsevier Ltd. All rights reserved.

  4. Pre-operative Screening and Manual Drilling Strategies to Reduce the Risk of Thermal Injury During Minimally Invasive Cochlear Implantation Surgery.

    PubMed

    Dillon, Neal P; Fichera, Loris; Kesler, Kyle; Zuniga, M Geraldine; Mitchell, Jason E; Webster, Robert J; Labadie, Robert F

    2017-09-01

    This article presents the development and experimental validation of a methodology to reduce the risk of thermal injury to the facial nerve during minimally invasive cochlear implantation surgery. The first step in this methodology is a pre-operative screening process, in which medical imaging is used to identify those patients that present a significant risk of developing high temperatures at the facial nerve during the drilling phase of the procedure. Such a risk is calculated based on the density of the bone along the drilling path and the thermal conductance between the drilling path and the nerve, and provides a criterion to exclude high-risk patients from receiving the minimally invasive procedure. The second component of the methodology is a drilling strategy for manually-guided drilling near the facial nerve. The strategy utilizes interval drilling and mechanical constraints to enable better control over the procedure and the resulting generation of heat. The approach is tested in fresh cadaver temporal bones using a thermal camera to monitor temperature near the facial nerve. Results indicate that pre-operative screening may successfully exclude high-risk patients and that the proposed drilling strategy enables safe drilling for low-to-moderate risk patients.

  5. Facial patterns in a tropical social wasp correlate with colony membership

    NASA Astrophysics Data System (ADS)

    Baracchi, David; Turillazzi, Stefano; Chittka, Lars

    2016-10-01

    Social insects excel in discriminating nestmates from intruders, typically relying on colony odours. Remarkably, some wasp species achieve such discrimination using visual information. However, while it is universally accepted that odours mediate group-level recognition, the ability to recognise colony members visually has been considered possible only via individual recognition, by which wasps discriminate `friends' and `foes'. Using geometric morphometric analysis, a technique based on a rigorous statistical theory of shape that allows quantitative multivariate analyses of structure shapes, we first quantified facial marking variation in Liostenogaster flavolineata wasps. We then compared this facial variation with that of chemical profiles (generated by cuticular hydrocarbons) within and between colonies. Principal component analysis and discriminant analysis applied to sets of variables containing pure shape information showed that, despite appreciable intra-colony variation, the faces of females belonging to the same colony resemble one another more than those of outsiders. This colony-specific variation in facial patterns was on a par with that observed for odours. While the occurrence of face discrimination at the colony level remains to be tested by behavioural experiments, overall our results suggest that, in this species, wasp faces display adequate information that might potentially be perceived and used by wasps for colony-level recognition.

  6. "The role of facial appearance on CEO selection after firm misconduct:" Correction to Gomulya et al. (2016).

    PubMed

    2017-04-01

    Reports an error in "The Role of Facial Appearance on CEO Selection After Firm Misconduct" by David Gomulya, Elaine M. Wong, Margaret E. Ormiston and Warren Boeker (Journal of Applied Psychology, Advanced Online Publication, Dec 19, 2016, np). The wrong figure files were used. All versions of this article have been corrected. (The following abstract of the original article appeared in record 2016-60831-001.) We investigate a particular aspect of CEO successor trustworthiness that may be critically important after a firm has engaged in financial misconduct. Specifically, drawing on prior research that suggests that facial appearance is one critical way in which trustworthiness is signaled, we argue that leaders who convey integrity, a component of trustworthiness, will be more likely to be selected as successors after financial restatement. We predict that such appointments garner more positive reactions by external observers such as investment analysts and the media because these CEOs are perceived as having greater integrity. In an archival study of firms that have announced financial restatements, we find support for our predictions. These findings have implications for research on CEO succession, leadership selection, facial appearance, and firm misconduct. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  7. Children's Recognition of Emotional Facial Expressions Through Photographs and Drawings.

    PubMed

    Brechet, Claire

    2017-01-01

    The author's purpose was to examine children's recognition of emotional facial expressions, by comparing two types of stimuli: photographs and drawings. The author aimed to investigate whether drawings could be considered as a more evocative material than photographs, as a function of age and emotion. Five- and 7-year-old children were presented with photographs and drawings displaying facial expressions of 4 basic emotions (i.e., happiness, sadness, anger, and fear) and were asked to perform a matching task by pointing to the face corresponding to the target emotion labeled by the experimenter. The photographs we used were selected from the Radboud Faces Database and the drawings were designed on the basis of both the facial components involved in the expression of these emotions and the graphic cues children tend to use when asked to depict these emotions in their own drawings. Our results show that drawings are better recognized than photographs for sadness, anger, and fear (with no difference for happiness, due to a ceiling effect), and that the difference between the two types of stimuli tends to be more pronounced for 5-year-olds than for 7-year-olds. These results are discussed in view of their implications, both for future research and for practical application.

  8. Prepreg and Melt Infiltration Technology Developed for Affordable, Robust Manufacturing of Ceramic Matrix Composites

    NASA Technical Reports Server (NTRS)

    Singh, Mrityunjay; Petko, Jeannie F.

    2004-01-01

    Affordable fiber-reinforced ceramic matrix composites with multifunctional properties are critically needed for high-temperature aerospace and space transportation applications. These materials have various applications in advanced high-efficiency and high-performance engines, airframe and propulsion components for next-generation launch vehicles, and components for land-based systems. A number of these applications require materials with specific functional characteristics: for example, thick components; hybrid layups for environmental durability and stress management; and self-healing, smart composite matrices. At present, with limited success and very high cost, traditional composite fabrication technologies have been utilized to manufacture some large, complex-shape components of these materials. However, many challenges still remain in developing affordable, robust, and flexible manufacturing technologies for large, complex-shape components with multifunctional properties. The prepreg and melt infiltration (PREMI) technology provides an affordable and robust manufacturing route for low-cost, large-scale production of multifunctional ceramic composite components.

  9. Pixel-level multisensor image fusion based on matrix completion and robust principal component analysis

    NASA Astrophysics Data System (ADS)

    Wang, Zhuozheng; Deller, J. R.; Fleet, Blair D.

    2016-01-01

    Acquired digital images are often corrupted by a lack of camera focus, faulty illumination, or missing data. An algorithm is presented for fusion of multiple corrupted images of a scene using the lifting wavelet transform. The method employs adaptive fusion arithmetic based on matrix completion and self-adaptive regional variance estimation. Characteristics of the wavelet coefficients are used to adaptively select fusion rules. Robust principal component analysis is applied to low-frequency image components, and regional variance estimation is applied to high-frequency components. Experiments reveal that the method is effective for multifocus, visible-light, and infrared image fusion. Compared with traditional algorithms, the new algorithm not only increases the amount of preserved information and clarity but also improves robustness.
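
    As a rough illustration of the coefficient-level fusion rule described above, the sketch below fuses two source images with a standard discrete wavelet transform, averaging the low-frequency band where the paper applies robust PCA and selecting high-frequency coefficients by regional variance. The function names, the Haar wavelet, and the 3x3 variance window are assumptions for illustration, not the authors' implementation (which uses the lifting wavelet transform and matrix completion):

    ```python
    import numpy as np
    import pywt
    from scipy.ndimage import uniform_filter

    def regional_variance(c, size=3):
        """Local variance of wavelet coefficients over a size x size window."""
        mean = uniform_filter(c, size)
        return uniform_filter(c * c, size) - mean ** 2

    def fuse_pair(img_a, img_b, wavelet="haar"):
        """Fuse two corrupted views of a scene (float arrays of equal shape)."""
        cA_a, highs_a = pywt.dwt2(img_a, wavelet)
        cA_b, highs_b = pywt.dwt2(img_b, wavelet)
        fused_low = 0.5 * (cA_a + cA_b)   # stand-in for the RPCA low-frequency step
        fused_high = tuple(
            np.where(regional_variance(ha) >= regional_variance(hb), ha, hb)
            for ha, hb in zip(highs_a, highs_b)
        )
        return pywt.idwt2((fused_low, fused_high), wavelet)
    ```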

  10. The Role of the Limbic System in Human Communication.

    ERIC Educational Resources Information Center

    Lamendella, John T.

    Linguistics has chosen as its niche the language component of human communication and, naturally enough, the neurolinguist has concentrated on lateralized language systems of the cerebral hemispheres. However, decoding a speaker's total message requires attention to gestures, facial expressions, and prosodic features, as well as other somatic and…

  11. Similar exemplar pooling processes underlie the learning of facial identity and handwriting style: Evidence from typical observers and individuals with Autism.

    PubMed

    Ipser, Alberta; Ring, Melanie; Murphy, Jennifer; Gaigg, Sebastian B; Cook, Richard

    2016-05-01

    Considerable research has addressed whether the cognitive and neural representations recruited by faces are similar to those engaged by other types of visual stimuli. For example, research has examined the extent to which objects of expertise recruit holistic representation and engage the fusiform face area. Little is known, however, about the domain-specificity of the exemplar pooling processes thought to underlie the acquisition of familiarity with particular facial identities. In the present study we sought to compare observers' ability to learn facial identities and handwriting styles from exposure to multiple exemplars. Crucially, while handwritten words and faces differ considerably in their topographic form, both learning tasks share a common exemplar pooling component. In our first experiment, we find that typical observers' ability to learn facial identities and handwriting styles from exposure to multiple exemplars correlates closely. In our second experiment, we show that observers with Autism Spectrum Disorder (ASD) are impaired at both learning tasks. Our findings suggest that similar exemplar pooling processes are recruited when learning facial identities and handwriting styles. Models of exemplar pooling originally developed to explain face learning, may therefore offer valuable insights into exemplar pooling across a range of domains, extending beyond faces. Aberrant exemplar pooling, possibly resulting from structural differences in the inferior longitudinal fasciculus, may underlie difficulties recognising familiar faces often experienced by individuals with ASD, and leave observers overly reliant on local details present in particular exemplars. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. Emotional facial expressions evoke faster orienting responses, but weaker emotional responses at neural and behavioural levels compared to scenes: A simultaneous EEG and facial EMG study.

    PubMed

    Mavratzakis, Aimee; Herbert, Cornelia; Walla, Peter

    2016-01-01

    In the current study, electroencephalography (EEG) was recorded simultaneously with facial electromyography (fEMG) to determine whether emotional faces and emotional scenes are processed differently at the neural level. In addition, it was investigated whether these differences can be observed at the behavioural level via spontaneous facial muscle activity. Emotional content of the stimuli did not affect early P1 activity. Emotional faces elicited enhanced amplitudes of the face-sensitive N170 component, while its counterpart, the scene-related N100, was not sensitive to the emotional content of scenes. At 220-280 ms, the early posterior negativity (EPN) was enhanced only slightly for fearful as compared to neutral or happy faces. However, its amplitudes were significantly enhanced during processing of scenes with positive content, particularly over the right hemisphere. Scenes of positive content also elicited enhanced spontaneous zygomatic activity from 500-750 ms onwards, while happy faces elicited no such changes. Contrastingly, both fearful faces and negative scenes elicited enhanced spontaneous corrugator activity at 500-750 ms after stimulus onset. However, relative to baseline, EMG changes occurred earlier for faces (250 ms) than for scenes (500 ms), whereas for scenes the activity changes were more pronounced over the whole viewing period. Taking all effects into account, the data suggest that emotional facial expressions evoke faster attentional orienting, but weaker affective neural activity and emotional behavioural responses compared to emotional scenes. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.

  13. Non-invasive stimulation of the vibrissal pad improves recovery of whisking function after simultaneous lesion of the facial and infraorbital nerves in rats.

    PubMed

    Bendella, H; Pavlov, S P; Grosheva, M; Irintchev, A; Angelova, S K; Merkel, D; Sinis, N; Kaidoglou, K; Skouras, E; Dunlop, S A; Angelov, Doychin N

    2011-07-01

    We have recently shown that manual stimulation of target muscles promotes functional recovery after transection and surgical repair to pure motor nerves (facial: whisking and blink reflex; hypoglossal: tongue position). However, following facial nerve repair, manual stimulation is detrimental if sensory afferent input is eliminated by, e.g., infraorbital nerve extirpation. To further understand the interplay between sensory input and motor recovery, we performed simultaneous cut-and-suture lesions on both the facial and the infraorbital nerves and examined whether stimulation of the sensory afferents from the vibrissae by a forced use would improve motor recovery. The efficacy of 3 treatment paradigms was assessed: removal of the contralateral vibrissae to ensure a maximal use of the ipsilateral ones (vibrissal stimulation; Group 2), manual stimulation of the ipsilateral vibrissal muscles (Group 3), and vibrissal stimulation followed by manual stimulation (Group 4). Data were compared to controls which underwent surgery but did not receive any treatment (Group 1). Four months after surgery, all three treatments significantly improved the amplitude of vibrissal whisking to 30° versus 11° in the controls of Group 1. The three treatments also reduced the degree of polyneuronal innervation of target muscle fibers to 37% versus 58% in Group 1. These findings indicate that forced vibrissal use and manual stimulation, either alone or sequentially, reduce target muscle polyinnervation and improve recovery of whisking function when both the sensory and the motor components of the trigemino-facial system regenerate.

  14. Quantified Facial Soft-tissue Strain in Animation Measured by Real-time Dynamic 3-Dimensional Imaging.

    PubMed

    Hsu, Vivian M; Wes, Ari M; Tahiri, Youssef; Cornman-Homonoff, Joshua; Percec, Ivona

    2014-09-01

    The aim of this study is to evaluate and quantify dynamic soft-tissue strain in the human face using real-time 3-dimensional imaging technology. Thirteen subjects (8 women, 5 men) between the ages of 18 and 70 were imaged using a dual-camera system and 3-dimensional optical analysis (ARAMIS, Trilion Quality Systems, Pa.). Each subject was imaged at rest and with the following facial expressions: (1) smile, (2) laughter, (3) surprise, (4) anger, (5) grimace, and (6) pursed lips. The facial strains defining stretch and compression were computed for each subject and compared. The areas of greatest strain were localized to the midface and lower face for all expressions. Subjects over the age of 40 had a statistically significant increase in stretch in the perioral region during lip pursing compared with subjects under the age of 40 (58.4% vs 33.8%, P = 0.015). When specific components of lip pursing were analyzed, there was a significantly greater degree of stretch in the nasolabial fold region in subjects over 40 compared with those under 40 (61.6% vs 32.9%, P = 0.007). Furthermore, we observed a greater degree of asymmetry of strain in the nasolabial fold region in the older age group (18.4% vs 5.4%, P = 0.03). This pilot study illustrates that the face can be objectively and quantitatively evaluated using dynamic major strain analysis. The technology of 3-dimensional optical imaging can be used to advance our understanding of facial soft-tissue dynamics and the effects of animation on facial strain over time.

  15. Derivation of simple rules for complex flow vector fields on the lower part of the human face for robot face design.

    PubMed

    Ishihara, Hisashi; Ota, Nobuyuki; Asada, Minoru

    2017-11-27

    It is quite difficult for android robots to replicate the numerous and various types of human facial expressions owing to limitations in terms of space, mechanisms, and materials. This situation could be improved with greater knowledge regarding these expressions and their deformation rules, i.e. by using the biomimetic approach. In a previous study, we investigated 16 facial deformation patterns and found that each facial point moves almost only in its own principal direction and different deformation patterns are created with different combinations of moving lengths. However, the replication errors caused by moving each control point of a face in only their principal direction were not evaluated for each deformation pattern at that time. Therefore, we calculated the replication errors in this study using the second principal component scores of the 16 sets of flow vectors at each point on the face. More than 60% of the errors were within 1 mm, and approximately 90% of them were within 3 mm. The average error was 1.1 mm. These results indicate that robots can replicate the 16 investigated facial expressions with errors within 3 mm and 1 mm for about 90% and 60% of the vectors, respectively, even if each point on the robot face moves in only its own principal direction. This finding seems promising for the development of robots capable of showing various facial expressions because significantly fewer types of movements than previously predicted are necessary.
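
    A compact reconstruction of that error computation: for each facial point, principal component analysis over its 16 displacement vectors yields the point's principal direction, and the magnitude of the score on the second principal direction measures how much a deformation departs from motion along the first. The array layout and function name below are assumptions for illustration:

    ```python
    import numpy as np

    def replication_errors(flows):
        """flows: (16, n_points, 2) displacements of each facial control point
        under each of the 16 deformation patterns (hypothetical layout).
        Returns (16, n_points) per-pattern, per-point replication errors."""
        n_patterns, n_points, _ = flows.shape
        errors = np.empty((n_patterns, n_points))
        for p in range(n_points):
            V = flows[:, p, :] - flows[:, p, :].mean(axis=0)   # centered 16 x 2
            _, _, vt = np.linalg.svd(V, full_matrices=False)
            errors[:, p] = np.abs(V @ vt[1])   # score on the 2nd principal direction
        return errors
    ```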

  16. Image-Based 3D Face Modeling System

    NASA Astrophysics Data System (ADS)

    Park, In Kyu; Zhang, Hui; Vezhnevets, Vladimir

    2005-12-01

    This paper describes an automatic system for 3D face modeling using frontal and profile images taken by an ordinary digital camera. The system consists of four subsystems including frontal feature detection, profile feature detection, shape deformation, and texture generation modules. The frontal and profile feature detection modules automatically extract the facial parts such as the eye, nose, mouth, and ear. The shape deformation module utilizes the detected features to deform the generic head mesh model such that the deformed model coincides with the detected features. A texture is created by combining the facial textures augmented from the input images and the synthesized texture and mapped onto the deformed generic head model. This paper provides a practical system for 3D face modeling, which is highly automated by aggregating, customizing, and optimizing a bunch of individual computer vision algorithms. The experimental results show a highly automated process of modeling, which is sufficiently robust to various imaging conditions. The whole model creation, including all the optional manual corrections, takes only 2 to 3 minutes.

  17. Strength Is in Numbers: Can Concordant Artificial Listeners Improve Prediction of Emotion from Speech?

    PubMed

    Martinelli, Eugenio; Mencattini, Arianna; Daprati, Elena; Di Natale, Corrado

    2016-01-01

    Humans can communicate their emotions by modulating facial expressions or the tone of their voice. Although numerous applications exist that enable machines to read facial emotions and recognize the content of verbal messages, methods for speech emotion recognition are still in their infancy. Yet, fast and reliable applications for emotion recognition are the obvious advancement of present 'intelligent personal assistants', and may have countless applications in diagnostics, rehabilitation and research. Taking inspiration from the dynamics of human group decision-making, we devised a novel speech emotion recognition system that applies, for the first time, a semi-supervised prediction model based on consensus. Three tests were carried out to compare this algorithm with traditional approaches. Labeling performances relative to a public database of spontaneous speeches are reported. The novel system appears to be fast, robust and less computationally demanding than traditional methods, allowing for easier implementation in portable voice-analyzers (as used in rehabilitation, research, industry, etc.) and for applications in the research domain (such as real-time pairing of stimuli to participants' emotional state, selective/differential data collection based on emotional content, etc.).

  18. Respiratory motion correction in dynamic MRI using robust data decomposition registration - application to DCE-MRI.

    PubMed

    Hamy, Valentin; Dikaios, Nikolaos; Punwani, Shonit; Melbourne, Andrew; Latifoltojar, Arash; Makanyanga, Jesica; Chouhan, Manil; Helbren, Emma; Menys, Alex; Taylor, Stuart; Atkinson, David

    2014-02-01

    Motion correction in Dynamic Contrast Enhanced (DCE-) MRI is challenging because rapid intensity changes can compromise common (intensity based) registration algorithms. In this study we introduce a novel registration technique based on robust principal component analysis (RPCA) to decompose a given time-series into a low rank and a sparse component. This allows robust separation of motion components that can be registered, from intensity variations that are left unchanged. This Robust Data Decomposition Registration (RDDR) is demonstrated on both simulated and a wide range of clinical data. Robustness to different types of motion and breathing choices during acquisition is demonstrated for a variety of imaged organs including liver, small bowel and prostate. The analysis of clinically relevant regions of interest showed both a decrease of error (15-62% reduction following registration) in tissue time-intensity curves and improved areas under the curve (AUC60) at early enhancement. Copyright © 2013 The Authors. Published by Elsevier B.V. All rights reserved.
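
    RDDR rests on a robust PCA decomposition of the form D = L + S, typically solved by principal component pursuit. The sketch below is a generic inexact augmented-Lagrangian (IALM) solver for that decomposition, with common default parameters rather than the settings used in RDDR; each column of D would hold one vectorized frame, so the low-rank L captures the slowly varying anatomy available for registration and the sparse S absorbs rapid contrast enhancement:

    ```python
    import numpy as np

    def rpca_ialm(D, lam=None, tol=1e-7, max_iter=500):
        """Split D into low-rank L and sparse S by inexact ALM (principal
        component pursuit); lam defaults to 1/sqrt(max(m, n))."""
        m, n = D.shape
        if lam is None:
            lam = 1.0 / np.sqrt(max(m, n))
        norm_D = np.linalg.norm(D)                      # Frobenius norm
        spec = np.linalg.norm(D, 2)                     # largest singular value
        Y = D / max(spec, np.abs(D).max() / lam)        # dual variable
        mu, rho = 1.25 / spec, 1.5
        S = np.zeros_like(D)
        for _ in range(max_iter):
            # low-rank update: singular value thresholding
            U, sig, Vt = np.linalg.svd(D - S + Y / mu, full_matrices=False)
            L = (U * np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
            # sparse update: elementwise soft thresholding
            T = D - L + Y / mu
            S = np.sign(T) * np.maximum(np.abs(T) - lam / mu, 0.0)
            Z = D - L - S
            Y += mu * Z
            mu *= rho
            if np.linalg.norm(Z) / norm_D < tol:
                break
        return L, S
    ```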

  19. The use of three-dimensional imaging to evaluate the effect of conventional orthodontic approach in treating a subject with facial asymmetry

    PubMed Central

    Kheir, Nadia Abou; Kau, Chung How

    2016-01-01

    The growth of the craniofacial skeleton takes place from the 3rd week of intra-uterine life until 18 years of age. During this period, the craniofacial complex is affected by extrinsic and intrinsic factors which guide or alter the pattern of growth. Asymmetry can be encountered due to these multifactorial effects or as the normal divergence of the hemifacial counterparts occurs. At present, an orthodontist plays a major role in diagnosing not only dental asymmetry but also facial asymmetry. However, an orthodontist's role in treating or camouflaging the asymmetry can be limited by its severity. The aim of this research is to report a technique for facial three-dimensional (3D) analysis used to measure the progress of a nonsurgical orthodontic treatment approach for a subject with maxillary asymmetry combined with mandibular angular asymmetry. The facial analysis was composed of five parts: upper face asymmetry analysis, maxillary analysis, maxillary cant analysis, mandibular cant analysis, and mandibular asymmetry analysis, which were applied using the 3D software InVivoDental 5.2.3 (Anatomage Company, San Jose, CA, USA). The five components of the facial analysis were applied to the initial cone-beam computed tomography (T1) for diagnosis. Maxillary analysis, maxillary cant analysis, and mandibular cant analysis were applied to measure the progress of the orthodontic treatment (T2). Twenty-two bilateral linear measurements and sixteen angular criteria were used to analyze the facial structures using different anthropometric landmarks. Only angular mandibular asymmetry was reported. However, the subject had a maxillary alveolar ridge cant of 9.96° and a dental maxillary cant of 2.95° at T1. The mandibular alveolar ridge cant was 7.41° and the mandibular dental cant was 8.39°. The largest decreases in cant at T2 were approximately 2.35° for the maxillary alveolar ridge and 3.96° for the mandibular alveolar ridge. Facial 3D analysis is considered a useful adjunct in evaluating inter-arch biomechanics. PMID:27563618

  20. Middle and inner ear malformations in mutation-proven branchio-oculo-facial (BOF) syndrome: case series and review of the literature.

    PubMed

    Carter, Melissa T; Blaser, Susan; Papsin, Blake; Meschino, Wendy; Reardon, Willie; Klatt, Regan; Babul-Hirji, Riyana; Milunsky, Jeff; Chitayat, David

    2012-08-01

    Hearing impairment is common in individuals with branchio-oculo-facial (BOF) syndrome. The majority of described individuals have conductive hearing impairment due to malformed ossicles and/or external canal stenosis or atresia, although a sensorineural component to the hearing impairment in BOF syndrome is increasingly being reported. Sophisticated computed tomography (CT) of the temporal bone has revealed middle and inner ear malformations in three previous reports. We present middle and inner ear abnormalities in three additional individuals with mutation-proven BOF syndrome. We suggest that temporal bone CT imaging be included in the medical workup of a child with BOF syndrome, in order to guide management. Copyright © 2012 Wiley Periodicals, Inc.

  1. Implicit Binding of Facial Features During Change Blindness

    PubMed Central

    Lyyra, Pessi; Mäkelä, Hanna; Hietanen, Jari K.; Astikainen, Piia

    2014-01-01

    Change blindness refers to the inability to detect visual changes if introduced together with an eye-movement, blink, flash of light, or with distracting stimuli. Evidence of implicit detection of changed visual features during change blindness has been reported in a number of studies using both behavioral and neurophysiological measurements. However, it is not known whether implicit detection occurs only at the level of single features or whether complex organizations of features can be implicitly detected as well. We tested this in adult humans using intact and scrambled versions of schematic faces as stimuli in a change blindness paradigm while recording event-related potentials (ERPs). An enlargement of the face-sensitive N170 ERP component was observed at the right temporal electrode site to changes from scrambled to intact faces, even if the participants were not consciously able to report such changes (change blindness). Similarly, the disintegration of an intact face to scrambled features resulted in attenuated N170 responses during change blindness. Other ERP deflections were modulated by changes, but unlike the N170 component, they were indifferent to the direction of the change. The bidirectional modulation of the N170 component during change blindness suggests that implicit change detection can also occur at the level of complex features in the case of facial stimuli. PMID:24498165

  2. Implicit binding of facial features during change blindness.

    PubMed

    Lyyra, Pessi; Mäkelä, Hanna; Hietanen, Jari K; Astikainen, Piia

    2014-01-01

    Change blindness refers to the inability to detect visual changes if introduced together with an eye-movement, blink, flash of light, or with distracting stimuli. Evidence of implicit detection of changed visual features during change blindness has been reported in a number of studies using both behavioral and neurophysiological measurements. However, it is not known whether implicit detection occurs only at the level of single features or whether complex organizations of features can be implicitly detected as well. We tested this in adult humans using intact and scrambled versions of schematic faces as stimuli in a change blindness paradigm while recording event-related potentials (ERPs). An enlargement of the face-sensitive N170 ERP component was observed at the right temporal electrode site to changes from scrambled to intact faces, even if the participants were not consciously able to report such changes (change blindness). Similarly, the disintegration of an intact face to scrambled features resulted in attenuated N170 responses during change blindness. Other ERP deflections were modulated by changes, but unlike the N170 component, they were indifferent to the direction of the change. The bidirectional modulation of the N170 component during change blindness suggests that implicit change detection can also occur at the level of complex features in the case of facial stimuli.

  3. Impact of Social Cognition on Alcohol Dependence Treatment Outcome: Poorer Facial Emotion Recognition Predicts Relapse/Dropout.

    PubMed

    Rupp, Claudia I; Derntl, Birgit; Osthaus, Friederike; Kemmler, Georg; Fleischhacker, W Wolfgang

    2017-12-01

    Despite growing evidence for neurobehavioral deficits in social cognition in alcohol use disorder (AUD), the clinical relevance remains unclear, and little is known about its impact on treatment outcome. This study prospectively investigated the impact of neurocognitive social abilities at treatment onset on treatment completion. Fifty-nine alcohol-dependent patients were assessed with measures of social cognition including 3 core components of empathy via paradigms measuring: (i) emotion recognition (the ability to recognize emotions via facial expression), (ii) emotional perspective taking, and (iii) affective responsiveness at the beginning of inpatient treatment for alcohol dependence. Subjective measures were also obtained, including estimates of task performance and a self-report measure of empathic abilities (Interpersonal Reactivity Index). According to treatment outcomes, patients were divided into a patient group with a regular treatment course (e.g., with planned discharge and without relapse during treatment) or an irregular treatment course (e.g., relapse and/or premature and unplanned termination of treatment, "dropout"). Compared with patients completing treatment in a regular fashion, patients with relapse and/or dropout of treatment had significantly poorer facial emotion recognition ability at treatment onset. Additional logistic regression analyses confirmed these results and identified poor emotion recognition performance as a significant predictor of relapse/dropout. Self-report (subjective) measures did not correspond with the neurobehavioral social cognition measures, that is, with objective task performance. Analyses of individual subtypes of facial emotions revealed poorer recognition particularly of disgust, anger, and neutral (no emotion) faces in patients with relapse/dropout. Social cognition in AUD is clinically relevant. Less successful treatment outcome was associated with poorer facial emotion recognition ability at the beginning of treatment. Impaired facial emotion recognition represents a neurocognitive risk factor that should be taken into account in alcohol dependence treatment. Treatments targeting the improvement of these social cognition deficits in AUD may offer a promising future approach. Copyright © 2017 by the Research Society on Alcoholism.

  4. The effect of Ramadan fasting on spatial attention through emotional stimuli

    PubMed Central

    Molavi, Maziyar; Yunus, Jasmy; Utama, Nugraha P

    2016-01-01

    Fasting can influence psychological and mental states. In the current study, the effect of periodic fasting on the processing of emotion through gazed facial expressions, a realistic multisource form of social information, was investigated for the first time. The dynamic cue-target task was applied via behavioral and event-related potential measurements for 40 participants to reveal the temporal and spatial brain activities before, during, and after the fasting period. The significance of fasting included several effects. The amplitude of the N1 component decreased over the centroparietal scalp during fasting. Furthermore, the reaction time during the fasting period decreased. The self-measurement of deficit arousal, as well as mood, increased during the fasting period. There was a significant contralateral alteration of P1 over the occipital area for the happy facial expression stimuli. A significant effect of gazed expression and its interaction with the emotional stimuli was indicated by the amplitude of N1. Furthermore, the findings of the study confirmed the validity effect, that is, congruency between gaze and target position, as indicated by an increment of P3 amplitude over the centroparietal area as well as slower reaction times in the behavioral response data during the incongruent (invalid) condition between gaze and target position compared with the valid condition. Results of this study showed that attention to facial expression stimuli, a kind of communicative social signal, was affected by fasting. Also, fasting improved the mood of practitioners. Moreover, findings from the behavioral and event-related potential data analyses indicated that the neural dynamics of facial emotion are processed faster than those of gazing, as the participants tended to react faster and preferred to rely on the type of facial emotion rather than on gaze direction while doing the task. For happy facial expression stimuli, right-hemisphere activation was greater than that of the left hemisphere, consistent with the emotional lateralization concept rather than the valence concept of emotional processing. PMID:27307772

  5. Scalable Robust Principal Component Analysis Using Grassmann Averages.

    PubMed

    Hauberg, Søren; Feragen, Aasa; Enficiaud, Raffi; Black, Michael J

    2016-11-01

    In large datasets, manual data verification is impossible, and we must expect the number of outliers to increase with data size. While principal component analysis (PCA) can reduce data size, and scalable solutions exist, it is well-known that outliers can arbitrarily corrupt the results. Unfortunately, state-of-the-art approaches for robust PCA are not scalable. We note that in a zero-mean dataset, each observation spans a one-dimensional subspace, giving a point on the Grassmann manifold. We show that the average subspace corresponds to the leading principal component for Gaussian data. We provide a simple algorithm for computing this Grassmann Average (GA), and show that the subspace estimate is less sensitive to outliers than PCA for general distributions. Because averages can be efficiently computed, we immediately gain scalability. We exploit robust averaging to formulate the Robust Grassmann Average (RGA) as a form of robust PCA. The resulting Trimmed Grassmann Average (TGA) is appropriate for computer vision because it is robust to pixel outliers. The algorithm has linear computational complexity and minimal memory requirements. We demonstrate TGA for background modeling, video restoration, and shadow removal. We show scalability by performing robust PCA on the entire Star Wars IV movie, a task beyond any current method. Source code is available online.
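
    The central computation admits a very short sketch: treat each zero-mean observation as spanning a one-dimensional subspace and iteratively average those subspaces with sign alignment. The energy-weighted variant below follows the paper's description in spirit; the names, initialization, and stopping rule are illustrative, and the trimmed (TGA) version would replace the weighted sum with a trimmed average:

    ```python
    import numpy as np

    def grassmann_average(X, max_iter=100, tol=1e-10):
        """Leading subspace of the zero-mean rows of X as a Grassmann average."""
        Xc = X - X.mean(axis=0)
        norms = np.linalg.norm(Xc, axis=1)
        keep = norms > 0
        U, w = Xc[keep] / norms[keep, None], norms[keep]   # unit spans + weights
        q = U[0].copy()
        for _ in range(max_iter):
            signs = np.sign(U @ q)
            signs[signs == 0] = 1.0                        # break ties arbitrarily
            q_new = (w[:, None] * signs[:, None] * U).sum(axis=0)
            q_new /= np.linalg.norm(q_new)
            if abs(q_new @ q) > 1.0 - tol:                 # subspace has converged
                return q_new
            q = q_new
        return q
    ```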

  6. Facial emotion recognition system for autistic children: a feasible study based on FPGA implementation.

    PubMed

    Smitha, K G; Vinod, A P

    2015-11-01

    Children with autism spectrum disorder have difficulty in understanding the emotional and mental states from the facial expressions of the people they interact with. The inability to understand other people's emotions will hinder their interpersonal communication. Though many facial emotion recognition algorithms have been proposed in the literature, they are mainly intended for processing by a personal computer, which limits their usability in on-the-move applications where portability is desired. The portability of the system will ensure ease of use and real-time emotion recognition, which will aid immediate feedback while communicating with caretakers. Principal component analysis (PCA) has been identified as the least complex feature extraction algorithm to be implemented in hardware. In this paper, we present a detailed study of serial and parallel implementations of PCA in order to identify the most feasible method for realization of a portable emotion detector for autistic children. The proposed emotion recognizer architectures are implemented on a Virtex 7 XC7VX330T FFG1761-3 FPGA. We achieved 82.3% detection accuracy for a word length of 8 bits.
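
    Functionally, the PCA front end amounts to projecting each incoming face image onto a small basis learned offline; this is the computation the serial and parallel FPGA architectures realize in fixed-point arithmetic. A floating-point reference of that projection (the names and the choice of k are illustrative, not the paper's design) is:

    ```python
    import numpy as np

    def pca_basis(train_faces, k=20):
        """train_faces: (n_images, n_pixels) flattened training faces.
        Returns the mean face and the top-k principal directions."""
        mean = train_faces.mean(axis=0)
        _, _, Vt = np.linalg.svd(train_faces - mean, full_matrices=False)
        return mean, Vt[:k]

    def pca_features(face, mean, basis):
        """Project one flattened face into the k-dimensional feature space."""
        return basis @ (face - mean)
    ```

    The feature vector then feeds whatever classifier the recognizer uses; on hardware, the dominant cost is the matrix-vector product in pca_features, which is exactly what a serial versus parallel implementation trades off.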

  7. Anatomical evidence regarding the existence of sustentaculum facies.

    PubMed

    Frâncu, L L; Hînganu, Delia; Hînganu, M V

    2013-01-01

    The face, seen as a unitary region, is subject to gravitational force. Since it is the main relational and socialization region of each individual, it presents unique modes of suspension. The elevation system of the face is complex and includes four different elements: continuity with the epicranial fascia, the adhesion of superficial structures to the peri- and inter-orbital mimic muscles, ligamentous adhesions fixing the superficial layers to the zygomatic process, and attachment to the facial fat pad. Each of these four elements was evaluated on 12 cephalic extremities, dissected in detail, layer by layer, and the images were captured with an informatics system connected to an operating microscope. The acquired mesoscopic images revealed the presence of a superficial musculo-aponeurotic system (SMAS) through which the anti-gravity suspension of the superficial facial structures becomes possible. This system acts against facial aging, and the four elevation structures together form the so-called sustentaculum facies. The contribution of each of the four anatomic components and their handling in facial rejuvenation surgery are discussed here.

  8. Face aging effect simulation model based on multilayer representation and shearlet transform

    NASA Astrophysics Data System (ADS)

    Li, Yuancheng; Li, Yan

    2017-09-01

    In order to extract detailed facial features, we build a face aging effect simulation model based on multilayer representation and the shearlet transform. The face is divided into three layers: the global layer of the face, the local features layer, and the texture layer, and an aging model is established separately for each. First, the training samples are classified according to different age groups, and we use an active appearance model (AAM) at the global level to obtain facial features. The regression equations of shape and texture with age are obtained by fitting support vector machine regression based on the radial basis function, and we use the AAM to simulate the aging of facial organs. Then, for the texture detail layer, we acquire the significant high-frequency characteristic components of the face by using the multiscale shearlet transform. Finally, we obtain the simulated aging images of the human face by a fusion algorithm. Experiments are carried out on the FG-NET dataset, and the results show that the simulated face images differ little from the original images and achieve a good face aging simulation effect.

  9. Activity in the human brain predicting differential heart rate responses to emotional facial expressions.

    PubMed

    Critchley, Hugo D; Rotshtein, Pia; Nagai, Yoko; O'Doherty, John; Mathias, Christopher J; Dolan, Raymond J

    2005-02-01

    The James-Lange theory of emotion proposes that automatically generated bodily reactions not only color subjective emotional experience of stimuli, but also necessitate a mechanism by which these bodily reactions are differentially generated to reflect stimulus quality. To examine this putative mechanism, we simultaneously measured brain activity and heart rate to identify regions where neural activity predicted the magnitude of heart rate responses to emotional facial expressions. Using a forewarned reaction time task, we showed that orienting heart rate acceleration to emotional face stimuli was modulated as a function of the emotion depicted. The magnitude of evoked heart rate increase, both across the stimulus set and within each emotion category, was predicted by level of activity within a matrix of interconnected brain regions, including amygdala, insula, anterior cingulate, and brainstem. We suggest that these regions provide a substrate for translating visual perception of emotional facial expression into differential cardiac responses and thereby represent an interface for selective generation of visceral reactions that contribute to the embodied component of emotional reaction.

  10. Facial Fractures: Pearls and Perspectives.

    PubMed

    Chaudhry, Obaid; Isakson, Matthew; Franklin, Adam; Maqusi, Suhair; El Amm, Christian

    2018-05-01

    After studying this article, the participant should be able to: 1. Describe the A-frame configuration of anterior facial buttresses, recognize the importance of restoring anterior projection in frontal sinus fractures, and describe an alternative design and donor site of pericranial flaps in frontal sinus fractures. 2. Describe the symptoms and cause of pseudo-Brown syndrome, describe the anatomy and placement of a buttress-spanning plate in nasoorbitoethmoid fractures, and identify appropriate nasal support alternatives for nasoorbitoethmoid fractures. 3. Describe the benefits and disadvantages of different lower lid approaches to the orbital floor and inferior rim, identify late exophthalmos as a complication of reconstructing the orbital floor with nonporous alloplast, and select implant type and size for correction of secondary enophthalmos. 4. Describe closed reduction of low-energy zygomatic body fractures with the Gillies approach and identify situations where internal fixation may be unnecessary, identify situations where plating the inferior orbital rim may be avoided, and select fixation points for osteosynthesis of uncomplicated displaced zygomatic fractures. 5. Understand indications and complications of use for intermaxillary screw systems, understand sequencing panfacial fractures, describe the sulcular approach to mandible fractures, and describe principles and techniques of facial reconstruction after self-inflicted firearm injuries. Treating patients with facial trauma remains a core component of plastic surgery and a significant part of the value of a plastic surgeon to a health system.

  11. The neural correlates of internal and external comparisons: an fMRI study.

    PubMed

    Wen, Xue; Xiang, Yanhui; Cant, Jonathan S; Wang, Tingting; Cupchik, Gerald; Huang, Ruiwang; Mo, Lei

    2017-01-01

    Many previous studies have suggested that various comparisons rely on the same cognitive and neural mechanisms. However, little attention has been paid to exploring the commonalities and differences between the internal comparison based on concepts or rules and the external comparison based on perception. In the present experiment, moral beauty comparison and facial beauty comparison were selected as the representatives of internal comparison and external comparison, respectively. Functional magnetic resonance imaging (fMRI) was used to record brain activity while participants compared the level of moral beauty of two scene drawings containing moral acts or the level of facial beauty of two face photos. In addition, a physical size comparison task with the same stimuli as the beauty comparison was included. We observed that both the internal moral beauty comparison and external facial beauty comparison obeyed a typical distance effect and this behavioral effect recruited a common frontoparietal network involved in comparisons of simple physical magnitudes such as size. In addition, compared to external facial beauty comparison, internal moral beauty comparison induced greater activity in more advanced and complex cortical regions, such as the bilateral middle temporal gyrus and middle occipital gyrus, but weaker activity in the putamen, a subcortical region. Our results provide novel neural evidence for the comparative process and suggest that different comparisons may rely on both common cognitive processes as well as distinct and specific cognitive components.

  12. Effects of facial attractiveness on personality stimuli in an implicit priming task: an ERP study.

    PubMed

    Zhang, Yan; Zheng, Minxiao; Wang, Xiaoying

    2016-08-01

    Using event-related potentials (ERPs) in a priming paradigm, this study examines implicit priming in the association of personality words with facial attractiveness. A total of 16 participants (8 males and 8 females; age range, 19-24 years; mean age, 21.30 years) were asked to judge the color (red or green) of positive or negative personality words after exposure to priming stimuli (attractive and unattractive facial images). Positive personality words primed by attractive faces or negative personality words primed by unattractive faces were defined as congruent trials, whereas positive personality words primed by unattractive faces or negative personality words primed by attractive faces were defined as incongruent trials. Behavioral results showed that, compared with trials primed by unattractive faces, trials with attractive faces as the priming stimuli had longer reaction times and higher accuracy rates. Moreover, a more negative ERP deflection (N2 component) was observed in the ERPs of the incongruent condition than in those of the congruent condition. In addition, the personality words presented after attractive faces elicited larger amplitudes from the frontal region to the central region (P2 and P350-550 ms) compared with the personality words after unattractive faces as priming stimuli. The study provides evidence for the facial attractiveness stereotype ('What is beautiful is good') through an implicit priming task.

  13. External auditory canal cholesteatoma and keratosis obturans: the role of imaging in preventing facial nerve injury.

    PubMed

    McCoul, Edward D; Hanson, Matthew B

    2011-12-01

    We conducted a retrospective study to compare the clinical characteristics of external auditory canal cholesteatoma (EACC) with those of a similar entity, keratosis obturans (KO). We also sought to identify those aspects of each disease that may lead to complications. We identified 6 patients in each group. Imaging studies were reviewed for evidence of bony erosion and the proximity of disease to vital structures. All 6 patients in the EACC group had their diagnosis confirmed by computed tomography (CT), which demonstrated widening of the bony external auditory canal; 4 of these patients had critical erosion of bone adjacent to the facial nerve. Of the 6 patients with KO, only 2 had undergone CT, and neither exhibited any significant bony erosion or expansion; 1 of them developed osteomyelitis of the temporal bone and adjacent temporomandibular joint. Another patient manifested KO as part of a dermatophytid reaction. The essential component of treatment in all cases of EACC was microscopic debridement of the ear canal. We conclude that EACC may produce significant erosion of bone with exposure of vital structures, including the facial nerve. Because of the clinical similarity of EACC to KO, misdiagnosis is possible. Temporal bone imaging should be obtained prior to attempts at debridement of suspected EACC. Increased awareness of these uncommon conditions is warranted to prompt appropriate investigation and prevent iatrogenic complications such as facial nerve injury.

  14. Effect of empathy trait on attention to various facial expressions: evidence from N170 and late positive potential (LPP)

    PubMed Central

    2014-01-01

    Background: The present study sought to clarify the relationship between empathy trait and attention responses to happy, angry, surprised, afraid, and sad facial expressions. As indices of attention, we recorded event-related potentials (ERP) and focused on the N170 and late positive potential (LPP) components. Methods: Twenty-two participants (12 males, 10 females) discriminated facial expressions (happy, angry, surprised, afraid, and sad) from emotionally neutral faces under an oddball paradigm. The empathy trait of participants was measured using the Interpersonal Reactivity Index (IRI, J Pers Soc Psychol 44:113–126, 1983). Results: Participants with higher IRI scores showed: 1) more negative amplitude of N170 (140 to 200 ms) in the right posterior temporal area elicited by happy, angry, surprised, and afraid faces; 2) more positive amplitude of early LPP (300 to 600 ms) in the parietal area elicited in response to angry and afraid faces; and 3) more positive amplitude of late LPP (600 to 800 ms) in the frontal area elicited in response to happy, angry, surprised, afraid, and sad faces, compared to participants with lower IRI scores. Conclusions: These results suggest that individuals with high empathy pay attention to various facial expressions more than those with low empathy, from the very early stage (reflected in N170) to the late stage (reflected in LPP) of face processing. PMID:24975115

  15. Automatic mimicry reactions as related to differences in emotional empathy.

    PubMed

    Sonnby-Borgström, Marianne

    2002-12-01

    The hypotheses of this investigation were derived by conceiving of automatic mimicking as a component of emotional empathy. Differences between subjects high and low in emotional empathy were investigated. The parameters compared were facial mimicry reactions, as represented by electromyographic (EMG) activity when subjects were exposed to pictures of angry or happy faces, and the degree of correspondence between subjects' facial EMG reactions and their self-reported feelings. The comparisons were made at different stimulus exposure times in order to elicit reactions at different levels of information processing. The high-empathy subjects were found to have a higher degree of mimicking behavior than the low-empathy subjects, a difference that emerged at short exposure times (17-40 ms) that represented automatic reactions. The low-empathy subjects tended already at short exposure times (17-40 ms) to show inverse zygomaticus muscle reactions, namely "smiling" when exposed to an angry face. The high-empathy group was characterized by a significantly higher correspondence between facial expressions and self-reported feelings. No differences were found between the high- and low-empathy subjects in their verbally reported feelings when presented a happy or an angry face. Thus, the differences between the groups in emotional empathy appeared to be related to differences in automatic somatic reactions to facial stimuli rather than to differences in their conscious interpretation of the emotional situation.

  16. Physiology-based face recognition in the thermal infrared spectrum.

    PubMed

    Buddharaju, Pradeep; Pavlidis, Ioannis T; Tsiamyrtzis, Panagiotis; Bazakos, Mike

    2007-04-01

    The current dominant approaches to face recognition rely on facial characteristics that are on or over the skin. Some of these characteristics have low permanency, can be altered, and their phenomenology varies significantly with environmental factors (e.g., lighting). Many methodologies have been developed to address these problems to various degrees. However, the current framework of face recognition research has a potential weakness due to its very nature. We present a novel framework for face recognition based on physiological information. The motivation behind this effort is to capitalize on the permanency of innate characteristics that are under the skin. To establish feasibility, we propose a specific methodology to capture facial physiological patterns using the bioheat information contained in thermal imagery. First, the algorithm delineates the human face from the background using a Bayesian framework. Then, it localizes the superficial blood vessel network using image morphology. The extracted vascular network produces contour shapes that are characteristic of each individual. The branching points of the skeletonized vascular network are referred to as Thermal Minutia Points (TMPs) and constitute the feature database. To render the method robust to facial pose variations, we collect five different pose images (center, midleft profile, left profile, midright profile, and right profile) for each subject stored in the database. During the classification stage, the algorithm first estimates the pose of the test image. Then, it matches the local and global TMP structures extracted from the test image with those of the corresponding pose images in the database. We have conducted experiments on a multipose database of thermal facial images collected in our laboratory, as well as on the time-gap database of the University of Notre Dame. The experimental results show that the proposed methodology has merit, especially with respect to the problem of low permanence over time. More importantly, the results demonstrate the feasibility of the physiological framework in face recognition and open the way for further methodological and experimental research in the area.
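
    A minimal sketch of the branching-point (TMP) extraction step, assuming a binary vessel mask has already been segmented from the thermal image (the Bayesian face delineation and vessel segmentation steps are not shown); scikit-image and SciPy are assumed to be available:

    ```python
    import numpy as np
    from scipy.ndimage import convolve
    from skimage.morphology import skeletonize

    def thermal_minutia_points(vessel_mask):
        """Return (row, col) branching points of a skeletonized vascular network.

        vessel_mask: 2-D boolean array, True where a blood vessel was segmented
        from the thermal image (the segmentation itself is not shown here).
        """
        skeleton = skeletonize(vessel_mask)
        # Count the 8-connected skeleton neighbours of every skeleton pixel.
        kernel = np.array([[1, 1, 1],
                           [1, 0, 1],
                           [1, 1, 1]])
        neighbours = convolve(skeleton.astype(int), kernel, mode="constant")
        # A skeleton pixel with 3+ neighbours is a branching point (candidate TMP).
        return np.argwhere(skeleton & (neighbours >= 3))

    # Toy example: a T-shaped "vessel" produces one branching point near (4, 4).
    mask = np.zeros((9, 9), dtype=bool)
    mask[4, 1:8] = True   # horizontal vessel
    mask[1:5, 4] = True   # vertical vessel meeting it
    print(thermal_minutia_points(mask))
    ```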

  17. Principal component analysis of the Norwegian version of the quality of life in late-stage dementia scale.

    PubMed

    Mjørud, Marit; Kirkevold, Marit; Røsvik, Janne; Engedal, Knut

    2014-01-01

    To investigate which factors the Quality of Life in Late-Stage Dementia (QUALID) scale holds when used among people with dementia (pwd) in nursing homes and to find out how the symptom load varies across the different severity levels of dementia. We included 661 pwd [mean age ± SD, 85.3 ± 8.6 years; 71.4% women]. The QUALID and the Clinical Dementia Rating (CDR) scale were applied. A principal component analysis (PCA) with varimax rotation and Kaiser normalization was applied to test the factor structure. Nonparametric analyses were applied to examine differences in symptom load across the three CDR groups. The mean QUALID score was 21.5 (±7.1), and the CDR scores of the three groups were 1 in 22.5%, 2 in 33.6% and 3 in 43.9%. The results of the statistical measures employed were the following: Cronbach's α of QUALID, 0.74; Bartlett's test of sphericity, p < 0.001; the Kaiser-Meyer-Olkin measure, 0.77. The PCA resulted in three components accounting for 53% of the variance. The first component was 'tension' ('facial expression of discomfort', 'appears physically uncomfortable', 'verbalization suggests discomfort', 'being irritable and aggressive', 'appears calm', Cronbach's α = 0.69), the second was 'well-being' ('smiles', 'enjoys eating', 'enjoys touching/being touched', 'enjoys social interaction', Cronbach's α = 0.62) and the third was 'sadness' ('appears sad', 'cries', 'facial expression of discomfort', Cronbach's α = 0.65). The mean score on the components 'tension' and 'well-being' increased significantly with increasing severity levels of dementia. Three components of quality of life (qol) were identified. Qol decreased with increasing severity of dementia.
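
    A compact sketch of PCA with varimax rotation in NumPy, run here on a simulated stand-in for the 661 x 11 matrix of standardized QUALID item scores; the rotation routine follows the classic Kaiser (1958) iteration, and the data are not the study's:

    ```python
    import numpy as np

    def varimax(loadings, tol=1e-6, max_iter=100):
        """Varimax rotation of an (n_items x n_factors) loading matrix (Kaiser, 1958)."""
        p, k = loadings.shape
        R = np.eye(k)
        var_old = 0.0
        for _ in range(max_iter):
            L = loadings @ R
            # SVD of the gradient of the varimax criterion.
            u, s, vt = np.linalg.svd(
                loadings.T @ (L ** 3 - L * (L ** 2).sum(axis=0) / p)
            )
            R = u @ vt
            var_new = s.sum()
            if var_new < var_old * (1 + tol):
                break
            var_old = var_new
        return loadings @ R

    # Simulated stand-in for the 661 x 11 QUALID item matrix (z-scored).
    rng = np.random.default_rng(1)
    X = rng.normal(size=(661, 11))
    X = (X - X.mean(axis=0)) / X.std(axis=0)

    # PCA via the correlation matrix; keep 3 components as in the study.
    eigval, eigvec = np.linalg.eigh(np.corrcoef(X, rowvar=False))
    order = np.argsort(eigval)[::-1][:3]
    loadings = eigvec[:, order] * np.sqrt(eigval[order])   # component loadings
    rotated = varimax(loadings)
    print("variance explained:", eigval[order].sum() / eigval.sum())
    ```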

  18. Influence of Objective Three-Dimensional Measures and Movement Images on Surgeon Treatment Planning for Lip Revision Surgery

    PubMed Central

    Trotman, Carroll-Ann; Phillips, Ceib; Faraway, Julian J.; Hartman, Terry; van Aalst, John A.

    2013-01-01

    Objective To determine whether a systematic evaluation of facial soft tissues of patients with cleft lip and palate, using facial video images and objective three-dimensional measurements of movement, changes surgeons' treatment plans for lip revision surgery. Design Prospective longitudinal study. Setting The University of North Carolina School of Dentistry. Patients, Participants A group of patients with repaired cleft lip and palate (n = 21), a noncleft control group (n = 37), and surgeons experienced in cleft care. Interventions Lip revision. Main Outcome Measures (1) facial photographic images; (2) facial video images during animations; (3) objective three-dimensional measurements of upper lip movement based on z scores; and (4) objective dynamic and visual three-dimensional measurement of facial soft tissue movement. Results With the use of the video images plus objective three-dimensional measures, changes were made to the problem list of the surgical treatment plan for 86% of the patients (95% confidence interval, 0.64 to 0.97) and the surgical goals for 71% of the patients (95% confidence interval, 0.48 to 0.89). The surgeon group varied in the percentage of patients for whom the problem list was modified, ranging from 24% (95% confidence interval, 8% to 47%) to 48% (95% confidence interval, 26% to 70%) of patients, and the percentage for whom the surgical goals were modified, ranging from 14% (95% confidence interval, 3% to 36%) to 48% (95% confidence interval, 26% to 70%) of patients. Conclusions For all surgeons, the additional assessment components of the systematic evaluation resulted in a change in clinical decision making for some patients. PMID:23855676
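
    The reported intervals are consistent with exact (Clopper-Pearson) binomial confidence limits. A small sketch, assuming SciPy, that reproduces the first of them (86% of 21 patients corresponds to 18 of 21 problem lists changed):

    ```python
    from scipy.stats import beta

    def clopper_pearson(k, n, alpha=0.05):
        """Exact two-sided binomial confidence interval for k successes in n trials."""
        lo = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
        hi = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
        return lo, hi

    # 86% of 21 patients = 18/21 problem lists changed.
    lo, hi = clopper_pearson(18, 21)
    print(f"18/21 = {18/21:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")  # ~ (0.64, 0.97)
    ```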

  19. Quantified Facial Soft-tissue Strain in Animation Measured by Real-time Dynamic 3-Dimensional Imaging

    PubMed Central

    Hsu, Vivian M.; Wes, Ari M.; Tahiri, Youssef; Cornman-Homonoff, Joshua

    2014-01-01

    Background: The aim of this study is to evaluate and quantify dynamic soft-tissue strain in the human face using real-time 3-dimensional imaging technology. Methods: Thirteen subjects (8 women, 5 men) between the ages of 18 and 70 were imaged using a dual-camera system and 3-dimensional optical analysis (ARAMIS, Trilion Quality Systems, Pa.). Each subject was imaged at rest and with the following facial expressions: (1) smile, (2) laughter, (3) surprise, (4) anger, (5) grimace, and (6) pursed lips. The facial strains defining stretch and compression were computed for each subject and compared. Results: The areas of greatest strain were localized to the midface and lower face for all expressions. Subjects over the age of 40 had a statistically significant increase in stretch in the perioral region during lip pursing compared with subjects under the age of 40 (58.4% vs 33.8%, P = 0.015). When specific components of lip pursing were analyzed, there was a significantly greater degree of stretch in the nasolabial fold region in subjects over 40 compared with those under 40 (61.6% vs 32.9%, P = 0.007). Furthermore, we observed a greater degree of asymmetry of strain in the nasolabial fold region in the older age group (18.4% vs 5.4%, P = 0.03). Conclusions: This pilot study illustrates that the face can be objectively and quantitatively evaluated using dynamic major strain analysis. The technology of 3-dimensional optical imaging can be used to advance our understanding of facial soft-tissue dynamics and the effects of animation on facial strain over time. PMID:25426394
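
    A minimal sketch of how a major (principal) strain map can be derived from a dense displacement field, assuming the imaging system outputs per-pixel displacements on a regular grid; Green-Lagrange strain is used here, which is one common choice, not necessarily the vendor's:

    ```python
    import numpy as np

    def major_principal_strain(ux, uy, spacing=1.0):
        """Largest principal Green-Lagrange strain from a 2-D displacement field.

        ux, uy: displacement components (arrays of identical shape), e.g. rest -> smile.
        Positive values indicate stretch; negative values indicate compression.
        """
        # Displacement gradients (np.gradient returns derivatives along axis 0, then 1).
        duy_dy, duy_dx = np.gradient(uy, spacing)
        dux_dy, dux_dx = np.gradient(ux, spacing)
        # Deformation gradient F = I + grad(u), per pixel.
        F = np.empty(ux.shape + (2, 2))
        F[..., 0, 0] = 1 + dux_dx
        F[..., 0, 1] = dux_dy
        F[..., 1, 0] = duy_dx
        F[..., 1, 1] = 1 + duy_dy
        # Green-Lagrange strain E = (F^T F - I) / 2.
        C = np.einsum("...ki,...kj->...ij", F, F)
        E = 0.5 * (C - np.eye(2))
        # Principal strains are the eigenvalues of E at each pixel; take the largest.
        return np.linalg.eigvalsh(E)[..., -1]

    # Toy field: a uniform 10% horizontal stretch.
    y, x = np.mgrid[0:32, 0:32].astype(float)
    strain = major_principal_strain(0.10 * x, np.zeros_like(x))
    print(strain.mean())  # ~0.105 for a 10% stretch (Green-Lagrange measure)
    ```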

  20. Emotion recognition impairment and apathy after subthalamic nucleus stimulation in Parkinson's disease have separate neural substrates.

    PubMed

    Drapier, D; Péron, J; Leray, E; Sauleau, P; Biseul, I; Drapier, S; Le Jeune, F; Travers, D; Bourguignon, A; Haegelen, C; Millet, B; Vérin, M

    2008-09-01

    To test the hypothesis that emotion recognition and apathy share the same functional circuit involving the subthalamic nucleus (STN). A consecutive series of 17 patients with advanced Parkinson's disease (PD) was assessed 3 months before (M-3) and 3 months after (M+3) STN deep brain stimulation (DBS). Mean (+/-S.D.) age at surgery was 56.9 (8.7) years. Mean disease duration at surgery was 11.8 (2.6) years. Apathy was measured using the Apathy Evaluation Scale (AES) at both M-3 and M+3. Patients were also assessed using a computerised paradigm of facial emotion recognition [Ekman, P., & Friesen, W. V. (1976). Pictures of facial affect. Palo Alto: Consulting Psychologists Press] before and after STN DBS. Prior to this, the Benton Facial Recognition Test was used to check that the ability to perceive faces was intact. Apathy had significantly worsened at M+3 (42.5+/-8.9, p=0.006) after STN DBS, relative to the preoperative assessment (37.2+/-5.5). There was also a significant reduction in recognition percentages for facial expressions of fear (43.1%+/-22.9 vs. 61.6%+/-21.4, p=0.022) and sadness (52.7%+/-19.1 vs. 67.6%+/-22.8, p=0.031) after STN DBS. However, the postoperative worsening of apathy and the emotion recognition impairment were not correlated. Our results confirm that the STN is involved in both the apathy and emotion recognition networks. However, the absence of any correlation between apathy and emotion recognition impairment suggests that the worsening of apathy following surgery could not be explained by a lack of facial emotion recognition and that its behavioural and cognitive components should therefore also be taken into consideration.

  1. Facial growth and development in unilateral cleft lip and palate from the time of palatoplasty to the onset of puberty: a longitudinal study.

    PubMed

    Smahel, Z; Müllerová, Z

    1995-01-01

    X-ray cephalometry was used for the assessment of facial growth and development from the time of palate surgery to the onset of puberty (from 5 to 11 years) in 24 boys with unilateral cleft lip and palate treated with primary periosteoplasty (at 8 months) and palatal pushback supplemented by pharyngeal flap surgery (at 5 years). The depth of the maxilla and the height of the upper lip showed the least growth. An increasing protrusion of the mandible and in particular the increasing retrusion of the maxilla resulted in a flattening of the face and in an impairment of sagittal jaw relations. However, it was possible to attain an improvement of overjet, produced by a substantial increase of the proclination of the upper incisors and of the alveolar process. There was a deterioration of the prominence of the upper lip. Anterior growth rotation was absent during the development of the face, though a rotation in both directions was quite common in individual cases. The steepness of the mandibular body, vertical jaw relations, and facial vertical proportions remained unchanged. As compared to the pubertal period, growth and development differed only by a more marked proclination of the dentoalveolar component of the maxilla and by an improvement of overjet. Facial convexity and sagittal jaw relations deteriorated in more than 90% of the patients, the overjet in only 20%, yet the prominence of the lip in 70%. Facial convexity and sagittal jaw relations were not correlated with mandibular rotation, but they affected the overjet and the prominence of the upper lip. (ABSTRACT TRUNCATED AT 250 WORDS)

  2. Implicit Processing of the Eyes and Mouth: Evidence from Human Electrophysiology

    PubMed Central

    Pesciarelli, Francesca; Leo, Irene; Sarlo, Michela

    2016-01-01

    The current study examined the time course of implicit processing of distinct facial features and the associated event-related potential (ERP) components. To this end, we used a masked priming paradigm to investigate implicit processing of the eyes and mouth in upright and inverted faces, using a prime duration of 33 ms. Two types of prime-target pairs were used: 1. congruent (e.g., open eyes only in both prime and target or open mouth only in both prime and target); 2. incongruent (e.g., open mouth only in prime and open eyes only in target or open eyes only in prime and open mouth only in target). The identity of the faces changed between prime and target. Participants pressed a button when the target face had the eyes open and another button when the target face had the mouth open. The behavioral results showed faster RTs for the eyes in upright faces than for the eyes in inverted faces and for the mouth in upright and inverted faces. Moreover, they revealed a congruent priming effect for the mouth in upright faces. The ERP findings showed a face orientation effect across all ERP components studied (P1, N1, N170, P2, N2, P3) starting at about 80 ms, and a congruency/priming effect on late components (P2, N2, P3), starting at about 150 ms. Crucially, the results showed that the orientation effect was driven by the eye region (N170, P2) and that the congruency effect started earlier (P2) for the eyes than for the mouth (N2). These findings mark the time course of the processing of internal facial features and provide further evidence that the eyes are automatically processed and that they are very salient facial features that strongly affect the amplitude, latency, and distribution of neural responses to faces. PMID:26790153

  4. Time course of implicit processing and explicit processing of emotional faces and emotional words.

    PubMed

    Frühholz, Sascha; Jellinghaus, Anne; Herrmann, Manfred

    2011-05-01

    Facial expressions are important emotional stimuli during social interactions. Symbolic emotional cues, such as affective words, also convey information regarding emotions that is relevant for social communication. Various studies have demonstrated fast decoding of emotions from words, as was shown for faces, whereas others report a rather delayed decoding of information about emotions from words. Here, we introduced an implicit task (color naming) and an explicit task (emotion judgment) with facial expressions and words, both containing information about emotions, to directly compare the time course of emotion processing using event-related potentials (ERP). The data show that only negative faces affected task performance, resulting in increased error rates compared to neutral faces. Presentation of emotional faces resulted in a modulation of the N170, the EPN and the LPP components, and these modulations were found during both the explicit and implicit tasks. Emotional words only affected the EPN during the explicit task, but a task-independent effect on the LPP was revealed. Finally, emotional faces modulated source activity in the extrastriate cortex underlying the generation of the N170, EPN and LPP components. Emotional words led to a modulation of source activity corresponding to the EPN and LPP, but they also affected the N170 source on the right hemisphere. These data show that facial expressions affect earlier stages of emotion processing compared to emotional words, but the emotional value of words may have been detected at early stages of emotional processing in the visual cortex, as was indicated by the extrastriate source activity.

  5. Art critic: Multisignal vision and speech interaction system in a gaming context.

    PubMed

    Reale, Michael J; Liu, Peng; Yin, Lijun; Canavan, Shaun

    2013-12-01

    True immersion of a player within a game can only occur when the simulated world looks and behaves as close to reality as possible. This implies that the game must correctly read and understand, among other things, the player's focus, attitude toward the objects/persons in focus, gestures, and speech. In this paper, we propose a novel system that integrates eye gaze estimation, head pose estimation, facial expression recognition, speech recognition, and text-to-speech components for use in real-time games. Both the eye gaze and head pose components utilize underlying 3-D models, and our novel head pose estimation algorithm uniquely combines scene flow with a generic head model. The facial expression recognition module uses the local binary patterns with three orthogonal planes approach on the 2-D shape index domain rather than the pixel domain, resulting in improved classification. Our system has also been extended to use a pan-tilt-zoom camera driven by the Kinect, allowing us to track a moving player. A test game, Art Critic, is also presented, which not only demonstrates the utility of our system but also provides a template for player/non-player character (NPC) interaction in a gaming context. The player alters his/her view of the 3-D world using head pose, looks at paintings/NPCs using eye gaze, and makes an evaluation based on the player's expression and speech. The NPC artist will respond with facial expression and synthetic speech based on its personality. Both qualitative and quantitative evaluations of the system are performed to illustrate the system's effectiveness.

  6. Phenotypic Robustness and the Assortativity Signature of Human Transcription Factor Networks

    PubMed Central

    Pechenick, Dov A.; Payne, Joshua L.; Moore, Jason H.

    2014-01-01

    Many developmental, physiological, and behavioral processes depend on the precise expression of genes in space and time. Such spatiotemporal gene expression phenotypes arise from the binding of sequence-specific transcription factors (TFs) to DNA, and from the regulation of nearby genes that such binding causes. These nearby genes may themselves encode TFs, giving rise to a transcription factor network (TFN), wherein nodes represent TFs and directed edges denote regulatory interactions between TFs. Computational studies have linked several topological properties of TFNs — such as their degree distribution — with the robustness of a TFN's gene expression phenotype to genetic and environmental perturbation. Another important topological property is assortativity, which measures the tendency of nodes with similar numbers of edges to connect. In directed networks, assortativity comprises four distinct components that collectively form an assortativity signature. We know very little about how a TFN's assortativity signature affects the robustness of its gene expression phenotype to perturbation. While recent theoretical results suggest that increasing one specific component of a TFN's assortativity signature leads to increased phenotypic robustness, the biological context of this finding is currently limited because the assortativity signatures of real-world TFNs have not been characterized. It is therefore unclear whether these earlier theoretical findings are biologically relevant. Moreover, it is not known how the other three components of the assortativity signature contribute to the phenotypic robustness of TFNs. Here, we use publicly available DNaseI-seq data to measure the assortativity signatures of genome-wide TFNs in 41 distinct human cell and tissue types. We find that all TFNs share a common assortativity signature and that this signature confers phenotypic robustness to model TFNs. Lastly, we determine the extent to which each of the four components of the assortativity signature contributes to this robustness. PMID:25121490
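
    The four-component assortativity signature of a directed network can be computed directly with NetworkX; the random directed graph below is only a stand-in for a real TFN built from DNaseI-seq data:

    ```python
    import networkx as nx

    def assortativity_signature(G):
        """Four degree-correlation components of a directed network:
        (out,in), (out,out), (in,in), (in,out)."""
        return {
            (x, y): nx.degree_assortativity_coefficient(G, x=x, y=y)
            for x in ("out", "in")
            for y in ("out", "in")
        }

    # Stand-in for a transcription factor network: a random directed graph.
    G = nx.gnp_random_graph(200, 0.03, directed=True, seed=42)
    for (x, y), r in assortativity_signature(G).items():
        print(f"r({x} -> {y}) = {r:+.3f}")
    ```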

  7. Robust Head-Pose Estimation Based on Partially-Latent Mixture of Linear Regressions.

    PubMed

    Drouard, Vincent; Horaud, Radu; Deleforge, Antoine; Ba, Sileye; Evangelidis, Georgios

    2017-03-01

    Head-pose estimation has many applications, such as social event analysis, human-robot and human-computer interaction, driving assistance, and so forth. Head-pose estimation is challenging, because it must cope with changing illumination conditions, variabilities in face orientation and in appearance, partial occlusions of facial landmarks, as well as bounding-box-to-face alignment errors. We propose to use a mixture of linear regressions with partially-latent output. This regression method learns to map high-dimensional feature vectors (extracted from bounding boxes of faces) onto the joint space of head-pose angles and bounding-box shifts, such that they are robustly predicted in the presence of unobservable phenomena. We describe in detail the mapping method that combines the merits of unsupervised manifold learning techniques and of mixtures of regressions. We validate our method with three publicly available data sets and we thoroughly benchmark four variants of the proposed algorithm with several state-of-the-art head-pose estimation methods.
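
    The paper's model is a partially-latent-output mixture; the sketch below implements only a plain mixture of linear regressions trained with EM (isotropic noise), to illustrate the core idea rather than reproduce the method:

    ```python
    import numpy as np

    def mixture_linreg_em(X, Y, K=3, n_iter=50, seed=0):
        """EM for a mixture of K linear regressions Y ~ W_k X + b_k with isotropic noise.

        A didactic sketch only: the paper's model additionally treats part of
        the output as latent. X: (n, d) features; Y: (n, m) targets.
        """
        rng = np.random.default_rng(seed)
        n, d = X.shape
        m = Y.shape[1]
        Xb = np.hstack([X, np.ones((n, 1))])           # absorb the bias term
        W = rng.normal(scale=0.1, size=(K, d + 1, m))  # per-component regression params
        pi = np.full(K, 1.0 / K)                       # mixing proportions
        sigma2 = np.ones(K)                            # per-component noise variance
        for _ in range(n_iter):
            # E-step: responsibilities from Gaussian residual log-likelihoods.
            logp = np.empty((n, K))
            for k in range(K):
                resid = Y - Xb @ W[k]
                logp[:, k] = (np.log(pi[k])
                              - 0.5 * m * np.log(2 * np.pi * sigma2[k])
                              - 0.5 * (resid ** 2).sum(axis=1) / sigma2[k])
            logp -= logp.max(axis=1, keepdims=True)
            R = np.exp(logp)
            R /= R.sum(axis=1, keepdims=True)
            # M-step: weighted least squares per component.
            for k in range(K):
                A = Xb * R[:, k:k + 1]
                W[k] = np.linalg.solve(Xb.T @ A + 1e-8 * np.eye(d + 1), A.T @ Y)
                resid = Y - Xb @ W[k]
                sigma2[k] = ((R[:, k] * (resid ** 2).sum(axis=1)).sum()
                             / (m * R[:, k].sum()) + 1e-10)
            pi = R.mean(axis=0)
        return W, pi, R

    # Toy usage: two linear regimes recovered from unlabeled (x, y) pairs.
    rng = np.random.default_rng(1)
    X = rng.uniform(-1, 1, size=(400, 1))
    Y = np.where(X > 0, 3 * X + 1, -2 * X) + 0.05 * rng.normal(size=(400, 1))
    W, pi, R = mixture_linreg_em(X, Y, K=2)
    print(pi.round(2))
    ```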

  8. Comparison of fingerprint and facial biometric verification technologies for user access and patient identification in a clinical environment

    NASA Astrophysics Data System (ADS)

    Guo, Bing; Zhang, Yu; Documet, Jorge; Liu, Brent; Lee, Jasper; Shrestha, Rasu; Wang, Kevin; Huang, H. K.

    2007-03-01

    As clinical imaging and informatics systems continue to integrate the healthcare enterprise, the need to prevent patient mis-identification and unauthorized access to clinical data becomes more apparent, especially under the Health Insurance Portability and Accountability Act (HIPAA) mandate. Last year, we presented a system to track and verify patients and staff within a clinical environment. This year, we further address the biometric verification component in order to determine which biometric system is the optimal solution for given applications in the complex clinical environment. We installed two biometric identification systems, fingerprint and facial recognition, at an outpatient imaging facility, Healthcare Consultation Center II (HCCII). We evaluated each solution and documented the advantages and pitfalls of each biometric technology in this clinical environment.

  9. Multilevel robustness

    NASA Astrophysics Data System (ADS)

    Girard, Henri-Louis; Khan, Sami; Varanasi, Kripa K.

    2018-03-01

    A combination of hard, soft and nanoscale organic components results in robust superhydrophobic surfaces that can withstand mechanical abrasion and chemical oxidation, and exhibit excellent substrate adhesion.

  10. Comparison of Neurovascular Characteristics of Facial Skin in Patients After Primary and Revision Rhytidectomies.

    PubMed

    Ardeshirpour, Farhad; Hurliman, Elisabeth; Wendelschafer-Crabb, Gwen; McAdams, Brian; Hilger, Peter A; Kennedy, William R; Lassig, Amy Anne D; Brenner, Michael J

    2017-09-01

    Wound healing influences both the cosmetic and functional outcomes of facial surgery. Study of cutaneous innervation may afford insight into patients' preoperative wound healing potential and aid in their selection of appropriate surgical procedures. To present the quantitative and qualitative differences of epidermal nerve fibers (ENFs), neurotransmitters, vasculature, and mast cells in facial skin among patients after primary and revision rhytidectomies. This pilot study collected cutaneous specimens from 8 female patients aged 42 to 66 years who underwent primary rhytidectomy (n = 5) and revision rhytidectomy (n = 3) at Centennial Lakes Surgery Center, Edina, Minnesota, from July 2010 to March 2014. Tissue was processed for confocal/epifluorescence microscopy and indirect immunofluorescent localization of several neural and tissue antigens as well as basement membrane and mast cell markers. Primary vs revision rhytidectomy, with selection for ENF study of a small area of redundant tissue anterior to the tragus that would otherwise have been discarded. Demographic characteristics included smoking status; 10-point rating scales for facial sensation, pain, and paresthesias; and confocal/epifluorescence microscopy to quantify ENFs, neurotransmitters, vasculature, and mast cells. Patients in the primary rhytidectomy group had a mean (SD) of 54.4 (31.6) ENFs/mm (range, 14.2-99.2 ENFs/mm), and those in the revision rhytidectomy group had a mean (SD) of 18.6 (5.8) ENFs/mm (range, 13.8-25.0 ENFs/mm). A patient in the primary rhytidectomy group was a 25-pack-year smoker and had 14.2 ENFs/mm, the lowest in both groups. In addition to these structural neural changes, functional neural changes in revision rhytidectomy samples included qualitative changes in normal neural antigen prevalence (substance P, calcitonin gene-related peptide, and vasoactive intestinal peptide). Capillary loops appeared less robust and were less common in dermal papillae among samples from both the primary and revision groups, and mast cells were more degranulated. No differences were found in subjective, self-reported postoperative facial sensation. Previous skin elevation was associated with decreased epidermal nerve fiber density and qualitative changes in dermal nerves, capillaries, and mast cells in a clinical sample of patients undergoing rhytidectomy. Future research is needed to determine whether histological findings predict wound healing and to better understand the effects of surgery on the regenerative capacity of epidermal nerve fibers.

  11. Robust emergence of a topological Hall effect in MnGa/heavy metal bilayers

    NASA Astrophysics Data System (ADS)

    Meng, K. K.; Zhao, X. P.; Liu, P. F.; Liu, Q.; Wu, Y.; Li, Z. P.; Chen, J. K.; Miao, J.; Xu, X. G.; Zhao, J. H.; Jiang, Y.

    2018-02-01

    We have investigated the topological Hall effect (THE) in MnGa/Pt and MnGa/Ta bilayers induced by the interfacial Dzyaloshinskii-Moriya interaction (DMI). By varying the growth parameters, we can modulate the domain wall energy, and the largest THE signals are found when the domain wall energy is the smallest. The large topological portion of the Hall signal has been extracted from the total Hall signal over the whole temperature range from 5 to 300 K. These results open up the exploration of DMI-induced magnetic behavior based on bulk perpendicular magnetic anisotropy materials for fundamental physics and magnetic storage technologies.

  12. Review: Deciphering animal robustness. A synthesis to facilitate its use in livestock breeding and management.

    PubMed

    Friggens, N C; Blanc, F; Berry, D P; Puillet, L

    2017-12-01

    As the environments in which livestock are reared become more variable, animal robustness becomes an increasingly valuable attribute. Consequently, there is increasing focus on managing and breeding for it. However, robustness is a difficult phenotype to properly characterise because it is a complex trait composed of multiple components, including dynamic elements such as the rates of response to, and recovery from, environmental perturbations. In this review, the following definition of robustness is used: the ability, in the face of environmental constraints, to carry on doing the various things that the animal needs to do to favour its future ability to reproduce. The different elements of this definition are discussed to provide a clearer understanding of the components of robustness. The implication for quantifying robustness is that there is no single measure of robustness; rather, it is the combination of multiple and interacting component mechanisms whose relative value is context dependent. This context encompasses both the prevailing environment and the prevailing selection pressure. One key issue for measuring robustness is to be clear on the use to which the robustness measurements will be put. If the purpose is to identify biomarkers that may be useful for molecular phenotyping or genotyping, the measurements should focus on the physiological mechanisms underlying robustness. However, if the purpose of measuring robustness is to quantify the extent to which animals can adapt to limiting conditions, then the measurements should focus on the life functions, the trade-offs between them and the animal's capacity to increase resource acquisition. The time-related aspect of robustness also has important implications. Single time-point measurements are of limited value because they do not permit measurement of responses to (and recovery from) environmental perturbations. The exception is single measurements of the accumulated consequences of a good (or bad) adaptive capacity, such as productive longevity and lifetime efficiency. In contrast, repeated measurements over time have a high potential for quantification of the animal's ability to cope with environmental challenges. Thus, we should be able to quantify differences in adaptive capacity from the data that are increasingly becoming available with the deployment of automated monitoring technology on farms. The challenge for future management and breeding will be how to combine various proxy measures to obtain reliable estimates of robustness components in large populations. A key aspect of achieving this is to define phenotypes from consideration of their biological properties and not just from available measures.

  13. Bell's Palsy.

    PubMed

    Reich, Stephen G

    2017-04-01

    Bell's palsy is a common outpatient problem, and while the diagnosis is usually straightforward, a number of diagnostic pitfalls can occur, and a lengthy differential diagnosis exists. Recognition and management of Bell's palsy relies on knowledge of the anatomy and function of the various motor and nonmotor components of the facial nerve. Avoiding diagnostic pitfalls relies on recognizing red flags or features atypical for Bell's palsy, suggesting an alternative cause of peripheral facial palsy. The first American Academy of Neurology (AAN) evidence-based review on the treatment of Bell's palsy in 2001 concluded that corticosteroids were probably effective and that the antiviral acyclovir was possibly effective in increasing the likelihood of a complete recovery from Bell's palsy. Subsequent studies led to a revision of these recommendations in the 2012 evidence-based review, concluding that corticosteroids, when used shortly after the onset of Bell's palsy, were "highly likely" to increase the probability of recovery of facial weakness and should be offered; the addition of an antiviral to steroids may increase the likelihood of recovery but, if so, only by a very modest effect. Bell's palsy is characterized by the spontaneous acute onset of unilateral peripheral facial paresis or palsy in isolation, meaning that no features from the history, neurologic examination, or head and neck examination suggest a specific or alternative cause. In this setting, no further testing is necessary. Even without treatment, the outcome of Bell's palsy is favorable, but treatment with corticosteroids significantly increases the likelihood of improvement.

  14. Beauty is in the ease of the beholding: A neurophysiological test of the averageness theory of facial attractiveness

    PubMed Central

    Trujillo, Logan T.; Jankowitsch, Jessica M.; Langlois, Judith H.

    2014-01-01

    Multiple studies show that people prefer attractive over unattractive faces. But what is an attractive face and why is it preferred? Averageness theory claims that faces are perceived as attractive when their facial configuration approximates the mathematical average facial configuration of the population. Conversely, faces that deviate from this average configuration are perceived as unattractive. The theory predicts that both attractive and mathematically averaged faces should be processed more fluently than unattractive faces, whereas the averaged faces should be processed marginally more fluently than the attractive faces. We compared neurocognitive and behavioral responses to attractive, unattractive, and averaged human faces to test these predictions. We recorded event-related potentials (ERPs) and reaction times (RTs) from 48 adults while they discriminated between human and chimpanzee faces. Participants categorized averaged and high attractive faces as “human” faster than low attractive faces. The posterior N170 (150 – 225 ms) face-evoked ERP component was smaller in response to high attractive and averaged faces versus low attractive faces. Single-trial EEG analysis indicated that this reduced ERP response arose from the engagement of fewer neural resources and not from a change in the temporal consistency of how those resources were engaged. These findings provide novel evidence that faces are perceived as attractive when they approximate a facial configuration close to the population average and suggest that processing fluency underlies preferences for attractive faces. PMID:24326966
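
    A small sketch of the central quantity in averageness theory: the mean landmark configuration and each face's distance from it. Alignment is reduced to centering and scale normalization here (a full Procrustes alignment would also remove rotation), and the landmark data are simulated:

    ```python
    import numpy as np

    def normalize(shape):
        """Center an (n_landmarks, 2) configuration and scale it to unit norm."""
        s = shape - shape.mean(axis=0)
        return s / np.linalg.norm(s)

    def averageness(shapes):
        """Distance of each face's landmark configuration from the sample average.

        shapes: (n_faces, n_landmarks, 2). A smaller distance means the face is
        closer to the mathematical average configuration, i.e. more 'average'.
        """
        aligned = np.stack([normalize(s) for s in shapes])
        mean_shape = normalize(aligned.mean(axis=0))
        return np.linalg.norm(aligned - mean_shape, axis=(1, 2))

    # Toy data: 48 faces x 68 landmarks = a shared base shape plus per-face noise.
    rng = np.random.default_rng(7)
    faces = rng.normal(size=(48, 68, 2)) * 0.05 + rng.normal(size=(68, 2))
    print(averageness(faces)[:5])
    ```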

  15. Stable vortex-bright-soliton structures in two-component Bose-Einstein condensates.

    PubMed

    Law, K J H; Kevrekidis, P G; Tuckerman, Laurette S

    2010-10-15

    We report the numerical realization of robust two-component structures in 2D and 3D Bose-Einstein condensates with nontrivial topological charge in one component. We identify a stable symbiotic state in which a higher-dimensional bright soliton exists even in a homogeneous setting with defocusing interactions, due to the effective potential created by a stable vortex in the other component. The resulting vortex-bright-solitons, generalizations of the recently experimentally observed dark-bright solitons, are found to be very robust both in the homogeneous medium and in the presence of external confinement.

  16. Strength Is in Numbers: Can Concordant Artificial Listeners Improve Prediction of Emotion from Speech?

    PubMed Central

    Martinelli, Eugenio; Mencattini, Arianna; Di Natale, Corrado

    2016-01-01

    Humans can communicate their emotions by modulating facial expressions or the tone of their voice. Albeit numerous applications exist that enable machines to read facial emotions and recognize the content of verbal messages, methods for speech emotion recognition are still in their infancy. Yet, fast and reliable applications for emotion recognition are the obvious advancement of present ‘intelligent personal assistants’, and may have countless applications in diagnostics, rehabilitation and research. Taking inspiration from the dynamics of human group decision-making, we devised a novel speech emotion recognition system that applies, for the first time, a semi-supervised prediction model based on consensus. Three tests were carried out to compare this algorithm with traditional approaches. Labeling performances relative to a public database of spontaneous speeches are reported. The novel system appears to be fast, robust and less computationally demanding than traditional methods, allowing for easier implementation in portable voice-analyzers (as used in rehabilitation, research, industry, etc.) and for applications in the research domain (such as real-time pairing of stimuli to participants’ emotional state, selective/differential data collection based on emotional content, etc.). PMID:27563724

  17. Childhood Cumulative Risk Exposure and Adult Amygdala Volume and Function

    PubMed Central

    Evans, Gary W.; Swain, James E.; King, Anthony P.; Wang, Xin; Javanbakht, Arash; Ho, S. Shaun; Angstadt, Michael; Phan, K. Luan; Xie, Hong; Liberzon, Israel

    2015-01-01

    Considerable work indicates that early cumulative risk exposure is aversive to human development, but very little research has examined the neurological underpinnings of these robust findings. We investigated amygdala volume and reactivity to facial stimuli among adults (M = 23.7 years, n = 54) as a function of cumulative risk exposure during childhood (ages 9 and 13). In addition, we tested whether the expected risk-related elevations in amygdala volume would mediate functional reactivity of the amygdala during socio-emotional processing. Risks included substandard housing quality, noise, crowding, family turmoil, child separation from family, and violence. Both total and left-hemisphere adult amygdala volumes were positively related to cumulative risk exposure during childhood. The links between childhood cumulative risk exposure and elevated amygdala responses to emotionally neutral facial stimuli in adulthood were mediated by the respective amygdala volumes. Cumulative risk exposure in later adolescence (17 years), however, was unrelated to subsequent, adult amygdala volume or function. Physical and socioemotional risk exposures early in life appear to alter amygdala development, rendering adults more reactive to ambiguous stimuli such as neutral faces. These stress-related differences in childhood amygdala development might contribute to well-documented psychological distress as a function of early risk exposure. PMID:26469872
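
    A minimal sketch of the regression-based mediation logic implied here (risk -> amygdala volume -> reactivity), with a percentile-bootstrap interval for the indirect effect; all variable names and data below are placeholders, not the study's data:

    ```python
    import numpy as np

    def indirect_effect(x, m, y):
        """a*b indirect effect from two OLS fits: m ~ x, then y ~ x + m."""
        a = np.polyfit(x, m, 1)[0]                   # path a: predictor -> mediator
        X = np.column_stack([np.ones_like(x), x, m])
        b = np.linalg.lstsq(X, y, rcond=None)[0][2]  # path b: mediator -> outcome
        return a * b

    def bootstrap_ci(x, m, y, n_boot=5000, seed=0):
        """Percentile bootstrap interval for the indirect effect."""
        rng = np.random.default_rng(seed)
        n = len(x)
        est = np.empty(n_boot)
        for i in range(n_boot):
            idx = rng.integers(0, n, n)              # resample participants
            est[i] = indirect_effect(x[idx], m[idx], y[idx])
        return np.percentile(est, [2.5, 97.5])

    # Placeholder data (n = 54): childhood risk, amygdala volume, amygdala reactivity.
    rng = np.random.default_rng(3)
    risk = rng.normal(size=54)
    volume = 0.5 * risk + rng.normal(scale=0.8, size=54)
    reactivity = 0.6 * volume + rng.normal(scale=0.8, size=54)
    print("indirect effect:", round(indirect_effect(risk, volume, reactivity), 3))
    print("95% bootstrap CI:", bootstrap_ci(risk, volume, reactivity).round(3))
    ```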

  18. A Multidisciplinary Approach to Research in Small-Scale Societies: Studying Emotions and Facial Expressions in the Field

    PubMed Central

    Crivelli, Carlos; Jarillo, Sergio; Fridlund, Alan J.

    2016-01-01

    Although cognitive science was multidisciplinary from the start, an under-emphasis on anthropology has left the field with limited research in small scale, indigenous societies. Neglecting the anthropological perspective is risky, given that once-canonical cognitive science findings have often been shown to be artifacts of enculturation rather than cognitive universals. This imbalance has become more problematic as the increased use of Western theory-driven approaches, many of which assume human uniformity (“universality”), confronts the absence of a robust descriptive base that might provide clarifying or even contrary evidence. We highlight the need for remedies to such shortcomings by suggesting a two-fold methodological shift. First, studies conducted in indigenous societies can benefit by relying on multidisciplinary research groups to diminish ethnocentrism and enhance the quality of the data. Second, studies devised for Western societies can readily be adapted to the changing settings encountered in the field. Here, we provide examples, drawn from the areas of emotion and facial expressions, to illustrate potential solutions to recurrent problems in enhancing the quality of data collection, hypothesis testing, and the interpretation of results. PMID:27486420

  19. Mechanisms of palatal epithelial seam disintegration by Transforming Growth Factor (TGF)-β3

    PubMed Central

    Ahmed, Shaheen; Liu, Chang-Chih; Nawshad, Ali

    2007-01-01

    TGFβ3 signaling initiates and completes the sequential phases of cellular differentiation required for complete disintegration of the palatal medial edge seam (MES), which progresses between embryonic days 14 and 17 in the murine system and is necessary for establishing confluence of the palatal stroma. Understanding the cellular mechanism of palatal MES disintegration in response to TGFβ3 signaling will result in new approaches to defining the causes of cleft palate and other facial clefts that may result from failure of seam disintegration. We have isolated MES primary cells to study the details of the MES disintegration mechanism driven by TGFβ3 during palate development, using several biochemical and genetic approaches. Our results demonstrate a novel mechanism of MES disintegration in which the MES, independently yet sequentially, undergoes cell cycle arrest, cell migration and apoptosis to generate complete palatal confluence during palatogenesis in response to robust TGFβ3 signaling. The results contribute a missing fundamental element to our base knowledge of the diverse roles of TGFβ3 in the functional and morphological changes that the MES undergoes during palatal seam disintegration. We believe that our findings will lead to more effective treatment of facial clefting. PMID:17698055

  20. Robust face alignment under occlusion via regional predictive power estimation.

    PubMed

    Yang, Heng; He, Xuming; Jia, Xuhui; Patras, Ioannis

    2015-08-01

    Face alignment has been well studied in recent years; however, when a face alignment model is applied to facial images with heavy partial occlusion, the performance deteriorates significantly. In this paper, instead of training an occlusion-aware model with visibility annotation, we address this issue via a model adaptation scheme that uses the result of a local regression forest (RF) voting method. In the proposed scheme, the consistency of the votes of the local RF in each of several oversegmented regions is used to determine the reliability of predicting the location of the facial landmarks. The latter is what we call regional predictive power (RPP). Subsequently, we adapt a holistic voting method (cascaded pose regression based on random ferns) by putting weights on the votes of each fern according to the RPP of the regions used in the fern tests. The proposed method shows superior performance over existing face alignment models on the most challenging data sets (COFW and 300-W). Moreover, it can also estimate with high accuracy (72.4% overlap ratio) which image areas belong to the face or nonface objects, on the heavily occluded images of the COFW data set, without explicit occlusion modeling.
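
    A schematic of the weighting idea, with the regression-forest machinery replaced by a toy stand-in: each region's votes for a landmark are aggregated with weights given by the region's predictive power, proxied here by the inverse scatter of its votes:

    ```python
    import numpy as np

    def rpp_weighted_landmark(votes_per_region):
        """Combine per-region landmark votes, down-weighting unreliable regions.

        votes_per_region: list of (n_votes_r, 2) arrays, one per oversegmented
        region. A region whose votes scatter widely (e.g. an occluded region)
        gets low regional predictive power (RPP) and thus little influence.
        """
        centers, weights = [], []
        for votes in votes_per_region:
            centers.append(votes.mean(axis=0))
            scatter = votes.var(axis=0).sum()          # total variance of the votes
            weights.append(1.0 / (scatter + 1e-6))     # RPP proxy: vote consistency
        centers, weights = np.array(centers), np.array(weights)
        weights /= weights.sum()
        return (weights[:, None] * centers).sum(axis=0)

    # Toy example: two consistent regions and one occluded, noisy region.
    rng = np.random.default_rng(5)
    true_pt = np.array([60.0, 80.0])
    regions = [true_pt + rng.normal(0, 1.0, (30, 2)),
               true_pt + rng.normal(0, 1.5, (30, 2)),
               rng.uniform(0, 200, (30, 2))]          # occluded: votes all over
    print(rpp_weighted_landmark(regions))             # close to (60, 80)
    ```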

  1. Robustness mechanisms in primate societies: a perturbation study

    PubMed Central

    Flack, Jessica C; Krakauer, David C; de Waal, Frans B. M

    2005-01-01

    Conflict management mechanisms have a direct, critical effect on system robustness because they mitigate conflict intensity and help repair damaged relationships. However, robustness mechanisms can also have indirect effects on system integrity by facilitating interactions among components. We explore the indirect role that conflict management mechanisms play in the maintenance of social system robustness, using a perturbation technique to 'knock out' components responsible for effective conflict management. We explore the effects of knockout on pigtailed macaque (Macaca nemestrina) social organization, using a captive group of 84 individuals. This system is ideal for addressing this question because there is heterogeneity in the performance of conflict management. Consequently, conflict managers can be easily removed without disrupting other control structures. We find that powerful conflict managers are essential in maintaining social order for the benefit of all members of society. We show that knockout of components responsible for conflict management results in system destabilization by significantly increasing mean levels of conflict and aggression, decreasing socio-positive interaction and decreasing the operation of repair mechanisms. PMID:16024369

  2. Reduction of facial wrinkles by hydrolyzed water-soluble egg membrane associated with reduction of free radical stress and support of matrix production by dermal fibroblasts

    PubMed Central

    Jensen, Gitte S; Shah, Bijal; Holtz, Robert; Patel, Ashok; Lo, Donald C

    2016-01-01

    Objective The aim of this study was to evaluate the effects of water-soluble egg membrane (WSEM) on wrinkle reduction in a clinical pilot study and to elucidate specific mechanisms of action using primary human immune and dermal cell-based bioassays. Methods To evaluate the effects of topical application of WSEM (8%) on human skin, an open-label 8-week study was performed involving 20 healthy females between the age of 45 years and 65 years. High-resolution photography and digital analysis were used to evaluate the wrinkle depth in the facial skin areas beside the eye (crow’s feet). WSEM was tested for total antioxidant capacity and effects on the formation of reactive oxygen species by human polymorphonuclear cells. Human keratinocytes (HaCaT cells) were used for quantitative polymerase chain reaction analysis of the antioxidant response element genes Nqo1, Gclm, Gclc, and Hmox1. Evaluation of effects on human primary dermal fibroblasts in vitro included cellular viability and production of the matrix components collagen and elastin. Results Topical use of a WSEM-containing facial cream for 8 weeks resulted in a significant reduction of wrinkle depth (P<0.05). WSEM contained antioxidants and reduced the formation of reactive oxygen species by inflammatory cells in vitro. Despite lack of a quantifiable effect on Nrf2, WSEM induced the gene expression of downstream Nqo1, Gclm, Gclc, and Hmox1 in human keratinocytes. Human dermal fibroblasts treated with WSEM produced more collagen and elastin than untreated cells or cells treated with dbcAMP control. The increase in collagen production was statistically significant (P<0.05). Conclusion The topical use of WSEM on facial skin significantly reduced the wrinkle depth. The underlying mechanisms of this effect may be related to protection from free radical damage at the cellular level and induction of several antioxidant response elements, combined with stimulation of human dermal fibroblasts to secrete high levels of matrix components. PMID:27789968

  3. In vitro evaluation of the marginal integrity of CAD/CAM interim crowns.

    PubMed

    Kelvin Khng, Kwang Yong; Ettinger, Ronald L; Armstrong, Steven R; Lindquist, Terry; Gratton, David G; Qian, Fang

    2016-05-01

    The accuracy of interim crowns made with computer-aided design and computer-aided manufacturing (CAD/CAM) systems has not been well investigated. The purpose of this in vitro study was to evaluate the marginal integrity of interim crowns made by CAD/CAM compared with that of conventional polymethylmethacrylate (PMMA) crowns. A dentoform mandibular left second premolar was prepared for a ceramic crown and scanned for the fabrication of 60 stereolithographic resin dies, half of which were scanned to fabricate 15 Telio CAD-CEREC and 15 Paradigm MZ100-E4D-E4D crowns. Fifteen Caulk and 15 Jet interim crowns were made on the remaining resin dies. All crowns were cemented with Tempgrip under a 17.8-N load, thermocycled for 1000 cycles, placed in 0.5% acid fuchsin for 24 hours, and embedded in epoxy resin before sectioning from the mid-buccal to mid-lingual surface. The marginal discrepancy was measured using a traveling microscope, and dye penetration was measured as a percentage of the overall length under the crown. The mean vertical marginal discrepancy of the conventionally made interim crowns was greater than that of the CAD/CAM crowns (P=.006), while no difference was found for the horizontal component (P=.276). The mean vertical marginal discrepancy at the facial surface of the Caulk crowns was significantly greater than that of the other 3 types of interim crowns (P<.001). At the facial margin, the mean horizontal component of the Telio crowns was significantly larger than that of the other 3 types, with no difference at the lingual margins (P=.150). The mean percentage dye penetration for the Paradigm MZ100-E4D crowns was significantly greater, and for Jet crowns significantly smaller, than for the other 3 crowns (P<.001). However, the mean percentage dye penetration was significantly correlated with the vertical and horizontal marginal discrepancies of the Jet interim crowns at the facial surface and with the horizontal marginal discrepancies of the Caulk interim crowns at the lingual surface (P<.01 in each instance). A significantly smaller vertical marginal discrepancy was found with the interim crowns fabricated by CAD/CAM as compared with PMMA crowns; however, this difference was not observed for the horizontal component. The percentage dye penetration was correlated with vertical and horizontal discrepancies at the facial surface for the Jet interim crowns and with horizontal discrepancies at the lingual surface for the Caulk interim crowns.

  4. Toward robust estimation of the components of forest population change: simulation results

    Treesearch

    Francis A. Roesch

    2014-01-01

    This report presents the full simulation results of the work described in Roesch (2014), in which multiple levels of simulation were used to test the robustness of estimators for the components of forest change. In that study, a variety of spatial-temporal populations were created based on, but more variable than, an actual forest monitoring dataset, and then those...

  5. Toward Robust Estimation of the Components of Forest Population Change

    Treesearch

    Francis A. Roesch

    2014-01-01

    Multiple levels of simulation are used to test the robustness of estimators of the components of change. I first created a variety of spatial-temporal populations based on, but more variable than, an actual forest monitoring data set and then sampled those populations under a variety of sampling error structures. The performance of each of four estimation approaches is...

  6. An integrated telemedicine platform for the assessment of affective physiological states

    PubMed Central

    Katsis, Christos D; Ganiatsas, George; Fotiadis, Dimitrios I

    2006-01-01

    AUBADE is an integrated platform built for the affective assessment of individuals. The system evaluates the emotional state by classifying vectors of features extracted from facial electromyogram, respiration, electrodermal activity and electrocardiogram signals. The AUBADE system consists of: (a) a multisensorial wearable, (b) a data acquisition and wireless communication module, (c) a feature extraction module, (d) a 3D facial animation module used for the projection of the obtained data onto a generic 3D face model, allowing the end user to view the facial expression of the subject in real time, (e) an intelligent emotion recognition module, and (f) the AUBADE databases, where the acquired signals along with the subject's animation videos are saved. The system is designed to be applied to human subjects operating under extreme stress conditions, in particular car racing drivers, and also to patients suffering from neurological and psychological disorders. AUBADE's classification accuracy into five predefined emotional classes (high stress, low stress, disappointment, euphoria and neutral face) is 86.0%. The pilot system applications and components are being tested and evaluated on Maserati's car racing drivers. PMID:16879757

  7. Glucose transporters GLUT4 and GLUT8 are upregulated after facial nerve axotomy in adult mice.

    PubMed

    Gómez, Olga; Ballester-Lurbe, Begoña; Mesonero, José E; Terrado, José

    2011-10-01

    Peripheral nerve axotomy in adult mice elicits a complex response that includes increased glucose uptake in regenerating nerve cells. This work analyses the expression of the neuronal glucose transporters GLUT3, GLUT4 and GLUT8 in the facial nucleus of adult mice during the first days after facial nerve axotomy. Our results show that whereas GLUT3 levels do not vary, GLUT4 and GLUT8 immunoreactivity increases in the cell body of the injured motoneurons after the lesion. A sharp increase in GLUT4 immunoreactivity was detected 3 days after the nerve injury, and levels remained high on Day 8, but to a lesser extent. GLUT8 levels also increased, but later than GLUT4, rising only on Day 8 post-lesion. These results indicate that glucose transport is activated in regenerating motoneurons and that GLUT4 plays a main role in this function. They also suggest that metabolic defects involving impairment of glucose transporters may be principal components of the neurotoxic mechanisms leading to motoneuron death.

  8. Facial expression recognition under partial occlusion based on fusion of global and local features

    NASA Astrophysics Data System (ADS)

    Wang, Xiaohua; Xia, Chen; Hu, Min; Ren, Fuji

    2018-04-01

    Facial expression recognition under partial occlusion is a challenging research problem. This paper proposes a novel framework for facial expression recognition under occlusion by fusing global and local features. In the global aspect, information entropy is first employed to locate the occluded region. Second, Principal Component Analysis (PCA) is adopted to reconstruct the occluded region of the image. After that, a replacement strategy reconstructs the image by substituting the occluded region with the corresponding region of the best-matched image in the training set, and a Pyramid Weber Local Descriptor (PWLD) feature is then extracted. At last, the outputs of the SVM are fitted to the probabilities of the target classes by using a sigmoid function. For the local aspect, an overlapping block-based method is adopted to extract WLD features, with each block weighted adaptively by information entropy; Chi-square distance and similar-block summation methods are then applied to obtain the probability that the expression belongs to each emotion class. Finally, fusion at the decision level is employed for the data fusion of the global and local features based on the Dempster-Shafer theory of evidence. Experimental results on the Cohn-Kanade and JAFFE databases demonstrate the effectiveness and fault tolerance of this method.
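
    A compact sketch of the decision-level fusion step: Dempster's rule of combination applied to the global and local classifiers' outputs, with each class posterior treated as mass on a singleton hypothesis (a simplification of full Dempster-Shafer evidence modeling); the class list and probabilities are invented:

    ```python
    import numpy as np

    def dempster_combine(m1, m2):
        """Dempster's rule for two mass functions over singleton classes.

        m1, m2: 1-D arrays of basic probability masses, one entry per emotion
        class. With singleton-only focal elements the rule reduces to a
        normalized elementwise product; the discarded mass is the conflict K.
        """
        joint = m1 * m2
        s = joint.sum()
        if np.isclose(s, 0.0):
            raise ValueError("total conflict: the two sources fully disagree")
        return joint / s, 1.0 - s

    classes = ["anger", "disgust", "fear", "happy", "sad", "surprise", "neutral"]
    global_probs = np.array([0.05, 0.05, 0.10, 0.55, 0.10, 0.10, 0.05])  # global branch
    local_probs  = np.array([0.05, 0.05, 0.05, 0.60, 0.10, 0.05, 0.10])  # local branch
    fused, conflict = dempster_combine(global_probs, local_probs)
    print(classes[int(fused.argmax())], f"(conflict K = {conflict:.2f})")
    ```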

  9. Passing faces: sequence-dependent variations in the perceptual processing of emotional faces.

    PubMed

    Karl, Christian; Hewig, Johannes; Osinsky, Roman

    2016-10-01

    There is broad evidence that contextual factors influence the processing of emotional facial expressions. Yet temporal-dynamic aspects, such as how face processing is influenced by the specific order of neutral and emotional facial expressions, have been largely neglected. To shed light on this topic, we recorded the electroencephalogram (EEG) from 168 healthy participants while they performed a gender-discrimination task with angry and neutral faces. Our event-related potential (ERP) analyses revealed a strong emotional modulation of the N170 component, indicating that the basic visual encoding and emotional analysis of a facial stimulus happen, at least partially, in parallel. While the N170 and the late positive potential (LPP; 400-600 ms) were only modestly affected by the sequence of preceding faces, we observed a strong influence of face sequences on the early posterior negativity (EPN; 200-300 ms). Finally, the differing response patterns of the EPN and LPP indicate that these two ERPs represent distinct processes during face analysis: while the former seems to represent the integration of contextual information in the perception of a current face, the latter appears to represent the net emotional interpretation of a current face.

  10. Robust Eye Center Localization through Face Alignment and Invariant Isocentric Patterns

    PubMed Central

    Teng, Dongdong; Chen, Dihu; Tan, Hongzhou

    2015-01-01

    The localization of eye centers is a very useful cue for numerous applications like face recognition, facial expression recognition, and the early screening of neurological pathologies. Several methods relying on available light for accurate eye-center localization have been exploited. However, despite the considerable improvements that eye-center localization systems have undergone in recent years, only a few of these developments deal with the challenges posed by profile (non-frontal) faces. In this paper, we first use the explicit shape regression method to obtain the rough location of the eye centers. Because this method extracts global information from the human face, it is robust against changes in the eye region. We exploit this robustness and utilize it as a constraint. To locate the eye centers accurately, we employ isophote curvature features, the accuracy of which has been demonstrated in a previous study. By applying these features, we obtain a series of candidate eye-center locations. Among these candidates, the locations that minimize the reconstruction error between the two methods mentioned above are taken as the closest approximation of the eye-center locations. Therefore, we combine explicit shape regression and isophote curvature feature analysis to achieve robustness and accuracy, respectively. In practical experiments, we use the BioID and FERET datasets to test our approach to obtaining an accurate eye-center location while retaining robustness against changes in scale and pose. In addition, we apply our method to non-frontal faces to test its robustness and accuracy, which are essential in gaze estimation but have seldom been addressed in previous works. Through extensive experimentation, we show that the proposed method achieves a significant improvement in accuracy and robustness over state-of-the-art techniques, ranking second in terms of accuracy. In our implementation on a PC with a 2.5 GHz Xeon CPU, the eye-tracking process runs at a frame rate of 38 Hz. PMID:26426929
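
    One plausible reading of the combination step is sketched below: the shape-regression output serves as a robust prior, and among the accurate isophote-curvature candidates, the one most consistent with that prior is selected. This is an illustrative simplification (the paper speaks of minimizing a reconstruction error between the two methods), and all names are hypothetical.

    ```python
    # Pick the isophote-curvature candidate closest to the rough eye-center
    # estimate from explicit shape regression. Purely illustrative.
    import numpy as np

    def select_eye_center(rough_center, isophote_candidates):
        """rough_center: (x, y); isophote_candidates: (n, 2) array of peaks."""
        cands = np.asarray(isophote_candidates, dtype=float)
        d = np.linalg.norm(cands - np.asarray(rough_center, dtype=float), axis=1)
        return cands[np.argmin(d)]
    ```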

  11. A unified probabilistic framework for spontaneous facial action modeling and understanding.

    PubMed

    Tong, Yan; Chen, Jixu; Ji, Qiang

    2010-02-01

    Facial expression is a natural and powerful means of human communication. Recognizing spontaneous facial actions, however, is very challenging due to subtle facial deformation, frequent head movements, and ambiguous and uncertain facial motion measurements. Because of these challenges, current research in facial expression recognition is limited to posed expressions and often in frontal view. A spontaneous facial expression is characterized by rigid head movements and nonrigid facial muscular movements. More importantly, it is the coherent and consistent spatiotemporal interactions among rigid and nonrigid facial motions that produce a meaningful facial expression. Recognizing this fact, we introduce a unified probabilistic facial action model based on the Dynamic Bayesian network (DBN) to simultaneously and coherently represent rigid and nonrigid facial motions, their spatiotemporal dependencies, and their image measurements. Advanced machine learning methods are introduced to learn the model based on both training data and subjective prior knowledge. Given the model and the measurements of facial motions, facial action recognition is accomplished through probabilistic inference by systematically integrating visual measurements with the facial action model. Experiments show that compared to the state-of-the-art techniques, the proposed system yields significant improvements in recognizing both rigid and nonrigid facial motions, especially for spontaneous facial expressions.

  12. The role of great auricular-facial nerve neurorrhaphy in facial nerve damage

    PubMed Central

    Sun, Yan; Liu, Limei; Han, Yuechen; Xu, Lei; Zhang, Daogong; Wang, Haibo

    2015-01-01

    Background: The facial nerve is easily damaged, and many reconstructive methods exist, such as facial nerve end-to-end anastomosis, the great auricular nerve graft, the sural nerve graft, or hypoglossal-facial nerve anastomosis. However, great auricular-facial nerve neurorrhaphy has received little study. The aim of the present study was to identify the role of great auricular-facial nerve neurorrhaphy and its mechanism. Methods: Rat models of facial nerve cut (FC), facial nerve end-to-end anastomosis (FF), facial-great auricular neurorrhaphy (FG), and control (Ctrl) were established. Apex nasi amesiality observation, electrophysiology, and immunofluorescence assays were employed to investigate the function and mechanism. Results: On apex nasi amesiality observation, the FG group showed partial recovery. Additionally, electrophysiology and immunofluorescence assays revealed that facial-great auricular neurorrhaphy could transfer nerve impulses and express AChR, performing better than facial nerve cut but worse than facial nerve end-to-end anastomosis. Conclusions: The present study indicated that great auricular-facial nerve neurorrhaphy is a viable option for facial lesion repair, as it efficiently prevents facial muscle atrophy by generating neurotransmitters such as ACh. PMID:26550216

  13. The role of great auricular-facial nerve neurorrhaphy in facial nerve damage.

    PubMed

    Sun, Yan; Liu, Limei; Han, Yuechen; Xu, Lei; Zhang, Daogong; Wang, Haibo

    2015-01-01

    The facial nerve is easily damaged, and many reconstructive methods exist, such as facial nerve end-to-end anastomosis, the great auricular nerve graft, the sural nerve graft, or hypoglossal-facial nerve anastomosis. However, great auricular-facial nerve neurorrhaphy has received little study. The aim of the present study was to identify the role of great auricular-facial nerve neurorrhaphy and its mechanism. Rat models of facial nerve cut (FC), facial nerve end-to-end anastomosis (FF), facial-great auricular neurorrhaphy (FG), and control (Ctrl) were established. Apex nasi amesiality observation, electrophysiology, and immunofluorescence assays were employed to investigate the function and mechanism. On apex nasi amesiality observation, the FG group showed partial recovery. Additionally, electrophysiology and immunofluorescence assays revealed that facial-great auricular neurorrhaphy could transfer nerve impulses and express AChR, performing better than facial nerve cut but worse than facial nerve end-to-end anastomosis. The present study indicated that great auricular-facial nerve neurorrhaphy is a viable option for facial lesion repair, as it efficiently prevents facial muscle atrophy by generating neurotransmitters such as ACh.

  14. Mechanisms for Robust Cognition

    ERIC Educational Resources Information Center

    Walsh, Matthew M.; Gluck, Kevin A.

    2015-01-01

    To function well in an unpredictable environment using unreliable components, a system must have a high degree of robustness. Robustness is fundamental to biological systems and is an objective in the design of engineered systems such as airplane engines and buildings. Cognitive systems, like biological and engineered systems, exist within…

  15. E-cigarette aerosol exposure can cause craniofacial defects in Xenopus laevis embryos and mammalian neural crest cells

    PubMed Central

    Kennedy, Allyson E.; Kandalam, Suraj; Olivares-Navarrete, Rene

    2017-01-01

    Since the introduction of electronic cigarettes (ECIGs) to American markets in 2007, vaping has surged in popularity. Many, including women of reproductive age, also believe that ECIG use is safer than traditional tobacco cigarettes and is not hazardous when pregnant. However, there are few studies investigating the effects of ECIG exposure on the developing embryo, and nothing is known about potential effects on craniofacial development. Therefore, we have tested the effects of several aerosolized e-cigarette liquids (e-cigAM) in an in vivo craniofacial model, Xenopus laevis, as well as a mammalian neural crest cell line. Results demonstrate that e-cigAM exposure during embryonic development induces a variety of defects, including median facial clefts and midface hypoplasia in two of the e-cigAMs tested. Detailed quantitative analyses of the facial morphology revealed that nicotine is not the main factor in inducing craniofacial defects, but can exacerbate the effects of the other e-liquid components. Additionally, while two different e-cigAMs can have very similar consequences for facial appearance, there are subtle differences that could be due to the differences in e-cigAM components. Further assessment of embryos exposed to these particular e-cigAMs revealed cranial cartilage and muscle defects and a reduction in the blood supply to the face. Finally, the expression of markers for vascular and cartilage differentiation was reduced in a mammalian neural crest cell line, corroborating the in vivo effects. Our work is the first to show that ECIG use could pose a potential hazard to the developing embryo and cause craniofacial birth defects. This emphasizes the need for more testing and regulation of this new popular product. PMID:28957438

  16. E-cigarette aerosol exposure can cause craniofacial defects in Xenopus laevis embryos and mammalian neural crest cells.

    PubMed

    Kennedy, Allyson E; Kandalam, Suraj; Olivares-Navarrete, Rene; Dickinson, Amanda J G

    2017-01-01

    Since the introduction of electronic cigarettes (ECIGs) to American markets in 2007, vaping has surged in popularity. Many, including women of reproductive age, also believe that ECIG use is safer than traditional tobacco cigarettes and is not hazardous when pregnant. However, there are few studies investigating the effects of ECIG exposure on the developing embryo, and nothing is known about potential effects on craniofacial development. Therefore, we have tested the effects of several aerosolized e-cigarette liquids (e-cigAM) in an in vivo craniofacial model, Xenopus laevis, as well as a mammalian neural crest cell line. Results demonstrate that e-cigAM exposure during embryonic development induces a variety of defects, including median facial clefts and midface hypoplasia in two of the e-cigAMs tested. Detailed quantitative analyses of the facial morphology revealed that nicotine is not the main factor in inducing craniofacial defects, but can exacerbate the effects of the other e-liquid components. Additionally, while two different e-cigAMs can have very similar consequences for facial appearance, there are subtle differences that could be due to the differences in e-cigAM components. Further assessment of embryos exposed to these particular e-cigAMs revealed cranial cartilage and muscle defects and a reduction in the blood supply to the face. Finally, the expression of markers for vascular and cartilage differentiation was reduced in a mammalian neural crest cell line, corroborating the in vivo effects. Our work is the first to show that ECIG use could pose a potential hazard to the developing embryo and cause craniofacial birth defects. This emphasizes the need for more testing and regulation of this new popular product.

  17. Discriminant Features and Temporal Structure of Nonmanuals in American Sign Language

    PubMed Central

    Benitez-Quiroz, C. Fabian; Gökgöz, Kadir; Wilbur, Ronnie B.; Martinez, Aleix M.

    2014-01-01

    To fully define the grammar of American Sign Language (ASL), a linguistic model of its nonmanuals needs to be constructed. While significant progress has been made to understand the features defining ASL manuals, after years of research, much still needs to be done to uncover the discriminant nonmanual components. The major barrier to achieving this goal is the difficulty in correlating facial features and linguistic features, especially since these correlations may be temporally defined. For example, a facial feature (e.g., head moves down) occurring at the end of the movement of another facial feature (e.g., brows move up) may specify a Hypothetical conditional, but only if this time relationship is maintained. In other instances, the single occurrence of a movement (e.g., brows move up) can be indicative of the same grammatical construction. In the present paper, we introduce a linguistic–computational approach to efficiently carry out this analysis. First, a linguistic model of the face is used to manually annotate a very large set of 2,347 videos of ASL nonmanuals (including tens of thousands of frames). Second, a computational approach is used to determine which features of the linguistic model are more informative of the grammatical rules under study. We used the proposed approach to study five types of sentences – Hypothetical conditionals, Yes/no questions, Wh-questions, Wh-questions postposed, and Assertions – plus their polarities – positive and negative. Our results verify several components of the standard model of ASL nonmanuals and, most importantly, identify several previously unreported features and their temporal relationship. Notably, our results uncovered a complex interaction between head position and mouth shape. These findings define some temporal structures of ASL nonmanuals not previously detected by other approaches. PMID:24516528
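
    A hedged sketch of the second, computational stage: ranking annotated nonmanual features by how informative they are about the sentence-type label. Mutual information is used here as a stand-in criterion; the paper's exact measure may differ, and `X`, `y` and `feature_names` are hypothetical.

    ```python
    # Rank annotated facial features by informativeness about a grammatical
    # class label, one plausible realization of the feature-selection step.
    import numpy as np
    from sklearn.feature_selection import mutual_info_classif

    # X: (n_videos, n_features) matrix of annotated nonmanual features
    # y: (n_videos,) sentence-type labels (e.g., 0=assertion, 1=wh-question, ...)
    def rank_features(X, y, feature_names):
        mi = mutual_info_classif(X, y, random_state=0)
        order = np.argsort(mi)[::-1]          # most informative first
        return [(feature_names[i], float(mi[i])) for i in order]
    ```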

  18. Preventing trachoma through environmental sanitation: a review of the evidence base.

    PubMed Central

    Prüss, A.; Mariotti, S. P.

    2000-01-01

    A review of the available evidence for the associations between environmental sanitation and transmission of trachoma was undertaken with a view to identifying preventive interventions. The WHO Global Alliance for the Elimination of Trachoma by the Year 2020 (GET2020) has adopted the "SAFE" strategy, consisting of four components: Surgery, Antibiotic treatment, promotion of Facial cleanliness and initiation of Environmental changes. This review of 19 studies selected from the 39 conducted in different parts of the world shows that there is clear evidence to support the recommendation of facial cleanliness and environmental improvements (i.e. the F and E components of the SAFE strategy) to prevent trachoma. Person-to-person contact and flies appear to constitute the major transmission pathways. Improvement of personal and community hygiene has great potential for a sustainable reduction in trachoma transmission. Controlled clinical trials are needed to estimate the relative contribution of various elements to the risk of transmission of trachoma and the effectiveness of different interventions. These could show the relative attributable risks and effectiveness of interventions to achieve improvement of personal hygiene and fly control by environmental improvements, alone or in combination, and with or without antibiotic treatment. PMID:10743299

  19. The role of working memory in decoding emotions.

    PubMed

    Phillips, Louise H; Channon, Shelley; Tunstall, Mary; Hedenstrom, Anna; Lyons, Kathryn

    2008-04-01

    Decoding facial expressions of emotion is an important aspect of social communication that is often impaired following psychiatric or neurological illness. However, little is known of the cognitive components involved in perceiving emotional expressions. Three dual-task studies explored the role of verbal working memory in decoding emotions. Concurrent working memory load substantially interfered with choosing which emotional label described a facial expression (Experiment 1). A key factor in the magnitude of interference was the number of emotion labels from which to choose (Experiment 2). In contrast, the ability to decide that two faces represented the same emotion in a discrimination task was relatively unaffected by concurrent working memory load (Experiment 3). Different methods of assessing emotion perception make substantially different demands on working memory. Implications for clinical disorders which affect both working memory and emotion perception are considered. © 2008 APA.

  20. Developmental and Individual Differences in the Neural Processing of Dynamic Expressions of Pain and Anger

    PubMed Central

    Missana, Manuela; Grigutsch, Maren; Grossmann, Tobias

    2014-01-01

    We examined the processing of facial expressions of pain and anger in 8-month-old infants and adults by measuring event-related brain potentials (ERPs) and frontal EEG alpha asymmetry. The ERP results revealed that while adults showed a late positive potential (LPP) to emotional expressions that was enhanced to pain expressions, reflecting increased evaluation and emotional arousal to pain expressions, infants showed a negative component (Nc) to emotional expressions that was enhanced to angry expressions, reflecting increased allocation of attention to angry faces. Moreover, infants and adults showed opposite patterns in their frontal asymmetry responses to pain and anger, suggesting developmental differences in the motivational processes engendered by these facial expressions. These findings are discussed in the light of associated individual differences in infant temperament and adult dispositional empathy. PMID:24705497

  1. Robust Spacecraft Component Detection in Point Clouds.

    PubMed

    Wei, Quanmao; Jiang, Zhiguo; Zhang, Haopeng

    2018-03-21

    Automatic component detection of spacecraft can assist in on-orbit operation and space situational awareness. Spacecraft are generally composed of solar panels and cuboidal or cylindrical modules. These components can be simply represented by geometric primitives such as planes, cuboids and cylinders. Based on this prior, we propose a robust scheme to automatically detect such basic components of spacecraft in three-dimensional (3D) point clouds. In the proposed scheme, cylinders are first detected by iterating energy-based geometric model fitting and cylinder parameter estimation. Then, planes are detected by the Hough transform and further described as bounded patches via their minimum bounding rectangles. Finally, cuboids are detected from the detected patches using pairwise geometric relations. After the successive detection of cylinders, planar patches and cuboids, a mid-level geometric representation of the spacecraft can be delivered. We tested the proposed component detection scheme on spacecraft 3D point clouds synthesized from computer-aided design (CAD) models and on point clouds recovered by image-based reconstruction. Experimental results illustrate that the proposed scheme detects the basic geometric components effectively and is robust to noise and to variations in point distribution density.
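
    The scheme above detects planes with a Hough transform; as a simpler stand-in for readers, the sketch below fits the dominant plane with RANSAC, which conveys the same detect-then-describe idea on an (n, 3) point array. Thresholds and iteration counts are illustrative, not values from the paper.

    ```python
    # RANSAC fit of the single dominant plane in a 3D point cloud.
    import numpy as np

    def ransac_plane(points, n_iters=500, dist_thresh=0.02, seed=None):
        rng = np.random.default_rng(seed)
        best_inliers = np.zeros(len(points), dtype=bool)
        for _ in range(n_iters):
            p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
            normal = np.cross(p1 - p0, p2 - p0)
            norm = np.linalg.norm(normal)
            if norm < 1e-12:              # degenerate (collinear) sample
                continue
            normal /= norm
            dist = np.abs((points - p0) @ normal)
            inliers = dist < dist_thresh
            if inliers.sum() > best_inliers.sum():
                best_inliers = inliers
        return best_inliers               # mask of the dominant planar patch
    ```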

  2. Robust Spacecraft Component Detection in Point Clouds

    PubMed Central

    Wei, Quanmao; Jiang, Zhiguo

    2018-01-01

    Automatic component detection of spacecraft can assist in on-orbit operation and space situational awareness. Spacecraft are generally composed of solar panels and cuboidal or cylindrical modules. These components can be simply represented by geometric primitives such as planes, cuboids and cylinders. Based on this prior, we propose a robust scheme to automatically detect such basic components of spacecraft in three-dimensional (3D) point clouds. In the proposed scheme, cylinders are first detected by iterating energy-based geometric model fitting and cylinder parameter estimation. Then, planes are detected by the Hough transform and further described as bounded patches via their minimum bounding rectangles. Finally, cuboids are detected from the detected patches using pairwise geometric relations. After the successive detection of cylinders, planar patches and cuboids, a mid-level geometric representation of the spacecraft can be delivered. We tested the proposed component detection scheme on spacecraft 3D point clouds synthesized from computer-aided design (CAD) models and on point clouds recovered by image-based reconstruction. Experimental results illustrate that the proposed scheme detects the basic geometric components effectively and is robust to noise and to variations in point distribution density. PMID:29561828

  3. Facial Scar Revision: Understanding Facial Scar Treatment

    MedlinePlus

    ... face like the eyes or lips. A facial plastic surgeon has many options for treating and improving ...

  4. Prediction of Mortality Based on Facial Characteristics

    PubMed Central

    Delorme, Arnaud; Pierce, Alan; Michel, Leena; Radin, Dean

    2016-01-01

    Recent studies have shown that characteristics of the face contain a wealth of information about health, age and chronic clinical conditions. Such studies involve objective measurement of facial features correlated with historical health information. But some individuals also claim to be adept at gauging mortality based on a glance at a person’s photograph. To test this claim, we invited 12 such individuals to see if they could determine if a person was alive or dead based solely on a brief examination of facial photographs. All photos used in the experiment were transformed into a uniform gray scale and then counterbalanced across eight categories: gender, age, gaze direction, glasses, head position, smile, hair color, and image resolution. Participants examined 404 photographs displayed on a computer monitor, one photo at a time, each shown for a maximum of 8 s. Half of the individuals in the photos were deceased, and half were alive at the time the experiment was conducted. Participants were asked to press a button to indicate whether they thought the person in a photo was living or deceased. Overall mean accuracy on this task was 53.8%, where 50% was expected by chance (p < 0.004, two-tail). Statistically significant accuracy was independently obtained in 5 of the 12 participants. We also collected 32-channel electrophysiological recordings and observed a robust difference between images of deceased individuals correctly vs. incorrectly classified in the early event-related potential (ERP) at 100 ms post-stimulus onset. Our results support claims of individuals who report that some as-yet unknown features of the face predict mortality. The results are also compatible with claims of clairvoyance and warrant further investigation. PMID:27242466

  5. Adapting Local Features for Face Detection in Thermal Image.

    PubMed

    Ma, Chao; Trung, Ngo Thanh; Uchiyama, Hideaki; Nagahara, Hajime; Shimada, Atsushi; Taniguchi, Rin-Ichiro

    2017-11-27

    A thermal camera captures the temperature distribution of a scene as a thermal image. In thermal images, the facial appearances of different people under different lighting conditions are similar, because facial temperature distribution is generally constant and not affected by lighting conditions. This similarity in face appearance is advantageous for face detection. To detect faces in thermal images, cascade classifiers with Haar-like features are generally used; however, few studies have explored local features for face detection in thermal images. In this paper, we introduce two approaches relying on local features for face detection in thermal images. First, we create new feature types by extending Multi-Block LBP, incorporating a margin around the reference value and exploiting the generally constant distribution of facial temperature. In this way, we make the features more robust to image noise and more effective for face detection in thermal images. Second, we propose an AdaBoost-based training method to obtain cascade classifiers with multiple types of local features. These feature types have different advantages, so combining them enhances the descriptive power of the local features. We conducted a hold-out validation experiment and a field experiment. In the hold-out validation experiment, we captured a dataset from 20 participants, comprising 14 males and 6 females. For each participant, we captured 420 images with 10 variations in camera distance, 21 poses, and 2 appearances (with and without glasses). We compared the performance of cascade classifiers trained with different sets of the features. The experimental results showed that the proposed approaches effectively improve the performance of face detection in thermal images. In the field experiment, we compared face detection performance in realistic scenes using thermal and RGB images, and discussed the findings.
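
    The first contribution, adding a margin to Multi-Block LBP comparisons, can be sketched as follows: a neighbor block only sets its bit when its mean exceeds the center block's mean by the margin, which damps the effect of sensor noise. This is a schematic reconstruction from the description above, not the paper's exact encoding; block size and margin are illustrative.

    ```python
    # Multi-Block LBP code with a comparison margin, for one anchor position.
    import numpy as np

    def mb_lbp_margin(img, x, y, block, margin=2.0):
        """8-bit code from a 3x3 grid of block-by-block means anchored at (x, y)."""
        means = np.empty((3, 3))
        for i in range(3):
            for j in range(3):
                patch = img[y + i * block : y + (i + 1) * block,
                            x + j * block : x + (j + 1) * block]
                means[i, j] = patch.mean()
        center = means[1, 1]
        # Clockwise neighbor order starting at the top-left block.
        order = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
        # A neighbor contributes a 1-bit only if it exceeds the center by `margin`.
        bits = [int(means[i, j] - center > margin) for i, j in order]
        return sum(b << k for k, b in enumerate(bits))
    ```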

  6. [Effects of a Facial Muscle Exercise Program including Facial Massage for Patients with Facial Palsy].

    PubMed

    Choi, Hyoung Ju; Shin, Sung Hee

    2016-08-01

    The purpose of this study was to examine the effects of a facial muscle exercise program including facial massage on the facial muscle function, subjective symptoms related to paralysis and depression in patients with facial palsy. This study was a quasi-experimental research with a non-equivalent control group non-synchronized design. Participants were 70 patients with facial palsy (experimental group 35, control group 35). For the experimental group, the facial muscular exercise program including facial massage was performed 20 minutes a day, 3 times a week for two weeks. Data were analyzed using descriptive statistics, χ²-test, Fisher's exact test and independent sample t-test with the SPSS 18.0 program. Facial muscular function of the experimental group improved significantly compared to the control group. There was no significant difference in symptoms related to paralysis between the experimental group and control group. The level of depression in the experimental group was significantly lower than the control group. Results suggest that a facial muscle exercise program including facial massage is an effective nursing intervention to improve facial muscle function and decrease depression in patients with facial palsy.

  7. Facial neuropathy with imaging enhancement of the facial nerve: a case report

    PubMed Central

    Mumtaz, Sehreen; Jensen, Matthew B

    2014-01-01

    A young woman developed unilateral facial neuropathy 2 weeks after a motor vehicle collision involving fractures of the skull and mandible. MRI showed contrast enhancement of the facial nerve. We review the literature describing facial neuropathy after trauma and facial nerve enhancement patterns associated with different causes of facial neuropathy. PMID:25574155

  8. Clindamycin phosphate-tretinoin combination gel revisited: status report on a specific formulation used for acne treatment.

    PubMed

    Del Rosso, James Q

    2017-03-01

    Topical agents, including retinoids and antibiotics, are commonly used to treat acne vulgaris (AV) and remain as components of acne treatment guidelines. Approved topical combination formulations offer the advantages of established efficacy, decreased frequency of application, and improved convenience for patients. This article discusses both clindamycin phosphate (CP) and tretinoin (Tret) as components of a topical aqueous-based combination gel that has been shown to be effective, safe, and well tolerated for treatment of facial AV. Clinically relevant considerations with use of this treatment are also discussed, including therapeutic advantages and potential limitations.

  9. Competition between Jagged-Notch and Endothelin1 Signaling Selectively Restricts Cartilage Formation in the Zebrafish Upper Face

    PubMed Central

    Barske, Lindsey; Askary, Amjad; Zuniga, Elizabeth; Balczerski, Bartosz; Bump, Paul; Nichols, James T.; Crump, J. Gage

    2016-01-01

    The intricate shaping of the facial skeleton is essential for function of the vertebrate jaw and middle ear. While much has been learned about the signaling pathways and transcription factors that control facial patterning, the downstream cellular mechanisms dictating skeletal shapes have remained unclear. Here we present genetic evidence in zebrafish that three major signaling pathways − Jagged-Notch, Endothelin1 (Edn1), and Bmp − regulate the pattern of facial cartilage and bone formation by controlling the timing of cartilage differentiation along the dorsoventral axis of the pharyngeal arches. A genomic analysis of purified facial skeletal precursors in mutant and overexpression embryos revealed a core set of differentiation genes that were commonly repressed by Jagged-Notch and induced by Edn1. Further analysis of the pre-cartilage condensation gene barx1, as well as in vivo imaging of cartilage differentiation, revealed that cartilage forms first in regions of high Edn1 and low Jagged-Notch activity. Consistent with a role of Jagged-Notch signaling in restricting cartilage differentiation, loss of Notch pathway components resulted in expanded barx1 expression in the dorsal arches, with mutation of barx1 rescuing some aspects of dorsal skeletal patterning in jag1b mutants. We also identified prrx1a and prrx1b as negative Edn1 and positive Bmp targets that function in parallel to Jagged-Notch signaling to restrict the formation of dorsal barx1+ pre-cartilage condensations. Simultaneous loss of jag1b and prrx1a/b better rescued lower facial defects of edn1 mutants than loss of either pathway alone, showing that combined overactivation of Jagged-Notch and Bmp/Prrx1 pathways contribute to the absence of cartilage differentiation in the edn1 mutant lower face. These findings support a model in which Notch-mediated restriction of cartilage differentiation, particularly in the second pharyngeal arch, helps to establish a distinct skeletal pattern in the upper face. PMID:27058748

  10. A new quantitative evaluation method for age-related changes of individual pigmented spots in facial skin.

    PubMed

    Kikuchi, K; Masuda, Y; Yamashita, T; Sato, K; Katagiri, C; Hirao, T; Mizokami, Y; Yaguchi, H

    2016-08-01

    Facial skin pigmentation is one of the most prominent visible features of skin aging and often affects perception of health and beauty. To date, facial pigmentation has been evaluated using various image analysis methods developed for the cosmetic and esthetic fields. However, existing methods cannot provide precise information on pigmented spots, such as variations in size, color shade, and distribution pattern. The purpose of this study is the development of image evaluation methods to analyze individual pigmented spots and acquire detailed information on their age-related changes. To characterize the individual pigmented spots within a cheek image, we established a simple object-counting algorithm. First, we captured cheek images using an original imaging system equipped with an illumination unit and a high-resolution digital camera. The acquired images were converted into melanin concentration images using compensation formulae. Next, the melanin images were converted into binary images. The binary images were then subjected to noise reduction. Finally, we calculated parameters such as the melanin concentration, quantity, and size of individual pigmented spots using a connected-components labeling algorithm, which assigns a unique label to each separate group of connected pixels. The cheek image analysis was evaluated on 643 female Japanese subjects. We confirmed that the proposed method was sufficiently sensitive to measure the melanin concentration, and the numbers and sizes of individual pigmented spots through manual evaluation of the cheek images. The image analysis results for the 643 Japanese women indicated clear relationships between age and the changes in the pigmented spots. We developed a new quantitative evaluation method for individual pigmented spots in facial skin. This method facilitates the analysis of the characteristics of various pigmented facial spots and is directly applicable to the fields of dermatology, pharmacology, and esthetic cosmetology. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
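
    The counting stage lends itself to a compact sketch: binarize the melanin-concentration image, denoise, then label connected components and measure each spot. The SciPy version below is a minimal illustration; the study's calibrated compensation formulae and noise-reduction details are not reproduced, and the thresholds are hypothetical.

    ```python
    # Count and measure pigmented spots via connected-components labeling.
    import numpy as np
    from scipy import ndimage

    def analyze_spots(melanin_img, threshold, min_area=20):
        binary = melanin_img > threshold
        binary = ndimage.binary_opening(binary)          # simple noise reduction
        labels, n = ndimage.label(binary)                # unique label per spot
        areas = ndimage.sum(binary, labels, index=range(1, n + 1))
        keep = np.flatnonzero(areas >= min_area) + 1     # label ids of real spots
        mean_conc = ndimage.mean(melanin_img, labels, index=keep)
        return len(keep), areas[keep - 1], mean_conc     # count, sizes, shades
    ```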

  11. Biomedical visual data analysis to build an intelligent diagnostic decision support system in medical genetics.

    PubMed

    Kuru, Kaya; Niranjan, Mahesan; Tunca, Yusuf; Osvank, Erhan; Azim, Tayyaba

    2014-10-01

    In general, medical geneticists aim to pre-diagnose underlying syndromes based on facial features before performing cytological or molecular analyses where a genotype-phenotype interrelation is possible. However, determining correct genotype-phenotype interrelationships among many syndromes is tedious and labor-intensive, especially for extremely rare syndromes. Thus, a computer-aided system for pre-diagnosis can facilitate effective and efficient decision support, particularly when few similar cases are available, or in remote rural districts where diagnostic knowledge of syndromes is not readily available. The proposed methodology, a visual diagnostic decision support system (visual diagnostic DSS), employs machine learning (ML) algorithms and digital image processing techniques in a hybrid approach for automated diagnosis in medical genetics. This approach uses facial features in reference images of disorders to identify visual genotype-phenotype interrelationships. Our statistical method describes facial image data as principal component features and diagnoses syndromes using these features. The proposed system was trained using a real dataset of previously published face images of subjects with syndromes, which provided accurate diagnostic information. The method was tested using a leave-one-out cross-validation scheme with 15 different syndromes, each comprising 5-9 cases, i.e., 92 cases in total. An accuracy rate of 83% was achieved using this automated diagnosis technique, which was statistically significant (p<0.01). Furthermore, the sensitivity and specificity values were 0.857 and 0.870, respectively. Our results show that the accurate classification of syndromes is feasible using ML techniques. Thus, a large number of syndromes with characteristic facial anomaly patterns could be diagnosed with diagnostic DSSs similar to the visual diagnostic DSS described in the present study, demonstrating the benefits of hybrid image processing and ML-based computer-aided diagnostics for identifying facial phenotypes. Copyright © 2014. Published by Elsevier B.V.
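
    A minimal sketch of the described pipeline (principal-component face features evaluated with leave-one-out cross-validation) is given below. The nearest-neighbor classifier is a placeholder, since the abstract does not name the exact statistical classifier; `X`, `y` and the component count are hypothetical.

    ```python
    # PCA face features + placeholder classifier, scored with leave-one-out CV.
    from sklearn.decomposition import PCA
    from sklearn.model_selection import LeaveOneOut, cross_val_score
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.pipeline import make_pipeline

    # X: (n_cases, h*w) vectorized, aligned face images; y: syndrome labels
    def loocv_accuracy(X, y, n_components=20):
        model = make_pipeline(PCA(n_components=n_components),
                              KNeighborsClassifier(n_neighbors=1))
        scores = cross_val_score(model, X, y, cv=LeaveOneOut())
        return scores.mean()
    ```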

  12. Adaptation to Emotional Conflict: Evidence from a Novel Face Emotion Paradigm

    PubMed Central

    Clayson, Peter E.; Larson, Michael J.

    2013-01-01

    The preponderance of research on trial-by-trial recruitment of affective control (e.g., conflict adaptation) relies on stimuli wherein lexical word information conflicts with facial affective stimulus properties (e.g., the face-Stroop paradigm where an emotional word is overlaid on a facial expression). Several studies, however, indicate different neural time course and properties for processing of affective lexical stimuli versus affective facial stimuli. The current investigation used a novel task to examine control processes implemented following conflicting emotional stimuli with conflict-inducing affective face stimuli in the absence of affective words. Forty-one individuals completed a task wherein the affective-valence of the eyes and mouth were either congruent (happy eyes, happy mouth) or incongruent (happy eyes, angry mouth) while high-density event-related potentials (ERPs) were recorded. There was a significant congruency effect and significant conflict adaptation effects for error rates. Although response times (RTs) showed a significant congruency effect, the effect of previous-trial congruency on current-trial RTs was only present for current congruent trials. Temporospatial principal components analysis showed a P3-like ERP source localized using FieldTrip software to the medial cingulate gyrus that was smaller on incongruent than congruent trials and was significantly influenced by the recruitment of control processes following previous-trial emotional conflict (i.e., there was significant conflict adaptation in the ERPs). Results show that a face-only paradigm may be sufficient to elicit emotional conflict and suggest a system for rapidly detecting conflicting emotional stimuli and subsequently adjusting control resources, similar to cognitive conflict detection processes, when using conflicting facial expressions without words. PMID:24073278
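
    The behavioral contrast at issue, conflict adaptation, is the interaction of previous-trial and current-trial congruency. The pandas sketch below computes it from a trial table; column names are hypothetical, and this illustrates the contrast rather than reproducing the authors' analysis code.

    ```python
    # Conflict adaptation from a trial-level table.
    import pandas as pd

    def conflict_adaptation(trials):
        """trials: one row per trial, columns 'congruent' (bool) and 'rt' (seconds)."""
        df = trials.copy()
        df["prev_congruent"] = df["congruent"].shift(1)
        df = df.dropna(subset=["prev_congruent"])
        m = df.groupby(["prev_congruent", "congruent"])["rt"].mean()
        # Interference after congruent trials (cI - cC) minus interference after
        # incongruent trials (iI - iC); a positive value indicates adaptation.
        return ((m.loc[(True, False)] - m.loc[(True, True)])
                - (m.loc[(False, False)] - m.loc[(False, True)]))
    ```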

  13. Adaptation to emotional conflict: evidence from a novel face emotion paradigm.

    PubMed

    Clayson, Peter E; Larson, Michael J

    2013-01-01

    The preponderance of research on trial-by-trial recruitment of affective control (e.g., conflict adaptation) relies on stimuli wherein lexical word information conflicts with facial affective stimulus properties (e.g., the face-Stroop paradigm where an emotional word is overlaid on a facial expression). Several studies, however, indicate different neural time course and properties for processing of affective lexical stimuli versus affective facial stimuli. The current investigation used a novel task to examine control processes implemented following conflicting emotional stimuli with conflict-inducing affective face stimuli in the absence of affective words. Forty-one individuals completed a task wherein the affective-valence of the eyes and mouth were either congruent (happy eyes, happy mouth) or incongruent (happy eyes, angry mouth) while high-density event-related potentials (ERPs) were recorded. There was a significant congruency effect and significant conflict adaptation effects for error rates. Although response times (RTs) showed a significant congruency effect, the effect of previous-trial congruency on current-trial RTs was only present for current congruent trials. Temporospatial principal components analysis showed a P3-like ERP source localized using FieldTrip software to the medial cingulate gyrus that was smaller on incongruent than congruent trials and was significantly influenced by the recruitment of control processes following previous-trial emotional conflict (i.e., there was significant conflict adaptation in the ERPs). Results show that a face-only paradigm may be sufficient to elicit emotional conflict and suggest a system for rapidly detecting conflicting emotional stimuli and subsequently adjusting control resources, similar to cognitive conflict detection processes, when using conflicting facial expressions without words.

  14. Folliculotropism in pigmented facial macules: Differential diagnosis with reflectance confocal microscopy.

    PubMed

    Persechino, Flavia; De Carvalho, Nathalie; Ciardo, Silvana; De Pace, Barbara; Casari, Alice; Chester, Johanna; Kaleci, Shaniko; Stanganelli, Ignazio; Longo, Caterina; Farnetani, Francesca; Pellacani, Giovanni

    2018-03-01

    Pigmented facial macules are common on sun-damaged skin. The diagnosis of early-stage lentigo maligna (LM) and lentigo maligna melanoma (LMM) is challenging. Reflectance confocal microscopy (RCM) has been proven to increase the diagnostic accuracy of facial lesions. A total of 154 retrospectively collected pigmented facial macules were evaluated for the presence of previously described RCM features and new parameters depicting aspects of the follicle. Melanocytic nests, roundish pagetoid cells, follicular infiltration, bulgings from the follicles, and many bright dendrites infiltrating the hair follicle (ie, folliculotropism) were found to be indicative of LM/LMM compared to non-melanocytic skin neoplasms (NMSNs), with an overall sensitivity of 96% and specificity of 83%. Among NMSNs, solar lentigo and lichen planus-like keratosis proved easier to distinguish from LM/LMM, because they usually lack malignant features and present characteristic diagnostic parameters, such as an epidermal cobblestone pattern and polycyclic papillary contours. In contrast, distinguishing pigmented actinic keratosis (PAK) proved more difficult and required evaluation of hair follicle infiltration and bulging structures, owing to the frequent observation of a few bright dendrites in the epidermis that predominantly do not infiltrate the hair follicle (estimated specificity for PAK 53%). A detailed evaluation of the components of folliculotropism may help to improve diagnostic accuracy. Classifying the type, distribution and number of cells, together with the presence of bulging around the follicles, appears to be an important tool for differentiating PAK from LM/LMM on RCM analysis. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  15. Clinical evidence on the efficacy and safety of an antioxidant optimized 1.5% salicylic acid (SA) cream in the treatment of facial acne: an open, baseline-controlled clinical study.

    PubMed

    Zheng, Yue; Wan, Miaojian; Chen, Haiyan; Ye, Congxiu; Zhao, Yue; Yi, Jinling; Xia, Yue; Lai, Wei

    2013-05-01

    Acne pathogenesis is multifactorial and includes inflammation. Combining active ingredients targeting multiple components of acne pathogenesis may yield optimal outcomes. This study investigates the safety and efficacy of an antioxidant-optimized topical salicylic acid (SA) 1.5% cream containing natural skin-penetration enhancers combined with antioxidant activity for the treatment of facial acne. A total of 20 patients with facial acne, aged 19-32 years (2 males, 18 females; mean age 26.1 ± 3.2), were enrolled. Patients were treated with the topical 1.5% SA cream and instructed to apply it as a thin film over the affected area twice daily (in the morning and evening) for 4 weeks. Inflammatory severity and the numbers of papules and pustules were evaluated by investigators at day 0 and weekly thereafter, and patients ranked their improvement. In all, 95% of patients improved by 4 weeks of treatment: 20% showed complete clearing, 30% significant improvement, 15% moderate improvement, and 30% mild improvement; 5% showed no response. No side effects were observed. This study demonstrates the efficacy and safety of this optimized topical 1.5% SA cream containing natural skin-penetration enhancers combined with antioxidant activity when applied twice daily for the reduction of facial acne; in particular, it is most effective for mild-to-moderate acne. © 2013 John Wiley & Sons A/S. Published by Blackwell Publishing Ltd.

  16. Angular photogrammetric analysis of the soft-tissue facial profile of Indian adults.

    PubMed

    Pandian, K Saravana; Krishnan, Sindhuja; Kumar, S Aravind

    2018-01-01

    Soft-tissue analysis has become an important component of orthodontic diagnosis and treatment planning. Photographic evaluation of an orthodontic patient is a very close representation of the person's appearance. Previously established norms for soft-tissue analysis vary across ethnic groups, so there is a need to develop soft-tissue facial profile norms for Indian ethnic groups. The aim of this study is to establish angular photogrammetric standards of the soft-tissue facial profile for Indian males and females and to assess the sexual dimorphism between them. Lateral profile photographs of 300 randomly selected participants (150 males and 150 females) between 18 and 25 years of age were taken and analyzed using FACAD tracing software. Inclusion criteria were Angle Class I molar occlusion with acceptable crowding and proclination, normal growth and development with well-aligned dental arches, and a full complement of permanent teeth irrespective of third molar status. The study was conducted in the Indian population, with samples drawn from various cities across India. Descriptive statistics were computed, and sexual dimorphism was evaluated with Student's t-test between males and females. The results of the present study showed a statistically significant (P < 0.05) gender difference in 5 of the 12 parameters in the Indian population. In the present study, soft-tissue facial measurements were established by means of photogrammetric analysis to help orthodontists carry out more quantitative evaluations and make disciplined decisions. The mean values obtained can be used for comparison with records of participants with the same characteristics using this photogrammetric technique.
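
    Angular photogrammetric measurements of this kind reduce to angles between digitized landmarks. The sketch below computes the angle at a vertex landmark formed with two neighboring landmarks; the landmark names in the comment are examples, not parameters from the study.

    ```python
    # Angle (degrees) at vertex b formed by profile landmarks a-b-c.
    import numpy as np

    def landmark_angle(a, b, c):
        """a, b, c: (x, y) landmark coordinates from a traced profile photo."""
        v1 = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
        v2 = np.asarray(c, dtype=float) - np.asarray(b, dtype=float)
        cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
        return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

    # e.g., a facial-convexity-style angle through three hypothetical landmarks:
    # landmark_angle((10, 40), (12, 25), (11, 5))
    ```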

  17. Developmental and Evolutionary Significance of the Zygomatic Bone

    PubMed Central

    Heuzé, Yann; Kawasaki, Kazuhiko; Schwarz, Tobias; Schoenebeck, Jeffrey J.

    2016-01-01

    The zygomatic bone is derived evolutionarily from the orbital series. In most modern mammals the zygomatic bone forms a large part of the face and usually serves as a bridge that connects the facial skeleton to the neurocranium. Our aim is to provide information on the contribution of the zygomatic bone to variation in midfacial protrusion using three samples: humans, domesticated dogs, and monkeys. In each case, variation in midface protrusion is a heritable trait produced by one of three classes of transmission: localized dysmorphology associated with single gene dysfunction, selective breeding, or long-term evolution from a common ancestor. We hypothesize that the shape of the zygomatic bone reflects its role in stabilizing the connection between facial skeleton and neurocranium and that, consequently, changes in facial protrusion are more strongly reflected by the maxilla and premaxilla. Our geometric morphometric analyses support our hypothesis, suggesting that the shape of the zygomatic bone has less to do with facial protrusion. By morphometrically dissecting the zygomatic bone, we have determined a degree of modularity among parts of the midfacial skeleton, suggesting that these components have the ability to vary independently and thus can evolve differentially. From these purely morphometric data, we propose that the neural crest cells that are fated to contribute to the zygomatic bone experience developmental cues that distinguish them from the maxilla and premaxilla. The spatiotemporal and molecular identity of the cues that impart zygoma progenitors with their identity remains an open question that will require alternative data sets. Anat Rec, 299:1616–1630, 2016. © 2016 The Authors The Anatomical Record Published by Wiley Periodicals, Inc.

  18. Traumatic facial nerve neuroma with facial palsy presenting in infancy.

    PubMed

    Clark, James H; Burger, Peter C; Boahene, Derek Kofi; Niparko, John K

    2010-07-01

    Objective: To describe the management of traumatic neuroma of the facial nerve in a child, with a review of the literature. Patient: Sixteen-month-old male. Interventions: Radiological imaging and surgery. Main outcome measure: Facial nerve function. The patient presented at 16 months with a right facial palsy and was found to have a right facial nerve traumatic neuroma. A transmastoid, middle fossa resection of the right facial nerve lesion was undertaken with a successful facial nerve-to-hypoglossal nerve anastomosis. The facial palsy improved postoperatively. A traumatic neuroma should be considered in an infant who presents with facial palsy, even in the absence of an obvious history of trauma. The treatment of such lesions is complex in any age group, but especially in young children. Symptoms, age, lesion size, growth rate, and facial nerve function determine the appropriate management.

  19. Outcome of different facial nerve reconstruction techniques.

    PubMed

    Mohamed, Aboshanif; Omi, Eigo; Honda, Kohei; Suzuki, Shinsuke; Ishikawa, Kazuo

    There is no facial nerve reconstruction technique that guarantees recovery of facial function to grade III. To evaluate the efficacy and safety of different facial nerve reconstruction techniques, facial nerve reconstruction was performed in 22 patients (facial nerve interpositional graft in 11 patients and hypoglossal-facial nerve transfer in the other 11). All patients had facial function House-Brackmann (HB) grade VI, caused either by trauma or by resection of a tumor. All patients underwent primary nerve reconstruction except for 7 patients, in whom late reconstruction was performed two weeks to four months after the initial surgery. The follow-up period was at least two years. With the facial nerve interpositional graft technique, we achieved facial function HB grade III in eight patients and grade IV in three patients. Synkinesis was found in eight patients, and facial contracture with synkinesis was found in two patients. With hypoglossal-facial nerve transfer using different modifications, we achieved facial function HB grade III in nine patients and grade IV in two patients. Facial contracture, synkinesis and tongue atrophy were found in three patients, and synkinesis was found in five patients. However, patients who had primary direct facial-hypoglossal end-to-side anastomosis showed the best results, without any neurological deficit. Among the various reanimation techniques, when indicated, direct end-to-side facial-hypoglossal anastomosis through epineural suturing is the most effective technique, with excellent outcomes for facial reanimation and preservation of tongue movement, particularly when performed as a primary procedure. Copyright © 2016 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.

  20. Dynamic biometric identification from multiple views using the GLBP-TOP method.

    PubMed

    Wang, Yu; Shen, Xuanjing; Chen, Haipeng; Zhai, Yujie

    2014-01-01

    To realize effective and rapid dynamic biometric identification with low computational complexity, a video-based facial texture method that extracts local binary patterns from three orthogonal planes in the frequency domain of the Gabor transform (GLBP-TOP) was proposed. First, each normalized face was transformed with a Gabor wavelet to obtain the enhanced Gabor magnitude map; the LBP-TOP operator was then applied to the maps to extract video texture. Finally, weighted Chi-square statistics based on the Fisher criterion were used for identification. Biometric experiments on the Honda/UCSD database showed the proposed algorithm to be effective and robust against changes in illumination and expression.
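
    A per-frame sketch of the GLBP idea is shown below: take the Gabor magnitude of a normalized face and encode its texture as a uniform-LBP histogram. The full GLBP-TOP method extends this to three orthogonal planes across the video volume; the scikit-image calls are real, but the filter and LBP parameters are illustrative.

    ```python
    # Gabor-magnitude LBP histogram for a single normalized face image.
    import numpy as np
    from skimage.feature import local_binary_pattern
    from skimage.filters import gabor

    def gabor_lbp_histogram(face, frequency=0.25, theta=0.0, P=8, R=1):
        real, imag = gabor(face, frequency=frequency, theta=theta)
        magnitude = np.hypot(real, imag)        # enhanced Gabor magnitude map
        codes = local_binary_pattern(magnitude, P, R, method="uniform")
        # Uniform LBP yields P + 2 distinct code values.
        hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
        return hist
    ```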

  1. [INVITED] Non-intrusive optical imaging of face to probe physiological traits in Autism Spectrum Disorder

    NASA Astrophysics Data System (ADS)

    Samad, Manar D.; Bobzien, Jonna L.; Harrington, John W.; Iftekharuddin, Khan M.

    2016-03-01

    Autism Spectrum Disorders (ASD) can impair non-verbal communication, including the variety and extent of facial expressions in social and interpersonal communication. These impairments may appear as differential traits in the physiology of facial muscles of an individual with ASD when compared to a typically developing individual. The differential traits in facial expressions shown by facial muscle-specific changes (also known as 'facial oddity' in subjects with ASD) may be measured visually. However, this mode of measurement may not discern the subtlety in facial oddity distinctive to ASD. Earlier studies have used intrusive electrophysiological sensors on the facial skin to gauge facial muscle actions from quantitative physiological data. This study demonstrates, for the first time in the literature, novel quantitative measures for facial oddity recognition using non-intrusive facial imaging sensors such as video and 3D optical cameras. An Institutional Review Board (IRB)-approved pilot study was conducted with eight participants with ASD and eight typically developing participants in a control group, whose facial images were captured in response to visual stimuli. The proposed computational techniques and statistical analyses reveal a higher mean level of facial muscle action in the ASD group than in the control group. The facial muscle-specific evaluation reveals intense yet asymmetric facial responses as facial oddity in participants with ASD. This finding about facial oddity may objectively define measurable differential markers in the facial expressions of individuals with ASD.

  2. Source apportionment of soil heavy metals using robust absolute principal component scores-robust geographically weighted regression (RAPCS-RGWR) receptor model.

    PubMed

    Qu, Mingkai; Wang, Yan; Huang, Biao; Zhao, Yongcun

    2018-06-01

    The traditional source apportionment models, such as absolute principal component scores-multiple linear regression (APCS-MLR), are usually susceptible to outliers, which may be widely present in the regional geochemical dataset. Furthermore, the models are merely built on variable space instead of geographical space and thus cannot effectively capture the local spatial characteristics of each source contributions. To overcome the limitations, a new receptor model, robust absolute principal component scores-robust geographically weighted regression (RAPCS-RGWR), was proposed based on the traditional APCS-MLR model. Then, the new method was applied to the source apportionment of soil metal elements in a region of Wuhan City, China as a case study. Evaluations revealed that: (i) RAPCS-RGWR model had better performance than APCS-MLR model in the identification of the major sources of soil metal elements, and (ii) source contributions estimated by RAPCS-RGWR model were more close to the true soil metal concentrations than that estimated by APCS-MLR model. It is shown that the proposed RAPCS-RGWR model is a more effective source apportionment method than APCS-MLR (i.e., non-robust and global model) in dealing with the regional geochemical dataset. Copyright © 2018 Elsevier B.V. All rights reserved.
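
    For orientation, the classical APCS-MLR model that RAPCS-RGWR makes robust and spatially local can be sketched compactly: PCA factor scores are shifted into "absolute" scores via an artificial zero-concentration sample, and each element is then regressed on those scores. The sketch below shows only this non-robust, global baseline, not the paper's refinements; `C` and the source count are hypothetical.

    ```python
    # Classical APCS-MLR receptor-model baseline (non-robust, non-spatial).
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression

    def apcs_mlr(C, n_sources):
        """C: (n_samples, n_elements) concentration matrix."""
        mean, std = C.mean(axis=0), C.std(axis=0)
        Z = (C - mean) / std                        # standardized concentrations
        pca = PCA(n_components=n_sources)
        scores = pca.fit_transform(Z)
        z0 = (np.zeros(C.shape[1]) - mean) / std    # artificial zero sample
        apcs = scores - pca.transform(z0.reshape(1, -1))
        reg = LinearRegression().fit(apcs, C)       # regress each element on APCS
        predicted = reg.predict(apcs)               # modeled total concentrations
        return apcs, reg.coef_, predicted
    ```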

  3. Scripts or Components? A Comparative Study of Basic Emotion Knowledge in Roma and Non-Roma Children

    ERIC Educational Resources Information Center

    Giménez-Dasí, Marta; Quintanilla, Laura; Lucas-Molina, Beatriz

    2018-01-01

    The basic aspects of emotional comprehension seem to be acquired around the age of 5. However, it is not clear whether children's emotion knowledge is based on facial expression, organized in scripts, or determined by sociocultural context. This study aims to shed some light on these subjects by assessing knowledge of basic emotions in 4- and…

  4. Knife blade as a facial foreign body.

    PubMed

    Gardner, P A; Righi, P; Shahbahrami, P B

    1997-08-01

    This case demonstrates the unpredictability of foreign bodies in the face. The retained knife blade eluded detection on two separate examinations. The essential components to making a correct diagnosis of a foreign body following a stabbing to the face include a thorough review of the mechanism of injury, a complete head and neck examination, a high index of suspicion, and plain radiographs of the face.

  5. Preoperative Identification of Facial Nerve in Vestibular Schwannomas Surgery Using Diffusion Tensor Tractography

    PubMed Central

    Choi, Kyung-Sik; Kim, Min-Su; Kwon, Hyeok-Gyu; Jang, Sung-Ho

    2014-01-01

    Objective Facial nerve palsy is a common complication of treatment for vestibular schwannoma (VS), so preserving facial nerve function is important. Preoperative visualization of the course of the facial nerve in relation to the VS could help prevent injury to the nerve during surgery. In this study, we evaluate the accuracy of diffusion tensor tractography (DTT) for preoperative identification of the facial nerve. Methods We prospectively collected data from 11 patients with VS who underwent preoperative DTT of the facial nerve. Imaging results were correlated with intraoperative findings. Postoperative DTT was performed at 3 months after surgery. Facial nerve function was clinically evaluated according to the House-Brackmann (HB) facial nerve grading system. Results Facial nerve courses on preoperative tractography were entirely consistent with intraoperative findings in all patients. The facial nerve was located on the anterior tumor surface in 5 cases, anteroinferior in 3 cases, anterosuperior in 2 cases, and posteroinferior in 1 case. Postoperative facial nerve tractography confirmed preservation of the facial nerve in all patients. No patient had severe facial paralysis at one year postoperatively. Conclusion This study shows that DTT for preoperative identification of the facial nerve in VS surgery can be an accurate and useful radiological method and could help improve facial nerve preservation. PMID:25289119

  6. Facial animation on an anatomy-based hierarchical face model

    NASA Astrophysics Data System (ADS)

    Zhang, Yu; Prakash, Edmond C.; Sung, Eric

    2003-04-01

    In this paper we propose a new hierarchical 3D facial model based on anatomical knowledge that provides high fidelity for realistic facial expression animation. Like the real human face, the facial model has a hierarchical biomechanical structure, incorporating a physically-based approximation to facial skin tissue, a set of anatomically-motivated facial muscle actuators, and an underlying skull structure. The deformable skin model has a multilayer structure to approximate different types of soft tissue. It takes into account the nonlinear stress-strain relationship of the skin and the fact that soft tissue is almost incompressible. Different types of muscle models have been developed to simulate the distribution of muscle force on the skin during contraction. The skull model enables both more accurate facial deformation and consideration of facial anatomy during the interactive definition of facial muscles. Under the muscular force, the deformation of the facial skin is evaluated by numerical integration of the governing dynamic equations. The dynamic facial animation algorithm runs at an interactive rate and generates flexible, realistic facial expressions.
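
    As a rough illustration of the "numerical integration of the governing dynamic equations" mentioned above, the sketch below advances a mass-spring skin patch one time step under a muscle force. It uses simple linear springs and semi-implicit Euler integration, whereas the paper's model is multilayered with a nonlinear stress-strain law; all constants and the spring layout are illustrative.

```python
import numpy as np

def step_skin(pos, vel, springs, muscle_force,
              k=50.0, mass=1.0, damping=0.9, dt=1e-3):
    """One semi-implicit Euler step for a mass-spring skin patch.
    pos, vel, muscle_force: (n, 3) arrays; springs: list of (i, j, rest_len)."""
    force = muscle_force.copy()
    for i, j, rest in springs:                    # linear elastic springs
        d = pos[j] - pos[i]
        length = np.linalg.norm(d)
        f = k * (length - rest) * d / max(length, 1e-9)
        force[i] += f                             # equal and opposite forces
        force[j] -= f
    vel = damping * (vel + dt * force / mass)     # damped velocity update
    return pos + dt * vel, vel
```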

  7. Human Facial Expressions as Adaptations:Evolutionary Questions in Facial Expression Research

    PubMed Central

    SCHMIDT, KAREN L.; COHN, JEFFREY F.

    2007-01-01

    The importance of the face in social interaction and social intelligence is widely recognized in anthropology. Yet the adaptive functions of human facial expression remain largely unknown. An evolutionary model of human facial expression as behavioral adaptation can be constructed, given the current knowledge of the phenotypic variation, ecological contexts, and fitness consequences of facial behavior. Studies of facial expression are available, but results are not typically framed in an evolutionary perspective. This review identifies the relevant physical phenomena of facial expression and integrates the study of this behavior with the anthropological study of communication and sociality in general. Anthropological issues with relevance to the evolutionary study of facial expression include: facial expressions as coordinated, stereotyped behavioral phenotypes, the unique contexts and functions of different facial expressions, the relationship of facial expression to speech, the value of facial expressions as signals, and the relationship of facial expression to social intelligence in humans and in nonhuman primates. Human smiling is used as an example of adaptation, and testable hypotheses concerning the human smile, as well as other expressions, are proposed. PMID:11786989

  8. Plain faces are more expressive: comparative study of facial colour, mobility and musculature in primates

    PubMed Central

    Santana, Sharlene E.; Dobson, Seth D.; Diogo, Rui

    2014-01-01

    Facial colour patterns and facial expressions are among the most important phenotypic traits that primates use during social interactions. While colour patterns provide information about the sender's identity, expressions can communicate its behavioural intentions. Extrinsic factors, including social group size, have shaped the evolution of facial coloration and mobility, but intrinsic relationships and trade-offs likely operate in their evolution as well. We hypothesize that complex facial colour patterning could reduce how salient facial expressions appear to a receiver, and thus species with highly expressive faces would have evolved uniformly coloured faces. We test this hypothesis through a phylogenetic comparative study, and explore the underlying morphological factors of facial mobility. Supporting our hypothesis, we find that species with highly expressive faces have plain facial colour patterns. The number of facial muscles does not predict facial mobility; instead, species that are larger and have a larger facial nucleus have more expressive faces. This highlights a potential trade-off between facial mobility and colour patterning in primates and reveals complex relationships between facial features during primate evolution. PMID:24850898

  9. Evaluation of the robustness of estimating five components from a skin spectral image

    NASA Astrophysics Data System (ADS)

    Akaho, Rina; Hirose, Misa; Tsumura, Norimichi

    2018-04-01

    We evaluated the robustness of a method used to estimate five components (melanin, oxy-hemoglobin, deoxy-hemoglobin, shading, and surface reflectance) from the spectral reflectance of skin at five wavelengths, with respect to noise and changes in epidermis thickness. We also used the method to estimate the five components from recorded images of age spots and dark circles under the eyes. We found that image noise must be no more than 0.1% for the five components to be estimated accurately, and that the thickness of the epidermis affects the estimation. By applying the method to the recorded spectral images, we obtained the spatial distribution of the major components responsible for age spots and dark circles under the eyes.
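
    In the common skin-absorbance formulation, this five-band estimation reduces to solving a small linear system per pixel: with five wavelengths and five unknowns, the component amounts follow from one solve. The sketch below uses placeholder extinction coefficients; a real system would rely on calibrated spectra.

```python
import numpy as np

# Placeholder extinction coefficients at the five wavelengths; rows are
# wavelengths, columns are components (illustrative numbers only, not the
# calibrated spectra a real system would use).
E = np.array([
    # melanin  oxy-Hb  deoxy-Hb  shading  surface
    [1.00,     0.30,   0.40,     1.0,     0.20],
    [0.90,     0.80,   0.60,     1.0,     0.10],
    [0.80,     0.40,   0.70,     1.0,     0.30],
    [0.70,     0.90,   0.50,     1.0,     0.20],
    [0.60,     0.20,   0.30,     1.0,     0.40],
])

def estimate_components(reflectance):
    """reflectance: length-5 vector at the five bands -> five amounts."""
    absorbance = -np.log(reflectance)        # Beer-Lambert style linearization
    return np.linalg.solve(E, absorbance)    # five equations, five unknowns

true = np.array([0.8, 0.3, 0.2, 0.1, 0.05]) # illustrative component amounts
reflectance = np.exp(-E @ true)             # forward model
print(estimate_components(reflectance))     # recovers `true`
```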

  10. Stability of a giant connected component in a complex network

    NASA Astrophysics Data System (ADS)

    Kitsak, Maksim; Ganin, Alexander A.; Eisenberg, Daniel A.; Krapivsky, Pavel L.; Krioukov, Dmitri; Alderson, David L.; Linkov, Igor

    2018-01-01

    We analyze the stability of a network's giant connected component under the impact of adverse events, which we model through link percolation. Specifically, we quantify the extent to which the largest connected component of a network consists of the same nodes, regardless of the specific set of deactivated links. Our results are intuitive in the case of single-layered systems: the presence of large-degree nodes in a single-layered network ensures both its robustness and stability. In contrast, we find that interdependent networks that are robust to adverse events have unstable connected components. Our results bring novel insights to the design of resilient network topologies and the reinforcement of existing networked systems.
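
    The distinction between robustness (how large the giant component stays) and stability (whether it keeps the same nodes) can be made concrete with a short percolation experiment. The sketch below uses networkx and the Jaccard overlap between realizations; the keep probability, trial count, and graph are illustrative.

```python
import itertools
import random
import networkx as nx

def lcc_nodes(G, keep_prob):
    """Largest connected component after random link percolation."""
    H = G.copy()
    H.remove_edges_from([e for e in G.edges if random.random() > keep_prob])
    return max(nx.connected_components(H), key=len)

def robustness_and_stability(G, keep_prob=0.7, trials=20):
    comps = [lcc_nodes(G, keep_prob) for _ in range(trials)]
    # Robustness: average relative size of the giant component.
    robustness = sum(len(c) for c in comps) / (trials * G.number_of_nodes())
    # Stability: average Jaccard overlap of its node sets across realizations.
    jaccards = [len(a & b) / len(a | b)
                for a, b in itertools.combinations(comps, 2)]
    stability = sum(jaccards) / len(jaccards)
    return robustness, stability

# Example: a scale-free network with hubs tends to score high on both.
G = nx.barabasi_albert_graph(500, 3)
print(robustness_and_stability(G))
```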

  11. Facial dynamics and emotional expressions in facial aging treatments.

    PubMed

    Michaud, Thierry; Gassia, Véronique; Belhaouari, Lakhdar

    2015-03-01

    Facial expressions convey emotions that form the foundation of interpersonal relationships, and many of these emotions promote and regulate our social linkages. Hence, the symptomatological analysis of facial aging and the treatment plan must include knowledge of facial dynamics and the emotional expressions of the face. This approach aims to more closely meet patients' expectations of natural-looking results by correcting age-related negative expressions while respecting the emotional language of the face. This article will successively describe patients' expectations, the role of facial expressions in relational dynamics, the relationship between facial structures and facial expressions, and the way facial aging mimics negative expressions. Finally, therapeutic implications for facial aging treatment will be addressed. © 2015 Wiley Periodicals, Inc.

  12. Adaptive weighted local textural features for illumination, expression, and occlusion invariant face recognition

    NASA Astrophysics Data System (ADS)

    Cui, Chen; Asari, Vijayan K.

    2014-03-01

    Biometric features such as fingerprints, iris patterns, and face features help to identify people and restrict access to secure areas by performing advanced pattern analysis and matching. Face recognition is one of the most promising biometric methodologies for human identification in a non-cooperative security environment. However, the recognition results obtained by face recognition systems are affected by several variations that may happen to the patterns in an unrestricted environment. As a result, several algorithms have been developed for extracting different facial features for face recognition. Due to the various possible challenges of data captured at different lighting conditions, viewing angles, facial expressions, and partial occlusions in natural environmental conditions, automatic facial recognition still remains a difficult issue that needs to be resolved. In this paper, we propose a novel approach to tackling some of these issues by analyzing local textural descriptions for facial feature representation. The textural information is extracted by an enhanced local binary pattern (ELBP) description of all the local regions of the face. The relationship of each pixel with respect to its neighborhood is extracted and employed to calculate the new representation. ELBP reconstructs a much better textural feature extraction vector from an original gray-level image under different lighting conditions. The dimensionality of the texture image is reduced by principal component analysis performed on each local face region. Each low-dimensional vector representing a local region is then weighted based on the significance of the sub-region. The weight of each sub-region is determined by the local variance estimate of the respective region, which represents its significance. The final facial textural feature vector is obtained by concatenating the reduced-dimensional weight sets of all the modules (sub-regions) of the face image. Experiments conducted on various popular face databases show promising performance of the proposed algorithm under varying lighting, expression, and partial occlusion conditions. Four databases were used for testing: the Yale Face database, the Extended Yale Face database B, the Japanese Female Facial Expression database, and the CMU AMP Facial Expression database. The experimental results on all four databases show the effectiveness of the proposed system, and the computation cost is lower because of the simplified calculation steps. Research is progressing to investigate the effectiveness of the proposed method under pose-varying conditions as well. It is envisaged that a multi-lane approach of trained frameworks at different pose bins and an appropriate voting strategy would lead to a good recognition rate in such situations.
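
    A minimal sketch of this modular pipeline is given below, substituting a basic 8-neighbor LBP for the authors' enhanced LBP (ELBP): per-region histograms, PCA reduction, a variance-based region weight, and concatenation. The grid size, PCA dimension, and the weighting proxy are illustrative assumptions, not the paper's exact choices.

```python
import numpy as np
from sklearn.decomposition import PCA

def lbp_histogram(patch):
    """256-bin histogram of basic 8-neighbor LBP codes for one sub-region."""
    c = patch[1:-1, 1:-1]
    shifts = [patch[:-2, :-2], patch[:-2, 1:-1], patch[:-2, 2:],
              patch[1:-1, 2:], patch[2:, 2:], patch[2:, 1:-1],
              patch[2:, :-2], patch[1:-1, :-2]]
    codes = sum(((s >= c).astype(int) << k) for k, s in enumerate(shifts))
    return np.bincount(codes.ravel(), minlength=256).astype(float)

def weighted_face_feature(faces, grid=4, dim=16):
    """faces: (n_images, H, W) grayscale array -> (n_images, features).
    Assumes n_images >= dim so the per-region PCA is well defined."""
    n, H, W = faces.shape
    h, w = H // grid, W // grid
    parts = []
    for r in range(grid):
        for cidx in range(grid):
            hists = np.array([lbp_histogram(f[r*h:(r+1)*h, cidx*w:(cidx+1)*w])
                              for f in faces])
            reduced = PCA(n_components=dim).fit_transform(hists)
            # Variance-based weight, a stand-in for the paper's local
            # variance estimate of each sub-region.
            weight = hists.var(axis=1).mean()
            parts.append(weight * reduced)
    return np.hstack(parts)                 # concatenated weighted modules
```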

  13. Incongruence Between Observers’ and Observed Facial Muscle Activation Reduces Recognition of Emotional Facial Expressions From Video Stimuli

    PubMed Central

    Wingenbach, Tanja S. H.; Brosnan, Mark; Pfaltz, Monique C.; Plichta, Michael M.; Ashwin, Chris

    2018-01-01

    According to embodied cognition accounts, viewing others’ facial emotion can elicit the respective emotion representation in observers which entails simulations of sensory, motor, and contextual experiences. In line with that, published research found viewing others’ facial emotion to elicit automatic matched facial muscle activation, which was further found to facilitate emotion recognition. Perhaps making congruent facial muscle activity explicit produces an even greater recognition advantage. If there is conflicting sensory information, i.e., incongruent facial muscle activity, this might impede recognition. The effects of actively manipulating facial muscle activity on facial emotion recognition from videos were investigated across three experimental conditions: (a) explicit imitation of viewed facial emotional expressions (stimulus-congruent condition), (b) pen-holding with the lips (stimulus-incongruent condition), and (c) passive viewing (control condition). It was hypothesised that (1) experimental condition (a) and (b) result in greater facial muscle activity than (c), (2) experimental condition (a) increases emotion recognition accuracy from others’ faces compared to (c), (3) experimental condition (b) lowers recognition accuracy for expressions with a salient facial feature in the lower, but not the upper face area, compared to (c). Participants (42 males, 42 females) underwent a facial emotion recognition experiment (ADFES-BIV) while electromyography (EMG) was recorded from five facial muscle sites. The experimental conditions’ order was counter-balanced. Pen-holding caused stimulus-incongruent facial muscle activity for expressions with facial feature saliency in the lower face region, which reduced recognition of lower face region emotions. Explicit imitation caused stimulus-congruent facial muscle activity without modulating recognition. Methodological implications are discussed. PMID:29928240

  15. Preservation of Facial Nerve Function Repaired by Using Fibrin Glue-Coated Collagen Fleece for a Totally Transected Facial Nerve during Vestibular Schwannoma Surgery

    PubMed Central

    Choi, Kyung-Sik; Kim, Min-Su; Jang, Sung-Ho

    2014-01-01

    Recently, increasing rates of facial nerve preservation after vestibular schwannoma (VS) surgery have been achieved. However, the management of a partially or completely damaged facial nerve remains an important issue. The authors report a patient who had a good recovery after facial nerve reconstruction using fibrin glue-coated collagen fleece for a totally transected facial nerve during VS surgery. We verified the anatomical preservation and functional outcome of the facial nerve with postoperative diffusion tensor (DT) imaging facial nerve tractography, electroneurography (ENoG), and the House-Brackmann (HB) grade. DT imaging tractography on the 3rd postoperative day revealed preservation of the facial nerve, and the facial nerve degeneration ratio on ENoG was 94.1% on the 7th postoperative day. At the 3-month and 1-year follow-up examinations with DT imaging facial nerve tractography and ENoG, good facial nerve function was observed. PMID:25024825

  16. Pattern of facial palsy in a typical Nigerian specialist hospital.

    PubMed

    Lamina, S; Hanif, S

    2012-12-01

    Data on the incidence of facial palsy are generally lacking in Nigeria. This study assessed the six-year incidence of facial palsy at Murtala Muhammed Specialist Hospital (MMSH), Kano, Nigeria. The records of patients diagnosed with facial problems between January 2000 and December 2005 were scrutinized. Data on diagnosis, age, sex, side affected, occupation, and causes were obtained. A total of 698 patients with facial problems were recorded, of whom 594 (85%) were diagnosed with facial palsy. Among these, males (56.2%) had a higher incidence than females; the 20-34 years age group (40.3%) had the greatest prevalence; the commonest cause was idiopathic (39.1%); and facial palsy was most common among businessmen (31.6%). Right-sided facial palsy (52.2%) predominated. The incidence of facial palsy was highest in 2003 (25.3%) and decreased from 2004. It was concluded that the incidence of facial palsy was high and that Bell's palsy remains the most common cause of facial (nerve) paralysis.

  17. Slowing down Presentation of Facial Movements and Vocal Sounds Enhances Facial Expression Recognition and Induces Facial-Vocal Imitation in Children with Autism

    ERIC Educational Resources Information Center

    Tardif, Carole; Laine, France; Rodriguez, Melissa; Gepner, Bruno

    2007-01-01

    This study examined the effects of slowing down presentation of facial expressions and their corresponding vocal sounds on facial expression recognition and facial and/or vocal imitation in children with autism. Twelve autistic children and twenty-four normal control children were presented with emotional and non-emotional facial expressions on…

  18. Turning Avatar into Realistic Human Expression Using Linear and Bilinear Interpolations

    NASA Astrophysics Data System (ADS)

    Hazim Alkawaz, Mohammed; Mohamad, Dzulkifli; Rehman, Amjad; Basori, Ahmad Hoirul

    2014-06-01

    Facial animation based on 3D facial data is well supported by laser scanning and advanced 3D tools for producing complex facial models. However, such approaches still lack facial expressions driven by emotional state, even though facial skin colour, which is closely related to human emotion, can markedly enhance the impression of an expression. This paper presents innovative techniques for facial animation transformation using facial skin colour based on linear interpolation and bilinear interpolation. The generated expressions closely resemble genuine human expressions and enhance the facial expressiveness of the virtual human.
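
    The two interpolation schemes named in the title are straightforward; the sketch below applies them to per-vertex RGB skin colour, e.g. blending from a neutral tone toward a flushed "angry" tone along one axis, with a second axis available for bilinear blending. The colour values and the meaning of the axes are illustrative.

```python
import numpy as np

def lerp(c0, c1, t):
    """Linear interpolation between two RGB colours, t in [0, 1]."""
    return (1.0 - t) * np.asarray(c0, float) + t * np.asarray(c1, float)

def bilerp(c00, c10, c01, c11, u, v):
    """Bilinear interpolation over two expression axes (u, v) in [0, 1]^2,
    e.g. anger intensity on one axis and blush/pallor on the other."""
    return lerp(lerp(c00, c10, u), lerp(c01, c11, u), v)

neutral = (0.85, 0.70, 0.62)                 # placeholder skin tones
angry = (0.92, 0.55, 0.50)
print(lerp(neutral, angry, 0.5))             # halfway between the two states
```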

  19. Using dynamic mode decomposition for real-time background/foreground separation in video

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kutz, Jose Nathan; Grosek, Jacob; Brunton, Steven

    The technique of dynamic mode decomposition (DMD) is disclosed herein for the purpose of robustly separating video frames into background (low-rank) and foreground (sparse) components in real-time. Foreground/background separation is achieved at the computational cost of just one singular value decomposition (SVD) and one linear equation solve, thus producing results orders of magnitude faster than robust principal component analysis (RPCA). Additional techniques, including techniques for analyzing the video for multi-resolution time-scale components, and techniques for reusing computations to allow processing of streaming video in real time, are also described herein.
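
    A minimal sketch of the DMD separation described above follows: one SVD of the shifted frame matrix, an eigendecomposition of the reduced operator, one linear solve for the mode amplitudes, and a split of modes by the magnitude of their continuous-time eigenvalues (near-zero rates form the static background). The rank, threshold, and variable names are illustrative choices.

```python
import numpy as np

def dmd_background(frames, rank=10, dt=1.0, tol=1e-2):
    """frames: (n_pixels, n_frames) matrix of vectorized grayscale frames.
    Returns (background, foreground) of the same shape."""
    X1, X2 = frames[:, :-1], frames[:, 1:]
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)     # the one SVD
    U, s, Vh = U[:, :rank], s[:rank], Vh[:rank]
    A = U.conj().T @ X2 @ Vh.conj().T / s                 # reduced operator
    eigvals, W = np.linalg.eig(A)
    Phi = X2 @ Vh.conj().T / s @ W                        # DMD modes
    omega = np.log(eigvals.astype(complex)) / dt          # continuous rates
    b = np.linalg.lstsq(Phi, X1[:, 0], rcond=None)[0]     # the linear solve
    bg = np.abs(omega) < tol                              # near-zero modes
    t = np.arange(frames.shape[1]) * dt
    background = (Phi[:, bg] * b[bg]) @ np.exp(np.outer(omega[bg], t))
    foreground = frames - background.real                 # sparse residual
    return background.real, foreground
```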

  20. Genetic Factors That Increase Male Facial Masculinity Decrease Facial Attractiveness of Female Relatives

    PubMed Central

    Lee, Anthony J.; Mitchem, Dorian G.; Wright, Margaret J.; Martin, Nicholas G.; Keller, Matthew C.; Zietsch, Brendan P.

    2014-01-01

    For women, choosing a facially masculine man as a mate is thought to confer genetic benefits to offspring. Crucial assumptions of this hypothesis have not been adequately tested. It has been assumed that variation in facial masculinity is due to genetic variation and that genetic factors that increase male facial masculinity do not increase facial masculinity in female relatives. We objectively quantified the facial masculinity in photos of identical (n = 411) and nonidentical (n = 782) twins and their siblings (n = 106). Using biometrical modeling, we found that much of the variation in male and female facial masculinity is genetic. However, we also found that masculinity of male faces is unrelated to their attractiveness and that facially masculine men tend to have facially masculine, less-attractive sisters. These findings challenge the idea that facially masculine men provide net genetic benefits to offspring and call into question this popular theoretical framework. PMID:24379153

  2. Recognition of facial, auditory, and bodily emotions in older adults.

    PubMed

    Ruffman, Ted; Halberstadt, Jamin; Murray, Janice

    2009-11-01

    Understanding older adults' social functioning difficulties requires insight into their recognition of emotion processing in voices and bodies, not just faces, the focus of most prior research. We examined 60 young and 61 older adults' recognition of basic emotions in facial, vocal, and bodily expressions, and when matching faces and bodies to voices, using 120 emotion items. Older adults were worse than young adults in 17 of 30 comparisons, with consistent difficulties in recognizing both positive (happy) and negative (angry and sad) vocal and bodily expressions. Nearly three quarters of older adults functioned at a level similar to the lowest one fourth of young adults, suggesting that age-related changes are common. In addition, we found that older adults' difficulty in matching emotions was not explained by difficulty on the component sources (i.e., faces or voices on their own), suggesting an additional problem of integration.

  3. Face biometrics with renewable templates

    NASA Astrophysics Data System (ADS)

    van der Veen, Michiel; Kevenaar, Tom; Schrijen, Geert-Jan; Akkermans, Ton H.; Zuo, Fei

    2006-02-01

    In recent literature, privacy protection technologies for biometric templates were proposed. Among these is the so-called helper-data system (HDS) based on reliable component selection. In this paper we integrate this approach with face biometrics such that we achieve a system in which the templates are privacy protected, and multiple templates can be derived from the same facial image for the purpose of template renewability. Extracting binary feature vectors forms an essential step in this process. Using the FERET and Caltech databases, we show that this quantization step does not significantly degrade the classification performance compared to, for example, traditional correlation-based classifiers. The binary feature vectors are integrated in the HDS leading to a privacy protected facial recognition algorithm with acceptable FAR and FRR, provided that the intra-class variation is sufficiently small. This suggests that a controlled enrollment procedure with a sufficient number of enrollment measurements is required.
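
    The reliable-component idea can be sketched as follows: threshold each real-valued feature against a population mean, and keep only the components whose enrollment statistics sit far from the threshold, so the resulting bits stay stable across measurements. In a full helper-data system the binary string would additionally be protected (e.g., by a fuzzy commitment); the thresholding rule, margin, and Hamming tolerance below are illustrative.

```python
import numpy as np

def enroll(samples, population_mean, n_bits=64):
    """samples: (n_enroll, n_features) feature vectors of one subject.
    Returns the indices of reliable components and the binary template."""
    subject_mean = samples.mean(axis=0)
    margin = np.abs(subject_mean - population_mean)   # distance to threshold
    reliable = np.argsort(margin)[-n_bits:]           # most reliable components
    template = (subject_mean > population_mean)[reliable]
    return reliable, template.astype(np.uint8)

def verify(probe, population_mean, reliable, template, max_hamming=8):
    """Accept if the probe's bits match the template up to a few bit errors."""
    bits = (probe > population_mean)[reliable].astype(np.uint8)
    return np.count_nonzero(bits ^ template) <= max_hamming
```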

  4. Dielectric elastomer actuators for facial expression

    NASA Astrophysics Data System (ADS)

    Wang, Yuzhe; Zhu, Jian

    2016-04-01

    Dielectric elastomer actuators have the advantage of mimicking the salient feature of life: movements in response to stimuli. In this paper we explore application of dielectric elastomer actuators to artificial muscles. These artificial muscles can mimic natural masseter to control jaw movements, which are key components in facial expressions especially during talking and singing activities. This paper investigates optimal design of the dielectric elastomer actuator. It is found that the actuator with embedded plastic fibers can avert electromechanical instability and can greatly improve its actuation. Two actuators are then installed in a robotic skull to drive jaw movements, mimicking the masseters in a human jaw. Experiments show that the maximum vertical displacement of the robotic jaw, driven by artificial muscles, is comparable to that of the natural human jaw during speech activities. Theoretical simulations are conducted to analyze the performance of the actuator, which is quantitatively consistent with the experimental observations.

  5. Antimicrobial Polymers Prepared by ROMP with Unprecedented Selectivity: A Molecular Construction Kit Approach

    PubMed Central

    Lienkamp, Karen; Madkour, Ahmad E.; Musante, Ashlan; Nelson, Christopher F.; Nüsslein, Klaus

    2014-01-01

    Synthetic Mimics of Antimicrobial Peptides (SMAMPs) imitate natural host-defense peptides, a vital component of the body's immune system. This work presents a molecular construction kit that allows the easy and versatile synthesis of a broad variety of facially amphiphilic oxanorbornene-derived monomers. Their ring-opening metathesis polymerization (ROMP) and deprotection provide several series of SMAMPs. Using amphiphilicity, monomer feed ratio, and molecular weight as parameters, polymers with 533 times higher selectivity (selectivity = hemolytic concentration/minimum inhibitory concentration) for bacteria over mammalian cells were discovered. Some of these polymers were 50 times more selective for Gram-positive over Gram-negative bacteria, while other polymers surprisingly showed the opposite preference. This kind of "double selectivity" (bacteria over mammalian cells, and one bacterial type over another) is unprecedented in other polymer systems and is attributed to the monomer's facial amphiphilicity. PMID:18593128

  6. Orientation-sensitivity to facial features explains the Thatcher illusion.

    PubMed

    Psalta, Lilia; Young, Andrew W; Thompson, Peter; Andrews, Timothy J

    2014-10-09

    The Thatcher illusion provides a compelling example of the perceptual cost of face inversion. The Thatcher illusion is often thought to result from a disruption to the processing of spatial relations between face features. Here, we show the limitations of this account and instead demonstrate that the effect of inversion in the Thatcher illusion is better explained by a disruption to the processing of purely local facial features. Using a matching task, we found that participants were able to discriminate normal and Thatcherized versions of the same face when they were presented in an upright orientation, but not when the images were inverted. Next, we showed that the effect of inversion was also apparent when only the eye region or only the mouth region was visible. These results demonstrate that a key component of the Thatcher illusion is to be found in orientation-specific encoding of the expressive features (eyes and mouth) of the face. © 2014 ARVO.

  7. Robust 3D face landmark localization based on local coordinate coding.

    PubMed

    Song, Mingli; Tao, Dacheng; Sun, Shengpeng; Chen, Chun; Maybank, Stephen J

    2014-12-01

    In the 3D facial animation and synthesis community, input faces are usually required to be labeled with a set of landmarks for parameterization. Because of variations in pose, expression, and resolution, automatic 3D face landmark localization remains a challenge. In this paper, a novel landmark localization approach is presented. The approach is based on local coordinate coding (LCC) and consists of two stages. In the first stage, we perform nose detection, relying on the fact that the nose shape is usually invariant under variations in pose, expression, and resolution. Then, we use the iterative closest point algorithm to find a 3D affine transformation that aligns the input face to a reference face. In the second stage, we perform resampling to build correspondences between the input 3D face and the training faces. Then, an LCC-based localization algorithm is proposed to obtain the positions of the landmarks in the input face. Experimental results show that the proposed method is comparable to state-of-the-art methods in terms of robustness, flexibility, and accuracy.

  8. Robust head pose estimation via supervised manifold learning.

    PubMed

    Wang, Chao; Song, Xubo

    2014-05-01

    Head poses can be automatically estimated using manifold learning algorithms, with the assumption that, with pose as the only variable, the face images lie in a smooth and low-dimensional manifold. However, this estimation approach is challenging due to other appearance variations related to identity, head location in the image, background clutter, facial expression, and illumination. To address the problem, we propose to incorporate supervised information (pose angles of training samples) into the process of manifold learning. The process has three stages: neighborhood construction, graph weight computation, and projection learning. For the first two stages, we redefine the inter-point distance for neighborhood construction, as well as the graph weight, by constraining them with the pose angle information. For Stage 3, we present a supervised neighborhood-based linear feature transformation algorithm that keeps data points with similar pose angles close together but data points with dissimilar pose angles far apart. The experimental results show that our method achieves higher estimation accuracy than other state-of-the-art algorithms and is robust to identity and illumination variations. Copyright © 2014 Elsevier Ltd. All rights reserved.
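
    Stages 1 and 2 can be sketched by inflating the appearance distance between training samples whose pose angles differ, so that both the neighborhood and the graph weights respect the supervision. The scaling factor and kernel below are illustrative choices, not the authors' exact redefinition.

```python
import numpy as np

def supervised_weights(X, poses, k=8, alpha=0.1, sigma=1.0):
    """X: (n, d) appearance features; poses: (n,) pose angles in degrees.
    Returns a symmetric (n, n) graph weight matrix."""
    d_app = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    d_pose = np.abs(poses[:, None] - poses[None, :])
    dist = d_app * (1.0 + alpha * d_pose)      # pose-constrained distance
    W = np.zeros_like(dist)
    for i in range(len(X)):
        nbrs = np.argsort(dist[i])[1:k + 1]    # skip self at index 0
        W[i, nbrs] = np.exp(-dist[i, nbrs]**2 / sigma**2)
    return np.maximum(W, W.T)                  # symmetrize the graph
```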

  9. A Multimodal Emotion Detection System during Human-Robot Interaction

    PubMed Central

    Alonso-Martín, Fernando; Malfaz, María; Sequeira, João; Gorostiza, Javier F.; Salichs, Miguel A.

    2013-01-01

    In this paper, a multimodal user-emotion detection system for social robots is presented. This system is intended to be used during human–robot interaction, and it is integrated as part of the overall interaction system of the robot: the Robotics Dialog System (RDS). Two modalities are used to detect emotions: voice analysis and facial expression analysis. In order to analyze the voice of the user, a new component has been developed: Gender and Emotion Voice Analysis (GEVA), which is written in the ChucK language. For emotion detection in facial expressions, the system Gender and Emotion Facial Analysis (GEFA) has also been developed. This latter system integrates two third-party solutions: the Sophisticated High-speed Object Recognition Engine (SHORE) and the Computer Expression Recognition Toolbox (CERT). Once these new components (GEVA and GEFA) give their results, a decision rule is applied to combine the information given by both of them. The result of this rule, the detected emotion, is integrated into the dialog system through communicative acts. Hence, each communicative act gives, among other things, the detected emotion of the user to the RDS so it can adapt its strategy in order to achieve greater satisfaction during the human–robot dialog. Each of the new components, GEVA and GEFA, can also be used individually. Moreover, they are integrated with the robotic control platform ROS (Robot Operating System). Several experiments with real users were performed to determine the accuracy of each component and to set the final decision rule. The results obtained from applying this decision rule in these experiments show a high success rate in automatic user emotion recognition, improving on the results given by the two information channels (audio and visual) separately. PMID:24240598

  10. Reaction time, processing speed and sustained attention in schizophrenia: impact on social functioning.

    PubMed

    Lahera, Guillermo; Ruiz, Alicia; Brañas, Antía; Vicens, María; Orozco, Arantxa

    Previous studies have linked processing speed with social cognition and functioning in patients with schizophrenia. A discriminant analysis is needed to determine the different components of this neuropsychological construct. This paper analyzes the impact of processing speed, reaction time, and sustained attention on social functioning. 98 outpatients aged 18 to 65 years with a DSM-5 diagnosis of schizophrenia and at least 3 months of clinical stability were recruited. Sociodemographic and clinical data were collected, and the following variables were measured: processing speed (Trail Making Test [TMT], symbol coding [BACS], verbal fluency), simple and elective reaction time, sustained attention, recognition of facial emotions, and global functioning. Processing speed (measured only through the BACS), sustained attention (CPT), and elective (but not simple) reaction time were associated with functioning. Recognition of facial emotions (FEIT) correlated significantly with scores on measures of processing speed (BACS, Animals, TMT), sustained attention (CPT), and reaction time. The linear regression model showed a significant relationship between functioning, emotion recognition (P=.015), and processing speed (P=.029). Deficits in processing speed and facial emotion recognition are associated with worse global functioning in patients with schizophrenia. Copyright © 2017 SEP y SEPB. Publicado por Elsevier España, S.L.U. All rights reserved.

  11. Cortical activation deficits during facial emotion processing in youth at high risk for the development of substance use disorders.

    PubMed

    Hulvershorn, Leslie A; Finn, Peter; Hummer, Tom A; Leibenluft, Ellen; Ball, Brandon; Gichina, Victoria; Anand, Amit

    2013-08-01

    Recent longitudinal studies demonstrate that addiction risk may be influenced by a cognitive, affective and behavioral phenotype that emerges during childhood. Relatively little research has focused on the affective or emotional risk components of this high-risk phenotype, including the relevant neurobiology. Non-substance abusing youth (N=19; mean age=12.2) with externalizing psychopathology and paternal history of a substance use disorder and demographically matched healthy comparisons (N=18; mean age=11.9) were tested on a facial emotion matching task during functional MRI. This task involved matching faces by emotions (angry, anxious) or matching shape orientation. High-risk youth exhibited increased medial prefrontal, precuneus and occipital cortex activation compared to the healthy comparison group during the face matching condition, relative to the control shape condition. The occipital activation correlated positively with parent-rated emotion regulation impairments in the high-risk group. These findings suggest a preexisting abnormality in cortical activation in response to facial emotion matching in youth at high risk for the development of problem drug or alcohol use. These cortical deficits may underlie impaired affective processing and regulation, which in turn may contribute to escalating drug use in adolescence. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  13. Self-concept and the perception of facial appearance in children and adolescents seeking orthodontic treatment.

    PubMed

    Phillips, Ceib; Beal, Kimberly N Edwards

    2009-01-01

    To examine, in adolescents with mild to moderate malocclusion, the relationship between self-concept and demographic characteristics, a clinical assessment of malocclusion, self-perception of malocclusion, and self-perception of facial attractiveness. Fifty-nine consecutive patients ages 9 to 15 years scheduled for initial records in a graduate orthodontic clinic consented to participate. Each subject independently completed the Multidimensional Self-Concept Scale (MSCS), the Facial Image Scale, and the Index of Orthodontic Treatment Need-Aesthetic Component (IOTN-AC). Peer Assessment Rating (PAR) scores were obtained from the patients' diagnostic dental casts. Forward multiple-regression analysis with a backward overlook was used to analyze the effect of the demographic, clinical, and self-perception measures on each of the six self-concept (MSCS) domains. Self-perception of the dentofacial region was the only statistically significant predictor (P < .05) for the Global, Competence, Affect, Academic, and Physical domains of self-concept, while age, parental marital status, and the adolescent's self-perception of the dentofacial region were statistically significant predictors (P < .05) of Social Self-Concept. The self-perceived level of attractiveness or "positive" feelings toward the dentofacial region is more strongly related to self-concept than the severity of the malocclusion as indicated by the PAR score or by the adolescent's perception of their malocclusion.

  14. Application of a Novel Semi-Automatic Technique for Determining the Bilateral Symmetry Plane of the Facial Skeleton of Normal Adult Males.

    PubMed

    Roumeliotis, Grayson; Willing, Ryan; Neuert, Mark; Ahluwalia, Romy; Jenkyn, Thomas; Yazdani, Arjang

    2015-09-01

    The accurate assessment of symmetry in the craniofacial skeleton is important for cosmetic and reconstructive craniofacial surgery. Although there have been several published attempts to develop a system for determining the correct plane of symmetry, all remain inaccurate and time-consuming. Here, the authors applied a novel semi-automatic method for the calculation of craniofacial symmetry, based on principal component analysis and iterative corrective point computation, to a large sample of normal adult male facial computed tomography scans obtained clinically (n = 32). The authors hypothesized that this method would generate planes of symmetry that would result in less error when one side of the face was compared to the other than a symmetry plane generated from cephalometric landmarks. When a three-dimensional model of one side of the face was reflected across the semi-automatic plane of symmetry, there was less error than when it was reflected across the cephalometric plane. The semi-automatic plane was also more accurate when the locations of bilateral cephalometric landmarks (e.g., frontozygomatic sutures) were compared across the face. The authors conclude that this method allows for accurate and fast measurements of craniofacial symmetry. This has important implications for studying the development of the facial skeleton, and clinical applications in reconstruction.
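
    The core of such a semi-automatic approach, initializing a candidate symmetry plane from the principal axes of the point cloud and scoring it by reflect-and-match error, can be sketched briefly. The paper's iterative corrective point refinement is omitted here, and scoring by nearest-neighbor distance is an illustrative stand-in.

```python
import numpy as np
from scipy.spatial import cKDTree

def reflect(points, centroid, normal):
    """Reflect points across the plane through `centroid` with unit `normal`."""
    d = (points - centroid) @ normal
    return points - 2.0 * np.outer(d, normal)

def symmetry_plane(points):
    """points: (n, 3) skeletal landmark/vertex cloud -> (centroid, normal)."""
    centroid = points.mean(axis=0)
    _, _, Vt = np.linalg.svd(points - centroid)   # rows: principal axes
    # Candidate normals are the principal axes; keep the one whose mirror
    # image best matches the original cloud.
    tree = cKDTree(points)
    errors = [tree.query(reflect(points, centroid, n))[0].mean() for n in Vt]
    return centroid, Vt[int(np.argmin(errors))]
```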

  15. The Facial Platysma and Its Underappreciated Role in Lower Face Dynamics and Contour.

    PubMed

    de Almeida, Ada R T; Romiti, Alessandra; Carruthers, Jean D A

    2017-08-01

    The platysma is a superficial muscle involved in important features of the aging neck. Vertical bands, horizontal lines, and loss of lower face contour are effectively treated with botulinum toxin A (BoNT-A). However, its pars facialis, mandibularis, and modiolaris have been underappreciated. To demonstrate the role of BoNT-A treatment of the upper platysma and its impact on lower face dynamics and contour. Retrospective analysis of cases treated by an injection pattern encompassing the facial platysma components, aiming to block the lower face as a whole complex. It consisted of 2 intramuscular injections into the mentalis muscle and 2 horizontal lines of BoNT-A injections superficially performed above and below the mandible (total dose, 16 onabotulinumtoxinA U/side). Photographs were taken at rest and during motion (frontal and oblique views), before and after treatment. A total of 161 patients have been treated in the last 2 years with the following results: frontal and lateral enhancement of lower facial contour, relaxation of high horizontal lines located just below the lateral mandibular border, and lower deep vertical smile lines present lateral to the oral commissures and melomental folds. The upper platysma muscle plays a relevant role in the functional anatomy of the lower face that can be modulated safely with neuromodulators.

  16. Spontaneous Facial Mimicry in Response to Dynamic Facial Expressions

    ERIC Educational Resources Information Center

    Sato, Wataru; Yoshikawa, Sakiko

    2007-01-01

    Based on previous neuroscientific evidence indicating activation of the mirror neuron system in response to dynamic facial actions, we hypothesized that facial mimicry would occur while subjects viewed dynamic facial expressions. To test this hypothesis, dynamic/static facial expressions of anger/happiness were presented using computer-morphing…

  17. Facial nerve palsy associated with a cystic lesion of the temporal bone.

    PubMed

    Kim, Na Hyun; Shin, Seung-Ho

    2014-03-01

    Facial nerve palsy results in the loss of facial expression and is most commonly caused by a benign, self-limiting inflammatory condition known as Bell palsy. However, there are other conditions that may cause facial paralysis, such as neoplastic conditions of the facial nerve, traumatic nerve injury, and temporal bone lesions. We present a case of facial nerve palsy concurrent with a benign cystic lesion of the temporal bone, adjacent to the tympanic segment of the facial nerve. The patient's symptoms subsided after facial nerve decompression via a transmastoid approach.

  18. Contemporary solutions for the treatment of facial nerve paralysis.

    PubMed

    Garcia, Ryan M; Hadlock, Tessa A; Klebuc, Michael J; Simpson, Roger L; Zenn, Michael R; Marcus, Jeffrey R

    2015-06-01

    After reviewing this article, the participant should be able to: 1. Understand the most modern indications and technique for neurotization, including masseter-to-facial nerve transfer (fifth-to-seventh cranial nerve transfer). 2. Contrast the advantages and limitations associated with contiguous muscle transfers and free-muscle transfers for facial reanimation. 3. Understand the indications for a two-stage and one-stage free gracilis muscle transfer for facial reanimation. 4. Apply nonsurgical adjuvant treatments for acute facial nerve paralysis. Facial expression is a complex neuromotor and psychomotor process that is disrupted in patients with facial paralysis, breaking the link between emotion and physical expression. Contemporary reconstructive options are being implemented in patients with facial paralysis. While static procedures provide facial symmetry at rest, true 'facial reanimation' requires restoration of facial movement. Contemporary treatment options include neurotization procedures (a new motor nerve is used to restore innervation to a viable muscle), contiguous regional muscle transfer (most commonly temporalis muscle transfer), microsurgical free muscle transfer, and nonsurgical adjuvants used to balance facial symmetry. Each approach has advantages and disadvantages along with ongoing controversies and should be individualized for each patient. Treatments for patients with facial paralysis continue to evolve in order to restore the complex psychomotor process of facial expression.

  19. A robust sparse-modeling framework for estimating schizophrenia biomarkers from fMRI.

    PubMed

    Dillon, Keith; Calhoun, Vince; Wang, Yu-Ping

    2017-01-30

    Our goal is to identify the brain regions most relevant to mental illness using neuroimaging. State-of-the-art machine learning methods commonly suffer from repeatability difficulties in this application, particularly when large and heterogeneous populations are used for samples. We revisit both dimensionality reduction and sparse modeling and recast them in a common optimization-based framework. This allows us to combine the benefits of both types of methods in an approach we call unambiguous components. We use this to estimate the image component with constrained variability that is best correlated with the unknown disease mechanism. We apply the method to the estimation of neuroimaging biomarkers for schizophrenia, using task fMRI data from a large multi-site study. The proposed approach yields an improvement in both the robustness of the estimate and classification accuracy. We find that unambiguous components incorporate roughly two thirds of the same brain regions as the sparsity-based methods LASSO and elastic net, while roughly one third of the selected regions differ. Further, unambiguous components achieve superior classification accuracy in differentiating cases from controls. Unambiguous components provide a robust way to estimate important regions in imaging data. Copyright © 2016 Elsevier B.V. All rights reserved.
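
    For context, the sparsity-based comparison point mentioned above (an elastic net over vectorized fMRI features, with cross-validated case-control accuracy and the top-weighted voxels taken as candidate regions) can be sketched as below. This is not the authors' unambiguous-components method, and all settings are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def biomarker_regions(X, y, n_keep=200):
    """X: (n_subjects, n_voxels) features; y: case/control labels.
    Returns cross-validated accuracy and indices of candidate voxels."""
    clf = LogisticRegression(penalty="elasticnet", solver="saga",
                             l1_ratio=0.5, C=0.1, max_iter=5000)
    acc = cross_val_score(clf, X, y, cv=5).mean()  # case/control accuracy
    clf.fit(X, y)
    weights = np.abs(clf.coef_.ravel())
    selected = np.argsort(weights)[-n_keep:]       # top-weighted voxels
    return acc, selected
```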

  20. Childhood Cumulative Risk Exposure and Adult Amygdala Volume and Function.

    PubMed

    Evans, Gary W; Swain, James E; King, Anthony P; Wang, Xin; Javanbakht, Arash; Ho, S Shaun; Angstadt, Michael; Phan, K Luan; Xie, Hong; Liberzon, Israel

    2016-06-01

    Considerable work indicates that early cumulative risk exposure is aversive to human development, but very little research has examined the neurological underpinnings of these robust findings. This study investigates amygdala volume and reactivity to facial stimuli among adults (mean 23.7 years of age, n = 54) as a function of cumulative risk exposure during childhood (9 and 13 years of age). In addition, we test to determine whether expected cumulative risk elevations in amygdala volume would mediate functional reactivity of the amygdala during socioemotional processing. Risks included substandard housing quality, noise, crowding, family turmoil, child separation from family, and violence. Total and left hemisphere adult amygdala volumes were positively related to cumulative risk exposure during childhood. The links between childhood cumulative risk exposure and elevated amygdala responses to emotionally neutral facial stimuli in adulthood were mediated by the corresponding amygdala volumes. Cumulative risk exposure in later adolescence (17 years of age), however, was unrelated to subsequent adult amygdala volume or function. Physical and socioemotional risk exposures early in life appear to alter amygdala development, rendering adults more reactive to ambiguous stimuli such as neutral faces. These stress-related differences in childhood amygdala development might contribute to the well-documented psychological distress as a function of early risk exposure. © 2015 Wiley Periodicals, Inc.

  1. Enhancing community knowledge and health behaviors to eliminate blinding trachoma in Mali using radio messaging as a strategy.

    PubMed

    Bamani, Sanoussi; Toubali, Emily; Diarra, Sadio; Goita, Seydou; Berté, Zana; Coulibaly, Famolo; Sangaré, Hama; Tuinsma, Marjon; Zhang, Yaobi; Dembelé, Benoit; Melvin, Palesa; MacArthur, Chad

    2013-04-01

    The National Blindness Prevention Program in Mali has broadcast radio messages about trachoma as part of the country's trachoma elimination strategy since 2008. In 2011, a radio impact survey using multi-stage cluster sampling was conducted in the regions of Kayes and Segou to assess radio listening habits, coverage of the broadcasts, community knowledge and behavior specific to trachoma, and facial cleanliness of children. Radio access and listening were high, with 60% of respondents having heard a message about trachoma on the radio. The majority of respondents knew about trachoma, its root causes, its impact on health, and prevention measures. Additionally, 66% reported washing their children's faces at least twice per day, and 94% reported latrine disposal of feces. A high percentage of persons who gave a positive response to knowledge and behavior questions reported hearing the trachoma messages on the radio, with 60% reporting that the radio is where they learned about trachoma. There was no significant difference in facial cleanliness when comparing children whose primary caregiver had or had not heard the trachoma messages. Next steps include revising the current messages to include more focused behavior change messaging and engaging in a more robust use of community radio.

  2. Face recognition system using multiple face model of hybrid Fourier feature under uncontrolled illumination variation.

    PubMed

    Hwang, Wonjun; Wang, Haitao; Kim, Hyunwoo; Kee, Seok-Cheol; Kim, Junmo

    2011-04-01

    The authors present a robust face recognition system for large-scale data sets taken under uncontrolled illumination variations. The proposed face recognition system consists of a novel illumination-insensitive preprocessing method, a hybrid Fourier-based facial feature extraction, and a score fusion scheme. First, in the preprocessing stage, a face image is transformed into an illumination-insensitive image, called an "integral normalized gradient image," by normalizing and integrating the smoothed gradients of a facial image. Then, for feature extraction of complementary classifiers, multiple face models based upon hybrid Fourier features are applied. The hybrid Fourier features are extracted from different Fourier domains in different frequency bandwidths, and each feature is individually classified by linear discriminant analysis. In addition, multiple face models are generated from plural normalized face images that have different eye distances. Finally, to combine scores from the multiple complementary classifiers, a log likelihood ratio-based score fusion scheme is applied. The proposed system is evaluated using the face recognition grand challenge (FRGC) experimental protocols; FRGC is a large publicly available data set. Experimental results on the FRGC version 2.0 data sets show that the proposed method achieves an average 81.49% verification rate on 2-D face images under various environmental variations such as illumination changes, expression changes, and elapsed time.
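
    The flavour of the illumination-insensitive preprocessing can be sketched as follows: smooth the image, compute gradients, divide out the gradient magnitude, and re-integrate. The paper's exact normalization and integration scheme differs; the cumulative-sum re-integration below is a crude illustrative stand-in.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def normalized_gradient_image(img, sigma=2.0, eps=1e-3):
    """Illustrative illumination-insensitive transform of a grayscale image."""
    smooth = gaussian_filter(img.astype(float), sigma)
    gy, gx = np.gradient(smooth)
    mag = np.sqrt(gx**2 + gy**2) + eps
    gx, gy = gx / mag, gy / mag          # keep direction, drop illumination
    # Crude re-integration by cumulative sums along each axis (averaged).
    return 0.5 * (np.cumsum(gx, axis=1) + np.cumsum(gy, axis=0))
```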

  3. Objectifying facial expressivity assessment of Parkinson's patients: preliminary study.

    PubMed

    Wu, Peng; Gonzalez, Isabel; Patsis, Georgios; Jiang, Dongmei; Sahli, Hichem; Kerckhofs, Eric; Vandekerckhove, Marie

    2014-01-01

    Patients with Parkinson's disease (PD) can exhibit a reduction of spontaneous facial expression, designated as "facial masking," a symptom in which facial muscles become rigid. To improve clinical assessment of facial expressivity in PD, this work attempts to quantify dynamic facial expressivity (facial activity) by automatically recognizing facial action units (AUs) and estimating their intensity. Spontaneous facial expressivity was assessed by comparing 7 PD patients with 8 control participants. To elicit spontaneous facial expressions resembling those typically triggered by emotions, six emotions (amusement, sadness, anger, disgust, surprise, and fear) were induced using movie clips. During the movie clips, physiological signals (facial electromyography (EMG) and electrocardiogram (ECG)) and frontal face video of the participants were recorded. The participants were asked to report on their emotional states throughout the experiment. We first examined the effectiveness of the emotion manipulation by evaluating the participants' self-reports. Disgust-induced self-reports were significantly higher than those for the other emotions, so we focused our analysis on the data recorded while participants watched the disgust movie clips. The proposed facial expressivity assessment approach captured differences in facial expressivity between PD patients and controls, and differences between PD patients at different stages of Parkinson's disease were also observed.

  4. Facial approximation-from facial reconstruction synonym to face prediction paradigm.

    PubMed

    Stephan, Carl N

    2015-05-01

    Facial approximation was first proposed as a synonym for facial reconstruction in 1987 due to dissatisfaction with the connotations the latter label held. Since its debut, facial approximation's identity has morphed as anomalies in face prediction have accumulated. Now underpinned by differences in what problems are thought to count as legitimate, facial approximation can no longer be considered a synonym for, or subclass of, facial reconstruction. Instead, two competing paradigms of face prediction have emerged, namely: facial approximation and facial reconstruction. This paper shines a Kuhnian lens across the discipline of face prediction to comprehensively review these developments and outlines the distinguishing features between the two paradigms. © 2015 American Academy of Forensic Sciences.

  5. Reproducibility of the dynamics of facial expressions in unilateral facial palsy.

    PubMed

    Alagha, M A; Ju, X; Morley, S; Ayoub, A

    2018-02-01

    The aim of this study was to assess the reproducibility of non-verbal facial expressions in unilateral facial paralysis using dynamic four-dimensional (4D) imaging. The Di4D system was used to record five facial expressions of 20 adult patients. The system captured 60 three-dimensional (3D) images per second; each facial expression took 3-4 seconds and was recorded in real time, so a set of 180 3D facial images was generated for each expression. The procedure was repeated after 30 minutes to assess the reproducibility of the expressions. A mathematical facial mesh consisting of thousands of quasi-point 'vertices' was conformed to the face in order to determine the morphological characteristics in a comprehensive manner. The vertices were tracked throughout the sequence of 180 images. Five key 3D facial frames from each sequence of images were analyzed. Comparisons were made between the first and second capture of each facial expression to assess the reproducibility of facial movements. Corresponding images were aligned using partial Procrustes analysis, and the root mean square distance between them was calculated and analyzed statistically (paired Student t-test, P<0.05). The facial expressions of lip purse, cheek puff, and raising of eyebrows were reproducible; the facial expressions of maximum smile and forceful eye closure were not. The limited coordination of the various groups of facial muscles contributed to the lack of reproducibility of these expressions. 4D imaging is a useful clinical tool for the assessment of facial expressions. Copyright © 2017 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.

  6. Impaired Overt Facial Mimicry in Response to Dynamic Facial Expressions in High-Functioning Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Yoshimura, Sayaka; Sato, Wataru; Uono, Shota; Toichi, Motomi

    2015-01-01

    Previous electromyographic studies have reported that individuals with autism spectrum disorders (ASD) exhibited atypical patterns of facial muscle activity in response to facial expression stimuli. However, whether such activity is expressed in visible facial mimicry remains unknown. To investigate this issue, we videotaped facial responses in…

  7. Multiple Mechanisms in the Perception of Face Gender: Effect of Sex-Irrelevant Features

    ERIC Educational Resources Information Center

    Komori, Masashi; Kawamura, Satoru; Ishihara, Shigekazu

    2011-01-01

    Effects of sex-relevant and sex-irrelevant facial features on the evaluation of facial gender were investigated. Participants rated masculinity of 48 male facial photographs and femininity of 48 female facial photographs. Eighty feature points were measured on each of the facial photographs. Using a generalized Procrustes analysis, facial shapes…

  8. Characterization and recognition of mixed emotional expressions in thermal face image

    NASA Astrophysics Data System (ADS)

    Saha, Priya; Bhattacharjee, Debotosh; De, Barin K.; Nasipuri, Mita

    2016-05-01

    Infrared imaging of facial expressions has been introduced to overcome the illumination problem inherent in visible-light imagery. The paper investigates facial skin temperature distribution for mixed thermal facial expressions in a face database created by the authors, in which six expressions are basic and the remaining 12 are mixtures of those basic expressions. Temperature analysis has been performed on three facial regions of interest (ROIs): periorbital, supraorbital and mouth. Temperature variability of the ROIs in different expressions has been measured using statistical parameters. The temperature variation measurements in the ROIs of a particular expression form a vector, which is later used in the recognition of mixed facial expressions. Investigations show that facial features in mixed facial expressions can be characterized by positive emotion induced facial features and negative emotion induced facial features. Supraorbital is a useful facial region that can differentiate basic expressions from mixed expressions. Analysis and interpretation of mixed expressions have been conducted with the help of box-and-whisker plots. A facial region expressing a mixture of two expressions generally induces less temperature change than the corresponding region during a basic expression.
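
    A sketch of the ROI feature vector described above: temperature statistics per region, concatenated into a single vector. The specific statistics (mean, standard deviation, range) and the rectangular masks are assumptions for illustration; the paper does not publish its exact parameter set.

    ```python
    import numpy as np

    def roi_temperature_features(thermal, rois):
        """Feature vector of temperature statistics (mean, std, range) for
        each facial ROI. `thermal` is a 2-D array of temperatures (deg C);
        `rois` maps ROI name -> boolean mask of the same shape."""
        feats = []
        for name in ("periorbital", "supraorbital", "mouth"):
            values = thermal[rois[name]]
            feats += [values.mean(), values.std(), values.max() - values.min()]
        return np.array(feats)

    # Toy frame: a warm supraorbital region on a 34 deg C background.
    frame = np.full((240, 320), 34.0)
    frame[40:70, 100:220] += 1.5
    masks = {"periorbital": np.zeros_like(frame, bool),
             "supraorbital": np.zeros_like(frame, bool),
             "mouth": np.zeros_like(frame, bool)}
    masks["periorbital"][70:100, 80:240] = True
    masks["supraorbital"][40:70, 100:220] = True
    masks["mouth"][170:200, 120:200] = True
    print(roi_temperature_features(frame, masks))
    ```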

  9. Slowing down presentation of facial movements and vocal sounds enhances facial expression recognition and induces facial-vocal imitation in children with autism.

    PubMed

    Tardif, Carole; Lainé, France; Rodriguez, Mélissa; Gepner, Bruno

    2007-09-01

    This study examined the effects of slowing down presentation of facial expressions and their corresponding vocal sounds on facial expression recognition and facial and/or vocal imitation in children with autism. Twelve autistic children and twenty-four normal control children were presented with emotional and non-emotional facial expressions on CD-Rom, under audio or silent conditions, and under dynamic visual conditions (slowly, very slowly, at normal speed) plus a static control. Overall, children with autism showed lower performance in expression recognition and more induced facial-vocal imitation than controls. In the autistic group, facial expression recognition and induced facial-vocal imitation were significantly enhanced in slow conditions. Findings may give new perspectives for understanding and intervention for verbal and emotional perceptive and communicative impairments in autistic populations.

  10. Multiracial Facial Golden Ratio and Evaluation of Facial Appearance.

    PubMed

    Alam, Mohammad Khursheed; Mohd Noor, Nor Farid; Basri, Rehana; Yew, Tan Fo; Wen, Tay Hui

    2015-01-01

    This study aimed to investigate the association of facial proportion, and its relation to the golden ratio, with the evaluation of facial appearance among the Malaysian population. This was a cross-sectional study of 286 students randomly selected from the Universiti Sains Malaysia (USM) Health Campus (150 females and 136 males; 100 Malaysian Chinese, 100 Malaysian Malay and 86 Malaysian Indian), with a mean age of 21.54 ± 1.56 years (age range, 18-25). Facial indices obtained from direct facial measurements were used for the classification of facial shape into short, ideal and long. A validated structured questionnaire was used to assess subjects' evaluation of their own facial appearance. The mean facial indices of Malaysian Indian (MI), Malaysian Chinese (MC) and Malaysian Malay (MM) were 1.59 ± 0.19, 1.57 ± 0.25 and 1.54 ± 0.23 respectively. Only MC showed significant sexual dimorphism in facial index (P = 0.047; P<0.05) but no significant difference was found between races. Out of the 286 subjects, 49 (17.1%) were of ideal facial shape, 156 (54.5%) short and 81 (28.3%) long. The facial evaluation questionnaire showed that MC had the lowest satisfaction, with mean scores of 2.18 ± 0.97 for overall impression and 2.15 ± 1.04 for facial parts, compared to MM and MI, with mean scores of 1.80 ± 0.97 and 1.64 ± 0.74 respectively for overall impression, and 1.75 ± 0.95 and 1.70 ± 0.83 respectively for facial parts. In conclusion: 1) Only 17.1% of Malaysian facial proportions conformed to the golden ratio, with the majority of the population having a short face (54.5%); 2) Facial index did not depend significantly on race; 3) Significant sexual dimorphism was shown among Malaysian Chinese; 4) All three races are generally satisfied with their own facial appearance; 5) No significant association was found between golden ratio and facial evaluation score among the Malaysian population.
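
    For concreteness, a facial index can be computed and classified as in the sketch below, taking the index as facial height divided by facial width and "ideal" as close to the golden ratio. The ±0.05 tolerance band and the sample measurements are assumptions, since the study does not publish its exact cut-offs.

    ```python
    PHI = (1 + 5 ** 0.5) / 2          # golden ratio, ~1.618

    def classify_face(height_mm: float, width_mm: float, tol: float = 0.05):
        """Classify facial shape from the facial index (height / width)
        relative to the golden ratio. The +/- tol band for 'ideal' is an
        assumption, not the study's published cut-off."""
        index = height_mm / width_mm
        if abs(index - PHI) <= tol:
            return index, "ideal"
        return index, "short" if index < PHI else "long"

    print(classify_face(180.0, 115.0))   # index ~1.57 -> "short"
    ```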

  11. Multiracial Facial Golden Ratio and Evaluation of Facial Appearance

    PubMed Central

    2015-01-01

    This study aimed to investigate the association of facial proportion, and its relation to the golden ratio, with the evaluation of facial appearance among the Malaysian population. This was a cross-sectional study of 286 students randomly selected from the Universiti Sains Malaysia (USM) Health Campus (150 females and 136 males; 100 Malaysian Chinese, 100 Malaysian Malay and 86 Malaysian Indian), with a mean age of 21.54 ± 1.56 years (age range, 18–25). Facial indices obtained from direct facial measurements were used for the classification of facial shape into short, ideal and long. A validated structured questionnaire was used to assess subjects’ evaluation of their own facial appearance. The mean facial indices of Malaysian Indian (MI), Malaysian Chinese (MC) and Malaysian Malay (MM) were 1.59 ± 0.19, 1.57 ± 0.25 and 1.54 ± 0.23 respectively. Only MC showed significant sexual dimorphism in facial index (P = 0.047; P<0.05) but no significant difference was found between races. Out of the 286 subjects, 49 (17.1%) were of ideal facial shape, 156 (54.5%) short and 81 (28.3%) long. The facial evaluation questionnaire showed that MC had the lowest satisfaction, with mean scores of 2.18 ± 0.97 for overall impression and 2.15 ± 1.04 for facial parts, compared to MM and MI, with mean scores of 1.80 ± 0.97 and 1.64 ± 0.74 respectively for overall impression, and 1.75 ± 0.95 and 1.70 ± 0.83 respectively for facial parts. In conclusion: 1) Only 17.1% of Malaysian facial proportions conformed to the golden ratio, with the majority of the population having a short face (54.5%); 2) Facial index did not depend significantly on race; 3) Significant sexual dimorphism was shown among Malaysian Chinese; 4) All three races are generally satisfied with their own facial appearance; 5) No significant association was found between golden ratio and facial evaluation score among the Malaysian population. PMID:26562655

  12. A patient with bilateral facial palsy associated with hypertension and chickenpox: learning points.

    PubMed

    Al-Abadi, Eslam; Milford, David V; Smith, Martin

    2010-11-26

    Bilateral facial nerve paralysis is an uncommon presentation, and even more so in children. There are reports of different causes of bilateral facial nerve palsy. It is well established that hypertension and chickenpox cause unilateral facial paralysis, and the importance of checking the blood pressure in children with facial nerve paralysis cannot be stressed enough. The authors report a boy with bilateral facial nerve paralysis in association with hypertension who had recently recovered from chickenpox. The authors review aspects of bilateral facial nerve paralysis as well as hypertension and chickenpox as causes of facial nerve paralysis.

  13. A patient with bilateral facial palsy associated with hypertension and chickenpox: learning points

    PubMed Central

    Al-Abadi, Eslam; Milford, David V; Smith, Martin

    2010-01-01

    Bilateral facial nerve paralysis is an uncommon presentation, and even more so in children. There are reports of different causes of bilateral facial nerve palsy. It is well established that hypertension and chickenpox cause unilateral facial paralysis, and the importance of checking the blood pressure in children with facial nerve paralysis cannot be stressed enough. The authors report a boy with bilateral facial nerve paralysis in association with hypertension who had recently recovered from chickenpox. The authors review aspects of bilateral facial nerve paralysis as well as hypertension and chickenpox as causes of facial nerve paralysis. PMID:22797481

  14. Facial nerve paralysis secondary to occult malignant neoplasms.

    PubMed

    Boahene, Derek O; Olsen, Kerry D; Driscoll, Colin; Lewis, Jean E; McDonald, Thomas J

    2004-04-01

    This study reviewed patients with unilateral facial paralysis and normal clinical and imaging findings who underwent diagnostic facial nerve exploration. Study design and setting: Fifteen patients with facial paralysis and normal findings were seen in the Mayo Clinic Department of Otorhinolaryngology. Eleven patients were misdiagnosed as having Bell palsy or idiopathic paralysis. Progressive facial paralysis with sequential involvement of adjacent facial nerve branches occurred in all 15 patients. Seven patients had a history of regional skin squamous cell carcinoma, 13 patients had surgical exploration to rule out a neoplastic process, and 2 patients had negative exploration. At last follow-up, 5 patients were alive. Patients with facial paralysis and normal clinical and imaging findings should be considered for facial nerve exploration when the patient has a history of pain or regional skin cancer, involvement of other cranial nerves, and prolonged facial paralysis. Occult malignancy of the facial nerve may cause unilateral facial paralysis in patients with normal clinical and imaging findings.

  15. [Facial palsy].

    PubMed

    Cavoy, R

    2013-09-01

    Facial palsy is a daily challenge for clinicians. Determining whether facial nerve palsy is peripheral or central is a key step in the diagnosis. Central nervous system lesions can give a facial palsy that may be easily differentiated from peripheral palsy. The next question is whether the peripheral facial paralysis is idiopathic or symptomatic. A good knowledge of the anatomy of the facial nerve is helpful here. A structured approach is given to identify additional features that distinguish symptomatic facial palsy from the idiopathic form. The main cause of peripheral facial palsy is the idiopathic form, or Bell's palsy, which remains a diagnosis of exclusion. The most common cause of symptomatic peripheral facial palsy is Ramsay Hunt syndrome. Early identification of symptomatic facial palsy is important because of its often worse outcome and different management. The prognosis of Bell's palsy is on the whole favorable and is improved by a prompt tapering course of prednisone. In Ramsay Hunt syndrome, antiviral therapy is added to prednisone. We also discuss current treatment recommendations and review the short- and long-term complications of peripheral facial palsy.

  16. Associations between active trachoma and community intervention with Antibiotics, Facial cleanliness, and Environmental improvement (A,F,E).

    PubMed

    Ngondi, Jeremiah; Matthews, Fiona; Reacher, Mark; Baba, Samson; Brayne, Carol; Emerson, Paul

    2008-04-30

    Surgery, Antibiotics, Facial cleanliness and Environmental improvement (SAFE) are advocated by the World Health Organization (WHO) for trachoma control. However, few studies have evaluated the complete SAFE strategy, and of these, none have investigated the associations of Antibiotics, Facial cleanliness, and Environmental improvement (A,F,E) interventions and active trachoma. We aimed to investigate associations between active trachoma and A,F,E interventions in communities in Southern Sudan. Surveys were undertaken in four districts after 3 years of implementation of the SAFE strategy. Children aged 1-9 years were examined for trachoma and uptake of SAFE assessed through interviews and observations. Using ordinal logistic regression, associations between signs of active trachoma and A,F,E interventions were explored. Trachomatous inflammation-intense (TI) was considered more severe than trachomatous inflammation-follicular (TF). A total of 1,712 children from 25 clusters (villages) were included in the analysis. Overall uptake of A,F,E interventions was: 53.0% of the eligible children had received at least one treatment with azithromycin; 62.4% children had a clean face on examination; 72.5% households reported washing faces of children two or more times a day; 73.1% households had received health education; 44.4% of households had water accessible within 30 minutes; and 6.3% households had pit latrines. Adjusting for age, sex, and district baseline prevalence of active trachoma, factors independently associated with reduced odds of a more severe active trachoma sign were: receiving three treatments with azithromycin (odds ratio [OR] = 0.1; 95% confidence interval [CI] 0.0-0.4); clean face (OR = 0.3; 95% CI 0.2-0.4); washing faces of children three or more times daily (OR = 0.4; 95% CI 0.3-0.7); and presence and use of a pit latrine in the household (OR = 0.4; 95% CI 0.2-0.9). Analysis of associations between the A,F,E components of the SAFE strategy and active trachoma showed independent protective effects against active trachoma of mass systemic azithromycin treatment, facial cleanliness, face washing, and use of pit latrines in the household. This strongly argues for continued use of all the components of the SAFE strategy together.
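
    The ordinal regression reported above can be reproduced in outline with statsmodels' OrderedModel; everything below (data, effect sizes, variable names) is synthetic and only illustrates the modelling step, with TI treated as more severe than TF, which is more severe than no sign.

    ```python
    import numpy as np
    import pandas as pd
    from statsmodels.miscmodels.ordinal_model import OrderedModel

    # Synthetic data: ordered outcome none < TF < TI, as in the analysis.
    rng = np.random.default_rng(2)
    n = 500
    x = pd.DataFrame({
        "azithro_doses": rng.integers(0, 4, n),   # 0-3 treatments (assumed)
        "clean_face": rng.integers(0, 2, n),
        "latrine": rng.integers(0, 2, n),
    })
    # Latent severity decreases with each protective factor (assumed effects).
    latent = (-0.8 * x["azithro_doses"] - 1.2 * x["clean_face"]
              - 0.9 * x["latrine"] + rng.logistic(size=n))
    severity = pd.cut(latent, bins=[-np.inf, -2.0, 0.0, np.inf],
                      labels=["none", "TF", "TI"], ordered=True)

    model = OrderedModel(severity, x, distr="logit")
    res = model.fit(method="bfgs", disp=False)
    # Odds ratios for a more severe sign, per unit of each covariate.
    print(np.exp(res.params[: x.shape[1]]))
    ```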

  17. Association Among Facial Paralysis, Depression, and Quality of Life in Facial Plastic Surgery Patients

    PubMed Central

    Nellis, Jason C.; Ishii, Masaru; Byrne, Patrick J.; Boahene, Kofi D. O.; Dey, Jacob K.; Ishii, Lisa E.

    2017-01-01

    IMPORTANCE Though anecdotally linked, few studies have investigated the impact of facial paralysis on depression and quality of life (QOL). OBJECTIVE To measure the association between depression, QOL, and facial paralysis in patients seeking treatment at a facial plastic surgery clinic. DESIGN, SETTING, PARTICIPANTS Data were prospectively collected for patients with all-cause facial paralysis and control patients initially presenting to a facial plastic surgery clinic from 2013 to 2015. The control group included a heterogeneous patient population presenting to the facial plastic surgery clinic for evaluation. Patients who had prior facial reanimation surgery or missing demographic and psychometric data were excluded from analysis. MAIN OUTCOMES AND MEASURES Demographics, facial paralysis etiology, facial paralysis severity (graded on the House-Brackmann scale), Beck depression inventory, and QOL scores in both groups were examined. Potential confounders, including self-reported attractiveness and mood, were collected and analyzed. Self-reported scores were measured using a 0 to 100 visual analog scale. RESULTS A total of 263 patients (mean age, 48.8 years; 66.9% female) were analyzed. There were 175 control patients and 88 patients with facial paralysis. Sex distributions were not significantly different between the facial paralysis and control groups. Patients with facial paralysis had significantly higher depression, lower self-reported attractiveness, lower mood, and lower QOL scores. Overall, 37 patients with facial paralysis (42.1%) screened positive for depression, with the greatest likelihood in patients with House-Brackmann grade 3 or greater (odds ratio, 10.8; 95% CI, 5.13–22.75), compared with 13 control patients (8.1%) (P < .001). In multivariate regression, facial paralysis and female sex were significantly associated with higher depression scores (constant, 2.08 [95% CI, 0.77–3.39]; facial paralysis effect, 5.98 [95% CI, 4.38–7.58]; female effect, 1.95 [95% CI, 0.65–3.25]). Facial paralysis was associated with lower QOL scores (constant, 81.62 [95% CI, 78.98–84.25]; facial paralysis effect, −16.06 [95% CI, −20.50 to −11.62]). CONCLUSIONS AND RELEVANCE For treatment-seeking patients, facial paralysis was significantly associated with increased depression and worse QOL scores. In addition, female sex was significantly associated with increased depression scores. Moreover, patients with a greater severity of facial paralysis were more likely to screen positive for depression. Clinicians initially evaluating patients should consider the psychological impact of facial paralysis to optimize care. LEVEL OF EVIDENCE 2. PMID:27930763
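
    Using the coefficients reported above, point predictions from the two fitted models can be written out directly; the function names are ours, and the models are simply linear in the reported effects.

    ```python
    def predicted_depression_score(facial_paralysis: bool, female: bool) -> float:
        """Point prediction from the reported multivariate model:
        constant 2.08, facial-paralysis effect 5.98, female effect 1.95
        (Beck depression inventory units)."""
        return 2.08 + 5.98 * facial_paralysis + 1.95 * female

    def predicted_qol_score(facial_paralysis: bool) -> float:
        """Reported QOL model: constant 81.62, paralysis effect -16.06."""
        return 81.62 - 16.06 * facial_paralysis

    print(predicted_depression_score(True, True))   # 10.01
    print(predicted_qol_score(True))                # 65.56
    ```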

  18. Robust and Soft Elastomeric Electronics Tolerant to Our Daily Lives.

    PubMed

    Sekiguchi, Atsuko; Tanaka, Fumiaki; Saito, Takeshi; Kuwahara, Yuki; Sakurai, Shunsuke; Futaba, Don N; Yamada, Takeo; Hata, Kenji

    2015-09-09

    Clothes represent a unique textile, as they simultaneously provide robustness against our daily activities and comfort (i.e., softness). For electronic devices to be fully integrated into clothes, the devices themselves must be as robust and soft as the clothes themselves. However, to date, no electronic device has possessed both of these properties, because all contain components fabricated from brittle materials, such as metals. Here, we demonstrate robust and soft elastomeric devices in which every component possesses elastomeric characteristics, with two types of single-walled carbon nanotubes added to provide the necessary electronic properties. Our elastomeric field effect transistors could tolerate every punishment our clothes experience, such as being stretched (elasticity: ∼110%), bent, compressed (>4.0 MPa, by a car and heels), impacted (>6.26 kg m/s, by a hammer), and laundered. Our electronic device provides a novel design principle for electronics and a wide range of applications, even in research fields where devices could not previously be used.

  19. The Prevalence of Cosmetic Facial Plastic Procedures among Facial Plastic Surgeons.

    PubMed

    Moayer, Roxana; Sand, Jordan P; Han, Albert; Nabili, Vishad; Keller, Gregory S

    2018-04-01

    This is the first study to report on the prevalence of cosmetic facial plastic surgery use among facial plastic surgeons. The aim of this study is to determine the frequency with which facial plastic surgeons have cosmetic procedures themselves. A secondary aim is to determine whether trends in usage of cosmetic facial procedures among facial plastic surgeons are similar to those of nonsurgeons. The study design was an anonymous, five-question, Internet survey distributed via email, set in a single academic institution. Board-certified members of the American Academy of Facial Plastic and Reconstructive Surgery (AAFPRS) were included in this study. Self-reported history of cosmetic facial plastic surgery or minimally invasive procedures was recorded. The survey also queried participants for demographic data. A total of 216 members of the AAFPRS responded to the questionnaire. Ninety percent of respondents were male (n = 192) and 10.3% were female (n = 22). Thirty-three percent of respondents were aged 31 to 40 years (n = 70), 25% were aged 41 to 50 years (n = 53), 21.4% were aged 51 to 60 years (n = 46), and 20.5% were older than 60 years (n = 44). Thirty-six percent of respondents had had a surgical cosmetic facial procedure and 75% had had at least one minimally invasive cosmetic facial procedure. Facial plastic surgeons are frequent users of cosmetic facial plastic surgery. This finding may be due to access, knowledge base, values, or attitudes. By better understanding surgeon attitudes toward facial plastic surgery, we can improve communication with patients and delivery of care. This study is a first step in understanding the use of facial plastic procedures among facial plastic surgeons.

  20. Objectifying Facial Expressivity Assessment of Parkinson's Patients: Preliminary Study

    PubMed Central

    Patsis, Georgios; Jiang, Dongmei; Sahli, Hichem; Kerckhofs, Eric; Vandekerckhove, Marie

    2014-01-01

    Patients with Parkinson's disease (PD) can exhibit a reduction of spontaneous facial expression, designated as “facial masking,” a symptom in which facial muscles become rigid. To improve the clinical assessment of facial expressivity in PD, this work attempts to quantify dynamic facial expressivity (facial activity) by automatically recognizing facial action units (AUs) and estimating their intensity. Spontaneous facial expressivity was assessed by comparing 7 PD patients with 8 control participants. To elicit spontaneous facial expressions resembling those typically triggered by emotions, six emotions (amusement, sadness, anger, disgust, surprise, and fear) were induced using movie clips. During the movie clips, physiological signals (facial electromyography (EMG) and electrocardiogram (ECG)) and frontal face video of the participants were recorded. The participants were asked to report on their emotional states throughout the experiment. We first examined the effectiveness of the emotion manipulation by evaluating the participants' self-reports. Self-reported intensity was significantly higher for disgust than for the other emotions, so the analysis focused on the data recorded while participants watched the disgust movie clips. The proposed facial expressivity assessment approach captured differences in facial expressivity between PD patients and controls. Differences were also observed between PD patients at different stages of disease progression. PMID:25478003

  1. Does Facial Amimia Impact the Recognition of Facial Emotions? An EMG Study in Parkinson’s Disease

    PubMed Central

    Argaud, Soizic; Delplanque, Sylvain; Houvenaghel, Jean-François; Auffret, Manon; Duprez, Joan; Vérin, Marc; Grandjean, Didier; Sauleau, Paul

    2016-01-01

    According to embodied simulation theory, understanding other people’s emotions is fostered by facial mimicry. However, studies assessing the effect of facial mimicry on the recognition of emotion are still controversial. In Parkinson’s disease (PD), one of the most distinctive clinical features is facial amimia, a reduction in facial expressiveness, but patients also show emotional disturbances. The present study used the pathological model of PD to examine the role of facial mimicry on emotion recognition by investigating EMG responses in PD patients during a facial emotion recognition task (anger, joy, neutral). Our results evidenced a significant decrease in facial mimicry for joy in PD, essentially linked to the absence of reaction of the zygomaticus major and the orbicularis oculi muscles in response to happy avatars, whereas facial mimicry for expressions of anger was relatively preserved. We also confirmed that PD patients were less accurate in recognizing positive and neutral facial expressions and highlighted a beneficial effect of facial mimicry on the recognition of emotion. We thus provide additional arguments for embodied simulation theory suggesting that facial mimicry is a potential lever for therapeutic actions in PD even if it seems not to be necessarily required in recognizing emotion as such. PMID:27467393

  2. Positive association between vocal and facial attractiveness in women but not in men: A cross-cultural study.

    PubMed

    Valentova, Jaroslava Varella; Varella, Marco Antonio Corrêa; Havlíček, Jan; Kleisner, Karel

    2017-02-01

    Various species use multiple sensory modalities in their communication processes. In humans, female facial appearance and vocal display are correlated, and it has been suggested that they serve as redundant markers indicating the bearer's reproductive potential and/or residual fertility. In men, evidence for redundancy of facial and vocal attractiveness is ambiguous. We tested the redundancy/multiple signals hypothesis by correlating perceived facial and vocal attractiveness in men and women from two different populations, Brazil and the Czech Republic. We also investigated whether facial and vocal attractiveness are linked to facial morphology. Standardized facial pictures and vocal samples of 86 women (47 from Brazil) and 81 men (41 from Brazil), aged 18-35, were rated for attractiveness by opposite-sex raters. Facial and vocal attractiveness were found to positively correlate in women but not in men. We further applied geometric morphometrics and regressed facial shape coordinates on facial and vocal attractiveness ratings. In women, facial shape was linked to facial attractiveness, but there was no association between facial shape and vocal attractiveness. In men, none of these associations was significant. Having shown that women with more attractive faces also possess more attractive voices, we thus only partly supported the redundant signal hypothesis. Copyright © 2016 Elsevier B.V. All rights reserved.
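
    The shape-regression step above (Procrustes-aligned landmark coordinates regressed on attractiveness ratings) reduces to a multivariate least-squares fit, sketched below; the landmark count and all data are synthetic placeholders, not the study's measurements.

    ```python
    import numpy as np

    # Regress aligned shape coordinates on attractiveness ratings: one
    # least-squares fit produces a column of coefficients per coordinate.
    rng = np.random.default_rng(6)
    n_faces, n_landmarks = 86, 80                # counts assumed for the demo
    shape = rng.normal(size=(n_faces, n_landmarks * 2))  # flattened (x, y)
    ratings = rng.normal(size=(n_faces, 1))      # attractiveness ratings

    X = np.column_stack([np.ones(n_faces), ratings])     # intercept + rating
    coef, *_ = np.linalg.lstsq(X, shape, rcond=None)
    # coef[1] is the shape-change vector per unit of attractiveness.
    print(coef.shape)   # (2, 160)
    ```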

  3. The Motivational Salience of Faces Is Related to Both Their Valence and Dominance.

    PubMed

    Wang, Hongyi; Hahn, Amanda C; DeBruine, Lisa M; Jones, Benedict C

    2016-01-01

    Both behavioral and neural measures of the motivational salience of faces are positively correlated with their physical attractiveness. Whether physical characteristics other than attractiveness contribute to the motivational salience of faces is not known, however. Research with male macaques recently showed that more dominant macaques' faces hold greater motivational salience. Here we investigated whether dominance also contributes to the motivational salience of faces in human participants. Principal component analysis of third-party ratings of faces for multiple traits revealed two orthogonal components. The first component ("valence") was highly correlated with rated trustworthiness and attractiveness. The second component ("dominance") was highly correlated with rated dominance and aggressiveness. Importantly, both components were positively and independently related to the motivational salience of faces, as assessed from responses on a standard key-press task. These results show that at least two dissociable components underpin the motivational salience of faces in humans and present new evidence for similarities in how humans and non-human primates respond to facial cues of dominance.
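
    The two-component structure described above can be illustrated with a standard PCA over a faces x traits rating matrix; the data below are synthetic, constructed so that trustworthiness/attractiveness load on one latent factor and dominance/aggressiveness on another.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    # Synthetic ratings matrix: faces x traits, mimicking third-party ratings.
    rng = np.random.default_rng(3)
    n_faces = 100
    valence = rng.normal(size=n_faces)      # latent "valence" factor
    dominance = rng.normal(size=n_faces)    # latent "dominance" factor
    noise = lambda: 0.3 * rng.normal(size=n_faces)
    ratings = np.column_stack([
        valence + noise(),      # trustworthiness
        valence + noise(),      # attractiveness
        dominance + noise(),    # dominance
        dominance + noise(),    # aggressiveness
    ])

    pca = PCA(n_components=2)
    scores = pca.fit_transform((ratings - ratings.mean(0)) / ratings.std(0))
    print(pca.explained_variance_ratio_)   # two components dominate
    print(pca.components_.round(2))        # loadings split by trait pair
    ```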

  4. Facial Nerve Paralysis due to a Pleomorphic Adenoma with the Imaging Characteristics of a Facial Nerve Schwannoma

    PubMed Central

    Nader, Marc-Elie; Bell, Diana; Sturgis, Erich M.; Ginsberg, Lawrence E.; Gidley, Paul W.

    2014-01-01

    Background Facial nerve paralysis in a patient with a salivary gland mass usually denotes malignancy. However, facial paralysis can also be caused by benign salivary gland tumors. Methods We present a case of facial nerve paralysis due to a benign salivary gland tumor that had the imaging characteristics of an intraparotid facial nerve schwannoma. Results The patient presented to our clinic 4 years after the onset of facial nerve paralysis initially diagnosed as Bell palsy. Computed tomography demonstrated filling and erosion of the stylomastoid foramen with a mass on the facial nerve. Postoperative histopathology showed the presence of a pleomorphic adenoma. Facial paralysis was thought to be caused by extrinsic nerve compression. Conclusions This case illustrates the difficulty of accurate preoperative diagnosis of a parotid gland mass and reinforces the concept that facial nerve paralysis in the context of salivary gland tumors may not always indicate malignancy. PMID:25083397

  5. Facial Nerve Paralysis due to a Pleomorphic Adenoma with the Imaging Characteristics of a Facial Nerve Schwannoma.

    PubMed

    Nader, Marc-Elie; Bell, Diana; Sturgis, Erich M; Ginsberg, Lawrence E; Gidley, Paul W

    2014-08-01

    Background Facial nerve paralysis in a patient with a salivary gland mass usually denotes malignancy. However, facial paralysis can also be caused by benign salivary gland tumors. Methods We present a case of facial nerve paralysis due to a benign salivary gland tumor that had the imaging characteristics of an intraparotid facial nerve schwannoma. Results The patient presented to our clinic 4 years after the onset of facial nerve paralysis initially diagnosed as Bell palsy. Computed tomography demonstrated filling and erosion of the stylomastoid foramen with a mass on the facial nerve. Postoperative histopathology showed the presence of a pleomorphic adenoma. Facial paralysis was thought to be caused by extrinsic nerve compression. Conclusions This case illustrates the difficulty of accurate preoperative diagnosis of a parotid gland mass and reinforces the concept that facial nerve paralysis in the context of salivary gland tumors may not always indicate malignancy.

  6. Repeated short presentations of morphed facial expressions change recognition and evaluation of facial expressions.

    PubMed

    Moriya, Jun; Tanno, Yoshihiko; Sugiura, Yoshinori

    2013-11-01

    This study investigated whether sensitivity to and evaluation of facial expressions varied with repeated exposure to non-prototypical facial expressions for a short presentation time. A morphed facial expression was presented for 500 ms repeatedly, and participants were required to indicate whether each facial expression was happy or angry. We manipulated the distribution of presentations of the morphed facial expressions for each facial stimulus. Some of the individuals depicted in the facial stimuli expressed anger frequently (i.e., anger-prone individuals), while the others expressed happiness frequently (i.e., happiness-prone individuals). After being exposed to the faces of anger-prone individuals, the participants became less sensitive to those individuals' angry faces. Further, after being exposed to the faces of happiness-prone individuals, the participants became less sensitive to those individuals' happy faces. We also found a relative increase in the social desirability of happiness-prone individuals after exposure to the facial stimuli.

  7. The effects of facial adiposity on attractiveness and perceived leadership ability.

    PubMed

    Re, Daniel E; Perrett, David I

    2014-01-01

    Facial attractiveness has a positive influence on electoral success both in experimental paradigms and in the real world. One parameter that influences facial attractiveness and social judgements is facial adiposity (a facial correlate to body mass index, BMI). Overweight people have high facial adiposity and are perceived to be less attractive and lower in leadership ability. Here, we used an interactive design in order to assess whether the most attractive level of facial adiposity is also perceived as most leader-like. We found that participants reduced facial adiposity more to maximize attractiveness than to maximize perceived leadership ability. These results indicate that facial appearance impacts leadership judgements beyond the effects of attractiveness. We suggest that the disparity between optimal facial adiposity in attractiveness and leadership judgements stems from social trends that have produced thin ideals for attractiveness, while leadership judgements are associated with perception of physical dominance.

  8. When your face describes your memories: facial expressions during retrieval of autobiographical memories.

    PubMed

    El Haj, Mohamad; Daoudi, Mohamed; Gallouj, Karim; Moustafa, Ahmed A; Nandrino, Jean-Louis

    2018-05-11

    Thanks to current advances in the software analysis of facial expressions, there is a burgeoning interest in understanding the emotional facial expressions observed during the retrieval of autobiographical memories. This review describes research on facial expressions during autobiographical retrieval showing distinct emotional facial expressions according to the characteristics of the retrieved memories. More specifically, this research demonstrates that the retrieval of emotional memories can trigger corresponding emotional facial expressions (e.g. positive memories may trigger positive facial expressions). It also demonstrates variations of facial expressions according to the specificity, self-relevance, or past versus future direction of memory construction. Besides linking research on facial expressions during autobiographical retrieval to the cognitive and affective characteristics of autobiographical memory in general, this review positions this research within the broader context of research on the physiologic characteristics of autobiographical retrieval. We also provide several perspectives for clinical studies to investigate facial expressions in populations with deficits in autobiographical memory (e.g. whether autobiographical overgenerality in neurologic and psychiatric populations may trigger fewer emotional facial expressions). In sum, this review demonstrates how the evaluation of facial expressions during autobiographical retrieval may help us understand the functioning and dysfunctioning of autobiographical memory.

  9. Aberrant patterns of visual facial information usage in schizophrenia.

    PubMed

    Clark, Cameron M; Gosselin, Frédéric; Goghari, Vina M

    2013-05-01

    Deficits in facial emotion perception have been linked to poorer functional outcome in schizophrenia. However, the relationship between abnormal emotion perception and functional outcome remains poorly understood. To better understand the nature of facial emotion perception deficits in schizophrenia, we used the Bubbles Facial Emotion Perception Task to identify differences in usage of visual facial information in schizophrenia patients (n = 20) and controls (n = 20), when differentiating between angry and neutral facial expressions. As hypothesized, schizophrenia patients required more facial information than controls to accurately differentiate between angry and neutral facial expressions, and they relied on different facial features and spatial frequencies to differentiate these facial expressions. Specifically, schizophrenia patients underutilized the eye regions, overutilized the nose and mouth regions, and virtually ignored information presented at the lowest levels of spatial frequency. In addition, a post hoc one-tailed t test revealed a positive relationship of moderate strength between the degree of divergence from "normal" visual facial information usage in the eye region and lower overall social functioning. These findings provide direct support for aberrant patterns of visual facial information usage in schizophrenia in differentiating between socially salient emotional states. © 2013 American Psychological Association
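
    The Bubbles stimulus generation can be sketched as Gaussian apertures applied to a face image, as below; for brevity this omits the multiple spatial-frequency bands used in the full technique, and all parameters are illustrative.

    ```python
    import numpy as np

    def bubbles_mask(shape, n_bubbles, sigma, rng):
        """Sum of Gaussian apertures ('bubbles') at random locations,
        clipped to [0, 1]; multiplying a face image by this mask reveals
        only the sampled regions, as in the Bubbles technique."""
        h, w = shape
        yy, xx = np.mgrid[0:h, 0:w]
        mask = np.zeros(shape)
        for cy, cx in zip(rng.integers(0, h, n_bubbles),
                          rng.integers(0, w, n_bubbles)):
            mask += np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2)
                           / (2 * sigma ** 2))
        return np.clip(mask, 0.0, 1.0)

    rng = np.random.default_rng(4)
    face = rng.uniform(size=(128, 128))        # stand-in for a face image
    stimulus = face * bubbles_mask(face.shape, n_bubbles=12, sigma=8, rng=rng)
    print(stimulus.min(), stimulus.max())
    ```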

  10. Psilocybin biases facial recognition, goal-directed behavior, and mood state toward positive relative to negative emotions through different serotonergic subreceptors.

    PubMed

    Kometer, Michael; Schmidt, André; Bachmann, Rosilla; Studerus, Erich; Seifritz, Erich; Vollenweider, Franz X

    2012-12-01

    Serotonin (5-HT) 1A and 2A receptors have been associated with dysfunctional emotional processing biases in mood disorders. These receptors further predominantly mediate the subjective and behavioral effects of psilocybin and might be important for its recently suggested antidepressive effects. However, the effect of psilocybin on emotional processing biases and the specific contribution of 5-HT2A receptors across different emotional domains is unknown. In a randomized, double-blind study, 17 healthy human subjects received on 4 separate days placebo, psilocybin (215 μg/kg), the preferential 5-HT2A antagonist ketanserin (50 mg), or psilocybin plus ketanserin. Mood states were assessed by self-report ratings, and behavioral and event-related potential measurements were used to quantify facial emotional recognition and goal-directed behavior toward emotional cues. Psilocybin enhanced positive mood and attenuated recognition of negative facial expression. Furthermore, psilocybin increased goal-directed behavior toward positive compared with negative cues, facilitated positive but inhibited negative sequential emotional effects, and valence-dependently attenuated the P300 component. Ketanserin alone had no effects but blocked the psilocybin-induced mood enhancement and decreased recognition of negative facial expression. This study shows that psilocybin shifts the emotional bias across various psychological domains and that activation of 5-HT2A receptors is central in mood regulation and emotional face recognition in healthy subjects. These findings may not only have implications for the pathophysiology of dysfunctional emotional biases but may also provide a framework to delineate the mechanisms underlying psilocybin's putative antidepressant effects. Copyright © 2012 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.

  11. Prelaminated extended temporoparietal fascia flap without tissue expansion for hemifacial reconstruction.

    PubMed

    Altındaş, Muzaffer; Arslan, Hakan; Bingöl, Uğur Anıl; Demiröz, Anıl

    2017-10-01

    Disfigurement of the face caused by postburn scars, resected congenital nevi and vascular malformations has both functional and psychological consequences. Ideal reconstruction of the facial components requires restoring not only function but also a good appearance of the face. The skin of the neck and the supraclavicular or cervicothoracic regions is the most commonly used and most suitable source of skin for facial reconstruction in techniques in which prefabrication with tissue expansion is used. This retrospective cohort study describes a two-staged prelaminated temporoparietal fascia flap that eliminates the use of tissue expansion by using skin grafts harvested from the neck and occipital region, and the application of this flap to the lower three-fourths of the face. Five patients received a prelaminated temporoparietal fascia flap without tissue expansion for facial resurfacing. The mean age at surgery was 39.2 years (range, 17-60 years). The average follow-up was 21.6 months (range, 10-48 months). The size of the raised prelaminated temporoparietal fascia flaps ranged from 9 × 8 cm to 14 × 10 cm. All flaps survived after the second stage. Varied degrees of venous congestion were observed after flap inset in all cases, but none required any further treatment for the congestion. The entire lesion could not be resected in any patient due to the large size of the lesions. The two-stage prelaminated temporoparietal fascia flap with skin graft is an effective technique for the reconstruction of partial facial defects in selected patients. It is simple, quick, safe and reliable, and requires no expansion of skin and no microsurgery. Copyright © 2017 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.

  12. Facial profile parameters and their relative influence on bilabial prominence and the perceptions of facial profile attractiveness: A novel approach

    PubMed Central

    Denize, Erin Stewart; McDonald, Fraser; Sherriff, Martyn

    2014-01-01

    Objective To evaluate the relative importance of bilabial prominence in relation to other facial profile parameters in a normal population. Methods Profile stimulus images of 38 individuals (28 female and 10 male; ages 19-25 years) were shown to an unrelated group of first-year students (n = 42; ages 18-24 years). The images were individually viewed on a 17-inch monitor. The observers received standardized instructions before viewing. A six-question questionnaire was completed using a Likert-type scale. The responses were analyzed by ordered logistic regression to identify associations between profile characteristics and observer preferences. The Bayesian Information Criterion was used to select variables that explained observer preferences most accurately. Results Nasal, bilabial, and chin prominences; the nasofrontal angle; and lip curls had the greatest effect on overall profile attractiveness perceptions. The lip-chin-throat angle and upper lip curl had the greatest effect on forehead prominence perceptions. The bilabial prominence, nasolabial angle (particularly the lower component), and mentolabial angle had the greatest effect on nasal prominence perceptions. The bilabial prominence, nasolabial angle, chin prominence, and submental length had the greatest effect on lip prominence perceptions. The bilabial prominence, nasolabial angle, mentolabial angle, and submental length had the greatest effect on chin prominence perceptions. Conclusions More prominent lips, within normal limits, may be considered more attractive in the profile view. Profile parameters have a greater influence on their neighboring aesthetic units but indirectly influence related profile parameters, endorsing the importance of achieving an aesthetic balance between relative prominences of all aesthetic units of the facial profile. PMID:25133133
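
    The Bayesian Information Criterion step above amounts to fitting candidate models and keeping the one with the lowest BIC; the sketch below uses ordinary least squares instead of ordered logistic regression for brevity, and the predictors and data are synthetic.

    ```python
    import numpy as np
    import statsmodels.api as sm
    from itertools import combinations

    # Synthetic data: ratings driven by two of four profile parameters.
    rng = np.random.default_rng(5)
    n = 200
    params = {"nasal_prominence": rng.normal(size=n),
              "bilabial_prominence": rng.normal(size=n),
              "nasofrontal_angle": rng.normal(size=n),
              "submental_length": rng.normal(size=n)}
    y = (1.0 * params["bilabial_prominence"]
         + 0.8 * params["nasal_prominence"] + rng.normal(scale=0.5, size=n))

    # Fit every subset of predictors; keep the model with the lowest BIC.
    best = None
    names = list(params)
    for k in range(1, len(names) + 1):
        for subset in combinations(names, k):
            X = sm.add_constant(np.column_stack([params[p] for p in subset]))
            bic = sm.OLS(y, X).fit().bic
            if best is None or bic < best[0]:
                best = (bic, subset)
    print(best)   # should recover the two true predictors
    ```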

  13. African ancestry is associated with facial melasma in women: a cross-sectional study.

    PubMed

    D'Elia, Maria Paula Barbieri; Brandão, Marcela Calixto; de Andrade Ramos, Bruna Ribeiro; da Silva, Márcia Guimarães; Miot, Luciane Donida Bartoli; Dos Santos, Sidney Emanuel Batista; Miot, Hélio Amante

    2017-02-17

    Melasma is a chronic acquired focal hypermelanosis affecting photoexposed areas, especially in women during fertile age. Several factors contribute to its development: sun exposure, sex steroids, medicines, and family history. The melanin pigmentation pathway harbors several SNPs that differ among populations. Here, we evaluated the association between genetic ancestry and facial melasma. A cross-sectional study involving women with melasma and an age-matched control group from outpatients at FMB-Unesp, Botucatu-SP, Brazil was performed. DNA was extracted from oral mucosa swabs and ancestry determined by studying 61 INDELs. The genetic ancestry components were adjusted for other known risk factors by multiple logistic regression. We evaluated 119 women with facial melasma and 119 controls. Mean age was 39 ± 9 years. Mean age at onset of disease was 27 ± 8 years. Pregnancy (40%), sun exposure (37%), and hormonal oral contraception (22%) were the most frequently reported melasma triggers. All subjects presented admixed ancestry; African and European genetic contributions were significantly different between cases and controls (respectively 10% vs 6% and 77% vs 82%; p < 0.05). African ancestry (OR = 1.04; 95% CI 1.01 to 1.07), first-generation family history (OR = 3.04; 95% CI 1.56 to 5.94), low education level (OR = 4.04; 95% CI 1.56 to 5.94), and use of antidepressants by individuals with affected family members (OR = 6.15; 95% CI 1.13 to 33.37) were associated with melasma, independently of other known risk factors. Facial melasma was independently associated with African ancestry in a highly admixed population.

  14. Facial exposure to ultraviolet radiation: Predicted sun protection effectiveness of various hat styles.

    PubMed

    Backes, C; Religi, A; Moccozet, L; Vuilleumier, L; Vernez, D; Bulliard, J-L

    2018-04-23

    Solar ultraviolet radiation (UVR) doses received by individuals are highly influenced by behavioural and environmental factors. This study aimed at quantifying hats' sun protection effectiveness in various exposure conditions by predicting UVR exposure doses and their anatomical distributions. A well-defined three-dimensional head morphology and four hat styles (a cap, a helmet, a middle- and a wide-brimmed hat) were added to a previously published model. Midday (12:00-14:00) and daily (08:00-17:00) seasonal UVR doses were estimated at various facial skin zones, with and without hat-wear, accounting for each UVR component. Protection effectiveness was calculated as the relative reduction of the predicted UVR dose, expressed as a predictive protection factor (PPF). The unprotected entire face received 2.5 times higher UVR doses during a summer midday than during a winter midday (3.3 vs. 1.3 SED), with the highest doses received at the nose (6.1 SED). During a cloudless summer day, the lowest mean UVR dose was received by the entire face protected by a wide-brimmed hat (1.7 SED). No hat reached 100% protection at any facial skin zone (maximum PPF: 76%). Hats' sun protection effectiveness varied greatly with environmental conditions and was mainly limited by the high contribution of diffuse UVR, irrespective of hat style. Larger brim sizes afforded greater facial protection than smaller brim sizes except around midday, when the sun position is high. Consideration of diffuse and reflected UVR in sun educational messages could improve sun protection effectiveness. This article is protected by copyright. All rights reserved.
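
    The predictive protection factor defined above is simply the relative dose reduction. In the toy call below, the unprotected dose (3.3 SED) is the study's reported summer-midday value for the entire face, while the protected dose is an illustrative number, not a study result.

    ```python
    def predictive_protection_factor(dose_unprotected: float,
                                     dose_with_hat: float) -> float:
        """PPF: relative reduction of the predicted UVR dose, in percent."""
        return 100.0 * (1.0 - dose_with_hat / dose_unprotected)

    # 3.3 SED unprotected (reported); 0.8 SED with a hat (illustrative).
    print(predictive_protection_factor(3.3, 0.8))   # ~75.8 %
    ```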

  15. Awareness of malocclusion and desire for orthodontic treatment in 11 to 14 year-old Nigerian schoolchildren and their parents.

    PubMed

    Kolawole, Kikelomo A; Otuyemi, Olayinka D; Jeboda, Sonny O; Umweni, Alice A

    2008-05-01

    To investigate Nigerian children's and their parents' satisfaction with the children's facial and dental appearance and their desire for orthodontic treatment, and to compare their assessments of treatment need with those determined by an orthodontist. The subjects were 242 11-14 year-old schoolchildren randomly selected from private and public schools in the Ife Central Local Government area, Nigeria. A questionnaire was used to obtain information from the children and their parents on their liking of the facial and dental appearance of the children and the need and desire for orthodontic treatment. The children, the parents and an orthodontist used the Aesthetic Component (AC) of the Index of Orthodontic Treatment Need (IOTN) to assess the need for treatment. More parents liked their child's facial and dental appearance than the children themselves did. Almost twice as many schoolchildren thought they needed (27 per cent) and desired (29 per cent) orthodontic treatment than their parents (15 per cent). Low but statistically significant correlations were found between the children's, their parents' and an orthodontist's assessments of treatment need using the AC scale. Only 8 per cent of the children and 3 per cent of the parents considered that there was a 'moderate-definite' need of treatment. The orthodontist considered that 38 per cent of the children had a 'moderate-definite' need of treatment. The children's responses suggest greater concern about their facial and dental appearance, a greater perception of the need for orthodontic treatment and a greater desire for treatment than their parents'. These findings require further investigation, as previous studies have reported that parents are usually more motivated for orthodontic treatment than their children.

  16. Developmental and Evolutionary Significance of the Zygomatic Bone.

    PubMed

    Heuzé, Yann; Kawasaki, Kazuhiko; Schwarz, Tobias; Schoenebeck, Jeffrey J; Richtsmeier, Joan T

    2016-12-01

    The zygomatic bone is derived evolutionarily from the orbital series. In most modern mammals the zygomatic bone forms a large part of the face and usually serves as a bridge that connects the facial skeleton to the neurocranium. Our aim is to provide information on the contribution of the zygomatic bone to variation in midfacial protrusion using three samples; humans, domesticated dogs, and monkeys. In each case, variation in midface protrusion is a heritable trait produced by one of three classes of transmission: localized dysmorphology associated with single gene dysfunction, selective breeding, or long-term evolution from a common ancestor. We hypothesize that the shape of the zygomatic bone reflects its role in stabilizing the connection between facial skeleton and neurocranium and consequently, changes in facial protrusion are more strongly reflected by the maxilla and premaxilla. Our geometric morphometric analyses support our hypothesis suggesting that the shape of the zygomatic bone has less to do with facial protrusion. By morphometrically dissecting the zygomatic bone we have determined a degree of modularity among parts of the midfacial skeleton suggesting that these components have the ability to vary independently and thus can evolve differentially. From these purely morphometric data, we propose that the neural crest cells that are fated to contribute to the zygomatic bone experience developmental cues that distinguish them from the maxilla and premaxilla. The spatiotemporal and molecular identity of the cues that impart zygoma progenitors with their identity remains an open question that will require alternative data sets. Anat Rec, 299:1616-1630, 2016. © 2016 The Authors. The Anatomical Record published by Wiley Periodicals, Inc.

  17. Lower incisor dentoalveolar compensation and symphysis dimensions among Class I and III malocclusion patients with different facial vertical skeletal patterns.

    PubMed

    Molina-Berlanga, Núria; Llopis-Perez, Jaume; Flores-Mir, Carlos; Puigdollers, Andreu

    2013-11-01

    To compare lower incisor dentoalveolar compensation and mandibular symphysis morphology among Class I and Class III malocclusion patients with different facial vertical skeletal patterns. Lower incisor extrusion and inclination, as well as buccal (LA) and lingual (LP) cortex depth and mandibular symphysis height (LH), were measured in 107 lateral cephalometric x-rays of adult patients without prior orthodontic treatment. In addition, malocclusion type (Class I or III) and facial vertical skeletal pattern were considered. Principal component analysis (PCA) was used to reduce the number of related variables. Simple regression equations and multivariate analyses of variance were also used. Incisor mandibular plane angle (P < .001) and extrusion (P = .03) values showed significant differences between the sagittal malocclusion groups. Variations in the mandibular plane have a negative correlation with LA (Class I P = .03, Class III P = .01) and a positive correlation with LH (Class I P = .01, Class III P = .02) in both groups. Within the Class III group, there was a negative correlation between the mandibular plane and LP (P = .02). PCA showed that a tendency toward a long face causes the symphysis to elongate and narrow. In Class III, alveolar narrowing is also found in normal faces. Vertical facial pattern is a significant factor in mandibular symphysis alveolar morphology and lower incisor positioning, both for Class I and Class III patients. Short-faced Class III patients have a widened alveolar bone. However, for long-faced and normal-faced Class III patients, natural compensation elongates the symphysis and influences lower incisor position.

  18. Hypoglossal-facial nerve "side"-to-side neurorrhaphy for facial paralysis resulting from closed temporal bone fractures.

    PubMed

    Su, Diya; Li, Dezhi; Wang, Shiwei; Qiao, Hui; Li, Ping; Wang, Binbin; Wan, Hong; Schumacher, Michael; Liu, Song

    2018-06-06

    Closed temporal bone fractures due to cranial trauma often result in facial nerve injury, frequently inducing incomplete facial paralysis. Conventional hypoglossal-facial nerve end-to-end neurorrhaphy may not be suitable for these injuries because sacrifice of the lesioned facial nerve for neurorrhaphy destroys the remnant axons and/or potential spontaneous innervation. We therefore modified the classical method, performing hypoglossal-facial nerve "side"-to-side neurorrhaphy with an interpositional predegenerated nerve graft to treat these injuries. Five patients who experienced facial paralysis resulting from closed temporal bone fractures due to cranial trauma were treated with the "side"-to-side neurorrhaphy. An additional 4 patients did not receive the neurorrhaphy and served as controls. Before treatment, all patients had suffered House-Brackmann (H-B) grade V or VI facial paralysis for a mean of 5 months. During the 12-30-month follow-up period, no further detectable deficits were observed, but an improvement in facial nerve function was evidenced over time in the 5 neurorrhaphy-treated patients. At the end of follow-up, the improved facial function reached H-B grade II in 3, grade III in 1 and grade IV in 1 of the 5 patients, consistent with the electrophysiological examinations. In the control group, two patients showed slight spontaneous innervation with facial function improved from H-B grade VI to V, and the other patients remained unchanged at H-B grade V or VI. We concluded that hypoglossal-facial nerve "side"-to-side neurorrhaphy can preserve the injured facial nerve and is suitable for treating significant incomplete facial paralysis resulting from closed temporal bone fractures, providing an evident beneficial effect. Moreover, this treatment may be performed earlier after the onset of facial paralysis in order to reduce the unfavorable changes to the injured facial nerve and the atrophy of its target muscles due to long-term denervation, and to allow axonal regrowth in a rich supportive environment.

  19. Interference among the Processing of Facial Emotion, Face Race, and Face Gender.

    PubMed

    Li, Yongna; Tse, Chi-Shing

    2016-01-01

    People can process multiple dimensions of facial properties simultaneously. Facial processing models are based on the processing of facial properties. The current study examined the processing of facial emotion, face race, and face gender using categorization tasks. The same set of Chinese, White and Black faces, each posing a neutral, happy or angry expression, was used in three experiments. Facial emotion interacted with face race in all the tasks. The interaction of face race and face gender was found in the race and gender categorization tasks, whereas the interaction of facial emotion and face gender was significant in the emotion and gender categorization tasks. These results provided evidence for a symmetric interaction between variant facial properties (emotion) and invariant facial properties (race and gender).

  20. Interference among the Processing of Facial Emotion, Face Race, and Face Gender

    PubMed Central

    Li, Yongna; Tse, Chi-Shing

    2016-01-01

    People can process multiple dimensions of facial properties simultaneously. Facial processing models are based on the processing of facial properties. The current study examined the processing of facial emotion, face race, and face gender using categorization tasks. The same set of Chinese, White and Black faces, each posing a neutral, happy or angry expression, was used in three experiments. Facial emotion interacted with face race in all the tasks. The interaction of face race and face gender was found in the race and gender categorization tasks, whereas the interaction of facial emotion and face gender was significant in the emotion and gender categorization tasks. These results provided evidence for a symmetric interaction between variant facial properties (emotion) and invariant facial properties (race and gender). PMID:27840621
