Sample records for facial image analysis

  1. Learning representative features for facial images based on a modified principal component analysis

    NASA Astrophysics Data System (ADS)

    Averkin, Anton; Potapov, Alexey

    2013-05-01

    The paper is devoted to facial image analysis and particularly deals with the problem of automatically evaluating the attractiveness of human faces. We propose a new approach for the automatic construction of a feature space based on a modified principal component analysis. The input to the algorithm is a learning data set of facial images rated by one person. The proposed approach allows features of an individual's subjective perception of facial beauty to be extracted and attractiveness values to be predicted for new facial images that were not included in the learning data set. The Pearson correlation coefficient between the values predicted by our method for new facial images and the personal attractiveness ratings is 0.89. This indicates that the proposed approach is promising and can be used to predict subjective facial attractiveness in real facial image analysis systems.
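    The pipeline this abstract describes — PCA feature extraction followed by a regression onto one rater's scores — can be sketched in plain NumPy. The data below are synthetic stand-ins, and the five-component cutoff is illustrative, not the authors' choice:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in: 100 "face images" flattened to 64-dim vectors,
# whose hidden attractiveness rating depends on one latent direction.
latent = rng.normal(size=(100, 1))
images = latent @ rng.normal(size=(1, 64)) + 0.05 * rng.normal(size=(100, 64))
ratings = 3.0 * latent[:, 0] + 0.1 * rng.normal(size=100)

# PCA via SVD of the mean-centred data matrix.
mean = images.mean(axis=0)
centred = images - mean
U, S, Vt = np.linalg.svd(centred, full_matrices=False)
components = Vt[:5]                       # keep 5 principal components

# Project onto the PCA basis, then fit a least-squares map to the ratings.
features = centred @ components.T
design = np.c_[features, np.ones(len(features))]
coeffs, *_ = np.linalg.lstsq(design, ratings, rcond=None)

def predict(new_images):
    feats = (new_images - mean) @ components.T
    return np.c_[feats, np.ones(len(feats))] @ coeffs

r = np.corrcoef(predict(images), ratings)[0, 1]
print(f"Pearson r: {r:.3f}")
```

    With real data, the same `predict` function would be applied unchanged to held-out images, and the component count would be chosen by cross-validation.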

  2. Infrared thermal facial image sequence registration analysis and verification

    NASA Astrophysics Data System (ADS)

    Chen, Chieh-Li; Jian, Bo-Lin

    2015-03-01

    To study the emotional responses of subjects to the International Affective Picture System (IAPS), infrared thermal facial image sequences are preprocessed for registration before further analysis, so that the variance caused by minor and irregular subject movements is reduced. While maintaining subject comfort and inducing minimal harm, this study proposes an infrared thermal facial image sequence registration process that reduces the deviations caused by the subjects' unconscious head movements. A fixed image for registration is produced by localizing the centroid of the eye region and applying image translation and rotation. The thermal image sequence is then automatically registered using the proposed two-stage genetic algorithm. The deviation before and after image registration is demonstrated by image quality indices. The results show that the infrared thermal image sequence registration process proposed in this study is effective in localizing facial images accurately, which will benefit the correlation analysis of psychological information related to the facial area.
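    Only the translation step of such a registration can be illustrated compactly. The sketch below aligns a toy eye-region mask by matching centroids; it stands in for the paper's full two-stage genetic algorithm, which also optimizes rotation:

```python
import numpy as np

def centroid(mask):
    """Centroid (row, col) of the nonzero pixels in a binary mask."""
    rows, cols = np.nonzero(mask)
    return rows.mean(), cols.mean()

def register_by_centroid(moving, fixed_centroid):
    """Translate `moving` so its centroid lands on `fixed_centroid`."""
    r, c = centroid(moving)
    dr = int(round(fixed_centroid[0] - r))
    dc = int(round(fixed_centroid[1] - c))
    return np.roll(moving, shift=(dr, dc), axis=(0, 1))

# A toy "eye region" mask, and a copy displaced by head movement.
fixed = np.zeros((40, 40))
fixed[10:14, 15:25] = 1
moving = np.roll(fixed, shift=(3, -2), axis=(0, 1))

registered = register_by_centroid(moving, centroid(fixed))
print(np.array_equal(registered, fixed))   # True for a pure translation
```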

  3. Principal component analysis for surface reflection components and structure in facial images and synthesis of facial images for various ages

    NASA Astrophysics Data System (ADS)

    Hirose, Misa; Toyota, Saori; Ojima, Nobutoshi; Ogawa-Ochiai, Keiko; Tsumura, Norimichi

    2017-08-01

    In this paper, principal component analysis is applied to the distributions of pigmentation, surface reflectance, and landmarks in whole facial images to obtain feature values. The relationship between the obtained feature vectors and the age of the face is then estimated by multiple regression analysis, so that facial images can be modulated for women aged 10-70. In a previous study, we analyzed only the distribution of pigmentation, and the reproduced images appeared younger than the apparent age of the initial images. We believe this happened because we did not modulate the facial structures and detailed surface features, such as wrinkles. By considering landmarks and surface reflectance over the entire face, we were able to analyze the variation in the distributions of facial structure, fine asperity, and pigmentation. As a result, our method can appropriately modulate the appearance of a face so that it appears to be the correct age.

  4. IntraFace

    PubMed Central

    De la Torre, Fernando; Chu, Wen-Sheng; Xiong, Xuehan; Vicente, Francisco; Ding, Xiaoyu; Cohn, Jeffrey

    2016-01-01

    Within the last 20 years, there has been increasing interest in the computer vision community in automated facial image analysis algorithms. This has been driven by applications in animation, market research, autonomous driving, surveillance, and facial editing, among others. To date, there exist several commercial packages for specific facial image analysis tasks, such as facial expression recognition, facial attribute analysis, or face tracking. However, free and easy-to-use software that incorporates all these functionalities is unavailable. This paper presents IntraFace (IF), a publicly available software package for automated facial feature tracking, head pose estimation, facial attribute recognition, and facial expression analysis from video. In addition, IF includes a newly developed technique for unsupervised synchrony detection to discover correlated facial behavior between two or more persons, a relatively unexplored problem in facial image analysis. In tests, IF achieved state-of-the-art results for emotion expression and action unit detection in three databases, FERA, CK+ and RU-FACS; measured audience reaction to a talk given by one of the authors; and discovered synchrony for smiling in videos of parent-infant interaction. IF is free of charge for academic use at http://www.humansensing.cs.cmu.edu/intraface/. PMID:27346987

  5. IntraFace.

    PubMed

    De la Torre, Fernando; Chu, Wen-Sheng; Xiong, Xuehan; Vicente, Francisco; Ding, Xiaoyu; Cohn, Jeffrey

    2015-05-01

    Within the last 20 years, there has been increasing interest in the computer vision community in automated facial image analysis algorithms. This has been driven by applications in animation, market research, autonomous driving, surveillance, and facial editing, among others. To date, there exist several commercial packages for specific facial image analysis tasks, such as facial expression recognition, facial attribute analysis, or face tracking. However, free and easy-to-use software that incorporates all these functionalities is unavailable. This paper presents IntraFace (IF), a publicly available software package for automated facial feature tracking, head pose estimation, facial attribute recognition, and facial expression analysis from video. In addition, IF includes a newly developed technique for unsupervised synchrony detection to discover correlated facial behavior between two or more persons, a relatively unexplored problem in facial image analysis. In tests, IF achieved state-of-the-art results for emotion expression and action unit detection in three databases, FERA, CK+ and RU-FACS; measured audience reaction to a talk given by one of the authors; and discovered synchrony for smiling in videos of parent-infant interaction. IF is free of charge for academic use at http://www.humansensing.cs.cmu.edu/intraface/.

  6. Dermatological Feasibility of Multimodal Facial Color Imaging Modality for Cross-Evaluation of Facial Actinic Keratosis

    PubMed Central

    Bae, Youngwoo; Son, Taeyoon; Nelson, J. Stuart; Kim, Jae-Hong; Choi, Eung Ho; Jung, Byungjo

    2010-01-01

    Background/Purpose: Digital color image analysis is currently considered a routine procedure in dermatology. In our previous study, a multimodal facial color imaging modality (MFCIM), which provides conventional, parallel- and cross-polarization, and fluorescent color images, was introduced for the objective evaluation of various facial skin lesions. This study introduces a commercial version of MFCIM, DermaVision-PRO, for routine clinical use in dermatology and demonstrates its dermatological feasibility for the cross-evaluation of skin lesions. Methods/Results: Sample images of subjects with actinic keratosis or non-melanoma skin cancers were obtained at four different imaging modes. Various image analysis methods were applied to cross-evaluate the skin lesions and, finally, extract valuable diagnostic information. DermaVision-PRO is potentially useful as an objective macroscopic imaging modality for quick prescreening and cross-evaluation of facial skin lesions. Conclusion: DermaVision-PRO may be utilized as a useful tool for the cross-evaluation of widely distributed facial skin lesions and efficient database management of patient information. PMID:20923462

  7. Spoofing detection on facial images recognition using LBP and GLCM combination

    NASA Astrophysics Data System (ADS)

    Sthevanie, F.; Ramadhani, K. N.

    2018-03-01

    The challenge for facial-image-based security systems is how to detect facial image falsification, such as facial image spoofing. Spoofing occurs when someone tries to pose as a registered user to obtain illegal access to, and gain advantage from, the protected system. This research implements a facial image spoofing detection method based on analyzing image texture. The proposed method for texture analysis combines the Local Binary Pattern (LBP) and Gray Level Co-occurrence Matrix (GLCM) methods. The experimental results show that spoofing detection using the LBP and GLCM combination achieves a higher detection rate than using the LBP or GLCM features alone.
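    A minimal NumPy rendition of the two texture descriptors — an 8-neighbour LBP code image and a single-offset GLCM — shows how a combined feature vector might be assembled. A real spoofing detector would add multiple LBP radii, several GLCM offsets, and a trained classifier:

```python
import numpy as np

def lbp_codes(img):
    """8-neighbour LBP code per interior pixel (bits clockwise from top-left)."""
    c = img[1:-1, 1:-1]
    neigh = [img[:-2, :-2], img[:-2, 1:-1], img[:-2, 2:], img[1:-1, 2:],
             img[2:, 2:], img[2:, 1:-1], img[2:, :-2], img[1:-1, :-2]]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, n in enumerate(neigh):
        codes |= (n >= c).astype(np.uint8) << bit
    return codes

def glcm(img, levels=8):
    """Normalised grey-level co-occurrence matrix for the (0, 1) offset."""
    q = (img.astype(float) / 256 * levels).astype(int)
    m = np.zeros((levels, levels))
    np.add.at(m, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1)
    return m / m.sum()

img = (np.arange(64).reshape(8, 8) * 4).astype(np.uint8)
feature = np.concatenate([np.bincount(lbp_codes(img).ravel(), minlength=256),
                          glcm(img).ravel()])
print(feature.shape)   # (320,) = 256 LBP bins + 8x8 GLCM entries
```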

  8. Facial Expression Recognition using Multiclass Ensemble Least-Square Support Vector Machine

    NASA Astrophysics Data System (ADS)

    Lawi, Armin; Sya'Rani Machrizzandi, M.

    2018-03-01

    Facial expression is one of the behavioral characteristics of human beings. Using a biometric technology system with facial expression characteristics makes it possible to recognize a person's mood or emotion. The basic components of a facial expression analysis system are face detection, face image extraction, facial classification, and facial expression recognition. This paper uses the Principal Component Analysis (PCA) algorithm to extract facial features with expression parameters, i.e., happiness, sadness, neutrality, anger, fear, and disgust. Then the Multiclass Ensemble Least-Squares Support Vector Machine (MELS-SVM) is used for the classification of facial expressions. The MELS-SVM model, evaluated on our 185 different expression images of 10 persons, achieved a high accuracy of 99.998% using the RBF kernel.

  9. Reproducibility of the dynamics of facial expressions in unilateral facial palsy.

    PubMed

    Alagha, M A; Ju, X; Morley, S; Ayoub, A

    2018-02-01

    The aim of this study was to assess the reproducibility of non-verbal facial expressions in unilateral facial paralysis using dynamic four-dimensional (4D) imaging. The Di4D system was used to record five facial expressions of 20 adult patients. The system captured 60 three-dimensional (3D) images per second; each facial expression took 3-4 seconds and was recorded in real time. Thus a set of 180 3D facial images was generated for each expression. The procedure was repeated after 30 min to assess the reproducibility of the expressions. A mathematical facial mesh consisting of thousands of quasi-point 'vertices' was conformed to the face in order to determine the morphological characteristics in a comprehensive manner. The vertices were tracked throughout the sequence of 180 images. Five key 3D facial frames from each sequence of images were analyzed. Comparisons were made between the first and second capture of each facial expression to assess the reproducibility of facial movements. Corresponding images were aligned using partial Procrustes analysis, and the root mean square distance between them was calculated and analyzed statistically (paired Student's t-test, P < 0.05). The facial expressions of lip purse, cheek puff, and raising of the eyebrows were reproducible; those of maximum smile and forceful eye closure were not. The limited coordination of various groups of facial muscles contributed to the lack of reproducibility of these facial expressions. 4D imaging is a useful clinical tool for the assessment of facial expressions. Copyright © 2017 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.
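    The alignment-and-distance step can be sketched as partial Procrustes analysis (translation plus rotation, no scaling) in NumPy; the landmark set below is a toy stand-in for the dense facial mesh:

```python
import numpy as np

def partial_procrustes(A, B):
    """Align landmark set B to A by translation and rotation only
    (partial Procrustes); return the aligned B and the RMS distance."""
    A0 = A - A.mean(axis=0)
    B0 = B - B.mean(axis=0)
    U, _, Vt = np.linalg.svd(B0.T @ A0)
    if np.linalg.det(U @ Vt) < 0:        # avoid reflections
        U[:, -1] *= -1
    R = U @ Vt
    aligned = B0 @ R + A.mean(axis=0)
    rms = np.sqrt(((aligned - A) ** 2).sum(axis=1).mean())
    return aligned, rms

# A toy landmark set and a rotated + translated copy of it.
A = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 2.0], [0.0, 2.0], [0.5, 1.0]])
theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
B = A @ R.T + np.array([5.0, -3.0])

_, rms = partial_procrustes(A, B)
print(f"RMS after alignment: {rms:.2e}")   # ~0 for a rigid transform
```

    In the study, the same RMS distance computed between first and second captures is what feeds the paired t-test.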

  10. BMI and WHR Are Reflected in Female Facial Shape and Texture: A Geometric Morphometric Image Analysis.

    PubMed

    Mayer, Christine; Windhager, Sonja; Schaefer, Katrin; Mitteroecker, Philipp

    2017-01-01

    Facial markers of body composition are frequently studied in evolutionary psychology and are important in computational and forensic face recognition. We assessed the association of body mass index (BMI) and waist-to-hip ratio (WHR) with facial shape and texture (color pattern) in a sample of young Middle European women by a combination of geometric morphometrics and image analysis. Faces of women with high BMI had a wider and rounder facial outline relative to the size of the eyes and lips, and relatively lower eyebrows. Furthermore, women with high BMI had a brighter and more reddish skin color than women with lower BMI. The same facial features were associated with WHR, even though BMI and WHR were only moderately correlated. Yet BMI was better predicted than WHR from facial attributes. After leave-one-out cross-validation, we were able to predict 25% of the variation in BMI and 10% of the variation in WHR from facial shape. Facial texture predicted only about 3-10% of the variation in BMI and WHR. This indicates that facial shape primarily reflects total fat proportion, rather than the distribution of fat within the body. The reddish facial texture of high-BMI women may be mediated by increased blood pressure and superficial blood flow, as well as diet. Our study elucidates how geometric morphometric image analysis serves to quantify the effect of biological factors such as BMI and WHR on facial shape and color, which in turn contributes to social perception.
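    The leave-one-out protocol behind these prediction figures can be sketched with a plain linear regression; the feature matrix and BMI values below are synthetic stand-ins for the morphometric shape variables:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in: 40 subjects, 3 shape features, BMI linear in features.
X = rng.normal(size=(40, 3))
bmi = 22 + X @ np.array([1.5, -0.8, 0.4]) + 0.3 * rng.normal(size=40)

def loo_predictions(X, y):
    """Leave-one-out cross-validated linear-regression predictions."""
    preds = np.empty(len(y))
    for i in range(len(y)):
        mask = np.arange(len(y)) != i          # train on everyone but i
        Xd = np.c_[X[mask], np.ones(mask.sum())]
        w, *_ = np.linalg.lstsq(Xd, y[mask], rcond=None)
        preds[i] = np.r_[X[i], 1.0] @ w        # predict the held-out subject
    return preds

preds = loo_predictions(X, bmi)
r2 = 1 - ((bmi - preds) ** 2).sum() / ((bmi - bmi.mean()) ** 2).sum()
print(f"LOO R^2: {r2:.2f}")
```

    The cross-validated R² is the "percent of variation predicted" quoted in the abstract.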

  11. Characterization and recognition of mixed emotional expressions in thermal face image

    NASA Astrophysics Data System (ADS)

    Saha, Priya; Bhattacharjee, Debotosh; De, Barin K.; Nasipuri, Mita

    2016-05-01

    Facial expressions in infrared imaging have been introduced to solve the problem of illumination, which is an integral constituent of visual imagery. The paper investigates facial skin temperature distribution over the mixed thermal facial expressions of our own face database, in which six expressions are basic and the remaining 12 are mixtures of those basic expressions. Temperature analysis has been performed on three facial regions of interest (ROIs): periorbital, supraorbital and mouth. Temperature variability of the ROIs in different expressions has been measured using statistical parameters. The temperature variation measurements in the ROIs of a particular expression form a vector, which is later used in the recognition of mixed facial expressions. Investigations show that facial features in mixed facial expressions can be characterized by positive-emotion-induced and negative-emotion-induced facial features. The supraorbital region is useful for differentiating basic expressions from mixed expressions. Analysis and interpretation of mixed expressions have been conducted with the help of box-and-whisker plots. A facial region containing a mixture of two expressions generally induces less temperature change than the corresponding facial region in a basic expression.

  12. Evaluation of facial expression in acute pain in cats.

    PubMed

    Holden, E; Calvo, G; Collins, M; Bell, A; Reid, J; Scott, E M; Nolan, A M

    2014-12-01

    To describe the development of a facial expression tool differentiating pain-free cats from those in acute pain. Observers shown facial images of painful and pain-free cats were asked to identify whether or not each cat was in pain. From the facial images, anatomical landmarks were identified and the distances between them were mapped. Selected distances underwent statistical analysis to identify features discriminating pain-free and painful cats. Additionally, thumbnail photographs were reviewed by two experts to identify facial features discriminating between the groups. Observers (n = 68) had difficulty identifying pain-free from painful cats, with only 13% of observers able to discriminate more than 80% of painful cats. Analysis of 78 facial landmarks and 80 distances identified six significant factors differentiating pain-free and painful faces, including ear position and areas around the mouth/muzzle. Standardised mouth and ear distances, when combined, showed excellent discrimination properties, correctly differentiating pain-free and painful cats in 98% of cases. Expert review supported these findings, and a cartoon-type picture scale was developed from the thumbnail images. This initial investigation into the facial features of painful and pain-free cats suggests potentially good discrimination properties of facial images. Further testing is required for the development of a clinical tool. © 2014 British Small Animal Veterinary Association.

  13. Evaluating visibility of age spot and freckle based on simulated spectral reflectance distribution and facial color image

    NASA Astrophysics Data System (ADS)

    Hirose, Misa; Toyota, Saori; Tsumura, Norimichi

    2018-02-01

    In this research, we evaluate the visibility of age spots and freckles as the blood volume changes, based on simulated spectral reflectance distributions and actual facial color images, and compare the results. First, we generate three types of spatial distributions of age spots and freckles in patch-like images based on simulated spectral reflectance. The spectral reflectance is simulated using a Monte Carlo simulation of light transport in multi-layered tissue. Next, we reconstruct the facial color image with a changed blood volume. We acquire the concentration distributions of the melanin, hemoglobin and shading components by applying independent component analysis to a facial color image, and then reproduce images using the obtained melanin and shading concentrations and the changed hemoglobin concentration. Finally, we evaluate the visibility of pigmentation using the simulated spectral reflectance distributions and the facial color images. For the simulated spectral reflectance distributions, we found that visibility decreases as the blood volume increases. However, the results for the facial color images show that only a specific blood volume reduces the visibility of the actual pigmentation.

  14. Signatures of personality on dense 3D facial images.

    PubMed

    Hu, Sile; Xiong, Jieyi; Fu, Pengcheng; Qiao, Lu; Tan, Jingze; Jin, Li; Tang, Kun

    2017-03-06

    It has long been speculated that cues exist on the human face that allow observers to make reliable judgments of others' personality traits. However, direct evidence of an association between facial shape and personality is missing from the current literature. This study assessed the personality attributes of 834 Han Chinese volunteers (405 males and 429 females), utilising the five-factor personality model ('Big Five'), and collected their neutral 3D facial images. Dense anatomical correspondence was established across the 3D facial images in order to allow high-dimensional quantitative analyses of the facial phenotypes. We developed a Partial Least Squares (PLS)-based method, used the composite partial least squares component (CPSLC) to test association between the self-tested personality scores and the dense 3D facial image data, and then used principal component analysis (PCA) for further validation. Among the five personality factors, agreeableness and conscientiousness in males and extraversion in females were significantly associated with specific facial patterns. The personality-related facial patterns were extracted and their effects were extrapolated onto simulated 3D facial models.

  15. SparCLeS: dynamic l₁ sparse classifiers with level sets for robust beard/moustache detection and segmentation.

    PubMed

    Le, T Hoang Ngan; Luu, Khoa; Savvides, Marios

    2013-08-01

    Robust facial hair detection and segmentation is a highly valued soft biometric attribute for carrying out forensic facial analysis. In this paper, we propose a novel and fully automatic system, called SparCLeS, for beard/moustache detection and segmentation in challenging facial images. SparCLeS uses the multiscale self-quotient (MSQ) algorithm to preprocess facial images and deal with illumination variation. Histogram of oriented gradients (HOG) features are extracted from the preprocessed images and a dynamic sparse classifier is built using these features to classify a facial region as either containing skin or facial hair. A level set based approach, which makes use of the advantages of both global and local information, is then used to segment the regions of a face containing facial hair. Experimental results demonstrate the effectiveness of our proposed system in detecting and segmenting facial hair regions in images drawn from three databases, i.e., the NIST Multiple Biometric Grand Challenge (MBGC) still face database, the NIST Color Facial Recognition Technology FERET database, and the Labeled Faces in the Wild (LFW) database.
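    The orientation-histogram core of the HOG descriptor used here can be sketched in a few lines of NumPy; a full HOG implementation would add cell gridding, block normalisation, and interpolated binning:

```python
import numpy as np

def orientation_histogram(img, bins=9):
    """Unsigned gradient-orientation histogram, the core of one HOG cell."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180        # unsigned, 0-180 deg
    hist, _ = np.histogram(ang, bins=bins, range=(0, 180), weights=mag)
    return hist / (hist.sum() + 1e-12)                # L1-normalised

# A vertical step edge: all gradient energy is horizontal (angle ~ 0 deg).
img = np.zeros((16, 16))
img[:, 8:] = 1.0
hist = orientation_histogram(img)
print(hist.argmax())   # 0, i.e. the 0-20 degree orientation bin dominates
```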

  16. Automated facial acne assessment from smartphone images

    NASA Astrophysics Data System (ADS)

    Amini, Mohammad; Vasefi, Fartash; Valdebran, Manuel; Huang, Kevin; Zhang, Haomiao; Kemp, William; MacKinnon, Nicholas

    2018-02-01

    A smartphone mobile medical application is presented that provides analysis of the health of facial skin using a smartphone image and cloud-based image processing techniques. The mobile application uses the camera to capture a front face image of a subject, after which the captured image is spatially calibrated based on fiducial points such as the position of the iris of the eye. A facial recognition algorithm is used to identify features of the human face image, to normalize the image, and to define facial regions of interest (ROI) for acne assessment. We identify acne lesions and classify them into two categories: papules and pustules. Automated facial acne assessment was validated by performing tests on images of 60 digital human models and 10 real human face images. The application was able to identify 92% of acne lesions within five facial ROIs. The classification accuracy for separating papules from pustules was 98%. Combined with in-app documentation of treatment, lifestyle factors, and automated facial acne assessment, the app can be used in both cosmetic and clinical dermatology. It allows users to quantitatively self-measure acne severity and treatment efficacy on an ongoing basis to help them manage their chronic facial acne.

  17. BMI and WHR Are Reflected in Female Facial Shape and Texture: A Geometric Morphometric Image Analysis

    PubMed Central

    Mayer, Christine; Windhager, Sonja; Schaefer, Katrin; Mitteroecker, Philipp

    2017-01-01

    Facial markers of body composition are frequently studied in evolutionary psychology and are important in computational and forensic face recognition. We assessed the association of body mass index (BMI) and waist-to-hip ratio (WHR) with facial shape and texture (color pattern) in a sample of young Middle European women by a combination of geometric morphometrics and image analysis. Faces of women with high BMI had a wider and rounder facial outline relative to the size of the eyes and lips, and relatively lower eyebrows. Furthermore, women with high BMI had a brighter and more reddish skin color than women with lower BMI. The same facial features were associated with WHR, even though BMI and WHR were only moderately correlated. Yet BMI was better predicted than WHR from facial attributes. After leave-one-out cross-validation, we were able to predict 25% of the variation in BMI and 10% of the variation in WHR from facial shape. Facial texture predicted only about 3–10% of the variation in BMI and WHR. This indicates that facial shape primarily reflects total fat proportion, rather than the distribution of fat within the body. The reddish facial texture of high-BMI women may be mediated by increased blood pressure and superficial blood flow, as well as diet. Our study elucidates how geometric morphometric image analysis serves to quantify the effect of biological factors such as BMI and WHR on facial shape and color, which in turn contributes to social perception. PMID:28052103

  18. Person-independent facial expression analysis by fusing multiscale cell features

    NASA Astrophysics Data System (ADS)

    Zhou, Lubing; Wang, Han

    2013-03-01

    Automatic facial expression recognition is an interesting and challenging task. To achieve satisfactory accuracy, deriving a robust facial representation is especially important. A novel appearance-based feature, multiscale cell local intensity increasing patterns (MC-LIIP), is presented to represent facial images and conduct person-independent facial expression analysis. LIIP uses a decimal number to encode the texture or intensity distribution around each pixel via pixel-to-pixel intensity comparison. To boost noise resistance, MC-LIIP carries out the comparison on the average values of scalable cells instead of individual pixels. The facial descriptor fuses region-based histograms of MC-LIIP features from various scales, so as to encode not only the textural microstructures but also the macrostructures of facial images. Finally, a support vector machine classifier is applied for expression recognition. Experimental results on the CK+ and Karolinska Directed Emotional Faces databases show the superiority of the proposed method.
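    The cell-averaging idea behind MC-LIIP can be sketched as follows. Since the abstract does not give the exact LIIP encoding, an LBP-style binary comparison over cell means is used here as a stand-in; only the cell averaging is taken directly from the description:

```python
import numpy as np

def cell_means(img, cell=2):
    """Average intensity over non-overlapping cell x cell blocks (the
    averaging step that gives MC-LIIP its noise resistance)."""
    h, w = img.shape
    return img[:h - h % cell, :w - w % cell].reshape(
        h // cell, cell, w // cell, cell).mean(axis=(1, 3))

def cell_codes(means):
    """Decimal code per interior cell from comparisons against its 8
    neighbouring cells (LBP-style stand-in for the LIIP encoding)."""
    c = means[1:-1, 1:-1]
    neigh = [means[:-2, :-2], means[:-2, 1:-1], means[:-2, 2:], means[1:-1, 2:],
             means[2:, 2:], means[2:, 1:-1], means[2:, :-2], means[1:-1, :-2]]
    codes = np.zeros(c.shape, dtype=int)
    for bit, n in enumerate(neigh):
        codes += (n >= c).astype(int) << bit
    return codes

img = np.arange(100, dtype=float).reshape(10, 10)
hist = np.bincount(cell_codes(cell_means(img)).ravel(), minlength=256)
print(hist.sum())   # 9 codes: the 3x3 interior of the 5x5 grid of cell means
```

    Region-based histograms like `hist`, computed at several cell sizes, would then be concatenated into the multiscale descriptor.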

  19. Autonomous facial recognition system inspired by human visual system based logarithmical image visualization technique

    NASA Astrophysics Data System (ADS)

    Wan, Qianwen; Panetta, Karen; Agaian, Sos

    2017-05-01

    Autonomous facial recognition systems are widely used in real-life applications, such as homeland border security, law enforcement identification and authentication, and video-based surveillance analysis. Issues like low image quality, non-uniform illumination, and variations in pose and facial expression can impair the performance of recognition systems. To address the non-uniform illumination challenge, we present a novel, robust autonomous facial recognition system inspired by the human visual system and based on a so-called logarithmical image visualization technique. In this paper, the proposed method, for the first time, couples the logarithmical image visualization technique with the local binary pattern to perform discriminative feature extraction for a facial recognition system. The Yale database, the Yale-B database and the ATT database are used to test accuracy and efficiency in computer simulation. The extensive computer simulation demonstrates the method's efficiency, accuracy, and robustness to illumination variation in facial recognition.
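    A minimal logarithmic visualization step of the kind the paper builds on might look like this. The authors' exact transform is not specified in the abstract, so a plain log compression with rescaling is shown:

```python
import numpy as np

def log_visualize(img, eps=1.0):
    """Logarithmic dynamic-range compression followed by rescaling to
    [0, 255]; dark, under-lit regions are boosted relative to bright ones."""
    out = np.log(img.astype(float) + eps)
    out -= out.min()
    return (255 * out / out.max()).astype(np.uint8)

# A face-like intensity ramp with one under-lit half.
img = np.tile(np.linspace(0, 255, 8), (8, 1))
img[:, :4] *= 0.1                      # simulate non-uniform illumination
enhanced = log_visualize(img)
```

    LBP features would then be extracted from `enhanced` rather than from the raw image, making the descriptor less sensitive to the illumination gradient.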

  20. A novel method to measure conspicuous facial pores using computer analysis of digital-camera-captured images: the effect of glycolic acid chemical peeling.

    PubMed

    Kakudo, Natsuko; Kushida, Satoshi; Tanaka, Nobuko; Minakata, Tatsuya; Suzuki, Kenji; Kusumoto, Kenji

    2011-11-01

    Chemical peeling is becoming increasingly popular for skin rejuvenation in dermatological esthetic surgery. Conspicuous facial pores are one of the most frequently encountered skin problems in women of all ages. This study was performed to analyze the effectiveness of reducing conspicuous facial pores using glycolic acid chemical peeling (GACP) based on a novel computer analysis of digital-camera-captured images. GACP was performed a total of five times at 2-week intervals in 22 healthy women. Computerized image analysis of conspicuous, open, and darkened facial pores was performed using the Robo Skin Analyzer CS 50. The number of conspicuous facial pores decreased significantly in 19 (86%) of the 22 subjects, with a mean improvement rate of 34.6%. The number of open pores decreased significantly in 16 (72%) of the subjects, with a mean improvement rate of 11.0%. The number of darkened pores decreased significantly in 18 (81%) of the subjects, with a mean improvement rate of 34.3%. GACP significantly reduces the number of conspicuous facial pores. The Robo Skin Analyzer CS 50 is useful for the quantification and analysis of 'pore enlargement', a subtle finding in dermatological esthetic surgery. © 2011 John Wiley & Sons A/S.

  21. Brain Responses to Dynamic Facial Expressions: A Normative Meta-Analysis.

    PubMed

    Zinchenko, Oksana; Yaple, Zachary A; Arsalidou, Marie

    2018-01-01

    Identifying facial expressions is crucial for social interactions. Functional neuroimaging studies show that a set of brain areas, such as the fusiform gyrus and amygdala, become active when viewing emotional facial expressions. The majority of functional magnetic resonance imaging (fMRI) studies investigating face perception typically employ static images of faces. However, studies that use dynamic facial expressions (e.g., videos) are accumulating and suggest that a dynamic presentation may be more sensitive and ecologically valid for investigating faces. Using quantitative fMRI meta-analysis, the present study examined the concordance of brain regions associated with viewing dynamic facial expressions. We analyzed data from 216 participants who participated in 14 studies, which reported coordinates for 28 experiments. Our analysis revealed bilateral fusiform and middle temporal gyri, the left amygdala, the left declive of the cerebellum and the right inferior frontal gyrus. These regions are discussed in terms of their relation to models of face processing.

  22. A new quantitative evaluation method for age-related changes of individual pigmented spots in facial skin.

    PubMed

    Kikuchi, K; Masuda, Y; Yamashita, T; Sato, K; Katagiri, C; Hirao, T; Mizokami, Y; Yaguchi, H

    2016-08-01

    Facial skin pigmentation is one of the most prominent visible features of skin aging and often affects the perception of health and beauty. To date, facial pigmentation has been evaluated using various image analysis methods developed for the cosmetic and esthetic fields. However, existing methods cannot provide precise information on pigmented spots, such as variations in size, color shade, and distribution pattern. The purpose of this study is to develop image evaluation methods that analyze individual pigmented spots and acquire detailed information on their age-related changes. To characterize the individual pigmented spots within a cheek image, we established a simple object-counting algorithm. First, we captured cheek images using an original imaging system equipped with an illumination unit and a high-resolution digital camera. The acquired images were converted into melanin concentration images using compensation formulae. Next, the melanin images were converted into binary images, which were then subjected to noise reduction. Finally, we calculated parameters such as the melanin concentration, quantity, and size of individual pigmented spots using a connected-components labeling algorithm, which assigns a unique label to each separate group of connected pixels. The cheek image analysis was evaluated on 643 female Japanese subjects. We confirmed through manual evaluation of the cheek images that the proposed method was sufficiently sensitive to measure the melanin concentration and the numbers and sizes of individual pigmented spots. The image analysis results for the 643 Japanese women indicated clear relationships between age and changes in the pigmented spots. We developed a new quantitative evaluation method for individual pigmented spots in facial skin. This method facilitates the analysis of the characteristics of various pigmented facial spots and is directly applicable to the fields of dermatology, pharmacology, and esthetic cosmetology. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
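    The binary-image-to-labelled-spots step corresponds to standard connected-components labelling, which can be sketched with a BFS flood fill; the 4-connectivity choice here is illustrative:

```python
import numpy as np
from collections import deque

def label_components(binary):
    """4-connected components labelling of a binary image (BFS flood fill);
    returns the label image and per-component pixel counts (spot sizes)."""
    labels = np.zeros(binary.shape, dtype=int)
    sizes, current = [], 0
    for r, c in zip(*np.nonzero(binary)):
        if labels[r, c]:
            continue                      # pixel already labelled
        current += 1
        labels[r, c] = current
        queue, count = deque([(r, c)]), 0
        while queue:
            y, x = queue.popleft()
            count += 1
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if (0 <= ny < binary.shape[0] and 0 <= nx < binary.shape[1]
                        and binary[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = current
                    queue.append((ny, nx))
        sizes.append(count)
    return labels, sizes

# Toy binary "melanin" image with two pigmented spots of sizes 4 and 2.
spots = np.zeros((6, 6), dtype=bool)
spots[1:3, 1:3] = True      # 4-pixel spot
spots[4, 3:5] = True        # 2-pixel spot
labels, sizes = label_components(spots)
print(len(sizes), sorted(sizes))   # 2 [2, 4]
```

    Per-spot melanin concentration would then be read off by averaging the concentration image over each label.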

  23. Facial fluid synthesis for assessment of acne vulgaris using luminescent visualization system through optical imaging and integration of fluorescent imaging system

    NASA Astrophysics Data System (ADS)

    Balbin, Jessie R.; Dela Cruz, Jennifer C.; Camba, Clarisse O.; Gozo, Angelo D.; Jimenez, Sheena Mariz B.; Tribiana, Aivje C.

    2017-06-01

    Acne vulgaris, commonly called acne, is a skin problem that occurs when oil and dead skin cells clog a person's pores. This occurs because hormonal changes make the skin oilier. The problem is that people lack a real assessment of their skin's sensitivity in terms of facial fluid development, which tends to lead to acne vulgaris and further complications. This research aims to assess acne vulgaris using a luminescent visualization system through optical imaging and the integration of image processing algorithms. Specifically, it aims to design a prototype for facial fluid analysis using a luminescent visualization system through optical imaging and the integration of a fluorescent imaging system, and to classify the different facial fluids present in each person. Throughout the process, some structures and layers of the face are excluded, leaving only a mapped facial structure with acne regions. Facial fluid regions are distinguished from the acne regions as they are characterized differently.

  4. Music-Elicited Emotion Identification Using Optical Flow Analysis of Human Face

    NASA Astrophysics Data System (ADS)

    Kniaz, V. V.; Smirnova, Z. N.

    2015-05-01

    Human emotion identification from image sequences is in high demand nowadays. Possible applications range from the automatic smile shutter function of consumer-grade digital cameras to Biofied Building technologies, which enable communication between a building space and its residents. The highly perceptual nature of human emotions makes their classification and identification complex. The main question arises from the subjective quality of the emotional classification of events that elicit human emotions. A variety of methods for the formal classification of emotions have been developed in musical psychology. This work focuses on identifying human emotions evoked by musical pieces using human face tracking and optical flow analysis. A facial feature tracking algorithm used for estimating facial feature speed and position is presented. Facial features were extracted from each image sequence using human face tracking with local binary pattern (LBP) features. Accurate relative speeds of facial features were estimated using optical flow analysis. The obtained relative positions and speeds were used as the output facial emotion vector. The algorithm was tested using original software and recorded image sequences. The proposed technique proves to give robust identification of human emotions elicited by musical pieces. The estimated models could be used for human emotion identification from image sequences in such fields as emotion-based musical backgrounds or mood-dependent radio.
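
    The optical-flow speed estimation can be illustrated with a single-patch Lucas-Kanade estimator; this is a standard method used as a stand-in here, since the abstract does not specify the exact flow algorithm, and the Gaussian-blob "feature" is synthetic:

```python
import numpy as np

def lucas_kanade(prev, curr):
    """Single-patch Lucas-Kanade: solve the least-squares optical flow
    equations [Ix Iy] v = -It for one translation vector v = (vx, vy)."""
    Iy, Ix = np.gradient(prev.astype(float))      # spatial gradients
    It = curr.astype(float) - prev.astype(float)  # temporal gradient
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    v, *_ = np.linalg.lstsq(A, -It.ravel(), rcond=None)
    return v

# Synthetic "facial feature": a Gaussian blob shifted one pixel right.
y, x = np.mgrid[0:17, 0:17]
prev = np.exp(-((x - 8.0)**2 + (y - 8.0)**2) / 20.0)
curr = np.exp(-((x - 9.0)**2 + (y - 8.0)**2) / 20.0)
vx, vy = lucas_kanade(prev, curr)
print(round(vx, 2), round(vy, 2))  # expect vx near 1, vy near 0
```

    Per-feature velocity vectors of this kind, tracked over the sequence, form the emotion feature vector the abstract describes.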

  5. Illuminant color estimation based on pigmentation separation from human skin color

    NASA Astrophysics Data System (ADS)

    Tanaka, Satomi; Kakinuma, Akihiro; Kamijo, Naohiro; Takahashi, Hiroshi; Tsumura, Norimichi

    2015-03-01

    Humans have a visual mechanism called "color constancy" that maintains the perceived colors of an object across various light sources. An effective color constancy algorithm that uses the human facial color in a digital color image has been proposed; however, it produces erroneous estimates because facial color differs between individuals. In this paper, we present a novel color constancy algorithm based on skin color analysis, a method that separates skin color into melanin, hemoglobin, and shading components. We exploit a stationary property of Japanese facial color that is calculated from the melanin and hemoglobin components. As a result, the proposed method can use the subject's facial color in an image without depending on the individual differences among Japanese facial colors.
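
    A minimal sketch of the pigmentation-separation idea, assuming a linear optical-density model in log-RGB space; the melanin and hemoglobin basis vectors below are made-up placeholders, not the paper's measured values:

```python
import numpy as np

# Assumed absorbance basis vectors in log-RGB space (illustrative only).
mel = np.array([0.74, 0.55, 0.39])   # hypothetical melanin direction
hem = np.array([0.41, 0.83, 0.38])   # hypothetical hemoglobin direction
one = np.ones(3) / np.sqrt(3)        # uniform shading direction

M = np.stack([mel, hem, one], axis=1)  # mixing matrix (3 x 3)

def separate(log_rgb):
    """Least-squares split of a log-RGB pixel into melanin,
    hemoglobin, and shading coefficients."""
    coef, *_ = np.linalg.lstsq(M, log_rgb, rcond=None)
    return coef

# A pixel synthesized from known mixing weights is recovered exactly.
pixel = 0.8 * mel + 0.3 * hem + 0.1 * one
coef = separate(pixel)
print(np.round(coef, 3))  # recovers the weights [0.8, 0.3, 0.1]
```

    The melanin and hemoglobin coefficient maps obtained this way are what the stationarity property is computed from.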

  6. Recognizing Action Units for Facial Expression Analysis

    PubMed Central

    Tian, Ying-li; Kanade, Takeo; Cohn, Jeffrey F.

    2010-01-01

    Most automatic expression analysis systems attempt to recognize a small set of prototypic expressions, such as happiness, anger, surprise, and fear. Such prototypic expressions, however, occur rather infrequently; human emotions and intentions are more often communicated by changes in one or a few discrete facial features. In this paper, we develop an Automatic Face Analysis (AFA) system to analyze facial expressions based on both permanent facial features (brows, eyes, mouth) and transient facial features (deepening of facial furrows) in a nearly frontal-view face image sequence. The AFA system classifies fine-grained changes in facial expression into action units (AUs) of the Facial Action Coding System (FACS), instead of a few prototypic expressions. Multistate face and facial component models are proposed for tracking and modeling the various facial features, including lips, eyes, brows, cheeks, and furrows. During tracking, detailed parametric descriptions of the facial features are extracted. With these parameters as the inputs, a group of action units (neutral expression, six upper-face AUs, and ten lower-face AUs) are recognized, whether they occur alone or in combination. The system has achieved average recognition rates of 96.4 percent (95.4 percent if neutral expressions are excluded) for upper-face AUs and 96.7 percent (95.6 percent with neutral expressions excluded) for lower-face AUs. The generalizability of the system has been tested using independent image databases collected and FACS-coded for ground truth by different research teams. PMID:25210210

  7. Quantitative Anthropometric Measures of Facial Appearance of Healthy Hispanic/Latino White Children: Establishing Reference Data for Care of Cleft Lip With or Without Cleft Palate

    NASA Astrophysics Data System (ADS)

    Lee, Juhun; Ku, Brian; Combs, Patrick D.; Da Silveira, Adriana C.; Markey, Mia K.

    2017-06-01

    Cleft lip with or without cleft palate (CL ± P) is one of the most common congenital facial deformities worldwide. To minimize negative social consequences of CL ± P, reconstructive surgery is conducted to modify the face to a more normal appearance. Each race/ethnic group requires its own facial norm data, yet there are no existing facial norm data for Hispanic/Latino White children. The objective of this paper is to identify measures of facial appearance relevant for planning reconstructive surgery for CL ± P of Hispanic/Latino White children. Quantitative analysis was conducted on 3D facial images of 82 (41 girls, 41 boys) healthy Hispanic/Latino White children whose ages ranged from 7 to 12 years. Twenty-eight facial anthropometric features related to CL ± P (mainly in the nasal and mouth area) were measured from 3D facial images. In addition, facial aesthetic ratings were obtained from 16 non-clinical observers for the same 3D facial images using a 7-point Likert scale. Pearson correlation analysis was conducted to find features that were correlated with the panel ratings of observers. Boys with a longer face and nose, or thicker upper and lower lips are considered more attractive than others while girls with a less curved middle face contour are considered more attractive than others. Associated facial landmarks for these features are primary focus areas for reconstructive surgery for CL ± P. This study identified anthropometric measures of facial features of Hispanic/Latino White children that are pertinent to CL ± P and which correlate with the panel attractiveness ratings.
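
    The Pearson correlation analysis described above amounts to the following computation; the anthropometric values and panel ratings below are hypothetical, for illustration only:

```python
import numpy as np

# Hypothetical data: one anthropometric feature (e.g. nose length, mm)
# for 8 children, and the mean panel attractiveness rating (1-7 Likert).
feature = np.array([38.1, 40.2, 41.0, 42.5, 43.1, 44.8, 45.2, 46.0])
rating  = np.array([3.1, 3.4, 3.3, 4.0, 4.2, 4.1, 4.6, 4.8])

# Pearson r: off-diagonal entry of the 2x2 correlation matrix.
r = np.corrcoef(feature, rating)[0, 1]
print(round(r, 3))  # strong positive correlation for this toy data
```

    Features whose r passes a significance threshold are the ones flagged as relevant for surgical planning.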

  8. Tensor Rank Preserving Discriminant Analysis for Facial Recognition.

    PubMed

    Tao, Dapeng; Guo, Yanan; Li, Yaotang; Gao, Xinbo

    2017-10-12

    Facial recognition, one of the basic topics in computer vision and pattern recognition, has received substantial attention in recent years. However, in traditional facial recognition algorithms, the facial images are reshaped into a long vector, thereby losing part of the original spatial constraints of each pixel. In this paper, a new tensor-based feature extraction algorithm termed tensor rank preserving discriminant analysis (TRPDA) for facial image recognition is proposed. The method involves two stages: in the first stage, the low-dimensional tensor subspace of the original input tensor samples is obtained; in the second stage, discriminative locality alignment is utilized to obtain the ultimate vector feature representation for subsequent facial recognition. On the one hand, the proposed TRPDA algorithm fully utilizes the natural structure of the input samples and applies an optimization criterion that can directly handle the tensor spectral analysis problem, thereby decreasing the computation cost compared with traditional tensor-based feature selection algorithms. On the other hand, the proposed TRPDA algorithm extracts features by finding a tensor subspace that preserves most of the rank order information of the intra-class input samples. Experiments on three facial databases are performed to determine the effectiveness of the proposed TRPDA algorithm.

  9. Automated diagnosis of fetal alcohol syndrome using 3D facial image analysis

    PubMed Central

    Fang, Shiaofen; McLaughlin, Jason; Fang, Jiandong; Huang, Jeffrey; Autti-Rämö, Ilona; Fagerlund, Åse; Jacobson, Sandra W.; Robinson, Luther K.; Hoyme, H. Eugene; Mattson, Sarah N.; Riley, Edward; Zhou, Feng; Ward, Richard; Moore, Elizabeth S.; Foroud, Tatiana

    2012-01-01

    Objectives: Use three-dimensional (3D) facial laser scanned images from children with fetal alcohol syndrome (FAS) and controls to develop an automated diagnosis technique that can reliably and accurately identify individuals prenatally exposed to alcohol. Methods: A detailed dysmorphology evaluation, history of prenatal alcohol exposure, and 3D facial laser scans were obtained from 149 individuals (86 FAS; 63 Control) recruited from two study sites (Cape Town, South Africa and Helsinki, Finland). Computer graphics, machine learning, and pattern recognition techniques were used to automatically identify a set of facial features that best discriminated individuals with FAS from controls in each sample. Results: An automated feature detection and analysis technique was developed and applied to the two study populations. A unique set of facial regions and features were identified for each population that accurately discriminated FAS and control faces without any human intervention. Conclusion: Our results demonstrate that computer algorithms can be used to automatically detect facial features that can discriminate FAS and control faces. PMID:18713153

  10. Computer-Aided Recognition of Facial Attributes for Fetal Alcohol Spectrum Disorders.

    PubMed

    Valentine, Matthew; Bihm, Dustin C J; Wolf, Lior; Hoyme, H Eugene; May, Philip A; Buckley, David; Kalberg, Wendy; Abdul-Rahman, Omar A

    2017-12-01

    To compare the detection of facial attributes by computer-based facial recognition software of 2-D images against standard, manual examination in fetal alcohol spectrum disorders (FASD). Participants were gathered from the Fetal Alcohol Syndrome Epidemiology Research database. Standard frontal and oblique photographs of children were obtained during a manual, in-person dysmorphology assessment. Images were submitted for facial analysis conducted by the facial dysmorphology novel analysis technology (an automated system), which assesses ratios of measurements between various facial landmarks to determine the presence of dysmorphic features. Manual blinded dysmorphology assessments were compared with those obtained via the computer-aided system. Areas under the curve values for individual receiver-operating characteristic curves revealed the computer-aided system (0.88 ± 0.02) to be comparable to the manual method (0.86 ± 0.03) in detecting patients with FASD. Interestingly, cases of alcohol-related neurodevelopmental disorder (ARND) were identified more efficiently by the computer-aided system (0.84 ± 0.07) in comparison to the manual method (0.74 ± 0.04). A facial gestalt analysis of patients with ARND also identified more generalized facial findings compared to the cardinal facial features seen in more severe forms of FASD. We found there was an increased diagnostic accuracy for ARND via our computer-aided method. As this category has been historically difficult to diagnose, we believe our experiment demonstrates that facial dysmorphology novel analysis technology can potentially improve ARND diagnosis by introducing a standardized metric for recognizing FASD-associated facial anomalies. Earlier recognition of these patients will lead to earlier intervention with improved patient outcomes. Copyright © 2017 by the American Academy of Pediatrics.
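
    The reported area-under-the-curve values can be reproduced conceptually with the rank-based (Mann-Whitney) formulation of AUC: the probability that a randomly chosen case outscores a randomly chosen control. The scores below are invented for illustration:

```python
def auc(scores_pos, scores_neg):
    """ROC AUC via the Mann-Whitney U statistic: fraction of
    positive/negative pairs in which the positive scores higher
    (ties count as half)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical dysmorphology scores: FASD cases vs. controls.
cases    = [0.9, 0.8, 0.75, 0.6]
controls = [0.7, 0.5, 0.4, 0.3]
print(auc(cases, controls))  # 15 of 16 pairs won -> 0.9375
```

    An AUC of 0.88 (computer-aided) versus 0.86 (manual), as reported above, means the automated system orders case/control pairs correctly slightly more often.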

  11. Classifying Facial Actions

    PubMed Central

    Donato, Gianluca; Bartlett, Marian Stewart; Hager, Joseph C.; Ekman, Paul; Sejnowski, Terrence J.

    2010-01-01

    The Facial Action Coding System (FACS) [23] is an objective method for quantifying facial movement in terms of component actions. This system is widely used in behavioral investigations of emotion, cognitive processes, and social interaction. The coding is presently performed by highly trained human experts. This paper explores and compares techniques for automatically recognizing facial actions in sequences of images. These techniques include analysis of facial motion through estimation of optical flow; holistic spatial analysis, such as principal component analysis, independent component analysis, local feature analysis, and linear discriminant analysis; and methods based on the outputs of local filters, such as Gabor wavelet representations and local principal components. Performance of these systems is compared to naive and expert human subjects. Best performances were obtained using the Gabor wavelet representation and the independent component representation, both of which achieved 96 percent accuracy for classifying 12 facial actions of the upper and lower face. The results provide converging evidence for the importance of using local filters, high spatial frequencies, and statistical independence for classifying facial actions. PMID:21188284
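
    A Gabor wavelet representation of the kind that performed best can be sketched as a small filter bank; the size, wavelength, and sigma values here are illustrative, not those of the study:

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    """Real part of a 2-D Gabor filter: a sinusoid at orientation
    theta windowed by an isotropic Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # rotate coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / wavelength)
    return envelope * carrier

# A small bank over 4 orientations, as used for facial action coding.
bank = [gabor_kernel(15, wavelength=6.0, theta=t, sigma=3.0)
        for t in np.linspace(0, np.pi, 4, endpoint=False)]
print(len(bank), bank[0].shape)  # 4 (15, 15)
```

    Convolving a face image with such a bank and keeping filter magnitudes yields the high-spatial-frequency, local representation the study found most discriminative.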

  12. Wait, are you sad or angry? Large exposure time differences required for the categorization of facial expressions of emotion

    PubMed Central

    Du, Shichuan; Martinez, Aleix M.

    2013-01-01

    Facial expressions of emotion are essential components of human behavior, yet little is known about the hierarchical organization of their cognitive analysis. We study the minimum exposure time needed to successfully classify the six classical facial expressions of emotion (joy, surprise, sadness, anger, disgust, fear) plus neutral as seen at different image resolutions (240 × 160 to 15 × 10 pixels). Our results suggest a consistent hierarchical analysis of these facial expressions regardless of the resolution of the stimuli. Happiness and surprise can be recognized after very short exposure times (10–20 ms), even at low resolutions. Fear and anger are recognized the slowest (100–250 ms), even in high-resolution images, suggesting a later computation. Sadness and disgust are recognized in between (70–200 ms). The minimum exposure time required for successful classification of each facial expression correlates with the ability of a human subject to identify it correctly at low resolutions. These results suggest a fast, early computation of expressions represented mostly by low spatial frequencies or global configural cues and a later, slower process for those categories requiring a more fine-grained analysis of the image. We also demonstrate that those expressions that are mostly visible in higher-resolution images are not recognized as accurately. We summarize implications for current computational models. PMID:23509409

  13. Hybrid Feature Extraction-based Approach for Facial Parts Representation and Recognition

    NASA Astrophysics Data System (ADS)

    Rouabhia, C.; Tebbikh, H.

    2008-06-01

    Face recognition is a specialized area of image processing that has attracted considerable attention in computer vision. In this article, we develop a new facial recognition system from video sequence images, dedicated to identifying persons whose faces are partly occluded. The system is based on a hybrid image feature extraction technique called ACPDL2D (Rouabhia et al. 2007), which combines two-dimensional principal component analysis and two-dimensional linear discriminant analysis with a neural network. We performed the feature extraction task on the eye and nose images separately, and then a multi-layer perceptron classifier was used. Compared to the whole face, the simulation results favor the facial parts in terms of memory capacity and recognition rate (99.41% for the eyes, 98.16% for the nose, and 97.25% for the whole face).
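
    The two-dimensional PCA half of the hybrid technique can be sketched as follows; this is a generic 2DPCA sketch operating on random stand-in data, not the authors' ACPDL2D implementation:

```python
import numpy as np

def two_d_pca(images, k):
    """2DPCA: build the image covariance matrix from the rows of the
    mean-centered images (no vectorization of the whole image), then
    project each image onto the top-k eigenvectors."""
    A = np.asarray(images, dtype=float)          # shape (n, h, w)
    mean = A.mean(axis=0)
    G = np.zeros((A.shape[2], A.shape[2]))
    for Ai in A:
        D = Ai - mean
        G += D.T @ D                             # accumulate covariance
    G /= len(A)
    vals, vecs = np.linalg.eigh(G)               # ascending eigenvalues
    X = vecs[:, ::-1][:, :k]                     # top-k projection axes
    return [Ai @ X for Ai in A], X

rng = np.random.default_rng(0)
feats, X = two_d_pca(rng.standard_normal((10, 8, 6)), k=2)
print(len(feats), feats[0].shape, X.shape)  # 10 (8, 2) (6, 2)
```

    Because images stay as matrices, each 8×6 "eye" or "nose" patch is reduced to an 8×2 feature matrix, which is the memory saving the abstract refers to.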

  14. Bilateral cleft lip and palate: A morphometric analysis of facial skeletal form using cone beam computed tomography.

    PubMed

    Starbuck, John M; Ghoneima, Ahmed; Kula, Katherine

    2015-07-01

    Bilateral cleft lip and palate (BCLP) is caused by a lack of merging of the maxillary and nasal facial prominences during development and morphogenesis. BCLP is associated with congenital defects of the oronasal facial region that can impair ingestion, mastication, speech, and dentofacial development. Using cone beam computed tomography (CBCT) images, 7- to 18-year-old individuals born with BCLP (n = 15) and age- and sex-matched controls (n = 15) were retrospectively assessed. Coordinate values of three-dimensional facial skeletal anatomical landmarks (n = 32) were measured from each CBCT image. Data were evaluated using principal coordinates analysis (PCOORD) and Euclidean Distance Matrix Analysis (EDMA). PCOORD axes 1-3 explain approximately 45% of the morphological variation between samples, and specific patterns of morphological differences were associated with each axis. Approximately 30% of facial skeletal measures significantly differ by confidence interval testing (α = 0.10) between samples. While significant form differences occur across the facial skeleton, strong patterns of differences are localized to the lateral and superioinferior aspects of the nasal aperture. In conclusion, the BCLP deformity significantly alters the facial skeletal morphology of the midface and oronasal regions of the face, but morphological differences were also found in the upper facial skeleton and, to a lesser extent, the lower facial skeleton. This pattern of strong differences in the oronasal region of the facial skeleton, combined with differences across the rest of the facial complex, underscores the idea that the bones of the craniofacial skeleton are integrated. © 2015 Wiley Periodicals, Inc.
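
    EDMA compares forms through matrices of all pairwise inter-landmark distances, which makes the comparison independent of landmark coordinate systems. A minimal sketch with toy 3-D landmarks (not real craniofacial data):

```python
import numpy as np
from itertools import combinations

def form_matrix(landmarks):
    """EDMA form matrix: the vector of all pairwise Euclidean
    distances between 3-D landmarks."""
    pts = np.asarray(landmarks, dtype=float)
    return np.array([np.linalg.norm(pts[i] - pts[j])
                     for i, j in combinations(range(len(pts)), 2)])

# Two toy "faces": the second is the first uniformly scaled by 1.1,
# so every element of the form-difference ratio matrix equals 1.1.
face_a = np.array([[0, 0, 0], [1, 0, 0], [0, 2, 0], [0, 0, 3]])
face_b = 1.1 * face_a
ratios = form_matrix(face_b) / form_matrix(face_a)
print(np.allclose(ratios, 1.1))  # True
```

    In the study, ratios that deviate substantially from 1 (by confidence interval testing) flag the inter-landmark distances, and hence the facial regions, that differ between BCLP and control samples.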

  15. Comparison of different methods for gender estimation from face image of various poses

    NASA Astrophysics Data System (ADS)

    Ishii, Yohei; Hongo, Hitoshi; Niwa, Yoshinori; Yamamoto, Kazuhiko

    2003-04-01

    Recently, gender estimation from face images has been studied for frontal facial images. However, it is difficult to consistently obtain such facial images in application systems for security, surveillance, and marketing research. To build such systems, a method is required that estimates gender from images of various facial poses. In this paper, three different classifiers using four directional features (FDF) are compared for appearance-based gender estimation: linear discriminant analysis (LDA), support vector machines (SVMs), and Sparse Network of Winnows (SNoW). Face images used for the experiments were obtained from 35 viewpoints, with the viewing direction varying +/-45 degrees horizontally and +/-30 degrees vertically at 15-degree intervals. Although LDA showed the best performance for frontal facial images, an SVM with a Gaussian kernel achieved the best performance (86.0%) over the facial images from all 35 viewpoints. These results suggest that an SVM with a Gaussian kernel is robust to changes in viewpoint when estimating gender. Furthermore, the estimation rate at each of the 35 viewpoints was quite close to the average rate, suggesting that the methods can reasonably estimate gender within the experimented range of viewpoints by learning face images from multiple directions as a single class.
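
    The Gaussian kernel behind the best-performing SVM is the standard RBF kernel. A minimal sketch of the kernel-matrix computation (the gamma value is illustrative, not from the paper):

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.5):
    """Gaussian (RBF) kernel matrix: K[i, j] = exp(-gamma * ||x_i - y_j||^2),
    computed from squared distances via the expansion
    ||x - y||^2 = ||x||^2 + ||y||^2 - 2 x.y."""
    sq = (np.sum(X**2, axis=1)[:, None] + np.sum(Y**2, axis=1)[None, :]
          - 2 * X @ Y.T)
    return np.exp(-gamma * np.maximum(sq, 0.0))  # clamp float noise

X = np.array([[0.0, 0.0], [1.0, 0.0]])
K = rbf_kernel(X, X)
print(K.shape)  # (2, 2); diagonal is 1, off-diagonal exp(-0.5)
```

    The kernel's locality is one plausible reason for the viewpoint robustness observed: each training pose only influences nearby regions of feature space.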

  16. Ethnicity identification from face images

    NASA Astrophysics Data System (ADS)

    Lu, Xiaoguang; Jain, Anil K.

    2004-08-01

    Human facial images provide demographic information, such as ethnicity and gender. Conversely, ethnicity and gender also play an important role in face-related applications. The image-based ethnicity identification problem is addressed in a machine learning framework. A linear discriminant analysis (LDA) based scheme is presented for the two-class (Asian vs. non-Asian) ethnicity classification task. Multiscale analysis is applied to the input facial images, and an ensemble framework, which integrates the LDA analysis of the input face images at different scales, is proposed to further improve the classification performance. The product rule is used as the combination strategy in the ensemble. Experimental results based on a face database containing 263 subjects (2,630 face images, with an equal balance between the two classes) are promising, indicating that LDA and the proposed ensemble framework have sufficient discriminative power for the ethnicity classification problem. The normalized ethnicity classification scores can also be helpful in facial identity recognition: used as a "soft" biometric, they allow face matching scores to be updated based on the output of the ethnicity classification module. In other words, the ethnicity classifier does not have to be perfect to be useful in practice.
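
    The product-rule combination used in the ensemble can be sketched directly; the per-scale posteriors below are hypothetical:

```python
import numpy as np

def product_rule(posteriors):
    """Combine per-scale classifier posteriors with the product rule
    and renormalize. posteriors: array-like (n_scales, n_classes)."""
    p = np.prod(np.asarray(posteriors, dtype=float), axis=0)
    return p / p.sum()

# Hypothetical Asian-vs-non-Asian posteriors from LDA at three scales.
scales = [[0.70, 0.30],
          [0.60, 0.40],
          [0.55, 0.45]]
combined = product_rule(scales)
print(combined.argmax())  # class 0 wins
```

    Because the product rule multiplies evidence, three mildly confident scale-level votes for the same class yield a markedly more confident combined posterior, which is the behavior exploited by the ensemble.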

  17. Judgment of Nasolabial Esthetics in Cleft Lip and Palate Is Not Influenced by Overall Facial Attractiveness.

    PubMed

    Kocher, Katharina; Kowalski, Piotr; Kolokitha, Olga-Elpis; Katsaros, Christos; Fudalej, Piotr S

    2016-05-01

    To determine whether judgment of nasolabial esthetics in cleft lip and palate (CLP) is influenced by overall facial attractiveness. Experimental study. University of Bern, Switzerland. Seventy-two fused images (36 of boys, 36 of girls) were constructed. Each image comprised (1) the nasolabial region of a treated child with complete unilateral CLP (UCLP) and (2) the external facial features, i.e., the face with masked nasolabial region, of a noncleft child. Photographs of the nasolabial region of six boys and six girls with UCLP representing a wide range of esthetic outcomes, i.e., from very good to very poor appearance, were randomly chosen from a sample of 60 consecutively treated patients in whom nasolabial esthetics had been rated in a previous study. Photographs of external facial features of six boys and six girls without UCLP with various esthetics were randomly selected from patients' files. Eight lay raters evaluated the fused images using a 100-mm visual analogue scale. Method reliability was assessed by reevaluation of fused images after >1 month. A regression model was used to analyze which elements of facial esthetics influenced the perception of nasolabial appearance. Method reliability was good. A regression analysis demonstrated that only the appearance of the nasolabial area affected the esthetic scores of fused images (coefficient = -11.44; P < .001; R(2) = 0.464). The appearance of the external facial features did not influence perceptions of fused images. Cropping facial images for assessment of nasolabial appearance in CLP seems unnecessary. Instead, esthetic evaluation can be performed on images of full faces.

  18. Facial color processing in the face-selective regions: an fMRI study.

    PubMed

    Nakajima, Kae; Minami, Tetsuto; Tanabe, Hiroki C; Sadato, Norihiro; Nakauchi, Shigeki

    2014-09-01

    Facial color is important information for social communication as it provides clues for recognizing a person's emotion and health condition. Our previous EEG study suggested that the N170 at the left occipito-temporal site is related to facial color processing (Nakajima et al., [2012]: Neuropsychologia 50:2499-2505). However, because of the low spatial resolution of EEG, the brain region involved in facial color processing remains controversial. In the present study, we examined the neural substrates of facial color processing using functional magnetic resonance imaging (fMRI). We measured brain activity from 25 subjects during the presentation of natural- and bluish-colored faces and their scrambled images. The bilateral fusiform face area (FFA) and occipital face area (OFA) were localized by the contrast of natural-colored faces versus natural-colored scrambled images. Moreover, region-of-interest (ROI) analysis showed that the left FFA was sensitive to facial color, whereas the right FFA and the right and left OFA were insensitive to it. In combination with our previous EEG results, these data suggest that the left FFA may play an important role in facial color processing. Copyright © 2014 Wiley Periodicals, Inc.

  19. Effective Heart Disease Detection Based on Quantitative Computerized Traditional Chinese Medicine Using Representation Based Classifiers.

    PubMed

    Shu, Ting; Zhang, Bob; Tang, Yuan Yan

    2017-01-01

    At present, heart disease is the number one cause of death worldwide. Traditionally, heart disease is detected using blood tests, electrocardiograms, cardiac computerized tomography scans, cardiac magnetic resonance imaging, and so on. However, these traditional diagnostic methods are time consuming and/or invasive. In this paper, we propose an effective noninvasive computerized method based on facial images to quantitatively detect heart disease. Specifically, facial key block color features are extracted from facial images and analyzed using the probabilistic collaborative representation based classifier. The idea of facial key block color analysis is founded in Traditional Chinese Medicine. A new dataset consisting of 581 heart disease and 581 healthy samples was used to evaluate the proposed method, and an analysis of the classifier's parameters was performed to optimize it. According to the experimental results, the proposed method obtains the highest accuracy compared with other classifiers and is proven effective at heart disease detection.
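
    The collaborative-representation idea behind the classifier can be sketched as ridge-regression coding over all training samples followed by class-wise residual comparison. This is a plain CRC sketch, not the probabilistic variant used in the paper, and all feature values are invented:

```python
import numpy as np

def crc_classify(D, labels, y, lam=0.01):
    """Collaborative representation classifier: code y over the whole
    dictionary D (columns = training samples) via ridge regression,
    then assign the class whose own atoms reconstruct y best."""
    D = np.asarray(D, dtype=float)
    y = np.asarray(y, dtype=float)
    a = np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ y)
    best, best_r = None, np.inf
    for c in set(labels):
        mask = np.array([l == c for l in labels])
        r = np.linalg.norm(y - D[:, mask] @ a[mask])  # class residual
        if r < best_r:
            best, best_r = c, r
    return best

# Toy "key block color" features: columns are training samples.
D = np.array([[1.0, 0.9, 0.1, 0.0],
              [0.0, 0.1, 1.0, 0.9],
              [0.2, 0.3, 0.2, 0.3]])
labels = ["disease", "disease", "healthy", "healthy"]
print(crc_classify(D, labels, np.array([0.95, 0.05, 0.25])))  # disease
```

    The probabilistic variant additionally weights the class residuals, but the coding-then-residual structure is the same.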

  20. Brain responses to facial attractiveness induced by facial proportions: evidence from an fMRI study

    PubMed Central

    Shen, Hui; Chau, Desmond K. P.; Su, Jianpo; Zeng, Ling-Li; Jiang, Weixiong; He, Jufang; Fan, Jintu; Hu, Dewen

    2016-01-01

    Brain responses to facial attractiveness induced by facial proportions are investigated by using functional magnetic resonance imaging (fMRI), in 41 young adults (22 males and 19 females). The subjects underwent fMRI while they were presented with computer-generated, yet realistic face images, which had varying facial proportions, but the same neutral facial expression, baldhead and skin tone, as stimuli. Statistical parametric mapping with parametric modulation was used to explore the brain regions with the response modulated by facial attractiveness ratings (ARs). The results showed significant linear effects of the ARs in the caudate nucleus and the orbitofrontal cortex for all of the subjects, and a non-linear response profile in the right amygdala for only the male subjects. Furthermore, canonical correlation analysis was used to learn the most relevant facial ratios that were best correlated with facial attractiveness. A regression model on the fMRI-derived facial ratio components demonstrated a strong linear relationship between the visually assessed mean ARs and the predictive ARs. Overall, this study provided, for the first time, direct neurophysiologic evidence of the effects of facial ratios on facial attractiveness and suggested that there are notable gender differences in perceiving facial attractiveness as induced by facial proportions. PMID:27779211

  2. A Multivariate Analysis of Unilateral Cleft Lip and Palate Facial Skeletal Morphology.

    PubMed

    Starbuck, John M; Ghoneima, Ahmed; Kula, Katherine

    2015-07-01

    Unilateral cleft lip and palate (UCLP) occurs when the maxillary and nasal facial prominences fail to fuse correctly during development, resulting in a palatal cleft and clefted soft and hard tissues of the dentoalveolus. The UCLP deformity may compromise an individual's ability to eat, chew, and speak. In this retrospective cross-sectional study, cone beam computed tomography (CBCT) images of 7-17-year-old individuals born with UCLP (n = 24) and age- and sex-matched controls (n = 24) were assessed. Coordinate values of three-dimensional anatomical landmarks (n = 32) were recorded from each CBCT image. Data were evaluated using principal coordinates analysis (PCOORD) and Euclidean distance matrix analysis (EDMA). Approximately 40% of morphometric variation is captured by PCOORD axes 1-3, and the negative and positive ends of each axis are associated with specific patterns of morphological differences. Approximately 36% of facial skeletal measures significantly differ by confidence interval testing (α = 0.10) between samples. Although significant form differences occur across the facial skeleton, strong patterns of morphological differences were localized to the lateral and superioinferior aspects of the nasal aperture, particularly on the clefted side of the face. The UCLP deformity strongly influences facial skeletal morphology of the midface and oronasal facial regions, and to a lesser extent the upper and lower facial skeletons. The pattern of strong morphological differences in the oronasal region combined with differences across the facial complex suggests that craniofacial bones are integrated and covary, despite influences from the congenital cleft.

  3. Imitating expressions: emotion-specific neural substrates in facial mimicry.

    PubMed

    Lee, Tien-Wen; Josephs, Oliver; Dolan, Raymond J; Critchley, Hugo D

    2006-09-01

    Intentionally adopting a discrete emotional facial expression can modulate the subjective feelings corresponding to that emotion; however, the underlying neural mechanism is poorly understood. We therefore used functional brain imaging (functional magnetic resonance imaging) to examine brain activity during intentional mimicry of emotional and non-emotional facial expressions and relate regional responses to the magnitude of expression-induced facial movement. Eighteen healthy subjects were scanned while imitating video clips depicting three emotional (sad, angry, happy), and two 'ingestive' (chewing and licking) facial expressions. Simultaneously, facial movement was monitored from displacement of fiducial markers (highly reflective dots) on each subject's face. Imitating emotional expressions enhanced activity within right inferior prefrontal cortex. This pattern was absent during passive viewing conditions. Moreover, the magnitude of facial movement during emotion-imitation predicted responses within right insula and motor/premotor cortices. Enhanced activity in ventromedial prefrontal cortex and frontal pole was observed during imitation of anger, in ventromedial prefrontal and rostral anterior cingulate cortices during imitation of sadness, and in striatal, amygdala and occipitotemporal regions during imitation of happiness. Our findings suggest a central role for right inferior frontal gyrus in the intentional imitation of emotional expressions. Further, by entering metrics for facial muscular change into analysis of brain imaging data, we highlight shared and discrete neural substrates supporting affective, action and social consequences of somatomotor emotional expression.

  4. Facial morphometry of Ecuadorian patients with growth hormone receptor deficiency/Laron syndrome.

    PubMed Central

    Schaefer, G B; Rosenbloom, A L; Guevara-Aguirre, J; Campbell, E A; Ullrich, F; Patil, K; Frias, J L

    1994-01-01

    Facial morphometry using computerised image analysis was performed on patients with growth hormone receptor deficiency (Laron syndrome) from an inbred population of southern Ecuador. Morphometrics were compared for 49 patients, 70 unaffected relatives, and 14 unrelated persons. Patients with growth hormone receptor deficiency showed significant decreases in measures of vertical facial growth as compared to unaffected relatives and unrelated persons with short stature from other causes. This report validates and quantifies the clinical impression of foreshortened facies in growth hormone receptor deficiency. PMID:7815422

  5. Expression-dependent susceptibility to face distortions in processing of facial expressions of emotion.

    PubMed

    Guo, Kun; Soornack, Yoshi; Settle, Rebecca

    2018-03-05

    Our capability of recognizing facial expressions of emotion under different viewing conditions implies the existence of an invariant expression representation. As natural visual signals are often distorted and our perceptual strategy changes with external noise level, it is essential to understand how expression perception is susceptible to face distortion and whether the same facial cues are used to process high- and low-quality face images. We systematically manipulated face image resolution (experiment 1) and blur (experiment 2), and measured participants' expression categorization accuracy, perceived expression intensity and associated gaze patterns. Our analysis revealed a reasonable tolerance to face distortion in expression perception. Reducing image resolution up to 48 × 64 pixels or increasing image blur up to 15 cycles/image had little impact on expression assessment and associated gaze behaviour. Further distortion led to decreased expression categorization accuracy and intensity rating, increased reaction time and fixation duration, and stronger central fixation bias which was not driven by distortion-induced changes in local image saliency. Interestingly, the observed distortion effects were expression-dependent with less deterioration impact on happy and surprise expressions, suggesting this distortion-invariant facial expression perception might be achieved through the categorical model involving a non-linear configural combination of local facial features. Copyright © 2018 Elsevier Ltd. All rights reserved.

  6. Image Based Hair Segmentation Algorithm for the Application of Automatic Facial Caricature Synthesis

    PubMed Central

    Peng, Zhenyun; Zhang, Yaohui

    2014-01-01

    Hair is a salient feature of the human face region and one of the important cues for face analysis. Accurate detection and representation of the hair region is one of the key components for automatic synthesis of human facial caricature. In this paper, an automatic hair detection algorithm for the application of automatic synthesis of facial caricature based on a single image is proposed. Firstly, hair regions in training images are labeled manually and then the hair position prior distributions and hair color likelihood distribution function are estimated from these labels efficiently. Secondly, the energy function of the test image is constructed according to the estimated prior distributions of hair location and hair color likelihood. This energy function is further optimized using the graph cuts technique and an initial hair region is obtained. Finally, the K-means algorithm and image postprocessing techniques are applied to the initial hair region so that the final hair region can be segmented precisely. Experimental results show that the average processing time for each image is about 280 ms and the average hair region detection accuracy is above 90%. The proposed algorithm is applied to a facial caricature synthesis system. Experiments showed that, with the proposed hair segmentation algorithm, the synthesized facial caricatures are vivid and satisfying. PMID:24592182
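    The K-means refinement step can be sketched as follows. This is a minimal NumPy K-means applied to toy RGB "pixels", not the authors' implementation; the deterministic farthest-point initialisation and the two toy colour clusters are assumptions for illustration.

```python
import numpy as np

def kmeans(points, k, iters=50):
    """Minimal K-means with deterministic farthest-point initialisation."""
    centers = [points[0]]
    for _ in range(k - 1):
        d = np.linalg.norm(points[:, None] - np.array(centers)[None], axis=-1).min(axis=1)
        centers.append(points[d.argmax()])
    centers = np.array(centers, dtype=float)
    for _ in range(iters):
        # Assign each point to its nearest center, then recompute the means.
        labels = np.linalg.norm(points[:, None] - centers[None], axis=-1).argmin(axis=1)
        new = np.array([points[labels == j].mean(axis=0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return centers, labels

# Toy data: two tight colour clusters standing in for "hair" and "skin" pixels.
dark = np.array([[20.0, 18, 22], [22, 20, 19], [18, 21, 20]])
light = np.array([[200.0, 180, 170], [205, 178, 172], [198, 182, 169]])
pixels = np.vstack([dark, light])
centers, labels = kmeans(pixels, k=2)
```

In the paper's pipeline, this clustering runs only inside the graph-cuts-derived initial hair mask, separating hair-coloured pixels from contaminating background or skin pixels.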

  7. A new method for automatic tracking of facial landmarks in 3D motion captured images (4D).

    PubMed

    Al-Anezi, T; Khambay, B; Peng, M J; O'Leary, E; Ju, X; Ayoub, A

    2013-01-01

    The aim of this study was to validate the automatic tracking of facial landmarks in 3D image sequences. 32 subjects (16 males and 16 females) aged 18-35 years were recruited. 23 anthropometric landmarks were marked on the face of each subject with non-permanent ink using a 0.5 mm pen. The subjects were asked to perform three facial animations (maximal smile, lip purse and cheek puff) from the rest position. Each animation was captured by the 3D imaging system. A single operator manually digitised the landmarks on the 3D facial models and their locations were compared with those of the automatically tracked ones. To investigate the accuracy of manual digitisation, the operator re-digitised the same set of 3D images of 10 subjects (5 male and 5 female) at a 1 month interval. The discrepancies in x, y and z coordinates between the 3D positions of the manually digitised landmarks and those of the automatically tracked facial landmarks were within 0.17 mm. The mean distance between the manually digitised and the automatically tracked landmarks using the tracking software was within 0.55 mm. The automatic tracking of facial landmarks demonstrated satisfactory accuracy which would facilitate the analysis of the dynamic motion during facial animations. Copyright © 2012 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.
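    The validation metric, the per-landmark Euclidean distance between manually digitised and automatically tracked coordinates, can be sketched as below; the two-landmark toy coordinates are hypothetical.

```python
import numpy as np

def landmark_errors(manual, tracked):
    """Per-landmark Euclidean distance between manually digitised and
    automatically tracked 3D landmarks (both n x 3 arrays, in mm)."""
    return np.linalg.norm(manual - tracked, axis=1)

# Hypothetical coordinates for two landmarks (mm).
manual = np.array([[10.0, 20.0, 5.0], [15.0, 22.0, 6.0]])
tracked = np.array([[10.3, 20.0, 5.0], [15.0, 22.4, 6.0]])
errs = landmark_errors(manual, tracked)
mean_err = errs.mean()
```

A study-style summary would then check whether the mean error stays within a clinical threshold such as the 0.55 mm reported above.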

  8. Effects of glycolic acid chemical peeling on facial pigment deposition: evaluation using novel computer analysis of digital-camera-captured images.

    PubMed

    Kakudo, Natsuko; Kushida, Satoshi; Suzuki, Kenji; Kusumoto, Kenji

    2013-12-01

    Chemical peeling is becoming increasingly popular for skin rejuvenation in dermatological cosmetic medicine. However, the improvements seen with chemical peeling are often very minor, and it is difficult to conduct a quantitative assessment of pre- and post-treatment appearance. We report the pre- and post-peeling effects for facial pigment deposition using a novel computer analysis method for digital-camera-captured images. Glycolic acid chemical peeling was performed a total of 5 times at 2-week intervals in 23 healthy women. We conducted a computer image analysis by utilizing Robo Skin Analyzer CS 50 and Clinical Suite 2.1 and then reviewed each parameter for the area of facial pigment deposition pre- and post-treatment. Parameters were pigmentation size and four pigmentation categories: little pigmentation and three levels of marked pigmentation (Lv1, 2, and 3) based on detection threshold. Each parameter was measured, and the total area of facial pigmentation was calculated. The total area of little pigmentation and marked pigmentation (Lv1) was significantly reduced. On the other hand, a significant difference was not observed for the total area of marked pigmentation Lv2 and Lv3. This suggests that glycolic acid chemical peeling has an effect on small facial pigment deposition or on light pigment deposition. As the Robo Skin Analyzer is useful for objectively quantifying and analyzing minor changes in facial skin, it is considered to be an effective tool for accumulating treatment evidence in the cosmetic and esthetic skin field. © 2013 Wiley Periodicals, Inc.

  9. A Neural Basis of Facial Action Recognition in Humans

    PubMed Central

    Srinivasan, Ramprakash; Golomb, Julie D.

    2016-01-01

    By combining different facial muscle actions, called action units, humans can produce an extraordinarily large number of facial expressions. Computational models and studies in cognitive science and social psychology have long hypothesized that the brain needs to visually interpret these action units to understand other people's actions and intentions. Surprisingly, no studies have identified the neural basis of the visual recognition of these action units. Here, using functional magnetic resonance imaging and an innovative machine learning analysis approach, we identify a consistent and differential coding of action units in the brain. Crucially, in a brain region thought to be responsible for the processing of changeable aspects of the face, multivoxel pattern analysis could decode the presence of specific action units in an image. This coding was found to be consistent across people, facilitating the estimation of the perceived action units on participants not used to train the multivoxel decoder. Furthermore, this coding of action units was identified when participants attended to the emotion category of the facial expression, suggesting an interaction between the visual analysis of action units and emotion categorization as predicted by the computational models mentioned above. These results provide the first evidence for a representation of action units in the brain and suggest a mechanism for the analysis of large numbers of facial actions and a loss of this capacity in psychopathologies. SIGNIFICANCE STATEMENT Computational models and studies in cognitive and social psychology propound that visual recognition of facial expressions requires an intermediate step to identify visible facial changes caused by the movement of specific facial muscles. Because facial expressions are indeed created by moving one's facial muscles, it is logical to assume that our visual system solves this inverse problem. 
Here, using an innovative machine learning method and neuroimaging data, we identify for the first time a brain region responsible for the recognition of actions associated with specific facial muscles. Furthermore, this representation is preserved across subjects. Our machine learning analysis does not require mapping the data to a standard brain and may serve as an alternative to hyperalignment. PMID:27098688

  10. Non-lambertian reflectance modeling and shape recovery of faces using tensor splines.

    PubMed

    Kumar, Ritwik; Barmpoutis, Angelos; Banerjee, Arunava; Vemuri, Baba C

    2011-03-01

    Modeling illumination effects and pose variations of a face is of fundamental importance in the field of facial image analysis. Most of the conventional techniques that simultaneously address both of these problems work with the Lambertian assumption and thus fall short of accurately capturing the complex intensity variation that the facial images exhibit or recovering their 3D shape in the presence of specularities and cast shadows. In this paper, we present a novel Tensor-Spline-based framework for facial image analysis. We show that, using this framework, the facial apparent BRDF field can be accurately estimated while seamlessly accounting for cast shadows and specularities. Further, using local neighborhood information, the same framework can be exploited to recover the 3D shape of the face (to handle pose variation). We quantitatively validate the accuracy of the Tensor Spline model using a more general model based on the mixture of single-lobed spherical functions. We demonstrate the effectiveness of our technique by presenting extensive experimental results for face relighting, 3D shape recovery, and face recognition using the Extended Yale B and CMU PIE benchmark data sets.

  11. A Method of Face Detection with Bayesian Probability

    NASA Astrophysics Data System (ADS)

    Sarker, Goutam

    2010-10-01

    The objective of face detection is to identify all images which contain a face, irrespective of its orientation, illumination conditions, etc. This is a hard problem, because faces are highly variable in size, shape, lighting conditions, etc. Many methods have been designed and developed to detect faces in a single image. The present paper is based on an 'Appearance Based Method' which relies on learning the facial and non-facial features from image examples. This, in turn, is based on statistical analysis of examples and counter-examples of facial images and employs the Bayesian Conditional Classification Rule to estimate the probability that a face (or non-face) is present within an image frame. The detection rate of the present system is very high, and thereby the numbers of false positive and false negative detections are substantially low.
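    A minimal sketch of the Bayesian conditional classification idea, reduced to a single scalar feature with Gaussian class-conditional densities. The class parameters and prior below are hypothetical; the actual system learns its statistics from image examples and counter-examples.

```python
import math

def gaussian_pdf(x, mu, sigma):
    """Univariate normal density."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def posterior_face(x, prior_face, face_params, nonface_params):
    """P(face | x) by Bayes' rule; each params tuple is (mean, std)."""
    lf = gaussian_pdf(x, *face_params) * prior_face
    ln = gaussian_pdf(x, *nonface_params) * (1 - prior_face)
    return lf / (lf + ln)

# Hypothetical statistics: a "faceness" feature concentrated near 0.8 for
# faces and near 0.2 for non-faces, with an even prior.
p_high = posterior_face(0.75, 0.5, (0.8, 0.1), (0.2, 0.2))
p_low = posterior_face(0.10, 0.5, (0.8, 0.1), (0.2, 0.2))
```

A frame is declared a face when the posterior exceeds a decision threshold (0.5 for a minimum-error rule under these assumptions).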

  12. The assessment of facial variation in 4747 British school children.

    PubMed

    Toma, Arshed M; Zhurov, Alexei I; Playle, Rebecca; Marshall, David; Rosin, Paul L; Richmond, Stephen

    2012-12-01

    The aim of this study is to identify key components contributing to facial variation in a large population-based sample of 15.5-year-old children (2514 females and 2233 males). The subjects were recruited from the Avon Longitudinal Study of Parents and Children. Three-dimensional facial images were obtained for each subject using two high-resolution Konica Minolta laser scanners. Twenty-one reproducible facial landmarks were identified and their coordinates were recorded. The facial images were registered using Procrustes analysis. Principal component analysis was then employed to identify independent groups of correlated coordinates. For the total data set, 14 principal components (PCs) were identified which explained 82 per cent of the total variance, with the first three components accounting for 46 per cent of the variance. Similar results were obtained for males and females separately with only subtle gender differences in some PCs. Facial features may be treated as a multidimensional statistical continuum with respect to the PCs. The first three PCs characterize the face in terms of height, width, and prominence of the nose. The derived PCs may be useful to identify and classify faces according to a scale of normality.
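    A condensed sketch of the registration-then-PCA pipeline: each landmark configuration is normalised for translation and scale (full Procrustes analysis also aligns rotation, omitted here for brevity), and PCA on the flattened coordinates yields the proportion of variance explained by each component. The toy shapes below vary along a single direction, so the first component should dominate.

```python
import numpy as np

def center_and_scale(config):
    """Remove translation and scale (a simplified Procrustes normalisation;
    rotation alignment is omitted)."""
    c = config - config.mean(axis=0)
    return c / np.linalg.norm(c)

def pca_explained_variance(shapes):
    """PCA on flattened landmark configurations; returns the fraction of
    total variance carried by each principal component."""
    X = np.array([center_and_scale(s).ravel() for s in shapes])
    X -= X.mean(axis=0)
    _, s, _ = np.linalg.svd(X, full_matrices=False)
    var = s ** 2
    return var / var.sum()

# Hypothetical 2D landmark sets varying only in the y-position of one point.
base = np.array([[0.0, 0], [1, 0], [0, 1], [1, 1]])
delta = np.array([[0.0, 0], [0, 0], [0, 0], [0, 0.1]])
shapes = [base + t * delta for t in (-2, -1, 0, 1, 2)]
ratios = pca_explained_variance(shapes)
```

With real faces the variance spreads over many components, as in the study's 14 PCs explaining 82 per cent of the total variance.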

  13. Association of Frontal and Lateral Facial Attractiveness.

    PubMed

    Gu, Jeffrey T; Avilla, David; Devcic, Zlatko; Karimi, Koohyar; Wong, Brian J F

    2018-01-01

    Despite the large number of studies focused on defining frontal or lateral facial attractiveness, no reports have examined whether a significant association between frontal and lateral facial attractiveness exists. To examine the association between frontal and lateral facial attractiveness and to identify anatomical features that may influence discordance between frontal and lateral facial beauty. Paired frontal and lateral facial synthetic images of 240 white women (age range, 18-25 years) were evaluated from September 30, 2004, to September 29, 2008, using an internet-based focus group (n = 600) on an attractiveness Likert scale of 1 to 10, with 1 being least attractive and 10 being most attractive. Data analysis was performed from December 6, 2016, to March 30, 2017. The association between frontal and lateral attractiveness scores was determined using linear regression. Outliers were defined as data outside the 95% individual prediction interval. To identify features that contribute to score discordance between frontal and lateral attractiveness scores, each of these image pairs was scrutinized by an evaluator panel for facial features that were present in the frontal or lateral projections and absent in the other respective facial projections. Attractiveness scores were obtained from internet-based focus groups. For the 240 white women studied (mean [SD] age, 21.4 [2.2] years), attractiveness scores ranged from 3.4 to 9.5 for frontal images and 3.3 to 9.4 for lateral images. The mean (SD) frontal attractiveness score was 6.9 (1.4), whereas the mean (SD) lateral attractiveness score was 6.4 (1.3). Simple linear regression of frontal and lateral attractiveness scores resulted in a coefficient of determination of r2 = 0.749. Eight outlier pairs were identified and analyzed by panel evaluation. Panel evaluation revealed no clinically applicable association between frontal and lateral images among outliers; however, contributory facial features were suggested. 
    Thin upper lip, convex nose, and blunt cervicomental angle were suggested by evaluators as facial characteristics that contributed to outlier frontal or lateral attractiveness scores. This study identified a strong linear association between frontal and lateral facial attractiveness. Furthermore, specific facial landmarks responsible for the discordance between frontal and lateral facial attractiveness scores were suggested. Additional studies are necessary to determine whether correction of these landmarks may increase facial harmony and attractiveness.
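    The core analysis, a simple linear regression of lateral on frontal scores with outliers defined by a 95% individual prediction interval, can be sketched as below. The paired scores are hypothetical, and t_crit = 2.0 approximates the exact t quantile.

```python
import numpy as np

def fit_line(x, y):
    """Ordinary least squares y = a + b*x, plus the coefficient of determination."""
    b, a = np.polyfit(x, y, 1)
    yhat = a + b * x
    r2 = 1 - ((y - yhat) ** 2).sum() / ((y - y.mean()) ** 2).sum()
    return a, b, r2

def prediction_interval(x, y, x0, t_crit=2.0):
    """Approximate 95% individual prediction interval for a new observation at x0."""
    n = len(x)
    b, a = np.polyfit(x, y, 1)
    resid = y - (a + b * x)
    s = np.sqrt((resid ** 2).sum() / (n - 2))
    sxx = ((x - x.mean()) ** 2).sum()
    half = t_crit * s * np.sqrt(1 + 1 / n + (x0 - x.mean()) ** 2 / sxx)
    y0 = a + b * x0
    return y0 - half, y0 + half

# Hypothetical paired scores: lateral ~ frontal with small discrepancies.
frontal = np.array([4.0, 5.0, 6.0, 7.0, 8.0, 9.0])
lateral = np.array([3.9, 4.8, 5.7, 6.9, 7.6, 8.8])
a, b, r2 = fit_line(frontal, lateral)
```

A pair whose lateral score falls outside its prediction interval would be flagged as an outlier and passed to panel evaluation.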

  14. Emotion Estimation Algorithm from Facial Image Analyses of e-Learning Users

    NASA Astrophysics Data System (ADS)

    Shigeta, Ayuko; Koike, Takeshi; Kurokawa, Tomoya; Nosu, Kiyoshi

    This paper proposes an emotion estimation algorithm based on facial images of e-Learning users. The characteristics of the algorithm are as follows: the criteria used to relate an e-Learning user's emotion to a representative emotion were obtained from a time-sequential analysis of the user's facial expressions. By examining the emotions of the e-Learning users and the positional changes of the facial feature points in the experimental results, the following procedures are introduced to improve the estimation reliability: (1) effective feature points are chosen for emotion estimation; (2) subjects are divided into two groups by the change rates of the face feature points; (3) the eigenvectors of the variance-covariance matrices are selected (cumulative contribution rate >= 95%); (4) emotion is calculated using the Mahalanobis distance.
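    Step (4), nearest-class assignment by Mahalanobis distance, can be sketched as follows; the two emotion classes with their means and identity covariances are hypothetical stand-ins for statistics estimated from facial feature points.

```python
import numpy as np

def mahalanobis(x, mean, cov):
    """Mahalanobis distance of vector x from a class with the given mean and covariance."""
    d = x - mean
    return float(np.sqrt(d @ np.linalg.inv(cov) @ d))

def classify(x, class_stats):
    """Assign x to the class with the smallest Mahalanobis distance.
    class_stats maps label -> (mean vector, covariance matrix)."""
    return min(class_stats, key=lambda c: mahalanobis(x, *class_stats[c]))

# Hypothetical 2-D feature-point statistics for two emotion classes.
stats = {
    "joy":    (np.array([1.0, 1.0]), np.eye(2)),
    "sorrow": (np.array([-1.0, -1.0]), np.eye(2)),
}
label = classify(np.array([0.8, 1.2]), stats)
```

Unlike plain Euclidean distance, the covariance term downweights directions in which a class's feature points naturally vary widely.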

  15. Quantified Facial Soft-tissue Strain in Animation Measured by Real-time Dynamic 3-Dimensional Imaging.

    PubMed

    Hsu, Vivian M; Wes, Ari M; Tahiri, Youssef; Cornman-Homonoff, Joshua; Percec, Ivona

    2014-09-01

    The aim of this study is to evaluate and quantify dynamic soft-tissue strain in the human face using real-time 3-dimensional imaging technology. Thirteen subjects (8 women, 5 men) between the ages of 18 and 70 were imaged using a dual-camera system and 3-dimensional optical analysis (ARAMIS, Trilion Quality Systems, Pa.). Each subject was imaged at rest and with the following facial expressions: (1) smile, (2) laughter, (3) surprise, (4) anger, (5) grimace, and (6) pursed lips. The facial strains defining stretch and compression were computed for each subject and compared. The areas of greatest strain were localized to the midface and lower face for all expressions. Subjects over the age of 40 had a statistically significant increase in stretch in the perioral region during lip pursing compared with subjects under the age of 40 (58.4% vs 33.8%, P = 0.015). When specific components of lip pursing were analyzed, there was a significantly greater degree of stretch in the nasolabial fold region in subjects over 40 compared with those under 40 (61.6% vs 32.9%, P = 0.007). Furthermore, we observed a greater degree of asymmetry of strain in the nasolabial fold region in the older age group (18.4% vs 5.4%, P = 0.03). This pilot study illustrates that the face can be objectively and quantitatively evaluated using dynamic major strain analysis. The technology of 3-dimensional optical imaging can be used to advance our understanding of facial soft-tissue dynamics and the effects of animation on facial strain over time.

  16. Feature selection from a facial image for distinction of Sasang constitution.

    PubMed

    Koo, Imhoi; Kim, Jong Yeol; Kim, Myoung Geun; Kim, Keun Ho

    2009-09-01

    Recently, oriental medicine has received attention for providing personalized medicine through consideration of the unique nature and constitution of individual patients. With the eventual goal of globalization, the current trend in oriental medicine research is the standardization by adopting western scientific methods, which could represent a scientific revolution. The purpose of this study is to establish methods for finding statistically significant features in a facial image with respect to distinguishing constitution and to show the meaning of those features. From facial photo images, facial elements are analyzed in terms of the distance, angle and the distance ratios, for which there are 1225, 61 250 and 749 700 features, respectively. Due to the very large number of facial features, it is quite difficult to determine truly meaningful features. We suggest a process for the efficient analysis of facial features including the removal of outliers, control for missing data to guarantee data confidence and calculation of statistical significance by applying ANOVA. We show the statistical properties of selected features according to different constitutions using the finally established features: 9 distances, 10 angles and 10 distance ratios. Additionally, the Sasang constitutional meaning of the selected features is shown here.
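    The significance-screening step can be illustrated with a hand-rolled one-way ANOVA F statistic (the study applies such a test per facial feature across constitution groups); the two toy samples below are hypothetical.

```python
import numpy as np

def one_way_anova_F(groups):
    """F statistic for a one-way ANOVA over a list of 1-D samples:
    between-group mean square divided by within-group mean square."""
    data = np.concatenate(groups)
    grand = data.mean()
    k, n = len(groups), len(data)
    ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Two hypothetical groups with clearly separated means.
f_separated = one_way_anova_F([np.array([1.0, 2, 3]), np.array([11.0, 12, 13])])
```

Features whose F statistic exceeds the critical value for the chosen significance level would survive the screening; in practice `scipy.stats.f_oneway` gives the same statistic plus a p-value.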

  17. Feature Selection from a Facial Image for Distinction of Sasang Constitution

    PubMed Central

    Koo, Imhoi; Kim, Jong Yeol; Kim, Myoung Geun

    2009-01-01

    Recently, oriental medicine has received attention for providing personalized medicine through consideration of the unique nature and constitution of individual patients. With the eventual goal of globalization, the current trend in oriental medicine research is the standardization by adopting western scientific methods, which could represent a scientific revolution. The purpose of this study is to establish methods for finding statistically significant features in a facial image with respect to distinguishing constitution and to show the meaning of those features. From facial photo images, facial elements are analyzed in terms of the distance, angle and the distance ratios, for which there are 1225, 61 250 and 749 700 features, respectively. Due to the very large number of facial features, it is quite difficult to determine truly meaningful features. We suggest a process for the efficient analysis of facial features including the removal of outliers, control for missing data to guarantee data confidence and calculation of statistical significance by applying ANOVA. We show the statistical properties of selected features according to different constitutions using the finally established features: 9 distances, 10 angles and 10 distance ratios. Additionally, the Sasang constitutional meaning of the selected features is shown here. PMID:19745013

  18. Image analysis of skin color heterogeneity focusing on skin chromophores and the age-related changes in facial skin.

    PubMed

    Kikuchi, Kumiko; Masuda, Yuji; Yamashita, Toyonobu; Kawai, Eriko; Hirao, Tetsuji

    2015-05-01

    Heterogeneity with respect to skin color tone is one of the key factors in visual perception of facial attractiveness and age. However, there have been few studies on quantitative analyses of the color heterogeneity of facial skin. The purpose of this study was to develop image evaluation methods for skin color heterogeneity focusing on skin chromophores and then characterize ethnic differences and age-related changes. A facial imaging system equipped with an illumination unit and a high-resolution digital camera was used to develop image evaluation methods for skin color heterogeneity. First, melanin and/or hemoglobin images were obtained using pigment-specific image-processing techniques, which involved conversion from Commission Internationale de l'Eclairage XYZ color values to melanin and/or hemoglobin indexes as measures of their contents. Second, a spatial frequency analysis with threshold settings was applied to the individual images. Cheek skin images of 194 healthy Asian and Caucasian female subjects were acquired using the imaging system. Applying this methodology, the skin color heterogeneity of Asian and Caucasian faces was characterized. The proposed pigment-specific image-processing techniques allowed visual discrimination of skin redness from skin pigmentation. In the heterogeneity analyses of cheek skin color, age-related changes in melanin were clearly detected in Asian and Caucasian skin. Furthermore, it was found that the heterogeneity indexes of hemoglobin were significantly higher in Caucasian skin than in Asian skin. We have developed evaluation methods for skin color heterogeneity by image analyses based on the major chromophores, melanin and hemoglobin, with special reference to their size. This methodology focusing on skin color heterogeneity should be useful for better understanding of aging and ethnic differences. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  19. Correcting the planar perspective projection in geometric structures applied to forensic facial analysis.

    PubMed

    Baldasso, Rosane Pérez; Tinoco, Rachel Lima Ribeiro; Vieira, Cristina Saft Matos; Fernandes, Mário Marques; Oliveira, Rogério Nogueira

    2016-10-01

    The process of forensic facial analysis may be founded on several scientific techniques and imaging modalities, such as digital signal processing, photogrammetry and craniofacial anthropometry. However, one of the main limitations in this analysis is the comparison of images acquired with different angles of incidence. The present study aimed to explore a potential approach for the correction of the planar perspective projection (PPP) in geometric structures traced from the human face. A technique for the correction of the PPP was calibrated within photographs of two geometric structures obtained with angles of incidence distorted in 80°, 60° and 45°. The technique was performed using ImageJ® 1.46r (National Institutes of Health, Bethesda, Maryland). The corrected images were compared with photographs of the same object obtained in 90° (reference). In a second step, the technique was validated in a digital human face created using MakeHuman® 1.0.2 (Free Software Foundation, Massachusetts, USA) and Blender® 2.75 (Blender Foundation, Amsterdam, the Netherlands) software packages. The images registered with angular distortion presented a gradual decrease in height when compared to the reference. The digital technique for the correction of the PPP is a valuable tool for forensic applications using photographic imaging modalities, such as forensic facial analysis. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
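    A first-order sketch of the kind of correction involved, consistent with the reported gradual decrease in projected height at oblique angles: a planar vertical dimension photographed at incidence angle alpha (with 90° as the perpendicular reference) projects to roughly sin(alpha) times its true extent. This trigonometric model is an assumption for illustration, not the calibrated ImageJ procedure used in the study.

```python
import math

def corrected_height(measured, incidence_deg):
    """Undo the foreshortening of a planar vertical dimension photographed
    at the given incidence angle (90 degrees = perpendicular reference)."""
    return measured / math.sin(math.radians(incidence_deg))

# A 10-unit feature photographed at 60 degrees appears as 10*sin(60) units;
# the correction recovers the original extent.
recovered = corrected_height(10 * math.sin(math.radians(60)), 60)
```

At 90° of incidence the correction is the identity, matching the reference photographs.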

  20. Sequential Change of Wound Calculated by Image Analysis Using a Color Patch Method during a Secondary Intention Healing.

    PubMed

    Yang, Sejung; Park, Junhee; Lee, Hanuel; Kim, Soohyun; Lee, Byung-Uk; Chung, Kee-Yang; Oh, Byungho

    2016-01-01

    Photographs of skin wounds contain the most important information during secondary intention healing (SIH). However, there is no standard method for handling those images and analyzing them efficiently and conveniently. To investigate the sequential changes of SIH depending on the body site using a color patch method, we performed retrospective reviews of 30 patients (11 facial and 19 non-facial areas) who underwent SIH for the restoration of skin defects and captured sequential photographs with a color patch specially designed for automatically calculating defect and scar sizes. Using this novel image analysis method with a color patch, skin defects were calculated more accurately (range of error rate: -3.39% ~ +3.05%). All patients had a smaller scar size than the original defect size after SIH treatment (rates of decrease: 18.8% ~ 86.1%), and the facial area showed a significantly higher decrease rate compared with non-facial areas such as the scalp and extremities (67.05 ± 12.48 vs. 53.29 ± 18.11, P < 0.05). Estimating the time point corresponding to half of the final decrease in size, all facial areas reached it within two weeks (8.45 ± 3.91 days), whereas non-facial areas needed 14.33 ± 9.78 days. Based on these sequential changes of skin defects, SIH can be recommended as an alternative treatment method for restoration, with more careful dressing during the initial two weeks.
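    The color patch method reduces to a pixel-to-area calibration: the patch of known physical size fixes the scale for the wound in the same photograph. The patch size and pixel counts below are hypothetical.

```python
def wound_area_cm2(wound_pixels, patch_pixels, patch_area_cm2):
    """Convert a wound's pixel count to cm^2 using a reference color patch
    of known physical area captured in the same photograph."""
    return wound_pixels * patch_area_cm2 / patch_pixels

def decrease_rate(initial_area, final_area):
    """Percentage reduction of the defect over the course of healing."""
    return 100.0 * (initial_area - final_area) / initial_area

# Hypothetical example: a 2 x 2 cm patch covering 1000 pixels calibrates a
# 5000-pixel wound to 20 cm^2; healing to 5 cm^2 is a 75% decrease.
area0 = wound_area_cm2(5000, 1000, 4.0)
rate = decrease_rate(area0, 5.0)
```

Because the patch travels with every photograph, the calibration also absorbs changes in camera distance between visits.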

  1. Video analysis of the biomechanics of a bicycle accident resulting in significant facial fractures.

    PubMed

    Syed, Shameer H; Willing, Ryan; Jenkyn, Thomas R; Yazdani, Arjang

    2013-11-01

    This study aimed to use video analysis techniques to determine the velocity, impact force, angle of impact, and impulse to fracture involved in a video-recorded bicycle accident resulting in facial fractures. Computed tomographic images of the resulting facial injury are presented for correlation with data and calculations. To our knowledge, such an analysis of an actual recorded trauma has not been reported in the literature. A video recording of the accident was split into frames and analyzed using an image editing program. Measurements of velocity and angle of impact were obtained from this analysis, and the force of impact and impulse were calculated using the inverse dynamic method with connected rigid body segments. These results were then correlated with the actual fracture pattern found on computed tomographic imaging of the subject's face. There was an impact velocity of 6.25 m/s, impact angles of 14 and 6.3 degrees of neck extension and axial rotation, respectively, an impact force of 1910.4 N, and an impulse to fracture of 47.8 Ns. These physical parameters resulted in clinically significant bilateral mid-facial Le Fort II and III pattern fractures. These data further our understanding of the biomechanics of bicycle-related accidents by correlating an actual clinical outcome with the kinematic and dynamic parameters involved in the accident itself, yielding concrete evidence of the velocity, force, and impulse necessary to cause clinically significant facial trauma. These findings can aid in the design of protective equipment for bicycle riders to help avoid this type of injury.
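    The reported quantities can be related by elementary kinematics, sketched below with hypothetical frame counts, effective mass and contact time (the study's actual calculation uses the inverse dynamic method with connected rigid body segments).

```python
def velocity_from_frames(displacement_m, n_frames, fps):
    """Average velocity over n_frames of video at the given frame rate."""
    return displacement_m / (n_frames / fps)

def impact_force(mass_kg, delta_v, contact_time_s):
    """Mean impact force from the impulse-momentum theorem: F = m*dv/dt."""
    return mass_kg * delta_v / contact_time_s

def impulse(force_n, contact_time_s):
    """Impulse delivered by a constant force over the contact time."""
    return force_n * contact_time_s

# Hypothetical numbers: 1.25 m of head travel over 6 frames at 30 fps.
v = velocity_from_frames(1.25, 6, 30)
force = impact_force(7.0, v, 0.025)  # assumed 7 kg effective mass, 25 ms contact
j = impulse(force, 0.025)            # recovers m * dv by construction
```

Note the internal consistency check this affords: the reported 1910.4 N over the implied contact time reproduces the reported 47.8 Ns impulse.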

  2. Why the long face? The importance of vertical image structure for biological "barcodes" underlying face recognition.

    PubMed

    Spence, Morgan L; Storrs, Katherine R; Arnold, Derek H

    2014-07-29

    Humans are experts at face recognition. The mechanisms underlying this complex capacity are not fully understood. Recently, it has been proposed that face recognition is supported by a coarse-scale analysis of visual information contained in horizontal bands of contrast distributed along the vertical image axis: a biological facial "barcode" (Dakin & Watt, 2009). A critical prediction of the facial barcode hypothesis is that the distribution of image contrast along the vertical axis will be more important for face recognition than image distributions along the horizontal axis. Using a novel paradigm involving dynamic image distortions, a series of experiments is presented examining famous face recognition impairments from selectively disrupting image distributions along the vertical or horizontal image axes. Results show that disrupting the image distribution along the vertical image axis is more disruptive for recognition than matched distortions along the horizontal axis. Consistent with the facial barcode hypothesis, these results suggest that human face recognition relies disproportionately on appropriately scaled distributions of image contrast along the vertical image axis. © 2014 ARVO.
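
    The "barcode" notion, collapsing a face image to one contrast value per horizontal row, can be illustrated with a toy sketch (a simplification of Dakin & Watt's model, not their implementation):

```python
import numpy as np

def vertical_barcode(gray):
    """Collapse a grayscale face image to its vertical 'barcode': one
    contrast value (standard deviation of intensities) per horizontal row."""
    return gray.std(axis=1)

# Toy image: uniform except for one high-contrast horizontal band.
img = np.full((12, 12), 0.5)
img[5, ::2] = 1.0          # alternating bright pixels in row 5
code = vertical_barcode(img)
```

    Distorting the image along the vertical axis scrambles this profile, while horizontal distortions largely leave it intact, which is the asymmetry the experiments test.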

  3. Automated Video Based Facial Expression Analysis of Neuropsychiatric Disorders

    PubMed Central

    Wang, Peng; Barrett, Frederick; Martin, Elizabeth; Milanova, Marina; Gur, Raquel E.; Gur, Ruben C.; Kohler, Christian; Verma, Ragini

    2008-01-01

    Deficits in emotional expression are prominent in several neuropsychiatric disorders, including schizophrenia. Available clinical facial expression evaluations provide subjective and qualitative measurements, which are based on static 2D images that do not capture the temporal dynamics and subtleties of expression changes. Therefore, there is a need for automated, objective and quantitative measurements of facial expressions captured using videos. This paper presents a computational framework that creates probabilistic expression profiles for video data and can potentially help to automatically quantify emotional expression differences between patients with neuropsychiatric disorders and healthy controls. Our method automatically detects and tracks facial landmarks in videos, and then extracts geometric features to characterize facial expression changes. To analyze temporal facial expression changes, we employ probabilistic classifiers that analyze facial expressions in individual frames, and then propagate the probabilities throughout the video to capture the temporal characteristics of facial expressions. The applications of our method to healthy controls and case studies of patients with schizophrenia and Asperger’s syndrome demonstrate the capability of the video-based expression analysis method in capturing subtleties of facial expression. Such results can pave the way for a video based method for quantitative analysis of facial expressions in clinical research of disorders that cause affective deficits. PMID:18045693
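
    The temporal propagation step, mixing each frame's classifier probabilities with the smoothed distribution from the previous frame, can be sketched as follows (a simple recursive stand-in for the paper's propagation scheme; the mixing weight is illustrative):

```python
def propagate_probabilities(frame_probs, alpha=0.7):
    """Temporally propagate per-frame expression probabilities: each
    frame's distribution is mixed with the previous smoothed distribution
    and renormalized, yielding a probabilistic expression profile."""
    smoothed = [list(frame_probs[0])]
    for probs in frame_probs[1:]:
        mixed = [alpha * p + (1 - alpha) * q for p, q in zip(probs, smoothed[-1])]
        total = sum(mixed)
        smoothed.append([m / total for m in mixed])
    return smoothed

# Three frames, two expression classes (e.g., happy vs. neutral):
profile = propagate_probabilities([[0.9, 0.1], [0.2, 0.8], [0.85, 0.15]])
```

    The smoothing damps single-frame classifier noise so the profile tracks the temporal dynamics of the expression rather than per-frame jitter.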

  4. Imaging the Facial Nerve: A Contemporary Review

    PubMed Central

    Gupta, Sachin; Mends, Francine; Hagiwara, Mari; Fatterpekar, Girish; Roehm, Pamela C.

    2013-01-01

    Imaging plays a critical role in the evaluation of a number of facial nerve disorders. The facial nerve has a complex anatomical course; thus, a thorough understanding of the course of the facial nerve is essential to localize the sites of pathology. Facial nerve dysfunction can occur from a variety of causes, which can often be identified on imaging. Computed tomography and magnetic resonance imaging are helpful for identifying bony facial canal and soft tissue abnormalities, respectively. Ultrasound of the facial nerve has been used to predict functional outcomes in patients with Bell's palsy. More recently, diffusion tensor tractography has appeared as a new modality which allows three-dimensional display of facial nerve fibers. PMID:23766904

  5. Three-dimensional analysis of facial shape and symmetry in twins using laser surface scanning.

    PubMed

    Djordjevic, J; Jadallah, M; Zhurov, A I; Toma, A M; Richmond, S

    2013-08-01

    This study performed a three-dimensional analysis of facial shape and symmetry in twins. Faces of 37 twin pairs [19 monozygotic (MZ) and 18 dizygotic (DZ)] were laser scanned at the age of 15 during a follow-up of the Avon Longitudinal Study of Parents and Children (ALSPAC), South West of England. Facial shape was analysed using two methods: 1) Procrustes analysis of landmark configurations (63 x, y and z coordinates of 21 facial landmarks) and 2) three-dimensional comparisons of facial surfaces within each twin pair. Monozygotic and DZ twins were compared using ellipsoids representing 95% of the variation in landmark configurations and surface-based average faces. Facial symmetry was analysed by superimposing the original and mirrored facial images. Both analyses showed greater similarity of facial shape in MZ twins, with the lower third being the least similar. Procrustes analysis did not reveal any significant difference in facial landmark configurations of MZ and DZ twins. The average faces of MZ and DZ males were coincident in the forehead, supraorbital and infraorbital ridges, the bridge of the nose and lower lip. In MZ and DZ females, the eyes, supraorbital and infraorbital ridges, philtrum and lower part of the cheeks were coincident. Zygosity did not seem to influence the amount of facial symmetry. The lower facial third was the most asymmetrical. Three-dimensional analyses revealed differences in the facial shapes of MZ and DZ twins. The relative contribution of genetic and environmental factors differs for the upper, middle and lower facial thirds. © 2012 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
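
    Procrustes superimposition of landmark configurations, as used in the first method, removes translation, scale, and rotation before comparing shapes. A minimal sketch (standard orthogonal Procrustes, not the study's exact pipeline):

```python
import numpy as np

def procrustes_distance(A, B):
    """Residual after Procrustes superimposition of two landmark
    configurations (n x 3 arrays): remove translation and scale, find the
    best orthogonal rotation by SVD, return the sum of squared residuals."""
    A = A - A.mean(axis=0)
    B = B - B.mean(axis=0)
    A = A / np.linalg.norm(A)
    B = B / np.linalg.norm(B)
    U, _, Vt = np.linalg.svd(A.T @ B)
    R = U @ Vt
    return float(np.sum((A @ R - B) ** 2))

# A rotated, scaled, translated copy has (near-)zero Procrustes distance:
rng = np.random.default_rng(0)
A = rng.normal(size=(21, 3))         # 21 landmarks, as in the study
theta = 0.4
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
B = 2.0 * (A @ Rz) + 1.0
d_same = procrustes_distance(A, B)
```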

  6. Features versus context: An approach for precise and detailed detection and delineation of faces and facial features.

    PubMed

    Ding, Liya; Martinez, Aleix M

    2010-11-01

    The appearance-based approach to face detection has seen great advances in the last several years. In this approach, we learn the image statistics describing the texture pattern (appearance) of the object class we want to detect, e.g., the face. However, this approach has had limited success in providing an accurate and detailed description of the internal facial features, i.e., eyes, brows, nose, and mouth. In general, this is due to the limited information carried by the learned statistical model. While the face template is relatively rich in texture, facial features (e.g., eyes, nose, and mouth) do not carry enough discriminative information to tell them apart from all possible background images. We resolve this problem by adding the context information of each facial feature in the design of the statistical model. In the proposed approach, the context information defines the image statistics most correlated with the surroundings of each facial component. This means that when we search for a face or facial feature, we look for those locations which most resemble the feature yet are most dissimilar to its context. This dissimilarity with the context features forces the detector to gravitate toward an accurate estimate of the position of the facial feature. Learning to discriminate between feature and context templates is difficult, however, because the context and the texture of the facial features vary widely under changing expression, pose, and illumination, and may even resemble one another. We address this problem with the use of subclass divisions. We derive two algorithms to automatically divide the training samples of each facial feature into a set of subclasses, each representing a distinct construction of the same facial component (e.g., closed versus open eyes) or its context (e.g., different hairstyles). The first algorithm is based on a discriminant analysis formulation. The second algorithm is an extension of the AdaBoost approach. 
We provide extensive experimental results using still images and video sequences for a total of 3,930 images. We show that the results are almost as good as those obtained with manual detection.
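
    The feature-versus-context scoring idea, favouring locations that resemble the feature template while differing from its context, can be sketched with normalized correlation (an illustrative simplification of the paper's statistical models):

```python
import numpy as np

def feature_context_score(patch, feature_tmpl, context_tmpl):
    """Score a candidate location: normalized correlation with the feature
    template minus normalized correlation with its context template, so the
    detector gravitates toward feature-like, context-unlike locations."""
    def ncc(a, b):
        a = a - a.mean()
        b = b - b.mean()
        return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    return ncc(patch, feature_tmpl) - ncc(patch, context_tmpl)

# Toy templates: an "eye" patch and a "context" (e.g., hairstyle) patch.
rng = np.random.default_rng(2)
eye_tmpl = rng.normal(size=(9, 9))
hair_tmpl = rng.normal(size=(9, 9))
s_eye = feature_context_score(eye_tmpl, eye_tmpl, hair_tmpl)
s_hair = feature_context_score(hair_tmpl, eye_tmpl, hair_tmpl)
```

    In the paper the templates are further split into subclasses (e.g., open vs. closed eyes) so each subclass gets its own score of this kind.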

  7. Role of facial attractiveness in patients with slight-to-borderline treatment need according to the Aesthetic Component of the Index of Orthodontic Treatment Need as judged by eye tracking.

    PubMed

    Johnson, Elizabeth K; Fields, Henry W; Beck, F Michael; Firestone, Allen R; Rosenstiel, Stephen F

    2017-02-01

    Previous eye-tracking research has demonstrated that laypersons view the range of dental attractiveness levels differently depending on facial attractiveness levels. How the borderline levels of dental attractiveness are viewed has not been evaluated in the context of facial attractiveness and compared with those with near-ideal esthetics or those in definite need of orthodontic treatment according to the Aesthetic Component of the Index of Orthodontic Treatment Need scale. Our objective was to determine the level of viewers' visual attention across treatment need levels 3 to 7 for persons considered "attractive," "average," or "unattractive." Facial images of persons at 3 facial attractiveness levels were combined with 5 levels of dental attractiveness (dentitions representing Aesthetic Component of the Index of Orthodontic Treatment Need levels 3-7) using imaging software to form 15 composite images. Each image was viewed twice by 66 lay participants using eye tracking. Both the fixation density (number of fixations per facial area) and the fixation duration (length of time for each facial area) were quantified for each image viewed. Repeated-measures analysis of variance was used to determine how fixation density and duration varied among the 6 facial interest areas (chin, ear, eye, mouth, nose, and other). Viewers demonstrated excellent to good reliability among the 6 interest areas (intraviewer reliability, 0.70-0.96; interviewer reliability, 0.56-0.93). Between Aesthetic Component of the Index of Orthodontic Treatment Need levels 3 and 7, viewers of all facial attractiveness levels showed an increase in attention to the mouth. However, only with the attractive models were significant differences in fixation density and duration found between borderline levels for female viewers. Female viewers paid attention to different areas of the face than did male viewers. 
The importance of dental attractiveness is amplified in facially attractive female models compared with average and unattractive female models between near-ideal and borderline-severe dentally unattractive levels. Copyright © 2017 American Association of Orthodontists. Published by Elsevier Inc. All rights reserved.
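
    Fixation density and duration per interest area reduce to simple aggregation over the tracker's fixation records; a minimal sketch with hypothetical data:

```python
from collections import defaultdict

def fixation_stats(fixations):
    """Fixation density (count per interest area) and fixation duration
    (total ms per interest area) from (area, duration_ms) records."""
    density = defaultdict(int)
    duration = defaultdict(float)
    for area, ms in fixations:
        density[area] += 1
        duration[area] += ms
    return dict(density), dict(duration)

density, duration = fixation_stats(
    [("mouth", 220), ("eye", 180), ("mouth", 310), ("nose", 90)]
)
```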

  8. Automated Facial Recognition of Computed Tomography-Derived Facial Images: Patient Privacy Implications.

    PubMed

    Parks, Connie L; Monson, Keith L

    2017-04-01

    The recognizability of facial images extracted from publicly available medical scans raises patient privacy concerns. This study examined how accurately facial images extracted from computed tomography (CT) scans are objectively matched with corresponding photographs of the scanned individuals. The test subjects were 128 adult Americans ranging in age from 18 to 60 years, representing both sexes and three self-identified population (ancestral descent) groups (African, European, and Hispanic). Using facial recognition software, the 2D images of the extracted facial models were compared for matches against five differently sized photo galleries. Depending on the scanning protocol and gallery size, in 6-61% of the cases, a correct life photo match for a CT-derived facial image was the top ranked image in the generated candidate lists, even when blind searching in excess of 100,000 images. In 31-91% of the cases, a correct match was located within the top 50 images. Few significant differences (p > 0.05) in match rates were observed between the sexes or across the three age cohorts. Highly significant differences (p < 0.01) were, however, observed across the three ancestral cohorts and between the two CT scanning protocols. Results suggest that the probability of a match between a facial image extracted from a medical scan and a photograph of the individual is moderately high. The facial image data inherent in commonly employed medical imaging modalities may therefore need to be considered a potentially identifiable form of "comparable" facial imagery and protected as such under patient privacy legislation.
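
    The rank-based match rates reported (top-ranked and top-50) follow the cumulative-match-characteristic idea; a minimal sketch with hypothetical ranks:

```python
def top_k_match_rate(correct_match_ranks, k):
    """Cumulative match rate: fraction of probe images whose correct
    life photo appears within the top-k of the ranked candidate list."""
    return sum(rank <= k for rank in correct_match_ranks) / len(correct_match_ranks)

# Hypothetical ranks of the correct photo for five CT-derived probes:
ranks = [1, 3, 40, 120, 2]
rank1 = top_k_match_rate(ranks, 1)    # top-ranked match rate
top50 = top_k_match_rate(ranks, 50)   # match within the top 50
```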

  9. Quantified Facial Soft-tissue Strain in Animation Measured by Real-time Dynamic 3-Dimensional Imaging

    PubMed Central

    Hsu, Vivian M.; Wes, Ari M.; Tahiri, Youssef; Cornman-Homonoff, Joshua

    2014-01-01

    Background: The aim of this study is to evaluate and quantify dynamic soft-tissue strain in the human face using real-time 3-dimensional imaging technology. Methods: Thirteen subjects (8 women, 5 men) between the ages of 18 and 70 were imaged using a dual-camera system and 3-dimensional optical analysis (ARAMIS, Trilion Quality Systems, Pa.). Each subject was imaged at rest and with the following facial expressions: (1) smile, (2) laughter, (3) surprise, (4) anger, (5) grimace, and (6) pursed lips. The facial strains defining stretch and compression were computed for each subject and compared. Results: The areas of greatest strain were localized to the midface and lower face for all expressions. Subjects over the age of 40 had a statistically significant increase in stretch in the perioral region during lip pursing compared with subjects under the age of 40 (58.4% vs 33.8%, P = 0.015). When specific components of lip pursing were analyzed, there was a significantly greater degree of stretch in the nasolabial fold region in subjects over 40 compared with those under 40 (61.6% vs 32.9%, P = 0.007). Furthermore, we observed a greater degree of asymmetry of strain in the nasolabial fold region in the older age group (18.4% vs 5.4%, P = 0.03). Conclusions: This pilot study illustrates that the face can be objectively and quantitatively evaluated using dynamic major strain analysis. The technology of 3-dimensional optical imaging can be used to advance our understanding of facial soft-tissue dynamics and the effects of animation on facial strain over time. PMID:25426394
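
    The stretch/compression percentages compared above are engineering strains of tracked surface segments; a minimal sketch with hypothetical lengths:

```python
def percent_strain(rest_length, expression_length):
    """Engineering strain of a tracked surface segment, as a percentage:
    positive values indicate stretch, negative values compression."""
    return 100.0 * (expression_length - rest_length) / rest_length

def strain_asymmetry(left_strain, right_strain):
    """Absolute left-right strain difference (percentage points)."""
    return abs(left_strain - right_strain)
```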

  10. Facial recognition in education system

    NASA Astrophysics Data System (ADS)

    Krithika, L. B.; Venkatesh, K.; Rathore, S.; Kumar, M. Harish

    2017-11-01

    Human beings rely extensively on emotions to convey and interpret messages. Emotion detection and face recognition can therefore provide an interface between individuals and technologies, and the recognition of faces is among the most successful applications of recognition analysis. Many different techniques have been used to recognize facial expressions and to handle varying poses in emotion detection. In this paper, we present an efficient method that recognizes facial expressions by tracking facial feature points and the distances between them. The method automatically identifies the observer's face movements and facial expression in an image, capturing different aspects of emotion and facial expression.

  11. Clinical significance of quantitative analysis of facial nerve enhancement on MRI in Bell's palsy.

    PubMed

    Song, Mee Hyun; Kim, Jinna; Jeon, Ju Hyun; Cho, Chang Il; Yoo, Eun Hye; Lee, Won-Sang; Lee, Ho-Ki

    2008-11-01

    Quantitative analysis of the facial nerve on both the lesion side and the normal side, which allowed more accurate measurement of facial nerve enhancement in patients with facial palsy, correlated significantly with the initial severity of facial nerve inflammation, although it showed little prognostic significance. This study investigated the clinical significance of quantitative measurement of facial nerve enhancement in patients with Bell's palsy by analyzing the enhancement pattern and correlating MRI findings with initial severity of facial palsy and clinical outcome. Facial nerve enhancement was measured quantitatively by using the region of interest on pre- and postcontrast T1-weighted images in 44 patients diagnosed with Bell's palsy. The signal intensity increase on the lesion side was first compared with that of the contralateral side and then correlated with the initial degree of facial palsy and prognosis. The lesion side showed significantly higher signal intensity increase compared with the normal side in all of the segments except for the mastoid segment. Signal intensity increase at the internal auditory canal and labyrinthine segments showed correlation with the initial degree of facial palsy, but no significant difference was found between different prognostic groups.
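
    Quantitative enhancement from ROI measurements reduces to a percent signal increase between pre- and postcontrast images, compared across sides; a minimal sketch with hypothetical intensities:

```python
def signal_increase(pre, post):
    """Percent signal intensity increase in an ROI between pre- and
    post-contrast T1-weighted images."""
    return 100.0 * (post - pre) / pre

def side_comparison(pre_lesion, post_lesion, pre_normal, post_normal):
    """Enhancement on the lesion side relative to the contralateral side
    (percentage-point difference)."""
    return signal_increase(pre_lesion, post_lesion) - signal_increase(pre_normal, post_normal)
```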

  12. Down syndrome detection from facial photographs using machine learning techniques

    NASA Astrophysics Data System (ADS)

    Zhao, Qian; Rosenbaum, Kenneth; Sze, Raymond; Zand, Dina; Summar, Marshall; Linguraru, Marius George

    2013-02-01

    Down syndrome is the most commonly occurring chromosomal condition; one in every 691 babies in the United States is born with it. Patients with Down syndrome have an increased risk for heart defects, respiratory and hearing problems, and the early detection of the syndrome is fundamental for managing the disease. Clinically, facial appearance is an important indicator in diagnosing Down syndrome, and it paves the way for computer-aided diagnosis based on facial image analysis. In this study, we propose a novel method to detect Down syndrome using photography for computer-assisted image-based facial dysmorphology. Geometric features based on facial anatomical landmarks, local texture features based on the Contourlet transform and local binary pattern are investigated to represent facial characteristics. Then a support vector machine classifier is used to discriminate normal and abnormal cases; accuracy, precision and recall are used to evaluate the method. The comparison among the geometric, local texture and combined features was performed using leave-one-out validation. Our method achieved 97.92% accuracy with high precision and recall for the combined features; the detection results were higher than using only geometric or texture features. The promising results indicate that our method has the potential for automated assessment for Down syndrome from simple, noninvasive imaging data.
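
    The leave-one-out validation scheme can be sketched as follows, with a 1-nearest-neighbour classifier standing in for the paper's support vector machine and toy features in place of the anatomical and texture descriptors:

```python
import numpy as np

def leave_one_out_accuracy(X, y):
    """Leave-one-out validation: each sample is held out in turn and
    labelled by its closest remaining sample in feature space."""
    X = np.asarray(X, dtype=float)
    correct = 0
    for i in range(len(X)):
        dists = np.linalg.norm(X - X[i], axis=1)
        dists[i] = np.inf                  # exclude the held-out sample
        correct += int(y[int(dists.argmin())] == y[i])
    return correct / len(X)

# Toy feature vectors: two well-separated groups (labels 0 and 1).
X = [[0.0, 0.1], [0.1, 0.0], [0.05, 0.05], [1.0, 1.1], [1.1, 1.0], [0.95, 1.05]]
y = [0, 0, 0, 1, 1, 1]
acc = leave_one_out_accuracy(X, y)
```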

  13. Retrospective single center study of the efficacy of large spot 532 nm laser for the treatment of facial capillary malformations in 44 patients with the use of three-dimensional image analysis.

    PubMed

    Kwiek, Bartłomiej; Rożalski, Michał; Kowalewski, Cezary; Ambroziak, Marcin

    2017-10-01

    We aimed to assess the efficacy of a large-spot 532 nm laser for the treatment of facial capillary malformations with the use of three-dimensional (3D) image analysis. A retrospective single-center study of previously untreated patients with facial capillary malformations (CM) was performed. A total of 44 consecutive Caucasian patients aged 5-66 were included. Patients had 3D photography performed before and after treatment and received at least one session with a 532 nm neodymium-doped yttrium aluminum garnet (Nd:YAG) laser with contact cooling, fluences ranging from 8 to 11.5 J/cm², pulse durations ranging from 5 to 9 milliseconds and spot sizes ranging from 5 to 10 mm. Objective analysis of percentage improvement based on 3D digital assessment of combined color and area improvement (global clearance effect [GCE]) was performed. Median maximal improvement achieved during the treatment (GCEmax) was 70.4%. The mean number of laser procedures required to achieve this improvement was 7.1 (range, 2-14). Improvement of minimum 25% (GCE 25) was achieved by all patients, of minimum 50% (GCE 50) by 77.3%, of minimum 75% (GCE 75) by 38.6%, and of minimum 90% (GCE 90) by 13.64% of patients. The large-spot 532 nm laser is highly effective in the treatment of facial CM. 3D color and area image analysis provides an objective method to compare different methods of facial CM treatment in future studies. Lasers Surg. Med. 49:743-749, 2017. © 2017 Wiley Periodicals, Inc.
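
    The GCE threshold statistics reduce to counting patients at or above each improvement cut-off; a minimal sketch with hypothetical per-patient GCE values:

```python
def gce_threshold_rates(gce_percent_per_patient):
    """Share of patients reaching each global clearance effect threshold."""
    n = len(gce_percent_per_patient)
    return {t: sum(g >= t for g in gce_percent_per_patient) / n
            for t in (25, 50, 75, 90)}

rates = gce_threshold_rates([70.4, 30.0, 92.0, 55.0])
```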

  14. Lifestyle Factors and Visible Skin Aging in a Population of Japanese Elders

    PubMed Central

    Asakura, Keiko; Nishiwaki, Yuji; Milojevic, Ai; Michikawa, Takehiro; Kikuchi, Yuriko; Nakano, Makiko; Iwasawa, Satoko; Hillebrand, Greg; Miyamoto, Kukizo; Ono, Masaji; Kinjo, Yoshihide; Akiba, Suminori; Takebayashi, Toru

    2009-01-01

    Background The number of studies that use objective and quantitative methods to evaluate facial skin aging in elderly people is extremely limited, especially in Japan. Therefore, in this cross-sectional study we attempted to characterize the condition of facial skin (hyperpigmentation, pores, texture, and wrinkling) in Japanese adults aged 65 years or older by using objective and quantitative imaging methods. In addition, we aimed to identify lifestyle factors significantly associated with these visible signs of aging. Methods The study subjects were 802 community-dwelling Japanese men and women aged at least 65 years and living in the town of Kurabuchi (Takasaki City, Gunma Prefecture, Japan), a mountain community with a population of approximately 4800. The facial skin condition of subjects was assessed quantitatively using a standardized facial imaging system and subsequent computer image analysis. Lifestyle information was collected using a structured questionnaire. The association between skin condition and lifestyle factors was examined using multivariable regression analysis. Results Among women, the mean values for facial texture, hyperpigmentation, and pores were generally lower than those among age-matched men. There was no significant difference between sexes in the severity of facial wrinkling. Older age was associated with worse skin condition among women only. After adjusting for age, smoking status and topical sun protection were significantly associated with skin condition among both men and women. Conclusions Our study revealed significant differences between sexes in the severity of hyperpigmentation, texture, and pores, but not wrinkling. Smoking status and topical sun protection were significantly associated with signs of visible skin aging in this study population. PMID:19700917

  15. Image ratio features for facial expression recognition application.

    PubMed

    Song, Mingli; Tao, Dacheng; Liu, Zicheng; Li, Xuelong; Zhou, Mengchu

    2010-06-01

    Video-based facial expression recognition is a challenging problem in computer vision and human-computer interaction. To target this problem, texture features have been extracted and widely used, because they can capture image intensity changes raised by skin deformation. However, existing texture features encounter problems with albedo and lighting variations. To solve both problems, we propose a new texture feature called image ratio features. Compared with previously proposed texture features, e.g., high gradient component features, image ratio features are more robust to albedo and lighting variations. In addition, to further improve facial expression recognition accuracy based on image ratio features, we combine image ratio features with facial animation parameters (FAPs), which describe the geometric motions of facial feature points. The performance evaluation is based on the Carnegie Mellon University Cohn-Kanade database, our own database, and the Japanese Female Facial Expression database. Experimental results show that the proposed image ratio feature is more robust to albedo and lighting variations, and the combination of image ratio features and FAPs outperforms each feature alone. In addition, we study asymmetric facial expressions based on our own facial expression database and demonstrate the superior performance of our combined expression recognition system.
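
    The albedo-cancellation argument behind image ratio features can be demonstrated directly: scale both frames by the same per-pixel albedo and the ratio is unchanged. A sketch (illustrative shading model, not the paper's exact formulation):

```python
import numpy as np

def ratio_image(expression, neutral, eps=1e-6):
    """Pixelwise ratio of an expression frame to the neutral frame.
    Under a Lambertian model the surface albedo is a common factor of
    both frames, so it cancels in the ratio, which is the intuition
    behind the robustness of ratio features to albedo variation."""
    return expression / (neutral + eps)

rng = np.random.default_rng(1)
shading_neutral = rng.uniform(0.5, 1.0, (8, 8))
shading_expr = shading_neutral * 1.2          # skin deformation changes shading
albedo = rng.uniform(0.2, 1.0, (8, 8))        # per-pixel skin albedo
r = ratio_image(albedo * shading_expr, albedo * shading_neutral)
```

    The ratio depends only on the shading change (here a uniform factor of 1.2), not on the albedo map.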

  16. Photogrammetric Analysis of Attractiveness in Indian Faces

    PubMed Central

    Duggal, Shveta; Kapoor, DN; Verma, Santosh; Sagar, Mahesh; Lee, Yung-Seop; Moon, Hyoungjin

    2016-01-01

    Background The objective of this study was to assess the attractive facial features of the Indian population. We tried to evaluate subjective ratings of facial attractiveness and identify which facial aesthetic subunits were important for facial attractiveness. Methods A cross-sectional study was conducted of 150 samples (referred to as candidates). Frontal photographs were analyzed. An orthodontist, a prosthodontist, an oral surgeon, a dentist, an artist, a photographer and two laymen (estimators) subjectively evaluated candidates' faces using visual analog scale (VAS) scores. As an objective method for facial analysis, we used balanced angular proportional analysis (BAPA). Using SAS 10.1 (SAS Institute Inc.), Tukey's studentized range test and Pearson correlation analysis were performed to detect between-group differences in VAS scores (Experiment 1), to identify correlations between VAS scores and BAPA scores (Experiment 2), and to analyze the characteristic features of facial attractiveness and gender differences (Experiment 3); the significance level was set at P=0.05. Results Experiment 1 revealed some differences in VAS scores according to professional characteristics. In Experiment 2, BAPA scores were found to behave similarly to subjective ratings of facial beauty, but showed a relatively weak correlation coefficient with the VAS scores. Experiment 3 found that the decisive factors for facial attractiveness were different for men and women. Composite images of attractive Indian male and female faces were constructed. Conclusions Our photogrammetric study, statistical analysis, and average composite faces of an Indian population provide valuable information about subjective perceptions of facial beauty and attractive facial structures in the Indian population. PMID:27019809

  17. Self-organized Evaluation of Dynamic Hand Gestures for Sign Language Recognition

    NASA Astrophysics Data System (ADS)

    Buciu, Ioan; Pitas, Ioannis

    Two main theories exist with respect to face encoding and representation in the human visual system (HVS). The first refers to a dense (holistic) representation of the face, where faces have a "holon"-like appearance. The second claims that a more appropriate face representation is given by a sparse code, in which only a small fraction of the neural cells corresponding to face encoding is activated. Theoretical and experimental evidence suggests that the HVS performs face analysis (encoding, storing, face recognition, facial expression recognition) in a structured and hierarchical way, where both representations have their own contribution and goal. According to neuropsychological experiments, it seems that encoding for face recognition relies on a holistic image representation, while a sparse image representation is used for facial expression analysis and classification. From the computer vision perspective, the techniques developed for automatic face and facial expression recognition fall into the same two representation types. As in neuroscience, the techniques that perform better for face recognition use a holistic image representation, while those suitable for facial expression recognition use a sparse or local image representation. The proposed mathematical models of image formation and encoding try to simulate the efficient storing, organization and coding of data in the human cortex. This is equivalent to embedding constraints in the model design regarding dimensionality reduction, redundant information minimization, mutual information minimization, non-negativity constraints, class information, etc. The presented techniques are applied as a feature extraction step followed by a classification method, which also heavily influences the recognition results.

  18. Case analysis of temporal bone lesions with facial paralysis as main manifestation and literature review.

    PubMed

    Chen, Wen-Jing; Ye, Jing-Ying; Li, Xin; Xu, Jia; Yi, Hai-Jin

    2017-08-23

    This study aims to discuss the clinical characteristics, imaging manifestations and treatment of temporal bone lesions with facial paralysis as the main manifestation, in order to deepen the understanding of this type of lesion and reduce erroneous and missed diagnoses. The clinical data of 16 patients with temporal bone lesions and facial paralysis as the main manifestation, who were diagnosed and treated from 2009 to 2016, were retrospectively analyzed. Among these patients, six had congenital petrous bone cholesteatoma (PBC), nine had facial nerve schwannoma, and one had facial nerve hemangioma. All the patients had a history of long-term erroneous diagnosis. The lesions were completely excised by surgery. PBC and primary facial nerve tumors were pathologically confirmed. Facial-hypoglossal nerve anastomosis was performed on two patients. HB grade VI recovered to HB grade V in one patient. The anastomosis failed due to severe facial nerve fibrosis in the other patient, so HB remained at grade VI. Postoperative recovery was good for all patients. No lesion recurrence was observed after 1-6 years of follow-up. For patients with progressive or complete facial paralysis, imaging examination should be performed in a timely manner, and PBC, primary facial nerve tumors and other temporal bone space-occupying lesions should be ruled out. Lesions should be detected promptly and proper intervention conducted, in order to reduce operative difficulty and complications and increase the opportunity for facial nerve function reconstruction.

  19. Non-invasive health status detection system using Gabor filters based on facial block texture features.

    PubMed

    Shu, Ting; Zhang, Bob

    2015-04-01

    Blood tests allow doctors to check for certain diseases and conditions. However, using a syringe to extract the blood can be deemed invasive, slightly painful, and its analysis time-consuming. In this paper, we propose a new non-invasive system to detect the health status (Healthy or Diseased) of an individual based on facial block texture features extracted using the Gabor filter. Our system first uses a non-invasive capture device to collect facial images. Next, four facial blocks are located on these images to represent them. Afterwards, each facial block is convolved with a Gabor filter bank to calculate its texture value. Classification is finally performed using K-Nearest Neighbor and Support Vector Machines via a Library for Support Vector Machines (with four kernel functions). The system was tested on a dataset consisting of 100 Healthy and 100 Diseased (with 13 forms of illnesses) samples. Experimental results show that the proposed system can detect the health status with an accuracy of 93%, a sensitivity of 94%, and a specificity of 92%, using a combination of the Gabor filters and facial blocks.
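
    A Gabor texture value for a facial block, the mean absolute response to an oriented kernel, can be sketched as follows; the kernel formulation is standard, but the parameter values are illustrative and not the paper's:

```python
import numpy as np

def gabor_kernel(ksize=15, sigma=3.0, theta=0.0, lam=6.0, gamma=0.5):
    """Real-valued Gabor kernel: a Gaussian envelope modulating a cosine
    carrier oriented at angle theta."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return (np.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2 * sigma ** 2))
            * np.cos(2 * np.pi * xr / lam))

def block_texture_value(block, theta):
    """Mean absolute response of one facial block to one Gabor orientation
    (valid convolution, computed directly for clarity)."""
    k = gabor_kernel(theta=theta)
    kh, kw = k.shape
    resp = np.empty((block.shape[0] - kh + 1, block.shape[1] - kw + 1))
    for i in range(resp.shape[0]):
        for j in range(resp.shape[1]):
            resp[i, j] = (block[i:i + kh, j:j + kw] * k).sum()
    return float(np.abs(resp).mean())

# Vertical stripes at the kernel's wavelength respond strongly at theta=0:
xs = np.arange(24)
block = np.tile(np.cos(2 * np.pi * xs / 6.0), (24, 1))
matched = block_texture_value(block, theta=0.0)
orthogonal = block_texture_value(block, theta=np.pi / 2)
```

    A full bank repeats this over several orientations and scales; the per-block values then form the feature vector fed to the classifier.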

  20. Cortical representation of facial and tongue movements: a task functional magnetic resonance imaging study.

    PubMed

    Xiao, Fu-Long; Gao, Pei-Yi; Qian, Tian-Yi; Sui, Bin-Bin; Xue, Jing; Zhou, Jian; Lin, Yan

    2017-05-01

    Functional magnetic resonance imaging (fMRI) mapping can present the cortical area activated during movement, while little is known about its precise location in facial and tongue movements. The aim was to investigate the representation of facial and tongue movements by task fMRI. Twenty right-handed healthy subjects underwent block-design task fMRI examination. Task movements included lip pursing, cheek bulging, grinning and vertical tongue excursion. Statistical parametric mapping (SPM8) was applied to analyze the data. A one-sample t-test was used to calculate the common activation area between facial and tongue movements. A paired t-test was then used to test for areas of over- or underactivation in tongue movement compared with each group of facial movements. The common areas within facial and tongue movements suggested similar motor circuits of activation in both movements. Predominant activation in tongue movement was situated more laterally and inferiorly in the sensorimotor area relative to facial movements. Predominant activation in tongue movement relative to lip pursing was found in the left superior parietal lobe. Also, predominant activation in the bilateral cuneus in grinning compared with tongue movement was detected. © 2015 Scandinavian Society of Clinical Physiology and Nuclear Medicine. Published by John Wiley & Sons Ltd.

  1. Segmentation of human face using gradient-based approach

    NASA Astrophysics Data System (ADS)

    Baskan, Selin; Bulut, M. Mete; Atalay, Volkan

    2001-04-01

    This paper describes a method for automatic segmentation of facial features such as eyebrows, eyes, nose, mouth, and ears in color images. This work is an initial step for a wide range of feature-based applications, such as face recognition, lip-reading, gender estimation, and facial expression analysis. The human face can be characterized by its skin color and nearly elliptical shape, so face detection is performed using color and shape information. Uniform illumination is assumed; no restrictions on glasses, make-up, beard, etc. are imposed. Facial features are extracted using vertically and horizontally oriented gradient projections. The gradient of a minimum with respect to its neighboring maxima gives the boundaries of a facial feature. Each facial feature has a distinct horizontal characteristic, derived by extensive experimentation with many face images. Using fuzzy set theory, the similarity between the candidate and the feature characteristic under consideration is calculated. The gradient-based method is complemented by anthropometric information for robustness. Ear detection is performed using contour-based shape descriptors. This method detects the facial features and circumscribes each with the smallest possible rectangle. The AR database is used for testing. The developed method is also suitable for real-time systems.
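
    The gradient-projection idea can be sketched as follows: sum the absolute vertical gradient along each row, and facial features such as eye or mouth bands appear as maxima of the resulting profile. The synthetic image and peak-picking here are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def horizontal_projection(gray):
    """Row-wise sum of absolute vertical gradients.
    Horizontal facial features (eyes, mouth) show up as local maxima."""
    gy = np.abs(np.diff(gray.astype(float), axis=0))
    return gy.sum(axis=1)

# Synthetic face: uniform "skin" with one dark horizontal "eye" band at rows 10-12
img = np.full((32, 32), 200.0)
img[10:13, 4:28] = 40.0
profile = horizontal_projection(img)
eye_row = int(np.argmax(profile))  # strongest gradient transition
```

    The same projection applied column-wise (summing over rows) would localize features horizontally, which is how the two oriented projections together bound each feature with a rectangle.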

  2. A unified classifier for robust face recognition based on combining multiple subspace algorithms

    NASA Astrophysics Data System (ADS)

    Ijaz Bajwa, Usama; Ahmad Taj, Imtiaz; Waqas Anwar, Muhammad

    2012-10-01

    Face recognition, the fastest growing biometric technology, has expanded manifold in the last few years. Various new algorithms and commercial systems have been proposed and developed. However, none of the proposed or developed algorithms is a complete solution: an algorithm may work very well on one set of images with, say, illumination changes, but may not work properly on another set of variations, such as expression changes. This study is motivated by the fact that no single classifier can claim generally better performance against all facial image variations. To overcome this shortcoming and achieve generality, combining several classifiers using various strategies has been studied extensively, also addressing the question of the suitability of any classifier for this task. The study is based on the outcome of a comprehensive comparative analysis of combinations of six subspace extraction algorithms and four distance metrics on three facial databases. The analysis leads to the selection of the most suitable classifiers, each of which performs better on one task or another. These classifiers are then combined into an ensemble classifier by two different strategies: weighted sum and re-ranking. The results show that these strategies can be effectively used to construct a single classifier that successfully handles varying facial image conditions of illumination, aging, and facial expression.
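
    The weighted-sum strategy amounts to score-level fusion over a common gallery: each classifier's distance scores are normalized to a common range, combined with weights, and the smallest combined score wins. The min-max normalization and fixed weights below are assumptions for the sketch, not the authors' exact scheme.

```python
import numpy as np

def min_max_normalize(scores):
    """Map one classifier's distance scores to [0, 1] (0 = best match)."""
    s = np.asarray(scores, float)
    rng = s.max() - s.min()
    return (s - s.min()) / rng if rng > 0 else np.zeros_like(s)

def weighted_sum_fusion(score_lists, weights):
    """Combine per-classifier distance scores over the same gallery;
    return the index of the gallery identity with the lowest fused score."""
    combined = sum(w * min_max_normalize(s) for s, w in zip(score_lists, weights))
    return int(np.argmin(combined))
```

    Normalizing before summing matters: without it, a classifier whose raw distances span a larger numeric range would dominate the fusion regardless of its weight.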

  3. Facial biometrics of peri-oral changes in Crohn's disease.

    PubMed

    Zou, L; Adegun, O K; Willis, A; Fortune, Farida

    2014-05-01

    Crohn's disease is a chronic relapsing and remitting inflammatory condition which can affect any part of the gastrointestinal tract. In the oro-facial region, patients can present with peri-oral swellings which result in severe facial disfigurement. To date, assessing the degree of facial change and evaluating treatment outcomes has relied on clinical observation and semi-quantitative methods. In this paper, we describe the development of a robust and reproducible measurement strategy using 3-D facial biometrics to objectively quantify the extent and progression of oro-facial Crohn's disease. Using facial laser scanning, 32 serial images from 13 Crohn's patients attending the Oral Medicine clinic were acquired during relapse, remission, and post-treatment phases. Utilising theories of coordinate metrology, the facial images were subjected to registration, identification of regions of interest, and reproducible repositioning prior to obtaining volume measurements. To quantify changes in tissue volume, scan images from consecutive appointments were compared to the baseline (first) scan image. A reproducibility test was performed to ascertain the degree of uncertainty in the volume measurements. 3-D facial biometric imaging is a reliable method to identify and quantify peri-oral swelling in Crohn's patients. Comparison of facial scan images at different phases of the disease precisely revealed profile and volume changes. The volume measurements were highly reproducible, as judged from the 1% standard deviation. 3-D facial biometric measurement in Crohn's patients with oro-facial involvement offers a quick, robust, economical and objective approach for guided therapeutic intervention and routine assessment of treatment efficacy in the clinic.

  4. Local binary pattern variants-based adaptive texture features analysis for posed and nonposed facial expression recognition

    NASA Astrophysics Data System (ADS)

    Sultana, Maryam; Bhatti, Naeem; Javed, Sajid; Jung, Soon Ki

    2017-09-01

    Facial expression recognition (FER) is an important task for various computer vision applications. The task becomes challenging when it requires the detection and encoding of macro- and micropatterns of facial expressions. We present a two-stage texture feature extraction framework based on local binary pattern (LBP) variants and evaluate its significance in recognizing posed and nonposed facial expressions. We focus on the parametric limitations of the LBP variants and investigate their effects for optimal FER. The size of the local neighborhood is an important parameter of the LBP technique. To make the LBP adaptive, we exploit the granulometric information of the facial images to find the local neighborhood size for the extraction of center-symmetric LBP (CS-LBP) features. Our two-stage texture representations consist of an LBP variant and the adaptive CS-LBP features. Among the presented two-stage texture feature extractions, the binarized statistical image features and adaptive CS-LBP features were found to show high FER rates. Evaluation shows that the adaptive texture features perform competitively against the nonadaptive features and outperform other state-of-the-art approaches.
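
    The center-symmetric LBP (CS-LBP) operator named above can be sketched for a fixed radius-1, 8-neighbour configuration: each pixel is coded by comparing the four opposite neighbour pairs, giving a 4-bit code and a compact 16-bin histogram. The paper's adaptive neighbourhood size (chosen from granulometry) is not reproduced here.

```python
import numpy as np

# 8-neighbour offsets at radius 1, in circular order around the center pixel
OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]

def cs_lbp(gray, threshold=0.0):
    """Center-symmetric LBP: compare the 4 diametrically opposite neighbour
    pairs; yields a 4-bit code (0..15) per interior pixel."""
    g = gray.astype(float)
    h, w = g.shape
    codes = np.zeros((h - 2, w - 2), dtype=int)
    for bit, (a, b) in enumerate([(0, 4), (1, 5), (2, 6), (3, 7)]):
        da, db = OFFSETS[a], OFFSETS[b]
        pa = g[1 + da[0]:h - 1 + da[0], 1 + da[1]:w - 1 + da[1]]
        pb = g[1 + db[0]:h - 1 + db[0], 1 + db[1]:w - 1 + db[1]]
        codes |= ((pa - pb) > threshold).astype(int) << bit
    return codes

def cs_lbp_histogram(gray):
    """16-bin normalized histogram used as the texture descriptor."""
    codes = cs_lbp(gray)
    hist = np.bincount(codes.ravel(), minlength=16).astype(float)
    return hist / hist.sum()
```

    Compared with classic LBP (8 bits, 256 bins), CS-LBP halves the comparisons and shrinks the histogram 16-fold, which is why it is attractive as the second stage of a two-stage representation.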

  5. Forensic facial comparison in South Africa: State of the science.

    PubMed

    Steyn, M; Pretorius, M; Briers, N; Bacci, N; Johnson, A; Houlton, T M R

    2018-06-01

    Forensic facial comparison (FFC) is a scientific technique used to link suspects to a crime scene based on the analysis of photos or video recordings from that scene. While basic guidelines on practice and training are provided by the Facial Identification Scientific Working Group, details of how these are applied across the world are scarce. FFC is frequently used in South Africa, with more than 700 comparisons conducted in the last two years alone. In this paper the standards of practice are outlined, with new proposed levels of agreement/conclusions. We outline three levels of training that were established, with training in facial anatomy, terminology, principles of image comparison, image science, facial recognition and computer skills being aimed at developing general competency. Training in generating court charts and understanding court case proceedings are being specifically developed for the South African context. Various shortcomings still exist, specifically with regard to knowledge of the reliability of the technique. These need to be addressed in future research. Copyright © 2018 Elsevier B.V. All rights reserved.

  6. Multimodal digital color imaging system for facial skin lesion analysis

    NASA Astrophysics Data System (ADS)

    Bae, Youngwoo; Lee, Youn-Heum; Jung, Byungjo

    2008-02-01

    In dermatology, various digital imaging modalities have been used as important tools to quantitatively evaluate the treatment of skin lesions. Cross-polarization color imaging has been used to evaluate skin chromophore (melanin and hemoglobin) information, and parallel-polarization imaging to evaluate skin texture information. In addition, UV-A-induced fluorescence imaging has been widely used to evaluate various skin conditions such as sebum, keratosis, sun damage, and vitiligo. To maximize the evaluation efficacy across various skin lesions, it is necessary to integrate these imaging modalities into a single system. In this study, we propose a multimodal digital color imaging system that provides four different digital color images: a standard color image, parallel- and cross-polarization color images, and a UV-A-induced fluorescence color image. Herein, we describe the imaging system and present examples of image analysis. By analyzing the color information and morphological features of facial skin lesions, we are able to evaluate various skin lesions simultaneously and comparatively. In conclusion, the multimodal color imaging system can serve as an important assistive tool in dermatology.

  7. Forming Facial Expressions Influences Assessment of Others' Dominance but Not Trustworthiness.

    PubMed

    Ueda, Yoshiyuki; Nagoya, Kie; Yoshikawa, Sakiko; Nomura, Michio

    2017-01-01

    Forming specific facial expressions influences emotions and perception. In light of this, studies in which observers with neutral expressions inferred personal traits from the facial expressions of others should be reconsidered. In the present study, participants were asked to form happy, neutral, and disgusted facial expressions: for "happy," they held a wooden chopstick in their molars to form a smile; for "neutral," they clasped the chopstick between their lips, making no expression; for "disgusted," they put the chopstick between their upper lip and nose and knit their brows in a scowl. They were not, however, asked to intentionally change their emotional state. Observers judged happy expression images as more trustworthy, competent, warm, friendly, and distinctive than disgusted expression images, regardless of the observers' own facial expression. Observers judged disgusted expression images as more dominant than happy expression images. However, observers expressing disgust overestimated dominance in observed disgusted expression images and underestimated dominance in happy expression images. In contrast, observers with happy facial forms attenuated dominance ratings for disgusted expression images. These results suggest that dominance inferred from facial expressions is unstable and influenced not only by the observed facial expression but also by the observers' own physiological states.

  8. Investigation into the use of photoanthropometry in facial image comparison.

    PubMed

    Moreton, Reuben; Morley, Johanna

    2011-10-10

    Photoanthropometry is a metric-based facial image comparison technique. Measurements of the face are taken from an image using predetermined facial landmarks. Measurements are then converted to proportionality indices (PIs) and compared to PIs from another facial image. Photoanthropometry has been presented as a facial image comparison technique in UK courts for over 15 years. It is generally accepted that extrinsic factors (e.g. orientation of the head, camera angle and distance from the camera) can cause discrepancies in anthropometric measurements of the face from photographs. However, there has been limited empirical research into quantifying the influence of such variables. The aim of this study was to determine the reliability of photoanthropometric measurements across different images of the same individual taken at different camera angles. The study examined the facial measurements of 25 individuals from high-resolution photographs taken at different horizontal and vertical camera angles in a controlled environment. Results show that the variability in facial measurements of the same individual due to variations in camera angle can be as great as the variability of facial measurements between different individuals. Results suggest that photoanthropometric facial comparison, as currently practiced, is unsuitable for elimination purposes. Preliminary investigations into the effects of distance from the camera and image resolution in poor-quality images suggest that such images are not an accurate representation of an individual's face; however, further work is required. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  9. Anthropometric Study of Three-Dimensional Facial Morphology in Malay Adults

    PubMed Central

    Majawit, Lynnora Patrick; Mohd Razi, Roziana

    2016-01-01

    Objectives To establish the three-dimensional (3D) facial soft tissue morphology of adult Malaysian subjects of the Malay ethnic group; and to determine the morphological differences between the genders, using a non-invasive stereo-photogrammetry 3D camera. Material and Methods One hundred and nine subjects participated in this research, 54 Malay men and 55 Malay women, aged 20–30 years old with healthy BMI and with no adverse skeletal deviation. Twenty-three facial landmarks were identified on 3D facial images captured using a VECTRA M5-360 Head System (Canfield Scientific Inc, USA). Two angular, 3 ratio and 17 linear measurements were identified using Canfield Mirror imaging software. Intra- and inter-examiner reliability tests were carried out using 10 randomly selected images, analyzed using the intra-class correlation coefficient (ICC). Multivariate analysis of variance (MANOVA) was carried out to investigate morphologic differences between genders. Results ICC scores were generally good for both intra-examiner (range 0.827–0.987) and inter-examiner reliability (range 0.700–0.983) tests. Generally, all facial measurements were larger in men than women, except the facial profile angle which was larger in women. Clinically significant gender dimorphisms existed in biocular width, nose height, nasal bridge length, face height and lower face height values (mean difference > 3mm). Clinical significance was set at 3mm. Conclusion Facial soft tissue morphological values can be gathered efficiently and measured effectively from images captured by a non-invasive stereo-photogrammetry 3D camera. Adult men in Malaysia when compared to women had a wider distance between the eyes, a longer and more prominent nose and a longer face. PMID:27706220

  10. Three-dimensional gender differences in facial form of children in the North East of England.

    PubMed

    Bugaighis, Iman; Mattick, Clare R; Tiddeman, Bernard; Hobson, Ross

    2013-06-01

    The aim of this prospective cross-sectional morphometric study was to explore three-dimensional (3D) facial shape and form (shape plus size) variation within and between 8- and 12-year-old Caucasian children: 39 males age-matched with 41 females. The 3D images were captured using a stereophotogrammetric system, and facial form was recorded by digitizing 39 anthropometric landmarks for each scan. The x, y, z coordinates of each landmark were extracted and used to calculate linear and angular measurements. 3D landmark asymmetry was quantified using Generalized Procrustes Analysis (GPA), and an average face was constructed for each gender. The average faces were superimposed and differences were visualized and quantified. Shape variations were explored using GPA and Principal Component Analysis. Analysis of covariance and Pearson correlation coefficients were used to explore gender differences and to determine any correlation between facial measurements and height or weight. Multivariate analysis was used to ascertain differences in facial measurements or 3D landmark asymmetry. There were no differences in height or weight between genders. There was a significant positive correlation between facial measurements and height and weight, and statistically significant differences in linear facial width measurements between genders. These differences were related to the larger size of males rather than to differences in shape. There were no age- or gender-linked significant differences in 3D landmark asymmetry. Shape analysis confirmed similarities between males and females in facial shape and form in 8- to 12-year-old children. Any differences found were related to differences in facial size rather than shape.
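
    The Procrustes superimposition at the core of GPA can be sketched for a single pair of landmark configurations: remove translation, scale to unit centroid size, and solve the optimal rotation via SVD. Full GPA iterates this alignment against an evolving mean shape, which is omitted in this sketch.

```python
import numpy as np

def procrustes_align(X, Y):
    """Align landmark set Y to X: remove translation, scale, and rotation.
    Returns the aligned copy of Y and the residual Procrustes distance."""
    Xc = X - X.mean(axis=0)            # center both configurations
    Yc = Y - Y.mean(axis=0)
    Xc = Xc / np.linalg.norm(Xc)       # scale to unit centroid size
    Yc = Yc / np.linalg.norm(Yc)
    U, _, Vt = np.linalg.svd(Yc.T @ Xc)
    R = U @ Vt                         # orthogonal matrix minimizing ||Xc - Yc R||
    Y_aligned = Yc @ R
    return Y_aligned, np.linalg.norm(Xc - Y_aligned)
```

    After this step only shape differences remain, so the residual distance (and subsequent PCA of the aligned coordinates) measures shape rather than size, which is exactly the size/shape separation the study relies on.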

  11. Two-dimensional auto-correlation analysis and Fourier-transform analysis of second-harmonic-generation image for quantitative analysis of collagen fiber in human facial skin

    NASA Astrophysics Data System (ADS)

    Ogura, Yuki; Tanaka, Yuji; Hase, Eiji; Yamashita, Toyonobu; Yasui, Takeshi

    2018-02-01

    We compare two-dimensional auto-correlation (2D-AC) analysis and two-dimensional Fourier-transform (2D-FT) analysis for evaluating the age-dependent structural change of facial dermal collagen fibers caused by intrinsic aging and extrinsic photo-aging. The age-dependent structural changes of collagen fibers in the cheek skin of female subjects in their 20s, 40s, and 60s were more noticeably reflected in 2D-AC analysis than in 2D-FT analysis. Furthermore, 2D-AC analysis showed significantly higher correlation with skin elasticity measured by a Cutometer® than 2D-FT analysis did. 2D-AC analysis of SHG images has high potential for quantitative evaluation not only of age-dependent structural change of collagen fibers but also of skin elasticity.
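
    A 2D auto-correlation can be computed efficiently via the Wiener-Khinchin theorem: the autocorrelation is the inverse FFT of the power spectrum of the mean-subtracted image. This is a minimal sketch; windowing and the specific correlation metrics used in the paper are omitted.

```python
import numpy as np

def autocorrelation_2d(img):
    """Normalized circular 2D auto-correlation via Wiener-Khinchin:
    AC = IFFT(|FFT(img - mean)|^2), with the zero-lag peak scaled to 1."""
    f = np.fft.fft2(img - img.mean())
    ac = np.fft.ifft2(np.abs(f) ** 2).real
    return ac / ac[0, 0]
```

    For oriented fiber-like texture, the decay and anisotropy of this surface around zero lag quantify fiber spacing and directionality, which is the information the 2D-AC analysis extracts from SHG images.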

  12. Police witness identification images: a geometric morphometric analysis.

    PubMed

    Hayes, Susan; Tullberg, Cameron

    2012-11-01

    Research into witness identification images typically occurs within the laboratory and involves subjective likeness and recognizability judgments. This study analyzed whether actual witness identification images systematically alter the facial shapes of the suspects described. The shape analysis tool of geometric morphometrics was applied to 46 homologous facial landmarks displayed on 50 witness identification images and their corresponding arrest photographs, using principal component analysis and multivariate regressions. The results indicate that, compared with arrest photographs, witness identification images systematically depict suspects with lowered and medially located eyebrows (p < 0.000001). This was found to occur independently of the police artist, and did not occur with composites produced under laboratory conditions. There are several possible explanations for this finding, including any or all of the following: the suspect was frowning at the time of the incident; the witness had negative feelings toward the suspect; this is an effect of unfamiliar face processing; the suspect displayed fear at the time of their arrest photograph. © 2012 American Academy of Forensic Sciences.

  13. Estimation of human emotions using thermal facial information

    NASA Astrophysics Data System (ADS)

    Nguyen, Hung; Kotani, Kazunori; Chen, Fan; Le, Bac

    2014-01-01

    In recent years, research on human emotion estimation using thermal infrared (IR) imagery has appealed to many researchers due to its invariance to visible illumination changes. Although infrared imagery is superior to visible imagery in its invariance to illumination changes and appearance differences, it has difficulty handling transparent glasses in the thermal infrared spectrum. As a result, when using infrared imagery for the analysis of human facial information, the eyeglass regions appear dark and no thermal information is available for the eyes. We propose a temperature-space method to correct the effect of eyeglasses using the thermal facial information in the neighboring facial regions, and then use Principal Component Analysis (PCA), the Eigen-space Method based on class-features (EMC), and a combined PCA-EMC method to classify human emotions from the corrected thermal images. We collected the Kotani Thermal Facial Emotion (KTFE) database and performed experiments which show an improved accuracy rate in estimating human emotions.

  14. Preservation of Facial Nerve Function Repaired by Using Fibrin Glue-Coated Collagen Fleece for a Totally Transected Facial Nerve during Vestibular Schwannoma Surgery

    PubMed Central

    Choi, Kyung-Sik; Kim, Min-Su; Jang, Sung-Ho

    2014-01-01

    In recent years, increasing rates of facial nerve preservation after vestibular schwannoma (VS) surgery have been achieved. However, the management of a partially or completely damaged facial nerve remains an important issue. The authors report a patient who had a good recovery after facial nerve reconstruction using fibrin glue-coated collagen fleece for a totally transected facial nerve during VS surgery. We verified the anatomical preservation and functional outcome of the facial nerve with postoperative diffusion tensor (DT) imaging facial nerve tractography, electroneurography (ENoG), and House-Brackmann (HB) grading. DT imaging tractography on the 3rd postoperative day revealed preservation of the facial nerve, and the facial nerve degeneration ratio was 94.1% on ENoG on the 7th postoperative day. At the 3-month and 1-year follow-up examinations with DT imaging facial nerve tractography and ENoG, good facial nerve function was observed. PMID:25024825

  15. Hepatitis Diagnosis Using Facial Color Image

    NASA Astrophysics Data System (ADS)

    Liu, Mingjia; Guo, Zhenhua

    Facial color diagnosis is an important diagnostic method in traditional Chinese medicine (TCM). However, due to its qualitative, subjective and experience-based nature, traditional facial color diagnosis has very limited application in clinical medicine. To circumvent these subjective and qualitative problems, in this paper we present a novel computer-aided facial color diagnosis method (CAFCDM). The method has three parts: a face image database, an image preprocessing module, and a diagnosis engine. The face image database was built from a group of 116 patients affected by 2 kinds of liver disease and 29 healthy volunteers. Quantitative color features are extracted from the facial images using popular digital image processing techniques. Then, a KNN classifier is employed to model the relationship between the quantitative color features and the diseases. The results show that the method can properly identify three groups, healthy, severe hepatitis with jaundice, and severe hepatitis without jaundice, with accuracy higher than 73%.

  16. Macroscopic in vivo imaging of facial nerve regeneration in Thy1-GFP rats.

    PubMed

    Placheta, Eva; Wood, Matthew D; Lafontaine, Christine; Frey, Manfred; Gordon, Tessa; Borschel, Gregory H

    2015-01-01

    Facial nerve injury leads to severe functional and aesthetic deficits. The transgenic Thy1-GFP rat is a new model for facial nerve injury and reconstruction research that will help improve clinical outcomes through translational facial nerve injury research. To determine whether serial in vivo imaging of nerve regeneration in the transgenic rat model is possible, facial nerve regeneration was imaged under the main paradigms of facial nerve injury and reconstruction. Fifteen male Thy1-GFP rats, which express green fluorescent protein (GFP) in their neural structures, were divided into 3 groups: crush injury, direct repair, and cross-face nerve grafting (30-mm graft length). The distal nerve stump or nerve graft was predegenerated for 2 weeks. The facial nerve of the transgenic rats was serially imaged at the time of operation and after 2, 4, and 8 weeks of regeneration. The imaging was performed under a GFP-MDS-96/BN excitation stand (BLS Ltd). Facial nerve injury. Optical fluorescence of regenerating facial nerve axons. Serial in vivo imaging of the regeneration of GFP-positive axons in the Thy1-GFP rat model is possible. All animals tolerated the short imaging procedures well, and nerve regeneration was followed over clinically relevant distances. Predegeneration of the distal nerve stump or the cross-face nerve graft was, however, necessary to image the regeneration front at early time points. Crush injury did not sufficiently predegenerate the nerve (or allow for degradation of the GFP through Wallerian degeneration). After direct repair, axons regenerated across the coaptation site between 2 and 4 weeks. The GFP-positive nerve fibers reached the distal end of the 30-mm cross-face nerve grafts after 4 to 8 weeks of regeneration. The time course of facial nerve regeneration was studied by serial in vivo imaging in the transgenic rat model. 
Nerve regeneration was followed over clinically relevant distances in a small number of experimental animals, as they were subsequently imaged at multiple time points. The Thy1-GFP rat model will help improve clinical outcomes of facial reanimation surgery through improving the knowledge of facial nerve regeneration after surgical procedures. NA.

  17. Mutual information-based facial expression recognition

    NASA Astrophysics Data System (ADS)

    Hazar, Mliki; Hammami, Mohamed; Hanêne, Ben-Abdallah

    2013-12-01

    This paper introduces a novel low-computation discriminative-region representation for the expression analysis task. The proposed approach relies on studies in psychology which show that most of the regions descriptive of, and responsible for, facial expression are located around certain face parts. The contribution of this work lies in a new approach which supports automatic facial expression recognition based on automatic region selection. The region selection step aims to select the descriptive regions responsible for facial expression and was performed using the Mutual Information (MI) technique. For facial feature extraction, we applied Local Binary Patterns (LBP) on the gradient image to encode salient micro-patterns of facial expressions. Experimental studies have shown that using discriminative regions provides better results than using the whole face region while reducing the feature vector dimension.
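
    The MI-based region selection step can be sketched over discretized region features: estimate the mutual information between each region's (quantized) feature and the expression label, and keep the top-scoring regions. The discretization and region definitions here are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def mutual_information(x, y):
    """MI (in nats) between two discrete label sequences of equal length."""
    x, y = np.asarray(x), np.asarray(y)
    mi = 0.0
    for xv in np.unique(x):
        for yv in np.unique(y):
            pxy = np.mean((x == xv) & (y == yv))   # joint probability
            px, py = np.mean(x == xv), np.mean(y == yv)
            if pxy > 0:
                mi += pxy * np.log(pxy / (px * py))
    return mi

def select_regions(region_features, labels, k):
    """Keep the k regions whose discretized features share most MI with labels."""
    scores = [mutual_information(f, labels) for f in region_features]
    return sorted(np.argsort(scores)[::-1][:k].tolist())
```

    A region whose feature is independent of the expression label scores an MI of zero and is dropped, which is how the selection shrinks the feature vector without losing discriminative power.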

  18. Decoding facial expressions based on face-selective and motion-sensitive areas.

    PubMed

    Liang, Yin; Liu, Baolin; Xu, Junhai; Zhang, Gaoyan; Li, Xianglin; Wang, Peiyuan; Wang, Bin

    2017-06-01

    Humans can easily recognize others' facial expressions. Among the brain substrates that enable this ability, considerable attention has been paid to face-selective areas; in contrast, whether motion-sensitive areas, which clearly exhibit sensitivity to facial movements, are involved in facial expression recognition remained unclear. The present functional magnetic resonance imaging (fMRI) study used multi-voxel pattern analysis (MVPA) to explore facial expression decoding in both face-selective and motion-sensitive areas. In a block design experiment, participants viewed facial expressions of six basic emotions (anger, disgust, fear, joy, sadness, and surprise) in images, videos, and eyes-obscured videos. Due to the use of multiple stimulus types, the impacts of facial motion and eye-related information on facial expression decoding were also examined. It was found that motion-sensitive areas showed significant responses to emotional expressions and that dynamic expressions could be successfully decoded in both face-selective and motion-sensitive areas. Compared with static stimuli, dynamic expressions elicited consistently higher neural responses and decoding performance in all regions. A significant decrease in both activation and decoding accuracy due to the absence of eye-related information was also observed. Overall, the findings showed that emotional expressions are represented in motion-sensitive areas in addition to conventional face-selective areas, suggesting that motion-sensitive regions may also effectively contribute to facial expression recognition. The results also suggested that facial motion and eye-related information played important roles by carrying considerable expression information that could facilitate facial expression recognition. Hum Brain Mapp 38:3113-3125, 2017. © 2017 Wiley Periodicals, Inc.

  19. [Facial nerve neurinomas].

    PubMed

    Sokołowski, Jacek; Bartoszewicz, Robert; Morawski, Krzysztof; Jamróz, Barbara; Niemczyk, Kazimierz

    2013-01-01

    The main purpose of this study was to evaluate the diagnosis, surgical technique, and treatment results of facial nerve neurinomas, and to compare them with the literature. Seven patients (2005-2011) with facial nerve schwannomas treated in the Department of Otolaryngology, Medical University of Warsaw, were included in a retrospective analysis. All patients were assessed with a history of the disease, physical examination, hearing tests, computed tomography and/or magnetic resonance imaging, and electronystagmography. Cases were followed for potential complications and recurrences. Neurinomas of the facial nerve occurred in the vertical segment (n=2), the facial nerve geniculum (n=1), and the internal auditory canal (n=4). The symptoms observed were facial nerve paresis (n=3), hearing loss (n=2), and dizziness (n=1). Magnetic resonance imaging and computed tomography confirmed the presence of the tumor and allowed assessment of its staging. The schwannomas were surgically removed using the middle fossa approach (n=5) or antromastoidectomy (n=2). Anatomical continuity of the facial nerve was preserved in 3 cases. Twelve months after surgery, facial nerve paresis was rated at grade II-III on the HB scale. There was no recurrence of the tumor on radiological follow-up. Facial nerve neurinoma is a rare tumor. Current surgical techniques allow, in most cases, radical removal of the lesion and reconstruction of facial nerve function. The rate of recurrence is low. A tumor of the facial nerve should be considered in the differential diagnosis of facial nerve paresis. Copyright © 2013 Polish Otorhinolaryngology - Head and Neck Surgery Society. Published by Elsevier Urban & Partner Sp. z.o.o. All rights reserved.

  20. A prospective analysis of physical examination findings in the diagnosis of facial fractures: Determining predictive value.

    PubMed

    Timashpolsky, Alisa; Dagum, Alexander B; Sayeed, Syed M; Romeiser, Jamie L; Rosenfeld, Elisheva A; Conkling, Nicole

    2016-01-01

    There are >150,000 patient visits per year to emergency rooms for facial trauma. The reliability of the computed tomography (CT) scan has made it the primary modality for diagnosing facial skeletal injury, with the physical examination playing a more cursory role. Knowing the predictive value of physical findings in facial skeletal injuries may enable more appropriate use of imaging and health care resources. A blinded prospective study was undertaken to assess the predictive value of physical examination findings in detecting maxillofacial fracture in trauma patients, and in determining whether a patient will require surgical intervention. Over a four-month period, the authors' team examined patients admitted with facial trauma to the emergency department of their hospital. The evaluating physician completed a standardized physical examination evaluation form indicating the physical findings. Corresponding CT scans and surgical records were then reviewed, and the results recorded by a plastic surgeon who was blinded to the results of the physical examination. A total of 57 patients met the inclusion criteria: 44 male and 13 female. The sensitivity, specificity, positive predictive value, and negative predictive value of grouped physical examination findings were determined for the major facial areas. In further analysis, specific examination findings with n≥9 (15%) were also reported. The data demonstrated a high negative predictive value of at least 90% for orbital floor, zygomatic, mandibular, and nasal bone fractures compared with CT scan. Furthermore, none of the patients without a physical examination finding for a particular facial fracture required surgery for that fracture. Thus, the instrument performed well at ruling out fractures in these areas when there were none. Ultimately, these results may help reduce unnecessary radiation and costly imaging in patients with facial trauma without facial fractures.
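
    The four reported metrics follow directly from a 2x2 confusion table of exam findings against CT-confirmed fractures. The counts in the usage example below are hypothetical, not the study's data; they only illustrate how a high negative predictive value supports ruling out fractures.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, positive and negative predictive value
    from a 2x2 confusion table (true/false positives and negatives)."""
    return {
        "sensitivity": tp / (tp + fn),   # fraction of fractures the exam detects
        "specificity": tn / (tn + fp),   # fraction of non-fractures correctly cleared
        "ppv": tp / (tp + fp),           # P(fracture | positive exam finding)
        "npv": tn / (tn + fn),           # P(no fracture | negative exam finding)
    }

# Hypothetical counts, NOT the study's data: a high NPV means a negative
# exam makes a fracture unlikely, supporting more selective CT use.
metrics = diagnostic_metrics(tp=9, fp=1, fn=1, tn=46)
```

    Note that PPV and NPV, unlike sensitivity and specificity, depend on how common fractures are in the examined population, so they transfer poorly between settings with different fracture prevalence.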

  1. Discrimination of gender using facial image with expression change

    NASA Astrophysics Data System (ADS)

    Kuniyada, Jun; Fukuda, Takahiro; Terada, Kenji

    2005-12-01

    Through marketing research, the managers of large department stores and small convenience stores obtain information such as the gender ratio and age groups of their visitors, and use it to improve their management plans. However, this work is carried out manually and becomes a heavy burden for small stores. In this paper, the authors propose a method of gender discrimination that extracts differences in facial expression change from color facial images. Many methods already exist in the field of image processing for automatic recognition of individuals using moving or still facial images. However, it is very difficult to discriminate gender under the influence of hairstyle, clothes, etc. Therefore, we propose a method that is not affected by individual characteristics such as the size and position of facial parts, by paying attention to changes in expression. The method requires two facial images: one with an expression and one expressionless. First, the facial surface region and the regions of facial parts such as the eyes, nose, and mouth are extracted from the facial image using hue and saturation information in the HSV color system and emphasized edge information. Next, features are extracted by calculating the rate of change of each facial part caused by the expression change. In the last step, the feature values are compared between the input data and the database, and the gender is discriminated. Experiments on laughing and smiling expressions yielded good gender-discrimination results.
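
    A minimal sketch of the HSV-based skin-region thresholding step described above (the hue/saturation bounds here are hypothetical placeholders, not the paper's values):

```python
import colorsys

def skin_mask(rgb_pixels, h_range=(0.0, 0.12), s_range=(0.15, 0.7)):
    """Flag pixels whose hue/saturation fall inside a skin-like range.

    The bounds are illustrative placeholders, not the paper's thresholds.
    """
    mask = []
    for r, g, b in rgb_pixels:
        # colorsys expects components in [0, 1]; h and s also come back in [0, 1]
        h, s, _v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
        mask.append(h_range[0] <= h <= h_range[1] and s_range[0] <= s <= s_range[1])
    return mask

# A skin-like pixel and a blue pixel (hypothetical values):
pixels = [(220, 170, 140), (30, 90, 200)]
m = skin_mask(pixels)
```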

  2. Facial nerve paralysis secondary to occult malignant neoplasms.

    PubMed

    Boahene, Derek O; Olsen, Kerry D; Driscoll, Colin; Lewis, Jean E; McDonald, Thomas J

    2004-04-01

    This study reviewed patients with unilateral facial paralysis and normal clinical and imaging findings who underwent diagnostic facial nerve exploration. Study design and setting: Fifteen patients with facial paralysis and normal findings were seen in the Mayo Clinic Department of Otorhinolaryngology. Eleven patients were misdiagnosed as having Bell palsy or idiopathic paralysis. Progressive facial paralysis with sequential involvement of adjacent facial nerve branches occurred in all 15 patients. Seven patients had a history of regional skin squamous cell carcinoma, 13 patients had surgical exploration to rule out a neoplastic process, and 2 patients had negative explorations. At last follow-up, 5 patients were alive. Patients with facial paralysis and normal clinical and imaging findings should be considered for facial nerve exploration when the patient has a history of pain or regional skin cancer, involvement of other cranial nerves, or prolonged facial paralysis. Occult malignancy of the facial nerve may cause unilateral facial paralysis in patients with normal clinical and imaging findings.

  3. Novel Noninvasive Brain Disease Detection System Using a Facial Image Sensor

    PubMed Central

    Shu, Ting; Zhang, Bob; Tang, Yuan Yan

    2017-01-01

    Brain disease, including any condition or disability that affects the brain, is fast becoming a leading cause of death. Traditional diagnostic methods for brain disease are time-consuming, inconvenient, and not patient friendly. As more and more individuals undergo examinations to determine whether they suffer from any form of brain disease, developing noninvasive, efficient, and patient-friendly detection systems will be beneficial. Therefore, in this paper, we propose a novel noninvasive brain disease detection system based on the analysis of facial colors. The system consists of four components. A facial image is first captured through a specialized sensor; four facial key blocks are then located automatically in the various facial regions. Color features are extracted from each block to form a feature vector for classification via the Probabilistic Collaborative based Classifier. To thoroughly test the system and its performance, seven facial key block combinations were evaluated. The best result was achieved using the second facial key block, showing that the Probabilistic Collaborative based Classifier is the most suitable. The overall performance of the proposed system achieves an accuracy of 95%, a sensitivity of 94.33%, a specificity of 95.67%, and an average processing time (for one sample) of <1 min for brain disease detection. PMID:29292716
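
    The per-block color-feature step above can be sketched as follows (the tiny image and block coordinates are hypothetical; the paper's Probabilistic Collaborative based Classifier is not reproduced here):

```python
def block_color_features(image, blocks):
    """Mean R, G, B per facial key block.

    image: 2D grid (list of rows) of (r, g, b) tuples.
    blocks: (row0, col0, row1, col1) tuples, end-exclusive.
    """
    features = []
    for r0, c0, r1, c1 in blocks:
        pixels = [image[r][c] for r in range(r0, r1) for c in range(c0, c1)]
        n = len(pixels)
        for ch in range(3):  # one mean per color channel
            features.append(sum(p[ch] for p in pixels) / n)
    return features

# Tiny 2x2 "image" and one 1x2 key block (hypothetical layout):
img = [[(10, 20, 30), (30, 40, 50)],
       [(0, 0, 0), (0, 0, 0)]]
fv = block_color_features(img, [(0, 0, 1, 2)])
```

    Concatenating such vectors over several key blocks yields the feature vector passed to the classifier.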

  4. Forensic facial reconstruction: Nasal projection in Brazilian adults.

    PubMed

    Tedeschi-Oliveira, Silvia Virginia; Beaini, Thiago Leite; Melani, Rodolfo Francisco Haltenhoff

    2016-09-01

    The nose has a marked cognitive influence on facial image; however, it loses its shape during cadaveric decomposition. The known methods of estimating nasal projection in facial reconstruction lack practicality and reproducibility. We attempted to relate the points Rhinion, Pronasale and Prosthion by studying the angle formed by the straight lines that connect them. Two examiners measured this angle with the image-analysis and processing software ImageJ, directly from cephalometric radiographs. The sample consisted of 300 males, aged between 24 and 77 years, and 300 females, aged 24 to 69 years. The proposed angle ranged from 80° to 100° in both sexes and at all ages. It was considered possible to use a 90° angle from the projections of the Rhinion and Prosthion points in order to determine the Pronasale position, and thus to estimate the nasal projection of Brazilian adults. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
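
    The proposed angle is measured at Pronasale between the lines to Rhinion and Prosthion; a minimal sketch of that computation (landmark coordinates are hypothetical):

```python
import math

def angle_at_pronasale(rhinion, pronasale, prosthion):
    """Angle (degrees) at Pronasale between the lines to Rhinion and Prosthion."""
    ax, ay = rhinion[0] - pronasale[0], rhinion[1] - pronasale[1]
    bx, by = prosthion[0] - pronasale[0], prosthion[1] - pronasale[1]
    cos_t = (ax * bx + ay * by) / (math.hypot(ax, ay) * math.hypot(bx, by))
    return math.degrees(math.acos(cos_t))

# Hypothetical landmark coordinates (image units), not study data.
# Here Pronasale sits so that the two lines meet at a right angle:
theta = angle_at_pronasale((0.0, 25.0), (25.0, 0.0), (0.0, -25.0))
```

    Inverting this relation with the angle fixed at 90° gives a candidate Pronasale position from the Rhinion and Prosthion projections, as the study proposes.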

  5. Automatic Contour Extraction of Facial Organs for Frontal Facial Images with Various Facial Expressions

    NASA Astrophysics Data System (ADS)

    Kobayashi, Hiroshi; Suzuki, Seiji; Takahashi, Hisanori; Tange, Akira; Kikuchi, Kohki

    This study deals with a method for automatic contour extraction of facial features such as the eyebrows, eyes and mouth from time-series frontal face images with various facial expressions. Because Snakes, one of the best-known contour extraction methods, has several disadvantages, we propose a new method to overcome these issues. We define an elastic contour model in order to hold the contour shape, and then determine the elastic energy from the amount of deformation of the elastic contour model. We also utilize the image energy obtained from brightness differences at the control points on the elastic contour model. Applying dynamic programming, we determine the contour position where the total of the elastic energy and the image energy becomes minimum. Employing frontal facial image sequences captured at 1/30 s intervals, changing from neutral to one of six typical facial expressions and obtained from 20 subjects, we evaluated our method and found that it enables highly accurate automatic contour extraction of facial features.

  6. Facial identification in very low-resolution images simulating prosthetic vision.

    PubMed

    Chang, M H; Kim, H S; Shin, J H; Park, K S

    2012-08-01

    Familiar face identification is important to blind or visually impaired patients and can be achieved using a retinal prosthesis. Nevertheless, there are limitations in delivering facial images with a resolution sufficient to distinguish facial features, such as the eyes and nose, through the multichannel electrode arrays used in current visual prostheses. This study verifies the feasibility of familiar face identification under low-resolution prosthetic vision and proposes an edge-enhancement method to deliver more, and higher-quality, visual information. We first generated a contrast-enhanced image and an edge image by applying the Sobel edge detector, and blocked each of them by averaging. Then, we subtracted the blocked edge image from the blocked contrast-enhanced image and produced a pixelized image imitating an array of phosphenes. Before subtraction, every gray value of the edge image was weighted by 50% (mode 2), 75% (mode 3) or 100% (mode 4). In mode 1, the facial image was blocked and pixelized with no further processing. The most successful identification was achieved with mode 3 at every resolution in terms of the identification index, which covers both accuracy and correct response time. We also found that subjects recognized a distinctive face more accurately and faster than the other given facial images, even under low-resolution prosthetic vision. Every subject could identify familiar faces even in very low-resolution images, and the proposed edge-enhancement method appeared to contribute to intermediate-stage visual prostheses.
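
    A minimal sketch of the blocked weighted subtraction used in modes 2-4 (the Sobel step is replaced here by a precomputed edge map for brevity; all values are illustrative):

```python
import numpy as np

def block_average(img, block):
    """Average-pool an image into block x block cells (dims divisible by block)."""
    h, w = img.shape
    return img.reshape(h // block, block, w // block, block).mean(axis=(1, 3))

def phosphene_image(img, edges, block, edge_weight=0.75):
    """Blocked image minus weighted blocked edge image, clipped to [0, 255].

    edge_weight=0.75 corresponds to the paper's mode 3.
    """
    out = block_average(img, block) - edge_weight * block_average(edges, block)
    return np.clip(out, 0, 255)

# Flat 8x8 gray image with a single vertical edge (hypothetical data):
img = np.full((8, 8), 200.0)
edges = np.zeros((8, 8))
edges[:, 4] = 255.0
ph = phosphene_image(img, edges, block=4)  # 2x2 array of phosphene levels
```

    Each output cell drives one phosphene, so edges darken the blocks they pass through while flat regions stay at the blocked gray level.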

  7. Looking Like a Leader–Facial Shape Predicts Perceived Height and Leadership Ability

    PubMed Central

    Re, Daniel E.; Hunter, David W.; Coetzee, Vinet; Tiddeman, Bernard P.; Xiao, Dengke; DeBruine, Lisa M.; Jones, Benedict C.; Perrett, David I.

    2013-01-01

    Judgments of leadership ability from face images predict the outcomes of actual political elections and are correlated with leadership success in the corporate world. The specific facial cues that people use to judge leadership remain unclear, however. Physical height is also associated with political and organizational success, raising the possibility that facial cues of height contribute to leadership perceptions. Consequently, we assessed whether cues to height exist in the face and, if so, whether they are associated with perception of leadership ability. We found that facial cues to perceived height had a strong relationship with perceived leadership ability. Furthermore, when allowed to manually manipulate faces, participants increased facial cues associated with perceived height in order to maximize leadership perception. A morphometric analysis of face shape revealed that structural facial masculinity was not responsible for the relationship between perceived height and perceived leadership ability. Given the prominence of facial appearance in making social judgments, facial cues to perceived height may have a significant influence on leadership selection. PMID:24324651

  8. Influence of anteroposterior mandibular positions on facial attractiveness in Japanese adults.

    PubMed

    Kuroda, Shingo; Sugahara, Takako; Takabatake, Souichirou; Taketa, Hiroaki; Ando, Ryoko; Takano-Yamamoto, Teruko

    2009-01-01

    Our aims in this study were to determine the anteroposterior facial relationship that is regarded as most attractive by Japanese laypersons in a questionnaire survey and to evaluate which analysis of the soft-tissue profile is most suitable for Japanese people. We showed 262 Japanese laypersons (121 male, 141 female) 9 morphed profile images with Point B and menton moved anteriorly or distally by software and asked them to rank the images according to their attractiveness. To examine which analysis best reflects facial attractiveness as judged by laypersons, we made 5 types of analyses of the facial profile with 11 variables in the 9 images. The normal face was judged favorably; however, an attractive profile might be different for each subject. The 3 highest-ranking profiles (normal face and moderate mandibular retrusions) were often favorites, and 2 profiles (severe mandibular protrusions) were liked the least by most subjects. However, the other images showed a wide range of distribution. Mandibular retrusion was generally more favored than mandibular protrusion, and bimaxillary protrusion (severe chin retrusion) had a high attractiveness ranking and was well accepted in the Japanese population. To evaluate the profiles of Japanese subjects, it is important to evaluate not only the esthetic line defined by the nose and chin, but also the balance of the upper and lower lips defined by the posterior reference line, ie, Burstone's Sn-Pog' line.

  9. The telltale face: possible mechanisms behind defector and cooperator recognition revealed by emotional facial expression metrics.

    PubMed

    Kovács-Bálint, Zsófia; Bereczkei, Tamás; Hernádi, István

    2013-11-01

    In this study, we investigated the role of facial cues in cooperator and defector recognition. First, a face image database was constructed from pairs of full face portraits of target subjects taken at the moment of decision-making in a prisoner's dilemma game (PDG) and in a preceding neutral task. Image pairs with no deficiencies (n = 67) were standardized for orientation and luminance. Then, confidence in defector and cooperator recognition was tested with image rating in a different group of lay judges (n = 62). Results indicate that (1) defectors were better recognized (58% vs. 47%), (2) they looked different from cooperators (p < .01), (3) males but not females evaluated the images with a relative bias towards the cooperator category (p < .01), and (4) females were more confident in detecting defectors (p < .05). According to facial microexpression analysis, defection was strongly linked with depressed lower lips and less opened eyes. Significant correlation was found between the intensity of micromimics and the rating of images in the cooperator-defector dimension. In summary, facial expressions can be considered as reliable indicators of momentary social dispositions in the PDG. Females may exhibit an evolutionary-based overestimation bias to detecting social visual cues of the defector face. © 2012 The British Psychological Society.

  10. Speech Signal and Facial Image Processing for Obstructive Sleep Apnea Assessment

    PubMed Central

    Espinoza-Cuadros, Fernando; Fernández-Pozo, Rubén; Toledano, Doroteo T.; Alcázar-Ramírez, José D.; López-Gonzalo, Eduardo; Hernández-Gómez, Luis A.

    2015-01-01

    Obstructive sleep apnea (OSA) is a common sleep disorder characterized by recurring breathing pauses during sleep caused by a blockage of the upper airway (UA). OSA is generally diagnosed through a costly procedure requiring an overnight stay of the patient at the hospital. This has led to proposing less costly procedures based on the analysis of patients' facial images and voice recordings to help in OSA detection and severity assessment. In this paper we investigate the use of both image and speech processing to estimate the apnea-hypopnea index, AHI (which describes the severity of the condition), over a population of 285 male Spanish subjects suspected to suffer from OSA and referred to a Sleep Disorders Unit. Photographs and voice recordings were collected in a supervised but not highly controlled way trying to test a scenario close to an OSA assessment application running on a mobile device (i.e., smartphones or tablets). Spectral information in speech utterances is modeled by a state-of-the-art low-dimensional acoustic representation, called i-vector. A set of local craniofacial features related to OSA are extracted from images after detecting facial landmarks using Active Appearance Models (AAMs). Support vector regression (SVR) is applied on facial features and i-vectors to estimate the AHI. PMID:26664493

  11. Speech Signal and Facial Image Processing for Obstructive Sleep Apnea Assessment.

    PubMed

    Espinoza-Cuadros, Fernando; Fernández-Pozo, Rubén; Toledano, Doroteo T; Alcázar-Ramírez, José D; López-Gonzalo, Eduardo; Hernández-Gómez, Luis A

    2015-01-01

    Obstructive sleep apnea (OSA) is a common sleep disorder characterized by recurring breathing pauses during sleep caused by a blockage of the upper airway (UA). OSA is generally diagnosed through a costly procedure requiring an overnight stay of the patient at the hospital. This has led to proposing less costly procedures based on the analysis of patients' facial images and voice recordings to help in OSA detection and severity assessment. In this paper we investigate the use of both image and speech processing to estimate the apnea-hypopnea index, AHI (which describes the severity of the condition), over a population of 285 male Spanish subjects suspected to suffer from OSA and referred to a Sleep Disorders Unit. Photographs and voice recordings were collected in a supervised but not highly controlled way trying to test a scenario close to an OSA assessment application running on a mobile device (i.e., smartphones or tablets). Spectral information in speech utterances is modeled by a state-of-the-art low-dimensional acoustic representation, called i-vector. A set of local craniofacial features related to OSA are extracted from images after detecting facial landmarks using Active Appearance Models (AAMs). Support vector regression (SVR) is applied on facial features and i-vectors to estimate the AHI.
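
    The final regression step maps per-subject feature vectors to an AHI estimate; as a stand-in for the paper's support vector regression, a minimal least-squares sketch on synthetic features (not the study's craniofacial measurements or i-vectors):

```python
import numpy as np

def fit_linear_ahi(X, y):
    """Least-squares weights (with bias term) mapping feature vectors to AHI."""
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])  # append bias column
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return w

def predict_ahi(w, X):
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    return Xb @ w

# Synthetic one-feature data where AHI = 10 * feature + 5 exactly:
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([5.0, 15.0, 25.0, 35.0])
w = fit_linear_ahi(X, y)
pred = predict_ahi(w, X)
```

    In the paper the regressor is an SVR and the features are concatenated craniofacial measurements and i-vectors; the linear fit above only illustrates the feature-to-AHI mapping.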

  12. Validity and reliability of a structured-light 3D scanner and an ultrasound imaging system for measurements of facial skin thickness.

    PubMed

    Lee, Kang-Woo; Kim, Sang-Hwan; Gil, Young-Chun; Hu, Kyung-Seok; Kim, Hee-Jin

    2017-10-01

    Three-dimensional (3D)-scanning-based morphological studies of the face are commonly included in various clinical procedures. This study evaluated the validity and reliability of a 3D scanning system by comparing it with an ultrasound (US) imaging system and with direct measurement of facial skin. Facial skin thickness at 19 landmarks was measured using the three methods in 10 embalmed adult Korean cadavers. Skin thickness was first measured using the ultrasound device, and then 3D scanning of the facial skin surface was performed. The skin on the left half of the face was then gently dissected, deviating slightly right of the midline, to separate it from the subcutaneous layer, and the harvested facial skin's thickness was measured directly using calipers. The dissected specimen was then scanned again, and the scanned images of the undissected and dissected faces were superimposed using Morpheus Plastic Solution (version 3.0) software. Finally, the facial skin thickness was calculated from the superimposed images. The ICC value for the correlation between the 3D scanning system and direct measurement showed excellent reliability (0.849, 95% confidence interval = 0.799-0.887). Bland-Altman analysis showed a good level of agreement between the 3D scanning system and direct measurement (bias = 0.49 ± 0.49 mm, mean ± SD). These results demonstrate that the 3D scanning system precisely reflects structural changes before and after skin dissection. Therefore, an in-depth morphological study using this 3D scanning system could provide depth data on the main anatomical structures of the face, thereby providing crucial anatomical knowledge for use in various clinical applications. Clin. Anat. 30:878-886, 2017. © 2017 Wiley Periodicals, Inc.
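
    The Bland-Altman agreement statistics reported above reduce to the mean difference between methods and its 1.96-SD limits of agreement; a minimal sketch (the paired readings are hypothetical, not the study's data):

```python
def bland_altman(a, b):
    """Bias (mean difference) and 95% limits of agreement between two methods."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = (sum((d - bias) ** 2 for d in diffs) / (n - 1)) ** 0.5  # sample SD
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired skin-thickness readings (mm), not the study's data:
scan = [1.2, 1.5, 1.1, 1.8]
direct = [1.0, 1.1, 0.9, 1.2]
bias, (lo, hi) = bland_altman(scan, direct)
```

    A bias near zero with narrow limits indicates the two measurement methods agree well.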

  13. Contributions of feature shapes and surface cues to the recognition of facial expressions.

    PubMed

    Sormaz, Mladen; Young, Andrew W; Andrews, Timothy J

    2016-10-01

    Theoretical accounts of face processing often emphasise feature shapes as the primary visual cue to the recognition of facial expressions. However, changes in facial expression also affect the surface properties of the face. In this study, we investigated whether this surface information can also be used in the recognition of facial expression. First, participants identified facial expressions (fear, anger, disgust, sadness, happiness) from images that were manipulated such that they varied mainly in shape or mainly in surface properties. We found that the categorization of facial expression is possible in either type of image, but that different expressions are relatively dependent on surface or shape properties. Next, we investigated the relative contributions of shape and surface information to the categorization of facial expressions. This employed a complementary method that involved combining the surface properties of one expression with the shape properties from a different expression. Our results showed that the categorization of facial expressions in these hybrid images was equally dependent on the surface and shape properties of the image. Together, these findings provide a direct demonstration that both feature shape and surface information make significant contributions to the recognition of facial expressions. Copyright © 2016 Elsevier Ltd. All rights reserved.

  14. Automatic Detection of Frontal Face Midline by Chain-coded Merlin-Farber Hough Transform

    NASA Astrophysics Data System (ADS)

    Okamoto, Daichi; Ohyama, Wataru; Wakabayashi, Tetsushi; Kimura, Fumitaka

    We propose a novel approach for detecting the facial midline (facial symmetry axis) in a frontal face image. The facial midline has several applications, for instance reducing the computational cost of facial feature extraction (FFE) and supporting postoperative assessment for cosmetic or dental surgery. The proposed method detects the facial midline of a frontal face from an edge image as the symmetry axis, using the Merlin-Farber Hough transform (MFHT). A new performance-improvement scheme for midline detection by MFHT is also presented. The main concept of the proposed scheme is suppression of redundant votes in the Hough parameter space by introducing a chain-code representation of the binary edge image. Experimental results on a dataset of 2409 images from the FERET database indicate that the proposed algorithm improves the accuracy of midline detection from 89.9% to 95.1% for face images with different scales and rotations.
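
    The symmetry-axis idea can be illustrated with a brute-force mirror-matching score in place of the chain-coded Merlin-Farber Hough vote (the edge map and candidate columns below are hypothetical):

```python
def best_symmetry_axis(edge, candidates):
    """Pick the column about which mirrored edge pixels agree best.

    edge: 2D list of 0/1 values (a binary edge image).
    A brute-force stand-in for the Hough-based symmetry vote.
    """
    w = len(edge[0])

    def score(c):
        s = 0
        for row in edge:
            for x in range(w):
                m = 2 * c - x  # mirror of column x about axis column c
                if 0 <= m < w and row[x] and row[m]:
                    s += 1
        return s

    return max(candidates, key=score)

# A 3x5 edge map symmetric about column 2 (hypothetical):
edge = [[0, 1, 0, 1, 0],
        [1, 0, 0, 0, 1],
        [0, 1, 0, 1, 0]]
axis = best_symmetry_axis(edge, range(1, 4))
```

    The Hough formulation accumulates the same kind of evidence in parameter space, and the chain-code trick in the paper serves to skip the redundant votes this brute-force version casts.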

  15. Facial Asymmetry-Based Age Group Estimation: Role in Recognizing Age-Separated Face Images.

    PubMed

    Sajid, Muhammad; Taj, Imtiaz Ahmad; Bajwa, Usama Ijaz; Ratyal, Naeem Iqbal

    2018-04-23

    Face recognition aims to establish the identity of a person based on facial characteristics. Age group estimation, on the other hand, is the automatic calculation of an individual's age range based on facial features. Recognizing age-separated face images is still a challenging research problem due to complex aging processes involving different types of facial tissues: skin, fat, muscles, and bones. Certain holistic and local facial features are used to recognize age-separated face images. However, most of the existing methods recognize face images without incorporating knowledge learned from age group estimation. In this paper, we propose an age-assisted face recognition approach to handle aging variations. Inspired by the observation that facial asymmetry is an age-dependent intrinsic facial feature, we first use asymmetric facial dimensions to estimate the age group of a given face image. Deeply learned asymmetric facial features are then extracted for face recognition using a deep convolutional neural network (dCNN). Finally, we integrate the knowledge learned from age group estimation into the face recognition algorithm using the same dCNN. This integration results in a significant improvement in overall performance compared to using the face recognition algorithm alone. Experimental results on two large facial aging datasets, MORPH and FERET, show that the proposed age-group-estimation-assisted face recognition approach yields superior performance compared to some existing state-of-the-art methods. © 2018 American Academy of Forensic Sciences.

  16. Assessment of facial golden proportions among young Japanese women.

    PubMed

    Mizumoto, Yasushi; Deguchi, Toshio; Fong, Kelvin W C

    2009-08-01

    Facial proportions are of interest in orthodontics. The null hypothesis was that there is no difference in the golden proportions of soft-tissue facial balance between Japanese and white women. Facial proportions were assessed by examining photographs of 3 groups of Asian women: group 1, 30 young adult patients with a skeletal Class 1 occlusion; group 2, 30 models; and group 3, 14 popular actresses. Photographic prints or slides were digitized for image analysis. Group 1 subjects had standardized photos taken as part of their treatment. Photos of the subjects in groups 2 and 3 were collected from magazines and other sources and were of varying sizes; therefore, the output image size was not considered. The range of measurement errors was 0.17% to 1.16%. ANOVA was selected because the data set was normally distributed with homogeneous variances. The subjects in the 3 groups showed good total facial proportions. The proportions of the face-height components in group 1 were similar to the golden proportion, indicating a longer lower facial height and shorter nose. Group 2 differed from the golden proportion, with a short lower facial height. Group 3 had golden proportions in all 7 measurements. The proportions of the face width deviated from the golden proportion, indicating a small mouth or wide-set eyes in groups 1 and 2. The null hypothesis was not rejected for the group 3 actresses in the facial-height components. Some measurements in groups 1 and 2 showed facial proportions that deviated from the golden proportion (ratio).
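
    Checking a pair of facial measurements against the golden proportion amounts to comparing their ratio with (1 + √5)/2; a minimal sketch (the measurements below are hypothetical, not the study's data):

```python
GOLDEN = (1 + 5 ** 0.5) / 2  # ≈ 1.618

def proportion_deviation(longer, shorter):
    """How far a pair of facial measurements deviates from the golden ratio."""
    return longer / shorter - GOLDEN

# Hypothetical measurements (mm): total face height vs. lower face height
dev = proportion_deviation(121.4, 75.0)
```

    A deviation near zero means the pair conforms to the golden proportion, as reported for the group 3 actresses.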

  17. FaceWarehouse: a 3D facial expression database for visual computing.

    PubMed

    Cao, Chen; Weng, Yanlin; Zhou, Shun; Tong, Yiying; Zhou, Kun

    2014-03-01

    We present FaceWarehouse, a database of 3D facial expressions for visual computing applications. We used Kinect, an off-the-shelf RGBD camera, to capture 150 individuals aged 7-80 from various ethnic backgrounds. For each person, we captured the RGBD data of their different expressions, including the neutral expression and 19 other expressions such as mouth-opening, smile, and kiss. For every RGBD raw data record, a set of facial feature points on the color image, such as the eye corners, mouth contour, and nose tip, are automatically localized, and manually adjusted if better accuracy is required. We then deform a template facial mesh to fit the depth data as closely as possible while matching the feature points on the color image to their corresponding points on the mesh. Starting from these fitted face meshes, we construct a set of individual-specific expression blendshapes for each person. These meshes with consistent topology are assembled as a rank-3 tensor to build a bilinear face model with two attributes: identity and expression. Compared with previous 3D facial databases, every person in our database has a much richer matching collection of expressions, enabling depiction of most human facial actions. We demonstrate the potential of FaceWarehouse for visual computing with four applications: facial image manipulation, face component transfer, real-time performance-based facial image animation, and facial animation retargeting from video to image.

  18. Chondromyxoid fibroma of the mastoid facial nerve canal mimicking a facial nerve schwannoma.

    PubMed

    Thompson, Andrew L; Bharatha, Aditya; Aviv, Richard I; Nedzelski, Julian; Chen, Joseph; Bilbao, Juan M; Wong, John; Saad, Reda; Symons, Sean P

    2009-07-01

    Chondromyxoid fibroma of the skull base is a rare entity. Involvement of the temporal bone is particularly rare. We present an unusual case of progressive facial nerve paralysis with imaging and clinical findings most suggestive of a facial nerve schwannoma. The lesion was tubular in appearance, expanded the mastoid facial nerve canal, protruded out of the stylomastoid foramen, and enhanced homogeneously. The only unusual imaging feature was minor calcification within the tumor. Surgery revealed an irregular, cystic lesion. Pathology diagnosed a chondromyxoid fibroma involving the mastoid portion of the facial nerve canal, destroying the facial nerve.

  19. Enhancing facial features by using clear facial features

    NASA Astrophysics Data System (ADS)

    Rofoo, Fanar Fareed Hanna

    2017-09-01

    The similarity of features between individuals of the same ethnicity motivated this project. The idea is to extract features from a clear facial image and impose them on a blurred facial image of the same ethnic origin, as an approach to enhancing the blurred image. A database of clear images was collected containing 30 individuals, equally divided among five ethnicities: Arab, African, Chinese, European and Indian. Software was built to perform pre-processing on the images in order to align the features of the clear and blurred images. Features were extracted from a clear facial image, or from a template built from clear facial images, using the wavelet transform, and imposed on the blurred image using the inverse wavelet transform. The results of this approach were unsatisfactory, as the features did not all align together: in most cases the eyes were aligned but the nose or mouth was not. In the next approach we dealt with the features separately, but in some cases a blocky effect appeared on the features because no closely matching features were available. In general, the small database did not allow the goal results to be achieved, owing to the limited number of available individuals. Color information and feature similarity could be investigated further to achieve better results with a larger database, along with improving the enhancement process through the availability of closer matches within each ethnicity.

  20. [INVITED] Non-intrusive optical imaging of face to probe physiological traits in Autism Spectrum Disorder

    NASA Astrophysics Data System (ADS)

    Samad, Manar D.; Bobzien, Jonna L.; Harrington, John W.; Iftekharuddin, Khan M.

    2016-03-01

    Autism Spectrum Disorders (ASD) can impair non-verbal communication, including the variety and extent of facial expressions in social and interpersonal communication. These impairments may appear as differential traits in the physiology of the facial muscles of an individual with ASD when compared to a typically developing individual. The differential traits in facial expressions, shown as facial muscle-specific changes (also known as 'facial oddity' in subjects with ASD), may be measured visually. However, this mode of measurement may not discern the subtlety in facial oddity distinctive to ASD. Earlier studies have used intrusive electrophysiological sensors on the facial skin to gauge facial muscle actions from quantitative physiological data. This study demonstrates, for the first time in the literature, novel quantitative measures for facial oddity recognition using non-intrusive facial imaging sensors such as video and 3D optical cameras. An Institutional Review Board (IRB)-approved pilot study was conducted on a group consisting of eight participants with ASD and eight typically developing participants in a control group to capture their facial images in response to visual stimuli. The proposed computational techniques and statistical analyses reveal a higher mean level of facial muscle action in the ASD group versus the control group. The facial muscle-specific evaluation reveals intense yet asymmetric facial responses as facial oddity in participants with ASD. This finding may objectively define measurable differential markers in the facial expressions of individuals with ASD.

  1. Real Time 3D Facial Movement Tracking Using a Monocular Camera

    PubMed Central

    Dong, Yanchao; Wang, Yanming; Yue, Jiguang; Hu, Zhencheng

    2016-01-01

    The paper proposes a robust framework for 3D facial movement tracking in real time using a monocular camera. It is designed to estimate the 3D face pose and local facial animation such as eyelid movement and mouth movement. The framework first utilizes the Discriminative Shape Regression method to locate the facial feature points on the 2D image and fuses the 2D data with a 3D face model using an Extended Kalman Filter to yield 3D facial movement information. An alternating optimizing strategy is adopted to fit to different persons automatically. Experiments show that the proposed framework can track 3D facial movement across various poses and illumination conditions. Given the real face scale, the framework can track the eyelid with an error of 1 mm and the mouth with an error of 2 mm. The tracking result is reliable for expression analysis or mental state inference. PMID:27463714

  2. Real Time 3D Facial Movement Tracking Using a Monocular Camera.

    PubMed

    Dong, Yanchao; Wang, Yanming; Yue, Jiguang; Hu, Zhencheng

    2016-07-25

    The paper proposes a robust framework for 3D facial movement tracking in real time using a monocular camera. It is designed to estimate the 3D face pose and local facial animation such as eyelid movement and mouth movement. The framework first utilizes the Discriminative Shape Regression method to locate the facial feature points on the 2D image, then fuses the 2D data with a 3D face model using an Extended Kalman Filter to yield 3D facial movement information. An alternating optimization strategy is adopted to adapt to different persons automatically. Experiments show that the proposed framework can track 3D facial movement across various poses and illumination conditions. Given the real face scale, the framework tracks the eyelid with an error of 1 mm and the mouth with an error of 2 mm. The tracking result is reliable enough for expression analysis or mental state inference.

  3. Effects of a small talking facial image on autonomic activity: the moderating influence of dispositional BIS and BAS sensitivities and emotions.

    PubMed

    Ravaja, Niklas

    2004-01-01

    We examined the moderating influence of dispositional behavioral inhibition system (BIS) and behavioral activation system (BAS) sensitivities, Negative Affect, and Positive Affect on the relationship between a small moving vs. static facial image and autonomic responses when viewing/listening to news messages read by a newscaster among 36 young adults. The autonomic parameters measured were respiratory sinus arrhythmia (RSA), the low-frequency (LF) component of heart rate variability (HRV), electrodermal activity, and pulse transit time (PTT). The results showed that dispositional BAS sensitivity, particularly BAS Fun Seeking, and Negative Affect interacted with facial image motion in predicting autonomic nervous system activity. A moving facial image was related to lower RSA and LF component of HRV and shorter PTTs as compared to a static facial image among high-BAS individuals. Even a small talking facial image may contribute to sustained attentional engagement among high-BAS individuals, given that the BAS directs attention toward positive cues and a moving social stimulus may act as a positive incentive for high-BAS individuals.
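
    The LF component of HRV referenced above is conventionally the spectral power of the interbeat-interval series in the 0.04-0.15 Hz band, with RSA reflected in the 0.15-0.40 Hz band. A minimal sketch with an assumed synthetic tachogram (the sampling rate and sine amplitudes are illustrative choices, not values from the study):

```python
import numpy as np
from scipy.signal import welch

def band_power(sig, fs, lo, hi):
    """Integrate the Welch PSD of sig over the [lo, hi] Hz band."""
    f, pxx = welch(sig, fs=fs, nperseg=256)
    mask = (f >= lo) & (f <= hi)
    return np.trapz(pxx[mask], f[mask])

fs = 4.0  # Hz, a typical resampling rate for an RR-interval series
t = np.arange(0, 300, 1 / fs)
# Synthetic tachogram: strong 0.1 Hz oscillation (LF) plus weak 0.25 Hz (HF).
rr = 0.05 * np.sin(2 * np.pi * 0.1 * t) + 0.01 * np.sin(2 * np.pi * 0.25 * t)
lf = band_power(rr, fs, 0.04, 0.15)   # low-frequency HRV power
hf = band_power(rr, fs, 0.15, 0.40)   # high-frequency (RSA) band
print(lf > hf)  # → True
```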

  4. Evaluation of facial attractiveness in black people according to the subjective facial analysis criteria.

    PubMed

    Melo, Andréa Reis de; Conti, Ana Cláudia de Castro Ferreira; Almeida-Pedrin, Renata Rodrigues; Didier, Victor; Valarelli, Danilo Pinelli; Capelozza Filho, Leopoldino

    2017-02-01

    The objective of this study was to evaluate facial attractiveness in 30 black individuals, according to the Subjective Facial Analysis criteria. Frontal and profile view photographs of 30 black individuals were evaluated for facial attractiveness and classified as esthetically unpleasant, acceptable, or pleasant by 50 evaluators: the 30 individuals from the sample, 10 orthodontists, and 10 laymen. Besides assessing facial attractiveness, the evaluators had to identify the structures responsible for the classification as unpleasant or pleasant. Intraexaminer agreement was assessed using Spearman's correlation, correlation within each category using Kendall's concordance coefficient, and correlation between the 3 categories using the chi-square test and proportions. Most of the frontal (53.5%) and profile view (54.9%) photographs were classified as esthetically acceptable. The structures most identified as esthetically unpleasant were the mouth, lips, and face in the frontal view, and the nose and chin in the profile view. The structures most identified as esthetically pleasant were harmony, face, and mouth in the frontal view, and harmony and nose in the profile view. The ratings by the examiners in the sample and laymen groups showed statistically significant correlation in both views. The orthodontists agreed with the laymen on the evaluation of the frontal view and disagreed on the profile view, especially regarding whether the images were esthetically unpleasant or acceptable. Based on these results, the evaluation of facial attractiveness according to the Subjective Facial Analysis criteria proved to be applicable and to have a subjective influence; therefore, it is suggested that the patient's opinion regarding facial esthetics should be considered in orthodontic treatment planning.

  5. Observer success rates for identification of 3D surface reconstructed facial images and implications for patient privacy and security

    NASA Astrophysics Data System (ADS)

    Chen, Joseph J.; Siddiqui, Khan M.; Fort, Leslie; Moffitt, Ryan; Juluru, Krishna; Kim, Woojin; Safdar, Nabile; Siegel, Eliot L.

    2007-03-01

    3D and multi-planar reconstruction of CT images have become indispensable in the routine practice of diagnostic imaging. These tools can not only enhance our ability to diagnose diseases but also assist in therapeutic planning. The technology utilized to create these images can also render surface reconstructions, which may have the undesired potential of providing sufficient detail to allow recognition of facial features, and consequently patient identity, leading to violation of patient privacy rights as described in the HIPAA (Health Insurance Portability and Accountability Act) legislation. The purpose of this study is to evaluate whether 3D reconstructed images of a patient's facial features can indeed be used to reliably or confidently identify that specific patient. Surface reconstructed images of the study participants were created and used as candidates for matching with digital photographs of participants. Data analysis was performed to determine the ability of observers to successfully match 3D surface reconstructed images of the face with facial photographs. The amount of time required to perform the match was recorded as well. We also plan to investigate the ability of digital masks or physical drapes to conceal patient identity. The recently expressed concerns over the inability to truly "anonymize" CT (and MRI) studies of the head/face/brain are yet to be tested in a prospective study. We believe that it is important to establish whether these reconstructed images are a "threat" to patient privacy/security and, if so, whether minimal interventions from a clinical perspective can substantially reduce this possibility.

  6. Dynamic facial expression recognition based on geometric and texture features

    NASA Astrophysics Data System (ADS)

    Li, Ming; Wang, Zengfu

    2018-04-01

    Recently, dynamic facial expression recognition in videos has attracted growing attention. In this paper, we propose a novel dynamic facial expression recognition method using geometric and texture features. In our system, the facial landmark movements and texture variations upon pairwise images are used to perform the dynamic facial expression recognition tasks. For each facial expression sequence, pairwise images are created between the first frame and each of its subsequent frames. Integration of both geometric and texture features further enhances the representation of the facial expressions. Finally, a Support Vector Machine is used for facial expression recognition. Experiments conducted on the extended Cohn-Kanade database show that our proposed method achieves performance competitive with other methods.
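
    The pairwise descriptor idea can be sketched roughly as follows. The landmark displacement and the crude texture-difference term below are illustrative assumptions, not the paper's exact features, and the toy data simply separates "large movement" from "little movement" classes.

```python
import numpy as np
from sklearn.svm import SVC

def pairwise_features(first_lm, frame_lm, first_tex, frame_tex):
    """Hypothetical pairwise descriptor: geometric landmark displacement
    between the neutral first frame and a later frame, concatenated with
    a crude texture-variation term (here just a patch difference)."""
    geo = (frame_lm - first_lm).ravel()
    tex = (frame_tex - first_tex).ravel()
    return np.concatenate([geo, tex])

rng = np.random.default_rng(0)
# Toy data: class 0 = little movement, class 1 = large upward landmark shift.
X, y = [], []
for label in (0, 1):
    for _ in range(20):
        first = rng.normal(size=(5, 2))
        shift = np.array([0.0, 3.0]) if label else np.array([0.0, 0.0])
        frame = first + shift + rng.normal(scale=0.1, size=(5, 2))
        X.append(pairwise_features(first, frame,
                                   np.zeros(4),
                                   rng.normal(scale=0.1, size=4)))
        y.append(label)
clf = SVC(kernel="linear").fit(X, y)  # the classifier named in the abstract
```

    With strongly separated classes like these, a linear SVM fits the training set essentially perfectly; real expression data is of course far less separable.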

  7. A prospective analysis of physical examination findings in the diagnosis of facial fractures: Determining predictive value

    PubMed Central

    Timashpolsky, Alisa; Dagum, Alexander B; Sayeed, Syed M; Romeiser, Jamie L; Rosenfeld, Elisheva A; Conkling, Nicole

    2016-01-01

    BACKGROUND There are >150,000 patient visits per year to emergency rooms for facial trauma. The reliability of the computed tomography (CT) scan has made it the primary modality for diagnosing facial skeletal injury, with the physical examination playing a more cursory role. Knowing the predictive value of physical findings in facial skeletal injuries may enable more appropriate use of imaging and health care resources. OBJECTIVE A blinded prospective study was undertaken to assess the predictive value of physical examination findings in detecting maxillofacial fracture in trauma patients, and in determining whether a patient will require surgical intervention. METHODS Over a four-month period, the authors’ team examined patients admitted with facial trauma to the emergency department of their hospital. The evaluating physician completed a standardized physical examination evaluation form indicating the physical findings. Corresponding CT scans and surgical records were then reviewed, and the results recorded by a plastic surgeon who was blinded to the results of the physical examination. RESULTS A total of 57 patients met the inclusion criteria; there were 44 male and 13 female patients. The sensitivity, specificity, positive predictive value and negative predictive value of grouped physical examination findings were determined in major areas. In further analysis, specific examination findings with n≥9 (15%) were also reported. CONCLUSIONS The data demonstrated a high negative predictive value of at least 90% for orbital floor, zygomatic, mandibular and nasal bone fractures compared with CT scan. Furthermore, none of the patients who did not have a physical examination finding for a particular facial fracture required surgery for that fracture. Thus, the instrument performed well at ruling out fractures in these areas when there were none. Ultimately, these results may help reduce unnecessary radiation and costly imaging in patients with facial trauma without facial fractures. PMID:27441188
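
    The four metrics reported follow the standard 2x2-table definitions. The counts below are invented for illustration only and are not the study's data:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, and NPV from a 2x2 table of
    physical-exam findings versus CT-confirmed fracture."""
    sensitivity = tp / (tp + fn)   # exam positive among true fractures
    specificity = tn / (tn + fp)   # exam negative among no-fracture cases
    ppv = tp / (tp + fp)           # fracture given a positive exam
    npv = tn / (tn + fn)           # no fracture given a negative exam
    return sensitivity, specificity, ppv, npv

# Illustrative counts only (hypothetical, not from the study):
sens, spec, ppv, npv = diagnostic_metrics(tp=8, fp=4, fn=1, tn=44)
print(f"NPV = {npv:.2f}")  # → NPV = 0.98
```

    A high NPV, as in this toy example, is exactly what supports using a negative exam to rule out fracture and avoid imaging.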

  8. Three-dimensional head anthropometric analysis

    NASA Astrophysics Data System (ADS)

    Enciso, Reyes; Shaw, Alex M.; Neumann, Ulrich; Mah, James

    2003-05-01

    Currently, two-dimensional photographs are most commonly used to facilitate visualization, assessment and treatment of facial abnormalities in craniofacial care, but they are subject to errors of perspective and projection and lack metric, 3-dimensional information. One can find in the literature a variety of methods to generate 3-dimensional facial images, such as laser scans, stereo-photogrammetry, infrared imaging and even CT; however, each of these methods has inherent limitations, and as such no system is in common clinical use. In this paper we focus on the development of indirect 3-dimensional landmark location and measurement of facial soft tissue with light-based techniques. We will statistically evaluate and validate a current three-dimensional image-based face modeling technique using a plaster head model. We will also develop computer graphics tools for indirect anthropometric measurements in a three-dimensional head model (or polygonal mesh), including linear distances currently used in anthropometry. The measurements will be tested against a validated 3-dimensional digitizer (MicroScribe 3DX).
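
    Indirect linear anthropometric measurements on a mesh reduce to Euclidean distances between landmark coordinates. A minimal sketch; the landmark names and coordinates below are hypothetical values in mm, not measurements from the paper:

```python
import numpy as np

def linear_distance(mesh_landmarks, a, b):
    """Euclidean distance between two named anthropometric landmarks
    placed on a 3D head mesh (coordinates assumed to be in mm)."""
    return float(np.linalg.norm(mesh_landmarks[a] - mesh_landmarks[b]))

# Hypothetical landmark coordinates (names follow standard anthropometry).
landmarks = {
    "exocanthion_r": np.array([-45.0, 30.0, 80.0]),
    "exocanthion_l": np.array([45.0, 30.0, 80.0]),
}
width = linear_distance(landmarks, "exocanthion_r", "exocanthion_l")
print(width)  # → 90.0
```

    Validation against a digitizer like the MicroScribe 3DX would then compare such mesh-derived distances with directly digitized ones.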

  9. Assessment of the facial features and chin development of fetuses with use of serial three-dimensional sonography and the mandibular size monogram in a Chinese population.

    PubMed

    Tsai, Meng-Yin; Lan, Kuo-Chung; Ou, Chia-Yo; Chen, Jen-Huang; Chang, Shiuh-Young; Hsu, Te-Yao

    2004-02-01

    Our purpose was to evaluate whether the application of serial three-dimensional (3D) sonography and the mandibular size monogram can allow observation of dynamic changes in facial features, as well as chin development, in utero. The mandibular size monogram was established through a cross-sectional study involving 183 fetal images. The serial changes of facial features and chin development were assessed in a cohort study involving 40 patients. The monogram reveals that the biparietal distance (BPD)/mandibular body length (MBL) ratio gradually decreases with advancing gestational age. The cohort study conducted with serial 3D sonography shows the same tendency. Both the images and the results of paired-samples t test statistical analysis (P<.001) suggest that fetuses develop wider chins and broader facial features in later weeks. Serial 3D sonography and the mandibular size monogram display disproportionate growth of the fetal head and chin that leads to changes in facial features in late gestation. This fact must be considered when we evaluate fetuses at risk for development of micrognathia.

  10. Dose and diagnostic image quality in digital tomosynthesis imaging of facial bones in pediatrics

    NASA Astrophysics Data System (ADS)

    King, J. M.; Hickling, S.; Elbakri, I. A.; Reed, M.; Wrogemann, J.

    2011-03-01

    The purpose of this study was to evaluate the use of digital tomosynthesis (DT) for pediatric facial bone imaging. We compared the eye lens dose and diagnostic image quality of DT facial bone exams relative to digital radiography (DR) and computed tomography (CT), and investigated whether we could modify our current DT imaging protocol to reduce patient dose while maintaining sufficient diagnostic image quality. We measured the dose to the eye lens for all three modalities using high-sensitivity thermoluminescent dosimeters (TLDs) and an anthropomorphic skull phantom. To assess the diagnostic image quality of DT compared to the corresponding DR and CT images, we performed an observer study where the visibility of anatomical structures in the DT phantom images was rated on a four-point scale. We then acquired DT images at lower doses and had radiologists indicate whether the visibility of each structure was adequate for diagnostic purposes. For typical facial bone exams, we measured eye lens doses of 0.1-0.4 mGy for DR, 0.3-3.7 mGy for DT, and 26 mGy for CT. In general, facial bone structures were visualized better with DT than with DR, and the majority of structures were visualized well enough to avoid the need for CT. DT imaging provides high quality diagnostic images of the facial bones while delivering significantly lower doses to the lens of the eye compared to CT. In addition, we found that by adjusting the imaging parameters, the DT effective dose can be reduced by up to 50% while maintaining sufficient image quality.

  11. Pose-variant facial expression recognition using an embedded image system

    NASA Astrophysics Data System (ADS)

    Song, Kai-Tai; Han, Meng-Ju; Chang, Shuo-Hung

    2008-12-01

    In recent years, one of the most attractive research areas in human-robot interaction has been automated facial expression recognition. By recognizing facial expressions, a pet robot can interact with humans in a more natural manner. In this study, we focus on the facial pose-variant problem. A novel method is proposed in this paper to recognize pose-variant facial expressions. After locating the face position in an image frame, the active appearance model (AAM) is applied to track facial features. Fourteen feature points are extracted to represent the variation of facial expressions. The distances between feature points are defined as the feature values. These feature values are sent to a support vector machine (SVM) for facial expression determination. The pose-variant facial expression is classified into happiness, neutral, sadness, surprise, or anger. Furthermore, in order to evaluate the performance for practical applications, this study also built a low-resolution database (160x120 pixels) using a CMOS image sensor. Experimental results show that the recognition rate is 84% with the self-built database.
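
    Turning tracked feature points into distance-based feature values can be sketched as the vector of all pairwise distances; whether the authors used all pairs or a selected subset is not stated, so the all-pairs choice below is an assumption. With 14 points there are C(14, 2) = 91 such distances.

```python
import numpy as np
from itertools import combinations

def distance_features(points):
    """Feature vector of pairwise Euclidean distances between tracked
    facial feature points (14 points -> 91 distances)."""
    return np.array([np.linalg.norm(points[i] - points[j])
                     for i, j in combinations(range(len(points)), 2)])

pts = np.zeros((14, 2))
pts[:, 0] = np.arange(14)      # mock 14 collinear feature points
feats = distance_features(pts)
print(len(feats))  # → 91
```

    This vector would then be the input to the SVM classifier described in the abstract.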

  12. Survey on RGB, 3D, Thermal, and Multimodal Approaches for Facial Expression Recognition: History, Trends, and Affect-Related Applications.

    PubMed

    Corneanu, Ciprian Adrian; Simon, Marc Oliu; Cohn, Jeffrey F; Guerrero, Sergio Escalera

    2016-08-01

    Facial expressions are an important way through which humans interact socially. Building a system capable of automatically recognizing facial expressions from images and video has been an intense field of study in recent years. Interpreting such expressions remains challenging and much research is needed about the way they relate to human affect. This paper presents a general overview of automatic RGB, 3D, thermal and multimodal facial expression analysis. We define a new taxonomy for the field, encompassing all steps from face detection to facial expression recognition, and describe and classify the state-of-the-art methods accordingly. We also present the important datasets and the benchmarking of the most influential methods. We conclude with a general discussion about trends, important questions and future lines of research.

  13. A method of assessing facial profile attractiveness and its application in comparing the aesthetic preferences of two samples of South Africans.

    PubMed

    Morar, Ajay; Stein, Errol

    2011-06-01

    Numerous studies have evaluated the perception of facial attractiveness. However, many of the instruments previously used have limitations. This study introduces an improved tool and describes its application in the assessment of the preferred facial profile in two sample groups. Cross-sectional study. Two sites were involved: a rural healthcare facility (Winterveldt, Northwest Province) and the campus of the University of the Witwatersrand (Johannesburg, Gauteng Province). Adult females and males were selected from, first, attendees at the healthcare facility and, second, staff of the University of the Witwatersrand. Eight androgynous lateral facial profile images were created using a morphing software programme, representing six transitions between two anchoring extremes in terms of lip retrusion/protrusion vs protrusion/retrusion. These images were presented to, and rated by, two mixed male/female groups of rural and of urban habitat using a pre-piloted form. Statistical analysis of the responses obtained established the preferred facial profile by gender in each group. The perception of facial attractiveness varied marginally between rural and urban black South Africans. There was no statistically significant difference between females and males in the rural group (P=0·2353) or in the urban sample (P=0·1318) with respect to their choice of ideal facial profile. Females and males in both the rural and urban groups found extreme profile convexity unappealing. By contrast, a larger proportion of rural females, rural males and urban females demonstrated a preference for extreme profile concavity. The research tool described is a useful instrument in the assessment of facial profile attractiveness.

  14. Improvement of the facial evenness of leave-on skincare products by a modified application method in Chinese women.

    PubMed

    Zou, Y; Wang, X; Fan, G

    2015-04-01

    To understand the habits of Chinese women applying leave-on skincare products (LOSCP) and to improve the facial evenness of anti-ageing cosmetics by modifying the application method. A questionnaire on the method of applying LOSCP was distributed to 60 women in the habit of using LOSCP. Their facial images before and after applying LOSCP were taken, and positioning and grey values were used to analyse the effects of different application methods on the uniformity of facial LOSCP. LOSCP, including anti-ageing cosmetics, have been widely used among Chinese women for a long time. However, some women are not concerned with how to properly apply LOSCP. In our survey, the main focal points of the face when looking into the mirror are the forehead, malar region, cheek, mouth corners and chin, while the mouth corners and inner canthus are often overlooked when applying cosmetic products. The image analysis found that after applying the LOSCP, the greyscale of the forehead, glabella, malar region, upper lip region and jaw changed significantly, whereas that of the canthus, mouth corners and lateral cheek region was not significantly different. Applying an improved application method (11-point method) could significantly increase the grey values of various facial areas. The way Chinese women apply LOSCP may result in uneven facial coverage of skin products. By improving the application method, the products can be spread evenly over all facial areas, thereby ensuring the efficacy of anti-ageing cosmetics. Thus, further improvement and education regarding skincare is required. © 2014 Society of Cosmetic Scientists and the Société Française de Cosmétologie.
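
    The grey-value comparison by facial region can be sketched as a mean over a rectangular region of interest in the before/after images. The region box and pixel values below are assumed for illustration; the study's actual region definitions are not given here.

```python
import numpy as np

def region_mean_grey(image, box):
    """Mean grey value inside a rectangular facial region.
    box = (row0, row1, col0, col1); a brighter region after product
    application shows up as an increased mean."""
    r0, r1, c0, c1 = box
    return float(image[r0:r1, c0:c1].mean())

# Toy 'before' and 'after' images: the forehead region brightens by 10.
before = np.full((100, 100), 120.0)
after = before.copy()
after[0:30, 20:80] += 10.0           # hypothetical forehead box
forehead = (0, 30, 20, 80)
delta = region_mean_grey(after, forehead) - region_mean_grey(before, forehead)
print(delta)  # → 10.0
```

    Repeating this per region (forehead, glabella, canthus, mouth corners, ...) and testing the deltas statistically mirrors the analysis the abstract describes.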

  15. Effects of Objective 3-Dimensional Measures of Facial Shape and Symmetry on Perceptions of Facial Attractiveness.

    PubMed

    Hatch, Cory D; Wehby, George L; Nidey, Nichole L; Moreno Uribe, Lina M

    2017-09-01

    Meeting patient desires for enhanced facial esthetics requires that providers have standardized and objective methods to measure esthetics. The authors evaluated the effects of objective 3-dimensional (3D) facial shape and asymmetry measurements derived from 3D facial images on perceptions of facial attractiveness. The 3D facial images of 313 adults in Iowa were digitized with 32 landmarks, and objective 3D facial measurements capturing symmetric and asymmetric components of shape variation, centroid size, and fluctuating asymmetry were obtained from the 3D coordinate data using geometric morphometric analyses. Frontal and profile images of study participants were rated for facial attractiveness by 10 volunteers (5 women and 5 men) on a 5-point Likert scale and a visual analog scale. Multivariate regression was used to identify the effects of the objective 3D facial measurements on attractiveness ratings. Several objective 3D facial measurements had marked effects on attractiveness ratings. Shorter facial heights with protrusive chins, midface retrusion, faces with protrusive noses and thin lips, flat mandibular planes with deep labiomental folds, any cants of the lip commissures and floor of the nose, larger faces overall, and increased fluctuating asymmetry were rated as significantly (P < .001) less attractive. Perceptions of facial attractiveness can be explained by specific 3D measurements of facial shapes and fluctuating asymmetry, which have important implications for clinical practice and research. Copyright © 2017 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.
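
    The regression of ratings on objective measures can be sketched with simulated data. The two predictors, the -0.8 coefficient, and the noise level below are assumptions for illustration, not the study's estimates; the point is only that ordinary least squares recovers the effect of each measure on the rating.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 200
# Hypothetical objective 3D measures:
centroid_size = rng.normal(100, 5, n)    # overall face size
fluct_asym = rng.normal(2, 0.5, n)       # fluctuating asymmetry
# Simulated ratings: asymmetry lowers attractiveness; size has no effect.
rating = 4.0 - 0.8 * fluct_asym + rng.normal(scale=0.1, size=n)

X = np.column_stack([centroid_size, fluct_asym])
model = LinearRegression().fit(X, rating)
print(model.coef_[1])  # close to the simulated effect of -0.8
```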

  16. Comparison of 3D Scanning Versus 2D Photography for the Identification of Facial Soft-Tissue Landmarks.

    PubMed

    Zogheib, T; Jacobs, R; Bornstein, M M; Agbaje, J O; Anumendem, D; Klazen, Y; Politis, C

    2018-01-01

    Three-dimensional facial scanning is an innovation that provides opportunity for digital data acquisition, smile analysis, and communication of treatment plans and outcomes with patients. To assess the applicability of 3D facial scanning as compared to 2D clinical photography. The sample consisted of thirty Caucasians aged between 25 and 50 years without any dentofacial deformities. Fifteen soft-tissue facial landmarks were identified twice by 3 observers on 2D and 3D images of the 30 subjects. Five linear proportions and nine angular measurements were established in the orbital, nasal and oral regions. These data were compared to anthropometric norms of young Caucasians. Furthermore, a questionnaire was completed by 14 other observers according to their personal judgment of the 2D and 3D images. Quantitatively, proportions linking the three facial regions in 3D were closer to the clinical standard (error rates of 3.3% for 2D and 1.8% for 3D). Qualitatively, in 67% of the cases, observers were as confident about 3D as they were about 2D. The intraclass correlation coefficient (ICC) revealed better agreement between observers in 3D for the questions related to facial form, lip step and chin posture. Laser facial scanning could be a useful and reliable tool to analyze the circumoral region for orthodontic and orthognathic treatments, as well as for plastic surgery planning and outcome assessment.

  17. Facial Nerve Paralysis due to a Pleomorphic Adenoma with the Imaging Characteristics of a Facial Nerve Schwannoma

    PubMed Central

    Nader, Marc-Elie; Bell, Diana; Sturgis, Erich M.; Ginsberg, Lawrence E.; Gidley, Paul W.

    2014-01-01

    Background Facial nerve paralysis in a patient with a salivary gland mass usually denotes malignancy. However, facial paralysis can also be caused by benign salivary gland tumors. Methods We present a case of facial nerve paralysis due to a benign salivary gland tumor that had the imaging characteristics of an intraparotid facial nerve schwannoma. Results The patient presented to our clinic 4 years after the onset of facial nerve paralysis initially diagnosed as Bell palsy. Computed tomography demonstrated filling and erosion of the stylomastoid foramen with a mass on the facial nerve. Postoperative histopathology showed the presence of a pleomorphic adenoma. Facial paralysis was thought to be caused by extrinsic nerve compression. Conclusions This case illustrates the difficulty of accurate preoperative diagnosis of a parotid gland mass and reinforces the concept that facial nerve paralysis in the context of salivary gland tumors may not always indicate malignancy. PMID:25083397

  18. Facial Nerve Paralysis due to a Pleomorphic Adenoma with the Imaging Characteristics of a Facial Nerve Schwannoma.

    PubMed

    Nader, Marc-Elie; Bell, Diana; Sturgis, Erich M; Ginsberg, Lawrence E; Gidley, Paul W

    2014-08-01

    Background Facial nerve paralysis in a patient with a salivary gland mass usually denotes malignancy. However, facial paralysis can also be caused by benign salivary gland tumors. Methods We present a case of facial nerve paralysis due to a benign salivary gland tumor that had the imaging characteristics of an intraparotid facial nerve schwannoma. Results The patient presented to our clinic 4 years after the onset of facial nerve paralysis initially diagnosed as Bell palsy. Computed tomography demonstrated filling and erosion of the stylomastoid foramen with a mass on the facial nerve. Postoperative histopathology showed the presence of a pleomorphic adenoma. Facial paralysis was thought to be caused by extrinsic nerve compression. Conclusions This case illustrates the difficulty of accurate preoperative diagnosis of a parotid gland mass and reinforces the concept that facial nerve paralysis in the context of salivary gland tumors may not always indicate malignancy.

  19. Facial neuropathy with imaging enhancement of the facial nerve: a case report

    PubMed Central

    Mumtaz, Sehreen; Jensen, Matthew B

    2014-01-01

    A young woman developed unilateral facial neuropathy 2 weeks after a motor vehicle collision involving fractures of the skull and mandible. MRI showed contrast enhancement of the facial nerve. We review the literature describing facial neuropathy after trauma and facial nerve enhancement patterns with different causes of facial neuropathy. PMID:25574155

  20. Two Japanese patients with Leigh syndrome caused by novel SURF1 mutations.

    PubMed

    Tanigawa, Junpei; Kaneko, Kaori; Honda, Masakazu; Harashima, Hiroko; Murayama, Kei; Wada, Takahito; Takano, Kyoko; Iai, Mizue; Yamashita, Sumimasa; Shimbo, Hiroko; Aida, Noriko; Ohtake, Akira; Osaka, Hitoshi

    2012-11-01

    We report two patients with Leigh syndrome who showed a combination of facial dysmorphism and MRI findings indicating a SURF1 deficiency, which was confirmed by sequence analysis. Case 1 is a 3-year-old girl with failure to thrive and developmental delay. She presented with tachypnea at rest and displayed facial dysmorphism including frontal bossing, lateral displacement of the inner canthi, esotropia, maxillary hypoplasia, a slightly upturned nostril, and hypertrichosis dominant on the forehead and extremities. Case 2 is an 8-year-old boy with respiratory failure. He had been diagnosed with selective complex IV deficiency and displayed facial dysmorphism and hypertrichosis. Since both patients displayed characteristic facial dysmorphism and MRI findings, we sequenced the SURF1 gene and identified two heterozygous mutations, c.49+1G>T and c.752_753del, in Case 1, and homozygous c.743C>A in Case 2. For patients with Leigh syndrome showing this facial dysmorphism and hypertrichosis, sequence analysis of the SURF1 gene may be useful. Copyright © 2012 The Japanese Society of Child Neurology. Published by Elsevier B.V. All rights reserved.

  1. Use of Computer Imaging in Rhinoplasty: A Survey of the Practices of Facial Plastic Surgeons.

    PubMed

    Singh, Prabhjyot; Pearlman, Steven

    2017-08-01

    The objective of this study was to quantify the use of computer imaging by facial plastic surgeons. AAFPRS facial plastic surgeons were surveyed about their use of computer imaging during rhinoplasty consultations. The survey collected information about surgeon demographics, practice settings, practice patterns, and rates of computer imaging (CI) for primary and revision rhinoplasty. For those surgeons who used CI, additional information was also collected, including who performed the imaging and whether the patient was given the morphed images after the consultation. A total of 238 out of 1200 (19.8%) facial plastic surgeons responded to the survey. Of those who responded, 195 surgeons (83%) were board certified by the American Board of Facial Plastic and Reconstructive Surgery (ABFPRS). The majority of respondents (150 surgeons, 63%) used CI during rhinoplasty consultation. Of the surgeons who used CI, 92% performed the image morphing themselves. Approximately two-thirds of surgeons who used CI gave their patients a printout of the morphed images after the consultation. Computer imaging is a frequently utilized tool for facial plastic surgeons during cosmetic consultations with patients. Based on the results of this study, the majority of facial plastic surgeons who use CI do so for both primary and revision rhinoplasty. As more sophisticated systems become available, utilization of CI modalities may increase, providing the surgeon with further tools to use at his or her disposal during discussion of aesthetic surgery. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266 .

  2. Early Changes in Facial Profile Following Structured Filler Rhinoplasty: An Anthropometric Analysis Using a 3-Dimensional Imaging System.

    PubMed

    Rho, Nark Kyoung; Park, Je Young; Youn, Choon Shik; Lee, Soo-Keun; Kim, Hei Sung

    2017-02-01

    Quantitative measurements are important for objective evaluation of postprocedural outcomes. Three-dimensional (3D) imaging is known as an objective, accurate, and reliable system for quantifying the soft tissue dimensions of the face. To compare the preprocedural and acute postprocedural nasofrontal, nasofacial, nasolabial, and nasomental angles, early changes in the height and length of the nose, and nasal volume, using 3D surface imaging with a light-emitting diode. The 3D imaging analysis of 40 Korean women who underwent structured nonsurgical rhinoplasty was conducted. The 3D assessment was performed before, immediately after, 1 day after, and 2 weeks after filler rhinoplasty with a Morpheus 3D scanner (Morpheus Co., Seoul, Korea). There were significant early changes in facial profile following nonsurgical rhinoplasty with a hyaluronic acid filler. An average increase of 6.03° in the nasofrontal angle, an increase of 3.79° in the nasolabial angle, an increase of 0.88° in the nasomental angle, and a reduction of 0.83° in the nasofacial angle were observed at 2 weeks of follow-up. Increases in nasal volume and nose height were also found after 2 weeks. Side effects, such as hematoma, nodules, and skin necrosis, were not observed. The 3D surface imaging quantitatively demonstrated the early changes in facial profile after structured filler rhinoplasty. The study results describe significant acute spatial changes in nose shape following treatment.
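
    Profile angles such as the nasofrontal or nasolabial angle are vertex angles between rays to neighboring landmarks. A minimal sketch; the landmark coordinates below are hypothetical, chosen to give a right angle for easy checking, and are not values from the study.

```python
import numpy as np

def profile_angle(a, vertex, b):
    """Angle in degrees at `vertex` between rays toward landmarks a and b,
    as used for nasofrontal/nasolabial-style profile measurements."""
    u = np.asarray(a, dtype=float) - np.asarray(vertex, dtype=float)
    v = np.asarray(b, dtype=float) - np.asarray(vertex, dtype=float)
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

# Hypothetical 3D landmark positions (mm): orthogonal rays for illustration.
ang = profile_angle([0, 10, 0], [0, 0, 0], [10, 0, 0])
print(f"{ang:.1f}")  # → 90.0
```

    Subtracting such an angle computed on pre- and post-procedure scans yields the kind of degree-level change the study reports.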

  3. Laptop Computer - Based Facial Recognition System Assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    R. A. Cain; G. B. Singleton

    2001-03-01

    The objective of this project was to assess the performance of the leading commercial-off-the-shelf (COTS) facial recognition software package when used as a laptop application. We performed the assessment to determine the system's usefulness for enrolling facial images in a database from remote locations and conducting real-time searches against a database of previously enrolled images. The assessment involved creating a database of 40 images and conducting 2 series of tests to determine the product's ability to recognize and match subject faces under varying conditions. This report describes the test results and includes a description of the factors affecting the results. After an extensive market survey and a review of the Facial Recognition Vendor Test 2000 (FRVT 2000), we selected Visionics' FaceIt® software package for evaluation. The FRVT 2000 was co-sponsored by the US Department of Defense (DOD) Counterdrug Technology Development Program Office, the National Institute of Justice, and the Defense Advanced Research Projects Agency (DARPA). Administered in May-June 2000, the FRVT 2000 assessed the capabilities of facial recognition systems that were currently available for purchase on the US market. Our selection of the Visionics product does not indicate that it is the "best" facial recognition software package for all uses; it was the most appropriate package for the specific applications and requirements of this assessment. The system configuration was evaluated for effectiveness in identifying individuals by searching facial images captured from video displays against those stored in a facial image database. An additional criterion was that the system be capable of operating discretely.
For this application, an operational facial recognition system would consist of one central computer hosting the master image database, with multiple standalone systems configured with duplicates of the master operating in remote locations. Remote users could perform real-time searches where network connectivity is not available. As images are enrolled at the remote locations, periodic database synchronization is necessary.

  4. Facial Phenotyping by Quantitative Photography Reflects Craniofacial Morphology Measured on Magnetic Resonance Imaging in Icelandic Sleep Apnea Patients

    PubMed Central

    Sutherland, Kate; Schwab, Richard J.; Maislin, Greg; Lee, Richard W.W.; Benedikstdsottir, Bryndis; Pack, Allan I.; Gislason, Thorarinn; Juliusson, Sigurdur; Cistulli, Peter A.

    2014-01-01

    Study Objectives: (1) To determine whether facial phenotype, measured by quantitative photography, relates to underlying craniofacial obstructive sleep apnea (OSA) risk factors, measured with magnetic resonance imaging (MRI); (2) to assess whether these associations are independent of body size and obesity. Design: Cross-sectional cohort. Setting: Landspitali, The National University Hospital, Iceland. Participants: One hundred forty patients (87.1% male) from the Icelandic Sleep Apnea Cohort who had both calibrated frontal and profile craniofacial photographs and upper airway MRI. Mean ± standard deviation age 56.1 ± 10.4 y, body mass index 33.5 ± 5.05 kg/m2, with on-average severe OSA (apnea-hypopnea index 45.4 ± 19.7 h-1). Interventions: N/A. Measurements and Results: Relationships between surface facial dimensions (photos) and facial bony dimensions and upper airway soft-tissue volumes (MRI) were assessed using canonical correlation analysis. The photographic and MRI craniofacial datasets were related through four significant canonical correlations, primarily driven by measurements of (1) maxillary-mandibular relationship (r = 0.8, P < 0.0001), (2) lower face height (r = 0.76, P < 0.0001), (3) mandibular length (r = 0.67, P < 0.0001), and (4) tongue volume (r = 0.52, P = 0.01). Correlations 1, 2, and 3 were unchanged when controlled for weight and neck and waist circumference. However, tongue volume was no longer significant, suggesting facial dimensions relate to tongue volume as a result of obesity. Conclusions: Significant associations were found between craniofacial variable sets from facial photography and MRI. This study confirms that facial photographic phenotype reflects underlying aspects of craniofacial skeletal abnormalities associated with OSA. Therefore, facial photographic phenotyping may be a useful tool to assess intermediate phenotypes for OSA, particularly in large-scale studies.
Citation: Sutherland K, Schwab RJ, Maislin G, Lee RW, Benedikstdsottir B, Pack AI, Gislason T, Juliusson S, Cistulli PA. Facial phenotyping by quantitative photography reflects craniofacial morphology measured on magnetic resonance imaging in icelandic sleep apnea patients. SLEEP 2014;37(5):959-968. PMID:24790275
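    Canonical correlation analysis, used above to relate the photographic and MRI variable sets, finds paired linear combinations of two datasets with maximal correlation. A minimal NumPy sketch (with synthetic stand-ins for the photo and MRI measurements, not the study's data): the canonical correlations are the singular values of the whitened cross-covariance matrix.

```python
import numpy as np

def canonical_correlations(X, Y):
    """Canonical correlations between two multivariate datasets (rows = subjects)."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Sxx, Syy = X.T @ X / (n - 1), Y.T @ Y / (n - 1)
    Sxy = X.T @ Y / (n - 1)

    def inv_sqrt(S):
        # symmetric inverse square root via eigendecomposition
        w, V = np.linalg.eigh(S)
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

    # singular values of the whitened cross-covariance are the canonical r's
    K = inv_sqrt(Sxx) @ Sxy @ inv_sqrt(Syy)
    return np.linalg.svd(K, compute_uv=False)

# Synthetic stand-ins: 140 "subjects", 3 photo dimensions, 3 MRI dimensions
rng = np.random.default_rng(0)
photo = rng.normal(size=(140, 3))
mri = photo @ rng.normal(size=(3, 3)) + 0.1 * rng.normal(size=(140, 3))
rho = canonical_correlations(photo, mri)  # sorted in descending order
```

    With strongly related sets, the leading canonical correlation approaches 1, analogous to the r = 0.8 reported for the maxillary-mandibular relationship.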

  5. Extraction and representation of common feature from uncertain facial expressions with cloud model.

    PubMed

    Wang, Shuliang; Chi, Hehua; Yuan, Hanning; Geng, Jing

    2017-12-01

    Human facial expressions are a key ingredient in conveying an individual's innate emotions during communication. However, the variation of facial expressions affects the reliable identification of human emotions. In this paper, we present a cloud model to extract facial features for representing human emotion. First, the uncertainties in facial expression are analyzed in the context of the cloud model. The feature extraction and representation algorithm is then established using cloud generators. With the forward cloud generator, arbitrarily many facial expression images can be regenerated to visually represent the three extracted features, each of which plays a different role. The effectiveness of the computing model is tested on the Japanese Female Facial Expression database, from which three common features are extracted across seven facial expression images. Finally, conclusions and remarks are given.
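    The forward cloud generator referenced above is the standard normal-cloud construction: from expectation Ex, entropy En, and hyper-entropy He, it draws "cloud drops" with an associated certainty degree. A minimal sketch (the parameter values are hypothetical, not taken from the paper):

```python
import numpy as np

def forward_cloud(Ex, En, He, n, rng):
    """Forward normal cloud generator: n drops (x_i, mu_i) from (Ex, En, He)."""
    En_i = rng.normal(En, He, size=n)              # per-drop entropy sample
    x = rng.normal(Ex, np.abs(En_i))               # cloud drops
    mu = np.exp(-(x - Ex) ** 2 / (2 * En_i ** 2))  # certainty degree of each drop
    return x, mu

# Hypothetical cloud parameters for one normalized facial feature
rng = np.random.default_rng(42)
drops, certainty = forward_cloud(Ex=0.5, En=0.1, He=0.01, n=5000, rng=rng)
```

    Regenerating drops this way is what allows the method to produce as many representative feature samples (and hence re-generated expression images) as desired.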

  6. Changing perception: facial reanimation surgery improves attractiveness and decreases negative facial perception.

    PubMed

    Dey, Jacob K; Ishii, Masaru; Boahene, Kofi D O; Byrne, Patrick J; Ishii, Lisa E

    2014-01-01

    Determine the effect of facial reanimation surgery on observer-graded attractiveness and negative facial perception of patients with facial paralysis. Randomized controlled experiment. Ninety observers viewed images of paralyzed faces, smiling and in repose, before and after reanimation surgery, as well as normal comparison faces. Observers rated the attractiveness of each face and characterized the paralyzed faces by rating severity, disfigured/bothersome, and importance to repair. Iterated factor analysis indicated these highly correlated variables measure a common domain, so they were combined to create the disfigured, important to repair, bothersome, severity (DIBS) factor score. Mixed effects linear regression determined the effect of facial reanimation surgery on attractiveness and DIBS score. Facial paralysis induces an attractiveness penalty of 2.51 on a 10-point scale for faces in repose and 3.38 for smiling faces. Mixed effects linear regression showed that reanimation surgery improved attractiveness for faces both in repose and smiling by 0.84 (95% confidence interval [CI]: 0.67, 1.01) and 1.24 (95% CI: 1.07, 1.42) respectively. Planned hypothesis tests confirmed statistically significant differences in attractiveness ratings between postoperative and normal faces, indicating attractiveness was not completely normalized. Regression analysis also showed that reanimation surgery decreased DIBS by 0.807 (95% CI: 0.704, 0.911) for faces in repose and 0.989 (95% CI: 0.886, 1.093), an entire standard deviation, for smiling faces. Facial reanimation surgery increases attractiveness and decreases negative facial perception of patients with facial paralysis. These data emphasize the need to optimize reanimation surgery to restore not only function, but also symmetry and cosmesis to improve facial perception and patient quality of life. © 2013 The American Laryngological, Rhinological and Otological Society, Inc.

  7. An Assessment of How Facial Mimicry Can Change Facial Morphology: Implications for Identification.

    PubMed

    Gibelli, Daniele; De Angelis, Danilo; Poppa, Pasquale; Sforza, Chiarella; Cattaneo, Cristina

    2017-03-01

    The assessment of facial mimicry is important in forensic anthropology, and the application of modern 3D image acquisition systems may aid the analysis of facial surfaces. This study aimed to present a novel method for comparing 3D profiles across different facial expressions. Ten male adults, aged between 30 and 40 years, underwent acquisitions by stereophotogrammetry (VECTRA-3D ® ) with different expressions (neutral, happy, sad, angry, surprised). Each expression acquisition was then superimposed on the neutral one according to nine landmarks, and the root mean square (RMS) distance between the two expressions was calculated. The greatest difference from the neutral standard was shown by the happy expression (RMS 4.11 mm), followed by the surprised (RMS 2.74 mm), sad (RMS 1.3 mm), and angry (RMS 1.21 mm) expressions. This pilot study shows that 3D-3D superimposition may provide reliable results concerning facial alteration due to mimicry. © 2016 American Academy of Forensic Sciences.
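    The landmark-based superimposition plus RMS comparison described above can be sketched with a rigid (rotation + translation) alignment followed by the RMS of residual distances. This is an illustrative implementation using the Kabsch algorithm, not the VECTRA software's method, and the landmark sets below are synthetic:

```python
import numpy as np

def rms_after_superimposition(A, B):
    """Rigidly superimpose landmark set B onto A and return the RMS distance."""
    A, B = np.asarray(A, float), np.asarray(B, float)
    cA, cB = A.mean(axis=0), B.mean(axis=0)
    A0, B0 = A - cA, B - cB
    U, _, Vt = np.linalg.svd(B0.T @ A0)       # Kabsch algorithm
    d = np.sign(np.linalg.det(U @ Vt))
    R = U @ np.diag([1.0, 1.0, d]) @ Vt       # proper rotation (no reflection)
    B_aligned = B0 @ R + cA
    return np.sqrt(np.mean(np.sum((A - B_aligned) ** 2, axis=1)))

# Synthetic "neutral" landmarks vs the same nine points rigidly moved:
rng = np.random.default_rng(1)
neutral = rng.normal(size=(9, 3))
theta = np.deg2rad(20)
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
moved = neutral @ Rz.T + np.array([5.0, -2.0, 1.0])
rms = rms_after_superimposition(neutral, moved)  # near zero: pure rigid motion
```

    A purely rigid displacement aligns perfectly (RMS near zero); a genuine expression change leaves a nonzero residual, like the 4.11 mm reported for the happy expression.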

  8. Symmetrical and Asymmetrical Interactions between Facial Expressions and Gender Information in Face Perception.

    PubMed

    Liu, Chengwei; Liu, Ying; Iqbal, Zahida; Li, Wenhui; Lv, Bo; Jiang, Zhongqing

    2017-01-01

    To investigate the interaction between facial expressions and facial gender information during face perception, the present study matched the intensities of the two types of information in face images and then adopted the orthogonal condition of the Garner Paradigm: participants judged the gender and expression of faces whose gender and expression were varied orthogonally. Gender and expression processing displayed a mutual interaction. On the one hand, the judgment of angry expressions occurred faster when presented with male facial images; on the other hand, the classification of the female gender occurred faster when presented with a happy facial expression than with an angry facial expression. According to the event-related potential results, expression classification was influenced by gender during the face structural processing stage (as indexed by N170), which indicates the promotion or interference of facial gender with the coding of facial expression features. However, gender processing was affected by facial expressions in more stages, including the early (P1) and late (LPC) stages of perceptual processing, reflecting that emotional expression influences gender processing mainly by directing attention.

  9. A study of patient facial expressivity in relation to orthodontic/surgical treatment.

    PubMed

    Nafziger, Y J

    1994-09-01

    A dynamic analysis of the faces of patients seeking aesthetic restoration of facial aberrations with orthognathic treatment requires, besides the routine static records (study models, photographs, and cephalometric tracings), the study of their facial expressions. To classify the units of expressive facial behavior, the mobility of the face was studied with the aid of the facial action coding system (FACS) created by Ekman and Friesen. Using video recordings of faces and photographic images taken from those recordings, the authors modified a technique of facial analysis structured on visual observation of the anatomic basis of movement. The technique is based on defining individual facial expressions and then codifying them through minimal, anatomic action units, which combine to form facial expressions. With the help of FACS, the facial expressions of 18 patients before and after orthognathic surgery and six control subjects without dentofacial deformation were studied. A total of 6278 facial expressions were registered, from which 18,844 action units were further defined. A classification of the facial expressions made by subject groups and repeated in quantified time frames allowed the establishment of "rules" or "norms" relating to expression, enabling comparisons of facial expressiveness between patients and control subjects. This study indicates that the facial expressions of the patients were more similar to those of the controls after orthognathic surgery. It was possible to distinguish changes in facial expressivity in patients after dentofacial surgery; the type and degree of change depended on the facial structure before surgery.
The changes noted tended toward function identical to that of subjects who do not suffer from dysmorphosis and toward greater lip competence, particularly in the function of the orbicular muscle of the lips, with reduced compensatory activity of the lower lip and chin. The results of our study are supported by the clinical observations and suggest that the FACS technique should be able to provide a coding for the study of facial expression.

  10. A new approach for the analysis of facial growth and age estimation: Iris ratio

    PubMed Central

    Machado, Carlos Eduardo Palhares; Flores, Marta Regina Pinheiro; Lima, Laíse Nascimento Correia; Tinoco, Rachel Lima Ribeiro; Bezerra, Ana Cristina Barreto; Evison, Martin Paul; Guimarães, Marco Aurélio

    2017-01-01

    The study of facial growth is explored in many fields of science, including anatomy, genetics, and forensics. In the field of forensics, it acts as a valuable tool for combating child pornography. The present research proposes a new method, based on relative measurements and fixed references of the human face—specifically, measurements of the diameter of the iris (iris ratio)—for the analysis of facial growth in association with age in children and sub-adults. The experimental sample consisted of digital photographs of 1000 Brazilian subjects, aged between 6 and 22 years, distributed equally by sex and divided into five specific age groups (6, 10, 14, 18, and 22 year olds ± one month). The software package SAFF-2D® (Forensic Facial Analysis System, Brazilian Federal Police, Brazil) was used for positioning 11 landmarks on the images. Ten measurements were calculated and used as fixed references to evaluate the growth of the other measurements for each age group, as well as the accumulated growth (6–22 years old). The Intraclass Correlation Coefficient (ICC) was applied for the evaluation of intra-examiner and inter-examiner reliability within a specific set of images. Pearson’s Correlation Coefficient was used to assess the association between each measurement and the respective age groups. ANOVA and post-hoc Tukey tests were used to search for statistical differences between the age groups. The outcomes indicated that facial structures grow with different timing in children and adolescents. Moreover, the growth allometry expressed in this study may be used to understand which structures show more or less proportional variation as a function of age over the ranges studied. The diameter of the iris was found to be the most stable measurement and represented the best cephalometric measurement to serve as a fixed reference for facial growth ratios (or indices).
The method described shows promising potential for forensic applications, especially as part of the armamentarium against crimes involving child pornography and child abuse. PMID:28686631
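    The core idea above, normalizing facial distances by the near-constant iris diameter, reduces to simple ratio arithmetic on landmark coordinates. A hypothetical sketch (the landmark names, coordinates, and chosen distance pairs are illustrative, not the study's protocol):

```python
import numpy as np

def iris_ratios(landmarks, iris_left, iris_right):
    """Express inter-landmark distances as multiples of the mean iris diameter.
    `landmarks` maps names to (x, y) pixel coordinates; each iris_* argument is a
    (medial_edge, lateral_edge) pair of points on that iris."""
    def dist(p, q):
        return float(np.hypot(p[0] - q[0], p[1] - q[1]))
    iris_d = 0.5 * (dist(*iris_left) + dist(*iris_right))
    pairs = [("nasion", "subnasale"), ("subnasale", "pogonion")]
    return {f"{a}-{b}": dist(landmarks[a], landmarks[b]) / iris_d
            for a, b in pairs}

# Hypothetical pixel coordinates from a frontal photograph
lm = {"nasion": (100, 80), "subnasale": (100, 140), "pogonion": (100, 200)}
ratios = iris_ratios(lm, ((80, 80), (92, 80)), ((108, 80), (120, 80)))
```

    Because the ratios are unitless, they are comparable across photographs taken at different scales, which is what makes the iris a useful fixed reference for growth indices.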

  11. A new approach for the analysis of facial growth and age estimation: Iris ratio.

    PubMed

    Machado, Carlos Eduardo Palhares; Flores, Marta Regina Pinheiro; Lima, Laíse Nascimento Correia; Tinoco, Rachel Lima Ribeiro; Franco, Ademir; Bezerra, Ana Cristina Barreto; Evison, Martin Paul; Guimarães, Marco Aurélio

    2017-01-01

    The study of facial growth is explored in many fields of science, including anatomy, genetics, and forensics. In the field of forensics, it acts as a valuable tool for combating child pornography. The present research proposes a new method, based on relative measurements and fixed references of the human face-specifically, measurements of the diameter of the iris (iris ratio)-for the analysis of facial growth in association with age in children and sub-adults. The experimental sample consisted of digital photographs of 1000 Brazilian subjects, aged between 6 and 22 years, distributed equally by sex and divided into five specific age groups (6, 10, 14, 18, and 22 year olds ± one month). The software package SAFF-2D® (Forensic Facial Analysis System, Brazilian Federal Police, Brazil) was used for positioning 11 landmarks on the images. Ten measurements were calculated and used as fixed references to evaluate the growth of the other measurements for each age group, as well as the accumulated growth (6-22 years old). The Intraclass Correlation Coefficient (ICC) was applied for the evaluation of intra-examiner and inter-examiner reliability within a specific set of images. Pearson's Correlation Coefficient was used to assess the association between each measurement and the respective age groups. ANOVA and post-hoc Tukey tests were used to search for statistical differences between the age groups. The outcomes indicated that facial structures grow with different timing in children and adolescents. Moreover, the growth allometry expressed in this study may be used to understand which structures show more or less proportional variation as a function of age over the ranges studied. The diameter of the iris was found to be the most stable measurement and represented the best cephalometric measurement to serve as a fixed reference for facial growth ratios (or indices).
The method described shows promising potential for forensic applications, especially as part of the armamentarium against crimes involving child pornography and child abuse.

  12. Brief report: Representational momentum for dynamic facial expressions in pervasive developmental disorder.

    PubMed

    Uono, Shota; Sato, Wataru; Toichi, Motomi

    2010-03-01

    Individuals with pervasive developmental disorder (PDD) have difficulty with social communication via emotional facial expressions, but behavioral studies involving static images have reported inconsistent findings about emotion recognition. We investigated whether dynamic presentation of facial expression would enhance subjective perception of expressed emotion in 13 individuals with PDD and 13 typically developing controls. We presented dynamic and static emotional (fearful and happy) expressions. Participants were asked to match a changeable emotional face display with the last presented image. The results showed that both groups perceived the last image of dynamic facial expression to be more emotionally exaggerated than the static facial expression. This finding suggests that individuals with PDD have an intact perceptual mechanism for processing dynamic information in another individual's face.

  13. Empirical mode decomposition-based facial pose estimation inside video sequences

    NASA Astrophysics Data System (ADS)

    Qing, Chunmei; Jiang, Jianmin; Yang, Zhijing

    2010-03-01

    We describe a new pose-estimation algorithm that integrates the strengths of both empirical mode decomposition (EMD) and mutual information. While mutual information is exploited to measure the similarity between facial images to estimate poses, EMD is exploited to decompose input facial images into a number of intrinsic mode function (IMF) components, which redistribute the effects of noise, expression changes, and illumination variations such that, when the input facial image is described by the selected IMF components, all these negative effects are minimized. Extensive experiments were carried out in comparison with existing representative techniques, and the results show that the proposed algorithm achieves better pose-estimation performance with robustness to noise corruption, illumination variation, and facial expressions.
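    The mutual-information similarity measure used above can be estimated from a joint intensity histogram of two images. A minimal histogram-based sketch (an illustrative estimator, not the paper's implementation; the images below are synthetic):

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information (nats) between two equally sized grayscale images,
    estimated from their joint intensity histogram."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of image a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of image b
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# Synthetic "images": a reference, a noisy copy, and an unrelated image
rng = np.random.default_rng(0)
img = rng.random((64, 64))
noisy = np.clip(img + 0.05 * rng.normal(size=img.shape), 0, 1)
unrelated = rng.random((64, 64))
```

    Images of a face in similar poses share more intensity structure and hence score higher mutual information than dissimilar poses, which is the basis for choosing the best-matching pose.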

  14. Noncontact measurement of heart rate using facial video illuminated under natural light and signal weighted analysis.

    PubMed

    Yan, Yonggang; Ma, Xiang; Yao, Lifeng; Ouyang, Jianfei

    2015-01-01

    Non-contact, remote measurement of vital physical signals is important for reliable and comfortable physiological self-assessment. We present a novel optical imaging-based method to measure such signals. Using a digital camera and ambient light, cardiovascular pulse waves were accurately extracted from color facial videos, and vital physiological parameters such as heart rate (HR) were measured using a proposed signal-weighted analysis method. The measured HRs were consistent with those measured simultaneously with reference technologies (r = 0.94, p < 0.001 for HR). The results show that the imaging-based method is suitable for measuring physiological parameters and provides a reliable and comfortable measurement mode. The study lays a physical foundation for the noninvasive measurement of multiple physiological parameters in humans.
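    A common baseline for this kind of camera-based HR measurement (a generic sketch, not the paper's signal-weighted method) is to average the green channel over a facial region per frame and pick the dominant spectral peak in the cardiac band:

```python
import numpy as np

def estimate_hr_bpm(green_trace, fps):
    """Estimate heart rate from the mean green-channel trace of a facial ROI:
    remove the mean, then pick the dominant FFT peak in the 0.75-4 Hz band."""
    x = np.asarray(green_trace, float)
    x = x - x.mean()
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fps)
    power = np.abs(np.fft.rfft(x)) ** 2
    band = (freqs >= 0.75) & (freqs <= 4.0)   # plausible cardiac frequencies
    return 60.0 * freqs[band][np.argmax(power[band])]

# Synthetic 20 s trace at 30 fps: a 1.2 Hz pulse (72 bpm) plus drift and noise
fps = 30
t = np.arange(0, 20, 1 / fps)
rng = np.random.default_rng(0)
trace = (0.02 * np.sin(2 * np.pi * 1.2 * t)   # pulse component
         + 0.01 * t                            # slow illumination drift
         + 0.005 * rng.normal(size=t.size))    # sensor noise
hr = estimate_hr_bpm(trace, fps)
```

    The band limits restrict the search to physiologically plausible rates (45-240 bpm); longer windows give finer frequency resolution.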

  15. Reduced white matter integrity and facial emotion perception in never-medicated patients with first-episode schizophrenia: A diffusion tensor imaging study.

    PubMed

    Zhao, Xiaoxin; Sui, Yuxiu; Yao, Jingjing; Lv, Yiding; Zhang, Xinyue; Jin, Zhuma; Chen, Lijun; Zhang, Xiangrong

    2017-07-03

    Facial emotion perception is impaired in schizophrenia. Although the pathology of schizophrenia is thought to involve abnormality in white matter (WM), few studies have examined the correlation between facial emotion perception and WM abnormalities in never-medicated patients with first-episode schizophrenia. The present study tested associations between facial emotion perception and WM integrity in order to investigate the neural basis of impaired facial emotion perception in schizophrenia. Sixty-three schizophrenic patients and thirty control subjects underwent facial emotion categorization (FEC). The FEC data were fitted with a logistic function model and subsequently analyzed by independent-samples t tests, with the shift point and slope as outcome measurements. Severity of symptoms was measured using a five-factor model of the Positive and Negative Syndrome Scale (PANSS). Voxelwise group comparison of WM fractional anisotropy (FA) was performed using tract-based spatial statistics (TBSS). The correlation between impaired facial emotion perception and FA reduction was examined in patients using simple regression analysis within brain areas that showed a significant FA reduction in patients compared with controls. The same correlation analysis was also performed for control subjects in the whole brain. The patients with schizophrenia showed a higher shift point and a steeper slope than control subjects in FEC. The patients showed a significant FA reduction in left deep WM in the parietal, temporal, and occipital lobes, a small portion of the corpus callosum (CC), and the corona radiata. In voxelwise correlation analysis, we found that facial emotion perception significantly correlated with reduced FA in various WM regions, including the left forceps major (FM), inferior longitudinal fasciculus (ILF), inferior fronto-occipital fasciculus (IFOF), and left splenium of the CC.
The correlation analyses in healthy controls revealed no significant correlation of FA with the FEC task. These results suggest that disrupted WM integrity in these regions constitutes a potential neural basis for the facial emotion perception impairments in schizophrenia. Copyright © 2017 Elsevier Inc. All rights reserved.
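    The logistic function model for the FEC data, with shift point and slope as the fitted parameters, can be sketched with a standard least-squares fit. This is an illustrative reconstruction on synthetic data (the morph axis, true parameter values, and noise level are hypothetical):

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, shift, slope):
    """Psychometric curve: proportion of one emotion response along a morph axis.
    `shift` is the category boundary; `slope` is the steepness at the boundary."""
    return 1.0 / (1.0 + np.exp(-slope * (x - shift)))

# Hypothetical FEC data for one participant: morph level 0 (emotion A) to 1
# (emotion B) vs proportion of "emotion B" responses
morph = np.linspace(0.0, 1.0, 11)
true_shift, true_slope = 0.55, 12.0
rng = np.random.default_rng(0)
prop_b = logistic(morph, true_shift, true_slope) + 0.02 * rng.normal(size=morph.size)

(shift, slope), _ = curve_fit(logistic, morph, prop_b, p0=[0.5, 10.0])
```

    Fitting each subject separately yields per-subject shift points and slopes, which can then be compared between groups with independent-samples t tests as described above.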

  16. Performance of a Working Face Recognition Machine using Cortical Thought Theory

    DTIC Science & Technology

    1984-12-04

    been considered (2). Recommendations from Bledsoe's study included research on facial-recognition systems that are "completely automatic (remove the… C. L. Location of some facial features. Computer, Palo Alto: Panoramic Research, Aug 1966. 2. Bledsoe, W. W. Man-machine facial recognition: Is… "image?" It would seem that the location and size of the features left in this contrast-expanded image contain the essential information of facial

  17. Effect of frontal facial type and sex on preferred chin projection.

    PubMed

    Choi, Jin-Young; Kim, Taeyun; Kim, Hyung-Mo; Lee, Sang-Hoon; Cho, Il-Sik; Baek, Seung-Hak

    2017-03-01

    To investigate the effects of frontal facial type (FFT) and sex on preferred chin projection (CP) in three-dimensional (3D) facial images. Six 3D facial images were acquired using a 3D facial scanner (euryprosopic [Eury-FFT], mesoprosopic [Meso-FFT], and leptoprosopic [Lepto-FFT] for each sex). After normal CP in each 3D facial image was set to 10° of the facial profile angle (glabella-subnasale-pogonion), CPs were morphed by gradations of 2° from normal (moderately protrusive [6°], slightly protrusive [8°], slightly retrusive [12°], and moderately retrusive [14°]). Seventy-five dental students (48 men and 27 women) were asked to rate the CPs (6°, 8°, 10°, 12°, and 14°) from the most to least preferred in each 3D image. Statistical analyses included the Kolmogorov-Smirnov test, Kruskal-Wallis test, and Bonferroni correction. No significant difference was observed in the distribution of preferred CP in the same FFT between male and female evaluators. In Meso-FFT, the normal CP was the most preferred without any sex difference. However, in Eury-FFT, the slightly protrusive CP was favored in male 3D images, but the normal CP was preferred in female 3D images. In Lepto-FFT, the normal CP was favored in male 3D images, whereas the slightly retrusive CP was favored in female 3D images. The mean preferred CP angle differed significantly according to FFT (Eury-FFT: male, 8.7°, female, 9.9°; Meso-FFT: male, 9.8°, female, 10.7°; Lepto-FFT: male, 10.8°, female, 11.4°; p < 0.001). Our findings might serve as guidelines for setting the preferred CP according to FFT and sex.

  18. Is moral beauty different from facial beauty? Evidence from an fMRI study

    PubMed Central

    Wang, Tingting; Mo, Ce; Tan, Li Hai; Cant, Jonathan S.; Zhong, Luojin; Cupchik, Gerald

    2015-01-01

    Is moral beauty different from facial beauty? Two functional magnetic resonance imaging experiments were performed to answer this question. Experiment 1 investigated the network of moral aesthetic judgments and facial aesthetic judgments. Participants performed aesthetic judgments and gender judgments on both faces and scenes containing moral acts. The conjunction analysis of the contrasts ‘facial aesthetic judgment > facial gender judgment’ and ‘scene moral aesthetic judgment > scene gender judgment’ identified the common involvement of the orbitofrontal cortex (OFC), inferior temporal gyrus and medial superior frontal gyrus, suggesting that both types of aesthetic judgments are based on the orchestration of perceptual, emotional and cognitive components. Experiment 2 examined the network of facial beauty and moral beauty during implicit perception. Participants performed a non-aesthetic judgment task on both faces (beautiful vs common) and scenes (containing morally beautiful vs neutral information). We observed that facial beauty (beautiful faces > common faces) involved both the cortical reward region OFC and the subcortical reward region putamen, whereas moral beauty (moral beauty scenes > moral neutral scenes) only involved the OFC. Moreover, compared with facial beauty, moral beauty spanned a larger-scale cortical network, indicating more advanced and complex cerebral representations characterizing moral beauty. PMID:25298010

  19. Facial soft biometric features for forensic face recognition.

    PubMed

    Tome, Pedro; Vera-Rodriguez, Ruben; Fierrez, Julian; Ortega-Garcia, Javier

    2015-12-01

    This paper proposes a functional feature-based approach useful for real forensic casework, based on the shape, orientation, and size of facial traits, which can be considered a soft biometric approach. The motivation of this work is to provide a set of facial features that can be understood by non-experts such as judges and that supports the work of forensic examiners who, in practice, carry out a thorough manual comparison of face images, paying special attention to the similarities and differences in shape and size of various facial traits. This new approach constitutes a tool that automatically converts a set of facial landmarks to a set of features (shape and size) corresponding to facial regions of forensic value. These features are furthermore evaluated in a population to generate statistics to support forensic examiners. The proposed features can also be used as additional information that can improve the performance of traditional face recognition systems. These features follow the forensic methodology and are obtained in both a continuous and a discrete manner from raw images. A statistical analysis is also carried out to study the stability, discrimination power, and correlation of the proposed facial features on two realistic databases: MORPH and ATVS Forensic DB. Finally, the performance of both continuous and discrete features is analyzed using different similarity measures. Experimental results show high discrimination power and good recognition performance, especially for continuous features. A final fusion of the best system configurations achieves rank-10 match results of 100% for the ATVS database and 75% for the MORPH database, demonstrating the benefits of using this information in practice. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  20. Advances in computer imaging/applications in facial plastic surgery.

    PubMed

    Papel, I D; Jiannetto, D F

    1999-01-01

    Rapidly progressing computer technology, ever-increasing patient expectations, and a confusing medicolegal environment require a clarification of the role of computer imaging and its applications. Advances in computer technology and its applications are reviewed, with a brief historical discussion included for perspective. Improvements in both hardware and software with the advent of digital imaging have allowed great increases in speed and accuracy in patient imaging. This facilitates doctor-patient communication and encourages realistic patient expectations. Patients seeking cosmetic surgery now often expect preoperative imaging. Although society in general has become more litigious, a literature search up to 1998 reveals no lawsuits directly involving computer imaging. It appears that conservative utilization of computer imaging by the facial plastic surgeon may actually reduce liability and promote communication. Recent advances have significantly enhanced the value of computer imaging in the practice of facial plastic surgery, and these technological advances appear to make computer imaging a useful technique for the field. Inclusion of computer imaging should be given serious consideration as an adjunct to clinical practice.

  1. Facial expression system on video using widrow hoff

    NASA Astrophysics Data System (ADS)

    Jannah, M.; Zarlis, M.; Mawengkang, H.

    2018-03-01

    Facial expression recognition is an interesting research area that connects human feelings to computer applications such as human-computer interaction, data compression, facial animation, and face detection in video. The purpose of this research is to create a facial expression system that captures images from a video camera. The system uses the Widrow-Hoff learning method to train and test images with an Adaptive Linear Neuron (ADALINE) approach. System performance is evaluated by two parameters: detection rate and false positive rate. The system's accuracy depends on the technique used and on the face positions in the training and testing images.
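    The Widrow-Hoff (least mean squares) rule updates ADALINE's weights on the linear output, before thresholding. A minimal sketch on toy 2D data (the "feature vectors" below are synthetic stand-ins, not image features from the paper):

```python
import numpy as np

def train_adaline(X, y, lr=0.01, epochs=100, seed=0):
    """ADALINE trained with the Widrow-Hoff (LMS) delta rule:
    w <- w + lr * (target - w.x) * x, applied to the linear activation."""
    rng = np.random.default_rng(seed)
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])   # append bias column
    w = rng.normal(scale=0.01, size=Xb.shape[1])
    for _ in range(epochs):
        for xi, target in zip(Xb, y):
            error = target - xi @ w                 # error on linear output
            w += lr * error * xi
    return w

def predict(w, X):
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    return np.where(Xb @ w >= 0.0, 1, -1)           # threshold for the class

# Synthetic two-class data standing in for expression/non-expression features
rng = np.random.default_rng(1)
pos = rng.normal(loc=[2.0, 2.0], size=(50, 2))
neg = rng.normal(loc=[-2.0, -2.0], size=(50, 2))
X = np.vstack([pos, neg])
y = np.hstack([np.ones(50), -np.ones(50)])
w = train_adaline(X, y)
accuracy = float(np.mean(predict(w, X) == y))
```

    Because the update uses the continuous linear output rather than the thresholded class, the LMS rule minimizes squared error and converges smoothly for a suitably small learning rate.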

  2. The use of three-dimensional imaging to evaluate the effect of conventional orthodontic approach in treating a subject with facial asymmetry

    PubMed Central

    Kheir, Nadia Abou; Kau, Chung How

    2016-01-01

    The growth of the craniofacial skeleton takes place from the 3rd week of intra-uterine life until 18 years of age. During this period, the craniofacial complex is affected by extrinsic and intrinsic factors which guide or alter the pattern of growth. Asymmetry can be encountered due to these multifactorial effects or as the normal divergence of the hemifacial counterparts occurs. At present, an orthodontist plays a major role not only in diagnosing dental asymmetry but also facial asymmetry. However, an orthodontist's role in treating or camouflaging the asymmetry can be limited by its severity. The aim of this research is to report a technique for facial three-dimensional (3D) analysis used to measure the progress of a nonsurgical orthodontic treatment approach for a subject with maxillary asymmetry combined with mandibular angular asymmetry. The facial analysis was composed of five parts: upper face asymmetry analysis, maxillary analysis, maxillary cant analysis, mandibular cant analysis, and mandibular asymmetry analysis, which were applied using the 3D software InVivoDental 5.2.3 (Anatomage Company, San Jose, CA, USA). The five components of the facial analysis were applied to the initial cone-beam computed tomography (T1) for diagnosis. Maxillary analysis, maxillary cant analysis, and mandibular cant analysis were applied to measure the progress of the orthodontic treatment (T2). Twenty-two bilateral linear measurements and sixteen angular criteria were used to analyze the facial structures using different anthropometric landmarks. Only angular mandibular asymmetry was reported. The subject had a maxillary alveolar ridge cant of 9.96° and a dental maxillary cant of 2.95° at T1. The mandibular alveolar ridge cant was 7.41° and the mandibular dental cant was 8.39°. The largest decreases in cant at T2 were reported for the maxillary alveolar ridge (around 2.35°) and the mandibular alveolar ridge (around 3.96°). 
Facial 3D analysis is considered a useful adjunct in evaluating inter-arch biomechanics. PMID:27563618
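
    The cant measurements reported above are angular deviations of bilateral structures from the horizontal. As an illustrative sketch only (the landmark names, axis convention, and projection plane are assumptions, not taken from the record), a cant angle could be computed from two bilateral 3D landmarks like this:

```python
import numpy as np

def cant_angle(left_pt, right_pt):
    """Cant (degrees) of the line joining two bilateral 3-D landmarks,
    measured against the true horizontal in the frontal (x-z) plane.
    Coordinates are assumed to be (x: lateral, y: depth, z: vertical)."""
    v = np.asarray(right_pt, float) - np.asarray(left_pt, float)
    return abs(np.degrees(np.arctan2(v[2], v[0])))
```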

  3. Differential amygdala response during facial recognition in patients with schizophrenia: an fMRI study.

    PubMed

    Kosaka, H; Omori, M; Murata, T; Iidaka, T; Yamada, H; Okada, T; Takahashi, T; Sadato, N; Itoh, H; Yonekura, Y; Wada, Y

    2002-09-01

    Human lesion and neuroimaging studies suggest that the amygdala is involved in facial emotion recognition. Although impairments in recognition of facial and/or emotional expression have been reported in schizophrenia, few neuroimaging studies have examined differential brain activation during facial recognition between patients with schizophrenia and normal controls. To investigate amygdala responses during facial recognition in schizophrenia, we conducted a functional magnetic resonance imaging (fMRI) study with 12 right-handed medicated patients with schizophrenia and 12 age- and sex-matched healthy controls. The experimental task was a type of emotional intensity judgment task. During the task period, subjects were asked to view happy (or angry/disgusted/sad) and neutral faces presented simultaneously every 3 s and to judge which face was more emotional (positive or negative face discrimination). Imaging data were analyzed on a voxel-by-voxel basis for single-group analysis and for between-group analysis according to the random effect model using Statistical Parametric Mapping (SPM). No significant difference in task accuracy was found between the schizophrenic and control groups. Positive face discrimination activated the bilateral amygdalae of both controls and schizophrenics, with more prominent activation of the right amygdala shown in the schizophrenic group. Negative face discrimination activated the bilateral amygdalae in the schizophrenic group, whereas only the right amygdala was activated in the control group, although no significant group difference was found. Exaggerated amygdala activation during emotional intensity judgment found in the schizophrenic patients may reflect impaired gating of sensory input containing emotion. Copyright 2002 Elsevier Science B.V.

  4. Recognition of children on age-different images: Facial morphology and age-stable features.

    PubMed

    Caplova, Zuzana; Compassi, Valentina; Giancola, Silvio; Gibelli, Daniele M; Obertová, Zuzana; Poppa, Pasquale; Sala, Remo; Sforza, Chiarella; Cattaneo, Cristina

    2017-07-01

    The situation of missing children is one of the most emotional social issues worldwide. The search for and identification of missing children is often hampered, among other factors, by the fact that the facial morphology of long-term missing children changes as they grow. Nowadays, the wide coverage of surveillance systems potentially provides image material for comparisons with images of missing children that may facilitate identification. The aim of the study was to identify whether facial features are stable over time and can be utilized for facial recognition by comparing facial images of children at different ages, as well as to test the possible use of moles in recognition. The study was divided into two phases: (1) morphological classification of facial features using an Anthropological Atlas; and (2) an algorithm developed in MATLAB® R2014b for assessing the use of moles as age-stable features. The assessment of facial features by Anthropological Atlases showed high mismatch percentages among observers. On average, the mismatch percentages were lower for features describing shape than for those describing size. The nose tip cleft and the chin dimple showed the best agreement between observers regarding both categorization and stability over time. Using the position of moles as a reference point for recognition of the same person in age-different images seems to be a useful method in terms of objectivity, and it can be concluded that moles represent age-stable facial features that may be considered for preliminary recognition. Copyright © 2017 The Chartered Society of Forensic Sciences. Published by Elsevier B.V. All rights reserved.
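
    The mole-based recognition step described above amounts to comparing mole positions across age-different images in a scale-free coordinate frame. The sketch below is a plausible reconstruction, not the authors' MATLAB code; the eye-centred normalisation and the tolerance value are assumptions:

```python
import math

def normalize_moles(moles, eye_left, eye_right):
    """Express mole positions in an eye-centred, scale-free frame so that
    images taken at different ages and resolutions become comparable."""
    cx = (eye_left[0] + eye_right[0]) / 2.0
    cy = (eye_left[1] + eye_right[1]) / 2.0
    iod = math.dist(eye_left, eye_right)   # interocular distance
    return [((x - cx) / iod, (y - cy) / iod) for x, y in moles]

def mole_match_score(moles_a, moles_b, tol=0.05):
    """Fraction of moles in image A that have a counterpart in image B
    within `tol` interocular-distance units."""
    if not moles_a:
        return 0.0
    hits = sum(1 for m in moles_a
               if any(math.dist(m, n) <= tol for n in moles_b))
    return hits / len(moles_a)
```

    Normalising by interocular distance makes the comparison tolerant of image scale, which is the property that lets a stable feature like a mole survive growth-related changes in overall face size.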

  5. A neurophysiological study of facial numbness in multiple sclerosis: Integration with clinical data and imaging findings.

    PubMed

    Koutsis, Georgios; Kokotis, Panagiotis; Papagianni, Aikaterini E; Evangelopoulos, Maria-Eleftheria; Kilidireas, Constantinos; Karandreas, Nikolaos

    2016-09-01

    To integrate neurophysiological findings with clinical and imaging data in a consecutive series of multiple sclerosis (MS) patients developing facial numbness during the course of an MS attack. Nine consecutive patients with MS and recent-onset facial numbness were studied clinically, imaged with routine MRI, and assessed neurophysiologically with trigeminal somatosensory evoked potential (TSEP), blink reflex (BR), masseter reflex (MR), facial nerve conduction, facial muscle EMG, and masseter EMG studies. All patients had unilateral facial hypoesthesia on examination and lesions in the ipsilateral pontine tegmentum on MRI. All patients had abnormal TSEPs upon stimulation of the affected side, except for one patient tested after remission of numbness. BR was the second most sensitive neurophysiological method, with 6/9 examinations exhibiting an abnormal R1 component. The MR was abnormal in 3/6 patients, always on the affected side. Facial conduction and EMG studies were normal in all patients but one. Facial numbness was always related to abnormal TSEPs. A concomitant R1 abnormality on BR allowed localization of the responsible pontine lesion, which closely corresponded with MRI findings. We conclude that neurophysiological assessment of MS patients with facial numbness is a sensitive tool, which complements MRI and can improve lesion localization. Copyright © 2016 Elsevier B.V. All rights reserved.

  6. Biomedical visual data analysis to build an intelligent diagnostic decision support system in medical genetics.

    PubMed

    Kuru, Kaya; Niranjan, Mahesan; Tunca, Yusuf; Osvank, Erhan; Azim, Tayyaba

    2014-10-01

    In general, medical geneticists aim to pre-diagnose underlying syndromes based on facial features before performing cytological or molecular analyses where a genotype-phenotype interrelation is possible. However, determining correct genotype-phenotype interrelationships among many syndromes is tedious and labor-intensive, especially for extremely rare syndromes. Thus, a computer-aided system for pre-diagnosis can facilitate effective and efficient decision support, particularly when few similar cases are available, or in remote rural districts where diagnostic knowledge of syndromes is not readily available. The proposed methodology, visual diagnostic decision support system (visual diagnostic DSS), employs machine learning (ML) algorithms and digital image processing techniques in a hybrid approach for automated diagnosis in medical genetics. This approach uses facial features in reference images of disorders to identify visual genotype-phenotype interrelationships. Our statistical method describes facial image data as principal component features and diagnoses syndromes using these features. The proposed system was trained using a real dataset of previously published face images of subjects with syndromes, which provided accurate diagnostic information. The method was tested using a leave-one-out cross-validation scheme with 15 different syndromes, each comprising 5-9 cases, i.e., 92 cases in total. An accuracy rate of 83% was achieved using this automated diagnosis technique, which was statistically significant (p<0.01). Furthermore, the sensitivity and specificity values were 0.857 and 0.870, respectively. Our results show that the accurate classification of syndromes is feasible using ML techniques. 
Thus, a large number of syndromes with characteristic facial anomaly patterns could be diagnosed with similar diagnostic DSSs to that described in the present study, i.e., visual diagnostic DSS, thereby demonstrating the benefits of using hybrid image processing and ML-based computer-aided diagnostics for identifying facial phenotypes. Copyright © 2014. Published by Elsevier B.V.
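
    The pipeline described, principal component features plus classification under leave-one-out cross-validation, can be sketched generically as follows. A 1-nearest-neighbour classifier stands in for the paper's hybrid ML stage, and the component count k is an arbitrary choice; neither is taken from the record:

```python
import numpy as np

def pca_fit(X, k):
    """Mean and top-k principal axes of row-vector data (n_samples, n_pixels)."""
    mu = X.mean(axis=0)
    # SVD of the centred data avoids forming a huge covariance matrix
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]

def loo_nn_accuracy(X, labels, k=2):
    """Leave-one-out accuracy of a 1-NN classifier in PCA feature space."""
    n, correct = len(X), 0
    for i in range(n):
        keep = [j for j in range(n) if j != i]
        mu, comps = pca_fit(X[keep], k)
        train = (X[keep] - mu) @ comps.T
        probe = (X[i] - mu) @ comps.T
        nearest = keep[int(np.argmin(np.linalg.norm(train - probe, axis=1)))]
        correct += labels[nearest] == labels[i]
    return correct / n
```

    Note that the PCA basis is refit inside each fold; fitting it once on all cases would leak information from the held-out image into the features.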

  7. Multivariate Pattern Classification of Facial Expressions Based on Large-Scale Functional Connectivity.

    PubMed

    Liang, Yin; Liu, Baolin; Li, Xianglin; Wang, Peiyuan

    2018-01-01

    How human beings achieve efficient recognition of others' facial expressions is an important question in cognitive neuroscience, and previous studies have identified specific cortical regions that show preferential activation to facial expressions. However, the potential contributions of connectivity patterns to the processing of facial expressions remain unclear. The present functional magnetic resonance imaging (fMRI) study explored whether facial expressions could be decoded from functional connectivity (FC) patterns using multivariate pattern analysis combined with machine learning algorithms (fcMVPA). We employed a block design experiment and collected neural activity while participants viewed facial expressions of six basic emotions (anger, disgust, fear, joy, sadness, and surprise). Both static and dynamic expression stimuli were included in our study. A behavioral experiment after scanning confirmed the validity of the facial stimuli presented during the fMRI experiment in terms of classification accuracies and emotional intensities. We obtained whole-brain FC patterns for each facial expression and found that both static and dynamic facial expressions could be successfully decoded from the FC patterns. Moreover, we identified expression-discriminative networks for the static and dynamic facial expressions, which span beyond the conventional face-selective areas. Overall, these results reveal that large-scale FC patterns contain rich expression information sufficient to accurately decode facial expressions, suggesting a novel mechanism, based on interactions between distributed brain regions, that contributes to human facial expression recognition.
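
    The fcMVPA idea, classifying conditions from whole-brain FC patterns, reduces to vectorising each region-by-region correlation matrix and feeding the vectors to a classifier. The sketch below uses a nearest-centroid classifier in place of the study's machine-learning algorithms, an assumption made for brevity:

```python
import numpy as np

def fc_pattern(ts):
    """Vectorise the upper triangle of the region-by-region correlation
    matrix of a (n_timepoints, n_regions) time-series array."""
    corr = np.corrcoef(ts, rowvar=False)
    return corr[np.triu_indices_from(corr, k=1)]

def nearest_centroid(train_patterns, train_labels, probe):
    """Assign the probe FC pattern to the class with the closest mean pattern."""
    classes = sorted(set(train_labels))
    centroids = {c: np.mean([p for p, l in zip(train_patterns, train_labels)
                             if l == c], axis=0) for c in classes}
    return min(classes, key=lambda c: np.linalg.norm(probe - centroids[c]))
```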

  9. Responses in the right posterior superior temporal sulcus show a feature-based response to facial expression.

    PubMed

    Flack, Tessa R; Andrews, Timothy J; Hymers, Mark; Al-Mosaiwi, Mohammed; Marsden, Samuel P; Strachan, James W A; Trakulpipat, Chayanit; Wang, Liang; Wu, Tian; Young, Andrew W

    2015-08-01

    The face-selective region of the right posterior superior temporal sulcus (pSTS) plays an important role in analysing facial expressions. However, it is less clear how facial expressions are represented in this region. In this study, we used the face composite effect to explore whether the pSTS contains a holistic or feature-based representation of facial expression. Aligned and misaligned composite images were created from the top and bottom halves of faces posing different expressions. In Experiment 1, participants performed a behavioural matching task in which they judged whether the top half of two images was the same or different. The ability to discriminate the top half of the face was affected by changes in the bottom half of the face when the images were aligned, but not when they were misaligned. This shows a holistic behavioural response to expression. In Experiment 2, we used fMR-adaptation to ask whether the pSTS has a corresponding holistic neural representation of expression. Aligned or misaligned images were presented in blocks that involved repeating the same image or in which the top or bottom half of the images changed. Increased neural responses were found in the right pSTS regardless of whether the change occurred in the top or bottom of the image, showing that changes in expression were detected across all parts of the face. However, in contrast to the behavioural data, the pattern did not differ between aligned and misaligned stimuli. This suggests that the pSTS does not encode facial expressions holistically. In contrast to the pSTS, a holistic pattern of response to facial expression was found in the right inferior frontal gyrus (IFG). Together, these results suggest that pSTS reflects an early stage in the processing of facial expression in which facial features are represented independently. Copyright © 2015 Elsevier Ltd. All rights reserved.

  10. Factors contributing to the adaptation aftereffects of facial expression.

    PubMed

    Butler, Andrea; Oruc, Ipek; Fox, Christopher J; Barton, Jason J S

    2008-01-29

    Previous studies have demonstrated the existence of adaptation aftereffects for facial expressions. Here we investigated which aspects of facial stimuli contribute to these aftereffects. In Experiment 1, we examined the role of local adaptation to image elements such as curvature, shape and orientation, independent of expression, by using hybrid faces constructed from either the same or opposing expressions. While hybrid faces made with consistent expressions generated aftereffects as large as those with normal faces, there were no aftereffects from hybrid faces made from different expressions, despite the fact that these contained the same local image elements. In Experiment 2, we examined the role of facial features independent of the normal face configuration by contrasting adaptation with whole faces to adaptation with scrambled faces. We found that scrambled faces also generated significant aftereffects, indicating that expressive features without a normal facial configuration could generate expression aftereffects. In Experiment 3, we examined the role of facial configuration by using schematic faces made from line elements that in isolation do not carry expression-related information (e.g. curved segments and straight lines) but that convey an expression when arranged in a normal facial configuration. We obtained a significant aftereffect for facial configurations but not scrambled configurations of these line elements. We conclude that facial expression aftereffects are not due to local adaptation to image elements but due to high-level adaptation of neural representations that involve both facial features and facial configuration.

  11. Multi-layer sparse representation for weighted LBP-patches based facial expression recognition.

    PubMed

    Jia, Qi; Gao, Xinkai; Guo, He; Luo, Zhongxuan; Wang, Yi

    2015-03-19

    In this paper, a novel facial expression recognition method based on sparse representation is proposed. Most contemporary facial expression recognition systems suffer from a limited ability to handle image nuisances such as low resolution and noise. Especially for low-intensity expressions, most existing training methods have quite low recognition rates. Motivated by sparse representation, the problem can be solved by finding the sparse coefficients of the test image over the whole training set. Deriving an effective facial representation from original face images is a vital step for successful facial expression recognition. We evaluate a facial representation based on weighted local binary patterns, and the Fisher separation criterion is used to calculate the weights of patches. A multi-layer sparse representation framework is proposed for multi-intensity facial expression recognition, especially for low-intensity and noisy expressions in reality, which is a critical problem but seldom addressed in existing works. To this end, several experiments based on low-resolution and multi-intensity expressions are carried out. Promising results on publicly available databases demonstrate the potential of the proposed approach.
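
    The weighted-LBP-patches representation starts from per-patch LBP histograms; Fisher-criterion weights would then scale each patch's histogram before sparse coding. A basic sketch of the LBP and patch-histogram stages follows; the grid size and normalisation are illustrative choices, not the paper's exact settings:

```python
import numpy as np

def lbp_image(img):
    """8-neighbour local binary pattern code for each interior pixel."""
    h, w = img.shape
    codes = np.zeros((h - 2, w - 2), dtype=int)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for i, (dr, dc) in enumerate(offsets):
        neighbour = img[1 + dr:h - 1 + dr, 1 + dc:w - 1 + dc]
        codes += (neighbour >= img[1:h - 1, 1:w - 1]).astype(int) << i
    return codes

def patch_histograms(codes, rows=2, cols=2):
    """Concatenated per-patch LBP histograms; in a weighted scheme each
    patch histogram would be scaled by its (e.g. Fisher-criterion) weight."""
    hists = []
    for band in np.array_split(codes, rows, axis=0):
        for patch in np.array_split(band, cols, axis=1):
            hist, _ = np.histogram(patch, bins=256, range=(0, 256))
            hists.append(hist / max(patch.size, 1))
    return np.concatenate(hists)
```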

  12. Near-infrared imaging of face transplants: are both pedicles necessary?

    PubMed

    Nguyen, John T; Ashitate, Yoshitomo; Venugopal, Vivek; Neacsu, Florin; Kettenring, Frank; Frangioni, John V; Gioux, Sylvain; Lee, Bernard T

    2013-09-01

    Facial transplantation is a complex procedure that corrects severe facial defects due to trauma, burns, and congenital disorders. Although face transplantation has been successfully performed clinically, potential risks include tissue ischemia and necrosis. The vascular supply is typically based on the bilateral neck vessels. As it remains unclear whether perfusion can be based on a single pedicle, this study was designed to assess perfusion patterns of facial transplant allografts using near-infrared (NIR) fluorescence imaging. Upper facial composite tissue allotransplants were created using both carotid artery and external jugular vein pedicles in Yorkshire pigs. A flap validation model was created in n = 2 pigs and a clamp occlusion model was performed in n = 3 pigs. In the clamp occlusion models, sequential clamping of the vessels was performed to assess perfusion. Animals were injected with indocyanine green and imaged with NIR fluorescence. Quantitative metrics were assessed based on fluorescence intensity. With NIR imaging, arterial perforators emitted fluorescence indicating perfusion along the surface of the skin. Isolated clamping of one vascular pedicle showed successful perfusion across the midline based on NIR fluorescence imaging. This perfusion extended into the facial allograft within 60 s and perfused the entire contralateral side within 5 min. Determination of vascular perfusion is important in microsurgical constructs as complications can lead to flap loss. It is still unclear if facial transplants require both pedicles. This initial pilot study using intraoperative NIR fluorescence imaging suggests that facial flap models can be adequately perfused from a single pedicle. Copyright © 2013 Elsevier Inc. All rights reserved.

  13. Modelling the perceptual similarity of facial expressions from image statistics and neural responses.

    PubMed

    Sormaz, Mladen; Watson, David M; Smith, William A P; Young, Andrew W; Andrews, Timothy J

    2016-04-01

    The ability to perceive facial expressions of emotion is essential for effective social communication. We investigated how the perception of facial expression emerges from the image properties that convey this important social signal, and how neural responses in face-selective brain regions might track these properties. To do this, we measured the perceptual similarity between expressions of basic emotions, and investigated how this is reflected in image measures and in the neural response of different face-selective regions. We show that the perceptual similarity of different facial expressions (fear, anger, disgust, sadness, happiness) can be predicted by both surface and feature shape information in the image. Using block design fMRI, we found that the perceptual similarity of expressions could also be predicted from the patterns of neural response in the face-selective posterior superior temporal sulcus (STS), but not in the fusiform face area (FFA). These results show that the perception of facial expression is dependent on the shape and surface properties of the image and on the activity of specific face-selective regions. Copyright © 2016 Elsevier Inc. All rights reserved.
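
    Testing whether image measures or neural response patterns predict perceptual similarity typically comes down to correlating two similarity matrices over their off-diagonal entries. A minimal, generic sketch of that comparison, not the authors' exact analysis:

```python
import numpy as np

def rdm_correlation(sim_a, sim_b):
    """Pearson correlation between the off-diagonal (upper-triangle) entries
    of two similarity matrices, e.g. perceptual vs. image-based similarity."""
    a, b = np.asarray(sim_a), np.asarray(sim_b)
    iu = np.triu_indices_from(a, k=1)
    return float(np.corrcoef(a[iu], b[iu])[0, 1])
```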

  14. Reconstructing 3D Face Model with Associated Expression Deformation from a Single Face Image via Constructing a Low-Dimensional Expression Deformation Manifold.

    PubMed

    Wang, Shu-Fan; Lai, Shang-Hong

    2011-10-01

    Facial expression modeling is central to facial expression recognition and expression synthesis for facial animation. In this work, we propose a manifold-based 3D face reconstruction approach to estimating the 3D face model and the associated expression deformation from a single face image. With the proposed robust weighted feature map (RWF), we can obtain the dense correspondences between 3D face models and build a nonlinear 3D expression manifold from a large set of 3D facial expression models. Then a Gaussian mixture model in this manifold is learned to represent the distribution of expression deformation. By combining the merits of morphable neutral face model and the low-dimensional expression manifold, a novel algorithm is developed to reconstruct the 3D face geometry as well as the facial deformation from a single face image in an energy minimization framework. Experimental results on simulated and real images are shown to validate the effectiveness and accuracy of the proposed algorithm.

  15. [Magnetic resonance imaging in facial injuries and digital fusion CT/MRI].

    PubMed

    Kozakiewicz, Marcin; Olszycki, Marek; Arkuszewski, Piotr; Stefańczyk, Ludomir

    2006-01-01

    Magnetic resonance images [MRI] and their digital fusion with computed tomography [CT] data in patients with facial injuries are presented in this study. MR imaging of 12 posttraumatic patients was performed in the same planes as their previous CT scans. Evaluation focused on the quality of facial soft-tissue depiction, which was unsatisfactory on CT. Using our own "Dental Studio" program, a digital fusion of the two modalities was performed. Pathologic dislocations and injuries of the facial soft tissues are visualized better on MRI than on CT examination. In particular, MRI properly reveals disturbances in intraorbital soft structures. MRI-based assessment is valuable in patients with facial soft-tissue injuries, especially in cases of orbital/sinus hernia. Fused CT/MRI scans allow simultaneous evaluation of bone structure and soft tissues of the same region.

  16. Combination of Face Regions in Forensic Scenarios.

    PubMed

    Tome, Pedro; Fierrez, Julian; Vera-Rodriguez, Ruben; Ortega-Garcia, Javier

    2015-07-01

    This article presents an experimental analysis of the combination of different regions of the human face in various forensic scenarios, to generate scientific knowledge useful to forensic experts. Three scenarios of interest at different distances are considered, comparing mugshot and CCTV face images using the MORPH and SC face databases. One of the main findings is that inner facial regions combine better in mugshot and close CCTV scenarios, and outer facial regions combine better in far CCTV scenarios. This means that, depending on the acquisition distance, the discriminative power of the facial regions changes, in some cases exceeding the performance of the full face. This effect can be exploited by fusing facial regions, which results in a very significant improvement of the discriminative performance compared to using the full face alone. © 2015 American Academy of Forensic Sciences.
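
    Fusion of facial regions, as studied here, can be as simple as a weighted sum rule over per-region comparison scores. A hedged sketch follows; the weighting scheme is illustrative, since the abstract does not specify the fusion method:

```python
def fuse_region_scores(scores, weights=None):
    """Weighted sum-rule fusion of per-region comparison scores
    (higher = more likely the same identity)."""
    if weights is None:
        weights = [1.0] * len(scores)
    return sum(w * s for w, s in zip(weights, scores)) / sum(weights)
```

    Scenario-dependent weights, e.g. favouring inner regions for mugshots and outer regions for far CCTV, would encode the paper's main finding.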

  17. Magnetic resonance imaging of facial nerve schwannoma.

    PubMed

    Thompson, Andrew L; Aviv, Richard I; Chen, Joseph M; Nedzelski, Julian M; Yuen, Heng-Wai; Fox, Allan J; Bharatha, Aditya; Bartlett, Eric S; Symons, Sean P

    2009-12-01

    This study characterizes the magnetic resonance (MR) appearances of facial nerve schwannoma (FNS). We hypothesize that the extent of FNS demonstrated on MR will be greater compared to prior computed tomography studies, that geniculate involvement will be most common, and that cerebellar pontine angle (CPA) and internal auditory canal (IAC) involvement will more frequently result in sensorineural hearing loss (SNHL). Retrospective study. Clinical, pathologic, and enhanced MR imaging records of 30 patients with FNS were analyzed. Morphologic characteristics and extent of segmental facial nerve involvement were documented. Median age at initial imaging was 51 years (range, 28-76 years). Pathologic confirmation was obtained in 14 patients (47%), and the diagnosis reached in the remainder by identification of a mass, thickening, and enhancement along the course of the facial nerve. All 30 lesions involved two or more contiguous segments of the facial nerve, with 28 (93%) involving three or more segments. The median segments involved per lesion was 4, mean of 3.83. Geniculate involvement was most common, in 29 patients (97%). CPA (P = .001) and IAC (P = .02) involvement was significantly related to SNHL. Seventeen patients (57%) presented with facial nerve dysfunction, manifesting in 12 patients as facial nerve weakness or paralysis, and/or in eight with involuntary movements of the facial musculature. This study highlights the morphologic heterogeneity and typical multisegment involvement of FNS. Enhanced MR is the imaging modality of choice for FNS. The neuroradiologist must accurately diagnose and characterize this lesion, and thus facilitate optimal preoperative planning and counseling.

  18. 3D Face Model Dataset: Automatic Detection of Facial Expressions and Emotions for Educational Environments

    ERIC Educational Resources Information Center

    Chickerur, Satyadhyan; Joshi, Kartik

    2015-01-01

    Emotion detection using facial images is a technique that researchers have been using for the last two decades to try to analyze a person's emotional state given his/her image. Detection of various kinds of emotion using facial expressions of students in educational environment is useful in providing insight into the effectiveness of tutoring…

  19. Automatic three-dimensional quantitative analysis for evaluation of facial movement.

    PubMed

    Hontanilla, B; Aubá, C

    2008-01-01

    The aim of this study is to present a new 3D capture system of facial movements called FACIAL CLIMA. It is an automatic optical motion system that involves placing special reflecting dots on the subject's face and video recording, with three infrared-light cameras, the subject performing several face movements such as smiling, mouth puckering, eye closure and forehead elevation. Images from the cameras are automatically processed with a software program that generates customised information such as 3D data on velocities and areas. The study has been performed in 20 healthy volunteers. The accuracy of the measurement process and the intrarater and interrater reliabilities have been evaluated. Comparison of a known distance and angle with those obtained by FACIAL CLIMA shows that this system is accurate to within 0.13 mm and 0.41 degrees. In conclusion, the accuracy of the FACIAL CLIMA system for evaluation of facial movements is demonstrated, as well as its high intrarater and interrater reliability. It has advantages with respect to other systems that have been developed for evaluation of facial movements, such as short calibration time, short measuring time, and ease of use, and it provides not only distances but also velocities and areas. The FACIAL CLIMA system can therefore be considered an adequate tool to assess the outcome of facial paralysis reanimation surgery, allowing patients with facial paralysis to be compared between surgical centres so that the effectiveness of facial reanimation operations can be evaluated.

  20. High-resolution face verification using pore-scale facial features.

    PubMed

    Li, Dong; Zhou, Huiling; Lam, Kin-Man

    2015-08-01

    Face recognition methods, which usually represent face images using holistic or local facial features, rely heavily on alignment. Their performance also suffers severe degradation under variations in expression or pose, especially when there is only one gallery image per subject. With the easy access to high-resolution (HR) face images nowadays, some HR face databases have recently been developed. However, few studies have tackled the use of HR information for face recognition or verification. In this paper, we propose a pose-invariant face-verification method, robust to alignment errors, that uses HR information based on pore-scale facial features. A new keypoint descriptor, namely the pore-Principal Component Analysis (PCA)-Scale Invariant Feature Transform (PPCASIFT), adapted from PCA-SIFT, is devised for the extraction of a compact set of distinctive pore-scale facial features. Having matched the pore-scale features of two face regions, an effective robust-fitting scheme is proposed for the face-verification task. Experiments show that, with only one frontal-view gallery image per subject, our proposed method outperforms a number of standard verification methods and can achieve excellent accuracy even when the faces are under large variations in expression and pose.
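
    After extracting pore-scale descriptors, verification hinges on matching them between two face regions. A standard ratio-test matcher, shown below, illustrates that matching stage; it is a generic sketch of descriptor matching, not the paper's PPCASIFT extraction or robust-fitting scheme:

```python
import numpy as np

def ratio_test_match(descs_a, descs_b, ratio=0.8):
    """Lowe-style ratio-test matching between two descriptor sets
    (descs_b needs at least two rows). Returns (i, j) index pairs where
    descs_a[i]'s best match descs_b[j] clearly beats its second-best."""
    matches = []
    for i, d in enumerate(descs_a):
        dists = np.linalg.norm(descs_b - d, axis=1)
        best, second = np.argsort(dists)[:2]
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches
```

    The ratio test discards ambiguous correspondences, which matters at pore scale where many keypoints look locally similar.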

  1. Efficient quantitative assessment of facial paralysis using iris segmentation and active contour-based key points detection with hybrid classifier.

    PubMed

    Barbosa, Jocelyn; Lee, Kyubum; Lee, Sunwon; Lodhi, Bilal; Cho, Jae-Gu; Seo, Woo-Keun; Kang, Jaewoo

    2016-03-12

    Facial palsy or paralysis (FP) is a symptom in which voluntary muscle movement is lost on one side of the human face, which can be devastating for patients. Traditional assessment methods depend solely on the clinician's judgment and are therefore time consuming and subjective. Hence, a quantitative assessment system is invaluable for physicians beginning the rehabilitation process, yet producing a reliable and robust method is challenging and still underway. We introduce a novel approach to the quantitative assessment of facial paralysis that tackles the classification problems of FP type and degree of severity. Specifically, we present an algorithm that extracts the human iris and detects facial landmarks, and a hybrid approach combining rule-based and machine learning algorithms to analyze and prognosticate facial paralysis from the captured images. A method combining the optimized Daugman's algorithm and the Localized Active Contour (LAC) model is proposed to efficiently extract the iris and the facial landmarks, or key points. To improve the performance of LAC, appropriate parameters of the initial evolving curve for facial feature segmentation are automatically selected. A symmetry score is measured as the ratio between features extracted from the two sides of the face. Hybrid classifiers (i.e. rule-based with regularized logistic regression) were employed for discriminating healthy and unhealthy subjects, for FP type classification, and for facial paralysis grading based on the House-Brackmann (H-B) scale. Quantitative analysis was performed to evaluate the performance of the proposed approach, and experiments show that the method is efficient.
Facial movement feature extraction based on iris segmentation and LAC-based key point detection, together with a hybrid classifier, provides a more efficient way of addressing the classification problems of facial palsy type and degree of severity. Combining iris segmentation with the key-point-based method has several merits that are essential for this real application. Aside from the facial key points, iris segmentation makes a significant contribution because it describes the changes in iris exposure while the patient performs facial expressions. It reveals the significant difference between the healthy side and the severely palsied side when the eyebrows are raised with both eyes directed upward, and it can model the typical changes in the iris region.
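The symmetry score above is described as the ratio between features extracted from the two sides of the face. A minimal NumPy sketch of that idea, feeding a toy rule; the severity cut-offs below are invented for illustration and are not the paper's fitted rule-based/regularized-logistic-regression model:

```python
import numpy as np

def symmetry_score(healthy_side, affected_side):
    """Ratio of affected- to healthy-side feature values, clipped to [0, 1].

    Features could be, e.g., iris-exposure areas or landmark displacements.
    """
    h = np.asarray(healthy_side, dtype=float)
    a = np.asarray(affected_side, dtype=float)
    return np.clip(a / h, 0.0, 1.0)

def rule_based_grade(scores):
    """Map the mean symmetry score to a coarse severity label.

    Thresholds are hypothetical, not House-Brackmann calibrated.
    """
    m = float(np.mean(scores))
    if m > 0.9:
        return "normal"
    if m > 0.6:
        return "mild"
    return "severe"

# Illustrative values: the affected side moves far less than the healthy side.
grade = rule_based_grade(symmetry_score([10.0, 8.0], [3.0, 2.0]))
```

In the paper such scores are one input among several to the hybrid classifier; the rule here only shows how a ratio-based score maps naturally onto a grading scale.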

  2. Super-resolution method for face recognition using nonlinear mappings on coherent features.

    PubMed

    Huang, Hua; He, Huiting

    2011-01-01

    The low resolution (LR) of face images significantly decreases the performance of face recognition. To address this problem, we present a super-resolution method that uses nonlinear mappings to infer coherent features that favor higher recognition rates by nearest neighbor (NN) classifiers when recognizing a single LR face image. Canonical correlation analysis is applied to establish coherent subspaces between the principal component analysis (PCA) based features of high-resolution (HR) and LR face images. A nonlinear mapping between HR/LR features can then be built by radial basis functions (RBFs), with lower regression errors in the coherent feature space than in the PCA feature space. Thus, we can compute super-resolved coherent features corresponding to an input LR image efficiently and accurately from the trained RBF model, and the face identity can be obtained by feeding these super-resolved features to a simple NN classifier. Extensive experiments on the Facial Recognition Technology, University of Manchester Institute of Science and Technology, and Olivetti Research Laboratory databases show that the proposed method outperforms state-of-the-art face recognition algorithms for single LR images in terms of both recognition rate and robustness to facial variations in pose and expression.
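The nonlinear HR/LR mapping described above can be sketched with a Gaussian-RBF regression, written here in closed kernel-ridge form: solve (K + λI)W = Y on training LR features, then predict the HR-coherent feature for a new LR vector. The kernel width, regularization, and synthetic data are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def rbf_kernel(A, B, sigma):
    """Gaussian kernel matrix between row-vectors of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def fit_rbf_map(X_lr, Y_hr, sigma=1.0, lam=1e-6):
    """Solve (K + lam*I) W = Y for the RBF regression weights."""
    K = rbf_kernel(X_lr, X_lr, sigma)
    return np.linalg.solve(K + lam * np.eye(len(X_lr)), Y_hr)

def super_resolve(x_lr, X_lr, W, sigma=1.0):
    """Predict the HR-coherent feature vector for one LR feature vector."""
    k = rbf_kernel(x_lr[None, :], X_lr, sigma)
    return (k @ W)[0]

rng = np.random.default_rng(1)
X_lr = rng.normal(size=(50, 4))                  # stand-in LR coherent features
Y_hr = np.tanh(X_lr) @ rng.normal(size=(4, 6))   # some nonlinear HR target
W = fit_rbf_map(X_lr, Y_hr)
pred = super_resolve(X_lr[0], X_lr, W)
```

The super-resolved features `pred` would then be compared against gallery features with a nearest-neighbor classifier, as the abstract describes.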

  3. Is moral beauty different from facial beauty? Evidence from an fMRI study.

    PubMed

    Wang, Tingting; Mo, Lei; Mo, Ce; Tan, Li Hai; Cant, Jonathan S; Zhong, Luojin; Cupchik, Gerald

    2015-06-01

    Is moral beauty different from facial beauty? Two functional magnetic resonance imaging experiments were performed to answer this question. Experiment 1 investigated the network of moral aesthetic judgments and facial aesthetic judgments. Participants performed aesthetic judgments and gender judgments on both faces and scenes containing moral acts. The conjunction analysis of the contrasts 'facial aesthetic judgment > facial gender judgment' and 'scene moral aesthetic judgment > scene gender judgment' identified the common involvement of the orbitofrontal cortex (OFC), inferior temporal gyrus and medial superior frontal gyrus, suggesting that both types of aesthetic judgments are based on the orchestration of perceptual, emotional and cognitive components. Experiment 2 examined the network of facial beauty and moral beauty during implicit perception. Participants performed a non-aesthetic judgment task on both faces (beautiful vs common) and scenes (containing morally beautiful vs neutral information). We observed that facial beauty (beautiful faces > common faces) involved both the cortical reward region OFC and the subcortical reward region putamen, whereas moral beauty (moral beauty scenes > moral neutral scenes) only involved the OFC. Moreover, compared with facial beauty, moral beauty spanned a larger-scale cortical network, indicating more advanced and complex cerebral representations characterizing moral beauty. © The Author (2014). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  4. Effects of cultural characteristics on building an emotion classifier through facial expression analysis

    NASA Astrophysics Data System (ADS)

    da Silva, Flávio Altinier Maximiano; Pedrini, Helio

    2015-03-01

    Facial expressions are an important demonstration of human moods and emotions. Algorithms capable of recognizing facial expressions and associating them with emotions were developed and employed to compare the expressions that different cultural groups use to show their emotions. Static pictures of predominantly occidental and oriental subjects from public datasets were used to train machine learning algorithms, with local binary patterns, histograms of oriented gradients (HOGs), and Gabor filters employed to describe the facial expressions for six basic emotions. The most consistent combination, the association of HOG descriptors with support vector machines, was then used to classify the other cultural group: there was a strong drop in accuracy, meaning that the subtle differences in each culture's facial expressions affected classifier performance. Finally, a classifier was trained with images of both occidental and oriental subjects, and its accuracy was higher on multicultural data, evidencing the need for a multicultural training set to build an efficient classifier.
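The HOG descriptor used above pools gradient-orientation histograms over small cells. A deliberately simplified NumPy version (real HOG, e.g. skimage.feature.hog, adds block normalization and bin interpolation; the 8-pixel cell and 9 bins are common defaults assumed here):

```python
import numpy as np

def simple_hog(img, cell=8, bins=9):
    """Toy HOG: per-cell, magnitude-weighted orientation histograms."""
    img = np.asarray(img, dtype=float)
    gy, gx = np.gradient(img)
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)   # unsigned orientation in [0, pi)
    h, w = img.shape
    feats = []
    for i in range(0, h - cell + 1, cell):
        for j in range(0, w - cell + 1, cell):
            a = ang[i:i + cell, j:j + cell].ravel()
            m = mag[i:i + cell, j:j + cell].ravel()
            hist, _ = np.histogram(a, bins=bins, range=(0, np.pi), weights=m)
            feats.append(hist / (np.linalg.norm(hist) + 1e-9))  # L2-normalize
    return np.concatenate(feats)

# A 32x32 face crop yields 4x4 cells x 9 bins = 144 features.
feat = simple_hog(np.random.default_rng(2).normal(size=(32, 32)))
```

These feature vectors would then be fed to an SVM (e.g. sklearn.svm.SVC) for the six-emotion classification the abstract describes.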

  5. Virtual transplantation in designing a facial prosthesis for extensive maxillofacial defects that cross the facial midline using computer-assisted technology.

    PubMed

    Feng, Zhi-hong; Dong, Yan; Bai, Shi-zhu; Wu, Guo-feng; Bi, Yun-peng; Wang, Bo; Zhao, Yi-min

    2010-01-01

    The aim of this article was to demonstrate a novel approach to designing facial prostheses using the transplantation concept and computer-assisted technology for extensive, large, maxillofacial defects that cross the facial midline. The three-dimensional (3D) facial surface images of a patient and his relative were reconstructed using data obtained through optical scanning. Based on these images, the corresponding portion of the relative's face was transplanted to the patient's where the defect was located, which could not be rehabilitated using mirror projection, to design the virtual facial prosthesis without the eye. A 3D model of an artificial eye that mimicked the patient's remaining one was developed, transplanted, and fit onto the virtual prosthesis. A personalized retention structure for the artificial eye was designed on the virtual facial prosthesis. The wax prosthesis was manufactured through rapid prototyping, and the definitive silicone prosthesis was completed. The size, shape, and cosmetic appearance of the prosthesis were satisfactory and matched the defect area well. The patient's facial appearance was recovered perfectly with the prosthesis, as determined through clinical evaluation. The optical 3D imaging and computer-aided design/computer-assisted manufacturing system used in this study can design and fabricate facial prostheses more precisely than conventional manual sculpturing techniques. The discomfort generally associated with such conventional methods was decreased greatly. The virtual transplantation used to design the facial prosthesis for the maxillofacial defect, which crossed the facial midline, and the development of the retention structure for the eye were both feasible.

  6. The asymmetric facial skin perfusion distribution of Bell's palsy discovered by laser speckle imaging technology.

    PubMed

    Cui, Han; Chen, Yi; Zhong, Weizheng; Yu, Haibo; Li, Zhifeng; He, Yuhai; Yu, Wenlong; Jin, Lei

    2016-01-01

    Bell's palsy is a peripheral nerve disease that causes abrupt onset of unilateral facial weakness. Pathologic studies have shown that ischemia of the facial nerve on the affected side of the face exists in Bell's palsy patients. Since facial nerve blood flow runs primarily from proximal to distal, facial skin microcirculation is also affected after the onset of Bell's palsy. Therefore, monitoring facial skin microcirculation over the full face would help to identify the condition of Bell's palsy patients. In this study, a non-invasive, real-time, full-field imaging technology - laser speckle imaging (LSI) - was applied to measure the facial skin blood perfusion distribution of Bell's palsy patients. 85 participants at different stages of Bell's palsy were included. Results showed that patients' facial skin perfusion on the affected side was lower than on the normal side in the eyelid region, and that the asymmetry of facial skin perfusion between the two eyelid sides was positively related to the stage of the disease (P < 0.001). During recovery, perfusion on the affected eyelid side increased to nearly that of the normal side. This study is a novel application of LSI to evaluating the facial skin perfusion of Bell's palsy patients; we discovered that facial skin blood perfusion can reflect the stage of Bell's palsy, which suggests that microcirculation should be investigated in patients with this neurological deficit and that LSI is a potential diagnostic tool for Bell's palsy.
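LSI estimates perfusion from the local speckle contrast K = σ/μ: faster blood flow blurs the speckle pattern within the exposure time and lowers K. A minimal NumPy sketch of the contrast computation (the 5x5 sliding window is an assumption; clinical systems further convert K into a perfusion index):

```python
import numpy as np

def speckle_contrast(img, win=5):
    """Local speckle contrast K = std/mean over a sliding window."""
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    r = win // 2
    K = np.zeros((h - 2 * r, w - 2 * r))
    for i in range(r, h - r):
        for j in range(r, w - r):
            patch = img[i - r:i + r + 1, j - r:j + r + 1]
            K[i - r, j - r] = patch.std() / (patch.mean() + 1e-12)
    return K

# A perfectly uniform region has zero contrast (a fully blurred speckle
# pattern, i.e. the high-flow limit).
K = speckle_contrast(np.ones((9, 9)))
```

The left/right asymmetry reported in the study would then be a comparison of mean perfusion (derived from K) between mirrored facial regions.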

  7. Real-time face and gesture analysis for human-robot interaction

    NASA Astrophysics Data System (ADS)

    Wallhoff, Frank; Rehrl, Tobias; Mayer, Christoph; Radig, Bernd

    2010-05-01

    Human communication relies on a large number of different communication mechanisms such as spoken language, facial expressions, and gestures. Facial expressions and gestures are among the main nonverbal communication mechanisms and pass large amounts of information between human dialog partners. Therefore, to allow for intuitive human-machine interaction, real-time capable processing and recognition of facial expressions and hand and head gestures are of great importance. We present a system that tackles these challenges. The input features for the dynamic head gestures and facial expressions are obtained from a sophisticated three-dimensional model, which is fitted to the user in a real-time capable manner. Using this model, different kinds of information are extracted from the image data and then handed over to a real-time capable data-transferring framework, the so-called Real-Time DataBase (RTDB). In addition to the head- and facial-related features, low-level image features for the human hand (optical flow and Hu moments) are also stored in the RTDB for the evaluation of hand gestures. In general, the input of a single camera is sufficient for the parallel evaluation of the different gestures and facial expressions. Real-time capable recognition of the dynamic hand and head gestures is performed via different Hidden Markov Models, which have proven to be a quick and real-time capable classification method. For the facial expressions, classical decision trees or more sophisticated support vector machines are used for classification. The results of the classification processes are again handed over to the RTDB, where other processes (like a Dialog Management Unit) can easily access them without any blocking effects. In addition, an adjustable amount of history can be stored by the RTDB buffer unit.
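The Hu moments mentioned as low-level hand features are rotation- and translation-invariant combinations of normalized central image moments. A NumPy sketch of the first two invariants (OpenCV's cv2.HuMoments computes the full set of seven; the toy square image is illustrative):

```python
import numpy as np

def central_moment(img, p, q):
    """Central moment mu_pq of a grayscale/binary image."""
    img = np.asarray(img, dtype=float)
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    m00 = img.sum()
    cx = (xs * img).sum() / m00
    cy = (ys * img).sum() / m00
    return (((xs - cx) ** p) * ((ys - cy) ** q) * img).sum()

def hu_first_two(img):
    """First two Hu invariants from normalized central moments eta_pq."""
    mu00 = central_moment(img, 0, 0)
    def eta(p, q):
        return central_moment(img, p, q) / mu00 ** (1 + (p + q) / 2.0)
    h1 = eta(2, 0) + eta(0, 2)
    h2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4.0 * eta(1, 1) ** 2
    return h1, h2

# For a centered square, eta20 == eta02 and eta11 == 0, so h2 vanishes.
square = np.zeros((16, 16))
square[4:12, 4:12] = 1.0
h1, h2 = hu_first_two(square)
```

Sequences of such shape features per frame are what the Hidden Markov Models would consume for gesture classification.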

  8. The effects of a daily facial lotion containing vitamins B3 and E and provitamin B5 on the facial skin of Indian women: a randomized, double-blind trial.

    PubMed

    Jerajani, Hemangi R; Mizoguchi, Haruko; Li, James; Whittenbarger, Debora J; Marmor, Michael J

    2010-01-01

    The B vitamins niacinamide and panthenol have been shown to reduce many signs of skin aging, including hyperpigmentation and redness. The aim was to measure the effects on the facial skin of Indian women of daily use of a lotion containing niacinamide, panthenol, and tocopheryl acetate, using quantitative image analysis. Adult women 30-60 years of age with epidermal hyperpigmentation were recruited in Mumbai and randomly assigned to apply a test or control lotion to the face daily for 10 weeks. Effects on skin tone were measured using an image-capturing system and associated software. Skin texture was assessed by expert graders. Barrier function was evaluated by transepidermal water loss measurements. Subjects and evaluators were blinded to the product assignment. Of 246 women randomized to treatment, 207 (84%) completed the study. Women who used the test lotion experienced significantly reduced appearance of hyperpigmentation, improved skin tone evenness, an appearance of skin lightening, and positive effects on skin texture. Improvements versus control were seen as early as 6 weeks. The test lotion was well tolerated; the most common adverse event was a transient, mild burning sensation. Daily use of a facial lotion containing niacinamide, panthenol, and tocopheryl acetate improved skin tone and texture and was well tolerated in Indian women with facial signs of aging.

  9. Regional facial asymmetries and attractiveness of the face.

    PubMed

    Kaipainen, Anu E; Sieber, Kevin R; Nada, Rania M; Maal, Thomas J; Katsaros, Christos; Fudalej, Piotr S

    2016-12-01

    Facial attractiveness is an important factor in our social interactions. It is still not entirely clear which factors influence the attractiveness of a face, and facial asymmetry appears to play a certain role. The aim of the present study was to assess the association between facial attractiveness and regional facial asymmetries evaluated on three-dimensional (3D) images. 3D facial images of 59 (23 male, 36 female) young adult patients (age 16-25 years) before orthodontic treatment were evaluated for asymmetry. The same 3D images were presented to 12 lay judges who rated the attractiveness of each subject on a 100 mm visual analogue scale. Reliability of the method was assessed with Bland-Altman plots and Cronbach's alpha coefficient. All subjects showed a certain amount of asymmetry in all regions of the face; most asymmetry was found in the chin and cheek areas and less in the lip, nose and forehead areas. No statistically significant differences in regional facial asymmetries were found between male and female subjects (P > 0.05). Regression analyses demonstrated that the judgement of facial attractiveness was not influenced by absolute regional facial asymmetries when gender, facial width-to-height ratio and type of malocclusion were controlled for (P > 0.05). A potential limitation of the study is that other biologic and cultural factors influencing the perception of facial attractiveness were not controlled for. A small amount of asymmetry was present in all subjects assessed in this study, and asymmetry of this magnitude may not influence the assessment of facial attractiveness. © The Author 2015. Published by Oxford University Press on behalf of the European Orthodontic Society. All rights reserved. For permissions, please email: journals.permissions@oup.com.

  10. Eigen-disfigurement model for simulating plausible facial disfigurement after reconstructive surgery.

    PubMed

    Lee, Juhun; Fingeret, Michelle C; Bovik, Alan C; Reece, Gregory P; Skoracki, Roman J; Hanasono, Matthew M; Markey, Mia K

    2015-03-27

    Patients with facial cancers can experience disfigurement, as they may undergo considerable appearance changes from their illness and its treatment. Individuals who have difficulty adjusting to facial cancer are concerned about how others perceive and evaluate their appearance. Therefore, it is important to understand how humans perceive disfigured faces. We describe a new strategy for simulating surgically plausible facial disfigurement on a novel face, to elucidate human perception of facial disfigurement. Longitudinal 3D facial images of patients (N = 17) with facial disfigurement due to cancer treatment were replicated on a facial mannequin model by applying Thin-Plate Spline (TPS) warping and linear interpolation to the facial mannequin model in polar coordinates. Principal Component Analysis (PCA) was used to capture the longitudinal structural and textural variations found within each patient with facial disfigurement arising from the treatment; we treated such variations as the disfigurement. Each disfigurement was smoothly stitched onto a healthy face by seeking a Poisson solution to guided interpolation, using the gradient of the learned disfigurement as the guidance vector field. The modeling technique was quantitatively evaluated, and panel ratings by experienced medical professionals of the plausibility of the simulations were used to evaluate the proposed disfigurement model. The algorithm reproduced the given face effectively on a facial mannequin model, with less than 4.4 mm maximum error for the validation fiducial points that were not used in the processing. Panel ratings showed that the disfigurement model (especially for peripheral disfigurement) yielded predictions comparable to the real disfigurements.
The modeling technique is thus able to capture facial disfigurements, and its simulations represent plausible outcomes of reconstructive surgery for facial cancers, so our technique can be used to study human perception of facial disfigurement.
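The Poisson "guided interpolation" step above solves the discrete Poisson equation inside a mask, using the gradient of the source (the learned disfigurement) as the guidance field and the healthy face as the boundary condition. A tiny Gauss-Seidel sketch on a scalar grid; grid size, mask, and iteration count are illustrative assumptions:

```python
import numpy as np

def poisson_blend(target, source, mask, iters=500):
    """Seamless cloning by guided interpolation (Gauss-Seidel sweeps).

    Inside `mask`, each pixel is updated so its Laplacian matches the
    source's Laplacian while boundary values come from the target.
    """
    f = target.astype(float).copy()
    src = source.astype(float)
    h, w = f.shape
    for _ in range(iters):
        for i in range(1, h - 1):
            for j in range(1, w - 1):
                if not mask[i, j]:
                    continue
                s = 0.0
                for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    s += f[i + di, j + dj]            # neighbor values
                    s += src[i, j] - src[i + di, j + dj]  # guidance gradient
                f[i, j] = s / 4.0
    return f

# Constant source => zero guidance field, so the result is the harmonic
# interpolation of the target's boundary (here: all zeros).
target = np.zeros((8, 8))
source = np.full((8, 8), 5.0)
mask = np.zeros((8, 8), dtype=bool)
mask[2:6, 2:6] = True
out = poisson_blend(target, source, mask)
```

Production implementations (e.g. OpenCV's seamlessClone) solve the same system with fast sparse solvers and per-channel color handling.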

  11. Biometric Fusion Demonstration System Scientific Report

    DTIC Science & Technology

    2004-03-01

    verification and facial recognition, searching watchlist databases comprised of full or partial facial images or voice recordings. Multiple-biometric… [Table-of-contents fragments retained from the report: 2.2.1.1 Fingerprint and Facial Recognition; 2.2.1.2 Iris Recognition and Facial Recognition. DRDC Ottawa CR 2004-056]

  12. Facial discrimination in body dysmorphic, obsessive-compulsive and social anxiety disorders.

    PubMed

    Hübner, Claudia; Wiesendahl, Wiebke; Kleinstäuber, Maria; Stangier, Ulrich; Kathmann, Norbert; Buhlmann, Ulrike

    2016-02-28

    Body dysmorphic disorder (BDD) is characterized by preoccupation with perceived flaws in one's own appearance. Several risk factors, such as aesthetic perceptual sensitivity, have been proposed to explain BDD's unique symptomatology. Research on facial discrimination is limited so far, and the few existing studies have produced mixed results; the purpose of this study was therefore to further examine facial discrimination in BDD. We administered a facial discrimination paradigm that assesses the ability to identify slight to strong facial changes (e.g., hair loss, acne) when presented with an original (unmodified) facial image relative to a changed (modified) facial image. The experiment was administered to individuals with BDD, social anxiety disorder, or obsessive-compulsive disorder, and to mentally healthy controls (32 per group). Overall, the groups did not differ in their ability to correctly identify facial aberrations when presented with other people's faces. Our findings do not support the hypothesis of enhanced general aesthetic perceptual sensitivity in individuals with (vs. without) BDD. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  13. Orthodontic soft-tissue parameters: a comparison of cone-beam computed tomography and the 3dMD imaging system.

    PubMed

    Metzger, Tasha E; Kula, Katherine S; Eckert, George J; Ghoneima, Ahmed A

    2013-11-01

    Orthodontists rely heavily on soft-tissue analysis to determine facial esthetics and treatment stability. The aim of this retrospective study was to determine the equivalence of soft-tissue measurements between the 3dMD imaging system (3dMD, Atlanta, Ga) and the segmented skin surface images derived from cone-beam computed tomography. Seventy preexisting 3dMD facial photographs and cone-beam computed tomography scans, taken within minutes of each other for the same subjects, were registered in 3 dimensions and superimposed using Vultus (3dMD) software. After reliability studies, 28 soft-tissue measurements were recorded with both imaging modalities and compared to analyze their equivalence. Intraclass correlation coefficients and Bland-Altman plots were used to assess interexaminer and intraexaminer repeatability and agreement, and summary statistics were calculated for all measurements. To demonstrate equivalence of the 2 methods, the 95% confidence interval of the difference had to be contained entirely within the equivalence limits defined by the repeatability results. Statistically significant differences were reported for vermilion height, mouth width, total facial width, mouth symmetry, soft-tissue lip thickness, and eye symmetry. There are areas of nonequivalence between the 2 imaging methods; however, the differences are clinically acceptable from the orthodontic point of view. Copyright © 2013 American Association of Orthodontists. Published by Mosby, Inc. All rights reserved.
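The agreement analysis described above combines Bland-Altman bias and limits of agreement with an equivalence check that the 95% CI of the mean difference lies inside pre-set limits. A NumPy sketch with synthetic data; the ±1.96 normal quantile and the equivalence limits are standard assumptions, not the study's values:

```python
import numpy as np

def bland_altman(a, b):
    """Mean bias and 95% limits of agreement between two methods."""
    d = np.asarray(a, float) - np.asarray(b, float)
    bias = d.mean()
    sd = d.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

def equivalent(a, b, low, high):
    """True if the 95% CI of the mean difference lies within [low, high]."""
    d = np.asarray(a, float) - np.asarray(b, float)
    se = d.std(ddof=1) / np.sqrt(len(d))
    ci = (d.mean() - 1.96 * se, d.mean() + 1.96 * se)
    return low <= ci[0] and ci[1] <= high

# Synthetic paired measurements from two "imaging methods" on 70 subjects.
rng = np.random.default_rng(3)
x = rng.normal(50.0, 5.0, 70)            # method 1 (e.g. 3dMD), in mm
y = x + rng.normal(0.0, 0.2, 70)         # method 2 with small random error
bias, loa = bland_altman(x, y)
```

With a large-sample normal approximation this matches the two-one-sided-tests logic; exact small-sample work would use the t distribution instead of 1.96.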

  14. Facial asymmetry and condylar hyperplasia: considerations for diagnosis in 27 consecutive patients

    PubMed Central

    Olate, Sergio; Almeida, Andrés; Alister, Juan Pablo; Navarro, Pablo; Netto, Henrique Duque; de Moraes, Márcio

    2013-01-01

    Facial asymmetry associated with condylar hyperplasia (CH) has become an object of study in recent years. The aim of this study is to demonstrate the importance of analyzing the presence of CH in cases of facial asymmetry. Twenty-seven consecutive patients were studied without distinction of age or gender; all the patients consulted for treatment of facial and/or mandibular asymmetry and voluntarily agreed to participate in the study. All the patients underwent cone beam tomography of the face and both TMJs, as well as a detailed history in which they indicated the progression of the disease; in cases of active evolution, as determined by clinical analysis and imaging, a SPECT analysis was performed to define the isotope uptake. 29.6% of the subjects with scintigrams exhibited active CH with a more than 10% difference in uptake between the two condyles; 18.5% presented differences in uptake between 5% and 10%. Active CH was related to the age and gender of the subjects, being more prevalent in women than in men. The aggressiveness of the uptake was also related to the subject's age. 55% of the subjects had received some type of orthodontic treatment with no diagnosis of TMJ pathology at the initial consultation. It can be concluded that CH is associated with facial asymmetries and must be studied integrally before assessing treatment options. PMID:24260600

  15. New method for analysis of facial growth in a pediatric reconstructed mandible.

    PubMed

    Kau, Chung How; Kamel, Sherif Galal; Wilson, Jim; Wong, Mark E

    2011-04-01

    The aim of this article was to present a new method of analysis for the assessment of facial growth and morphology after surgical resection of the mandible in a growing patient. This was a 2-year longitudinal study of facial growth in a child who had undergone segmental resection of the mandible with immediate reconstruction as a treatment for juvenile aggressive fibromatosis. Three-dimensional digital stereo-photogrammetric cameras were used for image acquisition at several follow-up intervals: immediately, 6 months, and 2 years postresection. After processing and superimposition, shell-to-shell deviation maps were used for the analysis of the facial growth pattern and its deviation from normal growth. The changes were seen as mean surface changes and color maps. An average constructed female face from a previous study was used as a reference for a normal growth pattern. The patient showed significant growth during this period. Positive changes took place around the nose, lateral brow area, and lower lip and chin, whereas negative changes were evident in the lower lip and cheek areas. An increase in the vertical dimension of the face at the chin region was also prominent. Three-dimensional digital stereo-photogrammetry can be used as an objective, noninvasive method for quantifying and monitoring facial growth and its abnormalities. Copyright © 2011 American Association of Orthodontists. Published by Mosby, Inc. All rights reserved.

  16. Rating Nasolabial Aesthetics in Unilateral Cleft Lip and Palate Patients: Cropped Versus Full-Face Images.

    PubMed

    Schwirtz, Roderic M F; Mulder, Frans J; Mosmuller, David G M; Tan, Robin A; Maal, Thomas J; Prahl, Charlotte; de Vet, Henrica C W; Don Griot, J Peter W

    2018-05-01

    To determine whether cropping facial images affects nasolabial aesthetics assessments in unilateral cleft lip patients, and to evaluate the effect of facial attractiveness on nasolabial evaluation. Two cleft surgeons and one cleft orthodontist assessed standardized frontal photographs 4 times; nasolabial aesthetics were rated on cropped and full-face images using the Cleft Aesthetic Rating Scale (CARS), and total facial attractiveness was rated on full-face images with and without the nasolabial area blurred, using a 5-point Likert scale. Setting: Cleft Palate Craniofacial Unit of a University Medical Center. Inclusion criteria: nonsyndromic unilateral cleft lip and an available frontal view photograph around 10 years of age. Exclusion criteria: a history of facial trauma and an incomplete cleft. Eighty-one photographs were available for assessment. Differences in mean CARS scores between cropped versus full-face photographs, and between attractive versus unattractive rated patients, were evaluated by paired t test. Nasolabial aesthetics were scored more negatively on full-face photographs than on cropped photographs, regardless of facial attractiveness (mean CARS score, nose: cropped = 2.8, full-face = 3.0, P < .001; lip: cropped = 2.4, full-face = 2.7, P < .001; nose and lip: cropped = 2.6, full-face = 2.8, P < .001). Aesthetic outcomes of the nasolabial area are assessed significantly more positively on cropped images than on full-face images. For this reason, cropping images to reveal the nasolabial area only is recommended for aesthetic assessments.
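The paired t test used above compares cropped and full-face scores photo by photo: t = mean(d) / (sd(d)/sqrt(n)) on the per-photo differences, with n-1 degrees of freedom. A NumPy sketch with made-up example scores (not the study's data):

```python
import numpy as np

def paired_t(a, b):
    """Paired t statistic and degrees of freedom for matched samples."""
    d = np.asarray(a, float) - np.asarray(b, float)
    n = len(d)
    t = d.mean() / (d.std(ddof=1) / np.sqrt(n))
    return t, n - 1

# Hypothetical CARS scores for six photos, rated cropped then full-face
# (higher = worse on this scale).
cropped = np.array([2.4, 2.6, 2.5, 2.8, 2.3, 2.7])
full    = np.array([2.7, 2.9, 2.8, 3.0, 2.6, 3.0])
t, df = paired_t(full, cropped)
```

The p-value then comes from the t distribution with `df` degrees of freedom (e.g. scipy.stats.t.sf); scipy.stats.ttest_rel wraps the whole computation.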

  17. Facial expression recognition under partial occlusion based on fusion of global and local features

    NASA Astrophysics Data System (ADS)

    Wang, Xiaohua; Xia, Chen; Hu, Min; Ren, Fuji

    2018-04-01

    Facial expression recognition under partial occlusion is a challenging research problem. This paper proposes a novel framework for facial expression recognition under occlusion that fuses global and local features. In the global stream, information entropy is first employed to locate the occluded region, and Principal Component Analysis (PCA) is adopted to reconstruct it; a replacement strategy then rebuilds the image by substituting the occluded region with the corresponding region of the best-matched image in the training set, after which the Pyramid Weber Local Descriptor (PWLD) feature is extracted. Finally, the outputs of an SVM are fitted to class probabilities using a sigmoid function. In the local stream, an overlapping block-based method is adopted to extract WLD features, each block is weighted adaptively by information entropy, and Chi-square distance and similar-block summation are applied to obtain the probability of each emotion. Finally, decision-level fusion of the global and local features is performed based on Dempster-Shafer theory of evidence. Experimental results on the Cohn-Kanade and JAFFE databases demonstrate the effectiveness and fault tolerance of this method.
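The decision-level fusion step can be sketched with Dempster's rule of combination. As a simplification, both sources are assumed to assign mass to singleton emotion classes only (the paper's frame of discernment may include compound sets), so only equal singletons intersect and everything else is conflict:

```python
import numpy as np

def dempster_combine(m1, m2):
    """Dempster's rule for two singleton-only mass vectors."""
    m1 = np.asarray(m1, float)
    m2 = np.asarray(m2, float)
    joint = m1 * m2                  # mass where both sources agree
    conflict = 1.0 - joint.sum()     # mass on incompatible class pairs
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are incompatible")
    return joint / (1.0 - conflict)  # renormalize the agreeing mass

# Hypothetical per-class outputs for three emotions:
global_probs = np.array([0.6, 0.3, 0.1])   # e.g. global PWLD + SVM stream
local_probs  = np.array([0.5, 0.4, 0.1])   # e.g. local WLD-block stream
fused = dempster_combine(global_probs, local_probs)
```

Note how agreement between the two streams sharpens the fused belief in the winning class beyond either source alone, which is the appeal of evidence-based fusion for fault tolerance.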

  18. What's behind the mask? A look at blood flow changes with prolonged facial pressure and expression using laser Doppler imaging.

    PubMed

    Van-Buendia, Lan B; Allely, Rebekah R; Lassiter, Ronald; Weinand, Christian; Jordan, Marion H; Jeng, James C

    2010-01-01

    Clinically, the initial blanching in burn scar seen on application of a transparent plastic face mask seems to diminish with time and movement, requiring mask alteration. To date, studies quantifying perfusion with prolonged mask use do not exist. This study used laser Doppler imaging (LDI) to assess perfusion through the transparent face mask and with movement, over time, in subjects with and without burns. Five subjects fitted with transparent face masks were scanned with the LDI on four occasions. The four subjects without burns were scanned in the following manner: 1) no mask, 2) mask on while at rest, 3) mask on with alternating intervals of sustained facial expression and rest, and 4) after mask removal. Images were acquired every 3 minutes throughout the 85-minute study period. The subject with burns underwent a shortened scanning protocol to increase comfort. Each face was divided into five regions of interest for analysis. Compared with baseline, mask application decreased perfusion significantly in all subjects (P < .0001). Perfusion did not change during the rest period, and there were no significant differences with changing facial expression in any of the regions of interest. On mask removal, all regions of the face demonstrated a hyperemic effect, with the chin (P = .05) and each cheek (P < .0001) reaching statistical significance. Perfusion levels did not return to baseline in the chin and cheeks within 30 minutes of mask removal. Perfusion remains consistently low while wearing the face mask, despite changing facial expressions; changing facial expressions with the mask on did not alter perfusion. A hyperemic response occurs on removal of the mask. This study exposed methodological and statistical issues worth considering when conducting future research with the face, pressure therapy, and LDI technology.

  19. Visible skin colouration predicts perception of male facial age, health and attractiveness.

    PubMed

    Fink, B; Bunse, L; Matts, P J; D'Emiliano, D

    2012-08-01

    Although there is evidence that the perception of facial age, health and attractiveness is informed by shape characteristics as well as by visible skin condition, studies on the latter have focused almost exclusively on female skin. Recent research, however, suggests that a decrease in skin colour homogeneity leads to older, less healthy and less attractive ratings of facial skin in both women and men. Here, we elaborate on the significance of the homogeneity of visible skin colouration in men by testing the hypothesis that the perception of age, health and attractiveness of (non-contextual) digitally isolated fields of cheek skin alone can predict that of whole facial images. Facial digital images of 160 British men (all Caucasian) aged between 10 and 70 were blind-rated for age, health and attractiveness by a total of 147 men and 154 women (mean age = 22.95, SD = 4.26), and these ratings were related to those of corresponding images of cheek skin reported by Fink et al. (J. Eur. Acad. Dermatol. Venereol. in press). Linear regression analysis showed that the age, health and attractiveness perception of men's faces could be predicted by the ratings of cheek skin alone, such that faces whose cheek skin was rated as older were also viewed as older, less healthy and less attractive. This result underlines once again the potent signalling role of skin in its own right, independent of shape or other factors, and suggests strongly that visible skin condition, and skin colour homogeneity in particular, plays a significant role in the perception of men's faces. © 2012 The Authors. ICS © 2012 Society of Cosmetic Scientists and the Société Française de Cosmétologie.
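The prediction step described above can be sketched as a simple linear regression of whole-face ratings on cheek-skin ratings. The data below are synthetic stand-ins (the study's actual ratings are not reproduced here), so the variable names and coefficients are illustrative only:

```python
import numpy as np

# Hypothetical ratings: cheek-only attractiveness vs whole-face attractiveness.
rng = np.random.default_rng(0)
cheek = rng.normal(5.0, 1.0, 160)              # cheek-skin-only ratings
face = 0.8 * cheek + rng.normal(0, 0.5, 160)   # whole-face ratings

# Fit whole-face ratings as a linear function of cheek-skin ratings,
# the kind of prediction the abstract describes.
slope, intercept = np.polyfit(cheek, face, 1)
predicted = slope * cheek + intercept

# Correlation between observed and predicted whole-face ratings.
r = np.corrcoef(face, predicted)[0, 1]
```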

  20. CT detection of facial canal dehiscence and semicircular canal fistula: Comparison with surgical findings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fuse, Takeo; Tada, Yuichiro; Aoyagi, Masaru

    1996-03-01

    The purpose of this study was to determine the accuracy of high resolution CT (HRCT) in the detection of facial canal dehiscence and semicircular canal fistula, the preoperative evaluation of both of which is clinically very important for ear surgery. We retrospectively reviewed the HRCT findings in 61 patients who underwent mastoidectomy at Yamagata University between 1989 and 1993. The HRCT images were obtained in the axial and semicoronal planes using 1 mm slice thickness and 1 mm intersection gap. In 46 (75%) of the 61 patients, the HRCT image-based assessment of facial canal dehiscence coincided with the surgical findings. The data for the facial canal revealed a sensitivity of 66% and a specificity of 84%. For semicircular canal fistula, the HRCT image-based assessment and the surgical findings coincided in 59 (97%) of the 61 patients. The image-based assessment in the remaining two patients, who both had massive cholesteatoma, was false-positive. HRCT is useful in the diagnosis of facial canal dehiscence and labyrinthine fistula, but its limitations should also be recognized. 12 refs., 3 figs., 6 tabs.
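The accuracy figures quoted above follow from a standard 2x2 confusion table. A minimal sketch, with the usual definitions (the counts in the usage example are illustrative, not the study's actual table):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN), specificity = TN/(TN+FP):
    the accuracy measures quoted for the HRCT assessment."""
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative counts only: 2 true positives, 1 false negative,
# 5 true negatives, 1 false positive.
sens, spec = sensitivity_specificity(2, 1, 5, 1)
```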

  1. Digital Image Speckle Correlation for the Quantification of the Cosmetic Treatment with Botulinum Toxin Type A (BTX-A)

    NASA Astrophysics Data System (ADS)

    Bhatnagar, Divya; Conkling, Nicole; Rafailovich, Miriam; Dagum, Alexander

    2012-02-01

    The skin of the face is directly attached to the underlying muscles. Here, we introduce a non-invasive, non-contact technique, Digital Image Speckle Correlation (DISC), to measure the precise magnitude and duration of facial muscle paralysis induced by BTX-A. Subjective evaluation by clinicians and patients fails to objectively quantify the direct effect and duration of BTX-A on the facial musculature. By using DISC, we can (a) directly measure the deformation field of the facial skin and determine the locus of facial muscular tension; (b) quantify and monitor muscular paralysis and subsequent re-innervation following injection; and (c) continuously correlate the appearance of wrinkles with muscular tension. Two sequential photographs of slight facial motion (frowning, raising eyebrows) are taken. DISC processes the images to produce a vector map of muscular displacement, from which spatially resolved information is obtained regarding facial tension. DISC can track the ability of different muscle groups to contract and can be used to predict the site of injection, quantify muscle paralysis, and measure the rate of recovery following BOTOX injection.
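A minimal block-matching sketch of the DISC idea, assuming integer pixel shifts and a plain normalized cross-correlation search (the study's actual implementation is not reproduced here; function name and parameters are hypothetical):

```python
import numpy as np

def disc_displacement(frame_a, frame_b, win=16, search=4):
    """For each window in frame_a, find the integer shift into frame_b that
    maximizes normalized cross-correlation, yielding a sparse displacement
    (vector) field between two sequential photographs."""
    h, w = frame_a.shape
    vectors = []
    for y in range(search, h - win - search, win):
        for x in range(search, w - win - search, win):
            ref = frame_a[y:y + win, x:x + win].astype(float)
            ref = ref - ref.mean()
            best, best_dv = -np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    cand = frame_b[y + dy:y + dy + win, x + dx:x + dx + win].astype(float)
                    cand = cand - cand.mean()
                    denom = np.sqrt((ref ** 2).sum() * (cand ** 2).sum()) or 1.0
                    score = (ref * cand).sum() / denom
                    if score > best:
                        best, best_dv = score, (dx, dy)
            vectors.append((x, y, *best_dv))
    return np.array(vectors)  # columns: window x, window y, dx, dy
```

On a pair of frames where the second is a rigid shift of the first, every window should recover the same displacement vector.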

  2. A three-dimensional look for facial differences between males and females in a British-Caucasian sample aged 15½ years.

    PubMed

    Toma, A M; Zhurov, A; Playle, R; Richmond, S

    2008-08-01

    Optical surface scanning accurately records the three-dimensional (3D) shape of the face non-invasively. Many software programs have been developed to process and analyze the 3D data, enabling clinicians to create average templates for groups of subjects to provide a comparison of facial shape. Differences in the facial morphology of males and females were identified using laser scan imaging technology. This study was undertaken on 380 British-Caucasian children aged 15½ years, recruited from the Avon Longitudinal Study of Parents and Children (ALSPAC). 3D facial images were obtained for each individual using two high-resolution Konica/Minolta laser scanners. The scan quality was assessed and any unsuitable scans were excluded from the study. Average facial templates were created for males and females, and a registration technique was used to superimpose the facial shells of males and females so that facial differences could be quantified. Thirty unsuitable scans were excluded from the study. The final sample consisted of 350 subjects (166 females, 184 males). Females tended to have more prominent eyes and cheeks than males, with a maximum difference of 2.4 mm. Males tended to have more prominent noses and mouths, with a maximum difference of 2.7 mm. About 31% of the facial shells matched exactly (no difference), mainly in the forehead and chin regions of the face. Differences in facial morphology can be accurately quantified and visualized using 3D imaging technology. This method of facial assessment can be recommended for future research studies assessing facial soft tissue changes resulting from growth or healthcare intervention.
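The registration used to superimpose facial shells can be sketched as a rigid landmark alignment (the Kabsch/Procrustes solution). This is an illustrative stand-in, not the ALSPAC study's actual pipeline; the function name is hypothetical:

```python
import numpy as np

def procrustes_align(source, target):
    """Rigidly (rotation + translation) align one 3D landmark set to another
    by the Kabsch method, as a minimal sketch of shell superimposition."""
    sc = source - source.mean(axis=0)
    tc = target - target.mean(axis=0)
    u, _, vt = np.linalg.svd(sc.T @ tc)          # SVD of the covariance
    d = np.sign(np.linalg.det(vt.T @ u.T))       # guard against reflection
    rot = vt.T @ np.diag([1.0, 1.0, d]) @ u.T    # optimal rotation
    return (rot @ sc.T).T + target.mean(axis=0)  # rotate, then re-translate
```

After alignment, per-point distances between the superimposed surfaces give the millimetre differences the abstract reports.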

  3. Facial morphologies of an adult Egyptian population and an adult Houstonian white population compared using 3D imaging.

    PubMed

    Seager, Dennis Craig; Kau, Chung How; English, Jeryl D; Tawfik, Wael; Bussa, Harry I; Ahmed, Abou El Yazeed M

    2009-09-01

    To compare the facial morphologies of an adult Egyptian population with those of a Houstonian white population. The three-dimensional (3D) images were acquired via a commercially available stereophotogrammetric camera capture system. The 3dMDface System photographed 186 subjects from two population groups (Egypt and Houston). All of the participants from both population groups were between 18 and 30 years of age and had no apparent facial anomalies. All facial images were overlaid and superimposed, and a complex mathematical algorithm was performed to generate a composite facial average (one male and one female) for each subgroup (EGY-M: Egyptian male subjects; EGY-F: Egyptian female subjects; HOU-M: Houstonian male subjects; and HOU-F: Houstonian female subjects). The computer-generated facial averages were superimposed based on a previously validated superimposition method, and the facial differences were evaluated and quantified. Distinct facial differences were evident between the subgroups evaluated, involving various regions of the face including the slant of the forehead and the nasal, malar, and labial regions. Overall, the mean facial difference between the Egyptian and Houstonian female subjects was 1.33 +/- 0.93 mm, while that between the Egyptian and Houstonian male subjects was 2.32 +/- 2.23 mm. The range of differences was 14.34 mm for the female population pairing and 13.71 mm for the male population pairing. The average adult Egyptian and white Houstonian faces possess distinct differences. Different populations and ethnicities have different facial features and averages.

  4. Observers' response to facial disfigurement from head and neck cancer.

    PubMed

    Cho, Joowon; Fingeret, Michelle Cororve; Huang, Sheng-Cheng; Liu, Jun; Reece, Gregory P; Markey, Mia K

    2018-05-30

    Our long-term goal is to develop a normative feedback intervention to support head and neck cancer patients in forming realistic expectations about how other people in non-social group settings will respond to their appearance. This study aimed to evaluate the relationship between observer ratings of facial disfigurement and observer ratings of emotional response when viewing photographs of faces of head and neck cancer patients. Seventy-five observers rated their emotional response to each of 144 facial photographs of head and neck cancer patients using the Self-Assessment Manikin and rated the severity of facial disfigurement on a 9-point scale. Body image investment of the observers was measured using the Appearance Schemas Inventory-Revised. A standardized multiple regression model was used to assess the relationship between observer ratings of facial disfigurement and observer ratings of emotional response, taking into consideration the age and sex of the patient depicted in the stimulus photograph, as well as the age, sex, and body image investment of the observer. Observers who had a strong emotional response to a patient's facial photograph tended to rate the patient's facial disfigurement as more severe (standardized regression coefficient β = 0.328, P < 0.001). The sex and age of the observer had more influence on the rating of facial disfigurement than did the patient's demographic characteristics. Observers more invested in their own body image tended to rate facial disfigurement as more severe. This study lays the groundwork for a normative database of emotional response to facial disfigurement. Copyright © 2018 John Wiley & Sons, Ltd.
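The standardized regression coefficient reported above (β = 0.328) comes from ordinary least squares on z-scored variables. A minimal sketch, with a hypothetical function name and synthetic data in the test:

```python
import numpy as np

def standardized_betas(X, y):
    """Standardized multiple regression: z-score each predictor column and
    the outcome, then fit ordinary least squares. The fitted coefficients
    are the standardized regression coefficients (betas)."""
    Xz = (X - X.mean(axis=0)) / X.std(axis=0)
    yz = (y - y.mean()) / y.std()
    beta, *_ = np.linalg.lstsq(Xz, yz, rcond=None)
    return beta
```

With a single predictor, the standardized beta equals the Pearson correlation between predictor and outcome, which is a convenient sanity check.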

  5. Enhanced facial recognition for thermal imagery using polarimetric imaging.

    PubMed

    Gurton, Kristan P; Yuffa, Alex J; Videen, Gorden W

    2014-07-01

    We present a series of long-wave-infrared (LWIR) polarimetric-based thermal images of facial profiles in which polarization-state information of the image-forming radiance is retained and displayed. The resultant polarimetric images show enhanced facial features, additional texture, and details that are not present in corresponding conventional thermal imagery. It has been generally thought that conventional thermal imagery (MidIR or LWIR) could not produce the detailed spatial information required for reliable human identification due to the so-called "ghosting" effect often seen in thermal imagery of human subjects. By using polarimetric information, we are able to extract subtle surface features of the human face, thus improving subject identification. Polarimetric image sets considered include the conventional thermal intensity image, S0, the two Stokes images, S1 and S2, and a Stokes image product called the degree-of-linear-polarization image.
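The degree-of-linear-polarization image mentioned above is computed per pixel from the Stokes images as DoLP = sqrt(S1² + S2²) / S0. A minimal sketch (the function name is hypothetical):

```python
import numpy as np

def degree_of_linear_polarization(s0, s1, s2):
    """Per-pixel DoLP image from the Stokes images: the conventional
    thermal intensity image S0 and the two Stokes images S1 and S2.
    Pixels with S0 <= 0 are set to 0 to avoid division by zero."""
    s0 = np.asarray(s0, dtype=float)
    dolp = np.sqrt(np.asarray(s1, dtype=float) ** 2 +
                   np.asarray(s2, dtype=float) ** 2)
    return np.divide(dolp, s0, out=np.zeros_like(s0), where=s0 > 0)
```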

  6. Factors Influencing Perception of Facial Attractiveness: Gender and Dental Education.

    PubMed

    Jung, Ga-Hee; Jung, Seunggon; Park, Hong-Ju; Oh, Hee-Kyun; Kook, Min-Suk

    2018-03-01

    This study was conducted to investigate gender- and dental education-specific differences in the perception of facial attractiveness for varying ratios of the lower face contour. Two hundred eleven students (110 male respondents and 110 female respondents; aged 20-38 years) were asked to rate facial figures with alterations to the bigonial width and the vertical length of the lower face. We produced a standard figure based on the "golden ratio" and 4 additional series of figures with either horizontal or vertical alterations to the contour of the lower face. The preference for each figure was evaluated using a Visual Analog Scale. The Kruskal-Wallis test was used to test for differences in the preferences for each figure, and the Mann-Whitney U test was used to evaluate gender-specific differences and differences by dental education. In general, the standard figure received the highest preference score, whereas the facial figure with a large bigonial width and chin length had the lowest score. Male respondents showed a significantly higher preference score for the facial contour that had a 0.1 proportional increase in the facial height-bigonial width ratio over that of the standard figure. For horizontal alterations to the facial profiles, there were no significant differences in the preferences by level of dental education. For vertically altered images, the average Visual Analog Scale score was significantly lower among the dentally educated for facial images that had a proportional 0.22 and 0.42 increase in the ratio between the vertical length of the chin and the lip. Generally, the standard image based on the golden ratio was the most preferred. A slender face appealed more to male respondents than to female respondents, and facial images with an increased lower facial height were perceived as much less attractive by the dentally educated respondents, which suggests that dental education may confer some sensitivity to vertical changes in the lower face.
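The Mann-Whitney U statistic used for the pairwise comparisons can be sketched in a few lines. This toy version ignores tie handling and the normal approximation needed for p-values, so it illustrates only the rank-sum computation itself:

```python
def mann_whitney_u(a, b):
    """Mann-Whitney U via rank sums (toy version: no tie correction).
    Returns the smaller of the two U statistics, as conventionally reported."""
    # Rank all observations together, tagging which sample each came from.
    combined = sorted((v, 0 if i < len(a) else 1)
                      for i, v in enumerate(list(a) + list(b)))
    # Sum of ranks (1-based) belonging to sample a.
    r_a = sum(rank for rank, (_, grp) in enumerate(combined, start=1) if grp == 0)
    u_a = r_a - len(a) * (len(a) + 1) / 2
    return min(u_a, len(a) * len(b) - u_a)
```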

  7. Resource Allocation in Dynamic Environments

    DTIC Science & Technology

    2012-10-01

    Utility Curve for the TOC Camera 42 Figure 20: Utility Curves for Ground Vehicle Camera and Squad Camera 43 Figure 21: Facial-Recognition Utility...A Facial-Recognition Server (FRS) can receive images from smartphones the squads use, compare them to a local database, and then return the...fallback. In addition, each squad has the ability to capture images with a smartphone and send them to a Facial-Recognition Server in the TOC to

  8. Personality judgments from everyday images of faces

    PubMed Central

    Sutherland, Clare A. M.; Rowley, Lauren E.; Amoaku, Unity T.; Daguzan, Ella; Kidd-Rossiter, Kate A.; Maceviciute, Ugne; Young, Andrew W.

    2015-01-01

    People readily make personality attributions to images of strangers' faces. Here we investigated the basis of these personality attributions as made to everyday, naturalistic face images. In a first study, we used 1000 highly varying “ambient image” face photographs to test the correspondence between personality judgments of the Big Five and dimensions known to underlie a range of facial first impressions: approachability, dominance, and youthful-attractiveness. Interestingly, the facial Big Five judgments were found to separate to some extent: judgments of openness, extraversion, emotional stability, and agreeableness were mainly linked to facial first impressions of approachability, whereas conscientiousness judgments involved a combination of approachability and dominance. In a second study we used average face images to investigate which main cues are used by perceivers to make impressions of the Big Five, by extracting consistent cues to impressions from the large variation in the original images. When forming impressions of strangers from highly varying, naturalistic face photographs, perceivers mainly seem to rely on broad facial cues to approachability, such as smiling. PMID:26579008

  9. Photo anthropometric variations in Japanese facial features: Establishment of large-sample standard reference data for personal identification using a three-dimensional capture system.

    PubMed

    Ogawa, Y; Wada, B; Taniguchi, K; Miyasaka, S; Imaizumi, K

    2015-12-01

    This study clarifies the anthropometric variations of the Japanese face by presenting large-sample population data of photo anthropometric measurements. The measurements can be used as standard reference data for the personal identification of facial images in forensic practices. To this end, three-dimensional (3D) facial images of 1126 Japanese individuals (865 male and 261 female Japanese individuals, aged 19-60 years) were acquired as samples using an already validated 3D capture system, and normative anthropometric analysis was carried out. In this anthropometric analysis, first, anthropological landmarks (22 items, i.e., entocanthion (en), alare (al), cheilion (ch), zygion (zy), gonion (go), sellion (se), gnathion (gn), labrale superius (ls), stomion (sto), labrale inferius (li)) were positioned on each 3D facial image (the direction of which had been adjusted to the Frankfort horizontal plane as the standard position for appropriate anthropometry), and anthropometric absolute measurements (19 items, i.e., bientocanthion breadth (en-en), nose breadth (al-al), mouth breadth (ch-ch), bizygomatic breadth (zy-zy), bigonial breadth (go-go), morphologic face height (se-gn), upper-lip height (ls-sto), lower-lip height (sto-li)) were exported using computer software for the measurement of a 3D digital object. Second, anthropometric indices (21 items, i.e., (se-gn)/(zy-zy), (en-en)/(al-al), (ls-li)/(ch-ch), (ls-sto)/(sto-li)) were calculated from these exported measurements. As a result, basic statistics, such as the mean values, standard deviations, and quartiles, and details of the distributions of these anthropometric results were shown. All of the results except "upper/lower lip ratio (ls-sto)/(sto-li)" were normally distributed. They were acquired as carefully as possible employing a 3D capture system and 3D digital imaging technologies. The sample of images was much larger than any Japanese sample used before for the purpose of personal identification. 
The measurements will be useful as standard reference data for forensic practices and as material data for future studies in this field. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
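The absolute measurements and indices above reduce to Euclidean distances between 3D landmark coordinates and their ratios. A minimal sketch with hypothetical, not study-derived, coordinates (mm, after alignment to the Frankfort horizontal plane):

```python
import numpy as np

def distance(p, q):
    """Euclidean distance between two 3D landmark coordinates (mm)."""
    return float(np.linalg.norm(np.asarray(p, dtype=float) - np.asarray(q, dtype=float)))

# Hypothetical landmark positions; real values come from the 3D capture system.
landmarks = {
    "se": (0.0, 0.0, 0.0),       # sellion
    "gn": (0.0, -110.0, -5.0),   # gnathion
    "zy_r": (-65.0, -30.0, -40.0),  # right zygion
    "zy_l": (65.0, -30.0, -40.0),   # left zygion
}

face_height = distance(landmarks["se"], landmarks["gn"])      # se-gn
bizygomatic = distance(landmarks["zy_r"], landmarks["zy_l"])  # zy-zy
facial_index = face_height / bizygomatic                      # (se-gn)/(zy-zy)
```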

  10. Facial Palsy Following Embolization of a Juvenile Nasopharyngeal Angiofibroma.

    PubMed

    Tawfik, Kareem O; Harmon, Jeffrey J; Walters, Zoe; Samy, Ravi; de Alarcon, Alessandro; Stevens, Shawn M; Abruzzo, Todd

    2018-05-01

    To describe a case of the rare complication of facial palsy following preoperative embolization of a juvenile nasopharyngeal angiofibroma (JNA), and to illustrate the vascular supply of the facial nerve and thereby highlight the etiology of the facial nerve palsy. The angiography and magnetic resonance (MR) imaging of a case of facial palsy following preoperative embolization of a JNA are reviewed. A 13-year-old male developed left-sided facial palsy following preoperative embolization of a left-sided JNA. Evaluation of the MR imaging studies and retrospective review of the angiographic data suggested errant embolization of particles into the petrosquamosal branch of the middle meningeal artery (MMA), a branch of the internal maxillary artery (IMA), through collateral vasculature. The petrosquamosal branch of the MMA is the predominant blood supply to the facial nerve in the facial canal. The facial palsy resolved, since complete infarction of the nerve was likely prevented by collateral blood supply from the stylomastoid artery. Facial palsy is a potential complication of embolization of the IMA, a branch of the external carotid artery (ECA), secondary to ischemia of the facial nerve caused by embolization of its vascular supply. Clinicians should be aware of this potential complication and counsel patients accordingly prior to embolization for JNA.

  11. Three-dimensional assessment of facial asymmetry: A systematic review.

    PubMed

    Akhil, Gopi; Senthil Kumar, Kullampalayam Palanisamy; Raja, Subramani; Janardhanan, Kumaresan

    2015-08-01

    For patients with facial asymmetry, complete and precise diagnosis and surgical treatment to correct the underlying cause of the asymmetry are essential. Conventional diagnostic radiographs (submento-vertex projections, posteroanterior radiography) have limitations in asymmetry diagnosis because they are two-dimensional assessments of three-dimensional (3D) structures. The advent of 3D imaging has greatly reduced the magnification and projection errors that are common in conventional radiographs, making it a precise diagnostic aid for the assessment of facial asymmetry. Thus, this article reviews the newly introduced 3D tools for the diagnosis of more complex facial asymmetries.

  12. Functional connectivity between amygdala and facial regions involved in recognition of facial threat

    PubMed Central

    Harada, Tokiko; Ruffman, Ted; Sadato, Norihiro; Iidaka, Tetsuya

    2013-01-01

    The recognition of threatening faces is important for making social judgments. For example, threatening facial features of defendants could affect the decisions of jurors during a trial. Previous neuroimaging studies using faces of members of the general public have identified a pivotal role of the amygdala in perceiving threat. This functional magnetic resonance imaging study used face photographs of male prisoners who had been convicted of first-degree murder (MUR) as threatening facial stimuli. We compared the subjective ratings of MUR faces with those of control (CON) faces and examined how they were related to brain activation, particularly, the modulation of the functional connectivity between the amygdala and other brain regions. The MUR faces were perceived to be more threatening than the CON faces. The bilateral amygdala was shown to respond to both MUR and CON faces, but subtraction analysis revealed no significant difference between the two. Functional connectivity analysis indicated that the extent of connectivity between the left amygdala and the face-related regions (i.e. the superior temporal sulcus, inferior temporal gyrus and fusiform gyrus) was correlated with the subjective threat rating for the faces. We have demonstrated that the functional connectivity is modulated by vigilance for threatening facial features. PMID:22156740

  13. 32 CFR 161.7 - ID card life-cycle procedures.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... provide two fingerprint biometric scans and a facial image, to assist with authenticating the applicant's... manner: (i) A digitized, full-face passport-type photograph will be captured for the facial image and stored in DEERS and shall have a plain white or off-white background. No flags, posters, or other images...

  14. Adaptive weighted local textural features for illumination, expression, and occlusion invariant face recognition

    NASA Astrophysics Data System (ADS)

    Cui, Chen; Asari, Vijayan K.

    2014-03-01

    Biometric features such as fingerprints, iris patterns, and face features help to identify people and restrict access to secure areas by performing advanced pattern analysis and matching. Face recognition is one of the most promising biometric methodologies for human identification in a non-cooperative security environment. However, the recognition results obtained by face recognition systems are affected by several variations that may occur in the patterns in an unrestricted environment. As a result, several algorithms have been developed for extracting different facial features for face recognition. Due to the various possible challenges of data captured at different lighting conditions, viewing angles, facial expressions, and partial occlusions in natural environmental conditions, automatic facial recognition still remains a difficult issue that needs to be resolved. In this paper, we propose a novel approach to tackling some of these issues by analyzing the local textural descriptions for facial feature representation. The textural information is extracted by an enhanced local binary pattern (ELBP) description of all the local regions of the face. The relationship of each pixel with respect to its neighborhood is extracted and employed to calculate the new representation. ELBP reconstructs a much better textural feature extraction vector from an original gray level image in different lighting conditions. The dimensionality of the texture image is reduced by principal component analysis performed on each local face region. Each low-dimensional vector representing a local region is then weighted based on the significance of the sub-region. The weight of each sub-region is determined by the local variance estimate of the respective region, which represents the significance of the region. The final facial textural feature vector is obtained by concatenating the reduced-dimensional weight sets of all the modules (sub-regions) of the face image.
Experiments conducted on various popular face databases show promising performance of the proposed algorithm under varying lighting, expression, and partial occlusion conditions. Four databases were used to test the performance of the proposed system: the Yale Face database, the Extended Yale Face database B, the Japanese Female Facial Expression database, and the CMU AMP Facial Expression database. The experimental results on all four databases show the effectiveness of the proposed system. The computation cost is also lower because of the simplified calculation steps. Research is in progress to investigate the effectiveness of the proposed face recognition method under pose-varying conditions as well. It is envisaged that a multilane approach of trained frameworks at different pose bins and an appropriate voting strategy would lead to a good recognition rate in such situations.
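The local-binary-pattern description that ELBP builds on can be sketched as follows. This is the basic 8-neighbour LBP, not the paper's enhanced variant, and the function name is hypothetical:

```python
import numpy as np

def lbp_image(gray):
    """Basic 8-neighbour local binary pattern: each pixel is encoded by
    thresholding its 3x3 neighbourhood against the centre value, one bit
    per neighbour in clockwise order."""
    g = np.asarray(gray, dtype=float)
    out = np.zeros(g.shape, dtype=np.uint8)
    # Neighbour offsets (dy, dx) in clockwise order, one bit each.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        shifted = np.roll(np.roll(g, -dy, axis=0), -dx, axis=1)
        out |= ((shifted >= g).astype(np.uint8) << bit)
    return out  # border pixels wrap here and would normally be discarded
```

Histograms of such codes over local face regions form the textural feature vectors that the paper then reduces with PCA and weights by local variance.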

  15. Recognizing Age-Separated Face Images: Humans and Machines

    PubMed Central

    Yadav, Daksha; Singh, Richa; Vatsa, Mayank; Noore, Afzel

    2014-01-01

    Humans utilize facial appearance, gender, expression, aging pattern, and other ancillary information to recognize individuals. It is interesting to observe how humans perceive facial age. Analyzing these properties can help in understanding the phenomenon of facial aging and incorporating the findings can help in designing effective algorithms. Such a study has two components - facial age estimation and age-separated face recognition. Age estimation involves predicting the age of an individual given his/her facial image. On the other hand, age-separated face recognition consists of recognizing an individual given his/her age-separated images. In this research, we investigate which facial cues are utilized by humans for estimating the age of people belonging to various age groups along with analyzing the effect of one's gender, age, and ethnicity on age estimation skills. We also analyze how various facial regions such as binocular and mouth regions influence age estimation and recognition capabilities. Finally, we propose an age-invariant face recognition algorithm that incorporates the knowledge learned from these observations. Key observations of our research are: (1) the age group of newborns and toddlers is easiest to estimate, (2) gender and ethnicity do not affect the judgment of age group estimation, (3) face as a global feature, is essential to achieve good performance in age-separated face recognition, and (4) the proposed algorithm yields improved recognition performance compared to existing algorithms and also outperforms a commercial system in the young image as probe scenario. PMID:25474200

  17. The Usefulness of MR Imaging of the Temporal Bone in the Evaluation of Patients with Facial and Audiovestibular Dysfunction

    PubMed Central

    Park, Sang Uk; Cho, Young Kuk; Lim, Myung Kwan; Kim, Won Hong; Suh, Chang Hae; Lee, Seung Chul

    2002-01-01

    Objective To evaluate the clinical utility of MR imaging of the temporal bone in patients with facial and audiovestibular dysfunction, with particular emphasis on the importance of contrast enhancement. Materials and Methods We retrospectively reviewed the MR images of 179 patients [72 men, 107 women; average age, 44 (range, 1-77) years] who presented with peripheral facial palsy (n=15), audiometrically proven sensorineural hearing loss (n=104), vertigo (n=109), or tinnitus (n=92). Positive MR imaging findings possibly responsible for the patients' clinical manifestations were categorized according to the anatomic sites and presumed etiologies of the lesions. We also assessed the utility of contrast-enhanced MR imaging by analyzing its contribution to the demonstration of lesions which would otherwise not have been apparent. All MR images were interpreted by two neuroradiologists, who reached their conclusions by consensus. Results MR images demonstrated positive findings, thought to account for the presenting symptoms, in 78 (44%) of 179 patients, including 15 (100%) of 15 with peripheral facial palsy, 43 (41%) of 104 with sensorineural hearing loss, 40 (37%) of 109 with vertigo, and 39 (42%) of 92 with tinnitus. Thirty (38%) of those 78 patients had lesions that could be confidently recognized only at contrast-enhanced MR imaging. Conclusion Even though its use led to positive findings in less than half of these patients, MR imaging of the temporal bone is a useful diagnostic procedure in the evaluation of those with facial and audiovestibular dysfunction. Because it was only at contrast-enhanced MR imaging that a significant number of patients showed positive imaging findings which explained their clinical manifestations, the use of contrast material is highly recommended. PMID:11919474

  18. Impaired social brain network for processing dynamic facial expressions in autism spectrum disorders.

    PubMed

    Sato, Wataru; Toichi, Motomi; Uono, Shota; Kochiyama, Takanori

    2012-08-13

    Impairment of social interaction via facial expressions represents a core clinical feature of autism spectrum disorders (ASD). However, the neural correlates of this dysfunction remain unidentified. Because this dysfunction is manifested in real-life situations, we hypothesized that the observation of dynamic, compared with static, facial expressions would reveal abnormal brain functioning in individuals with ASD. We presented dynamic and static facial expressions of fear and happiness to individuals with high-functioning ASD and to age- and sex-matched typically developing controls and recorded their brain activities using functional magnetic resonance imaging (fMRI). Regional analysis revealed reduced activation of several brain regions in the ASD group compared with controls in response to dynamic versus static facial expressions, including the middle temporal gyrus (MTG), fusiform gyrus, amygdala, medial prefrontal cortex, and inferior frontal gyrus (IFG). Dynamic causal modeling analyses revealed that bi-directional effective connectivity involving the primary visual cortex-MTG-IFG circuit was enhanced in response to dynamic as compared with static facial expressions in the control group. Group comparisons revealed that all these modulatory effects were weaker in the ASD group than in the control group. These results suggest that weak activity and connectivity of the social brain network underlie the impairment in social interaction involving dynamic facial expressions in individuals with ASD.

  19. Ideal proportions in full face front view, contemporary versus antique.

    PubMed

    Mommaerts, M Y; Moerenhout, B A M M L

    2011-03-01

    To compare the facial proportions of contemporary harmonious faces with those of antiquity, to validate classical canons and to determine new ones useful in orthofacial surgery planning. Contemporary beautiful faces were retrieved from yearly polls of People Magazine and FHM. Selected B/W frontal facial photographs of 31 men and 74 women were ranked by 20 patients who were to undergo orthofacial surgery. The top 15 female faces and the top 10 male faces were analyzed with Scion Image software. The classical facial index, the Bruges facial index, the ratio of lower facial height to total facial height, and the vertical tripartition of the lower face were calculated. The same analysis was done on pictures of classical sculptures representing seven goddesses and 12 gods. Harmonious contemporary female faces have a significantly lower classical facial index, indicating that facial height is less, or facial width larger, than in male and even in antique female faces. The Bruges index indicates a similar difference between ideal contemporary female and male faces. The contemporary male has a taller lower face (48% of total facial height) than the contemporary female (45%), although this difference is not statistically significant (P=0.08). The lower facial thirds index has remained quite stable for 2500 years, without gender difference. A good canon for both sexes today is stomion-gnathion being 70% of subnasale-gnathion. The average ideal contemporary female face is shorter than the male face, given that interpupillary distance is similar. The Vitruvian thirds in the lower face have to be adjusted to a 30% upper lip, 70% lower lip-chin proportion. The contemporary ideal ratios are suitable for implementation in an orthofacial planning concept. Copyright © 2010 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.
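
    The 30%/70% lower-face canon above is straightforward to check from landmark coordinates. Below is a minimal Python sketch (the function name and the y-down pixel coordinate convention are illustrative assumptions, not taken from the paper):

```python
def lower_face_split(subnasale_y, stomion_y, gnathion_y):
    """Share of the lower facial third taken by the upper lip
    (subnasale-stomion) and by the lower lip plus chin (stomion-gnathion),
    using vertical pixel coordinates that increase downwards."""
    total = gnathion_y - subnasale_y
    upper = stomion_y - subnasale_y
    lower = gnathion_y - stomion_y
    return upper / total, lower / total

# A face matching the contemporary 30%/70% canon:
print(lower_face_split(410, 440, 510))  # -> (0.3, 0.7)
```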

  20. Validation of image analysis techniques to measure skin aging features from facial photographs.

    PubMed

    Hamer, M A; Jacobs, L C; Lall, J S; Wollstein, A; Hollestein, L M; Rae, A R; Gossage, K W; Hofman, A; Liu, F; Kayser, M; Nijsten, T; Gunn, D A

    2015-11-01

    Accurate measurement of the extent to which skin has aged is crucial for skin aging research. Image analysis offers a quick and consistent approach for quantifying skin aging features from photographs, but is prone to technical bias and requires proper validation. Facial photographs of 75 male and 75 female North-European participants, randomly selected from the Rotterdam Study, were graded by two physicians using photonumeric scales for wrinkles (full face, forehead, crow's feet, nasolabial fold and upper lip), pigmented spots and telangiectasia. Image analysis measurements of the same features were optimized using photonumeric grades from 50 participants, then compared to photonumeric grading in the 100 remaining participants, stratified by sex. The inter-rater reliability of the photonumeric grades was good to excellent (intraclass correlation coefficients 0.65-0.93). Correlations between the digital measures and the photonumeric grading were moderate to excellent for all the wrinkle comparisons (Spearman's ρ = 0.52-0.89) bar the upper lip wrinkles in the men (fair, ρ = 0.30). Correlations were moderate to good for pigmented spots and telangiectasia (ρ = 0.60-0.75). These comparisons demonstrate that all the image analysis measures, bar the upper lip measure in the men, are suitable for use in skin aging research, and they highlight areas of improvement for future refinements of the techniques. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons.
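
    Comparisons of this kind between digital measures and photonumeric grades are rank-based. As a rough stdlib-only sketch, Spearman's ρ is the Pearson correlation of average ranks (ties share a rank); in practice a library routine such as scipy.stats.spearmanr would be used:

```python
def _avg_ranks(xs):
    """Ranks starting at 1; tied values share the average rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        for k in range(i, j + 1):
            ranks[order[k]] = (i + j) / 2 + 1
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman's rho = Pearson correlation of the rank vectors."""
    rx, ry = _avg_ranks(x), _avg_ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    var_x = sum((a - mx) ** 2 for a in rx)
    var_y = sum((b - my) ** 2 for b in ry)
    return cov / (var_x * var_y) ** 0.5

print(spearman_rho([1, 2, 3, 4], [1, 4, 9, 16]))  # perfectly monotone -> 1.0
```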

  1. Facial Attractiveness Assessment using Illustrated Questionnaires

    PubMed Central

    MESAROS, ANCA; CORNEA, DANIELA; CIOARA, LIVIU; DUDEA, DIANA; MESAROS, MICHAELA; BADEA, MINDRA

    2015-01-01

    Introduction. An attractive facial appearance is considered nowadays to be a decisive factor in establishing successful interactions between humans. On this topic, the scientific literature states that some facial features have more impact than others, and important authors have revealed that certain proportions between different anthropometrical landmarks are mandatory for an attractive facial appearance. Aim. Our study aims to assess whether certain facial features weigh differently in people's opinion when assessing facial attractiveness, in correlation with factors such as age, gender, specific training and culture. Material and methods. A 5-item multiple-choice illustrated questionnaire was presented to 236 dental students. The Photoshop CS3 software was used to obtain the sets of images for the illustrated questions. The original image was handpicked from the internet by a panel of young dentists from a series of 15 pictures of people considered to have attractive faces. For each of the questions, the images presented simulated deviations from the ideally symmetric and proportionate face. The sets of images consisted of multiple variations of deviations mixed with the original photo. Junior and sophomore year students of different nationalities from our dental medical school were asked to complete the questionnaire. Simple descriptive statistics were used to interpret the data. Results. A majority of students considered overdevelopment of the lower facial third unattractive, while the initial image, with perfect symmetry and proportion, was considered the most attractive by only 38.9% of the subjects. Likewise, regarding symmetry, 36.86% considered canting of the inter-commissural line unattractive. The interviewed subjects considered that for a face to be attractive it needs to have harmonious proportions between the different facial elements. Conclusions. In any evaluation of facial attractiveness it is important to keep in mind that such assessment is subjective and influenced by multiple factors, among which the most important are cultural background and specific training. PMID:26528052

  2. Comparative Accuracy of Facial Models Fabricated Using Traditional and 3D Imaging Techniques.

    PubMed

    Lincoln, Ketu P; Sun, Albert Y T; Prihoda, Thomas J; Sutton, Alan J

    2016-04-01

    The purpose of this investigation was to compare the accuracy of facial models fabricated using facial moulage impression methods with that of models fabricated using three-dimensional printing (3DP) methods based on soft tissue images obtained from cone beam computed tomography (CBCT) and 3D stereophotogrammetry (3D-SPG) scans. A reference phantom model was fabricated using a 3D-SPG image of a human control form with ten fiducial markers placed on common anthropometric landmarks. This image was converted into the investigation control phantom model (CPM) using 3DP methods. The CPM was attached to a camera tripod for ease of image capture. Three CBCT and three 3D-SPG images of the CPM were captured. The DICOM and STL files from the three 3D-SPG (3dMD) and three CBCT scans were imported to the 3D printer, and six testing models were made. Reversible hydrocolloid was used to make three facial moulage impressions of the CPM, and the resulting casts were poured in type IV gypsum dental stone. A coordinate measuring machine (CMM) was used to measure the distances between each of the ten fiducial markers. Each measurement was made using one point as a static reference to the other nine points. The same measuring procedures were accomplished on all specimens. All measurements were compared between specimens and the control. The data were analyzed using ANOVA and Tukey pairwise comparison of the raters, methods, and fiducial markers. The ANOVA multiple comparisons showed a significant difference among the three methods (p < 0.05). Further, the interaction of methods versus fiducial markers also showed a significant difference (p < 0.05). The CBCT and facial moulage methods showed the greatest accuracy. 3DP models fabricated using 3D-SPG showed a statistically significant difference in comparison with the models fabricated using the traditional facial moulage method and the 3DP models fabricated from CBCT imaging.
3DP models fabricated using 3D-SPG were less accurate than the CPM and models fabricated using facial moulage and CBCT imaging techniques. © 2015 by the American College of Prosthodontists.
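
    The measurement protocol above, one static reference marker measured against the other nine, mirrors directly into code. A minimal sketch with made-up marker names and coordinates (not the study's landmark set):

```python
import math

def distances_from_reference(markers, ref):
    """Distance from one static reference marker to every other marker,
    mirroring the CMM protocol of measuring nine points against one."""
    origin = markers[ref]
    return {name: math.dist(origin, point)
            for name, point in markers.items() if name != ref}

# Illustrative coordinates in mm (not the study's landmarks):
markers = {"nasion": (0.0, 0.0, 0.0),
           "pronasale": (0.0, 3.0, 4.0),
           "pogonion": (0.0, 0.0, 9.0)}
print(distances_from_reference(markers, "nasion"))
# -> {'pronasale': 5.0, 'pogonion': 9.0}
```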

  3. Reduction of facial redness with resveratrol added to topical product containing green tea polyphenols and caffeine.

    PubMed

    Ferzli, Georgina; Patel, Mital; Phrsai, Natasha; Brody, Neil

    2013-07-01

    Many topical formulations include antioxidants to improve the antioxidant capability of the skin. This study evaluated the ability of a unique combination of antioxidants including resveratrol, green tea polyphenols, and caffeine to reduce facial redness. Subjects (n=16) presenting with facial redness applied the resveratrol-enriched product twice daily to the entire face. Reduction in redness was evaluated by trained staff members and dermatology house staff officers. Evaluators compared clinical photographs and spectrally enhanced images taken before treatment and at 2-week intervals for up to 12 weeks. 16 of 16 clinical images showed improvement and 13 of 16 spectrally enhanced images were improved. Reduction in facial redness continued to evolve over the duration of the study period but was generally detectable by 6 weeks of treatment. Adverse effects were not observed in any subject. The skin product combination of resveratrol, green tea polyphenols, and caffeine safely reduces facial redness in most patients by 6 weeks of continuous treatment and may provide further improvement with additional treatment.

  4. Facial expression recognition based on weber local descriptor and sparse representation

    NASA Astrophysics Data System (ADS)

    Ouyang, Yan

    2018-03-01

    Automatic facial expression recognition has been one of the research hotspots in computer vision for nearly ten years. During the decade, many state-of-the-art methods have been proposed which achieve very high accuracy on face images free of any interference. Nowadays, many researchers are taking on the task of classifying facial expression images with corruption and occlusion, and the Sparse Representation based Classification (SRC) framework has been widely used because it is robust to both. This paper therefore proposes a novel facial expression recognition method based on the Weber local descriptor (WLD) and sparse representation. The method includes three parts: first, the face images are divided into many local patches; then the WLD histograms of each patch are extracted; finally, all the WLD histograms are concatenated into a single feature vector and classified with SRC. The experimental results on the Cohn-Kanade database show that the proposed method is robust to occlusions and corruptions.
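
    The differential-excitation component of the WLD mentioned above can be sketched briefly. The code below (stdlib only; the alpha value and bin count are illustrative choices, not the paper's settings) computes the excitation arctan(alpha * sum(x_i - x_c) / x_c) over a grayscale patch and bins it into a histogram; a full WLD would add the gradient-orientation component and concatenate histograms across patches:

```python
import math

def wld_excitations(patch, alpha=3.0):
    """Differential excitation arctan(alpha * sum(x_i - x_c) / x_c) over the
    3x3 neighbourhood of each interior pixel (the dy = dx = 0 term is zero)."""
    h, w = len(patch), len(patch[0])
    out = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            center = patch[y][x] or 1   # guard against division by zero
            diff = sum(patch[y + dy][x + dx] - patch[y][x]
                       for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            out.append(math.atan(alpha * diff / center))
    return out

def excitation_histogram(values, bins=8):
    """Histogram over the excitation range (-pi/2, pi/2)."""
    lo, span = -math.pi / 2, math.pi
    hist = [0] * bins
    for v in values:
        hist[min(int((v - lo) / span * bins), bins - 1)] += 1
    return hist

# A flat patch has zero excitation everywhere, so all mass lands mid-range:
flat = [[10] * 5 for _ in range(5)]
print(excitation_histogram(wld_excitations(flat)))  # -> [0, 0, 0, 0, 9, 0, 0, 0]
```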

  5. Gender differences in the motivational processing of facial beauty☆

    PubMed Central

    Levy, Boaz; Ariely, Dan; Mazar, Nina; Chi, Won; Lukas, Scott; Elman, Igor

    2013-01-01

    Gender may be involved in the motivational processing of facial beauty. This study applied a behavioral probe, known to activate brain motivational regions, to healthy heterosexual subjects. Matched samples of men and women were administered two tasks: (a) key pressing to change the viewing time of average or beautiful female or male facial images, and (b) rating the attractiveness of these images. Men expended more effort (via the key-press task) to extend the viewing time of the beautiful female faces. Women displayed similarly increased effort for beautiful male and female images, but the magnitude of this effort was substantially lower than that of men for beautiful females. Heterosexual facial attractiveness ratings were comparable in both groups. These findings demonstrate heterosexual specificity of facial motivational targets for men, but not for women. Moreover, heightened drive for the pursuit of heterosexual beauty in the face of regular valuational assessments, displayed by men, suggests a gender-specific incentive sensitization phenomenon. PMID:24282336

  6. Facial Anthropometric Evaluation of Unilateral Cleft Lip and Palate Patients: Infancy Through Adolescence.

    PubMed

    Dehghani, Mahboobe; Jahanbin, Arezoo; Omidkhoda, Maryam; Entezari, Mostafa; Shadkam, Elaheh

    2018-03-01

    Craniofacial anthropometric studies measure the differences in humans' craniofacial dimensions. The aim of this study was to determine facial anthropometric dimensions of newborn to 12-year-old girls with nonsyndromic unilateral cleft lip and palate (UCLP). In this cross-sectional analytical study, data were collected from 65 girls with UCLP, from infancy to 12 years old. Digital frontal and profile facial photographs were transferred to a computer and the desired anthropometric landmarks were traced on each image. Fifteen anthropometric parameters were measured: the facial, nasofacial, nasomental, Z, nasolabial, nasal base inclination, labial fissure inclination, nasal deviation, mentocervical, and facial convexity angles, and the ratios of nasal prominence to nasal height, middle to lower facial third, upper lip to lower lip height, columellar length to upper lip length, and incisal show to incisal width. Pearson's correlation coefficient and linear regression were used for statistical analysis. The upper lip to lower lip height ratio and the nasofacial, nasolabial, and facial convexity angles decreased with the age of the patients. In contrast, the nasomental angle and the ratios of columellar length to upper lip length, middle facial height to lower facial height, and incisal show to incisal width increased. The other parameters studied did not appear to have any significant correlation with age. In girls with UCLP, various craniofacial dimensions have different growth rates, with some parts growing more slowly than others. Some of the parameters studied were significantly correlated with age, and growth-related curves and equations were accordingly obtained and presented.
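
    Each angular parameter of this kind reduces to the angle at a vertex landmark between two neighbouring landmarks. A minimal 2-D sketch (coordinates are hypothetical, not the study's tracings):

```python
import math

def angle_deg(a, b, c):
    """Angle in degrees at vertex b, formed by rays b->a and b->c."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    cos_t = (v1[0] * v2[0] + v1[1] * v2[1]) / (math.hypot(*v1) * math.hypot(*v2))
    # clamp against floating-point drift before acos
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_t))))

print(round(angle_deg((1, 0), (0, 0), (0, 1)), 6))  # -> 90.0
```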

  7. The review and results of different methods for facial recognition

    NASA Astrophysics Data System (ADS)

    Le, Yifan

    2017-09-01

    In recent years, facial recognition has drawn much attention due to its wide potential applications. As a unique technology in biometric identification, facial recognition represents a significant improvement since it can be operated without the cooperation of the people under detection. Hence, facial recognition is being taken into defense systems, medical detection, human behavior understanding, and other areas. Several theories and methods have been established to make progress in facial recognition: (1) a novel two-stage facial landmark localization method has been proposed which achieves more accurate localization on specific databases; (2) a statistical face frontalization method has been proposed which outperforms state-of-the-art methods for face landmark localization; (3) a general facial landmark detection algorithm has been proposed to handle images with severe occlusion and images with large head poses; (4) three methods have been proposed for face alignment, including a shape-augmented regression method, a pose-indexed multi-view method, and a learning-based method that regresses local binary features. The aim of this paper is to analyze previous work on different aspects of facial recognition, focusing on concrete methods and performance under various databases. In addition, some improvement measures and suggestions for potential applications will be put forward.

  8. Identification using face regions: application and assessment in forensic scenarios.

    PubMed

    Tome, Pedro; Fierrez, Julian; Vera-Rodriguez, Ruben; Ramos, Daniel

    2013-12-10

    This paper reports an exhaustive analysis of the discriminative power of the different regions of the human face in various forensic scenarios. In practice, when forensic examiners compare two face images, they focus their attention not only on the overall similarity of the two faces; they also carry out an exhaustive morphological comparison, region by region (e.g., nose, mouth, eyebrows, etc.). In this scenario it is very important to know, on the basis of scientific methods, to what extent each facial region can help in identifying a person. This knowledge, obtained using quantitative and statistical methods on given populations, can then be used by the examiner to support or tune his observations. In order to generate such scientific knowledge useful to the expert, several methodologies are compared, such as manual and automatic facial landmark extraction, different facial region extractors, and various distances between the subject and the acquisition camera. Also, three scenarios of interest for forensics are considered, comparing mugshot and Closed-Circuit TeleVision (CCTV) face images using the MORPH and SCface databases. One of the findings is that the discriminative power of the facial regions changes depending on the acquisition distance, with some regions in some cases performing better than the full face. Crown Copyright © 2013. Published by Elsevier Ireland Ltd. All rights reserved.

  9. Finding Makhubu: A morphological forensic facial comparison.

    PubMed

    Houlton, T M R; Steyn, M

    2018-04-01

    June 16, 1976, marks the Soweto Youth Student Uprising in South Africa. A harrowing image capturing police brutality from that day shows 18-year-old Mbuyisa Makhubu carrying a dying 12-year-old Hector Peterson. The image circulated in the international press and contributed to world pressure against the apartheid government. It also raised Makhubu's profile with the national security police and forced him to flee to Botswana, then Nigeria, before he disappeared in 1978. In 1988, Victor Vinnetou illegally entered Canada and was later arrested on immigration charges in 2004. Although he was evasive about his true identity, the Canadian Border Services Agency and Makhubu's family believe Vinnetou is Makhubu, linking them by a characteristic moon-shaped birthmark on his left chest. A DNA test, however, was inconclusive. With the mystery now 40 years old, Eye Witness News requested further investigation in 2016. Using a limited series of portrait images, a forensic facial comparison (FFC) was conducted utilising South African Police Service (SAPS) protocols and Facial Identification Scientific Working Group (FISWG) guidelines. The images provided presented a substantial time lapse and generally low resolution, and were taken from irregular angles and distances, with different subject poses, orientations and environments. This necessitated the use of a morphological analysis, a primary method of FFC that develops conclusions based on subjective observations. The results were fundamentally inconclusive, but multiple similarities and valid explanations for visible differences were identified. To advance the investigation, visual evidence of the moon-shaped birthmark and further DNA analysis are required. Copyright © 2018 Elsevier B.V. All rights reserved.

  10. Correlation between presumed sinusitis-induced pain and paranasal sinus computed tomographic findings.

    PubMed

    Mudgil, Shikha P; Wise, Scott W; Hopper, Kenneth D; Kasales, Claudia J; Mauger, David; Fornadley, John A

    2002-02-01

    The correlation between facial and/or head pain in patients clinically suspected of having sinusitis and actual localized findings on sinus computed tomographic (CT) imaging is poorly understood. We prospectively evaluated the relationship of paranasal sinus pain symptoms with CT imaging. Two hundred consecutive patients referred by otolaryngologists and internists for CT of the paranasal sinuses participated by completing a questionnaire immediately before undergoing CT. Three radiologists blinded to the patients' responses scored the degree of air/fluid level, mucosal thickening, bony reaction, and mucus retention cysts using a graded scale of severity (0 to 3 points). The osteomeatal complexes and nasolacrimal ducts were also evaluated for patency. Bivariate analysis was performed to evaluate the relationship between patients' localized symptoms and CT findings in the respective sinus. One hundred sixty-three patients (82%) reported having some form of facial pain or headache. The right temple/forehead was the most frequently reported region of maximal pain. On CT imaging the maxillary sinus was the most frequently involved sinus. Bivariate analysis failed to show any relationship between patient symptoms and findings on CT. Patients with a normal CT reported a mean of 5.88 sites of facial or head pain versus 5.45 sites for patients with an abnormal CT. Patient-based reports of sinonasal pain symptoms fail to correlate with findings in the respective sinuses. CT should therefore be reserved for delineating the anatomy and degree of sinus disease before surgical intervention.

  11. Composite Artistry Meets Facial Recognition Technology: Exploring the Use of Facial Recognition Technology to Identify Composite Images

    DTIC Science & Technology

    2011-09-01

    be submitted into a facial recognition program for comparison with millions of possible matches, offering abundant opportunities to identify the...to leverage the robust number of comparative opportunities associated with facial recognition programs. This research investigates the efficacy of...combining composite forensic artistry with facial recognition technology to create a viable investigative tool to identify suspects, as well as better

  12. Comparison of the effect of labiolingual inclination and anteroposterior position of maxillary incisors on esthetic profile in three different facial patterns

    PubMed Central

    Chirivella, Praveen; Singaraju, Gowri Sankar; Mandava, Prasad; Reddy, V Karunakar; Neravati, Jeevan Kumar; George, Suja Ani

    2017-01-01

    Objective: To test the null hypothesis that changes in the labiolingual inclination and anteroposterior position of the maxillary incisors have no effect on the esthetic perception of the smiling profile in three different facial types. Materials and Methods: A smiling profile photograph of a subject with a Class I skeletal and dental pattern and normal profile was taken for each of the three facial types: dolichofacial, mesofacial, and brachyfacial. Based on each original digital image, 15 smiling profiles per facial type were created using the FACAD software by altering the labiolingual inclination and anteroposterior position of the maxillary incisors. These photographs were rated on a visual analog scale by three panels of examiners consisting of orthodontists, dentists, and nonprofessionals, with twenty members in each group. The responses were assessed by analysis of variance (ANOVA) followed by the post hoc Scheffé test. Results: Significant differences (P < 0.001) were detected when the ratings of the photographs within each facial type were compared. In the dolichofacial and mesofacial patterns, the position of the maxillary incisor must be limited to 2 mm from the goal anterior limit line (GALL). In the brachyfacial pattern, any movement of the facial axis point of the maxillary incisors away from the GALL worsens facial esthetics. The ANOVA also showed differences among the three rater groups for certain facial profiles. Conclusion: The null hypothesis was rejected. The esthetic perception of the labiolingual inclination and anteroposterior position of the maxillary incisors differs across facial types, and this may affect the formulation of treatment plans for different facial types. PMID:28197396
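
    The panel comparison above rests on a one-way ANOVA. A minimal stdlib-only sketch of the F statistic, with illustrative rating data (not the study's):

```python
def one_way_anova_f(groups):
    """F = between-group mean square / within-group mean square."""
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n_total
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand_mean) ** 2 for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n_total - k))

# VAS-style ratings from three hypothetical rater panels:
print(one_way_anova_f([[1, 2, 3], [2, 3, 4], [3, 4, 5]]))  # -> 3.0
```

    In practice a routine such as scipy.stats.f_oneway would also return the p-value.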

  13. Human Facial Shape and Size Heritability and Genetic Correlations.

    PubMed

    Cole, Joanne B; Manyama, Mange; Larson, Jacinda R; Liberton, Denise K; Ferrara, Tracey M; Riccardi, Sheri L; Li, Mao; Mio, Washington; Klein, Ophir D; Santorico, Stephanie A; Hallgrímsson, Benedikt; Spritz, Richard A

    2017-02-01

    The human face is an array of variable physical features that together make each of us unique and distinguishable. Striking familial facial similarities underscore a genetic component, but little is known of the genes that underlie facial shape differences. Numerous studies have estimated facial shape heritability using various methods. Here, we used advanced three-dimensional imaging technology and quantitative human genetics analysis to estimate narrow-sense heritability, heritability explained by common genetic variation, and pairwise genetic correlations of 38 measures of facial shape and size in normal African Bantu children from Tanzania. Specifically, we fit a linear mixed model of genetic relatedness between close and distant relatives to jointly estimate variance components that correspond to heritability explained by genome-wide common genetic variation and variance explained by uncaptured genetic variation, the sum representing total narrow-sense heritability. Our significant estimates for narrow-sense heritability of specific facial traits range from 28 to 67%, with horizontal measures being slightly more heritable than vertical or depth measures. Furthermore, for over half of facial traits, >90% of narrow-sense heritability can be explained by common genetic variation. We also find high absolute genetic correlation between most traits, indicating large overlap in underlying genetic loci. Not surprisingly, traits measured in the same physical orientation (i.e., both horizontal or both vertical) have high positive genetic correlations, whereas traits in opposite orientations have high negative correlations. The complex genetic architecture of facial shape informs our understanding of the intricate relationships among different facial features as well as overall facial development. Copyright © 2017 by the Genetics Society of America.
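
    The study fits a full linear mixed model, but the underlying idea, that phenotypic similarity should track genetic relatedness, can be illustrated with the far cruder Haseman-Elston regression (a different estimator than the one used in the paper): regress standardized phenotype cross-products on pairwise relatedness and read the slope as a heritability estimate. A toy sketch with a made-up relatedness matrix:

```python
def he_heritability(grm, y):
    """Haseman-Elston regression: slope of standardized phenotype
    cross-products z_i * z_j on pairwise genetic relatedness grm[i][j]."""
    n = len(y)
    mean = sum(y) / n
    sd = (sum((v - mean) ** 2 for v in y) / (n - 1)) ** 0.5
    z = [(v - mean) / sd for v in y]
    rel, prod = [], []
    for i in range(n):
        for j in range(i + 1, n):
            rel.append(grm[i][j])
            prod.append(z[i] * z[j])
    mr, mp = sum(rel) / len(rel), sum(prod) / len(prod)
    num = sum((a - mr) * (b - mp) for a, b in zip(rel, prod))
    den = sum((a - mr) ** 2 for a in rel)
    return num / den

# Four pairs of "identical twins" (relatedness 1 within a pair, 0 between)
# with perfectly shared phenotypes; the tiny sample makes it overshoot 1:
grm = [[1.0 if i // 2 == j // 2 else 0.0 for j in range(8)] for i in range(8)]
y = [-1.5, -1.5, -0.5, -0.5, 0.5, 0.5, 1.5, 1.5]
print(round(he_heritability(grm, y), 3))  # -> 1.167
```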

  14. Facial Expression Presentation for Real-Time Internet Communication

    NASA Astrophysics Data System (ADS)

    Dugarry, Alexandre; Berrada, Aida; Fu, Shan

    2003-01-01

    Text, voice and video images are the most common forms of media content for instant communication on the Internet. Studies have shown that facial expressions convey much richer information than text and voice during a face-to-face conversation. The currently available real-time means of communication (instant text messages, chat programs and videoconferencing), however, have major drawbacks in terms of exchanging facial expression. The first two do not involve image transmission, whilst videoconferencing requires a large bandwidth that is not always available, and the transmitted image sequence is neither smooth nor free of delay. The objective of the work presented here is to develop a technique that overcomes these limitations by extracting the facial expressions of speakers to realise real-time communication. In order to capture the facial expressions, the main characteristics of the image are emphasized. Interpolation is performed on previously detected edge points to create geometric shapes such as arcs and lines. The regional dominant colours of the pictures are also extracted, and the combined results are subsequently converted into Scalable Vector Graphics (SVG) format. The application based on the proposed technique aims at being used simultaneously with chat programs and at being able to run on any platform.
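
    The geometry-to-SVG step can be illustrated with a minimal sketch that serializes a detected edge-point chain as an SVG path (generic SVG markup; the paper's actual encoding, with arcs and dominant-colour regions, is richer):

```python
def points_to_svg(points, width=320, height=240):
    """Serialize an ordered chain of edge points as a stroked SVG path."""
    d = "M {} {}".format(*points[0]) + "".join(
        " L {} {}".format(x, y) for x, y in points[1:])
    return ('<svg xmlns="http://www.w3.org/2000/svg" '
            f'width="{width}" height="{height}">'
            f'<path d="{d}" fill="none" stroke="black"/></svg>')

svg = points_to_svg([(10, 20), (30, 40), (50, 20)])
print('"M 10 20 L 30 40 L 50 20"' in svg)  # -> True
```

    A real implementation would fit arcs or Bézier segments to the interpolated edge chains, as the abstract describes, rather than straight line segments.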

  15. Functional Alterations of Postcentral Gyrus Modulated by Angry Facial Expressions during Intraoral Tactile Stimuli in Patients with Burning Mouth Syndrome: A Functional Magnetic Resonance Imaging Study

    PubMed Central

    Yoshino, Atsuo; Okamoto, Yasumasa; Doi, Mitsuru; Okada, Go; Takamura, Masahiro; Ichikawa, Naho; Yamawaki, Shigeto

    2017-01-01

    Previous findings suggest that negative emotions could influence abnormal sensory perception in burning mouth syndrome (BMS). However, few studies have investigated the underlying neural mechanisms associated with BMS. We examined activation of brain regions in response to intraoral tactile stimuli when modulated by angry facial expressions. We performed functional magnetic resonance imaging on a group of 27 BMS patients and 21 age-matched healthy controls. Tactile stimuli were presented during different emotional contexts, which were induced via the continuous presentation of angry or neutral pictures of human faces. BMS patients exhibited higher tactile ratings and greater activation in the postcentral gyrus during the presentation of tactile stimuli involving angry faces relative to controls. Significant positive correlations between changes in brain activation elicited by angry facial images in the postcentral gyrus and changes in tactile rating scores by angry facial images were found for both groups. For BMS patients, there was a significant positive correlation between changes in tactile-related activation of the postcentral gyrus elicited by angry facial expressions and pain intensity in daily life. Findings suggest that neural responses in the postcentral gyrus are more strongly affected by angry facial expressions in BMS patients, which may reflect one possible mechanism underlying impaired somatosensory system function in this disorder. PMID:29163243

  16. Direction of Amygdala-Neocortex Interaction During Dynamic Facial Expression Processing.

    PubMed

    Sato, Wataru; Kochiyama, Takanori; Uono, Shota; Yoshikawa, Sakiko; Toichi, Motomi

    2017-03-01

    Dynamic facial expressions of emotion strongly elicit multifaceted emotional, perceptual, cognitive, and motor responses. Neuroimaging studies have revealed that certain subcortical (e.g., amygdala) and neocortical (e.g., superior temporal sulcus and inferior frontal gyrus) brain regions, and their functional interaction, are involved in processing dynamic facial expressions. However, the direction of the functional interaction between the amygdala and the neocortex remains unknown. To investigate this issue, we re-analyzed functional magnetic resonance imaging (fMRI) data from 2 studies and magnetoencephalography (MEG) data from 1 study. First, a psychophysiological interaction analysis of the fMRI data confirmed the functional interaction between the amygdala and neocortical regions. Then, dynamic causal modeling analysis was used to compare models with forward, backward, or bidirectional effective connectivity between the amygdala and neocortical networks in the fMRI and MEG data. The results consistently supported the model of effective connectivity from the amygdala to the neocortex. Furthermore, an increasing time-window analysis of the MEG data demonstrated that this model was valid from 200 ms after stimulus onset. These data suggest that emotional processing in the amygdala rapidly modulates neocortical processing, such as perception, recognition, and motor mimicry, when observing dynamic facial expressions of emotion. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  17. Principal component analysis of three-dimensional face shape: Identifying shape features that change with age.

    PubMed

    Kurosumi, M; Mizukoshi, K

    2018-05-01

    The types of shape feature that constitute a face have not been comprehensively established, and most previous studies of age-related changes in facial shape have focused on individual characteristics such as wrinkles and sagging skin. In this study, we quantitatively measured differences in face shape between individuals and investigated how shape features change with age. We analyzed the faces of 280 Japanese women aged 20-69 years in three dimensions and used principal component analysis to establish the shape features that characterize individual differences. We also evaluated the relationship between each feature and age, clarifying the shape features characteristic of different age groups. The changes in facial shape in middle age were a decreased volume of the upper face and an increased volume of the whole cheeks and the area around the chin. The changes in older people were an increased volume of the lower cheeks and the area around the chin, sagging skin, and jaw distortion. Principal component analysis was effective for identifying facial shape features that represent individual and age-related differences. This method allowed straightforward measurement, such as of the increase or decrease in cheek volume caused by soft tissue changes or of skeletal changes to the forehead or jaw, simply by acquiring three-dimensional facial images. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
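
    The PCA itself can be sketched with stdlib-only power iteration for the leading shape component (a real analysis would eigendecompose the full covariance of stacked 3-D landmark coordinates, e.g. with numpy.linalg.eigh, to obtain all components):

```python
def first_principal_component(data, iters=200):
    """Leading PCA direction of row-vector samples, via power iteration
    on the sample covariance matrix (stdlib only)."""
    n, d = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(d)]
    x = [[row[j] - means[j] for j in range(d)] for row in data]
    cov = [[sum(x[i][a] * x[i][b] for i in range(n)) / (n - 1)
            for b in range(d)] for a in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(cov[a][b] * v[b] for b in range(d)) for a in range(d)]
        norm = sum(c * c for c in w) ** 0.5
        v = [c / norm for c in w]
    return v

# Toy "face measurements" that vary only along the first axis:
pc1 = first_principal_component([[0.0, 1.0], [2.0, 1.0], [4.0, 1.0], [6.0, 1.0]])
print([round(abs(c), 3) for c in pc1])  # -> [1.0, 0.0]
```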

  18. Post-traumatic Unilateral Avulsion of the Abducens Nerve with Damage to Cranial Nerves VII and VIII: Case Report.

    PubMed

    Yamasaki, Fumiyuki; Akiyama, Yuji; Tsumura, Ryu; Kolakshyapati, Manish; Adhikari, Rupendra Bahadur; Takayasu, Takeshi; Nosaka, Ryo; Kurisu, Kaoru

    2016-07-01

    Traumatic injuries of the abducens nerve as a consequence of facial and/or head trauma occur with or without an associated cervical or skull base fracture. This is the first report of unilateral avulsion of the abducens nerve, in a 29-year-old man with severe right facial trauma. In addition, he exhibited mild left facial palsy and moderate left hearing disturbance. Magnetic resonance imaging (MRI) using fast imaging employing steady-state acquisition (FIESTA) revealed avulsion of the left sixth cranial nerve. We recommend thin-slice MR examination in patients with abducens palsy after severe facial and/or head trauma.

  19. Does skull shape mediate the relationship between objective features and subjective impressions about the face?

    PubMed

    Marečková, Klára; Chakravarty, M Mallar; Huang, Mei; Lawrence, Claire; Leonard, Gabriel; Perron, Michel; Pike, Bruce G; Richer, Louis; Veillette, Suzanne; Pausova, Zdenka; Paus, Tomáš

    2013-10-01

    In our previous work, we described facial features associated with a successful recognition of the sex of the face (Marečková et al., 2011). These features were based on landmarks placed on the surface of faces reconstructed from magnetic resonance (MR) images; their position was therefore influenced by both soft tissue (fat and muscle) and bone structure of the skull. Here, we ask whether bone structure has dissociable influences on observers' identification of the sex of the face. To answer this question, we used a novel method of studying skull morphology using MR images and explored the relationship between skull features, facial features, and sex recognition in a large sample of adolescents (n=876; including 475 adolescents from our original report). To determine whether skull features mediate the relationship between facial features and identification accuracy, we performed mediation analysis using bootstrapping. In males, skull features mediated fully the relationship between facial features and sex judgments. In females, the skull mediated this relationship only after adjusting facial features for the amount of body fat (estimated with bioimpedance). While body fat had a very slight positive influence on correct sex judgments about male faces, there was a robust negative influence of body fat on the correct sex judgments about female faces. Overall, these results suggest that craniofacial bone structure is essential for correct sex judgments about a male face. In females, body fat influences negatively the accuracy of sex judgments, and craniofacial bone structure alone cannot explain the relationship between facial features and identification of a face as female. Copyright © 2013 Elsevier Inc. All rights reserved.

  20. Influence of using a single facial vein as outflow in full-face transplantation: A three-dimensional computed tomographic study.

    PubMed

    Rodriguez-Lorenzo, Andres; Audolfsson, Thorir; Wong, Corrine; Cheng, Angela; Arbique, Gary; Nowinski, Daniel; Rozen, Shai

    2015-10-01

    The aim of this study was to evaluate the contribution of a single unilateral facial vein to the venous outflow of a total-face allograft using three-dimensional computed tomographic imaging techniques, to further elucidate the mechanisms of venous complications following total-face transplantation. Full-face soft-tissue flaps were harvested from fresh adult human cadavers. In every specimen, a single facial vein was identified and injected distal to the submandibular gland with a radiopaque contrast agent (barium sulfate/gelatin mixture). Following the vascular injections, three-dimensional computed tomographic venographies of the faces were performed. Images were viewed using TeraRecon software (TeraRecon, Inc., San Mateo, CA, USA), allowing analysis of the venous anatomy and perfusion in different facial subunits by observing radiopaque venous filling patterns. Three-dimensional computed tomographic venographies demonstrated a venous network with different degrees of perfusion in subunits of the face in relation to the facial vein injection side: 100% of ipsilateral and contralateral forehead units, 100% of ipsilateral and 75% of contralateral periorbital units, 100% of ipsilateral and 25% of contralateral cheek units, 100% of ipsilateral and 75% of contralateral nose units, 100% of ipsilateral and 75% of contralateral upper lip units, 100% of ipsilateral and 25% of contralateral lower lip units, and 50% of ipsilateral and 25% of contralateral chin units. Venographies of the full-face grafts revealed better perfusion in the hemiface ipsilateral to the injected facial vein than in the contralateral hemiface. Reduced perfusion was observed mostly in the contralateral cheek unit and the contralateral lower face, including the lower lip and chin units. Copyright © 2015 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.

  1. Development of a quantitative assessment method of pigmentary skin disease using ultraviolet optical imaging.

    PubMed

    Lee, Onseok; Park, Sunup; Kim, Jaeyoung; Oh, Chilhwan

    2017-11-01

    The visual scoring method has been used as a subjective evaluation of pigmentary skin disorders. The severity of pigmentary skin disease, especially melasma, is evaluated using a visual scoring method, the MASI (melasma area and severity index). This study differentiates between epidermal and dermal pigmented disease and was undertaken to determine methods to quantitatively measure the severity of pigmentary skin disorders under ultraviolet illumination. The optical imaging system consists of illumination (white LED, UV-A lamp) and image acquisition (DSLR camera, air-cooled CMOS CCD camera). Each camera is equipped with a polarizing filter to remove glare. To analyze images under visible and UV light, images of melasma patients are divided into frontal, cheek, and chin regions. Each image must undergo image processing; to reduce the curvature error in facial contours, a gradient mask is used. The new method of segmenting frontal and lateral facial images is more objective for face-area measurement than the MASI score. Image analysis of darkness and homogeneity is adequate to quantify the conventional MASI score. Under visible light, active lesion margins appear in both epidermal and dermal melanin, whereas under UV light melanin is found in the epidermis. This study objectively analyzes the severity of melasma and attempts to develop new methods of image analysis with ultraviolet optical imaging equipment. Based on the results of this study, our optical imaging system could be used as a valuable tool to assess the severity of pigmentary skin disease. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
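    Lesion darkness and homogeneity of the kind mentioned above can be illustrated with a toy scoring function. The metric definitions below (mean intensity and intensity spread over a masked region) are illustrative assumptions, not the authors' exact formulas:

```python
import numpy as np

def lesion_scores(gray, mask):
    """Darkness and homogeneity of a lesion region.

    gray: 2D array of intensities in [0, 1]; mask: boolean lesion mask.
    Darkness = 1 - mean intensity; homogeneity = 1 / (1 + intensity std).
    Both definitions are illustrative choices, not the paper's formulas.
    """
    vals = gray[mask]
    darkness = 1.0 - vals.mean()
    homogeneity = 1.0 / (1.0 + vals.std())
    return darkness, homogeneity

# a uniform dark patch scores high darkness and maximal homogeneity
patch = np.full((10, 10), 0.2)
d, h = lesion_scores(patch, np.ones((10, 10), dtype=bool))
```

    On real images the mask would come from the segmentation step, and the scores could be compared across visible-light and UV acquisitions.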

  2. NATIONAL PREPAREDNESS: Technologies to Secure Federal Buildings

    DTIC Science & Technology

    2002-04-25

    Medium, some resistance based on sensitivity of eye Facial recognition Facial features are captured and compared Dependent on lighting, positioning...two primary types of facial recognition technology used to create templates: 1. Local feature analysis—Dozens of images from regions of the face are...an adjacent feature. Attachment I—Access Control Technologies: Biometrics Facial Recognition How the technology works

  3. What does magnetic resonance imaging add to the prenatal ultrasound diagnosis of facial clefts?

    PubMed

    Mailáth-Pokorny, M; Worda, C; Krampl-Bettelheim, E; Watzinger, F; Brugger, P C; Prayer, D

    2010-10-01

    Ultrasound is the modality of choice for prenatal detection of cleft lip and palate. Because its accuracy in detecting facial clefts, especially isolated clefts of the secondary palate, can be limited, magnetic resonance imaging (MRI) is used as an additional method for assessing the fetus. The aim of this study was to investigate the role of fetal MRI in the prenatal diagnosis of facial clefts. Thirty-four pregnant women with a mean gestational age of 26 (range, 19-34) weeks underwent in utero MRI, after ultrasound examination had identified either a facial cleft (n = 29) or another suspected malformation (micrognathia (n = 1), cardiac defect (n = 1), brain anomaly (n = 2) or diaphragmatic hernia (n = 1)). The facial cleft was classified postnatally and the diagnoses were compared with the previous ultrasound findings. There were 11 (32.4%) cases with cleft of the primary palate alone, 20 (58.8%) clefts of the primary and secondary palate and three (8.8%) isolated clefts of the secondary palate. In all cases the primary and secondary palate were visualized successfully with MRI. Ultrasound imaging could not detect five (14.7%) facial clefts and misclassified 15 (44.1%) facial clefts. The MRI classification correlated with the postnatal/postmortem diagnosis. In our hands MRI allows detailed prenatal evaluation of the primary and secondary palate. By demonstrating involvement of the palate, MRI provides better detection and classification of facial clefts than does ultrasound alone. Copyright © 2010 ISUOG. Published by John Wiley & Sons, Ltd.

  4. Design and fabrication of facial prostheses for cancer patient applying computer aided method and manufacturing (CADCAM)

    NASA Astrophysics Data System (ADS)

    Din, Tengku Noor Daimah Tengku; Jamayet, Nafij; Rajion, Zainul Ahmad; Luddin, Norhayati; Abdullah, Johari Yap; Abdullah, Abdul Manaf; Yahya, Suzana

    2016-12-01

    Facial defects are either congenital or caused by trauma or cancer, and most affect the person's appearance. Emotional pressure and low self-esteem are problems commonly associated with facial defects. To overcome these problems, a silicone prosthesis is designed to cover the defect. This study describes the techniques for designing and fabricating a facial prosthesis using computer-aided design and manufacturing (CADCAM). The steps of fabricating the facial prosthesis were based on a patient case. The patient was diagnosed with Gorlin-Goltz syndrome and came to Hospital Universiti Sains Malaysia (HUSM) for a prosthesis. The 3D image of the patient was reconstructed from CT data using MIMICS software. Based on the 3D image, the intercanthal and zygomatic measurements of the patient were compared with available data in the database to find a suitable nose shape. A normal nose shape for the patient was retrieved from the nasal digital library. A mirror-imaging technique was used to mirror the facial part. The final design of the facial prosthesis, including eye, nose, and cheek, was superimposed to preview the result virtually. After the final design was confirmed, the mould was designed. The mould of the nasal prosthesis was printed using an Objet 3D printer, and silicone casting was done using the 3D-printed mould. The final prosthesis produced by the computer-aided method was acceptable for facial rehabilitation, providing a better quality of life.

  5. Accuracy of computer-assisted navigation: significant augmentation by facial recognition software.

    PubMed

    Glicksman, Jordan T; Reger, Christine; Parasher, Arjun K; Kennedy, David W

    2017-09-01

    Over the past 20 years, image guidance navigation has been used with increasing frequency as an adjunct during sinus and skull base surgery. These devices commonly utilize surface registration, where varying pressure of the registration probe and loss of contact with the face during the skin tracing process can lead to registration inaccuracies, and the number of registration points incorporated is necessarily limited. The aim of this study was to evaluate the use of novel facial recognition software for image guidance registration. Consecutive adults undergoing endoscopic sinus surgery (ESS) were prospectively studied. Patients underwent image guidance registration via both conventional surface registration and facial recognition software. The accuracy of both registration processes was measured at the head of the middle turbinate (MTH), middle turbinate axilla (MTA), anterior wall of sphenoid sinus (SS), and nasal tip (NT). Forty-five patients were included in this investigation. Facial recognition was accurate to within a mean of 0.47 mm at the MTH, 0.33 mm at the MTA, 0.39 mm at the SS, and 0.36 mm at the NT. Facial recognition was more accurate than surface registration at the MTH by an average of 0.43 mm (p = 0.002), at the MTA by an average of 0.44 mm (p < 0.001), and at the SS by an average of 0.40 mm (p < 0.001). The integration of facial recognition software did not adversely affect registration time. In this prospective study, automated facial recognition software significantly improved the accuracy of image guidance registration when compared to conventional surface registration. © 2017 ARS-AAOA, LLC.
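    The accuracy comparison above is a paired design: both registration methods are measured in the same 45 patients, so a paired t-test on the per-patient differences is the natural analysis. A sketch with simulated errors (the numbers are made up to mirror the reported means, not the study's data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 45                                    # patients, as in the study
# hypothetical per-patient target-registration errors in millimetres
surface = rng.normal(0.90, 0.25, n)       # conventional surface registration
facial = rng.normal(0.47, 0.15, n)        # facial-recognition registration

diff = surface - facial                   # paired difference per patient
t_stat, p_value = stats.ttest_rel(surface, facial)
print(f"mean improvement: {diff.mean():.2f} mm (p = {p_value:.2g})")
```

    Reporting the mean paired difference alongside the p-value matches the way the abstract states its results (e.g., "more accurate by an average of 0.43 mm, p = 0.002").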

  6. The possibility of application of spiral brain computed tomography to traumatic brain injury.

    PubMed

    Lim, Daesung; Lee, Soo Hoon; Kim, Dong Hoon; Choi, Dae Seub; Hong, Hoon Pyo; Kang, Changwoo; Jeong, Jin Hee; Kim, Seong Chun; Kang, Tae-Sin

    2014-09-01

    Spiral computed tomography (CT), with the advantages of a low radiation dose, a shorter scan time, and multidimensional reconstruction, is accepted as an essential diagnostic method for evaluating the degree of injury in severe trauma patients and establishing therapeutic plans. However, conventional sequential CT is preferred over spiral CT for the evaluation of traumatic brain injury (TBI) because of image noise and artifact. We aimed to compare the diagnostic power of spiral facial CT for TBI with that of conventional sequential brain CT. We retrospectively evaluated the images of 315 trauma patients who underwent both brain CT and facial CT simultaneously. Hemorrhagic traumatic brain injuries such as epidural hemorrhage, subdural hemorrhage, subarachnoid hemorrhage, and contusional hemorrhage were evaluated in both image sets. Statistical analysis used Cohen's κ to measure agreement between the 2 imaging modalities, along with the sensitivity, specificity, positive predictive value, and negative predictive value of spiral facial CT relative to conventional sequential brain CT. Almost perfect agreement was noted for hemorrhagic traumatic brain injuries between spiral facial CT and conventional sequential brain CT (Cohen's κ coefficient, 0.912). Relative to conventional sequential brain CT, the sensitivity, specificity, positive predictive value, and negative predictive value of spiral facial CT were 92.2%, 98.1%, 95.9%, and 96.3%, respectively. In TBI, the diagnostic power of spiral facial CT was equal to that of conventional sequential brain CT. Therefore, expanded spiral facial CT covering the whole frontal lobe could be applied to evaluate TBI in the future. Copyright © 2014 Elsevier Inc. All rights reserved.
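    Cohen's κ and the predictive values reported above all derive from a single 2×2 agreement table. A small sketch of the arithmetic (the helper name and the counts are illustrative, not the study's data):

```python
def agreement_stats(tp, fp, fn, tn):
    """Cohen's kappa, sensitivity, specificity, PPV, and NPV from a 2x2
    agreement table (reference = conventional brain CT, test = spiral
    facial CT). The counts passed in below are illustrative only."""
    n = tp + fp + fn + tn
    po = (tp + tn) / n                       # observed agreement
    # chance agreement from the marginals: P(test+)P(ref+) + P(test-)P(ref-)
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
    kappa = (po - pe) / (1 - pe)
    return {
        "kappa": kappa,
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

stats_2x2 = agreement_stats(tp=40, fp=5, fn=10, tn=45)
```

    κ corrects the observed agreement for the agreement expected by chance, which is why a high raw agreement can still yield a moderate κ when one finding dominates.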

  7. Estimation of 2D to 3D dimensions and proportionality indices for facial examination.

    PubMed

    Martos, Rubén; Valsecchi, Andrea; Ibáñez, Oscar; Alemán, Inmaculada

    2018-06-01

    Photo-anthropometry is a metric-based facial image comparison technique in which measurements of the face are taken from an image using predetermined facial landmarks. In particular, dimensions and proportionality indices (DPIs) are compared with DPIs from another facial image. Several studies have concluded that photo-anthropometric facial comparison, as currently practiced, is unsuitable for elimination purposes. The major limitation is the need for images acquired under very restrictive, controlled conditions. To overcome this issue, we propose a novel methodology to estimate 3D DPIs from 2D ones. It uses computer graphics techniques to simulate thousands of facial photographs under known camera conditions and regression to derive the mathematical relationship between 2D and 3D DPIs automatically. Additionally, we present a methodology that uses the estimated 3D DPIs to reduce the number of potential matches for a given unknown facial photograph within a set of known candidates. The error in the estimation of the 3D DPIs can be as large as 35%, but both the first and third quartiles are consistently within the ±5% range. The filtering methodology has proved useful for narrowing down the list of possible candidates for a given photograph: it removes on average (validated using cross-validation) 57% or 24% of the negative cases, depending on the number of DPIs available. Limitations of this work, together with open research lines, are discussed in the Discussion section. Copyright © 2018 Elsevier B.V. All rights reserved.
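    The core idea, simulating many photographs under known camera conditions and regressing the 3D quantity on the 2D measurement, can be sketched with synthetic data. The cosine projection model and the quadratic-in-yaw regressor below are simplifying assumptions, not the paper's renderer or regression method:

```python
import numpy as np

# Synthetic stand-in: 2D DPIs are generated from known 3D DPIs under a
# random camera yaw, then least squares recovers the 3D value from the
# 2D measurement and the known camera condition.
rng = np.random.default_rng(2)
dpi_3d = rng.uniform(0.8, 1.2, 2000)          # "true" 3D proportionality index
yaw = rng.uniform(-0.3, 0.3, 2000)            # camera yaw (radians), known per image
dpi_2d = dpi_3d * np.cos(yaw) + rng.normal(0.0, 0.01, 2000)  # foreshortening + noise

# regression model: dpi_3d ≈ a * dpi_2d + b * yaw**2 + c
A = np.column_stack([dpi_2d, yaw ** 2, np.ones_like(dpi_2d)])
coef, *_ = np.linalg.lstsq(A, dpi_3d, rcond=None)
rel_err = np.abs(A @ coef - dpi_3d) / dpi_3d
print(f"median relative error: {np.median(rel_err):.2%}")
```

    The relative-error distribution plays the same role as the quartile figures quoted in the abstract: most estimates land close to the true 3D DPI even though the worst cases can be far off.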

  8. Effects of electroacupuncture therapy for Bell's palsy from acute stage: study protocol for a randomized controlled trial.

    PubMed

    Liu, Zhi-dan; He, Jiang-bo; Guo, Si-si; Yang, Zhi-xin; Shen, Jun; Li, Xiao-yan; Liang, Wei; Shen, Wei-dong

    2015-08-25

    Although many patients with facial paralysis have benefited or completely recovered after acupuncture or electroacupuncture therapy, it is still difficult to provide intuitive evidence beyond evaluation with neurological function scales and limited electrophysiologic data. Hence, the aim of this study is to use more intuitive and reliable detection techniques, such as facial nerve magnetic resonance imaging (MRI), nerve electromyography, and F waves, to observe changes in the anatomic morphology of the facial nerve and in nerve conduction before and after acupuncture or electroacupuncture, and to verify their effectiveness in combination with neurological function scales. A total of 132 patients with Bell's palsy (grades III and IV in the House-Brackmann [HB] Facial Nerve Grading System) will be randomly divided into electroacupuncture, manual acupuncture, non-acupuncture, and medicine control groups. All patients except those in the medicine control group will be given electroacupuncture treatment after the acute period. The acupuncture or electroacupuncture treatments will be performed every 2 days until the patients recover or withdraw from the study. The primary outcome is analysis based on facial nerve function scales (the HB scale and the Sunnybrook facial grading system), and the secondary outcome is analysis based on MRI, nerve electromyography, and F-wave detection. All patients will undergo MRI within 3 days of Bell's palsy onset to assess the signal intensity and facial nerve swelling of the unaffected and affected sides. They will also undergo facial nerve electromyography and F-wave detection within 1 week of onset. Nerve function will be evaluated using the HB scale and Sunnybrook facial grading system at each hospital visit for treatment until the end of the study. MRI, nerve electromyography, and F-wave detection will be repeated 1 month after the onset of Bell's palsy.
Chinese Clinical Trials Register identifier: ChiCTR-IPR-14005730. Registered on 23 December 2014.

  9. A Neuromonitoring Approach to Facial Nerve Preservation During Image-guided Robotic Cochlear Implantation.

    PubMed

    Ansó, Juan; Dür, Cilgia; Gavaghan, Kate; Rohrbach, Helene; Gerber, Nicolas; Williamson, Tom; Calvo, Enric M; Balmer, Thomas Wyss; Precht, Christina; Ferrario, Damien; Dettmer, Matthias S; Rösler, Kai M; Caversaccio, Marco D; Bell, Brett; Weber, Stefan

    2016-01-01

    A multielectrode probe in combination with an optimized stimulation protocol could provide sufficient sensitivity and specificity to act as an effective safety mechanism for preservation of the facial nerve in case of an unsafe drill distance during image-guided cochlear implantation. Minimally invasive cochlear implantation is enabled by image-guided, robot-assisted drilling of an access tunnel to the middle ear cavity. The approach requires the drill to pass at distances below 1 mm from the facial nerve, so safety mechanisms for protecting this critical structure are required. Neuromonitoring is currently used to determine facial nerve proximity in mastoidectomy but lacks the sensitivity and specificity necessary to distinguish the close distance ranges encountered in the minimally invasive approach, possibly because of current shunting from uninsulated stimulating drilling tools in the drill tunnel and because of non-optimized stimulation parameters. To this end, we propose an advanced neuromonitoring approach using varying levels of stimulation parameters together with an integrated bipolar and monopolar stimulating probe. An in vivo study (sheep model) was conducted in which measurements at specifically planned and navigated lateral distances from the facial nerve were performed to determine whether specific sets of stimulation parameters, in combination with the proposed neuromonitoring system, could reliably detect an imminent collision with the facial nerve. For accurate positioning of the neuromonitoring probe, a dedicated robotic system for image-guided cochlear implantation was used, and drilling accuracy was assessed on postoperative micro-computed tomographic images. From 29 trajectories analyzed in five subjects, a correlation between stimulus threshold and drill-to-facial-nerve distance was found in trajectories colliding with the facial nerve (distance <0.1 mm). 
The shortest pulse duration that provided the highest linear correlation between stimulation intensity and drill-to-facial-nerve distance was 250 μs. Only at low stimulus intensity values (≤0.3 mA) and with the bipolar configurations of the probe did the neuromonitoring system achieve sufficient lateral specificity (>95%) at distances to the facial nerve below 0.5 mm. However, reducing the stimulus threshold to 0.3 mA or lower decreased the facial nerve distance detection range to below 0.1 mm (>95% sensitivity). Subsequent histopathologic follow-up of three representative cases in which the neuromonitoring system could reliably detect a collision with the facial nerve (distance <0.1 mm) revealed either mild or no damage to the nerve fascicles. Our findings suggest that although no general correlation between facial nerve distance and stimulation threshold existed, possibly because of variance in patient-specific anatomy, the correlations at very close distances to the facial nerve and the high levels of specificity would enable a binary warning system to be developed using the proposed probe at low stimulation currents.

  10. Traumatic facial nerve neuroma with facial palsy presenting in infancy.

    PubMed

    Clark, James H; Burger, Peter C; Boahene, Derek Kofi; Niparko, John K

    2010-07-01

    To describe the management of a traumatic neuroma of the facial nerve in a child and to review the literature. Sixteen-month-old male subject. Radiological imaging and surgery. Facial nerve function. The patient presented at 16 months with a right facial palsy and was found to have a right facial nerve traumatic neuroma. A transmastoid, middle fossa resection of the right facial nerve lesion was undertaken with a successful facial nerve-to-hypoglossal nerve anastomosis. The facial palsy improved postoperatively. A traumatic neuroma should be considered in an infant who presents with facial palsy, even in the absence of an obvious history of trauma. The treatment of such a lesion is complex in any age group, but especially in young children. Symptoms, age, lesion size, growth rate, and facial nerve function determine the appropriate management.

  11. The Relative Importance of Sexual Dimorphism, Fluctuating Asymmetry, and Color Cues to Health during Evaluation of Potential Partners' Facial Photographs : A Conjoint Analysis Study.

    PubMed

    Mogilski, Justin K; Welling, Lisa L M

    2017-03-01

    Sexual dimorphism, symmetry, and coloration in human faces putatively signal information relevant to mate selection and reproduction. Although the independent contributions of these characteristics to judgments of attractiveness are well established, relatively few studies have examined whether individuals prioritize certain features over others. Here, participants (N = 542, 315 female) ranked six sets of facial photographs (3 male, 3 female) by their preference for starting long- and short-term romantic relationships with each person depicted. Composite-based digital transformations were applied such that each image set contained 11 different versions of the same identity. Each photograph in each image set had a unique combination of three traits: sexual dimorphism, symmetry, and color cues to health. Using conjoint analysis to evaluate participants' ranking decisions, we found that participants prioritized cues to sexual dimorphism over symmetry and color cues to health. Sexual dimorphism was also found to be relatively more important for the evaluation of male faces than for female faces, whereas symmetry and color cues to health were relatively more important for the evaluation of female faces than for male faces. Symmetry and color cues to health were more important for long-term versus short-term evaluations for female faces, but not male faces. Analyses of utility estimates reveal that our data are consistent with research showing that preferences for facial masculinity and femininity in male and female faces vary according to relationship context. These findings are interpreted in the context of previous work examining the influence of these facial attributes on romantic partner perception.

  12. Dynamic texture recognition using local binary patterns with an application to facial expressions.

    PubMed

    Zhao, Guoying; Pietikäinen, Matti

    2007-06-01

    Dynamic texture (DT) is an extension of texture to the temporal domain. Description and recognition of DTs have attracted growing attention. In this paper, a novel approach for recognizing DTs is proposed and its simplifications and extensions to facial image analysis are also considered. First, the textures are modeled with volume local binary patterns (VLBP), which are an extension of the LBP operator widely used in ordinary texture analysis, combining motion and appearance. To make the approach computationally simple and easy to extend, only the co-occurrences of the local binary patterns on three orthogonal planes (LBP-TOP) are then considered. A block-based method is also proposed to deal with specific dynamic events such as facial expressions in which local information and its spatial locations should also be taken into account. In experiments with two DT databases, DynTex and Massachusetts Institute of Technology (MIT), both the VLBP and LBP-TOP clearly outperformed the earlier approaches. The proposed block-based method was evaluated with the Cohn-Kanade facial expression database with excellent results. The advantages of our approach include local processing, robustness to monotonic gray-scale changes, and simple computation.
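    The ordinary spatial LBP operator that VLBP and LBP-TOP build on can be written in a few lines. This is only the basic 8-neighbour spatial LBP; the paper's contribution, extending it to three orthogonal space-time planes, is not shown here:

```python
import numpy as np

def lbp_8_1(img):
    """Basic 8-neighbour local binary pattern on a 2D grayscale array.

    Each interior pixel gets an 8-bit code: one bit per neighbour whose
    value is >= the centre pixel, making the code invariant to monotonic
    gray-scale changes.
    """
    c = img[1:-1, 1:-1]                       # centre pixels
    codes = np.zeros_like(c, dtype=np.uint8)
    # neighbour offsets, clockwise from top-left, one bit weight each
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offs):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        codes |= (nb >= c).astype(np.uint8) << bit
    return codes

# the texture descriptor is the histogram of codes over a region
img = np.arange(25, dtype=float).reshape(5, 5)
codes = lbp_8_1(img)
hist = np.bincount(codes.ravel(), minlength=256)
```

    The block-based method in the abstract concatenates such histograms over a spatial grid so that local patterns keep their locations.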

  13. The mysterious noh mask: contribution of multiple facial parts to the recognition of emotional expressions.

    PubMed

    Miyata, Hiromitsu; Nishimura, Ritsuko; Okanoya, Kazuo; Kawai, Nobuyuki

    2012-01-01

    A Noh mask worn by expert actors when performing on a Japanese traditional Noh drama is suggested to convey countless different facial expressions according to different angles of head/body orientation. The present study addressed the question of how different facial parts of a Noh mask, including the eyebrows, the eyes, and the mouth, may contribute to different emotional expressions. Both experimental situations of active creation and passive recognition of emotional facial expressions were introduced. In Experiment 1, participants either created happy or sad facial expressions, or imitated a face that looked up or down, by actively changing each facial part of a Noh mask image presented on a computer screen. For an upward tilted mask, the eyebrows and the mouth shared common features with sad expressions, whereas the eyes with happy expressions. This contingency tended to be reversed for a downward tilted mask. Experiment 2 further examined which facial parts of a Noh mask are crucial in determining emotional expressions. Participants were exposed to the synthesized Noh mask images with different facial parts expressing different emotions. Results clearly revealed that participants primarily used the shape of the mouth in judging emotions. The facial images having the mouth of an upward/downward tilted Noh mask strongly tended to be evaluated as sad/happy, respectively. The results suggest that Noh masks express chimeric emotional patterns, with different facial parts conveying different emotions. This appears consistent with the principles of Noh, which highly appreciate subtle and composite emotional expressions, as well as with the mysterious facial expressions observed in Western art. It was further demonstrated that the mouth serves as a diagnostic feature in characterizing the emotional expressions. 
This indicates the superiority of biologically-driven factors over the traditionally formulated performing styles when evaluating the emotions of the Noh masks.

  14. Can We Distinguish Emotions from Faces? Investigation of Implicit and Explicit Processes of Peak Facial Expressions.

    PubMed

    Xiao, Ruiqi; Li, Xianchun; Li, Lin; Wang, Yanmei

    2016-01-01

    Most previous studies on facial expression recognition have focused on moderate emotions; to date, few studies have investigated the explicit and implicit processing of peak emotions. In the current study, we used images of the transient, peak-intensity expressions of athletes at the winning or losing point of a competition as materials, and investigated the diagnosability of peak facial expressions at both the implicit and explicit levels. In Experiment 1, participants were instructed to evaluate isolated faces, isolated bodies, and face-body compounds, and eye movements were recorded. The results revealed that the isolated-body and face-body congruent images were better recognized than the isolated-face and face-body incongruent images, indicating that the emotional information conveyed by facial cues was ambiguous and that body cues influenced facial emotion recognition. Furthermore, the eye movement records showed that participants displayed distinct gaze patterns for the congruent and incongruent compounds. In Experiment 2A, a subliminal affective priming task was used, with faces as primes and bodies as targets, to investigate unconscious emotion perception of peak facial expressions. The results showed that a winning face prime facilitated reactions to a winning body target, whereas a losing face prime inhibited reactions to a winning body target, suggesting that peak facial expressions could be perceived at the implicit level. In general, the results indicate that peak facial expressions cannot be consciously recognized but can be perceived at the unconscious level. In Experiment 2B, a revised subliminal affective priming task and a strict awareness test were used to examine the validity of the unconscious perception of peak facial expressions found in Experiment 2A. 
Results of Experiment 2B showed that reaction times to both winning and losing body targets were influenced by the invisible peak facial expression primes, which indicated unconscious perception of peak facial expressions.

  15. Can We Distinguish Emotions from Faces? Investigation of Implicit and Explicit Processes of Peak Facial Expressions

    PubMed Central

    Xiao, Ruiqi; Li, Xianchun; Li, Lin; Wang, Yanmei

    2016-01-01

    Most previous studies on facial expression recognition have focused on moderate emotions; to date, few studies have investigated the explicit and implicit processing of peak emotions. In the current study, we used images of the transient, peak-intensity expressions of athletes at the winning or losing point of a competition as materials, and investigated the diagnosability of peak facial expressions at both the implicit and explicit levels. In Experiment 1, participants were instructed to evaluate isolated faces, isolated bodies, and face-body compounds, and eye movements were recorded. The results revealed that the isolated-body and face-body congruent images were better recognized than the isolated-face and face-body incongruent images, indicating that the emotional information conveyed by facial cues was ambiguous and that body cues influenced facial emotion recognition. Furthermore, the eye movement records showed that participants displayed distinct gaze patterns for the congruent and incongruent compounds. In Experiment 2A, a subliminal affective priming task was used, with faces as primes and bodies as targets, to investigate unconscious emotion perception of peak facial expressions. The results showed that a winning face prime facilitated reactions to a winning body target, whereas a losing face prime inhibited reactions to a winning body target, suggesting that peak facial expressions could be perceived at the implicit level. In general, the results indicate that peak facial expressions cannot be consciously recognized but can be perceived at the unconscious level. In Experiment 2B, a revised subliminal affective priming task and a strict awareness test were used to examine the validity of the unconscious perception of peak facial expressions found in Experiment 2A. 
Results of Experiment 2B showed that reaction time to both winning body targets and losing body targets was influenced by the invisibly peak facial expression primes, which indicated the unconscious perception of peak facial expressions. PMID:27630604

  16. The Mysterious Noh Mask: Contribution of Multiple Facial Parts to the Recognition of Emotional Expressions

    PubMed Central

    Miyata, Hiromitsu; Nishimura, Ritsuko; Okanoya, Kazuo; Kawai, Nobuyuki

    2012-01-01

    Background A Noh mask worn by expert actors performing in traditional Japanese Noh drama is said to convey countless different facial expressions according to different angles of head/body orientation. The present study addressed the question of how different facial parts of a Noh mask, including the eyebrows, the eyes, and the mouth, may contribute to different emotional expressions. Both experimental situations of active creation and passive recognition of emotional facial expressions were introduced. Methodology/Principal Findings In Experiment 1, participants either created happy or sad facial expressions, or imitated a face that looked up or down, by actively changing each facial part of a Noh mask image presented on a computer screen. For an upward-tilted mask, the eyebrows and the mouth shared common features with sad expressions, whereas the eyes shared features with happy expressions. This contingency tended to be reversed for a downward-tilted mask. Experiment 2 further examined which facial parts of a Noh mask are crucial in determining emotional expressions. Participants were exposed to synthesized Noh mask images whose facial parts expressed different emotions. The results clearly revealed that participants primarily used the shape of the mouth in judging emotions. Facial images having the mouth of an upward- or downward-tilted Noh mask strongly tended to be evaluated as sad or happy, respectively. Conclusions/Significance The results suggest that Noh masks express chimeric emotional patterns, with different facial parts conveying different emotions. This appears consistent with the principles of Noh, which highly value subtle and composite emotional expressions, as well as with the mysterious facial expressions observed in Western art. It was further demonstrated that the mouth serves as a diagnostic feature in characterizing the emotional expressions. 
This indicates the superiority of biologically-driven factors over the traditionally formulated performing styles when evaluating the emotions of the Noh masks. PMID:23185595

  17. Creation of three-dimensional craniofacial standards from CBCT images

    NASA Astrophysics Data System (ADS)

    Subramanyan, Krishna; Palomo, Martin; Hans, Mark

    2006-03-01

    Low-dose three-dimensional Cone Beam Computed Tomography (CBCT) is becoming increasingly popular in the clinical practice of dental medicine. Two-dimensional Bolton Standards of dentofacial development are routinely used to identify deviations from normal craniofacial anatomy. With the advent of CBCT three-dimensional imaging, we propose a set of methods to extend these 2D Bolton Standards to anatomically correct, surface-based 3D standards that allow analysis of morphometric changes seen in the craniofacial complex. To create the 3D surface standards, we implemented a series of steps: 1) converting bi-plane 2D tracings into sets of splines; 2) converting the 2D spline curves from bi-plane projection into 3D space curves; 3) creating a labeled template of facial and skeletal shapes; and 4) creating 3D average surface Bolton standards. We used datasets from patients scanned with a Hitachi MercuRay CBCT scanner, which provides high-resolution, isotropic CT volume images, digitized the Bolton Standards from ages 3 to 18 years (lateral and frontal male, female, and average tracings), and converted them into facial and skeletal 3D space curves. This new 3D standard will help in assessing shape variations due to aging in the young population and will provide a reference for correcting facial anomalies in dental medicine.

  18. Cultural similarities and differences in perceiving and recognizing facial expressions of basic emotions.

    PubMed

    Yan, Xiaoqian; Andrews, Timothy J; Young, Andrew W

    2016-03-01

    The ability to recognize facial expressions of basic emotions is often considered a universal human ability. However, recent studies have suggested that this commonality has been overestimated and that people from different cultures use different facial signals to represent expressions (Jack, Blais, Scheepers, Schyns, & Caldara, 2009; Jack, Caldara, & Schyns, 2012). We investigated this possibility by examining similarities and differences in the perception and categorization of facial expressions between Chinese and white British participants using whole-face and partial-face images. Our results showed no cultural difference in the patterns of perceptual similarity of expressions from whole-face images. When categorizing the same expressions, however, both British and Chinese participants were slightly more accurate with whole-face images of their own ethnic group. To further investigate potential strategy differences, we repeated the perceptual similarity and categorization tasks with presentation of only the upper or lower half of each face. Again, the perceptual similarity of facial expressions was similar between Chinese and British participants for both the upper and lower face regions. However, participants were slightly better at categorizing facial expressions of their own ethnic group for the lower face regions, indicating that the way in which culture shapes the categorization of facial expressions is largely driven by differences in information decoding from this part of the face. (c) 2016 APA, all rights reserved.

  19. Importance of the brow in facial expressiveness during human communication.

    PubMed

    Neely, John Gail; Lisker, Paul; Drapekin, Jesse

    2014-03-01

    The objective of this study was to evaluate laterality and upper/lower face dominance of expressiveness during prescribed speech, using a unique validated image-subtraction system capable of sensitive and reliable measurement of facial surface deformation. Observations and experiments on the central control of facial expressions during speech and social utterances in humans and animals suggest that the right side of the mouth moves more than the left during nonemotional speech. However, proficient lip readers seem to attend to the whole face to interpret meaning from expressed facial cues, also implicating a horizontal (upper face-lower face) axis. Study design: prospective experimental design. Experimental maneuver: recited speech. Outcome measure: image-subtraction strength-duration curve amplitude. Thirty normal human adults were evaluated during memorized, nonemotional recitation of two short sentences. Facial movements were assessed using a video image-subtraction system capable of simultaneously measuring upper and lower specific areas of each hemiface. The results demonstrate that both axes influence facial expressiveness in human communication; however, the horizontal axis (upper versus lower face) appears dominant, especially during what appears to be spontaneous, unplanned breakthrough expressiveness. These data are congruent with the concept that the left cerebral hemisphere controls nonemotionally stimulated speech; however, the multisynaptic brainstem extrapyramidal pathways may override hemiface laterality and preferentially take control of the upper face. Additionally, these data demonstrate the importance of the often-ignored brow in facial expressiveness. Experimental study; EBM levels not applicable.

  20. Treatment of facial lipoatrophy with polymethylmethacrylate among patients with human immunodeficiency virus/acquired immunodeficiency syndrome (HIV/AIDS): impact on the quality of life.

    PubMed

    Quintas, Rodrigo C S; de França, Emmanuel R; de Petribú, Kátia C L; Ximenes, Ricardo A A; Quintas, Lóren F F M; Cavalcanti, Ernando L F; Kitamura, Marco A P; Magalhães, Kássia A A; Paiva, Késsia C F; Filho, Demócrito B Miranda

    2014-04-01

    The lipodystrophy syndrome is characterized by selective loss of subcutaneous fat from the face and extremities (lipoatrophy) and/or accumulation of fat around the neck, abdomen, and thorax (lipohypertrophy). The aim of this study was to assess the impact of polymethylmethacrylate facial treatment on quality of life, self-perceived facial image, and the severity of depressive symptoms in patients living with HIV/AIDS. A non-randomized before-and-after interventional study was conducted. Fifty-one patients underwent facial filling. Self-perceived quality of life, facial image, and degree of depressive symptoms were measured with the Short-Form 36 and HIV/AIDS-Targeted Quality of Life questionnaires, a visual analogue scale, and the Beck Depression Inventory, respectively, before and three months after treatment. Six of the eight domains of the Short-Form 36 and eight of the nine dimensions of the HIV/AIDS-Targeted Quality of Life questionnaire, together with the visual analogue scale and Beck Depression Inventory scores, showed statistically significant improvement. The only adverse effects registered were edema and ecchymosis. Treatment of facial lipoatrophy improved self-perceived quality of life and facial image, as well as depressive symptoms, among patients with HIV/AIDS. © 2014 The International Society of Dermatology.

  1. Facial Nerve Schwannoma: A Case Report, Radiological Features and Literature Review.

    PubMed

    Pilloni, Giulia; Mico, Barbara Massa; Altieri, Roberto; Zenga, Francesco; Ducati, Alessandro; Garbossa, Diego; Tartara, Fulvio

    2017-12-22

    Facial nerve schwannoma localized in the middle fossa is a rare lesion. We report a case of a facial nerve schwannoma in a 30-year-old male presenting with facial nerve palsy. Magnetic resonance imaging (MRI) showed a 3 cm diameter tumor of the right middle fossa. The tumor was removed using a sub-temporal approach. Intraoperative monitoring allowed for identification of the facial nerve, so it was not damaged during the surgical excision. Neurological clinical examination at discharge demonstrated moderate facial nerve improvement (Grade III House-Brackmann).

  2. Emotion unfolded by motion: a role for parietal lobe in decoding dynamic facial expressions.

    PubMed

    Sarkheil, Pegah; Goebel, Rainer; Schneider, Frank; Mathiak, Klaus

    2013-12-01

    Facial expressions convey important emotional and social information and are frequently used in investigations of human affective processing. Dynamic faces may provide higher ecological validity for examining the perceptual and cognitive processing of facial expressions. Higher-order processing of emotional faces was addressed by systematically varying the task and the virtual face models. Blood oxygenation level-dependent activation was assessed using functional magnetic resonance imaging in 20 healthy volunteers while they viewed dynamic face stimuli and evaluated either emotion or gender intensity. A general linear model analysis revealed that high valence activated a network of motion-responsive areas, indicating that visual motion areas support perceptual coding of the motion-based intensity of facial expressions. Comparing the emotion task with the gender discrimination task revealed increased activation of the inferior parietal lobule, which highlights the involvement of parietal areas in processing high-level features of faces. Dynamic emotional stimuli may help to emphasize functions of the hypothesized 'extended' over the 'core' system for face processing.

  3. Impaired social brain network for processing dynamic facial expressions in autism spectrum disorders

    PubMed Central

    2012-01-01

    Background Impairment of social interaction via facial expressions represents a core clinical feature of autism spectrum disorders (ASD). However, the neural correlates of this dysfunction remain unidentified. Because this dysfunction is manifested in real-life situations, we hypothesized that the observation of dynamic, compared with static, facial expressions would reveal abnormal brain functioning in individuals with ASD. We presented dynamic and static facial expressions of fear and happiness to individuals with high-functioning ASD and to age- and sex-matched typically developing controls and recorded their brain activities using functional magnetic resonance imaging (fMRI). Results Regional analysis revealed reduced activation of several brain regions in the ASD group compared with controls in response to dynamic versus static facial expressions, including the middle temporal gyrus (MTG), fusiform gyrus, amygdala, medial prefrontal cortex, and inferior frontal gyrus (IFG). Dynamic causal modeling analyses revealed that bi-directional effective connectivity involving the primary visual cortex–MTG–IFG circuit was enhanced in response to dynamic as compared with static facial expressions in the control group. Group comparisons revealed that all these modulatory effects were weaker in the ASD group than in the control group. Conclusions These results suggest that weak activity and connectivity of the social brain network underlie the impairment in social interaction involving dynamic facial expressions in individuals with ASD. PMID:22889284

  4. Evaluation of pharyngeal space and its correlation with mandible and hyoid bone in patients with different skeletal classes and facial types.

    PubMed

    Nejaim, Yuri; Aps, Johan K M; Groppo, Francisco Carlos; Haiter Neto, Francisco

    2018-06-01

    The purpose of this article was to evaluate the pharyngeal space volume and the size and shape of the mandible and the hyoid bone, as well as their relationships, in patients with different facial types and skeletal classes. Furthermore, we estimated the volume of the pharyngeal space with a formula using only linear measurements. A total of 161 i-CAT Next Generation (Imaging Sciences International, Hatfield, Pa) cone-beam computed tomography images (80 men, 81 women; ages 21-58 years; mean age, 27 years) were retrospectively studied. Skeletal class and facial type were determined for each patient from multiplanar reconstructions using the NemoCeph software (Nemotec, Madrid, Spain). Linear and angular measurements were performed using 3D imaging software (version 3.4.3; Carestream Health, Rochester, NY), and volumetric analysis of the pharyngeal space was carried out with ITK-SNAP (version 2.4.0; Cognitica, Philadelphia, Pa) segmentation software. For the statistics, analysis of variance and the Tukey test with a significance level of 0.05, Pearson correlation, and linear regression were used. The pharyngeal space volume, when correlated with mandible and hyoid bone linear and angular measurements, showed significant correlations with skeletal class or facial type. The linear regression performed to estimate the volume of the pharyngeal space showed an R of 0.92 and an adjusted R² of 0.8362. There were significant correlations between the pharyngeal space volume and the mandible and hyoid bone measurements, suggesting that the stomatognathic system should be evaluated in an integral, nonindividualized way. Furthermore, it was possible to develop a linear regression model, resulting in a useful formula for estimating the volume of the pharyngeal space. Copyright © 2018 American Association of Orthodontists. Published by Elsevier Inc. All rights reserved.
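
    A volume-estimation formula of the kind described above can be sketched as an ordinary least-squares fit with an adjusted R²; the predictor count, weights, and data below are synthetic illustrations, not the study's measurements.

```python
import numpy as np
from numpy.linalg import lstsq

rng = np.random.default_rng(0)

# Hypothetical linear measurements (mm) for 161 patients; the three
# columns are illustrative predictors, not the study's actual variables.
X = rng.uniform(20, 80, size=(161, 3))
true_w = np.array([150.0, 90.0, 60.0])
volume = X @ true_w + 500 + rng.normal(0, 300, size=161)  # mm^3, synthetic

# Fit ordinary least squares with an intercept column appended.
A = np.column_stack([X, np.ones(len(X))])
coef, *_ = lstsq(A, volume, rcond=None)

# Goodness of fit: R^2 and adjusted R^2 (penalizes extra predictors).
pred = A @ coef
ss_res = np.sum((volume - pred) ** 2)
ss_tot = np.sum((volume - volume.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
n, p = X.shape
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - p - 1)
print(round(r2, 3), round(adj_r2, 3))
```

    With a fitted model like this, the coefficient vector itself is the "formula": volume ≈ coef[0]·m1 + coef[1]·m2 + coef[2]·m3 + intercept.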

  5. Facial Age Synthesis Using Sparse Partial Least Squares (The Case of Ben Needham).

    PubMed

    Bukar, Ali M; Ugail, Hassan

    2017-09-01

    Automatic facial age progression (AFAP) has been an active area of research in recent years, owing to its numerous applications, which include the search for missing people. This study presents a new method of AFAP. Here, we use an active appearance model (AAM) to extract facial features from available images. An aging function is then modelled using sparse partial least squares regression (sPLS). Thereafter, the aging function is used to render new faces at different ages. To test the accuracy of our algorithm, extensive evaluation is conducted using a database of 500 face images with known ages. Furthermore, the algorithm is used to progress Ben Needham's facial image, taken when he was 21 months old, to the ages of 6, 14, and 22 years. The algorithm presented in this study could potentially be used to enhance the search for missing people worldwide. © 2017 American Academy of Forensic Sciences.
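
    A minimal sketch of the aging-function idea, using ridge regression on synthetic stand-ins for AAM parameters in place of the paper's sparse PLS (all names, dimensions, and numbers are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins for AAM shape/texture parameters of 500 faces.
n, d = 500, 20
params = rng.normal(size=(n, d))
ages = 5 + params[:, 0] * 10 + params[:, 1] * 4 + rng.normal(0, 1, n)

# Ridge regression as a simple stand-in for sparse PLS: both learn a
# low-complexity linear mapping between appearance parameters and age.
lam = 1.0
A = np.column_stack([params, np.ones(n)])
w = np.linalg.solve(A.T @ A + lam * np.eye(d + 1), A.T @ ages)

def shift_to_age(p, target_age, w=w):
    """Shift one face's parameters along the fitted aging direction so
    the model predicts the target age; the AAM would then reconstruct
    the aged face from the shifted parameters."""
    direction = w[:-1] / (w[:-1] @ w[:-1])
    current = p @ w[:-1] + w[-1]
    return p + (target_age - current) * direction

aged = shift_to_age(params[0], 22.0)
print(round(float(aged @ w[:-1] + w[-1]), 2))  # → 22.0
```

    sPLS would additionally zero out uninformative parameter directions; the shift-and-reconstruct step is the same in spirit.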

  6. Intra-temporal facial nerve centerline segmentation for navigated temporal bone surgery

    NASA Astrophysics Data System (ADS)

    Voormolen, Eduard H. J.; van Stralen, Marijn; Woerdeman, Peter A.; Pluim, Josien P. W.; Noordmans, Herke J.; Regli, Luca; Berkelbach van der Sprenkel, Jan W.; Viergever, Max A.

    2011-03-01

    Approaches through the temporal bone require surgeons to drill away bone to expose a target skull base lesion while evading vital structures contained within it, such as the sigmoid sinus, jugular bulb, and facial nerve. We hypothesize that an augmented neuronavigation system that continuously calculates the distance to these structures, and warns if the surgeon drills too close, will aid in making safe surgical approaches. Contemporary image guidance systems lack an automated method to segment the inhomogeneous and complexly curved facial nerve. We therefore developed a segmentation method to delineate the intra-temporal facial nerve centerline semi-automatically from clinically available temporal bone CT images. Our method requires the user to provide the start and end points of the facial nerve in a patient's CT scan, after which it iteratively matches an active appearance model built from the shape and texture of forty facial nerves. Its performance was evaluated on 20 patients by comparison to our gold standard: manually segmented facial nerve centerlines. Our segmentation method delineates facial nerve centerlines with a maximum error along the whole trajectory of 0.40 ± 0.20 mm (mean ± standard deviation). These results demonstrate that our model-based segmentation method can robustly segment facial nerve centerlines. Next, we can investigate whether integrating this automated facial nerve delineation with a distance-calculating neuronavigation interface results in a system that adequately warns surgeons during temporal bone drilling and effectively diminishes the risk of iatrogenic facial nerve palsy.
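
    A centerline error of the kind reported above (distance of each predicted point to the gold-standard centerline, summarized as mean ± standard deviation and maximum) can be computed as follows; the curves here are synthetic, not patient data.

```python
import numpy as np

def centerline_error(pred, gold):
    """For each predicted centerline point, the distance to the nearest
    gold-standard point; returns (mean, std, max) in the input units."""
    diffs = pred[:, None, :] - gold[None, :, :]
    d = np.sqrt((diffs ** 2).sum(-1)).min(axis=1)
    return d.mean(), d.std(), d.max()

# Toy example: a gold-standard curve and a copy displaced by 0.3 mm
# roughly perpendicular to its direction of travel.
t = np.linspace(0, 1, 50)
gold = np.stack([t * 10, np.sin(t * 3), t * 2], axis=1)
pred = gold + np.array([0.0, 0.0, 0.3])
mean, std, worst = centerline_error(pred, gold)
print(round(float(mean), 3))  # mean ≈ 0.30 mm for this toy case
```

    In practice the comparison is usually made symmetric (also measuring gold-to-predicted distances) so that missing segments are penalized too.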

  7. Brief Report: Representational Momentum for Dynamic Facial Expressions in Pervasive Developmental Disorder

    ERIC Educational Resources Information Center

    Uono, Shota; Sato, Wataru; Toichi, Motomi

    2010-01-01

    Individuals with pervasive developmental disorder (PDD) have difficulty with social communication via emotional facial expressions, but behavioral studies involving static images have reported inconsistent findings about emotion recognition. We investigated whether dynamic presentation of facial expression would enhance subjective perception of…

  8. Diagnosis and surgical outcomes of intraparotid facial nerve schwannoma showing normal facial nerve function.

    PubMed

    Lee, D W; Byeon, H K; Chung, H P; Choi, E C; Kim, S-H; Park, Y M

    2013-07-01

    The findings of intraparotid facial nerve schwannoma (FNS) using preoperative diagnostic tools, including ultrasonography (US)-guided fine needle aspiration biopsy, computed tomography (CT) scan, and magnetic resonance imaging (MRI), were analyzed to determine whether there are any useful findings that might suggest the presence of the lesion, and treatment guidelines are suggested. The medical records of 15 patients who were diagnosed with an intraparotid FNS were retrospectively analyzed. US and CT scans provide clinicians with only limited information; gadolinium-enhanced T1-weighted MRI provides more specific findings. Tumors could be removed successfully with surgical exploration, preserving facial nerve function at the same time. Gadolinium-enhanced T1-weighted MRI showed more characteristic findings for the diagnosis of intraparotid FNS. Intraparotid FNS without facial palsy can be diagnosed preoperatively with MRI, and surgical exploration is a suitable treatment modality that can remove the tumor and preserve facial nerve function. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.

  9. Three-dimensional analysis of facial morphology.

    PubMed

    Liu, Yun; Kau, Chung How; Talbert, Leslie; Pan, Feng

    2014-09-01

    The objectives of this study were to evaluate sexual dimorphism of facial features within Chinese and African American populations and to compare facial morphology by sex between these two populations. Three-dimensional facial images of 189 subjects from two population groups, Chinese (n = 72) and African American (n = 117), were acquired using the portable 3dMDface System. Each population was categorized into male and female groups for evaluation. All subjects were aged between 18 and 30 years and had no apparent facial anomalies. A total of 23 anthropometric landmarks were identified on the three-dimensional face of each subject. Twenty-one measurements in 4 regions, comprising 19 distances and 2 angles, were calculated and compared within and between the Chinese and African American populations. Student's t-test was used to analyze each data set obtained within each subgroup. Distinct facial differences were present between the examined subgroups. When comparing sex differences in facial morphology within the Chinese population, significant differences were noted in 71.43% of the calculated parameters, and the same proportion was found in the African American group. The facial morphologic differences between the Chinese and African American populations were also evaluated by sex; the proportion of significantly different parameters was 90.48% for females and 95.24% for males. The African American population had a more convex profile and greater face width than the Chinese population. Sexual dimorphism of facial features was present in both the Chinese and African American populations, and there were significant differences in facial morphology between the two populations.
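
    The group comparison described above reduces, per measurement, to a two-sample Student's t-test; a numpy-only sketch with hypothetical face-width data (the group means, spreads, and the measurement name are illustrative, not the study's values):

```python
import numpy as np

def two_sample_t(a, b):
    """Student's two-sample t statistic (pooled, equal-variance form),
    as would be used to compare one facial measurement between groups."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(sp2 * (1 / na + 1 / nb))

rng = np.random.default_rng(2)
# Hypothetical bizygomatic face widths (mm) for two groups, with the
# study's group sizes (117 and 72) but invented means.
group_a = rng.normal(138, 5, 117)
group_b = rng.normal(132, 5, 72)
t = two_sample_t(group_a, group_b)
```

    The t statistic is then compared against the t distribution with na + nb − 2 degrees of freedom to obtain a p-value; a statistics library would normally supply that last step.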

  10. Automated facial recognition of manually generated clay facial approximations: Potential application in unidentified persons data repositories.

    PubMed

    Parks, Connie L; Monson, Keith L

    2018-01-01

    This research examined how accurately 2D images (i.e., photographs) of 3D clay facial approximations were matched to corresponding photographs of the approximated individuals using an objective automated facial recognition system. Irrespective of the search filter (i.e., blind, sex, or ancestry) or rank class (R1, R10, R25, and R50) employed, few operationally informative results were observed. In only a single instance of 48 potential match opportunities was a clay approximation matched to a corresponding life photograph within the top 50 images (R50) of a candidate list, even with relatively small gallery sizes created from the application of search filters (e.g., sex or ancestry search restrictions). Increasing the candidate lists to include the top 100 images (R100) resulted in only two additional instances of correct matches. Although other untested variables (e.g., approximation method, 2D photographic process, and practitioner skill level) may have impacted the observed results, this study suggests that 2D images of manually generated clay approximations are not readily matched to life photos by automated facial recognition systems. Further investigation is necessary in order to identify the underlying cause(s), if any, of the poor recognition results observed in this study (e.g., potentially inferior facial feature detection and extraction). Additional inquiry exploring prospective remedial measures (e.g., stronger feature differentiation) is also warranted, particularly given the prominent use of clay approximations in unidentified persons casework. Copyright © 2017. Published by Elsevier B.V.
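
    Rank-class evaluation of this kind can be sketched as follows; the gallery size, similarity scores, and the index of the true match are synthetic assumptions, not the study's data.

```python
import numpy as np

def rank_k_hit(scores, true_index, k):
    """True if the correct gallery image appears in the top-k candidate
    list produced by sorting similarity scores in descending order."""
    order = np.argsort(scores)[::-1]
    return true_index in order[:k]

# Toy gallery: 200 similarity scores for one probe (a clay approximation),
# where the true match scores only moderately well.
rng = np.random.default_rng(3)
scores = rng.uniform(0, 0.5, 200)
scores[17] = 0.45
hits_r50 = rank_k_hit(scores, 17, 50)
```

    Repeating this over all probes and counting hits gives the per-rank-class accuracy (R1, R10, R25, R50) reported in studies like the one above.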

  11. Understanding 'not': neuropsychological dissociations between hand and head markers of negation in BSL.

    PubMed

    Atkinson, Jo; Campbell, Ruth; Marshall, Jane; Thacker, Alice; Woll, Bencie

    2004-01-01

    Simple negation in natural languages represents a complex interrelationship of syntax, prosody, semantics and pragmatics, and may be realised in various ways: lexically, morphologically and prosodically. In almost all spoken languages, the first two of these are the primary realisations of syntactic negation. In contrast, in many signed languages negation can occur without lexical or morphological marking. Thus, in British Sign Language (BSL), negation is obligatorily expressed using face-head actions alone (facial negation) with the option of articulating a manual form alongside the required face-head actions (lexical negation). What are the processes underlying facial negation? Here, we explore this question neuropsychologically. If facial negation reflects lexico-syntactic processing in BSL, it may be relatively spared in people with unilateral right hemisphere (RH) lesions, as has been suggested for other 'grammatical facial actions' [Language and Speech 42 (1999) 307; Emmorey, K. (2002). Language, cognition and the brain: Insights from sign language research. Mahwah, NJ: Erlbaum (Lawrence)]. Three BSL users with RH lesions were specifically impaired in perceiving facial compared with manual (lexical and morphological) negation. This dissociation was absent in three users of BSL with left hemisphere lesions and different degrees of language disorder, who also showed relative sparing of negation comprehension. We conclude that, in contrast to some analyses [Applied Psycholinguistics 18 (1997) 411; Emmorey, K. (2002). Language, cognition and the brain: Insights from sign language research. Mahwah, NJ: Erlbaum (Lawrence); Archives of Neurology 36 (1979) 837], non-manual negation in sign may not be a direct surface realisation of syntax [Language and Speech 42 (1999) 143; Language and Speech 42 (1999) 127]. 
Difficulties with facial negation in the RH-lesion group were associated with specific impairments in processing facial images, including facial expressions. However, they did not reflect generalised 'face-blindness', since the reading of (English) speech patterns from faces was spared in this group. We propose that some aspects of the linguistic analysis of sign language are achieved by prosodic analysis systems (analysis of face and head gestures), which are lateralised to the minor hemisphere.

  12. Population calcium imaging of spontaneous respiratory and novel motor activity in the facial nucleus and ventral brainstem in newborn mice

    PubMed Central

    Persson, Karin; Rekling, Jens C

    2011-01-01

    Abstract The brainstem contains rhythm and pattern forming circuits, which drive cranial and spinal motor pools to produce respiratory and other motor patterns. Here we used calcium imaging combined with nerve recordings in newborn mice to reveal spontaneous population activity in the ventral brainstem and in the facial nucleus. In Fluo-8 AM loaded brainstem–spinal cord preparations, respiratory activity on cervical nerves was synchronized with calcium signals at the ventrolateral brainstem surface. Individual ventrolateral neurons at the level of the parafacial respiratory group showed perfect or partial synchrony with respiratory nerve bursts. In brainstem–spinal cord preparations, cut at the level of the mid-facial nucleus, calcium signals were recorded in the dorsal, lateral and medial facial subnuclei during respiratory activity. Strong activity initiated in the dorsal subnucleus, followed by activity in lateral and medial subnuclei. Whole-cell recordings from facial motoneurons showed weak respiratory drives, and electrical field potential recordings confirmed respiratory drive to particularly the dorsal and lateral subnuclei. Putative facial premotoneurons showed respiratory-related calcium signals, and were predominantly located dorsomedial to the facial nucleus. A novel motor activity on facial, cervical and thoracic nerves was synchronized with calcium signals at the ventromedial brainstem extending from the level of the facial nucleus to the medulla–spinal cord border. Cervical dorsal root stimulation induced similar ventromedial activity. The medial facial subnucleus showed calcium signals synchronized with this novel motor activity on cervical nerves, and cervical dorsal root stimulation induced similar medial facial subnucleus activity. 
In conclusion, the dorsal and lateral facial subnuclei are strongly respiratory-modulated, and the brainstem contains a novel pattern forming circuit that drives the medial facial subnucleus and cervical motor pools. PMID:21486812

  13. Three-dimensional face model reproduction method using multiview images

    NASA Astrophysics Data System (ADS)

    Nagashima, Yoshio; Agawa, Hiroshi; Kishino, Fumio

    1991-11-01

    This paper describes a method of reproducing three-dimensional face models from multi-view images for a virtual space teleconferencing system that achieves a realistic visual presence. The goal of this research, as an integral component of such a system, is to generate a three-dimensional face model from facial images and to synthesize images of the model as viewed virtually from different angles, with natural shadows to suit the lighting conditions of the virtual space. The proposed method is as follows. First, front and side view images of the human face are taken by TV cameras. The 3D coordinates of facial feature points are obtained from the front and side views by an image processing technique based on the color, shape, and correlation of face components. Using these 3D data, prepared base face models, representing typical Japanese male and female faces, are modified to approximate the input facial image, and the personal face model, representing the individual character, is reproduced. Next, an oblique view image is taken by a TV camera, and the feature points of this image are extracted using the same image processing technique. A more precise personal model is reproduced by fitting the boundary of the personal face model to the boundary of the oblique view image. The modified boundary of the personal face model is determined using the face direction, namely the rotation angle, which is detected from the extracted feature points. After the 3D model is established, new images are synthesized by mapping facial texture onto the model.
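
    Recovering 3D feature points from orthogonal front and side views, as described above, amounts to combining the two views' coordinates; a minimal sketch, where the axis conventions (front view gives x, y; side view gives z, y) and the example points are assumptions:

```python
import numpy as np

def lift_to_3d(front_xy, side_zy):
    """Combine matched 2D feature points from an orthogonal front view
    (x, y) and side view (z, y) into 3D points, averaging the two y
    estimates. Points are assumed matched and in a common scale."""
    x = front_xy[:, 0]
    z = side_zy[:, 0]
    y = (front_xy[:, 1] + side_zy[:, 1]) / 2
    return np.stack([x, y, z], axis=1)

# Toy example: two feature points seen in both views (arbitrary units).
front = np.array([[0.0, 1.0], [2.0, 3.0]])
side = np.array([[5.0, 1.0], [4.0, 3.0]])
pts3d = lift_to_3d(front, side)
# pts3d → [[0., 1., 5.], [2., 3., 4.]]
```

    Real systems must also calibrate the two cameras to a common scale and origin before the coordinates can be merged this way.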

  14. Measuring Symmetry in Children With Unrepaired Cleft Lip: Defining a Standard for the Three-Dimensional Midfacial Reference Plane.

    PubMed

    Wu, Jia; Heike, Carrie; Birgfeld, Craig; Evans, Kelly; Maga, Murat; Morrison, Clinton; Saltzman, Babette; Shapiro, Linda; Tse, Raymond

    2016-11-01

    Objective: Quantitative measures of facial form for evaluating treatment outcomes of cleft lip (CL) are currently limited. Computer-based analysis of three-dimensional (3D) images provides an opportunity for efficient and objective analysis. The purpose of this study was to define a computer-based standard for identifying the 3D midfacial reference plane of the face in children with unrepaired cleft lip for measurement of facial symmetry. Participants: The 3D images of 50 subjects (35 with unilateral CL, 10 with bilateral CL, five controls) were included in this study. Methods: Five methods of defining a midfacial plane were applied to each image, including two human-based (Direct Placement, Manual Landmark) and three computer-based (Mirror, Deformation, Learning) methods. Six blinded raters (three cleft surgeons, two craniofacial pediatricians, and one craniofacial researcher) independently ranked and rated the accuracy of the defined planes. Results: Among the computer-based methods, the Deformation method performed significantly better than the others. Although the human-based methods performed best, there was no significant difference compared with the Deformation method. The average correlation coefficient among raters was .4; however, it was .7 and .9 when the angular difference between planes was greater than 6° and 8°, respectively. Conclusions: Raters can agree on the 3D midfacial reference plane in children with unrepaired CL using a digital surface mesh. The Deformation method performed best among the computer-based methods evaluated and can be considered a useful tool for automated measurement of facial symmetry in children with unrepaired cleft lip.
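
    The angular difference between two candidate midfacial planes, as used in the rater analysis above, can be computed from the planes' normals; a small sketch with hypothetical normals (the 7° disagreement is an invented example):

```python
import numpy as np

def plane_angle_deg(n1, n2):
    """Angle in degrees between two planes, given their normals; plane
    orientation (sign of the normal) is ignored, so the result lies
    in [0, 90]."""
    n1 = n1 / np.linalg.norm(n1)
    n2 = n2 / np.linalg.norm(n2)
    c = abs(float(n1 @ n2))
    return float(np.degrees(np.arccos(min(c, 1.0))))

# Two candidate midfacial planes that disagree by a rotation about y.
a = np.radians(7.0)
n_manual = np.array([1.0, 0.0, 0.0])
n_mirror = np.array([np.cos(a), 0.0, np.sin(a)])
angle = plane_angle_deg(n_manual, n_mirror)  # ≈ 7.0°
```

    Thresholding this angle (e.g., at 6° or 8°, as in the study) then flags plane pairs that disagree enough for raters to distinguish them reliably.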

  15. A mixture of sparse coding models explaining properties of face neurons related to holistic and parts-based processing

    PubMed Central

    2017-01-01

    Experimental studies have revealed evidence of both parts-based and holistic representations of objects and faces in the primate visual system. However, it is still a mystery how such seemingly contradictory types of processing can coexist within a single system. Here, we propose a novel theory called mixture of sparse coding models, inspired by the formation of category-specific subregions in the inferotemporal (IT) cortex. We developed a hierarchical network that constructed a mixture of two sparse coding submodels on top of a simple Gabor analysis. The submodels were each trained with face or non-face object images, which resulted in separate representations of facial parts and object parts. Importantly, evoked neural activities were modeled by Bayesian inference, which had a top-down explaining-away effect that enabled recognition of an individual part to depend strongly on the category of the whole input. We show that this explaining-away effect was indeed crucial for the units in the face submodel to exhibit significant selectivity to face images over object images in a similar way to actual face-selective neurons in the macaque IT cortex. Furthermore, the model explained, qualitatively and quantitatively, several tuning properties to facial features found in the middle patch of face processing in IT as documented by Freiwald, Tsao, and Livingstone (2009). These included, in particular, tuning to only a small number of facial features that were often related to geometrically large parts like face outline and hair, preference and anti-preference of extreme facial features (e.g., very large/small inter-eye distance), and reduction of the gain of feature tuning for partial face stimuli compared to whole face stimuli. Thus, we hypothesize that the coding principle of facial features in the middle patch of face processing in the macaque IT cortex may be closely related to mixture of sparse coding models. PMID:28742816

  16. A mixture of sparse coding models explaining properties of face neurons related to holistic and parts-based processing.

    PubMed

    Hosoya, Haruo; Hyvärinen, Aapo

    2017-07-01

    Experimental studies have revealed evidence of both parts-based and holistic representations of objects and faces in the primate visual system. However, it is still a mystery how such seemingly contradictory types of processing can coexist within a single system. Here, we propose a novel theory called mixture of sparse coding models, inspired by the formation of category-specific subregions in the inferotemporal (IT) cortex. We developed a hierarchical network that constructed a mixture of two sparse coding submodels on top of a simple Gabor analysis. The submodels were each trained with face or non-face object images, which resulted in separate representations of facial parts and object parts. Importantly, evoked neural activities were modeled by Bayesian inference, which had a top-down explaining-away effect that enabled recognition of an individual part to depend strongly on the category of the whole input. We show that this explaining-away effect was indeed crucial for the units in the face submodel to exhibit significant selectivity to face images over object images in a similar way to actual face-selective neurons in the macaque IT cortex. Furthermore, the model explained, qualitatively and quantitatively, several tuning properties to facial features found in the middle patch of face processing in IT as documented by Freiwald, Tsao, and Livingstone (2009). These included, in particular, tuning to only a small number of facial features that were often related to geometrically large parts like face outline and hair, preference and anti-preference of extreme facial features (e.g., very large/small inter-eye distance), and reduction of the gain of feature tuning for partial face stimuli compared to whole face stimuli. Thus, we hypothesize that the coding principle of facial features in the middle patch of face processing in the macaque IT cortex may be closely related to mixture of sparse coding models.
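The mixture-of-sparse-coding idea can be caricatured with two dictionaries and an l1-penalized inference step; selecting the submodel with the better penalized reconstruction crudely stands in for the Bayesian explaining-away inference described above. This is illustrative only: `ista`, the dictionaries, and the scoring are assumptions of this sketch, not the authors' model:

```python
import numpy as np

def ista(D, x, lam=0.1, n_iter=200):
    """Sparse-code x over dictionary D (columns are atoms) via ISTA."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = D.T @ (D @ a - x)              # gradient of 0.5*||D a - x||^2
        a = a - g / L
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)  # soft threshold
    return a

def classify(submodels, x, lam=0.1):
    """Pick the submodel (category) whose sparse code explains x best."""
    scores = {}
    for name, D in submodels.items():
        a = ista(D, x, lam)
        # Negative penalized residual acts as a crude model evidence.
        scores[name] = -(0.5 * np.sum((x - D @ a) ** 2) + lam * np.sum(np.abs(a)))
    return max(scores, key=scores.get)
```

Once the winning submodel is chosen, its units alone account for the input, which is the flavor of explaining-away the model uses to make part responses depend on the category of the whole.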

  17. Three-dimensional facial anthropometry of unilateral cleft lip infants with a structured light scanning system.

    PubMed

    Li, Guanghui; Wei, Jianhua; Wang, Xi; Wu, Guofeng; Ma, Dandan; Wang, Bo; Liu, Yanpu; Feng, Xinghua

    2013-08-01

    Cleft lip in the presence or absence of a cleft palate is a major public health problem. However, few studies have been published concerning the soft-tissue morphology of cleft lip infants. Currently, obtaining reliable three-dimensional (3D) surface models of infants remains a challenge. The aim of this study was to investigate a new way of capturing 3D images of cleft lip infants using a structured light scanning system. In addition, the accuracy and precision of the acquired facial 3D data were validated and compared with direct measurements. Ten unilateral cleft lip patients were enrolled in the study. Briefly, 3D facial images of the patients were acquired using a 3D scanner device before and after the surgery. Fourteen items were measured by direct anthropometry and 3D image software. The accuracy and precision of the 3D system were assessed by comparative analysis. The anthropometric data obtained using the 3D method were in agreement with the direct anthropometry measurements. All data calculated by the software were 'highly reliable' or 'reliable', as defined in the literature. The localisation of four landmarks was not consistent in repeated experiments of inter-observer reliability in preoperative images (P<0.05), while the intra-observer reliability in both pre- and postoperative images was good (P>0.05). The structured light scanning system is proven to be a non-invasive, accurate and precise method in cleft lip anthropometry. Copyright © 2013 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.

  18. Atmospheric turbulence and sensor system effects on biometric algorithm performance

    NASA Astrophysics Data System (ADS)

    Espinola, Richard L.; Leonard, Kevin R.; Byrd, Kenneth A.; Potvin, Guy

    2015-05-01

    Biometric technologies composed of electro-optical/infrared (EO/IR) sensor systems and advanced matching algorithms are being used in various force protection/security and tactical surveillance applications. To date, most of these sensor systems have been widely used in controlled conditions with varying success (e.g., short range, uniform illumination, cooperative subjects). However, the limiting conditions of such systems have yet to be fully studied for long-range applications and degraded imaging environments. Biometric technologies used for long-range applications will invariably suffer from the effects of atmospheric turbulence degradation. Atmospheric turbulence causes blur, distortion, and intensity fluctuations that can severely degrade the image quality of electro-optic and thermal imaging systems and, in the case of biometric technology, translate into poor matching-algorithm performance. In this paper, we evaluate the effects of atmospheric turbulence and sensor resolution on biometric matching-algorithm performance. We use a subset of the Facial Recognition Technology (FERET) database and a commercial algorithm to analyze facial recognition performance on turbulence-degraded facial images. The goal of this work is to understand the feasibility of long-range facial recognition in degraded imaging conditions, and the utility of camera parameter trade studies in enabling the design of next-generation biometric sensor systems.

  19. Fusion of cone-beam CT and 3D photographic images for soft tissue simulation in maxillofacial surgery

    NASA Astrophysics Data System (ADS)

    Chung, Soyoung; Kim, Joojin; Hong, Helen

    2016-03-01

    During maxillofacial surgery, prediction of the facial outcome after surgery is a main concern for both surgeons and patients. However, registration of facial CBCT images with 3D photographic images is difficult: regions around the eyes and mouth are affected by facial expressions, and registration is slow because of the dense point clouds on the surfaces. We therefore propose a framework for the fusion of facial CBCT images and 3D photos based on skin segmentation and two-stage surface registration. Our method is composed of three major steps. First, to obtain a CBCT skin surface for registration with the 3D photographic surface, the skin is automatically segmented from the CBCT images and the skin surface is generated by surface modeling. Second, to roughly align the scale and orientation of the CBCT skin surface and the 3D photographic surface, point-based registration is performed using four corresponding landmarks located around the mouth. Finally, to merge the CBCT skin surface and the 3D photographic surface, Gaussian-weighted surface registration is performed within a narrow band of the 3D photographic surface.
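The rough landmark-based alignment step can be sketched with a standard least-squares rigid fit (the Kabsch/SVD solution). A real implementation of the pipeline above would also estimate scale from the four landmarks and then follow with the Gaussian-weighted surface registration:

```python
import numpy as np

def rigid_align(P, Q):
    """Least-squares rigid transform (R, t) mapping points P onto Q (Kabsch).

    P, Q: (N, 3) arrays of corresponding landmarks.
    Returns R (3x3), t (3,) such that Q ~ P @ R.T + t.
    """
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)              # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T)) # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t
```

Four non-coplanar landmarks, as used in the paper, are more than enough to determine the six rigid degrees of freedom.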

  20. Dermatoscopic features of cutaneous non-facial non-acral lentiginous growth pattern melanomas

    PubMed Central

    Keir, Jeff

    2014-01-01

    Background: The dermatoscopic features of facial lentigo maligna (LM), facial lentigo maligna melanoma (LMM) and acral lentiginous melanoma (ALM) have been well described. This is the first description of the dermatoscopic appearance of a clinical series of cutaneous non-facial non-acral lentiginous growth pattern melanomas. Objective: To describe the dermatoscopic features of a series of cutaneous non-facial non-acral lentiginous growth pattern melanomas in an Australian skin cancer practice. Method: Single observer retrospective analysis of dermatoscopic images of a one-year series of cutaneous non-facial, non-acral melanomas reported as having a lentiginous growth pattern detected in an open access primary care skin cancer clinic in Australia. Lesions were scored for presence of classical criteria for facial LM; modified pattern analysis (“Chaos and Clues”) criteria; and the presence of two novel criteria: a lentigo-like pigment pattern lacking a lentigo-like border, and large polygons. Results: 20 melanomas occurring in 14 female and 6 male patients were included. Average patient age was 64 years (range: 44–83). Lesion distribution was: trunk 35%; upper limb 40%; and lower limb 25%. The incidences of criteria identified were: asymmetry of color or pattern (100%); lentigo-like pigment pattern lacking a lentigo-like border (90%); asymmetrically pigmented follicular openings (APFO’s) (70%); grey blue structures (70%); large polygons (45%); eccentric structureless area (15%); bright white lines (5%). 20% of the lesions had only the novel criteria and/or APFO’s. Limitations: Single observer, single center retrospective study. Conclusions: Cutaneous non-facial non-acral melanomas with a lentiginous growth pattern may have none or very few traditional criteria for the diagnosis of melanoma. Criteria that are logically expected in lesions with a lentiginous growth pattern (lentigo-like pigment pattern lacking a lentigo-like border, APFO’s) and the novel criterion of large polygons may be useful in increasing sensitivity and specificity of diagnosis of these lesions. Further study is required to establish the significance of these observations. PMID:24520520

  1. Dermatoscopic features of cutaneous non-facial non-acral lentiginous growth pattern melanomas.

    PubMed

    Keir, Jeff

    2014-01-01

    The dermatoscopic features of facial lentigo maligna (LM), facial lentigo maligna melanoma (LMM) and acral lentiginous melanoma (ALM) have been well described. This is the first description of the dermatoscopic appearance of a clinical series of cutaneous non-facial non-acral lentiginous growth pattern melanomas. To describe the dermatoscopic features of a series of cutaneous non-facial non-acral lentiginous growth pattern melanomas in an Australian skin cancer practice. Single observer retrospective analysis of dermatoscopic images of a one-year series of cutaneous non-facial, non-acral melanomas reported as having a lentiginous growth pattern detected in an open access primary care skin cancer clinic in Australia. Lesions were scored for presence of classical criteria for facial LM; modified pattern analysis ("Chaos and Clues") criteria; and the presence of two novel criteria: a lentigo-like pigment pattern lacking a lentigo-like border, and large polygons. 20 melanomas occurring in 14 female and 6 male patients were included. Average patient age was 64 years (range: 44-83). Lesion distribution was: trunk 35%; upper limb 40%; and lower limb 25%. The incidences of criteria identified were: asymmetry of color or pattern (100%); lentigo-like pigment pattern lacking a lentigo-like border (90%); asymmetrically pigmented follicular openings (APFO's) (70%); grey blue structures (70%); large polygons (45%); eccentric structureless area (15%); bright white lines (5%). 20% of the lesions had only the novel criteria and/or APFO's. Single observer, single center retrospective study. Cutaneous non-facial non-acral melanomas with a lentiginous growth pattern may have none or very few traditional criteria for the diagnosis of melanoma. Criteria that are logically expected in lesions with a lentiginous growth pattern (lentigo-like pigment pattern lacking a lentigo-like border, APFO's) and the novel criterion of large polygons may be useful in increasing sensitivity and specificity of diagnosis of these lesions. Further study is required to establish the significance of these observations.

  2. Human (Homo sapiens) facial attractiveness in relation to skin texture and color.

    PubMed

    Fink, B; Grammer, K; Thornhill, R

    2001-03-01

    The notion that surface texture may provide important information about the geometry of visible surfaces has attracted considerable attention for a long time. The present study shows that skin texture plays a significant role in the judgment of female facial beauty. Following research in clinical dermatology, the authors developed a computer program that implemented an algorithm based on co-occurrence matrices for the analysis of facial skin texture. Homogeneity and contrast features as well as color parameters were extracted out of stimulus faces. Attractiveness ratings of the images made by male participants relate positively to parameters of skin homogeneity. The authors propose that skin texture is a cue to fertility and health. In contrast to some previous studies, the authors found that dark skin, not light skin, was rated as most attractive.
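Co-occurrence-based homogeneity and contrast of the kind used above can be sketched from scratch. This is a minimal single-offset version; the study's actual feature set, offsets, and gray-level quantization are not specified here:

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Normalized gray-level co-occurrence matrix for one pixel offset.

    img: 2D integer array with values in [0, levels).
    Entry (i, j) is the probability that a pixel of level i has a
    neighbor of level j at offset (dx, dy).
    """
    img = np.asarray(img)
    h, w = img.shape
    M = np.zeros((levels, levels))
    for y in range(max(0, -dy), min(h, h - dy)):
        for x in range(max(0, -dx), min(w, w - dx)):
            M[img[y, x], img[y + dy, x + dx]] += 1
    return M / M.sum()

def contrast(M):
    """High when co-occurring levels differ strongly (rough skin texture)."""
    i, j = np.indices(M.shape)
    return float(np.sum(M * (i - j) ** 2))

def homogeneity(M):
    """High when co-occurring levels are similar (smooth skin texture)."""
    i, j = np.indices(M.shape)
    return float(np.sum(M / (1.0 + np.abs(i - j))))
```

A perfectly uniform patch has zero contrast and maximal homogeneity, which is the direction the attractiveness ratings in the study correlated with.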

  3. Biometric identification based on novel frequency domain facial asymmetry measures

    NASA Astrophysics Data System (ADS)

    Mitra, Sinjini; Savvides, Marios; Vijaya Kumar, B. V. K.

    2005-03-01

    In the modern world, the ever-growing need to ensure a system's security has spurred the growth of the newly emerging technology of biometric identification. The present paper introduces a novel set of facial biometrics based on quantified facial asymmetry measures in the frequency domain. In particular, we show that these biometrics work well for face images showing expression variations and have the potential to do so in the presence of illumination variations as well. A comparison of the recognition rates with those obtained from spatial-domain asymmetry measures based on raw intensity values suggests that the frequency-domain representation is more robust to intra-personal distortions and is a novel approach for performing biometric identification. In addition, some feature analysis based on statistical methods comparing the asymmetry measures across different individuals and across different expressions is presented.
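A toy version of a frequency-domain asymmetry measure: compare a face image with its left-right mirror and bin the energy of the difference spectrum into radial frequency bands. The paper's actual asymmetry biometrics are defined differently; this fragment only illustrates the frequency-domain flavor:

```python
import numpy as np

def fd_asymmetry(face, n_bands=4):
    """Per-band frequency-domain asymmetry of a face image.

    The face is compared with its horizontal mirror; the magnitude
    spectrum of the difference is accumulated over n_bands concentric
    rings, yielding one asymmetry measure per spatial-frequency band.
    """
    face = np.asarray(face, float)
    diff = face - np.fliplr(face)                    # zero iff symmetric
    spec = np.abs(np.fft.fftshift(np.fft.fft2(diff)))
    h, w = spec.shape
    yy, xx = np.indices((h, w))
    r = np.hypot(yy - h / 2.0, xx - w / 2.0)         # radial frequency
    r_max = r.max()
    return np.array([spec[(r >= r_max * b / n_bands) &
                          (r < r_max * (b + 1) / n_bands)].sum()
                     for b in range(n_bands)])
```

A perfectly symmetric face yields an all-zero feature vector; expression-induced asymmetries show up as energy in particular bands.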

  4. Under pressure: progressively enlarging facial mass following high-pressure paint injection injury.

    PubMed

    Mushtaq, Jameel; Walker, Abigail; Hunter, Ben

    2016-01-19

    High-pressure paint injection injuries are relatively rare industrial accidents and almost exclusively occur on the non-dominant hand. A rarely documented complication of these injuries is the formation of a foreign body granuloma. We report a case of a 33-year-old man presenting with extensive facial scarring and progressive right paranasal swelling 7 years after a high-pressure paint injury. After imaging investigations, an excision of the mass and revision of scarring was performed. Access to the mass was gained indirectly through existing scarring over the nose to ensure an aesthetic result. Histological analysis revealed a florid granulomatous foreign body reaction to retained paint. To the best of our knowledge, this is the first reported case of a facial high-pressure paint injury with consequent formation of a foreign body granuloma. 2016 BMJ Publishing Group Ltd.

  5. A facial expression image database and norm for Asian population: a preliminary report

    NASA Astrophysics Data System (ADS)

    Chen, Chien-Chung; Cho, Shu-ling; Horszowska, Katarzyna; Chen, Mei-Yen; Wu, Chia-Ching; Chen, Hsueh-Chih; Yeh, Yi-Yu; Cheng, Chao-Min

    2009-01-01

    We collected 6604 images of 30 models in eight types of facial expression: happiness, anger, sadness, disgust, fear, surprise, contempt and neutral. Among them, the 406 most representative images from 12 models were rated by more than 200 human raters for perceived emotion category and intensity. Such a large number of emotion categories, models and raters is sufficient for most serious expression recognition research both in psychology and in computer science. All the models and raters are of Asian background; hence, this database can also be used when cultural background is a concern. In addition, 43 landmarks were identified and recorded for each of the 291 rated frontal-view images. This information should facilitate feature-based research on facial expression. Overall, the diversity of images and richness of information should make our database and norm useful for a wide range of research.

  6. Automatic image assessment from facial attributes

    NASA Astrophysics Data System (ADS)

    Ptucha, Raymond; Kloosterman, David; Mittelstaedt, Brian; Loui, Alexander

    2013-03-01

    Personal consumer photography collections often contain photos captured by numerous devices stored both locally and via online services. The task of gathering, organizing, and assembling still and video assets in preparation for sharing with others can be quite challenging. Current commercial photobook applications are mostly manual, requiring significant user interaction. To assist the consumer in organizing these assets, we propose an automatic method to assign a fitness score to each asset, whereby the top-scoring assets are used for product creation. Our method uses cues extracted from analyzing pixel data, metadata embedded in the file, as well as ancillary tags or online comments. When a face occurs in an image, its features have a dominating influence on both aesthetic and compositional properties of the displayed image. As such, this paper emphasizes the contributions faces make to the overall fitness score of an image. To understand consumer preference, we conducted a psychophysical study that spanned 27 judges, 5,598 faces, and 2,550 images. Preferences on a per-face and per-image basis were independently gathered to train our classifiers. We describe how to use machine learning techniques to merge differing facial attributes into a single classifier. Our novel methods of facial weighting, fusion of facial attributes, and dimensionality reduction produce state-of-the-art results suitable for commercial applications.

  7. Field test studies of our infrared-based human temperature screening system embedded with a parallel measurement approach

    NASA Astrophysics Data System (ADS)

    Sumriddetchkajorn, Sarun; Chaitavon, Kosom

    2009-07-01

    This paper introduces a parallel measurement approach for fast infrared-based human temperature screening suitable for use in large public areas. Our key idea is based on the combination of simple image processing algorithms, infrared technology, and human flow management. With this multidisciplinary concept, we arrange as many people as possible in a two-dimensional space in front of a thermal imaging camera and then highlight all human facial areas through simple image filtering, image morphology, and particle analysis. In this way, each individual's face in the live thermal image can be located and the maximum facial skin temperature monitored and displayed. Our experiment shows a measured 1 ms processing time for highlighting all human face areas. With a thermal imaging camera having a 24° × 18° field of view and 320 × 240 active pixels, the maximum facial skin temperatures of three people's faces located 1.3 m from the camera can be simultaneously monitored and displayed at a measured rate of 31 fps, limited by the looping process that determines the coordinates of all faces. During our 3-day test under an ambient temperature of 24-30 °C, 57-72% relative humidity, and weak wind from outside the hospital building, hyperthermic patients could be identified with 100% sensitivity and 36.4% specificity when the temperature threshold level and the offset temperature value were appropriately chosen. Locating our system away from building doors, air conditioners, and electric fans, to eliminate wind blowing toward the camera lens, can significantly improve the system's specificity.
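The highlight-and-measure step (threshold, morphological clean-up, particle analysis, per-face maximum temperature) can be sketched with a plain connected-component pass. The threshold and minimum-size values below are placeholders, not the system's calibrated settings:

```python
import numpy as np
from collections import deque

def hot_regions(thermal, threshold=34.0, min_pixels=4):
    """Find connected warm blobs in a thermal frame; report, for each,
    its peak temperature and bounding box (a stand-in for the particle
    analysis used to localize faces)."""
    thermal = np.asarray(thermal, float)
    mask = thermal >= threshold
    seen = np.zeros(mask.shape, bool)
    h, w = mask.shape
    regions = []
    for seed in zip(*np.nonzero(mask)):
        if seen[seed]:
            continue
        q, pix = deque([seed]), []
        seen[seed] = True
        while q:                               # BFS flood fill, 4-connected
            y, x = q.popleft()
            pix.append((y, x))
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                    seen[ny, nx] = True
                    q.append((ny, nx))
        if len(pix) >= min_pixels:             # drop small "particles" (noise)
            ys, xs = zip(*pix)
            peak = max(thermal[y, x] for y, x in pix)
            regions.append((peak, (min(ys), min(xs), max(ys), max(xs))))
    return regions
```

Each returned peak temperature is what a screening system would compare against its alarm threshold, one reading per detected face.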

  8. Treatment of Previously Treated Facial Capillary Malformations: Results of Single-Center Retrospective Objective 3-Dimensional Analysis of the Efficacy of Large Spot 532 nm Lasers.

    PubMed

    Kwiek, Bartłomiej; Ambroziak, Marcin; Osipowicz, Katarzyna; Kowalewski, Cezary; Rożalski, Michał

    2018-06-01

    Current treatment of facial capillary malformations (CM) has limited efficacy. To assess the efficacy of large-spot 532 nm lasers for the treatment of previously treated facial CM with the use of 3-dimensional (3D) image analysis. Forty-three white patients aged 6 to 59 were included in this study. Patients had 3D photography performed before and after treatment with a 532 nm Nd:YAG laser with a large spot and contact cooling. Objective analysis of percentage improvement, based on 3D digital assessment of combined color and area improvement (global clearance effect [GCE]), was performed. The median maximal improvement achieved during treatment (GCE) was 59.1%. The mean number of laser procedures required to achieve this improvement was 6.2 (range 1-16). Improvement of at least 25% (GCE25) was achieved by 88.4% of patients, at least 50% (GCE50) by 61.1%, at least 75% (GCE75) by 25.6%, and at least 90% (GCE90) by 4.6%. Patients previously treated with pulsed dye lasers showed significantly less response than those treated with other modalities (GCE 37.3% vs 61.8%, respectively). A large-spot 532 nm laser is effective in previously treated patients with facial CM.

  9. Middle ear osteoma causing progressive facial nerve weakness: a case report.

    PubMed

    Curtis, Kate; Bance, Manohar; Carter, Michael; Hong, Paul

    2014-09-18

    Facial nerve weakness is most commonly due to Bell's palsy or cerebrovascular accidents. Rarely, middle ear tumor presents with facial nerve dysfunction. We report a very unusual case of middle ear osteoma in a 49-year-old Caucasian woman causing progressive facial nerve deficit. A subtle middle ear lesion was observed on otoscopy and computed tomographic images demonstrated an osseous middle ear tumor. Complete surgical excision resulted in the partial recovery of facial nerve function. Facial nerve dysfunction is rarely caused by middle ear tumors. The weakness is typically due to a compressive effect on the middle ear portion of the facial nerve. Early recognition is crucial since removal of these lesions may lead to the recuperation of facial nerve function.

  10. The Online Identity Formation of the Institution of Higher Education: Analysis of Power Relations and Subject Positions

    ERIC Educational Resources Information Center

    Massie, Keith R.

    2011-01-01

    The current study examined over 3000 visual images on the homepages of 234 national universities to determine how power relations are depicted. Using a hybrid methodology of grounded theory, critical discursive analysis, and facial prominence scoring, the work culminates in a theory: The (Im)Balanced Theory of College Identity Formation Online. The…

  11. Facial skin color measurement based on camera colorimetric characterization

    NASA Astrophysics Data System (ADS)

    Yang, Boquan; Zhou, Changhe; Wang, Shaoqing; Fan, Xin; Li, Chao

    2016-10-01

    The objective measurement of facial skin color and its variance is of great significance, as much information can be obtained from it. In this paper, we developed a new skin color measurement procedure consisting of the following parts: first, a new skin tone color checker based on Pantone skin tone colors was designed for camera colorimetric characterization; second, the chromaticity of the light source was estimated via a new scene illumination estimation method that draws on several previous algorithms; third, chromatic adaptation was used to convert the input facial image into an output facial image that appears to have been taken under a canonical light; finally, the validity and accuracy of our method were verified by comparing the results obtained by our procedure with those obtained by a spectrophotometer.
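The chromatic-adaptation step can be sketched as a von Kries-style transform. The Bradford cone-response matrix below is a common choice for this; the paper does not state which adaptation model it actually used:

```python
import numpy as np

# Bradford matrix mapping XYZ to a cone-like response space.
BRADFORD = np.array([[ 0.8951,  0.2664, -0.1614],
                     [-0.7502,  1.7135,  0.0367],
                     [ 0.0389, -0.0685,  1.0296]])

def adapt(pixels, src_white, dst_white):
    """Von Kries-style chromatic adaptation of XYZ pixels.

    Maps colors seen under the estimated scene illuminant (src_white)
    to how they would appear under a canonical illuminant (dst_white).
    """
    lms_src = BRADFORD @ np.asarray(src_white, float)
    lms_dst = BRADFORD @ np.asarray(dst_white, float)
    # Scale each cone channel by the illuminant ratio, then map back.
    M = np.linalg.inv(BRADFORD) @ np.diag(lms_dst / lms_src) @ BRADFORD
    return np.asarray(pixels, float) @ M.T
```

By construction, the source white point itself maps exactly onto the destination white point, which is the defining property of the adaptation.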

  12. Facial nerve hemangioma: a rare case involving the vertical segment.

    PubMed

    Ahmadi, Neda; Newkirk, Kenneth; Kim, H Jeffrey

    2013-02-01

    This case report and literature review describe a rare case of facial nerve hemangioma (FNH) involving the vertical facial nerve (FN) segment and discuss the clinical presentation, imaging, pathogenesis, and management of these rare lesions. A 53-year-old male presented with a 10-year history of right hemifacial twitching and progressive facial paresis (House-Brackmann grading score V/VI). The computed tomography and magnetic resonance imaging studies confirmed an expansile lesion along the vertical FN segment. Excision and histopathologic examination demonstrated FNH. FNHs involving the vertical FN segment are extremely rare. Despite being rare lesions, we believe that familiarity with the presentation and management of FNHs is imperative. Laryngoscope, 2012. Copyright © 2012 The American Laryngological, Rhinological, and Otological Society, Inc.

  13. Quantitative Magnetic Resonance Imaging Volumetry of Facial Muscles in Healthy Subjects and Patients with Facial Palsy

    PubMed Central

    Volk, Gerd F.; Karamyan, Inna; Klingner, Carsten M.; Reichenbach, Jürgen R.

    2014-01-01

    Background: Magnetic resonance imaging (MRI) has not yet been established systematically to detect structural muscular changes after facial nerve lesion. The purpose of this pilot study was to investigate quantitative assessment of MRI muscle volume data for facial muscles. Methods: Ten healthy subjects and 5 patients with facial palsy were recruited. Using manual or semiautomatic segmentation of 3T MRI, volume measurements were performed for the frontal, procerus, risorius, corrugator supercilii, orbicularis oculi, nasalis, zygomaticus major, zygomaticus minor, levator labii superioris, orbicularis oris, depressor anguli oris, depressor labii inferioris, and mentalis, as well as for the masseter and temporalis as masticatory muscles for control. Results: All muscles except the frontal (identification in 4/10 volunteers), procerus (4/10), risorius (6/10), and zygomaticus minor (8/10) were identified in all volunteers. Sex or age effects were not seen (all P > 0.05). There was no facial asymmetry with exception of the zygomaticus major (larger on the left side; P = 0.012). The exploratory examination of 5 patients revealed considerably smaller muscle volumes on the palsy side 2 months after facial injury. One patient with chronic palsy showed substantial muscle volume decrease, which also occurred in another patient with incomplete chronic palsy restricted to the involved facial area. Facial nerve reconstruction led to mixed results of decreased but also increased muscle volumes on the palsy side compared with the healthy side. Conclusions: First systematic quantitative MRI volume measures of 5 different clinical presentations of facial paralysis are provided. PMID:25289366

  14. Automatically Log Off Upon Disappearance of Facial Image

    DTIC Science & Technology

    2005-03-01

    log off a PC when the user’s face disappears for an adjustable time interval. Among the fundamental technologies of biometrics, facial recognition is... facial recognition products. In this report, a brief overview of face detection technologies is provided. The particular neural network-based face...ensure that the user logging onto the system is the same person. Among the fundamental technologies of biometrics, facial recognition is the only

  15. Deep neural network using color and synthesized three-dimensional shape for face recognition

    NASA Astrophysics Data System (ADS)

    Rhee, Seon-Min; Yoo, ByungIn; Han, Jae-Joon; Hwang, Wonjun

    2017-03-01

    We present an approach for face recognition using synthesized three-dimensional (3-D) shape information together with two-dimensional (2-D) color in a deep convolutional neural network (DCNN). As 3-D facial shape is hardly affected by the extrinsic 2-D texture changes caused by illumination, make-up, and occlusions, it can provide reliable complementary features in harmony with the 2-D color feature in face recognition. Unlike other approaches that use 3-D shape information with the help of an additional depth sensor, our approach generates a personalized 3-D face model by using only face landmarks in the 2-D input image. Using the personalized 3-D face model, we generate a frontalized 2-D color facial image as well as 3-D facial images (e.g., a depth image and a normal image). In our DCNN, we first feed the 2-D and 3-D facial images into independent convolutional layers, where the low-level kernels are learned according to their own characteristics. Then, we merge them and feed them into higher-level layers of a single deep neural network. Our proposed approach is evaluated on the Labeled Faces in the Wild dataset, and the results show that the verification error rate at a false acceptance rate of 1% is reduced by up to 32.1% compared with the baseline, in which only a 2-D color image is used.
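Generating a normal image from a depth image, one of the synthesized 3-D inputs mentioned above, can be sketched with finite differences. This is an illustrative fragment; the paper's model-based rendering from the personalized 3-D face model is more involved:

```python
import numpy as np

def normal_map(depth):
    """Per-pixel unit surface normals from a depth image.

    Treats the image as a surface z = depth[y, x] and takes finite
    differences; the normal of such a surface is (-dz/dx, -dz/dy, 1),
    normalized to unit length.
    """
    dz_dy, dz_dx = np.gradient(np.asarray(depth, float))
    n = np.stack([-dz_dx, -dz_dy, np.ones_like(dz_dx)], axis=-1)
    return n / np.linalg.norm(n, axis=-1, keepdims=True)
```

A flat depth map yields normals pointing straight at the camera; tilted facial regions yield correspondingly tilted normals, which is the shape cue the 3-D stream of the network consumes.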

  16. A Genome-Wide Association Study Identifies Five Loci Influencing Facial Morphology in Europeans

    PubMed Central

    Liu, Fan; van der Lijn, Fedde; Schurmann, Claudia; Zhu, Gu; Chakravarty, M. Mallar; Hysi, Pirro G.; Wollstein, Andreas; Lao, Oscar; de Bruijne, Marleen; Ikram, M. Arfan; van der Lugt, Aad; Rivadeneira, Fernando; Uitterlinden, André G.; Hofman, Albert; Niessen, Wiro J.; Homuth, Georg; de Zubicaray, Greig; McMahon, Katie L.; Thompson, Paul M.; Daboul, Amro; Puls, Ralf; Hegenscheid, Katrin; Bevan, Liisa; Pausova, Zdenka; Medland, Sarah E.; Montgomery, Grant W.; Wright, Margaret J.; Wicking, Carol; Boehringer, Stefan; Spector, Timothy D.; Paus, Tomáš; Martin, Nicholas G.; Biffar, Reiner; Kayser, Manfred

    2012-01-01

    Inter-individual variation in facial shape is one of the most noticeable phenotypes in humans, and it is clearly under genetic regulation; however, almost nothing is known about the genetic basis of normal human facial morphology. We therefore conducted a genome-wide association study for facial shape phenotypes in multiple discovery and replication cohorts, considering almost ten thousand individuals of European descent from several countries. Phenotyping of facial shape features was based on landmark data obtained from three-dimensional head magnetic resonance images (MRIs) and two-dimensional portrait images. We identified five independent genetic loci associated with different facial phenotypes, suggesting the involvement of five candidate genes—PRDM16, PAX3, TP63, C5orf50, and COL17A1—in the determination of the human face. Three of them have been implicated previously in vertebrate craniofacial development and disease, and the remaining two genes potentially represent novel players in the molecular networks governing facial development. Our finding at PAX3 influencing the position of the nasion replicates a recent GWAS of facial features. In addition to the reported GWA findings, we established links between common DNA variants previously associated with NSCL/P at 2p21, 8q24, 13q31, and 17q22 and normal facial-shape variations based on a candidate gene approach. Overall our study implies that DNA variants in genes essential for craniofacial development contribute with relatively small effect size to the spectrum of normal variation in human facial morphology. This observation has important consequences for future studies aiming to identify more genes involved in the human facial morphology, as well as for potential applications of DNA prediction of facial shape such as in future forensic applications. PMID:23028347

  17. A unified probabilistic framework for spontaneous facial action modeling and understanding.

    PubMed

    Tong, Yan; Chen, Jixu; Ji, Qiang

    2010-02-01

    Facial expression is a natural and powerful means of human communication. Recognizing spontaneous facial actions, however, is very challenging due to subtle facial deformation, frequent head movements, and ambiguous and uncertain facial motion measurements. Because of these challenges, current research in facial expression recognition is limited to posed expressions and often in frontal view. A spontaneous facial expression is characterized by rigid head movements and nonrigid facial muscular movements. More importantly, it is the coherent and consistent spatiotemporal interactions among rigid and nonrigid facial motions that produce a meaningful facial expression. Recognizing this fact, we introduce a unified probabilistic facial action model based on the Dynamic Bayesian network (DBN) to simultaneously and coherently represent rigid and nonrigid facial motions, their spatiotemporal dependencies, and their image measurements. Advanced machine learning methods are introduced to learn the model based on both training data and subjective prior knowledge. Given the model and the measurements of facial motions, facial action recognition is accomplished through probabilistic inference by systematically integrating visual measurements with the facial action model. Experiments show that compared to the state-of-the-art techniques, the proposed system yields significant improvements in recognizing both rigid and nonrigid facial motions, especially for spontaneous facial expressions.
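    The full model couples rigid and nonrigid motions in one network, which is beyond a short snippet; the sketch below shows only the core inference pattern a DBN shares with all temporal filters: predict the hidden facial-action state from a transition model, then correct it with an uncertain image measurement. All probabilities here are made-up illustrative numbers, not the paper's learned parameters.

```python
import numpy as np

# Minimal filtering recursion for a two-state hidden variable (e.g. an
# action unit ON/OFF), illustrating how a dynamic Bayesian network fuses
# a temporal transition model with uncertain image measurements.
T = np.array([[0.9, 0.1],   # P(state_t | state_{t-1}), rows = previous state
              [0.2, 0.8]])
M = np.array([[0.7, 0.3],   # P(measurement | state), rows = state
              [0.2, 0.8]])

def filter_step(belief, measurement):
    """One step of probabilistic inference: predict, then correct."""
    predicted = belief @ T                     # temporal prediction
    posterior = predicted * M[:, measurement]  # measurement update
    return posterior / posterior.sum()         # normalize

belief = np.array([0.5, 0.5])                  # uninformative prior
for z in [1, 1, 0, 1]:                         # a toy measurement sequence
    belief = filter_step(belief, z)
```

    After a measurement sequence dominated by evidence for the second state, the posterior favors that state while remaining a proper probability distribution.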

  18. An Automatic Diagnosis Method of Facial Acne Vulgaris Based on Convolutional Neural Network.

    PubMed

    Shen, Xiaolei; Zhang, Jiachi; Yan, Chenjun; Zhou, Hong

    2018-04-11

    In this paper, we present a new automatic diagnosis method for facial acne vulgaris based on convolutional neural networks (CNNs), which overcomes a shortcoming of previous methods: their inability to classify enough types of acne vulgaris. The core of our method is to extract image features with CNNs and to achieve classification with a separate classifier. A binary skin/non-skin classifier is used to detect the skin area, and a seven-class classifier is used to distinguish facial acne vulgaris from healthy skin. In the experiments, we compare the effectiveness of our own CNN with that of the VGG16 neural network pre-trained on the ImageNet data set. We use a ROC curve to evaluate the performance of the binary classifier and a normalized confusion matrix to evaluate the performance of the seven-class classifier. The results of our experiments show that the pre-trained VGG16 neural network is effective at extracting features from facial acne vulgaris images, and that these features are very useful for the downstream classifiers. Finally, we apply both classifiers, built on the pre-trained VGG16 network, to assist doctors in diagnosing facial acne vulgaris.
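    The two-stage design (CNN feature extractor, then a separate classifier) can be sketched with stand-ins: random vectors play the role of CNN feature outputs, and a plain softmax layer plays the role of the seven-class classifier. The dimensions, class structure, and training loop below are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

# Stage 1 is stubbed: in the real system a CNN maps each image to a feature
# vector; here we fabricate clustered "features" for 7 classes (six acne
# types plus healthy skin, per the paper's seven-classifier).
rng = np.random.default_rng(0)
n_classes, n_features = 7, 64
centers = rng.normal(size=(n_classes, n_features))           # fake class structure
X = np.vstack([c + 0.1 * rng.normal(size=(20, n_features)) for c in centers])
y = np.repeat(np.arange(n_classes), 20)

# Stage 2: a softmax classifier trained by plain gradient descent.
W = np.zeros((n_features, n_classes))
for _ in range(200):
    logits = X @ W
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    grad = X.T @ (p - np.eye(n_classes)[y]) / len(X)
    W -= 0.5 * grad

accuracy = (np.argmax(X @ W, axis=1) == y).mean()
```

    With well-separated feature clusters the linear classifier on top of fixed features reaches near-perfect training accuracy, which is the property transfer learning from VGG16 relies on.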

  19. A 3D analysis of Caucasian and African American facial morphologies in a US population.

    PubMed

    Talbert, Leslie; Kau, Chung How; Christou, Terpsithea; Vlachos, Christos; Souccar, Nada

    2014-03-01

    This study aimed to compare the facial morphologies of an adult African-American population to an adult Caucasian-American population using three-dimensional (3D) surface imaging. The images were captured using a stereophotogrammetric system (3dMDface(TM) system). Subjects were aged 19-30 years, with normal body mass index and no gross craniofacial anomalies. Images were aligned and combined using RF6 Plus Pack 2 software to produce a male and a female facial average for each population. The averages were superimposed and the differences were assessed. The most distinct differences were in the forehead, alar base and periocular regions. The average difference between African-American and Caucasian-American females was 1.18±0.98 mm. The African-American females had a broader face, a wider alar base and more protrusive lips. The Caucasian-American females had a more prominent chin, malar region and lower forehead. The average difference between African-American and Caucasian-American males was 1.11±1.04 mm. The African-American males had a more prominent upper forehead and periocular region, a wider alar base and more protrusive lips. No notable difference occurred between the chin points of the two male populations. Average faces were created from 3D photographs, and the facial morphological differences between populations and genders were compared. African-American males had a more prominent upper forehead and periocular region, a wider alar base and more protrusive lips. Caucasian-American males showed a more prominent nasal tip and malar area. African-American females had a broader face, a wider alar base and more protrusive lips. Caucasian-American females showed a more prominent chin point, malar region and lower forehead.

  20. Face-selective regions differ in their ability to classify facial expressions

    PubMed Central

    Zhang, Hui; Japee, Shruti; Nolan, Rachel; Chu, Carlton; Liu, Ning; Ungerleider, Leslie G

    2016-01-01

    Recognition of facial expressions is crucial for effective social interactions. Yet, the extent to which the various face-selective regions in the human brain classify different facial expressions remains unclear. We used functional magnetic resonance imaging (fMRI) and support vector machine pattern classification analysis to determine how well face-selective brain regions are able to decode different categories of facial expression. Subjects participated in a slow event-related fMRI experiment in which they were shown 32 face pictures, portraying four different expressions: neutral, fearful, angry, and happy and belonging to eight different identities. Our results showed that only the amygdala and the posterior superior temporal sulcus (STS) were able to accurately discriminate between these expressions, albeit in different ways: The amygdala discriminated fearful faces from non-fearful faces, whereas STS discriminated neutral from emotional (fearful, angry and happy) faces. In contrast to these findings on the classification of emotional expression, only the fusiform face area (FFA) and anterior inferior temporal cortex (aIT) could discriminate among the various facial identities. Further, the amygdala and STS were better than FFA and aIT at classifying expression, while FFA and aIT were better than the amygdala and STS at classifying identity. Taken together, our findings indicate that the decoding of facial emotion and facial identity occurs in different neural substrates: the amygdala and STS for the former and FFA and aIT for the latter. PMID:26826513
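    The decoding step can be sketched as follows: a linear classifier is trained on multi-voxel response patterns, and its accuracy indicates whether a region separates, say, fearful from non-fearful trials. The voxel patterns below are synthetic stand-ins, and the hand-rolled hinge-loss training loop is a stand-in for the support vector machine used in the study.

```python
import numpy as np

# Synthetic "multi-voxel patterns": fearful trials have a small mean shift
# relative to non-fearful trials, mimicking a decodable region.
rng = np.random.default_rng(1)
n_voxels = 50
fearful = rng.normal(0.5, 1.0, size=(40, n_voxels))
neutral = rng.normal(-0.5, 1.0, size=(40, n_voxels))
X = np.vstack([fearful, neutral])
y = np.r_[np.ones(40), -np.ones(40)]

# Linear classifier trained by stochastic subgradient descent on hinge loss.
w, b = np.zeros(n_voxels), 0.0
for epoch in range(100):
    for i in rng.permutation(len(X)):
        if y[i] * (X[i] @ w + b) < 1:          # margin violated
            w += 0.01 * (y[i] * X[i] - 0.001 * w)
            b += 0.01 * y[i]

accuracy = (np.sign(X @ w + b) == y).mean()
```

    Above-chance accuracy on held-out patterns (omitted here for brevity; the study used proper cross-validation) is what licenses the claim that a region "discriminates" an expression.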

  1. Face-selective regions differ in their ability to classify facial expressions.

    PubMed

    Zhang, Hui; Japee, Shruti; Nolan, Rachel; Chu, Carlton; Liu, Ning; Ungerleider, Leslie G

    2016-04-15

    Recognition of facial expressions is crucial for effective social interactions. Yet, the extent to which the various face-selective regions in the human brain classify different facial expressions remains unclear. We used functional magnetic resonance imaging (fMRI) and support vector machine pattern classification analysis to determine how well face-selective brain regions are able to decode different categories of facial expression. Subjects participated in a slow event-related fMRI experiment in which they were shown 32 face pictures, portraying four different expressions: neutral, fearful, angry, and happy and belonging to eight different identities. Our results showed that only the amygdala and the posterior superior temporal sulcus (STS) were able to accurately discriminate between these expressions, albeit in different ways: the amygdala discriminated fearful faces from non-fearful faces, whereas STS discriminated neutral from emotional (fearful, angry and happy) faces. In contrast to these findings on the classification of emotional expression, only the fusiform face area (FFA) and anterior inferior temporal cortex (aIT) could discriminate among the various facial identities. Further, the amygdala and STS were better than FFA and aIT at classifying expression, while FFA and aIT were better than the amygdala and STS at classifying identity. Taken together, our findings indicate that the decoding of facial emotion and facial identity occurs in different neural substrates: the amygdala and STS for the former and FFA and aIT for the latter. Published by Elsevier Inc.

  2. Comparing Facial 3D Analysis With DNA Testing to Determine Zygosities of Twins.

    PubMed

    Vuollo, Ville; Sidlauskas, Mantas; Sidlauskas, Antanas; Harila, Virpi; Salomskiene, Loreta; Zhurov, Alexei; Holmström, Lasse; Pirttiniemi, Pertti; Heikkinen, Tuomo

    2015-06-01

    The aim of this study was to compare facial 3D analysis to DNA testing in twin zygosity determinations. Facial 3D images of 106 pairs of young adult Lithuanian twins were taken with a stereophotogrammetric device (3dMD, Atlanta, Georgia) and zygosity was determined according to similarity of facial form. Statistical pattern recognition methodology was used for classification. The results showed that in 75% to 90% of the cases, zygosity determinations agreed with the DNA-based results. There were 81 different classification scenarios, including 3 groups, 3 features, 3 different scaling methods, and 3 threshold levels. It appeared that coincidence with 0.5 mm tolerance is the most suitable feature for classification. Also, leaving out scaling improves results in most cases. Scaling was expected to equalize the magnitude of differences and therefore lead to better recognition performance. Still, better classification features and a more effective scaling method or classification in different facial areas could further improve the results. In most cases, zygosity recognition was more accurate for male pairs than for female pairs. Erroneously classified twin pairs appear to be obvious outliers in the sample. In particular, faces of young dizygotic (DZ) twins may be so similar that it is very hard to define a feature that would help classify the pair as DZ. Correspondingly, monozygotic (MZ) twins may have faces with quite different shapes. Such anomalous twin pairs are interesting exceptions, but they form a considerable portion in both zygosity groups.
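    The best-performing feature, coincidence with a 0.5 mm tolerance, lends itself to a compact sketch: after the two facial surfaces are aligned into correspondence, count the proportion of corresponding points that lie within the tolerance and call the pair monozygotic when that proportion exceeds a threshold. The point clouds and the 0.6 threshold below are illustrative, not the study's data.

```python
import numpy as np

def coincidence(face_a, face_b, tol=0.5):
    """Fraction of corresponding surface points within `tol` mm."""
    dist = np.linalg.norm(face_a - face_b, axis=1)
    return (dist <= tol).mean()

def classify_pair(face_a, face_b, threshold=0.6):
    return "MZ" if coincidence(face_a, face_b) >= threshold else "DZ"

rng = np.random.default_rng(2)
base = rng.uniform(0, 100, size=(1000, 3))            # a fake facial point cloud (mm)
mz_twin = base + rng.normal(0, 0.2, size=base.shape)  # small shape differences
dz_twin = base + rng.normal(0, 1.5, size=base.shape)  # larger differences
```

    On these fabricated surfaces the highly similar pair scores well above the threshold and the dissimilar pair well below it, mirroring how the feature separates zygosity groups.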

  3. Serial dependence in the perception of attractiveness.

    PubMed

    Xia, Ye; Leib, Allison Yamanashi; Whitney, David

    2016-12-01

    The perception of attractiveness is essential for choices of food, object, and mate preference. Like perception of other visual features, perception of attractiveness is stable despite constant changes of image properties due to factors like occlusion, visual noise, and eye movements. Recent results demonstrate that perception of low-level stimulus features and even more complex attributes like human identity are biased towards recent percepts. This effect is often called serial dependence. Some recent studies have suggested that serial dependence also exists for perceived facial attractiveness, though there is also concern that the reported effects are due to response bias. Here we used an attractiveness-rating task to test the existence of serial dependence in perceived facial attractiveness. Our results demonstrate that perceived face attractiveness was pulled by the attractiveness level of facial images encountered up to 6 s prior. This effect was not due to response bias and did not rely on the previous motor response. This perceptual pull increased as the difference in attractiveness between previous and current stimuli increased. Our results reconcile previously conflicting findings and extend previous work, demonstrating that sequential dependence in perception operates across different levels of visual analysis, even at the highest levels of perceptual interpretation.
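    The perceptual pull described above can be quantified in a few lines: simulate ratings that are partly attracted toward the previous stimulus, then recover the pull as the slope of the rating error against the previous-minus-current attractiveness difference. The pull size (0.2) and noise level are arbitrary assumptions for illustration, not the study's estimates.

```python
import numpy as np

rng = np.random.default_rng(3)
true_attr = rng.uniform(1, 7, size=500)               # stimulus attractiveness levels
rating = true_attr.copy()
rating[1:] += 0.2 * (true_attr[:-1] - true_attr[1:])  # pull toward previous stimulus
rating += rng.normal(0, 0.3, size=500)                # response noise

diff = true_attr[:-1] - true_attr[1:]                 # previous minus current
error = rating[1:] - true_attr[1:]                    # signed rating error
pull = np.polyfit(diff, error, 1)[0]                  # slope recovers the injected pull
```

    A positive slope means current ratings are biased toward the previous stimulus, which is the operational signature of serial dependence.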

  4. Differential roles of low and high spatial frequency content in abnormal facial emotion perception in schizophrenia.

    PubMed

    McBain, Ryan; Norton, Daniel; Chen, Yue

    2010-09-01

    While schizophrenia patients are impaired at facial emotion perception, the role of basic visual processing in this deficit remains relatively unclear. We examined emotion perception when spatial frequency content of facial images was manipulated via high-pass and low-pass filtering. Unlike controls (n=29), patients (n=30) perceived images with low spatial frequencies as more fearful than those without this information, across emotional salience levels. Patients also perceived images with high spatial frequencies as happier. In controls, this effect was found only at low emotional salience. These results indicate that basic visual processing has an amplified modulatory effect on emotion perception in schizophrenia. (c) 2010 Elsevier B.V. All rights reserved.
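    The stimulus manipulation, splitting a face image into low and high spatial frequency content, can be sketched with a Gaussian filter in the Fourier domain; the cutoff below is an arbitrary choice, not the study's calibrated filter.

```python
import numpy as np

def split_spatial_frequencies(img, sigma=10.0):
    """Return (low, high) spatial-frequency components of a grayscale image."""
    h, w = img.shape
    fy = np.fft.fftfreq(h)[:, None] * h                  # cycles per image, vertical
    fx = np.fft.fftfreq(w)[None, :] * w                  # cycles per image, horizontal
    lowpass = np.exp(-(fx**2 + fy**2) / (2 * sigma**2))  # Gaussian gain in frequency
    spectrum = np.fft.fft2(img)
    low = np.real(np.fft.ifft2(spectrum * lowpass))
    high = img - low                                     # complementary content
    return low, high

img = np.random.default_rng(4).normal(size=(64, 64))     # stand-in for a face image
low, high = split_spatial_frequencies(img)
```

    By construction the two components sum back to the original, so a "low-pass face" and a "high-pass face" partition the image's frequency content between them.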

  5. Three-dimensional printing for restoration of the donor face: A new digital technique tested and used in the first facial allotransplantation patient in Finland.

    PubMed

    Mäkitie, A A; Salmi, M; Lindford, A; Tuomi, J; Lassus, P

    2016-12-01

    Prosthetic mask restoration of the donor face is essential in current facial transplant protocols. The aim was to develop a new three-dimensional (3D) printing (additive manufacturing; AM) process for the production of a donor face mask that fulfilled the requirements for facial restoration after facial harvest. A digital image of a single test person's face was obtained in a standardized setting and subjected to three different image processing techniques. These data were used for the 3D modeling and printing of a donor face mask. The process was also tested in a cadaver setting and ultimately used clinically in a donor patient after facial allograft harvest. All three of the developed and tested techniques enabled the timely 3D printing of a custom-made face mask that is almost an exact replica of the donor patient's face. This technique was successfully used in a facial allotransplantation donor patient. Copyright © 2016 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.

  6. Retrospective case series of the imaging findings of facial nerve hemangioma.

    PubMed

    Yue, Yunlong; Jin, Yanfang; Yang, Bentao; Yuan, Hui; Li, Jiandong; Wang, Zhenchang

    2015-09-01

    The aim was to compare high-resolution computed tomography (HRCT) and thin-section magnetic resonance imaging (MRI) findings of facial nerve hemangioma. The HRCT and MRI characteristics of 17 facial nerve hemangiomas diagnosed between 2006 and 2013 were retrospectively analyzed. All patients included in the study suffered from a space-occupying lesion of the soft tissues at the geniculate ganglion fossa. The affected nerve was compared for size and shape with the contralateral unaffected nerve. HRCT showed irregular expansion and broadening of the facial nerve canal, damage of the bone wall and destruction of adjacent bone, with "point"-like or "needle"-like calcifications in 14 cases. The average CT value was 320.9 ± 141.8 Hu. Fourteen patients had a widened labyrinthine segment; 6/17 had a tympanic segment widening; 2/17 had a greater superficial petrosal nerve canal involvement, and 2/17 had an affected internal auditory canal (IAC) segment. On MRI, all lesions were significantly enhanced due to high blood supply. Using 2D FSE T2WI, the lesion detection rate was 82.4 % (14/17). 3D fast imaging employing steady-state acquisition (3D FIESTA) revealed the lesions in all patients. HRCT showed that the average number of involved segments in the facial nerve canal was 2.41, while MRI revealed an average of 2.70 segments (P < 0.05). HRCT and MR findings of facial nerve hemangioma were typical, revealing irregular masses growing along the facial nerve canal, with calcifications and rich blood supply. Thin-section enhanced MRI was more accurate in lesion detection and assessment compared with HRCT.

  7. Correlation between facial morphology and gene polymorphisms in the Uygur youth population.

    PubMed

    He, Huiyu; Mi, Xue; Zhang, Jiayu; Zhang, Qin; Yao, Yuan; Zhang, Xu; Xiao, Feng; Zhao, Chunping; Zheng, Shutao

    2017-04-25

    Human facial morphology varies considerably among individuals and can be influenced by gene polymorphisms. We explored the effects of single nucleotide polymorphisms (SNPs) on facial features in the Uygur youth population of the Kashi area in Xinjiang, China. Saliva samples were collected from 578 volunteers, and 10 SNPs previously associated with variations in facial physiognomy were genotyped. In parallel, 3D images of the subjects' faces were obtained using grating facial scanning technology. After delimitation of 15 salient landmarks, the correlation between SNPs and the distances between facial landmark pairs was assessed. Analysis of variance revealed that ENPP1 rs7754561 polymorphism was significantly associated with RAla-RLipCn and RLipCn-Sbn linear distances (p = 0.044 and p = 0.012, respectively) as well as RLipCn-Stm curve distance (p = 0.042). The GHR rs6180 polymorphism correlated with RLipCn-Stm linear distance (p = 0.04), while the GHR rs6184 polymorphism correlated with RLipCn-ULipP curve distance (p = 0.047). The FGFR1 rs4647905 polymorphism was associated with LLipCn-Nsn linear distance (p = 0.042). These results reveal that ENPP1 and FGFR1 influence lower anterior face height, the distance from the upper lip to the nasal floor, and lip shape. FGFR1 also influences the lower anterior face height, while GHR is associated with the length and width of the lip.
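    The association tests reported above are one-way analyses of variance of a landmark distance across the three genotype groups of a SNP. A minimal version with fabricated distance data (the group means, sample sizes, and units below are illustrative, not the study's measurements):

```python
import numpy as np

def one_way_anova_F(groups):
    """F statistic for a one-way ANOVA over a list of sample arrays."""
    all_vals = np.concatenate(groups)
    grand = all_vals.mean()
    ss_between = sum(len(g) * (np.mean(g) - grand) ** 2 for g in groups)
    ss_within = sum(((np.asarray(g) - np.mean(g)) ** 2).sum() for g in groups)
    df_between = len(groups) - 1
    df_within = len(all_vals) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

rng = np.random.default_rng(6)
gg = rng.normal(52.0, 1.0, 40)   # landmark distance (mm) per genotype, made up
ga = rng.normal(52.5, 1.0, 40)
aa = rng.normal(53.0, 1.0, 40)
F = one_way_anova_F([gg, ga, aa])
```

    A large F (compared against the F distribution with the stated degrees of freedom) yields the small p-values the study reports for SNPs such as ENPP1 rs7754561.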

  8. Cerebral, facial, and orbital involvement in Erdheim-Chester disease: CT and MR imaging findings.

    PubMed

    Drier, Aurélie; Haroche, Julien; Savatovsky, Julien; Godenèche, Gaelle; Dormont, Didier; Chiras, Jacques; Amoura, Zahir; Bonneville, Fabrice

    2010-05-01

    To retrospectively review the brain magnetic resonance (MR) imaging and computed tomographic (CT) findings in patients with Erdheim-Chester disease (ECD). The ethics committee required neither institutional review board approval nor informed patient consent for retrospective analyses of the patients' medical records and imaging data. The patients' medical files were retrospectively reviewed in accordance with human subject research protocols. Three neuroradiologists in consensus analyzed the signal intensity, location, size, number, and gadolinium uptake of lesions detected on brain MR images obtained in 33 patients with biopsy-proved ECD. Thirty patients had intracranial, facial bone, and/or orbital involvement, and three had normal neurologic imaging findings. The hypothalamic-pituitary axis was involved in 16 (53%) of the 30 patients, with six (20%) cases of micronodular or nodular masses of the infundibular stalk. Meningeal lesions were observed in seven (23%) patients. Three (10%) patients had bilateral symmetric T2 high signal intensity in the dentate nucleus areas, and five (17%) had multiple intraaxial enhancing masses. Striking intracranial periarterial infiltration was observed in three (10%) patients. Another patient (3%) had a lesion in the lumen of the superior sagittal sinus. Nine (30%) patients had orbital involvement. Twenty-four (80%) patients had osteosclerosis of the facial and/or skull bones. At least two anatomic sites were involved in two-thirds (n = 20) of the patients. Osteosclerosis of the facial bones associated with orbital masses and either meningeal or infundibular stalk masses was seen in eight (27%) patients. Lesions of the brain, meninges, facial bones, and orbits are frequently observed and should be systematically sought on the brain MR and CT images obtained in patients with ECD, even if these patients are asymptomatic. Careful attention should be directed to the periarterial environment.

  9. Are Portable Stereophotogrammetric Devices Reliable in Facial Imaging? A Validation Study of VECTRA H1 Device.

    PubMed

    Gibelli, Daniele; Pucciarelli, Valentina; Cappella, Annalisa; Dolci, Claudia; Sforza, Chiarella

    2018-01-31

    Modern 3-dimensional (3D) image acquisition systems represent a crucial technologic development in facial anatomy because of their accuracy and precision. The recently introduced portable devices can improve facial databases by increasing the number of applications. In the present study, the VECTRA H1 portable stereophotogrammetric device was validated to verify its applicability to 3D facial analysis. Fifty volunteers underwent 4 facial scans using portable VECTRA H1 and static VECTRA M3 devices (2 for each instrument). Repeatability of linear, angular, surface area, and volume measurements was verified within the device and between devices using the Bland-Altman test and the calculation of absolute and relative technical errors of measurement (TEM and rTEM, respectively). In addition, the 2 scans obtained by the same device and the 2 scans obtained by different devices were registered and superimposed to calculate the root mean square (RMS; point-to-point) distance between the 2 surfaces. Most linear, angular, and surface area measurements had high repeatability in M3 versus M3, H1 versus H1, and M3 versus H1 comparisons (range, 82.2 to 98.7%; TEM range, 0.3 to 2.0 mm, 0.4° to 1.8°; rTEM range, 0.2 to 3.1%). In contrast, volumes and RMS distances showed evident differences in M3 versus M3 and H1 versus H1 comparisons and reached the maximum when scans from the 2 different devices were compared. The portable VECTRA H1 device proved reliable for assessing linear measurements, angles, and surface areas; conversely, the influence of involuntary facial movements on volumes and RMS distances was more important compared with the static device. Copyright © 2018 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.
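    The repeatability statistics used here have standard closed forms: the absolute technical error of measurement for paired repeats is TEM = sqrt(sum(d^2) / 2n), and rTEM expresses it as a percentage of the grand mean of all measurements. A sketch with made-up sample values:

```python
import numpy as np

def tem(first, second):
    """Absolute technical error of measurement for paired repeated measures."""
    d = np.asarray(first) - np.asarray(second)
    return np.sqrt(np.sum(d**2) / (2 * len(d)))

def rtem(first, second):
    """Relative TEM, as a percentage of the grand mean."""
    grand_mean = np.mean(np.r_[first, second])
    return 100.0 * tem(first, second) / grand_mean

scan1 = [32.1, 45.0, 51.2, 29.8]   # e.g. a linear distance (mm), first scan
scan2 = [32.4, 44.6, 51.5, 30.1]   # the same distance remeasured
```

    Values like the study's TEM of 0.3 to 2.0 mm and rTEM of 0.2 to 3.1% come directly from these formulas applied to repeated scans.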

  10. Automatic forensic face recognition from digital images.

    PubMed

    Peacock, C; Goode, A; Brett, A

    2004-01-01

    Digital image evidence is now widely available from criminal investigations and surveillance operations, often captured by security and surveillance CCTV. This has resulted in a growing demand from law enforcement agencies for automatic person-recognition based on image data. In forensic science, a fundamental requirement for such automatic face recognition is to evaluate the weight that can justifiably be attached to this recognition evidence in a scientific framework. This paper describes a pilot study carried out by the Forensic Science Service (UK) which explores the use of digital facial images in forensic investigation. For the purpose of the experiment a specific software package was chosen (Image Metrics Optasia). The paper does not describe the techniques used by the software to reach its decision of probabilistic matches to facial images, but accepts the output of the software as though it were a 'black box'. In this way, the paper lays a foundation for how face recognition systems can be compared in a forensic framework. The aim of the paper is to explore how reliably and under what conditions digital facial images can be presented in evidence.

  11. Enhanced facial texture illumination normalization for face recognition.

    PubMed

    Luo, Yong; Guan, Ye-Peng

    2015-08-01

    An uncontrolled lighting condition is one of the most critical challenges for practical face recognition applications. An enhanced facial texture illumination normalization method is put forward to resolve this challenge. An adaptive relighting algorithm is developed to improve the brightness uniformity of face images. Facial texture is extracted by using an illumination estimation difference algorithm. An anisotropic histogram-stretching algorithm is proposed to minimize the intraclass distance of facial skin and maximize the dynamic range of facial texture distribution. Compared with the existing methods, the proposed method can more effectively eliminate the redundant information of facial skin and illumination. Extensive experiments show that the proposed method has superior performance in normalizing illumination variation and enhancing facial texture features for illumination-insensitive face recognition.
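    The paper's specific anisotropic histogram-stretching algorithm is not reproduced here; as a generic illustration of the underlying idea, expanding the dynamic range of a facial-texture image, below is plain percentile-based contrast stretching (the 2%/98% clip points are a common default, not the paper's method):

```python
import numpy as np

def stretch(img, lo_pct=2, hi_pct=98):
    """Map the [lo_pct, hi_pct] percentile range of `img` onto [0, 1]."""
    lo, hi = np.percentile(img, [lo_pct, hi_pct])
    out = (img - lo) / (hi - lo)
    return np.clip(out, 0.0, 1.0)

# A low-contrast stand-in for an extracted facial-texture image.
texture = np.random.default_rng(5).uniform(0.4, 0.6, size=(32, 32))
stretched = stretch(texture)
```

    The output occupies the full [0, 1] range with visibly higher contrast, which is the sense in which stretching "maximizes the dynamic range of facial texture distribution".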

  12. Brain anomalies in velo-cardio-facial syndrome

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mitnick, R.J.; Bello, J.A.; Shprintzen, R.J.

    Magnetic resonance imaging of the brain in 11 consecutively referred patients with velo-cardio-facial syndrome (VCF) showed anomalies in nine cases, including small vermis, cysts adjacent to the frontal horns, and small posterior fossa. Focal signal hyperintensities in the white matter on long-TR images were also noted. The nine patients showed a variety of behavioral abnormalities including mild developmental delay, learning disabilities, and characteristic personality traits typical of this common multiple anomaly syndrome, which has been related to a microdeletion at 22q11. Analysis of the behavioral findings showed no specific pattern related to the brain anomalies, and the patients with VCF who did not have detectable brain lesions also had behavioral abnormalities consistent with VCF. The significance of the lesions is not yet known, but the high prevalence of anomalies in this sample suggests that structural brain abnormalities are probably common in VCF. 25 refs.

  13. Fast 3D NIR systems for facial measurement and lip-reading

    NASA Astrophysics Data System (ADS)

    Brahm, Anika; Ramm, Roland; Heist, Stefan; Rulff, Christian; Kühmstedt, Peter; Notni, Gunther

    2017-05-01

    Structured-light projection is a well-established optical method for the non-destructive contactless three-dimensional (3D) measurement of object surfaces. In particular, there is a great demand for accurate and fast 3D scans of human faces or facial regions of interest in medicine, safety, face modeling, games, virtual life, or entertainment. New developments of facial expression detection and machine lip-reading can be used for communication tasks, future machine control, or human-machine interactions. In such cases, 3D information may offer more detailed information than 2D images which can help to increase the power of current facial analysis algorithms. In this contribution, we present new 3D sensor technologies based on three different methods of near-infrared projection technologies in combination with a stereo vision setup of two cameras. We explain the optical principles of an NIR GOBO projector, an array projector and a modified multi-aperture projection method and compare their performance parameters to each other. Further, we show some experimental measurement results of applications where we realized fast, accurate, and irritation-free measurements of human faces.
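    In a two-camera stereo setup like those described, the NIR projector supplies texture for matching; depth itself follows from triangulation as z = f·b/d, with focal length f (in pixels), baseline b, and disparity d. A sketch with illustrative numbers, not the actual parameters of these sensors:

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_mm):
    """Rectified-stereo triangulation: depth z = f * b / d."""
    disparity_px = np.asarray(disparity_px, dtype=float)
    return focal_px * baseline_mm / disparity_px

# Matched-point disparities (pixels) for three hypothetical facial points.
disparities = np.array([40.0, 50.0, 80.0])
depths = depth_from_disparity(disparities, focal_px=1000.0, baseline_mm=100.0)
```

    Larger disparities correspond to nearer surface points, which is why dense, reliable matching (the projector's job) directly determines the accuracy of the 3D face scan.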

  14. [Establishment of the database of the 3D facial models for the plastic surgery based on network].

    PubMed

    Liu, Zhe; Zhang, Hai-Lin; Zhang, Zheng-Guo; Qiao, Qun

    2008-07-01

    To collect three-dimensional (3D) facial data from 30 facial deformity patients with a 3D scanner and establish a professional database accessible over the Internet, in support of clinical intervention. The primitive point data of the face topography were collected by the 3D scanner, and the 3D point cloud was then edited with reverse-engineering software to reconstruct a 3D model of the face. The database system was divided into three parts: basic information, disease information, and surgery information. The web system was programmed in Java. The linkages between the tables of the database are reliable, and query operations and data mining are convenient. Users can visit the database via the Internet and use the image analysis system to observe the 3D facial models interactively. In this paper we present a database and web system adapted to plastic surgery of the human face, which can be used both in the clinic and in basic research.

  15. Laser Doppler imaging of cutaneous blood flow through transparent face masks: a necessary preamble to computer-controlled rapid prototyping fabrication with submillimeter precision.

    PubMed

    Allely, Rebekah R; Van-Buendia, Lan B; Jeng, James C; White, Patricia; Wu, Jingshu; Niszczak, Jonathan; Jordan, Marion H

    2008-01-01

    A paradigm shift in management of postburn facial scarring is lurking "just beneath the waves" with the widespread availability of two recent technologies: precise three-dimensional scanning/digitizing of complex surfaces and computer-controlled rapid prototyping three-dimensional "printers". Laser Doppler imaging may be the sensible method to track the scar hyperemia that should form the basis of assessing progress and directing incremental changes in the digitized topographical face mask "prescription". The purpose of this study was to establish the feasibility of detecting perfusion through transparent face masks using the laser Doppler imaging scanner. Laser Doppler images of perfusion were obtained at multiple facial regions on five uninjured staff members. Images were obtained without a mask, followed by images with a loose-fitting mask with and without a silicone liner, and then with a tight-fitting mask with and without a silicone liner. Right and left oblique images, in addition to the frontal images, were used to overcome unobtainable measurements at the extremes of face mask curvature. General linear model, mixed model, and t tests were used for data analysis. Three hundred seventy-five measurements were used for analysis, with a mean perfusion unit of 299 and pixel validity of 97%. The effect of face mask pressure with and without the silicone liner was readily quantified, with significant changes in mean cutaneous blood flow (P < .05). High valid pixel rate laser Doppler imager flow data can be obtained through transparent face masks. Perfusion decreases with the application of pressure and with silicone. Every participant measured differently in perfusion units; however, consistent perfusion patterns in the face were observed.

  16. A practical review of the muscles of facial mimicry with special emphasis on the superficial musculoaponeurotic system.

    PubMed

    Hutto, Justin R; Vattoth, Surjith

    2015-01-01

    In this article, we elaborate a practical approach to superficial facial anatomy enabling easy identification of the facial mimic muscles by classifying them according to their shared common insertion sites. The facial mimic muscles are often difficult to identify on imaging. By tracing them from their common group insertion sites back to their individual origins as well as understanding key anatomic relationships, radiologists can more accurately identify these muscles.

  17. Augmentation of linear facial anthropometrics through modern morphometrics: a facial convexity example.

    PubMed

    Wei, R; Claes, P; Walters, M; Wholley, C; Clement, J G

    2011-06-01

    The facial region has traditionally been quantified using linear anthropometrics. These are well established in dentistry, but require expertise to be used effectively. The aim of this study was to augment the utility of linear anthropometrics by applying them in conjunction with modern 3-D morphometrics. Facial images of 75 males and 94 females aged 18-25 years with self-reported Caucasian ancestry were used. An anthropometric mask was applied to establish corresponding quasi-landmarks on the images in the dataset. A statistical face-space, encoding shape covariation, was established. The facial median plane was extracted facilitating both manual and automated indication of commonly used midline landmarks. From both indications, facial convexity angles were calculated and compared. The angles were related to the face-space using a regression based pathway enabling the visualization of facial form associated with convexity variation. Good agreement between the manual and automated angles was found (Pearson correlation: 0.9478-0.9474, Dahlberg root mean squared error: 1.15°-1.24°). The population mean angle was 166.59°-166.29° (SD 5.09°-5.2°) for males-females. The angle-pathway provided valuable feedback. Linear facial anthropometrics can be extended when used in combination with a face-space derived from 3-D scans and the exploration of property pathways inferred in a statistically verifiable way. © 2011 Australian Dental Association.
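    The facial convexity angle itself is simple to compute once midline landmarks are available: it is the angle at soft-tissue subnasale between the lines to glabella and to pogonion. The 3D coordinates below are fabricated to land near the reported population mean of roughly 166°:

```python
import numpy as np

def angle_at(vertex, p1, p2):
    """Angle (degrees) at `vertex` between the rays toward p1 and p2."""
    v1, v2 = p1 - vertex, p2 - vertex
    cos = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# Illustrative midline landmarks (mm), not real subject data.
glabella  = np.array([0.0,  60.0,  8.0])
subnasale = np.array([0.0,   0.0, 16.0])
pogonion  = np.array([0.0, -50.0, 10.0])

convexity = angle_at(subnasale, glabella, pogonion)
```

    A flatter profile gives an angle closer to 180°; more convex profiles give smaller angles, so the population mean and SD summarize profile convexity in a single number.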

  18. Comparison of facial morphologies between adult Chinese and Houstonian Caucasian populations using three-dimensional imaging.

    PubMed

    Wirthlin, J; Kau, C H; English, J D; Pan, F; Zhou, H

    2013-09-01

    The objective of this study was to compare the facial morphologies of an adult Chinese population to a Houstonian white population. Three-dimensional (3D) images were acquired via a commercially available stereophotogrammetric camera system, 3dMDface™. Using the system, 100 subjects from a Houstonian population and 71 subjects from a Chinese population were photographed. A complex mathematical algorithm was performed to generate a composite facial average (one for males and one for females) for each subgroup. The computer-generated facial averages were then superimposed based on a previously validated superimposition method. The facial averages were evaluated for differences. Distinct facial differences were evident between the subgroups evaluated. These areas included the nasal tip, the peri-orbital area, the malar process, the labial region, the forehead, and the chin. Overall, the mean facial difference between the Chinese and Houstonian female averages was 2.73 ± 2.20 mm, while the difference between the Chinese and Houstonian males was 2.83 ± 2.20 mm. The percent similarities for the female population pairings and male population pairings were 10.45% and 12.13%, respectively. The average adult Chinese and Houstonian faces possess distinct differences. Different populations and ethnicities have different facial features and averages that should be considered in the planning of treatment. Copyright © 2013 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.

  19. Image Description with Local Patterns: An Application to Face Recognition

    NASA Astrophysics Data System (ADS)

    Zhou, Wei; Ahrary, Alireza; Kamata, Sei-Ichiro

    In this paper, we propose a novel approach for representing the local features of a digital image using 1D Local Patterns by Multi-Scans (1DLPMS). We also consider extensions and simplifications of the proposed approach for facial image analysis. The proposed approach consists of three steps. In the first step, the gray values of the pixels in the image are represented as a vector giving the local neighborhood intensity distributions of the pixels. Then, multi-scans are applied to capture different spatial information in the image, with the advantage of less computation than traditional methods such as Local Binary Patterns (LBP). The second step encodes the local features based on different encoding rules using 1D local patterns. This transformation is expected to be less sensitive to illumination variations while preserving the appearance of images embedded in the original gray scale. In the final step, Grouped 1D Local Patterns by Multi-Scans (G1DLPMS) is applied to make the proposed approach computationally simpler and easier to extend. We then formulate a boosted algorithm to extract the most discriminant local features. The evaluation results demonstrate that the proposed approach outperforms conventional approaches in terms of accuracy in face recognition, gender estimation and facial expression recognition.
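
    The 1DLPMS encoding is not specified in enough detail in the abstract to reproduce, but the conventional baseline it is compared against, Local Binary Patterns, can be sketched briefly (basic 8-neighbour, radius-1 variant):

```python
import numpy as np

def lbp_8_1(img):
    """Basic 8-neighbour local binary pattern (radius 1) for a 2-D grayscale
    array; border pixels are skipped.  This sketches the conventional LBP
    baseline the abstract contrasts with, not the 1DLPMS method itself."""
    img = np.asarray(img, dtype=np.int32)
    h, w = img.shape
    # clockwise neighbour offsets starting at the top-left pixel
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        # set the bit where the neighbour is at least as bright as the centre
        codes |= ((neigh >= img[1:h - 1, 1:w - 1]).astype(np.uint8) << bit)
    return codes

img = np.array([[5, 5, 5],
                [5, 4, 5],
                [5, 5, 5]])
print(lbp_8_1(img))  # all neighbours brighter than the centre → [[255]]
```

    Histograms of such codes over image cells form the texture descriptor; the multi-scan idea replaces the 2-D neighbourhood comparisons with cheaper 1-D scans over the pixel vector.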

  20. Face recognition system using multiple face model of hybrid Fourier feature under uncontrolled illumination variation.

    PubMed

    Hwang, Wonjun; Wang, Haitao; Kim, Hyunwoo; Kee, Seok-Cheol; Kim, Junmo

    2011-04-01

    The authors present a robust face recognition system for large-scale data sets taken under uncontrolled illumination variations. The proposed face recognition system consists of a novel illumination-insensitive preprocessing method, hybrid Fourier-based facial feature extraction, and a score fusion scheme. First, in the preprocessing stage, a face image is transformed into an illumination-insensitive image, called an "integral normalized gradient image," by normalizing and integrating the smoothed gradients of a facial image. Then, for feature extraction by complementary classifiers, multiple face models based upon hybrid Fourier features are applied. The hybrid Fourier features are extracted from different Fourier domains in different frequency bandwidths, and each feature is then individually classified by linear discriminant analysis. In addition, multiple face models are generated from several normalized face images that have different eye distances. Finally, to combine scores from the multiple complementary classifiers, a log likelihood ratio-based score fusion scheme is applied. The proposed system is evaluated using the face recognition grand challenge (FRGC) experimental protocols; FRGC is a large publicly available data set. Experimental results on the FRGC version 2.0 data sets show that the proposed method achieves an average verification rate of 81.49% on 2-D face images under various environmental variations such as illumination changes, expression changes, and elapsed time.
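
    The log likelihood ratio-based fusion step can be illustrated under a common simplifying assumption: per-classifier genuine and impostor match scores modelled as Gaussians, with the fused score being the summed LLR. This is a generic sketch of that idea, not the authors' exact estimator:

```python
import numpy as np

def llr_fuse(scores, gen_stats, imp_stats):
    """Fuse per-classifier match scores with a Gaussian log-likelihood ratio,
    one common way to realise the LLR fusion named in the abstract.
    `gen_stats` / `imp_stats` are (mean, std) pairs per classifier, estimated
    from genuine and impostor training scores (illustrative parameters)."""
    def log_pdf(x, mu, sd):
        return -0.5 * np.log(2 * np.pi * sd ** 2) - (x - mu) ** 2 / (2 * sd ** 2)
    total = 0.0
    for s, (gm, gs), (im, istd) in zip(scores, gen_stats, imp_stats):
        total += log_pdf(s, gm, gs) - log_pdf(s, im, istd)
    return total  # accept if above a threshold chosen for a target FAR

# two classifiers; a genuine-looking score pair yields a positive fused LLR
gen = [(0.8, 0.1), (0.7, 0.15)]
imp = [(0.2, 0.1), (0.3, 0.15)]
print(llr_fuse([0.75, 0.65], gen, imp) > 0)  # → True
```

    Summing per-classifier LLRs is optimal when the classifiers' scores are conditionally independent; in practice the Gaussian parameters are fit on a held-out training set.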

  1. A Simple Instrument Designed to Provide Consistent Digital Facial Images in Dermatology

    PubMed Central

    Nirmal, Balakrishnan; Pai, Sathish B; Sripathi, Handattu

    2013-01-01

    Photography has proven to be a valuable tool in the field of dermatology. The major reason for poor photographs is the inability to produce comparable images in subsequent follow-ups. Combining digital photography with image processing software analysis brings consistency to the tracking of serial images. Digital photographs were taken with the aid of an instrument that we designed in our workshop to ensure that photographs were taken with identical patient positioning, camera angles and distance. It is of paramount importance in aesthetic dermatology to appreciate even subtle changes after each treatment session, which can be achieved by taking consistent digital images. PMID:23723469

  2. A simple instrument designed to provide consistent digital facial images in dermatology.

    PubMed

    Nirmal, Balakrishnan; Pai, Sathish B; Sripathi, Handattu

    2013-05-01

    Photography has proven to be a valuable tool in the field of dermatology. The major reason for poor photographs is the inability to produce comparable images in subsequent follow-ups. Combining digital photography with image processing software analysis brings consistency to the tracking of serial images. Digital photographs were taken with the aid of an instrument that we designed in our workshop to ensure that photographs were taken with identical patient positioning, camera angles and distance. It is of paramount importance in aesthetic dermatology to appreciate even subtle changes after each treatment session, which can be achieved by taking consistent digital images.

  3. Professional assessment of facial profile attractiveness.

    PubMed

    Soh, Jen; Chew, Ming Tak; Wong, Hwee Bee

    2005-08-01

    The aim of this study was to compare the assessments of Chinese facial profile attractiveness by orthodontists and oral surgeons. The sample comprised 31 dental professionals (20 orthodontists, 11 oral surgeons) in an Asian community. Facial profile photographs and lateral cephalometric radiographs of 2 Chinese adults (1 man, 1 woman) with normal profiles, Class I incisor relationships, and Class I skeletal patterns were digitized. The digital images were modified by altering cephalometric skeletal and dental hard tissue Chinese normative values in increments of 2 standard deviations in the anteroposterior plane to obtain 7 facial profiles for each sex. The images were bimaxillary protrusion, protrusive mandible, retrusive mandible, normal profile (Class I incisor with Class I skeletal pattern), retrusive maxilla, protrusive maxilla, and bimaxillary retrusion. The Mann-Whitney U test was used to determine professional differences in assessment. Multiple regression analysis was performed with age, professional status, sex, and number of years in practice as independent variables. A strong correlation was found in the profile assessment between orthodontists and oral surgeons. Normal and bimaxillary retrusive Chinese male and female profiles were judged to be highly attractive by orthodontists and oral surgeons. Chinese male and female profiles with protrusive mandibles were judged the least attractive. There was a difference in professional opinion about the most attractive male profile (P < .05), with orthodontists preferring a flatter profile and oral surgeons preferring a fuller normal Chinese profile. Sex of dental professionals and number of years in clinical practice were found to affect profile rankings.

  4. A comparative study of the effects of retinol and retinoic acid on histological, molecular, and clinical properties of human skin.

    PubMed

    Kong, Rong; Cui, Yilei; Fisher, Gary J; Wang, Xiaojuan; Chen, Yinbei; Schneider, Louise M; Majmudar, Gopa

    2016-03-01

    All-trans retinol, a precursor of retinoic acid, is an effective anti-aging treatment widely used in skin care products. In comparison, topical retinoic acid is believed to provide even greater anti-aging effects; however, there is limited research directly comparing the effects of retinol and retinoic acid on skin. In this study, we compare the effects of retinol and retinoic acid on skin structure and expression of skin function-related genes and proteins. We also examine the effect of retinol treatment on skin appearance. Skin histology was examined by H&E staining and in vivo confocal microscopy. Expression levels of skin genes and proteins were analyzed using RT-PCR and immunohistochemistry. The efficacy of a retinol formulation in improving skin appearance was assessed using digital image-based wrinkle analysis. Four weeks of retinoic acid and retinol treatments both increased epidermal thickness, and upregulated genes for collagen type 1 (COL1A1), and collagen type 3 (COL3A1) with corresponding increases in procollagen I and procollagen III protein expression. Facial image analysis showed a significant reduction in facial wrinkles following 12 weeks of retinol application. The results of this study demonstrate that topical application of retinol significantly affects both cellular and molecular properties of the epidermis and dermis, as shown by skin biopsy and noninvasive imaging analyses. Although the magnitude tends to be smaller, retinol induces similar changes in skin histology, and gene and protein expression as compared to retinoic acid application. These results were confirmed by the significant facial anti-aging effect observed in the retinol efficacy clinical study. © 2015 Wiley Periodicals, Inc.

  5. Skull (image)

    MedlinePlus

    The skull is anterior to the spinal column and is the bony structure that encases the brain. Its purpose ... the facial muscles. The two regions of the skull are the cranial and facial region. The cranial ...

  6. Hot or not? Thermal reactions to social contact.

    PubMed

    Hahn, Amanda C; Whitehead, Ross D; Albrecht, Marion; Lefevre, Carmen E; Perrett, David I

    2012-10-23

    Previous studies using thermal imaging have suggested that face and body temperature increase during periods of sexual arousal. Additionally, facial skin temperature changes are associated with other forms of emotional arousal, including fear and stress. This study investigated whether interpersonal social contact can elicit facial temperature changes. Study 1: infrared images were taken during a standardized interaction with a same- and opposite-sex experimenter using skin contact in a number of potentially high-intimate (face and chest) and low-intimate (arm and palm) locations. Facial skin temperatures significantly increased from baseline during the face and chest contact, and these temperature shifts were larger when contact was made by an opposite-sex experimenter. Study 2: the topography of facial temperature change was investigated in five regions: forehead, periorbital, nose, mouth and cheeks. Increased temperature in the periorbital, nose and mouth regions predicted overall facial temperature shifts to social contact. Our findings demonstrate skin temperature changes are a sensitive index of arousal during interpersonal interactions.
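
    The core measurement in such studies, a facial region's mean temperature shift relative to a baseline period, can be sketched as follows (illustrative rectangular ROIs; the study's actual region definitions and camera calibration are not given here):

```python
import numpy as np

def roi_temperature_shift(frames, baseline_frames, roi):
    """Mean temperature change of a rectangular facial region relative to a
    baseline period.  `frames` are thermal images (°C per pixel) during the
    contact condition, `baseline_frames` before it; `roi` is a bounding box
    (row0, row1, col0, col1).  A minimal sketch of the comparison described,
    not the study's processing pipeline."""
    r0, r1, c0, c1 = roi
    during = float(np.mean([f[r0:r1, c0:c1].mean() for f in frames]))
    before = float(np.mean([f[r0:r1, c0:c1].mean() for f in baseline_frames]))
    return during - before

# synthetic check: a uniform 0.5 °C rise over the ROI during contact
base = [np.full((240, 320), 34.0) for _ in range(5)]
contact = [np.full((240, 320), 34.5) for _ in range(5)]
print(round(roi_temperature_shift(contact, base, (50, 100, 60, 120)), 2))  # → 0.5
```

    Repeating this per region (forehead, periorbital, nose, mouth, cheeks) yields the topographic comparison described in Study 2.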

  7. Objective assessment of the contribution of dental esthetics and facial attractiveness in men via eye tracking.

    PubMed

    Baker, Robin S; Fields, Henry W; Beck, F Michael; Firestone, Allen R; Rosenstiel, Stephen F

    2018-04-01

    Recently, greater emphasis has been placed on smile esthetics in dentistry. Eye tracking has been used to objectively evaluate attention to the dentition (mouth) in female models with different levels of dental esthetics quantified by the aesthetic component of the Index of Orthodontic Treatment Need (IOTN). This has not been accomplished in men. Our objective was to determine the visual attention to the mouth in men with different levels of dental esthetics (IOTN levels) and background facial attractiveness, for both male and female raters, using eye tracking. Facial images of men rated as unattractive, average, and attractive were digitally manipulated and paired with validated oral images, IOTN levels 1 (no treatment need), 7 (borderline treatment need), and 10 (definite treatment need). Sixty-four raters meeting the inclusion criteria were included in the data analysis. Each rater was calibrated in the eye tracker and randomly viewed the composite images for 3 seconds, twice for reliability. Reliability was good or excellent (intraclass correlation coefficients, 0.6-0.9). Significant interactions were observed with factorial repeated-measures analysis of variance and the Tukey-Kramer method for density and duration of fixations in the interactions of model facial attractiveness by area of the face (P <0.0001, P <0.0001, respectively), dental esthetics (IOTN) by area of the face (P <0.0001, P <0.0001, respectively), and rater sex by area of the face (P = 0.0166, P = 0.0290, respectively). For area by facial attractiveness, the hierarchy of visual attention in unattractive and attractive models was eye, mouth, and nose, but for men of average attractiveness, it was mouth, eye, and nose. For dental esthetics by area, at IOTN 7, the mouth had significantly more visual attention than it did at IOTN 1 and significantly more than the nose. At IOTN 10, the mouth received significantly more attention than at IOTN 7 and surpassed the nose and eye. 
These findings were irrespective of facial attractiveness levels. For rater sex by area in visual density, women showed significantly more attention to the eyes than did men, and only men showed significantly more attention to the mouth over the nose. Visual attention to the mouth was the greatest in men of average facial attractiveness, irrespective of dental esthetics. In borderline dental esthetics (IOTN 7), the eye and mouth were statistically indistinguishable, but in the most unesthetic dental attractiveness level (IOTN 10), the mouth exceeded the eye. The most unesthetic malocclusion significantly attracted visual attention in men. Male and female raters showed differences in their visual attention to male faces. Laypersons gave significant visual attention to poor dental esthetics in men, irrespective of background attractiveness; this was counter to what was seen in women. Copyright © 2017 American Association of Orthodontists. Published by Elsevier Inc. All rights reserved.
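
    The fixation density and duration measures used above can be illustrated with a simple tally of fixations over rectangular areas of interest; the AOI names and boxes below are hypothetical, and commercial eye-tracking software computes these measures automatically:

```python
def aoi_fixation_stats(fixations, aois):
    """Tally fixation count (density) and total duration per area of
    interest.  `fixations` are (x, y, duration_ms) tuples; `aois` maps a
    name (e.g. 'eyes', 'mouth', 'nose') to a bounding box (x0, y0, x1, y1).
    Illustrative of the eye-tracking measures, not the study's software."""
    stats = {name: {"count": 0, "duration_ms": 0} for name in aois}
    for x, y, dur in fixations:
        for name, (x0, y0, x1, y1) in aois.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                stats[name]["count"] += 1
                stats[name]["duration_ms"] += dur
    return stats

aois = {"eyes": (0, 0, 100, 40), "nose": (30, 40, 70, 70), "mouth": (20, 70, 80, 100)}
fixes = [(50, 20, 250), (50, 85, 300), (55, 88, 150)]
print(aoi_fixation_stats(fixes, aois)["mouth"])  # → {'count': 2, 'duration_ms': 450}
```

    Per-rater counts and durations like these feed the repeated-measures ANOVA on area-of-face effects reported in the abstract.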

  8. Mapping the impairment in decoding static facial expressions of emotion in prosopagnosia.

    PubMed

    Fiset, Daniel; Blais, Caroline; Royer, Jessica; Richoz, Anne-Raphaëlle; Dugas, Gabrielle; Caldara, Roberto

    2017-08-01

    Acquired prosopagnosia is characterized by a deficit in face recognition due to diverse brain lesions, but interestingly most prosopagnosic patients suffering from posterior lesions use the mouth instead of the eyes for face identification. Whether this bias is present for the recognition of facial expressions of emotion has not yet been addressed. We tested PS, a pure case of acquired prosopagnosia with bilateral occipitotemporal lesions anatomically sparing the regions dedicated for facial expression recognition. PS used mostly the mouth to recognize facial expressions even when the eye area was the most diagnostic. Moreover, PS directed most of her fixations towards the mouth. Her impairment was still largely present when she was instructed to look at the eyes, or when she was forced to look at them. Control participants showed a performance comparable to PS when only the lower part of the face was available. These observations suggest that the deficits observed in PS with static images are not solely attentional, but are rooted at the level of facial information use. This study corroborates neuroimaging findings suggesting that the Occipital Face Area might play a critical role in extracting facial features that are integrated for both face identification and facial expression recognition in static images. © The Author (2017). Published by Oxford University Press.

  9. Differences in holistic processing do not explain cultural differences in the recognition of facial expression.

    PubMed

    Yan, Xiaoqian; Young, Andrew W; Andrews, Timothy J

    2017-12-01

    The aim of this study was to investigate the causes of the own-race advantage in facial expression perception. In Experiment 1, we investigated Western Caucasian and Chinese participants' perception and categorization of facial expressions of six basic emotions that included two pairs of confusable expressions (fear and surprise; anger and disgust). People were slightly better at identifying facial expressions posed by own-race members (mainly in anger and disgust). In Experiment 2, we asked whether the own-race advantage was due to differences in the holistic processing of facial expressions. Participants viewed composite faces in which the upper part of one expression was combined with the lower part of a different expression. The upper and lower parts of the composite faces were either aligned or misaligned. Both Chinese and Caucasian participants were better at identifying the facial expressions from the misaligned images, showing interference on recognizing the parts of the expressions created by holistic perception of the aligned composite images. However, this interference from holistic processing was equivalent across expressions of own-race and other-race faces in both groups of participants. Whilst the own-race advantage in recognizing facial expressions does seem to reflect the confusability of certain emotions, it cannot be explained by differences in holistic processing.

  10. Do Dynamic Compared to Static Facial Expressions of Happiness and Anger Reveal Enhanced Facial Mimicry?

    PubMed Central

    Rymarczyk, Krystyna; Żurawski, Łukasz; Jankowiak-Siuda, Kamila; Szatkowska, Iwona

    2016-01-01

    Facial mimicry is the spontaneous response to others’ facial expressions, mirroring or matching the expression of the interaction partner. Recent evidence suggests that mimicry may not be only an automatic reaction but could depend on many factors, including social context, the type of task in which the participant is engaged, or stimulus properties (dynamic vs static presentation). In the present study, we investigated the impact of dynamic facial expression and sex differences on facial mimicry and the judgment of emotional intensity. Electromyographic activity was recorded from the corrugator supercilii, zygomaticus major, and orbicularis oculi muscles during passive observation of static and dynamic images of happiness and anger. Ratings of the emotional intensity of the facial expressions were also analysed. As predicted, dynamic expressions were rated as more intense than static ones. Compared to static images, dynamic displays of happiness also evoked stronger activity in the zygomaticus major and orbicularis oculi, suggesting that subjects experienced positive emotion. No muscles showed mimicry activity in response to angry faces. Moreover, we found that women exhibited greater zygomaticus major muscle activity in response to dynamic happiness stimuli than to static stimuli. Our data support the hypothesis that people mimic positive emotions and confirm the importance of dynamic stimuli in some emotional processing. PMID:27390867

  11. Forming impressions: effects of facial expression and gender stereotypes.

    PubMed

    Hack, Tay

    2014-04-01

    The present study of 138 participants explored how facial expressions and gender stereotypes influence impressions. It was predicted that images of smiling women would be evaluated more favorably on traits reflecting warmth, and that images of non-smiling men would be evaluated more favorably on traits reflecting competence. As predicted, smiling female faces were rated as more warm; however, contrary to prediction, perceived competence of male faces was not affected by facial expression. Participants' female stereotype endorsement was a significant predictor for evaluations of female faces; those who ascribed more strongly to traditional female stereotypes reported the most positive impressions of female faces displaying a smiling expression. However, a similar effect was not found for images of men; endorsement of traditional male stereotypes did not predict participants' impressions of male faces.

  12. The extraction and use of facial features in low bit-rate visual communication.

    PubMed

    Pearson, D

    1992-01-29

    A review is given of experimental investigations by the author and his collaborators into methods of extracting binary features from images of the face and hands. The aim of the research has been to enable deaf people to communicate by sign language over the telephone network. Other applications include model-based image coding and facial-recognition systems. The paper deals with the theoretical postulates underlying the successful experimental extraction of facial features. The basic philosophy has been to treat the face as an illuminated three-dimensional object and to identify features from characteristics of their Gaussian maps. It can be shown that in general a composite image operator linked to a directional-illumination estimator is required to accomplish this, although the latter can often be omitted in practice.

  13. Skull anatomy (image)

    MedlinePlus

    The skull is anterior to the spinal column and is the bony structure that encases the brain. Its purpose ... the facial muscles. The two regions of the skull are the cranial and facial region. The cranial ...

  14. A study to evaluate the reliability of using two-dimensional photographs, three-dimensional images, and stereoscopic projected three-dimensional images for patient assessment.

    PubMed

    Zhu, S; Yang, Y; Khambay, B

    2017-03-01

    Clinicians are accustomed to viewing conventional two-dimensional (2D) photographs and assume that viewing three-dimensional (3D) images is similar. Facial images captured in 3D are not viewed in true 3D; this may alter clinical judgement. The aim of this study was to evaluate the reliability of using conventional photographs, 3D images, and stereoscopic projected 3D images to rate the severity of the deformity in pre-surgical class III patients. Forty adult patients were recruited. Eight raters assessed facial height, symmetry, and profile using the three different viewing media and a 100-mm visual analogue scale (VAS), and identified the most informative viewing medium. Inter-rater consistency was above the 'good' level for all three media. Intra-rater reliability was not significantly different for rating facial height using 2D (P=0.704), symmetry using 3D (P=0.056), and profile using projected 3D (P=0.749). Using projected 3D for rating profile and symmetry resulted in significantly lower median VAS scores than either 3D or 2D images (all P<0.05). For 75% of the raters, stereoscopic 3D projection was the preferred method for rating. The reliability of assessing specific characteristics was dependent on the viewing medium. Clinicians should be aware that the visual information provided when viewing 3D images is not the same as when viewing 2D photographs, especially for facial depth, and this may change the clinical impression. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.

  15. Image fusion in craniofacial virtual reality modeling based on CT and 3dMD photogrammetry.

    PubMed

    Xin, Pengfei; Yu, Hongbo; Cheng, Huanchong; Shen, Shunyao; Shen, Steve G F

    2013-09-01

    The aim of this study was to demonstrate the feasibility of building a craniofacial virtual reality model by image fusion of 3-dimensional (3D) CT models and a 3dMD stereophotogrammetric facial surface. A CT scan and stereophotography were performed. The 3D CT models were reconstructed with Materialise Mimics software, and the stereophotogrammetric facial surface was reconstructed with 3dMD patient software. All 3D CT models were exported in Stereo Lithography file format, and the 3dMD model was exported in Virtual Reality Modeling Language file format. Image registration and fusion were performed in Mimics software. A genetic algorithm was used for precise image fusion alignment with minimum error. The 3D CT models and the 3dMD stereophotogrammetric facial surface were finally merged into a single file and displayed using Deep Exploration software. Errors between the CT soft tissue model and the 3dMD facial surface were also analyzed. The virtual model based on CT-3dMD image fusion clearly showed the photorealistic face and bone structures. Image registration errors in the virtual face are mainly located in the bilateral cheeks and eyeballs, where the errors exceed 1.5 mm. However, the image fusion of the whole point cloud sets of CT and 3dMD is acceptable, with a minimum error of less than 1 mm. The ease of use and high reliability of CT-3dMD image fusion allow the 3D virtual head to serve as an accurate, realistic, and widely applicable tool of great benefit to virtual face modeling.
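
    The reported registration error, the distance between the CT soft tissue model and the 3dMD surface, can be approximated by nearest-neighbour distances between the two point clouds. A brute-force sketch (a real pipeline would use a KD-tree and true point-to-surface distances):

```python
import numpy as np

def fusion_error(cloud_a, cloud_b):
    """Per-point nearest-neighbour distances from cloud_a to cloud_b, a
    common way to quantify CT-vs-stereophotogrammetry surface error after
    registration.  Brute force for clarity (O(n*m) memory and time)."""
    a = np.asarray(cloud_a, float)[:, None, :]   # (n, 1, 3)
    b = np.asarray(cloud_b, float)[None, :, :]   # (1, m, 3)
    d = np.sqrt(((a - b) ** 2).sum(axis=2)).min(axis=1)  # nearest neighbour per point
    return d.mean(), d.max()

# two tiny clouds offset by 1 mm along z
mean_e, max_e = fusion_error([[0, 0, 0], [10, 0, 0]], [[0, 0, 1], [10, 0, 1]])
print(mean_e, max_e)  # → 1.0 1.0
```

    A genetic algorithm, as used in the study, would search the space of rigid transforms for the one minimising such an error measure.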

  16. MR relaxometry for the facial ageing assessment: the preliminary study of the age dependency in the MR relaxometry parameters within the facial soft tissue.

    PubMed

    Watanabe, M; Buch, K; Fujita, A; Christiansen, C L; Jara, H; Sakai, O

    2015-01-01

    To investigate the location-specific tissue properties and age-related changes of the facial fat and facial muscles using quantitative MRI (qMRI) analysis of longitudinal relaxation time (T1) and transverse relaxation time (T2) values. 38 subjects (20 males and 18 females, 0.5-87 years old) were imaged with a mixed turbo-spin echo sequence at 1.5 T. T1 and T2 measurements were obtained within regions of interest in six facial fat regions, comprising the buccal fat, the subcutaneous cheek fat and four eyelid fat regions (lateral upper, medial upper, lateral lower and medial lower), and in five facial muscles bilaterally: the orbicularis oculi, orbicularis oris, buccinator, zygomaticus major and masseter muscles. Within the zygomaticus major muscle, age-associated T1 decreases in females and T1 increases in males were observed in later life, with an increase in T2 values with age. The orbicularis oculi muscles showed lower T1 and higher T2 values compared with the masseter, orbicularis oris and buccinator muscles, which demonstrated small age-related changes. Dramatic age-related changes were also observed in the eyelid fat regions, particularly within the lower eyelid fat: negative correlations with age in T1 values (p<0.0001 for age) and a prominent positive correlation with age in T2 values in male subjects (p<0.0001 for male×age). Age-related changes were not observed in T2 values within the subcutaneous cheek fat. This study demonstrates proof of concept for using T1 and T2 values to assess age-related changes of the facial soft tissues, demonstrating tissue-specific qMRI measurements and non-uniform ageing patterns within different regions of facial soft tissue.
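
    T2 relaxometry from a multi-echo acquisition reduces to fitting a mono-exponential decay per voxel or region of interest. A textbook log-linear sketch (the study's mixed turbo-spin echo fitting is more involved):

```python
import numpy as np

def fit_t2(echo_times_ms, signals):
    """Estimate T2 (ms) from a mono-exponential decay S = S0 * exp(-TE/T2)
    by a log-linear least-squares fit over the echo train.  A textbook
    relaxometry sketch, not the study's fitting pipeline."""
    te = np.asarray(echo_times_ms, float)
    ln_s = np.log(np.asarray(signals, float))
    slope, _ = np.polyfit(te, ln_s, 1)   # ln S = ln S0 - TE / T2
    return -1.0 / slope

# synthetic echo train with a known T2 of 80 ms
te = np.array([10.0, 30.0, 50.0, 70.0])
print(round(fit_t2(te, 1000 * np.exp(-te / 80.0)), 1))  # → 80.0
```

    On real data the fit is weighted or done nonlinearly to handle the noise floor at long echo times; T1 mapping follows the analogous recovery model.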

  17. Impact of facial defect reconstruction on attractiveness and negative facial perception.

    PubMed

    Dey, Jacob K; Ishii, Masaru; Boahene, Kofi D O; Byrne, Patrick; Ishii, Lisa E

    2015-06-01

    Measure the impact of facial defect reconstruction on observer-graded attractiveness and negative facial perception. Prospective, randomized, controlled experiment. One hundred twenty casual observers viewed images of faces with defects of varying sizes and locations before and after reconstruction as well as normal comparison faces. Observers rated attractiveness, defect severity, and how disfiguring, bothersome, and important to repair they considered each face. Facial defects decreased attractiveness -2.26 (95% confidence interval [CI]: -2.45, -2.08) on a 10-point scale. Mixed effects linear regression showed this attractiveness penalty varied with defect size and location, with large and central defects generating the greatest penalty. Reconstructive surgery increased attractiveness 1.33 (95% CI: 1.18, 1.47), an improvement dependent upon size and location, restoring some defect categories to near normal ranges of attractiveness. Iterated principal factor analysis indicated the disfiguring, important to repair, bothersome, and severity variables were highly correlated and measured a common domain; thus, they were combined to create the disfigured, important to repair, bothersome, severity (DIBS) factor score, representing negative facial perception. The DIBS regression showed defect faces have a 1.5 standard deviation increase in negative perception (DIBS: 1.69, 95% CI: 1.61, 1.77) compared to normal faces, which decreased by a similar magnitude after surgery (DIBS: -1.44, 95% CI: -1.49, -1.38). These findings varied with defect size and location. Surgical reconstruction of facial defects increased attractiveness and decreased negative social facial perception, an impact that varied with defect size and location. These new social perception data add to the evidence base demonstrating the value of high-quality reconstructive surgery. NA. © 2015 The American Laryngological, Rhinological and Otological Society, Inc.

  18. Coding and quantification of a facial expression for pain in lambs.

    PubMed

    Guesgen, M J; Beausoleil, N J; Leach, M; Minot, E O; Stewart, M; Stafford, K J

    2016-11-01

    Facial expressions are routinely used to assess pain in humans, particularly those who are non-verbal. Recently, there has been an interest in developing coding systems for facial grimacing in non-human animals, such as rodents, rabbits, horses and sheep. The aims of this preliminary study were to: 1. Qualitatively identify facial feature changes in lambs experiencing pain as a result of tail-docking and compile these changes to create a Lamb Grimace Scale (LGS); 2. Determine whether human observers can use the LGS to differentiate tail-docked lambs from control lambs and differentiate lambs before and after docking; 3. Determine whether changes in facial action units of the LGS can be objectively quantified in lambs before and after docking; 4. Evaluate effects of restraint of lambs on observers' perceptions of pain using the LGS and on quantitative measures of facial action units. By comparing images of lambs before (no pain) and after (pain) tail-docking, the LGS was devised in consultation with scientists experienced in assessing facial expression in other species. The LGS consists of five facial action units: Orbital Tightening, Mouth Features, Nose Features, Cheek Flattening and Ear Posture. The aims of the study were addressed in two experiments. In Experiment I, still images of the faces of restrained lambs were taken from video footage before and after tail-docking (n=4) or sham tail-docking (n=3). These images were scored by a group of five naïve human observers using the LGS. Because lambs were restrained for the duration of the experiment, Ear Posture was not scored. The scores for the images were averaged to provide one value per feature per period and then scores for the four LGS action units were averaged to give one LGS score per lamb per period. In Experiment II, still images of the faces of nine lambs were taken before and after tail-docking. Stills were taken when lambs were restrained and unrestrained in each period. 
A different group of five human observers scored the images from Experiment II. Changes in facial action units were also quantified objectively by a researcher using image measurement software. In both experiments LGS scores were analyzed using a linear mixed model to evaluate the effects of tail docking on observers' perception of facial expression changes. Kendall's Index of Concordance was used to measure reliability among observers. In Experiment I, human observers were able to use the LGS to differentiate docked lambs from control lambs. LGS scores significantly increased from before to after treatment in docked lambs but not control lambs. In Experiment II there was a significant increase in LGS scores after docking. This was coupled with changes in other validated indicators of pain after docking in the form of pain-related behaviour. Only two components, Mouth Features and Orbital Tightening, showed significant quantitative changes after docking. The direction of these changes agrees with the description of these facial action units in the LGS. Restraint affected people's perceptions of pain as well as quantitative measures of LGS components. Freely moving lambs were scored lower using the LGS over both periods and had a significantly smaller eye aperture and smaller nose and ear angles than when they were held. Agreement among observers for LGS scores was fair overall (Experiment I: W=0.60; Experiment II: W=0.66). This preliminary study demonstrates changes in lamb facial expression associated with pain. The results of these experiments should be interpreted with caution due to low lamb numbers. Copyright © 2016 Elsevier B.V. All rights reserved.
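
    Kendall's Index of Concordance (W), used above to gauge observer agreement, can be computed directly from the observers' ratings. The sketch below is a minimal, generic implementation (not the authors' code), assigning average ranks to ties and omitting the usual tie-correction term:

```python
def kendalls_w(ratings):
    """Kendall's coefficient of concordance W for m raters scoring n items.

    ratings: list of m lists, each containing n scores (one list per rater).
    Ties receive average ranks; the tie-correction term is omitted, which
    slightly underestimates W when ties are frequent.
    """
    m = len(ratings)      # number of raters
    n = len(ratings[0])   # number of items rated

    def avg_ranks(scores):
        order = sorted(range(n), key=lambda i: scores[i])
        ranks = [0.0] * n
        i = 0
        while i < n:
            j = i
            while j + 1 < n and scores[order[j + 1]] == scores[order[i]]:
                j += 1
            for k in range(i, j + 1):        # tied block gets the average rank
                ranks[order[k]] = (i + j) / 2 + 1
            i = j + 1
        return ranks

    rank_sums = [0.0] * n
    for rater in ratings:
        for item, r in enumerate(avg_ranks(rater)):
            rank_sums[item] += r
    mean_sum = sum(rank_sums) / n
    s = sum((rs - mean_sum) ** 2 for rs in rank_sums)   # spread of rank sums
    return 12 * s / (m * m * (n ** 3 - n))
```

    W ranges from 0 (no agreement) to 1 (perfect agreement); values around 0.6, as reported here, are conventionally read as fair-to-moderate concordance.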

  19. Capturing Physiology of Emotion along Facial Muscles: A Method of Distinguishing Feigned from Involuntary Expressions

    NASA Astrophysics Data System (ADS)

    Khan, Masood Mehmood; Ward, Robert D.; Ingleby, Michael

    The ability to distinguish feigned from involuntary expressions of emotions could help in the investigation and treatment of neuropsychiatric and affective disorders and in the detection of malingering. This work investigates differences in emotion-specific patterns of thermal variations along the major facial muscles. Using experimental data extracted from 156 images, we attempted to classify patterns of emotion-specific thermal variations into neutral, and voluntary and involuntary expressions of positive and negative emotive states. Initial results suggest (i) each facial muscle exhibits a unique thermal response to various emotive states; (ii) the pattern of thermal variances along the facial muscles may assist in classifying voluntary and involuntary facial expressions; and (iii) facial skin temperature measurements along the major facial muscles may be used in automated emotion assessment.

  20. Holistic face processing can inhibit recognition of forensic facial composites.

    PubMed

    McIntyre, Alex H; Hancock, Peter J B; Frowd, Charlie D; Langton, Stephen R H

    2016-04-01

    Facial composite systems help eyewitnesses to show the appearance of criminals. However, likenesses created by unfamiliar witnesses will not be completely accurate, and people familiar with the target can find them difficult to identify. Faces are processed holistically; we explore whether this impairs identification of inaccurate composite images and whether recognition can be improved. In Experiment 1 (n = 64) an imaging technique was used to make composites of celebrity faces more accurate and identification was contrasted with the original composite images. Corrected composites were better recognized, confirming that errors in production of the likenesses impair identification. The influence of holistic face processing was explored by misaligning the top and bottom parts of the composites (cf. Young, Hellawell, & Hay, 1987). Misalignment impaired recognition of corrected composites but identification of the original, inaccurate composites significantly improved. This effect was replicated with facial composites of noncelebrities in Experiment 2 (n = 57). We conclude that, like real faces, facial composites are processed holistically: recognition is impaired because unlike real faces, composites contain inaccuracies and holistic face processing makes it difficult to perceive identifiable features. This effect was consistent across composites of celebrities and composites of people who are personally familiar. Our findings suggest that identification of forensic facial composites can be enhanced by presenting composites in a misaligned format. (c) 2016 APA, all rights reserved.

  1. Dependence of the appearance-based perception of criminality, suggestibility, and trustworthiness on the level of pixelation of facial images.

    PubMed

    Nurmoja, Merle; Eamets, Triin; Härma, Hanne-Loore; Bachmann, Talis

    2012-10-01

    While the dependence of face identification on the level of pixelation-transform of the images of faces has been well studied, similar research on face-based trait perception is underdeveloped. Because depiction formats used for hiding individual identity in visual media and evidential material recorded by surveillance cameras often consist of pixelized images, knowing the effects of pixelation on person perception has practical relevance. Here, the results of two experiments are presented showing the effect of facial image pixelation on the perception of criminality, trustworthiness, and suggestibility. It appears that individuals (N = 46, M age = 21.5 yr., SD = 3.1 for criminality ratings; N = 94, M age = 27.4 yr., SD = 10.1 for other ratings) can discriminate facial cues indicative of these perceived traits even at a coarse level of image pixelation (10-12 pixels per face horizontally) and that discriminability increases as the pixelation becomes less coarse. Perceived criminality and trustworthiness appear to be better carried by the pixelized images than perceived suggestibility.
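
    The pixelation transform studied here amounts to block averaging. A hypothetical sketch (not the study's stimulus-generation code) that coarsens a grayscale face image to roughly a target number of blocks across, e.g. 10-12 for the coarsest level used:

```python
def pixelate(image, blocks_x):
    """Pixelate a grayscale image (list of rows of 0-255 ints) so it spans
    roughly `blocks_x` blocks horizontally, replacing each square block
    with its mean intensity."""
    h, w = len(image), len(image[0])
    bs = max(1, w // blocks_x)          # square block size from target width
    out = [row[:] for row in image]
    for by in range(0, h, bs):
        for bx in range(0, w, bs):
            ys = range(by, min(by + bs, h))
            xs = range(bx, min(bx + bs, w))
            mean = sum(image[y][x] for y in ys for x in xs) / (len(ys) * len(xs))
            for y in ys:
                for x in xs:
                    out[y][x] = int(round(mean))
    return out
```

    A 480-pixel-wide face image pixelated with `blocks_x=12` would reproduce the study's coarsest condition; larger `blocks_x` values give the finer levels.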

  2. Skin image retrieval using Gabor wavelet texture feature.

    PubMed

    Ou, X; Pan, W; Zhang, X; Xiao, P

    2016-12-01

    Skin imaging plays a key role in many clinical studies. We have used many skin imaging techniques, including the recently developed capacitive contact skin imaging based on fingerprint sensors. The aim of this study was to develop an effective skin image retrieval technique using Gabor wavelet transform, which can be used on different types of skin images, but with a special focus on skin capacitive contact images. Content-based image retrieval (CBIR) is a useful technology to retrieve stored images from a database by supplying query images. In a typical CBIR, images are retrieved based on colour, shape, texture, etc. In this study, texture features are used for retrieving skin images, and Gabor wavelet transform is used for texture feature description and extraction. The results show that Gabor wavelet texture features can work efficiently on different types of skin images. Although Gabor wavelet transform is slower than other image retrieval techniques, such as principal component analysis (PCA) and the grey-level co-occurrence matrix (GLCM), it is the best for retrieving skin capacitive contact images and facial images with different orientations. Gabor wavelet transform can also work well on facial images with different expressions and on skin cancer/disease images. We have developed an effective skin image retrieval method based on Gabor wavelet transform, which is useful for retrieving different types of images, namely digital colour face images, digital colour skin cancer and skin disease images, and particularly greyscale skin capacitive contact images. Gabor wavelet transform can also be potentially useful for face recognition (with different orientations and expressions) and skin cancer/disease diagnosis. © 2016 Society of Cosmetic Scientists and the Société Française de Cosmétologie.
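
    A hedged sketch of the kind of Gabor-wavelet texture descriptor and distance-based retrieval the abstract describes: a bank of real Gabor filters at several scales and orientations, mean/std of the response magnitude as the feature vector, and Euclidean-distance ranking. The filter-bank parameters below are illustrative assumptions, not the authors' settings.

```python
import numpy as np

def gabor_kernel(size, theta, lam, sigma=None, gamma=0.5):
    """Real (cosine-phase) Gabor kernel at orientation theta, wavelength lam."""
    sigma = sigma or 0.56 * lam          # common sigma/lambda heuristic
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def gabor_features(img, orientations=4, scales=(4.0, 8.0), ksize=15):
    """Mean and std of filter-response magnitude per (scale, orientation):
    a 2 * len(scales) * orientations texture descriptor."""
    feats = []
    for lam in scales:
        for k in range(orientations):
            kern = gabor_kernel(ksize, k * np.pi / orientations, lam)
            # circular convolution via FFT, zero-padding the kernel to image size
            resp = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(kern, img.shape)))
            mag = np.abs(resp)
            feats += [mag.mean(), mag.std()]
    return np.array(feats)

def retrieve(query, database):
    """Rank database images by Euclidean distance in Gabor feature space."""
    q = gabor_features(query)
    dists = [np.linalg.norm(q - gabor_features(im)) for im in database]
    return np.argsort(dists)
```

    In a full CBIR system the database features would be precomputed once and indexed, which is where the speed gap with PCA and GLCM noted above matters most.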

  3. Reaction Time of Facial Affect Recognition in Asperger's Disorder for Cartoon and Real, Static and Moving Faces

    ERIC Educational Resources Information Center

    Miyahara, Motohide; Bray, Anne; Tsujii, Masatsugu; Fujita, Chikako; Sugiyama, Toshiro

    2007-01-01

    This study used a choice reaction-time paradigm to test the perceived impairment of facial affect recognition in Asperger's disorder. Twenty teenagers with Asperger's disorder and 20 controls were compared with respect to the latency and accuracy of response to happy or disgusted facial expressions, presented in cartoon or real images and in…

  4. Enhancement pattern of the normal facial nerve at 3.0 T temporal MRI.

    PubMed

    Hong, H S; Yi, B-H; Cha, J-G; Park, S-J; Kim, D H; Lee, H K; Lee, J-D

    2010-02-01

    The purpose of this study was to evaluate the enhancement pattern of the normal facial nerve at 3.0 T temporal MRI. We reviewed the medical records of 20 patients and evaluated 40 clinically normal facial nerves demonstrated by 3.0 T temporal MRI. The grade of enhancement of the facial nerve was visually scaled from 0 to 3. The patients comprised 11 men and 9 women, and the mean age was 39.7 years. The reasons for the MRI were sudden hearing loss (11 patients), Ménière's disease (6) and tinnitus (7). Temporal MR scans were obtained by fluid-attenuated inversion-recovery (FLAIR) and diffusion-weighted imaging of the brain; three-dimensional (3D) fast imaging employing steady-state acquisition (FIESTA) images of the temporal bone with a 0.77 mm thickness, and pre-contrast and contrast-enhanced 3D spoiled gradient recalled acquisition in the steady state (SPGR) of the temporal bone with a 1 mm thickness, were obtained with 3.0 T MR scanning. 40 nerves (100%) were visibly enhanced along at least one segment of the facial nerve. The enhanced segments included the geniculate ganglion (77.5%), tympanic segment (37.5%) and mastoid segment (100%). Even the facial nerve in the internal auditory canal (15%) and labyrinthine segments (5%) showed mild enhancement. The use of high-resolution, high signal-to-noise ratio (with 3 T MRI), thin-section contrast-enhanced 3D SPGR sequences showed enhancement of the normal facial nerve along the whole course of the nerve; however, only mild enhancement was observed in areas associated with acute neuritis, namely the canalicular and labyrinthine segments.

  5. Perception of health from facial cues

    PubMed Central

    Henderson, Audrey J.; Holzleitner, Iris J.; Talamas, Sean N.

    2016-01-01

    Impressions of health are integral to social interactions, yet poorly understood. A review of the literature reveals multiple facial characteristics that potentially act as cues to health judgements. The cues vary in their stability across time: structural shape cues including symmetry and sexual dimorphism alter slowly across the lifespan and have been found to have weak links to actual health, but show inconsistent effects on perceived health. Facial adiposity changes over a medium time course and is associated with both perceived and actual health. Skin colour alters over a short time and has strong effects on perceived health, yet links to health outcomes have barely been evaluated. The review also suggested an additional influence of demeanour as a perceptual cue to health. We, therefore, investigated the association of health judgements with multiple facial cues measured objectively from two-dimensional and three-dimensional facial images. We found evidence for independent contributions of face shape and skin colour cues to perceived health. Our empirical findings: (i) reinforce the role of skin yellowness; (ii) demonstrate the utility of global face shape measures of adiposity; and (iii) emphasize the role of affect in facial images with nominally neutral expression in impressions of health. PMID:27069057

  6. Face recognition using slow feature analysis and contourlet transform

    NASA Astrophysics Data System (ADS)

    Wang, Yuehao; Peng, Lingling; Zhe, Fuchuan

    2018-04-01

    In this paper we propose a novel face recognition approach based on slow feature analysis (SFA) in the contourlet transform domain. The method first uses the contourlet transform to decompose the face image into low-frequency and high-frequency parts, and then takes advantage of slow feature analysis for facial feature extraction. We name this new method, which combines slow feature analysis with the contourlet transform, CT-SFA. Experimental results on standard international face databases demonstrate that the new face recognition method is effective and competitive.
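
    As an illustration of the slow feature analysis step (the contourlet decomposition is omitted), a minimal linear SFA can be sketched as follows. This is a generic implementation, not the CT-SFA code: centre and whiten the signal, then take the directions along which the whitened signal's finite differences have the smallest variance.

```python
import numpy as np

def slow_feature_analysis(X, n_features=2):
    """Linear SFA on a time-indexed signal X of shape (T, D).

    Returns (Y, P): the n_features slowest output signals (T, n_features)
    and the corresponding projection matrix applied to the centred input.
    """
    X = X - X.mean(axis=0)
    # whiten via eigen-decomposition of the covariance
    d, E = np.linalg.eigh(np.cov(X.T))
    keep = d > 1e-10                     # drop numerically null directions
    W = E[:, keep] / np.sqrt(d[keep])    # whitening matrix
    Z = X @ W                            # whitened signal, unit covariance
    dZ = np.diff(Z, axis=0)              # discrete temporal derivative
    d2, E2 = np.linalg.eigh(np.cov(dZ.T))
    P = E2[:, :n_features]               # smallest derivative variance = slowest
    return Z @ P, W @ P
```

    Applied to the low-frequency contourlet coefficients of face images, the slow directions play the role of stable facial features; here the test simply checks that SFA recovers a slow sinusoid from a mixture with a fast one.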

  7. Proposal of Self-Learning and Recognition System of Facial Expression

    NASA Astrophysics Data System (ADS)

    Ogawa, Yukihiro; Kato, Kunihito; Yamamoto, Kazuhiko

    We describe the realization of a more complex function built from information acquired by several simple, built-in functions. We propose a self-learning and recognition system for human facial expressions, achieved through natural interaction between human and robot. A robot equipped with this system can understand human facial expressions and behave according to them once the learning process is complete. The system is modelled after the process by which a baby learns its parents’ facial expressions. With a camera, the system acquires face images, and with CdS sensors on the robot’s head, it acquires information about human actions. Using the information from these sensors, the robot extracts features of each facial expression. After self-learning is completed, when a person changes his or her facial expression in front of the robot, the robot performs the actions associated with that facial expression.

  8. Pivotal Trial of the Efficacy and Safety of Oxymetazoline Cream 1.0% for the Treatment of Persistent Facial Erythema Associated With Rosacea: Findings from the Second REVEAL Trial.

    PubMed

    Baumann, Leslie; Goldberg, David J; Stein Gold, Linda; Tanghetti, Emil A; Lain, Edward; Kaufman, Joely; Weng, Emily; Berk, David R; Ahluwalia, Gurpreet

    2018-03-01

    Rosacea is a chronic dermatologic condition with limited treatment options, particularly for persistent erythema. This pivotal phase 3 study evaluated oxymetazoline, an α1A-adrenoceptor agonist, for the treatment of moderate to severe persistent erythema of rosacea. Eligible patients were randomly assigned 1:1 to receive oxymetazoline cream 1.0% or vehicle applied topically to the face once daily for 29 days. The primary efficacy outcome was ≥2-grade improvement from baseline on both Clinician Erythema Assessment (CEA) and Subject Self-Assessment for rosacea facial redness (SSA) (composite success) at 3, 6, 9, and 12 hours postdose on day 29. Digital image analysis of rosacea facial erythema was evaluated as a secondary efficacy outcome measure. Safety assessments included treatment-emergent adverse events (TEAEs) and dermal tolerability. Patients were followed for 28 days posttreatment to assess worsening of erythema (1-grade increase in severity from baseline on composite CEA/SSA in patients with moderate erythema at baseline; rebound effect). The study included 445 patients (mean age: 50.3 years; 78.7% female); most had moderate erythema at baseline (84.0% on CEA; 91.5% on SSA). The proportion of patients achieving the primary efficacy outcome was significantly greater with oxymetazoline versus vehicle (P=0.001). Similar results favoring oxymetazoline over vehicle were observed for the individual CEA and SSA scores (P<0.001 and P=0.011, respectively). Median reduction in rosacea facial erythema on day 29 as assessed by digital image analysis also favored oxymetazoline over vehicle (P<0.001). Safety results were similar between oxymetazoline and vehicle; discontinuations due to TEAEs were low (2.7% vs 0.5%). Following cessation of treatment, 2 (1.2%) patients in the oxymetazoline group and no patient in the vehicle group had rebound effect compared with their day 1 baseline score.
Topical oxymetazoline applied to the face once daily for 29 days was effective, safe, and well tolerated in the treatment of moderate to severe persistent facial erythema of rosacea.
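
    The composite endpoint can be expressed as a small predicate. This is a hypothetical sketch of the scoring logic, and it assumes (the abstract does not state this explicitly) that a responder must achieve composite success at every one of the four postdose timepoints:

```python
def composite_success(baseline_cea, baseline_ssa, cea, ssa):
    """True when BOTH the clinician (CEA) and subject (SSA) erythema grades
    improved by at least 2 grades from baseline (lower grade = less redness)."""
    return (baseline_cea - cea >= 2) and (baseline_ssa - ssa >= 2)

def responder(baseline_cea, baseline_ssa, postdose):
    """postdose: list of (cea, ssa) pairs at 3, 6, 9 and 12 hours on day 29;
    assumed here to require composite success at every timepoint."""
    return all(composite_success(baseline_cea, baseline_ssa, c, s)
               for c, s in postdose)
```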

    J Drugs Dermatol. 2018;17(3):290-298.

  9. Preoperative Identification of Facial Nerve in Vestibular Schwannomas Surgery Using Diffusion Tensor Tractography

    PubMed Central

    Choi, Kyung-Sik; Kim, Min-Su; Kwon, Hyeok-Gyu; Jang, Sung-Ho

    2014-01-01

    Objective Facial nerve palsy is a common complication of treatment for vestibular schwannoma (VS), so preserving facial nerve function is important. Preoperative visualization of the course of the facial nerve in relation to the VS could help prevent injury to the nerve during surgery. In this study, we evaluate the accuracy of diffusion tensor tractography (DTT) for preoperative identification of the facial nerve. Methods We prospectively collected data from 11 patients with VS who underwent preoperative DTT of the facial nerve. Imaging results were correlated with intraoperative findings. Postoperative DTT was performed at 3 months after surgery. Facial nerve function was clinically evaluated according to the House-Brackmann (HB) facial nerve grading system. Results Facial nerve courses on preoperative tractography corresponded fully with intraoperative findings in all patients. The facial nerve was located on the anterior tumor surface in 5 cases, anteroinferior in 3 cases, anterosuperior in 2 cases, and posteroinferior in 1 case. Postoperative facial nerve tractography confirmed preservation of the nerve in all patients. No patient had severe facial paralysis at one year after surgery. Conclusion This study shows that DTT for preoperative identification of the facial nerve in VS surgery can be an accurate and useful radiological method and could help improve facial nerve preservation. PMID:25289119

  10. Contribution of malocclusion and female facial attractiveness to smile esthetics evaluated by eye tracking.

    PubMed

    Richards, Michael R; Fields, Henry W; Beck, F Michael; Firestone, Allen R; Walther, Dirk B; Rosenstiel, Stephen; Sacksteder, James M

    2015-04-01

    There is disagreement in the literature concerning the importance of the mouth in overall facial attractiveness. Eye tracking provides an objective method to evaluate what people see. The objective of this study was to determine whether dental and facial attractiveness alters viewers' visual attention in terms of which area of the face (eyes, nose, mouth, chin, ears, or other) is viewed first, viewed the greatest number of times, and viewed for the greatest total time (duration) using eye tracking. Seventy-six viewers underwent 1 eye tracking session. Of these, 53 were white (49% female, 51% male). Their ages ranged from 18 to 29 years, with a mean of 19.8 years, and none were dental professionals. After being positioned and calibrated, they were shown 24 unique female composite images, each image shown twice for reliability. These images reflected a repaired unilateral cleft lip or 3 grades of dental attractiveness similar to those of grades 1 (near ideal), 7 (borderline treatment need), and 10 (definite treatment need) as assessed in the aesthetic component of the Index of Orthodontic Treatment Need (AC-IOTN). The images were then embedded in faces of 3 levels of attractiveness: attractive, average, and unattractive. During viewing, data were collected for the first location, frequency, and duration of each viewer's gaze. Observer reliability ranged from 0.58 to 0.92 (intraclass correlation coefficients) but was less than 0.07 (interrater) for the chin, which was eliminated from the study. Likewise, reliability for the area of first fixation was kappa less than 0.10 for both intrarater and interrater reliabilities; the area of first fixation was also removed from the data analysis. Repeated-measures analysis of variance showed a significant effect (P <0.001) for level of attractiveness by malocclusion by area of the face. 
For both number of fixations and duration of fixations, the eyes were overwhelmingly the most salient area, with the mouth receiving the second most visual attention. At times, the mouth and the eyes were statistically indistinguishable in fixation frequency and duration. As dental attractiveness decreased, visual attention to the mouth increased, approaching that given to the eyes. AC-IOTN grade 10 gained the most attention, followed by both AC-IOTN grade 7 and the cleft. AC-IOTN grade 1 received the least visual attention. Lower dental attractiveness (AC-IOTN 7 and AC-IOTN 10) also received more visual attention as facial attractiveness increased. Eye tracking indicates that dental attractiveness can alter the level of visual attention depending on the female models' facial attractiveness when viewed by laypersons. Copyright © 2015 American Association of Orthodontists. Published by Elsevier Inc. All rights reserved.
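
    The fixation measures used in this study (first location, count, and total duration per facial area) amount to binning raw fixations into areas of interest. A hypothetical sketch using rectangular AOIs (the study's actual AOIs were facial regions traced on each image, not boxes):

```python
def fixation_stats(fixations, aois):
    """Aggregate eye-tracking fixations per area of interest (AOI).

    fixations: list of (x, y, duration_ms) tuples in image coordinates.
    aois: dict mapping AOI name -> (x0, y0, x1, y1) bounding box.
    Returns {aoi: (count, total_duration_ms)}; fixations falling outside
    every box are tallied under 'other'.
    """
    stats = {name: [0, 0.0] for name in list(aois) + ["other"]}
    for x, y, dur in fixations:
        for name, (x0, y0, x1, y1) in aois.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                break                 # first matching AOI wins
        else:
            name = "other"            # no AOI contained the fixation
        stats[name][0] += 1
        stats[name][1] += dur
    return {k: tuple(v) for k, v in stats.items()}
```

    Per-viewer counts and durations aggregated this way are what feed the repeated-measures ANOVA reported above.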

  11. Influence of Objective Three-Dimensional Measures and Movement Images on Surgeon Treatment Planning for Lip Revision Surgery

    PubMed Central

    Trotman, Carroll-Ann; Phillips, Ceib; Faraway, Julian J.; Hartman, Terry; van Aalst, John A.

    2013-01-01

    Objective To determine whether a systematic evaluation of the facial soft tissues of patients with cleft lip and palate, using facial video images and objective three-dimensional measurements of movement, changes surgeons’ treatment plans for lip revision surgery. Design Prospective longitudinal study. Setting The University of North Carolina School of Dentistry. Patients, Participants A group of patients with repaired cleft lip and palate (n = 21), a noncleft control group (n = 37), and surgeons experienced in cleft care. Interventions Lip revision. Main Outcome Measures (1) facial photographic images; (2) facial video images during animations; (3) objective three-dimensional measurements of upper lip movement based on z scores; and (4) objective dynamic and visual three-dimensional measurement of facial soft tissue movement. Results With the use of the video images plus objective three-dimensional measures, changes were made to the problem list of the surgical treatment plan for 86% of the patients (95% confidence interval, 0.64 to 0.97) and the surgical goals for 71% of the patients (95% confidence interval, 0.48 to 0.89). The surgeon group varied in the percentage of patients for whom the problem list was modified, ranging from 24% (95% confidence interval, 8% to 47%) to 48% (95% confidence interval, 26% to 70%) of patients, and the percentage for whom the surgical goals were modified, ranging from 14% (95% confidence interval, 3% to 36%) to 48% (95% confidence interval, 26% to 70%) of patients. Conclusions For all surgeons, the additional assessment components of the systematic evaluation resulted in a change in clinical decision making for some patients. PMID:23855676

  12. Facial dysmorphism in Leigh syndrome with SURF-1 mutation and COX deficiency.

    PubMed

    Yüksel, Adnan; Seven, Mehmet; Cetincelik, Umran; Yeşil, Gözde; Köksal, Vedat

    2006-06-01

    Leigh syndrome is an inherited, progressive neurodegenerative disorder of infancy and childhood. Mutations in the nuclear SURF-1 gene are specifically associated with cytochrome C oxidase-deficient Leigh syndrome. This report describes two patients with similar facial features. One of them was a 2(1/2)-year-old male, and the other was a 3-year-old male with a mutation in the SURF-1 gene and facial dysmorphism including frontal bossing, brachycephaly, hypertrichosis, lateral displacement of the inner canthi, esotropia, maxillary hypoplasia, hypertrophic gums, irregularly placed teeth, upturned nostrils, low-set big ears, and retrognathia. The first patient's magnetic resonance imaging at 15 months of age indicated mild symmetric T2 prolongation involving the subthalamic nuclei. His second magnetic resonance imaging at 2 years old revealed symmetric T2 prolongation involving the subthalamic nuclei, substantia nigra, and medulla. In the second child, at the age of 2 the first magnetic resonance imaging documented heavy brainstem and subthalamic nuclei involvement. A second magnetic resonance imaging, performed when he was 3 years old, revealed diffuse involvement of the substantia nigra and hyperintense lesions of the central tegmental tract in addition to the previous lesions. The facial dysmorphism and magnetic resonance imaging findings observed in these cases can be specific findings in Leigh syndrome patients with cytochrome C oxidase deficiency. SURF-1 gene mutations should be specifically considered in such patients.

  13. Juvenile myelomonocytic leukemia presenting with facial nerve paresis: a unique presentation.

    PubMed

    Smith, Lorie B; Valdes, Yamily; Check, William E; Britt, Peter M; Frankel, Lawrence S

    2007-11-01

    Juvenile myelomonocytic leukemia (JMML) is a distinct myeloproliferative malignancy of early childhood with a varied clinical presentation that may include failure to thrive, malaise, fever, bleeding, pallor, lymphadenopathy, and hepatosplenomegaly. Skin, pulmonary, and gastrointestinal involvement have also been reported. There are no prior reports of central nervous system (CNS) involvement at diagnosis of this disease. This is a report of a 21-month-old boy who had a right facial paresis at presentation. A brain mass was demonstrated on magnetic resonance imaging, and cerebrospinal fluid analysis confirmed CNS leukemic infiltration. We report the presence of CNS infiltration as part of the natural course of JMML and provide a review of the literature.

  14. Electrical stimulation treatment for facial palsy after revision pleomorphic adenoma surgery

    PubMed Central

    Goldie, Simon; Sandeman, Jack; Cole, Richard; Dennis, Simon; Swain, Ian

    2016-01-01

    Surgery for pleomorphic adenoma recurrence presents a significant risk of facial nerve damage that can result in facial weakness, affecting patients’ ability to communicate, their mental health and their self-image. We report two case studies with marked facial weakness after resection of recurrent pleomorphic adenoma and their progress with electrical stimulation. Subjects received electrical stimulation twice daily for 24 weeks, during which photographs of expressions, facial measurements and Sunnybrook scores were recorded. Both subjects recovered good facial function, with Sunnybrook scores of 54 and 64 improving to 88 and 96, respectively. Neither subject demonstrated adverse effects of treatment. We conclude that electrical stimulation is a safe treatment and may improve facial palsy in patients after resection of recurrent pleomorphic adenoma. Larger studies would be difficult to pursue due to the low incidence of cases. PMID:27106613

  15. Computerized measurement of facial expression of emotions in schizophrenia.

    PubMed

    Alvino, Christopher; Kohler, Christian; Barrett, Frederick; Gur, Raquel E; Gur, Ruben C; Verma, Ragini

    2007-07-30

    Deficits in the ability to express emotions characterize several neuropsychiatric disorders and are a hallmark of schizophrenia, and there is a need for a method of quantifying expression, which is currently done by clinical ratings. This paper presents the development and validation of a computational framework for quantifying differences in emotional expression between patients with schizophrenia and healthy controls. Each face is modeled as a combination of elastic regions, and expression changes are modeled as a deformation between a neutral face and an expressive face. Functions of these deformations, known as regional volumetric difference (RVD) functions, form distinctive quantitative profiles of expressions. Employing pattern classification techniques, we designed expression classifiers for the four universal emotions of happiness, sadness, anger and fear by training on RVD functions of expression changes. The classifiers were cross-validated and then applied to facial expression images of patients with schizophrenia and healthy controls. The classification score for each image reflects the extent to which the expressed emotion matches the intended emotion. Group-wise statistical analysis revealed this score to be significantly different between healthy controls and patients, especially in the case of anger. The score also correlated with the clinical severity of flat affect. These results encourage the use of such deformation-based expression quantification measures in research and clinical applications that require the automated measurement of facial affect.
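
    As a rough illustration of the pipeline's final stage, regional difference profiles feeding a classifier: the sketch below substitutes a simple nearest-centroid rule for the paper's trained pattern classifiers, and treats the "regional volumetric difference" as a per-region sum of image change (a deliberate simplification of the deformation-based RVD functions).

```python
import numpy as np

def rvd_profile(neutral, expressive, region_masks):
    """Simplified regional difference profile: per elastic region (given as a
    boolean mask), the summed intensity change between the neutral and
    expressive face images."""
    diff = expressive.astype(float) - neutral.astype(float)
    return np.array([diff[m].sum() for m in region_masks])

def nearest_centroid_emotion(profile, centroids):
    """centroids: dict mapping emotion -> mean training profile; returns the
    emotion whose centroid is nearest in Euclidean distance."""
    return min(centroids, key=lambda e: np.linalg.norm(profile - centroids[e]))
```

    In the paper the per-image classification score (how well the expressed emotion matches the intended one) is what is compared between patients and controls; a distance-based analogue would be the margin between the nearest and second-nearest centroid.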

  16. Motion Imagery Processing and Exploitation (MIPE)

    DTIC Science & Technology

    2013-01-01

    facial recognition —i.e., the identification of a specific person. Object detection is often (but not always) considered a prerequisite for instance... The goal of segmentation is to distinguish objects and identify boundaries in images. Some of the earliest approaches to facial recognition involved... methods of instance recognition are at varying levels of maturity. Facial recognition methods are arguably the most mature; the technology is well

  17. A forgotten facial nerve tumour: granular cell tumour of the parotid and its implications for treatment.

    PubMed

    Lerut, B; Vosbeck, J; Linder, T E

    2011-04-01

    We present a rare case of a facial nerve granular cell tumour in the right parotid gland, in a 10-year-old boy. A parotid or neurogenic tumour was suspected, based on magnetic resonance imaging. Intra-operatively, strong adhesions to surrounding structures were found, and a midfacial nerve branch had to be sacrificed for complete tumour removal. Recent reports verify that granular cell tumours arise from Schwann cells of peripheral nerve branches. The rarity of this tumour within the parotid gland, its origin from peripheral nerves, its sometimes misleading imaging characteristics, and its rare presentation with facial weakness and pain all have considerable implications on the surgical strategy and pre-operative counselling. Fine needle aspiration cytology may confirm the neurogenic origin of this lesion. When resecting the tumour, the surgeon must anticipate strong adherence to the facial nerve and be prepared to graft, or sacrifice, certain branches of this nerve.

  18. Low-Income, African American and American Indian Children's Viewpoints on Body Image Assessment Tools and Body Satisfaction: A Mixed Methods Study.

    PubMed

    Heidelberger, Lindsay; Smith, Chery

    2018-03-03

    Objectives Pediatric obesity is complicated by many factors, including psychological issues such as body dissatisfaction. Body image assessment tools are used with children to measure their acceptance of their body shape or image. Limited research has been conducted with African American and American Indian children to understand their opinions of these assessment tools. This study investigated: (a) children's perceptions of body image and (b) differences between two body image instruments among low-income, multi-ethnic children. Methods This study uses mixed methodology, including focus groups (qualitative) and body image assessment instruments (quantitative). Fifty-one children participated (25 girls, 26 boys); 53% of children identified as African American and 47% as American Indian. The average age was 10.4 years. Open coding methods were used to identify themes from focus group data. SPSS was used for quantitative analysis. Results Children preferred the Figure Rating Scale (FRS/silhouette) instrument over the Children's Body Image Scale (CBIS/photo) because its body parts and facial features were more detailed. Children formed their body image perception under the influence of their parents and the media. Children verbalized that they have experienced negative consequences related to poor body image, including disordered eating habits, depression, and bullying. Healthy-weight children are also aware of the weight-related bullying that obese and overweight children face. Conclusions for Practice Children prefer that the images on a body image assessment tool have detailed facial features and are clothed. Further research into body image assessment tools for use with African American and American Indian children is needed.

  19. Three-dimensional Imaging Methods for Quantitative Analysis of Facial Soft Tissues and Skeletal Morphology in Patients with Orofacial Clefts: A Systematic Review

    PubMed Central

    Kuijpers, Mette A. R.; Chiu, Yu-Ting; Nada, Rania M.; Carels, Carine E. L.; Fudalej, Piotr S.

    2014-01-01

    Background Current guidelines for evaluating cleft palate treatments are mostly based on two-dimensional (2D) evaluation, but the use of three-dimensional (3D) imaging methods to assess treatment outcome is steadily rising. Objective To identify 3D imaging methods for quantitative assessment of soft tissue and skeletal morphology in patients with cleft lip and palate. Data sources Literature was searched using PubMed (1948–2012), EMBASE (1980–2012), Scopus (2004–2012), Web of Science (1945–2012), and the Cochrane Library. The last search was performed September 30, 2012. Reference lists were hand searched for potentially eligible studies. There was no language restriction. Study selection We included publications using 3D imaging techniques to assess facial soft tissue or skeletal morphology in patients older than 5 years with a cleft lip with or without cleft palate. We reviewed studies involving the facial region when the sample included at least 10 subjects with at least one cleft type. Only primary publications were included. Data extraction Independent extraction of data and quality assessments were performed by two observers. Results Five hundred full-text publications were retrieved; 144 met the inclusion criteria, with 63 high-quality studies. There were differences in study designs, topics studied, patient characteristics, and success measurements; therefore, only a systematic review could be conducted. The main 3D techniques used in cleft lip and palate patients are CT, CBCT, MRI, stereophotogrammetry, and laser surface scanning. These techniques are mainly used for soft tissue analysis, evaluation of bone grafting, and changes in the craniofacial skeleton. Digital dental casts are used to evaluate treatment and changes over time. Conclusion Available evidence implies that 3D imaging methods can be used for documentation of CLP patients. No data are available yet showing that 3D methods are more informative than conventional 2D methods. 
Further research is warranted to elucidate this. Systematic review registration International Prospective Register of Systematic Reviews, PROSPERO CRD42012002041 PMID:24710215

  20. Evaluation of appearance transfer and persistence in central face transplantation: a computer simulation analysis.

    PubMed

    Pomahac, Bohdan; Aflaki, Pejman; Nelson, Charles; Balas, Benjamin

    2010-05-01

    Partial facial allotransplantation is an emerging option for reconstruction of central facial defects, restoring function and aesthetic appearance. Ethical debate partly stems from uncertainty surrounding identity aspects of the procedure. There is no objective evidence regarding the effect of donors' transplanted facial structures on the appearance change of the recipients and its influence on facial recognition of donors and recipients. Full-face frontal-view color photographs of 100 volunteers were taken at a distance of 150 cm with a digital camera (Nikon/DX80). Photographs were taken in front of a blue background and with a neutral facial expression. Using image-editing software (Adobe Photoshop CS3), central facial transplantation was simulated between participants. Twenty observers performed a familiar-face recognition task to identify 40 post-transplant composite faces presented individually on a screen at a viewing distance of 60 cm, with an exposure time of 5 s. Each composite face combined a face familiar to the observers with an unfamiliar face. Trials were done with and without external facial features (head contour, hair and ears). Two variables were defined: 'Appearance Transfer' refers to the transfer of the donor's appearance to the recipient; 'Appearance Persistence' deals with the extent of the recipient's appearance change post-transplantation. A t-test was run to determine whether the rates of Appearance Transfer differed from Appearance Persistence. The average Appearance Transfer rate (2.6%) was significantly lower than the Appearance Persistence rate (66%) (P<0.001), indicating that transfer of the donor's appearance to the recipient is negligible, whereas recipients will be identified the majority of the time. External facial features were important in facial recognition of recipients, evidenced by a significant rise in Appearance Persistence from 19% in the absence of external features to 66% when those features were present (P<0.01). 
This study may be helpful in the informed consent process for prospective recipients. It may also be beneficial for the education of donors' families and is expected to positively affect their decision to consent to facial tissue donation. Copyright (c) 2009 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.

  1. Facial emotion processing in pediatric social anxiety disorder: Relevance of situational context.

    PubMed

    Schwab, Daniela; Schienle, Anne

    2017-08-01

    Social anxiety disorder (SAD) typically begins in childhood. Previous research has demonstrated that adult patients respond with elevated late positivity (LP) to negative facial expressions. In the present study on pediatric SAD, we investigated responses to negative facial expressions and the role of social context information. Fifteen children with SAD and 15 non-anxious controls were first presented with images of negative facial expressions with masked backgrounds. Following this, the complete images, which included context information, were shown. The negative expressions were either the result of an emotion-relevant elicitor (e.g., social exclusion) or an emotion-irrelevant elicitor (e.g., weight lifting). Relative to controls, the clinical group showed elevated parietal LP during face processing both with and without context information. The groups differed in their frontal LP depending on the type of context: in SAD patients, frontal LP was lower in emotion-relevant than in emotion-irrelevant contexts. We conclude that SAD patients direct more automatic attention toward negative facial expressions (parietal effect) and are less capable of integrating affective context information (frontal effect). Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. What is adapted in face adaptation? The neural representations of expression in the human visual system.

    PubMed

    Fox, Christopher J; Barton, Jason J S

    2007-01-05

    The neural representation of facial expression within the human visual system is not well defined. Using an adaptation paradigm, we examined aftereffects on expression perception produced by various stimuli. Adapting to a face, which was used to create morphs between two expressions, substantially biased expression perception within the morphed faces away from the adapting expression. This adaptation was not based on low-level image properties, as a different image of the same person displaying that expression produced equally robust aftereffects. Smaller but significant aftereffects were generated by images of different individuals, irrespective of gender. Non-face visual, auditory, or verbal representations of emotion did not generate significant aftereffects. These results suggest that adaptation affects at least two neural representations of expression: one specific to the individual (not the image), and one that represents expression across different facial identities. The identity-independent aftereffect suggests the existence of a 'visual semantic' for facial expression in the human visual system.

  3. Exaggerated perception of facial expressions is increased in individuals with schizotypal traits

    PubMed Central

    Uono, Shota; Sato, Wataru; Toichi, Motomi

    2015-01-01

    Emotional facial expressions are indispensable communicative tools, and social interactions involving facial expressions are impaired in some psychiatric disorders. Recent studies revealed that the perception of dynamic facial expressions was exaggerated in normal participants, and this exaggerated perception is weakened in autism spectrum disorder (ASD). Based on the notion that ASD and schizophrenia spectrum disorder are at two extremes of the continuum with respect to social impairment, we hypothesized that schizophrenic characteristics would strengthen the exaggerated perception of dynamic facial expressions. To test this hypothesis, we investigated the relationship between the perception of facial expressions and schizotypal traits in a normal population. We presented dynamic and static facial expressions, and asked participants to change an emotional face display to match the perceived final image. The presence of schizotypal traits was positively correlated with the degree of exaggeration for dynamic, as well as static, facial expressions. Among its subscales, the paranoia trait was positively correlated with the exaggerated perception of facial expressions. These results suggest that schizotypal traits, specifically the tendency to over-attribute mental states to others, exaggerate the perception of emotional facial expressions. PMID:26135081

  4. Exaggerated perception of facial expressions is increased in individuals with schizotypal traits.

    PubMed

    Uono, Shota; Sato, Wataru; Toichi, Motomi

    2015-07-02

    Emotional facial expressions are indispensable communicative tools, and social interactions involving facial expressions are impaired in some psychiatric disorders. Recent studies revealed that the perception of dynamic facial expressions was exaggerated in normal participants, and this exaggerated perception is weakened in autism spectrum disorder (ASD). Based on the notion that ASD and schizophrenia spectrum disorder are at two extremes of the continuum with respect to social impairment, we hypothesized that schizophrenic characteristics would strengthen the exaggerated perception of dynamic facial expressions. To test this hypothesis, we investigated the relationship between the perception of facial expressions and schizotypal traits in a normal population. We presented dynamic and static facial expressions, and asked participants to change an emotional face display to match the perceived final image. The presence of schizotypal traits was positively correlated with the degree of exaggeration for dynamic, as well as static, facial expressions. Among its subscales, the paranoia trait was positively correlated with the exaggerated perception of facial expressions. These results suggest that schizotypal traits, specifically the tendency to over-attribute mental states to others, exaggerate the perception of emotional facial expressions.

  5. High precision automated face localization in thermal images: oral cancer dataset as test case

    NASA Astrophysics Data System (ADS)

    Chakraborty, M.; Raman, S. K.; Mukhopadhyay, S.; Patsa, S.; Anjum, N.; Ray, J. G.

    2017-02-01

    Automated face detection is the pivotal step in computer-vision-aided facial medical diagnosis and biometrics. This paper presents an automatic, subject-adaptive framework for accurate face detection in the long infrared spectrum on our database for oral cancer detection, consisting of malignant, precancerous and normal subjects of varied age groups. Previous work on oral cancer detection using Digital Infrared Thermal Imaging (DITI) reveals that patients and normal subjects differ significantly in their facial thermal distribution. Therefore, it is a challenging task to formulate a completely adaptive framework to veraciously localize the face in such a subject-specific modality. Our model first extracts the most probable facial regions by minimum error thresholding, followed by adaptive methods that leverage the horizontal and vertical projections of the segmented thermal image. Additionally, the model incorporates domain knowledge by exploiting the temperature difference between strategic locations of the face. To the best of our knowledge, this is the pioneering work on detecting faces in thermal facial images comprising both patients and normal subjects. Previous work on face detection has not specifically targeted automated medical diagnosis; the face bounding boxes returned by those algorithms are thus loose and not apt for further medical automation. Our algorithm significantly outperforms contemporary face detection algorithms in terms of commonly used metrics for evaluating face detection accuracy. Since our method has been tested on a challenging dataset consisting of both patients and normal subjects of diverse age groups, it can be seamlessly adapted to any DITI-guided facial healthcare or biometric application.
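    The pipeline this abstract describes (segment warm pixels by thresholding, then scan the row and column projections of the mask for the face bounding box) can be illustrated in a few lines. This is a minimal sketch, not the authors' implementation: it substitutes Otsu's method for the paper's minimum error thresholding and assumes an 8-bit grey-level range.

```python
import numpy as np

def otsu_threshold(img):
    # Stand-in for the paper's minimum-error thresholding: choose the
    # grey level that maximizes the between-class variance.
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    total = img.size
    cum = np.cumsum(hist)
    cum_mean = np.cumsum(hist * np.arange(256))
    best_t, best_var = 0, -1.0
    for t in range(1, 255):
        w0 = cum[t] / total          # background weight
        w1 = 1.0 - w0                # foreground weight
        if w0 == 0 or w1 == 0:
            continue
        mu0 = cum_mean[t] / cum[t]
        mu1 = (cum_mean[-1] - cum_mean[t]) / (total - cum[t])
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def localize_face(img):
    # Segment warm (facial) pixels, then use row/column projections of
    # the binary mask to find the bounding box of the face region.
    mask = img > otsu_threshold(img)
    rows = mask.sum(axis=1)   # horizontal projection
    cols = mask.sum(axis=0)   # vertical projection
    ys = np.nonzero(rows)[0]
    xs = np.nonzero(cols)[0]
    return int(ys[0]), int(ys[-1]), int(xs[0]), int(xs[-1])
```

    On a synthetic thermal frame with a single warm rectangle, the returned tuple is the (top, bottom, left, right) extent of that rectangle; a real frame would need the paper's additional subject-adaptive refinements.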

  6. Dose assessment of digital tomosynthesis in pediatric imaging

    NASA Astrophysics Data System (ADS)

    Gislason, Amber; Elbakri, Idris A.; Reed, Martin

    2009-02-01

    We investigated the potential for digital tomosynthesis (DT) to reduce pediatric x-ray dose while maintaining image quality. We utilized the DT feature (VolumeRadTM) on the GE DefiniumTM 8000 flat panel system installed in the Winnipeg Children's Hospital. Facial bones, cervical spine, thoracic spine, and knee of children aged 5, 10, and 15 years were represented by acrylic phantoms for DT dose measurements. Effective dose was estimated for DT and for corresponding digital radiography (DR) and computed tomography (CT) patient image sets. Anthropomorphic phantoms of selected body parts were imaged by DR, DT, and CT. Pediatric radiologists rated visualization of selected anatomic features in these images. Dose and image quality comparisons between DR, DT, and CT determined the usefulness of tomosynthesis for pediatric imaging. CT effective dose was highest; total DR effective dose was not always lowest, depending on how many projections were in the DR image set. For the cervical spine, DT dose was close to, and occasionally lower than, DR dose. Expert radiologists rated DT visualization of the central facial complex in a skull phantom as better than DR and comparable to CT. Digital tomosynthesis has a significantly lower dose than CT. This study has demonstrated that DT shows promise to replace CT for some facial bone and spinal diagnoses. Other clinical applications will be evaluated in the future.

  7. Classifying and Standardizing Panfacial Trauma With a New Bony Facial Trauma Score.

    PubMed

    Casale, Garrett G A; Fishero, Brian A; Park, Stephen S; Sochor, Mark; Heltzel, Sara B; Christophel, J Jared

    2017-01-01

    The practice of facial trauma surgery would benefit from a useful quantitative scale that measures the extent of injury. The objective was to develop a facial trauma scale that incorporates only reducible fractures and can be reliably communicated to health care professionals. A cadaveric tissue study was conducted from October 1 to 3, 2014. Ten cadaveric heads were subjected to various degrees of facial trauma by dropping a fixed mass onto each head. The heads were then imaged with fine-cut computed tomography. A Bony Facial Trauma Scale (BFTS) for grading facial trauma was developed based only on clinically relevant (reducible) fractures. The traumatized cadaveric heads were then scored using this scale as well as 3 existing scoring systems. Regression analysis was used to determine the correlation between the degree of incursion of the fixed mass on the cadaveric heads and trauma severity as rated by the scoring systems, and statistical analysis was performed to determine the correlation of BFTS scores with those of the 3 existing scoring systems. Among all 10 cadaveric specimens (9 male donors and 1 female donor; age range, 41-87 years; mean age, 57.2 years), the facial trauma scores obtained using the BFTS correlated with depth of penetration of the mass into the face (odds ratio, 4.071; 95% CI, 1.676-6.448; P = .007) when controlling for presence of dentition and age. The BFTS scores also correlated with scores obtained using 3 existing facial trauma models (Facial Fracture Severity Scale, rs = 0.920; Craniofacial Disruption Score, rs = 0.945; and ZS Score, rs = 0.902; P < .001 for all 3 models). 
In addition, the BFTS was found to have excellent interrater reliability (0.908; P = .001), similar to that of the other 3 tested trauma scales. Scores obtained using the BFTS were not correlated with dentition (odds ratio, 0.482; 95% CI, -0.087 to 1.053; P = .08; measured as absolute number of teeth) or age of the cadaveric donor (odds ratio, 0.436; 95% CI, -0.068 to 0.944; P = .08). Facial trauma severity as measured by the BFTS correlated with depth of penetration of the fixed mass into the face. In this study, the BFTS was clinically relevant, had high fidelity in communicating the fractures sustained in facial trauma, and correlated well with previously validated models.
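    The rs values reported for agreement with the three existing trauma models are Spearman rank correlations. As an illustration only (not the study's analysis code), the coefficient can be computed as the Pearson correlation of the ranks:

```python
import numpy as np

def spearman_rs(x, y):
    # Spearman rank correlation: Pearson correlation of the rank vectors,
    # with tied values assigned their average rank.
    def rank(a):
        a = np.asarray(a, float)
        order = a.argsort()
        ranks = np.empty_like(a)
        ranks[order] = np.arange(1, len(a) + 1)
        for v in np.unique(a):        # average ranks over ties
            m = a == v
            ranks[m] = ranks[m].mean()
        return ranks

    rx, ry = rank(x), rank(y)
    rxc, ryc = rx - rx.mean(), ry - ry.mean()
    return float((rxc @ ryc) / np.sqrt((rxc @ rxc) * (ryc @ ryc)))
```

    Because it operates on ranks, any strictly monotone relationship between two scoring systems yields rs = 1 even when their numeric scales differ, which is why it suits comparisons between different trauma scores.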

  8. Adaptation of facial synthesis to parameter analysis in MPEG-4 visual communication

    NASA Astrophysics Data System (ADS)

    Yu, Lu; Zhang, Jingyu; Liu, Yunhai

    2000-12-01

    In MPEG-4, Facial Definition Parameters (FDPs) and Facial Animation Parameters (FAPs) are defined to animate a facial object. Most previous facial animation reconstruction systems focused on synthesizing animation from manually or automatically generated FAPs, but not from FAPs extracted from natural video scenes. In this paper, an analysis-synthesis MPEG-4 visual communication system is established in which facial animation is reconstructed from FAPs extracted from natural video scenes.

  9. Regional Brain Responses Are Biased Toward Infant Facial Expressions Compared to Adult Facial Expressions in Nulliparous Women.

    PubMed

    Li, Bingbing; Cheng, Gang; Zhang, Dajun; Wei, Dongtao; Qiao, Lei; Wang, Xiangpeng; Che, Xianwei

    2016-01-01

    Recent neuroimaging studies suggest that neutral infant faces compared to neutral adult faces elicit greater activity in brain areas associated with face processing, attention, empathic response, reward, and movement. However, whether infant facial expressions evoke larger brain responses than adult facial expressions remains unclear. Here, we performed event-related functional magnetic resonance imaging in nulliparous women while they were presented with images of matched unfamiliar infant and adult facial expressions (happy, neutral, and uncomfortable/sad) in a pseudo-randomized order. We found that the bilateral fusiform and right lingual gyrus were overall more activated during the presentation of infant facial expressions compared to adult facial expressions. Uncomfortable infant faces compared to sad adult faces evoked greater activation in the bilateral fusiform gyrus, precentral gyrus, postcentral gyrus, posterior cingulate cortex-thalamus, and precuneus. Neutral infant faces activated larger brain responses in the left fusiform gyrus compared to neutral adult faces. Happy infant faces compared to happy adult faces elicited larger responses in areas of the brain associated with emotion and reward processing using a more liberal threshold of p < 0.005 uncorrected. Furthermore, the level of the test subjects' Interest-In-Infants was positively associated with the intensity of right fusiform gyrus response to infant faces and uncomfortable infant faces compared to sad adult faces. In addition, the Perspective Taking subscale score on the Interpersonal Reactivity Index-Chinese was significantly correlated with precuneus activity during uncomfortable infant faces compared to sad adult faces. 
Our findings suggest that regional brain areas may bias cognitive and emotional responses to infant facial expressions compared to adult facial expressions among nulliparous women, and this bias may be modulated by individual differences in Interest-In-Infants and perspective taking ability.

  10. Regional Brain Responses Are Biased Toward Infant Facial Expressions Compared to Adult Facial Expressions in Nulliparous Women

    PubMed Central

    Zhang, Dajun; Wei, Dongtao; Qiao, Lei; Wang, Xiangpeng; Che, Xianwei

    2016-01-01

    Recent neuroimaging studies suggest that neutral infant faces compared to neutral adult faces elicit greater activity in brain areas associated with face processing, attention, empathic response, reward, and movement. However, whether infant facial expressions evoke larger brain responses than adult facial expressions remains unclear. Here, we performed event-related functional magnetic resonance imaging in nulliparous women while they were presented with images of matched unfamiliar infant and adult facial expressions (happy, neutral, and uncomfortable/sad) in a pseudo-randomized order. We found that the bilateral fusiform and right lingual gyrus were overall more activated during the presentation of infant facial expressions compared to adult facial expressions. Uncomfortable infant faces compared to sad adult faces evoked greater activation in the bilateral fusiform gyrus, precentral gyrus, postcentral gyrus, posterior cingulate cortex-thalamus, and precuneus. Neutral infant faces activated larger brain responses in the left fusiform gyrus compared to neutral adult faces. Happy infant faces compared to happy adult faces elicited larger responses in areas of the brain associated with emotion and reward processing using a more liberal threshold of p < 0.005 uncorrected. Furthermore, the level of the test subjects’ Interest-In-Infants was positively associated with the intensity of right fusiform gyrus response to infant faces and uncomfortable infant faces compared to sad adult faces. In addition, the Perspective Taking subscale score on the Interpersonal Reactivity Index-Chinese was significantly correlated with precuneus activity during uncomfortable infant faces compared to sad adult faces. 
Our findings suggest that regional brain areas may bias cognitive and emotional responses to infant facial expressions compared to adult facial expressions among nulliparous women, and this bias may be modulated by individual differences in Interest-In-Infants and perspective taking ability. PMID:27977692

  11. A 3-dimensional anthropometric evaluation of facial morphology among Chinese and Greek population.

    PubMed

    Liu, Yun; Kau, Chung How; Pan, Feng; Zhou, Hong; Zhang, Qiang; Zacharopoulos, Georgios Vasileiou

    2013-07-01

    The use of 3-dimensional (3D) facial imaging has taken on greater importance as orthodontists use the soft tissue paradigm in the evaluation of skeletal disproportion. Studies have shown that faces differ across populations. To date, no anthropometric evaluations have been made of Chinese and Greek faces. The aim of this study was to compare the facial morphologies of Greeks and Chinese using 3D facial anthropometric landmarks. Three-dimensional facial images were acquired via a commercially available stereophotogrammetric camera capture system. The 3dMD face system captured 245 subjects from 2 population groups (Chinese [n = 72] and Greek [n = 173]), and each population was categorized into male and female groups for evaluation. All subjects were between 18 and 30 years old and had no apparent facial anomalies. Twenty-five anthropometric landmarks were identified on the 3D face of each subject. Soft tissue nasion was set as the "zeroed" reference landmark. Twenty landmark distances were constructed and evaluated within 3 dimensions of space. Six angles, 4 proportions, and 1 construct were also calculated. The Student t test was used to analyze each data set obtained within each subgroup. Distinct facial differences were noted between the subgroups evaluated. When comparing the same sex across the 2 populations (eg, male Greeks and male Chinese), significant differences were noted in more than 80% of the landmark distances calculated. One hundred percent of the angular measurements were significantly different, and the Chinese faces were broader in their width-to-height proportions. In evaluating the lips relative to the esthetic line, the Chinese population had more protrusive lips. There are differences in the facial morphologies of subjects from a Chinese population versus those from a Greek population.

  12. Obstructive Sleep Apnea in Women: Study of Speech and Craniofacial Characteristics

    PubMed Central

    Tyan, Marina; Fernández Pozo, Rubén; Toledano, Doroteo; Lopez Gonzalo, Eduardo; Alcazar Ramirez, Jose Daniel; Hernandez Gomez, Luis Alfonso

    2017-01-01

    Background Obstructive sleep apnea (OSA) is a common sleep disorder characterized by frequent cessations of breathing lasting 10 seconds or longer. The diagnosis of OSA is performed through an expensive procedure, which requires an overnight stay at the hospital. This has led to several proposals based on the analysis of patients’ facial images and speech recordings as an attempt to develop simpler and cheaper methods to diagnose OSA. Objective The objective of this study was to analyze possible relationships between OSA and speech and facial features in a female population, whether these possible connections may be affected by the specific clinical characteristics of the OSA population and, more specifically, how the connection between OSA and speech and facial features can be affected by gender. Methods All subjects were Spanish patients suspected of suffering from OSA and referred to a sleep disorders unit. Voice recordings and photographs were collected in a supervised but not highly controlled way, testing a scenario close to realistic clinical practice in which OSA is assessed using an app running on a mobile device. Furthermore, clinical variables such as weight, height, age, and cervical perimeter, which are usually reported as predictors of OSA, were also gathered. Acoustic analysis is centered on sustained vowels. Facial analysis consists of a set of local craniofacial features related to OSA, which were extracted from images after detecting facial landmarks using active appearance models. To study the possible connection of OSA with speech and craniofacial features, correlations among the apnea-hypopnea index (AHI), clinical variables, and acoustic and facial measurements were analyzed. Results The results obtained for the female population indicate mainly weak correlations (r values between .20 and .39). 
Correlations between AHI, clinical variables, and speech features show the prevalence of formant frequencies over bandwidths, with F2/i/ being the most appropriate formant frequency for OSA prediction in women. Results obtained for the male population indicate mainly very weak correlations (r values between .01 and .19). In this case, bandwidths prevail over formant frequencies. Correlations between AHI, clinical variables, and craniofacial measurements are very weak. Conclusions In accordance with previous studies, some clinical variables are found to be good predictors of OSA. Besides, strong correlations are found between AHI and some clinical variables with speech and facial features. Regarding speech features, the results show the prevalence of the formant frequency F2/i/ over the rest of the features as an OSA-predictive feature for the female population. Although the correlation reported is weak, this study aims to find traces that could explain the possible connection between OSA and speech in women. In the case of craniofacial measurements, the results show that some features that can be used for predicting OSA in male patients are not suitable for testing the female population. PMID:29109068
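    The r-value bands quoted in this record (.01-.19 "very weak", .20-.39 "weak") correspond to a conventional interpretation of the Pearson correlation coefficient. A minimal sketch, independent of the study's actual tooling, of computing and labelling such a coefficient between AHI values and a speech feature:

```python
import numpy as np

def pearson_r(x, y):
    # Pearson correlation coefficient between two paired feature vectors,
    # e.g. per-subject AHI values and a formant frequency such as F2/i/.
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))

def strength(r):
    # Conventional descriptive bands matching those used in the record.
    a = abs(r)
    if a < 0.20:
        return "very weak"
    if a < 0.40:
        return "weak"
    if a < 0.60:
        return "moderate"
    return "strong"
```

    A perfectly linear relationship gives r = 1, while the study's female-population results fall in the 0.20-0.39 "weak" band.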

  13. Facial anthropometric measurements in Iranian male workers using Digimizer version 4.1.1.0 image analysis software: a pilot study.

    PubMed

    Salvarzi, Elham; Choobineh, Alireza; Jahangiri, Mehdi; Keshavarzi, Sareh

    2018-02-26

    Craniometry is a subset of anthropometry that measures the anatomical dimensions of the head and face (craniofacial indicators). These dimensions are used in designing devices applied to the facial area, including respirators. This study was conducted to measure the craniofacial dimensions of Iranian male workers required for the design of face protective equipment. Facial anthropometric dimensions of 50 randomly selected Iranian male workers were measured by a photographic method: ten facial dimensions were extracted from photographs and measured with Digimizer version 4.1.1.0 image analysis software. The mean, standard deviation, and 5th, 50th and 95th percentiles for each dimension were determined, and the relevant data bank was established. The anthropometric data bank for the 10 dimensions required for respirator design was thus provided for the target group with photo-anthropometric methods. The results showed that Iranian face dimensions differ from those of other nations and ethnicities. In this pilot study, the anthropometric dimensions required for half-mask respirator design for Iranian male workers were measured with Digimizer version 4.1.1.0. The resulting anthropometric tables could be useful for the design of personal face protective equipment.
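    The summary statistics reported for each facial dimension (mean, standard deviation, and 5th/50th/95th percentiles) can be reproduced from raw measurements as follows. This is an illustrative sketch, not the study's Digimizer workflow:

```python
import numpy as np

def anthro_summary(values):
    # Summary statistics of one facial dimension, matching the fields
    # reported in the anthropometric data bank: mean, sample SD, and
    # the 5th, 50th, and 95th percentiles.
    v = np.asarray(values, float)
    return {
        "mean": float(v.mean()),
        "sd": float(v.std(ddof=1)),          # sample standard deviation
        "p5": float(np.percentile(v, 5)),    # small-face design bound
        "p50": float(np.percentile(v, 50)),  # median
        "p95": float(np.percentile(v, 95)),  # large-face design bound
    }
```

    The 5th and 95th percentiles are the values typically carried into sizing decisions for protective equipment, since a design covering that range fits roughly 90% of the measured population.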

  14. An analysis of maxillary anterior teeth: facial and dental proportions.

    PubMed

    Hasanreisoglu, Ufuk; Berksun, Semih; Aras, Kerem; Arslan, Ilker

    2005-12-01

    The size and form of the maxillary anterior teeth are important in achieving pleasing dental and facial esthetics. However, few scientific data have been established as criteria for evaluating these morphological features. This study analyzed the clinical crown dimensions of maxillary anterior teeth to determine whether consistent relationships exist between tooth width and several facial measurements in a subset of the Turkish population. Full-face and anterior tooth images of 100 Turkish dental students, viewed from the front and engaged in maximum smiling, were recorded with digital photography under standardized conditions. Gypsum casts of the maxillary arches of the subjects were also made. The dimensions of the anterior teeth, the occurrence of the golden ratio, the difference between the actual and perceived sizes, and the relationship between the anterior teeth and several facial measurements by gender were analyzed using the information obtained from both the computer images and the casts. One-sample, 2-sample, and paired t tests, repeated-measures analysis of variance, and Duncan multiple-range tests were performed to analyze the data (alpha=.05). The dimensions of the central incisors (P<.05) and canines (P<.01) varied by gender. The existence of the so-called "golden proportion" for the maxillary anterior teeth as a whole was not found. Significant differences emerged when the mean ratios between various perceived widths were compared with their ideal golden ratios (P<.01). Proportional relationships between the bizygomatic width and the width of the central incisor, and between the intercanine distance and the interalar width in women, were observed. The maxillary central incisor and canine dimensions of men were greater than those of women in the Turkish population studied, with the canines showing the greatest gender variation. Neither a golden proportion nor any other recurrent proportion for all anterior teeth was determined. 
Bizygomatic width and interalar width may serve as references for establishing the ideal width of the maxillary anterior teeth, particularly in women.

  15. Development of facial aging simulation system combined with three-dimensional shape prediction from facial photographs

    NASA Astrophysics Data System (ADS)

    Nagata, Takeshi; Matsuzaki, Kazutoshi; Taniguchi, Kei; Ogawa, Yoshinori; Imaizumi, Kazuhiko

    2017-03-01

    Three-dimensional facial aging changes of the same individuals over periods of more than 10 years are being measured at the National Research Institute of Police Science. Using these measurements as training data, we performed machine learning and developed a system that converts an input 2D face image into a 3D face model and simulates aging. Here, we report on the processing and accuracy of our system.
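    The 2D-to-3D conversion step can be caricatured as learning a regression from 2D landmark features to 3D shape coefficients; below is a minimal linear least-squares sketch on synthetic data, a crude stand-in for the machine learning the authors describe:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training set: n faces, each a 2D landmark vector (d2 values)
# paired with 3D shape coefficients (d3 values).  A linear map is an
# illustrative stand-in for the learned 2D-to-3D model.
n, d2, d3 = 200, 40, 10
true_W = rng.normal(size=(d2, d3))
X = rng.normal(size=(n, d2))                       # 2D landmark features
Y = X @ true_W + 0.01 * rng.normal(size=(n, d3))   # 3D coefficients + noise

# Fit W by least squares so that Y ~= X @ W
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Predict 3D shape coefficients for a new 2D face
x_new = rng.normal(size=(1, d2))
y_pred = x_new @ W
print("max entry error in recovered map:", float(np.abs(W - true_W).max()))
```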

  16. Randomized clinical trial of facial acupuncture with or without body acupuncture for treatment of melasma.

    PubMed

    Rerksuppaphol, Lakkana; Charoenpong, Theekapun; Rerksuppaphol, Sanguansak

    2016-02-01

    To evaluate the efficacy of acupuncture in treating facial melasma, comparing facial acupuncture alone with combined facial and body acupuncture. Women with melasma were randomly assigned to: 1) facial acupuncture (n = 20); or 2) facial/body acupuncture (n = 21). Each group received 2 sessions per week for 8 weeks. The melasma area and darkness of its pigmentation were assessed using digital images. In the facial/body and facial acupuncture groups, 95.2% and 90% of participants, respectively, had decreased melasma areas, with mean reductions of 2.6 cm(2) (95%CI 1.6-3.6 cm(2)) and 2.4 cm(2) (95%CI 1.6-3.3 cm(2)), respectively. 66.7% (facial/body acupuncture) and 80.0% (facial acupuncture) of participants had lighter melasma pigmentation compared with their baselines (p-value = 0.482). Facial acupuncture, with or without body acupuncture, was shown to be effective in decreasing the size of melasma areas. This study is registered with the Thai Clinical Trial Registry (TCTR20140903004). Copyright © 2015 Elsevier Ltd. All rights reserved.
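    A mean reduction with a 95% confidence interval of the kind reported here can be computed as follows; the per-patient values are hypothetical, not the trial's raw data, and a normal-approximation interval is used (a t critical value would be slightly wider):

```python
import math

# Hypothetical per-patient reductions in melasma area (cm^2) -- illustrative.
reductions = [3.1, 2.0, 1.5, 4.2, 2.8, 1.9, 3.5, 2.2, 2.6, 2.4]

n = len(reductions)
mean = sum(reductions) / n
var = sum((x - mean) ** 2 for x in reductions) / (n - 1)  # sample variance
se = math.sqrt(var / n)                                   # standard error

# Normal-approximation 95% confidence interval for the mean reduction
lo, hi = mean - 1.96 * se, mean + 1.96 * se
print(f"mean reduction {mean:.2f} cm^2, 95% CI ({lo:.2f}, {hi:.2f})")
```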

  17. Electrical stimulation treatment for facial palsy after revision pleomorphic adenoma surgery.

    PubMed

    Goldie, Simon; Sandeman, Jack; Cole, Richard; Dennis, Simon; Swain, Ian

    2016-04-22

    Surgery for pleomorphic adenoma recurrence presents a significant risk of facial nerve damage that can result in facial weakness affecting patients' ability to communicate, mental health and self-image. We report two case studies that had marked facial weakness after resection of recurrent pleomorphic adenoma and their progress with electrical stimulation. Subjects received electrical stimulation twice daily for 24 weeks, during which photographs of expressions, facial measurements and Sunnybrook scores were recorded. Both subjects recovered good facial function, demonstrating Sunnybrook scores of 54 and 64 that improved to 88 and 96, respectively. Neither subject demonstrated adverse effects of treatment. We conclude that electrical stimulation is a safe treatment and may improve facial palsy in patients after resection of recurrent pleomorphic adenoma. Larger studies would be difficult to pursue due to the low incidence of cases. Published by Oxford University Press and JSCR Publishing Ltd. All rights reserved. © The Author 2016.

  18. Reaction of facial soft tissues to treatment with a Herbst appliance.

    PubMed

    Meyer-Marcotty, P; Kochel, J; Richter, U; Richter, F; Stellzig-Eisenhauer, Angelika

    2012-04-01

    The objective of this prospective longitudinal study was to investigate the reaction of facial soft tissues to treatment with a Herbst appliance. We aimed to quantify three-dimensionally (3D) the isolated effect of the Herbst appliance and volume changes in the lip profile. The 3D data of the facial soft tissues of 34 patients with skeletal Class II (17 female and 17 male, mean age 13.5 ± 1.8 years) were prepared in a standardized manner immediately before (T1) and after (T2) treatment with a Herbst appliance. Anthropometric evaluation was carried out in sagittal and vertical dimensions. To quantify volume changes, pretherapeutic and posttherapeutic images were superimposed three-dimensionally and the difference volumes calculated. Following testing for normal distribution, a statistical analysis was carried out using the paired t test. We observed ventral development of the soft tissues of the lower jaw with flattening of the profile curvature and anterior displacement of the sublabial region in a total of 27 patients. Anterior facial height was lengthened and the facial depth at the lower jaw increased. The largest percentage changes were noted in the lip profile, with a reduction in the red margin of the upper lip and an increase in lower lip height. We also observed a reduction of the sublabial fold in conjunction with a simultaneous increase in volume. The influence of the Herbst appliance on the facial soft tissues is expected to result in a positive treatment outcome, particularly in patients with a convex profile, a retrusive lower lip, and a marked sublabial fold. We observed a broad clinical spectrum of individual reactions in the facial soft tissues. It is, thus, not possible to detect a linear relationship between the Herbst treatment and soft tissue changes, making soft tissue changes difficult to predict.

  19. Promising Technique for Facial Nerve Reconstruction in Extended Parotidectomy.

    PubMed

    Villarreal, Ithzel Maria; Rodríguez-Valiente, Antonio; Castelló, Jose Ramon; Górriz, Carmen; Montero, Oscar Alvarez; García-Berrocal, Jose Ramon

    2015-11-01

    Malignant tumors of the parotid gland account for scarcely 5% of all head and neck tumors. Most of these neoplasms have a high tendency for recurrence, local infiltration, perineural extension, and metastasis. Although uncommon, these malignant tumors require complex surgical treatment, sometimes involving a total parotidectomy including a complete facial nerve resection. Severe functional and aesthetic facial defects are the result of a complete sacrifice or injury to isolated branches, causing considerable distress to patients and a major challenge for reconstructive surgeons. A case of a 54-year-old, systemically healthy male patient with a 4-month complaint of pain and swelling on the right side of the face is presented. The patient reported a rapid increase in the size of the lesion over the past 2 months. Imaging tests and histopathological analysis reported an adenoid cystic carcinoma. A complete parotidectomy was carried out with an intraoperative notice of facial nerve infiltration requiring a second intervention for nerve and defect reconstruction. A free ALT flap with vascularized nerve grafts was the surgical choice. A 6-month follow-up showed partial recovery of facial movement, and the facial defect was repaired. It is of critical importance to restore function to patients with facial nerve injury. Vascularized nerve grafts, in many clinical and experimental studies, have been shown to result in better nerve regeneration than conventional non-vascularized nerve grafts. Nevertheless, there are factors that may affect the degree, speed and regeneration rate regarding the free fasciocutaneous flap. In complex head and neck defects following a total parotidectomy, the extended free fasciocutaneous ALT (anterior-lateral thigh) flap with a vascularized nerve graft is ideally suited for the reconstruction of the injured site. Donor-site morbidity is low and additional surgical time is minimal compared with the time of a single ALT flap transfer.

  20. A novel computer system for the evaluation of nasolabial morphology, symmetry and aesthetics after cleft lip and palate treatment. Part 1: General concept and validation.

    PubMed

    Pietruski, Piotr; Majak, Marcin; Debski, Tomasz; Antoszewski, Boguslaw

    2017-04-01

    The need for a widely accepted method suitable for a multicentre quantitative evaluation of facial aesthetics after surgical treatment of cleft lip and palate (CLP) has been emphasized for years. The aim of this study was to validate a novel computer system 'Analyse It Doc' (A.I.D.) as a tool for objective anthropometric analysis of the nasolabial region. An indirect anthropometric analysis of facial photographs was conducted with the A.I.D. system and Adobe Photoshop/ImageJ software. Intra-rater and inter-rater reliability and the time required for the analysis were estimated separately for each method and compared. Analysis with the A.I.D. system was nearly 10-fold faster than that with the reference evaluation method. The A.I.D. system provided strong inter-rater and intra-rater correlations for linear, angular and area measurements of the nasolabial region, as well as a significantly higher accuracy and reproducibility of angular measurements in submental view. No statistically significant inter-method differences were found for other measurements. The novel computer system presented here is suitable for simple, time-efficient and reliable multicentre photogrammetric analyses of the nasolabial region in CLP patients and healthy subjects. Copyright © 2017 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.
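    Inter-rater reliability of the kind validated here is often summarized by a simple correlation between two raters' repeated measurements; a sketch with hypothetical values (the study's actual reliability statistics may have used other indices, e.g. intraclass correlation):

```python
import numpy as np

# Hypothetical nasolabial linear measurements (mm) from two raters on
# the same 8 photographs -- illustrative, not the study's data.
rater_a = np.array([12.1, 10.4, 11.8, 13.0, 9.9, 12.5, 11.2, 10.8])
rater_b = np.array([12.3, 10.1, 11.9, 12.8, 10.2, 12.4, 11.0, 11.0])

# Pearson correlation as a simple inter-rater agreement index
r = np.corrcoef(rater_a, rater_b)[0, 1]
print(f"inter-rater r = {r:.3f}")
```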

  1. ALE meta-analysis on facial judgments of trustworthiness and attractiveness.

    PubMed

    Bzdok, D; Langner, R; Caspers, S; Kurth, F; Habel, U; Zilles, K; Laird, A; Eickhoff, Simon B

    2011-01-01

    Faces convey a multitude of information in social interaction, among which are trustworthiness and attractiveness. Humans process and evaluate these two dimensions very quickly due to their great adaptive importance. Trustworthiness evaluation is crucial for modulating behavior toward strangers; attractiveness evaluation is a crucial factor for mate selection, possibly providing cues for reproductive success. As both dimensions rapidly guide social behavior, this study tests the hypothesis that both judgments may be subserved by overlapping brain networks. To this end, we conducted an activation likelihood estimation meta-analysis on 16 functional magnetic resonance imaging studies pertaining to facial judgments of trustworthiness and attractiveness. Throughout combined, individual, and conjunction analyses on those two facial judgments, we observed consistent maxima in the amygdala which corroborates our initial hypothesis. This finding supports the contemporary paradigm shift extending the amygdala's role from dominantly processing negative emotional stimuli to processing socially relevant ones. We speculate that the amygdala filters sensory information with evolutionarily conserved relevance. Our data suggest that such a role includes not only "fight-or-flight" decisions but also social behaviors with longer term pay-off schedules, e.g., trustworthiness and attractiveness evaluation. © Springer-Verlag 2010
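    The core of activation likelihood estimation, modelling each reported focus as a Gaussian and combining per-voxel probabilities as a union, can be sketched in one dimension; coordinates and kernel width here are illustrative, not taken from the meta-analysis:

```python
import numpy as np

# Toy 1-D activation likelihood estimation (ALE): each reported focus is
# modelled as a Gaussian, and per-voxel values are combined as the
# probability that at least one focus is "active" there.
grid = np.linspace(0, 100, 101)          # voxel coordinates (arbitrary units)
foci = [38.3, 41.2, 75.4]                # hypothetical reported foci
sigma = 4.0                              # smoothing kernel width (assumed)

def gaussian(x, mu, s):
    return np.exp(-0.5 * ((x - mu) / s) ** 2)

# ALE value per voxel: 1 - prod(1 - p_i), the union of the modelled
# activation probabilities over all foci.
probs = np.stack([gaussian(grid, f, sigma) for f in foci])
ale = 1.0 - np.prod(1.0 - probs, axis=0)

peak = grid[np.argmax(ale)]
print("ALE peak near coordinate:", float(peak))
```

    Consistent maxima across studies, such as the amygdala convergence reported above, show up as voxels where this union stays high over many experiments.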

  2. Image jitter enhances visual performance when spatial resolution is impaired.

    PubMed

    Watson, Lynne M; Strang, Niall C; Scobie, Fraser; Love, Gordon D; Seidel, Dirk; Manahilov, Velitchko

    2012-09-06

    Visibility of low-spatial frequency stimuli improves when their contrast is modulated at 5 to 10 Hz compared with stationary stimuli. Therefore, temporal modulations of visual objects could enhance the performance of low vision patients who primarily perceive images of low-spatial frequency content. We investigated the effect of retinal-image jitter on word recognition speed and facial emotion recognition in subjects with central visual impairment. Word recognition speed and accuracy of facial emotion discrimination were measured in volunteers with AMD under stationary and jittering conditions. Computer-driven and optoelectronic approaches were used to induce retinal-image jitter with duration of 100 or 166 ms and amplitude within the range of 0.5 to 2.6° visual angle. Word recognition speed was also measured for participants with simulated (Bangerter filters) visual impairment. Text jittering markedly enhanced word recognition speed for people with severe visual loss (101 ± 25%), while for those with moderate visual impairment, this effect was weaker (19 ± 9%). The ability of low vision patients to discriminate the facial emotions of jittering images improved by a factor of 2. A prototype of optoelectronic jitter goggles produced similar improvement in facial emotion discrimination. Word recognition speed in participants with simulated visual impairment was enhanced for interjitter intervals over 100 ms and reduced for shorter intervals. Results suggest that retinal-image jitter with optimal frequency and amplitude is an effective strategy for enhancing visual information processing in the absence of spatial detail. These findings will enable the development of novel tools to improve the quality of life of low vision patients.

  3. Colour homogeneity and visual perception of age, health and attractiveness of male facial skin.

    PubMed

    Fink, B; Matts, P J; D'Emiliano, D; Bunse, L; Weege, B; Röder, S

    2012-12-01

    Visible facial skin condition in females is known to affect perception of age, health and attractiveness. Skin colour distribution in shape- and topography-standardized female faces, driven by localized melanin and haemoglobin, can account for up to twenty years of apparent age perception. Although this is corroborated by an ability to discern female age even in isolated, non-contextual skin images, a similar effect in the perception of male skin had yet to be demonstrated. To investigate the effect of skin colour homogeneity and chromophore distribution on the visual perception of age, health and attractiveness of male facial skin. Cropped images from the cheeks of facial images of 160 Caucasian British men aged 10-70 years were blind-rated for age, health and attractiveness by a total of 308 participants. In addition, the homogeneity of skin images and corresponding eumelanin/oxyhaemoglobin concentration maps were analysed objectively using Haralick's image segmentation algorithm. Isolated skin images taken from the cheeks of younger males were judged as healthier and more attractive. Perception of age, health and attractiveness was strongly related to melanin and haemoglobin distribution, whereby more even distributions led to perception of younger age and greater health and attractiveness. The evenness of melanized features was a stronger cue for age perception, whereas haemoglobin distribution was associated more strongly with health and attractiveness perception. Male skin colour homogeneity, driven by melanin and haemoglobin distribution, influences perception of age, health and attractiveness. © 2011 The Authors. Journal of the European Academy of Dermatology and Venereology © 2011 European Academy of Dermatology and Venereology.
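    Haralick-style texture analysis of the kind used to quantify skin homogeneity can be sketched with a minimal grey-level co-occurrence computation; this is a simplified homogeneity feature written in plain numpy, not the exact segmentation pipeline the study used:

```python
import numpy as np

def glcm_homogeneity(img, levels=8):
    """Grey-level co-occurrence homogeneity for horizontal neighbours.

    A minimal numpy-only stand-in for a Haralick texture feature:
    higher values mean neighbouring pixels share similar grey levels.
    """
    q = (img.astype(float) / img.max() * (levels - 1)).astype(int)  # quantize
    glcm = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[a, b] += 1                     # co-occurrence counts
    glcm /= glcm.sum()                      # normalize to probabilities
    i, j = np.indices((levels, levels))
    return float((glcm / (1.0 + np.abs(i - j))).sum())

rng = np.random.default_rng(1)
smooth = np.full((32, 32), 128) + rng.integers(-2, 3, (32, 32))  # even "skin"
blotchy = rng.integers(0, 256, (32, 32))                         # uneven "skin"

h_smooth, h_blotchy = glcm_homogeneity(smooth), glcm_homogeneity(blotchy)
print(h_smooth, h_blotchy)
```

    An evenly pigmented patch scores markedly higher than a blotchy one, which is the direction of the age/health/attractiveness effects reported above.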

  4. [Fabrication of 3-dimensional skull model with rapid prototyping technique and its primary application in repairing one case of cranio-maxillo-facial trauma].

    PubMed

    Xia, Delin; Gui, Lai; Zhang, Zhiyong; Lu, Changsheng; Niu, Feng; Jin, Ji; Liu, Xiaoqing

    2005-10-01

    To investigate methods of establishing a 3-dimensional skull model from electron beam CT (EBCT) data using a rapid prototyping technique, and to discuss its application in repairing cranio-maxillo-facial trauma. The data were obtained by EBCT continuous volumetric scanning with a 1.0-mm slice thickness and transferred to a workstation for 3-dimensional surface reconstruction by computer-aided design software; the images were saved as an STL file. These data were used to control a laser rapid-prototyping device (AFS-320QZ) to construct a geometric model. The model material is a laser-sensitive resin powder that solidifies when scanned by the laser beam. The image data were transferred to the device slice by slice, and a geometric model was built up by repeating this process. Preoperative analysis, surgical simulation, and design of an implant for the bone defect could be carried out on this computer-aided manufactured 3D model. One case of cranio-maxillo-facial bone defect resulting from trauma was reconstructed with this method. The EBCT scanning showed that the defect area was 4 cm x 6 cm. The nose was flat and deviated to the left. The 3-dimensional skull was reconstructed with the EBCT data and rapid prototyping technique. The model displayed the 3-dimensional anatomy and its spatial relationships. The implant prefabricated on the 3-dimensional model was well matched with the defect, and the deformities of the flat, deviated nose were corrected. The clinical result was satisfactory after a follow-up of 17 months. The 3-dimensional skull model can replicate the disease prototype and play an important role in diagnosis and operative simulation for repairing cranio-maxillo-facial trauma.
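    The segmentation step underlying STL generation can be sketched on a synthetic volume: threshold the scan to bone and count exposed voxel faces, which a surface mesher (e.g. marching cubes) would triangulate into the STL geometry. Volume size, densities, and threshold below are illustrative assumptions:

```python
import numpy as np

# Synthetic "CT volume": a 20-voxel bone cube inside a 40^3 scan filled
# with soft-tissue-like noise -- illustrative only.
rng = np.random.default_rng(2)
volume = rng.normal(100, 10, (40, 40, 40))   # soft tissue (HU-like values)
volume[10:30, 10:30, 10:30] = 1200           # "bone" block

bone = volume > 700                          # threshold segmentation

# Count exposed voxel faces: a face is exposed when a bone voxel borders
# a non-bone voxel along one axis.  A surface mesher would turn these
# exposed faces into the triangles stored in the STL file.
exposed = 0
for axis in range(3):
    a = np.swapaxes(bone, 0, axis)
    exposed += (a[1:] != a[:-1]).sum()       # bone/non-bone transitions
    exposed += a[0].sum() + a[-1].sum()      # faces on the scan border

print("exposed voxel faces:", int(exposed))
```

    For the 20-voxel cube this counts 6 x 20 x 20 = 2400 exposed faces, the surface a mesher would triangulate.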

  5. Emotional facial activation induced by unconsciously perceived dynamic facial expressions.

    PubMed

    Kaiser, Jakob; Davey, Graham C L; Parkhouse, Thomas; Meeres, Jennifer; Scott, Ryan B

    2016-12-01

    Do facial expressions of emotion influence us when not consciously perceived? Methods to investigate this question have typically relied on brief presentation of static images. In contrast, real facial expressions are dynamic and unfold over several seconds. Recent studies demonstrate that gaze contingent crowding (GCC) can block awareness of dynamic expressions while still inducing behavioural priming effects. The current experiment tested for the first time whether dynamic facial expressions presented using this method can induce unconscious facial activation. Videos of dynamic happy and angry expressions were presented outside participants' conscious awareness while EMG measurements captured activation of the zygomaticus major (active when smiling) and the corrugator supercilii (active when frowning). Forced-choice classification of expressions confirmed they were not consciously perceived, while EMG revealed significant differential activation of facial muscles consistent with the expressions presented. This successful demonstration opens new avenues for research examining the unconscious emotional influences of facial expressions. Copyright © 2016 Elsevier B.V. All rights reserved.

  6. Facial Structure Analysis Separates Autism Spectrum Disorders into Meaningful Clinical Subgroups

    ERIC Educational Resources Information Center

    Obafemi-Ajayi, Tayo; Miles, Judith H.; Takahashi, T. Nicole; Qi, Wenchuan; Aldridge, Kristina; Zhang, Minqi; Xin, Shi-Qing; He, Ying; Duan, Ye

    2015-01-01

    Varied cluster analyses were applied to facial surface measurements from 62 prepubertal boys with essential autism to determine whether facial morphology constitutes a viable biomarker for the delineation of discrete Autism Spectrum Disorders (ASD) subgroups. An earlier study indicated the utility of facial morphology for autism subgrouping (Aldridge et al. in…
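    A cluster analysis of facial measurements like the one described can be sketched with a minimal k-means on synthetic two-group data; this is an illustrative stand-in, not the study's algorithm or measurements:

```python
import numpy as np

def kmeans(X, k, iters=50):
    """Minimal k-means -- a stand-in for the varied cluster analyses
    applied to the facial surface measurements."""
    centers = X[[0, len(X) - 1]].astype(float)  # simple deterministic init (k = 2)
    for _ in range(iters):
        # assign each sample to its nearest center, then recompute centers
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        centers = np.stack([X[labels == j].mean(0) for j in range(k)])
    return labels, centers

# Synthetic "facial measurements" (e.g. face height, width in mm) for two
# hypothetical subgroups -- illustrative, not the study's data.
rng = np.random.default_rng(3)
g1 = rng.normal([105, 95], 2, (30, 2))
g2 = rng.normal([135, 85], 2, (30, 2))
X = np.vstack([g1, g2])

labels, _ = kmeans(X, 2)
print("cluster sizes:", np.bincount(labels))
```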

  7. Qualitative and Quantitative Analysis for Facial Complexion in Traditional Chinese Medicine

    PubMed Central

    Zhao, Changbo; Li, Guo-zheng; Li, Fufeng; Wang, Zhi; Liu, Chang

    2014-01-01

    Facial diagnosis is an important and very intuitive diagnostic method in Traditional Chinese Medicine (TCM). However, due to its qualitative and experience-based subjective property, traditional facial diagnosis has a certain limitation in clinical medicine. The computerized inspection method provides classification models to recognize facial complexion (including color and gloss). However, previous works studied only the classification of facial complexion, which we regard as qualitative analysis; the severity or degree of facial complexion, needed for quantitative analysis, had not yet been reported. This paper aims to make both qualitative and quantitative analysis of facial complexion. We propose a novel feature representation of facial complexion from the whole face of patients. The features are established with four chromaticity bases split by luminance distribution in the CIELAB color space. Chromaticity bases are constructed from the facial dominant color using two-level clustering; the optimal luminance distribution is determined through experimental comparisons. The features are shown to be more distinctive than the previous facial complexion feature representations. Complexion recognition proceeds by training an SVM classifier with the optimal model parameters. In addition, improved features are developed by the weighted fusion of five local regions. Extensive experimental results show that the proposed features achieve the highest facial color recognition performance, with a total accuracy of 86.89%. Furthermore, the proposed recognition framework can analyze both the color and gloss degrees of facial complexion by learning a ranking function. PMID:24967342
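    The luminance-banded chromaticity features can be caricatured as follows; note that the simple luma/chroma proxy below stands in for a true CIELAB conversion, and the band count is an assumption, not the paper's exact construction:

```python
import numpy as np

def complexion_features(rgb, bands=4):
    """Rough sketch of luminance-banded chromaticity features: split
    facial pixels into luminance bands and describe each band by its
    mean chromaticity.  Uses a crude luma/chroma proxy instead of a
    real CIELAB conversion."""
    rgb = rgb.reshape(-1, 3).astype(float)
    luma = rgb.mean(axis=1)                    # stand-in for L*
    chroma = rgb[:, :2] - luma[:, None]        # stand-in for (a*, b*)
    edges = np.quantile(luma, np.linspace(0, 1, bands + 1))
    feats = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (luma >= lo) & (luma <= hi)
        feats.extend(chroma[mask].mean(axis=0))  # mean chromaticity per band
    return np.array(feats)

rng = np.random.default_rng(4)
face = rng.integers(80, 220, (64, 64, 3))      # synthetic face crop
f = complexion_features(face)
print("feature vector shape:", f.shape)
```

    A classifier (the paper uses an SVM) would then be trained on such per-face vectors.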

  8. Three-dimensional evaluation of the relationship between jaw divergence and facial soft tissue dimensions.

    PubMed

    Rongo, Roberto; Antoun, Joseph Saswat; Lim, Yi Xin; Dias, George; Valletta, Rosa; Farella, Mauro

    2014-09-01

    To evaluate the relationship between mandibular divergence and the vertical and transverse dimensions of the face. A sample was recruited from the orthodontic clinic of the University of Otago, New Zealand. The recruited participants (N = 60) were assigned to three groups based on the mandibular plane angle (hyperdivergent, n = 20; normodivergent, n = 20; and hypodivergent, n = 20). The sample consisted of 31 females and 29 males, with a mean age of 21.1 years (SD ± 5.0). Facial scans were recorded for each participant using a three-dimensional (3D) white-light scanner and then merged to form a single 3D image of the face. Vertical and transverse measurements of the face were assessed from the 3D facial image. The hyperdivergent sample had a significantly larger total and lower anterior facial height than the other two groups (P < .05), although no difference was found for the middle facial height (P > .05). Similarly, there were no significant differences in the transverse measurements of the three study groups (P > .05). Both gender and body mass index (BMI) had a greater influence on the transverse dimension. Hyperdivergent facial types are associated with a long face but not necessarily a narrow face. Variations in facial soft tissue vertical and transverse dimensions are more likely to be due to gender. Body mass index has a role in mandibular width (GoGo) assessment.

  9. Use of 3-dimensional surface acquisition to study facial morphology in 5 populations.

    PubMed

    Kau, Chung How; Richmond, Stephen; Zhurov, Alexei; Ovsenik, Maja; Tawfik, Wael; Borbely, Peter; English, Jeryl D

    2010-04-01

    The aim of this study was to assess the use of 3-dimensional facial averages for determining morphologic differences among various population groups. We recruited 473 subjects from 5 populations. Three-dimensional images of the subjects were obtained in a reproducible and controlled environment with commercially available stereo-photogrammetric camera capture systems: Minolta VI-900 (Konica Minolta, Tokyo, Japan) and 3dMDface (3dMD LLC, Atlanta, Ga). Each image was obtained as a facial mesh and orientated along a triangulated axis. All faces were overlaid, one on top of the other, and a complex mathematical algorithm was performed until average composite faces of 1 man and 1 woman were achieved for each subgroup. These average facial composites were superimposed based on a previously validated superimposition method, and the facial differences were quantified. Distinct facial differences were observed among the groups. The linear differences between surface shells ranged from 0.37 to 1.00 mm for the male groups and from 0.28 to 0.87 mm for the women. The color histograms showed that the similarities in facial shells between the subgroups by sex ranged from 26.70% to 70.39% for men and 36.09% to 79.83% for women. The average linear distance from the signed color histograms ranged from -6.30 to 4.44 mm for the male subgroups and from -6.32 to 4.25 mm for the female subgroups. Average faces can be efficiently and effectively created from a sample of 3-dimensional faces. Average faces can be used to compare differences in facial morphologies for various populations and sexes. Facial morphologic differences were greatest when totally different ethnic variations were compared. Facial morphologic similarities were present in comparable groups, but there were large variations in concentrated areas of the face. Copyright 2010 American Association of Orthodontists. Published by Mosby, Inc. All rights reserved.

  10. How Beauty Determines Gaze! Facial Attractiveness and Gaze Duration in Images of Real World Scenes

    PubMed Central

    Mitrovic, Aleksandra; Goller, Jürgen

    2016-01-01

    We showed that the looking time spent on faces is a valid covariate of beauty by testing the relation between facial attractiveness and gaze behavior. We presented natural scenes which always pictured two people, encompassing a wide range of facial attractiveness. Employing measurements of eye movements in a free viewing paradigm, we found a linear relation between facial attractiveness and gaze behavior: The more attractive the face, the longer and the more often it was looked at. In line with evolutionary approaches, the positive relation was particularly pronounced when participants viewed other sex faces. PMID:27698984
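    The reported linear relation between facial attractiveness and gaze duration corresponds to a simple regression; a sketch with hypothetical data (not the study's measurements):

```python
import numpy as np

# Hypothetical (attractiveness rating, gaze duration in ms) pairs --
# illustrative values consistent with the reported positive linear relation.
ratings = np.array([1.5, 2.0, 3.1, 3.8, 4.6, 5.2, 6.0, 6.7])
gaze_ms = np.array([410, 450, 520, 600, 640, 700, 760, 830])

# Least-squares line: gaze ~= slope * attractiveness + intercept
slope, intercept = np.polyfit(ratings, gaze_ms, 1)
r = np.corrcoef(ratings, gaze_ms)[0, 1]
print(f"gaze ~= {slope:.1f} * attractiveness + {intercept:.1f}, r = {r:.3f}")
```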

  11. Image Classification for Web Genre Identification

    DTIC Science & Technology

    2012-01-01

    recognition and landscape detection using the computer vision toolkit OpenCV1. For facial recognition, we researched the possibilities of using the...method for connecting these names with a face/personal photo and logo respectively. [2] METHODOLOGY For this project, we focused primarily on facial

  12. Facial paralysis

    MedlinePlus

    ... a physical, speech, or occupational therapist. If facial paralysis from Bell palsy lasts for more than 6 to 12 months, plastic surgery may be recommended to help the eye close and improve the appearance of the face.

  13. Preliminary Analysis of the 3-Dimensional Morphology of the Upper Lip Configuration at the Completion of Facial Expressions in Healthy Japanese Young Adults and Patients With Cleft Lip.

    PubMed

    Matsumoto, Kouzou; Nozoe, Etsuro; Okawachi, Takako; Ishihata, Kiyohide; Nishinara, Kazuhide; Nakamura, Norifumi

    2016-09-01

    To develop criteria for the analysis of the upper lip configuration of patients with cleft lip while they produce various facial expressions, by comparing the 3-dimensional (3D) facial morphology of healthy Japanese adults and patients with cleft lip. Twenty healthy adult Japanese volunteers (10 men, 10 women, controls) without any observed facial abnormalities and 8 patients (4 men, 4 women) with unilateral cleft lip and palate who had undergone secondary lip and nose repair were recruited for this study. Facial expressions (resting, smiling, and blowing out a candle) were recorded with 2 Artec MHT 3D scanners, and images were superimposed by aligning the T-zone of the faces. The positions of 14 specific points were set on each face, and the positional changes of these points and the symmetry of the upper lip cross-section were analyzed. Furthermore, the configuration observed in healthy controls was compared with that in patients with cleft lip before and after surgery. The mean absolute values for T-zone overlap ranged from 0.04 to 0.15 mm. Positional changes of specific points in the controls showed that the nose and lip moved backward and laterally upward when smiling and the lips moved forward and downward medially when blowing out a candle; these movements were bilaterally symmetrical in men and women. In patients with cleft lip, the positional changes of the specific points were minor compared with those of the controls while smiling and blowing out a candle. The left-versus-right difference of the upper lip cross-section exceeded 1.0 mm in patients with cleft lip, markedly higher than that in the controls (0.17 to 0.91 mm). These left-versus-right differences during facial expressions decreased after surgery. By comparing healthy individuals with patients with cleft lip, this study has laid the basis for determining control values for facial expressions. Copyright © 2016 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.
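    A left-versus-right symmetry measure of this kind can be sketched by mirroring paired landmarks across the midsagittal plane and averaging the residual mismatch; the points below are hypothetical, not the study's landmarks:

```python
import numpy as np

def asymmetry(points, midline_x=0.0):
    """Mean left-right mismatch (mm) of paired facial points after
    mirroring the right-side points across the midsagittal plane
    (here the plane x = midline_x).  `points` holds (left, right)
    pairs as rows of (x, y, z) coordinates."""
    left, right = points[:, 0], points[:, 1]
    mirrored = right.copy()
    mirrored[:, 0] = 2 * midline_x - right[:, 0]   # reflect x across midline
    return float(np.linalg.norm(left - mirrored, axis=1).mean())

# Hypothetical paired upper-lip points (mm), roughly symmetric:
pairs = np.array([
    [[-12.0, 3.0, 1.0], [12.2, 3.1, 0.9]],
    [[-6.0, 1.5, 2.0], [6.1, 1.4, 2.1]],
    [[-18.0, 5.0, 0.5], [17.8, 5.2, 0.4]],
])
print(f"mean asymmetry: {asymmetry(pairs):.2f} mm")
```

    For these near-symmetric points the value falls in the sub-millimetre range reported for controls; cleft-lip patients exceeded 1.0 mm before surgery.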

  14. Morphologic evaluation and classification of facial asymmetry using 3-dimensional computed tomography.

    PubMed

    Baek, Chaehwan; Paeng, Jun-Young; Lee, Janice S; Hong, Jongrak

    2012-05-01

    A systematic classification is needed for the diagnosis and surgical treatment of facial asymmetry. The purposes of this study were to analyze the skeletal structures of patients with facial asymmetry and to objectively classify these patients into groups according to these structural characteristics. Patients with facial asymmetry and recent computed tomographic images from 2005 through 2009 were included in this study, which was approved by the institutional review board. Linear measurements, angles, and reference planes on 3-dimensional computed tomograms were obtained, including maxillary (upper midline deviation, maxilla canting, and arch form discrepancy) and mandibular (menton deviation, gonion to midsagittal plane, ramus height, and frontal ramus inclination) measurements. All measurements were analyzed using paired t tests with Bonferroni correction followed by K-means cluster analysis using SPSS 13.0 to determine an objective classification of facial asymmetry in the enrolled patients. Kruskal-Wallis test was performed to verify differences among clustered groups. P < .05 was considered statistically significant. Forty-three patients (18 male, 25 female) were included in the study. They were classified into 4 groups based on cluster analysis. Their mean age was 24.3 ± 4.4 years. Group 1 included subjects (44% of patients) with asymmetry caused by a shift or lateralization of the mandibular body. Group 2 included subjects (39%) with a significant difference between the left and right ramus height with menton deviation to the short side. Group 3 included subjects (12%) with atypical asymmetry, including deviation of the menton to the short side, prominence of the angle/gonion on the larger side, and reverse maxillary canting. Group 4 included subjects (5%) with severe maxillary canting, ramus height differences, and menton deviation to the short side. 
In this study, patients with asymmetry were classified into 4 statistically distinct groups according to their anatomic features. This diagnostic classification method will assist in treatment planning for patients with facial asymmetry and may be used to explore the etiology of these variants of facial asymmetry. Copyright © 2012 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.
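    The Kruskal-Wallis test used to verify differences among the clustered groups can be computed directly from ranks; below is a minimal version without tie correction, on hypothetical group data (not the study's measurements):

```python
import numpy as np

def kruskal_h(*groups):
    """Kruskal-Wallis H statistic (no tie correction): rank all values
    jointly, then compare per-group rank sums."""
    data = np.concatenate(groups)
    n = len(data)
    ranks = np.empty(n)
    ranks[np.argsort(data)] = np.arange(1, n + 1)   # ranks 1..n (ties ignored)
    h, start = 0.0, 0
    for g in groups:
        r = ranks[start:start + len(g)]
        h += r.sum() ** 2 / len(g)
        start += len(g)
    return 12.0 / (n * (n + 1)) * h - 3 * (n + 1)

# Hypothetical menton-deviation values (mm) for three asymmetry groups:
g1 = np.array([2.1, 2.5, 1.9, 2.8])
g2 = np.array([5.0, 5.6, 4.8, 5.2])
g3 = np.array([9.1, 8.7, 9.5, 8.9])
print(f"H = {kruskal_h(g1, g2, g3):.2f}")
```

    An H value above the chi-squared critical value (5.99 for 2 degrees of freedom at alpha = .05) indicates the groups differ, as reported for the four clusters above.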

  15. Alexithymia and the labeling of facial emotions: response slowing and increased motor and somatosensory processing

    PubMed Central

    2014-01-01

    Background Alexithymia is a personality trait that is characterized by difficulties in identifying and describing feelings. Previous studies have shown that alexithymia is related to problems in recognizing others’ emotional facial expressions when these are presented with temporal constraints. These problems can be less severe when the expressions are visible for a relatively long time. Because the neural correlates of these recognition deficits are still relatively unexplored, we investigated the labeling of facial emotions and brain responses to facial emotions as a function of alexithymia. Results Forty-eight healthy participants had to label the emotional expression (angry, fearful, happy, or neutral) of faces presented for 1 or 3 seconds in a forced-choice format while undergoing functional magnetic resonance imaging. The participants’ level of alexithymia was assessed using self-report and interview. In light of the previous findings, we focused our analysis on the alexithymia component of difficulties in describing feelings. Difficulties describing feelings, as assessed by the interview, were associated with increased reaction times for negative (i.e., angry and fearful) faces, but not with labeling accuracy. Moreover, individuals with higher alexithymia showed increased brain activation in the somatosensory cortex and supplementary motor area (SMA) in response to angry and fearful faces. These cortical areas are known to be involved in the simulation of the bodily (motor and somatosensory) components of facial emotions. Conclusion The present data indicate that alexithymic individuals may use information related to bodily actions rather than affective states to understand the facial expressions of other persons. PMID:24629094

  16. I Can Stomach That! Fearlessness About Death Predicts Attenuated Facial Electromyography Activity in Response to Death-Related Images.

    PubMed

    Velkoff, Elizabeth A; Forrest, Lauren N; Dodd, Dorian R; Smith, April R

    2016-06-01

    Objective measures of suicide risk can convey life-saving information to clinicians, but few such measures exist. This study examined an objective measure of fearlessness about death (FAD), testing whether FAD relates to self-reported and physiological aversion to death. Females (n = 87) reported FAD and disgust sensitivity, and facial electromyography was used to measure physiological facial responses consistent with disgust while viewing death-related images. FAD predicted attenuated expression of physiological death aversion, even when controlling for self-reported death-related disgust sensitivity. Diminished physiological aversion to death-related stimuli holds promise as an objective measure of FAD and suicide risk. © 2015 The American Association of Suicidology.

  17. A system for tracking and recognizing pedestrian faces using a network of loosely coupled cameras

    NASA Astrophysics Data System (ADS)

    Gagnon, L.; Laliberté, F.; Foucher, S.; Branzan Albu, A.; Laurendeau, D.

    2006-05-01

    A face recognition module has been developed for an intelligent multi-camera video surveillance system. The module can recognize a pedestrian face in terms of six basic emotions and the neutral state. Face and facial features detection (eyes, nasal root, nose and mouth) are first performed using cascades of boosted classifiers. These features are used to normalize the pose and dimension of the face image. Gabor filters are then sampled on a regular grid covering the face image to build a facial feature vector that feeds a nearest neighbor classifier with a cosine distance similarity measure for facial expression interpretation and face model construction. A graphical user interface allows the user to adjust the module parameters.
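
    The nearest-neighbor matching with a cosine similarity measure described in this record can be sketched in a few lines; the toy gallery, labels, and 3-element feature vectors below are illustrative assumptions, not the module's actual Gabor feature models:

```python
import numpy as np

def cosine_distance(a, b):
    """Cosine distance: 1 - cos(angle) between feature vectors."""
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def nearest_neighbor(query, gallery, labels):
    """Return the label of the gallery vector closest to the query."""
    dists = [cosine_distance(query, g) for g in gallery]
    return labels[int(np.argmin(dists))]

# Toy gallery of 3 feature vectors, one per expression class.
gallery = [np.array([1.0, 0.0, 0.0]),
           np.array([0.0, 1.0, 0.0]),
           np.array([0.7, 0.7, 0.0])]
labels = ["happy", "sad", "neutral"]
print(nearest_neighbor(np.array([0.9, 0.1, 0.0]), gallery, labels))  # happy
```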

  18. New Protocol for Skin Landmark Registration in Image-Guided Neurosurgery: Technical Note.

    PubMed

    Gerard, Ian J; Hall, Jeffery A; Mok, Kelvin; Collins, D Louis

    2015-09-01

    Newer versions of the commercial Medtronic StealthStation allow the use of only 8 landmark pairs for patient-to-image registration as opposed to 9 landmarks in older systems. The choice of which landmark pair to drop in these newer systems can have an effect on the quality of the patient-to-image registration. To investigate 4 landmark registration protocols based on 8 landmark pairs and compare the resulting registration accuracy with a 9-landmark protocol. Four different protocols were tested on both phantoms and patients. Two of the protocols involved using 4 ear landmarks and 4 facial landmarks and the other 2 involved using 3 ear landmarks and 5 facial landmarks. Both the fiducial registration error and target registration error were evaluated for each of the different protocols to determine any difference between them and the 9-landmark protocol. No difference in fiducial registration error was found between any of the 8-landmark protocols and the 9-landmark protocol. A significant decrease (P < .05) in target registration error was found when using a protocol based on 4 ear landmarks and 4 facial landmarks compared with the other protocols based on 3 ear landmarks. When using 8 landmarks to perform the patient-to-image registration, the protocol using 4 ear landmarks and 4 facial landmarks greatly outperformed the other 8-landmark protocols and 9-landmark protocol, resulting in the lowest target registration error.
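
    The fiducial registration error evaluated in this record presupposes a least-squares rigid patient-to-image registration over the landmark pairs. A minimal sketch of such a registration (the Kabsch/Horn SVD method, with synthetic noise-free landmark pairs standing in for real ear and facial landmarks) is:

```python
import numpy as np

def register_rigid(src, dst):
    """Least-squares rigid transform (rotation R, translation t) mapping
    src landmarks onto dst landmarks (Kabsch/Horn method)."""
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    H = (src - sc).T @ (dst - dc)
    U, _, Vt = np.linalg.svd(H)
    # Sign correction guards against reflections.
    D = np.diag([1.0] * (H.shape[0] - 1) + [np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = dc - R @ sc
    return R, t

def fre(src, dst, R, t):
    """Root-mean-square fiducial registration error after alignment."""
    residual = dst - (src @ R.T + t)
    return np.sqrt((residual ** 2).sum(axis=1).mean())

# 8 synthetic landmark pairs related by a known rotation + translation.
rng = np.random.default_rng(0)
src = rng.normal(size=(8, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
dst = src @ R_true.T + np.array([10.0, -2.0, 5.0])
R, t = register_rigid(src, dst)
print(round(fre(src, dst, R, t), 6))  # 0.0 (noise-free landmarks)
```

    In practice the target registration error, measured at points away from the fiducials, is the clinically relevant quantity, which is why the choice of which landmark to drop matters even when FRE is unchanged.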

  19. Extracranial Facial Nerve Schwannoma Treated by Hypo-fractionated CyberKnife Radiosurgery.

    PubMed

    Sasaki, Ayaka; Miyazaki, Shinichiro; Hori, Tomokatsu

    2016-09-21

    Facial nerve schwannoma is a rare intracranial tumor. Treatment for this benign tumor has been controversial. Here, we report a case of extracranial facial nerve schwannoma treated successfully by hypo-fractionated CyberKnife (Accuray, Sunnyvale, CA) radiosurgery and discuss the efficacy of this treatment. A 34-year-old female noticed a swelling in her right mastoid process. The lesion enlarged over a seven-month period, and she experienced facial spasm on the right side. She was diagnosed with a facial nerve schwannoma via a magnetic resonance imaging (MRI) scan of the head and neck and was advised to wait until the facial nerve palsy subsided. She was referred to our hospital for radiation therapy. We planned a fractionated CyberKnife radiosurgery for three consecutive days. After CyberKnife radiosurgery, the mass in the right parotid gradually decreased in size, and the facial nerve palsy disappeared. At her eight-month follow-up, her facial spasm had completely disappeared. There has been no recurrence, and facial nerve function has been normal. We successfully demonstrated the efficacy of CyberKnife radiosurgery as an alternative treatment for facial nerve schwannomas that also preserves neurological function.

  20. In search of Leonardo: computer-based facial image analysis of Renaissance artworks for identifying Leonardo as subject

    NASA Astrophysics Data System (ADS)

    Tyler, Christopher W.; Smith, William A. P.; Stork, David G.

    2012-03-01

    One of the enduring mysteries in the history of the Renaissance is the adult appearance of the archetypical "Renaissance Man," Leonardo da Vinci. His only acknowledged self-portrait is from an advanced age, and various candidate images of younger men are difficult to assess given the absence of documentary evidence. One clue about Leonardo's appearance comes from the remark of the contemporary historian, Vasari, that the sculpture of David by Leonardo's master, Andrea del Verrocchio, was based on the appearance of Leonardo when he was an apprentice. Taking a cue from this statement, we suggest that the more mature sculpture of St. Thomas, also by Verrocchio, might also have been a portrait of Leonardo. We tested the possibility that Leonardo was the subject of Verrocchio's sculpture using a novel computational technique for the comparison of three-dimensional facial configurations. Based on quantitative measures of similarity, we also assess whether another pair of candidate two-dimensional images is plausibly attributable as portraits of Leonardo as a young adult. Our results are consistent with the claim that Leonardo is indeed the subject in these works, but comparisons with a larger corpus of candidate artworks are needed before our results achieve statistical significance.

  1. Common and distinct neural correlates of facial emotion processing in social anxiety disorder and Williams syndrome: A systematic review and voxel-based meta-analysis of functional magnetic resonance imaging studies.

    PubMed

    Binelli, C; Subirà, S; Batalla, A; Muñiz, A; Sugranyés, G; Crippa, J A; Farré, M; Pérez-Jurado, L; Martín-Santos, R

    2014-11-01

    Social Anxiety Disorder (SAD) and Williams-Beuren Syndrome (WS) are two conditions which seem to be at opposite ends in the continuum of social fear but show compromised abilities in some overlapping areas, including some social interactions, gaze contact and processing of facial emotional cues. The increase in the number of neuroimaging studies has greatly expanded our knowledge of the neural bases of facial emotion processing in both conditions. However, to date, SAD and WS have not been compared. We conducted a systematic review of functional magnetic resonance imaging (fMRI) studies comparing SAD and WS cases to healthy control participants (HC) using facial emotion processing paradigms. Two researchers conducted comprehensive PubMed/Medline searches to identify all fMRI studies of facial emotion processing in SAD and WS. The following search key-words were used: "emotion processing"; "facial emotion"; "social anxiety"; "social phobia"; "Williams syndrome"; "neuroimaging"; "functional magnetic resonance"; "fMRI" and their combinations, as well as terms specifying individual facial emotions. We extracted spatial coordinates from each study and conducted two separate voxel-wise activation likelihood estimation meta-analyses, one for SAD and one for WS. Twenty-two studies met the inclusion criteria: 17 studies of SAD and five of WS. We found evidence for both common and distinct patterns of neural activation. Limbic engagement was common to SAD and WS during facial emotion processing, although we observed opposite patterns of activation for each disorder. Compared to HC, SAD cases showed hyperactivation of the amygdala, the parahippocampal gyrus and the globus pallidus. Compared to controls, participants with WS showed hypoactivation of these regions. 
Differential activation in a number of regions specific to either condition was also identified: SAD cases exhibited greater activation of the insula, putamen, the superior temporal gyrus, medial frontal regions and the cuneus, while WS subjects showed decreased activation in the inferior region of the parietal lobule. The identification of limbic structures as a shared correlate and the patterns of activation observed for each condition may reflect the aberrant patterns of facial emotion processing that the two conditions share, and may contribute to explaining part of the underlying neural substrate of exaggerated/diminished fear responses to social cues that characterize SAD and WS respectively. We believe that insights from WS and the inclusion of this syndrome as a control group in future experimental studies may improve our understanding of the neural correlates of social fear in general, and of SAD in particular. Copyright © 2014 Elsevier Ltd. All rights reserved.

  2. Non-rigid, but not rigid, motion interferes with the processing of structural face information in developmental prosopagnosia.

    PubMed

    Maguinness, Corrina; Newell, Fiona N

    2015-04-01

    There is growing evidence to suggest that facial motion is an important cue for face recognition. However, it is poorly understood whether motion is integrated with facial form information or whether it provides an independent cue to identity. To provide further insight into this issue, we compared the effect of motion on face perception in two developmental prosopagnosics and age-matched controls. Participants first learned faces presented dynamically (video), or in a sequence of static images, in which rigid (viewpoint) or non-rigid (expression) changes occurred. Immediately following learning, participants were required to match a static face image to the learned face. Test face images varied by viewpoint (Experiment 1) or expression (Experiment 2) and were learned or novel face images. We found similar performance across prosopagnosics and controls in matching facial identity across changes in viewpoint when the learned face was shown moving in a rigid manner. However, non-rigid motion interfered with face matching across changes in expression for both individuals with prosopagnosia, relative to the performance of control participants. In contrast, non-rigid motion did not differentially affect the matching of facial expressions across changes in identity for either prosopagnosic (Experiment 3). Our results suggest that whilst the processing of rigid motion information of a face may be preserved in developmental prosopagnosia, non-rigid motion can specifically interfere with the representation of structural face information. Taken together, these results suggest that both form and motion cues are important in face perception and that these cues are likely integrated in the representation of facial identity. Copyright © 2015 Elsevier Ltd. All rights reserved.

  3. Wavelet decomposition based principal component analysis for face recognition using MATLAB

    NASA Astrophysics Data System (ADS)

    Sharma, Mahesh Kumar; Sharma, Shashikant; Leeprechanon, Nopbhorn; Ranjan, Aashish

    2016-03-01

    For the realization of face recognition systems in the static as well as in the real-time frame, algorithms such as principal component analysis, independent component analysis, linear discriminant analysis, neural networks, and genetic algorithms have been used for decades. This paper discusses a wavelet decomposition based principal component analysis approach to face recognition. Principal component analysis is chosen over other algorithms for its relative simplicity, efficiency, and robustness. Face recognition means identifying a person from facial images and resembles factor analysis in some sense, i.e., the extraction of the principal components of an image. Principal component analysis suffers from some drawbacks, chiefly poor discriminatory power and the large computational load of finding eigenvectors. These drawbacks can be greatly reduced by combining wavelet transform decomposition for feature extraction with principal component analysis for pattern representation and classification, analyzing the facial images in both the space and frequency domains. The experimental results indicate that this face recognition method yields a significant improvement in recognition rate as well as better computational efficiency.
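
    A minimal sketch of the combination described in this record, assuming a one-level Haar approximation for the wavelet step and SVD-based PCA (the paper's exact wavelet family and MATLAB implementation are not given here):

```python
import numpy as np

def haar_approx(img):
    """One-level 2D Haar approximation: average each 2x2 block,
    halving resolution and suppressing high-frequency detail."""
    h, w = img.shape
    return img[:h//2*2, :w//2*2].reshape(h//2, 2, w//2, 2).mean(axis=(1, 3))

def pca_fit(X, k):
    """PCA via SVD on mean-centred rows; returns mean and top-k components."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]

def project(x, mean, comps):
    """Coordinates of x in the k-dimensional PCA subspace."""
    return (x - mean) @ comps.T

# Toy "faces": 10 random 16x16 images standing in for a face database.
rng = np.random.default_rng(1)
faces = rng.normal(size=(10, 16, 16))
X = np.stack([haar_approx(f).ravel() for f in faces])  # 10 x 64
mean, comps = pca_fit(X, k=4)
coeffs = project(X, mean, comps)
print(coeffs.shape)  # (10, 4)
```

    The wavelet step shrinks the eigenvector problem (here from 256 to 64 dimensions) before PCA runs, which is the source of the computational saving the abstract claims.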

  4. Extra-facial melasma: clinical, histopathological, and immunohistochemical case-control study.

    PubMed

    Ritter, C G; Fiss, D V C; Borges da Costa, J A T; de Carvalho, R R; Bauermann, G; Cestari, T F

    2013-09-01

    Extra-facial melasma is a prevalent dermatosis in some populations with special characteristics in relation to its clinical aspects and probable etiopathogenic factors. Few studies have attempted to address this alteration of pigmentation, which has become a challenge in clinical Dermatology. To assess the clinical, histopathological, and immunohistochemical characteristics of extra-facial melasma, comparing affected and unaffected sites. Case-control study with 45 patients in each group (melasma and disease-free volunteers), assessing their clinical characteristics. In 36 patients, biopsies were performed on the lesion and the normal perilesional skin. Specimens were stained with HE and Fontana-Masson, and melanocytes were analysed by immunohistochemistry. Objective measurements were performed with specifically designed image analysis software. The melasma group had a mean age ± SD of 56.67 ± 8 years; the majority were women (86.7%), and 82.1% of the female cases had reached menopause. There were no significant differences between groups in terms of presence of comorbidities, use of medications or hormone therapies. For extra-facial melasma patients, family history of this dermatosis and of previous facial melasma was significantly more frequent than in the control group (P < 0.05). The HE staining showed increased rectification and basal hyperpigmentation, solar elastosis, and collagen degeneration in the pigmented area (P < 0.05). There was a significant increase in melanin density in melasma biopsies, but the immunohistochemical tests did not detect a difference between the groups in terms of number of melanocytes. Extra-facial melasma appears to be related to menopause, family history, and personal history of facial melasma, in the studied population. 
    Histopathology revealed a pattern similar to what has been described for facial melasma, with signs of solar degeneration and a similar number of melanocytes when comparing patients and controls, suggesting that the hyperpigmentation is most likely the result of abnormal melanin production or distribution. © 2012 The Authors. Journal of the European Academy of Dermatology and Venereology © 2012 European Academy of Dermatology and Venereology.

  5. 5-ALA induced fluorescent image analysis of actinic keratosis

    NASA Astrophysics Data System (ADS)

    Cho, Yong-Jin; Bae, Youngwoo; Choi, Eung-Ho; Jung, Byungjo

    2010-02-01

    In this study, we quantitatively analyzed 5-ALA induced fluorescent images of actinic keratosis using digital fluorescent color and hyperspectral imaging modalities. UV-A was utilized to induce fluorescent images, and actinic keratosis (AK) lesions were demarcated from the surrounding normal region using different methods. Eight subjects with AK lesions participated in this study. In the hyperspectral imaging modality, a spectral analysis method was applied to the hyperspectral cube image and AK lesions were demarcated from the normal region. Before image acquisition, we designated the biopsy position for histopathology of the AK lesion and the surrounding normal region. Erythema index (E.I.) values on both regions were calculated from the spectral cube data. Image analysis of the subjects resulted in two different groups: the first group with higher fluorescence signal and E.I. on the AK lesion than the normal region; the second group with lower fluorescence signal and no marked difference in E.I. between the two regions. In fluorescent color image analysis of facial AK, E.I. images were calculated on both normal and AK lesions and compared with the results of the hyperspectral imaging modality. The results indicate that the differing fluorescence intensity and E.I. among the subjects with AK might be interpreted as different phases of morphological and metabolic changes of AK lesions.

  6. Facial Emotions Recognition using Gabor Transform and Facial Animation Parameters with Neural Networks

    NASA Astrophysics Data System (ADS)

    Harit, Aditya; Joshi, J. C., Col; Gupta, K. K.

    2018-03-01

    The paper proposes an automatic facial emotion recognition algorithm comprising two main components: feature extraction and expression recognition. The algorithm applies a Gabor filter bank at fiducial points to extract facial expression features. The resulting Gabor transform magnitudes, together with 14 chosen FAPs (Facial Animation Parameters), compose the feature space. The system operates in two stages: a training phase and a recognition phase. In the training stage, for the 6 emotions considered, the system classifies all training expressions into 6 classes, one for each emotion. In the recognition phase, it applies the Gabor bank to a face image, locates the fiducial points, and feeds the resulting features to the trained neural architecture to recognize the emotion.
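
    The feature-extraction stage described in this record can be sketched as follows; the kernel parameters, fiducial-point locations, and image size are illustrative assumptions rather than the paper's actual settings, and the 14 FAPs are omitted:

```python
import numpy as np

def gabor_kernel(ksize, theta, lam, sigma, gamma=0.5):
    """Complex Gabor kernel with orientation theta and wavelength lam."""
    r = ksize // 2
    y, x = np.mgrid[-r:r+1, -r:r+1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    env = np.exp(-(xr**2 + gamma**2 * yr**2) / (2 * sigma**2))
    return env * np.exp(1j * 2 * np.pi * xr / lam)

def gabor_magnitude_at(img, points, kernels):
    """Magnitude of each Gabor response sampled at each fiducial point."""
    feats = []
    r = kernels[0].shape[0] // 2
    padded = np.pad(img, r, mode="reflect")
    for (py, px) in points:
        patch = padded[py:py + 2*r + 1, px:px + 2*r + 1]
        feats.extend(abs((patch * k).sum()) for k in kernels)
    return np.array(feats)

rng = np.random.default_rng(2)
img = rng.random((64, 64))                      # stand-in face image
kernels = [gabor_kernel(15, th, lam=8.0, sigma=4.0)
           for th in np.linspace(0, np.pi, 4, endpoint=False)]
points = [(20, 20), (20, 44), (40, 32)]         # hypothetical eye/mouth points
feats = gabor_magnitude_at(img, points, kernels)
print(feats.shape)  # (12,): 3 fiducial points x 4 orientations
```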

  7. Hemifacial Spasm and Neurovascular Compression

    PubMed Central

    Lu, Alex Y.; Yeung, Jacky T.; Gerrard, Jason L.; Michaelides, Elias M.; Sekula, Raymond F.; Bulsara, Ketan R.

    2014-01-01

    Hemifacial spasm (HFS) is characterized by involuntary unilateral contractions of the muscles innervated by the ipsilateral facial nerve, usually starting around the eyes before progressing inferiorly to the cheek, mouth, and neck. Its prevalence is 9.8 per 100,000 persons with an average age of onset of 44 years. The accepted pathophysiology of HFS suggests that it is a disease process of the nerve root entry zone of the facial nerve. HFS can be divided into two types: primary and secondary. Primary HFS is triggered by vascular compression whereas secondary HFS comprises all other causes of facial nerve damage. Clinical examination and imaging modalities such as electromyography (EMG) and magnetic resonance imaging (MRI) are useful to differentiate HFS from other facial movement disorders and for intraoperative planning. The standard medical management for HFS is botulinum neurotoxin (BoNT) injections, which provides low-risk but limited symptomatic relief. The only curative treatment for HFS is microvascular decompression (MVD), a surgical intervention that provides lasting symptomatic relief by reducing compression of the facial nerve root. With a low rate of complications such as hearing loss, MVD remains the treatment of choice for HFS patients as intraoperative technique and monitoring continue to improve. PMID:25405219

  8. Neural responses to facial expressions support the role of the amygdala in processing threat

    PubMed Central

    Sormaz, Mladen; Flack, Tessa; Asghar, Aziz U. R.; Fan, Siyan; Frey, Julia; Manssuer, Luis; Usten, Deniz; Young, Andrew W.; Andrews, Timothy J.

    2014-01-01

    The amygdala is known to play an important role in the response to facial expressions that convey fear. However, it remains unclear whether the amygdala’s response to fear reflects its role in the interpretation of danger and threat, or whether it is to some extent activated by all facial expressions of emotion. Previous attempts to address this issue using neuroimaging have been confounded by differences in the use of control stimuli across studies. Here, we address this issue using a block design functional magnetic resonance imaging paradigm, in which we compared the response to face images posing expressions of fear, anger, happiness, disgust and sadness with a range of control conditions. The responses in the amygdala to different facial expressions were compared with the responses to a non-face condition (buildings), to mildly happy faces and to neutral faces. Results showed that only fear and anger elicited significantly greater responses compared with the control conditions involving faces. Overall, these findings are consistent with the role of the amygdala in processing threat, rather than in the processing of all facial expressions of emotion, and demonstrate the critical importance of the choice of comparison condition to the pattern of results. PMID:24097376

  9. Synthesis of Speaker Facial Movement to Match Selected Speech Sequences

    NASA Technical Reports Server (NTRS)

    Scott, K. C.; Kagels, D. S.; Watson, S. H.; Rom, H.; Wright, J. R.; Lee, M.; Hussey, K. J.

    1994-01-01

    A system is described which allows for the synthesis of a video sequence of a realistic-appearing talking human head. A phonic based approach is used to describe facial motion; image processing rather than physical modeling techniques are used to create video frames.

  10. Preferential responses in amygdala and insula during presentation of facial contempt and disgust.

    PubMed

    Sambataro, Fabio; Dimalta, Savino; Di Giorgio, Annabella; Taurisano, Paolo; Blasi, Giuseppe; Scarabino, Tommaso; Giannatempo, Giuseppe; Nardini, Marcello; Bertolino, Alessandro

    2006-10-01

    Some authors consider contempt to be a basic emotion while others consider it a variant of disgust. The neural correlates of contempt have not so far been specifically contrasted with disgust. Using functional magnetic resonance imaging (fMRI), we investigated the neural networks involved in the processing of facial contempt and disgust in 24 healthy subjects. Facial recognition of contempt was lower than that of disgust and of neutral faces. The imaging data indicated significant activity in the amygdala and in globus pallidus and putamen during processing of contemptuous faces. Bilateral insula and caudate nuclei and left as well as right inferior frontal gyrus were engaged during processing of disgusted faces. Moreover, direct comparisons of contempt vs. disgust yielded significantly different activations in the amygdala. On the other hand, disgusted faces elicited greater activation than contemptuous faces in the right insula and caudate. Our findings suggest preferential involvement of different neural substrates in the processing of facial emotional expressions of contempt and disgust.

  11. MPEG-4-based 2D facial animation for mobile devices

    NASA Astrophysics Data System (ADS)

    Riegel, Thomas B.

    2005-03-01

    The enormous spread of mobile computing devices (e.g. PDA, cellular phone, palmtop, etc.) emphasizes scalable applications, since users like to run their favorite programs on whatever terminal they operate at that moment. Therefore, appliances are of interest that can be adapted to the hardware realities without losing much of their functionality. A good example of this is "Facial Animation," which offers an interesting way to achieve such "scalability." By employing MPEG-4, which provides its own profile for facial animation, a solution for low-power terminals including mobile phones is demonstrated. From the generic 3D MPEG-4 face, a specific 2D head model is derived, which consists primarily of a portrait image superposed by a suitable warping mesh and adapted 2D animation rules. Thus the animation process of MPEG-4 need not be changed, and standard-compliant facial animation parameters can be used to displace the vertices of the mesh and warp the underlying image accordingly.

  12. Local intensity area descriptor for facial recognition in ideal and noise conditions

    NASA Astrophysics Data System (ADS)

    Tran, Chi-Kien; Tseng, Chin-Dar; Chao, Pei-Ju; Ting, Hui-Min; Chang, Liyun; Huang, Yu-Jie; Lee, Tsair-Fwu

    2017-03-01

    We propose a local texture descriptor, local intensity area descriptor (LIAD), which is applied for human facial recognition in ideal and noisy conditions. Each facial image is divided into small regions from which LIAD histograms are extracted and concatenated into a single feature vector to represent the facial image. The recognition is performed using a nearest neighbor classifier with histogram intersection and chi-square statistics as dissimilarity measures. Experiments were conducted with LIAD using the ORL database of faces (Olivetti Research Laboratory, Cambridge), the Face94 face database, the Georgia Tech face database, and the FERET database. The results demonstrated the improvement in accuracy of our proposed descriptor compared to conventional descriptors [local binary pattern (LBP), uniform LBP, local ternary pattern, histogram of oriented gradients, and local directional pattern]. Moreover, the proposed descriptor was less sensitive to noise and had low histogram dimensionality. Thus, it is expected to be a powerful texture descriptor that can be used for various computer vision problems.
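
    The LIAD construction itself is not detailed in this record, but the pipeline it plugs into, per-region histograms concatenated into a single vector and compared with chi-square statistics, can be sketched with the conventional LBP descriptor (one of the record's baselines) standing in:

```python
import numpy as np

def lbp_image(img):
    """Basic 8-neighbour LBP code for each interior pixel."""
    c = img[1:-1, 1:-1]
    shifts = [(-1,-1), (-1,0), (-1,1), (0,1), (1,1), (1,0), (1,-1), (0,-1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1+dy:img.shape[0]-1+dy, 1+dx:img.shape[1]-1+dx]
        code |= ((nb >= c).astype(np.uint8) << bit)
    return code

def region_histograms(codes, grid=4):
    """Concatenate normalised per-region code histograms into one vector."""
    h, w = codes.shape
    feats = []
    for i in range(grid):
        for j in range(grid):
            block = codes[i*h//grid:(i+1)*h//grid, j*w//grid:(j+1)*w//grid]
            hist, _ = np.histogram(block, bins=256, range=(0, 256))
            feats.append(hist / max(hist.sum(), 1))
    return np.concatenate(feats)

def chi_square(p, q, eps=1e-10):
    """Chi-square dissimilarity between two histograms."""
    return 0.5 * np.sum((p - q)**2 / (p + q + eps))

rng = np.random.default_rng(3)
a = region_histograms(lbp_image(rng.random((32, 32))))
b = region_histograms(lbp_image(rng.random((32, 32))))
print(a.shape, chi_square(a, a) == 0.0)  # (4096,) True
```

    A nearest-neighbor classifier then assigns the identity of the gallery vector with the smallest chi-square (or largest histogram-intersection) score.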

  13. Three-dimensional photogrammetry for surgical planning of tissue expansion in hemifacial microsomia.

    PubMed

    Jayaratne, Yasas S N; Lo, John; Zwahlen, Roger A; Cheung, Lim K

    2010-12-01

    We aim to illustrate the applications of 3-dimensional (3-D) photogrammetry for surgical planning and longitudinal assessment of the volumetric changes in hemifacial microsomia. A 3-D photogrammetric system was employed for planning soft tissue expansion and transplantation of a vascularized scapular flap for a patient with hemifacial microsomia. The facial deficiency was calculated by superimposing a mirror of the normal side on the preoperative image. Postsurgical volumetric changes were monitored by serial superimposition of 3-D images. A total of 31 cm³ of tissue expansion was achieved within a period of 4 weeks. A scapular free flap measuring 8 cm × 5 cm was transplanted to augment the facial deficiency. Postsurgical shrinkage of the flap was observed mainly in the first 3 months and it was minimal thereafter. 3-D photogrammetry can be used as a noninvasive objective tool for assessing facial deformity, planning, and postoperative follow-up of surgical correction of facial asymmetry.

  14. Three-dimensional facial recognition using passive long-wavelength infrared polarimetric imaging.

    PubMed

    Yuffa, Alex J; Gurton, Kristan P; Videen, Gorden

    2014-12-20

    We use a polarimetric camera to record the Stokes parameters and the degree of linear polarization of long-wavelength infrared radiation emitted by human faces. These Stokes images are combined with Fresnel relations to extract the surface normal at each pixel. Integrating over these surface normals yields a three-dimensional facial image. One major difficulty of this technique is that the normal vectors determined from the polarizations are not unique. We overcome this problem by introducing an additional boundary condition on the subject. The major sources of error in producing inversions are noise in the images caused by scattering of the background signal and the ambiguity in determining the surface normals from the Fresnel coefficients.
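
    The final step in this record, integrating per-pixel surface normals (expressed as gradients) into a 3D surface, can be sketched with a Fourier-domain least-squares integrator (Frankot-Chellappa); whether the authors use this particular integrator is an assumption:

```python
import numpy as np

def frankot_chellappa(p, q, dx=1.0, dy=1.0):
    """Integrate a gradient field (p = dz/dx, q = dz/dy) into a surface z
    by least squares in the Fourier domain (Frankot-Chellappa)."""
    h, w = p.shape
    wx = np.fft.fftfreq(w, d=dx) * 2 * np.pi
    wy = np.fft.fftfreq(h, d=dy) * 2 * np.pi
    WX, WY = np.meshgrid(wx, wy)
    denom = WX**2 + WY**2
    denom[0, 0] = 1.0                      # avoid division by zero at DC
    Z = (-1j * WX * np.fft.fft2(p) - 1j * WY * np.fft.fft2(q)) / denom
    Z[0, 0] = 0.0                          # height offset is arbitrary
    return np.real(np.fft.ifft2(Z))

# Synthetic smooth surface and its analytic gradients.
n = 64
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x)
z_true = np.sin(X) + np.cos(Y)
p, q = np.cos(X), -np.sin(Y)               # dz/dx, dz/dy
z = frankot_chellappa(p, q, dx=2 * np.pi / n, dy=2 * np.pi / n)
err = np.abs((z - z.mean()) - (z_true - z_true.mean())).max()
print(err < 1e-6)  # True for a periodic, noise-free gradient field
```

    Real polarimetric gradients are noisy and sign-ambiguous, which is why the record's additional boundary condition is needed before an integrator like this can be applied.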

  15. In vivo observation of age-related structural changes of dermal collagen in human facial skin using collagen-sensitive second harmonic generation microscope equipped with 1250-nm mode-locked Cr:Forsterite laser

    NASA Astrophysics Data System (ADS)

    Yasui, Takeshi; Yonetsu, Makoto; Tanaka, Ryosuke; Tanaka, Yuji; Fukushima, Shu-ichiro; Yamashita, Toyonobu; Ogura, Yuki; Hirao, Tetsuji; Murota, Hiroyuki; Araki, Tsutomu

    2013-03-01

    In vivo visualization of human skin aging is demonstrated using a Cr:Forsterite (Cr:F) laser-based, collagen-sensitive second harmonic generation (SHG) microscope. The deep penetration into human skin, as well as the specific sensitivity to collagen molecules, achieved by this microscope enables us to clearly visualize age-related structural changes of collagen fiber in the reticular dermis. Here we investigated intrinsic aging and/or photoaging in male facial skin. Young subjects show dense distributions of thin collagen fibers, whereas elderly subjects show coarse distributions of thick collagen fibers. Furthermore, a comparison of SHG images between young and elderly subjects with and without a recent life history of excessive sun exposure shows that a combination of photoaging with intrinsic aging significantly accelerates skin aging. We also perform image analysis based on two-dimensional Fourier transformation of the SHG images and extract an aging parameter for human skin. The in vivo collagen-sensitive SHG microscope will be a powerful tool in fields such as cosmeceutical sciences and anti-aging dermatology.
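
    The record does not define its Fourier-based aging parameter; as a crude illustration of the idea, one simple quantity extractable from the 2D power spectrum of a fibre image is an orientation-anisotropy ratio:

```python
import numpy as np

def fft_anisotropy(img, eps=1e-12):
    """Crude texture-orientation index from the 2D power spectrum:
    power along the horizontal frequency axis over power along the
    vertical one. Isotropic textures give values near 1."""
    F = np.abs(np.fft.fftshift(np.fft.fft2(img - img.mean())))**2
    h, w = F.shape
    horiz = F[h // 2, :].sum()    # energy of structures varying along x
    vert = F[:, w // 2].sum()     # energy of structures varying along y
    return (horiz + eps) / (vert + eps)

# Vertical stripes: intensity varies along x only, mimicking aligned fibres.
x = np.arange(64)
stripes = np.tile(np.sin(2 * np.pi * 4 * x / 64), (64, 1))
print(fft_anisotropy(stripes) > 10)  # True: strongly oriented texture
```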

  16. Evaluation of facial attractiveness from end-of-treatment facial photographs.

    PubMed

    Shafiee, Roxanne; Korn, Edward L; Pearson, Helmer; Boyd, Robert L; Baumrind, Sheldon

    2008-04-01

    Orthodontists typically make judgments of facial attractiveness by examining groupings of profile, full-face, and smiling photographs considered together as a "triplet." The primary objective of this study was to determine the relative contributions of the 3 photographs, each considered separately, to the overall judgment a clinician forms by examining the combination of the 3. End-of-treatment triplet orthodontic photographs of 45 randomly selected orthodontic patients were duplicated. Copies of the profile, full-face, and smiling images were generated, and the images were separated and then pooled by image type for all subjects. Ten judges ranked the 45 photographs of each image type for facial attractiveness in groups of 9 to 12, from "most attractive" to "least attractive." Each judge also ranked the triplet groupings for the same 45 subjects. The mean attractiveness rankings for each type of photograph were then correlated with the mean rankings of each other and the triplets. The rankings of the 3 image types correlated highly with each other and the rankings of the triplets (P <.0001). The rankings of the smiling photographs were most predictive of the rankings of the triplets (r = 0.93); those of the profile photographs were the least predictive (r = 0.76). The difference between these correlations was highly statistically significant (P = .0003). It was also possible to test the extent to which the judges' rankings were influenced by sex, original Angle classification, and extraction status of each patient. No statistically significant preferences were found for sex or Angle classification, and only 1 marginally significant preference was found for extraction pattern. Clinician judges demonstrated a high level of agreement in ranking the facial attractiveness of profile, full-face, and smiling photographs of a group of orthodontically treated patients whose actual differences in physical dimensions were relatively small. 
The judges' rankings of the smiling photographs were significantly better predictors of their rankings of the triplet of each patient than were their rankings of the profile photographs.
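The ranking-correlation analysis above can be sketched numerically. The snippet below uses synthetic ranks, not the study's data: `triplet_ranks` and the noise level on `smiling_ranks` are illustrative assumptions, showing only how a Pearson correlation between one image type's mean ranks and the triplet ranks would be computed.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical mean attractiveness ranks (1 = most attractive) for 45
# subjects, as judged from smiling photographs vs. full triplets.
rng = np.random.default_rng(0)
triplet_ranks = np.arange(1, 46, dtype=float)
# Smiling ranks: triplet ranks perturbed by a little judge disagreement.
smiling_ranks = triplet_ranks + rng.normal(0, 3, size=45)

r, p = pearsonr(smiling_ranks, triplet_ranks)
print(f"r = {r:.2f}, p = {p:.1e}")
```

With mild perturbation the correlation stays high, mirroring the strong smiling-to-triplet association the study reports.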

  17. Face Hallucination with Linear Regression Model in Semi-Orthogonal Multilinear PCA Method

    NASA Astrophysics Data System (ADS)

    Asavaskulkiet, Krissada

    2018-04-01

In this paper, we propose a new face hallucination technique that reconstructs face images in HSV color space with a semi-orthogonal multilinear principal component analysis (SO-MPCA) method. This novel hallucination technique can operate directly on tensors via tensor-to-vector projection by imposing the orthogonality constraint in only one mode. In our experiments, we use facial images from the FERET database to test our hallucination approach, as demonstrated by extensive experiments producing high-quality hallucinated color faces. The experimental results clearly demonstrate that we can generate photorealistic color face images by using the SO-MPCA subspace with a linear regression model.
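The subspace-plus-regression idea behind this kind of hallucination can be sketched with ordinary PCA in place of SO-MPCA (which is not available in common libraries); the random "faces" and the 16x16/8x8 sizes are toy assumptions, not the FERET setup.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

# Toy stand-in for a face dataset: 200 random "high-res" faces (16x16)
# and their 8x8 "low-res" counterparts via 2x2 block averaging.
rng = np.random.default_rng(42)
high = rng.random((200, 16, 16))
low = high.reshape(200, 8, 2, 8, 2).mean(axis=(2, 4))

X = low.reshape(200, -1)    # low-res inputs, flattened
Y = high.reshape(200, -1)   # high-res targets, flattened

# Subspace projection (plain PCA here; SO-MPCA would instead project the
# tensor directly, with an orthogonality constraint in only one mode).
pca = PCA(n_components=32).fit(X)
Z = pca.transform(X)

# Linear regression from subspace coefficients to high-res pixels.
reg = LinearRegression().fit(Z, Y)
hallucinated = reg.predict(pca.transform(X[:1])).reshape(16, 16)
print(hallucinated.shape)
```

The design choice is the same in both cases: compress the low-resolution input into a learned subspace, then map those coefficients linearly to the high-resolution output.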

  18. Varying face occlusion detection and iterative recovery for face recognition

    NASA Astrophysics Data System (ADS)

    Wang, Meng; Hu, Zhengping; Sun, Zhe; Zhao, Shuhuan; Sun, Mei

    2017-05-01

In most sparse representation methods for face recognition (FR), occlusion problems are usually handled by removing the occluded parts of both query samples and training samples before performing the recognition process. This practice ignores the global features of the facial image and may lead to unsatisfactory results due to the limitation of local features. Considering the aforementioned drawback, we propose a method called varying occlusion detection and iterative recovery for FR. The main contributions of our method are as follows: (1) to detect an accurate occlusion area of facial images, an image processing and intersection-based clustering combination method is used for occlusion FR; (2) according to an accurate occlusion map, the new integrated facial images are recovered iteratively and put into a recognition process; and (3) the effectiveness on recognition accuracy of our method is verified by comparing it with three typical occlusion map detection methods. Experiments show that the proposed method has highly accurate detection and recovery performance and that it outperforms several similar state-of-the-art methods under partial contiguous occlusion.

  19. An extensive analysis of various texture feature extractors to detect Diabetes Mellitus using facial specific regions.

    PubMed

    Shu, Ting; Zhang, Bob; Yan Tang, Yuan

    2017-04-01

Researchers have recently discovered that Diabetes Mellitus can be detected through non-invasive computerized methods. However, the focus has been on facial block color features. In this paper, we extensively study the effects of texture features extracted from facial specific regions in detecting Diabetes Mellitus using eight texture extractors. The eight methods are from four texture feature families: (1) statistical texture feature family: Image Gray-scale Histogram, Gray-level Co-occurrence Matrix, and Local Binary Pattern; (2) structural texture feature family: Voronoi Tessellation; (3) signal-processing-based texture feature family: Gaussian, Steerable, and Gabor filters; and (4) model-based texture feature family: Markov Random Field. In order to determine the most appropriate extractor with optimal parameter(s), various parameter settings of each extractor are evaluated. For each extractor, the same dataset (284 Diabetes Mellitus and 231 Healthy samples), classifiers (k-Nearest Neighbors and Support Vector Machines), and validation method (10-fold cross validation) are used. According to the experiments, the first and third families achieved a better outcome in detecting Diabetes Mellitus than the other two. The best texture feature extractor for Diabetes Mellitus detection is the Image Gray-scale Histogram with bin number = 256, obtaining an accuracy of 99.02%, a sensitivity of 99.64%, and a specificity of 98.26% using SVM. Copyright © 2017 Elsevier Ltd. All rights reserved.
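The winning pipeline described above (256-bin gray-scale histogram features, an SVM, 10-fold cross-validation) can be sketched as follows. The patches here are synthetic stand-ins with a hypothetical intensity shift between groups, not the study's facial-region data, so the resulting accuracy is illustrative only.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in for the facial-region patches: 100 "diabetic" and
# 100 "healthy" 32x32 gray-scale patches with slightly different
# intensity distributions (the real study used 284 vs. 231 samples).
rng = np.random.default_rng(0)
diabetic = rng.normal(120, 30, (100, 32, 32))
healthy = rng.normal(135, 30, (100, 32, 32))
patches = np.clip(np.concatenate([diabetic, healthy]), 0, 255)
labels = np.array([1] * 100 + [0] * 100)

# Image Gray-scale Histogram feature with bin number = 256.
def histogram_feature(patch, bins=256):
    hist, _ = np.histogram(patch, bins=bins, range=(0, 256), density=True)
    return hist

features = np.array([histogram_feature(p) for p in patches])

# 10-fold cross-validation with an SVM, as in the study's protocol.
scores = cross_val_score(SVC(), features, labels, cv=10)
print(f"mean accuracy: {scores.mean():.2f}")
```

Swapping `histogram_feature` for another extractor (LBP, Gabor responses, etc.) while keeping the classifier and validation fixed is exactly the comparison protocol the paper describes.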

  20. A new entity in the differential diagnosis of geniculate ganglion tumours: fibrous connective tissue lesion of the facial nerve.

    PubMed

    de Arriba, Alvaro; Lassaletta, Luis; Pérez-Mora, Rosa María; Gavilán, Javier

    2013-01-01

Differential diagnosis of geniculate ganglion tumours includes chiefly schwannomas, haemangiomas and meningiomas. We report the case of a patient whose clinical and imaging findings mimicked the presentation of a facial nerve schwannoma. Pathological studies revealed a lesion with nerve bundles unstructured by intense collagenisation. Consequently, it was called fibrous connective tissue lesion of the facial nerve. Copyright © 2011 Elsevier España, S.L. All rights reserved.

  1. A randomized, controlled comparative study of the wrinkle reduction benefits of a cosmetic niacinamide/peptide/retinyl propionate product regimen vs. a prescription 0.02% tretinoin product regimen.

    PubMed

    Fu, J J J; Hillebrand, G G; Raleigh, P; Li, J; Marmor, M J; Bertucci, V; Grimes, P E; Mandy, S H; Perez, M I; Weinkle, S H; Kaczvinsky, J R

    2010-03-01

    Tretinoin is considered the benchmark prescription topical therapy for improving fine facial wrinkles, but skin tolerance issues can affect patient compliance. In contrast, cosmetic antiwrinkle products are well tolerated but are generally presumed to be less efficacious than tretinoin. To compare the efficacy of a cosmetic moisturizer regimen vs. a prescription regimen with 0.02% tretinoin for improving the appearance of facial wrinkles. An 8-week, randomized, parallel-group study was conducted in 196 women with moderate to moderately severe periorbital wrinkles. Following 2 weeks washout, subjects on the cosmetic regimen (n = 99) used a sun protection factor (SPF) 30 moisturizing lotion containing 5% niacinamide, peptides and antioxidants, a moisturizing cream containing niacinamide and peptides, and a targeted wrinkle product containing niacinamide, peptides and 0.3% retinyl propionate. Subjects on the prescription regimen (n = 97) used 0.02% tretinoin plus moisturizing SPF 30 sunscreen. Subject cohorts (n = 25) continued treatment for an additional 16 weeks. Changes in facial wrinkling were assessed by both expert grading and image analysis of digital images of subjects' faces and by self-assessment questionnaire. Product tolerance was assessed via clinical erythema and dryness grading, subject self-assessment, and determinations of skin barrier integrity (transepidermal water loss) and stratum corneum protein changes. The cosmetic regimen significantly improved wrinkle appearance after 8 weeks relative to tretinoin, with comparable benefits after 24 weeks. The cosmetic regimen was significantly better tolerated than tretinoin through 8 weeks by all measures. An appropriately designed cosmetic regimen can improve facial wrinkle appearance comparably with the benchmark prescription treatment, with improved tolerability.

  3. An equine pain face

    PubMed Central

    Gleerup, Karina B; Forkman, Björn; Lindegaard, Casper; Andersen, Pia H

    2015-01-01

    Objective The objective of this study was to investigate the existence of an equine pain face and to describe this in detail. Study design Semi-randomized, controlled, crossover trial. Animals Six adult horses. Methods Pain was induced with two noxious stimuli, a tourniquet on the antebrachium and topical application of capsaicin. All horses participated in two control trials and received both noxious stimuli twice, once with and once without an observer present. During all sessions their pain state was scored. The horses were filmed and the close-up video recordings of the faces were analysed for alterations in behaviour and facial expressions. Still images from the trials were evaluated for the presence of each of the specific pain face features identified from the video analysis. Results Both noxious challenges were effective in producing a pain response resulting in significantly increased pain scores. Alterations in facial expressions were observed in all horses during all noxious stimulations. The number of pain face features present on the still images from the noxious challenges were significantly higher than for the control trial (p = 0.0001). Facial expressions representative for control and pain trials were condensed into explanatory illustrations. During pain sessions with an observer present, the horses increased their contact-seeking behavior. Conclusions and clinical relevance An equine pain face comprising ‘low’ and/or ‘asymmetrical’ ears, an angled appearance of the eyes, a withdrawn and/or tense stare, mediolaterally dilated nostrils and tension of the lips, chin and certain facial muscles can be recognized in horses during induced acute pain. This description of an equine pain face may be useful for improving tools for pain recognition in horses with mild to moderate pain. PMID:25082060

  4. Face recognition via edge-based Gabor feature representation for plastic surgery-altered images

    NASA Astrophysics Data System (ADS)

    Chude-Olisah, Chollette C.; Sulong, Ghazali; Chude-Okonkwo, Uche A. K.; Hashim, Siti Z. M.

    2014-12-01

Plastic surgery procedures on the face introduce skin texture variations between images of the same person (intra-subject), thereby making the task of face recognition more difficult than in the normal scenario. Usually, in contemporary face recognition systems, the original gray-level face image is used as input to the Gabor descriptor, which translates to encoding some texture properties of the face image. The texture-encoding process significantly degrades the performance of such systems in the case of plastic surgery due to the presence of surgically induced intra-subject variations. Based on the proposition that the shape of significant facial components such as eyes, nose, eyebrow, and mouth remains unchanged after plastic surgery, this paper employs an edge-based Gabor feature representation approach for the recognition of surgically altered face images. We use the edge information, which is dependent on the shapes of the significant facial components, to address the plastic surgery-induced texture variation problems. To ensure that the significant facial components represent useful edge information with little or no false edges, a simple illumination normalization technique is proposed for preprocessing. A Gabor wavelet is applied to the edge image to accentuate the uniqueness of the significant facial components for discriminating among different subjects. The performance of the proposed method is evaluated on the Georgia Tech (GT) and the Labeled Faces in the Wild (LFW) databases with illumination and expression problems, and the plastic surgery database with texture changes. Results show that the proposed edge-based Gabor feature representation approach is robust against plastic surgery-induced face variations amidst expression and illumination problems and outperforms the existing plastic surgery face recognition methods reported in the literature.

  5. Multiple Mechanisms in the Perception of Face Gender: Effect of Sex-Irrelevant Features

    ERIC Educational Resources Information Center

    Komori, Masashi; Kawamura, Satoru; Ishihara, Shigekazu

    2011-01-01

    Effects of sex-relevant and sex-irrelevant facial features on the evaluation of facial gender were investigated. Participants rated masculinity of 48 male facial photographs and femininity of 48 female facial photographs. Eighty feature points were measured on each of the facial photographs. Using a generalized Procrustes analysis, facial shapes…

  6. Interactive searching of facial image databases

    NASA Astrophysics Data System (ADS)

    Nicholls, Robert A.; Shepherd, John W.; Shepherd, Jean

    1995-09-01

A set of psychological facial descriptors has been devised to enable computerized searching of criminal photograph albums. The descriptors have been used to encode image databases of up to twelve thousand images. Using a system called FACES, the databases are searched by translating a witness' verbal description into corresponding facial descriptors. Trials of FACES have shown that this coding scheme is more productive and efficient than searching traditional photograph albums. An alternative method of searching the encoded database using a genetic algorithm is currently being tested. The genetic search method does not require the witness to verbalize a description of the target but merely to indicate a degree of similarity between the target and a limited selection of images from the database. The major drawback of FACES is that it requires manual encoding of images. Research is being undertaken to automate the process; however, it will require an algorithm which can predict human descriptive values. Alternatives to human-derived coding schemes exist using statistical classifications of images. Since databases encoded using statistical classifiers do not have an obvious direct mapping to human-derived descriptors, a search method which does not require the entry of human descriptors is required. A genetic search algorithm is being tested for such a purpose.
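The genetic search described above can be sketched in miniature. In this toy version the "witness feedback" is simulated by similarity to a hidden target vector of descriptor codes; the population size, code alphabet, and mutation rate are all illustrative assumptions, and a real system would obtain fitness from the witness's similarity judgments instead.

```python
import numpy as np

# Each individual is a vector of facial descriptor codes (values 0-4).
rng = np.random.default_rng(1)
n_codes, pop_size, generations = 20, 30, 40
target = rng.integers(0, 5, n_codes)   # hidden "wanted" face

def fitness(ind):
    # Simulated witness feedback: higher = more similar to the target.
    return -np.abs(ind - target).sum()

pop = rng.integers(0, 5, (pop_size, n_codes))
for _ in range(generations):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)][-pop_size // 2:]   # keep the best half
    children = []
    for _ in range(pop_size - len(parents)):
        a, b = parents[rng.integers(len(parents), size=2)]
        mask = rng.random(n_codes) < 0.5                 # uniform crossover
        child = np.where(mask, a, b)
        mutate = rng.random(n_codes) < 0.05              # point mutation
        child = np.where(mutate, rng.integers(0, 5, n_codes), child)
        children.append(child)
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print(fitness(best))
```

Because the best individuals are carried over each generation, the search never regresses, and over a few dozen generations the population converges toward faces the witness rates as similar.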

  7. Toward DNA-based facial composites: preliminary results and validation.

    PubMed

    Claes, Peter; Hill, Harold; Shriver, Mark D

    2014-11-01

The potential of constructing useful DNA-based facial composites is forensically of great interest. Given the significant identity information coded in the human face, these predictions could help investigations out of an impasse. Although there is substantial evidence that much of the total variation in facial features is genetically mediated, the discovery of which genes and gene variants underlie normal facial variation has been hampered primarily by the multipartite nature of facial variation. Traditionally, such physical complexity is simplified by simple scalar measurements defined a priori, such as nose or mouth width, or alternatively using dimensionality reduction techniques such as principal component analysis, where each principal coordinate is then treated as a scalar trait. However, as shown in previous and related work, a more impartial and systematic approach to modeling facial morphology is available and can facilitate both the gene discovery steps, as we recently showed, and DNA-based facial composite construction, as we show here. We first use genomic ancestry and sex to create a base-face, which is simply an average sex- and ancestry-matched face. Subsequently, the effects of 24 individual SNPs that have been shown to have significant effects on facial variation are overlaid on the base-face, forming the predicted-face in a process akin to a photomontage or image blending. We next evaluate the accuracy of predicted faces using cross-validation. Physical accuracy of the facial predictions, either locally in particular parts of the face or in terms of overall similarity, is mainly determined by sex and genomic ancestry. The SNP-effects maintain the physical accuracy while significantly increasing the distinctiveness of the facial predictions, which would be expected to reduce false positives in perceptual identification tasks.
To the best of our knowledge this is the first effort at generating facial composites from DNA and the results are preliminary but certainly promising, especially considering the limited amount of genetic information about the face contained in these 24 SNPs. This approach can incorporate additional SNPs as these are discovered and their effects documented. In this context we discuss three main avenues of research: expanding our knowledge of the genetic architecture of facial morphology, improving the predictive modeling of facial morphology by exploring and incorporating alternative prediction models, and increasing the value of the results through the weighted encoding of physical measurements in terms of human perception of faces. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
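The base-face-plus-overlay construction can be sketched in a few lines. Everything here is a toy stand-in: faces are random landmark-coordinate vectors, and the per-SNP effect vectors and genotypes are hypothetical, showing only the arithmetic of averaging a matched sample and adding genotype-scaled effects.

```python
import numpy as np

# Faces as vectors of 3D landmark coordinates (50 landmarks x 3 coords).
rng = np.random.default_rng(7)
n_landmarks = 50
faces = rng.normal(0, 1, (120, n_landmarks * 3))   # sex/ancestry-matched sample
base_face = faces.mean(axis=0)                     # the "base-face"

# Hypothetical effect vectors for 24 SNPs, overlaid on the base-face
# scaled by the individual's genotype (0, 1, or 2 effect-allele copies).
n_snps = 24
snp_effects = rng.normal(0, 0.05, (n_snps, n_landmarks * 3))
genotype = rng.integers(0, 3, n_snps)

predicted_face = base_face + genotype @ snp_effects
print(predicted_face.shape)
```

The additive overlay is what makes the process "akin to a photomontage": each SNP contributes a small displacement field on top of the average matched face.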

  8. [An individual facial shield for a sportsman with an orofacial injury].

    PubMed

    de Baat, C; Peters, R; van Iperen-Keiman, C M; de Vleeschouwer, M

    2005-05-01

Facial shields are used when practising contact sports, high-speed sports, sports using hard balls, sticks or bats, sports using protective shields or covers, and sports using hard boardings around the sports ground. Examples of facial shields are commercially available helmets, standardised for each branch of sport. Fabricating individual protective shields is primarily restricted to mouth guards. In individual cases a more extensive facial shield is demanded, for instance in case of a surgically stabilised facial bone fracture. In order to fabricate an extensive individual facial shield, a highly accurate model of the anterior part of the head is required. Such a model can be provided by making an impression of the face, which is poured in dental stone. Another method is producing a stereolithographic model using computed tomography or magnetic resonance imaging. On the accurate model the facial shield can be designed and fabricated from a strictly safe material, such as polyvinylchloride or polycarbonate.

  9. The Perception of Dynamic and Static Facial Expressions of Happiness and Disgust Investigated by ERPs and fMRI Constrained Source Analysis

    PubMed Central

    Trautmann-Lengsfeld, Sina Alexa; Domínguez-Borràs, Judith; Escera, Carles; Herrmann, Manfred; Fehr, Thorsten

    2013-01-01

    A recent functional magnetic resonance imaging (fMRI) study by our group demonstrated that dynamic emotional faces are more accurately recognized and evoked more widespread patterns of hemodynamic brain responses than static emotional faces. Based on this experimental design, the present study aimed at investigating the spatio-temporal processing of static and dynamic emotional facial expressions in 19 healthy women by means of multi-channel electroencephalography (EEG), event-related potentials (ERP) and fMRI-constrained regional source analyses. ERP analysis showed an increased amplitude of the LPP (late posterior positivity) over centro-parietal regions for static facial expressions of disgust compared to neutral faces. In addition, the LPP was more widespread and temporally prolonged for dynamic compared to static faces of disgust and happiness. fMRI constrained source analysis on static emotional face stimuli indicated the spatio-temporal modulation of predominantly posterior regional brain activation related to the visual processing stream for both emotional valences when compared to the neutral condition in the fusiform gyrus. The spatio-temporal processing of dynamic stimuli yielded enhanced source activity for emotional compared to neutral conditions in temporal (e.g., fusiform gyrus), and frontal regions (e.g., ventromedial prefrontal cortex, medial and inferior frontal cortex) in early and again in later time windows. The present data support the view that dynamic facial displays trigger more information reflected in complex neural networks, in particular because of their changing features potentially triggering sustained activation related to a continuing evaluation of those faces. A combined fMRI and EEG approach thus provides an advanced insight to the spatio-temporal characteristics of emotional face processing, by also revealing additional neural generators, not identifiable by the only use of an fMRI approach. PMID:23818974

  10. Subject independent facial expression recognition with robust face detection using a convolutional neural network.

    PubMed

    Matsugu, Masakazu; Mori, Katsuhiko; Mitari, Yusuke; Kaneda, Yuji

    2003-01-01

Reliable detection of ordinary facial expressions (e.g. smile) despite the variability among individuals as well as face appearance is an important step toward the realization of a perceptual user interface with autonomous perception of persons. We describe a rule-based algorithm for robust facial expression recognition combined with robust face detection using a convolutional neural network. In this study, we address the problem of subject independence as well as translation, rotation, and scale invariance in the recognition of facial expression. The result shows reliable detection of smiles with a recognition rate of 97.6% for 5600 still images of more than 10 subjects. The proposed algorithm demonstrated the ability to discriminate smiling from talking based on the saliency score obtained from voting visual cues. To the best of our knowledge, it is the first facial expression recognition model with the property of subject independence combined with robustness to variability in facial appearance.

  11. Delayed presentation of traumatic facial nerve (CN VII) paralysis.

    PubMed

    Napoli, Anthony M; Panagos, Peter

    2005-11-01

Facial nerve paralysis (Cranial Nerve VII, CN VII) can be a disfiguring disorder with profound impact upon the patient. The etiology of facial nerve paralysis may be congenital, iatrogenic, or result from neoplasm, infection, trauma, or toxic exposure. In the emergency department, the most common cause of unilateral facial paralysis is Bell's palsy, also known as idiopathic facial paralysis (IFP). We report a case of delayed presentation of unilateral facial nerve paralysis 3 days after sustaining a traumatic head injury. Re-evaluation and imaging of this patient revealed a full facial paralysis and a temporal bone fracture extending into the facial canal. Because cranial nerve injuries occur in approximately 5-10% of head-injured patients, a thorough history and physical examination are important to differentiate IFP from another etiology. Newer generation high-resolution computed tomography (CT) scans commonly demonstrate these fractures. An understanding of this complication, appropriate patient follow-up, and early involvement of the otolaryngologist are important in the management of these patients. The mechanism as well as the timing of facial nerve paralysis will determine the proper evaluation, consultation, and management for the patient. Patients with total or immediate paralysis as well as those with poorly prognostic audiogram results are good candidates for surgical repair.

  12. Small vestibular schwannomas presenting with facial nerve palsy.

    PubMed

    Espahbodi, Mana; Carlson, Matthew L; Fang, Te-Yung; Thompson, Reid C; Haynes, David S

    2014-06-01

To describe the surgical management and convalescence of two patients presenting with severe facial nerve weakness associated with small intracanalicular vestibular schwannomas (VS). Retrospective review. Two adult female patients presenting with audiovestibular symptoms and subacute facial nerve paralysis (House-Brackmann Grade IV and V). In both cases, post-contrast T1-weighted magnetic resonance imaging revealed an enhancing lesion within the internal auditory canal without lateral extension beyond the fundus. Translabyrinthine exploration demonstrated vestibular nerve origin of tumor, extrinsic to the facial nerve, and frozen section pathology confirmed schwannoma. Gross total tumor resection with VIIth cranial nerve preservation and decompression of the labyrinthine segment of the facial nerve was performed. Both patients recovered full motor function between 6 and 8 months after surgery. Although rare, small VS may cause severe facial neuropathy, mimicking the presentation of facial nerve schwannomas and other less common pathologies. In the absence of labyrinthine extension on MRI, surgical exploration is the only reliable means of establishing a diagnosis. In the case of confirmed VS, early gross total resection with facial nerve preservation and labyrinthine segment decompression may afford full motor recovery, an outcome that cannot be achieved with facial nerve grafting.

  13. iFER: facial expression recognition using automatically selected geometric eye and eyebrow features

    NASA Astrophysics Data System (ADS)

    Oztel, Ismail; Yolcu, Gozde; Oz, Cemil; Kazan, Serap; Bunyak, Filiz

    2018-03-01

Facial expressions have an important role in interpersonal communications and estimation of emotional states or intentions. Automatic recognition of facial expressions has led to many practical applications and became one of the important topics in computer vision. We present a facial expression recognition system that relies on geometry-based features extracted from eye and eyebrow regions of the face. The proposed system detects keypoints on frontal face images and forms a feature set using geometric relationships among groups of detected keypoints. The obtained feature set is refined and reduced using the sequential forward selection (SFS) algorithm and fed to a support vector machine classifier to recognize five facial expression classes. The proposed system, iFER (eye-eyebrow only facial expression recognition), is robust to lower face occlusions that may be caused by beards, mustaches, scarves, etc. and lower face motion during speech production. Preliminary experiments on benchmark datasets produced promising results, outperforming previous facial expression recognition studies using partial face features, and comparable results to studies using whole face information, only slightly lower by ~2.5% compared to the best whole-face facial expression recognition system while using only ~1/3 of the facial region.
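The SFS-then-SVM stage of this pipeline can be sketched with scikit-learn. The feature matrix below is synthetic (`make_classification` standing in for the geometric eye/eyebrow features, with the feature counts chosen arbitrarily), so only the selection mechanics match the paper.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.svm import SVC

# Stand-in for the geometric eye/eyebrow features: 40 numeric features,
# 5 expression classes, with only some features informative.
X, y = make_classification(n_samples=300, n_features=40, n_informative=8,
                           n_classes=5, n_clusters_per_class=1, random_state=0)

# Sequential forward selection refines the feature set before the SVM:
# features are added one at a time, keeping whichever addition gives the
# best cross-validated score for the classifier.
sfs = SequentialFeatureSelector(SVC(), n_features_to_select=8,
                                direction="forward", cv=3)
sfs.fit(X, y)
print(sfs.get_support().sum())
```

`sfs.transform(X)` then yields the reduced feature matrix that would be fed to the final SVM classifier.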

  14. Clinical and Histological Evaluations of Enlarged Facial Skin Pores After Low Energy Level Treatments With Fractional Carbon Dioxide Laser in Korean Patients.

    PubMed

    Kwon, Hyuck Hoon; Choi, Sun Chul; Lee, Won-Yong; Jung, Jae Yoon; Park, Gyeong-Hun

    2018-03-01

    Enlarged facial pores can be an early manifestation of skin aging and they are a common aesthetic concern for Asians. However, studies of improving the appearance of enlarged pores have been limited. The authors aimed to study the application of CO2 fractional laser treatment in patients with enlarged facial pores. A total of 32 patients with dilated facial pores completed 3 consecutive sessions of low energy level treatments with a fractional CO2 laser at 4-week intervals. Image analysis was performed to calculate the number of enlarged pores before each treatment session and 12 weeks after the final treatment. After application of laser treatments, there was a significant decrease in the number of enlarged pores. The mean number of enlarged pores was decreased by 28.8% after the second session and by 54.5% at post-treatment evaluation. Post-treatment side effects were mild and transitory. Histological and immunohistochemical analyses demonstrated clear increases in the number of collagen fibers and the expression of transforming growth factor-β1. The short-term results showed that treatment with low energy level CO2 fractional laser therapy could be a safe and effective option for patients with Fitzpatrick skin Types III and IV who are concerned with enlarged pores.

  15. Perception of Age, Attractiveness, and Tiredness After Isolated and Combined Facial Subunit Aging.

    PubMed

    Forte, Antonio Jorge; Andrew, Tom W; Colasante, Cesar; Persing, John A

    2015-12-01

Patients often seek help to redress aging that affects various regions of the face (subunits). The purpose of this study was to determine how aging of different facial subunits impacts perception of age, attractiveness, and tiredness. Frontal and lateral view facial photographs of a middle-aged woman were modified using imaging software to independently age different facial features. Sixty-six subjects were administered a questionnaire and presented with a baseline unmodified picture and others containing different individual or grouped aging of facial subunits. Test subjects were asked to estimate the age of the subject in the image and quantify (0-10 scale) how "tired" and "attractive" they appeared. Facial subunits were then ranked by their impact on perception of age, attractiveness, and tiredness. The correlation between age and attractiveness showed a strong inverse relationship of approximately -0.95 in both lateral and frontal views. From most to least impact on age, the rank assignment for frontal view facial subunits was full facial aging, middle third, lower third, upper third, vertical lip rhytides, horizontal forehead rhytides, jowls, upper eyelid ptosis, loss of malar volume, lower lid fat herniation, deepening glabellar furrows, and deepening nasolabial folds. From most to least impact on age, the rank assignment for lateral view facial subunits was severe neck ptosis, jowls, moderate neck ptosis, vertical lip rhytides, crow's feet, lower lid fat herniation, loss of malar volume, and elongated earlobe. This study provides a preliminary template for further research to determine which anatomical subunit will have the most substantial effect on an aged appearance, as well as on the perception of tiredness and attractiveness. This journal requires that authors assign a level of evidence to each article.
For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266.

  16. Acromegaly determination using discriminant analysis of the three-dimensional facial classification in Taiwanese.

    PubMed

    Wang, Ming-Hsu; Lin, Jen-Der; Chang, Chen-Nen; Chiou, Wen-Ko

    2017-08-01

    The aim of this study was to assess the size, angle and positional characteristics of facial anthropometry in "acromegalic" patients and control subjects. We also identify possible facial soft tissue measurements for generating discriminant functions for acromegaly determination in males and females, supporting early self-awareness of the disease. This is a cross-sectional study. Subjects participating in this study included 70 patients diagnosed with acromegaly (35 females and 35 males) and 140 gender-matched control individuals. Three-dimensional facial images were collected via a camera system. Thirteen landmarks were selected. Eleven measurements from three categories were selected and applied, including five frontal widths, three lateral depths and three lateral angular measurements. Descriptive analyses were conducted using means and standard deviations for each measurement. Univariate and multivariate discriminant function analyses were applied in order to calculate the accuracy of acromegaly detection. Patients with acromegaly exhibit soft-tissue facial enlargement and hypertrophy. Changes in frontal widths as well as lateral depths and angles were evident. The average accuracies of all functions for female patient detection ranged from 80.0% to 91.4%. The average accuracies of all functions for male patient detection ranged from 81.0% to 94.3%. The greatest anomaly was observed in the lateral angles, with greater enlargement of "nasofrontal" angles in females and greater "mentolabial" angles in males. Additionally, the shapes of the lateral angles showed changes. The majority of the facial measurements proved dynamic in acromegaly patients; however, it remains difficult to detect the disease from progressive body anthropometric changes alone. 
    The discriminant functions developed in this study could help patients, their families, medical practitioners and others to identify and track progressive facial change patterns before possibly affected patients present to the hospital, especially using the lateral angles, which can be calculated from relative point-to-point changes derived from 2D lateral imagery without 3D anthropometric measurements. This study provides a simple, novel method to detect acromegaly once patients become aware of abnormal facial appearance, and it suggests that undiagnosed patients be urged to seek hospital evaluation as soon as possible for early diagnosis of acromegaly.
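    The univariate discriminant functions described in this record can be sketched in miniature: fit a Gaussian model per group for a single facial measurement and assign new values to the more likely class. All names and numbers below are invented for illustration; they are not the study's data or coefficients.

```python
from math import log
from statistics import mean, stdev

def fit(values):
    """Gaussian class model: (mean, standard deviation)."""
    return mean(values), stdev(values)

def log_likelihood(x, model):
    """Gaussian log-likelihood of x, additive constants dropped."""
    mu, sd = model
    return -log(sd) - (x - mu) ** 2 / (2 * sd ** 2)

def classify(x, case_model, control_model):
    """Assign x to the class under which it is more likely (equal priors)."""
    if log_likelihood(x, case_model) > log_likelihood(x, control_model):
        return "case"
    return "control"

# Invented nasofrontal-angle values (degrees), illustrative only
cases = [148.0, 151.2, 149.5, 152.8, 150.1]
controls = [138.4, 140.0, 136.9, 139.2, 141.1]
case_model, control_model = fit(cases), fit(controls)
correct = sum(classify(x, case_model, control_model) == "case" for x in cases) + \
          sum(classify(x, case_model, control_model) == "control" for x in controls)
accuracy = correct / (len(cases) + len(controls))   # 1.0 on this toy data
```

The multivariate functions in the study combine several such measurements; the one-variable version above only illustrates the principle of thresholding between two fitted group distributions.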

  17. A small-world network model of facial emotion recognition.

    PubMed

    Takehara, Takuma; Ochiai, Fumio; Suzuki, Naoto

    2016-01-01

    Various models have been proposed to increase understanding of the cognitive basis of facial emotions. Despite those efforts, interactions between facial emotions have received minimal attention. If collective behaviours relating to each facial emotion in the comprehensive cognitive system could be assumed, specific facial emotion relationship patterns might emerge. In this study, we demonstrate that the framework of complex networks can effectively capture those patterns. We generate 81 facial emotion images (6 prototypes and 75 morphs) and then ask participants to rate degrees of similarity in 3240 facial emotion pairs in a paired comparison task. A facial emotion network constructed on the basis of similarity clearly forms a small-world network, which features an extremely short average network distance and close connectivity. Further, even if two facial emotions have opposing valences, they are connected within only two steps. In addition, we show that intermediary morphs are crucial for maintaining full network integration, whereas prototypes are not at all important. These results suggest the existence of collective behaviours in the cognitive systems of facial emotions and also describe why people can efficiently recognize facial emotions in terms of information transmission and propagation. For comparison, we construct three simulated networks: one based on the categorical model, one based on the dimensional model, and one random network. The results reveal that small-world connectivity in facial emotion networks is clearly different from that of these simulated networks, suggesting that a small-world network is the most suitable model for capturing the cognitive basis of facial emotions.

  18. Obstructive Sleep Apnea in Women: Study of Speech and Craniofacial Characteristics.

    PubMed

    Tyan, Marina; Espinoza-Cuadros, Fernando; Fernández Pozo, Rubén; Toledano, Doroteo; Lopez Gonzalo, Eduardo; Alcazar Ramirez, Jose Daniel; Hernandez Gomez, Luis Alfonso

    2017-11-06

    Obstructive sleep apnea (OSA) is a common sleep disorder characterized by frequent cessation of breathing lasting 10 seconds or longer. The diagnosis of OSA is performed through an expensive procedure, which requires an overnight stay at the hospital. This has led to several proposals based on the analysis of patients' facial images and speech recordings as an attempt to develop simpler and cheaper methods to diagnose OSA. The objective of this study was to analyze possible relationships between OSA and speech and facial features in a female population, and whether these possible connections may be affected by the specific clinical characteristics of the OSA population; more specifically, to explore how the connection between OSA and speech and facial features can be affected by gender. All subjects are Spanish patients suspected to suffer from OSA and referred to a sleep disorders unit. Voice recordings and photographs were collected in a supervised but not highly controlled way, trying to test a scenario close to realistic clinical practice, where OSA is assessed using an app running on a mobile device. Furthermore, clinical variables such as weight, height, age, and cervical perimeter, which are usually reported as predictors of OSA, were also gathered. Acoustic analysis is centered on sustained vowels. Facial analysis consists of a set of local craniofacial features related to OSA, which were extracted from images after detecting facial landmarks using active appearance models. To study the probable OSA connection with speech and craniofacial features, correlations among the apnea-hypopnea index (AHI), clinical variables, and acoustic and facial measurements were analyzed. The results obtained for the female population indicate mainly weak correlations (r values between .20 and .39). 
    Correlations between AHI, clinical variables, and speech features show the prevalence of formant frequencies over bandwidths, with F2/i/ being the most appropriate formant frequency for OSA prediction in women. Results obtained for the male population indicate mainly very weak correlations (r values between .01 and .19). In this case, bandwidths prevail over formant frequencies. Correlations between AHI, clinical variables, and craniofacial measurements are very weak. In accordance with previous studies, some clinical variables are found to be good predictors of OSA. In addition, strong correlations are found between AHI and some clinical variables with speech and facial features. Regarding speech features, the results show the prevalence of formant frequency F2/i/ over the rest of the features as an OSA predictive feature for the female population. Although the correlation reported is weak, this study aims to find traces that could explain the possible connection between OSA and speech in women. In the case of craniofacial measurements, the results show that some features that can be used for predicting OSA in male patients are not suitable for testing the female population. ©Marina Tyan, Fernando Espinoza-Cuadros, Rubén Fernández Pozo, Doroteo Toledano, Eduardo Lopez Gonzalo, Jose Daniel Alcazar Ramirez, Luis Alfonso Hernandez Gomez. Originally published in JMIR Mhealth and Uhealth (http://mhealth.jmir.org), 06.11.2017.
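    The r-value bands quoted in this record (.01-.19 very weak, .20-.39 weak) are ordinary Pearson correlation coefficients. A minimal stdlib computation is sketched below; the AHI and F2/i/ values are invented for illustration and deliberately show a strong negative correlation, unlike the weak ones the study reports:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation: covariance divided by the product of spreads."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

ahi = [5.0, 12.0, 18.0, 25.0, 33.0, 41.0]                # invented AHI values
f2_i = [2310.0, 2290.0, 2260.0, 2240.0, 2205.0, 2190.0]  # invented F2/i/ (Hz)
r = pearson_r(ahi, f2_i)   # strongly negative on this toy data
```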

  19. A new atlas for the evaluation of facial features: advantages, limits, and applicability.

    PubMed

    Ritz-Timme, Stefanie; Gabriel, Peter; Obertovà, Zuzana; Boguslawski, Melanie; Mayer, F; Drabik, A; Poppa, Pasquale; De Angelis, Danilo; Ciaffi, Romina; Zanotti, Benedetta; Gibelli, Daniele; Cattaneo, Cristina

    2011-03-01

    Methods for verifying the identity of offenders in criminal investigations involving video-surveillance images are currently under scrutiny by forensic experts around the globe. The anthroposcopic, or morphological, approach based on facial features is the one most frequently used by international forensic experts. However, a specific set of applicable features has not yet been agreed on by the experts. Furthermore, population frequencies of such features have not been recorded, and only a few validation tests have been published. To combat and prevent crime in Europe, the European Commission funded an extensive research project dedicated to the optimization of methods for facial identification of persons in photographs. Within this research project, standardized photographs of 900 males between 20 and 31 years of age from Germany, Italy, and Lithuania were acquired. Based on these photographs, 43 facial features were described and evaluated in detail. These efforts led to the development of a new morphologic atlas, the DMV atlas ("Düsseldorf Milan Vilnius," from the participating cities). This study is the first attempt at verifying the feasibility of this atlas as a preliminary step toward personal identification by exploring intra- and interobserver error. The analysis yielded mismatch percentages from 19% to 39%, which reflect the subjectivity of the approach and suggest caution in verifying personal identity solely from the classification of facial features. Nonetheless, the use of the atlas leads to a significant improvement in the consistency of evaluation.

  20. Petrous apex cholesterol granuloma aeration: does it matter?

    PubMed

    Castillo, Michael P; Samy, Ravi N; Isaacson, Brandon; Roland, Peter S

    2008-04-01

    To determine whether aeration of surgically treated petrous apex cholesterol granulomas (PA CG) has any correlation with resolution of symptoms. Retrospective chart review. Twenty-six patients with a petrous apex cholesterol granuloma during a 16-year period were reviewed. Seventeen of 26 (65%) patients underwent surgical intervention. Preoperative symptoms included headache, facial weakness/twitching or numbness, vertigo, hearing loss, vision changes, and tinnitus. Postoperative symptoms resolved in 9 of the 16 patients (56%). Three patients had a postoperative headache. Facial nerve dysfunction persisted or recurred in four patients. One patient was lost to follow-up. Thirteen patients had postoperative imaging. All 13 (100%) patients demonstrated stable or increased size of PA CG with no evidence of aeration. Revision surgery was performed in four patients (25%) for facial nerve symptoms or persistent headaches. The extent of PA CG aeration on postoperative imaging had no correlation with symptom resolution or cyst enlargement. Revision surgery should not depend on imaging alone but primarily on patient symptoms and physical exam.

  1. Abnormal Amygdala and Prefrontal Cortex Activation to Facial Expressions in Pediatric Bipolar Disorder

    ERIC Educational Resources Information Center

    Garrett, Amy S.; Reiss, Allan L.; Howe, Meghan E.; Kelley, Ryan G.; Singh, Manpreet K.; Adleman, Nancy E.; Karchemskiy, Asya; Chang, Kiki D.

    2012-01-01

    Objective: Previous functional magnetic resonance imaging (fMRI) studies in pediatric bipolar disorder (BD) have reported greater amygdala and less dorsolateral prefrontal cortex (DLPFC) activation to facial expressions compared to healthy controls. The current study investigates whether these differences are associated with the early or late…

  2. The Chinese Facial Emotion Recognition Database (CFERD): a computer-generated 3-D paradigm to measure the recognition of facial emotional expressions at different intensities.

    PubMed

    Huang, Charles Lung-Cheng; Hsiao, Sigmund; Hwu, Hai-Gwo; Howng, Shen-Long

    2012-12-30

    The Chinese Facial Emotion Recognition Database (CFERD), a computer-generated three-dimensional (3D) paradigm, was developed to measure the recognition of facial emotional expressions at different intensities. The stimuli consisted of 3D colour photographic images of six basic facial emotional expressions (happiness, sadness, disgust, fear, anger and surprise) and neutral Chinese faces. The purpose of the present study is to describe the development and validation of the CFERD with nonclinical healthy participants (N=100; 50 men; age ranging between 18 and 50 years), and to generate a normative data set. The results showed that the sensitivity index d' [d'=Z(hit rate)-Z(false alarm rate), where Z(p), p∈[0,1], is the inverse of the cumulative Gaussian distribution]…
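    The sensitivity index d' used above is the standard signal-detection measure: the inverse cumulative Gaussian Z applied to the hit rate, minus Z applied to the false-alarm rate. Python's stdlib can compute it directly; the rates below are invented for illustration, not CFERD norms:

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """d' = Z(hit rate) - Z(false-alarm rate), Z = inverse cumulative Gaussian."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

sensitivity = d_prime(0.84, 0.16)   # symmetric rates give d' of about 1.99
```

A d' of 0 means hits are no more frequent than false alarms (chance performance); larger values mean the expression is discriminated more reliably.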

  3. Body Image and Quality of Life in Adolescents With Craniofacial Conditions

    PubMed Central

    Crerand, Canice E.; Sarwer, David B.; Kazak, Anne E.; Clarke, Alexandra; Rumsey, Nichola

    2017-01-01

    Objective To evaluate body image in adolescents with and without craniofacial conditions; and to examine relationships between body image and quality of life. Design Case-control design. Setting A pediatric hospital’s craniofacial center and primary care practices. Participants 70 adolescents with visible craniofacial conditions and a demographically-matched sample of 42 adolescents without craniofacial conditions. Main Outcome Measure Adolescents completed measures of quality of life and body image including satisfaction with weight, facial and overall appearance; investment in appearance (importance of appearance to self-worth); and body image disturbance (appearance-related distress and impairment in functioning). Results Adolescents with craniofacial conditions reported lower appearance investment (p < 0.001) and were more likely to report concerns about facial features (p < 0.02) compared to non-affected youth. Females in both groups reported greater investment in appearance, greater body image disturbance, and lower weight satisfaction compared to males (p < 0.01). Within both groups, greater body image disturbance was associated with lower quality of life (p < 0.01). The two groups did not differ significantly on measures of quality of life, body image disturbance, or satisfaction with appearance. Conclusions Body image and quality of life in adolescents with craniofacial conditions are similar to non-affected youth. Relationships between body image and quality of life emphasize that appearance perceptions are important to adolescents’ well-being regardless of whether they have a facial disfigurement. Investment in one’s appearance may explain variations in body image satisfaction and serve as an intervention target, particularly for females. PMID:26751907

  4. Misleading first impressions: different for different facial images of the same person.

    PubMed

    Todorov, Alexander; Porter, Jenny M

    2014-07-01

    Studies on first impressions from facial appearance have rapidly proliferated in the past decade. Almost all of these studies have relied on a single face image per target individual, and differences in impressions have been interpreted as originating in stable physiognomic differences between individuals. Here we show that images of the same individual can lead to different impressions, with within-individual image variance comparable to or exceeding between-individuals variance for a variety of social judgments (Experiment 1). We further show that preferences for images shift as a function of the context (e.g., selecting an image for online dating vs. a political campaign; Experiment 2), that preferences are predictably biased by the selection of the images (e.g., an image fitting a political campaign vs. a randomly selected image; Experiment 3), and that these biases are evident after extremely brief (40-ms) presentation of the images (Experiment 4). We discuss the implications of these findings for studies on the accuracy of first impressions. © The Author(s) 2014.

  5. Measuring Facial Movement

    ERIC Educational Resources Information Center

    Ekman, Paul; Friesen, Wallace V.

    1976-01-01

    The Facial Action Code (FAC) was derived from an analysis of the anatomical basis of facial movement. The development of the method is explained, contrasting it to other methods of measuring facial behavior. An example of how facial behavior is measured is provided, and ideas about research applications are discussed. (Author)

  6. An automatic markerless registration method for neurosurgical robotics based on an optical camera.

    PubMed

    Meng, Fanle; Zhai, Fangwen; Zeng, Bowei; Ding, Hui; Wang, Guangzhi

    2018-02-01

    Current markerless registration methods for neurosurgical robotics use the facial surface to match the robot space with the image space, and acquisition of the facial surface usually requires manual interaction and constrains the patient to a supine position. To overcome these drawbacks, we propose a registration method that is automatic and does not constrain patient position. An optical camera attached to the robot end effector captures images around the patient's head from multiple views. Then, high coverage of the head surface is reconstructed from the images through multi-view stereo vision. Since the acquired head surface point cloud contains color information, a specific mark that is manually drawn on the patient's head prior to the capture procedure can be extracted to accomplish coarse registration automatically, rather than using facial anatomic landmarks. Then, fine registration is achieved by registering the high-coverage head surface without relying solely on the facial region, thus eliminating patient position constraints. The head surface was acquired by the camera with good repeatability. The average target registration error of 8 different patient positions measured with targets inside a head phantom was [Formula: see text], while the mean surface registration error was [Formula: see text]. The method proposed in this paper achieves automatic markerless registration in multiple patient positions and guarantees registration accuracy inside the head. This method provides a new approach for establishing the spatial relationship between the image space and the robot space.
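    The coarse and fine registration steps described here ultimately estimate a rigid transform between point sets. As a simplified sketch (2D instead of 3D surfaces, and with known point correspondences, unlike the paper's surface matching), the least-squares rotation and translation have a closed form; all point values below are invented:

```python
from math import atan2, cos, sin, hypot

def rigid_align(src, dst):
    """Least-squares rotation angle and translation mapping src points onto dst."""
    n = len(src)
    csx = sum(p[0] for p in src) / n
    csy = sum(p[1] for p in src) / n
    cdx = sum(q[0] for q in dst) / n
    cdy = sum(q[1] for q in dst) / n
    # Sums of dot and cross products of centred correspondences
    dot = sum((x - csx) * (u - cdx) + (y - csy) * (v - cdy)
              for (x, y), (u, v) in zip(src, dst))
    cross = sum((x - csx) * (v - cdy) - (y - csy) * (u - cdx)
                for (x, y), (u, v) in zip(src, dst))
    theta = atan2(cross, dot)
    tx = cdx - (csx * cos(theta) - csy * sin(theta))
    ty = cdy - (csx * sin(theta) + csy * cos(theta))
    return theta, tx, ty

def transform(p, theta, tx, ty):
    """Apply rotation theta then translation (tx, ty) to point p."""
    x, y = p
    return (x * cos(theta) - y * sin(theta) + tx,
            x * sin(theta) + y * cos(theta) + ty)

src = [(0.0, 0.0), (1.0, 0.0), (0.0, 2.0)]
true_params = (0.5, 3.0, -1.0)                    # rotation (rad), translation
dst = [transform(p, *true_params) for p in src]
est = rigid_align(src, dst)                       # recovers true_params
residual = max(hypot(a[0] - b[0], a[1] - b[1])
               for a, b in zip([transform(p, *est) for p in src], dst))
```

In 3D, and without known correspondences, the same least-squares alignment is typically solved with an SVD inside an iterative closest point loop; the 2D closed form above only illustrates the objective being minimized.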

  7. Social perception of morbidity in facial nerve paralysis.

    PubMed

    Li, Matthew Ka Ki; Niles, Navin; Gore, Sinclair; Ebrahimi, Ardalan; McGuinness, John; Clark, Jonathan Robert

    2016-08-01

    There are many patient-based and clinician-based scales measuring the severity of facial nerve paralysis and the impact on quality of life; however, the social perception of facial palsy has received little attention. The purpose of this pilot study was to measure the consequences of facial paralysis on selected domains of social perception and compare the social impact of paralysis of the different components. Four patients with typical facial palsies (global, marginal mandibular, zygomatic/buccal, and frontal) and 1 control were photographed. These images were each shown to 100 participants who subsequently rated variables of normality, perceived distress, trustworthiness, intelligence, interaction, symmetry, and disability. Statistical analysis was performed to compare the results among each palsy. Paralyzed faces were considered less normal compared to the control on a scale of 0 to 10 (mean, 8.6; 95% confidence interval [CI] = 8.30-8.86) with global paralysis (mean, 3.4; 95% CI = 3.08-3.80) rated as the most disfiguring, followed by the zygomatic/buccal (mean, 6.0; 95% CI = 5.68-6.37), marginal (mean, 6.5; 95% CI = 6.08-6.86), and then temporal palsies (mean, 6.9; 95% CI = 6.57-7.21). Similar trends were seen when analyzing these palsies for perceived distress, intelligence, and trustworthiness, using a random effects regression model. Our sample suggests that society views paralyzed faces as less normal, less trustworthy, and more distressed. Different components of facial paralysis are worse than others, and surgical correction may need to be prioritized in an evidence-based manner with social morbidity in mind. © 2016 Wiley Periodicals, Inc. Head Neck 38:1158-1163, 2016.

  8. Ethnic Rhinoplasty in Female Patients: The Neoclassical Canons Revisited.

    PubMed

    Saad, Ahmad; Hewett, Sierra; Nolte, Megan; Delaunay, Flore; Saad, Mariam; Cohen, Steven R

    2018-04-01

    Despite the substantial amount of research devoted to objectively defining facial attractiveness, the neoclassical canons have remained a paradigm of aesthetic facial analysis; yet their application in clinical assessment has revealed their limitations outside a subset of North American Caucasians, leading to criticism of their validity as a standard of facial beauty. In an effort to introduce more objective treatment planning into ethnic rhinoplasty, we compared neoclassical canons and other current standards pertaining to nasal proportions to anatomic proportions of attractive individuals from seven different ethnic backgrounds. Beauty pageant winners (Miss Universe and Miss World nominees) between 2005 and 2015 were selected and assigned to one of seven regionally defined ethnic groups. Anteroposterior and lateral images were obtained through Google, Wikipedia, Miss Universe, and Miss World Web sites. Anthropometry of facial features was performed via Adobe Photoshop. Individual facial measurements were then standardized to proportions and compared to the neoclassical canons. Our data reflected an ethnicity-dependent preference for the multiple fitness model. Wide-set eyes, larger mouth widths, and smaller noses were significantly relevant in Eastern Mediterranean and European ethnic groups. Exceptions lay within the East African and Asian groups. As with the attractive face, the concept of ideal nasal anatomy varies between ethnicities. Using objective criteria and proportions of beauty to plan and execute rhinoplasty in different ethnicities can help the surgeon deliver results that are in harmony with each patient's individual background and facial anatomy. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266.

  9. Atypical face shape and genomic structural variants in epilepsy

    PubMed Central

    Chinthapalli, Krishna; Bartolini, Emanuele; Novy, Jan; Suttie, Michael; Marini, Carla; Falchi, Melania; Fox, Zoe; Clayton, Lisa M. S.; Sander, Josemir W.; Guerrini, Renzo; Depondt, Chantal; Hennekam, Raoul; Hammond, Peter

    2012-01-01

    Many pathogenic structural variants of the human genome are known to cause facial dysmorphism. During the past decade, pathogenic structural variants have also been found to be an important class of genetic risk factor for epilepsy. In other fields, face shape has been assessed objectively using 3D stereophotogrammetry and dense surface models. We hypothesized that computer-based analysis of 3D face images would detect subtle facial abnormality in people with epilepsy who carry pathogenic structural variants as determined by chromosome microarray. In 118 children and adults attending three European epilepsy clinics, we used an objective measure called Face Shape Difference to show that those with pathogenic structural variants have a significantly more atypical face shape than those without such variants. This is true when analysing the whole face, or the periorbital region or the perinasal region alone. We then tested the predictive accuracy of our measure in a second group of 63 patients. Using a minimum threshold to detect face shape abnormalities with pathogenic structural variants, we found high sensitivity (4/5, 80% for whole face; 3/5, 60% for periorbital and perinasal regions) and specificity (45/58, 78% for whole face and perinasal regions; 40/58, 69% for periorbital region). We show that the results do not seem to be affected by facial injury, facial expression, intellectual disability, drug history or demographic differences. Finally, we use bioinformatics tools to explore relationships between facial shape and gene expression within the developing forebrain. Stereophotogrammetry and dense surface models are powerful, objective, non-contact methods of detecting relevant face shape abnormalities. We demonstrate that they are useful in identifying atypical face shape in adults or children with structural variants, and they may give insights into the molecular genetics of facial development. PMID:22975390
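    The sensitivity and specificity figures reported above reduce to simple ratios over a confusion table. Restating the whole-face counts from the abstract (5 carriers of pathogenic structural variants, 58 non-carriers) as code:

```python
# Sensitivity = true positives / all positives; specificity = true negatives / all negatives.
def sens_spec(tp, fn, tn, fp):
    return tp / (tp + fn), tn / (tn + fp)

# Whole-face counts from the abstract: 4/5 carriers flagged, 45/58 non-carriers cleared
sensitivity, specificity = sens_spec(tp=4, fn=1, tn=45, fp=13)
# sensitivity = 0.80 (80%), specificity = 45/58, about 78%
```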

  10. Hyperspectral imaging for detection of cholesterol in human skin

    NASA Astrophysics Data System (ADS)

    Milanič, Matija; Bjorgan, Asgeir; Larsson, Marcus; Marraccini, Paolo; Strömberg, Tomas; Randeberg, Lise L.

    2015-03-01

    Hypercholesterolemia is characterized by high levels of cholesterol in the blood and is associated with an increased risk of atherosclerosis and coronary heart disease. Early detection of hypercholesterolemia is necessary to prevent the onset and progression of cardiovascular disease. Optical imaging techniques might have potential for early diagnosis and monitoring of hypercholesterolemia. In this study, hyperspectral imaging was investigated for this application. The main aim of the study was to identify spectral and spatial characteristics that can aid identification of hypercholesterolemia in facial skin. The first part of the study involved a numerical simulation of human skin affected by hypercholesterolemia. A literature survey was performed to identify characteristic morphological and physiological parameters. Realistic models were prepared and Monte Carlo simulations were performed to obtain hyperspectral images. Based on the simulations, optimal wavelength regions for differentiation between normal and cholesterol-rich skin were identified. Minimum Noise Fraction (MNF) transformation was used for analysis. In the second part of the study, the simulations were verified by a clinical study involving volunteers with elevated and normal levels of cholesterol. The faces of the volunteers were scanned by a hyperspectral camera covering the spectral range between 400 nm and 720 nm, and characteristic spectral features of the affected skin were identified. Processing of the images was done after conversion to reflectance and masking of the images. The identified features were compared to the known cholesterol levels of the subjects. The results of this study demonstrate that hyperspectral imaging of facial skin can be a promising, rapid modality for detection of hypercholesterolemia.

  11. Sectional anatomy aid for improvement of decompression surgery approach to vertical segment of facial nerve.

    PubMed

    Feng, Yan; Zhang, Yi Qun; Liu, Min; Jin, Limin; Huangfu, Mingmei; Liu, Zhenyu; Hua, Peiyan; Liu, Yulong; Hou, Ruida; Sun, Yu; Li, You Qiong; Wang, Yu Fa; Feng, Jia Chun

    2012-05-01

    The aim of this study was to find a surgical approach to the vertical segment of the facial nerve (VFN) that offers a relatively wide visual field with a small lesion by studying the location and structure of the VFN with cross-sectional anatomy. High-resolution spiral computed tomographic multiplane reformation was used to reform images parallel to the Frankfort horizontal plane. To locate the VFN, we measured the distances from the VFN to the paries posterior bony external acoustic meatus on 5 typical multiplane reformation images, and to the promontorium tympani and the root of the tympanic ring on 2 typical images. The mean distances from the VFN to the paries posterior bony external acoustic meatus are as follows: 4.47 mm on images showing the top of the external acoustic meatus, 4.20 mm on images with the best view of the window niche, 3.35 mm on images that show the widest external acoustic meatus, 4.22 mm on images with the inferior margin of the sulcus tympanicus, and 5.49 mm on images that show the bottom of the external acoustic meatus. The VFN is approximately 4.20 mm lateral to the promontorium tympani on images with the best view of the window niche and 4.12 mm lateral to the root of the tympanic ring on images with the inferior margin of the sulcus tympanicus. The other results indicate that the area and depth of the surgical wound from the improved approach would be much smaller than those from the typical approach. The surgical approach to the horizontal segment of the facial nerve through the external acoustic meatus and the tympanic cavity could be improved by grinding off the external acoustic meatus to expose the VFN. The VFN can be found by taking the promontorium tympani and the tympanic ring as references. This improvement has high potential to expand the visual field of the facial nerve without significant additional injury to the patient compared with the typical approach through the mastoid process.

  12. Improved facial nerve identification during parotidectomy with fluorescently labeled peptide.

    PubMed

    Hussain, Timon; Nguyen, Linda T; Whitney, Michael; Hasselmann, Jonathan; Nguyen, Quyen T

    2016-12-01

    Additional intraoperative guidance could reduce the risk of iatrogenic injury during parotid gland cancer surgery. We evaluated the intraoperative use of the fluorescently labeled nerve-binding peptide NP41 to aid facial nerve identification and preservation during parotidectomy in an orthotopic model of murine parotid gland cancer. We also quantified the accuracy of intraoperative nerve detection for surface and buried nerves in the head and neck with NP41 versus white light (WL) alone. Twenty-eight mice underwent parotid gland cancer surgeries with additional fluorescence (FL) guidance versus WL reflectance (WLR) alone. Eight mice were used for additional nerve-imaging experiments. Twenty-eight parotid tumor-bearing mice underwent parotidectomy. Eight mice underwent imaging of both sides of the face after skin removal. Postoperative assessment of facial nerve function measured by automated whisker tracking was compared between FL guidance (n = 13) versus WL alone (n = 15). In eight mice, nerve to surrounding tissue contrast was measured under FL versus WLR for all nerve branches detectable in the field of view. Postoperative facial nerve function after parotid gland cancer surgery tended to be better with additional FL guidance. Fluorescent labeling significantly improved nerve to surrounding tissue contrast for both large and smaller buried nerve branches compared to WLR visualization and improved detection sensitivity and specificity. NP41 FL imaging significantly aids the intraoperative identification of nerve branches otherwise nearly invisible to the naked eye. Its application in a murine model of parotid gland cancer surgery tended to improve functional preservation of the facial nerve. Laryngoscope, 126:2711-2717, 2016. © 2016 The American Laryngological, Rhinological and Otological Society, Inc.

  13. Improved Facial Nerve Identification During Parotidectomy With Fluorescently Labeled Peptide

    PubMed Central

    Hussain, Timon; Nguyen, Linda T.; Whitney, Michael; Hasselmann, Jonathan; Nguyen, Quyen T.

    2016-01-01

    Objectives/Hypothesis Additional intraoperative guidance could reduce the risk of iatrogenic injury during parotid gland cancer surgery. We evaluated the intraoperative use of the fluorescently labeled nerve-binding peptide NP41 to aid facial nerve identification and preservation during parotidectomy in an orthotopic model of murine parotid gland cancer. We also quantified the accuracy of intraoperative nerve detection for surface and buried nerves in the head and neck with NP41 versus white light (WL) alone. Study Design Twenty-eight mice underwent parotid gland cancer surgeries with additional fluorescence (FL) guidance versus WL reflectance (WLR) alone. Eight mice were used for additional nerve-imaging experiments. Methods Twenty-eight parotid tumor-bearing mice underwent parotidectomy. Eight mice underwent imaging of both sides of the face after skin removal. Postoperative assessment of facial nerve function measured by automated whisker tracking was compared between FL guidance (n = 13) versus WL alone (n = 15). In eight mice, nerve to surrounding tissue contrast was measured under FL versus WLR for all nerve branches detectable in the field of view. Results Postoperative facial nerve function after parotid gland cancer surgery tended to be better with additional FL guidance. Fluorescent labeling significantly improved nerve to surrounding tissue contrast for both large and smaller buried nerve branches compared to WLR visualization and improved detection sensitivity and specificity. Conclusions NP41 FL imaging significantly aids the intraoperative identification of nerve branches otherwise nearly invisible to the naked eye. Its application in a murine model of parotid gland cancer surgery tended to improve functional preservation of the facial nerve. PMID:27171862

  14. The perception of children's computer-imaged facial profiles by patients, mothers and clinicians.

    PubMed

    Miner, Robert M; Anderson, Nina K; Evans, Carla A; Giddon, Donald B

    2007-11-01

    To demonstrate the usefulness of a new imaging system for comparing the morphometric bases of children's self-perception of their facial profile with the perceptions of their mothers and treating clinicians. Rather than choosing among a series of static images, a computer imaging program was developed to elicit a range of acceptable responses, or tolerance for change, from which a midpoint of acceptability was derived. Using the method of Giddon et al, three profile features (upper and lower lips and mandible) from standardized images of 24 patients aged 8 to 15 years were distorted and presented to patients, parents, and clinicians in random order as slowly moving images (four frames per second) ranging between retrusive and protrusive extremes. Subjects clicked the mouse when the image became acceptable and released it when it was no longer acceptable. Subjects responded similarly to a neutral facial profile. Patients and their mothers overestimated the protrusiveness of the mandible of the actual pretreatment profile. Consistent with related studies, mothers had a smaller tolerance for change in the soft tissue profile than the children or clinicians. The magnitudes of the children's self-preference and preferred change in a neutral face were also significantly correlated. Both patients and mothers preferred a profile more protrusive than that of the patient's actual pretreatment profile or the neutral face. Imaging software can be used with children to compare their preferences with those of parents and clinicians to facilitate treatment planning and patient satisfaction.

  15. The effect of image quality and forensic expertise in facial image comparisons.

    PubMed

    Norell, Kristin; Läthén, Klas Brorsson; Bergström, Peter; Rice, Allyson; Natu, Vaidehi; O'Toole, Alice

    2015-03-01

    Images of perpetrators in surveillance video footage are often used as evidence in court. In this study, identification accuracy in facial image comparisons was compared between forensic experts and untrained persons, along with the impact of image quality. Participants viewed thirty image pairs and were asked to rate the level of support their observations provided for concluding whether or not the two images showed the same person. Forensic experts reached their conclusions with significantly fewer errors than did untrained participants. They were also better than novices at determining when two high-quality images depicted the same person. Notably, lower image quality led to more cautious conclusions by experts, but not by untrained participants. In summary, the untrained participants made more false negatives and false positives than experts; the false positives in particular suggest a higher risk of an innocent person being convicted on the evidence of an untrained witness. © 2014 American Academy of Forensic Sciences.

  16. A pediatric case with peripheral facial nerve palsy caused by a granulomatous lesion associated with cat scratch disease.

    PubMed

    Nakamura, Chizuko; Inaba, Yuji; Tsukahara, Keiko; Mochizuki, Mie; Sawanobori, Emi; Nakazawa, Yozo; Aoyama, Kouki

    2018-02-01

    Cat scratch disease is a common infectious disorder caused by Bartonella henselae that is transmitted primarily by kittens. It typically exhibits a benign and self-limiting course of subacute regional lymphadenopathy and fever lasting two to eight weeks. The most severe complication of cat scratch disease is involvement of the nervous system, such as encephalitis, meningitis, and polyneuritis. Peripheral facial nerve palsy associated with Bartonella infection is rare; few pediatric and adult cases have been reported, and the precise pathogenesis is unknown. A previously healthy 7-year-old boy presented with fever, cervical lymphadenopathy, and peripheral facial nerve palsy associated with serologically confirmed cat scratch disease. The stapedius muscle reflex was absent on the left side, and brain magnetic resonance imaging revealed a mass lesion at the left internal auditory meatus. The patient's symptoms and imaging findings gradually resolved after antibiotic and corticosteroid treatment. The suspected granulomatous lesion was considered to have resulted from the host's immune reaction to Bartonella infection and to have impaired the facial nerve. This is the first case report providing direct evidence of peripheral facial nerve palsy caused by a suspected granulomatous lesion associated with cat scratch disease, together with its treatment course. Copyright © 2017. Published by Elsevier B.V.

  17. Quantification of age-related facial wrinkles in men and women using a three-dimensional fringe projection method and validated assessment scales.

    PubMed

    Luebberding, Stefanie; Krueger, Nils; Kerscher, Martina

    2014-01-01

    Whereas the molecular mechanisms of skin aging are well understood, little information is available concerning the clinical onset and lifetime development of facial wrinkles. To perform the first systematic evaluation of the lifetime development of facial wrinkles and sex-specific differences using three-dimensional (3D) imaging and clinical rating, 200 men and women aged 20 to 70 were selected. Wrinkle severity of periorbital, glabellar, and forehead lines was evaluated using 3D imaging and validated assessment scales. Wrinkle severity increased with age at all assessed locations. In men, wrinkles manifested earlier and were more severe than in women. In women, periorbital lines were the first visible wrinkles, in contrast to the forehead lines in men. In both sexes, glabellar lines did not clinically manifest before the age of 40. The results of the present study confirm a progressive increase of crow's feet and forehead and glabellar lines in men and women. Although the development of facial wrinkles happens earlier and is more severe in men, perimenopause seems to particularly affect development in women. Clinical ratings and 3D measurements are suitable methods to assess facial wrinkle severity in men and women. © 2013 by the American Society for Dermatologic Surgery, Inc. Published by Wiley Periodicals, Inc.

  18. Evaluation of psychological stress in confined environments using salivary, skin, and facial image parameters.

    PubMed

    Egawa, Mariko; Haze, Shinichiro; Gozu, Yoko; Hosoi, Junichi; Onodera, Tomoko; Tojo, Yosuke; Katsuyama, Masako; Hara, Yusuke; Katagiri, Chika; Inoue, Natsuhiko; Furukawa, Satoshi; Suzuki, Go

    2018-05-29

    Detecting the influence of psychological stress is particularly important in prolonged space missions. In this study, we determined potential markers of psychological stress in a confined environment. We examined 23 Japanese subjects staying for 2 weeks in a confined facility at Tsukuba Space Center, measuring salivary, skin, and facial image parameters. Saliva was collected at four points in a single day to detect diurnal variation. Increases in salivary cortisol were detected after waking up on the 4th and 11th days, and at 15:30 on the 1st day and in the second half of the stay. Transepidermal water loss (TEWL) and sebum content of the skin were higher than outside the facility on the 4th and 1st days, respectively. Increased IL-1β in the stripped stratum corneum was observed on the 14th day and 7 days after leaving. Differences in facial expression symmetry at the time of facial expression changes were observed on the 11th and 14th days. Thus, we detected a transition of psychological stress using salivary cortisol profiles and skin physiological parameters. The results also suggested that IL-1β in the stripped stratum corneum and facial expression symmetry are possible novel markers for conveniently detecting psychological stress.

  19. The Boomerang Lift: A Three-Step Compartment-Based Approach to the Youthful Cheek.

    PubMed

    Schreiber, Jillian E; Terner, Jordan; Stern, Carrie S; Beut, Javier; Jelks, Elizabeth B; Jelks, Glenn W; Tepper, Oren M

    2018-04-01

    Autologous fat grafting is an important tool for plastic surgeons treating the aging face. Malar augmentation with fat is often targeted to restore the youthful facial contour and provides support to the lower eyelid. The existence of distinct facial fat compartments suggests that a stepwise approach may be appropriate in this regard. The authors describe a three-step approach to malar augmentation using targeted deep malar fat compartmental augmentation, termed the "boomerang lift." Clinical patients undergoing autologous fat grafting for malar augmentation were injected in three distinct deep malar fat compartments: the lateral sub-orbicularis oculi fat, the medial sub-orbicularis oculi fat, and the deep medial cheek (n = 9). Intraoperative three-dimensional images were taken at baseline and following compartmental injections (Canfield VECTRA H1). Images were overlaid between the augmented and baseline captures, and the three-dimensional surface changes were analyzed, which represented the resulting "augmentation zone." Three-dimensional analysis demonstrated a unique pattern for the augmentation zone consistent across patients. The augmentation zone resembled a boomerang, with the short tail supporting the medial lower lid and the long tail extending laterally along the zygomatic arch. The upper border was restricted by the level of the nasojugal interface, and the lower border was defined medially by the nasolabial fold and laterally by the level of the zygomaticocutaneous ligament. Lateral and medial sub-orbicularis oculi fat injections defined the boundaries of the boomerang shape, and injection to the deep medial cheek provided maximum projection. This is the first description of deep malar augmentation zones in clinical patients. Three-dimensional surface imaging was ideal for analyzing the surface change in response to targeted facial fat grafting. The authors' technique resulted in a reproducible surface shape, which they term the boomerang lift.

  20. An introductory analysis of digital infrared thermal imaging guided oral cancer detection using multiresolution rotation invariant texture features

    NASA Astrophysics Data System (ADS)

    Chakraborty, M.; Das Gupta, R.; Mukhopadhyay, S.; Anjum, N.; Patsa, S.; Ray, J. G.

    2017-03-01

    This manuscript presents an analytical treatment of the feasibility of multiscale Gabor filter bank responses for non-invasive oral cancer pre-screening and detection in the long infrared spectrum. The inability of present healthcare technology to detect oral cancer at an early stage manifests in a high mortality rate. The paper contributes a step towards automation in non-invasive computer-aided oral cancer detection using an amalgamation of image processing and machine intelligence paradigms. Previous works have shown a discriminative difference in facial temperature distribution between normal subjects and patients. The proposed work, for the first time, exploits this difference further by representing the facial region of interest (ROI) using multiscale rotation-invariant Gabor filter bank responses, followed by classification using a radial basis function (RBF) kernelized support vector machine (SVM). The proposed study reveals an initial increase in classification accuracy with incrementing image scales followed by degradation of performance; an indication that the addition of ever finer scales tends to embed noisy information instead of discriminative texture patterns. Moreover, the performance is consistently better for filter responses from profile faces compared to frontal faces. This is primarily attributed to the ineptness of Gabor kernels to analyze low spatial frequency components over a small facial surface area. On our dataset comprising 81 malignant, 59 pre-cancerous, and 63 normal subjects, we achieve state-of-the-art accuracy of 85.16% for normal vs. precancerous and 84.72% for normal vs. malignant classification. This sets a benchmark for further investigation of multiscale feature extraction paradigms in the IR spectrum for oral cancer detection.
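    The feature-extraction stage described above — multiscale, rotation-invariant Gabor filter bank responses followed by an RBF-kernel SVM — can be sketched in a few lines. This is a hedged illustration, not the authors' implementation; the kernel size, frequencies, and mean/std pooling are my assumptions.

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(freq, theta, sigma=3.0, size=15):
    """Real (cosine-phase) Gabor kernel at spatial frequency `freq`
    (cycles/pixel) and orientation `theta` (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_rot = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return envelope * np.cos(2.0 * np.pi * freq * x_rot)

def rotation_invariant_gabor_features(img, freqs=(0.1, 0.2, 0.3), n_orient=8):
    """Pool filter-response magnitudes over orientations at each scale,
    which makes the per-scale features approximately rotation invariant."""
    feats = []
    for f in freqs:
        mags = [np.abs(fftconvolve(img, gabor_kernel(f, k * np.pi / n_orient),
                                   mode="same"))
                for k in range(n_orient)]
        pooled = np.mean(mags, axis=0)          # orientation-averaged energy map
        feats += [pooled.mean(), pooled.std()]  # two summary features per scale
    return np.array(feats)

# The per-ROI feature vectors would then go to an RBF-kernel SVM,
# e.g. sklearn.svm.SVC(kernel="rbf"), for the two-class decisions.
rng = np.random.default_rng(0)
roi = rng.standard_normal((64, 64))  # stand-in for a thermal facial ROI
print(rotation_invariant_gabor_features(roi).shape)  # (6,)
```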

  1. Three-Dimensional Topographic Surface Changes in Response to Compartmental Volumization of the Medial Cheek: Defining a Malar Augmentation Zone.

    PubMed

    Stern, Carrie S; Schreiber, Jillian E; Surek, Chris C; Garfein, Evan S; Jelks, Elizabeth B; Jelks, Glenn W; Tepper, Oren M

    2016-05-01

    Given the widespread use of facial fillers and recent identification of distinct facial fat compartments, a better understanding of three-dimensional surface changes in response to volume augmentation is needed. Advances in three-dimensional imaging technology now afford an opportunity to elucidate these morphologic changes for the first time. A cadaver study was undertaken in which volumization of the deep medial cheek compartment was performed at intervals up to 4 cc (n = 4). Three-dimensional photographs were taken after each injection to analyze the topographic surface changes, which the authors define as the "augmentation zone." Perimeter, diameter, and projection were studied. The arcus marginalis of the inferior orbit consistently represented a fixed boundary of the augmentation zone, and additional cadavers underwent similar volumization following surgical release of this portion of the arcus marginalis (n = 4). Repeated three-dimensional computer analysis was performed comparing the augmentation zone with and without arcus marginalis release. Volumization of the deep medial cheek led to unique topographic changes of the malar region defined by distinct boundaries. Interestingly, the cephalic border of the augmentation zone was consistently noted to be at the level of the arcus marginalis in all specimens. When surgical release of the arcus marginalis was performed, the cephalic border of the augmentation zone was no longer restricted. Using advances in three-dimensional photography and computer analysis, the authors demonstrate characteristic surface anatomy changes in response to volume augmentation of facial compartments. This novel concept of the augmentation zone can be applied to volumization of other distinct facial regions. Therapeutic, V.

  2. Facial recognition software success rates for the identification of 3D surface reconstructed facial images: implications for patient privacy and security.

    PubMed

    Mazura, Jan C; Juluru, Krishna; Chen, Joseph J; Morgan, Tara A; John, Majnu; Siegel, Eliot L

    2012-06-01

    Image de-identification has focused on the removal of textual protected health information (PHI). Surface reconstructions of the face have the potential to reveal a subject's identity even when textual PHI is absent. This study assessed the ability of a computer application to match research subjects' 3D facial reconstructions with conventional photographs of their face. In a prospective study, 29 subjects underwent CT scans of the head and had frontal digital photographs of their face taken. Facial reconstructions of each CT dataset were generated on a 3D workstation. In phase 1, photographs of the 29 subjects undergoing CT scans were added to a digital directory and tested for recognition using facial recognition software. In phases 2-4, additional photographs were added in groups of 50 to increase the pool of possible matches and the test for recognition was repeated. As an internal control, photographs of all subjects were tested for recognition against an identical photograph. Of 3D reconstructions, 27.5% were matched correctly to corresponding photographs (95% upper CL, 40.1%). All study subject photographs were matched correctly to identical photographs (95% lower CL, 88.6%). Of 3D reconstructions, 96.6% were recognized simply as a face by the software (95% lower CL, 83.5%). Facial recognition software has the potential to recognize features on 3D CT surface reconstructions and match these with photographs, with implications for PHI.

  3. More emotional facial expressions during episodic than during semantic autobiographical retrieval.

    PubMed

    El Haj, Mohamad; Antoine, Pascal; Nandrino, Jean Louis

    2016-04-01

    There is a substantial body of research on the relationship between emotion and autobiographical memory. Using facial analysis software, our study addressed this relationship by investigating basic emotional facial expressions that may be detected during autobiographical recall. Participants were asked to retrieve 3 autobiographical memories, each of which was triggered by one of the following cue words: happy, sad, and city. The autobiographical recall was analyzed with facial analysis software that detects and classifies basic emotional expressions. Analyses showed that emotional cues triggered the corresponding basic facial expressions (i.e., happy facial expressions for memories cued by happy). Furthermore, we dissociated episodic and semantic retrieval, observing more emotional facial expressions during episodic than during semantic retrieval, regardless of the emotional valence of the cues. Our study provides insight into facial expressions that are associated with emotional autobiographical memory. It also highlights an ecological tool to reveal physiological changes that are associated with emotion and memory.

  4. Multiple mechanisms in the perception of face gender: Effect of sex-irrelevant features.

    PubMed

    Komori, Masashi; Kawamura, Satoru; Ishihara, Shigekazu

    2011-06-01

    Effects of sex-relevant and sex-irrelevant facial features on the evaluation of facial gender were investigated. Participants rated masculinity of 48 male facial photographs and femininity of 48 female facial photographs. Eighty feature points were measured on each of the facial photographs. Using a generalized Procrustes analysis, facial shapes were converted into multidimensional vectors, with the average face as a starting point. Each vector was decomposed into a sex-relevant subvector and a sex-irrelevant subvector which were, respectively, parallel and orthogonal to the main male-female axis. Principal components analysis (PCA) was performed on the sex-irrelevant subvectors. One principal component was negatively correlated with both perceived masculinity and femininity, and another was correlated only with femininity, though both components were orthogonal to the male-female dimension (and thus by definition sex-irrelevant). These results indicate that evaluation of facial gender depends on sex-irrelevant as well as sex-relevant facial features.
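    The sex-relevant/sex-irrelevant split described above is a projection of each shape vector onto the male-female axis plus its orthogonal residual, with PCA then applied to the residuals. A minimal numpy sketch (variable names and dimensions are illustrative, not the authors'):

```python
import numpy as np

def decompose_shape(v, male_female_axis):
    """Split a shape vector (deviation from the average face) into a
    sex-relevant part parallel to the male-female axis and a
    sex-irrelevant part orthogonal to it."""
    d = male_female_axis / np.linalg.norm(male_female_axis)
    sex_relevant = (v @ d) * d
    sex_irrelevant = v - sex_relevant  # zero projection on the axis
    return sex_relevant, sex_irrelevant

def pca_components(subvectors, n_components=2):
    """Principal components (rows) of a sample of sex-irrelevant subvectors."""
    centered = subvectors - subvectors.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:n_components]
```

    By construction the two parts sum back to the original vector, so no shape information is lost in the decomposition.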

  5. Fourier power spectrum characteristics of face photographs: attractiveness perception depends on low-level image properties.

    PubMed

    Menzel, Claudia; Hayn-Leichsenring, Gregor U; Langner, Oliver; Wiese, Holger; Redies, Christoph

    2015-01-01

    We investigated whether low-level processed image properties that are shared by natural scenes and artworks - but not veridical face photographs - affect the perception of facial attractiveness and age. Specifically, we considered the slope of the radially averaged Fourier power spectrum in a log-log plot. This slope is a measure of the distribution of spatial frequency power in an image. Images of natural scenes and artworks possess - compared to face images - a relatively shallow slope (i.e., increased high spatial frequency power). Since aesthetic perception might be based on the efficient processing of images with natural scene statistics, we assumed that the perception of facial attractiveness might also be affected by these properties. We calculated Fourier slope and other beauty-associated measurements in face images and correlated them with ratings of attractiveness and age of the depicted persons (Study 1). We found that Fourier slope - in contrast to the other tested image properties - did not predict attractiveness ratings when we controlled for age. In Study 2A, we overlaid face images with random-phase patterns with different statistics. Patterns with a slope similar to those in natural scenes and artworks resulted in lower attractiveness and higher age ratings. In Studies 2B and 2C, we directly manipulated the Fourier slope of face images and found that images with shallower slopes were rated as more attractive. Additionally, attractiveness of unaltered faces was affected by the Fourier slope of a random-phase background (Study 3). Faces in front of backgrounds with statistics similar to natural scenes and faces were rated as more attractive. We conclude that facial attractiveness ratings are affected by specific image properties. An explanation might be the efficient coding hypothesis.
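    The slope measure used in this record — a straight-line fit to the radially averaged power spectrum in log-log coordinates — is straightforward to compute. The sketch below assumes simple integer-radius binning and a fixed fitting range, which are my choices rather than the authors' exact procedure:

```python
import numpy as np

def fourier_slope(img):
    """Slope of the radially averaged Fourier power spectrum in log-log space.
    Shallower (less negative) slopes mean relatively more high-frequency power."""
    h, w = img.shape
    power = np.abs(np.fft.fftshift(np.fft.fft2(img - img.mean()))) ** 2
    y, x = np.indices((h, w))
    r = np.hypot(y - h // 2, x - w // 2).astype(int)
    sums = np.bincount(r.ravel(), weights=power.ravel())
    counts = np.bincount(r.ravel())
    radial = sums / np.maximum(counts, 1)  # mean power at each integer radius
    radii = np.arange(2, min(h, w) // 2)   # skip DC, stay below Nyquist
    slope, _ = np.polyfit(np.log(radii), np.log(radial[radii]), 1)
    return slope

# White noise has a roughly flat spectrum (slope near 0); blurring removes
# high frequencies and makes the slope steeper (more negative).
rng = np.random.default_rng(0)
print(fourier_slope(rng.standard_normal((128, 128))))
```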

  6. Fourier Power Spectrum Characteristics of Face Photographs: Attractiveness Perception Depends on Low-Level Image Properties

    PubMed Central

    Langner, Oliver; Wiese, Holger; Redies, Christoph

    2015-01-01

    We investigated whether low-level processed image properties that are shared by natural scenes and artworks – but not veridical face photographs – affect the perception of facial attractiveness and age. Specifically, we considered the slope of the radially averaged Fourier power spectrum in a log-log plot. This slope is a measure of the distribution of spatial frequency power in an image. Images of natural scenes and artworks possess – compared to face images – a relatively shallow slope (i.e., increased high spatial frequency power). Since aesthetic perception might be based on the efficient processing of images with natural scene statistics, we assumed that the perception of facial attractiveness might also be affected by these properties. We calculated Fourier slope and other beauty-associated measurements in face images and correlated them with ratings of attractiveness and age of the depicted persons (Study 1). We found that Fourier slope – in contrast to the other tested image properties – did not predict attractiveness ratings when we controlled for age. In Study 2A, we overlaid face images with random-phase patterns with different statistics. Patterns with a slope similar to those in natural scenes and artworks resulted in lower attractiveness and higher age ratings. In Studies 2B and 2C, we directly manipulated the Fourier slope of face images and found that images with shallower slopes were rated as more attractive. Additionally, attractiveness of unaltered faces was affected by the Fourier slope of a random-phase background (Study 3). Faces in front of backgrounds with statistics similar to natural scenes and faces were rated as more attractive. We conclude that facial attractiveness ratings are affected by specific image properties. An explanation might be the efficient coding hypothesis. PMID:25835539

  7. Geometric Evaluation of the Effect of Prosthetic Rehabilitation on the Facial Appearance of Mandibulectomy Patients: A Preliminary Study.

    PubMed

    Aswehlee, Amel M; Elbashti, Mahmoud E; Hattori, Mariko; Sumita, Yuka I; Taniguchi, Hisashi

    The purpose of this study was to geometrically evaluate the effect of prosthetic rehabilitation on the facial appearance of mandibulectomy patients. Facial scans (with and without prostheses) were performed for 16 mandibulectomy patients using a noncontact three-dimensional (3D) digitizer, and 3D images were reconstructed with the corresponding software. The 3D datasets were geometrically evaluated and compared using 3D evaluation software. The mean difference in absolute 3D deviations for full face scans was 382.2 μm. This method may be useful in evaluating the effect of conventional prostheses on the facial appearance of individuals with mandibulectomy defects.
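    The geometric comparison reported here — a mean absolute 3D deviation between scans with and without the prosthesis — can be approximated by a symmetric nearest-neighbour distance between point clouds. A sketch assuming the two scans are already registered in a common coordinate frame (this is not the commercial 3D evaluation software's algorithm):

```python
import numpy as np
from scipy.spatial import cKDTree

def mean_absolute_deviation(points_a, points_b):
    """Symmetric mean nearest-neighbour distance between two aligned
    3D point clouds (e.g. facial scans), in the clouds' own units."""
    d_ab = cKDTree(points_b).query(points_a)[0]  # each a-point to nearest b-point
    d_ba = cKDTree(points_a).query(points_b)[0]  # and vice versa
    return float(np.concatenate([d_ab, d_ba]).mean())
```

    Real scan comparisons would also need rigid registration (e.g. iterative closest point) before measuring deviations.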

  8. Promising Technique for Facial Nerve Reconstruction in Extended Parotidectomy

    PubMed Central

    Villarreal, Ithzel Maria; Rodríguez-Valiente, Antonio; Castelló, Jose Ramon; Górriz, Carmen; Montero, Oscar Alvarez; García-Berrocal, Jose Ramon

    2015-01-01

    Introduction: Malignant tumors of the parotid gland account for scarcely 5% of all head and neck tumors. Most of these neoplasms have a high tendency for recurrence, local infiltration, perineural extension, and metastasis. Although uncommon, these malignant tumors require complex surgical treatment, sometimes involving a total parotidectomy including complete facial nerve resection. Severe functional and aesthetic facial defects result from complete sacrifice of the nerve or injury to isolated branches, causing considerable distress to patients and posing a major challenge for reconstructive surgeons. Case Report: A case of a 54-year-old, systemically healthy male patient with a 4-month complaint of pain and swelling on the right side of the face is presented. The patient reported a rapid increase in the size of the lesion over the past 2 months. Imaging tests and histopathological analysis indicated an adenoid cystic carcinoma. A complete parotidectomy was carried out, with intraoperative recognition of facial nerve infiltration requiring a second intervention for nerve and defect reconstruction. A free ALT flap with vascularized nerve grafts was the surgical choice. A 6-month follow-up showed partial recovery of facial movement and restoration of the facial defect. Conclusion: It is of critical importance to restore function in patients with facial nerve injury. Vascularized nerve grafts, in many clinical and experimental studies, have been shown to result in better nerve regeneration than conventional non-vascularized nerve grafts. Nevertheless, there are factors that may affect the degree, speed, and rate of regeneration of the free fasciocutaneous flap. In complex head and neck defects following a total parotidectomy, the extended free fasciocutaneous anterolateral thigh (ALT) flap with a vascularized nerve graft is ideally suited for reconstruction of the injured site. Donor-site morbidity is low and additional surgical time is minimal compared with that of a single ALT flap transfer. PMID:26788494

  9. Facial Indicators of Positive Emotions in Rats

    PubMed Central

    Finlayson, Kathryn; Lampe, Jessica Frances; Hintze, Sara; Würbel, Hanno; Melotti, Luca

    2016-01-01

    Until recently, research in animal welfare science has mainly focused on negative experiences like pain and suffering, often neglecting the importance of assessing and promoting positive experiences. In rodents, specific facial expressions have been found to occur in situations thought to induce negatively valenced emotional states (e.g., pain, aggression and fear), but none have yet been identified for positive states. Thus, this study aimed to investigate if facial expressions indicative of positive emotional state are exhibited in rats. Adolescent male Lister Hooded rats (Rattus norvegicus, N = 15) were individually subjected to a Positive and a mildly aversive Contrast Treatment over two consecutive days in order to induce contrasting emotional states and to detect differences in facial expression. The Positive Treatment consisted of playful manual tickling administered by the experimenter, while the Contrast Treatment consisted of exposure to a novel test room with intermittent bursts of white noise. The number of positive ultrasonic vocalisations was greater in the Positive Treatment compared to the Contrast Treatment, indicating the experience of differentially valenced states in the two treatments. The main findings were that Ear Colour became significantly pinker and Ear Angle was wider (ears more relaxed) in the Positive Treatment compared to the Contrast Treatment. All other quantitative and qualitative measures of facial expression, which included Eyeball height to width Ratio, Eyebrow height to width Ratio, Eyebrow Angle, visibility of the Nictitating Membrane, and the established Rat Grimace Scale, did not show differences between treatments. This study contributes to the exploration of positive emotional states, and thus good welfare, in rats as it identified the first facial indicators of positive emotions following a positive heterospecific play treatment. Furthermore, it provides improvements to the photography technique and image analysis for the detection of fine differences in facial expression, and also adds to the refinement of the tickling procedure. PMID:27902721

  10. Real-time Avatar Animation from a Single Image.

    PubMed

    Saragih, Jason M; Lucey, Simon; Cohn, Jeffrey F

    2011-01-01

    A real-time facial puppetry system is presented. Compared with existing systems, the proposed method requires no special hardware, runs in real time (23 frames per second), and requires only a single image of the avatar and user. The user's facial expression is captured through a real-time 3D non-rigid tracking system. Expression transfer is achieved by combining a generic expression model with synthetically generated examples that better capture person-specific characteristics. Performance of the system is evaluated on avatars of real people as well as masks and cartoon characters.

  11. Real-time Avatar Animation from a Single Image

    PubMed Central

    Saragih, Jason M.; Lucey, Simon; Cohn, Jeffrey F.

    2014-01-01

    A real-time facial puppetry system is presented. Compared with existing systems, the proposed method requires no special hardware, runs in real time (23 frames per second), and requires only a single image of the avatar and user. The user’s facial expression is captured through a real-time 3D non-rigid tracking system. Expression transfer is achieved by combining a generic expression model with synthetically generated examples that better capture person-specific characteristics. Performance of the system is evaluated on avatars of real people as well as masks and cartoon characters. PMID:24598812

  12. Hair product artifact in magnetic resonance imaging.

    PubMed

    Chenji, Sneha; Wilman, Alan H; Mah, Dennell; Seres, Peter; Genge, Angela; Kalra, Sanjay

    2017-01-01

    The presence of metallic compounds in facial cosmetics and permanent tattoos may affect the quality of magnetic resonance imaging. We report a case study describing a signal artifact due to the use of a leave-on powdered hair dye. On reviewing the ingredients of the product, it was found to contain several metallic compounds. In light of this observation, we suggest that MRI centers include the use of metal- or mineral-based facial cosmetics or hair products in their screening protocols. Copyright © 2016 Elsevier Inc. All rights reserved.

  13. Diagnosis of Bell palsy with gadolinium magnetic resonance imaging.

    PubMed

    Becelli, R; Perugini, M; Carboni, A; Renzi, G

    2003-01-01

    Bell palsy is a condition resulting from peripheral edematous compression of the fibers of the facial nerve. This condition is often clinically unremarkable and spontaneously disappears in a short time in a high percentage of cases. Facial palsy involving cranial nerve VII can also be caused by other conditions such as mastoid fracture, acoustic neurinoma, tumor spread to the temporal lobe (e.g., cholesteatoma), neoplasms of the parotid gland, Melkersson-Rosenthal syndrome, and Ramsay Hunt syndrome. It is therefore important to adopt an accurate diagnostic technique allowing the rapid detection of Bell palsy and the exclusion of causes of facial paralysis requiring surgical treatment. Magnetic resonance imaging (MRI) of the skull with contrast medium markedly increases the ability to reveal lesions, even of small dimensions, inside the temporal bone and at the cerebellopontine angle. The authors present a clinical case showing the important role played by gadolinium MRI in reaching a diagnosis of Bell palsy, in the differential diagnosis of the various conditions that cause paralysis of the facial nerve, and in selecting the most suitable treatment or surgery.

  14. Selective attention toward female secondary sexual color in male rhesus macaques.

    PubMed

    Waitt, Corri; Gerald, Melissa S; Little, Anthony C; Kraiselburd, Edmundo

    2006-07-01

    Pink-to-red anogenital and facial sexual skin occurs in females of many primate species. Since female sexual skin color varies with reproductive state, it has long been assumed that color acts to stimulate male sexual interest. Although there is supportive evidence for this as regards anogenital skin, it is unclear whether this is also the case for facial sexual skin. In this study we experimentally manipulated digital facial and hindquarter images of female rhesus macaques (Macaca mulatta) for color within the natural range of variation. The images were presented to adult male conspecifics to assess whether the males exhibited visual preferences for red vs. non-red female coloration, and whether preferences varied with anatomical region. The males displayed significantly longer gaze durations in response to reddened versions of female hindquarters, but not to reddened versions of faces. This suggests that female facial coloration may serve an alternative purpose to that of attracting males, and that the signal function of sexual skin and the intended recipients may vary across anatomical regions. (c) 2005 Wiley-Liss, Inc.

  15. Individual differences in Scanpaths correspond with serotonin transporter genotype and behavioral phenotype in rhesus monkeys (Macaca mulatta).

    PubMed

    Gibboni, Robert R; Zimmerman, Prisca E; Gothard, Katalin M

    2009-01-01

Scanpaths (the succession of fixations and saccades during spontaneous viewing) contain information about the image but also about the viewer. To determine the viewer-dependent factors in the scanpaths of monkeys, we trained three adult males (Macaca mulatta) to look for 3 s at images of conspecific facial expressions with either direct or averted gaze. The subjects showed significant differences on four basic scanpath parameters (number of fixations, fixation duration, saccade length, and total scanpath length) when viewing the same facial expression/gaze direction combinations. Furthermore, we found differences between monkeys in feature preference and in the temporal order in which features were visited on different facial expressions. Overall, the between-subject variability was larger than the within-subject variability, suggesting that scanpaths reflect individual preferences in allocating visual attention to various features in aggressive, neutral, and appeasing facial expressions. Individual scanpath characteristics were related to the genotype for the serotonin transporter regulatory gene (5-HTTLPR) and to behavioral characteristics such as expression of anticipatory anxiety and impulsiveness/hesitation in approaching food in the presence of a potentially dangerous object.
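The four basic scanpath parameters named above can be computed directly from a fixation sequence. A minimal sketch, assuming fixations arrive as (x, y, duration) rows in degrees of visual angle and seconds (the input format and units are illustrative, not taken from the study):

```python
import numpy as np

def scanpath_metrics(fixations):
    """Basic scanpath parameters from an (n, 3) array of (x, y, duration)
    fixations: number of fixations, mean fixation duration, mean saccade
    amplitude, and total scanpath length."""
    fix = np.asarray(fixations, dtype=float)
    xy, dur = fix[:, :2], fix[:, 2]
    # Saccade amplitudes: Euclidean distance between consecutive fixations.
    saccades = np.linalg.norm(np.diff(xy, axis=0), axis=1)
    return {
        "n_fixations": len(fix),
        "mean_fixation_duration": dur.mean(),
        "mean_saccade_length": saccades.mean() if len(saccades) else 0.0,
        "total_scanpath_length": saccades.sum(),
    }

m = scanpath_metrics([(0.0, 0.0, 0.20), (3.0, 4.0, 0.30), (3.0, 4.0, 0.25)])
```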

  16. Simultaneous acquisition of corrugator electromyography and functional magnetic resonance imaging: A new method for objectively measuring affect and neural activity concurrently

    PubMed Central

    Heller, Aaron S.; Greischar, Lawrence L; Honor, Ann; Anderle, Michael J; Davidson, Richard J.

    2011-01-01

    The development of functional neuroimaging of emotion holds the promise to enhance our understanding of the biological bases of affect and improve our knowledge of psychiatric diseases. However, up to this point, researchers have been unable to objectively, continuously and unobtrusively measure the intensity and dynamics of affect concurrently with functional magnetic resonance imaging (fMRI). This has hindered the development and generalizability of our field. Facial electromyography (EMG) is an objective, reliable, valid, sensitive, and unobtrusive measure of emotion. Here, we report the successful development of a method for simultaneously acquiring fMRI and facial EMG. The ability to simultaneously acquire brain activity and facial physiology will allow affective neuroscientists to address theoretical, psychiatric, and individual difference questions in a more rigorous and generalizable way. PMID:21742043

  17. Mathematical problems in the application of multilinear models to facial emotion processing experiments

    NASA Astrophysics Data System (ADS)

    Andersen, Anders H.; Rayens, William S.; Li, Ren-Cang; Blonder, Lee X.

    2000-10-01

In this paper we describe the enormous potential that multilinear models hold for the analysis of data from neuroimaging experiments that rely on functional magnetic resonance imaging (fMRI) or other imaging modalities. A case is made for why one might fully expect that the successful introduction of these models to the neuroscience community could define the next generation of structure-seeking paradigms in the area. In spite of the potential for immediate application, there is much to do from the perspective of statistical science. That is, although multilinear models have already been particularly successful in chemistry and psychology, relatively little is known about their statistical properties. To that end, our research group at the University of Kentucky has made significant progress. In particular, we are in the process of developing formal influence measures for multilinear methods as well as associated classification models and effective implementations. We believe that these problems will be among the most important and useful to the scientific community. Details are presented herein and an application is given in the context of facial emotion processing experiments.

  18. (abstract) Synthesis of Speaker Facial Movements to Match Selected Speech Sequences

    NASA Technical Reports Server (NTRS)

    Scott, Kenneth C.

    1994-01-01

We are developing a system for synthesizing image sequences that simulate the facial motion of a speaker. To perform this synthesis, we are pursuing two major areas of effort. We are developing the necessary computer graphics technology to synthesize a realistic image sequence of a person speaking selected speech sequences. Next, we are developing a model that expresses the relation between spoken phonemes and face/mouth shape. A subject is videotaped speaking an arbitrary text that contains expression of the full list of desired database phonemes. The subject is videotaped from the front speaking normally, recording both audio and video detail simultaneously. Using the audio track, we identify the specific video frames on the tape relating to each spoken phoneme. From this range we digitize the video frame which represents the extreme of mouth motion/shape. Thus, we construct a database of images of face/mouth shape related to spoken phonemes. A selected audio speech sequence is recorded which is the basis for synthesizing a matching video sequence; the speaker need not be the same as used for constructing the database. The audio sequence is analyzed to determine the spoken phoneme sequence and the relative timing of the enunciation of those phonemes. Synthesizing an image sequence corresponding to the spoken phoneme sequence is accomplished using a graphics technique known as morphing. Image sequence keyframes necessary for this processing are based on the spoken phoneme sequence and timing. We have been successful in synthesizing the facial motion of a native English speaker for a small set of arbitrary speech segments. Our future work will focus on advancement of the face shape/phoneme model and independent control of facial features.
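Keyframe-based synthesis like that described above interpolates between phoneme keyframes. The sketch below shows only the intensity cross-dissolve component of morphing; true morphing additionally warps feature geometry between keyframes, which is not implemented here, and the toy images are invented:

```python
import numpy as np

def cross_dissolve(key_a, key_b, n_frames):
    """Blend between two keyframe images over n_frames steps.
    Full morphing also warps feature geometry; this sketch shows
    only the linear intensity blend between keyframes."""
    a, b = np.asarray(key_a, float), np.asarray(key_b, float)
    return [(1.0 - t) * a + t * b for t in np.linspace(0.0, 1.0, n_frames)]

# Toy keyframes standing in for two digitized mouth-shape images.
a = np.zeros((4, 4))
b = np.ones((4, 4))
seq = cross_dissolve(a, b, 5)
```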

  19. Effects of Orientation on Recognition of Facial Affect

    NASA Technical Reports Server (NTRS)

    Cohen, M. M.; Mealey, J. B.; Hargens, Alan R. (Technical Monitor)

    1997-01-01

The ability to discriminate facial features is often degraded when the orientation of the face and/or the observer is altered. Previous studies have shown that gross distortions of facial features can go unrecognized when the image of the face is inverted, as exemplified by the 'Margaret Thatcher' effect. This study examines how quickly erect and supine observers can distinguish between smiling and frowning faces that are presented at various orientations. The effects of orientation are of particular interest in space, where astronauts frequently view one another in orientations other than the upright. Sixteen observers viewed individual facial images of six people on a computer screen; on a given trial, the image was either smiling or frowning. Each image was viewed when it was erect and when it was rotated (rolled) by 45 degrees, 90 degrees, 135 degrees, 180 degrees, 225 degrees and 270 degrees about the line of sight. The observers were required to respond as rapidly and accurately as possible to identify if the face presented was smiling or frowning. Measures of reaction time were obtained when the observers were both upright and supine. Analyses of variance revealed that mean reaction time, which increased with stimulus rotation (F = 18.54, df = 7/15, p < 0.001), was 22% longer when the faces were inverted than when they were erect, but that the orientation of the observer had no significant effect on reaction time (F = 1.07, df = 1/15, p > 0.30). These data strongly suggest that the orientation of the image of a face on the observer's retina, but not its orientation with respect to gravity, is important in identifying the expression on the face.

  20. Facial Soft Tissue Thickness of Midline in an Iranian Sample: MRI Study.

    PubMed

    Johari, Masume; Esmaeili, Farzad; Hamidi, Hadi

    2017-01-01

Different methods can be used to identify human skeletal remains, and important data can be obtained with these techniques. Facial reconstruction, however, is the method of last resort for identifying unknown human faces, and it requires knowledge of facial soft tissue thickness at different positions on the face. The present study determined facial soft tissue thickness at different landmark points on MRI images of patients referred to the Radiology Department of Shahid Madani Hospital. In this descriptive cross-sectional trial, MRI images of 179 patients (61 males, 118 females) aged 18-76 years who did not show any pathologic lesions were selected. Facial soft tissue was measured at 12 landmark points in the midline area by two radiologist observers using specific software on the images. Differences in soft tissue thickness at these landmark points were statistically analyzed with the Mann-Whitney U test (by gender) and the Kruskal-Wallis test (by Body Mass Index [BMI] and age group). A p value less than 0.05 was considered statistically significant. The data were compared with the results of other studies. The values obtained in the present study were higher than those of Turkish and American studies at most of the landmark points. Facial soft tissue at most landmarks was thicker in males than in females. At some landmarks, significant differences were found between emaciated, normal and overweight patients, and in most cases soft tissue thickness increased with BMI. In some cases, significant differences in soft tissue thickness were noted between age groups, with thickness increasing or decreasing with age. The data from the present study can be used for facial reconstruction purposes in the Iranian population; however, the slight differences between the studied population and other population subgroups must be considered for accurate reconstructions.
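The two-group (male vs. female) comparison above uses the Mann-Whitney U test. A minimal NumPy-only sketch of the U statistic with tie-averaged (midrank) ranking; the thickness values below are invented for illustration:

```python
import numpy as np

def mann_whitney_u(x, y):
    """Mann-Whitney U statistic with midranks for ties, as used for
    two-group comparisons such as male vs. female soft-tissue
    thickness at a landmark (smaller U = greater separation)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    combined = np.concatenate([x, y])
    order = np.argsort(combined)
    ranks = np.empty(len(combined))
    ranks[order] = np.arange(1, len(combined) + 1)
    # Average ranks over tied values (midranks).
    for v in np.unique(combined):
        tied = combined == v
        ranks[tied] = ranks[tied].mean()
    u1 = ranks[: len(x)].sum() - len(x) * (len(x) + 1) / 2.0
    return min(u1, len(x) * len(y) - u1)

# Fully separated toy groups give U = 0.
u = mann_whitney_u([2.1, 2.4, 2.9], [3.5, 3.8, 4.0])
```

In practice one would use a library routine that also returns a p value (e.g. scipy.stats.mannwhitneyu, and scipy.stats.kruskal for the BMI/age comparisons).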

  1. An Analysis of Biometric Technology as an Enabler to Information Assurance

    DTIC Science & Technology

    2005-03-01

[Table-of-contents residue removed; recoverable excerpts:] Facial recognition systems are gaining momentum as of late. … From the traffic camera on the street corner, video technology is everywhere. There are a couple of different methods currently being used for facial recognition.

  2. Effects of spatial frequency and location of fearful faces on human amygdala activity.

    PubMed

    Morawetz, Carmen; Baudewig, Juergen; Treue, Stefan; Dechent, Peter

    2011-01-31

    Facial emotion perception plays a fundamental role in interpersonal social interactions. Images of faces contain visual information at various spatial frequencies. The amygdala has previously been reported to be preferentially responsive to low-spatial frequency (LSF) rather than to high-spatial frequency (HSF) filtered images of faces presented at the center of the visual field. Furthermore, it has been proposed that the amygdala might be especially sensitive to affective stimuli in the periphery. In the present study we investigated the impact of spatial frequency and stimulus eccentricity on face processing in the human amygdala and fusiform gyrus using functional magnetic resonance imaging (fMRI). The spatial frequencies of pictures of fearful faces were filtered to produce images that retained only LSF or HSF information. Facial images were presented either in the left or right visual field at two different eccentricities. In contrast to previous findings, we found that the amygdala responds to LSF and HSF stimuli in a similar manner regardless of the location of the affective stimuli in the visual field. Furthermore, the fusiform gyrus did not show differential responses to spatial frequency filtered images of faces. Our findings argue against the view that LSF information plays a crucial role in the processing of facial expressions in the amygdala and of a higher sensitivity to affective stimuli in the periphery. Copyright © 2010 Elsevier B.V. All rights reserved.

  3. The occipital face area is causally involved in the formation of identity-specific face representations.

    PubMed

    Ambrus, Géza Gergely; Dotzer, Maria; Schweinberger, Stefan R; Kovács, Gyula

    2017-12-01

    Transcranial magnetic stimulation (TMS) and neuroimaging studies suggest a role of the right occipital face area (rOFA) in early facial feature processing. However, the degree to which rOFA is necessary for the encoding of facial identity has been less clear. Here we used a state-dependent TMS paradigm, where stimulation preferentially facilitates attributes encoded by less active neural populations, to investigate the role of the rOFA in face perception and specifically in image-independent identity processing. Participants performed a familiarity decision task for famous and unknown target faces, preceded by brief (200 ms) or longer (3500 ms) exposures to primes which were either an image of a different identity (DiffID), another image of the same identity (SameID), the same image (SameIMG), or a Fourier-randomized noise pattern (NOISE) while either the rOFA or the vertex as control was stimulated by single-pulse TMS. Strikingly, TMS to the rOFA eliminated the advantage of SameID over DiffID condition, thereby disrupting identity-specific priming, while leaving image-specific priming (better performance for SameIMG vs. SameID) unaffected. Our results suggest that the role of rOFA is not limited to low-level feature processing, and emphasize its role in image-independent facial identity processing and the formation of identity-specific memory traces.

  4. Modeling first impressions from highly variable facial images.

    PubMed

    Vernon, Richard J W; Sutherland, Clare A M; Young, Andrew W; Hartley, Tom

    2014-08-12

    First impressions of social traits, such as trustworthiness or dominance, are reliably perceived in faces, and despite their questionable validity they can have considerable real-world consequences. We sought to uncover the information driving such judgments, using an attribute-based approach. Attributes (physical facial features) were objectively measured from feature positions and colors in a database of highly variable "ambient" face photographs, and then used as input for a neural network to model factor dimensions (approachability, youthful-attractiveness, and dominance) thought to underlie social attributions. A linear model based on this approach was able to account for 58% of the variance in raters' impressions of previously unseen faces, and factor-attribute correlations could be used to rank attributes by their importance to each factor. Reversing this process, neural networks were then used to predict facial attributes and corresponding image properties from specific combinations of factor scores. In this way, the factors driving social trait impressions could be visualized as a series of computer-generated cartoon face-like images, depicting how attributes change along each dimension. This study shows that despite enormous variation in ambient images of faces, a substantial proportion of the variance in first impressions can be accounted for through linear changes in objectively defined features.
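The attribute-based linear model described above can be sketched, in simplified form, as a least-squares fit from measured facial attributes to a factor score. The feature matrix, factor values, and small ridge penalty below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def fit_ridge(X, y, lam=1e-6):
    """Fit linear weights mapping facial attribute measurements to a
    factor score (e.g. approachability), with a small ridge penalty
    for numerical stability."""
    X = np.asarray(X, float)
    A = X.T @ X + lam * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ np.asarray(y, float))

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 3))            # 3 hypothetical attribute measurements
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true                          # noiseless toy factor scores
w = fit_ridge(X, y)                     # recovered weights
```

Reversing the fit, as the study does, amounts to choosing attribute values consistent with a target factor score and rendering them.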

  5. Fusiform gyrus volume reduction and facial recognition in chronic schizophrenia.

    PubMed

    Onitsuka, Toshiaki; Shenton, Martha E; Kasai, Kiyoto; Nestor, Paul G; Toner, Sarah K; Kikinis, Ron; Jolesz, Ferenc A; McCarley, Robert W

    2003-04-01

    The fusiform gyrus (FG), or occipitotemporal gyrus, is thought to subserve the processing and encoding of faces. Of note, several studies have reported that patients with schizophrenia show deficits in facial processing. It is thus hypothesized that the FG might be one brain region underlying abnormal facial recognition in schizophrenia. The objectives of this study were to determine whether there are abnormalities in gray matter volumes for the anterior and the posterior FG in patients with chronic schizophrenia and to investigate relationships between FG subregions and immediate and delayed memory for faces. Patients were recruited from the Boston VA Healthcare System, Brockton Division, and control subjects were recruited through newspaper advertisement. Study participants included 21 male patients diagnosed as having chronic schizophrenia and 28 male controls. Participants underwent high-spatial-resolution magnetic resonance imaging, and facial recognition memory was evaluated. Main outcome measures included anterior and posterior FG gray matter volumes based on high-spatial-resolution magnetic resonance imaging, a detailed and reliable manual delineation using 3-dimensional information, and correlation coefficients between FG subregions and raw scores on immediate and delayed facial memory derived from the Wechsler Memory Scale III. Patients with chronic schizophrenia had overall smaller FG gray matter volumes (10%) than normal controls. Additionally, patients with schizophrenia performed more poorly than normal controls in both immediate and delayed facial memory tests. Moreover, the degree of poor performance on delayed memory for faces was significantly correlated with the degree of bilateral anterior FG reduction in patients with schizophrenia. These results suggest that neuroanatomic FG abnormalities underlie at least some of the deficits associated with facial recognition in schizophrenia.

  6. Facial asymmetry quantitative evaluation in oculoauriculovertebral spectrum.

    PubMed

    Manara, Renzo; Schifano, Giovanni; Brotto, Davide; Mardari, Rodica; Ghiselli, Sara; Gerunda, Antonio; Ghirotto, Cristina; Fusetti, Stefano; Piacentile, Katherine; Scienza, Renato; Ermani, Mario; Martini, Alessandro

    2016-03-01

    Facial asymmetries in oculoauriculovertebral spectrum (OAVS) patients might require surgical corrections that are mostly based on qualitative approach and surgeon's experience. The present study aimed to develop a quantitative 3D CT imaging-based procedure suitable for maxillo-facial surgery planning in OAVS patients. Thirteen OAVS patients (mean age 3.5 ± 4.0 years; range 0.2-14.2, 6 females) and 13 controls (mean age 7.1 ± 5.3 years; range 0.6-15.7, 5 females) who underwent head CT examination were retrospectively enrolled. Eight bilateral anatomical facial landmarks were defined on 3D CT images (porion, orbitale, most anterior point of frontozygomatic suture, most superior point of temporozygomatic suture, most posterior-lateral point of the maxilla, gonion, condylion, mental foramen) and distance from orthogonal planes (in millimeters) was used to evaluate the asymmetry on each axis and to calculate a global asymmetry index of each anatomical landmark. Mean asymmetry values and relative confidence intervals were obtained from the control group. OAVS patients showed 2.5 ± 1.8 landmarks above the confidence interval while considering the global asymmetry values; 12 patients (92%) showed at least one pathologically asymmetric landmark. Considering each axis, the mean number of pathologically asymmetric landmarks increased to 5.5 ± 2.6 (p = 0.002) and all patients presented at least one significant landmark asymmetry. Modern CT-based 3D reconstructions allow accurate assessment of facial bone asymmetries in patients affected by OAVS. The evaluation as a global score and in different orthogonal axes provides precise quantitative data suitable for maxillo-facial surgical planning. CT-based 3D reconstruction might allow a quantitative approach for planning and following-up maxillo-facial surgery in OAVS patients.
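The global asymmetry index can be illustrated with one plausible formulation: per-axis left-right differences of a bilateral landmark pair, combined as a Euclidean norm. The assumption that the midsagittal plane lies at x = 0, and the coordinates used, are illustrative, not taken from the study:

```python
import numpy as np

def asymmetry(left, right):
    """Per-axis and global asymmetry of a bilateral landmark pair.
    Assumes mm coordinates with the midsagittal plane at x = 0, so a
    perfectly symmetric pair satisfies left = (-x, y, z) of right."""
    lx, ly, lz = left
    rx, ry, rz = right
    per_axis = np.array([abs(lx + rx), abs(ly - ry), abs(lz - rz)])
    return per_axis, float(np.linalg.norm(per_axis))

# Toy gonion pair: 2 mm off in x, 1 mm in y, symmetric in z.
per_axis, global_index = asymmetry((30.0, 10.0, 5.0), (-28.0, 11.0, 5.0))
```

A landmark would then be flagged as pathologically asymmetric when its index exceeds the control group's confidence interval, as in the study.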

  7. Photoanthropometric face iridial proportions for age estimation: An investigation using features selected via a joint mutual information criterion.

    PubMed

    Borges, Díbio L; Vidal, Flávio B; Flores, Marta R P; Melani, Rodolfo F H; Guimarães, Marco A; Machado, Carlos E P

    2018-03-01

Age assessment from images is of high interest to the forensic community because of the need for formal protocols to investigate child pornography, missing children, and abuse, cases in which visual evidence is often the principal admissible material. Recently, photoanthropometric methods have been found useful for age estimation, correlating facial proportions in image databases with samples of certain age groups. Notwithstanding these advances, new facial features and further analysis are needed to improve accuracy and establish wider applicability. In this investigation, frontal images of 1000 individuals (500 females, 500 males), equally distributed in five age groups (6, 10, 14, 18, 22 years old), were used in a 10-fold cross-validated experiment for three age-threshold classifications (<10, <14, <18 years old). A set of 40 novel features, based on the relation between landmark distances and the iris diameter, is proposed, and joint mutual information is used to select the most relevant and complementary features for the classification task. In a civil image identification database with diverse ancestry, receiver operating characteristic (ROC) curves were plotted to verify accuracy, and the resultant AUCs achieved 0.971, 0.969, and 0.903 for the age classifications (<10, <14, <18 years old), respectively. These results add support to continuing research in age assessment from images using the metric approach. Still, larger samples are necessary to evaluate reliability under broader conditions. Copyright © 2017 Elsevier B.V. All rights reserved.
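Two ingredients of the method above can be sketched: an iris-diameter-normalized distance feature, and a rank-based AUC for one age-threshold classifier's scores. The joint-mutual-information feature selection is omitted, and the scores below are invented:

```python
import numpy as np

def iris_normalized(dist_px, iris_diameter_px):
    """Photoanthropometric feature: a landmark distance expressed in
    units of the iris diameter, making it roughly scale-invariant."""
    return dist_px / iris_diameter_px

def roc_auc(scores_pos, scores_neg):
    """Rank-based AUC: probability that a positive case (e.g. age below
    the threshold) receives a higher classifier score than a negative
    case. Ignores score ties for simplicity."""
    s = np.concatenate([scores_pos, scores_neg])
    order = np.argsort(s)
    ranks = np.empty(len(s))
    ranks[order] = np.arange(1, len(s) + 1)
    n_pos, n_neg = len(scores_pos), len(scores_neg)
    r_pos = ranks[:n_pos].sum()
    return (r_pos - n_pos * (n_pos + 1) / 2.0) / (n_pos * n_neg)

auc = roc_auc(np.array([0.9, 0.8, 0.7]), np.array([0.1, 0.75, 0.2]))
```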

  8. Facial soft tissue thickness in skeletal type I Japanese children.

    PubMed

    Utsuno, Hajime; Kageyama, Toru; Deguchi, Toshio; Umemura, Yasunobu; Yoshino, Mineo; Nakamura, Hiroshi; Miyazawa, Hiroo; Inoue, Katsuhiro

    2007-10-25

Facial reconstruction techniques used in forensic anthropology require knowledge of the facial soft tissue thickness of each race if facial features are to be reconstructed correctly. If this is inaccurate, so also will be the reconstructed face. Knowledge of differences by age and sex is also required. Therefore, when unknown human skeletal remains are found, the forensic anthropologist investigates for race, sex, and age, and for other variables of relevance. Cephalometric X-ray images of living persons can help to provide this information. They give an approximately 10% enlargement from true size and can demonstrate the relationship between soft and hard tissue. In the present study, facial soft tissue thickness in Japanese children was measured at 12 anthropological points using X-ray cephalometry in order to establish a database for facial soft tissue thickness. This study of both boys and girls, aged from 6 to 18 years, follows a previous study of Japanese female children only, and focuses on facial soft tissue thickness in only one skeletal type. Sex differences in thickness of tissue were found from 12 years of age upwards. The study provides more detailed and accurate measurements than past reports of facial soft tissue thickness, and reveals the uniqueness of the Japanese child's facial profile.

  9. When the bell tolls on Bell's palsy: finding occult malignancy in acute-onset facial paralysis.

    PubMed

    Quesnel, Alicia M; Lindsay, Robin W; Hadlock, Tessa A

    2010-01-01

    This study reports 4 cases of occult parotid malignancy presenting with sudden-onset facial paralysis to demonstrate that failure to regain tone 6 months after onset distinguishes these patients from Bell's palsy patients with delayed recovery and to propose a diagnostic algorithm for this subset of patients. A case series of 4 patients with occult parotid malignancies presenting with acute-onset unilateral facial paralysis is reported. Initial imaging on all 4 patients did not demonstrate a parotid mass. Diagnostic delays ranged from 7 to 36 months from time of onset of facial paralysis to time of diagnosis of parotid malignancy. Additional physical examination findings, especially failure to regain tone, as well as properly protocolled radiologic studies reviewed with dedicated head and neck radiologists, were helpful in arriving at the diagnosis. An algorithm to minimize diagnostic delays in this subset of acute facial paralysis patients is presented. Careful attention to facial tone, in addition to movement, is important in the diagnostic evaluation of acute-onset facial paralysis. Copyright 2010 Elsevier Inc. All rights reserved.

  10. Morphological Integration of Soft-Tissue Facial Morphology in Down Syndrome and Siblings

    PubMed Central

    Starbuck, John; Reeves, Roger H.; Richtsmeier, Joan

    2011-01-01

Down syndrome (DS), resulting from trisomy of chromosome 21, is the most common live-born human aneuploidy. The phenotypic expression of trisomy 21 produces variable, though characteristic, facial morphology. Although certain facial features have been documented quantitatively and qualitatively as characteristic of DS (e.g., epicanthic folds, macroglossia, and hypertelorism), all of these traits occur in other craniofacial conditions with an underlying genetic cause. We hypothesize that the typical DS face is integrated differently than the face of non-DS siblings, and that the pattern of morphological integration unique to individuals with DS will yield information about underlying developmental associations between facial regions. We statistically compared morphological integration patterns of immature DS faces (N = 53) with those of non-DS siblings (N = 54), aged 6–12 years using 31 distances estimated from 3D coordinate data representing 17 anthropometric landmarks recorded on 3D digital photographic images. Facial features are affected differentially in DS, as evidenced by statistically significant differences in integration both within and between facial regions. Our results suggest a differential effect of trisomy on facial prominences during craniofacial development. PMID:21996933

  11. Morphological integration of soft-tissue facial morphology in Down Syndrome and siblings.

    PubMed

    Starbuck, John; Reeves, Roger H; Richtsmeier, Joan

    2011-12-01

Down syndrome (DS), resulting from trisomy of chromosome 21, is the most common live-born human aneuploidy. The phenotypic expression of trisomy 21 produces variable, though characteristic, facial morphology. Although certain facial features have been documented quantitatively and qualitatively as characteristic of DS (e.g., epicanthic folds, macroglossia, and hypertelorism), all of these traits occur in other craniofacial conditions with an underlying genetic cause. We hypothesize that the typical DS face is integrated differently than the face of non-DS siblings, and that the pattern of morphological integration unique to individuals with DS will yield information about underlying developmental associations between facial regions. We statistically compared morphological integration patterns of immature DS faces (N = 53) with those of non-DS siblings (N = 54), aged 6-12 years using 31 distances estimated from 3D coordinate data representing 17 anthropometric landmarks recorded on 3D digital photographic images. Facial features are affected differentially in DS, as evidenced by statistically significant differences in integration both within and between facial regions. Our results suggest a differential effect of trisomy on facial prominences during craniofacial development. 2011 Wiley Periodicals, Inc.

  12. Low-level image properties in facial expressions.

    PubMed

    Menzel, Claudia; Redies, Christoph; Hayn-Leichsenring, Gregor U

    2018-06-04

    We studied low-level image properties of face photographs and analyzed whether they change with different emotional expressions displayed by an individual. Differences in image properties were measured in three databases that depicted a total of 167 individuals. Face images were used either in their original form, cut to a standard format or superimposed with a mask. Image properties analyzed were: brightness, redness, yellowness, contrast, spectral slope, overall power and relative power in low, medium and high spatial frequencies. Results showed that image properties differed significantly between expressions within each individual image set. Further, specific facial expressions corresponded to patterns of image properties that were consistent across all three databases. In order to experimentally validate our findings, we equalized the luminance histograms and spectral slopes of three images from a given individual who showed two expressions. Participants were significantly slower in matching the expression in an equalized compared to an original image triad. Thus, existing differences in these image properties (i.e., spectral slope, brightness or contrast) facilitate emotion detection in particular sets of face images. Copyright © 2018. Published by Elsevier B.V.
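Three of the measured properties can be sketched as follows. Brightness and RMS contrast are standard definitions; the spectral-slope estimate (slope of log radially averaged Fourier power versus log spatial frequency) is one common formulation and may differ in detail from the analysis used in the study:

```python
import numpy as np

def image_properties(img):
    """Brightness (mean), RMS contrast (std), and spectral slope: the
    slope of log radially averaged Fourier power vs. log spatial
    frequency (natural images typically fall near -2)."""
    img = np.asarray(img, float)
    power = np.abs(np.fft.fftshift(np.fft.fft2(img - img.mean()))) ** 2
    n, m = img.shape
    yy, xx = np.indices((n, m))
    r = np.hypot(yy - n // 2, xx - m // 2).astype(int)
    counts = np.bincount(r.ravel())
    # Radially averaged power spectrum (DC bin excluded from the fit).
    radial = np.bincount(r.ravel(), weights=power.ravel()) / np.maximum(counts, 1)
    freqs = np.arange(1, min(n, m) // 2)
    slope = np.polyfit(np.log(freqs), np.log(radial[freqs]), 1)[0]
    return {"brightness": img.mean(), "contrast": img.std(),
            "spectral_slope": slope}

rng = np.random.default_rng(1)
props = image_properties(rng.uniform(0.0, 1.0, size=(128, 128)))  # toy image
```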

  13. Processing of Fear and Anger Facial Expressions: The Role of Spatial Frequency

    PubMed Central

    Comfort, William E.; Wang, Meng; Benton, Christopher P.; Zana, Yossi

    2013-01-01

Spatial frequency (SF) components encode a portion of the affective value expressed in face images. The aim of this study was to estimate the relative weight of specific frequency spectrum bandwidths on the discrimination of anger and fear facial expressions. The general paradigm was a classification of the expression of faces morphed at varying proportions between anger and fear images, in which SF adaptation and SF subtraction are expected to shift classification of facial emotion. A series of three experiments was conducted. In Experiment 1 subjects classified morphed face images that were unfiltered or filtered to remove either low (<8 cycles/face), middle (12–28 cycles/face), or high (>32 cycles/face) SF components. In Experiment 2 subjects were adapted to unfiltered or filtered prototypical (non-morphed) fear face images and subsequently classified morphed face images. In Experiment 3 subjects were adapted to unfiltered or filtered prototypical fear face images with the phase component randomized before classifying morphed face images. Removing mid-frequency components from the target images shifted classification toward fear. The same shift was observed under adaptation to unfiltered and low- and middle-range filtered fear images. However, when the phase spectrum of the same adaptation stimuli was randomized, no adaptation effect was observed. These results suggest that medium SF components support the perception of fear more than anger at both low and high levels of processing. They also suggest that the effect at the high-level processing stage is related more to high-level featural and/or configural information than to the low-level frequency spectrum. PMID:23637687
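The low/middle/high bands above are specified in cycles per face. Assuming a tightly cropped face image, so that cycles per image approximates cycles per face, the filtering step might be sketched as a hard radial mask in the Fourier domain (a simplified stand-in for the filters actually used):

```python
import numpy as np

def sf_bandpass(img, low=None, high=None):
    """Keep only Fourier components whose radial frequency, in cycles
    per image (~cycles/face for a tightly cropped face), lies in
    [low, high]. The DC term (mean luminance) is always preserved."""
    img = np.asarray(img, float)
    n, m = img.shape
    fy = np.fft.fftfreq(n)[:, None] * n   # cycles per image, vertical
    fx = np.fft.fftfreq(m)[None, :] * m   # cycles per image, horizontal
    r = np.hypot(fy, fx)
    mask = np.ones((n, m), dtype=float)
    if low is not None:
        mask *= (r >= low)
    if high is not None:
        mask *= (r <= high)
    mask[0, 0] = 1.0                      # keep mean luminance
    return np.real(np.fft.ifft2(np.fft.fft2(img) * mask))

x = np.arange(64)
img = np.tile(np.cos(2 * np.pi * 20 * x / 64), (64, 1))  # 20 cycles/image grating
lsf = sf_bandpass(img, high=8)            # "low SF": grating removed
msf = sf_bandpass(img, low=12, high=28)   # "middle SF" band retains it
```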

  14. An optimized ERP brain-computer interface based on facial expression changes.

    PubMed

    Jin, Jing; Daly, Ian; Zhang, Yu; Wang, Xingyu; Cichocki, Andrzej

    2014-06-01

    Interferences from spatially adjacent non-target stimuli are known to evoke event-related potentials (ERPs) during non-target flashes and, therefore, to lead to false positives. This phenomenon is commonly seen in visual attention-based brain-computer interfaces (BCIs) using conspicuous stimuli and is known to adversely affect the performance of BCI systems. Although users try to focus on the target stimulus, they cannot help but be affected by conspicuous changes of stimuli (such as flashes or presented images) adjacent to the target stimulus. Furthermore, subjects have reported that conspicuous stimuli made them tired and annoyed. In view of this, the aim of this study was to reduce adjacent interference, annoyance and fatigue using a new stimulus presentation pattern based upon facial expression changes. Our goal was not to design a new pattern which could evoke larger ERPs than the face pattern, but to design one which could reduce adjacent interference, annoyance and fatigue while evoking ERPs as good as those observed with the face pattern. Positive facial expressions can be changed to negative facial expressions by minor changes to the original facial image; although the changes are minor, the contrast is large enough to evoke strong ERPs. In this paper, a facial expression change pattern alternating between positive and negative expressions was used to attempt to minimize interference effects. This was compared against two different conditions: a shuffled pattern containing the same shapes and colours as the facial expression change pattern, but without the semantic content associated with a change in expression, and a face versus no-face pattern. Comparisons were made in terms of classification accuracy and information transfer rate as well as user-supplied subjective measures. The results showed that interference from adjacent stimuli, annoyance and the fatigue experienced by the subjects could be reduced significantly (p < 0.05) by using the facial expression change pattern in comparison with the face pattern. The offline results show that the classification accuracy of the facial expression change pattern was significantly better than that of the shuffled pattern (p < 0.05) and the face pattern (p < 0.05). The facial expression change pattern presented in this paper reduced interference from adjacent stimuli and decreased the fatigue and annoyance experienced by BCI users significantly (p < 0.05) compared to the face pattern.
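    The information transfer rate used to compare the patterns is commonly computed with the Wolpaw formula, bits per selection B = log2 N + P log2 P + (1 − P) log2((1 − P)/(N − 1)); the abstract does not give the stimulus-matrix size or trial timing, so the numbers below are purely illustrative.

```python
import math

def wolpaw_itr(n_classes, accuracy, trial_s):
    """Wolpaw information transfer rate in bits per minute."""
    n, p = n_classes, accuracy
    if p <= 1.0 / n:
        return 0.0                      # at or below chance: zero rate
    bits = math.log2(n)
    if p < 1.0:
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * 60.0 / trial_s

# Illustrative values: 12 selectable targets, 90% accuracy, 8 s per selection
itr = wolpaw_itr(12, 0.90, 8.0)
```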

  16. The effect of width of facial canal in patients with idiopathic peripheral facial paralysis on the development of paralysis.

    PubMed

    Eksi, Guldem; Akbay, Ercan; Bayarogullari, Hanifi; Cevik, Cengiz; Yengil, Erhan; Ozler, Gul Soylu

    2015-09-01

    The aim of this prospective study was to investigate whether stenosis due to anatomic variations of the labyrinthine segment (LS), tympanic segment (TS) and mastoid segment (MS) of the facial canal in the temporal bone is a predisposing factor in the development of paralysis. Twenty-two patients with idiopathic peripheral facial paralysis (IPFP) were included in the study. Multi-slice computed tomography (MSCT) with 64 detectors was used for temporal bone imaging of the patients. Reconstruction images in axial, coronal and sagittal planes were created on workstation computers from the captured images, and the diameters and lengths of the LS, TS and MS of the facial canal were measured. The mean values of LD, ND and SL of the LS were 1.31 ± 0.39, 0.91 ± 0.27 and 4.17 ± 0.48 in the patient group and 1.26 ± 0.29, 0.95 ± 0.21 and 4.60 ± 1.36 in the control group, respectively. The mean values of LD, ND and SL of the TS were 1.11 ± 0.22, 0.90 ± 0.14 and 12.63 ± 1.47 in the patient group and 1.17 ± 0.23, 0.85 ± 0.24 and 12.10 ± 1.79 in the control group, respectively. The mean values of LD, ND and SL of the MS were 1.80 ± 0.30, 1.44 ± 0.29 and 14.3 ± 1.90 in the patient group and 1.74 ± 0.38, 1.40 ± 0.29 and 14.15 ± 2.16 in the control group, respectively. The measurements of all three segments were similar between the patient and control groups. Thus, in this study investigating the effect of facial canal stenosis on the development of IPFP, the patient and control groups did not differ.
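    Comparisons of group means from summary statistics like those above are typically done with a two-sample t-test; a minimal Welch's t-test sketch follows (the control-group size is not stated in the abstract, so n = 22 is an assumption made here for illustration only).

```python
import math

def welch_t(m1, s1, n1, m2, s2, n2):
    """Welch's t statistic and degrees of freedom from summary statistics
    (means, standard deviations, group sizes)."""
    v1, v2 = s1**2 / n1, s2**2 / n2
    t = (m1 - m2) / math.sqrt(v1 + v2)
    df = (v1 + v2) ** 2 / (v1**2 / (n1 - 1) + v2**2 / (n2 - 1))
    return t, df

# Labyrinthine-segment SL: patients 4.17 +/- 0.48 (n = 22) vs. controls
# 4.60 +/- 1.36 (control n is not stated in the abstract; 22 is assumed)
t, df = welch_t(4.17, 0.48, 22, 4.60, 1.36, 22)
```

With |t| of roughly 1.4 on about 26 degrees of freedom, the difference in segment length would not reach significance, consistent with the abstract's conclusion that the groups were similar.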

  17. Hierarchical Spatio-Temporal Probabilistic Graphical Model with Multiple Feature Fusion for Binary Facial Attribute Classification in Real-World Face Videos.

    PubMed

    Demirkus, Meltem; Precup, Doina; Clark, James J; Arbel, Tal

    2016-06-01

    Recent literature shows that facial attributes, i.e., contextual facial information, can be beneficial for improving the performance of real-world applications such as face verification, face recognition, and image search. Examples of face attributes include gender, skin color, and facial hair. How to robustly obtain these facial attributes (traits) is still an open problem, especially in the presence of the challenges of real-world environments: non-uniform illumination conditions, arbitrary occlusions, motion blur and background clutter. What makes this problem even more difficult is the enormous variability exhibited by the same subject, due to arbitrary face scales, head poses, and facial expressions. In this paper, we focus on the problem of facial trait classification in real-world face videos. We have developed a fully automatic hierarchical and probabilistic framework that models the collective set of frame class distributions and feature spatial information over a video sequence. The experiments are conducted on a large real-world face video database that we have collected, labelled and made publicly available. The proposed method is flexible enough to be applied to any facial classification problem. Experiments on this large, real-world video database, McGillFaces [1], of 18,000 video frames reveal that the proposed framework outperforms alternative approaches by up to 16.96% and 10.13% for the facial attributes of gender and facial hair, respectively.
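    The paper's hierarchical probabilistic model is far richer than can be reproduced here, but the core idea of pooling per-frame evidence into a single video-level attribute decision can be illustrated with a naive log-probability fusion (the classifier outputs and class ordering below are invented for the example, not taken from the paper):

```python
import numpy as np

def video_label(frame_probs):
    """Fuse per-frame class probabilities into one video-level decision
    by summing log-probabilities (a naive product-of-experts fusion)."""
    logp = np.log(np.asarray(frame_probs) + 1e-12).sum(axis=0)
    return int(np.argmax(logp))

# Rows are frames; columns are per-frame probabilities for the two values
# of a binary attribute, from a hypothetical per-frame classifier.
probs = [[0.6, 0.4], [0.7, 0.3], [0.4, 0.6], [0.8, 0.2]]
label = video_label(probs)
```

Summing log-probabilities lets a few confident frames outweigh many ambiguous ones, which is one motivation for modelling the collective set of frame distributions rather than voting frame by frame.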

  18. Heritabilities of Facial Measurements and Their Latent Factors in Korean Families

    PubMed Central

    Kim, Hyun-Jin; Im, Sun-Wha; Jargal, Ganchimeg; Lee, Siwoo; Yi, Jae-Hyuk; Park, Jeong-Yeon; Sung, Joohon; Cho, Sung-Il; Kim, Jong-Yeol; Kim, Jong-Il; Seo, Jeong-Sun

    2013-01-01

    Genetic studies on facial morphology targeting healthy populations are fundamental in understanding the specific genetic influences involved; yet, most studies to date, if not all, have been focused on congenital diseases accompanied by facial anomalies. To study the specific genetic cues determining facial morphology, we estimated familial correlations and heritabilities of 14 facial measurements and 3 latent factors inferred from a factor analysis in a subset of the Korean population. The study included a total of 229 individuals from 38 families. We evaluated a total of 14 facial measurements using 2D digital photographs. We performed factor analysis to infer common latent variables. The heritabilities of 13 facial measurements were statistically significant (p < 0.05) and ranged from 0.25 to 0.61. Of these, the heritability of intercanthal width in the orbital region was found to be the highest (h2 = 0.61, SE = 0.14). Three factors (lower face portion, orbital region, and vertical length) were obtained through factor analysis, where the heritability values ranged from 0.45 to 0.55. The heritability values for each factor were higher than the mean heritability value of individual original measurements. We have confirmed the genetic influence on facial anthropometric traits and suggest a potential way to categorize and analyze the facial portions into different groups. PMID:23843774
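    Heritability from family data is often estimated by regressing offspring values on midparent values, the slope being an estimate of narrow-sense h². The sketch below illustrates this on simulated data with a known slope of 0.5; the abstract does not specify the estimation procedure used in the study, so this is a generic illustration only.

```python
import numpy as np

def heritability_midparent(midparent, offspring):
    """Estimate narrow-sense heritability as the slope of the
    offspring-on-midparent regression."""
    return float(np.polyfit(midparent, offspring, 1)[0])

# Simulated trait: offspring = 0.5 * midparent deviation + noise,
# so the heritability built into the data is 0.5.
rng = np.random.default_rng(42)
mid = rng.normal(100.0, 10.0, 2000)
off = 100.0 + 0.5 * (mid - 100.0) + rng.normal(0.0, 8.0, 2000)
h2 = heritability_midparent(mid, off)
```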

  19. Face Aging Effect Simulation Using Hidden Factor Analysis Joint Sparse Representation.

    PubMed

    Yang, Hongyu; Huang, Di; Wang, Yunhong; Wang, Heng; Tang, Yuanyan

    2016-06-01

    Face aging simulation has received increasing attention in recent years, yet it remains a challenge to generate convincing and natural age-progressed face images. In this paper, we present a novel approach to this issue using hidden factor analysis joint sparse representation. In contrast to the majority of approaches in the literature, which handle the facial texture as a whole, the proposed aging approach separately models the person-specific facial properties, which tend to be stable over a relatively long period, and the age-specific clues, which gradually change over time. It then transforms the age component to a target age group via sparse reconstruction to yield the aging effects, which are finally combined with the identity component to achieve the aged face. Experiments are carried out on three face aging databases, and the results achieved clearly demonstrate the effectiveness and robustness of the proposed method in rendering a face with aging effects. In addition, a series of evaluations proves its validity with respect to identity preservation and aging effect generation.
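    The sparse-reconstruction step can be illustrated with orthogonal matching pursuit (OMP), a standard sparse coder; the dictionary below is a random orthonormal matrix standing in for an age-group dictionary (real aging dictionaries are overcomplete and learned from data), and all sizes and names are illustrative, not the paper's.

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: greedily pick k dictionary atoms and
    refit their coefficients by least squares at every step."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(7)
D, _ = np.linalg.qr(rng.normal(size=(64, 64)))   # orthonormal "age dictionary"
x_true = np.zeros(64)
x_true[[5, 40]] = [1.5, -2.0]                    # a 2-sparse age code
y = D @ x_true
x_hat = omp(D, y, 2)                             # recovers the sparse code
```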

  20. Adapting Local Features for Face Detection in Thermal Image.

    PubMed

    Ma, Chao; Trung, Ngo Thanh; Uchiyama, Hideaki; Nagahara, Hajime; Shimada, Atsushi; Taniguchi, Rin-Ichiro

    2017-11-27

    A thermal camera captures the temperature distribution of a scene as a thermal image. In thermal images, the facial appearances of different people under different lighting conditions are similar, because facial temperature distribution is generally constant and not affected by lighting conditions. This similarity in face appearance is advantageous for face detection. To detect faces in thermal images, cascade classifiers with Haar-like features are generally used; however, there are few studies exploring local features for face detection in thermal images. In this paper, we introduce two approaches relying on local features for face detection in thermal images. First, we create new feature types by extending Multi-Block LBP. We consider a margin around the reference and the generally constant distribution of facial temperature; in this way, we make the features more robust to image noise and more effective for face detection in thermal images. Second, we propose an AdaBoost-based training method to obtain cascade classifiers with multiple types of local features. These feature types have different advantages, so combining them enhances the descriptive power of the local features. We conducted a hold-out validation experiment and a field experiment. In the hold-out validation experiment, we captured a dataset from 20 participants, comprising 14 males and 6 females. For each participant, we captured 420 images with 10 variations in camera distance, 21 poses, and 2 appearances (with/without glasses). We compared the performance of cascade classifiers trained on different sets of features; the results showed that the proposed approaches effectively improve the performance of face detection in thermal images. In the field experiment, we compared face detection performance in realistic scenes using thermal and RGB images, and discuss the results.
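    Multi-Block LBP, which the paper extends, compares the mean intensity of eight neighbour blocks against a centre block to form an 8-bit code; a minimal sketch follows (the grid position, block size and bit ordering are conventions chosen here for illustration, not the paper's exact definition, which adds a margin around the reference).

```python
import numpy as np

def mb_lbp(img, top, left, bs):
    """Multi-block LBP code for a 3x3 grid of bs-by-bs blocks whose top-left
    corner is at (top, left): each of the 8 neighbour blocks contributes one
    bit, set when its mean intensity exceeds the centre block's mean."""
    means = np.array([[img[top + i * bs : top + (i + 1) * bs,
                           left + j * bs : left + (j + 1) * bs].mean()
                       for j in range(3)] for i in range(3)])
    centre = means[1, 1]
    # Neighbour blocks in clockwise order starting at the top-left block
    order = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    code = 0
    for bit, (i, j) in enumerate(order):
        if means[i, j] > centre:
            code |= 1 << bit
    return code

img = np.zeros((9, 9))
img[0:3, :] = 10.0                 # bright top row of blocks
code = mb_lbp(img, 0, 0, 3)        # bits 0-2 set -> 0b00000111 = 7
```

Because the comparison uses block means rather than single pixels, the code is less sensitive to the pixel-level noise typical of thermal sensors.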
